AI and Reflective Equilibrium

The proposals to combine neural networks with model- and rule-based reasoning remind me of reflective equilibrium in moral and political philosophy. In this analogy, the case-based outputs of a neural network correspond to intuitions, and the models and their associated rules correspond to explicit principles and beliefs.

In an AI system comprising both a neural network and a model/rule-based reasoning component, a management layer (perhaps with human intervention) could mediate between the two, adjusting each to strive for consistency with the other (and perhaps for self-consistency within the neural network itself). The management layer could look for patterns in the case-based results from the neural network side and create or adjust the models and rules to match. Conversely, the neural network's weights and biases could be adjusted when its results diverge from what is consistent with the model/rule side.
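One pass of such a management layer might be sketched as follows. This is only an illustrative toy, not a real system: the names (`Rule`, `apply_rules`, `reconcile`, `neural_predict`) and the simple pattern-counting heuristic for inducing rules are all assumptions, with the neural component reduced to an opaque prediction function.

```python
# Hypothetical sketch of a "management layer" reconciling a neural
# component (represented here as an opaque predict function) with an
# explicit rule set. All names and heuristics are illustrative.

from dataclasses import dataclass
from collections import Counter


@dataclass
class Rule:
    feature: str
    value: str
    label: str  # if case[feature] == value, predict label


def apply_rules(rules, case):
    """Return the first matching rule's label, or None if no rule covers the case."""
    for r in rules:
        if case.get(r.feature) == r.value:
            return r.label
    return None


def reconcile(cases, neural_predict, rules, min_support=3):
    """One pass of the management layer: compare the neural side with the
    rule side on each case, induce candidate rules from recurring neural
    patterns, and flag disagreements for neural-side retraining."""
    disagreements = []
    uncovered = []
    for case in cases:
        n_label = neural_predict(case)
        r_label = apply_rules(rules, case)
        if r_label is None:
            uncovered.append((case, n_label))
        elif r_label != n_label:
            disagreements.append((case, n_label, r_label))

    # Intuitions -> principles: induce candidate rules from patterns
    # that recur in the neural outputs on cases no rule yet covers.
    pattern_counts = Counter()
    for case, label in uncovered:
        for feat, val in case.items():
            pattern_counts[(feat, val, label)] += 1
    new_rules = [Rule(f, v, lbl)
                 for (f, v, lbl), n in pattern_counts.items()
                 if n >= min_support]

    # Principles -> intuitions: disagreements become candidate training
    # examples for adjusting the neural side toward the rule side.
    retrain_queue = [(case, r_label) for case, _, r_label in disagreements]
    return new_rules, retrain_queue
```

In a fuller design the two directions would not be symmetric: a human (or a higher-level policy) would still have to decide, case by case, whether a disagreement means the rules should bend to the network or the network should be retrained toward the rules, which is precisely the judgment reflective equilibrium describes.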
