AI and Reflective Equilibrium
Posted by Fred C Yankowski
The proposals to combine neural networks with model- and rule-based reasoning remind me of reflective equilibrium in political philosophy: the method, associated with John Rawls, of revising one's judgments about particular cases and one's general principles until the two cohere. In this metaphor, the case-based results of a neural network correspond to intuitions, and the models and their associated rules correspond to explicit principles and beliefs.
In an AI system comprising both a neural network component and a model/rule-based reasoning component, a management layer (with human intervention?) could mediate between the two, adjusting each to strive for consistency between them (and perhaps for self-consistency within the neural network part). The management component could look for patterns in the case-based results from the neural network side and create or adjust the models and rules to match. Conversely, the neural network's weights and biases could be adjusted when its results conflict with the model/rule side.
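Here is a minimal sketch of that reconciliation loop, assuming a toy setting where both components classify discrete cases. All of the names (Rule, ManagementLayer, reconcile, the arbiter callback) are hypothetical illustrations, not an existing library, and a real neural component would be a trained model rather than a plain function.

```python
from dataclasses import dataclass
from typing import Callable, Optional

Case = dict  # in this toy setup, a case is just a bag of named features


@dataclass
class Rule:
    """An explicit principle: a predicate over cases plus the label it implies."""
    name: str
    applies: Callable[[Case], bool]
    label: str


class ManagementLayer:
    """Mediates between a case-based (neural) component and explicit rules,
    nudging each side toward consistency with the other."""

    def __init__(self, neural_predict: Callable[[Case], str],
                 arbiter: Optional[Callable[[Case, str, str], str]] = None):
        self.neural_predict = neural_predict   # stands in for a trained network
        self.arbiter = arbiter                 # optional human-in-the-loop tiebreak
        self.rules: list[Rule] = []
        self.retrain_queue: list[tuple[Case, str]] = []  # examples to relearn

    def rule_predict(self, case: Case) -> Optional[str]:
        for rule in self.rules:
            if rule.applies(case):
                return rule.label
        return None  # no explicit principle covers this case yet

    def reconcile(self, case: Case) -> str:
        """One equilibrium step: compare the two verdicts, then adjust a side."""
        intuition = self.neural_predict(case)
        principle = self.rule_predict(case)
        if principle is None:
            # A pattern with no covering principle: distill it into a new rule.
            snapshot = dict(case)
            self.rules.append(Rule(
                name=f"distilled-{len(self.rules)}",
                applies=lambda c, s=snapshot: all(
                    c.get(k) == v for k, v in s.items()),
                label=intuition))
            return intuition
        if principle == intuition:
            return intuition  # the components already agree
        # Disagreement: let the arbiter (e.g. a human) pick a side; otherwise
        # default to the explicit principle.
        verdict = (self.arbiter(case, intuition, principle)
                   if self.arbiter else principle)
        if verdict == principle:
            # The principle wins: queue the case so the network's weights can
            # later be adjusted toward the rule side.
            self.retrain_queue.append((case, principle))
        else:
            # The intuition wins: revise the rules by dropping any it violates.
            self.rules = [r for r in self.rules
                          if not (r.applies(case) and r.label != verdict)]
        return verdict


# Toy usage: the "network" labels any case with color == "red" as "stop".
layer = ManagementLayer(lambda c: "stop" if c.get("color") == "red" else "go")
print(layer.reconcile({"color": "red"}))  # no rule yet: distills one, prints "stop"
print(layer.reconcile({"color": "red"}))  # now covered by the rule; sides agree
```

The distillation branch corresponds to extracting rules from patterns in the network's case-based results; the retrain queue corresponds to adjusting the weights and biases when they conflict with the model/rule side.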
References
- M. Mitchell and D. C. Krakauer, "The Debate Over Understanding in AI's Large Language Models", 2023
- P. Norvig, "On Chomsky and the Two Cultures of Statistical Learning"
- E. M. Bender, T. Gebru, A. McMillan-Major, and S. Shmitchell, "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?", 2021
- E. Weil, "You Are Not a Parrot", Intelligencer, 2023