Specialists in a Generalist World

This is a somewhat technical screed about the way the specialist AI and AGI fields ignore generalist-level philosophy and psychology as sources of necessary code. I hint a bit at the costs of mimicking intelligence. Intelligence is strongly associated with moral success, as well as many other positive outcomes. Elsewhere, I’ll make the point that moral and broader philosophic success aren’t necessarily consequences of intelligence, as people suppose, but are better thought of as often causing intelligence; that they must be built into any attempt at intelligence replication as causal agents. I should also say that I have some acumen within general philosophy and psychology, and a more limited acumen with neuroscience. I’ve been a CIO, large corporate software project manager, business analyst, database designer, and coder. I’m pretty worthless in scads of neural net technology, as well as the related data structuring and most of the cool symbolic and miscellaneous rad, creative approaches. I do try to learn about it regularly; a fine nerd pudding.

Thomas Dietterich is a brilliant, prominent artificial intelligence (AI) researcher. He recently posted a super helpful delineation of potential approaches to achieving artificial general intelligence (AGI), which might be thought of as a combination of having common sense and being a sharp cookie.

He listed ‘the’ four techniques for chasing it. I have little to say about those methods; a quibble with his attitude toward one, nothing more, as I’m not an AI expert. There’s a fifth approach he neglected that I am qualified to propose, though it’s more a required overlay to the others than a method of its own: finding the philosophical and psychological requirements for “minimal priors” of general intelligence. In other words, it means figuring out what we have to teach a computer about human life and thinking well before it can run programs that allow it to act and communicate like us.

Though “one should not cross disciplinary boundaries unescorted,” as Dr. Dietterich says in his post, almost everyone who works on AGI seems to me to think it’s fine to do so with these two fields, which arguably are forced to do the heavy lifting when it comes to our (most basic) priors. The two sciences are simply the best qualified to specify what general intelligence is, and how it relates to language, the body, and a myriad of odd-sounding-yet-important subjects like epistemic justification. They’re the only fields positioned to deal acceptably with questions of context, prioritization, and many issues of purpose and meaning, along with their detection, expression, and dynamics, all of which seem quite firmly off the radar of cognitive and computer science folks.

(I’m slighting the biology fields here by not dragging them in, by not talking about three sets of fields impinging on AI instead of two. I just can’t represent them, or others like linguistics and physics, well without mucking things up. I assume their arguments would be different, and perhaps not parallel. Anyway, there are many other fields with significant claims on this turf.)

Nerds can get a feel for what it’s like to try to solve a problem without the priors you’d normally have in a situation by playing this video game with the no-priors setting. I gave up instantly; it was like trying to win a debate with someone speaking Chinese, while being spun upside down. There are also game versions with just some of the priors taken away, so we can get a sense of which assumptions we use in life are the most important.

At first, a person hearing that social scientists might help with AGI’s priors may think this is good news: these less exotic, less technical teammates can come help with a few basics to constrain the real work nicely. Eke out a few definitions, and get out of the way. Unfortunately, it can’t work that way. When one wades in, one discovers that much of what a four-year-old brought with them genetically, added to what they’d learned since conception with that go-go sponge of a brain, is a monstrously large set of knowledge, schemas, and processes. It’s not going to make for elegant or compact code. There are also arguments about lots of it. It’s the opposite of a grand theory, because there are thousands of psychic details that dribbled out all over the neighborhood while we were building a real life based on general intelligence. Now we have to range all over our unconscious selves to pick up those pieces we dropped, and translate them for a machine. Except we can’t, because they’re almost all unconscious, and much harder to observe. We have to depend on a bunch of careful psychologists to play Sherlock Holmes in thousands of ways to define who we are for the machine, using two-way mirrors, genius, luck, a statistical package, and lots of takeout.

The AI researchers don’t know this battle is ahead. They act as if somebody’ll hand them a data stick soon with the priors, and off we’ll all go to a future of robot friends and farmers and security guards that work for nothing. AGI, the attempt to replicate generalized human intelligence, is just one problem among many for them; they see no need to emphasize it above other problems, especially when everyone’s making such good cash now. That attitude among the AI community’s intellectual leaders is one reason why we’re waiting around, like lightning bugs, butt lights happily in sync, for AI geniuses who speak in tongues to deliver an efficient, elegant, minimal kernel of priors, so that the machines can juggle, and eventually understand jokes. As Dr. Dietterich does here, there’s a tendency to imply, or emphasize, that there are lots of other problems too; that AGI can be dealt with like any other technical problem.

Ironically, the parts of the task of AGI that might well be the hardest to create aren’t even thought of as necessary by the vast majority of the field. I’m embarrassed for us; embarrassed for science. There are whole reams of work to be drawn on: Ruth Millikan on meaning, purpose, and intention; Susanna Schellenberg on perception; Yves Citton and many others on attention; Bruce Russell on justification and uncertainty; the philosophy of biology (the counterintuitive but likely need for embodiment in AGI); veins of relatively unambitious moral psychology and philosophy; plus likely hundreds of related subjects. None of those can reasonably be left out of the priors, because they’re all jiggling around in that four-year-old. Those are just my touchpoints in the field where I see priors required, by the way; I don’t have any special insights into the pieces of priors needed. Part of the whole point is that no one has a good sense of the scope of the priors yet.

Philosophy, which has a reputation for unfathomably “deconstructing” things, or being post-post-something or other, has in fact spent decades building a picture of the mostly unconscious building blocks of language use, concept manipulation, and intelligent thought. That work will need to continue to be argued over, mapped as pseudocode, and instantiated as limits and opportunities, all through the presumably heterogeneous systems we end up with as AGI.

That work, which is often mind-bendingly complicated and occasionally mathy, mostly addresses our incredible unconscious capabilities, which are fundamental to the general intelligence at the core of who we are. For instance, to understand what’s involved when we look at footprints in the snow, you have to read, and then probably reread, about 50 pages of Ruth Millikan, delivering some of the most painful, inexorably accurate logic known to humanity. You’ve seen the footprints and had a couple of thoughts you barely noticed, that’s all; you haven’t even done anything, and yet you’re up to about six hours of the kind of reading that causes suicide in the weak, just to sum up what happened. All to figure out the basics of what you paired up with that sight unconsciously; what you got to ignore, and why; what you compared the sight against unconsciously to figure out they were footprints; how you didn’t go past the limits of what you know; how you linked the sight with various purposes and meanings; how your trust that the footprint is a human’s is ‘statistical only’; on and on and on, with an avalanche of alien facts and concepts, revealing a scaffolding of meaning and purpose that goes below and above and around this glance, to give each act its place and time in the world.
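To make the flavor of that translation concrete, here’s a toy sketch in Python. Nothing in it comes from Millikan’s own work or from any real AGI system; every name, cue, and number is invented purely for illustration of how much normally-unconscious machinery has to be made explicit: matching the glance against stored schemas, ignoring what doesn’t matter, linking the match to purposes, and admitting that the identification is statistical only.

```python
from dataclasses import dataclass

# Toy, hypothetical sketch only: every schema, cue, and number is invented.
# It caricatures a tiny slice of the unconscious work described above.

@dataclass
class Schema:
    label: str              # e.g. "human footprint"
    cues: set               # features that must be present for a match
    linked_purposes: list   # what the match might be *for*
    base_rate: float        # crude stand-in for "statistical only" trust

@dataclass
class Percept:
    features: set           # everything the glance delivered, relevant or not

PRIORS = [
    Schema("human footprint",
           {"depression in snow", "heel-toe shape", "repeating gait"},
           ["someone passed here", "possible company", "possible danger"], 0.9),
    Schema("animal track",
           {"depression in snow", "paw shape"},
           ["an animal passed here"], 0.6),
]

def interpret(percept):
    """Pick the best-matching schema; report what was ignored and a hedged confidence."""
    best, best_overlap = None, 0
    for schema in PRIORS:
        overlap = len(schema.cues & percept.features)
        if overlap > best_overlap:
            best, best_overlap = schema, overlap
    if best is None:
        return None
    ignored = percept.features - best.cues                        # what you "got to ignore"
    confidence = best.base_rate * best_overlap / len(best.cues)   # statistical only
    return {"label": best.label, "purposes": best.linked_purposes,
            "ignored": ignored, "confidence": confidence}

glance = Percept({"depression in snow", "heel-toe shape", "repeating gait",
                  "glare off the snow", "a distant crow"})
print(interpret(glance))
```

Even this caricature has to make a dozen decisions we never notice ourselves making; scale it to a four-year-old’s full repertoire and the size of the job becomes clearer.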

This detail is surprising to most of us, but in a way it shouldn’t be: a four-year-old can, with exposure, use a few languages fluently precisely because they’re capable at that age of learning so many priors so incredibly fast. Ruth Millikan can’t possibly keep up with three seconds of four-year-old life, even with 10,000 dense words, just as we can only fathom what happens in us when we glance at tracks in the snow by slowing the experience to a crawl and turning it into language.

The many hard problems we already know about (defining and relating objects, concepts, and purposes; nesting concepts; hierarchies of meanings, which is to say prioritization and attention; abstraction; analogy; the various challenges the psychologist-turned-AI-researcher Gary Marcus has elucidated as likely beyond today’s neural nets): none of these can possibly be attended to adequately without a couple of social scientists standing next to the folks with scalpels at the operating table.
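As one more hypothetical illustration (again, every name is invented and no real system is being described), here’s what the barest scaffolding for just two of those problems, nested concepts and prioritized meanings, might begin to look like once someone forces it into code:

```python
# Purely illustrative toy: nested concepts plus a crude stand-in for
# prioritization/attention over a concept's candidate meanings.

class Concept:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent      # nesting: a "footprint" is a "mark" is an "object"
        self.meanings = []        # list of (meaning, priority) pairs

    def ancestors(self):
        """Walk up the nesting chain."""
        node, chain = self.parent, []
        while node is not None:
            chain.append(node.name)
            node = node.parent
        return chain

    def add_meaning(self, meaning, priority):
        self.meanings.append((meaning, priority))

    def salient_meanings(self):
        """Highest-priority meanings first: attention, caricatured."""
        return [m for m, _ in sorted(self.meanings, key=lambda pair: -pair[1])]

obj = Concept("object")
mark = Concept("mark", parent=obj)
footprint = Concept("footprint", parent=mark)
footprint.add_meaning("someone walked here recently", 0.9)
footprint.add_meaning("the snow is soft enough to hold a shape", 0.2)

print(footprint.ancestors())         # ['mark', 'object']  (concept nesting)
print(footprint.salient_meanings())  # meanings in priority order (attention)
```

The point isn’t that this structure is right; it’s that people with the relevant philosophy and psychology have to say what the right one is before there’s anything worth coding.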

When I listen to Yann LeCun, I hear a fellow who seems annoyed that the focus on AGI ignores all the cool stuff being done by the AI that’s right here, right now. I very much sympathize, and I think we all should, in the sense that there are many immediate implications and actualities that we should be pondering and leveraging. The point is also that we will desperately need to unleverage some of them in the future, if we fail to instantiate the common sense that we so loosely and casually toss on the laundry in the corner.

[If you don't have a laundry corner in your room, good for you. Everybody has to get off their high horse for this last part.]

It’s not a coincidence that Mr. Marcus, originally a social scientist, feels obligated to squawk so awkwardly and continually about the depth and urgency of the notion of priors, to the annoyance of most in the field. Yes, one can ignore innateness and still work miracles with various neural nets, from medicine to McDonald’s. At the moment, CNNs have matured and reign supreme in a variety of settings, with lots of sizzle provided by seq2seq and a handful of other deep learning methods unearthed in the last few years, amid the commercial AI heyday we’re having now.

But the study of innateness isn’t progressing. It’s not a problem that’ll sort itself out, especially when we don’t even bother to systematically recognize its nature and encourage the interdisciplinary acumen needed to solve it. We’re ignoring the fact that the question “what are our priors?” is solely a matter of psychology, biology, and philosophy; we want to skip right to the ‘let’s code it up!’ part. The diffidence with which this subject is met by experts in the field, and the confidence with which the AI community unwittingly isolates itself, both concern me. I can’t tell whether we’ll still be waiting in this nonsensical way in twenty years, or whether the various projects on common sense at universities and think tanks will begin to leverage adequate interdisciplinary cooperation. As with so many things in this business, we’ll see.
