Solaris and our insatiable desire to anthropomorphize
I just reread one of my favorite pieces of sci-fi, Stanislaw Lem's Solaris. In my morning journaling, I realized there's an odd connection between the novel's ideas about first contact and anthropomorphization and the present moment of cyclic, collective AI psychosis.
My journal entry is included below, lightly edited.
Just reread Solaris. It's oddly applicable to the present moment. Humanity's almost 'limbic' compulsion to anthropomorphize Solaris feels like Twitter's cyclic AI psychosis.
Karpathy mirrors Kelvin: the scientist with a deep understanding of the details who still can't shed his human, limbic desire to see something human-like where there's no reason to believe anything human exists. Kelvin 'knows' that Harey isn't a 'real human', but he can't abandon his noble, romantic fantasy of escaping the station with her.
The LLM turned out to be the perfect growth hack for beating our collective Turing test, and as it beats the test more convincingly, we see ourselves in it more and more. As Turing himself observed in the paper describing the test, this isn't intelligence, but it's the closest we can come to identifying something like 'thinking' in a machine.
More importantly, the Turing test was never a 'test' in the sense that passing it certifies intelligence. Turing proposed the imitation game precisely because 'can machines think?' was too ill-defined to answer directly; the game is just the best we can do at saying "this thing is thinking".
At some level of sophistication, it may be silly to deny that the machine is 'effectively intelligent.' But every step along the way creates delusions: small bubbles of speculative frenzy and psychosis, just as the Solarists went through almost cyclic periods of optimism and cynicism about making contact with Solaris.
Is there an 'escape velocity' bubble, like the one once theorized for cryptocurrency?
A bubble from which we emerge into a new phase of reality where the thing is (perceived as) 'intelligent' forever?
Interestingly, the possibility of such a 'last bubble' depends less on the intelligence of the machine and more on the distribution of expertise among humans. What matters is how convinced we are, collectively, that it passes. It requires that enough domain experts are convinced it's intelligent in their own domains (in addition to the normies/midwits who are just trying to signal that they're hip to 'the future').
Statistically, of course, we'd expect a few experts in any given domain to get tricked early (Karpathy), but in my own experience, most experts aren't convinced yet. Most productive engineers, for example, are far from convinced that this replaces human judgment in code production. Beyond a certain sophistication of 'thinking', the machine produces slop.
Gell-Mann Amnesia creates an odd 'graph covering' dynamic by which we could reach the escape bubble without the machine achieving real expertise in any single domain. One of our defects as humans is that we're bad at perceiving expertise in domains that aren't our own, no matter how good we are at perceiving it in ours. Michael Crichton coined the term to describe how a reader can see exactly how wrong a publication is about the things they know well, then turn the page and trust it on everything else (easy to see with news media, for example).
Apply this thinking to perceiving AI expertise. For my literate friends: think about how cringe and detectable AI writing is. For my code-literate friends: think about how sloppy the code produced by AI is.
Why then are we so easily tricked by generative AI applied to video or audio? Or game development?
We would quickly realize, if we talked to experts in those fields, that the work produced there is just as sloppy. We're just forgetful.
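
To make the 'graph covering' point concrete, here's a toy simulation. It's entirely my own construction, not anything Lem or Crichton formalized, and the domain and population counts are arbitrary assumptions:

```python
# A toy model of the "graph covering" dynamic: Gell-Mann amnesia means each
# expert rejects AI output only in their own domain while extending trust
# everywhere else. All numbers below are arbitrary assumptions of mine.

K = 5                    # number of domains (hypothetical)
experts_per_domain = 10  # experts in each domain (hypothetical)
non_experts = 50         # observers with no relevant expertise (hypothetical)

population = K * experts_per_domain + non_experts

for domain in range(K):
    # The domain's own experts see the slop and reject it; everyone else
    # (experts in other domains plus non-experts) extends trust by default.
    endorsers = population - experts_per_domain
    print(f"domain {domain}: {endorsers / population:.0%} convinced, "
          f"0% of its own experts among them")
```

Every domain ends up 'covered' by a large majority, so a collective verdict of "it's intelligent everywhere" can form even though, domain by domain, the people best positioned to judge are unanimously unconvinced.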