"skid marks of online activity" π
Great review of two books that are now on my reading list. Thanks.
Tech bros seem to have no problem applying fallibilism to LLMs: "We know they hallucinate and are wrong sometimes, but that's okay." But challenge them on their dogmatic belief that AGI is somehow merely months away and they will call you a luddite and block you. Such a fragile bunch of true believers.
This is one of my favorite entries so far, Rob! I learned so much from reading how you compare these two books! AI Snake Oil has been on my list for a while, but The Ordinal Society is new to me!
Thanks, Jason! The Ordinal Society came out this year, but it is more about data than AI. Its visibility seems limited to sociology despite its broad range. It has really helped me get my head around social media and algorithms, so glad to introduce you to it.
I feel like that's the kind of research we should be doing: making connections!
Also, a thought occurred to me while reading your article. I considered one of Altman's favorite talking points, that "ChatGPT sucks."
He rolls it out a lot when trying to amp up discussions of AGI. It does the exact kind of work you mention here: it makes AGI the product, not ChatGPT.
It's a con move. It makes the product something that doesn't exist (and that we don't even have a good definition for). I hadn't thought of it in that context before.
He is a real master. And how good he is at it drives his critics to anger, which leads them to overstate their skepticism about AGI, which amps up the discussion.
Exactly!
Thank you, Rob, for this brilliant analysis of the long and short of AI hopes and hype.
I particularly liked your historical perspective on prior tech waves, as I recently wrote a short post on the "eternal sunshine of the spotless technology mind" that chooses to ignore the messy realities of implementation.
https://www.valuecreationplan.com/p/the-eternal-sunshine-of-the-spotless
The intriguing question I think we are about to face as humans might not be how we define AGI, but whether human behaviour is uncomfortably probabilistic and deterministic when observed at computer scale.
Thank you, Juan, for the kind words and the link to your work. I opened the tab and will take a look at my next opportunity.
One of the ideas I've been turning over in my head is whether the analogy of the human mind as an electronic computer has run its course, and what new analogies might be useful to make sense of intelligence and other cognitive processes. Undoubtedly, humans think probabilistically, and much about our behavior, including cognitive behavior, is determined or constrained.
I think this is an area where the analogy to a computer or algorithm fails. The human experiences that shape us and our behavior are obscured in important ways when we think in terms of computation. This is especially true at scales where we compare the collective intelligence of humans to networks of computers.
Thanks, Rob, a fascinating topic. I do agree that the analogy has run its course.
The human experience that makes each of us unique as individuals also makes us fairly predictable as a collective when expressed in probable outcomes… but this does not mean we are just a processing machine.
Computers are increasingly good at creating the illusion that they are human, but they are not.
I look forward to reading more of your articles.
Rob, I think you will find this research interesting in the context of our conversation above. Have a nice weekend.
https://substack.com/@aidisruptor/note/c-79762343