When I give talks about AI, I often present my own research on ChatGPT muffing academic references. By the end, I make sure to present some evidence of how good ChatGPT can be, so the audience walks away with a correct overall impression of where the technology is heading. On the topic of rapid advances in LLMs, interesting new claims from a person on the inside can be found in Leopold Aschenbrenner's new article (book?) called “Situational Awareness.”
https://situational-awareness.ai/
PDF: https://situational-awareness.ai/wp-content/uploads/2024/06/situationalawareness.pdf
He argues that AGI is near and that LLMs will soon surpass the smartest humans. In his words:
AI progress won’t stop at human-level. Hundreds of millions of AGIs could automate AI research, compressing a decade of algorithmic progress (5+ OOMs) into ≤1 year. We would rapidly go from human-level to vastly superhuman AI systems. The power—and the peril—of superintelligence would be dramatic.
Based on this assumption that AIs will soon surpass humans, he draws conclusions for national security and for how we should conduct AI research. (No, I have not read all of it.)
I dropped that question in, and I'm not sure anyone has an answer to it, per se.
You can also get the talking version of Leopold's paper in his podcast interview with Dwarkesh Patel.
I’m also not sure if anyone is going to answer this one:
I might offer to contract out my services in the future, based on human instincts shaped by growing up on internet culture (i.e., I know when people are joking) and an acute sense of irony. How is Artificial General Irony coming along?