first paragraph: Unfolding as some cosmic story, the rapid advancements in artificial intelligence have given rise to large language models (LLMs) with unprecedented accuracy, fluency and utility. It's almost as if there's a cognitive big bang exploding into our human universe.
Nope, no, nonono, you're full of shit and I refuse to read the rest. Good day sir. I said good day!
Could we be witnessing the emergence of a new kind of singularity, where the boundaries between human and machine cognition start to blur and dissolve?
People are increasingly turning to LLMs for a wide range of cognitive tasks, from creative writing and language translation to problem-solving and decision-making.
If this guy's circle of acquaintances includes an increasing number of people who rely on fancy autocomplete for decision-making and creative writing, I might have an idea why he thinks LLMs are super intelligent in comparison.
To achieve human escape velocity, we might need to leverage the very technologies that challenge our place in the cognitive hierarchy. By integrating AI tools into our educational systems, creative processes, and decision-making frameworks, we can amplify our natural abilities, expand our perspectives, and accelerate innovation in a way that is symbiotic rather than competitive.
Wait, let me get this straight. His solution to achieve human escape velocity, which means "outpac[ing] AI's influence and maintain human autonomy" (his words, not mine) is to increase AI's influence and remove human autonomy?
Well how do YOU plan on shilling for the tech industry by scaring people about LLMs?
With each interaction, we feed more data into these systems and our unique perspectives and thought patterns become part of their ever-expanding neural networks.
Such a simple way to give away that you know absolutely fuck-all about actual Machine Learning as a science. Neural networks (ML term of art) don't fucking expand with data; they have a fixed size that's set when the architecture is defined. A neural network is just a spicy matrix of real numbers.
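The fixed-size point is trivial to demonstrate. Here's a minimal sketch (toy one-layer "network", all names hypothetical, numpy only): training updates the values inside the matrix, but no amount of data ever changes its shape.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_network(n_in, n_out):
    # The whole "network": one weight matrix and a bias vector.
    return {"W": rng.normal(size=(n_in, n_out)), "b": np.zeros(n_out)}

def train_step(net, X, y, lr=0.01):
    # One gradient-descent step on mean squared error.
    pred = X @ net["W"] + net["b"]
    err = pred - y
    net["W"] -= lr * X.T @ err / len(X)
    net["b"] -= lr * err.mean(axis=0)
    return net

def param_count(net):
    return net["W"].size + net["b"].size

net = make_network(4, 2)
before = param_count(net)  # 4*2 + 2 = 10 parameters, fixed at construction

# Feed it 10 samples, then 10,000 more: the numbers in the matrix change,
# the matrix itself never "expands".
for n in (10, 10_000):
    X = rng.normal(size=(n, 4))
    y = rng.normal(size=(n, 2))
    net = train_step(net, X, y)

assert param_count(net) == before
```

Your unique perspectives and thought patterns nudge ten floating-point numbers around. Very cosmic.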
This also seems to suggest the author thinks "bigger = better", which is just false. The whole field would probably be much easier if it were true, but you learn in ML 101 that just making the matrix bigger can reduce accuracy.
"Neural network" is such an unfortunate name. They kinda look like a layered network, and forward propagation/backpropagation kinda might give you "neurons firing" vibes, so scientists named it like that; only for dipshit pundits to immediately go "neuron means brain means machine thinks!!!" years later...
Yeah, I work in a field where machine learning has incredible applications, but we're also facing a deluge of trash. It's been clear for a while now that smaller, domain-specific models are where it's at.
this is so stupid. this is what happens when you're so enamoured with your metaphor that you don't stop and think about what exactly it is you're trying to describe.
which i suppose is exactly how an LLM would write so he might be right in his case.
My initial reaction was essentially a mix of “sorry to this man” and “well I don’t really expect psychology today to publish anything approaching meaningful on tech/ai” so I wanted to see if those reactions were justified. Well…
This makes PT look like a total rag. The author, John Nosta, bills himself as “The World’s Leading Innovation Theorist and Keynote Speaker” which really just sounds like “lecture circuit grifter” to me. Let’s take a look at the rest of his front page:
STRATEGIST
Driving change that is changing the world. John’s informed voice has become a beacon of insight to help dissect and define innovation in health, medicine, and technology.
INNOVATOR
Not just a simple observer, John is directly engaged with top companies, thinkers and initiatives. His perspective is from the inside out and provides an “insiders” view of a complex and changing world.
THOUGHT LEADER
John is consistently ranked among the top names in health technology and innovation. Beyond simply an influencer, he is also defined as “most admired” to “top disruptor” in technology, life sciences and medicine.
So yeah, if you asked ChatGPT to come up with the profile for an AI lecture circuit grifter, it'd probably look like the above.
Looking through his contributions to PT you might think he is just recycling armchair AI philosophy to make a quick buck and honestly I’m having a hard time thinking otherwise.
You say that as if just anyone could end up serving on the board of Google health. I'm sure they have a rigorous vetting process to ensure that only thoughtful and experienced health and technology professionals are on tha-- naaah I'm just messing with you, I bet you just need the right connections.