There's a fair chance this has happened to you: you were using Google Maps to navigate, and it guided you into a narrow by-lane. And while stuck in there, you wondered whether the main road you already knew was really any worse.
While stuck in such a jam this week, I read a piece of news. Stack Overflow, the community platform where developers go to find answers to technical questions, is laying off a large number of its employees. The reason? Developers are now getting AI to correct their code and offer the same suggestions that other humans on Stack Overflow once did.
And now, as I write this article, a magic pencil icon hovers beside my cursor, exhorting me to use AI to write what I need to.
AI has been in the news so much lately that I wouldn't fault you for 'mehing' out of this article right now. It's overwhelming, it seems like too much of a fad, the way crypto was for a couple of years, and frankly, there's too much information to process.
Yet there is nuance in the news flow, and noticing some of it will help us think about all this better. There is more to the story than the barrage of headlines about startups launching in the AI space, or the burgeoning market caps of some technology companies.
The thing that stood out in the news, and the thing we should care about, is what Stack Overflow's CEO Prashanth Chandrasekar said.
First, and ironically, GenAI tools like OpenAI's ChatGPT and Google's Bard were likely trained on Stack Overflow's data, something the platform is now trying to charge them for.
Second, and more importantly, if platforms like Stack Overflow cease to exist for lack of meaningful interaction between human developers solving problems together, it will eventually lead to model collapse.
Model collapse is perhaps the challenge that will keep creative, and sometimes desperate, human beings ahead of generative AI. Put simply, as more and more of the internet's content is produced using AI, and newer models are in turn trained on that AI-generated content, everything starts to read the same and look the same. Too much homogeneity, and we will be awash in a land of garbage in, garbage out.
For example, if enough people on the internet said the sky is turning purple, and AI models kept sucking up that piece of 'information', there's a fair chance the models would start hallucinating, offering reasons for the sky going that way and throwing in claims about changes in the refraction of light and whatnot.
Now, why should you care? There's a fair chance that AI will start showing up in your life in the next six to eight months. If you write or approve copy, it already has. If you design, you've used bits and bobs of it, and you will soon make your first full-fledged design with it. When you are finished, you might feel overwhelmed by what your job's future looks like.
I'm here to argue that the threat is magnified more than it needs to be. AI, at least currently, depends on flawed, broken, somewhat faulty human beings to train itself. Yes, as time moves on and the 'system' moves to its next level, we will see the progress we do need to fear.
I realise this is a complex topic, and folks smarter than me have spoken about it more eloquently. If you are interested, search for 'Wait But Why' on the internet and click on the post about artificial intelligence. Grab a coffee or a beer, and understand where humanity might be going.
(The author is Director – Marketing at EnableX, a communications SaaS company. He posts on X @ironymeter. Views expressed are personal.)