There has been an obsession lately with Generative Pre-Trained Transformers (commonly known as GPTs, such as ChatGPT), and a fear of mass job loss resulting from the supposed emergence of a super-AI or some type of Artificial General Intelligence. While the idea of a Jetsons-esque future makes for catchy, attention-grabbing headlines, I find it unlikely there will be some great mass job loss associated with the rise of “AI”.
I propose instead a much more mundane and rather bland alternative, free of the fanfare of bombastic sensationalism that has swept the tech-news cycle. I think our relationship with AI will be the same as the Farmer and his Mule, and the Farmer’s son and his Farmall Tractor: the relationship between the tool and the tool’s owner.
AI is a tool. It is a powerful tool, but it is a tool like any other. To fear an exotic tool is normal. After all, a circular saw can easily amputate a finger if used irresponsibly. However, that circular saw does not exist in a vacuum; even if connected to a robot arm, it does not cut by itself. It requires human input to determine what it needs to cut, and how.
AI is no different. Suppose you have a team of AI agents (like with AgentGPT, for example) communicating with one another to accomplish a specific objective. Who set the goal? Who decided why the goal should be what it is? Who monitored the agents’ conversation to ensure it didn’t hallucinate into an endless feedback loop of gibberish? A human did.
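To make that oversight concrete, here is a minimal sketch of a human-supervised agent loop. Everything in it is hypothetical: `call_model` stands in for whatever LLM API a team actually uses, and the loop guard is deliberately crude.

```python
# A minimal sketch of human-in-the-loop supervision for a pair of
# cooperating agents. `call_model` is a hypothetical placeholder for
# any real LLM API client.

def call_model(role: str, prompt: str) -> str:
    # Placeholder: a real implementation would call an LLM API here.
    return f"[{role}] response to: {prompt}"

def run_agents(goal: str, max_turns: int = 20) -> list[str]:
    transcript: list[str] = []
    message = goal  # a human set this goal
    for turn in range(max_turns):
        role = "planner" if turn % 2 == 0 else "executor"
        message = call_model(role, message)
        # Crude guard against an endless feedback loop: stop and hand
        # control back to the human if the output starts repeating.
        if message in transcript:
            break
        transcript.append(message)
    return transcript  # a human reviews this before anything ships

if __name__ == "__main__":
    for line in run_agents("Summarize last quarter's support tickets"):
        print(line)
```

The point of the sketch is not the loop itself but its boundaries: a human chooses the goal at the top, and a human reviews the transcript at the bottom.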
AI will not replace all human labor, but it will change the way those same humans work. I propose that AI will make MORE work for humans, not less – because now we have the responsibility to equip our industries with AI integrations. We have the responsibility to train people to use these AI systems correctly. We have the responsibility to engineer AI-friendly systems integrations. AI provides us with the ability to dramatically increase our labor output, but it is not thermodynamically “free energy”.
I theorize that the future of AI-adjacent labor will be human interaction with AI interfaces. Much of it will involve ingesting corporate data points into vector databases, so that Generative Pre-Trained Transformers can retrieve them and act on them. Companies will have a hybrid collection of:
- Corporate AI API access (like ChatGPT, Claude, and Gemini/Bard/whatever Google is calling it now)
- Locally hosted AI systems for restricted industries (like medicine, with HIPAA and other data regulations)
- A VectorDB system for calling proprietary, company-specific data into pre-trained language models (a minimal sketch of this retrieval step follows the list)
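That third piece is what is now commonly called retrieval-augmented generation. The sketch below shows its shape with no external dependencies; `embed` is a toy stand-in for a real embedding model, the in-memory `VectorStore` stands in for an actual vector database, and `ask_llm` is a hypothetical placeholder for a GPT-style API call.

```python
# A minimal, dependency-free sketch of the VectorDB retrieval step.
import math

def embed(text: str) -> list[float]:
    # Toy embedding: a normalized character-frequency vector.
    # A real system would call an embedding model instead.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are already normalized, so the dot product is the
    # cosine similarity.
    return sum(x * y for x, y in zip(a, b))

class VectorStore:
    """In-memory stand-in for a real vector database."""
    def __init__(self) -> None:
        self.docs: list[tuple[str, list[float]]] = []

    def add(self, text: str) -> None:
        self.docs.append((text, embed(text)))

    def search(self, query: str, k: int = 2) -> list[str]:
        q = embed(query)
        ranked = sorted(self.docs, key=lambda d: cosine(q, d[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

def ask_llm(prompt: str) -> str:
    # Placeholder: a real system would send this prompt to an LLM API.
    return f"(model response to a {len(prompt)}-char prompt)"

store = VectorStore()
store.add("Refund policy: customers may return items within 30 days.")
store.add("Shipping policy: orders over $50 ship free.")

question = "How long do customers have to return an item?"
context = "\n".join(store.search(question))
print(ask_llm(f"Context:\n{context}\n\nQuestion: {question}"))
```

The design matters more than the toy details: proprietary data never leaves the company’s store, and the language model only ever sees the handful of passages retrieved for a given question.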
Every company will have some variation of this stack. It will be as ubiquitous as a router, or a front desk. I don’t believe that humans will be put out of jobs as this technology advances, but I do believe that legacy tools such as SharePoint will be replaced by VectorDBs, and I believe that it will be the very same users of that software who will do the replacing.
There is enormous opportunity for businesses to grow at the intersection of AI/LLMs and corporate system infrastructure. However, we as computer scientists have an obligation to avoid bombastic language around AI, as it does nothing but stoke unnecessary fear.
Could AI revolt in an Asimov-like fashion? Perhaps, but I think it would be malicious humans, not malicious AI, that caused it. I propose AI sandboxing as an extension of conventional corporate cybersecurity measures. There SHOULD absolutely be best practices around AI safety. But the fears around LLMs, and the calls for mass regulation, are severely overblown. The reality is that AI safety in the enterprise will center on adherence to data governance rules and the cybersecurity best practices that already exist across a wide range of applications.
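What might that sandboxing look like in practice? At its simplest, it is least-privilege applied to model-initiated actions. The sketch below is purely illustrative; the action names and policy are assumptions, not any real product’s API.

```python
# A minimal sketch of "AI sandboxing": the model may only request
# actions from an explicit allowlist, mirroring conventional
# least-privilege policy. All names here are illustrative.

ALLOWED_ACTIONS = {"search_docs", "summarize", "draft_email"}

def execute_action(action: str, payload: str) -> str:
    # The sandbox boundary: anything outside the allowlist is refused,
    # just like a firewall rule or a file-permission check.
    if action not in ALLOWED_ACTIONS:
        return f"DENIED: '{action}' is not permitted by policy"
    return f"OK: ran {action} on {payload!r}"

# A model asking to delete records is simply refused:
print(execute_action("delete_records", "customers.db"))
print(execute_action("summarize", "Q3 incident report"))
```

Nothing here is exotic; it is the same deny-by-default posture security teams already apply to users and services, extended to one more kind of actor.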