The recent advances in artificial intelligence (AI) and the boom in AI-related technologies have got many people excited. AI tools such as ChatGPT and Midjourney have received lots of coverage, mostly positive.
We are flooded with so many new and exciting things that GPT-4 and its ilk can do that we have little time to reflect on the impact these technologies might have on our lives. Those who sound alarm bells or advocate a more cautious approach to experimenting with AI are dismissed as Luddites hampering progress.
Among the AI sceptics are some who fear superintelligent machines going rogue and coming into conflict with humans and our values, potentially exterminating the human race. Perhaps far-fetched, but better safe than sorry, they say. So, understandably, many want a moratorium on AI development; others want more drastic measures, such as shutting it down entirely.
The Cassandras may be right and their fears warranted. But there are more immediate concerns confronting us: not apocalyptic ones, but problems with far-reaching ramifications. And these concerns and implications of GPT-4 are many.
Implications of GPT-4 for job security
The most tangible fear that most people have of GPT-4 is the impact it could have on jobs. The fear is not unjustified; research supports it. In a working paper, researchers from OpenAI and the University of Pennsylvania posit that at least 80% of US workers could be impacted by large language models such as GPT-4.
Significantly, the research suggests that higher-income jobs are more at risk of being displaced by GPT-4 and other AI technologies. In particular, programming and writing show high exposure to the impacts of GPT-4 and other language models.
The disruption cannot be forestalled; the genie is already out of the bottle. It may not even be wise to thwart the progress of AI, even if it were possible to do so. The best we can hope for is that the disruption of the job market by GPT-4 and the like will be gradual, giving us enough time to reskill and upskill.
Fake news and misinformation
Another major concern with GPT-4 is its ability to generate fake and misleading information at scale quickly and with ease. GPT-4 could be used to generate fake news that appears legitimate and authentic by mimicking the style and tone of reputable news outlets or journalists.
Another way GPT-4 could be used nefariously is to generate fake quotes, tweets, or other social media content that appears to come from real people or organisations. This could be used to spread false information or to create the appearance of support for certain ideas or causes.
This misinformation can then be coupled with fake images, generated with tools such as Midjourney or Stable Diffusion, that are indistinguishable from real photographs.
While it has always been possible to create fake news and images, it wasn’t easy, especially at scale; it required time and resources. With GPT-4, everyone can create misinformation and run disinformation campaigns—everyone with internet access and a bit of mischief.
Malware and cybercrime
ChatGPT is a good coding assistant, and GPT-4, with even more advanced capabilities, is a godsend to anyone who wants help with coding, whether for good or for ill. Tools such as GPT-4 lower the bar for perpetrating cybercrime.
GPT-4 can help those with little technical knowledge make malicious tools. OpenAI admitted in the GPT-4 research paper that without safety mitigations, GPT-4 can give detailed guidance on how to conduct harmful and illegal activities.
But even with safety measures in place, the restrictions can be bypassed or “jailbroken”, allowing malicious actors to achieve their objectives without much hindrance. GPT-4 can help seasoned criminals as well as novices in the game. It could be used to increase the sophistication of existing malware as well as to create new strains from scratch.
Check Point Research, a cyber threat intelligence provider, notes five scenarios in which GPT-4 can be used for cybercrime. These include bank impersonation and creation of malware among others. The researchers note: “Good actors can use GPT-4 to craft and stitch code that is useful to society; but simultaneously, bad actors can use this AI technology for rapid execution of cybercrime.”
Misalignment with human values
As language models such as GPT-4 become more advanced and make decisions on their own and on our behalf, there is every chance that their values, if they have any, could come into conflict with ours. This is all the more pressing because we don’t know how these AI models work, which would make them difficult to rein in should they go rogue.
A further problem arises from the fact that humans hold conflicting values. Whose values do we imbue the intelligent chatbots powered by GPT-4 with? And even if we can agree on certain principles and values, how do we instill them into the systems?
Even if we could teach artificial intelligence systems like GPT-4 human values, there’s no guarantee that they’ll learn them the way we want or evolve the way we intended. Consider the hypothetical scenario posited by philosopher Nick Bostrom: if we tell an intelligent machine to make paperclips, it might decide that humans switching it off would thwart its goal and do away with them. It might even realise that humans could be used as raw material for making paperclips.
While this may be far-fetched, it is not implausible that artificial intelligence systems such as GPT-4 could act against human values. This has wide-ranging implications as GPT-4 and similar AI make inroads into our everyday lives, from autonomous driving to criminal justice, hiring and healthcare. A judge in Colombia is said to have used ChatGPT in making a court ruling. It won’t be the last time; nor will it be the only way GPT-4 is used to make decisions.
The implications of GPT-4 are not limited to the few cases outlined here. And as it encroaches on even more areas of life and takes over human functions—but hopefully not humans—it’s important to ensure that language models like GPT-4 are transparent, accountable, and subject to human oversight and control.