In a paper published in 2017, four researchers asked whether humans will still write code in 2040. They answered in the negative. They are almost certainly right: by 2040, machines will write most of the code, if not all.
They were a bit generous with the timeline, however. GPT-4 has brought their prediction much closer. GPT-4 can create perfectly functional code from mere descriptions and even sketches. And we don’t know what other generative tools are being developed that specialise in generating code rather than text, as GPT-4 does. Nor do we know how much better GPT-4’s successors will be at coding.
And so here we are in 2023, wondering whether human programmers will remain relevant, not in distant 2040 but a year or two from now, perhaps even sooner. Could GPT-4 really replace programmers?
GPT-4 excites non-programmers, worries programmers
GPT-4’s ability to generate code efficiently and accurately from natural language prompts has made lots of people excited; programmers, understandably, are less exhilarated. People with no programming experience can now, provided they have an idea of how to piece code together, create games and apps. Coding skills are no longer a requirement for programming, or so the hype suggests.
This raises questions about the future of programming and the extent to which GPT-4 could replace or reduce the need for human programmers. Some programmers worry that the growing capabilities of GPT-4 and other AI systems may render their skills and expertise less valuable, or even unnecessary, in the future.
GPT-4 and its coding capabilities
Human languages and programming languages work in similar ways. Both have rules, semantics and syntax. Programming languages are, arguably, more complex than the human kind; but their rigid rules and unambiguous semantics make them easier for machines to learn and execute.
GPT-4 can create perfectly coherent sentences in obscure languages with limited examples and training data. It doesn’t simply generate the answers in English (or one of the other popular languages) and translate them into the language in question, or so it claims. This capability could extend to programming languages; paucity of sample data is no bar. GPT-4 doesn’t need huge samples of code to create efficient, working code.
The coding capabilities of GPT-4 were already evident, if not obvious, in its predecessor, GPT-3.5, on which ChatGPT was based. ChatGPT could create snippets of code in a fraction of the time it would take a human programmer to write them. It could also debug and correct its own code given hints and suggestions; it could even debug human-written code.
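To make that concrete, here is a hypothetical exchange of the kind these models handle well (an illustration, not a transcript of any real session): a user pastes a small function with an off-by-one bug, and the model spots and corrects it.

```python
# A user intends this function to sum the first n numbers (1..n):
def sum_first_n(n):
    total = 0
    for i in range(n):  # bug: range(n) yields 0..n-1, so n itself is never added
        total += i
    return total

# The typical model fix: use range(1, n + 1) so the loop includes n.
def sum_first_n_fixed(n):
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

print(sum_first_n(5))        # 10, off by the missing 5
print(sum_first_n_fixed(5))  # 15, correct
```

Trivial as it looks, catching this class of bug in arbitrary pasted code was, until recently, a distinctly human skill.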
GPT-4 is a lot better than GPT-3.5 was. It has a larger context window, meaning it has a longer “memory” and is less likely to go haywire. The larger context window also means GPT-4 can generate lengthy programs with hundreds of lines without losing track of what appears some blocks or lines away.
Some people have been using it to develop games from scratch; not triple-A games but 3D games that are nonetheless impressive. And not just mobile games but payment apps, websites, plugins for WordPress, you name it.
More impressive still, GPT-4 can generate code from sketches of websites or apps. Users need not be good at verbal prompting; they can now upload a picture of a sketch or outline of the thing they want the code for.
This is not to say that programming skills will become redundant; far from it. A human will still be needed to compile, run, test and deploy the code. And GPT-4 and similar AI systems are still fraught with limitations.
Limitations of GPT-4 in coding
GPT-4’s capabilities in coding look impressive from a distance, at least from an amateur’s point of view. On closer examination, however, it is far less dazzling; its many limitations come to light.
The first is that it is not human. It is not sentient (arguably) and does not know programming languages; it merely has knowledge of them. It gives the answer most likely to be correct in response to a prompt but does not itself know whether that answer is right or wrong. It may recognise programming concepts and syntax, but it may not fully understand the semantics of the code it generates. This can result in code that is technically correct but does not function as intended.
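A contrived but representative example of “technically correct but not as intended” (hypothetical, not drawn from any real GPT-4 output): asked to sort version numbers, a model might produce a plain string sort that runs without error yet orders the versions wrongly.

```python
versions = ["1.2", "10.0", "9.1", "2.0"]

# Syntactically valid, runs fine, but compares strings character by character,
# so "10.0" sorts before "2.0":
wrong = sorted(versions)
print(wrong)   # ['1.2', '10.0', '2.0', '9.1']

# What the user actually meant: numeric, component-wise ordering.
right = sorted(versions, key=lambda v: tuple(map(int, v.split("."))))
print(right)   # ['1.2', '2.0', '9.1', '10.0']
```

Nothing in the first version is “wrong” as code; the failure is entirely one of intent, which is exactly the gap a model without real understanding can fall into.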
It is not wholly trustworthy and cannot be relied upon where high stakes are involved, such as data security. And users must have at least a basic understanding of programming so that they can guide and direct it to make it work as it should. Otherwise it would be like a carpenter assembling a car without knowing where to put the steering wheel. Which brings us to the second weakness of GPT-4.
GPT-4 is susceptible to bad prompts. It spouts answers confidently regardless of whether the prompt makes sense, or, for that matter, whether its answer makes sense. It never doubts, and so it never asks questions. This may not be considered a limitation per se, and the writer of this piece may be perceived as too keen to take the side of the human and falsely implicate AI (the writer is a human, after all). But asking, refining and iterating are crucial elements of the development process. GPT-4 is a little too subservient to be a reliable assistant.
Another limitation of GPT-4 for coding is that its context window, though significantly larger than its predecessors’, is still rather short. This makes it less suitable for programs with complex and lengthy code. The lengthier the code, the more likely GPT-4 is to produce code that malfunctions.
Be that as it may, GPT-4 is a big leap in AI’s coding capability. It may not only reach human-level competency but soon outcompete humans. Indeed, a number of noteworthy figures, most notably Elon Musk, fear AI overtaking humanity and have called for a moratorium on AI research and development. Is the fear warranted? Will GPT-4 put human programmers out of work?
Does GPT-4 put the future of programmers at risk?
To give a short answer: yes, but also no. GPT-4 will replace aspects of programming that involve repetitive and routine tasks, such as writing fairly simple code, refactoring and documentation. But programming involves a range of skills beyond the purely technical, in which machines cannot replicate or match humans.
Programmers play a critical role in designing and architecting software systems. GPT-4 cannot replace the creativity and problem-solving skills that human programmers possess. And while GPT-4 may be able to generate functional code, a human expert is still needed to inspect and analyse that code and ensure that it works as intended.
Moreover, programming involves more than just coding. It requires skills that AIs like GPT-4 either lack or in which they fall far short of humans. Problem-solving, critical thinking, and understanding software architecture and design are all essential skills that will be difficult to teach to machines.
And while GPT-4 may be proficient at generating code from natural language prompts, it lacks the ability to understand the context and purpose behind the code it generates. Human programmers, on the other hand, possess the necessary understanding of a project’s requirements and can make informed decisions about the appropriate code to use.
GPT-4: A threat or a treat?
Like all groundbreaking technologies, GPT-4 is a boon for some and a bane for others. It will certainly make certain jobs obsolete but it will also make the field of programming much more accessible and allow people with little coding expertise to create and develop programs.
As AIs replace humans in the mundane aspects of programming, humans can devote their time and energy to things that machines cannot do or perform well. This will allow humans to channel their (our) creative potential to innovative pursuits and enterprises. And to use machines for human benefit.
GPT-4 is, therefore, likely to be more effective as a complementary tool rather than a complete replacement for human programmers.