The world of artificial intelligence (AI) has been a hot topic for years, but it wasn't until November 2022, with the release of ChatGPT, that its significance truly registered with the wider public. Like many others, I was intrigued by the capabilities of this new technology. Developed by OpenAI, ChatGPT quickly became the talk of the town, attracting a staggering 100 million monthly users within two months of launch and becoming the fastest-growing app in history. That milestone even surpassed the social media behemoth TikTok, which took nine months to reach the same number of users.

But what exactly is ChatGPT, and what sets it apart from other AI systems? In essence, it is a sophisticated language model designed to process natural language, generate human-like responses, and carry out a wide range of tasks. The system is built on a neural network trained on vast amounts of text, and it improves as it is trained on more data and feedback. This enables ChatGPT to write the boring essay that was due yesterday, pass high school and law school exams with scores in the 90th percentile, suggest your next meal, or even help manage your overflowing email inbox.

However, not everyone shares this enthusiasm. A Monmouth University poll found that 41% of Americans believe AI will do more harm than good, while a mere 9% think the opposite. Another research group found that only 15% of Americans were more excited than concerned about the increased use of AI in daily life. The apprehension surrounding AI seems to stem from two distinct concerns.

The first concern is job losses. It is almost as if the only way to talk about the implications of AI is through the number of jobs it will eliminate; every article leads with how many will be lost. OpenAI, the company behind ChatGPT, recently conducted a study of ChatGPT's impact on a wide range of industries and concluded that it could render millions of jobs redundant globally. Other recent studies echo these findings; one estimates that 80% of American workers could have at least 10% of their work tasks automated.

The industries that rely heavily on writing and programming are expected to be the most affected, and I am a prime example. Yet I don't view this as a negative development. In fact, I am writing this very article in half the time it would usually take me, thanks to ChatGPT. When I give it direction on what I want, it refines my rambling sentences, which significantly reduces my workload. Automation is not taking our jobs; it helps streamline processes, increase productivity, and reduce errors.
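For the curious, here is roughly what that prompt-and-refine loop looks like in code. This is only a minimal sketch using the OpenAI Python SDK; the model name, system prompt, and helper function are illustrative assumptions rather than my exact setup.

```python
# A minimal sketch of the "give it direction, let it refine" workflow described
# above, using the OpenAI Python SDK (pip install openai). The model name and
# prompt wording are assumptions for illustration. The SDK reads the
# OPENAI_API_KEY environment variable automatically.
from openai import OpenAI

client = OpenAI()

def refine(draft: str, direction: str) -> str:
    """Ask the model to tighten a rough draft according to a short instruction."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat-capable model works
        messages=[
            {"role": "system", "content": "You are a concise copy editor."},
            {"role": "user", "content": f"{direction}\n\nDraft:\n{draft}"},
        ],
    )
    return response.choices[0].message.content

print(refine(
    draft="AI is like, a big deal now and everyone is talking about it a lot...",
    direction="Rewrite this in two crisp sentences, keeping my informal tone.",
))
```

The point is not the specific code but the division of labour: I supply the direction and the rough material, and the model handles the tedious tightening.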

Let me give you an example of another technology that provoked a similar reaction. Electronic calculators first appeared in the 1960s, and their introduction sparked a significant change in how people approached mathematics. It wasn't until the 1970s, however, that calculators began to enter classrooms on a widespread basis, and initially there was a great deal of resistance to the idea. Many educators worried that students would become overly reliant on the technology and fail to develop strong arithmetic skills. There were even protests from teachers against the use of calculators, much like the upheaval we see today over the use of ChatGPT in schools and universities.

"Math teachers protest against calculator use"

Despite these concerns, calculators gradually gained acceptance, and by the 1990s they had become a staple of mathematics education. Today they are ubiquitous in classrooms around the world, assisting students with everything from basic arithmetic to advanced calculus. The technology has also kept evolving, its functions absorbed into powerful computers and software applications such as Microsoft Excel.

These tools have not made people stupider or eliminated jobs, as some feared they would. Instead, they have helped increase productivity and accelerate growth across a wide range of industries. The trajectory of calculators, and the concerns that initially surrounded them, closely mirrors what we are now seeing with AI. Many people envision AI as a technology that will render traditional education obsolete or replace so many jobs that humans become insignificant. There is no need to fear automation. In fact, it has the potential to free workers from mundane tasks, which would be a huge benefit for the tight labour markets of advanced economies, providing much-needed relief and new opportunities.

The second worry about AI feels as if it's lifted straight from a science-fiction plot. In March of this year, several high-profile tech leaders called for a pause in AI development, citing potential risks to humanity. Big names like Elon Musk, the serial entrepreneur, and Steve Wozniak, co-founder of Apple, are just some of the many voices expressing concern. Looking further ahead, some experts warn that AI could become a serious threat, perhaps even leading to humanity's demise. Musk put it bluntly in an interview with Tucker Carlson: “It [AI] has the potential of civilization destruction”. He argued that the possibility of superintelligent AI must be carefully considered before it is developed, and that safety measures should be prioritized during the current development phase.

AI experts caution that in the short term, AI systems could worsen existing biases and inequalities, spread misinformation, disrupt politics and the economy, and facilitate cyberattacks. Contrary to the pop culture image of AI as a menace akin to the Terminator, machines in and of themselves don't pose a direct danger. A lot of the fear surrounding AI springs from its dramatic representation in fiction, like the rogue HAL 9000 in the film '2001: A Space Odyssey', who infamously turns against and kills the spaceship's crew.

I believe there may be another explanation for the prevalent pessimism about AI among the general public. Over the past forty years, technology has seemingly done little to boost income growth for low- and middle-income households in advanced economies, leading to widespread disillusionment. As I discussed in a recent piece, the income growth observed since the 1980s has predominantly benefited the burgeoning Asian middle class and the top 1% of the global population. This has driven many in the lower and middle brackets to see the economy as a zero-sum game, in which new technological advances merely enrich those who are already affluent while leaving them in an even more precarious financial position. That growing sense of unease and frustration may be what has hardened into a fear of technological progress.

In conclusion, the rise of AI has the potential to be a force for good or bad, depending on how it is developed and implemented. While concerns about job losses and the potential risks of AI are understandable, it's important to recognize that AI has the power to streamline processes, increase productivity, and provide new opportunities. It's crucial that safety measures are put in place during the development phase to mitigate potential risks, and that the benefits of AI are shared across all income brackets to avoid exacerbating existing inequalities. As with any new technology, it's up to us to determine how AI is used and to ensure that it serves the greater good.