
ChatGPT & AI – The Benefits and Dangers

Reading Time: 6 minutes

It feels like you can’t open Twitter, read a news article, or watch a video clip without somebody mentioning ChatGPT and OpenAI’s latest innovations in artificial intelligence. You have the doomsayers claiming that tools like ChatGPT are going to replace humans in the workforce and make schools obsolete, and you have the yes-men of the internet saying ChatGPT can do no wrong and we simply must accept it as the latest and greatest tool in an ever-growing tool belt.

So, what’s actually going on with these new AI chatbots and tools? Are they useful, or are they the beginnings of an end to our understanding of technology? And how exactly do they work?

AI 101

Let’s start with some basics. First, I am no expert in artificial intelligence. My expertise is in web and software development on a small to medium scale, and my education in technical communication gives me just enough broad background to read through gibberish research articles and figure out, in general terms, what the hell they’re talking about.

So, with that in mind, what is artificial intelligence (AI) and how does it work?

AI is a subset of computer science that involves large amounts of data, a lot of math and statistics, and algorithms to enable problem solving on a large scale. CGP Grey has a great video about how machine learning algorithms work, so I won’t dissect that topic here.

Generally speaking, AI works by taking data, “teaching” an algorithm that input A should give output B, and then grading the algorithm on its accuracy. AI systems are like third graders doing their multiplication tables. They learn patterns and tricks to get the answer. If they get the answer right, they get a good grade and a candy reward. If they get the answer wrong, they get a bad grade and deep everlasting shame.
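To make that “guess, grade, adjust” loop concrete, here’s a minimal sketch in Python. It trains a one-parameter model to learn that output = 3 × input, grading each guess by its error and nudging the parameter accordingly. The data and the learning rate are made up purely for illustration.

```python
# A minimal sketch of the "guess, grade, adjust" training loop.
# We teach a one-parameter model that output = 3 * input.

def train(examples, epochs=100, learning_rate=0.01):
    weight = 0.0  # the model starts out knowing nothing
    for _ in range(epochs):
        for x, target in examples:
            guess = weight * x                    # the model's answer
            error = guess - target                # how wrong it was (the "grade")
            weight -= learning_rate * error * x   # adjust toward a better answer
    return weight

# Training data: input A should give output B (here, B = 3 * A).
data = [(1, 3), (2, 6), (3, 9), (4, 12)]
print(train(data))  # converges to roughly 3.0
```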

With AI, over time, you end up with a system that is pretty good at doing the one very specific thing you have trained it to do. If you train an AI to recognize pictures of cars, and you feed it lots and lots of traffic data, it will get very good at identifying cars confidently. But if you suddenly show the car classifier a picture of a dog, it either won’t know what to do, or it will confidently (and incorrectly) say that your Boston Terrier is, in fact, a Ford Focus.
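Here’s a toy illustration of that confident misfire, assuming the usual softmax setup: the classifier can only ever answer with the labels it was trained on, so a dog photo still gets squeezed into one of them. The raw scores below are invented for the example.

```python
import math

# Why a car classifier is confidently wrong about a dog: it has no
# "dog" label, so the softmax forces an answer from the labels it knows.

LABELS = ["Ford Focus", "Toyota Camry", "Honda Civic"]

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores the network produced for a Boston Terrier photo.
dog_scores = [4.1, 1.2, 0.3]
probs = softmax(dog_scores)
best = max(range(len(LABELS)), key=lambda i: probs[i])
print(f"{LABELS[best]} ({probs[best]:.0%} confident)")  # "Ford Focus (93% confident)"
```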

ChatGPT & Large Language Models

ChatGPT is a specific kind of artificial intelligence called a Large Language Model (LLM). Simply put, an LLM is a machine learning program that takes in unstructured, natural-language text like books, songs, articles, magazines, poems, and conversations to train the model. Because the input and training data are unlabeled, the LLM can generate huge numbers of possible responses to any given input, and the developers of the program fine-tune the algorithm to produce text that is understood by, and appears to be written by, humans.
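As a heavily simplified sketch of learning from unlabeled text, here’s a bigram model: it just counts which word tends to follow which in raw text, then generates new text by sampling from those counts. Real LLMs use neural networks over tokens and vastly more data, but the core idea, predicting the next word from patterns in unlabeled text, is the same.

```python
import random
from collections import defaultdict

# A bigram model: learn next-word patterns from raw, unlabeled text.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)  # no labels needed, just raw text

def generate(start, length=6):
    word, output = start, [start]
    for _ in range(length):
        word = random.choice(follows[word])  # sample a likely next word
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the cat sat on the mat and"
```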

LLMs can be trained on general text, like ChatGPT, or they can be trained on specific text, like GitHub CoPilot, which is trained on publicly available code. Both ChatGPT and CoPilot are powerful LLMs, but they represent two distinct training sets. ChatGPT can respond to questions like “what is the closest planet to the sun”, while GitHub CoPilot can respond to code comments like “connect to the database and retrieve the rows from ‘users’ with the columns ‘id’, ‘name’, and ‘age’”.
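In practice, that workflow looks something like the sketch below: you write the comment, and the tool suggests the code beneath it. The in-memory database and sample rows here are hypothetical, added only so the example runs end to end.

```python
import sqlite3

# Hypothetical setup so the example is self-contained and runnable.
connection = sqlite3.connect(":memory:")
connection.execute("CREATE TABLE users (id INTEGER, name TEXT, age INTEGER)")
connection.execute("INSERT INTO users VALUES (1, 'Ada', 36), (2, 'Alan', 41)")

# connect to the database and retrieve the rows from 'users' with the
# columns 'id', 'name', and 'age'
cursor = connection.execute("SELECT id, name, age FROM users")
for user_id, name, age in cursor.fetchall():
    print(user_id, name, age)
connection.close()
```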

Neither ChatGPT nor GitHub CoPilot “knows” the answer to these prompts the way a human “knows” that 2 + 2 = 4. Instead, these LLMs generate their output based on probabilistic models built from the patterns in the data they were trained on. When either of these LLMs receives a prompt, it generates a range of possible outputs and then selects one based on the probability that it is “good”. This reasoning is based entirely on the training data, not on any intuitive understanding of the material at hand.
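You can picture that selection step like this: the model assigns a probability to each candidate continuation and samples from that distribution, rather than looking up a known fact. The probabilities below are invented for illustration.

```python
import random

# The model's (made-up) probabilities for continuing
# "the closest planet to the sun is ..."
candidates = {
    "Mercury": 0.90,  # the most likely continuation
    "Venus":   0.07,
    "Mars":    0.03,
}

words = list(candidates)
weights = list(candidates.values())
print(random.choices(words, weights=weights, k=1)[0])  # usually "Mercury"
```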

While the LLM’s statistical approach to natural language processing is incredibly effective, it certainly has its limitations. Because the training data is unlabeled, and is often too large for people to review, it can include harmful information, blatantly false information, or sensitive and biased text. If you give an AI racist data, you’re just going to end up with a racist AI.

Benefits

Large language models such as ChatGPT can transform the way we approach repetitive tasks in a wide range of industries. Take customer service, for example. ChatGPT can handle multiple requests simultaneously, reducing wait times for customers. It can be trained on industry-specific data and text, allowing for hyper-accurate predictive models that are specific to that company. Apple, for instance, could use LLMs like ChatGPT to focus on customer support for its own line of products only, streamlining the customer support experience.

LLMs are also able to perform quite well at tasks like editing and writing simple messages or code. Instead of wasting hours every day editing technical documents to be more readable by general audiences, an LLM can read through every document produced by the company in a given time period and output simplified versions. Companies would still need to verify the validity of the simplified versions, but the task of editing documents can be at least partially automated, increasing output and decreasing time spent.
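A rough sketch of what that pipeline could look like is below. The `llm_simplify` function is a hypothetical placeholder for whatever model or API a company would actually wire up; the important part is that every draft stays flagged for human review.

```python
# A sketch of a partially automated editing pipeline.

def llm_simplify(text: str) -> str:
    # Hypothetical placeholder: in practice, this would send the text
    # to an LLM and return the simplified rewrite.
    return f"[simplified draft] {text}"

def simplify_documents(documents):
    for doc in documents:
        yield {
            "original": doc,
            "draft": llm_simplify(doc),
            "needs_human_review": True,  # a person still verifies validity
        }

for result in simplify_documents(["The aforementioned parameters are enumerated below."]):
    print(result["draft"])
```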

Tools like ChatGPT aren’t going to take over entire industries, but they will be integrated into our work over time. I already use tools like ChatGPT and GitHub CoPilot to assist me in writing unit tests for my code. Writing unit tests for every line of code is time-consuming and, frankly, deeply boring. So rather than waste hours of my day writing unit tests, I can have CoPilot write the tests for me, and I can instead spend my time fine-tuning the tests and making sure our codebase is stable.
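For example, given a small function like the one below, a tool like CoPilot can draft the boring tests, and I spend my time reviewing and fine-tuning them instead. The function and tests here are illustrative, not from a real codebase.

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent (0-100)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

# CoPilot-style generated tests, reviewed and kept by a human:
class TestApplyDiscount(unittest.TestCase):
    def test_basic_discount(self):
        self.assertAlmostEqual(apply_discount(100.0, 25), 75.0)

    def test_zero_discount(self):
        self.assertAlmostEqual(apply_discount(50.0, 0), 50.0)

    def test_invalid_percent(self):
        with self.assertRaises(ValueError):
            apply_discount(10.0, 150)

if __name__ == "__main__":
    unittest.main()
```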

Dangers

We can see the benefits of AI tools like ChatGPT, especially for easy-to-repeat tasks and well-organized information with known answers. But what about those grey areas where we don’t have exact answers, or where the answers to our questions require some intuition? What about self-driving cars? And all these new tools that can supposedly replace our jobs?

I’m here to say, with a fair degree of confidence, that ChatGPT isn’t going to take your job. It might change your job, but it’s not going to take over. ChatGPT is, ultimately, a tool for humans to use, not a tool to replace humans. For the vast majority of knowledge workers (i.e. those of us with office jobs), AI is going to become a kind of “coworker”, or supervisor.

Doctors might use a special medical AI to review your charts and X-rays, but the AI isn’t going to push the doctor out of the office. An AI might write a draft of a legal contract, but it won’t replace the human arguing for your defense in a courtroom. Tools like ChatGPT won’t replace bank tellers and financial advisors, but they might one day write investment portfolio proposals for your Roth IRA.

The real danger we are seeing now with artificial intelligence and large language models is that people are accepting the outputs of these systems as factual. We have seen at least two major news stories where researchers or journalists claimed that an LLM is “alive” or “sentient” because these systems are eerily good at mimicking human speech. We have also seen cases like Bing’s AI chat confidently insisting that the year was still 2022, even though the system wasn’t released until 2023.

(Image: Microsoft’s Tay AI on Twitter denying the Holocaust.)

Just like our teachers used to tell us not to believe everything we read on Wikipedia because “everyone can log in and make changes”, we shouldn’t believe everything written by an AI. The danger of AI comes from humans not understanding how these systems work, yet still trusting their output implicitly. The danger comes not from the AI itself, but from how we use it. If we continue to train these systems on sensitive information, or on biased and factually incorrect statements, we are going to continue to receive biased, incorrect, or potentially dangerous output.


The reality is that these tools exist and can’t be un-invented. We have to teach our kids how to use technology appropriately, how to be safe online, and how to have a healthy dose of media literacy and skepticism around new technologies. The world is changing fast, and we have to be diligent in keeping up with it. The funny thing is, maybe someday, AI will be able to help us keep up.