Taking a Break from GPT
This article is not a description of yet another invention or idea, nor is it empty text generated by ChatGPT just to check a box. It is my reflection on a fairly timely and personally important topic. To begin, I will briefly outline my profile as an introduction so it is clear how I arrived at the conclusions presented below. Please judge harshly; I will be glad to hear any tough critique of my ideas.
So. I was born in 1979. Just kidding, I am not going to retell my childhood and adolescence—straight to adulthood. I was lucky enough to study at an incredible department (IU7) of an incredible technical university (Baumanka) during a pivotal period in computer technologies and networks (1996–2002). Looking back, I am thrilled that I was born when I was, and that I became interested in programming and computers. I saw new programming languages, new technologies, new approaches to software development, new protocols, and new operating systems emerge right before my eyes. The Internet became publicly accessible. I got to see Bill Gates and Steve Wozniak in person—people who were building the future. I am still amazed that I personally knew, spoke with, and vividly remember people who had been born in the 19th century. The 19th century! In short, I am old and experienced. I have worked in IT my whole life. There was a period of stagnation when I moved into the IT business side, but even then I kept developing, albeit within a fairly narrow sphere. Later I got back into the flow and have tried to keep up with trends and understand what is happening in the industry and in technology.
In the fall of 2022, ahead of many other IT people, I registered an OpenAI account using my foreign phone number and got acquainted with ChatGPT. To say that I was impressed would be an understatement. In college, I was taught the algorithmic and mathematical foundations of artificial intelligence and expert systems (the course was taught by Valentin Ivanovich Nezemsky—a very eccentric professor with very strange examples and an unusual way of dealing with students). Because of the peculiarities of that course and the limited computing power available at the time, it was impossible to implement sophisticated algorithms or gather enough data, so we wrote primitive little programs that simulated decision-making based on extremely limited information. Back then it did not seem like anything magical. But ChatGPT blew my mind. I was especially impressed by how it had learned—still rather primitively at that time, but nonetheless—to generate software code. That version of GPT (I think it was 3.5) could stitch programs together out of separate functions, taking those functions almost as-is. It would not rename variables, it made mistakes, it did dumb things, but it still generated code.
Since 2022, AI technologies have become a powerful and reliable tool for me—one I use consciously every day and see directly affecting the quality and speed of my work. Generative models have become much more accessible, faster, and easier to understand, and now a huge number of people, at least in my environment, have started using AI. And that is where I began to see a very big problem.
Suddenly, people who knew nothing about design, technology, medicine, architecture, programming, education, and many other knowledge domains began to feel like all-powerful specialists and started creating. Ordinary people suddenly gained easy access to information that assembles itself as if by magic. And the information they get is hundreds of times better than anything they could have created themselves. Very few of them stop to think about where that information comes from, or how their prompts—with grammatical mistakes, typos, and poorly formed thoughts—are turned into content that exceeds their expectations. What is frightening is that these people started presenting generated information as their own thoughts, and came to believe that they themselves had thought it up and created it. "Specialists" began appearing like mushrooms after rain—people who started teaching others, running blogs, consulting, and advising, without ever having worked for a single second in the field they had suddenly invaded. Many people use the results of generative models without even reading the texts, merely skimming over the output because it looks like real writing.
At work, I now have to read chat-generated technical specifications, concepts, reports, ideas, and even regional and federal regulatory documents almost every day. All the documents around me have suddenly become full of em dashes, academic terminology, and complex turns of phrase, while the "creators" of this content often do not even understand those terms. Sometimes I even catch people showing off their brilliant presentations, glittering with trendy buzzwords and abbreviations, who do not actually know the meaning of the words in their own documents. The Dunning–Kruger effect has taken on a whole new dimension now that it has become difficult to find people who truly understand a subject. The Chinese took an important step that, I believe, we will also eventually need to take in our own country—they banned people without higher education from running blogs and teaching. In my view that is a wonderful decision. As for me, I would like to suggest that the creators of ChatGPT and other chat tools remove the copy-text function—let people use generated knowledge, but not simply copy and paste it into their own files. Let them type it out, let them read it, let them engage with it.
I should add that I, too, have increasingly come to use the chat as a search engine: I generate an answer to a question, read it, and then write the document myself from scratch.
I would like to believe that the smart people building AI mechanisms have seriously thought through algorithms that prevent generated text from being included in the knowledge bases of future generative models. Otherwise, I foresee rapid degradation in future models whose output will be based on generated and unverified texts.
P.S. No text generation, grammar checking, or similar tools were used in writing this text. Pure HTML.