No need to introduce ChatGPT, the new technological star that talks to you the way a human would on WhatsApp or Telegram. Whatever the subject, its conversational flow urges you on: “Ask me anything, and I’ll produce it for you!” Journalists, lawyers, writers, musicians, graphic designers: you’re obsolete now. And once again, the media coverage underlines a paradox: technology floods our lives, but understanding how it works seems irrelevant when it comes to assessing its potential and its dangers. Only emotion prevails: here, fascination and fear.
Opinion Article by Prof. Benoît Garbinato, Department of Information Systems, HEC Lausanne
The main principle behind ChatGPT is nevertheless easy to understand. It is a version on steroids of the algorithm that suggests the next word as you type a sentence on WhatsApp, for example. Its answers are simply the most likely sequence of words, produced from the texts it was trained on. Where ChatGPT impresses is in the dizzying volume of data and parameters that determine its responses, as the sketch below helps put in perspective.
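To make this principle concrete, here is a deliberately minimal next-word suggester in Python, based on simple bigram counts. It is purely illustrative: ChatGPT’s actual mechanism is a neural network with billions of parameters, not a frequency table, but the underlying idea of predicting the statistically most likely continuation is the same.

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus: str) -> defaultdict:
    """For each word, count how often every other word follows it."""
    words = corpus.lower().split()
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def suggest_next(model: defaultdict, word: str) -> str:
    """Return the statistically most likely next word ('' if unseen)."""
    candidates = model.get(word.lower())
    return candidates.most_common(1)[0][0] if candidates else ""

# Tiny toy corpus; ChatGPT was trained on hundreds of billions of words.
corpus = "the earth is round . the earth is blue . the earth orbits the sun"
model = train_bigram_model(corpus)
print(suggest_next(model, "earth"))  # -> 'is' (seen twice, vs 'orbits' once)
```

Scale the same statistical idea up from pairs of words to contexts of thousands of words, and from explicit counts to billions of learned parameters, and you have the essence of a large language model.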
There is a popular misconception that ChatGPT is constantly refining its “knowledge.” This is not the case, for a very good reason: let a conversational agent feed on whatever it finds on the Internet, and it will not be long before it peddles all sorts of nauseating remarks. This is what happened in 2016, when Microsoft released its conversational agent Tay and had to shut it down within a day, after users taught it to parrot racist and offensive messages.
So why does ChatGPT produce answers that are often reasonable, or even correct? The answer is disconcertingly banal: a large number of humans, paid less than $2/hour, manually cleaned up the data used to train it. So ChatGPT doesn’t know that the earth is round; it has simply been trained on a corpus of texts purged of all sentences like “the earth is flat”, as the toy filter below illustrates.
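Here is a hypothetical sketch of that purging idea: a filter that drops flagged sentences from a training corpus before the model ever sees them. The blocklist and the matching rule are invented for illustration; in reality, as noted above, the cleaning was largely manual human labor, not a one-line filter.

```python
# Hypothetical, human-curated list of claims to purge from training data.
BLOCKLIST = {"the earth is flat"}

def clean_corpus(sentences: list[str]) -> list[str]:
    """Keep only the sentences containing none of the blocklisted claims."""
    return [
        s for s in sentences
        if not any(bad in s.lower() for bad in BLOCKLIST)
    ]

raw = [
    "The Earth is round.",
    "Some people insist the earth is flat.",
    "Water boils at 100 degrees Celsius.",
]
print(clean_corpus(raw))  # the flat-earth sentence never reaches the model
```

A model trained on the cleaned corpus does not “know” the earth’s shape; it simply never encounters the competing claim, which is precisely the point.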
Why do we confuse human intelligence with the automatic production of probabilistic texts? The answer may be that humans often behave like automatons, reproducing, with minor variations, standardized and predictable discourses. The question, then, is not so much whether automata are (or will be) capable of behaving like humans, but to what extent humans behave like automata in many situations.
In the near future, humans will not be replaced by intelligent machines, but by far fewer humans who know how to use tools like ChatGPT. The real danger lies rather in the diminishing capacity of a majority of people to construct their own vision of the world and to express it in an articulate manner. If, tomorrow, most graphic designers change jobs and new images are nothing more than automatically generated aggregates, there will come a time when artificial intelligence re-ingests its own images and produces insipid variants ad nauseam.
Like many innovations, ChatGPT is a fabulous tool in the hands of experienced users, but terribly dangerous when handled by naïve ones. The future will tell whether this tool remains in line with what the computer has been until now, namely a kind of bicycle for the mind, which does not replace our muscular strength but multiplies it and enables us to go further, according to a metaphor dear to Steve Jobs. With artificial intelligence comes the electric bike: its multiplication factor is greater, but the principle remains the same, since we still provide the basic motion. A darker alternative, however, lies before us: artificial intelligence becomes an SUV, crushing everything in its path – the subtlety of our human intelligence, our creativity, our empathy with the world.
The good news is that, for the time being, the choice is still ours.
Read the full article on Prof. Garbinato’s blog
Watch the CSE UNIL video on this topic (in French): IA génératives et enseignement supérieur : entretien avec le professeur Benoît Garbinato (“Generative AIs and higher education: an interview with Professor Benoît Garbinato”)