ChatGPT: it’s trendy, it’s viral. Do I want to?

“As usual, it seems that only what goes viral, what is ‘trendy’, is relevant.”

What is this ChatGPT thing? Right now we can describe it as a chatbot prototype developed by OpenAI, a large language model (LLM) trained on a huge amount of data. It is not the first open-access chatbot, but it is surely the one that marks a turning point in the spread of this kind of tool, because it is the opposite of Ingres’ violin (a dilettante’s side pursuit): everything it does, from generating Python code to essays on a wide variety of topics, it does well. If it were human, one would say “it’s good”.

But it is not human, and the free availability of its generative capacity, without any safeguard against possible consequences, ignores several of the European Union’s ethical guidelines for trustworthy and responsible Artificial Intelligence. This is not about limiting innovation; it is about noting, once again, an inverted process in which the tool is tried first and policies and guidelines are defined later, in reaction to consequences that will inevitably follow from the massive use it has seen since its launch in November.

To test ChatGPT, simply create an account and start experimenting. You can ask it a question, or ask it to write code in a programming language. This AI model, or chatbot, generates content that closely resembles what humans produce, working in a predictive way. But we also know that ChatGPT has no ability to verify the veracity of the content it produces, although it does have the ability to make that content sound credible even when it is biased.
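For readers who would rather script the experiment than use the web interface, the same questions can be sent programmatically. The following is a minimal sketch, not part of the article’s own test, and it assumes the openai Python package is installed, an API key is set in the OPENAI_API_KEY environment variable, and the model name and prompt are purely illustrative:

    # Minimal sketch: querying a ChatGPT model via OpenAI's Python SDK
    # instead of the web interface. Assumes `pip install openai` and an
    # API key available in the OPENAI_API_KEY environment variable.
    from openai import OpenAI

    client = OpenAI()  # picks up the API key from the environment

    # Ask the same kind of open-ended question discussed in the article.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model name
        messages=[
            {"role": "user", "content": "Summarise the causes of World War I."}
        ],
    )

    # Print the generated answer; note that nothing here verifies its veracity.
    print(response.choices[0].message.content)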

We tested the chat with several open-ended questions about historical events, philosophical, political and legal notions, scientific theories and experiments, artistic movements, and medicine and health care. The answers are invariably well structured, presenting various perspectives in a descriptive way. It is not only to the naked eye that they appear to be written by humans, so the fear that they will be used to replace an original process of thought and composition is justified.

The study “Machine Generated Text: A Comprehensive Survey of Threat Models and Detection Methods” shows how abuse of Natural Language Generation (NLG) models is multiplying negative social impacts (and not only social ones) as cases of abusive or harmful use emerge. The question that arises is: why don’t we have media outlets (and other stakeholders) promoting careful analysis of measures to mitigate the abuse of AI systems? Will we talk about how ChatGPT can be a tool to detect, or to mount, cybersecurity attacks? About the possible escalation of fake news? About ways to identify and prevent academic fraud? About creative ways to take advantage of this tool in a pedagogical context? About the kind of impact it will have on the exercise of various professions, from teaching to computer engineering?

As usual, it seems that only what goes viral, what is trendy and therefore a potential target of financial investment, is relevant. This seems to demonstrate three things: that we are a poor country, with poor digital literacy, in which it is easy to buy social attention through news built around terms like ‘capital’, ‘millions’ and ‘investors’. As Paulo Leminski would say, “Were it not this and it was less. Were it not so much and it was almost”.


This article is published in o largo under the project “Culture, Science and Technology in the Media” (Cultura, Ciência e Tecnologia na Imprensa), promoted by the Portuguese Press Association.

