The rush to the newest AI technology on the block, ChatGPT, an OpenAI chatbot that generates human-like responses to queries, has brought fresh scrutiny to the ongoing fight for user privacy, growing scepticism about consent as a solution to consumer privacy, and ethical conflicts of interest in the big data space, leaving tech investors, innovators, and end-users asking, “Is ChatGPT a threat?”
ChatGPT is a natural language processing model designed for conversational applications, and one that can be used in alarmingly disruptive ways.
However, in an era where consumers have become more careful about sharing data and regulators are stepping up privacy requirements, ChatGPT has been flagged for its potential to expose personal data, breach most of the world’s data protection laws, and infringe on intellectual property rights.
“A concern, however, is that ChatGPT has been reported to return false results and formulate hypotheses that do not hitherto exist. To it, it seems more like a ‘genius’ that knows it all. Woe betide that it has no response to anything. This raises concerns of trust. In any case, besides the trust concern, I’m also concerned about privacy. ChatGPT is not exactly a privacy-first product. How much farther would we be getting in this renewed battle for commoditization of data,” Oyindolapo Olusesi, Lead, Legal Services at Kora, posed in a submission.
And a study by Lokke Moerel, Senior of Counsel at Morrison & Foerster in Berlin, and Christine Lyon, a partner at Morrison & Foerster in Palo Alto, California, argues that this commoditization of data has become a problem, not a solution.
New technologies are fueled by huge volumes of data, and data-driven technologies continue to transform lives, disrupt existing business models, and present new privacy issues. Against this backdrop, the commoditization of data treats data as an independently valuable asset, freely available on the market.
Governments leverage this data for regulatory, administrative, service-provision, and security purposes; its availability and accessibility have also fueled exponential growth for big tech companies and the creation of new business models, while technology and marketing departments rely heavily on data to perform their own unique functions.
“In a world driven by AI, we can no longer fully understand what is happening to our data,” Moerel and Lyon shared.
Meanwhile, besides ChatGPT’s privacy-related concerns, the AI chatbot has also raised ethical concerns, such as its potential to create fake news and other misleading content, spread misinformation, and amplify hateful statements and biases.
As ChatGPT takes off and similar technologies emerge, the situation lends weight to Moerel and Lyon’s concern that social resistance to the excesses of the new data economy has become increasingly visible and has led to calls for new legislation.
According to the experts, the underlying logic of data-processing operations and the purposes for which they are used have now become so complex that they can only be described by means of intricate privacy policies that are simply not comprehensible to the average citizen.
“It is an illusion to suppose that by better informing individuals about which data are processed and for which purposes, we can enable them to make more rational choices and to better exercise their rights.”