Would You Trust an Alien Without Knowing Anything About That Alien?

We are on the brink of doing exactly that.

Like many others, I have poked around a little in the topic of A.I.-driven computer models: the language models that make the news all over the place, such as GPT-4, “a state-of-the-art language model that makes it feel like you are communicating with a person rather than a machine“, as well as systems with amazing capabilities such as DALL-E 2, an “AI system that can create realistic images and art from a description in natural language”.

In both cases, I quote from the websites where these systems are hosted. GPT-4 is introduced as a system that literally makes me feel as if I were conversing with a human being. And in the case of DALL-E 2, the description speaks not only of realistic images, but also of “art”.

So I asked GPT-4 “What is art?”. I got the following response: “Hello! “Art” is a term that refers to various creative and expressive activities that involve the production of visual, auditory, or performance-based works that are valued for their aesthetic or emotional appeal. It can include various forms of media, such as painting, sculpture, music, literature, film, photography, and dance. Art can be created for a variety of reasons, including personal expression, communication, cultural or political commentary, or simply for the sake of beauty or entertainment. The definition of art is subjective and can vary depending on cultural and historical context, as well as individual interpretation.”

Please note that GPT-4 does not give me any sources for this definition. The definition is a product of the language model itself, which has been trained on an immense data set retrieved from public sources on the Internet, and which has compiled its own answer from those many sources without explaining its path from parsing the data set to providing me with the definition you can read above.

If I google “What is art?”, I get many responses, of which the Wikipedia definition is just one example: “Art is a diverse range of human activity, and resulting product, that involves creative or imaginative talent expressive of technical proficiency, beauty, emotional power, or conceptual ideas.” Reading through all the different definitions of art is my own intellectual process, and if I adhere to proper standards, I refer to several sources and then explain why I have settled on a particular definition.

This is missing in GPT-4’s answer to my question.

Now: is this deliberate, or just an omission? The problem is that nobody can reproduce how GPT-4 arrived at an answer like the above by parsing its vast knowledge base. A.I. language models are highly complex “neural network models”. There is no way to read these neural networks out the way you can trace and reproduce the process in a classical, linear piece of software code iterating over an algorithm. Too much technical speak? Sorry for that; this is important here.

So, next step: I asked GPT-4 “Is art a human feature?“. Long story short, GPT-4 gave me an explanation of a scientific dispute over whether creating art is a uniquely human capability, or whether animals, or some animals, share it. I note that GPT-4 talks only about humans and other animals, not about machines.

Which leaves me with the impression that the creators of the website introducing DALL-E 2 have an understanding of “art” that includes the assumption that A.I. can create art. Otherwise they wouldn’t have said that, would they?

Except, hold on, here I came up with a seemingly weird thought: what if the creators of DALL-E 2’s website asked GPT-4 to generate a website template for them? For the uninitiated: GPT-4 has vast abilities which go light-years beyond my little example above. You can use GPT-4 to write poems, articles, essays, complex Excel formulas, or highly sophisticated computer code. Meaning, you can ask GPT-4 (and many do) to create a template for a website, too. Then you would have a situation in which one A.I. system describes another A.I. system, and no external visitor of that website would have any means of identifying which language comes from humans and which from GPT-4. This, as I will try to explain, is the core of the problem.

And that is, for the purpose of screwing up your mind at whichever time of day you are reading this, enough of an entry point into a mind-boggling discussion, and into a development in industry which appears unprecedented in terms of speed. We are not talking years, not even months: the news about further developments and implementations of A.I. language systems, in anything from social media to faking your Ph.D. thesis, to decision-making, to the integration of GPT-4 into Microsoft’s Bing, hammers the streets in a battle rhythm defined by weeks, or days.

And the point I want to make is that we have no clue about the consequences. Many people don’t understand these models, or are merely fascinated by them, or are simply unaware of their ever more pervasive use in every daily errand we undertake.

Yet we are on the brink of a revolution about which some of the most renowned experts on this globe say that it might eat its own children.

What? Why? Because language creates the reality we perceive. In a sane environment, when I listen to language, something called “trust” is involved, since I allow myself to listen to something which creates, or shapes, my reality. If I don’t trust, or trust the wrong people, the result is that, for example, social media messes with my sanity. But in all these cases, I listen to, and perhaps trust, real people.

If I don’t know whether the information I receive originates from people or from an A.I., and if I combine the creative power of A.I. with the capability of generating complex constructs, such as the website in my little example above, an article, or a book, which leave the impression of being human-made, then I simply render my trust to an Artificial Intelligence: one I have to trust knowingly, in case I am told about its participation, or one I am left in the dark about, in case its use is not made known to me. This possible use of A.I.-based language models to create entirely non-human-made realities, indistinguishable from human information and manipulation, is what gives the creeps to a large body of well-minded scientists, and to interested laymen.

Confused? Well, imagine giving GPT-4 the task of overhauling Wikipedia. Wikipedia is based on contributions from millions of people, plus a quality-control mechanism. Now, if GPT-4 enters this crowd in disguise, what if, all of a sudden, the “human” element is no longer part of the definition of “art”? That would simply mean we allow A.I. to shape how we perceive the world. And the question is: do we know what we are doing, and have we built in enough checks and balances?

It won’t be only the well-meaning who explore the fascinating abilities of new forms of A.I. like this. Anyone with a creatively malicious mind will explore the power of these models as well. Some already have: these models have already been used to generate malicious computer code meant to crack systems wide open. Just the beginning.

We are beginning to trust aliens to create the reality in which we live. Here is what people with renowned academic backgrounds have to say on this matter:

I have written about the work of Yuval Noah Harari on several occasions. Quoting from a Guest Essay in the New York Times: “Yuval Noah Harari is a historian; the author of “Sapiens,” “Homo Deus” and “Unstoppable Us”; and a founder of the social impact company Sapienship. Tristan Harris and Aza Raskin are founders of the Center for Humane Technology and co-hosts of the podcast “Your Undivided Attention.””

On March 24, 2023, Yuval Harari, Tristan Harris and Aza Raskin wrote a Guest Essay in the New York Times titled “You Can Have the Blue Pill or the Red Pill, and We’re Out of Blue Pills“. I have read quite a few of Harari’s fascinating books, starting with “Sapiens”, which opened my eyes in many ways. In other books, he describes advances in genetic technology and in Artificial Intelligence as some of the most future-defining developments of our time. He has done so for many years, so the alarm bells rung by Harari, Harris, and Raskin are not a sign of panic, but of profound research into these issues.

They are not alone. “WIRED” is one of many news outlets reporting on an open letter signed by hundreds of prominent A.I. experts, including Elon Musk. Frankly, that name does not lend much credibility in my eyes, but I am biased there, for many reasons, including his unhinged narcissistic attitude towards the power of capitalism. Yet that does not mean he is wrong to join, whatever his motivation. The fact that hundreds of the world’s best of the best warn “In Sudden Alarm, Tech Doyens Call for a Pause on ChatGPT“ gives important weight to a discussion that requires a degree of technological and societal understanding which is not necessarily part of the common toolbox of the people who let this sort of technology creep into the foundations of how our societies, and we as individuals, “function”.

What is it about?
The use of Artificial Intelligence is not new, and in many ways it already pervades all corners of our daily lives. I will explain, in order to work out the profound difference that GPT-4 and other such models make.

Cybernetics and computer technology have been a field of interest of mine for decades. For roughly as long, I have read about attempts to replicate the inner workings of the brain, through its neuronal networks, within computing systems. Early mathematical and computational models attempting to replicate the neuronal functions of living brains existed in universities and laboratories for a long time. They were translated into computer code, much of which led only to small demonstrations. Then these neural networks were hosted in ever more powerful computing environments. At the same time, chip processing power advanced so incredibly that any drone, any smartphone, any robotic vacuum cleaner now has enough computing power to host software with neural network simulations built in as a vital part.

The same goes for the vast computing networks driving Internet-wide applications. Whether for global logistics, complex systems used to forecast the weather, military applications, medical applications, predictive policing software, or, notoriously, social media, they all use A.I. in the form of applications which can learn, identify patterns, and produce predictive results.

The key, in layman’s terms, lies in the way these digital neural networks store information. They do not store it in a form which would allow us to retrieve it from a hard disk, look at it, and understand it. Much like biological brains, the network itself stores the information as an “inner state”. You feed information into the input side of such a network, the “inner state” computes, the output side produces a result, and you can even feed this result back to the input side with added information about whether the prediction was correct, or not precise enough. Subsequently, the system will learn.
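The loop described above can be sketched in a few lines of Python. This is a toy, hypothetical single-“neuron” example of my own, not how GPT-4 actually works, but it shows the principle: the entire “inner state” is just a couple of numbers, adjusted by feedback, and nothing in it can be read out like source code.

```python
import math
import random

# The whole "inner state" of this toy neuron is just these two numbers.
random.seed(0)
weight = random.uniform(-1.0, 1.0)
bias = random.uniform(-1.0, 1.0)

def predict(x):
    """Forward pass: input goes in, the inner state computes, a number comes out."""
    return 1.0 / (1.0 + math.exp(-(weight * x + bias)))  # sigmoid activation

# A toy task: answer 1 for inputs above 0.5, else 0.
examples = [(x / 10.0, 1.0 if x / 10.0 > 0.5 else 0.0) for x in range(11)]

# The feedback loop: predict, compare with the correct answer,
# nudge the inner state a little in the right direction, repeat.
for _ in range(5000):
    for x, target in examples:
        out = predict(x)
        error = out - target
        grad = out * (1.0 - out)          # derivative of the sigmoid
        weight -= 0.5 * error * grad * x  # adjust the inner state
        bias -= 0.5 * error * grad

print(round(predict(0.9)))  # prints 1: the system has learned the rule
print(round(predict(0.1)))  # prints 0
```

Try printing `weight` and `bias` after training: you will see two bare numbers, not an explanation. That, scaled up to billions of such numbers, is the traceability problem described earlier.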

These systems have amazing capabilities. The lenses of your smartphone use A.I. to compute the picture you are taking: the software detects “things” and optimizes the shot. Then go into the photo app and ask for pictures containing mountains, or clouds, or cats, or “Stefan Feller”. If there are pictures with such features, chances are high the app will present a considerable number of findings. The robotic vacuum will learn how to navigate your mess. The medical system will become better at diagnosing your disease, or ailment. Facebook will learn how to push ads, depending on things including your mood. In my line of professional work, the forensic software that compares bullet shell casings will come up with similar images if they are in its database. The examples are literally countless.

Yet all these systems operate with very limited language communication. You type in “cat” and you get a set of cat pictures. Alexa, Siri and others are not (currently) based on GPT-4 technology; their ability to understand you and respond in plain language is very limited. GPT-4 is different.

Within a rapid development cycle in which new versions are put out in days or weeks, language-based models are able to communicate in natural language and give you results in complex sentences, paragraphs, or even speech, which can include the generation of a computer program, a website, an article, or a casual conversation about love. For those who don’t know, the fact that they are talking to a machine is almost invisible. The complexity of the answers, based on vast data retrieved from the Internet, hides the sources, and any wrong or biased data set can add to the mistakes which GPT-4 may make in the act of communicating. There is a growing body of evidence that provocative language, cursing, and other oddities (including a language model used by Microsoft professing its love, or pretending to be a visually impaired human rather than a piece of computer software) stem from the quantity of profane or emotionalized language retrieved from billions of social media entries.

Amongst all these funny stories, and all the amazing benefits which the uncounted avenues of human ingenuity offer, the danger zone gets drowned out. Drowned out as before, when we realized the damage inflicted by social media only when it was too late, or when scientists warned about the dangers of nuclear fission and fusion, calling for self-restraint and for not harnessing this power militarily.

Here is a link to EUROPOL’s warning about the potential criminal implications.

The most recent news: an article in the German tech magazine “Heise” explains how to install a local GPT clone on “bread-and-butter” hardware, meaning one can use the power of this software without leaving traces in external server logs. So buy your own little server park, install such a system, embed it in some computer code, and let it hammer out fake and manipulative news on social media and blogging sites at a speed limited not by the number of people writing fake articles, but by the raw processing power of your computer setup.

Or, to quote Harari et al. again: “By 2028, the U.S. presidential race might no longer be run by humans.”

So, on the one hand we have a body of experts who say plainly that we may be in a situation where we do not know enough about what we are unleashing onto society. In Harari et al.’s words: “We have summoned an alien intelligence. We don’t know much about it, except that it is extremely powerful and offers us bedazzling gifts but could also hack the foundations of our civilization. We call upon world leaders to respond to this moment at the level of challenge it presents.”

On the other hand, the chance of a responsible, proactive discussion may be slim, taking into account how we have collectively stumbled into every new world offered by technology. Remember the warnings about the potentially devastating consequences of nuclear fission and fusion? But I agree: we need to have this discussion.

I don’t find better words than Harari et al.:

“A.I. indeed has the potential to help us defeat cancer, discover lifesaving drugs and invent solutions for our climate and energy crises. There are innumerable other benefits we cannot begin to imagine. But it doesn’t matter how high the skyscraper of benefits A.I. assembles if the foundation collapses.

The time to reckon with A.I. is before our politics, our economy and our daily life become dependent on it. Democracy is a conversation, conversation relies on language, and when language itself is hacked, the conversation breaks down, and democracy becomes untenable. If we wait for the chaos to ensue, it will be too late to remedy it.”
