András Aszódi

Sapere aude!

Don't "AI" me. Follow the money

Four-fingered AI girl
2024-02-17

The limits of creativity

Humans are creative beings: we can "make something out of nothing". Our creativity flows into scientific discoveries, engineering and into works of art. On the foundations of science, engineering and art we build civilisations that enable us to live comfortable and enjoyable lives.

However, building civilisations is a very slow and unreliable process because creative people are still trained and work like mediaeval artisans. This is a very serious limitation. We have schools and universities, but they do not guarantee that talented individuals will develop their full potential. It is a matter of luck whether another Newton or Michelangelo emerges.

So let us get rid of this uncertainty by industrialising the creative process. Let us construct machines that can reliably produce new knowledge and new works of art. Machines that never get tired, never have a nervous breakdown. Machines that don't need two decades of training. Machines that are orders of magnitude faster and orders of magnitude cheaper than humans.

Out of this desire the "AI" industry is just being born.

The industrialisation of creativity

Current "AI" technologies are not based on modelling the human creative process, first and foremost because we do not understand how creativity works at all. Instead, statistical models are fitted to very large datasets using nonlinear regression (this is called "training"). The trained models are then switched to prediction mode to generate new data. The output is random, but it inevitably shares the statistical properties of the training data. The results are fascinating: thanks to the immense amount of training data, current "AI" tools can produce surprisingly "human-like" output. It works!
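The train-then-sample idea can be illustrated with a toy sketch. The model below is a character-bigram frequency table rather than a neural network, and the tiny corpus is invented for the example; but it shows the same principle: "training" records statistics of the data, "prediction mode" samples randomly from those statistics, and the output inevitably mimics the training material.

```python
import random
from collections import defaultdict

def train(text):
    """'Training': record, for each character, which characters follow it."""
    model = defaultdict(list)
    for a, b in zip(text, text[1:]):
        model[a].append(b)
    return model

def generate(model, start, length, seed=0):
    """'Prediction mode': randomly sample new text with the same bigram statistics."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return "".join(out)

# A made-up miniature 'training set'.
corpus = "the cat sat on the mat and the rat sat on the hat "
model = train(corpus)
sample = generate(model, "t", 40)
print(sample)

# Every bigram in the random sample already occurs in the corpus:
assert all(a + b in corpus for a, b in zip(sample, sample[1:]))
```

The output is random (change the seed and it changes), yet it can only ever recombine patterns seen during training, which is precisely the point made above.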

Let's sell it then.

Costs

The "AI" business model is the digital entrepreneur's wet dream par excellence, because the raw materials come free of charge from the Internet. People chat on the Internet, so the "AI" will be able to chat, too. People post pictures and videos on the Internet, so the "AI" will paint pictures and make movies, too. People share source code on the Internet, so the "AI" will write computer programs, too.

All of us who ever shared anything on the Internet have thus become the useful idiots of the "AI" industry. Our contributions are priceless: both in the sense of "incredibly valuable" and also in the literal sense of "having no price". The "AI" industry accesses this priceless resource free of charge, making us look very stupid indeed. We are all, to quote Mark Zuckerberg, "dumbfucks" in this regard.

Of course there are real costs associated with building and running the "AI" machinery. But these costs are negligible compared to the true value of the raw training data.

Marketing, or "what is in a name"?

The product needs a good name. If you truthfully call it "generative pre-trained transformer" or "very large-scale, probably heavily overfitted nonlinear regression", nobody will be interested. The solution is to call it "Artificial Intelligence".

The irresistible magic of those words makes us anthropomorphise a computer program. We unconsciously endow the chatbot with a personality, assume it thinks, and... we are hooked.

Now that the magical product name has been found, similar word magic can be applied to the name of the company that manufactures the chatbot. "OpenAI", for instance, would be a good choice.

Openness sounds good -- it evokes mental pictures of selfless sharing and of working towards the benefit of everyone. It also calls to mind "open source software", implying that a company called "OpenAI" is in fact not a commercial enterprise at all but rather a sort of high-tech charity.

In the case of OpenAI, nothing could be further from the truth. The only "open" aspect of OpenAI is that it builds its product from openly available data sources. The rest is kept secret.

The sales pitch

"AI" is the ultimate enabling technology. Welcome to Paradise!

Profit!

The real mission of the "AI" industry is of course not to enable the talentless and the ignorant to become creative professionals. It is the other way round: to replace the expensive and unreliable "creatives" by mechanising the content production process. The Hollywood screenwriters understood this when they went on strike in 2023.

But it's not only the "creatives". It turns out that we humans are not that creative after all. Much of the data we produce is just a rehash of previously existing data, often subject to strict formal criteria. Consequently, most of our data output has low entropy, i.e. it is "unsurprising" and thus easy to predict with statistical tools. No wonder "AI" algorithms could automate this process. For instance, an "AI" chatbot could pass a barrister's exam: this may look like an astonishing feat at first, but in truth it is not that surprising, given the rigid formal constraints that make legal texts predictable.
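"Low entropy" can be made concrete in a few lines of Python. The snippet below computes the Shannon entropy (in bits per character) of two invented strings: one made of repetitive, formulaic boilerplate, the other of varied characters. The formulaic text scores lower, which is exactly what makes it statistically predictable.

```python
import math
from collections import Counter

def shannon_entropy(text):
    """Shannon entropy, in bits per character, of the text's character distribution."""
    counts = Counter(text)
    n = len(text)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Invented examples: repetitive boilerplate vs. varied characters.
boilerplate = "the party of the first part agrees with the party of the second part " * 10
varied = "q7#Lz!pR2&wXv9@mK4^tY8*bN1%cJ6$hG3(dF5)sA0"

low = shannon_entropy(boilerplate)
high = shannon_entropy(varied)
assert low < high  # formulaic text is less "surprising" per character
```

Of course real language models work with far richer statistics than single-character counts, but the underlying intuition is the same: the more formulaic the text, the easier it is to predict.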

And this is where the real big profit will come from: automating away millions of low-entropy jobs in management and administration, journalism, the legal profession etc.

Revolutions eat their children

Who needs "AI"?

Creative professionals surely don't. Human creativity is still widely available, despite the declining quality of education. And creating something out of nothing is fun. It is a deeply rewarding experience to paint a picture, to discover a new galaxy or to come up with a robust and efficient implementation of an algorithm. And, somewhat paradoxically, the inevitable drudgery is part of the enjoyment because overcoming one's own inertia by working hard towards a goal can be enormously satisfying.

What about the rest of us? The "low-entropy workers" definitely don't need "AI" either. If their jobs are automated away, then they will be thrown out of their comfortable offices and will be forced to earn a living by doing something harder and financially less rewarding. What this "something else" might be is never explained by the peddlers of "AI solutions".

Furthermore, chatbots cannot and must not be used in automated decision making processes because they are unreliable by design. "Stochastic parrots" in management, administration and the judiciary system would be extremely dangerous. All of us would suffer the consequences if computational bullshit generators organised our society.

These considerations leave us with a tiny fraction of mankind who really and truly need "AI"; namely, the fledgling captains of the "AI" industry. They are looking forward to generating enormous profits. However, their business model is not sustainable: the "AI" revolution will eat its children. Here is why:

If we think these aspects through to their logical conclusion, it emerges that "AI" is not worth pursuing.

All technologies have benefits and drawbacks, but in some cases the negative aspects are far more serious than the positives. Consider chemical warfare agents: nobody needs Novichok (apart, perhaps, from the Russian security services). There's a good reason why chemical weapons are internationally banned.

Freedom and its enemies

The "AI" industry clearly targets creativity and thus it's an enemy of creative freedom. Thankfully the technology is so unreliable that most likely it will go out of fashion next year when people realise that it's just another overhyped techno-bullshit. However, we shall remain vigilant. New, perhaps more serious technological threats will surely emerge, and we must protect our political, scientific and cultural freedoms from those who peddle these wares. As Ovid wrote, "Halt its beginnings".