April 26, 2024

Valley Post


Artificial intelligence machines do not suffer from “hallucinations”. They produce algorithmic garbage


Amidst the many debates surrounding the rapid spread of so-called artificial intelligence, there is a relatively obscure skirmish centering around the choice of the word “hallucination”.

With this sentence, Naomi Klein begins an interesting op-ed in the Guardian. She continues:

This is a term coined by AI engineers and advocates to describe responses given by chatbots that are completely fabricated or flat-out wrong. For example, when you ask a bot for a definition of something that does not exist, it may give you, rather convincingly, a definition complete with fabricated footnotes.

Photo: Julian Trommer/Unsplash

With ready materials

“No one in this field has yet solved the problem of hallucinations,” Sundar Pichai, CEO of Google and Alphabet, said recently in an interview.

Fair enough. But why call the errors “hallucinations” at all? Why not algorithmic garbage? Or glitches? Well, hallucination refers to the human brain’s mysterious ability to perceive phenomena that do not exist, at least not in conventional physical terms. By appropriating a word commonly used in psychology, psychedelics, and various forms of mysticism, proponents of artificial intelligence, while acknowledging the fallibility of their machines, are at the same time feeding the field’s most cherished myth: that by building these large language models and training them on everything we humans have written, said, and represented visually, they are in the process of giving birth to a living intelligence on the cusp of triggering an evolutionary leap for our species.

Warped hallucinations are indeed at work in the world of artificial intelligence, however. But it is not the bots that are having them; it is the tech CEOs who unleashed them, along with a phalanx of their followers, who find themselves in the grip of wild hallucinations, both individually and collectively. Here I define hallucination not in the mystical or psychedelic sense, mind-altered states that can indeed help access profound, previously unperceived truths. No. These people are simply tripping: they see, or at least claim to see, evidence that is not there at all, and even conjure entire worlds that will put their products to use for our universal elevation and education.


The solution to everything

Generative artificial intelligence will end poverty, they tell us. It will cure all disease. It will solve climate change. It will make our jobs more meaningful and exciting. It will unleash lives of leisure and contemplation, and help us reclaim the humanity we have lost to late-capitalist mechanization. It will end loneliness. It will make our governments rational and responsive. These, I fear, are the real AI hallucinations, and we have all been hearing them on a loop ever since ChatGPT launched late last year.

So writes Naomi Klein, and she continues:

There is a world in which generative AI, as a powerful tool for predictive research and a performer of tedious tasks, could indeed be used to benefit humanity, other species, and our common home. But for that to happen, these technologies would have to be developed within an economic and social system very different from ours, one that aims to meet human needs and protect the planetary systems that support all life.

But as we know, our current system is nothing like that. Instead, it is built to maximize the extraction of wealth and profit, from both humans and the natural world, a reality that has brought us to what we might think of as the “necro-tech” stage of capitalism. In this reality of highly concentrated power and wealth, AI, far from living up to all these utopian fantasies, is far more likely to become a terrifying tool of further plunder and dispossession.

Because what we are witnessing is the wealthiest companies in history (Microsoft, Apple, Google, Meta, Amazon…) unilaterally seizing all existing human knowledge in digital form and walling it off inside proprietary products, many of which will take direct aim at the very people whose work was used to “train” the machines without their permission or consent.

This should not be legal.

Photo: Gerard Siderios/Unsplash

Stealing our work

The artist and illustrator Molly Crabapple is helping to lead a movement of artists who denounce this theft. “AI art generators are trained on enormous datasets containing millions of copyrighted images, collected without their creators’ knowledge, let alone compensation or consent. This is effectively the biggest art heist in history, perpetrated by seemingly respectable corporate entities backed by Silicon Valley venture capital. It is daylight robbery,” says a new open letter she co-wrote.

The trick, of course, is that Silicon Valley routinely calls theft “disruption” and often gets away with it. We know the move: charge ahead into lawless territory; claim that the old rules do not apply to your new technology; shout that regulation will only help China; all the while establishing hard facts on the ground.

By the time we all get over the novelty of these new toys and begin to take stock of the social, political, and economic wreckage, the technology is already so pervasive that courts and policymakers throw up their hands.

We saw this with Google’s scanning of books and art. With Elon Musk’s colonization of space. With Uber’s assault on the taxi industry. With Airbnb’s attack on the rental market. With Facebook’s cavalier treatment of our data. Don’t ask for permission, the disruptors like to say, ask for forgiveness.


Are we finally relieved?

In her book The Age of Surveillance Capitalism, Shoshana Zuboff shows how Google’s Street View circumvented privacy norms by sending out camera-equipped cars to photograph public streets and the exteriors of our homes. By the time the privacy lawsuits began, Street View was already so ubiquitous on our devices (and so cool, and so convenient) that few courts outside Germany were willing to step in.

By now, most of us have heard of the survey that asked AI researchers and developers to estimate the probability that their advanced systems would cause “human extinction or similarly permanent and severe disempowerment of the human species”. Frighteningly, the median answer was a 10% chance.

How can one justify going into the business of building and promoting tools that carry such existential risks? The reason usually given is that these systems also hold enormous potential benefits; only those benefits are, for the most part, hallucinations themselves.

*With information from Naomi Klein’s opinion piece published on theguardian.com

* Naomi Klein is a columnist and writer for Guardian US. She is the best-selling author of No Logo and The Shock Doctrine, Professor of Climate Justice and Co-Director of the Centre for Climate Justice at the University of British Columbia.
