April 29, 2024

Valley Post

A new AI algorithm can predict someone’s time of death more accurately

A pioneering new ChatGPT-like AI system, trained on the life histories of more than a million people, is remarkably accurate at predicting someone’s age as well as their risk of premature death, a new study has shown.

Scientists at the Technical University of Denmark (DTU) said the AI model was trained using the personal data of the Danish population, and was shown to predict people’s chances of death more accurately than any other existing system.

In the study, the researchers analyzed health and labor market data for 6 million Danes collected from 2008 to 2020, including information on individuals’ education, visits to doctors and hospitals, resulting diagnoses, income and occupation.

life2vec: A new language model similar to ChatGPT

The scientists converted the data set into words to train a large language model called “life2vec”, similar to the technology behind artificial intelligence applications such as ChatGPT. Once an AI model learns patterns in data, it can outperform other advanced systems and predict outcomes such as personality and time of death with high accuracy, according to the study published in the journal Nature Computational Science on Tuesday.

The researchers took data from a group of people between the ages of 35 and 65 – half of whom died between 2016 and 2020 – and asked the artificial intelligence system to predict who would live and who would die. They found that its predictions were 11% more accurate than those of any other existing AI model or method used by life insurance companies to price policies.
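
In effect, this is a balanced binary classification task: half of the sampled people died in the follow-up window, so guessing at random would be right about half the time. The minimal sketch below illustrates that kind of evaluation; the `model_predicts_death` stand-in, the token names and the toy cohort are assumptions for illustration, not the study’s code or data.

```python
# Sketch of a balanced-cohort mortality evaluation of the kind described
# above. `model_predicts_death` is a hypothetical stand-in for the trained
# model, and the tokens are made up for illustration.

def model_predicts_death(life_sequence: list[str]) -> bool:
    # Placeholder: a real model would score the tokenized life sequence.
    return "DIAG_HIGH_RISK" in life_sequence  # assumed illustrative token

def accuracy(cohort: list[tuple[list[str], bool]]) -> float:
    """cohort: (life_sequence, died_in_follow_up) pairs, balanced 50/50."""
    correct = sum(model_predicts_death(seq) == died for seq, died in cohort)
    return correct / len(cohort)

# Toy usage: this artificial cohort is perfectly separable, so accuracy is 1.0;
# on the real task the study reports an ~11% edge over the best baseline.
cohort = [(["DIAG_HIGH_RISK"], True), (["INCOME_50"], False)] * 50
print(f"accuracy: {accuracy(cohort):.2f}")
```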

“What’s amazing is to think of a human life as a long series of events, similar to the way a sentence in a language is made up of a series of words,” said study author Sune Lehmann from DTU.

What groups of people are associated with a higher risk of death?

“This is typically the type of work transformer models are used for in artificial intelligence, but in our experiments, we use them to analyze what we call life sequences, that is, events that have occurred in a human’s life,” Lehmann said.

Using the model, the researchers sought answers to general questions such as the odds that a person will die within four years. They found that the model’s responses are consistent with existing psychological findings: when all other factors are taken into account, people in leadership positions or with a high income are more likely to survive, while being male, skilled, or having a diagnosis is associated with a higher risk of death.

“We used the model to answer the basic question: How well can we predict events in your future based on conditions and events in your past?” said Dr. Lehmann. He added: “Scientifically, what fascinates us is not the prediction itself, but rather the aspects of the data that allow the model to provide such accurate answers.”

The model can also predict personality test scores for a segment of the population more accurately than current AI systems. “Our framework allows researchers to identify new potential mechanisms that influence life outcomes and the associated potential for personalized interventions,” the researchers wrote in the study.

Can one ask the question: “Am I going to die in four years?”

By treating each part of your life as if it were words in a sentence, life2vec predicts where the story will end based on what has been written so far.

Just as ChatGPT users ask it to write a song, poem, or article, scientists can ask life2vec simple questions about a specific person, such as “Will this person die within four years?” Based on demographic data, it correctly predicted who would die by 2020 in more than three-quarters of cases.
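
To make that kind of query concrete, the sketch below frames the yes/no question as thresholding a probability estimate for one person’s tokenized life sequence; `mortality_probability`, the 0.5 threshold and the toy scoring rule are assumptions, not life2vec’s published interface.

```python
# Illustrative only: answering "will this person die within four years?" by
# thresholding a probability over a tokenized life sequence. The scoring rule
# is a toy heuristic standing in for the trained model.

def mortality_probability(life_sequence: list[str]) -> float:
    """Stand-in for a trained model's estimate of P(death within four years)."""
    risky = {"S52", "O72"}  # diagnosis codes mentioned later in the article
    return min(1.0, 0.05 + 0.15 * sum(tok in risky for tok in life_sequence))

person = ["IND4726", "S52"]  # one person's life "sentence"
answer = "dies within four years" if mortality_probability(person) > 0.5 else "survives"
print(f"prediction: {answer}")
```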

In the same way that ChatGPT and other large language models are trained on a body of existing written works, life2vec is trained on data from people’s lives, written as a series of data-rich sentences.

These include sentences such as “In September 2012, Francisco received twenty thousand Danish kroner as a guard at a castle in Elsinore” or “During her third year in high school, Hermione took five electives.”

Lehmann and his team assigned a different token to each piece of information, and all of those tokens were mapped in relation to one another.

The categories in people’s life stories cover the full range of human experience: a broken forearm is represented by S52, work in a tobacco shop is coded as IND4726, income is represented by 100 different numbered tokens, and ‘bleeding during childbirth’ is represented by O72.
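
As a concrete illustration of this encoding, the sketch below maps a few events to tokens and bins income into numbered brackets. Only the codes S52, IND4726 and O72 come from the article; the dictionary, the binning scheme and the function names are illustrative assumptions, not the actual life2vec vocabulary.

```python
# Minimal sketch of encoding life events as tokens, in the spirit of the
# examples above. Only S52, IND4726 and O72 are taken from the article; the
# rest (vocabulary layout, bin count, helper names) is assumed.

EVENT_VOCAB = {
    "forearm fracture": "S52",            # diagnosis code
    "works in tobacco shop": "IND4726",   # industry/occupation code
    "bleeding during childbirth": "O72",  # diagnosis code
}

def income_token(amount_dkk: int, n_bins: int = 100, max_dkk: int = 1_000_000) -> str:
    """Discretize income into one of ~100 numbered tokens, as described above."""
    bin_idx = min(n_bins - 1, amount_dkk * n_bins // max_dkk)
    return f"INCOME_{bin_idx}"

def encode_life(events: list[str], income_dkk: int) -> list[str]:
    """Turn one person's records into a 'sentence' of tokens."""
    return [EVENT_VOCAB[e] for e in events if e in EVENT_VOCAB] + [income_token(income_dkk)]

# e.g. a year with twenty thousand kroner of income and a forearm fracture:
print(encode_life(["works in tobacco shop", "forearm fracture"], income_dkk=20_000))
# -> ['IND4726', 'S52', 'INCOME_2']
```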

Many of these relationships are intuitive, such as career and income—some jobs bring more money.

But what life2vec does is map the wide range of factors that make up a person’s life, allowing someone to ask it for a prediction based on millions of other people and many, many factors.

It even makes predictions about people’s personalities

It can also make predictions about people’s personalities. To do this, Lehmann and his team trained the model to predict people’s answers to questions on a personality test. The test asks participants to rate 10 items based on how much they agree with them, such as “The first thing I always do in a new place is make friends” or “I rarely express my opinions in group meetings.”
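
A rough sketch of how such ratings could be read out of the model is below: a linear head maps a person’s learned sequence embedding to scores for the ten items. The embedding size, the random stand-in weights and all names are assumptions for illustration, not the study’s architecture.

```python
# Hedged sketch: predicting ratings for the 10 personality items from a
# person's sequence embedding via a simple linear head. The weights here are
# random stand-ins; in the study they would be learned from data.
import numpy as np

rng = np.random.default_rng(0)
EMBED_DIM, N_ITEMS = 16, 10  # ten items, e.g. "I rarely express my opinions..."

W = rng.normal(scale=0.5, size=(N_ITEMS, EMBED_DIM))  # stand-in for learned weights
b = np.zeros(N_ITEMS)

def predict_ratings(person_embedding: np.ndarray) -> np.ndarray:
    """Map an embedding of a life sequence to 10 item scores on a 1-5 scale."""
    raw = W @ person_embedding + b
    return np.clip(np.round(3 + raw), 1, 5)  # center on the scale midpoint, clamp

person_embedding = rng.normal(size=EMBED_DIM)  # would come from the sequence model
print(predict_ratings(person_embedding))
```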

Why it should not be used by insurance companies

However, scientists warn that life insurance companies should not use this model due to ethical concerns.

“Obviously our model should not be used by an insurance company, because the whole idea of insurance is that by sharing the lack of knowledge about who is going to be the unfortunate person who gets into an accident, or dies, or loses their backpack, we can somehow share this burden,” Lehmann told New Scientist.

There are also ethical issues

The researchers also warn that there are other ethical issues surrounding the use of life2vec, such as protecting sensitive data, protecting privacy, and the role of bias in data. “We emphasize that our work is an exploration of what is possible, but should only be used in real-world applications under regulations that protect the rights of individuals,” they said.
