AI Knows When You'll Die

Implications of Ultra-Accurate Health AI

Recently, researchers showed that AI can predict a given person's risk of premature death with fair accuracy. They gathered health data from half a million people and used it to train the model, which was then used to estimate each individual's probability of dying prematurely. The AI did its job well: "the deep-learning algorithm delivered the most accurate predictions, correctly identifying 76 percent of subjects who died during the study period." And this isn't a one-off: other AIs developed in recent years have predicted Alzheimer's from brain scans with 84 percent accuracy, and still others have done the same for autism in infants.
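To make the pipeline concrete, here is a minimal sketch in Python. This is my own illustration, not the study's method: it uses scikit-learn's logistic regression on entirely synthetic data rather than the actual deep-learning model, and every feature name and coefficient is invented. The "correctly identifying 76 percent of subjects who died" figure corresponds to the recall metric computed at the end.

```python
# Hypothetical sketch: estimating premature-death risk from health records.
# Synthetic data and invented features; the real study used a deep-learning
# model trained on ~500,000 actual health records.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)
n = 5000
# Toy features: age, BMI, smoker flag, resting heart rate
X = np.column_stack([
    rng.uniform(30, 80, n),   # age
    rng.normal(27, 5, n),     # BMI
    rng.integers(0, 2, n),    # smoker (0/1)
    rng.normal(72, 10, n),    # resting heart rate
])
# Synthetic label: died during the study period, loosely tied to the features
logits = 0.08 * (X[:, 0] - 55) + 0.6 * X[:, 2] - 2.0
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Per-person probability of premature death: the kind of output
# the article's AI produces for each individual
risk = model.predict_proba(X_test)[:, 1]
# "Correctly identifying X% of subjects who died" is the recall on deaths
print(f"Recall on deaths: {recall_score(y_test, model.predict(X_test)):.2f}")
```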

These incredible scientific advancements do raise a few ethical questions. Personally, I think more information is always better than less. If I knew that I had a risk of premature death because of some disease, I would be better able to adjust my lifestyle to hedge against that risk. That alone is a huge benefit of having more information.

But what happens if this data gets into the wrong hands, or is not kept private? The implications are far-ranging and serious. For instance, what if we all knew when we, and everyone around us, would die? If, hypothetically, we were confronted with the fact that we would outlive all our friends, or that our family members were close to death, how would that change our behavior? I could see people with a high predicted likelihood of death in the near future being discriminated against in job markets and even in social relationships. After all, would you want to hire, or start dating, someone you knew might die soon?

Another pertinent example I thought of was the life and health insurance industries. If insurance companies had this kind of data at their disposal, they could price insurance plans more precisely according to the risk of death or illness predicted by the AI. This could lead to price gouging and price discrimination. On the surface, that looks terrible. But one could also see it as purely logical. The whole purpose of life insurance is to make a bet on whether someone will die in the near future. AI would simply allow the bettors to make those bets with more information at their disposal, and they would then alter their bets accordingly (in the form of different pricing), as any rational actor would. Seen this way, it is analogous to a better, more accurate credit score, except applied to the issuance of health and life insurance rather than credit cards.
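To see the pricing logic in numbers, here is a toy calculation of my own (nothing in the article gives these figures): the actuarially fair premium for a one-year term life policy is roughly the payout multiplied by the predicted probability of death, plus a loading for overhead and profit. Every number below is invented for illustration.

```python
# Hypothetical illustration: how an AI-predicted death probability could
# translate directly into a one-year term-life premium. All numbers invented.
def fair_annual_premium(p_death: float, payout: float, load: float = 0.15) -> float:
    """Actuarially fair one-year premium: expected payout plus a
    proportional loading for overhead and profit."""
    return payout * p_death * (1 + load)

payout = 500_000  # policy face value in dollars
for p in (0.002, 0.01, 0.05):  # AI-predicted one-year death probabilities
    print(f"p(death)={p:.3f} -> premium ${fair_annual_premium(p, payout):,.0f}")
# p(death)=0.002 -> premium $1,150
# p(death)=0.010 -> premium $5,750
# p(death)=0.050 -> premium $28,750
```

The point of the sketch is that nothing here is malicious in form; it is the same expected-value arithmetic insurers already do, only fed with a sharper probability.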

AI puts powerful information in everyone's hands, and what people can do with that information spans a very broad spectrum. But if everything predicted is true and accurate, is it really wrong to use true and accurate information for seemingly unjust purposes? Or is it just logical, simply drawing rational conclusions from true data?


