Medical artificial intelligence tools endangering women .. Why?

Mark
Written By Mark

Several studies have found that medical tools that depend on artificial intelligence models are biased against women and ethnic minorities in diagnosing their pathological conditions in a way that puts their lives in danger, according to a report published by the Financial Times.

Studies that took place in several prestigious universities in America and the United Kingdom indicate that the tools that depend on deep linguistic models tend to reduce the risk of the symptoms that women suffer from, and they also show less sympathy for ethnic minorities.

These studies coincide with the attempt of major technical companies such as “Microsoft” and “Oben AI” to offer artificial intelligence tools that reduce the pregnancy placed on doctors, hoping to accelerate the diagnosis and transition to the treatment stage.

Several doctors also started using artificial intelligence models such as “Jiminai” and “Chat GBT” and tools for recording medical notes to receive patient complaints and reach real complaint faster.

It is reported that Microsoft revealed last June a medical intelligence tool claiming to be 4 times stronger in diagnosing diseases than human doctors.

However, studies – conducted at the Massachusetts Institute of Technology at the same time show that medical artificial intelligence tools have provided less care for women, and recommended that some patients receive treatment at home instead of searching for medical intervention.

Another separate study conducted at the institute also found that artificial intelligence tools were less sympathetic to ethnic minorities with mental and mental illnesses.

Another study from the London College of Economics found that the Google Gemma model was reducing the seriousness of the physical and psychological problems that women are going through, and it should be noted that Ghima is an open source artificial intelligence model from Google and used by the large number of local authorities in the United Kingdom.

The report indicates that the reason for this racial bias is the mechanism of training deep language models and data used to train them, as companies depend on data available for free on the Internet, which often includes racist phrases and bias against specific categories.

advertisement

Consequently, this bias moves to deep language models despite developers’ attempts to reduce this effect by setting safe restrictions to stop the model.

For its part, “Oben AI” explained that the majority of studies have relied on old models of artificial intelligence tools, and added that the new models have become less likely to be this bias.

Likewise, on the part of “Google”, which confirmed that it takes strict measures against racial discrimination and bias, and develops strict restrictions that fully prevent it.

But Travis Zach, Assistant Professor at the University of California, San Francisco, and the main medical official at Open Evenes, emerging in the field of medical information with artificial intelligence.

Zach adds that the “Oben Ivedens” tool – on which more than 400,000 doctors in the United States depended on medical documents and health guidelines received from expert doctors, as well as medical references used in universities.