Photo Credit: Image by Melk Hagelslag from Pixabay

Indian political scientist Ashwini K.P., the United Nations special rapporteur on racism, has issued a report titled “Contemporary forms of racism, racial discrimination, xenophobia, and related intolerance,” in which she claims that “Recent developments in generative artificial intelligence and the burgeoning application of artificial intelligence continue to raise serious human rights issues, including concerns about racial discrimination.”

The report provides an analysis of efforts to manage and regulate AI at the national, regional, and international levels. Ashwini K.P. believes that “Artificial intelligence technology should be grounded in international human rights law standards. The most comprehensive prohibition of racial discrimination can be found in the International Convention on the Elimination of All Forms of Racial Discrimination.”

AI systems in various sectors have demonstrated potential to infringe on rights and perpetuate biases:

Healthcare: Some AI-driven health risk assessment tools incorporate race-based adjustments, potentially leading to biased outcomes.
Education: Ashwini K.P.’s research reveals that AI-powered educational tools can embed racial prejudices. Academic and professional success prediction algorithms often underrate racial minorities due to biased design and data selection, thus reinforcing patterns of exclusion and discrimination.

“Generative artificial intelligence is changing the world and has the potential to drive increasingly seismic societal shifts in the future,” Ashwini K.P. said. “I am deeply concerned about the rapid spread of the application of artificial intelligence across various fields. This is not because artificial intelligence is without potential benefits. In fact, it presents possible opportunities for innovation and inclusion.”

She believes that predictive policing, much like racial profiling, illustrates how technological progress can perpetuate racial bias. These systems use location and personal data to forecast potential criminal activity and likely offenders. However, such tools risk reinforcing existing prejudices by basing predictions on historically biased policing practices and data.

“Predictive policing can exacerbate the historical over-policing of communities along racial and ethnic lines,” Ashwini K.P. said. “Because law enforcement officials have historically focused their attention on such neighborhoods, members of communities in those neighborhoods are overrepresented in police records. This, in turn, has an impact on where algorithms predict that future crime will occur, leading to increased police deployment in the areas in question.”

Her research reveals that predictive policing algorithms using location data analyze connections between places, past events, and historical crime statistics. These tools forecast potential crime hotspots, allowing police to strategically plan patrol routes and resource allocation.
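The feedback loop she describes can be illustrated with a minimal sketch. All data here is made up for illustration; the report does not publish any algorithm, and real predictive-policing systems are far more complex. The point is only the mechanism: neighborhoods with more recorded arrests attract more patrols, which generate more records, amplifying the initial disparity.

```python
def run_feedback_loop(arrest_counts, rounds):
    """Hypothetical toy model of the feedback loop: each round, patrols go
    to the neighborhood with the most recorded arrests, and patrolling
    there adds one new arrest record."""
    counts = list(arrest_counts)
    for _ in range(rounds):
        hotspot = counts.index(max(counts))  # the algorithm's predicted hotspot
        counts[hotspot] += 1                 # more patrols -> more records there
    return counts

# Two neighborhoods with identical underlying crime; neighborhood A starts
# with one extra historical record due to past over-policing.
print(run_feedback_loop([11, 10], 5))  # -> [16, 10]
```

After only five rounds, the one-record head start has grown to a six-record gap, even though the underlying behavior in both neighborhoods was assumed identical.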

She added, “The use of variables such as socioeconomic background, education level, and location can act as proxies for race and perpetuate historical biases.”
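Her point about proxy variables can also be sketched in a few lines. The districts, rates, and scoring function below are entirely hypothetical; the sketch only shows how a model that never receives race as an input can still produce racially disparate scores when a correlated variable such as location stands in for it.

```python
# Made-up historical arrest rates per district, shaped by past over-policing.
HISTORICAL_RATE = {"district_a": 0.8, "district_b": 0.2}

def risk_score(person):
    """The model sees only location, never race..."""
    return HISTORICAL_RATE[person["district"]]

# ...but if one racial group predominantly lives in district_a, that group
# is systematically scored as higher risk anyway.
resident_a = {"district": "district_a"}
resident_b = {"district": "district_b"}
print(risk_score(resident_a), risk_score(resident_b))  # -> 0.8 0.2
```

Removing race from the feature list does not remove the bias; it merely launders it through whichever variables correlate with race in the historical data.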


David writes news at JewishPress.com.