Researchers call for harnessing, regulation of AI
Artificial intelligence (AI) appears to be “widening inequality,” and its deployment should be subject to tough regulations and limits, especially for sensitive technologies such as facial recognition, a research report said Thursday.
The AI Now Institute, a New York University center studying the social implications of AI, said that as these technologies become widely deployed, the negative impacts are starting to emerge.
The 93-page report examined concerns being raised “from AI-enabled management of workers, to algorithmic determinations of benefits and social services, to surveillance and tracking of immigrants and underrepresented communities.”
“What becomes clear is that across diverse domains and contexts, AI is widening inequality, placing information and control in the hands of those who already have power and further disempowering those who don’t,” the researchers noted.
The researchers said AI systems are being deployed in areas such as healthcare, education, employment and criminal justice “without appropriate safeguards or accountability structures in place.”
The report said governments and businesses should halt use of facial recognition “in sensitive social and political contexts” until the risks are better understood, and that one subset — “affect recognition,” or the reading of emotions by computer technology — should be banned in light of doubts about whether it works.
Emotion recognition “should not be allowed to play a role in important decisions about human lives, such as who is interviewed or hired for a job, the price of insurance, patient pain assessments, or student performance in school,” the report stated.
It also called for tech workers “to have the right to know what they are building and to contest unethical or harmful uses of their work.”
The AI Now report said medical organizations using advanced technologies need to implement data protection policies and give people “affirmative approval” opportunities to withdraw from a study or treatment, and from research using their medical information.
More broadly, the researchers said the AI industry needs to make “structural changes” to ensure that algorithms are not reinforcing racism, prejudice or lack of diversity.
“The AI industry is strikingly homogeneous, due in large part to its treatment of women, people of color, gender minorities, and other underrepresented groups,” the report said.
Efforts to regulate AI systems are underway, but “are being outpaced by government adoption of AI systems to surveil and control,” according to the report.
“Despite growing public concern and regulatory action, the rollout of facial recognition and other risky AI technologies has barely slowed down,” the researchers said.
“So-called ‘smart city’ projects around the world are consolidating power over civic life in the hands of for-profit technology companies, putting them in charge of managing critical resources and information.”