Wednesday, October 18, 2017

Google Artificial Intelligence chief claims biased algorithms are a big danger

Social concerns are at the forefront of Artificial Intelligence conversations right now. John Giannandrea, AI chief at Google, is concerned that bias is being built into many of the machine-learning algorithms by which machines make decisions.

At a recent Google conference on the relationships between AI systems and humans, Giannandrea said: “The real safety question, if you want to call it that, is that if we give these systems biased data, they will be biased.”
This problem is likely to become more widespread as AI technology spreads to areas such as law and medicine, and as more people who lack a deep understanding of the underlying algorithms come to use the technology. Indeed, many who use the technology stress that it eliminates human bias, as is boasted in the following ad:
Two flavours of AI
In the early stages of AI development there were two main models: a logic, rules-based model and a competing biological model. Under the logic model, anyone who cared to look could determine exactly how a robot or other device made its decisions, but the biological model took a quite different approach:
Instead of a programmer writing the commands to solve a problem, the program generates its own algorithm based on example data and a desired output. The machine-learning techniques that would later evolve into today’s most powerful AI systems followed the latter path: the machine essentially programs itself.
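To make the two approaches concrete, here is a minimal Python sketch. The loan data, the field meanings, and the use of a scikit-learn classifier are illustrative assumptions, not anything from the article: the first function is a rule a programmer wrote by hand, while the second derives its own rule from example data and a desired output.

from sklearn.tree import DecisionTreeClassifier

# Rules-based approach: a programmer writes the decision logic explicitly.
def approve_by_rule(income, debt):
    # Hand-written, fully inspectable rule.
    return income > 2 * debt

# Learning-based approach: the program derives its own rule from examples.
examples = [[50000, 10000], [30000, 25000], [80000, 5000], [20000, 18000]]  # [income, debt]
outcomes = [1, 0, 1, 0]  # desired output: 1 = repaid, 0 = defaulted

model = DecisionTreeClassifier().fit(examples, outcomes)
print(approve_by_rule(45000, 12000))    # decision from the hand-written rule
print(model.predict([[45000, 12000]]))  # decision from the rule the machine learned

The learned decision boundary is not something anyone wrote down, which is exactly the transparency concern raised below.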

Tommi Jaakkola, a professor at MIT who works on applications of machine learning, said: “It is a problem that is already relevant, and it’s going to be much more relevant in the future. Whether it’s an investment decision, a medical decision, or maybe a military decision, you don’t want to just rely on a ‘black box’ method.”
Giannandrea elaborated on the problem: “It’s important that we be transparent about the training data that we are using, and are looking for hidden biases in it, otherwise we are building biased systems. If someone is trying to sell you a black box system for medical decision support, and you don’t know how it works or what data was used to train it, then I wouldn’t trust it.”
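As a purely illustrative sketch of what "looking for hidden biases" in training data might involve (the records and field names below are hypothetical), one simple check is whether the rate of positive labels differs sharply between groups:

from collections import defaultdict

training_data = [
    {"group": "A", "label": 1}, {"group": "A", "label": 1}, {"group": "A", "label": 0},
    {"group": "B", "label": 1}, {"group": "B", "label": 0}, {"group": "B", "label": 0},
]

counts = defaultdict(lambda: [0, 0])  # group -> [positive labels, total examples]
for row in training_data:
    counts[row["group"]][0] += row["label"]
    counts[row["group"]][1] += 1

for group, (positives, total) in sorted(counts.items()):
    print(f"group {group}: positive-label rate {positives / total:.2f} over {total} examples")

A large gap between groups does not by itself prove the data is biased, but it is the kind of red flag Giannandrea suggests looking for before trusting a trained system.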
COMPAS
There have already been complaints of bias in AI systems. A program called COMPAS predicts a defendant's likelihood of re-offending and is even used by some judges in determining whether an inmate is to be granted parole. Although the workings of COMPAS are kept secret, an investigation by ProPublica argued that there is evidence the model may be biased against minorities.
The issue is complicated: Northpointe, the company that created COMPAS, has issued rebuttals to ProPublica's arguments, and the continuing back-and-forth debate is discussed in detail in this article. The Wisconsin Supreme Court has upheld the use of COMPAS in sentencing.
The COMPAS program assigns defendants scores from 1 to 10 indicating how likely they are to re-offend, based on more than 100 factors, none of which directly includes race. The scores have been found to be highly predictive of re-offending. However, ProPublica questions whether they are fair.
Northpointe argued that the assessments were fair in that the scores meant the same thing regardless of race. However, ProPublica pointed out that among defendants who ultimately did not re-offend, blacks were more than twice as likely as whites to be classified as high risk: 42 percent versus 22 percent. Hence, even though these black defendants did not go on to commit a crime, they were subjected to harsher treatment by the courts.
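The disagreement can be stated precisely. Northpointe's measure is roughly calibration (among defendants flagged high risk, re-offending rates should be similar across races), while ProPublica's measure is the false positive rate (among defendants who did not re-offend, how many were flagged high risk). The Python sketch below computes both from a handful of made-up records; the real COMPAS data and scoring thresholds are not reproduced here.

# Hypothetical records: (race, flagged_high_risk, actually_reoffended)
records = [
    ("black", True, True), ("black", True, False), ("black", False, False),
    ("white", True, True), ("white", False, False), ("white", False, True),
]

def rates(group):
    rows = [r for r in records if r[0] == group]
    flagged = [r for r in rows if r[1]]
    non_reoffenders = [r for r in rows if not r[2]]
    # Calibration-style measure: of those flagged high risk, how many re-offended?
    precision = sum(r[2] for r in flagged) / len(flagged)
    # False positive rate: of those who did NOT re-offend, how many were flagged anyway?
    false_positive_rate = sum(r[1] for r in non_reoffenders) / len(non_reoffenders)
    return precision, false_positive_rate

for group in ("black", "white"):
    precision, fpr = rates(group)
    print(f"{group}: precision {precision:.2f}, false-positive rate {fpr:.2f}")

When base rates of re-offending differ between groups and prediction is imperfect, it is mathematically impossible to equalize both measures at once, which is part of why the debate has no clean resolution.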
Complex intelligence
Another issue that is not being adequately faced is that the new deep-learning techniques are so complex and opaque that they often cannot be understood. However, researchers are attempting to develop methods that give engineers and end users at least some approximation of how these systems work.
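One common family of such approximation techniques probes the model from the outside: perturb each input slightly and watch how much the output moves. The sketch below is illustrative only; opaque_model is a stand-in for any black-box predictor, not a real deployed system.

import random

def opaque_model(features):
    # Stand-in for a trained model whose internals we cannot inspect.
    return 0.7 * features[0] - 0.2 * features[1] + 0.05 * features[2]

def sensitivity(model, example, trials=100, noise=1.0):
    # Rough per-feature sensitivity: average change in output when one feature is jittered.
    base = model(example)
    scores = []
    for i in range(len(example)):
        total = 0.0
        for _ in range(trials):
            perturbed = list(example)
            perturbed[i] += random.uniform(-noise, noise)
            total += abs(model(perturbed) - base)
        scores.append(total / trials)
    return scores

print(sensitivity(opaque_model, [1.0, 2.0, 3.0]))  # larger value = feature matters more locally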
Karrie Karahalios, a professor of computer science at the University of Illinois, showed that even commonplace algorithms, such as those Facebook uses to filter the posts users see in their news feeds, are not usually understood by users, which makes bias difficult to detect. The appended video discusses some of the controversy over Facebook filters, including evidence that Facebook censored conservative media. Google has been accused of censorship by left-wing sources over the ranking algorithm it uses for its search engine.

