The Inscrutable Algorithm, Under Fire

June 4, 2018


The problem with machine-generated algorithms is that they “somehow seem to absorb the bias around them,” says Jacob Weisberg in a review of two recent books that grapple with the issue. Algorithms, he writes, often behave “in ways that reflect patterns and prejudices deeply embedded in history and society.” Imposing any kind of regulation on this field, however, proves difficult because of two related problems. One is “impenetrability,” which refers both to the fact that algorithms are hard for most people to understand and that in many cases they are protected as trade secrets. The second is diffused responsibility, which is exacerbated when machines begin to learn on their own. Of the two books covered in the review, one, Safiya Umoja Noble’s Algorithms of Oppression, sets out to make the case that marginalized people suffer grievous harm at the hands of Google, and according to Weisberg it largely fails to do so. The second, Automating Inequality by Virginia Eubanks, “gets much closer to the heart of the problem,” he writes. “Its argument is that the use of automated decision-making in social service programs creates a ‘digital poorhouse’ that perpetuates the kinds of negative moral judgments that have always been attached to poverty in America.”

Read the full article at:

The New York Review of Books
