The insurance practice denounced by Consumer Reports does not completely ignore this incentive; misconduct while driving continues to be sanctioned. However, this is undermined if accident-free driving is worth less to the insurer than the customer’s account balance. Those who suffer as a result are often poorer people who depend on their cars. Thus, those who are already disadvantaged by low income and low creditworthiness are burdened with even higher premiums.
Economically it may make sense for an insurance company to have solvent rather than law-abiding drivers as customers. That is not new. Now, however, algorithmic tools are available that can quickly and reliably assess customers’ creditworthiness and translate it into individual rates. Without question, the computer program in this example works: It fulfills its mission and acts on behalf of the car insurers. What the algorithmic system, due to its normative blindness, is unable to recognize on its own is that it works against the interests of a society that wants to enable individual mobility for all citizens and increase road safety. By using this software, insurance companies are placing their own economic interests above the benefits to society. It is an ethically questionable practice to which legislators in California, Hawaii and Massachusetts have since responded. These states prohibit car insurers from using credit forecasts to determine their premiums.
Lack of diversity: Algorithmic monopolies jeopardize participation
Kyle Behm simply cannot understand it. He has applied for a temporary student job at seven supermarkets in Macon, Georgia. He wants to arrange products on shelves, label them with prices, work in the warehouse – everything you do in a store to earn a few extra dollars while going to college. The activities are not excessively demanding, so he is half horrified, half incredulous when one rejection after another arrives in his e-mail inbox. Behm is not invited to a single interview.
His father cannot understand the rejections either. He looks at the applications that his son sent – there is nothing to complain about. Kyle Behm even has experience in retail and is a good student. His father, a lawyer, starts investigating and discovers the reason. All seven supermarkets use similar online personality tests. Kyle suffers from bipolar disorder, a mental illness, which the computer programs recognized when they evaluated the tests. All the supermarkets rejected his application as a result.
Behm’s father encourages him to take legal action against one of the companies. He wants to know whether it is permissible to categorically block a young man from entering the labor market simply because an algorithm is being used. Especially since Behm is in treatment for his illness and is on medication. Moreover, his doctors have no doubt that he could easily do the job he applied for. Before the case goes to trial, the company offers an out-of-court settlement. Behm obviously had a good chance of winning his case.
Larger companies in particular are increasingly relying on algorithms to presort candidates before asking some in for an interview. The method is effective and inexpensive. An algorithmic system has no problem doing it, even if several thousand applications are to be considered. However, it can become a problem for certain groups of people if all companies in an industry use a similar algorithm. Where in the past a single door might have closed, they now all close at once. The probability of such “monopolies” being formed is increasing because digital markets in particular adhere to the principle “The winner takes it all,” i.e. one company or product wins out and displaces all competitors. Eventually only one software application remains – to presort jobseekers or to grant loans.
That does not bother a lot of companies: Such software allows them to save time and increase the effectiveness of their recruiting procedures. And for some applicants the algorithmic preselection also works out since their professional competence and personal qualities count more than the reputation of the university they attended, or their name, background or whatever else might have previously prevented them from getting the job (see Chapter 12). While some people’s chances on the labor market increase and become fairer, other groups are threatened with total exclusion, such as those who suffer from a health condition, as Behm does. Such collateral damage cannot be accepted by a society that believes in solidarity. In areas impacting social participation, an oversight authority is therefore required that recognizes algorithmic monopolization at an early stage and ensures diverse systems are present (see Chapter 16).
As these six examples have shown: Algorithms can be deficient and produce unwanted results, data can reflect and even reinforce socially undesirable discrimination, people can program software to achieve the wrong objectives or they can allow dangerous monopolies to take shape. Thus, blind faith is inappropriate. Algorithms are merely tools for completing specific tasks, not truly intelligent decision makers. They can even draw wrong conclusions while fulfilling their mission perfectly. After all, they do not understand when their goals are inappropriate, when they are not subject to the necessary corrections or when they deprive entire groups of the opportunity to participate in society. They can do considerable harm with machine-like precision. When algorithms are mistaken, we cannot let them remain in their error – to return to the adage by Saint Jerome quoted in the last chapter. People are responsible for any wrongdoing of this sort. They determine which objectives algorithms pursue. They determine which criteria are used to reach those objectives. They determine whether and how corrections are carried out.
And just like Carol from the Little Britain sketch, they hide behind algorithms when they do not want to or cannot talk about the software’s appropriateness. For example, the head of Human Resources at Xerox Services reported that algorithms are helping her department reduce the high turnover at the company’s call center. The software used to parse applications predicts a potential employee’s length of stay at the company (see Chapter 12). When asked what criteria the program used, the HR director replied, “I don’t know why this works. I just know it works.” Such answers forestall any debate about which candidates are rejected and why, and whether there might be a systematic bias.
A second example is provided by Germany’s Federal Ministry of the Interior. It used facial recognition software in a pilot project at Berlin’s Südkreuz train station to search for criminals and terrorists. Its official statement on the project reads: “We achieved a 70-percent and above recognition rate of the test subjects – a very good figure.” This means that the software correctly recognized seven out of ten wanted persons. But that is not the entire story. The ministry did not initially disclose the number of innocent passers-by falsely identified by the system. Its complete interim report has been kept under lock and key.
Both users, Xerox Services and the Ministry of the Interior, are thus making it more difficult to have a public discussion on the use of algorithms, one that is sorely needed. Both the question of possible discrimination in selecting employees and the right balance between surveillance and security needs are sensitive issues in a free society. Citizens can legitimately demand that users of algorithms assume responsibility and not hide behind a machine. More facts and figures need to be on the table for a real debate to take place. After all, only those who understand how their systems work can detect and eliminate errors and biases.