Here’s an interesting article: “Why People Don’t Trust Algorithms To Be Right”. (The actual title is “Why People Don’t Trust Machines To Be Right”, but algorithms don’t always run on machines, and the article also conflates algorithms with data.)

Anyway, it’s an interesting problem. A good algorithm can be an extremely efficient way of making decisions.

Gerd Gigerenzer, a German psychologist, talks more about this. For example, in his book Risk Savvy, he spends several pages on the 1/n stock market portfolio, which simply means allocating your money equally across each of n assets.

This performs surprisingly well as an investment strategy: it beats a range of alternative strategies on most measures of investment performance in this paper by DeMiguel, Garlappi, and Uppal. The “buy an index fund and hold it” strategy roughly boils down to this.
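The 1/n strategy is simple enough to sketch in a few lines. The asset counts and per-period returns below are made up for illustration; the only real content is the rule itself: each period, rebalance so every asset holds an equal share.

```python
# Sketch of the 1/n ("naive diversification") strategy:
# each period, rebalance so every asset holds an equal share.
# Asset returns here are hypothetical, purely for illustration.

def rebalance_equal(portfolio_value, n_assets):
    """Split the total portfolio value equally across n assets."""
    return [portfolio_value / n_assets] * n_assets

def simulate(period_returns):
    """period_returns: list of periods, each a list of per-asset returns."""
    value = 1.0
    for returns in period_returns:
        holdings = rebalance_equal(value, len(returns))
        value = sum(h * (1 + r) for h, r in zip(holdings, returns))
    return value

# Two periods, three hypothetical assets:
final_value = simulate([[0.10, -0.02, 0.04], [0.03, 0.05, -0.01]])
```

The point of the DeMiguel, Garlappi, and Uppal comparison is that this trivially simple rule holds its own against far more elaborate optimization-based strategies.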

But there are whole industries based on making continuous, non-algorithmic decisions about how to invest: decisions based on continually re-evaluated human judgment. This doesn’t really appear to work, but people do it anyway.


From the interview: “No algorithm’s perfect [and] that little error seems to be a real problem for an algorithm to overcome…”

Once people have seen an algorithm err they assume that it’s going to keep making mistakes in the future, which is probably true to some degree.

The bad assumption is that the human won’t keep making errors, or might even improve, which in a lot of contexts isn’t true: the human will keep making worse mistakes than the algorithm ever would.

When an algorithm has made an error, you know that it has made an error. There’s no illusion of perfection. So you know you can expect it to continue making errors, even small ones, whereas with more deliberate decision-making, you have the hope of reducing the number of errors.

In the stock market, if your index fund strategy has a bad year, you’re tempted to sell out of it altogether, under the illusion that you can stop that from happening next time.

The somewhat strange solution the article suggests to this problem is to let people meddle with the algorithm anyway, but only in ways that don’t affect the outcome dramatically.

As the interviewee describes it: “So, for example, the algorithm puts out a number and you can adjust it up or down by five. And we found that people like that much, much more than not being able to give any of their input.

And actually, when that method errs and people learn about it, they don’t necessarily lose confidence in it. So, as long as they had some part of the decision and they got to use their judgment, that might actually help them use the algorithm.”
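That constrained-override scheme is easy to sketch. The forecast values and the limit of five below are illustrative (the interview only gives “up or down by five” as an example); the core idea is just clamping the human adjustment to a narrow band around the algorithm’s output.

```python
# Sketch of the constrained-override idea from the interview:
# the algorithm produces a number, the human may nudge it,
# but the nudge is clamped to +/- limit. Values are illustrative.

def constrained_override(model_output, human_adjustment, limit=5):
    """Apply the human's adjustment, clamped to [-limit, +limit]."""
    clamped = max(-limit, min(limit, human_adjustment))
    return model_output + clamped

print(constrained_override(100, 3))    # within the limit -> 103
print(constrained_override(100, 12))   # clamped to +5 -> 105
print(constrained_override(100, -20))  # clamped to -5 -> 95
```

The design trade-off is explicit: the human can’t move the output far enough to do much damage, but gets enough input to keep feeling ownership of the decision.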

The interview doesn’t say what the results of these changes are. Presumably, the human second-guessing of the algorithm doesn’t actually improve it. But if it makes people more accepting of the algorithm, that should still improve overall decision-making.

To me, this solution points to another reason why people are biased against algorithms: fear. If a set of rules can replace your human judgment, then that decision isn’t yours anymore. People like having control over their environment and over their lives; an algorithm takes some of that away. And the success of an algorithm also means that our judgment doesn’t matter as much as we’d like.