Photo by Andy Kelly on Unsplash
Every day, our robots gain a little more autonomy. With every increment in artificial intelligence and self-governing technology, we reduce the workload for humans and hand over a little more of our decision-making to machines. Despite the horror stories about super-intelligent robots taking over the Earth and destroying humanity, the advantages of this progress are clear. Every time we free up human potential, it eventually gets focused on new activities that add to a nation’s productivity and quality of life.

But there may be some less obvious benefits to automated machines too. Researchers from the US Army Research Lab, Northeastern University, and the University of Southern California teamed up to investigate what self-driving cars mean for human cooperation. Society is a better place when we work together, but cooperation may require self-sacrifice. It’s not always easy to trade our personal comforts for societal gains, especially when we experience the personal costs immediately. We often tell ourselves that we’ll do better another day instead.

Do Robots Make Us Nicer?

To find out how people would make social and ethical decisions, the researchers put participants into a computer game simulating a social dilemma. They first arranged the participants into groups of four players, and sat them separately so that they could not communicate. Across 10 rounds of play, each participant then answered a simple question about a car they received within the game: would you like to turn the air conditioning on or off?

Each player earned a particular number of points depending on how everybody acted. Participants knew that their responses reflected an underlying moral decision: turning on the air conditioning gives them comfort, but increases harmful environmental emissions.

So when everybody cooperated and turned off the air conditioning, all players gained a happy 16 points in the game. But if one person defected and gave in to the temptation for air conditioning, while everybody else resisted, the sneaky defector would get 20 points while everybody else would get only 12 points. And if everybody decided to turn on their air conditioning, they each received a lowly 8 points thanks to their maximal contribution to climate change. The more points a player had at the end of the game, the more likely they were to win money.
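The point schedule above can be sketched in code. This is a hypothetical encoding (the function name and structure are my own, not the researchers'): only the three outcomes the article reports are covered, and the sketch checks that they form a genuine social dilemma, where defecting alone pays more than cooperating, but universal defection leaves everyone worse off than universal cooperation.

```python
def payoff(my_ac_on, num_others_ac_on):
    """Points one player earns in a round, for the three outcomes reported.

    Hypothetical helper: the article only gives the payoffs for full
    cooperation, a single defector, and full defection, so other cells
    are left as None rather than guessed.
    """
    total_on = num_others_ac_on + (1 if my_ac_on else 0)
    if total_on == 0:   # everybody turns the AC off: 16 points each
        return 16
    if total_on == 1:   # lone defector: 20 for them, 12 for everyone else
        return 20 if my_ac_on else 12
    if total_on == 4:   # everybody turns the AC on: 8 points each
        return 8
    return None         # payoffs for 2-3 defectors aren't reported

# The reported numbers form a social dilemma:
assert payoff(True, 0) > payoff(False, 0)   # 20 > 16: defecting alone pays
assert payoff(False, 0) > payoff(True, 3)   # 16 > 8: cooperation beats mutual defection
```

This structure mirrors a classic public-goods game: each individual always gains by switching the air conditioning on, yet the group as a whole loses when everyone does.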

This setup is analogous to the decisions we make every day of our lives. If we use up energy while everybody else decides to reduce their usage, we get all the benefits of reduced carbon emissions without giving up our own energy-fueled products. But at the same time, if everybody thought that way and maximized their fuel usage, we’d all be worse off.

To see how machines would affect these kinds of decisions, the researchers included a twist in their experiment: some participants were personally driving the car in the game, while other participants were using a self-driving car. So how did each group react?

When people were driving the car themselves, instead of programming a self-driving car before setting off, they were significantly more likely to say “yes” to air conditioning. For some reason, people were more selfish when driving, and less selfish when thinking “how should my self-driving car behave while it operates?” So what exactly was going on? Why were people more cooperative when deciding on the preprogrammed operations for their self-driving car?

Robots Play With Our Minds

To learn more about the mechanisms behind this effect, the researchers repeated their experiment but included additional cues to emphasize how each decision earned quick money or harmed the environment. People were more likely to cooperate when their attention was focused on long-term environmental consequences rather than short-term monetary consequences.

The data pointed to an important conclusion: people’s tendency to cooperate when programming self-driving cars was caused by their reduced focus on short-term rewards. When we program our autonomous machines, we naturally focus on long-term behavior in future conditions rather than short-term rewards associated with immediate behavior. That moderates our impulsivity, and instills us with a greater desire to cooperate with others.

Importantly, this psychological pattern does not apply exclusively to self-driving cars. When the researchers adjusted their experiment to present an abstract version of the social dilemma game featuring generic computer agents instead of self-driving cars, the results were the same. There is something special about acting through machines rather than directly through our own bodies. The added layer between us and the world acts as a buffer against our more primitive selfish impulses.

At this point in history, our decision-making is fusing with the intelligence of our machines. We are surrendering our input where machines do the better job. Self-driving cars are better at steering and braking than we are, so they should take over that decision-making entirely, and reduce our chances of killing ourselves or others. But we still get to decide whether we want the heating on, or whether the windows should be open.

It’s nice to know that even our own decisions may edge toward greater cooperation as machines increase their reach in our lives. All of us come well-stocked with cognitive biases that affect our behavior, and autonomous technology may ride on those biases in creating a better world. But we also need to keep an eye on how the biases push us into poor decisions, such as uncontrollable gambling, overspending on products that make us miserable, and making contradictory ethical choices.

The Tyranny of Short-Term Thinking

Peter Singer highlights some of the ethical hypocrisy in our decision-making. If we see a child drowning in a shallow pond, all of us agree that we would be monsters if we did not jump in to save them, even if the water and mud were to ruin our clothes. And yet, many of us avoid donating money to charities where even a small contribution can save multiple children’s lives.

This gap in our decision-making is driven by a powerful cognitive bias: we are more likely to act when we see the immediate payoff in front of us with our own eyes. When the consequences occur in another country or time, we perceive a weaker connection between our actions and their effects. And that weak connection can stop us from donating to a charity, even if we logically know that our donation would save lives.

But as the research above shows, weakening the connection between actions and immediate rewards can also help us cooperate in achieving long-term goals. When we decide to turn on the air conditioning as we drive a car, we feel the immediate cool air and comfort. In our mind, this immediate positive effect outweighs the distant negative effects of environmental harm, so we flick the switch. In contrast, if we preprogram the ongoing behavior of our self-driving cars, the short-term consequences are expressed later in time, so the playing field between personal benefits and social costs evens out a little. In other words, the same psychological distance that masks the benefits of our charitable donations can also increase our social cooperation by reducing our focus on immediate selfish gains.

These kinds of biases will always have advantages and disadvantages, depending on the context surrounding our decisions. We are less swayed by long-term or distant outcomes, and more swayed by short-term or nearby outcomes. If we soften our focus on counterproductive short-term rewards, we are more likely to choose long-term benefits. So when the immediate comforts of wasteful energy usage are less vivid, we are more likely to save energy.

Of course, reinforcing a hard focus on the immediate harms of impulsive actions can also urge us away from undesirable options. A leader who clearly sees the societal fallout of a bad economic policy or missile launch is less likely to act on instinct, even if they know the action would win votes.

When we want to do something uncomfortable today to achieve something better tomorrow, we need to strengthen the perceived links between our actions and their most desirable outcomes. That way, we make healthier decisions. We are more likely to forego a hamburger and hit the gym. We are more likely to skip our coffee today and donate the money to charity instead. And when our self-driving cars are finally parked outside our homes, perhaps we’ll be one step closer to social cooperation that is good for the planet.
