How to Control Your Emotions and Achieve More

Photo by Michael Afonso on Unsplash

Sometimes, our emotions get the better of us. We have all faced the challenges of performing under pressure, whether we are speaking to an audience, working to a deadline, or undergoing a test to prove our talents. Students are all too familiar with the importance of tests that mark the end of an academic year. The results of those tests can be life-defining: they often decide whether a student will be able to pursue their cherished ambitions. Failure could force them down a less desirable career path.

The stress of high stakes

My own experiences as a university student certainly ran along these stressful lines. Knowing that I wanted to work in neuroscience at a major academic institution, I had no choice but to get decent grades. I could not have a bad day because there were no second chances. The tests at the end of the year were the only opportunity to prove myself.

The emotional challenges of that journey were as difficult as the academic challenges. However, we are generally not taught how to handle the emotional side. We know that we need to study and revise, so that we have the required knowledge to answer as many test questions as possible. But nobody tells us how to remain calm, composed, and controlled while being tested.

Although I was lucky to fall into the right mindset while undergoing the tests, I failed to control my anxieties and protect my mental health while studying for them. Over several days of multiple examination sessions, I averaged less than an hour of sleep a night, and felt the situational stress weighing on my shoulders every moment of the day.

My daily rhythm involved completing an exam, getting home as quickly as possible, rushing through pre-prepared meals, frantically studying until the small hours, going to bed, failing to sleep, then leaving to catch an early train back to university for the next exam. During the hours of the actual tests, the adrenaline in my body was enough to keep me performing well, but on the commute home afterward, I would promptly crash back into a dizzying, zombie-like state.

Does it have to be this way? Why should we receive years of training, supervision, and preparation in acquiring knowledge for a test, but receive no tutoring whatsoever in how to mentally prepare for the high-stakes conditions?

It is likely that some productivity failures are attributable to emotional breakdowns rather than a lack of training, resources, or skills. In fact, perhaps some of the academic underperformance of students from low-income families could be explained by the additional stress and pressure to perform well. We need answers to these questions, and we need to know how to address any resulting disadvantages too. One team of researchers targeted exactly these questions at the end of 2018.

The science of emotional regulation

Christopher Rozek at Stanford University, together with his colleagues from other universities around the US, decided to run an experiment with ninth-grade students in a Midwestern science classroom. They recruited over 1000 students, 285 of whom were classified as coming from a lower-income background because they had free lunch status. The researchers wanted to investigate whether methods of emotional regulation would help to improve test outcomes.

At the end of their first and second semesters, the students were randomly assigned to participate in one of four interventions immediately before their final tests:

  1. Control group: students read summary advice that told them to ignore their feelings of anxiety.

  2. Expressive emotional regulation: students considered their emotions and thinking patterns before the test and described them in writing.

  3. Reappraisal emotional regulation: students answered questions about a scientific text explaining that anxiety before a stressful event was helpful rather than harmful for performance.

  4. Combined emotional regulation: students completed both the expressive and reappraisal exercises for emotional regulation.

To understand whether emotions are a relevant variable in student performance, and whether regulation techniques boost performance, let’s look at the differences in test performance between the groups.

First, students who used the emotional regulation interventions before their tests achieved higher scores than students in the control group. So the interventions worked. However, the data confirmed that students from lower-income backgrounds performed significantly worse than students from higher-income backgrounds overall. Therefore, we should check whether the intervention effects were different for each socioeconomic group.

After analyzing the lower-income and higher-income students separately, the researchers found that the emotional regulation advantage was only true for the lower-income students. The interventions did not meaningfully help the higher-income students. Emotional regulation reduced the performance gap between rich and poor students by 29% compared to the gap found in the control group.

The numbers of passing versus failing grades are perhaps more important than raw scores. Fortunately, by improving lower-income students’ performance, the interventions also significantly reduced the gap between income groups in the proportion of passing grades achieved. The inequality in passing rates dropped by 58% compared to the gap in the control group.

But did the students feel any different following their interventions? Consistent with the performance numbers, higher-income students reported the same feelings about test anxiety whether they were in the intervention or the control groups. And overall, their views on anxiety were more optimistic than the perceptions of the lower-income students.

In contrast, the lower-income students developed a healthier view about the effects of anxiety following emotional regulation. Compared to lower-income students in the control group, the emotion regulators were more likely to agree with statements such as: “A test will go well if I am a little nervous before taking it”. The gap in anxiety optimism between rich and poor students fell 81% following the emotion interventions.

In all outcomes, the researchers found no difference between the various types of emotional regulation strategy. Whether expressing emotions, reappraising emotions, or both, the most disadvantaged students developed a more optimistic outlook about their anxieties, and they improved their academic performance compared to the control group.

So what does it all mean?

A strong academic record opens up many early career options for students. Whether trying to get into a top university, applying to leadership entry programs in industry, or trying to secure a place in a STEM field, grades matter.

We want to do everything we can to help disadvantaged students catch up with peers who secured a head start. Lower-income students have access to fewer resources and potentially fewer escape routes toward alternative successes if they fail academically. That means they are likely to experience particularly high levels of stress and anxiety when trying to prove themselves and achieve their full potential.

In our world of scarce resources, it is not easy to create a perfectly level playing field while living in a free society. But some helpful solutions are easy to apply. The evidence above provides a great example of how simple emotional regulation strategies could help those with the fewest resources and often the heaviest pressure to perform well. When we have a healthier emotional mindset, we approach stressful challenges with greater confidence and achieve greater success.

Of course, these insights will apply beyond education. As highlighted by the varying gains between the lower-income and higher-income students, our sensitivity to the advantages of emotional regulation may depend on our level of stress. We are likely to respond particularly well to emotional regulation when we face the challenges that make us most anxious and insecure.

How to beat fear in our own lives

The next time we feel the pressure mounting, there are simple steps we can take to calm our nerves. It all boils down to the two core principles of emotional regulation described above: expression and reappraisal.

For example, when we are feeling shaky before stepping up to speak to an audience, we can spend a few minutes writing out our emotional experiences before hitting the stage. This concrete expression of our emotions allows us to realistically process each feeling, rather than exaggerating or catastrophizing the experience in the recesses of our mind. By throwing our abstract thoughts down on paper, we see them as less of a threat, and we prevent them from spiraling out of control within our own head. If we clearly see what our problem is, we are better at finding a way to handle it.

Similarly, if an upcoming test or appraisal at work is making us nervous, we can remind ourselves of the essential value of our anxieties: without anxiety, we would not care enough to perform at our best. If we are not giving a task the emotional weight it deserves, we are more likely to feel distracted by irrelevant thoughts and mental intrusions that harm our performance. A little stress focuses our mind on exactly what needs to be done. Tunnel vision is often a good thing.

Those of us with the fewest resources and the greatest anxieties need all the help we can get. With some quick and painless efforts aimed at properly expressing our feelings and focusing on the advantages rather than the costs of stress, we can give ourselves an extra psychological push toward achieving everything we are capable of.

How Physical Experiences Improve Our Ability to Learn

Photo by Gustavo Torres on Unsplash

The internet has opened up incredible opportunities for learning. It is easier than ever to share information, and we can gain valuable knowledge from people we will never meet. But in improving access to knowledge, the internet cuts away several important features of traditional classroom learning.

When we read an article or join a course online, there is no teacher standing beside us. Even when we see them on video, we frequently miss the subtle shifts in body language and tone of voice that aid communication. The teacher is also unlikely to see the confusion on a student’s face when they misunderstand a particular point, or the subtle reactions of excitement and insight when they hear something interesting. This reduced two-way communication means that a teacher, or any other leader, is less able to adapt their approach to fit the evolving dynamics of their audience.

But there’s more. Think back to your science classes and some of the physical demonstrations you participated in. A good teacher would set up experiments that you could see, hear, and feel, in order to vividly demonstrate particular scientific principles. In a chemistry class, you might have seen and smelled the products of exciting chemical reactions. In a biology class, you might have felt the structure of plants and animal organs with your fingers. In a physics class, you might have held a bicycle wheel gyroscope in your hands to directly experience the properties of angular momentum.

Could these physical experiences actually improve learning? If they do, we may need to think about how to optimally structure our teaching and training for the modern digital world. The good news for eager learners is that researchers at the University of Chicago and DePaul University have tested exactly this question.

The science of physical learning

The researchers took 22 pairs of college students with no college-level physics experience, and asked them to read some text on the principles of angular momentum. They then tested each participant’s baseline understanding of those principles with a quiz about the force exerted by spinning objects in several videos.

After these tests, one participant in each pair was assigned as an actor, and the other was assigned as an observer. The actors were asked to hold and tilt different sets of spinning wheels by their axles while trying to keep them as steady as possible. The observers were asked to simply watch what happened. A laser pointer firing directly out of the axle toward a vertical line on a wall showed both participants how the spinning wheel behaved as it moved.

The important point to keep in mind is that both actors and observers could see how the wheels created different forces depending on spin direction, speed, and size. But only the actors could physically feel the effects of those forces.

So did this experiential difference in learning have any meaningful impact? After 10 minutes of training with the spinning wheels, the pairs repeated the video quiz test that they completed before training. The researchers analyzed their scores, and found that the actors from each pair significantly improved their test performance, but the observers made no progress.

The physical forces that the actors personally experienced actually improved their knowledge of what was going on, even though in principle, all participants had access to the same factual information. The additional bodily sensations involved in physical learning solidified the information in actors’ minds.

So reading and watching are great for learning, but complementing them with physical application is even better.

What happens in our brain when learning through experience?

The results above support the idea that physical experiences promote learning, but it’s still not entirely clear why the advantage exists. Perhaps there are indicators in our brain that would help us understand what’s going on. The researchers went a step further to answer this question.

They repeated their experiment with new participants. But this time, after the spinning wheel training, each participant completed their final quiz test while lying in a brain scanner.

Once again, the actors from each pair performed significantly better in their test than the observers. The actors answered 74.5% of their questions correctly while the observers only reached 52.2% accuracy overall.

But let’s compare their brains. When the actors were answering their quiz questions, they showed enhanced activity in several brain regions known to be important for action planning and body movement, including the motor cortex, premotor cortex, and somatosensory cortex.

More importantly, their levels of motor and somatosensory activity actually predicted how well they performed in the tests. The sensory and motor structures in their brain, which were presumably recruited while they trained with the spinning wheels, actually carried over to help when recalling the relevant knowledge in their minds. In other words, rich sensory experiences support the information we process and embed within our memories.

How useful are these effects?

It’s all well and good finding benefits in basic lab quizzes, but could the advantages of physical experiences extend further into our real-life outcomes? The researchers introduced a final twist in their research. They took some university students from an introductory physics course and randomly split them into groups of four with two actors and two observers. Just as in the previous experiments, the students read about angular momentum and then completed the spinning wheels training, with half of them physically holding the wheels and half of them watching.

Several days after this workshop and an additional course lecture based on the material, all students took part in a class quiz on angular momentum, including multiple-choice, short-answer, and mathematical questions. When they got their grades back, the actors had outperformed the observers.

With evidence that our physical experiences affect the quality of our learning, it’s worthwhile thinking about how to optimize our studying as students, teaching as supervisors, and training as employers. Learning isn’t just a trivial exercise; it’s the foundation for all of human progress.

So what do we do now?

Some abstract types of learning may not benefit from active physical exercises. If we do introduce additional sensory experiences in training programs, we need to make sure they are directly relevant to the skills we want to acquire. But when you start to think about it, it’s surprising just how many challenges benefit from the inclusion of physical practices.

Imagine learning to drive a car purely through studying books rather than driving. No matter how much we prepare with books and online articles, the first time behind the wheel is always a shock to our brain. In addition to mentally rehearsing the formal rules of driving, we need plenty of physical experience shifting gears and steering before we become safe companions on the road.

The examples are endless. For people learning a new language, spending time in a relevant foreign country is a major step up. For sports fans, actively playing a sport boosts knowledge compared to purely watching the games and reading the rules. Even the psychological torture of completing tax returns could be reduced with programs that provide good working examples to journey through.

The point of this article is not to diminish the value of the internet, reading, or traditional lectures. There is incredible knowledge available to us at the click of a button, turn of a page, or structuring of a sentence. And the benefits of these resources are self-evident in all of our lives. But when we have the opportunity to include physical experiences in training programs, we should not underestimate the advantages of doing so.

Physical expertise provides additional inputs to our brain that we use in learning and reliving experiences. When dancers watch videos of their practiced dance styles, their sensory and motor brain areas are more active than when they view unpracticed styles. When expert athletes listen to people talk about the movements involved within their sport, their motor system is automatically engaged purely through listening to the relevant sentences. For all of us, physical experiences recruit additional resources in our brain during learning, and those resources assist our understanding. We get a dramatic push up the ladder toward expertise.

If we can build training programs that combine the easy access of online learning with teacher-guided physical practices, we’ll have found an ideal compromise in our ongoing struggle for efficient learning. Sometimes, we need to step away from the textbook and pick up the toolbox.

How Machines Help Us Cooperate

Photo by Andy Kelly on Unsplash

Every day, our robots gain a little more autonomy. With every increment in artificial intelligence and self-governing technology, we reduce the workload for humans and hand over a little more of our decision-making to machines. Despite the horror stories about super-intelligent robots taking over the Earth and destroying humanity, the advantages of this progress are clear. Every time we free up human potential, it eventually gets focused on new activities that add to a nation’s productivity and quality of life.

But there may be some less obvious benefits to automated machines too. Researchers from the US Army Research Lab, Northeastern University, and University of Southern California, teamed up to investigate what self-driving cars mean for human cooperation. Society is a better place when we work together, but cooperation may require self-sacrifice. It’s not always easy to trade our personal comforts for societal gains, especially when we experience the personal costs immediately. We often tell ourselves that we’ll do better another day instead.

Do Robots Make Us Nicer?

To find out how people would make social and ethical decisions, the researchers put participants into a computer game simulating a social dilemma. They first arranged the participants into groups of four players, and sat them separately so that they could not communicate. Across 10 rounds of play, each participant then answered a simple question about a car they received within the game: would you like to turn the air conditioning on or off?

Each player earned a particular number of points depending on how everybody acted. Participants knew that their responses reflected an underlying moral decision: turning on the air conditioning gives them comfort, but increases harmful environmental emissions.

So when everybody cooperated and turned off the air conditioning, all players gained a happy 16 points in the game. But if one person defected and gave in to the temptation for air conditioning, while everybody else resisted, the sneaky defector would get 20 points while everybody else would get only 12 points. And if everybody decided to turn on their air conditioning, they each received a lowly 8 points thanks to their maximal contribution to climate change. The more points a player had at the end of the game, the more likely they were to win money.
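To make the payoff structure concrete, here is a minimal sketch in Python of one scoring rule that reproduces the numbers above. This rule is an assumption for illustration, not necessarily the researchers’ exact formula: every player who leaves the air conditioning off adds 4 points to everyone’s score, and any player who turns it on collects a private 8-point bonus.

```python
def payoff(my_ac_on: bool, players_with_ac_off: int) -> int:
    """Points for one player in a round, given the total number of the four
    players (possibly including this one) who keep the air conditioning off."""
    shared_benefit = 4 * players_with_ac_off  # everyone gains from every cooperator
    private_bonus = 8 if my_ac_on else 0      # the personal temptation to defect
    return shared_benefit + private_bonus

print(payoff(False, 4))  # everyone cooperates: 16 points each
print(payoff(True, 3))   # the lone defector: 20 points
print(payoff(False, 3))  # each cooperator alongside that defector: 12 points
print(payoff(True, 0))   # everyone defects: 8 points each
```

Whatever the exact formula, the structure is the classic social dilemma: defecting always earns an individual more in a given round, yet everyone ends up worse off when everyone defects.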

This setup is analogous to the decisions we make every day of our lives. If we use up energy while everybody else decides to reduce their usage, we get all the benefits of reduced carbon emissions without giving up our own energy-fueled products. But at the same time, if everybody thought that way and maximized their fuel usage, we’d all be worse off.

To see how machines would affect these kinds of decisions, the researchers included a twist in their experiment: some participants were personally driving the car in the game, while other participants were using a self-driving car. So how did each group react?

When people were driving a car, instead of programming a self-driving car before setting off, they were significantly more likely to say “yes” to air conditioning. For some reason, people were more selfish when driving, and less selfish when asking, “How should my self-driving car behave while it operates?” So what exactly was going on? Why were people more cooperative when deciding on the preprogrammed operations of their self-driving car?

Robots Play With Our Minds

To learn more about the mechanisms behind this effect, the researchers repeated their experiment but included additional cues to emphasize how each decision earned quick money or harmed the environment. People were more likely to cooperate when their attention was focused on long-term environmental consequences rather than short-term monetary consequences.

The data pointed to an important conclusion: people’s tendency to cooperate when programming self-driving cars was caused by their reduced focus on short-term rewards. When we program our autonomous machines, we naturally focus on long-term behavior in future conditions rather than short-term rewards associated with immediate behavior. That moderates our impulsivity, and instills us with a greater desire to cooperate with others.

Importantly, this psychological pattern does not apply exclusively to self-driving cars. When the researchers adjusted their experiment to present an abstract version of the social dilemma game featuring generic computer agents instead of self-driving cars, the results were the same. There is something special about acting through machines rather than directly through our own bodies. The added layer between us and the world acts as a buffer against our more primitive selfish impulses.

At this point in history, our decision-making is fusing with the intelligence of our machines. We are surrendering our input where it matters. Self-driving cars do a better job of steering and braking during driving, so they should take over that decision-making entirely, and reduce our chances of killing ourselves or others. But we still get to decide whether we want the heating on, or whether the windows should be open.

It’s nice to know that even our own decisions may edge toward greater cooperation as machines increase their reach in our lives. All of us come well-stocked with cognitive biases that affect our behavior, and autonomous technology may ride on those biases in creating a better world. But we also need to keep an eye on how the biases push us into poor decisions, such as uncontrollable gambling, overspending on products that make us miserable, and making contradictory ethical choices.

The Tyranny of Short-Term Thinking

Peter Singer highlights some of the ethical hypocrisy in our decision-making. If we see a child drowning in a shallow pond, all of us agree that we would be monsters if we did not jump in to save them, even if the water and mud were to destroy our clothes. And yet, many of us avoid donating money to charities that guarantee saving multiple children’s lives with the smallest contributions.

This gap in our decision-making is driven by a powerful cognitive bias: we are more likely to act when we see the immediate payoff in front of us with our own eyes. When the consequences occur in another country or time, we perceive a weaker connection between our actions and their effects. And that weak connection can stop us from donating to a charity, even if we logically know that our donation would save lives.

But as the research above shows, weakening the connection between actions and immediate rewards can also help us cooperate in achieving long-term goals. When we decide to turn on the air conditioning as we drive a car, we feel the immediate cool air and comfort. In our mind, this immediate positive effect outweighs the distant negative effects of environmental harm, so we flick the switch. In contrast, if we preprogram the ongoing behavior of our self-driving cars, the short-term consequences are expressed later in time, so the playing field between personal benefits and social costs evens out a little. In other words, the same psychological distance that masks the benefits of our charitable donations can also increase our social cooperation by reducing our focus on immediate selfish gains.

These kinds of biases will always have advantages and disadvantages, depending on the context surrounding our decisions. We are less swayed by long-term or distant outcomes, and more swayed by short-term or nearby outcomes. If we soften our focus on counterproductive short-term rewards, we are more likely to choose long-term benefits. So when the immediate comforts of wasteful energy usage are less vivid, we are more likely to save energy.

Of course, reinforcing a hard focus on the immediate harms of impulsive actions can also urge us away from undesirable options. A leader who clearly sees the societal fallout of a bad economic policy or missile launch is less likely to act on instinct, even if they know the action would win votes.

When we want to do something uncomfortable today to achieve something better tomorrow, we need to strengthen the perceived links between our actions and their most desirable outcomes. That way, we make healthier decisions. We are more likely to forego a hamburger and hit the gym. We are more likely to skip our coffee today and donate the money to charity instead. And when our self-driving cars are finally parked outside our homes, perhaps we’ll be one step closer to social cooperation that is good for the planet.

Conversation Is Life’s Happy Drug

Photo by Ali Yahya on Unsplash

We all want the keys to happiness. One of the biggest keys is our social life, but how exactly does it contribute to improving our life satisfaction?

It’s not enough to simply own the key — we need to find the appropriate lock for it, and our social relationships are composed of many important variables: conversations, social touch, emotional expressions, shared experiences; the list goes on.

If we could work out the contributions of each input, and the ways in which it impacts our happiness, it may inspire some extra effort geared toward optimizing the social interactions in our own lives.

Science Shows That Social Activity Can Increase Our Happiness

When we ask the right questions, scientific experiments are great at collecting evidence and revealing answers. But there is an important challenge in testing how social interactions impact our happiness: in a lab, people often act a little strangely because labs are not natural environments.

It’s difficult to physically interact with people as you normally would if you’re being watched by a researcher, or if you’re lying in a brain scanner. So, to get reliable and practical answers, we need to look to technology that can measure what we do in the real world.

Have you watched the TV series The Wire? Well, it shows that people were already using this technology a long time ago. However, instead of eavesdropping on criminal suspects without their knowledge as in the series, we can hook up willing participants with unobtrusive audio recorders as they go about their normal lives.

Then, we can track the frequency and quality of their conversations. Although participants may adjust their behavior when they remember that they are being tracked for a science experiment, their normal living environment has to be more natural than sitting around in a lab.

In 2018, a group of US researchers combined the data from four separate studies that all used an audio recorder to analyze people’s daily conversations. For a total of 3–4 days, the device in each study automatically recorded audio for 30–50 seconds, approximately every 10 minutes, as the participants continued with their daily activities. The participants wore the device from the time they woke up to the time they went to bed, and couldn’t tell exactly when it was recording.

The researchers wanted a reliable insight into how the quality and frequency of conversations would influence participants’ life satisfaction. Each of the four studies used a different group of participants in order to cover a range of personal backgrounds: the first study used 79 undergraduate students, the second used 50–51 breast cancer patients and their partners, the third used 184 healthy adults who signed up for a meditation trial, and the fourth used 122 adults who were recently divorced. In comparison to typical psychology studies, that’s a very diverse sample.

In addition to the audio recordings, all participants completed a questionnaire that measured their personalities and life satisfaction levels. After the experiment, the researchers listened to all of the conversations picked up by the recordings and defined each conversation as either small talk (e.g. “What are you up to?”) or meaningful discussion (e.g. “They have already raised 10 million dollars for Haiti”).

Overall, when pooling the data across the studies, the researchers found that life satisfaction decreased as more time was spent alone and increased as people spent more time talking to others.

You may think that this effect would depend on personality traits such as our level of extraversion, but in fact, personality made practically no difference to the strength of the association overall.

Social activity was globally favorable for happiness, which suggests that we should motivate ourselves to meet people regularly and attend social events.

Does Small Talk Count, Too?

Social interaction is certainly important where happiness is concerned, but how about the quality of our conversations?

The data from the studies showed that the frequency of small talk had no significant impact on life satisfaction. So if you’ve ever met someone who frowns upon small talk as a waste of time, they may be right if they mean that small talk makes no difference to their long-term happiness.

But, of course, that’s no excuse to be impolite when meeting strangers. Short-term happiness matters too.

Unlike small talk, meaningful conversations had a positive impact on life satisfaction overall in the studies. And once again, personality had only a negligible impact on the strength of this association. The more frequently we engage in deep and purposeful discussions with other people, the happier we feel about our lives.

Photo by Ben White on Unsplash

All in all, frequent social activity and meaningful conversations are a great way to take care of our mental health. Small talk has no substantial cost or benefit for our life satisfaction, but we probably never expected it to have one. We generally use small talk as a gateway into more substantial conversations, or simply to be polite when we meet friendly people in cafes and bars. When we are pleasant to be around, the people around us are more pleasant in return.

The Takeaway

Even for the introverts among us — and I include myself here — coming up with excuses to avoid meeting people is probably not a great idea for our long-term happiness.

Thinking back on my own past experiences, social occasions have rarely ended in regret, even when I was initially reluctant to leave the comfort of my home.

That’s not to say that we can’t enjoy our own company and appreciate our quiet moments with a book or movie. We absolutely should create space for those activities and meditate on the pleasures of pure mindful awareness. But it’s equally important to engage with others when the opportunities arise.

Conversation is the only tool we have for building a bridge between us and them. We long for social connection, and it’s no surprise that a lack of human interaction harms our satisfaction with life. Other people provide us with support, warmth, and intellectual stimulation.

Our online interactions are helpful for connecting with larger groups of people and providing a way to reach distant friends and relatives. But they do not fully replace the benefits of face-to-face communication with the people standing next to us.

To stay on top of our mental health, we need to schedule regular opportunities for grabbing a coffee with loved ones and meeting ones to be loved.

Each opportunity allows us to craft our words into patterns that make our conversational partner smile, laugh, and think. And with some luck, they will do the same for us.

This Active Ingredient in Mindfulness Improves Your Social Life

Photo by Adam Jang on Unsplash

Loneliness devastates our mental health because humans crave social support and interaction. And yet, in the modern world, many of us find it difficult to meet new people. Social awkwardness, shyness, and the judgment of others are all threats that harm our self-esteem and prevent us from forging valuable friendships. A new study, published in February 2019, suggests that mindfulness may provide a way out of these self-sabotaging cycles.

The researchers recruited 153 adults in Pittsburgh who suffered from high levels of stress. They split these participants into three groups, and each group was prescribed a separate behavioral intervention. All interventions were delivered via a smartphone app, and all had the same basic structure of 14 training sessions, each containing a 20-minute audio lesson and some practice exercises. The only important difference between these groups was the content of the lessons.

The lessons for the first group emphasized the monitoring and acceptance principles of mindfulness: monitoring is the capacity to focus our attention on our present-moment experiences, while acceptance teaches us to appreciate those experiences without judging or overthinking them. Unlike the first group, the lessons for the second group emphasized only the monitoring aspects of mindfulness, and the lessons for the third group were entirely unrelated to mindfulness (they instead focused on analytical thinking and problem-solving).

In the 3 days before and after the group interventions, the researchers assessed each participant’s social activity. They did this by sending survey links to participants, approximately every 2.5 hours, which assessed their total number of social interactions in the last couple of hours, and the number of individual people that they interacted with. These surveys were all sent out between 9am and 7pm, and participants also completed a diary entry at the end of their day assessing their experiences of loneliness.

The researchers were aiming to answer one important question with all of this data: would mindfulness improve people’s daily social interactions, and more specifically, would acceptance be a key active ingredient in making that happen?

* * *

Let’s look at the results. First, it’s important to note that the groups before the interventions were practically identical: they had the same overall sex, age, ethnicity, education, and crucially, loneliness distributions. We can therefore reasonably attribute any differences that emerge between the groups after the intervention to the features of their training sessions.

After the interventions, the diary data showed that the first group, who trained in both monitoring and acceptance, experienced a decline in their loneliness. But the monitoring-only and non-mindfulness groups showed no change in their subjective loneliness. Clearly, the acceptance principle is a necessary component in mindfulness for helping people to feel less lonely.

But putting aside the diary reports, let’s look at the more objective data on social interaction frequencies during each participant’s day. Consistent with the diary data, participants who trained in monitoring and acceptance interacted with people more frequently following their intervention. But once again, the other two groups who trained without acceptance principles in their intervention showed no change in their daily social lives.

To hammer in the final nail, the researchers also found the same pattern of results for the total number of individual people that participants interacted with. After training in monitoring and acceptance, participants interacted with a significantly greater number of people, while the other groups showed no change in how many people they interacted with.

* * *

So the acceptance element in mindfulness has several benefits for our social life. It encourages us to engage in more frequent interaction, talk to a larger number of people, and feel less lonely. These results highlight two important messages:

  1. Mindfulness is important for not only our personal experiences, but also our social experiences. In fact, the results reiterate the tight connection between these two domains: our personal mental health and wellbeing depend on our social interactions.

  2. Acceptance is a key active ingredient in mindfulness, at least in the context of improving our social life. Mindfulness exercises that do not incorporate the principle of acceptance may provide little benefit to our social behavior.

But why is acceptance so important for social interaction? Why would a personal meditation exercise help us to interact with more people? The simple answer is that it may promote comfort. Any introvert will tell you about the discomfort they often experience when interacting with new people. This discomfort comes primarily from the threat of being judged by others, and the constant self-conscious auditing of our own behavior in the presence of that perceived threat. Indeed, I’ve previously written about a dreadful psychological bias that we all experience to some degree, which researchers have appropriately labeled “the liking gap”: we consistently assume the worst after a new social interaction, and we underestimate how much a person likes us. Generally speaking, people like us more than we think they do.

When we train ourselves to accept our own feelings without judgment, perhaps we also worry less about the judgment of others. If the acceptance principle in mindfulness teaches us to be less judgmental and more comfortable with our feelings of anxiety, then presumably, those effects will apply equally well to social anxiety. When we become more comfortable with how we feel, we naturally become more comfortable with how other people feel about us.

As long as we are not actively trying to hurt people, we are wasting our time if we continuously worry about what they think of us. Besides, most of the time, we are simply wrong when we think that a new acquaintance dislikes us. We should accept how they feel, accept how we feel, and find something meaningful to talk about whenever we meet a welcoming person. If we’re lucky, they could become a future friend. Social connection is an essential vitamin for developing a healthy mind, so we had better start taking our regular dose.

How the News Media Traumatize Us

Photo by Toa Heftiba on Unsplash

News is easier to consume than ever before. It is beautifully curated in our smartphone apps, social media accounts, and television channels, and it is continuously optimized to make it increasingly irresistible. The more terrifying, enraging, and shocking the headline, the more likely we are to click and read more.

The advantages of up-to-date information about global events are obvious. When we know what’s going on at home and around the world, we’re better informed in our travel plans, living plans, and our political decision-making. But the ease of access to information also comes with a curse. Sometimes, information is actively harmful for our wellbeing, and I don’t just mean biased information or fake content. Even factual descriptions of an event can adjust our psychology in dysfunctional directions that reduce the quality of our decision-making.

* * *

In new work published in February 2019, a group of European researchers compared the psychological effects of threats experienced directly, to threats that we learn about second-hand. They designed a simple experiment in which participants saw either a blue or a yellow block on a computer screen, one of which was accompanied by a painful electric shock. However, participants learned about the electric shock in one of three different ways depending on their randomly assigned group. Group 1 received the shock themselves, group 2 observed another participant on a computer screen receiving the shock, and group 3 were simply told that one of the blocks was linked to an electric shock but never received the shock themselves. The participants ran through many repetitions of these blue and yellow blocks in this initial phase of the experiment — let’s call it the “conditioning phase”. They were being conditioned to respond to each block with either a threat response — electric shock incoming! — or a neutral response.

In a sense, groups 2 and 3 learned about the electric shocks the way we normally learn about dramatic events in the news: we either see video footage of people suffering through natural disasters and wars, or we are told how those traumatic events are unfolding. Group 1 experienced the drama for themselves, so they would typically be the subjects of a news broadcast.

The researchers wanted to see how these different experiences would change people’s threat-related decision-making. So after this initial conditioning phase of the experiment, all participants completed a second task in which they actively chose between a blue and yellow block 70 times. One of the blocks had a 75% chance of shocking the participants (let’s call this the “dangerous block”), while the other had only a 25% chance of shocking them, but participants had no awareness of these probabilities. The dangerous block could be the same block they were conditioned to, or it could be the opposite color block; they simply had to learn through trial and error.

This decision-making task was split into two halves. For some participants, the dangerous block in the first half was the same dangerous block that they learned about during their conditioning phase. Then in the second half, the dangerous block reversed to the opposing color, just to make things extra difficult for the participant. For other participants, the reversed colors applied in the first half of their decision-making task, while the originally conditioned colors appeared during the second half of the task. So for each of the three groups, some participants made decisions about their conditioned colors followed by decisions about the reversed colors. And the rest of the participants made decisions about the reversed colors followed by decisions about their conditioned colors (although the conditioning itself probably wore off by then).

So how did each of the participant groups react in their decision-making? Group 1 participants, who were conditioned directly with electric shocks, showed a specific pattern of responses depending on which half of the decision-making task they engaged with. In the first half, the participants who received shocks from the same color to which they were conditioned made significantly better decisions than participants who were shocked by the opposite color. In essence, they were biased toward applying the same rule they had been conditioned to, which allowed them to learn more quickly. They made fewer mistakes and did better at choosing the color least likely to shock them.

However, after the reversal in the second half of decision-making, those participants actually performed worse than participants who initially had to work against their conditioning. The participants who had to flip-flop most frequently didn’t have to override such a strong association built in their brains through repetition, so they learned more effectively after the final rule reversal. The participants who built a strong association, through conditioning and then compatible early decision-making, struggled to switch their behavior when required.

Most surprisingly, participants in groups 2 and 3, who were conditioned to electric shocks through observation or purely spoken instruction, all showed precisely the same decision-making outcomes as participants in group 1 who were conditioned by receiving shocks to their own body. Although the underlying brain mechanisms that influenced the decisions for each group may have been different, they pushed participants into the same eventual response patterns.

Why is this so interesting? Because in conventional thinking, when we receive information purely through spoken language, we should not expect to develop a strong conditioned response. We are building a conscious model about how the world works, rather than developing an automatic Pavlovian reaction to rules that we directly experience in the environment. But somehow, participants adjusted their decision-making to spoken instructions in the same way that they adjusted their decision-making to immediate direct electric shocks. Under the late rule reversal that changed which color caused shocks, participants struggled to force their mind out of habitual and persistent responses, even though they were never directly conditioned into those responses through electric shocks, but rather simply taught through spoken instruction.

* * *

These results may be an important lesson in the effects of our typical news consumption. The more we check our news feeds, the more aware we are of important events around the world. But at the same time, we also open ourselves up to attention-grabbing headlines and drama, which occupy limited space in our conscious mind. We live vicariously through the news: the more horror and trauma we read about, the less happy and motivated we feel in our everyday life. And the more we learn about the vicarious mechanics of our mind, the more we realize that second-hand traumatic news can have strikingly similar effects to first-hand traumatic experiences.

All-or-nothing advice is rarely appropriate. Staying away from the news entirely may not be optimal in the modern developed world, because for every benefit we introduce by quitting the news, we are also likely to introduce a cost. But we can certainly consider what changes are optimal in our own life. Perhaps some of us read too much news, in which case we need to carefully evaluate how the consequences harm us. And maybe some of us read too little and completely miss out on important events and interesting conversations that our friends and colleagues are engaging in.

If the first thing we do in the morning is scroll through our news feed, do we feel that adds to the quality of our life or takes away from it? Do we leave the house feeling informed and happy, or sad and unfocused? If it’s the latter, there is clearly room to adjust those daily habits to create a healthier life. Rather than never reading the news again, it might be better to read the news over lunch when we’re less vulnerable to ruining our entire day. When we’ve already warmed up our minds earlier in the day with practical work, bad news may hit us less severely.

We simply have to be attentive in considering the activities that help us and the activities that hinder us. The amazing world of smartphone tech streamlines our lives by reducing our daily efforts in communication and information sharing. But that streamlining can also push us into habitual routines that we never question. It’s only through stopping, thinking, and questioning, that we can snap out of our robotic cycles and consciously weigh the costs and benefits of each behavior. To mend a broken and unhealthy habit, we first need to detect it. If we notice that our engagement with the news cycle is hurting our emotional wellbeing, then we can rethink how we spend our time and mental effort.

Empathy is not always a good thing when it’s directed toward events that we can do nothing about. Pointless suffering is clearly something we want to avoid, and some news features that stimulate our feelings of anger, sadness, and pity could be accused of carelessly toying with our emotional fragility. And worst of all, the effect doesn’t stop at our emotional reactions; it also biases the active decisions and choices we make. When we learn to control our attention and energy, we can spend our mental resources wherever they make us happiest.

Your ADHD Diagnosis May Depend on Your Birthday

Imagine you were conceived and born a month earlier than you actually were. As long as you’re not a believer in astrological signs, you would probably expect to be the same person you are today, just a month older. That’s a very reasonable assumption because you would have the same genes, parents, and general childhood environment. But there is one thing about your early life that could change rather dramatically: your starting age at school.

If you were born in August, you probably started school almost a whole year earlier than your age-matched peers born in September. You were separated by a grade throughout your early educational life. This might not seem like a big deal, but it does make a meaningful difference. Because of the way the school calendar is arranged in countries like the US and UK, August-borns are the youngest in their class while September-borns are the oldest in their class. In adulthood, an 11-month age difference relative to a colleague does not substantially affect mental ability or developmental characteristics. But in childhood, the experiences and behaviors of kids aged 8 differ from those of kids aged 7. And as they grow and compete in the same class, that gap can have notable consequences.

I’ve previously highlighted many of these consequences in another article, but more recently, researchers have looked at another important outcome: diagnoses of attention deficit hyperactivity disorder (ADHD). Of all neurodevelopmental disorders diagnosed in childhood, ADHD is the most frequent, and its typical symptoms include chronic difficulties with focused attention and heightened impulsivity/hyperactivity. These symptoms can lead to school performance disadvantages, social challenges, and increased risks of injury and substance abuse disorders. In 2016, 8.4% of US children aged 2–17 had a diagnosis of ADHD and 62% of those children were on medication. Among children aged 2–5 years old, the prevalence of ADHD diagnoses increased by 57% between 2007 and 2012.

There is an ongoing debate about how ADHD is diagnosed and treated, but like many psychological and neurodevelopmental disorders, diagnoses are usually made on the basis of behavior. For children, this can introduce some complexity because behavior changes and develops rapidly in early life relative to late life. Our behavior also tends to be judged in the context of other people who are similar to us — it’s perfectly reasonable for a 6-year-old to throw a tantrum in a supermarket because it resembles the behavior of her 6-year-old peers, but if a 30-year-old started stamping his feet, we wouldn’t be quite so generous in our judgment.

If we assume that younger children tend to be less attentive and more impulsive than older children, then these differences due to age could appear similar to ADHD symptoms in a classroom. A child aged 7 may misbehave more than a child aged 8 in the same class simply because they haven’t had the time to mature to the same level. But if this misbehavior is mistakenly attributed to ADHD, it could result in a diagnosis error.

So are ADHD diagnoses more common for children born in August compared to September in the US? A group of academics in Massachusetts tested this question by analyzing an insurance database filled with anonymized information on 80 million Americans. They focused their search on children born between 2007 and 2009, and examined whether there was a difference in rates of ADHD diagnoses for August-borns versus September-borns in 18 US states with a September 1 cutoff for school entry. Their analysis ended up including 407,846 children who entered kindergarten between 2012 and 2014.

For every 10,000 children born in August, 85 of them had an ADHD diagnosis. For every 10,000 children born in September, 64 of them had an ADHD diagnosis. That means children born in August were 34% more likely to be diagnosed with ADHD. Similarly, August-borns were 32% more likely to be treated for ADHD. When the researchers split the children by sex, only the boys showed a significantly greater risk for ADHD with an August birth, but the pattern of results for girls pointed in the same direction.
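For readers who want to check the relative-risk arithmetic, here is a minimal sketch using the rounded per-10,000 figures quoted above; the rounding is presumably why it lands a point below the 34% reported from the study’s exact rates.

```python
august_rate = 85 / 10_000     # ADHD diagnoses per child born in August
september_rate = 64 / 10_000  # ADHD diagnoses per child born in September

relative_increase = (august_rate - september_rate) / september_rate
print(f"{relative_increase:.0%}")  # ~33% from these rounded figures
```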

At the age of 4, before entering the school system, there was no difference in diagnosis rates for children born in August compared to September. The difference emerged as significant by the age of 7, after all children had started school. This suggests that school behavior is a critical variable in the divergence between kids born in August and kids born one month later. On top of that, the researchers found no significant difference in ADHD rates for children who grew up in US states without a September 1 cutoff for school admission. It therefore seems likely that factors specifically related to age of entry at school explain most of the increased risk that August-borns face for an ADHD diagnosis.

* * *

If all you knew about two children was that they were born a single month apart, you would struggle to predict any meaningful characteristics that are unique to each of them. In fact, you would probably assume that the task is impossible; after all, what meaningful difference could a single month make? But the peculiarities of our school entry system mean that a child born on August 31 is likely to be treated differently to a child born on September 1, purely by virtue of their status as youngest vs oldest in the class. That different standard may be enough to introduce a bias in the probabilities of being diagnosed and medically treated for ADHD.

The data from the study above should intrigue doctors, teachers, and parents. We all want to give children the best possible start in life, and ADHD is a vulnerability that requires attention. But when it comes to diagnosis, we may be judging identical children by different standards based on the classmates they happen to join when they begin their education. If this leads to more false positives in detecting ADHD, it’s worth considering the possible impact of the misplaced diagnosis as a child grows up. Perhaps most importantly, any unnecessary medical treatments could cause adverse effects that nobody expects or intends.

By remaining vigilant with efforts to detect oddities and identify shortcomings in our understanding of human wellbeing, we can continue to steer some of our misdirected energies toward their desired goals. Studies that reveal unintended biases in decision-making contribute to fueling our relentless progress in the treatment of mental and physical health.

How to Stop Choking Under Pressure

Photo by Kevin Ku on Unsplash

Anxiety is the dizziness of freedom

— Søren Kierkegaard

Rewards are usually a motivator, but sometimes they are just too much. When enough rides on your good performance, you begin to feel the pressure. Professional athletes at the top of their game experience this during critical moments, and so do the rest of us when we realize that other people depend on our success, or when we step up to give an important presentation at work. Perhaps paradoxically, we are most likely to let ourselves down when we least want to. But at the same time, we need to keep attempting major challenges in our life, because without them we may be stuck at a dead end.

The main problem with pressure is the anxiety it causes us. Some level of anxiety is necessary when we face a challenge because it’s a sign that our efforts really matter. Sensible anxiety keeps us on our toes and gives us the impetus to prepare properly so that we do not mess up. If we don’t sufficiently care about an important event in our life, we are more vulnerable to failing simply because we did not prioritize it properly in our thoughts and preparations. But when anxiety is too high, it can become dysfunctional by pushing us into a shaky mindset as we approach the challenge itself. We need enough anxiety to care, but not so much that we choke.

There are two major competing theories about how pressure and anxiety actually choke us:

  1. Distraction: When we desperately want to succeed, that desire itself can distract us from performing at our best. The anxiety shifts our attention from typical performance rituals to thoughts of what it means to fail. We end up overthinking activities that normally come naturally and automatically to us. Imagine that I offered you $10,000 to type out a long sentence quickly on your computer without making an error. Although you probably do this successfully 9 out of 10 times in your daily life without even thinking about it, the added pressure of the money is likely to make you think too much about the exact locations of your fingers as you type, or the pain of losing $10,000. This unnatural focus of attention means that you’re more likely to underperform by typing more slowly than usual or accidentally hitting the wrong key. You are thrown off your smooth expert mode.

  2. Over-motivation: When we are too emotionally aroused, we tap into our instinctive circuits that automatically pull us away from threats. If we are deeply anxious about losing or underperforming in some way, this fight-or-flight reaction is likely to overwhelm the practiced habits, reactions, and strategies that usually guide us toward performing well.

If we could work out which theory explains our choking under pressure, it would put us in a better position to come up with a fix. Fortunately, each theory makes an opposing prediction about one crucial question: does performance under pressure depend on whether a skill was learned through conscious, explicit instruction or through incidental, automatic practice? Let’s take the example of a boxer who needs to prepare for an upcoming fight with a new opponent. Would the boxer fare better under pressure if their trainer helped them analyze and memorize rules about the opponent’s behavior, or if the trainer immediately played out those rules in the practice ring and forced the boxer to react automatically and unknowingly to them?

The over-motivation theory would predict that both learning styles are equally vulnerable to choking under pressure, because our instinctive emotional reactions would overwhelm conscious and unconscious behavioral systems in the same way. The distraction theory, on the other hand, would predict a greater choking vulnerability for the boxer who prepares with the more conscious learning strategy, because the pressure specifically distracts the conscious mind with worries about failure. The boxer who learned incidentally relies less on their conscious mind, so conscious distractions will interfere less with their performance.

Let’s get testing

Present fears are less than horrible imaginings.

— William Shakespeare

A group of researchers from across the US devised a clever experiment to directly test this question and find out which theory would come out on top. They randomly split 64 participants into two groups: an instructed learning group and an incidental learning group. Both groups sat in front of a computer and saw a row of four squares on the screen, one square for each finger of their hand excluding the thumb. Whenever one of the squares illuminated, participants had to press the button under their corresponding finger, and continue until a sequence of actions was completed. The sequences were made up of eight actions, and participants had to complete the sequences correctly under strict time pressure. Each participant learned a total of three different sequences, repeated 32–192 times, in a random order during training.

What exactly was the difference between the instructed learning and incidental learning groups? The instructed group saw a colored cue before each sequence began, which predicted the sequence they were about to practice — a yellow, blue, or green sequence. So these participants primarily used a conscious strategy in learning the sequences, thinking for example, “ok it’s blue, which means I will be pressing button 2, then 4, then 1…”. The incidental learning group had no such cue and were told that each sequence would be a completely random sequence. So they learned everything through a more automatic detection and reaction system, and couldn’t rely on their conscious predictions and rules.
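
To make the two training conditions easier to picture, here is a minimal sketch of how one block of trials might be structured. The specific sequences, time limit, error rate, and function names are illustrative assumptions, not details taken from the study.

```python
import random
import time

# Three eight-step button sequences (buttons 1-4, one per finger); values are made up.
SEQUENCES = {
    "yellow": [2, 4, 1, 3, 2, 1, 4, 3],
    "blue":   [1, 3, 2, 4, 1, 4, 2, 3],
    "green":  [4, 2, 3, 1, 3, 4, 1, 2],
}
TIME_LIMIT = 4.0  # seconds allowed for all eight presses (hypothetical)

def simulate_press(cued_button: int, error_rate: float = 0.05) -> int:
    """Stand-in for a real key press: usually correct, occasionally a slip."""
    return cued_button if random.random() > error_rate else random.randint(1, 4)

def run_trial(group: str) -> bool:
    """One training trial; True if the eight-step sequence was completed correctly in time."""
    label, sequence = random.choice(list(SEQUENCES.items()))
    if group == "instructed":
        print(f"Cue: {label} sequence")  # the incidental group sees no cue and believes each trial is random
    start = time.time()
    correct = all(simulate_press(button) == button for button in sequence)
    return correct and (time.time() - start) <= TIME_LIMIT

for group in ("instructed", "incidental"):
    completed = sum(run_trial(group) for _ in range(20))
    print(f"{group}: {completed}/20 trials completed correctly")
```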

After training, all participants continued with the same task they practiced. But now, they were playing for money. Before each sequence started, the computer told them how much money it was worth: $5, $10, or $20.

Over the course of training, both groups improved their speed in completing the sequences. The instructed learning group, who saw the predictive color cues for each sequence, showed a learning advantage over the incidental learning group. Their conscious predictions helped them progress more quickly in their training.

But the big question was whether the groups choked equally when faced with the high-stakes $20 sequences. The instructed learning group showed the characteristic pattern of choking under pressure. The added incentive of the $10 sequences improved their performance accuracy compared to the $5 sequences, but the $20 sequences created enough pressure to significantly harm their performance compared to the $10 sequences. So their peak performance was in the middle: enough incentive to care about getting the sequence correct, but not so much that it made them choke.

In contrast, incidental learning worked its magic in helping participants develop a resilience to the changing incentives. The level of reward for each sequence made absolutely no difference to performance. The participants’ limited conscious knowledge about the sequences during training was actually a blessing in disguise: it prevented them from choking when it mattered. In fact, even when the predictive color cues were introduced just before the money phase, these participants still didn’t choke. As long as they had learned and trained under implicit, automatic conditions, they were resistant to failing from too much pressure.

These results are a strong hint that choking is driven by our conscious knowledge and control processes in performing a task. Although conscious learning strategies help us to pick up and master a skill more quickly, they also introduce a cost when it comes to performance under pressure. We choke because those conscious strategies are interrupted by other conscious demands associated with a large emotional weight. In other words, the distraction theory explains the causes of our choking better than the over-motivation theory.

So what does it all mean?

What is the right attitude towards criticism?…To investigate candidly the charge; but not fussily, not very anxiously. On no account to retaliate by going to the other extreme — thinking too much.

— Virginia Woolf

The results have worked out in our favor. The distraction theory means that our implicit learning systems, which do not rely on conscious knowledge and awareness, are spared when the pressure mounts. Wherever possible, we can therefore adjust our training styles to fit. The pressures and anxieties caused by screaming fans and high stakes distract our conscious minds from applying the rules we learned back on the training ground. If we limit our awareness of these rules during training, there are fewer opportunities for our anxieties to interrupt our flow. Our implicit and automatic behavioral systems get on with the job they were trained to do, while our conscious minds are busy worrying about failure and judgment. To put it bluntly, there’s less interference because there’s less to interfere with, at least in our conscious performance.

So what can we do about the delicate balance of sufficient but not excessive anxiety for ideal performance? Can we lean away from choking territory? We have a couple of options: 1) Train ourselves in the most automatic and implicit ways we can when preparing for a major test, in order to build a resistance to pressure-related distractions, 2) Reframe our perceptions of the stakes, in order to reduce the pressure on our shoulders.

Questions such as “what’s the worst that can happen?” help to reframe our perceived stakes during a challenge, especially when we happen to be exaggerating the costs of failure in our anxious minds. Before giving important talks in my earlier career, I would get excessively anxious over vague thoughts such as “oh it would be just horrible if I embarrassed myself here” and “I’ll never forgive myself if I ruin this opportunity”. But as my experience evolved over time, with both fruitful and regretful highlights, new opportunities finally allowed me to ask “what is actually the worst that can happen here?”. It was then easy to see that my biggest fears were improbable and irrational nonsense.

Excessive anxiety, and unnatural attention to our conscious performance dynamics, are both choke-manufacturers. They derail us when we most need to remain focused. Introducing automaticity and reactivity to our training schedules, and rethinking the consequences that depend on our performance, can help with optimally utilizing each opportunity we find. Realizing that we won’t lose everything if we do a terrible job is freeing. It’s always worth appreciating the successes and pleasures that we have already achieved in our lives, because they are still likely to be there if we fail our next challenge. And besides, there’s always next time.

Multitasking Is a Myth You Should Believe In

Photo by rawpixel on Unsplash

Multitasking is the new norm. In the modern world of smartphones and continuous internet access, information and distractions are all around us. I never manage to write a full article or work on a project without regularly drifting toward WhatsApp, Twitter, and the news. Even when I’m not drifting toward distractions, I am always listening to music while I work. My excuse is that music helps to drown out distractions, and helps to put me in a more focused mental state. But if I’m honest, my singing and head-nodding breaks suggest that music acts as a frequent distraction too.

The truth about multitasking is that it’s not really multitasking at all. The attention processes in our brains aren’t built to simultaneously focus on several streams of information across different tasks. Instead, we shift between individual tasks that require our attention. Some people can do this fast enough that it looks like true multitasking — just watch professional gamers battle each other in demanding computer games — but when multiple tasks require focused attention, you’re never really engaging all of them at the same time.

This is why learning to drive a car is so difficult. When you’re learning, you need to pay careful attention to the several things you need to do: steering, checking mirrors, shifting gears, managing foot pedals, etc. You need to switch your attention between the tasks quickly and effectively to drive safely. But when you’re an expert, each of the tasks becomes habitual and requires less attention. So you can focus your mind on watching the road while everything else pretty much runs itself. You go from “multitasking” to single-tasking.

* * *

Rather than tackling the technicalities of what multitasking means in the brain, one group of researchers wanted to test how beliefs in multitasking affect performance. Most people think they’re great multitaskers: 93.32% of Americans in a survey believed they multitasked as well as, or better than, the average person. A large number of those people in the survey must be wrong, but perhaps their beliefs are good for them.

In their first experiment, the researchers recruited 162 participants and asked them to transcribe an educational video while watching it. The instructions for each participant were slightly different depending on which of two groups they randomly fell into. The first group was labeled the multitasking group, and participants were asked to complete two tasks at the same time: 1) learning the video content, 2) transcribing the video content. The second group was labeled the single-tasking group, and participants were instead asked to complete the single task of learning and transcribing the video content. In other words, all participants took part in exactly the same task, but only half of them were told that it would require multitasking.

Purely through this difference in perceptions and beliefs, the results between the groups diverged. The multitasking group outperformed the single-tasking group by accurately transcribing significantly more words (224 words vs 177 words on average). They also performed better in a pop quiz that tested knowledge of the video, after the transcribing part of the experiment ended. And these performance benefits emerged even though the two groups spent the same amount of time watching and transcribing the videos.

In a second experiment, rather than manipulating people’s perceptions, the researchers decided to look for possible effects of naturally occurring differences in participant perceptions. They asked 80 participants to complete two word puzzles presented side by side on a screen. The first puzzle was a simple word search while the other was an anagram task in which participants had to create as many words as possible out of a 10-letter string. After the puzzles, participants were asked how much they felt they were multitasking during their efforts. Stronger feelings of multitasking correlated positively with the number of correct words found.

To extend this word puzzle experiment, the researchers took a new set of participants and repeated their manipulations of multitasking perceptions from the first experiment. But this time, the researchers were slightly more subtle in their language. They told the people randomly assigned to the multitasking group that the two word puzzles came from separate studies, while telling the single-tasking participants that the puzzles came from the same study. Once again, the multitasking group performed better than the single-tasking group, finding significantly more correct words in the puzzles (13.65 words vs 7.5 words on average).

With the attentive diligence that marks any good scientist, the researchers repeated the word puzzle experiment a final time after manipulating perceptions, but this time included eye-tracking technology that allowed them to measure how much participants’ pupils dilated during the task. Pupil dilation is linked to greater mental effort, attention, and arousal, so if multitasking believers actually engaged better with the task, you would expect to see their pupils grow larger than the pupils of single-tasking believers.

In line with these predictions, participants in the multitasking group not only repeated their superior performance in the word puzzles, but also showed larger pupil dilation than the single-tasking group. You might think the larger pupils were due to the exciting arousal associated with performing better, but in fact, their pupils were already larger before they even found their first word. The larger dilation then continued throughout the rest of the task. Multitaskers’ brains and bodies were physiologically engaged more deeply with the task from the moment they started attempting the puzzles.

Amazingly, the researchers ran a total of 30 experiments focused on the question of whether multitasking perceptions directly improve performance. So the last flick of their wand was to combine the data from all of these studies and understand the strength of the overall effect with a meta-analysis. They measured the magnitude of the difference between the multitasking and single-tasking groups in each study (the effect size), and then calculated the average effect size across the studies with a statistical model that took into account the size of each study. The overall effect was significant and moderate in magnitude, so a belief in multitasking meaningfully and consistently enhanced performance.
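
For a sense of what that final pooling step involves, here is a minimal sketch of a sample-size-weighted average of effect sizes. The numbers are hypothetical, and the researchers’ actual model was more sophisticated than a simple weighted mean.

```python
# Minimal sketch of pooling an effect across studies by weighting each study's
# effect size by its sample size. The values below are hypothetical examples,
# not the data from the multitasking experiments.
studies = [
    {"effect_size": 0.45, "n": 162},
    {"effect_size": 0.30, "n": 80},
    {"effect_size": 0.55, "n": 120},
]

total_n = sum(s["n"] for s in studies)
pooled = sum(s["effect_size"] * s["n"] for s in studies) / total_n

print(f"Pooled effect size: {pooled:.2f}")  # larger studies pull the average harder
```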

* * *

It’s always astonishing to find out how powerful our beliefs and perceptions really are. Everything from placebo effects to superstitions can dramatically influence our behavior and its outcomes. We can call it the power of faith and confidence. Thinking positive thoughts is not just a cheap trick that fools you into believing everything is going well; sometimes, things really do go better when you are optimistic. It’s not a supernatural energy or force at work, it’s simply your beliefs and perceptions improving how you approach and deal with a problem.

When it comes to multitasking, the idea that we can do several things at once may be technically incorrect. However, the belief that we are multitasking is enough to make us single-task more efficiently. So this may be a rare situation that calls for feelings over facts. Multitasking might be wrong, but it works.

E-Cigarettes May Be Your Best Hope to Quit Smoking

Smoking remains one of the worst things we do to our health. We get hooked on the nicotine and then regularly ingest a list of poisons that slowly consume our organs. Health systems around the world are desperate to get people to stop smoking in a bid to reduce the healthcare burden of patients with smoking-related diseases. But their recommendations can only do so much when up against the physiological addictions in smokers’ brains.

As far as doctors and scientists today can tell, e-cigarettes do less bodily damage than traditional tobacco cigarettes, although our existing knowledge on their health risks is far from complete. With the overwhelming difficulty associated with going cold turkey on nicotine, vaping may provide a welcome aid on the journey to smoking cessation. Recent media attention has focused on the hazards of vaping, especially on its growing use among adolescents. While we should all agree that we need to curb the glamorous advertising of nicotine addictions to minors, we can still look for the potential advantages of e-cigarettes as a replacement device for smokers.

Do e-cigarettes actually help you quit smoking? At the end of January 2019, a noteworthy trial by researchers in the UK was published in the New England Journal of Medicine, and it targeted exactly that question.

The researchers recruited a group of people attending stop-smoking services within the UK National Health Service, and randomly split them into two groups: one group that was prescribed e-cigarettes and another that was given a choice from a list of other nicotine-replacement products including gum, patches, and nasal/mouth sprays. People in both groups also received one-on-one behavioral support every week from a local clinician. It’s worth noting that all of these people were clearly motivated to quit smoking given their attendance at the stop-smoking clinics. Without a strong motivation to quit, it seems unlikely that any smoking cessation aid would do much good.

The two groups of participants were asked to persist with their efforts to quit smoking for a full year, at which point the researchers followed up to measure outcomes. Of the 886 people who started the trial, almost 80% completed their final follow-up assessments at the end of their 52 weeks. As you’ve probably guessed, the most interesting question for the researchers was whether either group would have a significant advantage in overcoming their smoking habits.

The researchers considered a participant abstinent if they reported smoking fewer than five cigarettes from two weeks after the start of their quit attempt. They confirmed this report with a breath test for carbon monoxide, which is a harmful gas found in tobacco smoke but not in the products of typical e-cigarettes or nicotine-replacement products. 18% of participants in the e-cigarette group and 9.9% of participants in the nicotine-replacement group remained abstinent from smoking cigarettes at the end of the year. On top of that, among participants who failed to remain entirely abstinent, carbon monoxide tests confirmed that 12.8% of people in the e-cigarette group, compared to only 7.4% in the nicotine-replacement group, managed to cut their smoking by at least 50%.
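
Expressed as a risk ratio, those abstinence figures work out to roughly 1.8 in favor of the e-cigarette group; a quick sketch of that calculation using the percentages quoted above:

```python
# One-year abstinence rates reported above, expressed as a risk ratio.
ecig_rate = 0.18   # e-cigarette group
nrt_rate = 0.099   # nicotine-replacement group

risk_ratio = ecig_rate / nrt_rate
print(f"Risk ratio: {risk_ratio:.2f}")  # ~1.82: e-cigarette users were nearly twice as likely to stay abstinent
```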

In the group of participants who remained abstinent after a year, 80% were continuing to use their e-cigarettes while only 9% were using their alternative methods of nicotine replacement. Although neither method could rival the pleasure of a traditional cigarette, participants reported feeling greater satisfaction from using their e-cigarettes than other replacement products, and found them more helpful in quitting smoking. In the early days of their efforts to quit, e-cigarette users reported less trouble with irritability, restlessness, and failures to concentrate than the other nicotine-replacement users.

The data suggest that e-cigarettes are a helpful tool for people who want to ditch their cigarette habit. They reduce the severity of withdrawal symptoms, and they lead to greater success in continuing abstinence from tobacco. The proportion of successful quitters overall in the trial wasn’t enormous, which is a testament to just how difficult the challenge is. It takes motivation, effort, and probably a few encouraging nudges from friends and loved ones.

Photo by Matheus Lira on Unsplash

Several advantages of e-cigarettes could explain their greater success over other stop-smoking products. Perhaps the nicotine dosages are better tailored to users’ needs than those of most other methods. As a behavioral scientist, my own mind leans toward the practical behavioral qualities of e-cigarettes as a delivery vehicle for nicotine. They don’t just drive nicotine into your system, they do it in a way that closely matches traditional cigarettes. They offer all of the habitual cues that smokers have adapted to: the small stick between the fingers, the glowing tip as nicotine is inhaled, and the gentle release of smoke during an exhale.

These behavioral elements might sound like nothing compared to the physical absence of nicotine in an addicted brain. Although nicotine absence may be the primary driving force behind failed attempts to quit smoking, all of the contextual elements that surround the habit of smoking may also make a difference between success and relapse. 

In the days before e-cigarettes were widely available, I witnessed family members doing all kinds of bizarre things in their efforts to patch up the hole left in their lives from throwing out the cigarettes when they decided to quit. They played with prayer beads to keep their fingers busy, they held candy sticks between their lips to mimic cigarettes, and they enjoyed the feeling of blowing vapor out of their mouths on a cold day. Now, e-cigarettes offer all of these in a more convenient and less silly-looking package.

But let’s not forget the problems of e-cigarettes. Non-smokers who choose to take up vaping are still putting themselves at risk of becoming vulnerable nicotine addicts. As the results of the study above highlighted, e-cigarette users were far more likely to continue with their new habit than other nicotine-replacement users. If we are talking about adults who willingly make this decision while understanding the dangers, it’s not necessarily a big deal. But if we are talking about children and teenagers who will grow to regret their addiction, and are blindly driven to the habit by exciting adverts, the problem is more obvious. Although it currently seems as though the health hazards of e-cigarettes are less severe than those of tobacco cigarettes, we may identify further harms as the science and evidence in this area develop. And, of course, vaping costs money. In hard times when you want to spend less, the last thing you want to deal with is a nicotine addiction.

If you are trying to beat a harmful cigarette habit, an assist from e-cigarettes may be a sensible bet. For now, they appear to be the best available product for dealing with smoking cessation. As a society, we can use them in efforts to reduce the prevalence of smoking-related diseases, while doing our best to avoid throwing vulnerable groups such as children and adolescents into just another vortex of addiction.

Aerobic Exercise Beats Muscle Training in Improving Brain Function

Photo by Jacob Postuma on Unsplash

Choose rather to be strong of soul than strong of body.

— Pythagoras

As intelligent as Pythagoras was, he was almost certainly wrong in his intuitive hard-edged separation of body versus soul. It hasn’t been long since I last wrote about the wonders of physical exercise for brain and mental function. But new studies have continued in the meantime, and it’s important to stay on top of them. Scientific progress is incremental, and evidence in support of a particular theory either continues to accumulate or wavers until a few killer studies finally disprove the theory.

By keeping up to date with evidence as it appears, we can minimize the chance that we hold outdated or erroneous assumptions about how the world or our bodies work. So is there still a strong case in support of regular aerobic activity as a cognitive enhancer? And does aerobic activity beat muscle training in boosting our mental function?

When we think about cognitive function and possible activities to enhance, stimulate, or protect it, we typically consider mental exercises; this is what makes brain training and problem-solving games so popular. We feel that our cognitive ability is being challenged and that our performance is improving in the game, so we make a natural analogy to the challenges of lifting progressively heavier weights at the gym: physical exercises push the limits of our body and improve it in the process, while mental exercises push the limits of our brain and improve it in the process.

But as my previous article highlighted, it’s a mistake to think about physical and mental activity as independent domains. They are both indispensable parts of the same unified structure that we call a human. So it shouldn’t be too surprising to learn that the aerobic activity that benefits our body also benefits our mind. All of our actions, thoughts, feelings, and experiences come from the operations of our biological organs after all.

In line with this logic, it’s reasonable to expect that physical exercise should benefit the brain and mind, but we still need good evidence to demonstrate that it actually happens. A new study published at the end of January 2019 in the journal Neurology targets exactly this question.

The group of researchers from Columbia University wanted to test the cognitive effects of regular aerobic exercise for adults between the ages of 20 and 67. They recruited 132 participants with normal cognitive ability but below-average physical fitness. Then, they randomly sorted these people into two groups: an aerobic exercise group and a stretching/toning group.

Participants in the aerobic exercise group selected from a list of different activities, but all activities were organized to meet a standard heart rate intensity during the exercise. It doesn’t particularly matter whether you run, cycle, or swim. As long as it gets your heart pumping in the same way, you should achieve similar aerobic effects. The stretching/toning group instead performed exercises that enhanced physical flexibility and core strength.

Recruits in both groups individually attended a fitness center four times a week, for a total of 24 weeks. Each session lasted approximately an hour under the guidance of trainers and coaches, and heart rate was continuously monitored to make sure that the aerobic training was hitting its desired targets.

Over the course of the experiment, the researchers assessed aerobic capacity (VO2 max) and cognitive ability three times for each participant: first before the experiment started, second after 12 weeks of training, and finally after the full 24 weeks of training. This allowed the researchers to analyze the change in aerobic and cognitive ability as participants trained throughout the 6 months of the study. If physical exercise does affect cognitive ability, it’s possible that it influences each of our mental functions differently. To make sure they were able to detect this, the researchers tested several cognitive skills including language, attention, and memory.

Photo by Michal Vrba on Unsplash

Fortunately for us, we can unwrap the data now to learn the results rather than waiting 6 months like the diligent researchers. But first, a few sanity checks. Before the experiment started, the aerobic exercise and stretching/toning groups did not differ in age, sex, education, IQ, or any of the cognitive assessments that the researchers used. This is good news because it suggests the randomized grouping successfully removed any major bias in splitting the participants. The aerobic group had slightly more participants with hypertension, but the rarity of medical conditions overall meant that only six participants (out of the 132) were affected.

Now that the sanity checks are in order, let’s look at what happened to the measures of aerobic capacity. Unsurprisingly, the aerobic training group showed greater improvements in aerobic capacity than the stretching group. Their VO2 max was elevated to similar levels for both the 12-week and the 24-week assessments, while the stretching group showed no change. As expected, the aerobic training succeeded in improving participants’ cardiovascular fitness.

The more exciting question relates to how the cognitive outcomes changed. Most of the cognitive traits did not change with either training program, but one particular trait jumped out of the data. That trait is called executive function, which typically refers to our ability to effortfully guide our behavior toward specific goals. When you organize or prioritize your options, suppress inappropriate actions, categorize information, and focus your attention on critical aspects of a task, you are using executive function.

The researchers found that participants in the aerobic exercise condition showed a greater improvement in their executive function performance after 24 weeks of training compared to the stretching group. They also spotted an interesting interaction with participant age: older participants saw larger benefits to their executive function from the aerobic exercise.

To work out whether the executive function improvements might be linked to brain changes, they also ran some imaging scans. The scans revealed larger growth in a small area of the middle frontal cortex on the left hemisphere of the brain after aerobic training compared to stretching/toning. However, the amount of growth did not seem to correlate with improvements in executive function.

* * *

Bodily decay is gloomy in prospect; but of all human contemplations the most abhorrent is body without mind

— Thomas Jefferson

The results of this study support the idea that aerobic exercise is good for our cognitive function. As with any study though, it has its limitations and we need to wait for even better evidence before believing that we’ve hit upon the correct answer. Better evidence means bigger studies, with more participants followed over longer time frames, that replicate these effects. Perhaps the positive effect on executive function was just a fluke? If an independent lab finds the same pattern, we can be more confident that it’s a legitimate effect.

For now, the evidence is moving in an optimistic direction that suggests if you care about your mental wellbeing, jumping on your bicycle or treadmill regularly is a good idea. And let’s face it, even if it doesn’t improve your cognitive function, you’ll almost certainly be helping out your cardiovascular health, and there’s no real sign of a downside from regular exercise.

There are, of course, mental activities that benefit our mental wellbeing too. Mindfulness is growing in popularity and the evidence to support its psychological benefits is getting stronger. And rather like the benefits of physical exercise for cognition, mindfulness may also have important benefits for physical health. Whatever way we look at it, the road between mind and body is a continuous two-way street. Our body produces our mind and our mind feeds back to affect our body. This seamless cycle is a great reason to pay equal attention to our physical and mental health.

Psychological Targeting Makes You More Likely to Click “Buy”

Photo by Tony Reid on Unsplash

It’s always surprising to learn about how similar we all are, but businesses often care more about the differences between us. It’s all about “target audiences”, “customer profiling”, and “personalization”. Lumping customers into a single group and holding the same umbrella over them is not as effective as detailing their differences and using tailored strategies to profit from them. Showing everyone an advert for a million-dollar luxury yacht is not as efficient as showing the yacht to high-income groups while promoting cheaper holiday breaks for low-income groups.

When content is personalized to our own tastes and circumstances, it means we are being shown what is most relevant to us. It minimizes our workload in accessing the information we want or need. If we open up our internet browser to buy a new pair of shoes, it’s far more convenient to immediately see an advert for our favorite style of shoes than to search several outlets to find them ourselves. The more information that companies have about us, the better they can filter out the irrelevant content that we do not want to see.

You may already be thinking about the potential hazards of companies knowing too much about us, and you’d be right to raise that concern. Privacy is an important priority in our lives, and the more of it we give up, the less protection we have against those who want to mold or mislead us. But the frequent scare stories around this issue make it easy to forget the ways in which selling off some of our personal data is actually streamlining our lives. I’m not arguing that we no longer need to worry. I’m arguing that it’s worth keeping sight of why we make these sacrifices.

The major online services that you use every single day, but don’t pay for upfront, are likely to be making their money by selling the data they gather on you to other businesses. Those businesses use your data to show you adverts that fit your online activity patterns and personal information. The better they can tailor their content to suit you, the more likely they are to sell you a product at a minimal advertising cost. They want to pump their conversion rates — the probability that you will buy the product when you see their advert — as high as they will possibly go. To do that, they want to know everything about you. And one particularly useful pot of gold may be your personality.

A team of academics wanted to test just how useful your personality could be to an advertiser. They didn’t need to interview users or even send them a questionnaire in order to assess their personalities. They only needed to analyze one important piece of information: Facebook Likes.

When we Like a piece of content on Facebook, we’re not just expressing that we like that specific feature. We are revealing deeper aspects of our identity, because the things we like and enjoy depend on our personalities. If we are extraverted rather than introverted, we may be more likely to enjoy social content. If we have high rather than low openness, we may be more likely to enjoy adventurous content. The truckload of Likes that Facebook has on each user allows them to analyze that data and infer a user’s personality. In fact, that data allows a computer to predict our personality better than our friends or family can.
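
As a rough illustration of the general idea (not the researchers’ actual method), here is a minimal sketch in which users are represented as binary Like vectors and a simple linear model is fitted to predict a trait score. Everything below is simulated and purely illustrative.

```python
import numpy as np

# Illustrative sketch: users as binary Like vectors, and a simple linear model mapping
# Likes to an extraversion score. All data here are simulated; real Like-based
# personality prediction uses far richer data and more sophisticated models.
rng = np.random.default_rng(0)

n_users, n_pages = 500, 40
likes = rng.integers(0, 2, size=(n_users, n_pages))              # 1 = user Liked the page
page_weights = rng.normal(0, 1, size=n_pages)                    # how each page relates to the trait
extraversion = likes @ page_weights + rng.normal(0, 1, n_users)  # simulated trait scores

# Fit a least-squares model and check how well Likes recover the trait.
fitted, *_ = np.linalg.lstsq(likes, extraversion, rcond=None)
predicted = likes @ fitted
r = np.corrcoef(predicted, extraversion)[0, 1]
print(f"Correlation between Like-based prediction and simulated extraversion: {r:.2f}")
```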

Using our Facebook Likes to assess personality, the academics ran three experiments, all focused on identifying whether messages that fit Facebook users’ personalities would be more likely to convince them to buy a product.

In the first study, they created two versions of a beauty product advert: one they believed would be ideal for extraverted users and another that was designed for introverted users. The extraverted adverts would use messages that appeal to an outgoing and sociable nature such as “Dance like no one’s watching (but they totally are)”. In contrast, the introverted adverts would lean toward messages targeting a quieter and more withdrawn demeanor, such as “Beauty doesn’t have to shout”.

Their advertising campaign reached over 3 million Facebook users. When the advert matched a user’s personal level of extraversion, the researchers found that the user was 50% more likely to buy the product than when the advert was mismatched. So personality-based targeting allows adverts to connect with users on a deeper level, and is more likely to sway them toward clicking the buy button.
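
To be clear about what “50% more likely” means here, it is a ratio of conversion rates rather than a difference in percentage points; a minimal sketch with hypothetical counts:

```python
# What "50% more likely to buy" means as a ratio of conversion rates.
# The impression and purchase counts below are hypothetical, not the study's data.
matched = {"impressions": 100_000, "purchases": 45}
mismatched = {"impressions": 100_000, "purchases": 30}

matched_rate = matched["purchases"] / matched["impressions"]
mismatched_rate = mismatched["purchases"] / mismatched["impressions"]
lift = matched_rate / mismatched_rate - 1

print(f"Matched conversion rate:    {matched_rate:.5f}")
print(f"Mismatched conversion rate: {mismatched_rate:.5f}")
print(f"Lift from matching the advert to personality: {lift:.0%}")  # 50% with these example counts
```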

Photo by Joshua Earle on Unsplash

The result above was more than just a lucky shot. The researchers ran a second experiment, this time with an advert for a crossword app, which was tailored in its messaging to people who were either high or low in openness. The high-openness messages appealed to users’ curiosity and imagination (e.g. “Aristoteles? The Seychelles? Unleash your creativity and challenge your imagination with an unlimited number of crossword puzzles!”). The low-openness messages instead appealed to tradition and familiarity (e.g. “Settle in with an all-time favorite! The crossword puzzle that has challenged players for generations.”).

Once again, after reaching over 84,000 users on Facebook and Instagram, the adverts that fit a user’s personality were over 30% more likely to convince the user to install the app than conflicting adverts. However, this time, the effect was primarily driven by those people who were low in openness. The high-openness people seemed to care less about which advert they saw, and were equally likely to install for both messages.

In a final third experiment, they decided to put their theory to a direct test by adjusting an existing company’s advert messaging in line with user personality and examining whether it improved user interactions and conversion rates. The company in question was trying to sell a bubble shooter game and they usually targeted audiences who had downloaded similar games, using their standard advert: “Ready? FIRE! Grab the latest puzzle shooter now! Intense action and brain-bending puzzles!”. The researchers analyzed the personalities of the target audience and learned that they were highly introverted. So they created an advert that instead probed those users with a less excited and outgoing message: “Phew! Hard day? How about a puzzle to wind down with?”.

Over half a million Facebook users saw the adverts, and the researchers replicated their previous results by showing that the new personality-adjusted advert attracted more clicks, app installs, and a significantly better conversion rate than the standard advert.

* * *

Persuasion convinces people to change their behavior, but there are many variables that determine exactly how persuasive a message is. The more of those variables that an organization can get their hands on, the more intuitively and effectively they can speak to us. I’ve previously written about how well Facebook knows us, but I wanted to give the issue of targeted marketing some more comprehensive airtime, especially because adverts are rapidly moving in the direction of increasing personalization.

We all have different hobbies, interests, and temperaments, so we are attracted to different features of the world: some of us are drawn more toward a quiet night with a book while others are drawn more toward a loud and crowded party. We are also more easily convinced by people who have similar characteristics to us: when an acquaintance shares our age group and our interests, we find them more relatable, and we allow them more room and opportunity to persuade us. This all feeds into the power of personally-tailored adverts and messages.

Clearly, a personalized approach to content curation is influential, but perhaps it is also a little divisive. Most of us bury ourselves in social bubbles and curated news feeds, as we interact exclusively with our personal interests or networks. The deeper we root into social networks and targeted content, the more resistant our bubbles become. It takes effort and willpower to break out of a bubble, because we don’t particularly want to do it; we are engaged by personalization precisely because it gives us so much of what we want, and that’s why it’s so profitable for businesses.

However, breaking free occasionally and exploring the world beyond our bubble is likely to be an adventure we won’t regret. Whether it’s uncomfortable political commentary, new stimulating hobbies, or adventurous artistic tastes, venturing over to the other side of the wall gives us a much-needed refreshing escape from our tightening leashes.

Want Your Friends to Wash Their Hands? Try This Subconscious Nudge

Photo by Kelly Sikkema on Unsplash

The conscious and intelligent manipulation of the organized habits and opinions of the masses is an important element in democratic society… Vast numbers of human beings must cooperate in this manner if they are to live together as a smoothly functioning society.

— Edward Bernays

Behavioral “nudging” is taking over the world. Businesses, nonprofits, and governments are sprouting entire departments that design psychological tricks to change people’s behavior. When we need people to quit smoking, drive safely, or use their new smartphone properly, it’s often not enough to spend millions on new adverts or include an instruction manual. We’ve been telling people to stop smoking and eat healthily for decades, but their existing habits make it too difficult to simply follow instructions. We are all too tired, too busy, and too lacking in willpower to do all the things we are supposed to do.

Here enters the world of behavioral science. Rather than repeatedly telling brick walls to do better, we can use research from experimental psychology to design interventions that actually work. For example, we know that laziness is common, and that even when we are aware of a good idea, we still often don’t get around to doing it.

Consider the problem of organ donation. Many of us believe that donating our organs when we die is a great idea: we will save lives and contribute to scientific progress by giving away body parts when we are too dead to use them. And yet, too many of us still haven’t opted in for the program. It was only a couple of years ago, while I lived in the UK, that I finally found the opportunity while renewing my driver’s license. A compulsory message asked me (I am paraphrasing), “Hey, we’re going to donate your organs when you die. Is that cool with you?”. Of course it was cool with me, and I’m glad they finally asked.

This kind of opt-out rather than opt-in approach is useful because it gets around our lazier instincts. Countries with opt-out rather than opt-in programs end up with a greater number of liver and kidney transplants. If a simple technique like this can nudge us toward donating our cherished organs, then there are likely to be many other amazing outcomes it can achieve.

A recent study tackled the issue of hand-washing. Despite the title of this article, this is a serious problem. When medical staff or workers in other infection-ridden industries forget to wash their hands, people can die. Stickers on the wall that say “Please wash your hands” just don’t work well enough. So what else can we do?

Perhaps “the decoy effect” could help. This is a strategy frequently used in marketing, where a decoy product, which a business knows nobody will ever buy, can enhance the allure of another more realistic product. For example, imagine a magazine subscription with two options: an online-only option that costs $59 for a year, and an option offering both online and print versions for a total of $125 a year. As Dan Ariely explains in his book, with only these two options available, the large majority of subscribers choose the cheaper online-only version. But introduce a third nonsense option next to the other two, which offers a print-only version for $125, and suddenly preferences shift entirely. Now, an even larger majority of subscribers choose the expensive print-plus-online option rather than the cheaper online-only version. And that’s purely because the print-only decoy made the identically-priced print-plus-online option look so much better. Although nobody picks the decoy itself, its presence makes them feel “woah, the print-plus-online option is such a great deal because it’s the same price as print-only”. So their preference shifts from the cheapest online-only option toward the more expensive “good deal”.

A team of researchers from the US and China tested whether they could use the decoy effect to encourage food-factory workers to wash their hands more often. They went into a factory where workers were supplied with a sanitizer spray on their work desks, and placed an additional sanitizer squeeze bottle next to the spray on half of the desks. This was the decoy: it was just as hygienic as the spray, but it was less convenient to use because it required more effort to turn the squeeze bottle over and apply the hand sanitizer. So in effect, rather like the magazine subscription example I gave earlier, workers had three options: don’t clean at all (convenient but unhygienic), clean with the spray bottle (somewhat inconvenient but hygienic), or clean with the squeeze bottle (very inconvenient but hygienic).

Among the group of workers who saw no change on their desks and had access to only their normal spray bottle (let’s call them the control group), around 70% passed sanitary requirements in a test. But for the group of workers with the additional decoy squeeze bottle on their desk, pass rates were at around 90%. And that was because they increased their usage of the original spray bottle, which all workers, including those in the control group, had continuous access to. In a second experiment that replaced the squeeze bottle with an even less convenient decoy for workers, namely a sanitizing basin within which workers had to soak their hands for 30 seconds, results were even stronger. Across the 20 days of the intervention, sanitary test pass rates increased to an average of 98%, thanks to the convenient charm of the ordinary spray bottle sitting on the desk.

* * *

We are sorest bent and troubled by invisible hands.

— Friedrich Nietzsche

Many people are understandably concerned about a world of nudges. Isn’t it psychological manipulation and possibly even brainwashing? If governments and businesses can use it against us, for bad rather than for good, are we in trouble?

If it makes you feel any better, you should know that organizations have been nudging you, and you have been nudging other people, for a long time. It just hasn’t previously had a consistent formal name. For decades, advertisers have been pushing your buttons and pulling your strings with images of attractive famous people and overhyped products. Retailers have been arranging their products in ways that make you more likely to pick them up. And we’ve all been adjusting our language and behavior to enhance our appearance in front of other people. We are built to convince others to like us and help us, and we are also built to respond predictably to the nice or nasty actions of everybody else. Whether we talk about angry mobs on Twitter or typical everyday conversations with friends, behavior is predictable enough that the world makes sense.

If nudging did not work at all, we would be living in a troubling and messy world. Life is more pleasant when we understand what’s going on, and people are behaving as we expect. We generally avoid uncertainty because it can be dangerous. A natural consequence of this healthy predictable mindset is that other people can manipulate it. Casinos make us gamble more, businesses sell us damaging products like cigarettes, and social pressure pushes us into situations we’d rather avoid. But we can’t forget the countless good things: health services can encourage us to prolong our lives, digital products can simplify our lives, and society can urge us to conform to positive social values. As long as we keep our wits about us, we can maintain some control over which influences we choose to allow into our own life, and which pressures we choose to reject.

There will always be external forces in our lives, outside our awareness, that push our behavior in particular directions. It simply comes with the job description of being human. With some additional attention to our motivations and reasoning each time we make an important decision, we can give ourselves the best chance of avoiding harmful behaviors and maximizing the actions that are good for us. We don’t need to worry about manipulation when it is in our interest and helping to increase our health and happiness. If you really want your pals to sanitize before leaving your bathroom, just put a particularly unpleasant soap dispenser right next to the normal soap.

_____________________________________________________________________________________________

Since writing this article, I have come across an important note from an editor at Psychological Science, the journal that published the original food factory study. You can read his comments here. After he discusses an investigation into the data from the research team, his conclusion states: “These considerations undermine confidence in these data. But, in my opinion, they do not constitute clear evidence of fraud. I also note that Li and Sun [authors of the work] cooperated very helpfully in the investigation of this case.”

Love Flicks Your Brain’s Commitment Switch

Photo by Evan Kirby on Unsplash

Love is a smoke raised with the fume of sighs

— William Shakespeare

Romantic love is a powerful social force, but you don’t need me to tell you that. Many of our most dramatic memories are likely to come from interactions with people we adore. When we profess our love but the feeling isn’t reciprocated, it can leave us feeling lonely, embarrassed, and even depressed. When love is requited, it is almost the mirror image: elation, motivation, and boosted self-worth. These experiences have inspired some of the greatest literature and most popular entertainment in history, from Shakespearean tragedies to Witherspoonian romantic comedies. How do we fall in love in the first place, and why is it so breathtaking?

Let’s start in the brain. Are there any patterns of brain activity that predict whether we will like someone when we meet them? Researchers tested this question by putting participants in a brain scanner and analyzing their brain activity while they looked at photographs of prospective romantic partners. After the brain scanning, participants actually got to meet the people from the photographs at a speed-dating event. This gave the researchers a great opportunity to examine whether the brain activity they measured in response to the photographs predicted decision-making during dating.

The researchers identified two areas of the brain that were active while participants weighed up the photos, and that predicted their later choices. The first was the paracingulate cortex — an area on the medial surface of the brain — which coded for judgments of physical attractiveness. Beauty judgments were fairly consistent across participants. The second relevant brain area was the rostromedial prefrontal cortex — another more frontal medial area — which instead coded for judgments about perceived personality and likability, preferences that varied between participants.

The medial frontal surface of our brain therefore computes several bits of information about people who could become future romantic partners, including general information that we all tend to agree on, and information that is more specific to our personal preferences. Love at first sight may depend on the levels of activity in your paracingulate and rostromedial frontal cortices.

We understand some of what the brain is doing when we look at prospective partners, but what exactly is it that makes two people compatible and successful in building a relationship? A few obvious possibilities may spring to mind — a similar sense of humor, shared experiences, matching personalities, etc. But the answer is much harder to pin down than you might expect, because these variables are not great at predicting relationship outcomes.

In 2017, researchers tried to uncover what makes a couple compatible, but learned that it is incredibly difficult to predict romantic desire based on personal attributes before two people meet. The researchers assessed over 100 traits and characteristics for a group of undergraduate students who would then attend a speed-dating event where they would interact with around 12 people. Although the traits could predict people’s general tendency to romantically desire other people, and to be desired by other people, they could not predict relationship outcomes for a specific couple. Your personality and attitudes may explain why people generally find you attractive, but they won’t explain why you’re particularly compatible with your current romantic partner.

After we’ve dated someone a few times, we face the prospect of falling in love with them. As intense romantic love develops over the first couple of months and years of a relationship, the brain shows some specific patterns of activation. When we look at a photo of our recently established romantic partner, reward and motivation areas of the brain boost their firing. Those areas include the ventral tegmental area and caudate nucleus, which are typically involved in releasing and utilizing the neurotransmitter dopamine, an important chemical within the brain’s reward systems.

Love is essentially a motivation function and differs from the feeling of sexual arousal; the neural networks underlying love and sex overlap to some degree, but they are also distinct in important ways. Our sex drive pushes us to look for new mates, while our love drive encourages us to stick with a specific partner and take care of important responsibilities like raising children.

The activity in some of these early-stage love areas of the brain can actually predict long-term relationship outcomes. One group of researchers contacted their participants from a previous experiment on budding relationships, and asked them to return to the lab 40 months later. Half of them were still with their previous partners while the other half had separated. The researchers found that the people who showed more activation in their caudate nucleus during the early experiment were more likely to remain with their partners 40 months later and more likely to report greater relationship commitment. They found the opposite pattern in a brain structure called the nucleus accumbens: deactivation was associated with better relationship outcomes. Low nucleus accumbens activity in the presence of temptation has previously been linked to better self-control, suggesting that perhaps those with a better ability to control their impulses are more likely to remain in committed relationships over the long term.

Long-term love has some additional components in the brain. It recruits some of the same dopaminergic areas stimulated by early romantic love, but it also recruits areas involved in maternal love such as the globus pallidus and substantia nigra, which are structures packed with oxytocin hormone receptors. So, in a sense, we view our spouse as a disturbing mix of parent and lover. Oxytocin is a hormone that facilitates social bonding in humans and other species, helping us to strengthen attachments with family and romantic partners. Among early lovers, particular variants of an oxytocin receptor gene, specifically variants that are associated with social disturbances, can predict poor empathic communication. Many of us have probably experienced firsthand how a lack of empathy can be harmful to a relationship.

* * *

Love is anterior to life, 
Posterior to death, 
Initial of creation, and 
The exponent of breath.

— Emily Dickinson

We need love to be truly happy, and the online dating revolution has opened up a whole new world of opportunities for meeting prospective romantic partners. This increased opportunity is a blessing for many, especially those who have typically struggled to meet new people. But it may be worth keeping an eye on the possible costs too. Online interactions with strangers lack many of the rich social signals and qualities associated with meeting people in person. During the first interaction, we can’t look into their eyes and assess them based on the subtle way they physically speak or act in front of us. By treating a static, idealized photo as the first point of contact, we throw out all those years of evolution that fine-tuned us to reject unpleasant people and gravitate toward compatible ones.

In essence, we may be swiping away the love of our life and arranging dates with people who would never have passed our initial sensory checks in the physical world. We may also be shifting our priorities toward a shallower mindset that isn’t necessarily suited to building the healthiest long-term relationships. In the traditional face-to-face social world, the person we swiped away in an app might have had a second chance to impress us with their other behavioral qualities. To be clear, I don’t consider myself a dating app skeptic; I’m actually an optimist but also a worrier who tries to look at both sides of every coin.

Love will continue to be the biggest priority in our life, whether it’s toward our families, partners, or children. It gives us a reason to live and motivates us to be a good person that people want to associate with. Although unreciprocated love can make us feel as though we never want to love again, our persistence in finding the right person pushes us into environments that develop and enhance our character. As we move closer to finding our lifetime companion, we become better people, and ultimately tie ourselves to a wonderful person who is willing to accept our remaining flaws.

Psychopaths Lack Human Social Reflexes

Photo by Ashley Jurius on Unsplash

Psychopaths have a strange allure about them. We enjoy reading their stories, talking about them, and watching them in Hollywood movies. It’s almost as though we are fascinated by their utter lack of care for our feelings. Mystery drives curiosity, even when the mystery is not particularly good for us. The simple fact that they lack empathy — a typical human trait that the rest of us automatically experience — makes us want to learn more about them.

Psychopathy (or the related diagnosis of antisocial personality disorder) is characterized by selfishness, callousness, impulsivity, and a lack of empathy. Less than 5% of people truly meet the criteria to be properly diagnosed as psychopaths. But as with many other psychological characteristics, psychopathic traits sit along a continuum, and a person is diagnosed when their symptoms exceed some predefined threshold. So even if we don’t personally know a true psychopath, most of us probably know someone who is a bigger psychopath than we are.

* * *

Empathy is a pivotal feature for telling apart a psychopath from a typical person, but we can study empathy from several perspectives. The first relates to what scientists call “theory of mind”, which refers to our ability to reliably attribute mental states to other people. It allows us to read their behaviors, intentions, and beliefs. Without it, we struggle to understand what people are doing and what they are likely to do next.

Psychopaths do not seem to have much difficulty with theory of mind. Depending on how you look at it, their normal function in this domain may sound surprising. If theory of mind is a critical feature of empathy, then you would predict that selfish psychopaths lack that capacity for reading minds; after all, how can we understand other people’s beliefs and motivations without empathy? 

But what if you consider theory of mind as just one facet of empathy rather than its critical core? Then, it’s less surprising to know that psychopaths are good at reading people. In fact, much of their callous manipulation of others probably depends on an intact theory of mind. People are much harder to manipulate if you cannot understand and predict their thoughts and behaviors.

The truth is that psychopaths understand other people’s feelings perfectly well but simply don’t care much about them. Therein lies another facet of empathy. If our conscious comprehension of other people’s mental states is one half of the story, then perhaps the other half is our unconscious reaction to those perceptions. How deeply do other people’s intentions and experiences — their thoughts, pains, and pleasures — affect us?

Photo by Maxime Roedel on Unsplash

In early 2018, one study investigated this unconscious side of empathy in the context of psychopathy. The researchers recruited over 100 male convicts in a high-security prison and ran through a standard psychopathy checklist with them. The prisoners then completed a computerized task that featured a human character standing in a room; the right and left walls of that room had a number of red dots painted on them, and the computer character faced one of those walls.

The task itself was straightforward: participants had to report either how many dots they could see painted on the walls from their own perspective, or how many dots the computer character could see from his animated perspective.

Consistent with previous research using this type of task, the researchers first found that participants were faster in judging how many dots they could see themselves versus how many dots the computer character could see. Even with our decent theory of mind ability, it’s still easier to judge the world from our own perspective than somebody else’s perspective.

Participants were also faster to judge the number of dots when their own perspective matched the perspective of the computer character. If both they and the character could see the same number of dots, they responded quickly. If they could see a different number of dots to the character, then it would take them longer to indicate the correct number of dots because of the interference from the inconsistent perspectives.

The interference patterns overall differed depending on whether the participants judged their own perspective or the character’s perspective at the time. We can consider two types of interference: altercentric interference is when the number of dots that the computer character sees disrupts our own assessment of how many dots we can see; egocentric interference is when our own perspective interferes with how we count the other person’s dots. In other words, you could argue that egocentric interference is an automatic sign of human selfishness, while altercentric interference is an automatic sign of human empathy.

On the whole, participants experienced both types of interference, consistent with the behavior of non-criminal populations who have previously completed the task. However, the researchers found a more unique pattern when they analyzed the data depending on each prisoner’s level of psychopathy. 

The more serious psychopaths were less affected by altercentric interference, that is, they were less affected by the perspective of the other person when reporting how many dots they could see themselves. There was no such effect for egocentric interference. So psychopaths were perfectly unencumbered by the other person’s perspective when judging their own perspective, but they could not block out their own perspective when trying to judge how many dots the other person could see. They lacked the automatic sign of human empathy that characterizes a non-psychopath, but showed no such deficit in automatic human selfishness.

Unsurprisingly, a participant’s level of psychopathy as measured by the researchers’ checklist predicted the number of assault charges on their criminal record. But here’s a more interesting question: could their performance in the computer task also predict their real-world behavior? The answer is yes. A lower score on altercentric interference (automatic empathy), combined with a higher psychopathy score, predicted a larger number of assault charges. Their lack of an automatic empathy reflex did not just improve their ability to selfishly count dots in the computer task; it also made them more likely to violently assault other people.

* * *

We will all continue to be intrigued by a psychopath’s total disregard for our welfare. Our eyes and wallets remain open for terrifying stories about Mansons, Bundys, Geins, and Dahmers, but it’s also worth studying the less dramatic details of their psychological profiles if we want to understand their behavior. We may never intuitively relate to their severe callousness, but we can at least begin to explain it.

Psychopaths can read our minds and behavior as well as anybody, but when you look for the more unconscious and uncontrollable signs of empathy, you begin to see their depravity through clearer glass. That’s when they can no longer fool us. Only the very best actors can control their unconscious impulses well enough to mask their ordinary nature. The rest of us struggle to hide our thrills, desires, and automatic empathy. 

When we notice that psychopathic killers lack the immediate unconscious reflex of absorbing other people’s perspectives, their cold blood becomes less mysterious, even if it becomes no warmer. 

It Hurts When People Stare Because Their Eyes Are Like Force Beams

Photo by Joshua Davis on Unsplash

Just over a month ago, I was hiking through a mountain rainforest in Uganda in search of a wild gorilla family. Upon finding them, my knowledgeable guide’s first piece of advice before our careful approach toward the animals was simple: do not stare into their eyes.

The logic of this advice was straightforward in the context of the gorillas: they can perceive a stare-off as a threat. And of course, when a 350-pound silverback perceives a threat, the human on the receiving end is likely to be the significant underdog in any ensuing fight.

I didn’t think much about it at the time, but in hindsight, the threatening nature of stares applies similarly to humans. I’ve witnessed fights erupt between angry men following a question along the lines of “what are you looking at?”, when one deems that the other has maintained eye contact for too long. And I can only imagine the tiresome frustration that many women experience from the overzealous and not-so-subtle sexualizing stares of certain men while walking down the street.

But why can a look have such a powerful effect? There’s no physical contact involved, so you might think it should be easy to ignore a stare, just as it’s easy to ignore someone’s cough or sneeze. We can think of the question “why?” in two parts.

The first part of the question is to ask what it means when a person stares into our eyes. Information matters most to us when it’s personally relevant. We can ignore a nearby conversation until somebody mentions our name, at which point we are suddenly all ears. Hand movements also mean little until we notice somebody pointing directly at us. Whenever others behave in a way that relates to our sense of identity, we desperately want to find out what is going on. What are they thinking about me? Why are they pointing at me? Why are they looking at me? Any thought or intention that the offending person harbors at that moment could be a sign of danger, and so we are built to pay attention and respond accordingly.

The second part of the question is the trickier one. What is the mechanism by which a look can exert its powerful effects? Even when we know it makes sense to ignore somebody looking at us on public transport, we can’t help but occasionally glance back to see if they’re still staring. A recent study out of Princeton University tackled exactly this question, and their results give new meaning to the phrase “shooting daggers”.

The researchers showed participants an image of a paper tube standing on a table and asked them to indicate the critical angle at which they thought the paper tube would succumb to gravity and fall over if tilted. Next to the tube was an image of a human character’s face, and its eyes could either be looking directly at the tube or be covered with a blindfold.

To understand whether participants treated the tube differently depending on whether someone stared at it, the researchers compared the tilt angle judgments for a tube falling toward the direction of the face and a tube falling away from the face. If participants felt any “push” coming from the eyes, then they would judge a tube to fall sooner when tilting away from the eyes, in line with the push.

Participants consistently estimated a smaller angle when the tube fell backwards away from the face; in other words, their estimates suggested the tube would fall more easily when tilted in the direction to which the character was looking. This difference did not exist when the face in the image was blindfolded. People attributed some sense of physical force to the visual gaze of the character in the picture, with an intensity equivalent to a light puff of air.

The experiment was set up in a way that made it difficult for participants to notice any patterns in their responses. So, without being aware of it, they were somehow intuitively and implicitly believing that staring directly at a falling paper tube would give it an extra push.

In a repeat experiment, the researchers replaced the blindfolded character images to see whether they could replicate the effect in different conditions. They instead used images with a non-blindfolded character who looked in entirely the opposite direction from the tube. Participants responded in the same way: a direct look at the tube was judged to project an invisible physical force toward it, and the force was absent when the character looked the other way.

In a final experiment, the researchers tried another interesting manipulation. Using exactly the same images, they told one group of participants that the character was looking directly at the tube, and told another group of participants that the character was actually looking past the tube at the wall on the opposite side of the table. They wanted to test whether participants were really attributing a force to the inferred focus of the eyes rather than, for example, just the direction of the head.

The first group of participants who knew that the eyes were focused on the tube responded in the same way as the participants in the previous experiments: they reported that it would take a smaller tilt for the tube to fall over backwards (away from the eyes) than forwards (toward the eyes) when that character was staring at the tube. But the second group of participants, who believed that the character was looking at the wall rather than the tube, showed no such effect. Looking at the wall was treated in the same way as being entirely blindfolded. The eyes could only exert their imaginary force on the tube when people believed that the character was gazing directly at it.

Participants were expressing an unconscious bias in their perceptions of how eyes work. When specifically questioned on their beliefs, only around 5% of people actually believed that the eyes could exert any direct physical force on an external object, and none of them were aware of how the character in the image was affecting their reactions to the tube tilt. And yet, their actual judgments during the experiment showed that they couldn’t help but feel an invisible force beaming out of the character’s eyes.

Photo by Jared Rice on Unsplash

The idea that beams are emitted from the eyes during vision, often referred to as “extramission”, has been historically and culturally pervasive, dating all the way back to Greek philosophers around 400 BCE. This primitive intuition may explain the physical eye-force that participants perceived during the experiments above, and it may more generally explain the overwhelming power and influence that we can sense from extended eye contact.

Gaze is particularly powerful because it is one of our most reliable social signals for inferring attention in other people. Even newborns are sensitive to the direction in which other people are looking, because we are born with an instinct to use gaze signals in understanding the world around us. When we talk to someone, and we notice their eyes shift and fixate on something behind us, we cannot help but turn around to check it out ourselves. We assume that whatever they are staring at must be worthy of our own attention too.

Context is of course all-important when it comes to eye contact. The same loving look from our romantic partner can appear predatory from a stranger. We might experience a physical beam from both of them, but one hits us with a feeling of love while the other hits us with a feeling of discomfort. But unless we’re associating with Superman or X-Men’s Cyclops, both beams are invented within our skulls.

Next time we notice an obnoxious person staring incessantly at us with no hint of polite subtlety, we can reflect on why their line of sight is causing us to feel frustrated. Even in a completely safe environment, a stranger’s eyes can act like a faint push in the chest. We feel an illusory physical force breaking into our personal space and we perceive it as a social violation. But when we expect no real harmful threat, we can try to rationalize our reactions rather than start a fight or make ourselves more angry than we need to be. No matter how physical their staring feels, it’s worth remembering an obvious but perhaps camouflaged fact: the look itself is making no direct contact with our body. As long as others keep their distance, there will always be only light and air between us.

Even Honest People Want a Partner in Crime

“The time to guard against corruption and tyranny is before they shall have gotten hold of us.”

— Thomas Jefferson

Corruption may be an in-built feature of our brains. Some people are more corrupt than others, but when given the opportunity, many of us will choose an action that benefits us at a cost to somebody else. Athletes dope, bankers mislead, lovers cheat, and I’m sure the rest of us can recall a time we acted unfairly. I recently wrote about altruism and the pressures that produce it, but now it’s time to consider the flip side: how deep do the roots of corruption reach into our psyche?

Somebody somewhere is likely to be hurt by a corrupt action, so we can never doubt that honesty is the most ethical way forward. At the same time, corruption often requires instincts and behaviors that we view more fondly. As we have seen in many major news scandals over the past few years, when multiple people are involved in fraud or bribery, it takes a great deal of cooperation and mutual trust for them to pull it off.

So perhaps a social pressure to collaborate could lead us to corruption. In a study from June 2018, pairs of participants took turns to privately roll a die and let each other know the number they had rolled. The experiment was set up so that they would receive a monetary payoff when their numbers matched, and the payoff would be greater for rolling higher numbers. So there was an incentive for lying, and that incentive made a heck of a difference to participants’ behaviors.

Compared to what you’d expect from complete honesty, the pairs were 489% more likely to report rolling the same numbers. The responsibility of reporting a matching number fell to Player 2, because they had to react to the number called by Player 1. But Player 1 was certainly not being honest either, because they were inflating the numbers they reported to earn more money. The mean number you would expect to roll over many attempts is 3.5, i.e. (1 + 2 + 3 + 4 + 5 + 6) / 6. Player 1’s mean was significantly higher than this at 5.02. So in contributing to the corrupt cooperation, Player 1 was calling higher numbers, and Player 2 was calling more matches. In other words, both players were most certainly cheating.
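To get a feel for how far those numbers sit from honest play, here is a minimal simulation sketch (my own illustration in Python, not code or data from the study) of a pair who report their rolls truthfully:

```python
import random

def honest_session(rounds=20):
    """Simulate one pair who report every die roll truthfully."""
    matches = 0
    player1_total = 0
    for _ in range(rounds):
        roll1 = random.randint(1, 6)  # Player 1 rolls and reports honestly
        roll2 = random.randint(1, 6)  # Player 2 rolls and reports honestly
        matches += (roll1 == roll2)
        player1_total += roll1
    return matches / rounds, player1_total / rounds

# Average over many simulated honest pairs.
results = [honest_session() for _ in range(10_000)]
avg_match_rate = sum(r[0] for r in results) / len(results)
avg_player1_mean = sum(r[1] for r in results) / len(results)
print(avg_match_rate)    # ~0.167, i.e. matches on roughly 1 roll in 6
print(avg_player1_mean)  # ~3.5, nowhere near the observed 5.02
```

Under honest play, matches should occur on roughly one roll in six and Player 1’s average report should hover around 3.5, which is why match rates several times that baseline and a mean report above 5 are so hard to explain without coordinated lying.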

In a way, there is something quite sweet about this joint and blatant cheating. The participants did not know each other or speak to each other; their only communication was through the numbers they were rolling and reporting on a computer while they sat in their individual cubicles. And yet, in the context of this game for which they were arbitrarily paired together, they began to cooperate successfully. Humans are social creatures, and cooperation is in our DNA, even when it means cheating together.

Another more recent academic paper took this experiment to the next level. The researchers kept the die-rolling task but introduced an element of choice in which members of each participant pair could choose to change their partner. This allowed the researchers to check whether corrupt participants were more likely to look for a corrupt partner in crime in order to maximize their earnings.

The most stable pairs of participants — the ones who were least likely to switch partners — were the ones composed of two liars: Player 1 asked to switch only 1.4% of the time while Player 2 asked to switch 5.6% of the time. In contrast, when paired with an honest Player 2, a dishonest Player 1 would request to switch partners a whopping 40% of the time. Similarly, a dishonest Player 2 would request to switch almost 50% of the time when paired with an honest Player 1.

But what did the honest players do? An honest Player 2 switched partners at approximately the same rate for honest and dishonest partners. But an honest Player 1 showed a rather different pattern: they were far more likely to request a switch when paired with an honest partner than with a dishonest partner. In fact, they enjoyed playing with a corrupt Player 2 so much that the more matches a dishonest partner called, the more likely Player 1 was to stick with them.

The difference in decisions between an honest Player 1 and an honest Player 2 is likely to be driven by payoff differences. Player 1 has far more money to gain from a dishonest partner than Player 2, because Player 2 can only react to the number being put on the table. So the temptation to keep a corrupt partner and get rid of an honest partner is simply greater for Player 1. But regardless of the total extent of the corruption, honest players overall did not search for honest partners in the same way that dishonest players searched for dishonest partners.

The authors of the research paper gave the honest but corruptible players the label of “ethical free riders”. Those players have a moral compass strong enough to keep them from lying themselves, but not strong enough to stop them from profiting from a dishonest partner’s corruption. They enjoy the benefits that come from the partner’s lies too much. Perhaps they more closely resemble hypocrites than liars, another sad quality to which all of us are susceptible.

* * *

“Time indeed changes manners and notions, and so far we must expect institutions to bend to them. But time produces also corruption of principles, and against this it is the duty of good citizens to be ever on the watch.”

— Thomas Jefferson

The results of the studies above probably support what many of us already suspected. Angels are rare on Earth, and we all surrender to minor sins occasionally. In fact, temptations to cheat can sometimes come from the more noble parts of our personalities. We are driven to cooperate and we are motivated to make progress in life, and these pressures can occasionally shift our scales toward corruption rather than honesty.

At the same time, basic instincts, thoughts, and interventions can push us away from corruption too. As one of my previous articles explains, we are actually quite good at detecting corruption in other people’s faces. And generally speaking, we may be less convinced by temptations when fully aware of the costs and consequences of our behavior, like who we might be hurting and how we might get caught.

Often, better information is sufficient to help us in making a more ethical decision. A research trial published in June 2018 examined whether basic text message communications about government budget irregularities would sway voting behavior during the 2016 Ugandan district elections. When messages conveyed more budget irregularities than expected, recipients reported voting for the incumbent officials less often. And when fewer irregularities were reported, votes for incumbents increased.

If text messages can help to curb corruption, then there must be plenty more we can do. Of course, the same channels also raise concerns about biased news and outright dishonest information circulating around our online networks. It’s easy for us to get lost in media bubbles and political partisanship when we are trying to vote in ways that truly benefit us.

Elements of corruption will likely remain in our politics, our media, and our ordinary life for a long time yet. In many parts of the world, we have succeeded in crafting less corrupt systems over time, and we will continue to do so. We still have much to learn about how corruption emerges and develops. For example, the slippery slope metaphor that suggests corruption gradually gets more extreme as we engage in increasingly dishonest behaviors may not tell the full story. Some evidence suggests exactly the opposite: we are often more likely to impulsively corrupt our behavior when we come across a sudden opportunity.

While acknowledging our immense progress in building safer and more honorable societies to live in, we should take nothing for granted. We are still only human after all. Plenty can go wrong, and we should make every effort to stamp out corruption in our own behavior and disincentivize it in others. Whether we are being honest or dishonest, we are all tempted by the thought of jumping on a corrupt bandwagon when the rewards are staring us in the face. But we would be right to remain optimistic about the better angels of our nature.

For Better Teamwork, Pump the Brakes on Communication

“In solitude the mind gains strength and learns to lean upon itself.”

— Laurence Sterne

Constant communication may be limiting your productivity. In the modern world of cell phones and the internet, communication is easier than ever. We’ve been reaping the advantages of this for years. We communicate with people on the other side of the world, stay in touch with colleagues outside the office, and immediately share work outputs with anyone who asks. But we have failed to spot one possible disadvantage this whole time: seamless team communication may prevent individuals from contributing their full potential.

We have long suspected that isolated decision-making may be valuable. The classic “wisdom of crowds” effect has shown us that a total lack of communication between individuals can be a good thing, for example, when guessing the quantity of jelly beans in a large jar. If everyone throws their own independent estimate into the ring, you can take the average of all those estimates and reach a surprisingly accurate answer. This works primarily because extreme guesses can fall on either side of the truth: some people will guess too low, others too high, but the average will balance out those errors and approach the true number.
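As a toy illustration of that averaging effect (a minimal sketch of my own, with a made-up jar of 850 beans rather than data from any study):

```python
import random

TRUE_COUNT = 850  # hypothetical number of jelly beans in the jar

def independent_guess():
    """One person's estimate: noisy, anywhere from half to 1.5x the true count."""
    return TRUE_COUNT * random.uniform(0.5, 1.5)

guesses = [independent_guess() for _ in range(1_000)]
crowd_estimate = sum(guesses) / len(guesses)
print(round(crowd_estimate))  # the average typically lands close to 850
```

Individual guesses in this sketch can be off by hundreds, but because the errors are independent and fall symmetrically around the truth, they largely cancel out in the average.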

If people communicate in guessing the number of jelly beans, effects of social influence will tend to pull estimates in a particular direction. The most convincing member of the group may succeed in attracting people toward their point of view, even if they are dramatically wrong. With their estimate acting as the new anchor for other estimates, the average of all guesses can drift away from the true number. If history has taught us anything, it is that when leaders are wrong, they take others down with them.

However, in iterative tasks characterized by several rounds, social information can be helpful if it spreads among a decentralized group where each participant is equally connected to others. This is because the individuals with more accurate estimates tend to be the ones who attract other estimates toward them in successive rounds of the task. Instead of the loudest or most charismatic people becoming the anchor for other people’s estimates, the smartest people become the influencers.


* * *

Jelly beans are one thing, but when it comes to typical problem-solving tasks at work, it may be hard to believe that communication among team members could ever be a bad thing. But a recent experiment published in August 2018 presents exactly this conclusion.

The researchers, based in Massachusetts, developed a tricky problem-solving task to compare performance between different teams. Participants saw a basic map on a computer screen with a number of cities distributed across it, and had to find the optimal route around the map, visiting each of those cities only once and returning to their starting point. The task was complex enough to rule out the simplest solutions and strategies, so good performance depended on careful thinking and planning. On top of that, participants had less than a minute to attempt a solution in each of 17 rounds for a given problem. However, these multiple rounds meant that they had a chance to gradually improve their solutions and strategies over time for each problem.
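The task is essentially a small traveling-salesperson problem. As a rough sketch of what a “solution” and its quality look like (my own illustration, not the researchers’ software), scoring a candidate route just means summing the distances along the round trip:

```python
import math
import random

def tour_length(cities, order):
    """Total distance of a round trip visiting every city once and returning home."""
    total = 0.0
    for i, current in enumerate(order):
        x1, y1 = cities[current]
        x2, y2 = cities[order[(i + 1) % len(order)]]  # wrap back to the start
        total += math.hypot(x2 - x1, y2 - y1)
    return total

# A toy map of 25 randomly placed cities and one participant's shuffled guess.
cities = [(random.random(), random.random()) for _ in range(25)]
candidate = list(range(len(cities)))
random.shuffle(candidate)
print(tour_length(cities, candidate))  # lower is better; each round is a chance to improve it
```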

The participants did not know each other but were randomly assigned to complete the task in anonymous teams of three. Different teams had to stick to different communication rules. One type of team was allowed constant communication, which meant that team members could see each other’s previous solutions during every round of the task. Another type of team only had intermittent communication, in which they could see each other’s previous solutions only every three rounds. The final type of team was in fact not really a team at all, because its members were allowed no communication whatsoever; they had to complete the task without using any information from other people’s solutions.

So how did these different teams compare in solving problems? First, consistent with some previous evidence, the researchers found that people who did not communicate at all ended up finding the optimal solution more often than teams who constantly communicated (44% vs 33% of problems). The lack of communication led to greater diversity in thinking styles and responses, increasing the chances that at least one of the independent individuals would land on the optimal solution.

Photo by Max Langelott on Unsplash

In contrast, the teams who constantly communicated performed better than non-communicators when researchers calculated the quality of their average solutions. Communication allowed people with bad solutions to improve the quality of their decision-making by looking at the successes of their teammates, raising the average standard for the team overall. So at the cost of reduced creativity and diversity compared to the independent individuals, the communicators could better build on the best solutions among them in each consecutive round of the task.

But what about the intermittent communicators? In principle, they could achieve the best of both worlds or the worst of both worlds. The results showed that they found the optimal solution to 48% of the problems, matching the independent non-communicators in their performance, and beating the constant communicators. Interestingly, when the researchers turned their attention to average performance rather than total optimal solutions, intermittent communicators also matched the high-performing constant communicators, and beat the non-communicators.

So according to the evidence, the intermittent communicators enjoyed the benefits of both non-communicators and constant communicators. They could make use of the diversity associated with isolated work and the cooperative development associated with communication. Their biggest improvements came when seeing the solutions of their teammates after a period of disconnected decision-making.

* * *

“One can be instructed in society; one is inspired only in solitude.”

— Johann Wolfgang von Goethe

We may finally have an answer to the long debate about whether independent thinkers or highly interconnected teams make better decisions. The unexciting answer is that each of them has their strengths and downfalls. Independent thinkers can have flashes of ingenuity uncontaminated by the thoughts or interruptions of others, while communicative groups achieve a higher average standard by sharing the strengths of each person. The exciting answer is that we may be able to do something about this dichotomy: we can build teams characterized by intermittent rather than ongoing communication.

Isolation has been a useful tool throughout our history. Many historical geniuses have had their moments of insight and inspiration while working alone in their creative spaces. While they may have stood on the shoulders of giants, they didn’t have those giants standing beside them during their most critical moments of thinking and problem-solving. They found their foundation in the books that they read and theories they came across, and then gave themselves the necessary period of solitude to allow their most creative sparks to take hold and spread.

There may even be parallels in biological evolution. New species tend to emerge when old species are physically separated into multiple groups. Those isolated groups then evolve their own unique adaptations to their environments and gradually become different enough from the original species that they can no longer mate with each other. In other words, the isolation that leads to greater biological diversity over the long term may be analogous to the isolation that leads to greater decision-making diversity over the short term.

We have entered an era of hyperconnected communication in every aspect of our lives. We check our social network feeds and the news headlines every morning, we frequently video-chat with friends and colleagues, and we have access to all the existing knowledge in the world through a quick Google search. I remain appreciative of this extraordinary situation, but no longer sit with the illusion that there are no major costs. We may well be sacrificing elements of creativity and diversity that were previously so valuable for innovative progress.

With a little effort, it may be possible for us to break away from our communication rituals and stop corrupting some of our best ideas with the perspectives of others. But it may require a dose of internet withdrawal when we most need to be creative and alone with our thoughts. If we want to design a solution better than both a horse and a camel, we may need to temporarily suspend the committee.

Grit? It’s Not All Perseverance, You Need Passion Too

Photo by Jaco Pretorius on Unsplash

“In the realm of ideas everything depends on enthusiasm… in the real world all rests on perseverance.”

— Johann Wolfgang von Goethe

Is grit really one of life’s power-potions? Can it really determine whether we succeed or fail? Controversies are common in the world of psychology research, and grit is no exception. Recent conversations have questioned whether grit science is mostly hype. With the shadow of the hotly discussed replication crisis looming over researchers, news writers eagerly await the next topic to bash. The bashing is not necessarily unfair or misguided; much of the reporting on grit has been surprisingly sensible. But often, in the midst of the media storms, it’s easy for us to lose track of the original point.

More than 10 years ago, Angela Duckworth and her colleagues published a seminal paper all about the power of grit. They defined it as “perseverance and passion for long-term goals”. It’s important to revisit this definition and the original research, especially because somewhere along the line, many of us forgot the second fundamental principle within the concept of grit: passion.

In labeling new ideas, it is common for researchers to adopt an existing word that is similar to their imagined idea, before formalizing it as a distinct scientific concept. This helps scientists get a quicker intuitive grasp of the concept, but leaves us with two versions of the same word, each with a subtle but critically different meaning. One circulates in the scientific world, while the other circulates in the mainstream world, and occasionally they will uncomfortably butt heads.

Grit may be one of these stories. Its original usage as an English noun came before the 12th century, referring to gravel or sand. Toward the end of the 16th century, “gritty” was used as an adjective to describe a resemblance to small hard granules. Then, in the early 19th century, it entered American slang with a link to courage and persistence. We still use it in this same sense today, but at the start of this millennium, psychologists and researchers also began using it as a label for a character trait defined by passion in addition to persistence.

So therein lies the potential for conceptual conflict. When someone refers to grit, are they talking about perseverance or are they talking about perseverance plus passion? In a paper published in late 2018, one group of researchers suggests this simple conceptual confusion could be responsible for swaying the data on grit from “powerful” to “so-so”.

* * *

Before I jump into this recent paper, what exactly was so exciting about grit in the first place? In Duckworth’s original research, a person’s level of grit (perseverance and passion) predicted their success in life. That success included educational grades, retention in a military academy, and ranking in the National Spelling Bee. So over and above intelligence and other personality traits, grit played a critical role in the difference between progress and failure in several domains.

The question now is whether the research as a whole supports the idea of grit as a uniquely meaningful predictor of success, or whether it reveals a faux concept riding the coattails of other personality traits like pure perseverance or conscientiousness.

Photo by Cam Adams on Unsplash

With the original findings on grit out of the way, let’s return to the 2018 paper from research groups in New York and Frankfurt. The researchers argue that the bulk of science and commentary on grit either ignores its core pillar of passion or fails to adequately measure it. Frequently used measurement scales for grit equate passion with “consistency of interests”, which they explain is actually different from passion and more similar to perseverance. Imagine a writer with a consistent interest in proofreading and correcting typos in their text. That writer will not necessarily feel any passion or drive to complete that activity; they simply do it because they should.

So in running their own studies to test the effects of grit, the researchers combined the typical questions used to measure grit — in their mind perseverance — with more specific questions for measuring passion, asking for example whether people feel as though they lack sufficient passion in their everyday work.

With their revised assessment method in hand, they approached a tech company, and measured both perseverance and passion for over 400 employees. They found that employee job performance was best predicted by using both perseverance and passion in calculations. When an employee had little passion, high or low perseverance had little effect on their performance. But when their passion was high, perseverance did make a difference: high perseverance led to better performance than low perseverance.

The researchers then recruited 248 university students, collected their GPA scores, and assessed their perseverance and passion. In an additional twist, they also measured each student’s levels of immersion in their work by including questions about how well they could block out all other distractions while they studied.

The results replicated what the researchers found at the tech company: both perseverance and passion mattered when it came to performance. In addition, the passionate students showed a meaningful connection between perseverance and immersion. Perseverance improved performance partly through increasing students’ immersion while they studied, but only when they emotionally cared about their work.

In a final part of the project, the researchers combined and re-analyzed data from previous studies on grit and performance. After analyzing 127 studies, they first replicated the results of a preexisting meta-analysis by finding only a small benefit of perseverance on performance overall. But then, they additionally recruited independent judges to quantify the relevance of passion for participants in each of those studies. As an example, passion would be highly relevant in a study that recruited entrepreneurs starting their own companies, but would be less relevant in a study that recruited students taking compulsory tests.

Consistent with the results of the researchers’ own experiments, their new meta-analysis showed that the relationship between perseverance and performance was stronger in the context of high passion relative to low passion. Hard work and persistence pay off most when you are passionate about what you’re doing.


* * *


“Life is not easy for any of us. But what of that? We must have perseverance and above all confidence in ourselves. We must believe that we are gifted for something, and that this thing, at whatever cost, must be attained.”

— Marie Curie


Grit is more than perseverance, and it is also different from — although correlated with — what we might call self-control or self-discipline. We can persevere without having grit. And we can resist unhealthy temptations without having grit. Grit’s special predictive power in our victories comes from a combination of both perseverance and passion toward a particular long-term goal.

When we have the choice, we need to find paths that matter and mean something to us. If a task has no personal importance in our lives, we struggle to find any motivation or driving force to push us toward the best possible answers, even if we are capable of working hard. And if a task is personally important but we cannot trigger the gumption to work hard, we never make sufficient progress. To reliably improve performance, whether at school or at work, we need to train perseverance while crafting a context of passion. That symbiosis is the only real sense in which we can have grit.

The Placebo Effect — A Love Story

Photo by Evan Kirby on Unsplash

Hope and expectation are powerful psychological forces. For this reason, we often use them as a baseline against which to test the efficacy of new medical treatments. This baseline is referred to as the placebo control, and has been relevant in medicine for several centuries. We know that some drugs will cure our problems purely because we anticipate that they will. So when we test whether medical innovations are a success, we want to know that they will do a better job than a chemically useless but psychologically convincing sugar pill.

Inert treatments may be more powerful than you think. When patients with Parkinson’s disease receive a placebo drug, their brains release additional dopamine and activate one of the primary systems damaged by the disorder. Patients with irritable bowel syndrome benefit from placebo acupuncture, but benefit even more from placebo acupuncture combined with a warm, attentive, and confident practitioner. The colors of placebo pills also affect our expectations: red, orange, and yellow drugs are perceived to be stimulating, while blue and green drugs are associated with calming effects.

These studies, and many others, demonstrate that the placebo effect is not just powerful, it is multidimensional. Benefits emerge from several directions, and some placebos are more effective than others, even though they are equally inert in their chemical contents. We may even be able to layer different types of placebo together to build a super-placebo.

To call the benefits of placebo treatments “fake” would be to do them a disservice. The point is that both our psychology and physiology can be agreeable subjects for treatment (of course, they are both technically the products of biological processes). Active drugs directly manipulate the mechanics of tissues in our body, and consequently improve our mental states. Psychological treatments manipulate the contents of our minds, and consequently improve the mechanics of our bodily tissues. Both routes — typically described as bottom-up and top-down processes — can be practical and effective.

* * *

Placebo effects are usually studied in the context of medical treatments and symptom outcomes. A study published in the middle of 2018 took a different approach, by asking whether the placebo effect could increase our prosocial behavior.

Participants in this 2018 study were told about the social benefits of an oxytocin hormone drug, and were then given a nasal spray that they believed was oxytocin, but was in fact an inert saline solution. Despite never receiving actual oxytocin, would the participants still demonstrate the enhanced social behaviors that they were taught?

The first test of social behavior was a trust game. In this game, participants could give some money to a second person, knowing that the second person would then receive triple that amount of money before deciding how much of that final total to return to the trusting participant. If the investing participants had faith in the trustee, they would presumably invest all of their money, in anticipation of a larger return than their investment. If they believed the trustee was greedy and would keep the money, they would risk less of their own cash.
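For concreteness, here is a minimal sketch of that payoff structure (the dollar amounts and the trustee’s behavior are my own placeholders, not figures from the study):

```python
def trust_game(endowment, amount_sent, fraction_returned):
    """Payoffs for one round of a standard trust game.

    Whatever the investor sends is tripled before it reaches the trustee,
    who then decides what fraction of that tripled pot to send back.
    """
    tripled = amount_sent * 3
    returned = tripled * fraction_returned
    investor_payoff = endowment - amount_sent + returned
    trustee_payoff = tripled - returned
    return investor_payoff, trustee_payoff

# Full trust with a $10 endowment and a trustee who returns half the pot:
print(trust_game(10, 10, 0.5))  # (15.0, 15.0) - both do better than (10, 0) with no trust
print(trust_game(10, 2, 0.5))   # (11.0, 3.0) - a wary investor leaves money on the table
```

The amount a participant chooses to send is therefore a simple behavioral readout of how much they trust an anonymous stranger to share the tripled pot.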

Compared to a control condition with no spray at all, the placebo spray participants invested significantly more money with the trustee. The saline spray made them more trusting. And it did far more too. While high on the inert drug, male participants were more comfortable interacting with a female experimenter who stood a little too close to them (at least if the participants were single), and they perceived less anxiety in the experimenter during eye contact.

* * *

In explaining the benefits of placebo treatments, the point of this article is certainly not to endorse fishy therapies or snake oil. Quite the contrary. I hope it serves as a warning about the danger of attributing too much power to particular products or substances we may have bought into. It’s worth realizing when we are actually helping ourselves and no longer need to rely on the promises of expensive supplements to our lives.

If the placebo effect proves anything, it’s that we should never underestimate the influence that we have on our own health and wellness, purely through the way we think and decisions we make. Perhaps we can harness the psychological effects of placebos without the need to believe other people’s deceptions.

For many medical problems, we have amazing treatments available to us that perform significantly better than placebo, and we trust our doctors to prescribe those to us. But for many other everyday issues, our levels of self-efficacy — our confidence in our own abilities and successes — may be the biggest hurdle standing in our way.

Belief can have a dramatic impact on the existing health activities in our lives, like regular physical exercise. In a study of 84 hotel cleaners, some were told that their work satisfied a doctor’s physical activity recommendations for a healthy lifestyle, while others in a control group were told nothing about their work in relation to good exercise. A month later, the informed cleaners perceived that they were exercising more, and showed healthier reductions in body fat and blood pressure than the control group.

The placebo effect won’t help us fly, but it may help us with many of the typical problems we suffer in daily life. During our moments of anxiety, sadness, lethargy, and listlessness, we can be our own hero by pushing ourselves into the right mindset. Optimistic self-belief is a tailwind rather than a headwind; it can give us the favorable momentum we need to overcome seemingly intractable challenges. Eventually, we may even be strong enough to throw out the sugar pills.
