Cambridge University Science Magazine
You can learn a great deal about attachment by asking the clouds. A cyclone turning itself awake across the Indian Ocean can easily leave the ones that sit above it unaffected. And in many ways, I find myself wondering where the bounds of empathy that distinguish us from a cloud dissolve.


Our ability to empathise is a deep-seated feature that is established across species. Theodor Lipps was one of the first to introduce this as a concept, which he termed Einfühlung, characterised by the critical role of ‘inner imitation’. Lipps, a philosopher, came across this concept while engaging with the arts, noting that the impact of an artwork didn’t seem to reside in the work itself but rather in how the observer viewed it. Einfühlung embodies this emotion of watching a piece of artwork - a feeling in which we project our ideas or memories and re-enact a process. While empathy was originally linked to similar forms of mimesis, such as laughter and yawning, today’s definition has seen the word tossed through the folds of time and interpretation. The concept is now subdivided into three categories: emotional empathy (sharing and matching another’s feelings), cognitive empathy (the capacity to think about and understand another’s feelings), and compassion (the motivation to do something about another’s state).

In the past two decades, the rise of neuroimaging techniques has allowed us to peer into the brain itself. Functional magnetic resonance imaging (fMRI) experiments in awake subjects have shown that the brain networks activated while observing emotional states (sadness, happiness, disgust, pain, etc.) in others are the same ones activated when we experience these emotions firsthand.

In 2004, Tania Singer conducted a study of women and their romantic partners. While lying in a scanner, the women either received a painful shock or watched their male partner’s hand receive the same shock. The same brain regions were activated whether the women received the shock themselves or witnessed their partners receiving it. Similarly, a different set of experiments by Bruno Wicker and colleagues showed that the same subregions are activated whether subjects inhaled foul-smelling odourants or viewed the disgusted faces of people inhaling them.

Empathy is not restricted to humans. It has been documented in several animal species, including primates, dogs, rodents and dolphins. For example, when rats in a study were given a choice between freeing a trapped cagemate from one restrainer and obtaining chocolate from another, they preferred to open both restrainers and share the chocolate! A study published in Cell in 2022 showed that prairie voles opened a door to help members of their own species (conspecifics) that were soaked in water. This behaviour became quicker over successive trials, and was only observed when the conspecific was soaked in water.

Psychologists and neuroscientists have reasoned that a basic mechanism underlying empathy across the animal kingdom is that observed scenarios activate the neural representations of our own similar experiences. When we see another being in pain, we share it by imagining ourselves in their shoes. Empathy is not solely about inferring the states of others - we empathise because we are guided by our own experiences and emotional pasts, which resonate with the situation.


Our empathic responses are driven by how we operate subconsciously. We don’t react to everyone’s pain the same way, and we are more empathetic towards those who more closely resemble us.

In Empathy as Orientation rather than Feeling, Steve Larocco discusses how engaging with another person does not necessarily mean we engage with everything that person feels. We are more likely to relate to those experiences of others that are most familiar from our own lives. For example, Prof. William C. Adams conducted a 13-year study of United States citizens in the 1970s and 1980s, surveying their empathic responses to natural disasters abroad. Responses depended on how much they cared about the affected country, how geographically close it was to the U.S., and how likely they were to visit it. As individuals who constantly operate within networks of biases, we find selective empathy commonplace; it conserves the brain’s energy-draining capacity for emotional regulation.

Selectivity has also been shown in group differences. The anterior cingulate cortex is a key region implicated in empathy perception. A study published in the Journal of Neuroscience showed that responses in this region were stronger when respondents witnessed members of their own race in pain than when they witnessed members of another race. People are also more likely to empathise with human-like robots than with robots that bear less resemblance to us.

Morality is another common factor that distinguishes empathic responses. In behavioural studies, subjects feel less empathy towards an immoral person experiencing bad things than towards a moral person. This is driven by several factors, including the dehumanisation of specific groups. If you’ve ever had a discussion on a highly polarising topic, you may have found that some people who support one side find it harder to empathise with victims on the other.

A most telling example of this comes from Susan Fiske and Lasana Harris’ neuroimaging study, in which participants viewed photographs of social groups and objects. The social groups differed in their relatability to the participants, and viewing them activated the medial prefrontal cortex, a key region in social cognition. However, pictures of people from extremely different socioeconomic groups (homeless people and drug addicts) elicited little response here. Instead, they activated the same regions as images of disgust-inducing objects. This was one of the first experiments to show a quite literal ‘dehumanisation’ mechanism at play in the brain, built on our biases and toying with the joystick of empathy.

Another situation where we observe a distinct lack of certain forms of empathy is in disorders like psychopathy. At the Netherlands Institute for Neuroscience, researchers measured brain activation in 18 psychopathic offenders and compared the results to those of control subjects. All participants viewed video clips of two hands interacting in loving, neutral, painful or rejecting ways, and were then specifically instructed to feel with (or try to imagine the feelings of) the actors in the videos. As expected, when participants simply viewed the videos, the brain regions that lit up in controls were not as strongly activated in the offenders.

However, the results from the ‘feeling’ part of the study suggested a highly unexpected explanation of how psychopathic patients view and experience interactions. When they were instructed to feel, their brains were activated to nearly the same extent as controls’ - almost suggesting the presence of an empathy switch in the brain! What we see in psychopathy, then, is not a complete lack of empathy. Rather, these patients seem unable to produce the spontaneous, naturally occurring empathic responses that controls generate without any extra effort, as opposed to the more deliberate ones triggered by being asked to ‘feel’. This indicates that psychopathy may be less a blunting of empathy than a selective reduction in the ability to experience it spontaneously.


In the last decade, we have harnessed a spectacular ability to peek into one another’s lives, and witnessed a revolution in how we share information through social media. Empathy never boiled down to a straightforward manifestation, and now a new variable has been added to the list: a technological one. With artificial intelligence (AI) and ‘smart’, algorithm-based social platforms dominating the ways in which we interact as a social species, empathy is enshrouded in their layers.

I remember having a conversation with a friend on a particularly cold night. He was finding it difficult to navigate social media of late, because he felt bombarded by the content he encountered - atrocities being committed on an international scale, each from a different country. To the same concern, I replied that I didn’t find it nearly as difficult: I could still question these events and keep track of their impact on our daily lives.

But I often found that the sheer frequency of exposure to such content, coupled with the brief engagement each news item elicits, dimmed my ability to process the more complex feelings associated with each piece of information. This wasn’t limited to happenings in the world: I find it remarkably easier to deal with changes in my own home now that I’ve moved away.

This observation is reminiscent of Prof. Adams’ study from the 1970s, which explored how geographical proximity and our relatability to a country determine spontaneous empathic responses. When faced with information overload from the internet, we face a similar problem of distance and numbers. That large numbers numb our responses has long been observed, and was popularised in Mother Teresa’s words: ‘If I look at the mass I will never act. If I look at the one, I will’.

Another part of this is explained by what Paul Slovic, a psychologist at the University of Oregon, calls ‘psychic numbing’. His work shows how the mind’s capacity for compassion is limited when it comes to more abstract concepts - large crises far removed from our immediate planes of existence. Brain imaging evidence appears to support this. When exposed to radio newscasts of neutral or negative events happening to individuals or groups, participants show greater activity in the medial prefrontal cortex when events happen to a single person, regardless of the event’s emotional tone. Psychologically, this has been termed compassion fade or compassion fatigue. It has been linked to evolutionarily ingrained altruistic behaviours that drive us to help a single victim more than multiple victims, and recognisable ones more than unidentifiable ones.

We not only engage with information about the masses but also witness it coming at us in bite-sized chunks at an unprecedented rate. While compassion fatigue has been well documented in studies like the one above, little is known about whether it could be amplified by increasing engagement with short-lived pieces of media. This could even build up a tolerance for such media, especially content with negative connotations.

It is not just the scrolling that keeps news going: news items are constantly replaced or overridden by other items based on our own preferences. On the one hand, you learn about happenings around the world; on the other, you’ve moved on to the next interesting bit before you know it. The way most algorithms work, you may even end up navigating to a whole other topic and remaining in its confines. Within these new circles of information flow, we encounter a virtual version of the numbers problem, further complicating our ability to process and empathise.


Both AI and social media rely on their attractiveness as tools of ‘immediate’ relief, providing a sense of gratification perfect for our brains. The role of social media in increased sympathy-seeking (i.e., narcissism) is undeniable. What does this mean for our empathic capabilities?

Social media engagement has fundamentally altered an array of responses. It is not just our bumblebee-esque attention spans that have been affected, but also our fundamental processing of social cues and face-to-face interactions. Certain studies point to the existence of psychic numbing, in the sense described earlier, resulting from excessive engagement with content. Researchers have suggested that reduced in-person communication can lower how efficiently we process emotional cues in interactions. This is particularly relevant to the growing numbers of adolescents on social media, living through a time crucial for their prosocial skills. According to Sara Konrath’s study published in the Personality and Social Psychology Review, declines in empathy-related variables were accompanied by increases in narcissism among young people.

Researchers additionally note that empathy bias can be reinforced through the use of social media, because such platforms are usually built to circulate similar types of information through feedback systems. Users find validation for their thoughts within familiar circles espousing similar ideas, restricting their ability to empathise with new, unfamiliar situations. Similarly, studies have found rising self-esteem and narcissism among college students from the 1970s to 2010, with simultaneous declines in empathy.

Evidence for the impact of social media on empathic responses is, however, mixed. Some studies even suggest that empathy scores among older children and teens can be unaffected or even improved by social media use. A 2016 study of 942 Dutch adolescents aged 10-14 found that, within a year, an increase in their social media use was linked to a rise in their ability to understand and share their peers’ feelings. A key discriminating factor here is the role social media plays in each person’s life. It has been theorised that having fewer non-verbal cues while communicating this way may lead teens to reflect on and imagine their conversational partners more, thereby fostering empathy. Another supporting theory points to the introvert-friendly nature of virtual modes: people who may be unwilling to share in person could find it easier to do so on such platforms, facilitating deeper communication. The increased willingness to share and the lowered inhibitions of the teenage brain are complex, and unravelling them through the lens of social media is an ongoing process.


Artificial intelligence systems are riding a wave of fame today, and while we’ve found ways to converse with them, some suggest that this also extends to empathy. A recent article claims that ‘with artificial empathy, you can now tailor experiences to how a customer is feeling at any given moment’. The same article went on to suggest that customer experience could be enhanced by using AI chatbots, with their ‘kinder, gentler’ responses, in the role of customer service operators; 71% of surveyed customers agreed that AI would be more empathetic. Many companies have started to tap into this potential. Hume AI develops tools to ‘measure’ emotions using verbal, facial, and vocal expressions. Zoom has a new plugin called MorphCast that seeks to provide users with real-time analysis of emotions during a virtual meeting! MorphCast is designed to detect attention levels, engagement, and emotions, and to give the host visual feedback to build communication.

While it is useful for a speaker to know when their audience is falling asleep and when it is most interested, engagement metrics only scratch the surface. Whether they actually capture the underlying emotions is an ongoing debate. Emotions always exist within contexts, and these transcend mere facial expressions. For that matter, even clinical ratings that calculate emotional indices have been criticised for the large amount of subjectivity in perceiving emotions, and for the biases that change how an observer might interpret them (as someone who has worked with emotional disorders, I’ve had my share of questioning all that we study). A main change that AI seeks to bring to this space is detecting minuscule changes in expression and voice modulation that we simply cannot catch with our eyes and brains. But even then, can it be empathetic?

This gap in processing ability has recently been highlighted by the wave of large language models (LLMs), like ChatGPT. A study published in the Journal of the American Medical Association compared responses written by doctors and by an AI system to 200 medical questions submitted by patients. The healthcare professionals tasked with judging the responses found that 80% of the AI-generated responses were more nuanced and descriptive than those written by physicians. Moreover, about 45% of AI-generated answers were judged to be empathetic, against 5% of those written by doctors.

This is largely explained by the nature of physicians’ lives, characterised by excessive burnout, large patient loads, and regular exposure to distressing situations. A recent study by Dan-Mikael Ellingsen and colleagues investigated the brain processes supporting how pain is communicated between clinicians and patients. Patients who had a consultation with their clinician before receiving painful pressure stimuli rated that clinician as better able to understand their pain. These clinicians were also more accurate in assessing patients’ pain levels, and their brain activity showed greater concordance with the areas activated in the patients, compared with clinicians who had not interacted with their patients in advance.


Among the many debates raised about the responsible use of AI, one particularly stands out. These systems are smart and capable of beating us at several tasks but, at the end of the day, they still reflect all the glaring imperfections and inequalities of our own systems. Their identity is still what ours collectively constitutes. This is clearly seen in the ways these systems make assumptions about marginalised groups that align with our own stereotypes. When two emotion recognition systems analysed videos from 400 basketball games, both assigned more negative emotions on average to Black players, even when they smiled. Even if a system seems empathetic, it is not equally empathetic to any and every user.

It is paradoxical that a likely reason ChatGPT seemed more empathetic than a doctor is that it is not only unburdened by the constraints an ordinary physician faces, but also unaffected by having to ‘absorb’ each person’s concerns. While current AI models can scour the web for responses to my problems, seeming very empathetic towards my daily woes, they lack the very internal response that occurs in us as humans, arguably a defining trait of empathy. ChatGPT doesn’t imagine itself in pain the way the women in the fMRI study did; it categorically evaluates it.

The AI that assists a doctor in empathising with patients thrives on its own detachedness behind the curtains. It therefore appears to perform poorly within the constructs of emotional empathy. This is different from cognitive empathy, which is what makes us understand, as opposed to experience, the other side. ChatGPT does appear to perform well on that metric, offering help when needed. But here another concern is thrown into the mix. Researchers at San Francisco State University and the University of California, Berkeley, argued in a 2021 article that possessing solely cognitive empathy is also characteristic of psychopathic patients. By creating AI that is extremely good at solving our problems on an immediate scale, we leave little room for the emotional aspects.

Moravec’s paradox observes that simulating reasoning skills requires far less computation than simulating sensory and perceptual skills. What continues to set us apart in empathy is something similar - it isn’t the understanding so much as the perception of cues within their respective contexts (with some complex social situations requiring knowledge of multiple interactions over varying timescales). This relies far more on our unconscious mental experiences, of which we understand little today.

Technology is now far too entangled in our interactions to be left out of the empathy debate. In our search to define and better understand this feeling, we inevitably end up including a new subset of machine-specific social skills. Moreover, AI systems will continue to serve an essential role in decreasing our load, provided we identify the right functions for them in user interactions. Working hand-in-hand with these systems can be immensely useful: by outsourcing the more mundane tasks that occupy our cells, we can free up our minds for real emotional engagement. As far as empathy goes, we’re still peeping over the boundaries of what makes us distinctly human, and what doesn’t - and of what makes our brains the multi-functioning icebergs that they are. There is a lot more going on beneath that can easily steer or sink us. In the end, we are like the cloud in many ways, and in many ways, we struggle to be it.

Article by Spatika Jayaram.

Artwork by Sonja Stiebahl.