Coronavirus (COVID-19) update

Research stories

On our News pages

Our Research News pages contain an abundance of research-related articles, covering recent research output and topical issues.

On TheConversation.com

Our researchers publish across a wide range of subjects and topics and across a range of news platforms. The articles below are a few of those published on TheConversation.com.

Coronavirus: experts in evolution explain why social distancing feels so unnatural

Authors: Isabelle Catherine Winder, Lecturer in Zoology, Bangor University; Vivien Shaw, Lecturer in Anatomy, Bangor University

Shutterstock/Lightspring

For many people, the most distressing part of the coronavirus pandemic is the idea of social isolation. If we get ill, we quarantine ourselves for the protection of others. But even among the healthy, loneliness may be setting in as we engage with pre-emptive social distancing.

There is some great advice out there about how to stay connected at such times. But why is the act of social distancing so hard for so many of us? The answer probably has more to do with our evolutionary history than people might think.

Humans are part of a very sociable group, the primates. Primates are distinguished from other animals by their grasping hands and various ways of moving around, and because they show a high level of social interaction.

Compared to other mammals of the same body size, primates also have larger brains. There are several hypotheses about why this is. We know, for instance, that within the primates, species which face ecological challenges like accessing hard-to-reach foods have slightly larger brains. Doing these things may require more sophisticated brains.

Our large brains seem to be as much about managing our social relationships as our survival skills. Brain size in all mammals is linked to understanding and intelligence. In primates it is also positively correlated with social group size.

Living in groups requires us to understand relationships, both amicable and conflicting, with those around us. For primates, remembering how two individuals have interacted in the past, and how they might feel about each other now, is necessary knowledge when deciding who to approach for help. Social skills are therefore fundamental for survival in group situations.

Human brains are even larger than those of other primates. If we apply the scaling rule to ourselves, we would predict an average social group size of around 150 people. This prediction seems to be true. Workplaces, for example, have been shown to function better when there are no more than 150 employees.

Why live in groups?

Living in a group offers various advantages. Larger groups have better defences against rivals and predators. They are often better able to find food – more pairs of eyes searching for fruit trees means more success – and they are more able to defend that food from competitors.

There are reproductive advantages, too. The larger the group, the more likely any individual is to be able to find a suitable mate.

Social animals. Shutterstock/Ints Vikmanis

In more social species, there is also the potential availability of alternative care-givers to babysit or teach the young. Infant primates have lots of complicated social and physical skills to learn. Living in a group gives them more opportunities to develop those skills in a safe environment under the watchful eye of an elder.

Finally, larger social groups have more capacity to generate, retain and transmit knowledge. Older members are more numerous in larger groups. They may remember how to access difficult or unusual resources, and be able to show others how to do it. This can mean the difference between survival and death. For instance, in a drought, only the oldest members of the group may remember where the remaining water holes are.

How are we different?

All this goes some way to explaining why being socially isolated can be so very uncomfortable for us. Modern humans are one of the most social species of all mammals.

Since our split with chimpanzees, our brains have continued to expand. These increases seem to fit with an even more intense reliance on community.

Several of our distinctive features, including language and culture, suggest that modern humans are particularly dependent on social living. The most convincing evidence, however, may come from our characteristic division of labour.

A division of labour means that we allocate various specific tasks to different people or groups. In hunter-gatherer societies, some individuals may go hunting, while others collect plants, care for children or produce clothing or tools.

Humans employ this strategy more than any other primate. Today, there are many people who have never hunted or grown their own food – these tasks instead being delegated to other people or companies, like supermarkets. This means we are free to work on other things, but it also makes us intensely dependent on our social networks for day-to-day necessities.

An evolutionary perspective

We have literally evolved to be social creatures, and it’s really no wonder so many of us find social distancing intimidating. It’s not all doom and gloom, however. Humans’ intense sociability has evolved over a very long period of time to make us habitually able to maintain relationships with large numbers of people, and so improve our shared chances of survival.

We have already evolved symbolic language and huge cultural and technological capacities. If we had not, we would have no way to live in our increasingly global society, where maintaining personal links to everyone we depend on is effectively impossible.

Current social distancing measures are, in fact, all about physical distance. But today, physical distance doesn’t have to mean social isolation.

Our rich human history of managing social interaction in new ways suggests that we have a talent for adapting and innovating to compensate for difficulty. In the last 20 years, the explosion of mobile phones, the internet and social media has turned us into “super-communicators”. This is proof of our deep desire to be connected with each other.

Our inner ape craves company, and in this time of physical distancing, these methods of staying in touch really come into their own.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

Why do snakes produce venom? Not for self-defence, study shows

Authors: Wolfgang Wüster, Senior Lecturer in Zoology, Bangor University; Kevin Arbuckle, Senior Lecturer in Biosciences, Swansea University

Krait bites are so painless that many victims don't even realise they have been bitten until it's too late. Wolfgang Wüster, Author provided

Snake venoms vary a lot between species in their make-up and effects, which is a major problem for developing treatments. Snakes use these venoms for two main purposes. The first is foraging, where venom helps the snake to overpower its prey before eating it. The second is self-defence against potential predators – this is how millions of people get bitten, and around 100,000 killed, every year.

Many studies have shown that the need to capture and eat prey often drives the evolution of different snake venoms. For instance, many species have venoms that are especially lethal to their main prey species. On the other hand, scientists know surprisingly little about the role of natural selection for self-defence in the evolution of venoms.

A western diamondback rattlesnake (Crotalus atrox) in a defensive stance. Wolfgang Wüster, Author provided

If you have ever been stung by a bee, you will know that the sting hurts almost immediately, and the pain rapidly reaches its peak. And if you think a bee sting is no big deal, consider the consequences of being stung by a lionfish. Here, the intensity of pain is far more severe, but its rapid onset is the same.

This makes biological sense. The function of a defensive venom is to deter and repel a predatory attack before its bearer is killed or injured, and pain is a universal deterrent. If the evolution of snake venom was driven by natural selection for defence, we would expect to see the same pattern – almost immediate pain that is severe enough to be a deterrent. But is this what happens?

Sssself-defence?

There is often severe pain in snakebites, but little was known about the timescale of pain development. If pain occurs long after the bite, it may simply be a side effect of other venom properties, such as tissue damage.

The ideal organism on which to test this idea is a species that is regularly exposed to venomous snakebites from a wide variety of snakes and can communicate precisely the effects of a bite. That model organism is Homo sapiens – in particular, snake keepers, reptile researchers and ecologists who work with snakes in the field.

To tap into the body of collective snakebite experience accumulated by this demographic, Bangor University researcher Harry Ward-Smith designed and distributed a questionnaire asking them about past venomous snakebites, and in particular, how pain developed after a bite.

Bites are a common occurrence for snake handlers, but it seems most snakes didn't evolve venom for self-defence. mr.kie/Shutterstock

Respondents were asked to report their pain level on a scale of 0-10 after one minute and five minutes, and the maximum pain level at any time after the bite. The purpose was to focus mostly on the timescale of pain development rather than the actual pain levels themselves. The rationale was that while the intensity of pain experienced will vary greatly between people, the timing of when pain develops should be more consistent. Different people may consider a bee sting to be a minor nuisance or unbearable, but everyone agrees that it hurts immediately.

The survey gathered responses from 368 people who had collectively received 584 bites from 192 different snake species. By far the most common experience was relatively low pain in the first five minutes after a bite – the window in which pain would need to deter a predator in time for the snake to escape injury or death.

More severe pain often followed later. Fewer than 15% of cases resulted in pain that was debilitating within five minutes of the bite, and roughly 55% of cases never reached that level. These results strongly suggest that self-defence doesn’t drive the evolution of snake venom.
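The sketch below shows, in minimal form, how proportions like these could be computed from questionnaire responses. It is an illustration only: the column names, the sample values and the cut-off used for "debilitating" pain are assumptions, not details taken from the published study.

```python
# Minimal sketch of the pain-timing analysis described above.
# Column names, values and the "debilitating" cut-off are illustrative
# assumptions; they are not taken from the published study.
import pandas as pd

# Hypothetical records: pain scores (0-10) at one minute, five minutes,
# and the maximum reported at any time after the bite.
bites = pd.DataFrame({
    "pain_1min": [1, 0, 3, 8, 2],
    "pain_5min": [2, 1, 4, 9, 2],
    "pain_max":  [6, 2, 9, 9, 3],
})

DEBILITATING = 7  # illustrative cut-off on the 0-10 scale

early_severe = (bites["pain_5min"] >= DEBILITATING).mean()
never_severe = (bites["pain_max"] < DEBILITATING).mean()

print(f"Debilitating pain within five minutes: {early_severe:.0%}")
print(f"Never reached debilitating pain: {never_severe:.0%}")
```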

We also investigated the presence of venoms that caused early-onset pain throughout an evolutionary tree of snake species. We found that venoms which cause early pain evolved on several occasions, but were usually quickly lost again during the course of snake evolution. Again, this suggests that snakes don’t develop venom as a response to the need to ward off potential predators.

Some venomous snakes, like this Brazilian caissaca (Bothrops moojeni), have toxins with the main function of causing pain. Wolfgang Wüster, Author provided

There are likely exceptions though. For instance, some coral snakes and pit vipers have specifically pain-inducing toxins in their venoms. Spitting cobras have unique behavioural adaptations for defensive venom use, and their venoms cause intense pain upon contact with eyes.

Those who feel personally threatened by the existence of dangerously venomous snakes can rest easy. Overall, their venom is not aimed at us.

The Conversation

Wolfgang Wüster has received funding from The Leverhulme Trust.

Kevin Arbuckle receives funding from the Hamish Ogston Foundation for snakebite research.

What the Earth might look like when the next supercontinent forms

Authors: Mattias Green, Reader in Physical Oceanography, Bangor University; Hannah Sophia Davies, PhD Researcher, Universidade de Lisboa; Joao C. Duarte, Researcher and Coordinator of the Marine Geology and Geophysics Group, Universidade de Lisboa

Triff/Shutterstock, CC BY

The Earth's outer layer, the solid crust we walk on, is made up of broken pieces, much like the shell of a cracked egg. These pieces, the tectonic plates, move around the planet at speeds of a few centimetres per year. Every so often the plates come together and combine into a supercontinent, which lasts for a few hundred million years before breaking up. The plates then disperse and drift apart, until they eventually come back together again – after some 400 to 600 million years.

The last supercontinent, Pangaea, formed around 310 million years ago and began to break apart around 180 million years ago. It has been suggested that the next supercontinent will form in 200-250 million years, so we are currently about halfway through the current cycle of supercontinent assembly and dispersal. The question is: how will the next supercontinent form, and why?

There are four fundamental scenarios for the formation of the next supercontinent: Novopangea, Pangaea Ultima, Aurica and Amasia. How each of these four potential supercontinents would form comes down to how Pangaea broke apart and how the world's continents are still moving today.

The break-up of Pangaea led to the formation of the Atlantic Ocean, which is still opening and widening today. On the other side of the globe, the Pacific Ocean is closing and getting narrower. Ringing the Pacific is the famous Ring of Fire, where the ocean floor is subducted beneath the continental plates and sinks into the Earth's interior. There, the old ocean floor is "recycled" and can feed volcanic plumes. The Atlantic, by contrast, has a large mid-ocean ridge producing new oceanic plate, but hosts only two subduction zones: the Lesser Antilles arc in the Caribbean and the Scotia arc between South America and Antarctica.

1. Novopangea

If we assume that present-day conditions persist, so that the Atlantic continues to open and the Pacific keeps closing, we have a scenario in which the next supercontinent forms on the opposite side of the globe from Pangaea. The Americas would collide with a northward-drifting Antarctica, and then with Africa-Eurasia, which are already colliding. The supercontinent that would form is called Novopangea, or "new Pangaea".

Novopangea. Author provided

2. Pangaea Ultima

The opening of the Atlantic could, however, slow down or even reverse, with the Atlantic starting to close while the two small subduction arcs within it spread along the entire east coast of North and South America. This would recreate Pangaea: the Americas, Europe and Africa would be brought back together in a supercontinent called Pangaea Ultima. This new supercontinent would be surrounded by a giant Pacific Ocean.

Pangaea Ultima, formed by the closing of the Atlantic. Author provided

3. Aurica

However, if the Atlantic were to develop new subduction zones – which may already be happening – both the Pacific and the Atlantic could be doomed to close. That means a new ocean basin would have to form to replace them.

In this scenario, the pan-Asian rift, which currently runs through Asia from west of India up to the Arctic, would open to form a new ocean, producing the supercontinent Aurica. Australia, currently drifting northwards, would sit at the centre of the new continent, while East Asia and the Americas would close the Pacific on either side. The European and African plates would then rejoin the Americas as the Atlantic closed.

4. Amasia

The Earth's fate is completely different in the fourth scenario. This one starts from the very real observation that several tectonic plates, notably Africa and Australia, are currently moving northwards. This drift is thought to be driven by anomalies left by Pangaea deep in the Earth's interior, in the layer called the mantle. Because of this northward drift, we can envisage a scenario in which the continents, with the exception of Antarctica, keep drifting and eventually gather around the North Pole in a supercontinent called Amasia. In this scenario, both the Atlantic and the Pacific would remain largely open.

Amasia, the fourth scenario. Author provided

Thinking ahead

Of these four scenarios, we think Novopangea is the most likely. It is a logical progression of the present-day directions of continental drift, whereas the other three require an additional process to come into play: new Atlantic subduction zones for Aurica, a reversal of the Atlantic's opening for Pangaea Ultima, or anomalies left inside the Earth by Pangaea for Amasia.

Studying the Earth's tectonic future forces us to push the limits of our knowledge and to think about the processes that shape our planet over very long timescales. It also leads us to consider the Earth system as a whole and raises a series of questions: what will the climate of the next supercontinent be? How will ocean circulation adjust? How will life evolve and adapt? These kinds of questions push the boundaries of science because they push the boundaries of our imagination.

The Conversation

Mattias Green receives funding from the Natural Environment Research Council.

Hannah Davies receives funding from the Portuguese Foundation for Science and Technology (FCT).

Joao C. Duarte receives funding from the Portuguese Foundation for Science and Technology (FCT).

Huge ecosystems could collapse in less than 50 years – new study

Authors: John Dearing, Professor of Physical Geography, University of Southampton; Greg Cooper, Postdoctoral Research Fellow, Centre for Development, Environment and Policy, SOAS, University of London; Simon Willcock, Senior Lecturer in Environmental Geography, Bangor University

The Amazon (left) may one day look more like the Serengeti (right). worldclassphoto / GTS Productions / shutterstock

We know that ecosystems under stress can reach a point where they rapidly collapse into something very different. The clear water of a pristine lake can turn algae-green in a matter of months. In hot summers, a colourful coral reef can soon become bleached and virtually barren. And if a tropical forest has its canopy significantly reduced by deforestation, the loss of humidity can cause a shift to savanna grassland with few trees.

We know this can happen because such changes have already been widely observed. But our research, now published in the journal Nature Communications, shows that the size of the ecosystem is important. Once a “tipping point” is triggered, large ecosystems could collapse much faster than we had thought possible. It’s a finding that has worrying implications for the functioning of our planet.

We started off by wondering how the size of the ecosystem might affect the time taken for these changes (ecologists call them “regime shifts”) to happen. It seems intuitive to expect large ecosystems to shift more slowly than small ones. If so, would the relationship between shift time and size be the same for lakes, corals, fisheries and forests?

We began by analysing data for about 40 regime shifts that had already been observed by scientists. These ranged in size from very small ponds in North America, through to savanna grassland in Botswana, the Newfoundland fishery and the Black Sea aquatic ecosystem.

Fish like the beluga once ruled the Black Sea, but their reign ended in an ecological collapse that took just 40 years. alexkoral

We found that larger ecosystems do indeed take longer to collapse than small systems, due to the diffusion of stresses across large distances and time-lags. The relationship does seem to hold across different types of ecosystem: lakes take longer than ponds, forests take longer to collapse than a copse, and so on.

But what really stood out was that larger systems shift relatively faster. A forest that is 100 times bigger than another forest will not take 100 times longer to collapse – it actually collapses much more quickly than that. This is quite a profound finding because it means that large ecosystems that have been around for thousands of years could collapse in less than 50 years. Our mean estimates suggest the Caribbean coral reefs could collapse in only 15 years and the whole Amazon rainforest in just 49 years.

Real-world observations (solid line) predict large ecosystems will collapse relatively faster than predicted by a simple linear relationship (dashed line). Dearing et al, Author provided
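One way to picture this result is as a sublinear power law linking ecosystem size to collapse time. The sketch below is purely illustrative: the power-law form and the exponent are assumptions chosen to show how a 100-fold increase in size can translate into a far smaller increase in collapse time; they are not values from the paper.

```python
# Illustrative sublinear scaling between ecosystem size and collapse time:
# t = c * area**k with k < 1. The constants are assumptions for
# illustration, not estimates from the study.
def collapse_time(area_km2, c=1.0, k=0.5):
    """Collapse time under an assumed sublinear power law."""
    return c * area_km2 ** k

small = collapse_time(1_000)     # a smaller ecosystem
large = collapse_time(100_000)   # an ecosystem 100 times bigger

# With k < 1, the larger system takes far less than 100 times longer.
print(f"The 100x larger system takes only {large / small:.0f}x longer to collapse")
```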

What explains this phenomenon? To find out, we ran five computer models that simulated things like predation and herbivory (think: wolves, sheep and grasslands) or social networks (how accents spread through society). The models support the data in that large systems collapse relatively faster than small ones.
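For readers curious what "simulating predation and herbivory" can look like in practice, here is a minimal predator-prey sketch in the Lotka-Volterra style. It is not the authors' model; the equations and parameters are a generic textbook example chosen only for illustration.

```python
# Generic predator-prey (Lotka-Volterra) simulation, integrated with a
# simple Euler step. All parameters are arbitrary illustrative values.
def simulate(prey=10.0, predators=5.0, steps=1000, dt=0.01,
             prey_growth=1.0, predation=0.1,
             predator_gain=0.075, predator_death=0.5):
    history = []
    for _ in range(steps):
        d_prey = prey_growth * prey - predation * prey * predators
        d_pred = predator_gain * prey * predators - predator_death * predators
        prey += d_prey * dt
        predators += d_pred * dt
        history.append((prey, predators))
    return history

final_prey, final_predators = simulate()[-1]
print(f"After the run: {final_prey:.1f} prey, {final_predators:.1f} predators")
```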

However, the models also provide further insight. For example, large ecosystems often have relatively more species and habitats existing as connected compartments, or sub-systems. This enhanced “modularity” actually makes the system more resilient to stress and collapse, rather like the water-tight compartments in a ship prevent it from sinking if the hull is breached.

But paradoxically, the same modularity seems to allow a highly stressed system to unravel more quickly once a collapse starts. And because large systems are relatively more modular, their collapse is relatively faster.

These unravelling effects should add to concerns about the effects of fires on the long-term resilience of the Amazon to climate change, or the rapid spread of recent bush fires in Australia caused by existing fires igniting further fires. The only upside to our finding concerns ecosystems that have already been managed into alternate regimes, such as human-made agricultural landscapes. These now have much less modularity, and thus may experience relatively slow transitions in the face of climate change or other stresses.

The messages are stark. Humanity now needs to prepare for changes in ecosystems that are faster than we previously envisaged through our traditional linear view of the world. Large iconic ecosystems like the Amazon rainforest or the Caribbean coral reefs are likely to collapse over relatively short “human timescales” of years and decades once a tipping point is triggered. Our findings are yet another reason to halt the environmental damage that is pushing ecosystems to their limits.



The Conversation

John Dearing received funding from the Deltas, Vulnerability and Climate Change: Migration and Adaptation (DECCMA) project under the Collaborative Adaptation Research Initiative in Africa and Asia (CARIAA) program with financial support from the UK Government’s Department for International Development (DFID) and the International Development Research Centre (IDRC), Canada (Grant No. 107642-001). The views expressed in this work are those of the creators and do not necessarily represent those of DFID and IDRC or its Boards of Governors. He is a professor at the University of Southampton. He is also a member of the Green Party of England and Wales and an elected local councillor at Warwick District Council.

Greg Cooper received a research studentship from the Deltas, Vulnerability and Climate Change: Migration and Adaptation(DECCMA) project under the Collaborative Adaptation Research Initiative in Africa and Asia (CARIAA) program with financial support from the UK Government’s Department for International Development (DFID) and the International Development Research Centre (IDRC), Canada. The views expressed in this work are those of the creators and do not necessarily represent those of DFID and IDRC or its Boards of Governors.

Simon Willcock receives funding from UK research and innovation (project numbers: NE/L001322/1, NE/T00391X/1, ES/R009279/1 and ES/R006865/1). He is affiliated with Bangor University.

Not just farmers: understanding rural aspirations is key to Kenya's future

Authors: Kai Mausch, Senior Economist, World Agroforestry Centre (ICRAF); David Harris, Honorary Lecturer, Bangor University

Most households didn't want their future generations to become farmers. DIVatUSAID/Flickr

About 8.3 million people living in Kenya’s rural areas farm to feed themselves. They typically have just a few acres of land and depend on rain to grow their crops. This makes them extremely vulnerable to changes in the weather. Many already struggle.

As with other countries in sub-Saharan Africa, rural communities in Kenya are characterised by higher rates of poverty, illiteracy and child mortality. They also have poor access to basic services, such as electricity and sanitation.

As a result, governments and development organisations consider improving farmers’ agricultural performance a priority to solve poverty and hunger.

Yet, after decades of agriculture-centred rural development approaches, progress has been limited. Food insecurity is still widespread and crop yields in farmer fields remain much lower than their potential. Many agricultural technologies – such as improved crop varieties or the use of fertiliser – fail to scale beyond pilot phase.

One overlooked reason why these technologies aren't being used at scale could be people's relative interest in farming and their aspirations beyond farming. There's evidence for this in the fact that rural incomes are increasingly diversifying.

We argue in our paper that a better understanding of households' aspirations is key to designing successful technologies that will be adopted more widely.

In our research, we interviewed 624 households to explore the aspirations of people living in Kenya’s rural areas. We asked them about their current income sources, investment plans and what they would like their children to do in the future.

We found that only a few households specialised in farming and yet, many self-identified as farmers and said they aspired to increase their agricultural income. This was surprising as the majority of their income often came from sources other than farming. We also found that few aspired for their children to become farmers.

Our findings show that rural Kenyan households can’t just be classified as farmers: there are many different income portfolios and aspirations. And it’s important to listen to people we call “farmers” so policies can be developed that offer innovations that meet their realities, such as training and financial services adapted to a more diversified livelihood portfolio.

Avoiding the trap of calling all households with farm activity “farmers”, and assuming they have no other interests, may also increase the match between demands and technology adoption.

Not only farmers

The rural economy is not only about agriculture. While many rural Kenyan families have their shamba (plot) and have a strong attachment to the land, not all households are farmers in the traditional understanding.

Our results showed that only a quarter of the families in rural Kenya are full-time farmers. The majority (60%) of farm labourers and households derived most of their income from activities outside farming.

Yet, they still identified as farmers.

When asked about their future, two-thirds aspired to increasing their farm incomes through irrigation access, small livestock or high-value crops like fruits and vegetables. Just a third looked outside the farming sector with suggestions of transport, hair salons, shops or other rural business ventures. But this doesn’t mean these other diverse activities aren’t important – and likely becoming more so all the time.

Income diversification isn’t surprising. Several agriculture experts question the potential of rain-fed smallholder farming, as practised by millions of rural families in sub-Saharan Africa, as a pathway out of poverty.

Even though much of sub-Saharan Africa is experiencing changes in farm size distribution, the share of small farms under five hectares is still dominant – as it is in Kenya. Although adopting new technologies generally has positive economic returns per hectare and could improve the resilience of these farmers, the small size of most farms limits smallholders' agricultural earning potential. This means that escaping poverty purely based on farming is not possible.

A well thought out rural investment strategy should provide a more diverse portfolio to the rural population.

Future generations

More questions were raised when we looked at household aspirations for future generations.

Very few parents hope for a future in farming for their children. This is in stark contrast to their personal aspirations and investment plans, which mostly involve expansion or intensification of farming.

This finding raises several pertinent questions that should be explored in future research. For example, what is the implication for agricultural innovations now and in the future? If most households foresee their children stepping out, does this mean that they are focused on short-term investments with quick wins?

Though all poor households are probably looking for quick wins, this may mean that even wealthier households might not have the long-term horizons needed to consider investments in practices with delayed benefits such as agroforestry or soil fertility management.

There are also implications for changes in land use patterns, currently characterised by high levels of land fragmentation in densely populated areas.

If households increasingly step out of farming, would this reverse the trend and enable consolidation of land for the next generation, for example, through people selling or renting out their land? Or is land still perceived as a necessary insurance or for retirement?

These are all important questions for the country’s future that require data and evidence for policymakers to base decisions on.

Capturing what drives the decision-making and aspirations of rural households will help design more effective policies and development initiatives that trigger positive, lasting change within the community.

The Conversation

Kai Mausch received funding from multiple organisations that fund international agricultural research.

David Harris has received funding from various public institutions that support international agricultural research.

A&E waiting times worst on record – but using AI to unblock beds could be part of the solution

Author: Christian P Subbe, Senior Clinical Lecturer in Acute & Critical Care Medicine, Bangor University

January is the busiest month of the year for the NHS – with patients often queuing in corridors and ambulances.

In 2019, Emergency Department waiting times in England were the worst on record, with 2,000 patients waiting for more than 12 hours for a hospital bed in December. At the same time, the latest research shows that over the past three years almost 5,500 patients have died in emergency departments while waiting for a hospital bed.

Part of the problem is that patients who are admitted as emergencies to hospital can be really sick and unstable. So making the decision as to when they are getting better and are safe to go home (and the bed is free) is complicated and risky.

Indeed for every five patients sent home from hospital, one will be brought back as a medical emergency within a month. But our research might have found a way to help unblock hospital beds and help doctors and nurses know quickly which patients are safe to go home.

In our latest research, we used machine learning – or artificial intelligence (AI) – to help doctors and nurses be confident as to which patients are ready to leave hospital and which should stay. We used changes in vital signs such as blood pressure and heart rate to highlight those patients who might be well enough to leave hospital.

In the unit in Bangor where we tested this system with 790 seriously ill patients, we found that using AI in this way would have meant 2,500 fewer days in hospital for these patients.

Reading the signs

Vital signs such as blood pressure, heart rate, speed of breathing, temperature, oxygen level, need for additional oxygen and level of consciousness are already commonly used by doctors and nurses to find out how sick someone is. These are taken two to six times a day while patients are in hospital. The more abnormal each measurement is, the more likely it is that the patient may need intensive care or die.

Our new study builds on research from our unit at Wrexham Maelor Hospital, published in 2001, which tested a system that summarised all vital signs in a single number. A very similar system is now used in most ambulances and hospitals in the UK, making it easier for doctors and nurses to quickly assess a patient and communicate how seriously a patient is ill.

The system basically gives each vital sign a score between zero and three points – zero points for normal measurements and up to three points for very abnormal measurements. All points are added up: if the total score is zero the patient is likely to be well. If the total score rises the patient is at higher risk – with 20 being the highest score.
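As a rough illustration of how a score like this can be computed, the sketch below sums 0-3 points across two vital signs. The band boundaries are assumptions made up for the example; they are not the thresholds actually used in UK hospitals.

```python
# Minimal sketch of an aggregate early-warning score of the kind
# described above: each vital sign contributes 0-3 points and the points
# are summed. Band boundaries are illustrative assumptions only.
def band_points(value, bands):
    """Return 0-3 points depending on which band the value falls in."""
    for points, (low, high) in bands:
        if low <= value <= high:
            return points
    return 3  # anything outside all listed bands scores the maximum

HEART_RATE_BANDS = [   # beats per minute (illustrative bands)
    (0, (51, 90)),
    (1, (41, 50)),
    (1, (91, 110)),
    (2, (111, 130)),
]

RESP_RATE_BANDS = [    # breaths per minute (illustrative bands)
    (0, (12, 20)),
    (1, (9, 11)),
    (2, (21, 24)),
]

def total_score(heart_rate, resp_rate):
    return (band_points(heart_rate, HEART_RATE_BANDS)
            + band_points(resp_rate, RESP_RATE_BANDS))

print(total_score(heart_rate=72, resp_rate=16))   # 0 -> likely well
print(total_score(heart_rate=125, resp_rate=26))  # higher total -> higher risk
```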

For this new study we teamed up with the Bevan Commission and researchers from health electronics company Philips Healthcare. In our study, the computer looked at vital signs of sick patients who were admitted to hospital as emergencies with medical conditions such as asthma, heart failure and stomach ulcers. The computer observed and learned the scores for two days and then started to look at trends – analysing how often the scores went up and down and how high or low the scores went.

Our findings not only make hospital discharges safer but can also be used to make sure patients are getting the right care. Chan2545/Shutterstock

During the study we found that patients with total scores of three or less for more than 96 hours are usually “stable”. They were unlikely to become unwell again during the rest of their hospital stay and could most likely safely leave hospital. We calculated that implementing this simple rule alone would have saved 2,143 days in hospital for the 790 patients.
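A minimal sketch of that simple rule might look like the code below, assuming vital-sign observations arrive at regular intervals; the data layout and the six-hourly observation frequency are assumptions for illustration.

```python
# Sketch of the stability rule described above: flag a patient as
# "stable" once their total score has stayed at 3 or below for more
# than 96 consecutive hours. Data layout is an illustrative assumption.
def is_stable(scores, hours_between_obs=6, threshold=3, window_hours=96):
    """scores: total early-warning scores in time order, oldest first."""
    run_hours = 0
    for score in scores:
        if score <= threshold:
            run_hours += hours_between_obs
            if run_hours > window_hours:
                return True
        else:
            run_hours = 0  # an abnormal score resets the stable run
    return False

# Six-hourly observations with consistently low scores: stable after ~4 days.
print(is_stable([2, 1, 3, 2, 1, 0, 1, 2, 1, 0, 2, 1, 0, 1, 2, 1, 0]))
```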

But when the computer used AI, it was possible to tell who the “stable” patients were much earlier – after just 12 hours. Working out which patients were OK to go home at this point would have meant 2,652 fewer days in hospital for the patients in our study.

Balancing risks

Deciding whether a patient can go home is complicated – and doctors and nurses often opt to keep patients in hospital longer because they are unsure whether they are really getting better. But the longer patients stay in hospital the more likely they will suffer from complications.

Indeed, it is estimated that 300,000 patients acquire infections in hospital in England alone each year. And being in hospital is particularly risky for elderly patients.

Decisions about discharge from NHS hospitals are usually made by senior hospital specialists, who see patients twice a week – though they might review patients briefly on other days. In contrast, our AI system could check whether a patient is getting better several times each day.

The system we developed used data from real patients in a typical UK hospital. The data used by the team in Bangor is available at every hospital bedside in the UK, so could be easily rolled out to all hospitals. This would not only help to make hospital discharges safer but also help to make sure patients are getting the right care. And if the system can stop patients from having to queue for days in Emergency Departments or in ambulances then it might also ultimately help to save lives.

The Conversation

Christian P Subbe has undertaken consultancy work for Philips Healthcare. His employer BCUHB has received funding from the Bevan Commission and Philips Healthcare to undertake the research reported in this article. He is affiliated with Bangor University as a Senior Clinical Lecturer, with Betsi Cadwaladr University Health Board as a Consultant in Acute, Respiratory & Critical Care Medicine and the Health Foundation as an Improvement Science Fellow.

Exercise: we calculated its true value for older people and society

Author: Carys Jones, Research Fellow in Health Economics, Bangor University

Group exercise significantly benefits older people. Shutterstock

Taking up exercise is one of the most popular New Year’s resolutions for people wanting to improve their health. But our research shows that the benefits of older people going to exercise groups go beyond self-improvement and provide good value for society, too.

Less than two-thirds of UK adults reach the recommended physical activity levels of 150 minutes of moderate intensity exercise a week. Keeping active is especially important for older people because it can help reduce falls and improve independence and the ability to carry out everyday tasks. It also boosts mental wellbeing.

Older people are more vulnerable to loneliness and social isolation, and forming friendships and the social aspect of taking part in group exercise is a good way of protecting them from this. A study that followed older people in Taiwan over 18 years found that people who regularly took part in social activities were less likely to be depressed than those who did none. Research has also shown that having a strong social network decreases the risk of death over time.

But our research has now also found that exercise groups for older people are valuable not only to those who take part but also for the wider community.

The facts

We carried out a study of the social value generated by the Health Precinct, a community hub in North Wales that grew out of a partnership between local government, the NHS and Public Health Wales. People with chronic health conditions are referred to the Health Precinct through social prescribing. Social prescribing is a way of linking people to non-clinical services that are available in their community. The idea is that offering services in a community setting rather than a hospital or clinic will provide a non-threatening environment and encourage people to go.




Although the scheme is open to people of all ages with chronic conditions, so far it has mainly been used by older people and the most common reasons for referral are issues with mobility, balance, arthritis and heart conditions.

After someone is assessed at the Health Precinct, they receive a tailored 16-week plan that sets realistic goals and encourages them to take part in exercise groups at the local leisure centre. The Health Precinct promotes health and wellbeing improvement by encouraging social participation, independence and self-management of conditions.

Our approach to measuring the value of the programme was to carry out a social return on investment analysis. This method explores a broader concept of value than market prices, and puts a monetary value on social and environmental factors such as health status and social connectivity.

To establish what the impact was at a societal level, we included in our analysis the effects on people who attended the Health Precinct, their families, the NHS and the local government.

Over a 20-month period, we asked people aged over 55 and newly referred to the Health Precinct to fill out a questionnaire at their first appointment, and again four months later. We were interested in measuring changes to their physical activity levels, health status, confidence and social connectivity.

We also asked family members to fill out a questionnaire on changes to their own health as we thought they may worry less about their loved ones and increase their own activity levels.

Better together. Shutterstock

We calculated potential savings to the NHS by collecting information on how the individuals’ number of GP visits changed after taking part in the Health Precinct. We also estimated the impact on local government by looking at patterns of leisure centre attendance, and explored how likely people were to take out annual memberships after they had finished a 16-week programme.

A monetary value was then assigned to all of these factors to estimate what the overall amount of social value generated by older people doing regular exercise at the leisure centre was. This figure was compared to the annual running costs of the scheme.
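In outline, a social return on investment calculation adds up the monetary value assigned to each outcome and divides it by the cost of delivering the scheme. The sketch below shows the shape of that calculation with entirely hypothetical figures; none of the numbers come from the study.

```python
# Shape of a social return on investment (SROI) calculation. All
# monetary figures below are hypothetical placeholders, not study values.
social_value = {
    "wellbeing gains for participants": 120_000,
    "reduced GP visits (NHS saving)": 30_000,
    "family members' health gains": 15_000,
    "leisure centre memberships": 10_000,
}

annual_running_cost = 60_000

sroi_ratio = sum(social_value.values()) / annual_running_cost
print(f"Each £1 invested generates roughly £{sroi_ratio:.2f} of social value")
```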

Our findings suggest that the value generated by the Health Precinct outweighs the cost of running it, leading to a significant positive social return on investment.

Investing in health

In the current climate of squeezed health and social care budgets, it is more important than ever to identify services that offer good value for money and benefit multiple people and organisations.

The model of social prescribing and managing health and social care services in the community is increasingly popular. One of the more established examples is the pioneering Bromley by Bow Centre in London, which celebrated its 35th year in 2019.

Investing in community assets that encourage older people to get active physically and socially is key not just to improving their wellbeing but also to generating future savings for society by lowering demand for health and social care services.

The Conversation

Carys Jones receives funding from the Welsh Government through Health and Care Research Wales.

Five years on from the Charlie Hebdo attack, 'Je suis Charlie' rings hollow

Author: Jonathan Ervine, Senior Lecturer in French and Francophone Studies, Bangor University

After the terror attack on the Paris office of satirical magazine Charlie Hebdo on January 7 2015 left 12 people dead, many declared “Je suis Charlie” (“I am Charlie”) in solidarity. But behind the understandable emotion that accompanied such declarations lay a more complicated reality. Many reactions to the attack were more conservative than first appeared, and not in keeping with the values of the publication. Five years on, “Je suis Charlie” has quite a hollow ring to it.

Before 2015, about 40,000 people read Charlie Hebdo each week. Given that many hundreds of thousands declared “je suis Charlie”, most were clearly not regular readers. “Je suis Charlie” primarily appears to have been a statement of sympathy rather than an endorsement of the brand of humour of this subversive publication. The phrase also symbolised a desire to defend freedom of expression, although not necessarily an agreement with the ways in which Charlie Hebdo has expressed itself.

Charlie Hebdo has traditionally taken pride in describing itself as a “journal irresponsable” (irresponsible newspaper). It has been happy to describe its humour as “bête et méchant” (stupid and nasty). This sometimes dark and provocative humour has attracted criticism over the years, not least from politicians. Yet many authority figures that Charlie Hebdo had ruthlessly mocked were present in the demonstrations that took place in January 2015.

And as was observed at the time, the presence of certain world leaders also pointed to a degree of hypocrisy. Where many sought to defend freedom of speech, there were several leaders who had restricted freedom of expression in their own countries. The international non-profit organisation Reporters Without Borders was particularly critical of figures such as Egyptian foreign minister Sameh Shoukry, Russian foreign minister Sergei Lavrov and Turkish Prime Minister Ahmet Davutoglu.

Soaring over ‘Je suis Charlie’ in the ski jumping leg of the FIS Nordic Combined World Cup in France, 10 January 2015. Patrick Seeger/EPA-EFE

Commemorating while forgetting

Charlie Hebdo has generally been keen to laugh about anything and everything, and in whatever way it pleases. But despite the focus on freedom of expression in the aftermath to the 2015 attack, there were noticeable inconsistencies. In the immediate aftermath, several topical comedy programmes on French television were not broadcast as writers and presenters struggled to find a way to engage with such horrific events in a humorous manner.

A rare exception was the Canal Plus show Les Guignols, whose brand of humour was sometimes similar to Charlie Hebdo. The daily programme, which featured latex puppets of many well known figures, included several sketches about the attack, broadcast only hours after it had taken place. These included jokes about increased levels of terror threats. It also involved a latex puppet of the Prophet Muhammad distancing himself from the attackers. The show, which was dedicated to the magazine, concluded with a sketch in which several of the Charlie Hebdo cartoonists who had been killed were allowed into heaven despite having frequently mocked religion.

Yet many sections of the French media, and French society in general, were reluctant to embrace such dark humour. At an event in September 2017, comedian Jérémy Ferrari told of how several television stations cancelled planned interviews with him about his new show in early 2015. Stations may have been making time to discuss freedom of expression, but he said they seemed reluctant to discuss the way his stand-up show mocked war and terrorism.

People who sought to play down or joke about the Charlie Hebdo attack in France in early 2015 also risked being charged with the offence of “l'apologie du terrorisme” (excusing terrorism). In a school north of Paris, a pupil was reportedly disciplined for laughing at a joke about the name of a gunman who killed several people in the days after the Charlie Hebdo attack, and was made to repeatedly write the phrase “one does not laugh about serious things”.

A challenge for comedians

Several years on, as I explore in my recent book on the topic, French comedians seem torn between insisting on the importance of being able to joke about whatever topics they wish and worrying about the consequences of doing so.

In 2015, the comedian Sophia Aram started performing a show in which she defended Charlie Hebdo and its values. She insists on the importance of continuing to freely mock religion and extremism.

Mustapha El Atrassi – a comedian who shares Aram’s French-Moroccan roots and was also brought up in a Muslim family – also insists on the need to keep embracing jokes that deal with taboos. But he argues too that not all comedians are equally free to joke about terrorism. He said that a French comedian called “Maxence” – a stereotypically white, European, middle class name – is likely to get a much more positive response to dark humour about terrorism than someone from his background.

Focusing on the depictions of the Prophet Muhammad that made Charlie Hebdo a target for fundamentalists, meanwhile, the comedian Stéphane Guillon said in 2016: “If you can die due to a drawing, you can die due to a sketch.” He again evoked his fear of the potentially dangerous consequences of mocking the Prophet Muhammad on stage in 2018. At an event to commemorate the third anniversary of the Charlie Hebdo attack, the normally acerbic comic stated: “I don’t want to miss out on seeing my children grow up due to a joke about Muhammad.”

Five years on, France has not continued to embrace values associated with Charlie Hebdo. Shortly after the attack, the magazine’s number of subscribers rose to 260,000 and six months on it was selling 120,000 copies each week via newsagents. But by 2018, it had only 35,000 subscribers and sold a further 35,000 copies per week to non-subscribers. After a further decline in sales, it marked the fourth anniversary of the attacks in 2019 with an editorial that asked its readers: “Are you still there?”

One thing that is certainly not still there is Canal Plus’s Les Guignols, the satirical show featuring latex puppets. Its four main writers were fired in summer 2015 and the show moved to a less prominent slot. In 2018, the iconic show was finally cancelled by Canal.

Ultimately, France seems much less keen to embrace biting satirical humour than it initially appeared back in 2015.

Jonathan Ervine is the author of Humour in Contemporary France: Controversy, Consensus and Contradictions.

Liverpool University Press provides funding as a content partner of The Conversation UK

The Conversation

Jonathan Ervine does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond his academic appointment.

Boris Johnson is planning radical changes to the UK constitution – here are the ones you need to know about

Author: Stephen Clear, Lecturer in Constitutional and Administrative Law, and Public Procurement, Bangor University

It's not just Brexit that he's eyeing up. PA

With a very large majority in parliament, Boris Johnson is planning radical changes to the UK constitution. His party claims that far reaching reforms are needed because of a “destabilising and potentially extremely damaging rift between politicians and the people” under the last parliament. The issue at the centre of this “damaging rift”, however, is whether the proposals for constitutional change are a democratic necessity or a cynical attempt by the Conservative government to bolster its power.

These are the most important changes the Conservative government is proposing.

The future of the union

The most seismic constitutional challenge for the prime minister is the future of the union of nations that make up the UK. He claims he wants to strengthen the union, but Brexit raises questions about Northern Ireland and the Scottish nationalism movement has been energised by the 2019 election results.

The Conservatives have been clear about their opposition to holding a second independence referendum in Scotland. And legally speaking, it is for Westminster to make decisions – not Holyrood. However, politically speaking, it’s difficult to envisage the UK government being able to arbitrarily force a country to stay in the UK against its will.

It is likely that Johnson will try to meet this challenge by devolving more powers to the regions and offering them more money.

Cutting 50 MPs from parliament

The Conservative government has detailed plans for changing the way the UK elects its members of parliament, starting with redrawing constituency boundaries to reduce the number of MPs from 650 to 600. The changes were first proposed in 2016 by the independent Boundary Commission.

But it has been noted that moving boundaries could have a greater negative impact on Labour and the Scottish National Party than the Conservatives – which perhaps tells us why the plan features so highly on Johnson’s agenda. It could also mean that smaller regions, such as Wales, lose disproportionately more MPs than other parts of the UK.

Holding elections when he wants

The Fixed-Term Parliaments Act is on the chopping block, too. This act stipulates that general elections must be called every five years, with early elections held only in exceptional circumstances.

The Conservatives say the act is being used as a tool by opposition parties for “delay and dither”, and has led to “paralysis”.

But there is also concern that repealing the act hands the prime minister discretion to decide when to call an election – and, in the most extreme interpretation, could mean that this government’s term lasts a decade. The question therefore becomes: what constitutional safeguards would be put in place to replace this law and counterbalance the arbitrary power of government? We don’t currently have an answer to that.

‘Rebalancing’ human rights

The Conservatives have long talked of repealing the Human Rights Act and replacing it with a “British” bill of rights.


Read more: UK Human Rights Act is at risk of repeal – here's why it should be protected


Their debates have centred around the belief that the UK needs to revisit the balance between individuals’ rights – such as freedom of expression – and the wider public interest. That doesn’t mean the Conservatives want to curtail all rights to free speech but that they want greater powers to manage cases in which people use a free speech argument to justify hate speech. The basis of their argument seems to be that if human rights are universal to all then we may have now gone too far – as they also apply to “bad people”.

However, such arguments are flawed. Human rights legislation already recognises that rights are not absolute, and can be proportionally limited as necessary in a democratic society. Instead these proposals seem to be more about giving the government increased arbitrary power to deport individuals they deem to be a risk, such as terrorist suspects, rather than having to fight protracted human rights litigation in court.

What’s more, the Conservatives’ actual commitment to retaining the right to free speech can be seen via their proposals to repeal section 40 of the Crime and Courts Act. This is the law that was introduced following the Leveson Inquiry and phone-hacking scandals, which forced publishers not signed up to an approved regulator to pay all legal costs linked to libel claims, even if the claims were ultimately thrown out. The concern is that if publishers are carrying these financial risks, it restricts the freedom of the press and legitimate investigative journalism.

‘Updating’ justice

The UK is also about to see its justice system “[updated]” – including judicial review, the process through which people can challenge decisions made by public bodies. This process was famously used in two high-profile Brexit cases in which the Supreme Court ruled against the government.

Some therefore question whether the prime minister’s displeasure with these rulings is the real motivation for “updating” the justice system. The Conservative manifesto says the idea is to ensure the process is not being abused “to conduct politics by another means”.

There is a legitimate case for “updating” justice – not least because there has been a rise in the number of people wanting to challenge the state without being able to pay for legal advice. But judicial review has already been subject to significant reforms in recent years. The concern is that this may be an attempt by the Conservatives to muddy the waters by reformulating the rules following the tumultuous time the government has had in the courts.

Parliament or government?

A constitution, democracy and rights commission is to be set up within a year, which appears to be aimed at reviewing the UK constitution under the guise of addressing trust in politics.

It’s likely that the commission will focus on the relationship between parliament and government. It will, in particular, review the mechanisms available to parliament to hold the government to account and look at what the government can and can’t do without parliamentary approval. These powers currently include decisions to deploy the armed forces, to make or unmake international treaties, and to grant honours.

The courts can review the limits of these “prerogative powers”, and can prevent the government from trying to create new ones. This was a key part of the Brexit case taken to the Supreme Court by campaigner Gina Miller when she argued that the government could not trigger Article 50 to begin the Brexit process in 2016 without getting parliament’s approval.

The Conservatives have also hinted at wanting to reform the House of Lords, though it’s not clear how at this stage. It is likely that the new government will want to explicitly reaffirm the supremacy of the Commons over the Lords in a new act of parliament, and possibly even revisit the Lords’ “powers of delay” – something Theresa May threatened during her prime ministership when the Lords refused to pass her Brexit legislation straight away.

The Conversation

Stephen Clear does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Can African smallholders farm themselves out of poverty?

Author: David Harris, Honorary Lecturer, Bangor University; Jordan Chamberlin, Spatial Economist, International Maize and Wheat Improvement Center (CIMMYT); Kai Mausch, Senior Economist, World Agroforestry Centre (ICRAF)

Hard work and poor prospects for smallholder farming households in Africa. Swathi Sridharan (formerly ICRISAT, Bulawayo), CC BY-SA

A great deal of research on agriculture in Africa is organised around the premise that intensification can take smallholder farmers out of poverty. The emphasis in programming often focuses on technologies that increase farm productivity and management practices that go along with them.

Yet the returns of such technologies are not often evaluated within a whole-farm context. And – critically – the returns for smallholders with very little available land have not received sufficient attention.

To support smallholders in their efforts to escape poverty by adopting modern crop varieties, inputs and management practices, it’s necessary to know if their current resources – particularly their farms – are large enough to generate the requisite value.

Two questions can frame this. How big do farms need to be to enable farmers to escape poverty by farming alone? And what alternative avenues can lead them to sustainable development?

These issues were explored in a paper in which we examined how much rural households can benefit from agricultural intensification. In particular we, together with colleagues, looked at the size of smallholder farms and their potential profitability and alternative strategies for support. In sub-Saharan Africa smallholder farms are, on average, smaller than two hectares.

It’s difficult to be precise about the potential profitability of farms in developing countries. But it’s likely that the upper limit for most farms optimistically lies between $1,000 and $2,000 per hectare per year. In fact the actual values currently achieved by farmers in sub-Saharan Africa are much less.

The large profitability gap between current and potential performance per hectare of smallholder farms could, in theory, be narrowed if farmers adopted improved agricultural methods. These could include better crop varieties and animal breeds; more, as well as more efficient, use of fertilisers; and better protection from losses due to pests and diseases.

But are smallholder farms big enough so that closing the profitability gap will make much difference to their poverty status?

Our research suggests that they are not. Even if they were able to achieve high levels of profitability, the actual value that could be generated on a small farm translated into only a small gain in income per capita. From this we conclude that many, if not most, smallholder farmers in sub-Saharan Africa are unlikely to farm themselves out of poverty – defined as living on less than $1.90 per person per day. This would be the case even if they were to make substantial improvements in the productivity and profitability of their farms.

That’s not to say that smallholder farmers shouldn’t be supported. The issue, rather, is what kind of support best suits their circumstances.

Productivity and profitability

In theory, it should be quite simple to calculate how big farms need to be to enable farmers to escape poverty by farming alone.

To begin with, it’s necessary to know how productive and profitable per unit area a farm can be. Productivity and profitability – the value of outputs minus the value of inputs – are functions of farmers’ skills and investment capacities.

They are also dependent on geographical contexts. This includes soils, rainfall and temperature, which determine the potential for crop and livestock productivity. Other factors that play a part include remoteness, which affects farm-gate prices of inputs and outputs, and how many people a farm needs to support.

The figure below summarises the relation between farm size, profitability and income of rural households. We used a net income of $1.90 per person per day (the blue curve) as our working definition of poverty. A more ambitious target of $4 per person per day (the orange curve) represents a modest measure of prosperity beyond the poverty line.

Combinations of land per capita and net whole-farm profitability that would generate $1.90 (blue) and $4 (orange) per person per day. The median land per capita values of rural households from all 46 sites in 15 countries of sub-Saharan Africa were below the horizontal dashed line (0.60 hectares per person). Author supplied

So, how do these values compare with the situation in sub-Saharan Africa?

It has been estimated that about 80% of farms across nine sub-Saharan countries are smaller than two hectares. These sites would need to generate at least $1,250 per hectare per year just to reach the poverty line. Sites at the lower end of the range cannot escape poverty even if they could generate $3,000 per hectare per year.
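As a rough check on these figures, the short Python sketch below reproduces the arithmetic behind the curves in the figure above. It assumes that all household income comes from the farm and that a year has 365 days; the 0.60 hectares per person median and the $1.90 and $4 targets come from the text, while the 0.55 hectares in the final line is simply an illustrative value that yields roughly the $1,250 threshold quoted above.

# Minimal sketch of the farm-size arithmetic: the net whole-farm profit per
# hectare needed to reach a daily income target, assuming (as above) that all
# household income comes from the farm.

def required_profit_per_ha(target_usd_per_person_per_day, land_ha_per_person, days_per_year=365):
    """Net profit (USD per hectare per year) needed to reach the income target."""
    return target_usd_per_person_per_day * days_per_year / land_ha_per_person

# Median land per capita reported for the 46 sites: 0.60 ha per person.
print(required_profit_per_ha(1.90, 0.60))  # ~1,156 USD/ha/year to reach the poverty line
print(required_profit_per_ha(4.00, 0.60))  # ~2,433 USD/ha/year to reach the $4 target

# An illustrative household with about 0.55 ha per person needs roughly the
# $1,250/ha/year quoted above just to reach the $1.90 line.
print(required_profit_per_ha(1.90, 0.55))  # ~1,261 USD/ha/year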

Unfortunately, there is limited information about whole-farm net profitability in developing countries. But in Mozambique, Zimbabwe and Malawi, for example, the mean values of only $78, $83 and $424 per hectare per year, respectively, imply that even $1,250 appears to be far out of reach for most small farms.

It’s difficult to interpret information from developed countries in developing country contexts. Nevertheless, gross margin values for even the most efficient mixed farms seldom exceed around $1,400 per hectare per year.

These values are similar to gross margins using best practices for perennial cropping systems reported in a recent literature survey of tropical crop profitability. The study drew on data from nine household surveys in seven African countries. It found that profit from crop production alone (excluding data on livestock) ranged from only $86 per hectare per year in Burkina Faso to $1,184 in Ethiopia. The survey mean was $535 per hectare per year.

From this overview we must conclude that, even with very modest goals, most smallholder farms in sub-Saharan Africa are not “viable” when benchmarked against the poverty line. And it’s unlikely that agricultural intensification alone can take many households across the poverty line.

What is the takeaway?

We certainly do not suggest that continued public and private investments in agricultural technologies are unmerited. In fact, there is evidence that returns to agricultural research and development at national level are very high in developing countries. And there is evidence that agricultural growth is the most important impetus for broader patterns of structural transformation and economic growth in rural Africa. But realistic assessments of the scope for very small farmers to farm themselves out of poverty are necessary.

Farmers are embedded in complex economic webs and increasingly depend on more than farm production for their livelihoods. More integrated lenses for evaluating public investment in the food systems of the developing world will likely be more helpful in the short term.

Integrated investments that affect both on- and off-farm livelihood choices and outcomes will produce better welfare than a narrow focus on production technologies in smallholder dominated systems. Production technology research for development will remain important. But to reach the smallest of Africa’s smallholders will require focus on what’s happening off the farm.

The Conversation

David Harris receives funding from the CGIAR.

Jordan Chamberlin receives funding from the CGIAR, the Bill and Melinda Gates Foundation, and IFAD.

Kai Mausch received funding from multiple organisations that fund international agricultural research.

Why some scientists want to rewrite the history of how we learned to walk

Author: Vivien Shaw, Lecturer in Anatomy, Bangor University; Isabelle Catherine Winder, Lecturer in Zoology, Bangor University

_Danuvius guggenmosi_ fossil. Christoph Jäckle

It’s not often that a fossil truly rewrites human evolution, but the recent discovery of an ancient extinct ape has some scientists very excited. According to its discoverers, Danuvius guggenmosi combines some human-like features with others that look like those of living chimpanzees. They suggest that it would have had an entirely distinct way of moving that combined upright walking with swinging from branches. And they claim that this probably makes it similar to the last shared ancestor of humans and chimps.

We are not so sure. Looking at a fossilised animal’s anatomy does give us insights into the forces that would have operated on its bones and so how it commonly moved. But it’s a big leap to then make conclusions about its behaviour, or to go from the bones of an individual to the movement of a whole species. The Danuvius fossils are unusually complete, which does provide some vital new evidence. But how much does it really tell us about how our ancestors moved around?

Danuvius has long and mobile arms, habitually extended (stretched out) legs, feet which could sit flat on the floor, and big toes with a strong gripping action. This is a unique configuration. Showing that a specimen is unique is a prerequisite for classifying it as belonging to a separate, new species that deserves its own name.

But what matters in understanding the specimen is how we interpret its uniqueness. Danuvius’s discoverers go from describing its unique anatomy to proposing a unique pattern of movement. When we look at living apes, the relationship between anatomy and movement is not so simple.

The Danuvius find actually includes fossils from four individuals, one of which is nearly complete. But even a group of specimens may not be typical of a species more generally. For instance, humans are known for walking upright not climbing trees, but the Twa hunter-gatherers are regular tree climbers. These people, whose bones look just like ours, have distinctive muscles and ranges of movement well beyond the human norm. But you could not predict their behaviour from their bones.

Studying bones can tell us about movement but not behaviour. Christoph Jäckle

Every living ape uses a repertoire of movements, not just one. For example, orang-utans use clambering, upright or horizontal climbing, suspensory swinging and assisted bipedalism (walking upright using hands for support). Their movement patterns can vary in complex ways because of individual preference, body mass, age, sex or activity.

Gorillas, meanwhile, are “knuckle-walkers” and we used to think they were unable to stand fully upright. But the “walking gorilla” Ambam is famous for his “humanlike” stride.

Ultimately, two animals with very similar anatomies can move differently, and two with different anatomies can move in the same way. This means that Danuvius may not be able to serve as a model for our ancestors’ behaviour, even if its anatomy is similar to theirs.

In fact, we believe there are other plausible interpretations of Danuvius’s bones. These alternatives give a picture of a repertoire of potential movements that may have been used in different contexts.

For example, one of Danuvius‘s most striking features is the high ridge on the top of its shinbone, which the researchers say is associated with “strongly developed cruciate ligaments”, which stabilise the knee joint. The researchers link these strong stabilising ligaments with evidence for an extended hip and a foot that could be placed flat on the floor to suggest that this ape habitually stood upright. Standing upright could be a precursor to bipedal walking, so the authors suggest that this means Danuvius could have been like our last shared ancestor with other apes.

However, the cruciate ligaments also work to stabilise the knee when the leg is rotating. This only happens when the knee is bent with the foot on the ground. This is why skiers who use knee rotation to turn their bodies often injure these ligaments.

Other explanations

We have not seen the Danuvius bones in real life. But, based on the researchers’ excellent images and descriptions, an equally plausible interpretation of the pronounced ridge on the top of the shinbone could be that the animal used its knee when it was bent, with significant rotational movement.

Perhaps it hung from a branch above and used its feet to steer by gripping branches below, rather than bearing weight through the feet. This could have allowed it to capitalise on its small body weight to access fruit on fine branches. Alternatively, it could have hung from its feet, using the legs to manoeuvre and the hands to grasp.

All of these movements fit equally well with Danuvius’ bones, and could be part of its movement repertoire. So there is no way to say which movement is dominant or typical. As such, any links to our own bipedalism look much less clear-cut.

Danuvius is undoubtedly a very important fossil, with lots to teach us about how varied ape locomotion can be. But we would argue that it is not necessarily particularly like us. Instead, just like living apes, Danuvius would probably have displayed a repertoire of different movements. And we can’t say which would have been typical, because anatomy is not enough to reconstruct behaviour in full.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

Accessing healthcare is challenging for Deaf people – but the best solution isn't 'one-size-fits-all'

Author: Anouschka Foltz, Assistant Professor in English Linguistics, University of Graz; Christopher Shank, Lecturer in Linguistics, Bangor University

Elnur/ Shutterstock

For many of us, a visit to the doctor’s office can be fraught with anxiety. A persistent cough that won’t go away or an ailment we hope is nothing serious can make GP visits emotionally difficult. Now imagine that you can’t phone the doctor to make an appointment, you don’t understand what your doctor just said, or you don’t know what the medication you’ve been prescribed is for. These are all situations that many Deaf people face when accessing healthcare services.

We use Deaf (with a capital “D”) here to talk about culturally Deaf people, who were typically born deaf, and use a signed language, such as British Sign Language (BSL), as their first or preferred language. In contrast, deaf (lowercase “d”) refers to the audiological condition of deafness.

For our study, we talked to Deaf patients in Wales who communicate using BSL to learn about their experiences with healthcare services. Their experiences illustrated the challenges they face, and showed us that patients have unique needs. For example, a patient born profoundly deaf would have different needs from a person who became deaf later in life.

Health inequalities

Many Deaf communities around the world face inequalities when it comes to accessing health information and healthcare services, as health information and services are often not available in an accessible format. As a result, Deaf individuals often have low health literacy and are at greater risk of being misdiagnosed or not diagnosed at all.

Problems with healthcare access often begin when making an appointment. Because many GPs only allow appointments to be made over the phone, many of those we interviewed had to physically go to health centres to ask for an appointment. Not only is this inconvenient, booking without an interpreter could be difficult and confusing.

Interpreters are essential for patients to receive the best care. However, we heard recurring stories of interpreters not being booked for appointments, arriving late, and – in some cases – not coming at all. Before interpreters were available, one woman described going to the doctor’s office as intimidating “because the communication wasn’t there”. One participant said they always make sure an interpreter has been booked, saying: “Don’t let me down… I don’t want to be going through this again.”

These issues are worsened in emergency situations. One woman recalled an incident where despite texting 999, she didn’t get help until her daughter phoned 999 for her, acting as her interpreter throughout her entire interaction with emergency services.

Emergency situations are made worse by a lack of understanding or help from emergency services. fizkes/Shutterstock

Another person who texted 999 said:

There are all these questions that they are asking you. And all that we want is to be able to say, ‘We need an ambulance’ … Because what’s happening is we’re panicking, we don’t understand the English, there are all these questions being texted to us, it’s hard enough for us to understand it anyways without panicking at the same time.

Interviewees also recalled emergency situations where interpreters weren’t available at short notice. One Deaf woman recalled when her husband – who is also Deaf – was rushed to hospital. They received no support from staff, and no interpreter was provided to help them.

Deaf awareness and language

Many problems that our interviewees faced related to language, and a lack of Deaf awareness. Many healthcare providers didn’t seem to know that BSL is a language unrelated to English – meaning many BSL users who were born Deaf or lost hearing early in life have limited proficiency in English. One interviewee explained that many healthcare providers think all Deaf people can read, without realising that many BSL users don’t understand English – with many being given health information written in English that they couldn’t comprehend.

Interviewees wished healthcare staff were more Deaf aware, as many healthcare providers lacked understanding about Deafness. This affected the doctor-patient relationship, with many interviewees agreeing that doctors “can be a bit patronising at times” and that this patronising attitude made interactions difficult. A lack of Deaf awareness can also lead to Deaf patients feeling forgotten. Many interviewees felt that Deaf people are easily ignored, with one interviewee saying: “I always feel like Deaf people are put last.”

No ‘one-size-fits-all’ solution

New technologies and services are being offered to help Deaf patients make appointments – such as having an interpreter call the doctor’s office during a video call with the patient.

Video calling might be one solution. Monika Wisniewska/Shutterstock

Additionally, some health information is now available online in BSL. Interpreters can also be more easily available at short notice, for example in emergency situations, through video chat. Remote services particularly show promise for mental health treatments, by providing remote mental health counselling in BSL and other types of confidential services.

Because Deaf communities are small and tight-knit, patients may be wary of interacting with local Deaf counsellors or interpreters, worried about potential gossip. Several interviewees even said that they would not want a Deaf counsellor even if offered, for fear that the counsellor might gossip about them with others in the community. One interviewee suggested a mental health service with a remote online interpreter as the best solution.


Read more: How access to health care for deaf people can be improved in Kenya


The problems and potential solutions that emerged from our research are similar in other Deaf communities around the world. Though technology might offer some promising solutions, it’s important to realise that these might not work for everyone.

Patients have individual differences, needs, preferences, and cultural differences. Some patients may prefer a remote interpreter, others face-to-face interpreting – and these preferences may also depend on the type of appointment. What’s important is that Deaf patients have choice, and that new services, such as through the use of new technologies, are offered in addition to, not instead of, established health services.

The Conversation

Anouschka Foltz receives funding from Public Health Wales. The views in this article should, however, not be assumed to be the same as Public Health Wales.

Christopher Shank receives funding from Public Health Wales. The views in this article should, however, not be assumed to be the same as Public Health Wales.

Botswana is humanity's ancestral home, claims major study – well, actually …

Author: Isabelle Catherine Winder, Lecturer in Zoology, Bangor University

A study claims the first humans lived in a wetland around what is now northern Botswana. Prill/Shutterstock

A recent paper in the prestigious journal Nature claims to show that modern humans originated about 200,000 years ago in the region around northern Botswana. For a scientist like myself who studies human origins, this is exciting news. If correct, this paper would suggest that we finally know where our species comes from.

But there are actually several reasons why I and some of my colleagues are not entirely convinced. In fact, there’s good reason to believe that our species doesn’t even have a single origin.

The scientists behind the new research studied genetic data from many individuals from the KhoeSan peoples of southern Africa, who are thought to live where their ancestors have lived for hundreds of thousands of years. The researchers used their new data together with existing information about people all around the world (including other areas traditionally associated with the origins of humankind) to reconstruct in detail the branching of the human family tree.

We can think of the earliest group of humans as the base of the tree with a specific set of genetic data - a gene pool. Each different sub-group that branched off and migrated away from humanity’s original “homeland” took a subset of the genes in that gene pool with them. But most people, and so the vast majority of those genes, remained behind. This means people alive today with different subsets of our species’ genes can be grouped on different branches of the human family tree.

Groups of people with the most diverse genomes are likely to be the ones that descended directly from the original group at the base of the tree, rather than one of the small sub-groups that split from it. In this case, the researchers identified one of the groups of KhoeSan people from around northern Botswana as the very bottom of the trunk, using geographical and archaeological data to back up their conclusion.
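To see why genetic diversity points towards the base of the tree, it helps to compute a simple diversity measure over sampled sequences. The toy Python sketch below uses made-up sequences (nothing from the study) and a standard measure, nucleotide diversity: the average number of pairwise differences per site. The more diverse sample is the better candidate for sitting near the root.

# Toy sketch: nucleotide diversity as a crude proxy for how close a population
# sits to the root of the tree. Sequences are invented for illustration only.
from itertools import combinations

def nucleotide_diversity(seqs):
    """Average number of pairwise differences per site across equal-length sequences."""
    pairs = list(combinations(seqs, 2))
    if not pairs:
        return 0.0
    diffs = sum(sum(a != b for a, b in zip(s1, s2)) for s1, s2 in pairs)
    return diffs / (len(pairs) * len(seqs[0]))

population_A = ["ACGTACGT", "ACGTACGA", "ACGAACGT", "TCGTACGT"]  # more diverse sample
population_B = ["ACGTACGT", "ACGTACGT", "ACGTACGA", "ACGTACGT"]  # less diverse sample

print(nucleotide_diversity(population_A))  # ~0.19 – likelier to lie near the root
print(nucleotide_diversity(population_B))  # ~0.06 – likelier to be a derived sub-group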

Lead study author Vanessa Hayes with Juǀ’hoansi hunters in Namibia. Chris Bennett, Evolving Picture

If you compare this process to creating your own family tree, it makes sense to think you can use information about who lives where today and how everyone relates to each other to reconstruct where the family came from. For example, many of my relatives live on the lovely Channel Island of Alderney, and one branch of my family have indeed been islanders for many generations.

Of course, there’s always some uncertainty created by variations in the data. (I now live in Wales and have cousins in England.) But as long as you look for broad patterns rather than focusing on specific details, you will still get a reasonable impression. There are even some statistical techniques you can use to assess the strength of your interpretation.

But there are several problems with taking the process of building a human family tree to such a detailed conclusion, as this new research does. First, it’s important to note that the study didn’t look at the whole genome. It focused just on mitochondrial DNA, a small part of our genetic material that (unlike the rest) is almost only ever passed from mothers to children. This means it isn’t mixed up with DNA from fathers and so is easier to track across the generations.

As a result, mitochondrial DNA is commonly used to reconstruct evolutionary histories. But it only tells us part of the story. The new study doesn’t tell us the origin of the human genome but the place and time where our mitochondrial DNA appeared. As a string of just 16,569 genetic letters out of over 3.3 billion in each of our cells, mitochondrial DNA is a very tiny part of us.

Other DNA

The fact that mitochondrial DNA comes almost only ever from mothers also means the story of its inheritance is much simpler than the histories of other genes. Different parts of our genetic material may therefore have different origins, and have followed different paths to get to us. If we did the same reconstruction using Y chromosomes (passed only from father to son) or whole genomes, we’d get a different answer to our question about where and when humans originated.

There is actually a debate over whether the woman from whom all our mitochondrial DNA today descends (“mitochondrial Eve”) could ever have even met the man from whom all living men’s Y-chromosomes descend (“Y-chromosome Adam”). By some estimates, they may have lived as much as 100,000 years apart.

And all of this ignores the possibility that other species or populations may also have contributed DNA to modern humans. After this mitochondrial “origin”, our species interbred with Neanderthals and a group called the Denisovans. There’s even evidence that these two interbred with one another, at about the same time as they were hybridising with us. Earlier modern humans probably also interbred with other human species living alongside them in other time periods.

All of this, of course, suggests that modern human history – like the history of modern primates – was much more than a simple tree with straight lines of inheritance. It’s much more likely that our distant ancestors interbred with other species and populations to form a braiding stream of gene pools than that we form a nice neat tree that can be reconstructed genetically. And if that’s true, we may not even have a single origin we can hope to reconstruct.

The Conversation

Isabelle Catherine Winder received funding from the European Research Council (ERC) as part of the DISPERSE project (2011-2016). It was as part of her work as a post-doc on this project that she wrote the paper about reticulation and the human past cited in this article.

Lab-grown mini brains: we can't dismiss the possibility that they could one day outsmart us

Author: Guillaume Thierry, Professor of Cognitive Neuroscience, Bangor University

It may not be science fiction anymore. 80's Child/Shutterstock

The cutting-edge method of growing clusters of cells that organise themselves into mini versions of human brains in the lab is gathering more and more attention. These “brain organoids”, made from stem cells, offer unparalleled insights into the human brain, which is notoriously difficult to study.

But some researchers are worried that a form of consciousness might arise in such mini-brains, which are sometimes transplanted into animals. They could at least be sentient to the extent of experiencing pain and suffering from being trapped. If this is true – and before we consider how likely it is – it is absolutely clear in my mind that we must exert a supreme level of caution when considering this issue.

Brain organoids are currently very simple compared to human brains and can’t be conscious in the same way. Due to a lack of blood supply, they do not reach sizes larger than around five or six millimetres. That said, they have been found to produce brain waves that are similar to those in premature babies. A study has shown that they can also grow neural networks that respond to light.

There are also signs that such organoids can link up with other organs and receptors in animals. That means that they not only have a prospect of becoming sentient, they also have the potential to communicate with the external world, by collecting sensory information. Perhaps they can one day actually respond through sound devices or digital output.

As a cognitive neuroscientist, I am happy to conceive that an organoid maintained alive for a long time, with a constant supply of life-essential nutrients, could eventually become sentient and maybe even fully conscious.

Time to panic?

This isn’t the first time biological science has thrown up ethical questions. Gender reassignment shocked many in the past, but, whatever your beliefs and moral convictions, sex change narrowly concerns the individual undergoing the procedure, with limited or no biological impact on their entourage and descendants.

Genetic manipulation of embryos, in contrast, raised alert levels to hot red, given the very high likelihood of genetic modifications being heritable and potentially changing the genetic makeup of the population down the line. This is why successful operations of this kind conducted by Chinese scientist He Jiankui raised very strong objections worldwide.

Human cerebral organoids range in size from a poppy seed to a small pea. NIH/Flickr

But creating mini brains inside animals, or even worse, within an artificial biological environment, should send us all frantically panicking. In my opinion, the ethical implications go well beyond determining whether we may be creating a suffering individual. If we are creating a brain – however small – we are creating a system with a capacity to process information and, down the line, given enough time and input, potentially the ability to think.

Some form of consciousness is ubiquitous in the animal world, and we, as humans, are obviously on top of the scale of complexity. While we don’t know exactly what consciousness is, we still worry that human-designed AI may develop some form of it. But thought and emotions are likely to be emergent properties of our neurons organised into networks through development, and it is much more likely it could arise in an organoid than in a robot. This may be a primitive form of consciousness or even a full blown version of it, provided it receives input from the external world and finds ways to interact with it.

In theory, mini-brains could be grown forever in a laboratory – whether it is legal or not – increasing in complexity and power for as long as their life-support system can provide them with oxygen and vital nutrients. This is the case for the cancer cells of a woman called Henrietta Lacks, which are alive more than 60 years after her death and multiplying today in hundreds of thousands of labs throughout the world.

Disembodied super intelligence?

But if brains are cultivated in the laboratory in such conditions, without time limit, could they ever develop a form of consciousness that surpasses human capacity? As I see it, why not?

And if they did, would we be able to tell? What if such a new form of mind decided to keep us, humans, in the dark about their existence – be it only to secure enough time to take control of their life-support system and ensure that they are safe?

When I was an adolescent, I often had scary dreams of the world being taken over by a giant computer network. I still have that worry today, and it has partly become true. But the scare of a biological super-brain taking over is now much greater in my mind. Keep in mind that such a new organism would not have to worry about its body becoming old and dying, because it would not have a body.

This may sound like the first lines of a bad science fiction plot, but I don’t see reasons to dismiss these ideas as forever unrealistic.

The point is that we have to remain vigilant, especially given that this could all happen without us noticing. You just have to consider how difficult it is to assess whether someone is lying when testifying in court to realise that we will not have an easy task trying to work out the hidden thoughts of a lab grown mini-brain.

Slowing the research down by controlling organoid size and life span, or widely agreeing a moratorium before we reach a point of no return, would make good sense. But unfortunately, the growing ubiquity of biological labs and equipment will make enforcement incredibly difficult – as we’ve seen with genetic embryo editing.

It would be an understatement to say that I share the worries of some of my colleagues working in the field of cellular medicine. The toughest question that we can ask regarding these mesmerising possibilities, and which also applies to genetic manipulations of embryos, is: can we even stop this?

The Conversation

Guillaume Thierry does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Researchers invent device that generates light from the cold night sky – here's what it means for millions living off grid

Author: Jeff Kettle, ‎Lecturer in Electronic Engineering, Bangor University

More than 1.7 billion people worldwide still don’t have a reliable electricity connection. For many of them, solar power is their potential energy saviour – at least when the sun is shining.

Technology to store excess solar power during the dark hours is improving. But what if we could generate electricity from the cold night sky? Researchers at Stanford and UCLA have just done exactly that. Don’t expect it to become solar’s dark twin just yet, but it could play an important role in the energy demands of the future.

The technology itself is nothing new – in fact, the principles behind it were discovered almost 200 years ago. The device, called a thermoelectric generator, uses temperature differences between two metal plates to generate electricity through something called the Seebeck effect. The greater the temperature difference, the greater the power generated.

We already use this technology to convert waste heat from sources such as industrial machinery and car engines. The new research applies the same technique to harness the temperature difference between the outside air and a surface which faces the sky.

The device’s two plates sit on top of one another. The top plate faces the cold air of the open night sky, while the bottom plate is kept enclosed in warmer air, facing the ground. Heat always radiates to cooler environments, and the cooler the environment, the faster heat is radiated. Because the open night sky is cooler than the enclosed air surrounding the bottom plate, the top plate loses heat faster than the bottom plate. This generates a temperature difference between the two plates – in this study, between four and five degrees celsius.

Now at different temperatures, heat also starts to travel from the hotter bottom plate to the cooler top plate. The device harnesses this flow of heat to generate electricity.
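As a rough numerical illustration of the Seebeck relation described above (a sketch with assumed module parameters, not figures from the study): the open-circuit voltage of a thermoelectric module is roughly its effective Seebeck coefficient multiplied by the temperature difference between the plates, and the maximum power it can deliver into a matched load is that voltage squared divided by four times its internal resistance.

# Rough Seebeck-effect sketch with assumed (illustrative) module parameters.
seebeck_V_per_K = 0.05         # effective module Seebeck coefficient (assumed, ~50 mV/K)
internal_resistance_ohm = 5.0  # module internal resistance (assumed)
delta_T = 4.5                  # temperature difference between the plates, from the article (4-5 C)

open_circuit_voltage = seebeck_V_per_K * delta_T                        # volts
max_power_W = open_circuit_voltage**2 / (4 * internal_resistance_ohm)  # matched load

# Note: under this simple model, power grows with the square of the temperature difference.
print(f"Open-circuit voltage: {open_circuit_voltage * 1000:.0f} mV")
print(f"Maximum power into a matched load: {max_power_W * 1000:.1f} mW")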

At this small temperature difference, power is limited. The researchers’ device produced just 25 milliwatts per meter squared (mW/m²) – enough to power a small LED reading light. By contrast, a solar panel of the same size would be enough to sustain three 32" LED TVs – that’s 4,000 times more power.

Greater potential

In drier climates, the device could perform better. This is because in wetter climates, any moisture in the air condenses on the downward-facing bottom plate, cooling it and reducing the temperature difference between the plates. In the dry Mediterranean, for example, the device could produce 20 times more power than it did in the US.

The researchers’ device harnessed the cold night sky to power this small light. Aaswath Raman, Author provided

The device itself could also be refined. For example, manufacturers could apply a coating that allows the device’s surface to reach a temperature lower than the surrounding environment during the day, so that it is even cooler at night. They could also use corrugated instead of flat plates, which are more efficient at capturing and emitting radiation. These and other feasible tech upgrades could raise the power output by as much as ten times.
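Putting the numbers from the last few paragraphs together gives a back-of-envelope sense of the potential. The baseline output and the two multipliers are taken from the article; combining them multiplicatively, and the ten-hour night, are our assumptions.

# Back-of-envelope scaling of the reported 25 mW/m2 output (assumptions noted).
baseline_mW_per_m2 = 25      # measured output reported by the researchers
dry_climate_factor = 20      # potential gain in a dry climate (from the article)
tech_upgrade_factor = 10     # potential gain from coatings and corrugated plates (from the article)
night_hours = 10             # assumed length of a night

best_case_mW_per_m2 = baseline_mW_per_m2 * dry_climate_factor * tech_upgrade_factor
print(f"Best case: {best_case_mW_per_m2 / 1000:.1f} W per square metre")  # ~5 W/m2

# Energy harvested per square metre over one night, in watt-hours:
print(f"Baseline: {baseline_mW_per_m2 * night_hours / 1000:.2f} Wh; "
      f"best case: {best_case_mW_per_m2 * night_hours / 1000:.0f} Wh")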

With the efficiency of everyday technologies continually improving, thermoelectric devices could play an important role in powering society before long. Colleagues of mine are developing technology that connects household devices to the internet and each other – the so-called Internet of Things – at power levels of just 1.5 milliwatts per meter squared (mW/m²), a level of energy firmly within the reach of an enhanced device in dry climates.

By connecting a series of thermoelectric generators mounted on the walls of homes, the technology could noticeably lighten the energy load of houses. It’s feasible, too – the technology could easily be mass produced, and sold cheaply enough to provide a viable energy source in locations where it is too expensive or impractical to connect with mains electricity.

Of course, it’s unlikely that thermoelectric devices will ever replace battery storage as the nighttime saviour of solar energy. Batteries now cost a quarter of what they did a decade ago, and solar systems with battery storage are already becoming affordable ways to meet small-scale domestic and industrial energy needs.

But the technology could be a useful complement to solar power and battery storage – and a vital back-up energy source for those living off-grid when batteries fail or panels break. When everything goes wrong on the chilliest of nights, those with thermoelectric devices to power a heater would at least have one thing to thank the freezing night air for.

The Conversation

Jeff Kettle does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Being left-handed doesn’t mean you are right-brain dominant – so what does it mean?

Author: Emma Karlsson, Postdoctoral researcher in Cognitive Neuroscience, Bangor University

Wachiwit/Shutterstock

There are many claims about what being left-handed means and whether it affects the kind of person you are – but the truth is that it remains rather puzzling. Myths about left-handedness surface every year, yet researchers have still not fully uncovered what it means to be left-handed – using the left hand more often than the right for everyday activities.

So why are some people left-handed? Honestly, we don’t fully understand that either. What we do know is that left-handers make up only around 10% of the world’s population – but this is not split evenly between the sexes.

Within that figure, around 12% of men are left-handed compared with only around 8% of women. Some people are surprised by this 90:10 split and wonder why they turned out to be left-handed.

But the more interesting question is why handedness is not simply down to chance. Why isn’t the split 50:50? It isn’t about the direction in which we write – if it were, left-handedness would dominate in countries whose languages are written from right to left, and that isn’t the case. Even genetically it is odd – only around 25% of left-handers have two left-handed parents.


Read more: How children's brains develop to make them right or left handed


Left-handedness has been linked to all sorts of bad outcomes, such as poor health and early death – but neither link is real. The latter is largely explained by older generations having been forced to switch and use their right hands, which made it look as though there were fewer left-handers in the past. The former, however eye-catching a headline it might make, is simply wrong.

Positive myths about left-handedness abound too. Left-handers are said to be more creative because most of them supposedly use their “right brain”. This is perhaps one of the most persistent myths about handedness and the brain. But however appealing the idea may be (and perhaps disappointing for left-handers still waiting to wake up one day with the talents of Leonardo da Vinci), the notion that each of us has a “dominant side of the brain” that defines our personality and decision-making is also wrong.

Brain lateralisation and handedness

It is true, however, that the right hemisphere of the brain controls the left side of the body, and the left hemisphere controls the right side – and that the hemispheres really do have their own specialisms.

For example, language is usually processed a little more in the left hemisphere, and face recognition a little more in the right. The idea that each hemisphere is specialised for certain skills is known as brain lateralisation. The hemispheres do not work in isolation, though: a thick band of nerve fibres – called the corpus callosum – connects the two sides of the brain.

Interestingly, there are some known differences in these specialisations between right-handers and left-handers. For example, it is often said that around 95% of right-handers are “left-brain dominant”. This is not the same as the “left-brained” claim above; it actually refers to the early finding that most right-handers rely more on the left hemisphere for speech and language. It was assumed that the reverse would be true for left-handers. But that is not the case. In fact, around 70% of left-handers also process language more in the left hemisphere. Why this figure is lower, rather than reversed, is not yet known.


Read more: Why is life left-handed? The answer is in the stars


Researchers have found many other brain specialisations, or “asymmetries”, beyond language. Many of them sit in the right hemisphere – at least in most right-handers – including things such as face processing, spatial skills and the perception of emotion. But these are less well understood, perhaps because researchers have wrongly assumed that they all depend on whichever hemisphere is not dominant for language.

In fact, this assumption, together with the recognition that a small number of left-handers have right-hemisphere dominance for language, has meant that left-handers have been ignored – or worse, actively avoided – in many studies of the brain, because researchers assumed that, just as with language, all the other asymmetries would be reduced.

How certain functions are lateralised (specialised) in the brain can genuinely affect how we perceive things. We study this using simple perceptual tests. For example, in a recent study we presented pictures of faces constructed so that one half of the face shows one emotion and the other half shows a different emotion, to large numbers of left-handed and right-handed people.

Usually, people tend to see the emotion shown on the left side of the face, which is believed to reflect specialisation in the right hemisphere. This is linked to the way the visual field is processed, which creates a bias towards the left side of space. A bias to the left is taken to represent processing by the right hemisphere, while a bias to the right is taken to represent processing by the left hemisphere. We also presented different kinds of images and sounds, to examine several other specialisations.
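One simple way to quantify this kind of bias in a chimeric-faces task (a generic sketch, not necessarily the exact measure used in the study) is a laterality index computed over many trials: the difference between left-side and right-side choices divided by the total.

# Sketch of a simple laterality index for a chimeric-faces task.
# A generic asymmetry measure, not necessarily the one used in the study.

def laterality_index(left_choices, right_choices):
    """Positive values indicate a leftward bias (right-hemisphere processing);
    negative values indicate a rightward bias (left-hemisphere processing)."""
    total = left_choices + right_choices
    return (left_choices - right_choices) / total if total else 0.0

# Hypothetical participants: counts of trials where the reported emotion was
# the one shown on the left versus the right half of the face.
print(laterality_index(70, 30))  #  0.4 -> typical leftward bias
print(laterality_index(35, 65))  # -0.3 -> rightward bias, seen more often in some left-handers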

Our findings show that some types of specialisation, including face processing, do seem to follow the intriguing pattern seen for language (that is, more left-handers tend to see the emotion shown on the right side of the face). But when we looked at biases in what people attend to, we found no difference in the patterns of brain processing between right-handers and left-handers. These results suggest that, while there is a relationship between handedness and some brain specialisations, it goes no further than that.

Left-handers are extremely important in new experiments like these. Not only can they help us understand what makes them different, they can also help us solve many long-standing neuropsychological mysteries about the brain.

Franklin Ronaldo translated this article from English.

The Conversation

Emma Karlsson does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Are the Amazon fires a crime against humanity?

Author: Tara Smith, Lecturer in Law, Bangor University

Fires in the Brazilian Amazon have jumped 84% during President Jair Bolsonaro’s first year in office and in July 2019 alone, an area of rainforest the size of Manhattan was lost every day. The Amazon fires may seem beyond human control, but they’re not beyond human culpability.

Bolsonaro ran for president promising to “integrate the Amazon into the Brazilian economy”. Once elected, he slashed the Brazilian environmental protection agency budget by 95% and relaxed safeguards for mining projects on indigenous lands. Farmers cited their support for Bolsonaro’s approach as they set fires to clear rainforest for cattle grazing.

Bolsonaro’s vandalism will be most painful for the indigenous people who call the Amazon home. But destruction of the world’s largest rainforest may accelerate climate change and so cause further suffering worldwide. For that reason, Brazil’s former environment minister, Marina Silva, called the Amazon fires a crime against humanity.

From a legal perspective, this might be a helpful way of prosecuting environmental destruction. Crimes against humanity are international crimes, like genocide and war crimes, which are considered to harm both the immediate victims and humanity as a whole. As such, all of humankind has an interest in their punishment and deterrence.

Historical precedent

Crimes against humanity were first classified as an international crime during the Nuremberg trials that followed World War II. Two German Generals, Alfred Jodl and Lothar Rendulic, were charged with war crimes for implementing scorched earth policies in Finland and Norway. No one was charged with crimes against humanity for causing the unprecedented environmental damage that scarred the post-war landscapes though.

Our understanding of the Earth’s ecology has matured since then, yet so has our capacity to pollute and destroy. It’s now clear that the consequences of environmental destruction don’t stop at national borders. All humanity is placed in jeopardy when burning rainforests flood the atmosphere with CO₂ and exacerbate climate change.

Holding someone like Bolsonaro to account for this by charging him with crimes against humanity would be a world first. If successful, it could set a precedent which might stimulate more aggressive legal action against environmental crimes. But do the Amazon fires fit the criteria?


Read more: Why the International Criminal Court is right to focus on the environment


Prosecuting crimes against humanity requires proof of widespread and systematic attacks against a civilian population. If a specific part of the global population is persecuted, this is an affront to the global conscience. In the same way, domestic crimes are an affront to the population of the state in which they occur.

When prosecuting prominent Nazis in Nuremberg, the US chief prosecutor, Robert Jackson, argued that crimes against humanity are committed by individuals, not abstract entities. Only by holding individuals accountable for their actions can widespread atrocities be deterred in future.

Robert Jackson speaks at the Nuremberg trials in 1945. Raymond D'Addario/Wikipedia

The International Criminal Court’s Chief Prosecutor, Fatou Bensouda, has promised to apply the approach first developed in Nuremberg to prosecute individuals for international crimes that result in significant environmental damage. Her recommendations don’t create new environmental crimes, such as “ecocide”, which would punish severe environmental damage as a crime in itself. They do signal, however, a growing appreciation of the role that environmental damage plays in causing harm and suffering to people.

The International Criminal Court was asked in 2014 to open an investigation into allegations of land-grabbing by the Cambodian government. In Cambodia, large corporations and investment firms were being given prime agricultural land by the government, displacing up to 770,000 Cambodians from 4m hectares of land. Prosecuting these actions as crimes against humanity would be a positive first step towards holding individuals like Bolsonaro accountable.

But given the global consequences of the Amazon fires, could environmental destruction of this nature be legally considered a crime against all humanity? Defining it as such would be unprecedented. The same charge could apply to many politicians and business people. It’s been argued that oil and gas executives who’ve funded disinformation about climate change for decades should be chief among them.

Charging individuals for environmental crimes against humanity could be an effective deterrent. But whether the law will develop in time to prosecute people like Bolsonaro is, as yet, uncertain. Until the International Criminal Court prosecutes individuals for crimes against humanity based on their environmental damage, holding individuals criminally accountable for climate change remains unlikely.


This article is part of The Covering Climate Now series
This is a concerted effort among news organisations to put the climate crisis at the forefront of our coverage. This article is published under a Creative Commons license and can be reproduced for free. The Conversation also runs Imagine, a newsletter in which academics explore how the world can rise to the challenge of climate change.


The Conversation

Tara Smith does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Cilia: cell's long-overlooked antenna that can drive cancer – or stop it in its tracks

Author: Angharad Mostyn Wilkie, PhD Researcher in Oncology and Cancer Biology, Bangor University

Motile cilia are antenna-like projections on our body's cells. Author provided

You might know that our lungs are lined with hair-like projections called motile cilia. These are tiny microtubule structures that appear on the surface of some cells or tissues. They can be found lining your nose and respiratory tract too, and along the fallopian tubes and vas deferens in the female and male reproductive tracts. They move from side to side to sweep away any micro-organisms, fluids, and dead cells in the respiratory system, and to help transport the sperm and egg in the reproductive system.

Odds are, however, that you haven’t heard about motile cilia’s arguably more important cousin, primary cilia.

Motile cilia stand out on the right of this image of stained respiratory epithelium cells. Jose Luis Calvo/Shutterstock

Primary cilia are on virtually all cells in the body but for a long time they were considered to be a non-functional vestigial part of the cell. To add to their mystery, they aren’t present all the time. They project from the centrosome – the part of the cell that pulls it apart during division – and so only appear at certain stages of the cell cycle.

The first sign that these little structures were important came with the realisation that disruption to either their formation or function could result in genetic conditions known as ciliopathies. There are around 20 different ciliopathies, and they affect about one in every 1,000 people. These are often disabling and life-threatening conditions, affecting multiple organ systems. They can cause blindness, deafness, chronic respiratory infections, kidney disease, heart disease, infertility, obesity, diabetes and more. Symptoms and severity vary widely, making it hard to classify and diagnose these disorders.

So how can malfunction of a little organelle which was originally thought to be useless result in such a wide variety of devastating symptoms? Well, it is now known that not only do cilia look like little antennas, they act like them too. Cilia are packed full of proteins that detect messenger signals from other cells or the surrounding environment. These signals are then transmitted into the cell’s nucleus to activate a response – responses that are important for the regulation of several essential signalling pathways.

When this was realised, researchers began to ask whether changes in the structure or function of cilia, changes in the levels of cilia-associated proteins, or the movement of these proteins to a different part of the cell could occur due to – or potentially drive – other conditions. Given that scientists already knew that many of the pathways regulated by cilia could drive cancer progression, looking at the relationship between cilia and cancer was a logical step.

Cilia, signals and cancer

Researchers discovered that in many cancers – including renal cell, ovarian, prostate, breast and pancreatic – there was a distinct lack of primary cilia in the cancerous cells compared to the healthy surrounding cells. It could be that the loss of cilia was just a response to the cancer, disrupting normal cell regulation – but what if it was actually driving the cancer?

Melanomas are one of the most aggressive types of tumours in humans. Some cancerous melanoma cells express higher levels of a protein called EZH2 than healthy cells. EZH2 suppresses cilia genes, so malignant cells have fewer cilia. This loss of cilia activates some of the carcinogenic signalling pathways, resulting in aggressive metastatic melanoma.

However, loss of cilia does not have the same effect in all cancers. In one type of pancreatic cancer, the presence – not absence – of cilia correlates with increased metastasis and decreased patient survival.

Even within the same cancer the picture is unclear. Medulloblastomas are the most common childhood brain tumour. Their development can be driven by one of the signalling pathways regulated by the cilia: the hedgehog signalling pathway. This pathway is active during embryo development but dormant afterwards. However, in many cancers (including medulloblastomas) hedgehog signalling is reactivated, and it can drive cancer growth. But studies into the effects of cilia in medulloblastomas have found that cilia can both drive and protect against this cancer, depending on how the hedgehog pathway is initially disrupted.

As such strong links have been found between cilia and cancer, researchers have also been looking into whether treatment which targets this structure could be used for cancer therapies. One of the problems faced when treating cancers is the development of resistance to anti-cancer drugs. Many of these drugs’ targets are part of the signalling pathways regulated by cilia, but scientists have found that blocking the growth of cilia in drug-resistant cancer cell lines could restore sensitivity to a treatment.

What was once thought to be just a leftover part of the cell from our evolutionary past has proven to be integral to our understanding and treatment of cancer. The hope is that further research into cilia will help untangle the complex relationship between them and cancer, providing new insights into some of the drivers of cancer as well as new targets for cancer treatment.

The Conversation

Angharad Mostyn Wilkie receives funding from the North West Cancer Research Institute.

How to become a great impostor

Author: Tim Holmes, Lecturer in Criminology & Criminal Justice, Bangor University

Ferdinand Waldo Demara

Unlike other icons who have appeared on the front of Life magazine, Ferdinand Waldo Demara was not famed as an astronaut, actor, hero or politician. In fact, his 23-year career was rather varied. He was, among other things, a doctor, professor, prison warden and monk. Demara was not some kind of genius either – he actually left school without any qualifications. Rather, he was “The Great Impostor”, a charming rogue who tricked his way to notoriety.

My research speciality is crimes by deception and Demara is a man I find particularly interesting. For, unlike other notorious con artists, impostors and fraudsters, he did not steal and defraud for the money alone. Demara’s goal was to attain prestige and status. As his biographer Robert Crichton noted in 1959, “Since his aim was to do good, anything he did to do it was justified. With Demara the end always justifies the means.”

Though we know what he did, and his motivations, there is still one big question that has been left unanswered – why did people believe him? While we don’t have accounts from everyone who encountered Demara, my investigation into his techniques has uncovered some of the secrets of how he managed to keep his high-level cons going for so long.


Read more: Why do we fall for scams?


Upon leaving education in 1935, Demara lacked the skills to succeed in the organisations he was drawn to. He wanted the status that came with being a priest, an academic or a military officer, but didn’t have the patience to achieve the necessary qualifications. And so his life of deception started. At just 16 years old, with a desire to become a member of a silent order of Trappist monks, Demara ran away from his home in Lawrence, Massachusetts, lying about his age to gain entry.

When he was found by his parents he was allowed to stay, as they believed he would eventually give up. Demara remained with the monks long enough to gain his hood and habit, but was ultimately forced out of the monastery at the age of 18 as his fellow monks felt he lacked the right temperament.

Demara then attempted to join other orders, including the Brothers of Charity children’s home in West Newbury, Massachusetts, but again failed to follow the rules. In response, he stole funds and a car from the home, and joined the army in 1941, at the age of 19. But, as it turned out, the army was not for him either. He disliked military life so much that he stole a friend’s identity and fled, eventually deciding to join the navy instead.

From monk to medicine

While in the navy, Demara was accepted for medical training. He passed the basic course but due to his lack of education was not allowed to advance. So, in order to get into the medical school, Demara created his first set of fake documents indicating he already had the needed college qualifications. He was so pleased with his creations that he decided to skip applying to medical school and tried to gain a commission as an officer instead. When his falsified papers were discovered, Demara faked his own death and went on the run again.


Read more: The men who impersonate military personnel for stolen glory


In 1942, Demara took the identity of Dr Robert Linton French, a former navy officer and psychologist. Demara found French’s details in an old college prospectus which had profiled French when he worked there. He worked as a college teacher under French’s name until the end of the war in 1945, but was eventually caught, and the authorities decided to prosecute him for desertion.

Due to good behaviour, he served only 18 months of the six-year sentence handed to him, but upon his release he went back to his old ways. This time Demara created a new identity, Cecil Hamann, and enrolled at Northeastern University. Tiring of the effort and time needed to complete his law degree, Demara awarded himself a PhD and, under the persona of “Dr” Cecil Hamann, took up another teaching post at a Christian college, The Brother of Instruction, in Maine in the summer of 1950.

It was here that Demara met and befriended Canadian doctor Joseph Cyr, who was moving to the US to set up a medical practice. Needing help with the immigration paperwork, Cyr gave all his identifying documents to Demara, who offered to fill in the application for him. After the two men parted ways, Demara took copies of Cyr’s paperwork and moved up to Canada. Pretending to be Dr Cyr, Demara approached the Canadian Navy with an ultimatum: make me an officer or I will join the army. Not wanting to lose a trained doctor, the navy fast-tracked his application.

As a commissioned officer during the Korean war, Demara first served at Stadacona naval base, where he convinced other doctors to contribute to a medical booklet he claimed to be producing for lumberjacks living in remote parts of Canada. With this booklet and the knowledge gained from his time in the US Navy, Demara was able to pass successfully as Dr Cyr.

A military marvel

Demara worked aboard HMCS Cayuga as ship’s doctor (pictured in 1954).

In 1951, Demara was transferred to be ship’s doctor on the destroyer HMCS Cayuga. Stationed off the coast of Korea, Demara relied on his sick berth attendant, petty officer Bob Horchin, to handle all minor injuries and complaints. Horchin was pleased to have a superior officer who did not interfere in his work and who empowered him to take on more responsibilities.

Though he very successfully passed as a doctor aboard the Cayuga, Demara’s time there came to a dramatic end after three Korean refugees were brought on board in need of medical attention. Relying on textbooks and Horchin, Demara successfully treated all three – even completing the amputation of one man’s leg. He was recommended for a commendation for his actions, and the story was reported in the press, where the real Dr Cyr’s mother saw a picture of Demara impersonating her son. Wanting to avoid further public scrutiny and scandal, the Canadian government elected to simply deport Demara back to the US in November 1951.

After he returned to America, there were news reports on his actions, and Demara sold his story to Life magazine in 1952. In his biography, Demara notes that he spent the time after his return to the US using his own name and working in different short-term jobs. While he enjoyed the prestige he had gained in his impostor roles, he started to dislike life as Demara, “the great impostor”, gaining weight and developing a drinking problem.

In 1955, Demara somehow acquired the credentials of a Ben W. Jones and disappeared again. As Jones, Demara began working as a guard at Huntsville Prison in Texas, and was eventually put in charge of the maximum security wing that housed the most dangerous prisoners. In 1956, an educational programme that provided prisoners with magazines to read led to Demara’s discovery once more. One of the prisoners found the Life magazine article and showed the cover picture of Demara to prison officials. Although he categorically denied to the prison warden that he was Demara, and pointed to the positive feedback he had received from prison officials and inmates about his performance there, Demara chose to run. In 1957, he was caught in North Haven, Maine, and served a six-month prison sentence for his actions.

After his release he made several television appearances, including on the game show You Bet Your Life, and had a cameo in the horror film The Hypnotic Eye. From this point until his death in 1981, Demara would struggle to escape his past notoriety. He eventually returned to the church, being ordained under his own name, and worked as a counsellor at a hospital in California.

How Demara did it

According to biographer Crichton, Demara had an impressive memory, and through his impersonations accumulated a wealth of knowledge on different topics. This, coupled with charisma and good instincts about human nature, helped him trick all those around him. Studies of professional criminals often observe that con artists are skilled actors and that a con game is essentially an elaborate performance where only the victim is unaware of what is really going on.

Demara also capitalised on workplace habits and social conventions. He is a prime example of why recruiters shouldn’t rely on paper qualifications over demonstrations of skill. And his habit of allowing subordinates to do things he should be doing meant Demara’s ability went untested, while at the same time engendering appreciation from junior staff.

He observed of his time in academia that there was always opportunity to gain authority and power in an organisation. There were ways to set himself up as an authority figure without challenging or threatening others by “expanding into the power vacuum”. He would set up his own committees, for example, rather than joining established groups of academics. Demara says in the biography that starting fresh committees and initiatives often gave him the cover he needed to avoid conflict and scrutiny.

…there’s no competition, no past standards to measure you by. How can anyone tell you aren’t running a top outfit? And then there’s no past laws or rules or precedents to hold you down or limit you. Make your own rules and interpretations. Nothing like it. Remember it, expand into the power vacuum.

Working from a position of authority as the head of his own committees further entrenched Demara in professions he was not qualified for. It can be argued that Demara’s most impressive attempt at expansion into the “power vacuum” occurred when teaching as Dr Hamann.

Hamann was considered a prestigious appointee for a small Christian college. Claiming to be a cancer researcher, Demara proposed converting the college into a state-approved university where he would be chancellor. The plans proceeded but Demara was not given a prominent role in the new institution. It was then that Demara decided to take Cyr’s identity and leave for Canada. If Demara had succeeded in becoming chancellor of the new LaMennais College (which would go on to become Walsh University) it is conceivable that he would have been able to avoid scrutiny or questioning thanks to his position of authority.

Inherently trustworthy

Other notable serial impostors and fakes have relied on techniques similar to Demara’s. Frank Abagnale also recognised the reliance people in large organisations placed on paperwork and looking the part. This insight allowed him, at 16, to pass as a 25-year-old airline pilot for Pan Am Airways, as portrayed in the film Catch Me If You Can.

More recently, Gene Morrison was jailed after it was discovered that he had spent 26 years running a fake forensic science business in the UK. After buying a PhD online, Morrison set up Criminal and Forensic Investigations Bureau (CFIB) and gave expert evidence in over 700 criminal and civil cases from 1977 to 2005. Just like Demara used others to do his work, Morrison subcontracted other forensic experts and then presented the findings in court as his own.


Read more: How to get away with fraud: the successful techniques of scamming


Marketing and psychology expert Robert Cialdini’s work on the techniques of persuasion in business might offer insight into how people like Demara can succeed, and why it is that others believe them. Cialdini found that there are six universal principles of influence that are used to persuade business professionals: reciprocity, consistency, social proof, getting people to like you, authority and scarcity.

Demara used all of these skills at various points in his impersonations. He would give power to subordinates to hide his lack of knowledge and enable his impersonations (reciprocity). By using other people’s credentials, he was able to manipulate organisations into accepting him, using their own regulations against them (consistency and social proof). Demara’s success in his impersonations points to how likeable he was and how much of an authority he appeared to be. By impersonating academics and professionals, Demara focused on career paths where at the time there was high demand and a degree of scarcity, too.

Laid bare, one can see how Demara tricked his unsuspecting colleagues into believing his lies through manipulation. Yet within this it is interesting to also consider how often we all rely on gut instinct and the appearance of ability rather than witnessed proof. Our gut instinct is built on five questions we ask ourselves when presented with information: does a fact come from a credible source? Do others believe it? Is there plenty of evidence to support it? Is it compatible with what I believe? Does it tell a good story?

Researchers of social trust and solidarity argue that people also have a fundamental need to trust strangers to tell the truth in order for society to function. As sociologist Niklas Luhmann said, “A complete absence of trust would prevent (one) even getting up in the morning.” Trust in people is, in a sense, a default setting; mistrust requires a loss of confidence in someone, which must be sparked by some indicator of a lie.

It was only after the prisoner showed the Life article to the Huntsville Prison warden that officials began to ask questions. Until this point, Demara had offered everything his colleagues needed to believe he was a capable member of staff. People accepted Demara’s claims because it felt right to believe him. He had built a rapport and influenced people’s views of who he was and what he could do.


Read more: Five psychological reasons why people fall for scams – and how to avoid them


Another factor to consider when asking why people believed Demara is the rising dependency on paper proofs of identity at that time. Following World War II, America shifted towards a greater reliance on paper documentation as social and economic mobility changed. Underlying Demara’s impersonations, and the actions of many modern con artists, is the trust we have long placed in proofs of identity: first paper documents such as birth certificates and ID cards and, more recently, digital forms of identification.

As his preoccupation was more with prestige than money, it can be argued that Demara had a harder time than other impostors who were driven only by profit. Demara stood out as a surgeon and a prison guard; he was a good fake and influencer, but the added attention that came from his attempts at multiple high-profile professions, along with media coverage, led to his downfall. Abagnale similarly had issues with the attention that came with pretending to be an airline pilot, lawyer and surgeon. In contrast, Morrison stuck to his one impersonation for years, avoiding detection and making money until the quality of his work was investigated.

The trick to being a good impostor, it appears, is essentially to be friendly, have access to a history of being trusted by others, have the right paperwork, build others’ confidence in you and understand the social environment you are entering. Although, when Demara was asked to explain why he committed his crimes, he simply said: “Rascality, pure rascality”.

The Conversation

Tim Holmes does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.