Research stories

On our News pages

Our Research News pages contain an abundance of research-related articles, covering recent research output and topical issues.

On TheConversation.com

Our researchers publish across a wide range of subjects and topics and across a range of news platforms. The articles below are a few of those published on TheConversation.com.

Can African smallholders farm themselves out of poverty?

Authors: David Harris, Honorary Lecturer, Bangor University; Jordan Chamberlin, Spatial Economist, International Maize and Wheat Improvement Center (CIMMYT); Kai Mausch, Agricultural Economist, World Agroforestry Centre (ICRAF)

Hard work and poor prospects for smallholder farming households in Africa. Swathi Sridharan (formerly ICRISAT, Bulawayo), CC BY-SA

A great deal of research on agriculture in Africa is organised around the premise that intensification can take smallholder farmers out of poverty. The emphasis in programming is often on technologies that increase farm productivity and the management practices that go along with them.

Yet the returns on such technologies are not often evaluated within a whole-farm context. And – critically – the returns for smallholders with very little available land have not received sufficient attention.

To support smallholders in their efforts to escape poverty by adopting modern crop varieties, inputs and management practices, it’s necessary to know if their current resources – particularly their farms – are large enough to generate the requisite value.

Two questions can frame this. How big do farms need to be to enable farmers to escape poverty by farming alone? And what alternative avenues can lead them to sustainable development?

These issues were explored in a paper in which we examined how much rural households can benefit from agricultural intensification. In particular we, together with colleagues, looked at the size of smallholder farms and their potential profitability and alternative strategies for support. In sub-Saharan Africa smallholder farms are, on average, smaller than two hectares.

It’s difficult to be precise about the potential profitability of farms in developing countries. But it’s likely that the upper limit for most farms optimistically lies between $1,000 and $2,000 per hectare per year. In fact the actual values currently achieved by farmers in sub-Saharan Africa are much less.

The large profitability gap between current and potential performance per hectare of smallholder farms could, in theory, be narrowed if farmers adopted improved agricultural methods. These could include better crop varieties and animal breeds; more, as well as more efficient, use of fertilisers; and better protection from losses due to pests and diseases.

But are smallholder farms big enough so that closing the profitability gap will make much difference to their poverty status?

Our research suggests that they are not. Even if they were able to achieve high levels of profitability, the actual value that could be generated on a small farm translated into only a small gain in income per capita. From this we conclude that many, if not most, smallholder farmers in sub-Saharan Africa are unlikely to farm themselves out of poverty – defined as living on less than $1.90 per person per day. This would be the case even if they were to make substantial improvements in the productivity and profitability of their farms.

That’s not to say that smallholder farmers shouldn’t be supported. The issue, rather, is what kind of support best suits their circumstances.

Productivity and profitability

In theory, it should be quite simple to calculate how big farms need to be to enable farmers to escape poverty by farming alone.

To begin with, it’s necessary to know how productive and profitable per unit area a farm can be. Productivity and profitability – the value of outputs minus the value of inputs – are functions of farmers’ skills and investment capacities.

They are also dependent on geographical contexts. This includes soils, rainfall and temperature, which determine the potential for crop and livestock productivity. Other factors that play a part include remoteness, which affects farm-gate prices of inputs and outputs, and how many people a farm needs to support.

The figure below summarises the relation between farm size, profitability and income of rural households. We used a net income of $1.90 per person per day (the blue curve) as our working definition of poverty. A more ambitious target of $4 per person per day (the orange curve) represents a modest measure of prosperity beyond the poverty line.

Combinations of land per capita and net whole-farm profitability that would generate $1.90 (blue) and $4 (orange) per person per day. The median land per capita values of rural households from all 46 sites in 15 countries of sub-Saharan Africa were below the horizontal dashed line (0.60 hectares per person). Author supplied
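To make the arithmetic behind these curves concrete, here is a minimal back-of-envelope sketch – not the authors' model – which assumes that all household income comes from the farm and that net profit is shared evenly across the people the land supports. The 0.60 hectares per person and the $1.90 and $4 targets are the figures quoted above.

```python
def required_profit_per_ha(income_target_per_day, land_per_person_ha):
    """Net whole-farm profit ($ per hectare per year) needed to hit an income target."""
    return income_target_per_day * 365 / land_per_person_ha

median_land = 0.60  # ha per person: the median across the 46 sites in the figure

for target in (1.90, 4.00):  # the poverty line and the modest-prosperity line
    needed = required_profit_per_ha(target, median_land)
    print(f"${target:.2f}/day at {median_land} ha/person -> ${needed:,.0f} per ha per year")

# Roughly $1,150 and $2,450 per hectare per year at the median land holding --
# the same order of magnitude as the $1,250 threshold discussed below.
```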

So, how do these values compare with the situation in sub-Saharan Africa?

It has been estimated that about 80% of farms across nine sub-Saharan countries are smaller than two hectares. These sites would need to generate at least $1,250 per hectare per year just to reach the poverty line. Sites at the lower end of the range cannot escape poverty even if they could generate $3,000 per hectare per year.

Unfortunately, there is limited information about whole-farm net profitability in developing countries. But in Mozambique, Zimbabwe and Malawi, for example, the mean values of only $78, $83 and $424 per hectare per year, respectively, imply that even $1,250 appears to be far out of reach for most small farms.

It’s difficult to interpret information from developed countries in developing country contexts. Nevertheless, gross margin values for even the most efficient mixed farms seldom exceed around $1,400 per hectare per year.

These values are similar to gross margins using best practices for perennial cropping systems reported in a recent literature survey of tropical crop profitability. The study drew on data from nine household surveys in seven African countries. It found that profit from crop production alone (excluding data on livestock) ranged from only $86 per hectare per year in Burkina Faso to $1,184 in Ethiopia. The survey mean was $535 per hectare per year.

From this overview we must conclude that, even with very modest goals, most smallholder farms in sub-Saharan Africa are not “viable” when benchmarked against the poverty line. And it’s unlikely that agricultural intensification alone can take many households across the poverty line.

What is the takeaway?

We certainly do not suggest that continued public and private investments in agricultural technologies are unmerited. In fact, there is evidence that returns to agricultural research and development at national level are very high in developing countries. And there is evidence that agricultural growth is the most important impetus for broader patterns of structural transformation and economic growth in rural Africa. But realistic assessments of the scope for very small farmers to farm themselves out of poverty are necessary.

Farmers are embedded in complex economic webs and increasingly depend on more than farm production for their livelihoods. More integrated lenses for evaluating public investment in the food systems of the developing world will likely be more helpful in the short term.

Integrated investments that affect both on- and off-farm livelihood choices and outcomes will produce better welfare than a narrow focus on production technologies in smallholder dominated systems. Production technology research for development will remain important. But to reach the smallest of Africa’s smallholders will require focus on what’s happening off the farm.

The Conversation

David Harris receives funding from the CGIAR.

Jordan Chamberlin receives funding from the CGIAR, the Bill and Melinda Gates Foundation, and IFAD.

Kai Mausch received funding from multiple organisations that fund international agricultural research.

Why some scientists want to rewrite the history of how we learned to walk

Authors: Vivien Shaw, Lecturer in Anatomy, Bangor University; Isabelle Catherine Winder, Lecturer in Zoology, Bangor University

Danuvius guggenmosi fossil. Christoph Jäckle

It’s not often that a fossil truly rewrites human evolution, but the recent discovery of an ancient extinct ape has some scientists very excited. According to its discoverers, Danuvius guggenmosi combines some human-like features with others that look like those of living chimpanzees. They suggest that it would have had an entirely distinct way of moving that combined upright walking with swinging from branches. And they claim that this probably makes it similar to the last shared ancestor of humans and chimps.

We are not so sure. Looking at a fossilised animal’s anatomy does give us insights into the forces that would have operated on its bones and so how it commonly moved. But it’s a big leap to then make conclusions about its behaviour, or to go from the bones of an individual to the movement of a whole species. The Danuvius fossils are unusually complete, which does provide some vital new evidence. But how much does it really tell us about how our ancestors moved around?

Danuvius has long and mobile arms, habitually extended (stretched out) legs, feet which could sit flat on the floor, and big toes with a strong gripping action. This is a unique configuration. Showing that a specimen is unique is a prerequisite for classifying it as belonging to a separate, new species that deserves its own name.

But what matters in understanding the specimen is how we interpret its uniqueness. Danuvius’s discoverers go from describing its unique anatomy to proposing a unique pattern of movement. When we look at living apes, the relationship between anatomy and movement is not so simple.

The Danuvius find actually includes fossils from four individuals, one of which is nearly complete. But even a group of specimens may not be typical of a species more generally. For instance, humans are known for walking upright not climbing trees, but the Twa hunter-gatherers are regular tree climbers. These people, whose bones look just like ours, have distinctive muscles and ranges of movement well beyond the human norm. But you could not predict their behaviour from their bones.

Studying bones can tell us about movement but not behaviour. Christoph Jäckle

Every living ape uses a repertoire of movements, not just one. For example, orang-utans use clambering, upright or horizontal climbing, suspensory swinging and assisted bipedalism (walking upright using hands for support). Their movement patterns can vary in complex ways because of individual preference, body mass, age, sex or activity.

Gorillas, meanwhile, are “knuckle-walkers” and we used to think they were unable to stand fully upright. But the “walking gorilla” Ambam is famous for his “humanlike” stride.

Ultimately, two animals with very similar anatomies can move differently, and two with different anatomies can move in the same way. This means that Danuvius may not be able to serve as a model for our ancestors’ behaviour, even if its anatomy is similar to theirs.

In fact, we believe there are other plausible interpretations of Danuvius’s bones. These alternatives give a picture of a repertoire of potential movements that may have been used in different contexts.

For example, one of Danuvius’s most striking features is the high ridge on the top of its shinbone, which the researchers say is associated with “strongly developed cruciate ligaments” that stabilise the knee joint. The researchers link these strong stabilising ligaments with evidence for an extended hip and a foot that could be placed flat on the floor to suggest that this ape habitually stood upright. Standing upright could be a precursor to bipedal walking, so the authors suggest that this means Danuvius could have been like our last shared ancestor with other apes.

However, the cruciate ligaments also work to stabilise the knee when the leg is rotating. This only happens when the knee is bent with the foot on the ground. This is why skiers who use knee rotation to turn their bodies often injure these ligaments.

Other explanations

We have not seen the Danuvius bones in real life. But, based on the researchers’ excellent images and descriptions, an equally plausible interpretation of the pronounced ridge on the top of the shinbone could be that the animal used its knee when it was bent, with significant rotational movement.

Perhaps it hung from a branch above and used its feet to steer by gripping branches below, rather than bearing weight through the feet. This could have allowed it to capitalise on its small body weight to access fruit on fine branches. Alternatively, it could have hung from its feet, using the legs to manoeuvre and the hands to grasp.

All of these movements fit equally well with Danuvius’ bones, and could be part of its movement repertoire. So there is no way to say which movement is dominant or typical. As such, any links to our own bipedalism look much less clear-cut.

Danuvius is undoubtedly a very important fossil, with lots to teach us about how varied ape locomotion can be. But we would argue that it is not necessarily particularly like us. Instead, just like living apes, Danuvius would probably have displayed a repertoire of different movements. And we can’t say which would have been typical, because anatomy is not enough to reconstruct behaviour in full.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

Accessing healthcare is challenging for Deaf people – but the best solution isn't 'one-size-fits-all'

Authors: Anouschka Foltz, Assistant Professor in English Linguistics, University of Graz; Christopher Shank, Lecturer in Linguistics, Bangor University

Elnur/ Shutterstock

For many of us, a visit to the doctor’s office can be fraught with anxiety. A persistent cough that won’t go away or an ailment we hope is nothing serious can make GP visits emotionally difficult. Now imagine that you can’t phone the doctor to make an appointment, you don’t understand what your doctor just said, or you don’t know what the medication you’ve been prescribed is for. These are all situations that many Deaf people face when accessing healthcare services.

We use Deaf (with a capital “D”) here to talk about culturally Deaf people, who were typically born deaf, and use a signed language, such as British Sign Language (BSL), as their first or preferred language. In contrast, deaf (lowercase “d”) refers to the audiological condition of deafness.

For our study, we talked to Deaf patients in Wales who communicate using BSL to learn about their experiences with healthcare services. Their experiences illustrated the challenges they face, and showed us that patients have unique needs. For example, a patient born profoundly deaf would have different needs from a person who became deaf later in life.

Health inequalities

Many Deaf communities around the world face inequalities when it comes to accessing health information and healthcare services, as health information and services are often not available in an accessible format. As a result, Deaf individuals often have low health literacy and are at greater risk of being misdiagnosed or not diagnosed at all.

Problems with healthcare access often begin when making an appointment. Because many GPs only allow appointments to be made over the phone, many of those we interviewed had to physically go to health centres to ask for an appointment. Not only is this inconvenient, but booking without an interpreter could also be difficult and confusing.

Interpreters are essential for patients to receive the best care. However, we heard recurring stories of interpreters not being booked for appointments, arriving late, and – in some cases – not coming at all. Before interpreters were available, one woman described going to the doctor’s office as intimidating “because the communication wasn’t there”. One participant said they always make sure an interpreter has been booked, saying: “Don’t let me down… I don’t want to be going through this again.”

These issues are worsened in emergency situations. One woman recalled an incident where despite texting 999, she didn’t get help until her daughter phoned 999 for her, acting as her interpreter throughout her entire interaction with emergency services.

Emergency situations are made worse by a lack of understanding or help from emergency services. fizkes/Shutterstock

Another person who texted 999 said:

There are all these questions that they are asking you. And all that we want is to be able to say, ‘We need an ambulance’ … Because what’s happening is we’re panicking, we don’t understand the English, there are all these questions being texted to us, it’s hard enough for us to understand it anyways without panicking at the same time.

Interviewees also recalled emergency situations where interpreters weren’t available at short notice. One Deaf woman recalled when her husband – who is also Deaf – was rushed to hospital. They received no support from staff, and no interpreter was provided to help them.

Deaf awareness and language

Many problems that our interviewees faced related to language, and a lack of Deaf awareness. Many healthcare providers didn’t seem to know that BSL is a language unrelated to English – meaning many BSL users who were born Deaf or lost hearing early in life have limited proficiency in English. One interviewee explained that many healthcare providers think all Deaf people can read, without realising that many BSL users don’t understand English – with many being given health information written in English that they couldn’t comprehend.

Interviewees wished healthcare staff were more Deaf aware, as many healthcare providers lacked understanding about Deafness. This affected the doctor-patient relationship, with many interviewees agreeing that doctors “can be a bit patronising at times” and that this patronising attitude made interactions difficult. A lack of Deaf awareness can also lead to Deaf patients feeling forgotten. Many interviewees felt that Deaf people are easily ignored, with one interviewee saying: “I always feel like Deaf people are put last.”

No ‘one-size-fits-all’ solution

New technologies and services are being offered to help Deaf patients make appointments – such as having an interpreter call the doctor’s office during a video call with the patient.

Video calling might be one solution. Monika Wisniewska/Shutterstock

Additionally, some health information is now available online in BSL. Interpreters can also be more easily available at short notice, for example in emergency situations, through video chat. Remote services particularly show promise for mental health treatments, by providing remote mental health counselling in BSL and other types of confidential services.

Because Deaf communities are small and tight-knit, patients may be wary of interacting with local Deaf counsellors or interpreters, worried about potential gossip. Several interviewees even said that they would not want a Deaf counsellor even if offered, for fear that the counsellor might gossip about them with others in the community. One interviewee suggested a mental health service with a remote online interpreter as the best solution.


Read more: How access to health care for deaf people can be improved in Kenya


The problems and potential solutions that emerged from our research are similar in other Deaf communities around the world. Though technology might offer some promising solutions, it’s important to realise that these might not work for everyone.

Patients have individual differences, needs, preferences, and cultural differences. Some patients may prefer a remote interpreter, others face-to-face interpreting – and these preferences may also depend on the type of appointment. What’s important is that Deaf patients have choice, and that new services, such as through the use of new technologies, are offered in addition to, not instead of, established health services.

The Conversation

Anouschka Foltz receives funding from Public Health Wales. The views in this article should, however, not be assumed to be the same as Public Health Wales.

Christopher Shank receives funding from Public Health Wales. The views in this article should, however, not be assumed to be the same as Public Health Wales.

Botswana is humanity's ancestral home, claims major study – well, actually …

Author: Isabelle Catherine Winder, Lecturer in Zoology, Bangor University

A study claims the first humans lived in a wetland around what is now northern Botswana. Prill/Shutterstock

A recent paper in the prestigious journal Nature claims to show that modern humans originated about 200,000 years ago in the region around northern Botswana. For a scientist like myself who studies human origins, this is exciting news. If correct, this paper would suggest that we finally know where our species comes from.

But there are actually several reasons why I and some of my colleagues are not entirely convinced. In fact, there’s good reason to believe that our species doesn’t even have a single origin.

The scientists behind the new research studied genetic data from many individuals from the KhoeSan peoples of southern Africa, who are thought to live where their ancestors have lived for hundreds of thousands of years. The researchers used their new data together with existing information about people all around the world (including other areas traditionally associated with the origins of humankind) to reconstruct in detail the branching of the human family tree.

We can think of the earliest group of humans as the base of the tree with a specific set of genetic data - a gene pool. Each different sub-group that branched off and migrated away from humanity’s original “homeland” took a subset of the genes in that gene pool with them. But most people, and so the vast majority of those genes, remained behind. This means people alive today with different subsets of our species’ genes can be grouped on different branches of the human family tree.

Groups of people with the most diverse genomes are likely to be the ones that descended directly from the original group at the base of the tree, rather than one of the small sub-groups that split from it. In this case, the researchers identified one of the groups of KhoeSan people from around northern Botswana as the very bottom of the trunk, using geographical and archaeological data to back up their conclusion.

Lead study author Vanessa Hayes with Juǀ’hoansi hunters in Namibia. Chris Bennett, Evolving Picture

If you compare this process to creating your own family tree, it makes sense to think you can use information about who lives where today and how everyone relates to each other to reconstruct where the family came from. For example, many of my relatives live on the lovely Channel Island of Alderney, and one branch of my family have indeed been islanders for many generations.

Of course, there’s always some uncertainty created by variations in the data. (I now live in Wales and have cousins in England.) But as long as you look for broad patterns rather than focusing on specific details, you will still get a reasonable impression. There are even some statistical techniques you can use to assess the strength of your interpretation.

But there are several problems with taking the process of building a human family tree to such a detailed conclusion, as this new research does. First, it’s important to note that the study didn’t look at the whole genome. It focused just on mitochondrial DNA, a small part of our genetic material that (unlike the rest) is almost only ever passed from mothers to children. This means it isn’t mixed up with DNA from fathers and so is easier to track across the generations.

As a result, mitochondrial DNA is commonly used to reconstruct evolutionary histories. But it only tells us part of the story. The new study doesn’t tell us the origin of the human genome but the place and time where our mitochondrial DNA appeared. As a string of just 16,569 genetic letters out of over 3.3 billion in each of our cells, mitochondrial DNA is a very tiny part of us.
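As a quick sanity check on “a very tiny part of us”, the fraction can be computed directly from the two figures quoted above. This is a rough illustration only, ignoring the fact that each cell carries many copies of the mitochondrial genome.

```python
mt_dna_letters = 16_569    # length of the mitochondrial genome
genome_letters = 3.3e9     # "over 3.3 billion" genetic letters per cell

fraction = mt_dna_letters / genome_letters
print(f"{fraction:.2e} of the genome, i.e. about {fraction:.4%}")
# roughly 5e-06, or about 0.0005% -- around one letter in every 200,000
```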

Other DNA

The fact that mitochondrial DNA comes almost only ever from mothers also means the story of its inheritance is much simpler than the histories of other genes. This implies that every bit of our genetic material may have a different origin, and have followed a different path to get to us. If we did the same reconstruction using Y chromosomes (passed only from father to son) or whole genomes, we’d get a different answer to our question about where and when humans originated.

There is actually a debate over whether the woman from whom all our mitochondrial DNA today descends (“mitochondrial Eve”) could ever have even met the man from whom all living men’s Y-chromosomes descend (“Y-chromosome Adam”). By some estimates, they may have lived as much as 100,000 years apart.

And all of this ignores the possibility that other species or populations may also have contributed DNA to modern humans. After this mitochondrial “origin”, our species interbred with Neanderthals and a group called the Denisovans. There’s even evidence that these two interbred with one another, at about the same time as they were hybridising with us. Earlier modern humans probably also interbred with other human species living alongside them in other time periods.

All of this, of course, suggests that modern human history – like the history of modern primates – was much more than a simple tree with straight lines of inheritance. It’s much more likely that our distant ancestors interbred with other species and populations to form a braided stream of gene pools than that we form a nice neat tree that can be reconstructed genetically. And if that’s true, we may not even have a single origin we can hope to reconstruct.

The Conversation

Isabelle Catherine Winder received funding from the European Research Council (ERC) as part of the DISPERSE project (2011-2016). It was as part of her work as a post-doc on this project that she wrote the paper about reticulation and the human past cited in this article.

Lab-grown mini brains: we can't dismiss the possibility that they could one day outsmart us

Author: Guillaume Thierry, Professor of Cognitive Neuroscience, Bangor University

It may not be science fiction anymore. 80's Child/Shutterstock

The cutting-edge method of growing clusters of cells that organise themselves into mini versions of human brains in the lab is gathering more and more attention. These “brain organoids”, made from stem cells, offer unparalleled insights into the human brain, which is notoriously difficult to study.

But some researchers are worried that a form of consciousness might arise in such mini-brains, which are sometimes transplanted into animals. They could at least be sentient to the extent of experiencing pain and suffering from being trapped. If this is true – and before we consider how likely it is – it is absolutely clear in my mind that we must exert a supreme level of caution when considering this issue.

Brain organoids are currently very simple compared to human brains and can’t be conscious in the same way. Due to a lack of blood supply, they do not reach sizes larger than around five or six millimetres. That said, they have been found to produce brain waves that are similar to those in premature babies. A study has shown they can also grow neural networks that respond to light.

There are also signs that such organoids can link up with other organs and receptors in animals. That means that they not only have a prospect of becoming sentient, they also have the potential to communicate with the external world, by collecting sensory information. Perhaps they can one day actually respond through sound devices or digital output.

As a cognitive neuroscientist, I am happy to conceive that an organoid maintained alive for a long time, with a constant supply of life-essential nutrients, could eventually become sentient and maybe even fully conscious.

Time to panic?

This isn’t the first time biological science has thrown up ethical questions. Gender reassignment shocked many in the past, but, whatever your beliefs and moral convictions, sex change narrowly concerns the individual undergoing the procedure, with limited or no biological impact on their entourage and descendants.

Genetic manipulation of embryos, in contrast, raised alert levels to hot red, given the very high likelihood of genetic modifications being heritable and potentially changing the genetic make-up of the population down the line. This is why successful operations of this kind conducted by the Chinese scientist He Jiankui raised very strong objections worldwide.

Human cerebral organoids range in size from a poppy seed to a small pea. NIH/Flickr

But creating mini brains inside animals, or even worse, within an artificial biological environment, should send us all frantically panicking. In my opinion, the ethical implications go well beyond determining whether we may be creating a suffering individual. If we are creating a brain – however small – we are creating a system with a capacity to process information and, down the line, given enough time and input, potentially the ability to think.

Some form of consciousness is ubiquitous in the animal world, and we, as humans, are obviously on top of the scale of complexity. While we don’t know exactly what consciousness is, we still worry that human-designed AI may develop some form of it. But thought and emotions are likely to be emergent properties of our neurons organised into networks through development, and it is much more likely it could arise in an organoid than in a robot. This may be a primitive form of consciousness or even a full blown version of it, provided it receives input from the external world and finds ways to interact with it.

In theory, mini-brains could be grown forever in a laboratory – whether it is legal or not – increasing in complexity and power for as long as their life-support system can provide them with oxygen and vital nutrients. This is the case for the cancer cells of a woman called Henrietta Lacks, which are alive more than 60 years after her death and multiplying today in hundreds of thousands of labs throughout the world.

Disembodied super intelligence?

But if brains are cultivated in the laboratory in such conditions, without time limit, could they ever develop a form of consciousness that surpasses human capacity? As I see it, why not?

And if they did, would we be able to tell? What if such a new form of mind decided to keep us, humans, in the dark about their existence – be it only to secure enough time to take control of their life-support system and ensure that they are safe?

When I was an adolescent, I often had scary dreams of the world being taken over by a giant computer network. I still have that worry today, and it has partly become true. But the scare of a biological super-brain taking over is now much greater in my mind. Keep in mind that such a new organism would not have to worry about its body becoming old and dying, because it would not have a body.

This may sound like the first lines of a bad science fiction plot, but I don’t see reasons to dismiss these ideas as forever unrealistic.

The point is that we have to remain vigilant, especially given that this could all happen without us noticing. You just have to consider how difficult it is to assess whether someone is lying when testifying in court to realise that we will not have an easy task trying to work out the hidden thoughts of a lab grown mini-brain.

Slowing the research down by controlling organoid size and life span, or widely agreeing a moratorium before we reach a point of no return, would make good sense. But unfortunately, the growing ubiquity of biological labs and equipment will make enforcement incredibly difficult – as we’ve seen with genetic embryo editing.

It would be an understatement to say that I share the worries of some of my colleagues working in the field of cellular medicine. The toughest question that we can ask regarding these mesmerising possibilities, and which also applies to genetic manipulations of embryos, is: can we even stop this?

The Conversation

Guillaume Thierry does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Researchers invent device that generates light from the cold night sky – here's what it means for millions living off grid

Author: Jeff Kettle, ‎Lecturer in Electronic Engineering, Bangor University

More than 1.7 billion people worldwide still don’t have a reliable electricity connection. For many of them, solar power is their potential energy saviour – at least when the sun is shining.

Technology to store excess solar power during the dark hours is improving. But what if we could generate electricity from the cold night sky? Researchers at Stanford and UCLA have just done exactly that. Don’t expect it to become solar’s dark twin just yet, but it could play an important role in the energy demands of the future.

The technology itself is nothing new – in fact, the principles behind it were discovered almost 200 years ago. The device, called a thermoelectric generator, uses temperature differences between two metal plates to generate electricity through something called the Seebeck effect. The greater the temperature difference, the greater the power generated.

We already use this technology to convert waste heat from sources such as industrial machinery and car engines. The new research applies the same technique to harness the temperature difference between the outside air and a surface which faces the sky.

The device’s two plates sit on top of one another. The top plate faces the cold air of the open night sky, while the bottom plate is kept enclosed in warmer air, facing the ground. Heat always radiates to cooler environments, and the cooler the environment, the faster heat is radiated. Because the open night sky is cooler than the enclosed air surrounding the bottom plate, the top plate loses heat faster than the bottom plate. This generates a temperature difference between the two plates – in this study, between four and five degrees celsius.

Now at different temperatures, heat also starts to travel from the hotter bottom plate to the cooler top plate. The device harnesses this flow of heat to generate electricity.

At this small temperature difference, power is limited. The researchers’ device produced just 25 milliwatts per meter squared (mW/m²) – enough to power a small LED reading light. By contrast, a solar panel of the same size would be enough to sustain three 32" LED TVs – that’s 4,000 times more power.

Greater potential

In drier climates, the device could perform better. This is because in wetter climates, any moisture in the air condenses on the downward-facing bottom plate, cooling it and reducing the temperature difference between the plates. In the dry Mediterranean, for example, the device could produce 20 times more power than it did in the US.

The researchers’ device harnessed the cold night sky to power this small light. Aaswath Raman, Author provided

The device itself could also be refined. For example, manufacturers could apply a coating that allows the device’s surface to reach a temperature lower than the surrounding environment during the day, so that it is even cooler at night. They could also use corrugated instead of flat plates, which are more efficient at capturing and emitting radiation. These and other feasible tech upgrades could raise the power output by as much as ten times.
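For a rough sense of scale, the figures reported here can be combined in a simple sketch. This is my own back-of-envelope arithmetic, not the researchers' calculation, and it assumes the dry-climate and design gains would multiply together.

```python
baseline = 25            # mW/m2, the night-time output measured in the study
dry_climate_gain = 20    # suggested gain in a dry climate such as the Mediterranean
design_gain = 10         # suggested gain from coatings, corrugated plates, etc.

optimistic = baseline * dry_climate_gain * design_gain   # in mW/m2
solar_equiv = baseline * 4000                            # the solar comparison above

print(f"Optimistic thermoelectric output: {optimistic / 1000:.1f} W/m2")
print(f"Equivalent solar panel output:    {solar_equiv / 1000:.0f} W/m2")
# ~5 W/m2 versus ~100 W/m2: still far short of solar, but ample for
# low-power, off-grid electronics at night.
```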

With the efficiency of everyday technologies continually improving, thermoelectric devices could play an important role in powering society before long. Colleagues of mine are developing technology that connects household devices to the internet and each other – the so-called Internet of Things – at power levels of just 1.5 milliwatts per square metre (mW/m²), a level of energy firmly within the reach of an enhanced device in dry climates.

By connecting a series of thermoelectric generators mounted on the walls of homes, the technology could noticeably lighten the energy load of houses. It’s feasible, too – the technology could easily be mass produced, and sold cheaply enough to provide a viable energy source in locations where it is too expensive or impractical to connect with mains electricity.

Of course, it’s unlikely that thermoelectric devices will ever replace battery storage as the nighttime saviour of solar energy. Batteries now cost a quarter of what they did a decade ago, and solar systems with battery storage are already becoming affordable ways to meet small-scale domestic and industrial energy needs.

But the technology could be a useful complement to solar power and battery storage – and a vital back-up energy source for those living off-grid when batteries fail or panels break. When everything goes wrong on the chilliest of nights, those with thermoelectric devices to power a heater would at least have one thing to thank the freezing night air for.

The Conversation

Jeff Kettle does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Being left-handed doesn't mean you are right-brained – so what does it mean?

Author: Emma Karlsson, Postdoctoral researcher in Cognitive Neuroscience, Bangor University

Wachiwit/Shutterstock

There are many claims about what being left-handed means and whether it shapes the kind of person you are – but in truth it remains something of a puzzle. Myths about handedness surface every year, yet researchers have still not fully unravelled what it means to be left-handed – to favour the left hand over the right for everyday activities.

So why are people left-handed? Honestly, we don’t fully understand that either. What we do know is that left-handers make up only around 10% of the world’s population – and that this is not split evenly between the sexes.

Around 12% of men are left-handed, but only about 8% of women. Some people are surprised by this roughly 90:10 split and wonder why they turned out to be left-handed.

But the more interesting question is why handedness isn’t simply down to chance. Why isn’t the split 50:50? It isn’t explained by the direction in which we write: if it were, left-handedness would dominate in countries whose scripts run from right to left, and that is not the case. Even genetically it is odd – only around 25% of children with two left-handed parents are themselves left-handed.


Read more: How children's brains develop to make them right or left handed


Left-handedness has been linked to all sorts of bad outcomes, such as poor health and an early death – but neither link is true. The latter is largely explained by older generations having been forced to switch and use their right hands, which makes it look as though there were fewer left-handers in the past. The former, however attention-grabbing a headline it might make, is simply wrong.

Positive myths about left-handedness abound too. Left-handers are said to be more creative, because most of them use their “right brain”. This is perhaps one of the most persistent myths about handedness and the brain. But however appealing it sounds (and however disappointing for left-handers still waiting to wake up one day with talents to rival Leonardo da Vinci), the idea that each of us has a “dominant side of the brain” that defines our personality and decision-making is also wrong.

Brain lateralisation and handedness

It is true, however, that the brain’s right hemisphere controls the left side of the body and the left hemisphere controls the right – and that the hemispheres really do have specialisms of their own.

Language, for example, is typically processed a little more in the left hemisphere, and face recognition a little more in the right. The idea that each hemisphere is specialised for certain skills is known as brain lateralisation. The two halves do not work in isolation, though: a thick band of nerve fibres – called the corpus callosum – connects the two sides of the brain.

Interestingly, there are some known differences in these specialisations between right-handers and left-handers. For example, it is often said that around 95% of right-handers are “left-brain dominant”. This is not the same as the “left-brained” claim above; it actually refers to the early finding that most right-handers rely more on the left hemisphere for speech and language. It was assumed that the reverse would hold for left-handers. But that is not the case. In fact, around 70% of left-handers also process language more in the left hemisphere. Why this figure is lower, rather than reversed, is not yet known.


Read more: Why is life left-handed? The answer is in the stars


Researchers have found many other brain specialisations, or “asymmetries”, besides language. Most of them sit in the right hemisphere – at least for right-handers – and include things such as face processing, spatial skills and the perception of emotion. These are less well understood, perhaps because researchers have wrongly assumed that they all simply depend on whichever hemisphere is not dominant for language.

In fact, this assumption, together with the recognition that a small number of left-handers have right-hemisphere dominance for language, has meant that left-handers have been ignored – or worse, actively avoided – in many studies of the brain, because researchers assume that, just as with language, all the other asymmetries will be reduced too.

The way certain functions are lateralised (specialised) in the brain can genuinely affect how we perceive things, and we can study this using simple perceptual tests. In recent research, for example, we showed a large number of left- and right-handers images of faces constructed so that one half of the face displayed one emotion while the other half displayed a different emotion.

Typically, people tend to see the emotion shown on the left side of the face, which is thought to reflect a specialisation in the right hemisphere. This is linked to the fact that the visual field is processed in a way that produces a bias towards the left side of space. A bias to the left is taken to represent processing by the right hemisphere, while a bias to the right represents processing by the left hemisphere. We also presented different kinds of images and sounds to test several other specialisations.

Our findings suggest that some types of specialisation, including face processing, do seem to follow the intriguing pattern seen for language (that is, more of the left-handers had a tendency to see the emotion shown on the right side of the face). But for biases in what people attend to, we found no difference in the processing patterns of right- and left-handers. These results suggest that while handedness is linked to some of the brain’s specialisations, it is not linked to others.

Left-handers are extremely important for new experiments like these. Not only can they help us understand what makes them different, they could also help us solve many long-standing neuropsychological mysteries about the brain.

Franklin Ronaldo translated this article from English.

The Conversation

Emma Karlsson does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond her academic appointment.

Are the Amazon fires a crime against humanity?

Author: Tara Smith, Lecturer in Law, Bangor University

Fires in the Brazilian Amazon have jumped 84% during President Jair Bolsonaro’s first year in office and in July 2019 alone, an area of rainforest the size of Manhattan was lost every day. The Amazon fires may seem beyond human control, but they’re not beyond human culpability.

Bolsonaro ran for president promising to “integrate the Amazon into the Brazilian economy”. Once elected, he slashed the Brazilian environmental protection agency budget by 95% and relaxed safeguards for mining projects on indigenous lands. Farmers cited their support for Bolsonaro’s approach as they set fires to clear rainforest for cattle grazing.

Bolsonaro’s vandalism will be most painful for the indigenous people who call the Amazon home. But destruction of the world’s largest rainforest may accelerate climate change and so cause further suffering worldwide. For that reason, Brazil’s former environment minister, Marina Silva, called the Amazon fires a crime against humanity.

From a legal perspective, this might be a helpful way of prosecuting environmental destruction. Crimes against humanity are international crimes, like genocide and war crimes, which are considered to harm both the immediate victims and humanity as a whole. As such, all of humankind has an interest in their punishment and deterrence.

Historical precedent

Crimes against humanity were first classified as an international crime during the Nuremberg trials that followed World War II. Two German Generals, Alfred Jodl and Lothar Rendulic, were charged with war crimes for implementing scorched earth policies in Finland and Norway. No one was charged with crimes against humanity for causing the unprecedented environmental damage that scarred the post-war landscapes though.

Our understanding of the Earth’s ecology has matured since then, yet so has our capacity to pollute and destroy. It’s now clear that the consequences of environmental destruction don’t stop at national borders. All humanity is placed in jeopardy when burning rainforests flood the atmosphere with CO₂ and exacerbate climate change.

Holding someone like Bolsonaro to account for this by charging him with crimes against humanity would be a world first. If successful, it could set a precedent which might stimulate more aggressive legal action against environmental crimes. But do the Amazon fires fit the criteria?


Read more: Why the International Criminal Court is right to focus on the environment


Prosecuting crimes against humanity requires proof of a widespread or systematic attack against a civilian population. If a specific part of the global population is persecuted, this is an affront to the global conscience. In the same way, domestic crimes are an affront to the population of the state in which they occur.

When prosecuting prominent Nazis in Nuremberg, the US chief prosecutor, Robert Jackson, argued that crimes against humanity are committed by individuals, not abstract entities. Only by holding individuals accountable for their actions can widespread atrocities be deterred in future.

Robert Jackson speaks at the Nuremberg trials in 1945. Raymond D'Addario/Wikipedia

The International Criminal Court’s Chief Prosecutor, Fatou Bensouda, has promised to apply the approach first developed in Nuremberg to prosecute individuals for international crimes that result in significant environmental damage. Her recommendations don’t create new environmental crimes, such as “ecocide”, which would punish severe environmental damage as a crime in itself. They do signal, however, a growing appreciation of the role that environmental damage plays in causing harm and suffering to people.

The International Criminal Court was asked in 2014 to open an investigation into allegations of land-grabbing by the Cambodian government. In Cambodia, large corporations and investment firms were being given prime agricultural land by the government, displacing up to 770,000 Cambodians from 4m hectares of land. Prosecuting these actions as crimes against humanity would be a positive first step towards holding individuals like Bolsonaro accountable.

But given the global consequences of the Amazon fires, could environmental destruction of this nature be legally considered a crime against all humanity? Defining it as such would be unprecedented. The same charge could apply to many politicians and business people. It’s been argued that oil and gas executives who’ve funded disinformation about climate change for decades should be chief among them.

Charging individuals for environmental crimes against humanity could be an effective deterrent. But whether the law will develop in time to prosecute people like Bolsonaro is, as yet, uncertain. Until the International Criminal Court prosecutes individuals for crimes against humanity based on their environmental damage, holding individuals criminally accountable for climate change remains unlikely.


This article is part of The Covering Climate Now series, a concerted effort among news organisations to put the climate crisis at the forefront of our coverage.


The Conversation

Tara Smith does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Cilia: cell's long-overlooked antenna that can drive cancer – or stop it in its tracks

Author: Angharad Mostyn Wilkie, PhD Researcher in Oncology and Cancer Biology, Bangor University

Motile cilia are antenna-like projections on our body's cells. Author provided

You might know that our lungs are lined with hair-like projections called motile cilia. These are tiny microtubule structures that appear on the surface of some cells or tissues. They can be found lining your nose and respiratory tract too, and along the fallopian tubes and vas deferens in the female and male reproductive tracts. They move from side to side to sweep away any micro-organisms, fluids, and dead cells in the respiratory system, and to help transport the sperm and egg in the reproductive system.

Odds are, however, that you haven’t heard about motile cilia’s arguably more important cousin, primary cilia.

Motile cilia stand out on the right of this image of stained respiratory epithelium cells. Jose Luis Calvo/Shutterstock

Primary cilia are on virtually all cells in the body but for a long time they were considered to be a non-functional vestigial part of the cell. To add to their mystery, they aren’t present all the time. They project from the centrosome – the part of the cell that pulls it apart during division – and so only appear at certain stages of the cell cycle.

The first sign that these little structures were important came with the realisation that disruption to either their formation or function could result in genetic conditions known as ciliopathies. There are around 20 different ciliopathies, and they affect about one in every 1,000 people. These are often disabling and life-threatening conditions, affecting multiple organ systems. They can cause blindness, deafness, chronic respiratory infections, kidney disease, heart disease, infertility, obesity, diabetes and more. Symptoms and severity vary widely, making it hard to classify and diagnose these disorders.

So how can the malfunction of a little organelle originally thought to be useless result in such a wide variety of devastating symptoms? Well, it is now known that not only do cilia look like little antennas, they act like them too. Each cilium is packed full of proteins that detect messenger signals from other cells or the surrounding environment. These signals are then transmitted into the cell’s nucleus to activate a response – responses that are important, for example, in regulating several essential signalling pathways.

When this was realised, researchers began to ask whether changes in the structure or function of cilia; changes in protein levels associated with cilia; or movement of these proteins to a different part of the cell could occur due to – or potentially drive – other conditions. Given that scientists already knew then that many of the pathways regulated by cilia could drive cancer progression, looking at the relationship between cilia and cancer was a logical step.

Cilia, signals and cancer

Researchers discovered that in many cancers – including renal cell, ovarian, prostate, breast and pancreatic – there was a distinct lack of primary cilia in the cancerous cells compared to the healthy surrounding cells. It could be that the loss of cilia was just a response to the cancer, disrupting normal cell regulation – but what if it was actually driving the cancer?

Melanomas are one of the most aggressive types of tumours in humans. Some cancerous melanoma cells express higher levels of a protein called EZH2 than healthy cells. EZH2 suppresses cilia genes so malignant cells have less cilia. This loss of cilia activates some of the carcinogenic signalling pathways, resulting in aggressive metastatic melanoma.

However, loss of cilia does not have the same effect in all cancers. In one type of pancreatic cancer, the presence – not absence – of cilia correlates with increased metastasis and decreased patient survival.

Even within the same cancer the picture is unclear. Medulloblastomas are the most common childhood brain tumour. Their development can be driven by one of the signalling pathways regulated by the cilia, the hedgehog signalling pathway. This pathway is active during embryo development but dormant after. However, in many cancers (including medulloblastomas) hedgehog signalling is reactivated, and it can drive cancer growth. But studies into the effects of cilia in medulloblastomas have found that cilia can both drive and protect against this cancer, depending on the way the hedgehog pathway is initially disrupted.

As such strong links have been found between cilia and cancer, researchers have also been looking into whether treatment which targets this structure could be used for cancer therapies. One of the problems faced when treating cancers is the development of resistance to anti-cancer drugs. Many of these drugs’ targets are part of the signalling pathways regulated by cilia, but scientists have found that blocking the growth of cilia in drug-resistant cancer cell lines could restore sensitivity to a treatment.

What was once thought to just be a cell part left over during evolution, has proven to be integral to our understanding and treatment of cancer. The hope is that further research into cilia will help untangle the complex relationship between them and cancer, and provide both new insights into some of the drivers of cancer as well as new targets for cancer treatment.

The Conversation

Angharad Mostyn Wilkie receives funding from the North West Cancer Research Institute.

How to become a great impostor

Author: Tim Holmes, Lecturer in Criminology & Criminal Justice, Bangor University

Ferdinand Waldo Demara

Unlike other icons who have appeared on the front of Life magazine, Ferdinand Waldo Demara was not famed as an astronaut, actor, hero or politician. In fact, his 23-year career was rather varied. He was, among other things, a doctor, professor, prison warden and monk. Demara was not some kind of genius either – he actually left school without any qualifications. Rather, he was “The Great Impostor”, a charming rogue who tricked his way to notoriety.

My research speciality is crimes by deception and Demara is a man who I find particularly interesting. For, unlike other notorious con-artists, imposters and fraudsters, he did not steal and defraud for the money alone. Demara’s goal was to attain prestige and status. As his biographer Robert Crichton noted in 1959, “Since his aim was to do good, anything he did to do it was justified. With Demara the end always justifies the means.”

Though we know what he did, and his motivations, there is still one big question that has been left unanswered – why did people believe him? While we don’t have accounts from everyone who encountered Demara, my investigation into his techniques has uncovered some of the secrets of how he managed to keep his high level cons going for so long.


Read more: Why do we fall for scams?


Upon leaving education in 1935, Demara lacked the skills to succeed in the organisations he was drawn to. He wanted the status that came with being a priest, an academic or a military officer, but didn’t have the patience to achieve the necessary qualifications. And so his life of deception started. At just 16 years old, with a desire to become a member of a silent order of Trappist monks, Demara ran away from his home in Lawrence, Massachusetts, lying about his age to gain entry.

When he was found by his parents he was allowed to stay, as they believed he would eventually give up. Demara remained with the monks long enough to gain his hood and habit, but was ultimately forced out of the monastery at the age of 18 as his fellow monks felt he lacked the right temperament.

Demara then attempted to join other orders, including the Brothers of Charity children’s home in West Newbury, Massachusetts, but again failed to follow the rules. In response, he stole funds and a car from the home, and joined the army in 1941, at the age of 19. But, as it turned out, the army was not for him either. He disliked military life so much that he stole a friend’s identity and fled, eventually deciding to join the navy instead.

From monk to medicine

While in the navy, Demara was accepted for medical training. He passed the basic course but due to his lack of education was not allowed to advance. So, in order to get into the medical school, Demara created his first set of fake documents indicating he already had the needed college qualifications. He was so pleased with his creations that he decided to skip applying to medical school and tried to gain a commission as an officer instead. When his falsified papers were discovered, Demara faked his own death and went on the run again.


Read more: The men who impersonate military personnel for stolen glory


In 1942, Demara took the identity of Dr Robert Linton French, a former navy officer and psychologist. Demara found French’s details in an old college prospectus which had profiled French when he worked there. Though he worked as a college teacher using French’s name till the end of the war in 1945, Demara was eventually caught and the authorities decided to prosecute him for desertion.

Due to good behaviour, he served only 18 months of the six-year sentence handed to him, but upon his release he went back to his old ways. This time Demara created a new identity, Cecil Hamann, and enrolled at Northeastern University. Tiring of the effort and time needed to complete his law degree, Demara awarded himself a PhD and, under the persona of “Dr” Cecil Hamann, took up another teaching post at a Christian college, The Brother of Instruction, in Maine in the summer of 1950.

It was here that Demara met and befriended Canadian doctor Joseph Cyr, who was moving to the US to set up a medical practice. Needing help with the immigration paperwork, Cyr gave all his identifying documents to Demara, who offered to fill in the application for him. After the two men parted ways, Demara took copies of Cyr’s paperwork and moved up to Canada. Pretending to be Dr Cyr, Demara approached the Canadian Navy with an ultimatum: make me an officer or I will join the army. Not wanting to lose a trained doctor, Demara’s application was fast tracked.

As a commissioned officer during the Korean war, Demara first served at Stadacona naval base, where he convinced other doctors to contribute to a medical booklet he claimed to be producing for lumberjacks living in remote parts of Canada. With this booklet and the knowledge gained from his time in the US Navy, Demara was able to pass successfully as Dr Cyr.

A military marvel

Demara worked aboard HMCS Cayuga as ship’s doctor (pictured in 1954).

In 1951, Demara was transferred to be ship’s doctor on the destroyer HMCS Cayuga. Stationed off the coast of Korea, Demara relied on his sick berth attendant, petty officer Bob Horchin, to handle all minor injuries and complaints. Horchin was pleased to have a superior officer who did not interfere in his work and who empowered him to take on more responsibilities.

Though he very successfully passed as a doctor aboard the Cayuga, Demara’s time there came to a dramatic end after three Korean refugees were brought on board in need of medical attention. Relying on textbooks and Horchin, Demara successfully treated all three – even completing the amputation of one man’s leg. He was recommended for a commendation for his actions, and the story was reported in the press, where the real Dr Cyr’s mother saw a picture of Demara impersonating her son. Wanting to avoid further public scrutiny and scandal, the Canadian government elected to simply deport Demara back to the US in November 1951.

After returning to America, there were news reports on his actions, and Demara sold his story to Life magazine in 1952. In his biography, Demara notes that he spent the time after his return to the US using his own name and working in different short-term jobs. While he enjoyed the prestige he had gained in his impostor roles, he started to dislike life as Demara, “the great impostor”, gaining weight and developing a drinking problem.

In 1955, Demara somehow acquired the credentials of a Ben W. Jones and disappeared again. As Jones, Demara began working as a guard at Huntsville Prison in Texas, and was eventually put in charge of the maximum security wing that housed the most dangerous prisoners. In 1956, an educational programme that provided prisoners with magazines to read led to Demara’s discovery once more. One of the prisoners found the Life magazine article and showed the cover picture of Demara to prison officials. Despite categorically denying to the prison warden that he was Demara, and pointing to the positive feedback he had received from prison officials and inmates about his performance there, Demara chose to run. In 1957, he was caught in North Haven, Maine and served a six-month prison sentence for his actions.

After his release he made several television appearances including on the game show You Bet Your Life, and made a cameo in horror film The Hypnotic Eye. From this point until his death in 1981, Demara would struggle to escape his past notoriety. He eventually returned to the church, getting ordained using his own name and worked as a counsellor at a hospital in California.

How Demara did it

According to biographer Crichton, Demara had an impressive memory, and through his impersonations accumulated a wealth of knowledge on different topics. This, coupled with charisma and good instincts about human nature, helped him trick all those around him. Studies of professional criminals often observe that con artists are skilled actors and that a con game is essentially an elaborate performance where only the victim is unaware of what is really going on.

Demara also capitalised on workplace habits and social conventions. He is a prime example of why recruiters shouldn’t rely on paper qualifications over demonstrations of skill. And his habit of allowing subordinates to do things he should be doing meant Demara’s ability went untested, while at the same time engendering appreciation from junior staff.

He observed of his time in academia that there was always opportunity to gain authority and power in an organisation. There were ways to set himself as an authority figure without challenging or threatening others by “expanding into the power vacuum”. He would set up his own committees, for example, rather than joining established groups of academics. Demara says in the biography that starting fresh committees and initiatives often gave him the cover he needed to avoid conflict and scrutiny.

…there’s no competition, no past standards to measure you by. How can anyone tell you aren’t running a top outfit? And then there’s no past laws or rules or precedents to hold you down or limit you. Make your own rules and interpretations. Nothing like it. Remember it, expand into the power vacuum.

Working from a position of authority as the head of his own committees further entrenched Demara in professions he was not qualified for. It can be argued that Demara’s most impressive attempt at expansion into the “power vacuum” occurred when teaching as Dr Hamann.

Hamann was considered a prestigious appointee for a small Christian college. Claiming to be a cancer researcher, Demara proposed converting the college into a state-approved university where he would be chancellor. The plans proceeded but Demara was not given a prominent role in the new institution. It was then that Demara decided to take Cyr’s identity and leave for Canada. If Demara had succeeded in becoming chancellor of the new LaMennais College (which would go on to become Walsh University) it is conceivable that he would have been able to avoid scrutiny or questioning thanks to his position of authority.

Inherently trustworthy

Other notable serial impostors and fakes have relied on techniques similar to Demara’s. Frank Abagnale also recognised the reliance people in large organisations placed on paperwork and looking the part. This insight allowed him at 16 to pass as a 25-year-old airline pilot for Pan Am Airways, as portrayed in the film Catch Me If You Can.

More recently, Gene Morrison was jailed after it was discovered that he had spent 26 years running a fake forensic science business in the UK. After buying a PhD online, Morrison set up Criminal and Forensic Investigations Bureau (CFIB) and gave expert evidence in over 700 criminal and civil cases from 1977 to 2005. Just like Demara used others to do his work, Morrison subcontracted other forensic experts and then presented the findings in court as his own.


Read more: How to get away with fraud: the successful techniques of scamming


Marketing and psychology expert Robert Cialdini’s work on the techniques of persuasion in business might offer insight into how people like Demara can succeed, and why it is that others believe them. Cialdini found that there are six universal principles of influence that are used to persuade business professionals: reciprocity, consistency, social proof, getting people to like you, authority and scarcity.

Demara used all of these skills at various points in his impersonations. He would give power to subordinates to hide his lack of knowledge and enable his impersonations (reciprocity). By using other people’s credentials, he was able to manipulate organisations into accepting him, using their own regulations against them (consistency and social proof). Demara’s success in his impersonations points to how likeable he was and how much of an authority he appeared to be. By impersonating academics and professionals, Demara focused on career paths where at the time there was high demand and a degree of scarcity, too.

Laid bare, one can see how Demara tricked his unsuspecting colleagues into believing his lies through manipulation. Yet within this it is interesting to also consider how often we all rely on gut instinct and the appearance of ability rather than witnessed proof. Our gut instinct is built on five questions we ask ourselves when presented with information: does a fact come from a credible source? Do others believe it? Is there plenty of evidence to support it? Is it compatible with what I believe? Does it tell a good story?

Researchers of social trust and solidarity argue that people also have a fundamental need to trust strangers to tell the truth in order for society to function. As sociologist Niklas Luhmann said, “A complete absence of trust would prevent (one) even getting up in the morning.” Trust in people is, in a sense, a default setting, so mistrust requires a loss of confidence in someone, sparked by some indicator of a lie.

It was only after the prisoner showed the Life article to the Huntsville Prison warden that they began to ask questions. Until this point, Demara had offered everything his colleagues would need to believe he was a capable member of staff. People accepted Demara’s claims because it felt right to believe him. He had built a rapport and influenced people’s views of who he was and what he could do.


Read more: Five psychological reasons why people fall for scams – and how to avoid them


Another factor to consider when asking why people would believe Demara was the rising dependency on paper proofs of identity at that time. Following World War II, improvements in and a shift towards reliance on paper documentation occurred as social and economic mobility changed in America. Underlying Demara’s impersonations and the actions of many modern con artists is the reliance we have long placed first in paper proofs of identity, such as birth certificates and ID cards, and more recently in digital forms of identification.

As his preoccupation was more with prestige than money, it can be argued that Demara had a harder time than other impostors who were driven only by profit. Demara stood out as a surgeon and a prison guard; he was a good fake and influencer, but the added attention that came from his attempts at multiple high-profile professions, and the media coverage they attracted, led to his downfall. Abagnale similarly had issues with the attention that came with pretending to be an airline pilot, lawyer and surgeon. In contrast, Morrison stuck to his one impersonation for years, avoiding detection and making money until the quality of his work was investigated.

The trick, it appears, to being a good impostor is essentially to be friendly, have access to a history of being trusted by others, have the right paperwork, build others’ confidence in you and understand the social environment you are entering. Although, when Demara was asked to explain why he committed his crimes he simply said, “Rascality, pure rascality”.

The Conversation

Tim Holmes does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Tissue donations are important to cancer research – so what happens to your cells after they are taken?

Author: Helena Robinson, Postdoctoral Research Officer in Cancer Biology, Bangor University

Vladimir Borovic/Shutterstock

If you’ve ever had a tumour removed or biopsy taken, you may have contributed to life-saving research. People are often asked to give consent for any tissue that is not needed for diagnosis to be used in other scientific work. Though you probably won’t be told exactly what research your cells will be used for, tissue samples like these are vital for helping us understand and improve diagnosis and treatment of a whole range of illnesses and diseases. But once they’re removed, how are these tissue samples used exactly? How do they go from patient to project?

When tissue is removed from a person’s body, most often it is immediately put into a chemical preservative. It is then taken to a lab and embedded in a wax block. Protecting the tissue like this retains its structure and stops it from decomposing so it can be stored at room temperature for long periods of time.

This process also means that biochemical molecules like protein and DNA are preserved, which can provide vital clues about what processes are occurring in the tissue at that stage in the person’s illness. If we were looking at, for example, whether molecule A occurs in one particular tumour type but not in others (which would make it helpful for diagnosis) we would want a large number of each type to test. But there may not be enough patients of each type currently in treatment, so it is useful to have a library of samples to draw from.


Read more: More people can donate tissue than organs – so why do we know so little about it?


Or we might want to test if patients with tumours containing molecule B are less likely to survive for five years than those without this molecule. This sort of question requires samples with a follow-up time of at least five years. But the answer may help doctors decide whether they need to treat their current patients with B more aggressively or with a different kind of treatment.
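
To make the logic of that kind of follow-up question concrete, here is a minimal sketch – in Python, with entirely invented numbers, and not our actual analysis – of how the five-year survival of patients with and without a hypothetical “molecule B” might be compared using a simple chi-squared test.

```python
# A minimal sketch (not the authors' real analysis) of the kind of question
# described above: are patients whose tumours contain "molecule B" less likely
# to survive five years than patients whose tumours do not?
# The counts below are invented purely for illustration.
from scipy.stats import chi2_contingency

#                      survived 5 yrs   did not survive
counts = [[30, 70],   # tumours WITH molecule B (hypothetical)
          [55, 45]]   # tumours WITHOUT molecule B (hypothetical)

chi2, p_value, dof, expected = chi2_contingency(counts)

print(f"chi-squared = {chi2:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Survival appears to differ between the two groups.")
else:
    print("No statistically significant difference detected.")
```

In practice researchers would work with much larger cohorts and dedicated survival methods such as Kaplan–Meier curves, but the underlying principle – comparing outcomes between groups defined by a biomarker – is the same.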

To analyse tissues, lab scientists cut very thin slices from the wax blocks and view them under a microscope. The slides are stained with dyes that show the overall tissue structure, and may also be stained with antibodies to show the presence of specific molecules.

Human tissue embedded in wax and a stained slide ready for examination. Komsan Loonprom/Shutterstock

Studies often need large numbers of samples from different patients to adequately answer a research question, which can take some time to collect. Take my work for example. My team is interested in finding out more about a protein called brachyury, and how it relates to bowel cancer. But to do this we need to compare lots of samples, so we are using tissue from 823 bowel cancer patients and 50 non-cancer patients in our research.

When not in use, the tissue blocks are – with patient consent – placed in a store that researchers can access. The UK has several of these stores, known as biobanks or biorepositories, holding all kinds of tissues. Some cancer biobanks also store different kinds of tumours and blood samples.


Read more: How biobanks can help improve the integrity of scientific research


While there are no reliable figures available on how many samples are held in all biobanks, or how often they are used, we do know these numbers are significant. The Children’s Cancer and Leukaemia Biobank alone has banked 19,000 samples since 1998. The Northern Ireland Biobank reports that 2,062 patients consented for their tissues to be used in research between 2017 and 2018, and that 4,086 samples were accessed by researchers in that period.

Identifying biomarkers

Projects that use biobanks are often trying to identify biomarkers. These are any biological characteristics that give useful information about a disease or condition. Our team is looking at whether the protein brachyury is a useful biomarker to improve bowel cancer diagnosis.

Brachyury is essential for early embryonic development, but it is switched off in most cells by the time you are born. However, several studies imply that finding brachyury in a tumour indicates a poorer outcome for the patient. But to work out if this link is correct, we need to look at biobank samples. Doing this will help us work out more accurately which patients are at higher risk of cancer recurrence or metastasis. This is important when doctors are deciding on the best course of treatment.

In our research, we also need clinical details, such as what happened to the patient and all the information available at the time of diagnosis. Then we can assess whether testing for brachyury would have added useful information to the diagnosis. Information that accompanies each block is anonymised, which means the researcher analysing the data won’t know the patient’s name or be able to identify them from the sample. But they can see any relevant clinical details such as tumour stage, age, sex and survival.

Biobank samples have already improved treatment of childhood acute lymphocytic leukaemia. Samples from the Cancer and Leukaemia Biobank were used to demonstrate that children with an abnormality in chromosome 21 had poorer outcomes than those without it. This led to treatment being modified for these children so they are no longer at a disadvantage.

People are often applauded for raising money for research by undertaking gruelling or inventive challenges. Patients who decide their tissue can be used in research should be similarly applauded. Without their unique and valuable gift, we wouldn’t be able to further our understanding, diagnosis and treatment of all kinds of illnesses and diseases.

The Conversation

Helena Robinson receives funding from Cancer Research Wales.

Being left-handed doesn't mean you are right-brained — so what does it mean?

Author: Emma Karlsson, Postdoctoral researcher in Cognitive Neuroscience, Bangor University

Wachiwit/Shutterstock

There have been plenty of claims about what being left-handed means, and whether it changes the type of person someone is – but the truth is something of an enigma. Myths about handedness appear year after year, but researchers have yet to uncover all of what it means to be left-handed.

So why are people left-handed? The truth is we don’t fully know that either. What we do know is that only around 10% of people across the world are left-handed – but this isn’t split equally between the sexes. About 12% of men are left-handed but only about 8% of women. Some people get very excited about the 90:10 split and wonder why we aren’t all right-handed.

But the interesting question is: why isn’t our handedness based on chance? Why isn’t it a 50:50 split? It is not due to handwriting direction – otherwise left-handedness would be dominant in countries whose languages are written right to left, which is not the case. Even the genetics are odd – only about 25% of children who have two left-handed parents will also be left-handed.


Read more: How children's brains develop to make them right or left handed


Being left-handed has been linked with all sorts of bad things. Poor health and early death are often associated with it, for example – but neither link is exactly true. The latter is explained by many people in older generations being forced to switch and use their right hands. This makes it look like there are fewer left-handers at older ages. The former, despite being an appealing headline, is just wrong.

Positive myths also abound. People say that left-handers are more creative, as most of them use their “right brain”. This is perhaps one of the more persistent myths about handedness and the brain. But no matter how appealing (and perhaps to the disappointment of those lefties still waiting to wake up one day with the talents of Leonardo da Vinci), the general idea that any of us use a “dominant brain side” that defines our personality and decision making is also wrong.

Brain lateralisation and handedness

It is true, however, that the brain’s right hemisphere controls the left side of the body, and the left hemisphere the right side – and that the hemispheres do actually have specialities. For example, language is usually processed a little bit more within the left hemisphere, and recognition of faces a little bit more within the right hemisphere. This idea that each hemisphere is specialised for some skills is known as brain lateralisation. However, the halves do not work in isolation, as a thick band of nerve fibres – called the corpus callosum – connects the two sides.

Interestingly, there are some known differences in these specialities between right-handers and left-handers. For example, it is often cited that around 95% of right-handers are “left hemisphere dominant”. This is not the same as the “left brain” claim above; it actually refers to the early finding that most right-handers depend more on the left hemisphere for speech and language. It was assumed that the opposite would be true for lefties. But this is not the case. In fact, 70% of left-handers also process language more in the left hemisphere. Why this number is lower, rather than reversed, is as yet unknown.


Read more: Why is life left-handed? The answer is in the stars


Researchers have found many other brain specialities, or “asymmetries” in addition to language. Many of these are specialised in the right hemisphere – in most right-handers at least – and include things such as face processing, spatial skills and perception of emotions. But these are understudied, perhaps because scientists have incorrectly assumed that they all depend on being in the hemisphere that isn’t dominant for language in each person.

In fact, this assumption, plus the recognition that a small number of left-handers have unusual right hemisphere brain dominance for language, means left-handers are either ignored – or worse, actively avoided – in many studies of the brain, because researchers assume that, as with language, all other asymmetries will be reduced.

How some of these functions are lateralised (specialised) in the brain can actually influence how we perceive things and so can be studied using simple perception tests. For example, in my research group’s recent study, we presented pictures of faces that were constructed so that one half of the face shows one emotion, while the other half shows a different emotion, to a large number of right-handers and left-handers.

Usually, people see the emotion shown on the left side of the face, and this is believed to reflect specialisation in the right hemisphere. This is linked to the fact that visual fields are processed in such a way that there is a bias to the left side of space. A bias to the left side of space is thought to represent right hemisphere processing, while a bias to the right side of space is thought to represent left hemisphere processing. We also presented different types of pictures and sounds, to examine several other specialisations.
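
As an illustration of how such a bias can be quantified – a simplified sketch with made-up numbers, not the analysis pipeline used in our study – one could score each participant by how often they report the emotion shown on the left half of the face and then compare right-handers with left-handers:

```python
# A minimal sketch of scoring a laterality bias from a chimeric-faces task
# like the one described above. Each trial records whether the participant
# reported the emotion shown on the left or the right half of the face;
# all counts here are invented for illustration.
import numpy as np
from scipy.stats import ttest_ind

def bias_index(left_choices, total_trials):
    """Positive = bias to the left side of space (right-hemisphere pattern),
    negative = bias to the right side (left-hemisphere pattern)."""
    right_choices = total_trials - left_choices
    return (left_choices - right_choices) / total_trials

# Hypothetical counts of left-side choices out of 40 trials per participant.
right_handers = np.array([bias_index(n, 40) for n in [30, 28, 33, 27, 31]])
left_handers = np.array([bias_index(n, 40) for n in [24, 20, 29, 18, 26]])

t_stat, p_value = ttest_ind(right_handers, left_handers)
print(f"mean bias (right-handers) = {right_handers.mean():+.2f}")
print(f"mean bias (left-handers)  = {left_handers.mean():+.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```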

Our findings suggest that some types of specialisations, including processing of faces, do seem to follow the interesting pattern seen for language (that is, more of the left-handers seemed to have a preference for the emotion shown on the right side of the face). But in another task that looked at biases in what we pay attention to, we found no differences in the brain-processing patterns for right-handers and left-handers. This result suggests that while there are relationships between handedness and some of the brain’s specialisations, there aren’t for others.

Left-handers are absolutely central to new experiments like this, but not just because they can help us understand what makes this minority different. Learning what makes left-handers different could also help us finally solve many of the long-standing neuropsychological mysteries of the brain.

The Conversation

Emma Karlsson does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Brexit uncertainty boosts support for Welsh independence from the UK

Author: Stephen Clear, Lecturer in Constitutional and Administrative Law, and Public Procurement, Bangor University

vladm/Shutterstock

In a move that surprised many, in June 2016, 52.5% of people in Wales voted to leave the European Union. But concerns over Brexit negotiations and “chaos in UK politics” have mounted since then, and recent polls suggest that support for remain has risen considerably in Wales.

Now, the Welsh government has announced that it will campaign for the UK to remain in the EU while public attention is turning to the question of whether the Welsh should become independent from a post-Brexit UK.

Welsh independence has long been supported by Plaid Cymru, but it now appears to be becoming more mainstream, with more Welsh citizens considering the possibility of leaving the union. Marches are being held across the country and recent YouGov polls indicate that support for independence, or at least “indy-curiosity”, has grown in Wales in the past two years.

If it were to become independent, Wales wouldn’t have to start from scratch. It has had a devolved government and parliament (the National Assembly or “Senedd”) for 20 years.

At present these bodies do not have control over all matters relating to Wales. They don’t have control over defence and national security, foreign policy, and immigration, for example. But the Assembly does have responsibility for policy and passing laws for the benefit of the people of Wales, and has been doing so for the past 20 years.

Wales, alone

Strictly speaking, constitutional law dictates that Wales cannot run its own referendum nor declare independence unilaterally. The new Schedule 7A to the Government of Wales Act 2006 states that “the union of the nations of Wales and England” is a reserved matter, not for the Assembly. But precedent suggests that an independence referendum is not an impossibility.

If there is momentum for Wales to decide its own future, this would put pressure on the UK government to facilitate a legal solution for a referendum. This opportunity was afforded to the former Scottish first minister, Alex Salmond, by former prime minister David Cameron, via the Scottish Independence Referendum Act 2013.

While not all are in favour of Welsh independence, the political narrative is changing. Welsh first minister Mark Drakeford has stated that “support for the union is not unconditional” and that “independence has risen up the public agenda”.

Concerned by relationships between the UK’s countries, former prime minister Theresa May referred to the electoral success of nationalist parties such as Plaid Cymru as evidence that the union is “more imperilled now than it has ever been”. She also sanctioned the Dunlop review, with a remit to address “how we can secure our union for the future”.

Her comments echo warnings from former Labour prime minister Gordon Brown, who recently remarked that UK unity is “more at risk than at any time in 300 years – and more in danger than when we had to fight for it in 2014 during a bitter Scottish referendum”.

The Senedd

So if Wales overcame the legal challenges and gained national political support, would the devolved government and parliament be able to manage the country? As noted above, the National Assembly has been making laws for Wales since 1999. Frequently cited achievements include the abolition of prescription charges and financial support for Welsh university students (via a mix of tuition loans and living cost grants). In addition, the Social Services and Well-being Act 2014 changed how people’s needs are assessed and services delivered.

Wales was also among the first to introduce free bus travel for OAPs, charges for plastic bags, and the indoor smoking ban – with further bans in school playgrounds and outside hospitals in 2019.

More recently its Future Generations Act was celebrated for compelling public bodies to think about the long-term impact of their decisions on communities and the environment – albeit with some criticisms from legal experts for being “toothless” in terms of enforceability.


Read more: Wales is leading the world with its new public health law


Alongside these headline-grabbing results, the National Assembly itself has been an achievement in its own right. While its initial establishment was something of a battle – in 1979 Wales voted 4:1 against creating an Assembly and in 1997 just 50.3% voted for it – the Wales Act 2017 actually extended the scope of the Assembly’s powers.

This changed its constitutional structure from a conferred powers model (which limited it to specifically listed areas) to a reserved powers model, which empowers the Assembly to produce a multitude of Welsh laws on all matters that are not reserved to the UK parliament.

But even with its strong history, it must be noted that not everyone is in favour of the Assembly. A small number of UKIP assembly members are currently arguing to reverse devolution while others criticise Wales’ record – particularly in the areas of schooling and the NHS.

Independence challenges

There are several other dimensions to the question of whether Wales could become an independent state. Socially and economically, opponents argue that Wales is too small and too poor to stand alone on the world stage. Yes Cymru, a non-partisan pro-independence campaign group, has sought to debunk these myths, pointing out that there are 18 countries in Europe smaller than Wales, and that the assessment of Wales’ fiscal deficit is flawed because it excludes significant assets such as water and electricity.

The constitutional shift in power that will follow Brexit will certainly give rise to the prospect of a divided UK. But the outcome of Brexit, and its impact on Welsh independence, hinges on the new prime minister’s actions.

While Boris Johnson has reiterated that the “union comes first”, if there is significant public support for independence in Wales, it will be hard for Johnson to ignore the people’s right to self-determination and arbitrarily enforce the union at all costs. Should the independence movement gain wider support in the coming months, compromises will have to be reached, with at least more incremental devolution being likely in the medium term.

Ultimately, while it would be a monumental change, the question of whether Wales becomes independent hinges on what the people want for their country. If successive UK governments take the union for granted, without more meaningful consideration of the cumulative effects on the people of Wales, calls for independence may become louder.

The Conversation

Stephen Clear does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

How the brain prepares for movement and actions

Author: Myrto Mantziara, PhD Researcher, Bangor University

To perform a sequence of actions, our brains need to prepare and queue them in the correct order. AYAakovlev/Shutterstock

Our behaviour is largely tied to how well we control, organise and carry out movements in the correct order. Take writing, for example. If we didn’t make one stroke after another on a page, we would not be able to write a word.

However, motor skills (single actions or sequences of actions which, through practice, become effortless) can become very difficult to learn and retrieve when neurological conditions disrupt the planning and control of sequential movements. When a person has a disorder – such as dyspraxia or stuttering – certain skills cannot be performed in a smooth and coordinated way.

Traditionally scientists have believed that in a sequence of actions, each is tightly associated to the other in the brain, and one triggers the next. But if this is correct, then how can we explain errors in sequencing? Why do we mistype “form” instead of “from”, for example?

Some researchers argue that before we begin a sequence of actions, the brain recalls and plans all items at the same time. It prepares a map where each item has an activation stamp relative to its order in the sequence. These compete with each other until the item with the strongest activation wins. It “comes out” for execution as being more “readied” – so we type “f” in the word “from” first, for example – and then it is erased from the map. This process, called competitive queuing, is repeated for the rest of the actions until we execute all the items of the sequence in the correct order.
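
The principle is easy to sketch in code. The toy model below is only an illustration of the general idea, not the model used in the research described here: each action gets an activation stamp that is stronger the earlier it should occur, the most active item is executed and then erased, and adding noise to the activations occasionally swaps neighbouring items – producing exactly the kind of “form”/“from” error mentioned above.

```python
# A minimal sketch of the competitive queuing idea described above
# (an illustration of the general principle, not the study's model).
# It assumes each planned action is distinct, e.g. the letters of "from".
import random

def competitive_queuing(actions, noise=0.0):
    # Earlier positions get stronger activation stamps (a simple gradient),
    # optionally perturbed by Gaussian noise.
    activations = {a: len(actions) - i + random.gauss(0, noise)
                   for i, a in enumerate(actions)}
    output = []
    while activations:
        winner = max(activations, key=activations.get)  # strongest item wins
        output.append(winner)                           # execute it...
        del activations[winner]                         # ...and erase it from the plan
    return "".join(output)

random.seed(1)
print(competitive_queuing("from"))             # noiseless: always the correct order
print(competitive_queuing("from", noise=0.8))  # noisy: may come out as "form"
```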

This idea that the brain uses simultaneous activations of actions before any movement takes place was demonstrated in a 2002 study. As monkeys were drawing shapes (making three strokes for a triangle, for example), researchers found that before the start of the movement there were simultaneous neural patterns for each stroke, and the strength of each activation predicted the position of that particular stroke in the executed sequence.

Planning and queuing

What has not been known until now is whether this activation system is used in the human brain. Nor have we known how actions are queued while we prepare them, based on their position in the sequence. However, recent research from neuroscientists at Bangor University and University College London has shown that there is simultaneous planning and competitive queuing in the human brain too.

To carry out sequences of actions, our brains must queue each one before we do it. Liderina/Shutterstock

For this study, the researchers were interested to see how the brain prepares for executing well-learned action sequences like typing or playing the piano. Participants were trained for two days to pair abstract shapes with five-finger sequences in a computer-based task. They learned the sequences by watching a small dot move from finger to finger on a hand image displayed on the screen, and pressing the corresponding finger on a response device. These sequences were combinations of two finger orders with two different rhythms.

On the third day, the participants had to produce the correct sequence entirely from memory – based on the abstract shape presented for a short time on the screen – while their brain activity was recorded.

Looking at the brain signals, the team was able to distinguish participants’ neural patterns as they planned and executed the movements. The researchers found that, milliseconds before the start of the movement, all the finger presses were queued and “stacked” in an ordered manner. The activation pattern of the finger presses reflected their position in the sequence that was performed immediately after. This competitive queuing pattern showed that the brain prepared the sequence by organising the individual actions in the correct order.

The researchers also looked at whether this preparatory queuing activity was shared across different sequences which had different rhythms or different finger orders, and found that it was. The competitive queuing mechanism acted as a template to guide each action into a position, and provided the base for the accurate production of new sequences. In this way the brain stays flexible and efficient enough to be ready to produce unknown combinations of sequences by organising them using this preparatory template.

Interestingly, the quality of the preparatory pattern predicted how accurate a participant was in producing a sequence. In other words, the better separated the planned actions were before the execution of the sequence, the more likely the participant was to execute the sequence without mistakes. The presence of errors, on the other hand, meant that the queuing of the patterns in preparation for the action was less well defined, and the patterns tended to be intermingled.

By knowing how our actions are pre-planned in the brain, researchers will be able to find out the parameters of executing smooth and accurate movement sequences. This could lead to a better understanding of the difficulties found in disorders of sequence learning and control, such as stuttering and dyspraxia. It could also help the development of new rehabilitation or treatment techniques which optimise movement planning in order for patients to achieve a more skilled control of action sequences.

The Conversation

Myrto Mantziara is a PhD researcher and receives funding from School of Psychology, Bangor University.

Can we speak of a European identity?

Author: François Dubet, Emeritus Professor, Université de Bordeaux; Nathalie Heinich, Sociologist, Centre national de la recherche scientifique (CNRS); Nikolaos Papadogiannis, Lecturer in Modern and Contemporary History, Bangor University

François Dubet, Université de Bordeaux: “Everyone perceives Europe from their own point of view”

The question of identity is always caught in the same paradox. On the one hand, identity seems insubstantial: a construction cobbled together from bits and pieces, a narrative, an unstable set of imaginings and beliefs that falls apart as soon as one tries to grasp it. On the other hand, these uncertain identities seem extremely solid, embedded in our most intimate sense of self. Often, imagined collective identities need only come undone for individuals to feel threatened and wounded to their very core.

After all, the hundreds of thousands of Her Majesty’s subjects who marched against Brexit on March 23 felt European because this tiny part of themselves risks being torn away from them, even though they could not define it precisely.

A European identity in motion

European migrations, 2013. FNSP, Sciences Po, Atelier de cartographie, CC BY-NC-ND

I suppose that historians and scholars of civilisations could easily define something like a European identity, rooted in the shared histories of the societies and states that took shape in the Latin, Christian and Germanic worlds, in repeated wars, royal alliances, revolutions, trade, the circulation of elites and migration within Europe.

The histories of nation states are simply incomprehensible outside the history of Europe. That said, we would find it very hard to define this fractured, divided, shifting identity. Everyone perceives Europe from their own point of view, and indeed when European institutions venture to define a European identity, they struggle to do so.

Could European identity be nothing more than an illusion, a mere accumulation of national identities – the only ones that are truly solid, because they are backed by institutions?

Living Europe in order to love it

Opinion polls, which should be handled with care, show that individuals rank their feelings of belonging. People feel Breton and French, and European, and a believer, and a woman or a man, and of this or that origin without, in most cases, experiencing these multiple identifications as dilemmas.

Even those who resent political Europe for being too liberal and too bureaucratic hardly seem eager to return to mass mobilisation in defence of their country against its European neighbours. And this despite the rise of far-right parties almost everywhere in Europe, parties that stress their attachment to national identity.


Read more: FPÖ, AfD, Vox : les partis d’extrême droite à l’offensive


The common currency has greatly simplified exchanges between Europeans, but it has not erased disparities. Pixabay, CC BY

Beyond any explicit political consciousness, a form of European identity has thus taken shape, lived out through the movement of people, leisure and ways of life.

Many of those who rail against Europe probably can no longer imagine having to apply for visas and change francs into pesetas to spend two weeks in Spain.

Yet demagogues accuse Europe of being the cause of their misfortunes, an attack that resonates ever more loudly in the ears of disadvantaged socio-economic groups.

It cannot be ruled out that criticism of Europe stems more from disappointed love than from hostility. European identity exists far more than we think. Were Europe to implode, we would miss it, and not only in the name of our own enlightened self-interest.

Nathalie Heinich, CNRS/EHESS: “Should we speak of a European identity?”

Speaking of “identity” in relation to an entity loaded with political connotations is never neutral, as we see with the notion of “French identity”. Either one asserts the existence of this entity (“European identity”) while implicitly setting it apart from a larger collective (America or China, for example), in which case one is effectively claiming support for the small (the “dominated”) against the big (the “dominant”); or one implicitly sets it apart from a smaller collective (the nation, France), in which case one is claiming to affirm the superiority of the big over the small. Everything therefore depends on the context and the premises.

An expression with two meanings

But if we want to avoid a normative answer and stick to a neutral description, free of value judgement, then we must distinguish between two meanings of the term “European identity”. The first refers to the nature of the abstract entity called “Europe”: its borders, its institutions, its history, its culture or cultures, and so on. The exercise is a classic one, and the historical and political science literature on the subject is abundant, even if the word “identity” is not necessarily used there.

“Is there (still) such a thing as European identity?”, Roger Casale, TEDx Oxford.

The second meaning refers to the representations that actual individuals have of their “identity as Europeans”, that is, the way in which, and the degree to which, they attach themselves to this collective at a more general level than the usual national identity. The diagnosis then requires sociological inquiry into the three “moments” of identity – self-perception, presentation and designation – through which an individual feels, presents themselves and is designated as “European”. Such an inquiry can take a quantitative form, with a representative survey built around these three experiences. The question “Can we speak of a European identity?” can therefore only be answered once such an inquiry has been carried out.

A question for citizens and their representatives

But the political stakes of the question are lost on no one, which is why we must bear in mind the function that the word “identity” serves when it is introduced into thinking about Europe: the aim is to turn an economic and social project into a political programme that is acceptable to the greatest number – indeed desirable.

That is why the problem is not so much whether we can, but whether we should make Europe a matter of identity, and no longer merely an economic and social one. Hence: “Should we speak of a European identity?”

The answer to that question belongs to citizens and their representatives – not to researchers.

Nikolaos Papadogiannis, Bangor University, UK: “European identity: a plurality of options”

The outcome of the UK’s 2016 referendum on EU membership sent shockwaves across Europe. Among other things, it has prompted debates over whether a “European culture” or a “European identity” really exists, or whether national identities still dominate.

It would be wrong, in my view, to dismiss the identification of various people with “Europe”. This identification is the outcome of a long process, particularly in the second half of the 20th century, involving both the policies of EEC/EU institutions and grassroots initiatives.

Cross-border youth mobility since 1945 is a key example of the latter: it was often developed by groups that were not formally linked to the EEC/EU. They nonetheless helped develop an attachment to “Europe” in several countries of the continent.

As the political scientist Ronald Inglehart showed in the 1960s, the younger people were, and the more they travelled, the more likely they were to support an ever-closer political union in Europe. More recently, Erasmus exchange programmes have also helped develop forms of identification with Europe.

Feeling “European”

At the same time, feeling “European” and subscribing to a national identity are far from incompatible. In the 1980s, many West Germans were passionate about a reunified Germany being part of a politically united Europe.

A section of the Berlin Wall. MariaTortajada/Pixabay, CC BY

Attachment to “Europe” has also been a key component of regional nationalism in several European countries over the past three decades, such as Scottish, Catalan and Welsh nationalism. A rallying cry for Scottish nationalists since the 1980s has been “independence in Europe”, and this remains the case today. It is quite telling that the main slogan of the centre-left Scottish National Party (SNP), the most powerful nationalist party in Scotland, for the 2019 European Parliament elections is “Scotland’s future belongs in Europe”.

Varied national goals gathered under the starry banner

However, what deserves more attention is the significance attached to the notion of European identity. Diverse social and political groups have used it, from the far left to the far right.

The meaning they attach to this identity also varies. For the SNP, it is compatible with Scotland’s membership of the EU. The SNP combines the latter with an inclusive understanding of the Scottish nation, one that is open to people born elsewhere in the world who live in Scotland.

Speech by the SNP leader and Scottish first minister, Nicola Sturgeon, at the royal opening of the Scottish Parliament on July 2, 2016.

In Germany, by contrast, the far-right AfD (Alternative für Deutschland, Alternative for Germany) identifies with “Europe” but is critical of the EU. It combines the former with Islamophobia. A clear example of this mix is a poster published by the party ahead of the 2019 elections, asking “Europeans” to vote for the AfD so that Europe does not become “Eurabia”.

While identification with Europe does exist, it is a complex phenomenon, framed in several ways. It does not necessarily imply support for the EU. Likewise, European identities are not necessarily mutually exclusive with national identities. Finally, they may, though not always, rest on stereotypes about people regarded as “non-European”.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

Is there such a thing as a 'European identity'?

Author: Nikolaos Papadogiannis, Lecturer in Modern and Contemporary History, Bangor University

Is there such a thing as a European identity? Marco Verch/Flickr, CC BY-ND

The outcome of the UK’s 2016 referendum on EU membership has sent shockwaves across Europe. Among other impacts, it has prompted debates around whether a “European culture” or a “European identity” actually exists, or whether national identities still dominate.

It would be wrong, in my opinion, to write off the identification of various people with “Europe”. This identification has been the outcome of a long process, particularly in the second half of the 20th century, involving both the policies of the European Economic Community (EEC) and EU institutions and grassroots initiatives. Cross-border youth mobility since 1945 is a key example of the latter: it was often developed by groups that were not formally linked to the EEC/EU. They still helped develop an attachment to “Europe” in several countries of the continent.

As political scientist Ronald Inglehart showed in the 1960s, the younger people were, and the more they travelled, the more likely they were to support an ever-closer political union in Europe. More recently, Erasmus exchange programmes have also helped develop forms of identification with Europe.

Feeling “European”

Simultaneously, feeling “European” and subscribing to a national identity have been far from mutually exclusive. Numerous West Germans in the 1980s were passionate about a reunified Germany being part of a politically united Europe.

Attachment to “Europe” has also been a key component of regional nationalism in several European countries in the last three decades, such as Scottish or Catalan nationalism. A rallying cry for Scottish nationalists from the 1980s on has been “independence in Europe”, and it continues to be the case today. Indeed, for the 2019 European Parliament elections, the primary slogan of the centre-left Scottish National Party (SNP), currently in power, is “Scotland’s future belongs in Europe”.

Diverse agendas

What requires further attention is the significance attached to the notion of European identity. Diverse social and political groups have used it, ranging from the far left to the far right, and the meaning they attach to it varies. For the SNP, it is compatible with the EU membership of Scotland. The party combines the latter with an inclusive understanding of the Scottish nation, which is open to people who were born elsewhere in the world but live in Scotland.

Speech by SNP leader and first minister of Scotland, Nicola Sturgeon, on July 2, 2016.

By contrast, Germany’s far-right AfD party (Alternative für Deutschland, Alternative for Germany) is critical of the EU, yet identifies with “Europe”, which it explicitly contrasts with Islam. A clear example is one of the party’s posters for the upcoming elections that asks “Europeans” to vote for AfD so that the EU doesn’t become “Eurabia”.

Identification with Europe does exist, but it is a complex phenomenon, framed in several ways, and does not necessarily imply support for the EU. Similarly, European identities are not necessarily mutually exclusive with national identities. Finally, both the former and the latter identities may rest upon stereotypes against people regarded as “non-European”.

The Conversation

Nikolaos Papadogiannis does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Climate change is putting even resilient and adaptable animals like baboons at risk

Author: Isabelle Catherine Winder, Lecturer in Zoology, Bangor University

Villiers Steyn/Shutterstock.com

Baboons are large, smart, ground-dwelling monkeys. They are found across sub-Saharan Africa in various habitats and eat a flexible diet including meat, eggs, and plants. And they are known opportunists – in addition to raiding crops and garbage, some even mug tourists for their possessions, especially food.

We might be tempted to assume that this ecological flexibility (we might even call it resilience) will help baboons survive on our changing planet. Indeed, the International Union for the Conservation of Nature (IUCN), which assesses extinction risk, labels five of six baboon species as “of Least Concern”. This suggests that expert assessors agree: the baboons, at least relatively speaking, are at low risk.

Unfortunately, my recent research suggests this isn’t the whole story. Even these supposedly resilient animals may be at significant risk of extinction by 2070.

Resourceful – surely resilient? Okyela/Shutterstock.com

We know people are having huge impacts on the natural world. Scientists have gone as far as naming a new epoch, the Anthropocene, after our ability to transform the planet. Humans drive other species extinct and modify environments to our own ends every day. Astonishing television epics like Our Planet emphasise humanity’s overwhelming power to damage the natural world.

But so much remains uncertain. In particular, while we now have a good understanding of some of the changes Earth will face in the next decades – we’ve already experienced 1°C of warming as well as increases in the frequency of floods, hurricanes and wildfires – we still struggle to predict the biological effects of our actions.

In February 2019 the Bramble Cay melomys (a small Australian rodent) had the dubious honour of being named the first mammal extinct as a result of anthropogenic climate change. Others have suffered range loss, population decline and complex knock-on effects from their ecosystems changing around them. Predicting how these impacts will stack up is a significant scientific challenge.

We can guess at which species are at most risk and which are safe. But we must not fall into the trap of trusting our expectations of resilience, based as they are on a species’ current success. Our recent research aimed to test these expectations – we suspected that they would not also predict survival under changing climates, and we were right.

Baboons and climate change

Models of the effects of climate change on individual species are improving all the time. These are ecological niche models, which take information on where a species lives today and use it to explore where it might be found in future.

For the baboon study, my master’s student Sarah Hill and I modelled each of the six baboon species separately, starting in the present day. We then projected their potential ranges under 12 different future climate scenarios. Our models included two different time periods (2050 and 2070), two different degrees of projected climate change (2.6°C and 6°C of warming) and three different global climate models, each with subtly different perspectives on the Earth system. These two degrees of warming were chosen because they represent expected “best case” and “worst case” scenarios, as modelled by the Intergovernmental Panel on Climate Change.

Our model outputs allowed us to calculate the change in the area of suitable habitat for each species under each scenario. Three of our species, the yellow, olive and hamadryas baboons, seemed resilient, as we initially expected. For yellow and olive baboons, suitable habitat expanded under all our scenarios. The hamadryas baboon’s habitat, meanwhile, remained stable.
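For readers curious how a scenario grid like this turns into the percentage figures reported here, the sketch below shows one hypothetical way of tallying habitat change across the 12 combinations of time horizon, warming level and global climate model. It is a minimal illustration, not the study's actual code; all names and numbers in it are invented placeholders.

```python
# Minimal sketch (not the authors' code): tallying suitable-habitat change
# across a 2 x 2 x 3 scenario grid. All identifiers and figures are
# hypothetical placeholders for illustration only.
from itertools import product

TIME_HORIZONS = [2050, 2070]
WARMING_LEVELS = ["2.6C", "6C"]                # the two projected degrees of warming
CLIMATE_MODELS = ["GCM_A", "GCM_B", "GCM_C"]   # three global climate models

def percent_change(baseline_km2: float, projected_km2: float) -> float:
    """Percentage change in suitable-habitat area relative to the present day."""
    return 100.0 * (projected_km2 - baseline_km2) / baseline_km2

# Hypothetical model outputs: projected suitable area (km2) for one species.
baseline_area = 1_000_000.0
projected_area = {
    (2050, "2.6C", "GCM_A"): 930_000.0,
    # ... one entry per scenario, 12 in total ...
}

for horizon, warming, gcm in product(TIME_HORIZONS, WARMING_LEVELS, CLIMATE_MODELS):
    area = projected_area.get((horizon, warming, gcm))
    if area is None:
        continue  # scenario not filled in for this illustration
    print(f"{horizon} / {warming} / {gcm}: {percent_change(baseline_area, area):+.1f}%")
```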

Guinea baboons like these seem to be especially sensitive to warm and arid conditions. William Warby via Flickr and Wikimedia Commons

Guinea baboons (the only one IUCN-labelled as Near Threatened) showed a small loss. Under scenarios predicting warmer, wetter conditions, they might even gain a little. Unfortunately, models projecting warming and drying predicted that Guinea baboons could lose up to 41.5% of their suitable habitat.

But Kinda baboons seemed sensitive to the same warmer and wetter conditions that might favour their Guinea baboon cousins. They were predicted to lose habitat under every model, with losses ranging from relatively small (0-22.7%) under warmer, drier conditions to 70.2% under the worst warm and wet scenario.

And the final baboon species, the chacma baboon of South Africa (the same species known for raiding tourist vehicles to steal treats), was predicted to suffer the worst habitat loss. Across our 12 scenarios, its predicted losses ranged from 32.4% to 83.5%.

Chacma baboons like these may struggle to survive in the next few decades. PACA COMO/Shutterstock.com

Wider implications

The IUCN identifies endangered species using estimates of population and range size and how they have changed. Although climate change impacts are recognised as potentially causing important shifts in both these factors, climate change effect models like ours are rarely included, perhaps because they are often not available.

Our results suggest that in a few decades several baboon species might move into higher-risk categories. This depends on the extent of range (and hence population) loss they actually experience. New assessments will be required to see which category will apply to chacma, Kinda and Guinea baboons in 2070. It’s worth noting also that baboons are behaviourally flexible: they may yet find new ways to survive.

This also has wider implications for conservation practice. First, it suggests that we should try to incorporate more climate change models into assessments of species’ prospects. Second, having cast doubt on our assumption of baboon “resilience”, our work challenges us to establish which other apparently resilient species might be similarly affected. And given that the same projected changes act differently even on closely related baboon species, we need to assess species systematically, without prior assumptions, and try to extract new general principles about climate change impacts as we work.

Sarah and I most definitely would not advocate discarding any of the existing assessment tools – the work the IUCN does is vitally important and our findings just confirm that. But our project may have identified an important additional factor affecting the prospects of even seemingly resilient species in the Anthropocene.



The Conversation

Isabelle Catherine Winder does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Replanting oil palm may be driving a second wave of biodiversity loss

Author: Simon Willcock, Senior Lecturer in Environmental Geography, Bangor University; Adham Ashton-Butt, Post-doctoral Research Associate, University of Hull

Rufous-backed dwarf kingfisher habitat is lost when forests are cleared for oil palm plantations. © Muhammad Syafiq Yahya

The environmental impact of palm oil production has been well publicised. Palm oil is found in everything from food to cosmetics, and the deforestation, ecosystem decline and biodiversity loss associated with its production are a serious cause for concern.

What many people may not know, however, is that oil palm trees – the fruit of which is used to create palm oil – have a limited commercial lifespan of 25 years. Once this period has ended, the plantation is cut down and replanted, as older trees start to become less productive and are difficult to harvest. Our research has now found that this replanting might be causing a second wave of biodiversity loss, further damaging the environment where these plantations have been created.

An often overlooked fact is that oil palm plantations actually have higher levels of biodiversity compared to some other crops. More species of forest butterflies would be lost if a forest were converted to a rubber plantation, than if it were converted to oil palm, for example. One reason for this is that oil palm plantations provide a habitat that is more similar to tropical forest than other forms of agriculture (such as soybean production). The vegetation growing beneath the oil palm canopy (called understory vegetation) also provides food and a habitat for many different species, allowing them to thrive. Lizard abundance typically increases when primary forests are converted to oil palm, for example.


Read more: Palm oil boycott could actually increase deforestation – sustainable products are the solution


This does not mean oil palm plantations are good for the environment. In South-East Asia, where 85% of palm oil is produced, the conversion of forest to oil palm plantations has caused declines in the number of several charismatic animals, including orangutans, sun bears and hornbills. Globally, palm oil production affects at least 193 threatened species, and further expansion could affect 54% of threatened mammals and 64% of threatened birds.

Second crisis

Banning palm oil would likely only displace, not halt, this biodiversity loss. Several large brands and retailers are already producing products using sustainably certified palm oil, as consumers reassess the impact of their purchasing. But because it is such a ubiquitous ingredient, if it were outlawed companies would need an alternative to keep making the products that include it, and developing countries would need to find something else to contribute to their economies. Production would shift to the cultivation of other oil crops elsewhere, such as rapeseed, sunflower or soybean, in order to meet global demand. In fact, since oil palm produces the highest yields per hectare – up to nine times more oil than any other vegetable oil crop – it could be argued that cultivating oil palm minimises deforestation.

That’s not to say further deforestation should be encouraged to create plantations though. It is preferable to replace plantations in situ, replanting each site so that land already allocated for palm oil production can be reused. This replanting is no small undertaking – 13m hectares of palm oil plantations are to be uprooted by the year 2030, an area nearly twice the size of Scotland. However, our study reveals that much more needs to be done in the management and processes around this replanting, in order to maximise productivity and protect biodiversity in plantations.


Read more: Palm oil: scourge of the earth, or wonder crop?


We found significant declines in the biodiversity and abundance of soil organisms as a consequence of palm replanting. While there was some recovery over the seven years it takes the new crop to establish, the samples we took still had nearly 20% less diversity of invertebrates (such as ants, earthworms, millipedes and spiders) than oil palm converted directly from forest.

We also found that second-wave mature oil palm trees had 59% fewer animals than the previous crop. This drastic change could have severe repercussions for soil health and the overall agro-ecosystem sustainability. Without healthy, well-functioning soil, crop production suffers.

It is likely that replanting drives these declines. Prior to replanting, heavy machinery is used to uproot old palms. This severely disrupts the soil, making upper layers vulnerable to erosion and compaction, reducing its capacity to hold water. This is likely to have a negative impact on biodiversity, which is then further reduced due to the heavy use of pesticides.


Read more: How Indonesia's election puts global biodiversity at stake with an impending war on palm oil


Without change to these management practices, soil degradation is likely to continue, causing decreases in future biodiversity, as well as the productivity of the plantation.

Ultimately, palm oil appears to be a necessary food product for growing populations. However, now that we have identified some of the detrimental consequences of replanting practices, it is clear that long-term production of palm oil comes at a higher cost than previously thought. The world needs to push for more sustainable palm oil, and those in the industry must explore more biodiversity-friendly replanting practices in order to lessen the long-term impacts of intensive oil palm cultivation.

The Conversation

Simon Willcock receives funding from the UK's Economic and Social Research Council (ESRC; ES/R009279/1 and ES/R006865/10). He is affiliated with Bangor University, and is on the Board of Directors of Alliance Earth. This article was written in collaboration with Anna Ray, a research assistant and undergraduate student studying Environmental Science at Bangor University.

Adham Ashton-Butt receives funding from The Natural Environment Research Council. He is affiliated with The University of Hull and the University of Southampton.

Game of Thrones: neither Arya Stark nor Brienne of Tarth are unusual — medieval romance heroines did it all before

Author: Raluca Radulescu, Professor of Medieval Literature and English Literature, Bangor University

Warrior women: Brienne of Tarth, left, and Arya Stark sparring. ©2017 Home Box Office, Inc.

Brienne of Tarth and Arya Stark are very unlike what some may expect of a typical medieval lady. The only daughter of a minor knight, Brienne has trained up as a warrior and has been knighted for her valour in the field of battle. Meanwhile Arya, a tomboyish teen when we first met her in series one, is a trained and hardened assassin. No damsels in distress, then – they’ve chosen to defy their society’s expectations and follow their own paths.

Yet while they are certainly enjoyable to watch, neither character is as unusual as modern viewers may think. While the books and television series play with modern perceptions (and misperceptions) of women’s roles, Arya and Brienne resemble the heroines of medieval times. In those days both real and fictional women took arms to defend cities and fight for their community – inspired by the courage of figures such as Boudicca or Joan of Arc. They went in disguise to look for their loved ones or ran away from home as minstrels or pilgrims. They were players, not bystanders.

While Arya chooses to spend the night with Gendry, she ultimately refuses his proposal of a life together. © 2019 Home Box Office, Inc.

Medieval audiences were regularly inspired by stories of women’s acts of courage and emotional strength. There was Josian, for example, the Saracen (Muslim) princess of the popular medieval romance Bevis of Hampton, who promises to convert to Christianity for love (fulfilling the wishes of the Christian audience). She also murders a man to whom she has been married against her wishes.

There was also the lustful married lady who attempts to seduce Sir Gawain in the 14th-century poem Sir Gawain and the Green Knight. And there was Rymenhild, a princess who eventually marries King Horn in an early example of the romance genre – and who very much wants to break moral codes by having sex with her beloved before their wedding, which at that point has not been decided upon.

Medieval stories of such intense desire celebrate the young virgin heroine who woos the object of her desire and takes no notice of the personal, social, political and economic effects of sex before marriage. This is the case with both Arya and Brienne. Arya chooses her childhood friend Gendry to take her virginity on the eve of the cataclysmic battle against the undead. Brienne does the same with Jaime Lannister, the night after the cataclysmic battle – but only after he earns her trust over many adventures together.

Boldness and strength

It is the emotional strength and courage of these heroines that drives their stories forward rather than their relationship to the male hero. Throughout Game of Thrones, this emotional strength has also helped Arya and Brienne stay true to their missions. Arya’s continued strength has to be seen in the light of what has happened to her, however. Brienne began the story as a trained “knight” but Arya’s journey has seen her learning, through bitter experience, the skills she needs to survive.

A medieval audience would have been attuned to this message of self-reliance, especially given the everyday gendered experiences of women who ran businesses, households and countries, married unafraid of conventions, or chose not to marry.

It is not too far-fetched to think that Arya and Brienne could together lead the alliance against the evil queen Cersei, having both learned that fate reserves unlikely rewards for those who prepare well and carry on in the name of ideals rather than to improve their own status. The frequently (and most likely deliberately) unnamed heroines of medieval romance similarly proved resourceful – often rising to power, leading countries or armies, without even a mention of prior training.

Sir Brienne of Tarth. ©2017 Home Box Office, Inc.

The medieval heroines that went unnamed provided a perfect model for women of the time to project themselves onto. The Duchess in the poem Sir Gowther, under duress (her husband threatens to leave her because she has not provided an heir), prays that she be given a son “no matter through what means”, and sleeps with the devil – producing the desired heir.

In the Middle English romance story of Sir Isumbras, his wife – whose name we are not told – transforms from a stereotypical courtly lady, kidnapped by a sultan, to a queen who fights against her captor. She becomes an empty shell onto which medieval women – especially those who do not come from the titled aristocracy – can project themselves. She battles alongside her husband and sons when his men desert him, with no training, only her own natural qualities to rely on.

These real and fictional heroines of the Middle Ages had no choices: they found solutions to seemingly impossible situations, just as Brienne and Arya have done. These two are unsung heroes, female warriors who stand in the background and don’t involve themselves in the “game”. While the men celebrate their victory against the undead White Walkers with a feast at Winterfell, Arya – whose timely assassination of their leader, the Night King, enabled the victory – shuns the limelight.

While the conclusion to the stories of Arya and Brienne is yet to be revealed, given the heroines that inspired these characters it will not be surprising if it is the women warriors – not the men – who will drive the game to its end.

The Conversation

Raluca Radulescu has nothing to disclose.

Grass pollen allergies: the type of pollen may matter more than the quantity

Author: Simon Creer, Professor in Molecular Ecology, Bangor University; Georgina Brennan, Postdoctoral Research Officer, Bangor University

Grass pollens are among the most allergenic. Pixabay

When winter cold gives way to higher temperatures, the days lengthen and plant life returns, nearly 400 million people around the world suffer allergic reactions triggered by airborne pollen, whether from trees or from herbaceous plants. Symptoms range from itchy eyes, congestion and sneezing to the worsening of asthma, at a cost to society that runs into the billions.

Since the 1950s, many countries around the world have kept pollen counts in order to produce forecasts for people with allergies. In the UK, these forecasts are provided by the Met Office in collaboration with the University of Worcester. (In France, the Réseau national de surveillance aérobiologique, a non-profit association, is responsible for monitoring airborne biological particles that may affect allergy risk. Its bulletins are available online.)

Until now, pollen forecasts have been based on counting the total number of pollen grains in the air: these are collected using air samplers that trap particles on a slowly rotating sticky drum (2 mm per hour).

The problem is that these forecasts cover the level of all pollen in the air, yet people suffer different allergic reactions depending on the type of pollen they encounter. Grass pollen, for example, is the most harmful aeroallergen – more people are allergic to it than to any other airborne allergen. Moreover, the preliminary data we have gathered suggest that allergies to this pollen vary over the course of the flowering season.

Tracking pollen

The pollen of many allergenic tree and plant species can be identified under the microscope. Unfortunately, this is not feasible for grass pollens, because their grains look very similar. This means it is almost impossible to determine which species they belong to by routine visual examination alone.

To improve the accuracy of counts and forecasts, we set up a new project to develop methods for distinguishing the different types of grass pollen in the UK. The aim is to find out which pollen species are present in Great Britain throughout the grass flowering season.

Over recent years, our research team has explored several approaches to identifying grass pollens, including molecular genetics. One of the methods we use relies on DNA sequencing. It involves examining millions of short sections of DNA (DNA barcode markers), which are specific to each grass species or genus.

This approach is called “metabarcoding” and can be used to analyse DNA from mixed communities of organisms, as well as DNA from different types of environmental sources (for example soil, water, honey and air). This means we can assess the biodiversity of hundreds or thousands of samples. It allowed us to analyse the DNA of pollen collected by rooftop air samplers at 14 different locations across Great Britain.
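To illustrate the general idea, the sketch below shows how short barcode reads from a mixed air sample might be assigned to grass taxa by matching them against a reference library and tallying the counts per taxon. It is a simplified illustration, not the project's actual pipeline; the sequences and species pairings are invented placeholders.

```python
# Minimal illustrative sketch (not the project's pipeline): assigning pollen
# DNA reads to grass taxa via a reference barcode library, then tallying
# how much of the sample each taxon makes up. All sequences are made up.
from collections import Counter

reference_barcodes = {
    "ACGTACGTAA": "Lolium perenne",        # hypothetical barcode -> grass species
    "TTGCAAGTCC": "Holcus lanatus",
    "GGATCCTTAG": "Anthoxanthum odoratum",
}

def assign_reads(reads):
    """Count reads per taxon; reads with no match are reported as 'unassigned'."""
    counts = Counter()
    for read in reads:
        counts[reference_barcodes.get(read, "unassigned")] += 1
    return counts

sample_reads = ["ACGTACGTAA", "ACGTACGTAA", "GGATCCTTAG", "NNNNNNNNNN"]
counts = assign_reads(sample_reads)
total = sum(counts.values())
for taxon, n in counts.most_common():
    print(f"{taxon}: {n} reads ({100 * n / total:.0f}% of sample)")
```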

The flowering season

By comparing the pollen we captured with samples from the UK plant DNA barcode library (a reference DNA database built from correctly identified grass species), we were able to identify different types of grass pollen within complex airborne mixtures. This let us visualise how the different types of grass pollen are distributed across Great Britain over the course of the flowering season. Until now, it was not known whether the mixture of airborne pollens changed over time, reflecting what was flowering on the ground, or whether the mix steadily accumulated new species as the pollen season progressed.

We might reasonably have expected airborne pollen mixtures to be highly varied and heterogeneous in composition, given how mobile pollen grains are and the fact that different species flower at different points in the season. Yet our work revealed that this is not the case. We found that the composition of airborne pollen mirrors the seasonal progression of grass diversity: early-flowering species first, followed by mid- and late-season flowering species.

Using complementary contemporary and historical data, we also found that as the grass flowering season progresses, the airborne pollen closely tracks, with a slight delay, the flowering observed on the ground. In other words, over the course of the season the different pollen types do not persist in the environment but disappear.

The importance of this work goes beyond simply understanding plants. We have gathered evidence showing that sales of allergy medication are likewise not uniform across the grass flowering season. We know that some pollen types may contribute more to allergies than others. It is therefore reasonable to suppose that when allergic symptoms are particularly severe, they owe more to the presence of a particular type of pollen in the air than to an increase in overall pollen levels.

Over the coming months, we will examine different pollen types alongside associated health data, in order to analyse the links between the biodiversity of airborne pollen and allergic symptoms. The ultimate aim of our work is to improve forecasting, planning and preventive measures so as to limit grass pollen allergies.

The Conversation

Simon Creer has received funding from the Natural Environment Research Council.

Georgina Brennan has received funding from the Natural Environment Research Council.