Research stories

On our News pages

Our Research News pages contain an abundance of research-related articles, covering recent research output and topical issues.

On TheConversation.com

Our researchers publish across a wide range of subjects and across a variety of news platforms. The articles below are a few of those published on TheConversation.com.

Are the Amazon fires a crime against humanity?

Author: Tara Smith, Lecturer in Law, Bangor University

Fires in the Brazilian Amazon have jumped 84% during President Jair Bolsonaro’s first year in office and in July 2019 alone, an area of rainforest the size of Manhattan was lost every day. The Amazon fires may seem beyond human control, but they’re not beyond human culpability.

Bolsonaro ran for president promising to “integrate the Amazon into the Brazilian economy”. Once elected, he slashed the Brazilian environmental protection agency budget by 95% and relaxed safeguards for mining projects on indigenous lands. Farmers cited their support for Bolsonaro’s approach as they set fires to clear rainforest for cattle grazing.

Bolsonaro’s vandalism will be most painful for the indigenous people who call the Amazon home. But destruction of the world’s largest rainforest may accelerate climate change and so cause further suffering worldwide. For that reason, Brazil’s former environment minister, Marina Silva, called the Amazon fires a crime against humanity.

From a legal perspective, this might be a helpful way of prosecuting environmental destruction. Crimes against humanity are international crimes, like genocide and war crimes, which are considered to harm both the immediate victims and humanity as a whole. As such, all of humankind has an interest in their punishment and deterrence.

Historical precedent

Crimes against humanity were first classified as an international crime during the Nuremberg trials that followed World War II. Two German generals, Alfred Jodl and Lothar Rendulic, were charged with war crimes for implementing scorched earth policies in Finland and Norway. No one, though, was charged with crimes against humanity for causing the unprecedented environmental damage that scarred the post-war landscapes.

Our understanding of the Earth’s ecology has matured since then, yet so has our capacity to pollute and destroy. It’s now clear that the consequences of environmental destruction don’t stop at national borders. All humanity is placed in jeopardy when burning rainforests flood the atmosphere with CO₂ and exacerbate climate change.

Holding someone like Bolsonaro to account for this by charging him with crimes against humanity would be a world first. If successful, it could set a precedent which might stimulate more aggressive legal action against environmental crimes. But do the Amazon fires fit the criteria?


Read more: Why the International Criminal Court is right to focus on the environment


Prosecuting crimes against humanity requires proof of widespread and systematic attacks against a civilian population. If a specific part of the global population is persecuted, this is an affront to the global conscience. In the same way, domestic crimes are an affront to the population of the state in which they occur.

When prosecuting prominent Nazis in Nuremberg, the US chief prosecutor, Robert Jackson, argued that crimes against humanity are committed by individuals, not abstract entities. Only by holding individuals accountable for their actions can widespread atrocities be deterred in future.

Robert Jackson speaks at the Nuremberg trials in 1945. Raymond D'Addario/Wikipedia

The International Criminal Court’s Chief Prosecutor, Fatou Bensouda, has promised to apply the approach first developed in Nuremberg to prosecute individuals for international crimes that result in significant environmental damage. Her recommendations don’t create new environmental crimes, such as “ecocide”, which would punish severe environmental damage as a crime in itself. They do signal, however, a growing appreciation of the role that environmental damage plays in causing harm and suffering to people.

The International Criminal Court was asked in 2014 to open an investigation into allegations of land-grabbing by the Cambodian government. In Cambodia, large corporations and investment firms were being given prime agricultural land by the government, displacing up to 770,000 Cambodians from 4m hectares of land. Prosecuting these actions as crimes against humanity would be a positive first step towards holding individuals like Bolsonaro accountable.

But given the global consequences of the Amazon fires, could environmental destruction of this nature be legally considered a crime against all humanity? Defining it as such would be unprecedented. The same charge could apply to many politicians and business people. It’s been argued that oil and gas executives who’ve funded disinformation about climate change for decades should be chief among them.

Charging individuals for environmental crimes against humanity could be an effective deterrent. But whether the law will develop in time to prosecute people like Bolsonaro is, as yet, uncertain. Until the International Criminal Court prosecutes individuals for crimes against humanity based on their environmental damage, holding individuals criminally accountable for climate change remains unlikely.


This article is part of The Covering Climate Now series
This is a concerted effort among news organisations to put the climate crisis at the forefront of our coverage. This article is published under a Creative Commons license and can be reproduced for free. The Conversation also runs Imagine, a newsletter in which academics explore how the world can rise to the challenge of climate change.


The Conversation

Tara Smith does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Cilia: cell's long-overlooked antenna that can drive cancer – or stop it in its tracks

Author: Angharad Mostyn Wilkie, PhD Researcher in Oncology and Cancer Biology, Bangor University

Motile cilia are antenna-like projections on our body's cells. Author provided

You might know that our lungs are lined with hair-like projections called motile cilia. These are tiny microtubule structures that appear on the surface of some cells or tissues. They can be found lining your nose and respiratory tract too, and along the fallopian tubes and vas deferens in the female and male reproductive tracts. They move from side to side to sweep away any micro-organisms, fluids, and dead cells in the respiratory system, and to help transport the sperm and egg in the reproductive system.

Odds are, however, that you haven’t heard about motile cilia’s arguably more important cousin, primary cilia.

Motile cilia stand out on the right of this image of stained respiratory epithelium cells. Jose Luis Calvo/Shutterstock

Primary cilia are on virtually all cells in the body but for a long time they were considered to be a non-functional vestigial part of the cell. To add to their mystery, they aren’t present all the time. They project from the centrosome – the part of the cell that pulls it apart during division – and so only appear at certain stages of the cell cycle.

The first sign that these little structures were important came with the realisation that disruption to either their formation or function could result in genetic conditions known as ciliopathies. There are around 20 different ciliopathies, and they affect about one in every 1,000 people. These are often disabling and life-threatening conditions, affecting multiple organ systems. They can cause blindness, deafness, chronic respiratory infections, kidney disease, heart disease, infertility, obesity, diabetes and more. Symptoms and severity vary widely, making it hard to classify and diagnose these disorders.

So how can malfunction of a little organelle which was originally thought to be useless result in such a wide variety of devastating symptoms? Well, it is now known that not only do cilia look like little antennas, they act like them too. Cilia are packed full of proteins that detect messenger signals from other cells or the surrounding environment. These signals are then transmitted into the cell’s nucleus to activate a response – responses that are important for the regulation of several essential signalling pathways.

When this was realised, researchers began to ask whether changes in the structure or function of cilia; changes in protein levels associated with cilia; or movement of these proteins to a different part of the cell could occur due to – or potentially drive – other conditions. Given that scientists already knew then that many of the pathways regulated by cilia could drive cancer progression, looking at the relationship between cilia and cancer was a logical step.

Cilia, signals and cancer

Researchers discovered that in many cancers – including renal cell, ovarian, prostate, breast and pancreatic – there was a distinct lack of primary cilia in the cancerous cells compared to the healthy surrounding cells. It could be that the loss of cilia was just a response to the cancer, disrupting normal cell regulation – but what if it was actually driving the cancer?

Melanomas are one of the most aggressive types of tumours in humans. Some cancerous melanoma cells express higher levels of a protein called EZH2 than healthy cells. EZH2 suppresses cilia genes, so malignant cells have fewer cilia. This loss of cilia activates some of the carcinogenic signalling pathways, resulting in aggressive metastatic melanoma.

However, loss of cilia does not have the same effect in all cancers. In one type of pancreatic cancer, the presence – not absence – of cilia correlates with increased metastasis and decreased patient survival.

Even within the same cancer the picture is unclear. Medulloblastomas are the most common childhood brain tumour. Their development can be driven by one of the signalling pathways regulated by the cilia, the hedgehog signalling pathway. This pathway is active during embryo development but dormant after. However, in many cancers (including medulloblastomas) hedgehog signalling is reactivated, and it can drive cancer growth. But studies into the effects of cilia in medulloblastomas have found that cilia can both drive and protect against this cancer, depending on the way the hedgehog pathway is initially disrupted.

As such strong links have been found between cilia and cancer, researchers have also been looking into whether treatment which targets this structure could be used for cancer therapies. One of the problems faced when treating cancers is the development of resistance to anti-cancer drugs. Many of these drugs’ targets are part of the signalling pathways regulated by cilia, but scientists have found that blocking the growth of cilia in drug-resistant cancer cell lines could restore sensitivity to a treatment.

What was once thought to be just a leftover part of the cell from evolution has proven to be integral to our understanding and treatment of cancer. The hope is that further research into cilia will help untangle the complex relationship between them and cancer, and provide both new insights into some of the drivers of cancer and new targets for cancer treatment.

The Conversation

Angharad Mostyn Wilkie receives funding from the North West Cancer Research Institute.

How to become a great impostor

Author: Tim Holmes, Lecturer in Criminology & Criminal Justice, Bangor University

Ferdinand Waldo Demara

Unlike other icons who have appeared on the front of Life magazine, Ferdinand Waldo Demara was not famed as an astronaut, actor, hero or politician. In fact, his 23-year career was rather varied. He was, among other things, a doctor, professor, prison warden and monk. Demara was not some kind of genius either – he actually left school without any qualifications. Rather, he was “The Great Impostor”, a charming rogue who tricked his way to notoriety.

My research speciality is crimes by deception and Demara is a man whom I find particularly interesting. For, unlike other notorious con artists, impostors and fraudsters, he did not steal and defraud for the money alone. Demara’s goal was to attain prestige and status. As his biographer Robert Crichton noted in 1959, “Since his aim was to do good, anything he did to do it was justified. With Demara the end always justifies the means.”

Though we know what he did, and his motivations, there is still one big question that has been left unanswered – why did people believe him? While we don’t have accounts from everyone who encountered Demara, my investigation into his techniques has uncovered some of the secrets of how he managed to keep his high-level cons going for so long.


Read more: Why do we fall for scams?


Upon leaving education in 1935, Demara lacked the skills to succeed in the organisations he was drawn to. He wanted the status that came with being a priest, an academic or a military officer, but didn’t have the patience to achieve the necessary qualifications. And so his life of deception started. At just 16 years old, with a desire to become a member of a silent order of Trappist monks, Demara ran away from his home in Lawrence, Massachusetts, lying about his age to gain entry.

When he was found by his parents he was allowed to stay, as they believed he would eventually give up. Demara remained with the monks long enough to gain his hood and habit, but was ultimately forced out of the monastery at the age of 18 as his fellow monks felt he lacked the right temperament.

Demara then attempted to join other orders, including the Brothers of Charity children’s home in West Newbury, Massachusetts, but again failed to follow the rules. In response, he stole funds and a car from the home, and joined the army in 1941, at the age of 19. But, as it turned out, the army was not for him either. He disliked military life so much that he stole a friend’s identity and fled, eventually deciding to join the navy instead.

From monk to medicine

While in the navy, Demara was accepted for medical training. He passed the basic course but due to his lack of education was not allowed to advance. So, in order to get into the medical school, Demara created his first set of fake documents indicating he already had the needed college qualifications. He was so pleased with his creations that he decided to skip applying to medical school and tried to gain a commission as an officer instead. When his falsified papers were discovered, Demara faked his own death and went on the run again.


Read more: The men who impersonate military personnel for stolen glory


In 1942, Demara took the identity of Dr Robert Linton French, a former navy officer and psychologist. Demara found French’s details in an old college prospectus which had profiled French when he worked there. Though he worked as a college teacher using French’s name until the end of the war in 1945, Demara was eventually caught and the authorities decided to prosecute him for desertion.

Due to good behaviour, however, he served only 18 months of the six-year sentence handed to him, but upon his release he went back to his old ways. This time Demara created a new identity, Cecil Hamann, and enrolled at Northeastern University. Tiring of the effort and time needed to complete his law degree, Demara awarded himself a PhD and, under the persona of “Dr” Cecil Hamann, took up another teaching post at a Christian college, The Brother of Instruction, in Maine in the summer of 1950.

It was here that Demara met and befriended Canadian doctor Joseph Cyr, who was moving to the US to set up a medical practice. Needing help with the immigration paperwork, Cyr gave all his identifying documents to Demara, who offered to fill in the application for him. After the two men parted ways, Demara took copies of Cyr’s paperwork and moved up to Canada. Pretending to be Dr Cyr, Demara approached the Canadian Navy with an ultimatum: make me an officer or I will join the army. Not wanting to lose a trained doctor, Demara’s application was fast tracked.

As a commissioned officer during the Korean war, Demara first served at Stadacona naval base, where he convinced other doctors to contribute to a medical booklet he claimed to be producing for lumberjacks living in remote parts of Canada. With this booklet and the knowledge gained from his time in the US Navy, Demara was able to pass successfully as Dr Cyr.

A military marvel

Demara worked aboard HMCS Cayuga as ship’s doctor (pictured in 1954).

In 1951, Demara was transferred to be ship’s doctor on the destroyer HMCS Cayuga. Stationed off the coast of Korea, Demara relied on his sick berth attendant, petty officer Bob Horchin, to handle all minor injuries and complaints. Horchin was pleased to have a superior officer who did not interfere in his work and who empowered him to take on more responsibilities.

Though he very successfully passed as a doctor aboard the Cayuga, Demara’s time there came to a dramatic end after three Korean refugees were brought on board in need of medical attention. Relying on textbooks and Horchin, Demara successfully treated all three – even completing the amputation of one man’s leg. He was recommended for a commendation for his actions, and the story was reported in the press, where the real Dr Cyr’s mother saw a picture of Demara impersonating her son. Wanting to avoid further public scrutiny and scandal, the Canadian government elected to simply deport Demara back to the US in November 1951.

After returning to America, there were news reports on his actions, and Demara sold his story to Life magazine in 1952. In his biography, Demara notes that he spent the time after his return to the US using his own name and working in different short-term jobs. While he enjoyed the prestige he had gained in his impostor roles, he started to dislike life as Demara, “the great impostor”, gaining weight and developing a drinking problem.

In 1955, Demara somehow acquired the credentials of a Ben W. Jones and disappeared again. As Jones, Demara began working as a guard at Huntsville Prison in Texas, and was eventually put in charge of the maximum security wing that housed the most dangerous prisoners. In 1956, an educational programme that provided prisoners with magazines to read led to Demara’s discovery once more. One of the prisoners found the Life magazine article and showed the cover picture of Demara to prison officials. Despite categorically denying to the prison warden that he was Demara, and pointing to positive feedback he had received from prison officials and inmates about his performance there, Demara chose to run. In 1957, he was caught in North Haven, Maine and served a six-month prison sentence for his actions.

After his release he made several television appearances including on the game show You Bet Your Life, and made a cameo in horror film The Hypnotic Eye. From this point until his death in 1981, Demara would struggle to escape his past notoriety. He eventually returned to the church, getting ordained using his own name and worked as a counsellor at a hospital in California.

How Demara did it

According to biographer Crichton, Demara had an impressive memory, and through his impersonations accumulated a wealth of knowledge on different topics. This, coupled with charisma and good instincts about human nature, helped him trick all those around him. Studies of professional criminals often observe that con artists are skilled actors and that a con game is essentially an elaborate performance where only the victim is unaware of what is really going on.

Demara also capitalised on workplace habits and social conventions. He is a prime example of why recruiters shouldn’t rely on paper qualifications over demonstrations of skill. And his habit of allowing subordinates to do things he should be doing meant Demara’s ability went untested, while at the same time engendering appreciation from junior staff.

He observed of his time in academia that there was always opportunity to gain authority and power in an organisation. There were ways to set himself as an authority figure without challenging or threatening others by “expanding into the power vacuum”. He would set up his own committees, for example, rather than joining established groups of academics. Demara says in the biography that starting fresh committees and initiatives often gave him the cover he needed to avoid conflict and scrutiny.

…there’s no competition, no past standards to measure you by. How can anyone tell you aren’t running a top outfit? And then there’s no past laws or rules or precedents to hold you down or limit you. Make your own rules and interpretations. Nothing like it. Remember it, expand into the power vacuum.

Working from a position of authority as the head of his own committees further entrenched Demara in professions he was not qualified for. It can be argued that Demara’s most impressive attempt at expansion into the “power vacuum” occurred when teaching as Dr Hamann.

Hamann was considered a prestigious appointee for a small Christian college. Claiming to be a cancer researcher, Demara proposed converting the college into a state-approved university where he would be chancellor. The plans proceeded but Demara was not given a prominent role in the new institution. It was then that Demara decided to take Cyr’s identity and leave for Canada. If Demara had succeeded in becoming chancellor of the new LaMennais College (which would go on to become Walsh University) it is conceivable that he would have been able to avoid scrutiny or questioning thanks to his position of authority.

Inherently trustworthy

Other notable serial impostors and fakes have relied on techniques similar to Demara’s. Frank Abagnale also recognised the reliance people in large organisations placed on paperwork and looking the part. This insight allowed him at 16 to pass as a 25-year-old airline pilot for Pan Am Airways as portrayed in the film, Catch Me If You Can.

More recently, Gene Morrison was jailed after it was discovered that he had spent 26 years running a fake forensic science business in the UK. After buying a PhD online, Morrison set up Criminal and Forensic Investigations Bureau (CFIB) and gave expert evidence in over 700 criminal and civil cases from 1977 to 2005. Just like Demara used others to do his work, Morrison subcontracted other forensic experts and then presented the findings in court as his own.


Read more: How to get away with fraud: the successful techniques of scamming


Marketing and psychology expert Robert Cialdini’s work on the techniques of persuasion in business might offer insight into how people like Demara can succeed, and why it is that others believe them. Cialdini found that there are six universal principles of influence that are used to persuade business professionals: reciprocity, consistency, social proof, getting people to like you, authority and scarcity.

Demara used all of these skills at various points in his impersonations. He would give power to subordinates to hide his lack of knowledge and enable his impersonations (reciprocity). By using other people’s credentials, he was able to manipulate organisations into accepting him, using their own regulations against them (consistency and social proof). Demara’s success in his impersonations points to how likeable he was and how much of an authority he appeared to be. By impersonating academics and professionals, Demara focused on career paths where at the time there was high demand and a degree of scarcity, too.

Laid bare, it is easy to see how Demara tricked his unsuspecting colleagues into believing his lies through manipulation. Yet it is also interesting to consider how often we all rely on gut instinct and the appearance of ability rather than witnessed proof. Our gut instinct is built on five questions we ask ourselves when presented with information: does a fact come from a credible source? Do others believe it? Is there plenty of evidence to support it? Is it compatible with what I believe? Does it tell a good story?

Researchers of social trust and solidarity argue that people also have a fundamental need to trust strangers to tell the truth in order for society to function. As sociologist Niklas Luhmann said, “A complete absence of trust would prevent (one) even getting up in the morning.” Trust in people is in a sense a default setting, so to mistrust requires a loss of confidence in someone which must be sparked by some indicator of a lie.

It was only after the prisoner showed the Life article to the Huntsville Prison warden that they began to ask questions. Until this point, Demara had offered everything his colleagues would need to believe he was a capable member of staff. People accepted Demara’s claims because it felt right to believe him. He had built a rapport and influenced people’s views of who he was and what he could do.


Read more: Five psychological reasons why people fall for scams – and how to avoid them


Another factor to consider when asking why people would believe Demara was the rising dependency on paper proofs of identity at that time. Following World War II, improvements in and a shift towards reliance on paper documentation occurred as social and economic mobility changed in America. Underlying Demara’s impersonations and the actions of many modern con artists is the reliance we have long placed first in paper proofs of identity such as birth certificates and ID cards and, more recently, in digital forms of identification.

As his preoccupation was more with prestige than money, it can be argued that Demara had a harder time than other impostors who were only driven by profit. Demara stood out as a surgeon and a prison guard; he was a good fake and influencer, but the added attention that came from his attempts at multiple important professions, and the media coverage that followed, led to his downfall. Abagnale similarly had issues with the attention that came with pretending to be an airline pilot, lawyer and surgeon. In contrast, Morrison stuck to his one impersonation for years, avoiding detection and making money until the quality of his work was investigated.

The trick, it appears, to being a good impostor is essentially to be friendly, have access to a history of being trusted by others, have the right paperwork, build others’ confidence in you and understand the social environment you are entering. Although, when Demara was asked to explain why he committed his crimes he simply said, “Rascality, pure rascality”.

The Conversation

Tim Holmes does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Tissue donations are important to cancer research – so what happens to your cells after they are taken?

Author: Helena Robinson, Postdoctoral Research Officer in Cancer Biology, Bangor University

Vladimir Borovic/Shutterstock

If you’ve ever had a tumour removed or biopsy taken, you may have contributed to life-saving research. People are often asked to give consent for any tissue that is not needed for diagnosis to be used in other scientific work. Though you probably won’t be told exactly what research your cells will be used for, tissue samples like these are vital for helping us understand and improve diagnosis and treatment of a whole range of illnesses and diseases. But once they’re removed, how are these tissue samples used exactly? How do they go from patient to project?

When tissue is removed from a person’s body, most often it is immediately put into a chemical preservative. It is then taken to a lab and embedded in a wax block. Protecting the tissue like this retains its structure and stops it from decomposing so it can be stored at room temperature for long periods of time.

This process also means that biochemical molecules like protein and DNA are preserved, which can provide vital clues about what processes are occurring in the tissue at that stage in the person’s illness. If we were looking at, for example, whether molecule A occurs in one particular tumour type but not in others (which would make it helpful for diagnosis) we would want a large number of each type to test. But there may not be enough patients of each type currently in treatment, so it is useful to have a library of samples to draw from.


Read more: More people can donate tissue than organs – so why do we know so little about it?


Or we might want to test if patients with tumours containing molecule B are less likely to survive for five years than those without this molecule. This sort of question requires samples with a follow-up time of at least five years. But the answer may help doctors decide whether they need to treat their current patients with B more aggressively or with a different kind of treatment.

To analyse tissues, lab scientists cut very thin slices from the wax blocks and view them under a microscope. The slides are stained with dyes that show the overall tissue structure, and may also be stained with antibodies to show the presence of specific molecules.

Human tissue embedded in wax and a stained slide ready for examination. Komsan Loonprom/Shutterstock

Studies often need large numbers of samples from different patients to adequately answer a research question, which can take some time to collect. Take my work for example. My team is interested in finding out more about a protein called brachyury, and how it relates to bowel cancer. But to do this we need to compare lots of samples, so we are using tissue from 823 bowel cancer patients and 50 non-cancer patients in our research.

When not in use, the tissue blocks are – with patient consent – placed in a store that researchers can access. The UK has several of these stores, known as biobanks or biorepositories, holding all kinds of tissues. Some cancer biobanks also store different kinds of tumours and blood samples.


Read more: How biobanks can help improve the integrity of scientific research


While there are no reliable figures available on how many samples are held in all biobanks, or how often they are used, we do know these numbers are significant. The Children’s Cancer and Leukaemia Biobank alone has banked 19,000 samples since 1998. The Northern Ireland Biobank reports that 2,062 patients consented for their tissues to be used in research between 2017 and 2018, and 4,086 samples were accessed by researchers in that period.

Identifying biomarkers

Projects that use biobanks are often trying to identify biomarkers. These are any biological characteristics that give useful information about a disease or condition. Our team is looking at whether the protein brachyury is a useful biomarker to improve bowel cancer diagnosis.

Brachyury is essential for early embryonic development, but it is switched off in most cells by the time you are born. However, several studies imply that finding brachyury in a tumour indicates a poorer outcome for the patient. But to work out if this link is correct, we need to look at biobank samples. Doing this will help us work out more accurately which patients are at higher risk of cancer recurrence or metastasis. This is important when doctors are deciding on the best course of treatment.

In our research, we also need clinical details, such as what happened to the patient and all the information available at the time of diagnosis. Then we can assess whether testing for brachyury would have added useful information to the diagnosis. Information that accompanies each block is anonymised, which means the researcher analysing the data won’t know the patient’s name or be able to identify them from the sample. But they can see any relevant clinical details such as tumour stage, age, sex and survival.

Biobank samples have already improved treatment of childhood acute lymphocytic leukaemia. Samples from the Cancer and Leukaemia Biobank were used to demonstrate that children with an abnormality in chromosome 21 had poorer outcomes than those without it. This led to treatment being modified for these children so they are no longer at a disadvantage.

People are often applauded for raising money for research by undertaking gruelling or inventive challenges. Patients who decide their tissue can be used in research should be similarly applauded. Without their unique and valuable gift, we wouldn’t be able to further our understanding, diagnosis and treatment of all kinds of illnesses and diseases.

The Conversation

Helena Robinson receives funding from Cancer Research Wales.

Being left-handed doesn't mean you are right-brained — so what does it mean?

Author: Emma Karlsson, Postdoctoral researcher in Cognitive Neuroscience, Bangor University

Wachiwit/Shutterstock

There have been plenty of claims about what being left-handed means, and whether it changes the type of person someone is – but the truth is something of an enigma. Myths about handedness appear year after year, but researchers have yet to uncover all of what it means to be left-handed.

So why are people left-handed? The truth is we don’t fully know that either. What we do know is that only around 10% of people across the world are left-handed – but this isn’t split equally between the sexes. About 12% of men are left-handed but only about 8% of women. Some people get very excited about the 90:10 split and wonder why we aren’t all right-handed.

But the interesting question is, why isn’t our handedness based on chance? Why isn’t it a 50:50 split? It is not due to handwriting direction: if it were, left-handedness would be dominant in countries whose languages are written right to left, which is not the case. Even the genetics are odd – only about 25% of children who have two left-handed parents will also be left-handed.


Read more: How children's brains develop to make them right or left handed


Being left-handed has been linked with all sorts of bad things. Poor health and early death are often associated with it, for example – but neither is exactly true. The latter is explained by many people in older generations being forced to switch and use their right hands. This makes it look like there are fewer left-handers at older ages. The former, despite being an appealing headline, is just wrong.

Positive myths also abound. People say that left-handers are more creative, as most of them use their “right brain”. This is perhaps one of the more persistent myths about handedness and the brain. But no matter how appealing (and perhaps to the disappointment of those lefties still waiting to wake up one day with the talents of Leonardo da Vinci), the general idea that any of us use a “dominant brain side” that defines our personality and decision making is also wrong.

Brain lateralisation and handedness

It is true, however, that the brain’s right hemisphere controls the left side of the body, and the left hemisphere the right side – and that the hemispheres do actually have specialities. For example, language is usually processed a little bit more within the left hemisphere, and recognition of faces a little bit more within the right hemisphere. This idea that each hemisphere is specialised for some skills is known as brain lateralisation. However, the halves do not work in isolation, as a thick band of nerve fibres – called the corpus callosum – connects the two sides.

Interestingly, there are some known differences in these specialities between right-handers and left-handers. For example, it is often cited that around 95% of right-handers are “left hemisphere dominant”. This is not the same as the “left brain” claim above; it actually refers to the early finding that most right-handers depend more on the left hemisphere for speech and language. It was assumed that the opposite would be true for lefties. But this is not the case. In fact, 70% of left-handers also process language more in the left hemisphere. Why this number is lower, rather than reversed, is as yet unknown.


Read more: Why is life left-handed? The answer is in the stars


Researchers have found many other brain specialities, or “asymmetries” in addition to language. Many of these are specialised in the right hemisphere – in most right-handers at least – and include things such as face processing, spatial skills and perception of emotions. But these are understudied, perhaps because scientists have incorrectly assumed that they all depend on being in the hemisphere that isn’t dominant for language in each person.

In fact, this assumption, plus the recognition that a small number of left-handers have unusual right hemisphere brain dominance for language, means left-handers are either ignored – or worse, actively avoided – in many studies of the brain, because researchers assume that, as with language, all other asymmetries will be reduced.

How some of these functions are lateralised (specialised) in the brain can actually influence how we perceive things, and so can be studied using simple perception tests. For example, in my research group’s recent study, we presented a large number of right-handers and left-handers with pictures of faces constructed so that one half of the face shows one emotion, while the other half shows a different emotion.

Usually, people see the emotion shown on the left side of the face, and this is believed to reflect specialisation in the right hemisphere. This is linked to the fact that visual fields are processed in such a way that there is a bias to the left side of space. This bias is thought to represent right hemisphere processing, while a bias to the right side of space is thought to represent left hemisphere processing. We also presented different types of pictures and sounds, to examine several other specialisations.

Our findings suggest that some types of specialisations, including processing of faces, do seem to follow the interesting pattern seen for language (that is, more of the left-handers seemed to have a preference for the emotion shown on the right side of the face). But in another task that looked at biases in what we pay attention to, we found no differences in the brain-processing patterns for right-handers and left-handers. This result suggests that while there are relationships between handedness and some of the brain’s specialisations, there aren’t for others.

Left-handers are absolutely central to new experiments like this, but not just because they can help us understand what makes this minority different. Learning what makes left-handers different could also help us finally solve many of the long-standing neuropsychological mysteries of the brain.

The Conversation

Emma Karlsson does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Brexit uncertainty boosts support for Welsh independence from the UK

Author: Stephen Clear, Lecturer in Constitutional and Administrative Law, and Public Procurement, Bangor University

vladm/Shutterstock

In a move that surprised many, in June 2016, 52.5% of people in Wales voted to leave the European Union. But concerns over Brexit negotiations, and “chaos in UK politics” have mounted since then, and recent polls suggest that support for remain has risen considerably in Wales.

Now, the Welsh government has announced that it will campaign for the UK to remain in the EU while public attention is turning to the question of whether the Welsh should become independent from a post-Brexit UK.

Welsh independence has long been supported by Plaid Cymru, but it now appears to be becoming more mainstream, with more Welsh citizens considering the possibility of leaving the union. Marches are being held across the country and recent YouGov polls indicate that support for independence, or at least “indy-curiosity”, has grown in Wales in the past two years.

If it were to become independent, Wales wouldn’t have to start from scratch. It has had a devolved government and parliament (the National Assembly or “Senedd”) for 20 years.

At present these bodies do not have control over all matters relating to Wales. They don’t have control over defence and national security, foreign policy, and immigration, for example. But the Assembly does have responsibility for policy and passing laws for the benefit of the people of Wales, and has been doing so for the past 20 years.

Wales, alone

Strictly speaking, constitutional law dictates that Wales cannot run its own referendum nor declare independence unilaterally. The new Schedule 7A to the Government of Wales Act 2006 states that “the union of the nations of Wales and England” is a reserved matter, not for the Assembly. But precedent suggests that an independence referendum is not an impossibility.

If there is momentum for Wales to decide its own future, this would put pressure on the UK government to facilitate a legal solution for a referendum. This opportunity was afforded to the former Scottish first minister, Alex Salmond, by former prime minister David Cameron, via the Scottish Independence Referendum Act 2013.

While not all are in favour of Welsh independence, the political narrative is changing. Welsh first minister Mark Drakeford has stated that “support for the union is not unconditional” and that “independence has risen up the public agenda”.

Concerned by relationships between the UK’s countries, former prime minister Theresa May referred to the electoral success of nationalist parties such as Plaid Cymru as evidence that the union is “more imperilled now than it has ever been”. She also sanctioned the Dunlop review, with a remit to address “how we can secure our union for the future”.

Her comments echo warnings from former Labour prime minister Gordon Brown, who recently remarked that UK unity is “more at risk than at any time in 300 years – and more in danger than when we had to fight for it in 2014 during a bitter Scottish referendum”.

The Senedd

So if Wales overcame the legal challenges and gained national political support, would the devolved government and parliament be able to manage the country? As noted above, the National Assembly has been making laws for Wales since 1999. Frequently cited achievements include the abolition of prescription charges and financial support for Welsh university students (via a mix of tuition loans and living cost grants). In addition, the Social Services and Well-being Act 2014 changed how people’s needs are assessed and services delivered.

Wales was also among the first to introduce free bus travel for OAPs, charges for plastic bags, and the indoor smoking ban – with further bans in school playgrounds and outside hospitals in 2019.

More recently its Future Generations Act was celebrated for compelling public bodies to think about the long-term impact of their decisions on communities and the environment – albeit with some criticisms from legal experts for being “toothless” in terms of enforceability.


Read more: Wales is leading the world with its new public health law


Alongside these headline-grabbing results, the National Assembly itself has been an achievement in its own right. While its initial establishment was something of a battle – in 1979 Wales voted 4:1 against creating an Assembly and in 1997 just 50.3% voted for it – the Wales Act 2017 actually extended the scope of the Assembly’s powers.

This changed its constitutional structure from a conferred powers model (which limited it to specifically listed areas) to a reserved powers model, which empowers the Assembly to produce a multitude of Welsh laws on all matters that are not reserved to the UK parliament.

But even with its strong history, it must be noted that not everyone is in favour of the Assembly. A small number of UKIP assembly members are currently arguing to reverse devolution while others criticise Wales’ record – particularly in the areas of schooling and the NHS.

Independence challenges

There are several other dimensions to the question of whether Wales could become an independent state. Socially and economically, opponents argue that Wales is too small and too poor to stand alone on the world stage. Yes Cymru, a non-partisan pro-independence campaign group, has sought to debunk these myths, pointing out that there are 18 countries in Europe smaller than Wales, and that the assessment of Wales’ fiscal deficit is flawed in excluding significant assets such as water and electricity.

The constitutional shift in power that will follow Brexit will certainly give rise to the prospect of a divided UK. But the outcome of Brexit, and its impact on Welsh independence, hinges on the new prime minister’s actions.

While Boris Johnson has reiterated that the “union comes first”, if there is significant public support for independence in Wales, it will be hard for Johnson to ignore the people’s right to self-determination and arbitrarily enforce the union at all costs. Should the independence movement gain further wide support in the coming months, compromises will have to be reached, with at least more incremental devolution being likely in the medium term.

Ultimately, while it would be a monumental change, the question of whether Wales becomes independent hinges on what the people want for their country. If successive UK governments take the union for granted, without more meaningful consideration of the cumulative effects on the people of Wales, calls for independence may become louder.

The Conversation

Stephen Clear does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

How the brain prepares for movement and actions

Author: Myrto Mantziara, PhD Researcher, Bangor University

To perform a sequence of actions, our brains need to prepare and queue them in the correct order. AYAakovlev/Shutterstock

Our behaviour is largely tied to how well we control, organise and carry out movements in the correct order. Take writing, for example. If we didn’t make one stroke after another on a page, we would not be able to write a word.

However, motor skills (single or sequences of actions which through practice become effortless) can become very difficult to learn and retrieve when neurological conditions disrupt the planning and control of sequential movements. When a person has a disorder – such as dyspraxia or stuttering – certain skills cannot be performed in a smooth and coordinated way.

Traditionally scientists have believed that in a sequence of actions, each is tightly associated to the other in the brain, and one triggers the next. But if this is correct, then how can we explain errors in sequencing? Why do we mistype “form” instead of “from”, for example?

Some researchers argue that before we begin a sequence of actions, the brain recalls and plans all items at the same time. It prepares a map where each item has an activation stamp relative to its order in the sequence. These compete with each other until the item with the strongest activation wins. It “comes out” for execution as being more “readied” – so we type “f” in the word “from” first, for example – and then it is erased from the map. This process, called competitive queuing, is repeated for the rest of the actions until we execute all the items of the sequence in the correct order.
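To make the mechanism concrete, here is a minimal sketch of competitive queuing in Python. It is an illustration only, not the model used in the research described in this article: the function name, the activation gradient and the noise term are assumptions chosen for clarity.

import random

def competitive_queue(sequence, gradient=0.8, noise=0.0):
    """Produce `sequence` by repeatedly letting the most active item win.

    Each planned item gets an activation that decreases with its intended
    position (an activation gradient). On every step the strongest item is
    executed and then erased from the plan. A little noise lets neighbouring
    activations overlap, producing occasional order errors such as typing
    "form" instead of "from".
    """
    # Plan: all items are activated at the same time, earlier positions stronger.
    plan = [(gradient ** position + random.gauss(0, noise), position, item)
            for position, item in enumerate(sequence)]

    executed = []
    while plan:
        winner = max(plan)            # the item with the strongest activation wins
        executed.append(winner[2])    # it "comes out" for execution...
        plan.remove(winner)           # ...and is erased from the plan
    return executed

print("".join(competitive_queue("from")))              # noise-free: "from"
print("".join(competitive_queue("from", noise=0.2)))   # noisy: may produce "form"

Run repeatedly with noise, the sketch occasionally swaps adjacent letters, mirroring the sequencing errors the model is meant to explain.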

This idea that the brain uses simultaneous activations of actions before any movement takes place was proven in a 2002 study. As monkeys were drawing shapes (making three strokes for a triangle, for example), researchers found that before the start of the movement, there existed simultaneous neural patterns for each stroke. How strong the activation was could predict the position of that particular action in execution.

Planning and queuing

What has not been known until now is whether this activation system is used in the human brain. Nor have we known how actions are queued while we prepare them based on their position in the sequence. However, recent research from neuroscientists at Bangor University and University College London has shown that there is simultaneous planning and competitive queuing in the human brain too.

To carry out sequences of actions, our brains must queue each one before we do it. Liderina/Shutterstock

For this study, the researchers were interested to see how the brain prepares for executing well-learned action sequences like typing or playing the piano. Participants were trained for two days to pair abstract shapes with five-finger sequences in a computer-based task. They learned the sequences by watching a small dot move from finger to finger on a hand image displayed on the screen, and pressing the corresponding finger on a response device. These sequences were combinations of two finger orders with two different rhythms.

On the third day, the participants had to produce – based on the abstract shape presented for a while on the screen – the correct sequence entirely from memory while their brain activity was recorded.

Looking at the brain signals, the team was able to distinguish participants’ neural patterns as they planned and executed the movements. The researchers found that, milliseconds before the start of the movement, all the finger presses were queued and “stacked” in an ordered manner. The activation pattern of the finger presses reflected their position in the sequence that was performed immediately after. This competitive queuing pattern showed that the brain prepared the sequence by organising the individual actions in the correct order.

The researchers also looked at whether this preparatory queuing activity was shared across different sequences which had different rhythms or different finger orders, and found that it was. The competitive queuing mechanism acted as a template to guide each action into a position, and provided the base for the accurate production of new sequences. In this way the brain stays flexible and efficient enough to be ready to produce unknown combinations of sequences by organising them using this preparatory template.

Interestingly, the quality of the preparatory pattern predicted how accurate a participant was in producing a sequence. In other words, the more well-separated the activities or actions were before the execution of the sequence, the more likely the participant was to execute the sequence without mistakes. The presence of errors, on the other hand, meant that the queuing of the patterns in preparation for the action was less well-defined, and tended to be mingled.

By knowing how our actions are pre-planned in the brain, researchers will be able to find out the parameters of executing smooth and accurate movement sequences. This could lead to a better understanding of the difficulties found in disorders of sequence learning and control, such as stuttering and dyspraxia. It could also help the development of new rehabilitation or treatment techniques which optimise movement planning in order for patients to achieve a more skilled control of action sequences.

The Conversation

Myrto Mantziara is a PhD researcher and receives funding from the School of Psychology, Bangor University.

Can we speak of a European identity?

Author: François Dubet, Emeritus Professor, Université de Bordeaux; Nathalie Heinich, Sociologist, Centre national de la recherche scientifique (CNRS); Nikolaos Papadogiannis, Lecturer in Modern and Contemporary History, Bangor University

François Dubet, Université de Bordeaux: “Everyone perceives Europe from their own point of view”

The question of identity is always caught in the same paradox. On the one hand, identity seems insubstantial: a construction cobbled together from odds and ends, a narrative, an unstable set of imaginings and beliefs that fall apart as soon as we try to grasp them. Yet on the other hand, these uncertain identities seem extremely solid, embedded in our most intimate sense of self. Often, imagined collective identities need only come undone for individuals to feel threatened and wounded to their core.

After all, the hundreds of thousands of Her Majesty’s subjects who marched against Brexit on March 23 felt European because this tiny part of themselves risks being torn away, even though they could not define it precisely.

A European identity in motion

European migrations, 2013. FNSP, Sciences Po, Atelier de cartographie, CC BY-NC-ND

I suppose that historians and scholars of civilisations could easily define something like a European identity, rooted in the shared histories of the societies and states that took shape in the Latin, Christian and Germanic worlds, the repeated wars, the monarchical alliances, the revolutions, trade, the circulation of elites and migration within Europe.

The histories of national states are simply incomprehensible outside the history of Europe. That said, we would struggle to define this fragmented, divided, shifting identity. Everyone perceives Europe from their own point of view, and indeed, when European institutions venture to define a European identity, they struggle to do so.

Could European identity be nothing more than an illusion, an accumulation of national identities – the only ones that are truly solid, because they are underpinned by institutions?

Living Europe in order to love it

Surveys, which should be handled with caution, show that people rank their feelings of belonging. One can feel Breton and French, and European, and a believer, and a woman or a man, and of this or that origin without, in most cases, these multiple identifications being experienced as dilemmas.

Even those who resent political Europe for being too liberal and too bureaucratic hardly seem eager to return to mass mobilisation to defend their country against their European neighbours. And this despite the rise of far-right parties almost everywhere in Europe, which points to an attachment to national identity.


Read more: FPÖ, AfD, Vox: far-right parties on the offensive


The common currency has greatly simplified exchange between Europeans, but it has not erased disparities. Pixabay, CC BY

Beyond any explicit political consciousness, a form of European identity has thus taken shape, lived through the movement of people, through leisure and through ways of life.

Many of those who rail against Europe probably can no longer imagine having to apply for visas and exchange francs for pesetas to spend two weeks in Spain.

Yet demagogues accuse Europe of being the cause of their misfortunes, an attack that resonates ever more loudly with disadvantaged socio-economic groups.

It cannot be ruled out that criticism of Europe stems more from disappointed love than from hostility. European identity exists far more than we think. Europe would only have to implode for us to miss it, and not merely in the name of our own self-interest.

Nathalie Heinich, CNRS/EHESS: “Should we speak of European identity?”

Speaking of “identity” in relation to an entity loaded with political connotations is never neutral, as the notion of “French identity” shows. Either we assert the existence of the entity (“European identity”) while implicitly setting it apart from a larger collective (America or China, for example), in which case we are claiming support for the small (the “dominated”) against the big (the “dominant”); or we implicitly set it apart from a smaller collective (the nation, France), in which case we are claiming the superiority of the big over the small. Everything therefore depends on the context and on what is expected.

An expression with two meanings

But if we want to avoid a normative answer and stick to a neutral description, free of value judgements, we must distinguish between two meanings of the term “European identity”. The first refers to the nature of the abstract entity called “Europe”: its borders, its institutions, its history, its culture or cultures, and so on. The exercise is a classic one, and the historical and political science literature on the subject is abundant, even if the word “identity” is not necessarily used.

“Is there (still) such a thing as European identity?”, Roger Casale, TEDx Oxford.

The second meaning refers to the representations that real individuals have of their “identity as Europeans”, that is, the way in which, and the degree to which, they attach themselves to this collective at a more general level than the usual national identity. The diagnosis then calls for a sociological inquiry into the three “moments” of identity – self-perception, presentation, designation – through which an individual feels, presents themselves and is designated as “European”. Such an inquiry can take a quantitative form, using a representative survey built around these three experiences. The question “Can we speak of a European identity?” can therefore only be answered once such an inquiry has been carried out.

A question for citizens and their representatives

But the political stakes of the question escape no one, which is why we must bear in mind the function that introducing the word “identity” serves in thinking about Europe: it is about transforming an economic and social project into a political programme that is acceptable – even desirable – to as many people as possible.

That is why the problem is not so much whether we can, but whether we should make Europe a question of identity rather than merely an economic and social one. Hence: “Should we speak of European identity?”

The answer to that question belongs to citizens and their representatives – not to researchers.

Nikolaos Papadogiannis, Bangor University, UK: “European identity: a plurality of options”

The outcome of the UK’s 2016 referendum on EU membership sent shockwaves across Europe. Among other things, it prompted debates about whether a “European culture” or a “European identity” really exists, or whether national identities still dominate.

It would be wrong, in my view, to overlook the identification of various people with “Europe”. This identification is the outcome of a long process, particularly in the second half of the 20th century, involving both the policies of EEC/EU institutions and grassroots initiatives.

Cross-border youth mobility since 1945 is a key example of the latter: it was often developed by groups that were not formally linked to the EEC/EU. They nevertheless helped foster an attachment to “Europe” in several countries of the continent.

As the political scientist Ronald Inglehart showed in the 1960s, the younger people were and the more they travelled, the more likely they were to support an ever-closer political union in Europe. More recently, the Erasmus exchange programmes have also helped develop forms of identification with Europe.

Feeling “European”

At the same time, feeling “European” and subscribing to a national identity are far from incompatible. In the 1980s, many West Germans were passionate about a reunified Germany being part of a politically united Europe.

A section of the Berlin Wall. MariaTortajada/Pixabay, CC BY

Attachment to “Europe” has also been a key component of regional nationalism in several European countries over the past three decades, such as Scottish, Catalan and Welsh nationalism. A rallying cry for Scottish nationalists since the 1980s has been “independence in Europe”, and it remains so today. It is quite telling that the main slogan of the centre-left Scottish National Party (SNP), the most powerful nationalist party in Scotland, for the 2019 European Parliament elections is “Scotland’s future belongs in Europe”.

Varied national agendas brought together under the starry banner

What deserves more attention, however, is the significance attached to the notion of European identity. Diverse social and political groups have used it, from the far left to the far right.

The meaning they attach to this identity also varies. For the SNP, it is compatible with Scotland’s membership of the EU. The party combines the latter with an inclusive understanding of the Scottish nation, one that is open to people born elsewhere in the world who live in Scotland.

Speech by SNP leader and First Minister of Scotland Nicola Sturgeon at the royal opening of the Scottish Parliament on July 2, 2016.

In Germany, by contrast, the far-right AfD (Alternative für Deutschland, Alternative for Germany) identifies with “Europe” but criticises the EU, combining the former with Islamophobia. A clear example of this mix is a poster published by the party ahead of the 2019 elections, asking “Europeans” to vote for the AfD so that Europe does not become “Eurabia”.

Identification with Europe does exist, but it is a complex phenomenon, framed in several ways. It does not necessarily imply support for the EU. Likewise, European identities are not necessarily mutually exclusive with national identities. Finally, they can, though not always, rest on stereotypes against people regarded as “non-European”.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

Is there such a thing as a 'European identity'?

Author: Nikolaos Papadogiannis, Lecturer in Modern and Contemporary History, Bangor University

Is there such a thing as a European identity? Marco Verch/Flickr, CC BY-ND

The outcome of the UK’s 2016 referendum on EU membership has sent shockwaves across Europe. Among other impacts, it has prompted debates around whether a “European culture” or a “European identity” actually exist or whether national identities still dominate.

It would be wrong, in my opinion, to write off the identification of various people with “Europe”. This identification has been the outcome of a long process, particularly in the second half of the 20th century, involving both the policies of the European Economic Community (EEC) and EU institutions and grassroots initiatives. Cross-border youth mobility since 1945 is a key example of the latter: it was often developed by groups that were not formally linked to the EEC/EU. They still helped develop an attachment to “Europe” in several countries of the continent.

As political scientist Ronald Inglehart showed in the 1960s, the younger people were, and the more they travelled, the more likely they were to support an ever-closer political union in Europe. More recently, Erasmus exchange programmes have also helped develop forms of identification with Europe.

Feeling “European”

Simultaneously, feeling “European” and subscribing to a national identity have been far from mutually exclusive. Numerous West Germans in the 1980s were passionate about a reunified Germany being part of a politically united Europe.

Attachment to “Europe” has also been a key component of regional nationalism in several European countries in the last three decades, such as Scottish or Catalan nationalism. A rallying cry for Scottish nationalists from the 1980s on has been “independence in Europe”, and it continues to be the case today. Indeed, for the 2019 European Parliament elections, the primary slogan of the centre-left Scottish National Party (SNP), currently in power, is “Scotland’s future belongs in Europe”.

Diverse agendas

What requires further attention is the significance attached to the notion of European identity. Diverse social and political groups have used it, ranging from the far left to the far right, and the meaning they attach to it varies. For the SNP, it is compatible with Scotland’s EU membership. The party combines the latter with an inclusive understanding of the Scottish nation, which is open to people who were born elsewhere in the world but live in Scotland.

Speech by SNP leader and first minister of Scotland, Nicola Sturgeon, on July 2, 2016.

By contrast, Germany’s far-right AfD party (Alternative für Deutschland, Alternative for Germany) is critical of the EU, yet identifies with “Europe”, which it explicitly contrasts with Islam. A clear example is one of the party’s posters for the upcoming elections, which asks “Europeans” to vote for AfD so that the EU doesn’t become “Eurabia”.

Identification with Europe does exist, but it is a complex phenomenon, framed in several ways, and it does not necessarily imply support for the EU. Similarly, European identities are not necessarily mutually exclusive with national identities. Finally, both the former and the latter identities may rest upon stereotypes against people regarded as “non-European”.

The Conversation

Nikolaos Papadogiannis does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Climate change is putting even resilient and adaptable animals like baboons at risk

Author: Isabelle Catherine Winder, Lecturer in Zoology, Bangor University

Villiers Steyn/Shutterstock.com

Baboons are large, smart, ground-dwelling monkeys. They are found across sub-Saharan Africa in various habitats and eat a flexible diet including meat, eggs, and plants. And they are known opportunists – in addition to raiding crops and garbage, some even mug tourists for their possessions, especially food.

We might be tempted to assume that this ecological flexibility (we might even call it resilience) will help baboons survive on our changing planet. Indeed, the International Union for the Conservation of Nature (IUCN), which assesses extinction risk, labels five of six baboon species as “of Least Concern”. This suggests that expert assessors agree: the baboons, at least relatively speaking, are at low risk.

Unfortunately, my recent research suggests this isn’t the whole story. Even this supposedly resilient species may be at significant risk of extinction by 2070.

Resourceful – surely resilient? Okyela/Shutterstock.com

We know people are having huge impacts on the natural world. Scientists have gone as far as naming a new epoch, the Anthropocene, after our ability to transform the planet. Humans drive other species extinct and modify environments to our own ends every day. Astonishing television epics like Our Planet emphasise humanity’s overwhelming power to damage the natural world.

But so much remains uncertain. In particular, while we now have a good understanding of some of the changes Earth will face in the next decades – we’ve already experienced 1°C of warming as well as increases in the frequency of floods, hurricanes and wildfires – we still struggle to predict the biological effects of our actions.

In February 2019 the Bramble Cay melomys (a small Australian rodent) had the dubious honour of being named the first mammal extinct as a result of anthropogenic climate change. Others have suffered range loss, population decline and complex knock-on effects from their ecosystems changing around them. Predicting how these impacts will stack up is a significant scientific challenge.

We can guess at which species are at most risk and which are safe. But we must not fall into the trap of trusting our expectations of resilience, based as they are on a species’ current success. Our recent research aimed to test these expectations – we suspected that current success would not also predict survival under changing climates, and we were right.

Baboons and climate change

Models of the effects of climate change on individual species are improving all the time. These are ecological niche models, which take information on where a species lives today and use it to explore where it might be found in future.

For the baboon study, my masters student Sarah Hill and I modelled each of the six baboon species separately, starting in the present day. We then projected their potential ranges under 12 different future climate scenarios. Our models included two different time periods (2050 and 2070), two different degrees of projected climate change (2.6°C and 6°C of warming) and three different global climate models, each with subtly different perspectives on the Earth system. These two different degrees of warming were chosen because they represent expected “best case” and “worst case” scenarios, as modelled by the Intergovernmental Panel on Climate Change.

Our model outputs allowed us to calculate the change in the area of suitable habitat for each species under each scenario. Three of our species, the yellow, olive and hamadryas baboons, seemed resilient, as we initially expected. For yellow and olive baboons, suitable habitat expanded under all our scenarios. The hamadryas baboon’s habitat, meanwhile, remained stable.
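
For readers curious about the arithmetic, the short Python sketch below illustrates how such habitat-change percentages can be computed from niche-model outputs across the 12 scenarios. It is a minimal illustration with placeholder data, grid sizes and scenario labels, not the actual analysis pipeline used in the study:

# A minimal sketch, with placeholder data, of how percentage change in suitable
# habitat can be derived from ecological niche model outputs. It assumes binary
# suitability grids (1 = suitable, 0 = unsuitable) with equal cell areas; the
# scenario labels and random "projections" below are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

scenarios = [
    (year, warming, gcm)
    for year in (2050, 2070)                 # two time horizons
    for warming in ("2.6C", "6C")            # two degrees of projected warming
    for gcm in ("GCM-A", "GCM-B", "GCM-C")   # three global climate models
]                                            # 2 x 2 x 3 = 12 scenarios

def habitat_change(present, future):
    """Percentage change in suitable area between two binary suitability grids."""
    return 100.0 * (future.sum() - present.sum()) / present.sum()

# Placeholder grid standing in for the present-day projection of one species.
present_suitability = (rng.random((200, 200)) > 0.5).astype(int)

for year, warming, gcm in scenarios:
    # Placeholder grid standing in for the projection under this scenario.
    future_suitability = (rng.random((200, 200)) > 0.6).astype(int)
    change = habitat_change(present_suitability, future_suitability)
    print(f"{year} | {warming} warming | {gcm}: {change:+.1f}% suitable habitat")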

Guinea baboons like these seem to be especially sensitive to warm and arid conditions. William Warby via Flickr and Wikimedia Commons

Guinea baboons (the only one IUCN-labelled as Near Threatened) showed a small loss. Under scenarios predicting warmer, wetter conditions, they might even gain a little. Unfortunately, models projecting warming and drying predicted that Guinea baboons could lose up to 41.5% of their suitable habitat.

But Kinda baboons seemed sensitive to the same warmer and wetter conditions that might favour their Guinea baboon cousins. They were predicted to lose habitat under every model, though the loss ranged from a small one (0-22.7%) in warmer and drier conditions to 70.2% under the worst warm and wet scenario.

And the final baboon species, the chacma baboon from South Africa (the same species that is known for raiding tourist vehicles to steal treats), is predicted to suffer the worst habitat loss. Under our 12 scenarios, habitat loss was predicted to range from 32.4% to 83.5%.

Chacma baboons like these may struggle to survive in the next few decades. PACA COMO/Shutterstock.com

Wider implications

The IUCN identifies endangered species using estimates of population and range size and how they have changed. Although climate change impacts are recognised as potentially causing important shifts in both these factors, climate change effect models like ours are rarely included, perhaps because they are often not available.

Our results suggest that in a few decades several baboon species might move into higher-risk categories. This depends on the extent of range (and hence population) loss they actually experience. New assessments will be required to see which category will apply to chacma, Kinda and Guinea baboons in 2070. It’s worth noting also that baboons are behaviourally flexible: they may yet find new ways to survive.

This also has wider implications for conservation practice. First, it suggests that we should try to incorporate more climate change models into assessments of species’ prospects. Second, having cast doubt on our assumption of baboon “resilience”, our work challenges us to establish which other apparently resilient species might be similarly affected. And given that the same projected changes act differently even on closely related baboon species, we presumably need to start to assess species more or less systematically, without prior assumptions, and to try to extract new general principles about climate change impacts as we work.

Sarah and I most definitely would not advocate discarding any of the existing assessment tools – the work the IUCN does is vitally important and our findings just confirm that. But our project may have identified an important additional factor affecting the prospects of even seemingly resilient species in the Anthropocene.


The Conversation

Isabelle Catherine Winder does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Replanting oil palm may be driving a second wave of biodiversity loss

Author: Simon Willcock, Senior Lecturer in Environmental Geography, Bangor University; Adham Ashton-Butt, Post-doctoral Research Associate, University of Hull

Rufous-backed dwarf kingfisher habitat is lost when forests are cleared for oil palm plantations. © Muhammad Syafiq Yahya

The environmental impact of palm oil production has been well publicised. Palm oil is found in everything from food to cosmetics, and the deforestation, ecosystem decline and biodiversity loss associated with its use are a serious cause for concern.

What many people may not know, however, is that oil palm trees – the fruit of which is used to create palm oil – have a limited commercial lifespan of 25 years. Once this period has ended, the plantation is cut down and replanted, as older trees start to become less productive and are difficult to harvest. Our research has now found that this replanting might be causing a second wave of biodiversity loss, further damaging the environment where these plantations have been created.

An often overlooked fact is that oil palm plantations actually have higher levels of biodiversity compared to some other crops. More species of forest butterflies would be lost if a forest were converted to a rubber plantation, than if it were converted to oil palm, for example. One reason for this is that oil palm plantations provide a habitat that is more similar to tropical forest than other forms of agriculture (such as soybean production). The vegetation growing beneath the oil palm canopy (called understory vegetation) also provides food and a habitat for many different species, allowing them to thrive. Lizard abundance typically increases when primary forests are converted to oil palm, for example.


Read more: Palm oil boycott could actually increase deforestation – sustainable products are the solution


This does not mean oil palm plantations are good for the environment. In South-East Asia, where 85% of palm oil is produced, the conversion of forest to oil palm plantations has caused declines in the number of several charismatic animals, including orangutans, sun bears and hornbills. Globally, palm oil production affects at least 193 threatened species, and further expansion could affect 54% of threatened mammals and 64% of threatened birds.

Second crisis

Banning palm oil would likely only displace, not halt, this biodiversity loss. Several large brands and retailers are already producing products using sustainably certified palm oil, as consumers reassess the impact of their purchasing. But as it is such a ubiquitous ingredient, if it were outlawed companies would need an alternative to keep producing products which include it, and developing countries would need to find something else to contribute to their economies. Production would shift to the cultivation of other oil crops elsewhere, such as rapeseed, sunflower or soybean, in order to meet global demand. In fact, since oil palm produces the highest yields per hectare – up to nine times more oil than any other vegetable oil crop – it could be argued that cultivating oil palm minimises deforestation.

That’s not to say further deforestation should be encouraged to create plantations though. It is preferable to replace plantations in situ, replanting each site so that land already allocated for palm oil production can be reused. This replanting is no small undertaking – 13m hectares of palm oil plantations are to be uprooted by the year 2030, an area nearly twice the size of Scotland. However, our study reveals that much more needs to be done in the management and processes around this replanting, in order to maximise productivity and protect biodiversity in plantations.


Read more: Palm oil: scourge of the earth, or wonder crop?


We found significant declines in the biodiversity and abundance of soil organisms as a consequence of palm replanting. While there was some recovery over the seven years it takes the new crop to establish, the samples we took still had nearly 20% less diversity of invertebrates (such as ants, earthworms, millipedes and spiders) than oil palm converted directly from forest.

We also found that second-wave mature oil palm trees had 59% fewer animals than the previous crop. This drastic change could have severe repercussions for soil health and the overall agro-ecosystem sustainability. Without healthy, well-functioning soil, crop production suffers.

It is likely that replanting drives these declines. Prior to replanting, heavy machinery is used to uproot old palms. This severely disrupts the soil, making upper layers vulnerable to erosion and compaction, reducing its capacity to hold water. This is likely to have a negative impact on biodiversity, which is then further reduced due to the heavy use of pesticides.


Read more: How Indonesia's election puts global biodiversity at stake with an impending war on palm oil


Without change to these management practices, soil degradation is likely to continue, causing decreases in future biodiversity, as well as the productivity of the plantation.

Ultimately, palm oil appears to be a necessary food product for growing populations. However, now that we have identified some of the detrimental consequences of replanting practices, it is clear that long-term production of palm oil comes at a higher cost than previously thought. The world needs to push for more sustainable palm oil, and those in the industry must explore more biodiversity-friendly replanting practices in order to lessen the long-term impacts of intensive oil palm cultivation.

The Conversation

Simon Willcock receives funding from the UK's Economic and Social Research Council (ESRC; ES/R009279/1 and ES/R006865/10). He is affiliated with Bangor University, and is on the Board of Directors of Alliance Earth. This article was written in collaboration with Anna Ray, a research assistant and undergraduate student studying Environmental Science at Bangor University.

Adham Ashton-Butt receives funding from The Natural Environment Research Council. He is affiliated with The University of Hull and the University of Southampton.

Game of Thrones: neither Arya Stark nor Brienne of Tarth are unusual — medieval romance heroines did it all before

Author: Raluca Radulescu, Professor of Medieval Literature and English Literature, Bangor University

Warrior women: Brienne of Tarth, left, and Arya Stark sparring. ©2017 Home Box Office, Inc.

Brienne of Tarth and Arya Stark are very unlike what some may expect of a typical medieval lady. The only daughter of a minor knight, Brienne has trained up as a warrior and has been knighted for her valour in the field of battle. Meanwhile Arya, a tomboyish teen when we first met her in series one, is a trained and hardened assassin. No damsels in distress, then – they’ve chosen to defy their society’s expectations and follow their own paths.

Yet while they are certainly enjoyable to watch, neither character is as unusual as modern viewers may think. While the books and television series play with modern perceptions (and misperceptions) of women’s roles, Arya and Brienne resemble the heroines of medieval times. In those days both real and fictional women took arms to defend cities and fight for their community – inspired by the courage of figures such as Boudicca or Joan of Arc. They went in disguise to look for their loved ones or ran away from home as minstrels or pilgrims. They were players, not bystanders.

While Arya chooses to spend the night with Gendry, she ultimately refuses his proposal of a life together. © 2019 Home Box Office, Inc.

Medieval audiences were regularly inspired by stories of women’s acts of courage and emotional strength. There was Josian, for example, the Saracen (Muslim) princess of the popular medieval romance Bevis of Hampton, who promises to convert to Christianity for love (fulfilling the wishes of the Christian audience). She also murders a man to whom she has been married against her wishes.

There was the lustful married lady who attempts to seduce Sir Gawain in the 14th-century poem Sir Gawain and the Green Knight too. As well as Rymenhild, a princess who eventually marries King Horn in an early example of the romance genre – and who very much wants to break moral codes by having sex with her beloved before their wedding, which at that point has not been decided upon.

Medieval stories of such intense desire celebrate the young virgin heroine who woos the object of her desire and takes no notice of the personal, social, political and economic effects of sex before marriage. This is the case with both Arya and Brienne. Arya chooses her childhood friend Gendry to take her virginity on the eve of the cataclysmic battle against the undead. Brienne does the same with Jaime Lannister, the night after the cataclysmic battle – but only after he earns her trust over many adventures together.

Boldness and strength

It is the emotional strength and courage of these heroines that drives their stories forward rather than their relationship to the male hero. Throughout Game of Thrones, this emotional strength has also helped Arya and Brienne stay true to their missions. Arya’s continued strength has to be seen in the light of what has happened to her, however. Brienne began the story as a trained “knight” but Arya’s journey has seen her learning, through bitter experience, the skills she needs to survive.

A medieval audience would have been attuned to this message of self-reliance. Especially given the everyday gendered experiences of women who ran businesses, households and countries, married unafraid of conventions, or chose not to marry.

It is not too far-fetched to think that Arya and Brienne could together lead the alliance against the evil queen Cersei, having both learned that fate reserves unlikely rewards for those who prepare well and carry on in the name of ideals rather than to improve their own status. The frequently (and most likely deliberately) unnamed heroines of medieval romance similarly prove to be resourceful – and often rose to power, leading countries or armies, without even a mention of prior training.

Sir Brienne of Tarth. ©2017 Home Box Office, Inc.

The medieval heroines that went unnamed provided a perfect model for women then to project themselves onto. The Duchess in the poem Sir Gowther, under duress (her husband threatens to leave on the grounds of not providing an heir), prays that she be given a son “no matter through what means”, and sleeps with the devil – producing the desired heir.

In the Middle English romance story of Sir Isumbras, his wife – whose name we are not told – transforms from a stereotypical courtly lady, kidnapped by a sultan, to a queen who fights against her captor. She becomes an empty shell onto which medieval women – especially those who do not come from the titled aristocracy – can project themselves. She battles alongside her husband and sons when his men desert him, with no training, only her own natural qualities to rely on.

These real and fictional heroines of the Middle Ages had no choices: they found solutions to seemingly impossible situations, just as Brienne and Arya have done. These two are unsung heroes, female warriors who stand in the background and don’t involve themselves in the “game”. While the men celebrate their victory against the undead White Walkers with a feast at Winterfell, Arya – whose timely assassination of their leader, the Night King, enabled the victory – shuns the limelight.

While the conclusion to the stories of Arya and Brienne is yet to be revealed, given the heroines that inspired these characters it will not be surprising if it is the women warriors – not the men – who will drive the game to its end.

The Conversation

Raluca Radulescu has nothing to disclose.

Grass pollen allergies: the type of pollen may matter more than the amount

Author: Simon Creer, Professor in Molecular Ecology, Bangor University; Georgina Brennan, Postdoctoral Research Officer, Bangor University

Grass pollens are among the most allergenic. Pixabay

When the winter cold gives way to warmer temperatures, longer days and a burst of new plant life, nearly 400 million people worldwide suffer allergic reactions triggered by airborne pollen, whether from trees or from grasses and other herbaceous plants. Symptoms range from itchy eyes, congestion and sneezing to the aggravation of asthma, with a cost to society that runs into the billions.

Since the 1950s, many countries around the world have kept pollen counts in order to produce forecasts for allergy sufferers. In the UK, these forecasts are provided by the Met Office in collaboration with the University of Worcester. (In France, the Réseau national de surveillance aérobiologique, a non-profit association, is responsible for studying the biological particles in the air that can affect allergy risk. Its bulletins are available online.)

Until now, pollen forecasts have been based on counting the total number of pollen grains present in the air: these are collected using air samplers that capture the particles on a slowly rotating sticky drum (2 mm per hour).

The problem is that these forecasts cover the level of all pollens present in the air, whereas people have different allergic reactions depending on the type of pollen they encounter. Grass pollen, for example, is the most harmful aeroallergen – more people are allergic to it than to any other airborne allergen. Moreover, the preliminary data we have collected suggest that allergies to this pollen vary over the course of the flowering season.

Pinpointing pollen

The pollen of a great many allergenic tree and plant species can be identified under the microscope. Unfortunately, this is not feasible for grass pollens, because their grains look very similar. This means it is almost impossible to determine which species they come from by routine visual examination.

In order to improve the accuracy of counts and forecasts, we have set up a new project to develop methods for distinguishing between the different types of grass pollen found in the UK. The aim is to know which pollen species are present in Britain throughout the grass flowering season.

In recent years, our research team has explored several approaches to identifying grass pollens, including molecular genetics. One of the methods our team has used relies on DNA sequencing. It involves examining millions of short sections of DNA (or DNA barcode markers). These markers are specific to each species or genus of grass pollen.

This approach is called “metabarcoding” and can be used to analyse DNA from mixed communities of organisms, as well as DNA from different types of environmental sources (for example soil, aquatic sources, honey and air). This means we can assess the biodiversity of hundreds or even thousands of samples. It has allowed us to analyse the DNA of pollen collected by aerial samplers placed on rooftops at 14 different locations across Britain.

Flowering season

By comparing the pollen we captured with samples from the UK plant DNA barcode library (a reference DNA database built from correctly identified grass species), we were able to identify different types of grass pollen from complex mixtures of airborne pollen. This allowed us to visualise how the different types of grass pollen are distributed across Britain over the course of the flowering season. Until now, it was not known whether the mixture of pollens present in the air changed over time, mirroring flowering on the ground, or whether the mixture simply gained new species by steady accumulation as the pollen season went on.

One might reasonably have expected airborne pollen mixtures to be highly varied and haphazard in composition – given the mobility of pollen grains and the fact that different species flower at different points in the season. Yet our work revealed that this is not the case. We found that the composition of airborne pollen reproduces the seasonal progression of grass diversity: early-flowering species first, then mid- and late-season flowering.

Using complementary contemporary and historical data, we also found that as the grass flowering season progresses, the pollen in the air closely follows, but with a delay, the flowering observed on the ground. In other words, over the course of the flowering season the different types of pollen do not persist in the environment: they disappear.

The importance of this work goes beyond simply understanding plants. We have accumulated evidence showing that sales of anti-allergy medicines are likewise not uniform across the grass flowering season. We know that certain types of pollen may contribute more than others to allergies. We can therefore hypothesise that when allergy symptoms are particularly severe, they owe more to the presence of a given type of pollen in the air than to an increase in the overall amount of pollen.

Over the coming months, we will be examining different types of pollen and the associated health data, in order to analyse the links between the biodiversity of airborne pollen and allergy symptoms. The overarching aim of our work is ultimately to improve forecasting, planning and prevention measures so as to limit grass pollen allergies.

The Conversation

Simon Creer has received funding from the Natural Environment Research Council.

Georgina Brennan has received funding from the Natural Environment Research Council.

Ligue 1: France gets its first female top flight football referee, but the federation scores an own goal

Author: Jonathan Ervine, Senior Lecturer in French and Francophone Studies, Bangor University

As the end of the 2018-19 football season approaches, a match between Amiens and Strasbourg in France’s Ligue 1 would normally attract little attention. However, Sunday’s game has already created headlines as Stéphanie Frappart will become the first ever woman to act as a main referee in the top tier of French men’s football.

Initially, this appointment could be seen as a symbol of progress and inclusion. But the French Football Federation (FFF) announced that Frappart had been appointed as the main official for the Amiens-Strasbourg match in order to “prepare her for World Cup conditions” ahead of the 2019 Women’s World Cup in France.

The FFF’s explanation seems somewhat begrudging as it makes no reference to Frappart’s experience or talent as a match official. It arguably presents her nomination as a means to an end rather than a logical next step for someone who has officiated in Ligue 2 since 2014. Indeed, Frappart has also been a fourth official or video assistant referee in Ligue 1 several times.

Whether Frappart will establish herself as a leading referee within men’s football in France is uncertain. Pascal Garibian, technical director for refereeing in France, has said it is “still too early to say” if she will become a regular main referee in Ligue 1. In addition, it is unclear if she will referee any more top division matches this season.

It is also worth questioning to what extent officiating at Amiens-Strasbourg constitutes good preparation for this summer’s Women’s World Cup. Amiens’ home stadium can welcome 12,000 spectators, 8,000 fewer than the smallest 2019 Women’s World Cup venue. Seven of France’s nine World Cup stadiums have more than double the capacity of Amiens’ Stade de la Licorne. And Amiens has the third lowest average attendance of Ligue 1 teams during the current season.

Slow progression

Frappart becoming the first woman to referee a match in Ligue 1 is significant, but also somewhat paradoxical. In fact, it highlights the lack of career progression enjoyed by female officials within French men’s football – and across Europe, too.

In September 2017, Bibiana Steinhaus became the first female referee in a European main men’s football league (in Germany’s Bundesliga). But while Frappart’s appointment will see Ligue 1 become the second major European men’s league in which a woman has taken charge of a game, it has taken some time to get here.

In 1996, Nelly Viennot became the first female assistant referee in Ligue 1, yet it has taken another 23 years for the first female main referee. In a top-level career lasting from 1996-2007, Viennot was regularly an assistant referee in men’s football, but never a main referee.

Regrettably, it seems that the FFF has taken the sheen off a notable first. A request from FIFA that its member associations help match officials to “prepare in the best conditions possible” for the 2019 Women’s World Cup seems the main reason Frappart will officiate this Sunday. It is somewhat unusual for someone not selected as a top division referee at the start of the season to officiate in Ligue 1. In Germany, Bibiana Steinhaus had been listed as one of the top division referees prior to the 2017-18 season.

As a referee in Ligue 2, Frappart has at times encountered sexist attitudes. When coach of Valenciennes in 2015, David Le Frapper said that “when a woman referees in a man’s sport, things are complicated” following a match Frappart refereed. Such comments are reminiscent of Sky presenters Richard Keys and Andy Gray’s reaction to Sian Massey-Ellis’ presence as assistant referee at an English Premier League match in 2011, when they suggested that female officials “don’t know the offside rule”.

During the last decade, the FFF has provoked controversy when seeking to encourage more women to get involved in football. In 2010, they sought to boost the profile of women’s football in France via a campaign featuring Adrianna Karembeu. Several posters were based on obvious gender stereotypes.

One featured an image of female footballers in a changing room and the slogan “For once you won’t scream when seeing another girl wearing the same outfit”. The FFF had previously promoted women’s football via an image of three leading players posing naked alongside the question “Is this what we have to do for you to come to see us play?”

Nelly Viennot’s presence as the first female assistant referee in Ligue 1 did not herald the arrival of many more female officials in French men’s football. Stéphanie Frappart is still the only woman to have been the main referee in Ligue 2. It is unclear to what extent attitudes to female referees in French men’s football are evolving. It may well be several years before we realise the real impact of Frappart’s appointment as referee for the match between Amiens and Strasbourg.

The Conversation

Jonathan Ervine does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

How did the moon end up where it is?

Author: Mattias Green, Reader in Physical Oceanography, Bangor University; David Waltham, Professor of Geophysics, Royal Holloway

Suppakij1017/Shutterstock

Nearly 50 years since man first walked on the moon, the human race is once more pushing forward with attempts to land on the Earth’s satellite. This year alone, China has landed a robotic spacecraft on the far side of the moon, while India is close to landing a lunar vehicle, and Israel continues its mission to touch down on the surface, despite the crash of its recent venture. NASA meanwhile has announced it wants to send astronauts to the moon’s south pole by 2024.

But while these missions seek to further our knowledge of the moon, we are still working to answer a fundamental question about it: how did it end up where it is?

On July 21, 1969, the Apollo 11 crew installed the first set of mirrors to reflect lasers targeted at the moon from Earth. The subsequent experiments carried out using these arrays have helped scientists to work out the distance between the Earth and moon for the past 50 years. We now know that the moon’s orbit has been getting larger by 3.8cm per year – it is moving away from the Earth.

This distance, and the use of moon rocks to date the moon’s formation to 4.51 billion years ago, are the basis for the giant impact hypothesis (the theory that the moon formed from debris after a collision early in Earth’s history). But if we assume that lunar recession has always been 3.8cm/year, we have to go back 13 billion years to find a time when the Earth and moon were close together (for the moon to form). This is much too long ago – but the mismatch is not surprising, and it might be explained by the world’s ancient continents and tides.

Tides and recession

The distance to the moon can be linked to the history of Earth’s continental configurations. The loss of tidal energy (due to friction between the moving ocean and the seabed) slows the planet’s spin, which forces the moon to move away from it – the moon recedes. The tides are largely controlled by the shape and size of the Earth’s ocean basins. When the Earth’s tectonic plates move around, the ocean geometry changes, and so does the tide. This affects the moon’s retreat, so it appears smaller in the sky.

This means that if we know how Earth’s tectonic plates have changed position, we can work out where the moon was in relation to our planet at a given point in time.

We know that the strength of the tide (and so the recession rate) also depends on the distance between Earth and the moon. So we can assume that the tides were stronger when the moon was young and closer to the planet. As the moon rapidly receded early in its history, the tides will have become weaker and the recession slower.

The detailed mathematics that describe this evolution were first developed by George Darwin, son of the great Charles Darwin, in 1880. But his formula produces the opposite problem when we input our modern figures. It predicts that Earth and the moon were close together only 1.5 billion years ago. Darwin’s formula can only be reconciled with modern estimates of the moon’s age and distance if its typical recent recession rate is reduced to about one centimetre per year.
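
To see where these numbers come from, a quick back-of-the-envelope check helps. The Python sketch below assumes the standard constant-lag tidal scaling, in which the recession rate falls off as the inverse 11/2 power of the Earth-moon distance; it illustrates the argument in the text rather than reproducing the authors' own calculation:

# Under a constant-lag tidal model the recession rate scales as a**(-11/2),
# where a is the Earth-moon distance. Integrating that relation, the time to
# spiral out from very close to Earth to a distance A0 is
# (2/13) * A0 / (recession rate at A0). Illustration only.

A0 = 384_400e3   # present Earth-moon distance, metres

def time_since_close_approach(recession_rate_m_per_yr):
    """Years to recede from near zero to A0 if da/dt scales as a**(-11/2)."""
    return (2.0 / 13.0) * A0 / recession_rate_m_per_yr

# Today's measured rate of 3.8 cm per year gives roughly Darwin's figure:
print(f"{time_since_close_approach(0.038) / 1e9:.2f} billion years")
# -> 1.56, close to the ~1.5 billion years quoted above

# Long-term average rate needed to match the moon's ~4.51 billion year age:
required_rate = (2.0 / 13.0) * A0 / 4.51e9
print(f"required average rate: {required_rate * 100:.1f} cm per year")
# -> about 1.3, i.e. roughly the one centimetre per year mentioned above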

The implication is that today’s tides must be abnormally large, causing the 3.8cm recession rate. The reason for these large tides is that the present-day North Atlantic Ocean is just the right width and depth to be in resonance with the tide, so the natural period of oscillation is close to that of the tide, allowing them to get very large. This is much like a child on a swing who moves higher if pushed with the right timing.

But go back in time – a few million years is enough – and the North Atlantic is sufficiently different in shape that this resonance disappears, and so the moon’s recession rate will have been slower. As plate tectonics moved the continents around, and as the slowing of Earth’s rotation changed the length of days and the period of tides, the planet would have slipped in and out of similar strong-tide states. But we don’t know the details of the tides over long periods of time and, as a result, we cannot say where the moon was in the distant past.

Sediment solution

One promising approach to resolve this is to try to detect Milankovitch cycles from physical and chemical changes in ancient sediments. These cycles come about because of variations in the shape and orientation of Earth’s orbit, and variations in the orientation of Earth’s axis. These produced climate cycles, such as the ice ages of the last few million years.

Most Milankovitch cycles don’t change their periods over Earth’s history but some are affected by the rate of Earth’s spin and the distance to the moon. If we can detect and quantify those particular periods, we can use them to estimate day-length and Earth-moon distance at the time the sediments were deposited. So far, this has only been attempted for a single point in the distant past. Sediments from China suggest that 1.4 billion years ago the Earth-moon distance was 341,000km (its current distance is 384,000km).
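
Those two figures already allow a simple cross-check of the idea that today's recession is unusually fast. The snippet below just divides the change in distance by the elapsed time to get a long-term average rate; it is a rough average using only numbers quoted in the text, not a model:

# The sediment-based estimate of the Earth-moon distance 1.4 billion years ago
# implies a long-term average recession rate slower than today's 3.8 cm per
# year, in line with the idea that present-day tides are unusually strong.
distance_now_km = 384_000
distance_then_km = 341_000
elapsed_years = 1.4e9

average_rate_cm_per_yr = (distance_now_km - distance_then_km) * 1e5 / elapsed_years
print(f"average recession over the last 1.4 billion years: "
      f"{average_rate_cm_per_yr:.1f} cm per year")
# -> about 3.1 cm per year, versus the 3.8 cm per year measured today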

Now we are aiming to repeat these calculations for sediments in hundreds of locations laid down at different time periods. This will provide a robust and near-continuous record of lunar recession over the past few billion years, and give us a better appreciation of how tides changed in the past. Together, these interrelated studies will produce a consistent picture of how the Earth-moon system has evolved through time.

The Conversation

Mattias Green receives funding from the Natural Environment Research Council.

David Waltham receives funding from NERC.

DNA analysis finds that type of grass pollen, not total count, could be important for allergy sufferers

Author: Simon Creer, Professor in Molecular Ecology, Bangor University; Georgina Brennan, Postdoctoral Research Officer, Bangor University

Elizaveta Galitckaia/Shutterstock

As the winter cold is replaced by warmer temperatures, longer days and an explosion of botanical life, up to 400m people worldwide will develop allergic reactions to airborne pollen from trees, grasses and weeds. Symptoms will range from itchy eyes, congestion and sneezing, to the aggravation of asthma and an associated cost to society that runs into the billions.

Ever since the 1950s, countries around the world have been recording pollen counts to create forecasts for allergy sufferers. In the UK this forecast is provided by the Met Office in collaboration with the University of Worcester. To date, pollen forecasts have been based on counting the total number of grains of pollen in the air from trees, weeds and grass. The pollen is collected using air sampling machines that capture the particles on a slowly rotating sticky drum.

However, while these forecasts focus on the level of all pollens in the air, people suffer from allergic reactions to different types of pollen. Grass pollen, for example, is the most harmful aeroallergen – more people are allergic to grass pollen than any other airborne allergen. And now our own preliminary health data suggests that allergies to this pollen vary across the grass flowering season.

Pinpointing pollen

In an effort to improve the accuracy of pollen counts and forecasts, we have been working on a new project to distinguish between different types of grass pollen in the UK. The aim is to find out what species of pollen are present across Britain throughout the grass flowering season.

Microscopes are used to identify the pollen of many allergenic tree and weeds, but unfortunately this can’t be done for grass pollen, since all grass pollen grains look highly similar underneath a microscope. This means it is almost impossible to routinely distinguish the species of grass they come from using visual observation.

So, over the past few years, our research team, PollerGEN, has been investigating whether a new wave of approaches, including molecular genetics, can be used to identify different airborne grass pollens instead. One method that our team has employed to identify the pollen relies on using DNA sequencing to examine millions of short sections of DNA (also called barcode markers). These markers are unique to each species or genus of grass pollen.

This approach is called “metabarcoding” and it can be used to analyse DNA derived from mixed communities of organisms, as well as DNA from many different types of environmental sources (for example, soil, aquatic sources, honey and the air). It means that we can assess the biodiversity of hundreds to thousands of samples. In particular, it has allowed us to analyse pollen DNA collected by aerial samplers at 14 rooftop locations across Britain.
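
Conceptually, the matching step boils down to comparing each sequenced marker against a reference library and tallying which taxa appear in each sample. The Python toy sketch below illustrates that idea with invented sequences and a simple exact-match lookup; real metabarcoding analyses rely on dedicated bioinformatics pipelines with error correction and alignment, not anything this simple:

# A toy illustration of the metabarcoding idea: match short marker sequences
# recovered from an air sample against a reference barcode library and tally
# taxa per sample. The sequences and genus assignments below are invented
# placeholders.
from collections import Counter

# Hypothetical reference library: barcode marker sequence -> grass genus
reference_barcodes = {
    "ATCGGCTA": "Lolium",
    "TTAGGCCA": "Poa",
    "GGATCCTA": "Anthoxanthum",
}

def classify_sample(reads):
    """Count how many reads in one air sample match each reference taxon."""
    counts = Counter()
    for read in reads:
        counts[reference_barcodes.get(read, "unassigned")] += 1
    return counts

# Reads captured by one hypothetical rooftop sampler on one day
sample_reads = ["ATCGGCTA", "ATCGGCTA", "GGATCCTA", "TTAGGCCA", "CCCCAAAA"]
print(classify_sample(sample_reads))
# Counter({'Lolium': 2, 'Anthoxanthum': 1, 'Poa': 1, 'unassigned': 1})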

Flowering season

By comparing the pollen we captured to samples in the UK plant DNA barcode library (an established reference DNA database of correctly identified grass species) we have been able to identify different types of grass pollen from complex mixtures of airborne pollen. This has allowed us to visualise how different types of grass pollen are distributed throughout Britain across the grass flowering season.

While there was a real chance that aerial pollen mixtures could be very varied and haphazard – due to the mobility of pollen in the environment and the fact that different grasses flower at different times of the season – our newly published study has found that this is not the case. We have found that the composition of airborne pollen resembles a seasonal progression of diversity, featuring early, then mid and late-season flowering grasses.

By combining other historical and contemporary data, we also found that as the grass flowering season progresses, airborne pollen follows a sensible, but delayed appearance from the first flowering times noted from the ground. This means that different types of grass pollen are not present throughout each period of the flowering season. They disappear from the environmental mixture.

This research is important to more than just our understanding of plants. Our own emerging evidence suggests that sales of over-the-counter allergy medications are not uniform throughout the grass flowering season. So certain types of grass pollen may be contributing more to allergenic disease than others. It could be that when symptoms are particularly bad, allergies are caused by the type of grass pollen in the air, not just the amount.

In the next few months, we will be looking into different forms of pollen and health data, to investigate links between the biodiversity of aerial pollen and allergenic symptoms. The overarching aim of our work is to eventually provide better forecasting, planning and prevention measures so that fewer people suffer from grass allergenic disease.

The Conversation

Simon Creer receives funding from The Natural Environment Research Council.

Georgina Brennan receives funding from The Natural Environment Research Council.

The power of language: words translate thoughts and influence the way we think

Author: Guillaume Thierry, Professor of Cognitive Neuroscience, Bangor University

Words describe our world. Curioso via Shutterstock

Have you ever worried, during your school years or later in life, that you might be running out of time to achieve all your goals? If so, would it be easier to convey this feeling to others if there were a word that meant exactly that? In German, there is. The feeling of panic associated with one’s opportunities appearing to run out is called Torschlusspanik.

German has a rich collection of such terms, made up of two, three or more words joined together to form a superword, or compound word. Compound words are particularly powerful because they mean more than the sum of their parts. Torschlusspanik, for example, is literally composed of “gate” – “closing” – “panic”.

If you arrive at the train station a little late and see the doors of your train still open, you may have experienced a concrete form of Torschlusspanik, prompted by the characteristic beeps as the train doors are about to close. But this German compound word carries richer associations than its literal meaning. It evokes something more abstract, referring to the feeling that life is steadily closing the door on opportunities as time goes by.

English has many compound words too. Some combine quite concrete words, such as “seahorse”, “butterfly” or “turtleneck”. Others are more abstract, such as “backwards” or “whatsoever”. And of course, just as in German or French, English compound words are also superwords, because their meaning is often distinct from the meaning of their parts. A seahorse is not a horse, a butterfly is not a fly, turtles do not wear turtlenecks, and so on.

One remarkable feature of compound words is that they do not translate well from one language to another, at least not when their parts are translated literally. Who would have thought that “carry-sheets” is a wallet – porte-feuille – or that “support-throat” is a bra – soutien-gorge – in French?

This raises the question of what happens when we struggle to find an equivalent for a word in another language. For example, what happens when a native German speaker tries to convey in English that they have just had a burst of Torschlusspanik? Naturally, they will paraphrase; that is, they will build a narrative with examples to make their interlocutor understand what they are trying to say.

But this then raises another, bigger question: do people who have words that cannot be translated into other languages have access to different concepts? Take hiraeth, for example, a beautiful Welsh word famous for being untranslatable. Hiraeth is meant to convey the feeling associated with the bittersweet memory of missing something or someone, while being grateful for their existence.

Hiraeth is not nostalgia, nor is it anguish, frustration, melancholy or regret. Hiraeth also conveys the feeling one experiences when one asks someone to marry them and is turned down.

Kata yang berbeda, pikiran yang berbeda?

Keberadaan sebuah kata dalam bahasa Welsh untuk menyampaikan perasaan khusus ini menimbulkan pertanyaan mendasar tentang hubungan pemikiran-bahasa. Filsuf seperti Herodotus (450 SM) menanyakan hal ini pada masa Yunani kuno. Pertanyaan ini muncul kembali pada pertengahan abad terakhir, di bawah dorongan Edward Sapir dan mahasiswanya Benjamin Lee Whorf. Pertanyaan ini telah berkembang menjadi yang dikenal sebagai hipotesis relativitas linguistik.

Relativitas linguistik adalah gagasan bahwa bahasa, yang mayoritas orang setuju berasal dari dan mengekspresikan pemikiran manusia, dapat memberi umpan balik pada pemikiran, mempengaruhi pemikiran sebagai balasannya. Jadi, dapatkah kata-kata yang berbeda atau konstruksi tata bahasa yang berbeda “membentuk” cara berpikir secara berbeda dalam penutur bahasa yang berbeda? Ide ini mulai dilirik produsen budaya populer, dan muncul dalam film fiksi sains Arrival.

Meski ide ini intuitif bagi sebagian orang, terdapat klaim berlebihan tentang tingkat keragaman kosakata di beberapa bahasa. Klaim semacam ini mendorong ahli bahasa terkenal untuk menulis esai satir seperti “hoax kosakata Eskimo yang begitu banyak”, saat Geoff Pullum mencela fantasi tentang jumlah kata yang digunakan oleh orang Eskimo untuk merujuk pada salju. Namun, berapa pun jumlah kata sebenarnya untuk salju di Eskimo, Pullum gagal menjawab pertanyaan penting: apa yang sebenarnya kita ketahui tentang persepsi orang Eskimo tentang salju?

Meski banyak kritik terhadap hipotesis relativitas linguistik, penelitian eksperimental untuk mencari bukti ilmiah adanya perbedaan antara penutur bahasa yang berbeda semakin banyak. Contohnya, Panos Athanasopoulos di Lancaster University, telah membuat pengamatan yang mengejutkan bahwa adanya kata-kata khusus untuk membedakan kategori warna beriringan dengan kemampuan apresiasi kontras warna.

So, he has shown, native speakers of Greek, who have distinct terms for light and dark blue (ghalazio and ble respectively), tend to perceive different shades of blue as more different from one another than native speakers of English, who use the same term, “blue”, to describe them.

But thinkers including Steven Pinker at Harvard are unimpressed, arguing that such effects are trivial and uninteresting, because the individuals taking part in the experiments are likely to use language in their heads when making judgements about colours – so their behaviour is superficially influenced by language, while everyone sees the world the same way.

To move this debate forward, I believe we need to look at the human brain, measuring perception more directly, ideally within the brief window of time before mental access to language. This is now possible thanks to neuroscientific methods and – remarkably – early results lean in favour of the intuition of Sapir and Whorf.

So, yes, like it or not, it may well be that having different words means having differently structured minds.

The Conversation

Guillaume Thierry has received funding from the European Research Council, the Economic and Social Research Council, the British Academy, the Arts and Humanities Research Council, the Biotechnology and Biological Sciences Research Council, and the Arts Council of Wales.

Our Planet is billed as an Attenborough documentary with a difference but it shies away from uncomfortable truths

Author: Julia P G Jones, Professor of Conservation Science, Bangor University

A ghost ship off the coast of Peru, home to the biggest fishery on the planet, has become an unlikely nesting site for guanay cormorants and Peruvian boobies. Hugh Pearson/Silverback/Netflix

Over six decades, Sir David Attenborough’s name has become synonymous with high-quality nature documentaries. But while his latest project, the Netflix series Our Planet, once again sees him narrating incredible shots of nature and wildlife, this series is a little different from his past films. Many of his previous smash hits portrayed the natural world as untouched and perfect; Our Planet is billed as putting the threats facing natural ecosystems front and centre of the narrative. In the opening scenes we are told: “For the first time in human history the stability of nature can no longer be taken for granted.”

This is a very significant departure – and one which is arguably long overdue. Those of us who study the pressures on wild nature have been frustrated that nature documentaries give the impression that everything is OK. Some argue that they may do more harm than good by giving viewers a sense of complacency.

Conservation scientists were expecting that the new series wouldn’t shy away from the awful truth: the wonders shown in these mesmerising nature programmes are tragically reduced – and many are at risk of being lost forever.

Documentaries portray pristine habitats, but that’s not always the case.

I had the privilege of seeing the Our Planet team at work back in 2015 (these films take years to make). I spent three weeks at the camp in western Madagascar where they were working on their forest film. While the camera crew were working day and night filming fossa (lemur-hunting carnivores), and trying to get the perfect footage of leaf bugs producing honeydew (the series is worth watching for this sequence alone), the team was also digging deep into the complex issues of what is happening to this wondrous biodiversity. Their researcher spent many hours with Malagasy conservation scientist Rio Heriniaina talking to local community leaders about the challenges they face and the reasons for the very rapid rate of forest loss in the region.

However, none of that fascinating footage made the final cut. Following a scene showing fossa mating, we are told that their forests have since been burnt. This was already happening in 2015. As Heriniaina told me:

Madagascar’s dry forests are vanishing before our eyes. Every burning season large areas of forest go up in flames to clear space for peanuts and corn. There is no simple answer as to why, and no simple solutions. Poverty plays a role, but so do corruption and the influence of powerful people who profit from the destruction.

Our Planet’s team filmed the burning of Madagascar’s dry forests, but this didn’t make the final cut. Jeff Wilson/Silverback/Netflix

This is my main critique of Our Planet. Despite being billed as an unflinching look at the threats facing the intricate and endlessly fascinating ecosystems being depicted, it actually tends to shy away from showing these threats or, even more importantly, addressing the question of what can be done to resolve them. As in previous documentaries, shots have been carefully positioned to cut out evidence of human influence.

In my three decades of watching wildlife documentaries, I remember only one moment which broke from this tradition. In Simon Reeve’s 2012 series about the Indian Ocean, he showed people living in and around the habitats he was filming. He humanised them. He was also honest about how limited the picturesque natural habitats he was filming were. In a memorable sequence showing a sifaka leaping between trees, he asked the cameraman to turn around, revealing the miles of sisal plantation which surround the tiny remnant of forest where endless crews go to film these charismatic lemurs. When Planet Earth II came out in 2016, I was disappointed to see a return to more of the same – that same remnant forest in southern Madagascar appeared, but without the context.

As with previous documentaries, you could come away from Our Planet thinking the places being portrayed are completely separate from people. Human presence in and around many of these habitats has been erased. However, to be successful, conservation can’t ignore people.

A red capped manakin, filmed in Panama. Emilio White/Silverback/Netflix

Maybe it is churlish to complain that Our Planet, like other such films, avoids showing the uncomfortable truth about just how threatened so much of nature really is. Perhaps the pure and unsullied vision is what makes them so popular. So many of us working in conservation were drawn in through watching Sir David Attenborough’s other films as children. By introducing viewers to fascinating facts about ecology (who knew that winds blowing across deserts feed life in the ocean?) and the mind-boggling behaviours of birds (such as the manakins shown doing a shuffle dance), Our Planet will engage a whole new generation.

Researchers have shown time and time again that knowledge isn’t enough to change people’s behaviour. However, feeling connected with nature does matter. One thing the series will certainly do is make people fall in love with the planet. And that is a good thing.



The Conversation

Julia P G Jones has received funding from NERC and the Leverhulme Trust to support her research in Madagascar.

Food banks are becoming institutionalised in the UK

Author: Dave Beck, Postdoctoral Teaching Fellow in Sociology, Bangor University

I was one of 58 academics, activists and food writers who published a stark open letter warning against food banks becoming institutionalised in the UK. We believe the country is now reaching a point where “left behind people” and retailers’ “leftover food” share a symbiotic relationship. Food banks are becoming embedded within welfare provision, fuelled by corporate involvement and ultimately creating an industry of poverty.

We advocate challenging this link between food waste and food poverty. The UK has a welfare system that should be there for people in their time of need. But instead food banks – of which there are at least 2,000 across the country – are in receipt of government subsidies supporting redistribution, and fresh food is being introduced through publicly funded corporate philanthropy.

While people are certainly being helped by food banks in their moments of need, we cannot accept that they solve long-term poverty. In the US and Canada, academic Andy Fisher has highlighted that food bank institutionalisation has been politically and corporately encouraged over the last 35 years, but this has done nothing to alleviate food poverty. It has, in fact, only served corporate interest and entrenched food poverty further.

How has this happened?

For my PhD research I looked into the rise of food banks and critically examined their role as a new and emerging provider of aid for people struggling with welfare reform. My work also assessed the structural causes of food poverty associated with the Welfare Reform Act 2012, and the changing language of social security.

Austerity policies provided the initial fertile ground which led to many more people needing to access food banks. Under welfare reform, access to welfare became subject to heavy conditions. People came under heightened sanctions if they failed to follow their claimant commitment, while the so-called bedroom tax saw some losing housing benefit entitlement if they had a spare bedroom in their council or housing association-owned property.

This paved the way for food banks to fill the void left behind by retrenched welfare. Now food banks are increasingly accepting large donations and working with big retailers and food redistribution organisations, as they become an accepted part of UK life.

For food banks to become part of an institutionalised provision, leading food poverty expert Graham Riches argues that there is a three-stage process. First, there needs to be a national food bank provider, for example Feeding America in the US and Food Banks Canada. These organisations coordinate and support linked food pantries under their banner. Within the UK, the Trussell Trust, with a strong network of 427 food banks (plus associated distribution centres), has a similar role.

Second, this national provider must create partnership alliances with food companies and food redistribution organisations. For the last seven years, the Trussell Trust has worked with UK food retailer Tesco. Recently, it has also collaborated with FareShare and Asda to increase redistribution to its food banks.

Contacted by The Conversation for this article, the Trussell Trust insists it is “campaigning to create a future without food banks”. Emma Revie, chief executive, highlighted its role in campaigning for changes to the benefits system to properly support people who need help. She added there was no desire for food banks to “become the new normal”.

But the engagement of retail giants serves to embed food banks, as it combines two socially distinct problems – food surplus and food poverty – while doing nothing to solve the structural issues of poverty. It serves the retailer well too, by improving their corporate social responsibility (large retailers are seen to be acting for the social good of their community), not to mention the increase in sales through their tills. Shoppers are purchasing their donations from these retailers and putting them in store donation bins to be taken to the food banks.

The third stage is an increasing influence and relationship with national government. A national food bank provider can then emerge as an accepted response to declining welfare. This has happened in the US and Canada, although UK food banks at present are still in a campaigning position.

Not the new normal

However, I think that food banks also need to complete two more stages for there to be complete institutionalisation. Through their partnership with larger organisations, food banks recognise the need to invest in facilities and transport to deal with redistributed food, especially if it includes fresh food. They also begin to invest in time and energy from dedicated volunteers who make food banks warm and welcoming places. This is common now in North America and has also already begun in the UK, potentially creating an air of permanence about them.

Fifth and finally, when food banks are truly institutionalised we will see them accepted by society as being an adequate substitute for welfare, especially for “less deserving” people. This recognition was evidenced when Asda removed all unmanned food bank collection baskets in February 2016, signalling the end of customers’ donations. Following a social media uproar, and a challenge put forward by the charities affected, Asda reinstated the baskets.

Food bank collection baskets in supermarkets are now commonplace. Their removal and subsequent disquiet shows how there is social acceptance of food banks. People realise the value of them for those in need, fuelling the process of embedding food banks, not just within society, but within our social conscience.

But we need to remember that food poverty has no place within our society. We should be campaigning for change, not acceptance of a new normal. As the US and Canada have seen, once food banks become embedded, they do not go away. Food banks may be vital in times of crisis, but they are not a substitute for proper support.

Editor’s note: This article was updated to amend the number of food banks in the UK from 3,000 to “at least 2,000”

The Conversation

Dave Beck does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Snake venom can vary in a single species — and it’s not just about adaptation to their prey

Authors: Wolfgang Wüster, Senior Lecturer in Zoology, Bangor University; Giulia Zancolli, Associate Research Scientist, Université de Lausanne

Few sights and sounds are as emblematic of the North American southwest as a defensive rattlesnake, reared up, buzzing, and ready to strike. The message is loud and clear, “Back off! If you don’t hurt me, I won’t hurt you.” Any intruders who fail to heed the warning can expect to fall victim to a venomous bite.

But the consequences of that bite are surprisingly unpredictable. Snake venoms are complex cocktails made up of dozens of individual toxins that attack different parts of the target’s body. The composition of these cocktails is highly variable, even within single species. Biologists have come to assume that most of this variation reflects adaptation to what prey the snakes eat in the wild. But our study of the Mohave rattlesnake (Crotalus scutulatus, also known as the Mojave rattlesnake) has uncovered an intriguing exception to this rule.

What’s in those glands? It depends where you are! W. Wüster

A 20-minute drive can take you from a population of this rattlesnake species with a highly lethal neurotoxic venom, causing paralysis and shock, to one with a haemotoxic venom, causing swelling, bruising, blistering and bleeding. The neurotoxic venom (known as venom A) can be more than ten times as lethal as the haemotoxic venom (venom B), at least to lab mice.

The Mohave rattlesnake is not alone in having different venoms like this – several other rattlesnake species display the same variation. But why do we see these differences? Snake venom evolved to subdue and kill prey. One venom may be better at killing one prey species, while another may be more toxic to different prey. Natural selection should favour different venoms in snakes eating different prey – it’s a classic example of evolution through natural selection.

This idea that snake venom varies due to adaptation to eating different prey has become widely accepted among herpetologists and toxinologists. Some have found correlations between venom and prey. Others have shown prey-specific lethality of venoms, or identified toxins fine-tuned for killing the snakes’ natural prey. The venom of some snakes even changes along with their diet as they grow.

We expected the Mohave rattlesnake to be a prime example of this phenomenon. The extreme differences in venom composition, toxicity and mode of action (whether it is neurotoxic or haemotoxic) seem an obvious target for natural selection for different prey. And yet, when we correlated differences in venom composition with regional diet, we were shocked to find there is no link.

Variable venoms

In the absence of adaptation to local diet, we expected to see a connection between gene flow (transfer of genetic material between populations) and venom composition. Populations with ample gene flow would be expected to have more similar venoms than populations that are genetically less connected. But once again, we drew a blank – there is no link between gene flow and venom. This finding, together with the geographic segregation of the two populations with different venoms, suggests that instead there is strong local selection for venom type.

Mohave rattlesnake feeding on a kangaroo rat, one of its most common prey items. W. Wüster

The next step in our research was to test for links between venom and the physical environment. Here, finally, we found some associations. The haemotoxic venom is found in rattlesnakes living in areas with warmer temperatures and more consistently low rainfall than those where the rattlesnakes with the neurotoxic venom occur. But even this finding is deeply puzzling.

It has been suggested that, as well as killing prey, venom may also help digestion. Rattlesnakes eat large prey in one piece, and then have to digest it in a race against decay. A venom that starts predigesting the prey from the inside could help, especially in cooler climates where digestion is more difficult.

But the rattlesnakes with haemotoxic venom B, which better aids digestion, are found in warmer places, while snakes from cooler upland deserts invariably produce the non-digestive, neurotoxic venom A. Yet again, none of the conventional explanations make sense.

Clearly, the selective forces behind the extreme venom variation in the Mohave rattlesnake are complex and subtle. A link to diet may yet be found, perhaps through different kinds of venom resistance in key prey species, or prey dynamics affected by local climate. In any case, our results reopen the discussion on the drivers of venom composition, and caution against the simplistic assumption that all venom variation is driven by the species composition of regional diets.

From a human perspective, variation in venom composition is the bane of anyone working on snakebite treatments, or antidote development. It can lead to unexpected symptoms, and antivenoms may not work against some populations of a species they supposedly cover. Anyone living within the range of the Mohave rattlesnake can rest easy though – the available antivenoms cover both main venom types.

Globally, however, our study underlines the unpredictability of venom variation, and shows again that there are no shortcuts to understanding it. Those developing antivenoms need to identify regional venom variants and carry out extensive testing to ensure that their products are effective against all intended venoms.

The Conversation

Wolfgang Wüster receives funding from The Leverhulme Trust.

Giulia Zancolli receives funding from a Santander Early Career Research Scholarship.