On our News pages
Our Research News pages contain an abundance of research-related articles, covering recent research output and topical issues.
Our researchers publish across a wide range of subjects and on a variety of news platforms. The articles below are a few of those published on TheConversation.com.
Exercise: we calculated its true value for older people and society
Author: Carys Jones, Research Fellow in Health Economics, Bangor University
Taking up exercise is one of the most popular New Year’s resolutions for people wanting to improve their health. But our research shows that the benefits of older people going to exercise groups go beyond self-improvement and provide good value for society, too.
Less than two-thirds of UK adults reach the recommended physical activity levels of 150 minutes of moderate intensity exercise a week. Keeping active is especially important for older people because it can help reduce falls and improve independence and the ability to carry out everyday tasks. It also boosts mental wellbeing.
Older people are more vulnerable to loneliness and social isolation, and forming friendships and the social aspect of taking part in group exercise is a good way of protecting them from this. A study that followed older people in Taiwan over 18 years found that people who regularly took part in social activities were less likely to be depressed than those who did none. Research has also shown that having a strong social network decreases the risk of death over time.
But our research has now also found that exercise groups for older people are valuable not only to those who take part but also for the wider community.
We carried out a study of the social value generated by the Health Precinct, a community hub in North Wales that grew out of a partnership between local government, the NHS and Public Health Wales. People with chronic health conditions are referred to the Health Precinct through social prescribing. Social prescribing is a way of linking people to non-clinical services that are available in their community. The idea is that offering services in a community setting rather than a hospital or clinic will provide a non-threatening environment and encourage people to go.
Although the scheme is open to people of all ages with chronic conditions, so far it has mainly been used by older people and the most common reasons for referral are issues with mobility, balance, arthritis and heart conditions.
After someone is assessed at the Health Precinct, they receive a tailored 16-week plan that sets realistic goals and encourages them to take part in exercise groups at the local leisure centre. The Health Precinct promotes health and wellbeing improvement by encouraging social participation, independence and self-management of conditions.
Our approach to measuring the value of the programme was to carry out a social return on investment analysis. This method explores a broader concept of value than market prices, and puts a monetary value on social and environmental factors such as health status and social connectivity.
To establish what the impact was at a societal level, we included in our analysis the effects on people who attended the Health Precinct, their families, the NHS and the local government.
Over a 20-month period, we asked people aged over 55 and newly referred to the Health Precinct to fill out a questionnaire at their first appointment, and again four months later. We were interested in measuring changes to their physical activity levels, health status, confidence and social connectivity.
We also asked family members to fill out a questionnaire on changes to their own health as we thought they may worry less about their loved ones and increase their own activity levels.
We calculated potential savings to the NHS by collecting information on how the individuals’ number of GP visits changed after taking part in the Health Precinct. We also estimated the impact on local government by looking at patterns of leisure centre attendance, and explored how likely people were to take out annual memberships after they had finished a 16-week programme.
A monetary value was then assigned to all of these factors to estimate the overall social value generated by older people doing regular exercise at the leisure centre. This figure was compared with the annual running costs of the scheme.
Our findings suggest that the value generated by the Health Precinct outweighs the cost of running it, leading to a significant positive social return on investment.
Investing in health
In the current climate of squeezed health and social care budgets, it is more important than ever to identify services that offer good value for money and benefit multiple people and organisations.
The model of social prescribing and managing health and social care services in the community is increasingly popular. One of the more established examples is the pioneering Bromley by Bow Centre in London, which celebrated its 35th year in 2019.
Investing in community assets that encourage older people to get active physically and socially is key not just to improving their wellbeing but also to generating future savings for society by lowering demand for health and social care services.
Carys Jones receives funding from the Welsh Government through Health and Care Research Wales.
Five years on from the Charlie Hebdo attack, 'Je suis Charlie' rings hollow
Author: Jonathan Ervine, Senior Lecturer in French and Francophone Studies, Bangor University
After the terror attack on the Paris office of satirical magazine Charlie Hebdo on January 7 2015 left 12 people dead, many declared “Je suis Charlie” (“I am Charlie”) in solidarity. But behind the understandable emotion that accompanied such declarations lay a more complicated reality. Many reactions to the attack were more conservative than first appeared, and not in keeping with the values of the publication. Five years on, “Je suis Charlie” has quite a hollow ring to it.
Before 2015, about 40,000 people read Charlie Hebdo each week. Given that many hundreds of thousands declared “je suis Charlie”, most were clearly not regular readers. “Je suis Charlie” primarily appears to have been a statement of sympathy rather than an endorsement of the brand of humour of this subversive publication. The phrase also symbolised a desire to defend freedom of expression, although not necessarily an agreement with the ways in which Charlie Hebdo has expressed itself.
Charlie Hebdo has traditionally taken pride in describing itself as a “journal irresponsable” (irresponsible newspaper). It has been happy to describe its humour as “bête et méchant” (stupid and nasty). This sometimes dark and provocative humour has attracted criticism over the years, not least from politicians. Yet many authority figures that Charlie Hebdo had ruthlessly mocked were present in the demonstrations that took place in January 2015.
And as was observed at the time, the presence of certain world leaders also pointed to a degree of hypocrisy. Where many sought to defend freedom of speech, there were several leaders who had restricted freedom of expression in their own countries. The international non-profit organisation Reporters Without Borders was particularly critical of figures such as Egyptian foreign minister Sameh Shoukry, Russian foreign minister Sergei Lavrov and Turkish Prime Minister Ahmet Davutoglu.
Commemorating while forgetting
Charlie Hebdo has generally been keen to laugh about anything and everything, and in whatever way it pleases. But despite the focus on freedom of expression in the aftermath of the 2015 attack, there were noticeable inconsistencies. In the immediate aftermath, several topical comedy programmes on French television were not broadcast as writers and presenters struggled to find a way to engage with such horrific events in a humorous manner.
A rare exception was the Canal Plus show Les Guignols, whose brand of humour was sometimes similar to Charlie Hebdo. The daily programme, which featured latex puppets of many well known figures, included several sketches about the attack, broadcast only hours after it had taken place. These included jokes about increased levels of terror threats. It also involved a latex puppet of the Prophet Muhammad distancing himself from the attackers. The show, which was dedicated to the magazine, concluded with a sketch in which several of the Charlie Hebdo cartoonists who had been killed were allowed into heaven despite having frequently mocked religion.
Yet many sections of the French media, and French society in general, were reluctant to embrace such dark humour. At an event in September 2017, comedian Jérémy Ferrari told of how several television stations cancelled planned interviews with him about his new show in early 2015. Stations may have been making time to discuss freedom of expression, but he said they seemed reluctant to discuss the way his stand-up show mocked war and terrorism.
People who sought to play down or joke about the Charlie Hebdo attack in France in early 2015 also risked being charged with the offence of “l'apologie du terrorisme” (excusing terrorism). In a school north of Paris, a pupil was reportedly disciplined for laughing at a joke about the name of a gunman who killed several people in the days after the Charlie Hebdo attack, and was made to repeatedly write the phrase “one does not laugh about serious things”.
A challenge for comedians
Several years on, as I explore in my recent book on the topic, French comedians seem torn between insisting on the importance of being able to joke about whatever topics they wish and worrying about the consequences of doing so.
In 2015, the comedian Sophia Aram started performing a show in which she defended Charlie Hebdo and its values. She insists on the importance of continuing to freely mock religion and extremism.
Mustapha El Atrassi – a comedian who shares Aram’s French-Moroccan roots and was also brought up in a Muslim family – also insists on the need to keep embracing jokes that deal with taboos. But he argues too that not all comedians are equally free to joke about terrorism. He said that a French comedian called “Maxence” – a stereotypically white, European, middle class name – is likely to get a much more positive response to dark humour about terrorism than someone from his background.
Focusing on the depictions of the Prophet Muhammad that made Charlie Hebdo a target for fundamentalists, meanwhile, the comedian Stéphane Guillon said in 2016: “If you can die due to a drawing, you can die due to a sketch.” He again evoked his fear of the potentially dangerous consequences of mocking the Prophet Muhammad on stage in 2018. At an event to commemorate the third anniversary of the Charlie Hebdo attack, the normally acerbic comic stated: “I don’t want to miss out on seeing my children grow up due to a joke about Muhammad.”
Five years on, France has not continued to embrace values associated with Charlie Hebdo. Shortly after the attack, the magazine’s number of subscribers rose to 260,000 and six months on it was selling 120,000 copies each week via newsagents. But by 2018, it had only 35,000 subscribers and sold a further 35,000 copies per week to non-subscribers. After a further decline in sales, it marked the fourth anniversary of the attacks in 2019 with an editorial that asked its readers: “Are you still there?”
One thing that is certainly not still there is Canal Plus’s Les Guignols, the satirical show featuring latex puppets. Its four main writers were fired in summer 2015 and the show moved to a less prominent slot. In 2018, the iconic show was finally cancelled by Canal.
Ultimately, France seems much less keen to embrace biting satirical humour than it initially appeared back in 2015.
Jonathan Ervine does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Boris Johnson is planning radical changes to the UK constitution – here are the ones you need to know about
Author: Stephen Clear, Lecturer in Constitutional and Administrative Law, and Public Procurement, Bangor University
With a very large majority in parliament, Boris Johnson is planning radical changes to the UK constitution. His party claims that far-reaching reforms are needed because of a “destabilising and potentially extremely damaging rift between politicians and the people” under the last parliament. The issue at the centre of this “damaging rift”, however, is whether the proposals for constitutional change are a democratic necessity or a cynical attempt by the Conservative government to bolster its power.
These are the most important changes the Conservative government is proposing.
The future of the union
The most seismic constitutional challenge for the prime minister is the future of the union of nations that make up the UK. He claims he wants to strengthen the union, but Brexit raises questions about Northern Ireland and the Scottish nationalism movement has been energised by the 2019 election results.
The Conservatives have been clear about their opposition to holding a second independence referendum in Scotland. And legally speaking, it is for Westminster to make decisions – not Holyrood. However, politically speaking, it’s difficult to envisage the UK government being able to arbitrarily force a country to stay in the UK against its will.
Cutting 50 MPs from parliament
The Conservative government has detailed plans for changing the way the UK elects its members of parliament, starting with redrawing constituency boundaries to reduce the number of MPs from 650 to 600. The changes were first proposed in 2016 by the independent Boundary Commission.
But it has been noted that moving boundaries could have a greater negative impact on Labour and the Scottish National Party than the Conservatives – which perhaps tells us why the plan features so highly on Johnson’s agenda. It could also mean that smaller regions, such as Wales, lose disproportionately more MPs than other parts of the UK.
Holding elections when he wants
The Conservative manifesto pledged to repeal the Fixed-term Parliaments Act 2011, which set the interval between general elections at five years. But there is also concern that repealing the act hands the prime minister discretion to decide when to call an election – and, in the most extreme interpretation, could mean that this government’s term lasts a decade. The question therefore becomes: what constitutional safeguards would be put in place to replace this law and counterbalance the arbitrary power of government? We don’t currently have an answer to that.
‘Rebalancing’ human rights
Conservative debates have centred on the belief that the UK needs to revisit the balance between individuals’ rights – such as freedom of expression – and the wider public interest. That doesn’t mean the Conservatives want to curtail all rights to free speech, but that they want greater powers to manage cases in which people use a free speech argument to justify hate speech. The basis of their argument seems to be that if human rights are universal to all, then we may now have gone too far – as they also apply to “bad people”.
However, such arguments are flawed. Human rights legislation already recognises that rights are not absolute, and can be proportionally limited as necessary in a democratic society. Instead these proposals seem to be more about giving the government increased arbitrary power to deport individuals they deem to be a risk, such as terrorist suspects, rather than having to fight protracted human rights litigation in court.
What’s more, the Conservatives’ actual commitment to retaining the right to free speech can be seen in their proposals to repeal section 40 of the Crime and Courts Act. This is the law that was introduced following the Leveson Inquiry and phone-hacking scandals, which forced publishers not signed up to an approved regulator to pay all legal costs linked to libel claims, even if the claims were ultimately thrown out. The concern is that if publishers are carrying these financial risks, it restricts the freedom of the press and legitimate investigative journalism.
The UK is also about to see its justice system “updated” – including judicial review, the process through which people can challenge decisions made by public bodies. This process was famously used in two high-profile Brexit cases in which the Supreme Court ruled against the government.
Some therefore question whether the prime minister’s displeasure with these rulings is the real motivation for “updating” the justice system. The Conservative manifesto says the idea is to ensure the process is not being abused “to conduct politics by another means”.
There is a legitimate case for “updating” justice – not least because there has been a rise in the number of people wanting to challenge the state without being able to pay for legal advice. But judicial review has already been subject to significant reforms in recent years. The concern is that this may be an attempt by the Conservatives to muddy the waters by reformulating the rules following the tumultuous time the government has had in the courts.
Parliament or government?
A constitution, democracy and rights commission is to be set up within a year, which appears to be aimed at reviewing the UK constitution under the guise of addressing trust in politics.
It’s likely that the commission will focus on the relationship between parliament and government. It will, in particular, review the mechanisms available to parliament to hold the government to account and look at what the government can and can’t do without parliamentary approval. These powers currently include decisions to deploy the armed forces, make or unmake international treaties, and to grant honours.
The courts can review the limits of these “prerogative powers”, and can prevent the government from trying to create new ones. This was a key part of the Brexit case taken to the Supreme Court by campaigner Gina Miller when she argued that the government could not trigger Article 50 to begin the Brexit process in 2016 without getting parliament’s approval.
The Conservatives have also hinted at wanting to reform the House of Lords, though it’s not clear how at this stage. It is likely that the new government will want to explicitly reaffirm the supremacy of the Commons over the Lords in a new act of parliament, and possibly even revisit the Lords’ “powers of delay” – something Theresa May threatened during her prime ministership when the Lords refused to pass her Brexit legislation straight away.
Stephen Clear does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Can African smallholders farm themselves out of poverty?
Authors: David Harris, Honorary Lecturer, Bangor University; Jordan Chamberlin, Spatial Economist, International Maize and Wheat Improvement Center (CIMMYT); Kai Mausch, Agricultural Economist, World Agroforestry Centre (ICRAF)
A great deal of research on agriculture in Africa is organised around the premise that intensification can take smallholder farmers out of poverty. The emphasis in programming often focuses on technologies that increase farm productivity and management practices that go along with them.
Yet the returns of such technologies are not often evaluated within a whole-farm context. And – critically – the returns for smallholders with very little available land have not received sufficient attention.
To support smallholders in their efforts to escape poverty by adopting modern crop varieties, inputs and management practices, it’s necessary to know if their current resources – particularly their farms – are large enough to generate the requisite value.
Two questions can frame this. How big do farms need to be to enable farmers to escape poverty by farming alone? And what alternative avenues can lead them to sustainable development?
These issues were explored in a paper in which we examined how much rural households can benefit from agricultural intensification. In particular, together with colleagues, we looked at the size of smallholder farms, their potential profitability and alternative strategies for support. In sub-Saharan Africa smallholder farms are, on average, smaller than two hectares.
It’s difficult to be precise about the potential profitability of farms in developing countries. But it’s likely that the upper limit for most farms optimistically lies between $1,000 and $2,000 per hectare per year. In fact the actual values currently achieved by farmers in sub-Saharan Africa are much less.
The large profitability gap between current and potential performance per hectare of smallholder farms could, in theory, be narrowed if farmers adopted improved agricultural methods. These could include better crop varieties and animal breeds; more, as well as more efficient, use of fertilisers; and better protection from losses due to pests and diseases.
But are smallholder farms big enough so that closing the profitability gap will make much difference to their poverty status?
Our research suggests that they are not. Even if they were able to achieve high levels of profitability, the actual value that could be generated on a small farm translated into only a small gain in income per capita. From this we conclude that many, if not most, smallholder farmers in sub-Saharan Africa are unlikely to farm themselves out of poverty – defined as living on less than $1.90 per person per day. This would be the case even if they were to make substantial improvements in the productivity and profitability of their farms.
That’s not to say that smallholder farmers shouldn’t be supported. The issue, rather, is what kind of support best suits their circumstances.
Productivity and profitability
In theory, it should be quite simple to calculate how big farms need to be to enable farmers to escape poverty by farming alone.
To begin with, it’s necessary to know how productive and profitable per unit area a farm can be. Productivity and profitability – the value of outputs minus the value of inputs – are functions of farmers’ skills and investment capacities.
They are also dependent on geographical contexts. This includes soils, rainfall and temperature, which determine the potential for crop and livestock productivity. Other factors that play a part include remoteness, which affects farm-gate prices of inputs and outputs, and how many people a farm needs to support.
The figure below summarises the relation between farm size, profitability and income of rural households. We used a net income of $1.90 per person per day (the blue curve) as our working definition of poverty. A more ambitious target of $4 per person per day (the orange curve) represents a modest measure of prosperity beyond the poverty line.
So, how do these values compare with the situation in sub-Saharan Africa?
It has been estimated that about 80% of farms across nine sub-Saharan countries are smaller than two hectares. These farms would need to generate at least $1,250 per hectare per year just to reach the poverty line. Farms at the lower end of the size range cannot escape poverty even if they could generate $3,000 per hectare per year.
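The arithmetic behind these thresholds can be sketched as a simple calculation. The function name and the household size of four used below are illustrative assumptions, not figures from the study:

```python
def required_income_per_ha(farm_size_ha, household_size, daily_income_per_person):
    """Farm profitability (USD per hectare per year) needed for a household
    to reach a given daily income per person from farming alone."""
    annual_income_needed = daily_income_per_person * 365 * household_size
    return annual_income_needed / farm_size_ha

# Illustrative example: a two-hectare farm supporting four people at the
# $1.90-a-day poverty line must clear about $1,387 per hectare per year.
print(round(required_income_per_ha(2.0, 4, 1.90)))  # 1387
```

Halving the farm size doubles the required profitability per hectare, which is why the smallest farms cannot cross the line even at profitability levels far above those actually observed.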
Unfortunately, there is limited information about whole-farm net profitability in developing countries. But in Mozambique, Zimbabwe and Malawi, for example, the mean values of only $78, $83 and $424 per hectare per year, respectively, imply that even $1,250 appears to be far out of reach for most small farms.
It’s difficult to interpret information from developed countries in developing country contexts. Nevertheless, gross margin values for even the most efficient mixed farms seldom exceed around $1,400 per hectare per year.
These values are similar to gross margins using best practices for perennial cropping systems reported in a recent literature survey of tropical crop profitability. The study drew on data from nine household surveys in seven African countries. It found that profit from crop production alone (excluding data on livestock) ranged from only $86 per hectare per year in Burkina Faso to $1,184 in Ethiopia. The survey mean was $535 per hectare per year.
From this overview we must conclude that, even with very modest goals, most smallholder farms in sub-Saharan Africa are not “viable” when benchmarked against the poverty line. And it’s unlikely that agricultural intensification alone can take many households across the poverty line.
What is the takeaway?
We certainly do not suggest that continued public and private investments in agricultural technologies are unmerited. In fact, there is evidence that returns to agricultural research and development at national level are very high in developing countries. And there is evidence that agricultural growth is the most important impetus for broader patterns of structural transformation and economic growth in rural Africa. But realistic assessments of the scope for very small farmers to farm themselves out of poverty are necessary.
Farmers are embedded in complex economic webs and increasingly depend on more than farm production for their livelihoods. More integrated lenses for evaluating public investment in the food systems of the developing world will likely be more helpful in the short term.
Integrated investments that affect both on- and off-farm livelihood choices and outcomes will do more for welfare than a narrow focus on production technologies in smallholder-dominated systems. Production technology research for development will remain important. But reaching the smallest of Africa’s smallholders will require a focus on what’s happening off the farm.
David Harris receives funding from the CGIAR.
Jordan Chamberlin receives funding from the CGIAR, the Bill and Melinda Gates Foundation, and IFAD.
Kai Mausch received funding from multiple organisations that fund international agricultural research.
Why some scientists want to rewrite the history of how we learned to walk
Author: Vivien Shaw, Lecturer in Anatomy, Bangor University; Isabelle Catherine Winder, Lecturer in Zoology, Bangor University
It’s not often that a fossil truly rewrites human evolution, but the recent discovery of an ancient extinct ape has some scientists very excited. According to its discoverers, Danuvius guggenmosi combines some human-like features with others that look like those of living chimpanzees. They suggest that it would have had an entirely distinct way of moving that combined upright walking with swinging from branches. And they claim that this probably makes it similar to the last shared ancestor of humans and chimps.
We are not so sure. Looking at a fossilised animal’s anatomy does give us insights into the forces that would have operated on its bones and so how it commonly moved. But it’s a big leap to then make conclusions about its behaviour, or to go from the bones of an individual to the movement of a whole species. The Danuvius fossils are unusually complete, which does provide some vital new evidence. But how much does it really tell us about how our ancestors moved around?
Danuvius has long and mobile arms, habitually extended (stretched out) legs, feet which could sit flat on the floor, and big toes with a strong gripping action. This is a unique configuration. Showing that a specimen is unique is a prerequisite for classifying it as belonging to a separate, new species that deserves its own name.
But what matters in understanding the specimen is how we interpret its uniqueness. Danuvius’s discoverers go from describing its unique anatomy to proposing a unique pattern of movement. When we look at living apes, the relationship between anatomy and movement is not so simple.
The Danuvius find actually includes fossils from four individuals, one of which is nearly complete. But even a group of specimens may not be typical of a species more generally. For instance, humans are known for walking upright not climbing trees, but the Twa hunter-gatherers are regular tree climbers. These people, whose bones look just like ours, have distinctive muscles and ranges of movement well beyond the human norm. But you could not predict their behaviour from their bones.
Every living ape uses a repertoire of movements, not just one. For example, orang-utans use clambering, upright or horizontal climbing, suspensory swinging and assisted bipedalism (walking upright using hands for support). Their movement patterns can vary in complex ways because of individual preference, body mass, age, sex or activity.
Gorillas, meanwhile, are “knuckle-walkers” and we used to think they were unable to stand fully upright. But the “walking gorilla” Ambam is famous for his “humanlike” stride.
Ultimately, two animals with very similar anatomies can move differently, and two with different anatomies can move in the same way. This means that Danuvius may not be able to serve as a model for our ancestors’ behaviour, even if its anatomy is similar to theirs.
In fact, we believe there are other plausible interpretations of Danuvius’s bones. These alternatives give a picture of a repertoire of potential movements that may have been used in different contexts.
For example, one of Danuvius’s most striking features is the high ridge on the top of its shinbone, which the researchers say is associated with “strongly developed cruciate ligaments” that stabilise the knee joint. The researchers link these strong stabilising ligaments with evidence for an extended hip and a foot that could be placed flat on the floor to suggest that this ape habitually stood upright. Standing upright could be a precursor to bipedal walking, so the authors suggest that this means Danuvius could have been like our last shared ancestor with other apes.
However, the cruciate ligaments also work to stabilise the knee when the leg is rotating. This only happens when the knee is bent with the foot on the ground. This is why skiers who use knee rotation to turn their bodies often injure these ligaments.
We have not seen the Danuvius bones in real life. But, based on the researchers’ excellent images and descriptions, an equally plausible interpretation of the pronounced ridge on the top of the shinbone is that the animal used its knee when it was bent, with significant rotational movement.
Perhaps it hung from a branch above and used its feet to steer by gripping branches below, rather than bearing weight through the feet. This could have allowed it to capitalise on its small body weight to access fruit on fine branches. Alternatively, it could have hung from its feet, using the legs to manoeuvre and the hands to grasp.
All of these movements fit equally well with Danuvius’s bones, and could be part of its movement repertoire. So there is no way to say which movement is dominant or typical. As such, any links to our own bipedalism look much less clear-cut.
Danuvius is undoubtedly a very important fossil, with lots to teach us about how varied ape locomotion can be. But we would argue that it is not necessarily particularly like us. Instead, just like living apes, Danuvius would probably have displayed a repertoire of different movements. And we can’t say which would have been typical, because anatomy is not enough to reconstruct behaviour in full.
The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointments.
Accessing healthcare is challenging for Deaf people – but the best solution isn't 'one-size-fits-all'
Author: Anouschka Foltz, Assistant Professor in English Linguistics, University of Graz; Christopher Shank, Lecturer in Linguistics, Bangor University
For many of us, a visit to the doctor’s office can be fraught with anxiety. A persistent cough that won’t go away or an ailment we hope is nothing serious can make GP visits emotionally difficult. Now imagine that you can’t phone the doctor to make an appointment, you don’t understand what your doctor just said, or you don’t know what the medication you’ve been prescribed is for. These are all situations that many Deaf people face when accessing healthcare services.
We use Deaf (with a capital “D”) here to talk about culturally Deaf people, who were typically born deaf, and use a signed language, such as British Sign Language (BSL), as their first or preferred language. In contrast, deaf (lowercase “d”) refers to the audiological condition of deafness.
For our study, we talked to Deaf patients in Wales who communicate using BSL to learn about their experiences with healthcare services. Their experiences illustrated the challenges they face, and showed us that patients have unique needs. For example, a patient born profoundly deaf would have different needs from a person who became deaf later in life.
Many Deaf communities around the world face inequalities when it comes to accessing health information and healthcare services, as health information and services are often not available in an accessible format. As a result, Deaf individuals often have low health literacy and are at greater risk of being misdiagnosed or not diagnosed at all.
Problems with healthcare access often begin when making an appointment. Because many GPs only allow appointments to be made over the phone, many of those we interviewed had to physically go to health centres to ask for an appointment. Not only is this inconvenient, but booking without an interpreter can also be difficult and confusing.
Interpreters are essential for patients to receive the best care. However, we heard recurring stories of interpreters not being booked for appointments, arriving late, and – in some cases – not coming at all. Before interpreters were available, one woman described going to the doctor’s office as intimidating “because the communication wasn’t there”. One participant said they always make sure an interpreter has been booked, saying: “Don’t let me down… I don’t want to be going through this again.”
These issues are worsened in emergency situations. One woman recalled an incident where despite texting 999, she didn’t get help until her daughter phoned 999 for her, acting as her interpreter throughout her entire interaction with emergency services.
Another person who texted 999 said:
There are all these questions that they are asking you. And all that we want is to be able to say, ‘We need an ambulance’ … Because what’s happening is we’re panicking, we don’t understand the English, there are all these questions being texted to us, it’s hard enough for us to understand it anyways without panicking at the same time.
Interviewees also recalled emergency situations where interpreters weren’t available at short notice. One Deaf woman recalled when her husband – who is also Deaf – was rushed to hospital. They received no support from staff, and no interpreter was provided to help them.
Deaf awareness and language
Many problems that our interviewees faced related to language, and a lack of Deaf awareness. Many healthcare providers didn’t seem to know that BSL is a language unrelated to English – meaning many BSL users who were born Deaf or lost hearing early in life have limited proficiency in English. One interviewee explained that many healthcare providers think all Deaf people can read, without realising that many BSL users don’t understand English – with many being given health information written in English that they couldn’t comprehend.
Interviewees wished healthcare staff were more Deaf aware, as many healthcare providers lacked understanding about Deafness. This affected the doctor-patient relationship, with many interviewees agreeing that doctors “can be a bit patronising at times” and that this patronising attitude made interactions difficult. A lack of Deaf awareness can also lead to Deaf patients feeling forgotten. Many interviewees felt that Deaf people are easily ignored, with one interviewee saying: “I always feel like Deaf people are put last.”
No ‘one-size-fits-all’ solution
New technologies and services are being offered to help Deaf patients make appointments – such as having an interpreter call the doctor’s office during a video call with the patient.
Additionally, some health information is now available online in BSL. Interpreters can also be made available more easily at short notice, for example in emergency situations, through video chat. Remote services show particular promise for mental health treatments, by providing remote mental health counselling in BSL and other types of confidential services.
Because Deaf communities are small and tight-knit, patients may be wary of interacting with local Deaf counsellors or interpreters, worried about potential gossip. Several interviewees even said that they would not want a Deaf counsellor even if offered, for fear that the counsellor might gossip about them with others in the community. One interviewee suggested a mental health service with a remote online interpreter as the best solution.
The problems and potential solutions that emerged from our research are similar in other Deaf communities around the world. Though technology might offer some promising solutions, it’s important to realise that these might not work for everyone.
Patients have individual differences, needs, preferences, and cultural differences. Some patients may prefer a remote interpreter, others face-to-face interpreting – and these preferences may also depend on the type of appointment. What’s important is that Deaf patients have choice, and that new services, such as through the use of new technologies, are offered in addition to, not instead of, established health services.
Anouschka Foltz receives funding from Public Health Wales. The views in this article should, however, not be assumed to be the same as Public Health Wales.
Christopher Shank receives funding from Public Health Wales. The views in this article should, however, not be assumed to be the same as Public Health Wales.
Botswana is humanity's ancestral home, claims major study – well, actually …
Author: Isabelle Catherine Winder, Lecturer in Zoology, Bangor University
A recent paper in the prestigious journal Nature claims to show that modern humans originated about 200,000 years ago in the region around northern Botswana. For a scientist like myself who studies human origins, this is exciting news. If correct, this paper would suggest that we finally know where our species comes from.
But there are actually several reasons why I and some of my colleagues are not entirely convinced. In fact, there’s good reason to believe that our species doesn’t even have a single origin.
The scientists behind the new research studied genetic data from many individuals from the KhoeSan peoples of southern Africa, who are thought to live where their ancestors have lived for hundreds of thousands of years. The researchers used their new data together with existing information about people all around the world (including other areas traditionally associated with the origins of humankind) to reconstruct in detail the branching of the human family tree.
We can think of the earliest group of humans as the base of the tree with a specific set of genetic data – a gene pool. Each different sub-group that branched off and migrated away from humanity’s original “homeland” took a subset of the genes in that gene pool with them. But most people, and so the vast majority of those genes, remained behind. This means people alive today with different subsets of our species’ genes can be grouped on different branches of the human family tree.
Groups of people with the most diverse genomes are likely to be the ones that descended directly from the original group at the base of the tree, rather than one of the small sub-groups that split from it. In this case, the researchers identified one of the groups of KhoeSan people from around northern Botswana as the very bottom of the trunk, using geographical and archaeological data to back up their conclusion.
If you compare this process to creating your own family tree, it makes sense to think you can use information about who lives where today and how everyone relates to each other to reconstruct where the family came from. For example, many of my relatives live on the lovely Channel Island of Alderney, and one branch of my family have indeed been islanders for many generations.
Of course, there’s always some uncertainty created by variations in the data. (I now live in Wales and have cousins in England.) But as long as you look for broad patterns rather than focusing on specific details, you will still get a reasonable impression. There are even some statistical techniques you can use to assess the strength of your interpretation.
But there are several problems with taking the process of building a human family tree to such a detailed conclusion, as this new research does. First, it’s important to note that the study didn’t look at the whole genome. It focused just on mitochondrial DNA, a small part of our genetic material that (unlike the rest) is almost only ever passed from mothers to children. This means it isn’t mixed up with DNA from fathers and so is easier to track across the generations.
As a result, mitochondrial DNA is commonly used to reconstruct evolutionary histories. But it only tells us part of the story. The new study doesn’t tell us the origin of the human genome but the place and time where our mitochondrial DNA appeared. As a string of just 16,569 genetic letters out of over 3.3 billion in each of our cells, mitochondrial DNA is a very tiny part of us.
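To put that proportion in numbers, here is a one-line check (the 3.3 billion figure is the article’s approximate genome size):

```python
# How tiny a share of our genetic material is mitochondrial DNA?
MT_DNA_LETTERS = 16_569      # length of the human mitochondrial genome
GENOME_LETTERS = 3.3e9       # approximate total letters, per the article

fraction = MT_DNA_LETTERS / GENOME_LETTERS
print(f"{fraction:.2e}")     # → 5.02e-06, i.e. roughly 0.0005% of the genome
```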
The fact that mitochondrial DNA comes almost only ever from mothers also means the story of its inheritance is much simpler than the histories of other genes. This implies that every bit of our genetic material may have a different origin, and have followed a different path to get to us. If we did the same reconstruction using Y chromosomes (passed only from father to son) or whole genomes, we’d get a different answer to our question about where and when humans originated.
There is actually a debate over whether the woman from whom all our mitochondrial DNA today descends (“mitochondrial Eve”) could ever have even met the man from whom all living men’s Y-chromosomes descend (“Y-chromosome Adam”). By some estimates, they may have lived as much as 100,000 years apart.
And all of this ignores the possibility that other species or populations may also have contributed DNA to modern humans. After this mitochondrial “origin”, our species interbred with Neanderthals and a group called the Denisovans. There’s even evidence that these two interbred with one another, at about the same time as they were hybridising with us. Earlier modern humans probably also interbred with other human species living alongside them in other time periods.
All of this, of course, suggests that modern human history – like the history of modern primates – was much more than a simple tree with straight lines of inheritance. It’s much more likely that our distant ancestors interbred with other species and populations to form a braided stream of gene pools than that we form a nice neat tree that can be reconstructed genetically. And if that’s true, we may not even have a single origin we can hope to reconstruct.
Isabelle Catherine Winder received funding from the European Research Council (ERC) as part of the DISPERSE project (2011-2016). It was as part of her work as a post-doc on this project that she wrote the paper about reticulation and the human past cited in this article.
Lab-grown mini brains: we can't dismiss the possibility that they could one day outsmart us
Author: Guillaume Thierry, Professor of Cognitive Neuroscience, Bangor University
The cutting-edge method of growing clusters of cells that organise themselves into mini versions of human brains in the lab is gathering more and more attention. These “brain organoids”, made from stem cells, offer unparalleled insights into the human brain, which is notoriously difficult to study.
But some researchers are worried that a form of consciousness might arise in such mini-brains, which are sometimes transplanted into animals. They could at least be sentient to the extent of experiencing pain and suffering from being trapped. If this is true – and before we consider how likely it is – it is absolutely clear in my mind that we must exert a supreme level of caution when considering this issue.
Brain organoids are currently very simple compared to human brains and can’t be conscious in the same way. Due to a lack of blood supply, they do not reach sizes larger than around five or six millimetres. That said, they have been found to produce brain waves that are similar to those in premature babies, and one study showed they can also grow neural networks that respond to light.
There are also signs that such organoids can link up with other organs and receptors in animals. That means that they not only have a prospect of becoming sentient, they also have the potential to communicate with the external world, by collecting sensory information. Perhaps they can one day actually respond through sound devices or digital output.
As a cognitive neuroscientist, I am happy to conceive that an organoid kept alive for a long time, with a constant supply of life-essential nutrients, could eventually become sentient and maybe even fully conscious.
Time to panic?
This isn’t the first time biological science has thrown up ethical questions. Gender reassignment shocked many in the past, but, whatever your beliefs and moral convictions, sex change narrowly concerns the individual undergoing the procedure, with limited or no biological impact on their entourage and descendants.
Genetic manipulation of embryos, in contrast, raised alert levels to hot red, given the very high likelihood of genetic modifications being heritable and potentially changing the genetic make-up of the population down the line. This is why successful operations of this kind conducted by Chinese scientist He Jiankui raised very strong objections worldwide.
But creating mini brains inside animals, or even worse, within an artificial biological environment, should send us all frantically panicking. In my opinion, the ethical implications go well beyond determining whether we may be creating a suffering individual. If we are creating a brain – however small – we are creating a system with a capacity to process information and, down the line, given enough time and input, potentially the ability to think.
Some form of consciousness is ubiquitous in the animal world, and we, as humans, are obviously at the top of the scale of complexity. While we don’t know exactly what consciousness is, we still worry that human-designed AI may develop some form of it. But thought and emotions are likely to be emergent properties of our neurons organised into networks through development, and it is much more likely that consciousness could arise in an organoid than in a robot. This may be a primitive form of consciousness or even a full-blown version of it, provided it receives input from the external world and finds ways to interact with it.
In theory, mini-brains could be grown forever in a laboratory – whether it is legal or not – increasing in complexity and power for as long as their life-support system can provide them with oxygen and vital nutrients. This is the case for the cancer cells of a woman called Henrietta Lacks, which are alive more than 60 years after her death and multiplying today in hundreds of thousands of labs throughout the world.
Disembodied super intelligence?
But if brains are cultivated in the laboratory in such conditions, without time limit, could they ever develop a form of consciousness that surpasses human capacity? As I see it, why not?
And if they did, would we be able to tell? What if such a new form of mind decided to keep us, humans, in the dark about their existence – be it only to secure enough time to take control of their life-support system and ensure that they are safe?
When I was an adolescent, I often had scary dreams of the world being taken over by a giant computer network. I still have that worry today, and it has partly become true. But the scare of a biological super-brain taking over is now much greater in my mind. Keep in mind that such a new organism would not have to worry about its body becoming old and dying, because it would not have a body.
This may sound like the first lines of a bad science fiction plot, but I don’t see reasons to dismiss these ideas as forever unrealistic.
The point is that we have to remain vigilant, especially given that this could all happen without us noticing. You just have to consider how difficult it is to assess whether someone is lying when testifying in court to realise that we will not have an easy task trying to work out the hidden thoughts of a lab-grown mini-brain.
Slowing the research down by controlling organoid size and lifespan, or widely agreeing on a moratorium before we reach a point of no return, would make good sense. But unfortunately, the growing ubiquity of biological labs and equipment will make enforcement incredibly difficult – as we’ve seen with genetic embryo editing.
It would be an understatement to say that I share the worries of some of my colleagues working in the field of cellular medicine. The toughest question that we can ask regarding these mesmerising possibilities, and which also applies to genetic manipulations of embryos, is: can we even stop this?
Guillaume Thierry does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Researchers invent device that generates light from the cold night sky – here's what it means for millions living off grid
Author: Jeff Kettle, Lecturer in Electronic Engineering, Bangor University
More than 1.7 billion people worldwide still don’t have a reliable electricity connection. For many of them, solar power is their potential energy saviour – at least when the sun is shining.
Technology to store excess solar power during the dark hours is improving. But what if we could generate electricity from the cold night sky? Researchers at Stanford and UCLA have just done exactly that. Don’t expect it to become solar’s dark twin just yet, but it could play an important role in the energy demands of the future.
The technology itself is nothing new – in fact, the principles behind it were discovered almost 200 years ago. The device, called a thermoelectric generator, uses temperature differences between two metal plates to generate electricity through something called the Seebeck effect. The greater the temperature difference, the greater the power generated.
We already use this technology to convert waste heat from sources such as industrial machinery and car engines. The new research applies the same technique to harness the temperature difference between the outside air and a surface which faces the sky.
The device’s two plates sit on top of one another. The top plate faces the cold air of the open night sky, while the bottom plate is kept enclosed in warmer air, facing the ground. Heat always radiates to cooler environments, and the cooler the environment, the faster heat is radiated. Because the open night sky is cooler than the enclosed air surrounding the bottom plate, the top plate loses heat faster than the bottom plate. This generates a temperature difference between the two plates – in this study, between four and five degrees Celsius.
Now at different temperatures, heat also starts to travel from the hotter bottom plate to the cooler top plate. The device harnesses this flow of heat to generate electricity.
At this small temperature difference, power is limited. The researchers’ device produced just 25 milliwatts per meter squared (mW/m²) – enough to power a small LED reading light. By contrast, a solar panel of the same size would be enough to sustain three 32" LED TVs – that’s 4,000 times more power.
In drier climates, the device could perform better. This is because in wetter climates, any moisture in the air condenses on the downward-facing bottom plate, cooling it and reducing the temperature difference between the plates. In the dry Mediterranean, for example, the device could produce 20 times more power than it did in the US.
The device itself could also be refined. For example, manufacturers could apply a coating that allows the device’s surface to reach a temperature lower than the surrounding environment during the day, so that it is even cooler at night. They could also use corrugated instead of flat plates, which are more efficient at capturing and emitting radiation. These and other feasible tech upgrades could raise the power output by as much as ten times.
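For a sense of scale, the figures quoted above can be combined in a rough sketch. Note that multiplying the dry-climate and tech-upgrade factors together is an assumption of this sketch, not a claim made by the researchers, and the solar figure is back-derived from the “4,000 times” comparison.

```python
# Rough scale comparison built from the figures quoted in the article.
BASE_OUTPUT_MW_M2 = 25      # device output in the study (mW per square metre)
SOLAR_RATIO = 4000          # a same-size solar panel produces ~4,000x more
DRY_CLIMATE_FACTOR = 20     # e.g. the dry Mediterranean vs the US test site
TECH_UPGRADE_FACTOR = 10    # coatings, corrugated plates and similar upgrades

# Implied solar output: 100,000 mW/m^2, i.e. about 100 W per square metre.
solar_mw_m2 = BASE_OUTPUT_MW_M2 * SOLAR_RATIO

# Best case if both improvements multiply: 5,000 mW/m^2, i.e. 5 W per square metre.
best_case_mw_m2 = BASE_OUTPUT_MW_M2 * DRY_CLIMATE_FACTOR * TECH_UPGRADE_FACTOR

print(solar_mw_m2, best_case_mw_m2)   # → 100000 5000
```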
With the efficiency of everyday technologies continually improving, thermoelectric devices could play an important role in powering society before long. Colleagues of mine are developing technology that connects household devices to the internet and each other – the so-called Internet of Things – at power levels of just 1.5 watts per meter squared (W/m²), a level of energy firmly within the reach of an enhanced device in dry climates.
By connecting a series of thermoelectric generators mounted on the walls of homes, the technology could noticeably lighten the energy load of houses. It’s feasible, too – the technology could easily be mass produced, and sold cheaply enough to provide a viable energy source in locations where it is too expensive or impractical to connect with mains electricity.
Of course, it’s unlikely that thermoelectric devices will ever replace battery storage as the nighttime saviour of solar energy. Batteries now cost a quarter of what they did a decade ago, and solar systems with battery storage are already becoming affordable ways to meet small-scale domestic and industrial energy needs.
But the technology could be a useful complement to solar power and battery storage – and a vital back-up energy source for those living off-grid when batteries fail or panels break. When everything goes wrong on the chilliest of nights, those with thermoelectric devices to power a heater would at least have one thing to thank the freezing night air for.
Jeff Kettle does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Being left-handed doesn't mean you are right-brained – so what does it mean?
Author: Emma Karlsson, Postdoctoral researcher in Cognitive Neuroscience, Bangor University
There are many claims about what being left-handed means, and whether it affects the type of person you are – but the truth is that it remains something of a puzzle. Myths about left-handedness crop up every year, yet researchers have still not fully unravelled what it means to be left-handed – that is, to favour the left hand over the right for most activities.
So why are people left-handed? Honestly, we don’t fully understand that either. What we do know is that left-handers make up only about 10% of the world’s population – and that the split is not even across the sexes.
Around 12% of men are left-handed, but only around 8% of women. Some people are surprised by this 90:10 ratio and wonder why they turned out left-handed.
But the more interesting question is why handedness isn’t random. Why isn’t the split 50:50? It’s not down to the direction we write in: if it were, left-handedness would dominate in countries whose languages are written from right to left, and it doesn’t. Genetically it’s also odd – only around 25% of the children of two left-handed parents are themselves left-handed.
Left-handedness has been linked with all sorts of bad outcomes, such as poor health and early death – but none of these links are real. The early-death claim is largely explained by older generations having been forced to switch to their right hands, which made it look as though there were fewer left-handers in the past. The health claim, however eye-catching the headlines, is simply wrong.
Positive myths about left-handedness abound too. Left-handers are said to be more creative, because most of them use their “right brain”. This is perhaps one of the most persistent myths about handedness and the brain. But however appealing (and perhaps disappointing for left-handers still waiting to one day reveal talents to rival Leonardo da Vinci’s), the idea that each of us has a “dominant brain side” that defines our personality and decision-making is also wrong.
Brain lateralisation and handedness
It is true, however, that the right side of the brain controls the left side of the body, and the left side of the brain controls the right – and that the hemispheres do have their own specialisations.
For example, language is usually processed a little more in the left hemisphere, and face recognition a little more in the right. The idea that each hemisphere is specialised for certain skills is known as brain lateralisation. The two halves do not work in isolation, though: a thick band of nerve fibres – called the corpus callosum – connects the two sides of the brain.
Interestingly, there are some known differences between right- and left-handers in these specialisations. For example, it is often cited that around 95% of right-handers are “left-brain dominant”. This is not the same as the “left-brained” claim above; it actually refers to the early finding that most right-handers depend more on the left hemisphere for speech and language. It was assumed that the reverse would hold for left-handers. But that is not the case. In fact, 70% of left-handers also process language more in the left hemisphere. Why this number is lower, rather than reversed, is not yet known.
Researchers have found many other brain specialisations, or “asymmetries”, beyond language. Many of them sit in the right hemisphere – at least in right-handers – and include things such as face processing, spatial skills and the perception of emotions. But these remain under-explored, perhaps because researchers have wrongly assumed that they all depend on whichever hemisphere is not dominant for language.
In fact, this assumption, together with the recognition that a small number of left-handers have right-hemisphere dominance for language, has led to left-handers being ignored – or worse, actively avoided – in many studies of the brain, because researchers assume that, as with language, all the other asymmetries will be reduced in them.
How some functions are lateralised (specialised) in the brain can genuinely affect how we perceive things, and we study this using simple perception tests. In recent research, for example, we presented pictures of faces – constructed so that one half of the face shows one emotion while the other half shows a different one – to a large number of left- and right-handers.
Usually, people tend to see the emotion shown on the left side of the face, which is believed to reflect specialisation in the right hemisphere. This is linked to the fact that the visual fields are processed in such a way that there is a bias towards the left side of space. This bias is taken to represent processing by the right hemisphere, while a bias towards the right side is taken to represent processing by the left hemisphere. We also presented different types of pictures and sounds, to examine several other specialisations.
Our findings suggest that some types of specialisation, including face processing, seem to follow the intriguing pattern seen for language (that is, more left-handers tend to see the emotion shown on the right side of the face). But for biases in what we attend to, we found no difference between the processing patterns of right- and left-handers. This suggests that, while handedness is linked to some brain specialisations, it is not linked to others.
Left-handers are hugely important in new experiments like these. Not only can they help us understand what makes them different, but they can also help us solve many long-standing neuropsychological mysteries about the brain.
Franklin Ronaldo translated this article from English.
Emma Karlsson does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Are the Amazon fires a crime against humanity?
Author: Tara Smith, Lecturer in Law, Bangor University
Fires in the Brazilian Amazon have jumped 84% during President Jair Bolsonaro’s first year in office and in July 2019 alone, an area of rainforest the size of Manhattan was lost every day. The Amazon fires may seem beyond human control, but they’re not beyond human culpability.
Bolsonaro ran for president promising to “integrate the Amazon into the Brazilian economy”. Once elected, he slashed the Brazilian environmental protection agency budget by 95% and relaxed safeguards for mining projects on indigenous lands. Farmers cited their support for Bolsonaro’s approach as they set fires to clear rainforest for cattle grazing.
Bolsonaro’s vandalism will be most painful for the indigenous people who call the Amazon home. But destruction of the world’s largest rainforest may accelerate climate change and so cause further suffering worldwide. For that reason, Brazil’s former environment minister, Marina Silva, called the Amazon fires a crime against humanity.
From a legal perspective, this might be a helpful way of prosecuting environmental destruction. Crimes against humanity are international crimes, like genocide and war crimes, which are considered to harm both the immediate victims and humanity as a whole. As such, all of humankind has an interest in their punishment and deterrence.
Crimes against humanity were first classified as an international crime during the Nuremberg trials that followed World War II. Two German generals, Alfred Jodl and Lothar Rendulic, were charged with war crimes for implementing scorched-earth policies in Finland and Norway. However, no one was charged with crimes against humanity for causing the unprecedented environmental damage that scarred the post-war landscapes.
Our understanding of the Earth's ecology has matured since then – but so has our capacity to pollute and destroy. It's now clear that the consequences of environmental destruction don't stop at national borders. All humanity is placed in jeopardy when burning rainforests flood the atmosphere with CO₂ and exacerbate climate change.
Holding someone like Bolsonaro to account for this by charging him with crimes against humanity would be a world first. If successful, it could set a precedent which might stimulate more aggressive legal action against environmental crimes. But do the Amazon fires fit the criteria?
Prosecuting crimes against humanity requires proof of widespread and systematic attacks against a civilian population. If a specific part of the global population is persecuted, this is an affront to the global conscience. In the same way, domestic crimes are an affront to the population of the state in which they occur.
When prosecuting prominent Nazis in Nuremberg, the US chief prosecutor, Robert Jackson, argued that crimes against humanity are committed by individuals, not abstract entities. Only by holding individuals accountable for their actions can widespread atrocities be deterred in future.
The International Criminal Court’s Chief Prosecutor, Fatou Bensouda, has promised to apply the approach first developed in Nuremberg to prosecute individuals for international crimes that result in significant environmental damage. Her recommendations don’t create new environmental crimes, such as “ecocide”, which would punish severe environmental damage as a crime in itself. They do signal, however, a growing appreciation of the role that environmental damage plays in causing harm and suffering to people.
The International Criminal Court was asked in 2014 to open an investigation into allegations of land-grabbing by the Cambodian government. In Cambodia, large corporations and investment firms were being given prime agricultural land by the government, displacing up to 770,000 Cambodians from 4m hectares of land. Prosecuting these actions as crimes against humanity would be a positive first step towards holding individuals like Bolsonaro accountable.
But given the global consequences of the Amazon fires, could environmental destruction of this nature be legally considered a crime against all humanity? Defining it as such would be unprecedented. The same charge could apply to many politicians and business people. It’s been argued that oil and gas executives who’ve funded disinformation about climate change for decades should be chief among them.
Charging individuals for environmental crimes against humanity could be an effective deterrent. But whether the law will develop in time to prosecute people like Bolsonaro is, as yet, uncertain. Until the International Criminal Court prosecutes individuals for crimes against humanity based on their environmental damage, holding individuals criminally accountable for climate change remains unlikely.
This article is part of The Covering Climate Now series
This is a concerted effort among news organisations to put the climate crisis at the forefront of our coverage. This article is published under a Creative Commons licence and can be reproduced for free – just hit the “Republish this article” button on the page to copy the full HTML code. The Conversation also runs Imagine, a newsletter in which academics explore how the world can rise to the challenge of climate change. Sign up here.
Tara Smith does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Cilia: cell's long-overlooked antenna that can drive cancer – or stop it in its tracks
Author: Angharad Mostyn Wilkie, PhD Researcher in Oncology and Cancer Biology, Bangor University
You might know that our lungs are lined with hair-like projections called motile cilia. These are tiny microtubule structures that appear on the surface of some cells or tissues. They can be found lining your nose and respiratory tract too, and along the fallopian tubes and vas deferens in the female and male reproductive tracts. They move from side to side to sweep away any micro-organisms, fluids, and dead cells in the respiratory system, and to help transport the sperm and egg in the reproductive system.
Odds are, however, that you haven’t heard about motile cilia’s arguably more important cousin, primary cilia.
Primary cilia are found on virtually all cells in the body, but for a long time they were considered a non-functional, vestigial part of the cell. To add to their mystery, they aren't present all the time. They project from the centrosome – the part of the cell that pulls it apart during division – and so only appear at certain stages of the cell cycle.
The first sign that these little structures were important came with the realisation that disruption to either their formation or function could result in genetic conditions known as ciliopathies. There are around 20 different ciliopathies, and they affect about one in every 1,000 people. These are often disabling and life-threatening conditions, affecting multiple organ systems. They can cause blindness, deafness, chronic respiratory infections, kidney disease, heart disease, infertility, obesity, diabetes and more. Symptoms and severity vary widely, making it hard to classify and diagnose these disorders.
So how can the malfunction of a little organelle originally thought to be useless result in such a wide variety of devastating symptoms? Well, it is now known that cilia don't just look like little antennae – they act like them too. Cilia are packed full of proteins that detect messenger signals from other cells or the surrounding environment. These signals are then transmitted into the cell's nucleus to activate a response; such responses are important for regulating several essential signalling pathways.
When this was realised, researchers began to ask whether changes in the structure or function of cilia, changes in the levels of proteins associated with cilia, or movement of these proteins to a different part of the cell could occur due to – or potentially drive – other conditions. Given that scientists already knew that many of the pathways regulated by cilia could drive cancer progression, looking at the relationship between cilia and cancer was a logical step.
Cilia, signals and cancer
Researchers discovered that in many cancers – including renal cell, ovarian, prostate, breast and pancreatic – there was a distinct lack of primary cilia in the cancerous cells compared with the healthy surrounding cells. It could be that the loss of cilia was just a response to the cancer, disrupting normal cell regulation – but what if it was actually driving the cancer?
Melanomas are one of the most aggressive types of tumours in humans. Some cancerous melanoma cells express higher levels of a protein called EZH2 than healthy cells. EZH2 suppresses cilia genes, so malignant cells have fewer cilia. This loss of cilia activates some of the carcinogenic signalling pathways, resulting in aggressive metastatic melanoma.
However, loss of cilia does not have the same effect in all cancers. In one type of pancreatic cancer, the presence – not absence – of cilia correlates with increased metastasis and decreased patient survival.
Even within the same cancer the picture is unclear. Medulloblastomas are the most common childhood brain tumour. Their development can be driven by one of the signalling pathways regulated by the cilia, the hedgehog signalling pathway. This pathway is active during embryo development but dormant after. However, in many cancers (including medulloblastomas) hedgehog signalling is reactivated, and it can drive cancer growth. But studies into the effects of cilia in medulloblastomas have found that cilia can both drive and protect against this cancer, depending on the way the hedgehog pathway is initially disrupted.
As such strong links have been found between cilia and cancer, researchers have also been looking into whether treatment which targets this structure could be used for cancer therapies. One of the problems faced when treating cancers is the development of resistance to anti-cancer drugs. Many of these drugs’ targets are part of the signalling pathways regulated by cilia, but scientists have found that blocking the growth of cilia in drug-resistant cancer cell lines could restore sensitivity to a treatment.
What was once thought to just be a cell part left over during evolution, has proven to be integral to our understanding and treatment of cancer. The hope is that further research into cilia will help untangle the complex relationship between them and cancer, and provide both new insights into some of the drivers of cancer as well as new targets for cancer treatment.
Angharad Mostyn Wilkie receives funding from the North West Cancer Research Institute
How to become a great impostor
Author: Tim Holmes, Lecturer in Criminology & Criminal Justice, Bangor University
Unlike other icons who have appeared on the front of Life magazine, Ferdinand Waldo Demara was not famed as an astronaut, actor, hero or politician. In fact, his 23-year career was rather varied. He was, among other things, a doctor, professor, prison warden and monk. Demara was not some kind of genius either – he actually left school without any qualifications. Rather, he was “The Great Impostor”, a charming rogue who tricked his way to notoriety.
My research speciality is crimes by deception and Demara is a man who I find particularly interesting. For, unlike other notorious con-artists, imposters and fraudsters, he did not steal and defraud for the money alone. Demara’s goal was to attain prestige and status. As his biographer Robert Crichton noted in 1959, “Since his aim was to do good, anything he did to do it was justified. With Demara the end always justifies the means.”
Though we know what he did, and his motivations, there is still one big question that has been left unanswered – why did people believe him? While we don’t have accounts from everyone who encountered Demara, my investigation into his techniques has uncovered some of the secrets of how he managed to keep his high level cons going for so long.
Read more: Why do we fall for scams?
Upon leaving education in 1935, Demara lacked the skills to succeed in the organisations he was drawn to. He wanted the status that came with being a priest, an academic or a military officer, but didn't have the patience to achieve the necessary qualifications. And so his life of deception started. At just 16 years old, wishing to become a member of a silent order of Trappist monks, Demara ran away from his home in Lawrence, Massachusetts, lying about his age to gain entry.
When he was found by his parents he was allowed to stay, as they believed he would eventually give up. Demara remained with the monks long enough to gain his hood and habit, but was ultimately forced out of the monastery at the age of 18 as his fellow monks felt he lacked the right temperament.
Demara then attempted to join other orders, including the Brothers of Charity children’s home in West Newbury, Massachusetts, but again failed to follow the rules. In response, he stole funds and a car from the home, and joined the army in 1941, at the age of 19. But, as it turned out, the army was not for him either. He disliked military life so much that he stole a friend’s identity and fled, eventually deciding to join the navy instead.
From monk to medicine
While in the navy, Demara was accepted for medical training. He passed the basic course but due to his lack of education was not allowed to advance. So, in order to get into the medical school, Demara created his first set of fake documents indicating he already had the needed college qualifications. He was so pleased with his creations that he decided to skip applying to medical school and tried to gain a commission as an officer instead. When his falsified papers were discovered, Demara faked his own death and went on the run again.
In 1942, Demara took the identity of Dr Robert Linton French, a former navy officer and psychologist. Demara found French's details in an old college prospectus which had profiled French when he worked there. Though he worked as a college teacher using French's name until the end of the war in 1945, Demara was eventually caught, and the authorities decided to prosecute him for desertion.
Due to good behaviour, however, he served only 18 months of the six-year sentence handed to him – but upon his release he went back to his old ways. This time Demara created a new identity, Cecil Hamann, and enrolled at Northeastern University. Tiring of the effort and time needed to complete his law degree, Demara awarded himself a PhD and, under the persona of “Dr” Cecil Hamann, took up another teaching post at a Christian college, The Brother of Instruction, in Maine in the summer of 1950.
It was here that Demara met and befriended Canadian doctor Joseph Cyr, who was moving to the US to set up a medical practice. Needing help with the immigration paperwork, Cyr gave all his identifying documents to Demara, who offered to fill in the application for him. After the two men parted ways, Demara took copies of Cyr's paperwork and moved up to Canada. Pretending to be Dr Cyr, Demara approached the Canadian Navy with an ultimatum: make me an officer or I will join the army. Not wanting to lose a trained doctor, the navy fast-tracked Demara's application.
As a commissioned officer during the Korean war, Demara first served at Stadacona naval base, where he convinced other doctors to contribute to a medical booklet he claimed to be producing for lumberjacks living in remote parts of Canada. With this booklet and the knowledge gained from his time in the US Navy, Demara was able to pass successfully as Dr Cyr.
A military marvel
In 1951, Demara was transferred to be ship’s doctor on the destroyer HMCS Cayuga. Stationed off the coast of Korea, Demara relied on his sick berth attendant, petty officer Bob Horchin, to handle all minor injuries and complaints. Horchin was pleased to have a superior officer who did not interfere in his work and who empowered him to take on more responsibilities.
Though he passed very successfully as a doctor aboard the Cayuga, Demara's time there came to a dramatic end after three Korean refugees were brought on board in need of medical attention. Relying on textbooks and Horchin, Demara successfully treated all three – even completing the amputation of one man's leg. He was recommended for a commendation for his actions, and the story was reported in the press, where the real Dr Cyr's mother saw a picture of Demara impersonating her son. Wanting to avoid further public scrutiny and scandal, the Canadian government elected simply to deport Demara back to the US in November 1951.
After returning to America, there were news reports on his actions, and Demara sold his story to Life magazine in 1952. In his biography, Demara notes that he spent the time after his return to the US using his own name and working in different short-term jobs. While he enjoyed the prestige he had gained in his impostor roles, he started to dislike life as Demara, “the great impostor”, gaining weight and developing a drinking problem.
In 1955, Demara somehow acquired the credentials of a Ben W. Jones and disappeared again. As Jones, Demara began working as a guard at Huntsville Prison in Texas, and was eventually put in charge of the maximum-security wing that housed the most dangerous prisoners. In 1956, an educational programme that provided prisoners with magazines to read led to Demara's discovery once more. One of the prisoners found the Life magazine article and showed the cover picture of Demara to prison officials. Despite categorically denying to the prison warden that he was Demara, and pointing to the positive feedback he had received from prison officials and inmates about his performance there, Demara chose to run. In 1957, he was caught in North Haven, Maine, and served a six-month prison sentence for his actions.
After his release he made several television appearances including on the game show You Bet Your Life, and made a cameo in horror film The Hypnotic Eye. From this point until his death in 1981, Demara would struggle to escape his past notoriety. He eventually returned to the church, getting ordained using his own name and worked as a counsellor at a hospital in California.
How Demara did it
According to biographer Crichton, Demara had an impressive memory, and through his impersonations accumulated a wealth of knowledge on different topics. This, coupled with charisma and good instincts about human nature, helped him trick all those around him. Studies of professional criminals often observe that con artists are skilled actors and that a con game is essentially an elaborate performance where only the victim is unaware of what is really going on.
Demara also capitalised on workplace habits and social conventions. He is a prime example of why recruiters shouldn’t rely on paper qualifications over demonstrations of skill. And his habit of allowing subordinates to do things he should be doing meant Demara’s ability went untested, while at the same time engendering appreciation from junior staff.
He observed of his time in academia that there was always opportunity to gain authority and power in an organisation. There were ways to set himself up as an authority figure without challenging or threatening others, by “expanding into the power vacuum”. He would set up his own committees, for example, rather than joining established groups of academics. Demara says in the biography that starting fresh committees and initiatives often gave him the cover he needed to avoid conflict and scrutiny.
…there’s no competition, no past standards to measure you by. How can anyone tell you aren’t running a top outfit? And then there’s no past laws or rules or precedents to hold you down or limit you. Make your own rules and interpretations. Nothing like it. Remember it, expand into the power vacuum.
Working from a position of authority as the head of his own committees further entrenched Demara in professions he was not qualified for. It can be argued that Demara’s most impressive attempt at expansion into the “power vacuum” occurred when teaching as Dr Hamann.
Hamann was considered a prestigious appointee for a small Christian college. Claiming to be a cancer researcher, Demara proposed converting the college into a state-approved university where he would be chancellor. The plans proceeded, but Demara was not given a prominent role in the new institution. It was then that Demara decided to take Cyr's identity and leave for Canada. If Demara had succeeded in becoming chancellor of the new LaMennais College (which would go on to become Walsh University), it is conceivable that he would have been able to avoid scrutiny or questioning thanks to his position of authority.
Other notable serial impostors and fakes have relied on techniques similar to Demara's. Frank Abagnale also recognised the reliance people in large organisations placed on paperwork and looking the part. This insight allowed him, at 16, to pass as a 25-year-old airline pilot for Pan Am Airways, as portrayed in the film Catch Me If You Can.
More recently, Gene Morrison was jailed after it was discovered that he had spent 26 years running a fake forensic science business in the UK. After buying a PhD online, Morrison set up Criminal and Forensic Investigations Bureau (CFIB) and gave expert evidence in over 700 criminal and civil cases from 1977 to 2005. Just like Demara used others to do his work, Morrison subcontracted other forensic experts and then presented the findings in court as his own.
Marketing and psychology expert Robert Cialdini’s work on the techniques of persuasion in business might offer insight into how people like Demara can succeed, and why it is that others believe them. Cialdini found that there are six universal principles of influence that are used to persuade business professionals: reciprocity, consistency, social proof, getting people to like you, authority and scarcity.
Demara used all of these skills at various points in his impersonations. He would give power to subordinates to hide his lack of knowledge and enable his impersonations (reciprocity). By using other people’s credentials, he was able to manipulate organisations into accepting him, using their own regulations against them (consistency and social proof). Demara’s success in his impersonations points to how likeable he was and how much of an authority he appeared to be. By impersonating academics and professionals, Demara focused on career paths where at the time there was high demand and a degree of scarcity, too.
Laid bare, one can see how Demara tricked his unsuspecting colleagues into believing his lies through manipulation. Yet it is also interesting to consider how often we all rely on gut instinct and the appearance of ability rather than witnessed proof. Our gut instinct is built on five questions we ask ourselves when presented with information: does a fact come from a credible source? Do others believe it? Is there plenty of evidence to support it? Is it compatible with what I believe? Does it tell a good story?
Researchers of social trust and solidarity argue that people also have a fundamental need to trust strangers to tell the truth in order for society to function. As sociologist Niklas Luhmann said, “A complete absence of trust would prevent [one] even getting up in the morning.” Trust in people is, in a sense, a default setting; to mistrust someone requires a loss of confidence in them, sparked by some indicator of a lie.
It was only after the prisoner showed the Life article to the Huntsville Prison warden that the authorities began to ask questions. Until this point, Demara had offered everything his colleagues would need to believe he was a capable member of staff. People accepted Demara's claims because it felt right to believe him. He had built a rapport and influenced people's views of who he was and what he could do.
Another factor to consider when asking why people believed Demara is the rising dependency on paper proofs of identity at that time. Following World War II, as social and economic mobility changed in America, documentation improved and reliance on it grew. Underlying Demara's impersonations, and the actions of many modern con artists, is the trust we have long placed in proofs of identity – first paper documents such as birth certificates and ID cards and, more recently, digital forms of identification.
As his preoccupation was more with prestige than money, it can be argued that Demara had a harder time than other impostors who were driven only by profit. Demara stood out as a surgeon and a prison guard; he was a good fake and influencer. But the added attention that came from his attempts at multiple important professions, and from the media, led to his downfall. Abagnale similarly had issues with the attention that came with pretending to be an airline pilot, lawyer and surgeon. In contrast, Morrison stuck to his one impersonation for years, avoiding detection and making money until the quality of his work was investigated.
The trick, it appears, to being a good impostor is essentially to be friendly, have access to a history of being trusted by others, have the right paperwork, build others' confidence in you and understand the social environment you are entering. Though when Demara himself was asked to explain why he committed his crimes, he simply said: “Rascality, pure rascality.”
Tim Holmes does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Tissue donations are important to cancer research – but what happens to your cells after they are taken?
Author: Helena Robinson, Postdoctoral Research Officer in Cancer Biology, Bangor University
If you’ve ever had a tumour removed or biopsy taken, you may have contributed to life-saving research. People are often asked to give consent for any tissue that is not needed for diagnosis to be used in other scientific work. Though you probably won’t be told exactly what research your cells will be used for, tissue samples like these are vital for helping us understand and improve diagnosis and treatment of a whole range of illnesses and diseases. But once they’re removed, how are these tissue samples used exactly? How do they go from patient to project?
When tissue is removed from a person’s body, most often it is immediately put into a chemical preservative. It is then taken to a lab and embedded in a wax block. Protecting the tissue like this retains its structure and stops it from decomposing so it can be stored at room temperature for long periods of time.
This process also means that biochemical molecules like protein and DNA are preserved, which can provide vital clues about what processes are occurring in the tissue at that stage in the person’s illness. If we were looking at, for example, whether molecule A occurs in one particular tumour type but not in others (which would make it helpful for diagnosis) we would want a large number of each type to test. But there may not be enough patients of each type currently in treatment, so it is useful to have a library of samples to draw from.
Or we might want to test if patients with tumours containing molecule B are less likely to survive for five years than those without this molecule. This sort of question requires samples with a follow-up time of at least five years. But the answer may help doctors decide whether they need to treat their current patients with B more aggressively or with a different kind of treatment.
To analyse tissues, lab scientists cut very thin slices from the wax blocks and view them under a microscope. The slides are stained with dyes that show the overall tissue structure, and may also be stained with antibodies to show the presence of specific molecules.
Studies often need large numbers of samples from different patients to adequately answer a research question, which can take some time to collect. Take my work, for example. My team is interested in finding out more about a protein called brachyury and how it relates to bowel cancer. But to do this we need to compare lots of samples, so we are using tissue from 823 bowel cancer patients and 50 non-cancer patients in our research.
When not in use, the tissue blocks are – with patient consent – placed in a store that researchers can access. The UK has several of these stores, known as biobanks or biorepositories, holding all kinds of tissues. Some cancer biobanks also store different kinds of tumours and blood samples.
While there are no reliable figures on how many samples are held across all biobanks, or how often they are used, we do know the numbers are significant. The Children’s Cancer and Leukaemia Biobank alone has banked 19,000 samples since 1998. The Northern Ireland Biobank reports that 2,062 patients consented to their tissue being used in research between 2017 and 2018, and that 4,086 samples were accessed by researchers in that period.
Projects that use biobanks are often trying to identify biomarkers. These are any biological characteristics that give useful information about a disease or condition. Our team is looking at whether the protein brachyury is a useful biomarker to improve bowel cancer diagnosis.
Brachyury is essential for early embryonic development, but it is switched off in most cells by the time you are born. However, several studies imply that finding brachyury in a tumour indicates a poorer outcome for the patient. To work out if this link is correct, we need to look at biobank samples. Doing this will help us work out more accurately which patients are at higher risk of cancer recurrence or metastasis. This is important when doctors are deciding on the best course of treatment.
In our research, we also need clinical details, such as what happened to the patient and all the information available at the time of diagnosis. Then we can assess whether testing for brachyury would have added useful information to the diagnosis. Information that accompanies each block is anonymised, which means the researcher analysing the data won’t know the patient’s name or be able to identify them from the sample. But they can see any relevant clinical details such as tumour stage, age, sex and survival.
Biobank samples have already improved the treatment of childhood acute lymphocytic leukaemia. Samples from the Children’s Cancer and Leukaemia Biobank were used to demonstrate that children with an abnormality in chromosome 21 had poorer outcomes than those without it. This led to treatment being modified for these children so they are no longer at a disadvantage.
People are often applauded for raising money for research by undertaking gruelling or inventive challenges. Patients who decide their tissue can be used in research should be similarly applauded. Without their unique and valuable gift, we wouldn’t be able to further our understanding, diagnosis and treatment of all kinds of illnesses and diseases.
Helena Robinson receives funding from Cancer Research Wales.
Being left-handed doesn't mean you are right-brained — so what does it mean?
Author: Emma Karlsson, Postdoctoral researcher in Cognitive Neuroscience, Bangor University
There have been plenty of claims about what being left-handed means, and whether it changes the type of person someone is – but the truth is something of an enigma. Myths about handedness appear year after year, but researchers have yet to uncover all of what it means to be left-handed.
So why are people left-handed? The truth is we don’t fully know that either. What we do know is that only around 10% of people across the world are left-handed – but this isn’t split equally between the sexes. About 12% of men are left-handed but only about 8% of women. Some people get very excited about the 90:10 split and wonder why we aren’t all right-handed.
But the more interesting question is: why isn’t our handedness down to chance? Why isn’t it a 50:50 split? It is not due to handwriting direction – if it were, left-handedness would dominate in countries whose languages are written right to left, which is not the case. Even the genetics are odd: only about 25% of children who have two left-handed parents will also be left-handed.
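The quoted figures are easy to sanity-check with simple arithmetic, assuming a roughly even split between men and women (an assumption of mine, not a claim in the article):

```python
# Overall left-handedness rate from the sex-specific rates above,
# assuming a 50:50 split between men and women.
p_left_men, p_left_women = 0.12, 0.08
overall = 0.5 * p_left_men + 0.5 * p_left_women
print(f"overall rate: {overall:.0%}")  # about 10%, matching the text

# Children of two left-handed parents are left-handed ~25% of the time:
# well above the base rate, but far below what a simple dominant-gene
# account of handedness would predict.
p_two_left_parents = 0.25
print(f"relative to base rate: {p_two_left_parents / overall:.1f}x")
```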
Being left-handed has been linked with all sorts of bad things – poor health and early death, for example – but neither link holds up. The latter is explained by the fact that many people in older generations were forced to switch and use their right hands, which makes it look like there are fewer left-handers at older ages. The former, despite making an appealing headline, is simply wrong.
Positive myths also abound. People say that left-handers are more creative because most of them use their “right brain”. This is perhaps one of the most persistent myths about handedness and the brain. But however appealing (and perhaps to the disappointment of those lefties still waiting to wake up one day with the talents of Leonardo da Vinci), the general idea that any of us has a “dominant brain side” that defines our personality and decision-making is also wrong.
Brain lateralisation and handedness
It is true, however, that the brain’s right hemisphere controls the left side of the body, and the left hemisphere the right side – and that the hemispheres do actually have specialities. For example, language is usually processed a little bit more within the left hemisphere, and recognition of faces a little bit more within the right hemisphere. This idea that each hemisphere is specialised for some skills is known as brain lateralisation. However, the halves do not work in isolation, as a thick band of nerve fibres – called the corpus callosum – connects the two sides.
Interestingly, there are some known differences in these specialities between right-handers and left-handers. For example, it is often cited that around 95% of right-handers are “left hemisphere dominant”. This is not the same as the “left brain” claim above; it actually refers to the early finding that most right-handers depend more on the left hemisphere for speech and language. It was assumed that the opposite would be true for lefties. But this is not the case. In fact, 70% of left-handers also process language more in the left hemisphere. Why this number is lower, rather than reversed, is as yet unknown.
Researchers have found many other brain specialities, or “asymmetries” in addition to language. Many of these are specialised in the right hemisphere – in most right-handers at least – and include things such as face processing, spatial skills and perception of emotions. But these are understudied, perhaps because scientists have incorrectly assumed that they all depend on being in the hemisphere that isn’t dominant for language in each person.
In fact, this assumption, plus the recognition that a small number of left-handers have unusual right hemisphere brain dominance for language, means left-handers are either ignored – or worse, actively avoided – in many studies of the brain, because researchers assume that, as with language, all other asymmetries will be reduced.
How some of these functions are lateralised (specialised) in the brain can actually influence how we perceive things and so can be studied using simple perception tests. For example, in my research group’s recent study, we presented pictures of faces that were constructed so that one half of the face shows one emotion, while the other half shows a different emotion, to a large number of right-handers and left-handers.
Usually, people see the emotion shown on the left side of the face, and this is believed to reflect specialisation in the right hemisphere. The visual fields are processed in such a way that a bias to the left side of space is thought to reflect right-hemisphere processing, while a bias to the right side of space is thought to reflect left-hemisphere processing. We also presented different types of pictures and sounds to examine several other specialisations.
Our findings suggest that some types of specialisations, including processing of faces, do seem to follow the interesting pattern seen for language (that is, more of the left-handers seemed to have a preference for the emotion shown on the right side of the face). But in another task that looked at biases in what we pay attention to, we found no differences in the brain-processing patterns for right-handers and left-handers. This result suggests that while there are relationships between handedness and some of the brain’s specialisations, there aren’t for others.
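Biases like these are commonly summarised as a single laterality index. The convention below – the difference between leftward and rightward responses over their total – is a standard way of scoring such tasks, but the trial counts here are invented for illustration.

```python
# Laterality index: +1 = fully leftward bias (right-hemisphere pattern),
# -1 = fully rightward bias, 0 = no bias.
def laterality_index(left: int, right: int) -> float:
    return (left - right) / (left + right)

# How often a participant reported the emotion shown on each half of a
# chimeric face, over 40 trials (hypothetical numbers).
typical_right_hander = laterality_index(left=32, right=8)    # 0.6
atypical_left_hander = laterality_index(left=12, right=28)   # -0.4
print(typical_right_hander, atypical_left_hander)
```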
Left-handers are absolutely central to new experiments like this, but not just because they can help us understand what makes this minority different. Learning what makes left-handers different could also help us finally solve many of the long-standing neuropsychological mysteries of the brain.
Emma Karlsson does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Brexit uncertainty boosts support for Welsh independence from the UK
Author: Stephen Clear, Lecturer in Constitutional and Administrative Law, and Public Procurement, Bangor University
In a move that surprised many, 52.5% of people in Wales voted in June 2016 to leave the European Union. But concerns over Brexit negotiations and “chaos in UK politics” have mounted since then, and recent polls suggest that support for remain has risen considerably in Wales.
Now, the Welsh government has announced that it will campaign for the UK to remain in the EU while public attention is turning to the question of whether the Welsh should become independent from a post-Brexit UK.
Welsh independence has long been supported by Plaid Cymru, but it now appears to be entering the mainstream, with more Welsh citizens considering the possibility of leaving the union. Marches are being held across the country, and recent YouGov polls indicate that support for independence, or at least “indy-curiosity”, has grown in Wales in the past two years.
At present, the devolved institutions – the Welsh Government and the National Assembly for Wales – do not have control over all matters relating to Wales. They don’t have control over defence and national security, foreign policy, or immigration, for example. But the Assembly does have responsibility for policy and passing laws for the benefit of the people of Wales, and has been doing so for the past 20 years.
Strictly speaking, constitutional law dictates that Wales cannot run its own referendum nor declare independence unilaterally. The new Schedule 7A to the Government of Wales Act 2006 states that “the union of the nations of Wales and England” is a reserved matter, not for the Assembly. But precedent suggests that an independence referendum is not an impossibility.
If there is momentum for Wales to decide its own future, this would put pressure on the UK government to facilitate a legal route to a referendum. This opportunity was afforded to the former Scottish first minister, Alex Salmond, by former prime minister David Cameron, paving the way for the Scottish Independence Referendum Act 2013.
While not all are in favour of Welsh independence, the political narrative is changing. Welsh first minister Mark Drakeford has stated that “support for the union is not unconditional” and that “independence has risen up the public agenda”.
Concerned by relationships between the UK’s countries, former prime minister Theresa May referred to the electoral success of nationalist parties such as Plaid Cymru as evidence that the union is “more imperilled now than it has ever been”. She also sanctioned the Dunlop review, with a remit to address “how we can secure our union for the future”.
Her comments echo warnings from former Labour prime minister Gordon Brown, who recently remarked that UK unity is “more at risk than at any time in 300 years – and more in danger than when we had to fight for it in 2014 during a bitter Scottish referendum”.
So if Wales overcame the legal challenges and gained national political support, would the devolved government and parliament be able to manage the country? As noted above, the National Assembly has been making laws for Wales since 1999. Frequently cited achievements include the abolition of prescription charges and financial support for Welsh university students (via a mix of tuition loans and living-cost grants). In addition, the Social Services and Well-being (Wales) Act 2014 changed how people’s needs are assessed and services delivered.
More recently its Future Generations Act was celebrated for compelling public bodies to think about the long-term impact of their decisions on communities and the environment – albeit with some criticisms from legal experts for being “toothless” in terms of enforceability.
Alongside these headline-grabbing results, the National Assembly itself has been an achievement in its own right. While its initial establishment was something of a battle – in 1979 Wales voted 4:1 against creating an Assembly, and in 1997 just 50.3% voted for it – the Wales Act 2017 actually extended the scope of the Assembly’s powers.
This changed its constitutional structure from a conferred powers model (which limited it to specifically listed areas) to a reserved powers model, which empowers the Assembly to produce a multitude of Welsh laws on all matters that are not reserved to the UK parliament.
But even with this track record, it must be noted that not everyone is in favour of the Assembly. A small number of UKIP assembly members are currently arguing to reverse devolution, while others criticise Wales’s record – particularly in the areas of schooling and the NHS.
There are several other dimensions to the question of whether Wales could become an independent state. Socially and economically, opponents argue that Wales is too small and too poor to stand alone on the world stage. Yes Cymru, a non-partisan pro-independence campaign group, has sought to debunk these myths, pointing out that there are 18 countries in Europe smaller than Wales, and that assessments of Wales’s fiscal deficit are flawed because they exclude significant assets such as water and electricity.
The constitutional shift in power that will follow Brexit will certainly raise the prospect of a divided UK. But the outcome of Brexit, and its impact on Welsh independence, hinges on the new prime minister’s actions.
While Boris Johnson has reiterated that the “union comes first”, if there is significant public support for independence in Wales, it will be hard for him to ignore the people’s right to self-determination and arbitrarily enforce the union at all costs. Should the independence movement gain further support in the coming months, compromises will have to be reached, with at least more incremental devolution likely in the medium term.
Ultimately, while it would be a monumental change, the question of whether Wales becomes independent hinges on what the people want for their country. If successive UK governments take the union for granted without more meaningful consideration to the cumulative effects on the people of Wales, calls for independence may become louder.
Stephen Clear does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
How the brain prepares for movement and actions
Author: Myrto Mantziara, PhD Researcher, Bangor University
Our behaviour is largely tied to how well we control, organise and carry out movements in the correct order. Take writing, for example. If we didn’t make one stroke after another on a page, we would not be able to write a word.
However, motor skills (single actions or sequences of actions that become effortless through practice) can become very difficult to learn and retrieve when neurological conditions disrupt the planning and control of sequential movements. When a person has a disorder – such as dyspraxia or stuttering – certain skills cannot be performed in a smooth and coordinated way.
Traditionally scientists have believed that in a sequence of actions, each is tightly associated to the other in the brain, and one triggers the next. But if this is correct, then how can we explain errors in sequencing? Why do we mistype “form” instead of “from”, for example?
Some researchers argue that before we begin a sequence of actions, the brain recalls and plans all its items at the same time. It prepares a map in which each item has an activation stamp relative to its order in the sequence. These activations compete with each other until the item with the strongest one wins. It “comes out” for execution as the most “readied” – so we type “f” first in the word “from”, for example – and is then erased from the map. This process, called competitive queuing, is repeated until we have executed all the items of the sequence in the correct order.
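The steps described above can be sketched in a few lines of code. This is a minimal illustration of the competitive-queuing idea, with invented activation values, not a model from the study itself.

```python
# Each planned item gets an activation reflecting its intended position;
# the most active item is executed, then suppressed, and the cycle
# repeats until the plan is empty. Activation values are invented.
def competitive_queuing(activations: dict) -> list:
    pending = dict(activations)
    executed = []
    while pending:
        winner = max(pending, key=pending.get)  # strongest item wins
        executed.append(winner)
        del pending[winner]                     # erase it from the map
    return executed

plan = {"f": 0.9, "r": 0.7, "o": 0.5, "m": 0.3}
print("".join(competitive_queuing(plan)))   # -> from

# Noisy activations can swap neighbours, producing errors like "form".
noisy = {"f": 0.9, "r": 0.5, "o": 0.7, "m": 0.3}
print("".join(competitive_queuing(noisy)))  # -> form
```

With a clean activation gradient the letters come out as “from”; nudging the activations of “r” and “o” past each other reproduces the typing error “form”.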
This idea that the brain uses simultaneous activations of actions before any movement takes place was demonstrated in a 2002 study. As monkeys drew shapes (making three strokes for a triangle, for example), researchers found simultaneous neural patterns for each stroke before the movement began. The strength of each activation predicted the position of that particular action in the executed sequence.
Planning and queuing
What has not been known until now is whether this activation system is used in the human brain. Nor have we known how actions are queued while we prepare them based on their position in the sequence. However recent research from neuroscientists at Bangor University and University College London has shown that there is simultaneous planning and competitive queuing in the human brain too.
For this study, the researchers were interested to see how the brain prepares for executing well-learned action sequences like typing or playing the piano. Participants were trained for two days to pair abstract shapes with five-finger sequences in a computer-based task. They learned the sequences by watching a small dot move from finger to finger on a hand image displayed on the screen, and pressing the corresponding finger on a response device. These sequences were combinations of two finger orders with two different rhythms.
On the third day, the participants had to produce the correct sequence entirely from memory – based on the abstract shape briefly presented on the screen – while their brain activity was recorded.
Looking at the brain signals, the team was able to distinguish participants’ neural patterns as they planned and executed the movements. The researchers found that, milliseconds before the start of the movement, all the finger presses were queued and “stacked” in an ordered manner. The activation pattern of the finger presses reflected their position in the sequence that was performed immediately after. This competitive queuing pattern showed that the brain prepared the sequence by organising the individual actions in the correct order.
The researchers also looked at whether this preparatory queuing activity was shared across different sequences which had different rhythms or different finger orders, and found that it was. The competitive queuing mechanism acted as a template to guide each action into a position, and provided the base for the accurate production of new sequences. In this way the brain stays flexible and efficient enough to be ready to produce unknown combinations of sequences by organising them using this preparatory template.
Interestingly, the quality of the preparatory pattern predicted how accurately a participant produced a sequence. In other words, the better separated the planned actions were before the sequence was executed, the more likely the participant was to perform it without mistakes. Errors, on the other hand, were associated with preparatory queuing patterns that were less well defined and tended to overlap.
By knowing how our actions are pre-planned in the brain, researchers will be able to find out the parameters of executing smooth and accurate movement sequences. This could lead to a better understanding of the difficulties found in disorders of sequence learning and control, such as stuttering and dyspraxia. It could also help the development of new rehabilitation or treatment techniques which optimise movement planning in order for patients to achieve a more skilled control of action sequences.
Myrto Mantziara is a PhD researcher and receives funding from School of Psychology, Bangor University.
Can we speak of a European identity?
Authors: François Dubet, Emeritus Professor, Université de Bordeaux; Nathalie Heinich, Sociologist, Centre national de la recherche scientifique (CNRS); Nikolaos Papadogiannis, Lecturer in Modern and Contemporary History, Bangor University
François Dubet, Université de Bordeaux: “Everyone perceives Europe from their own point of view”
The question of identity is always caught in the same paradox. On the one hand, identity seems insubstantial: a construction cobbled together from odds and ends, a narrative, an unstable set of imaginings and beliefs that fall apart as soon as one tries to grasp them. On the other hand, these uncertain identities seem extremely solid, embedded in our most intimate sense of self. Often, imagined collective identities need only come undone for individuals to feel threatened and wounded to their very core.
After all, the hundreds of thousands of Her Majesty’s subjects who marched against Brexit on March 23 felt European precisely because this small part of themselves risked being torn away, even though they could not define it precisely.
European identity in motion
I suppose historians and specialists in civilisations could easily define something like a European identity, rooted in the shared histories of the societies and states that took shape in the Latin, Christian and Germanic worlds: the repeated wars, the monarchical alliances, the revolutions, trade, the circulation of elites and migration within Europe.
The histories of national states are simply incomprehensible outside the history of Europe. That said, we would struggle to define this fractured, divided, shifting identity. Everyone perceives Europe from their own point of view, and indeed when European institutions venture to define a European identity, they barely manage it.
Could European identity be nothing more than an illusion, a mere accumulation of national identities – the only ones that are truly solid, because they are underpinned by institutions?
Living Europe in order to love it
Opinion polls, which must be handled with caution, show that individuals rank their feelings of belonging. People feel Breton and French, and European, and religious, and a woman or a man, and of this or that origin, without these multiple identifications being experienced as dilemmas in most cases.
Even those who resent political Europe for being too liberal and too bureaucratic hardly seem eager to return to mass mobilisation in defence of their country against its European neighbours. And this despite the rise of far-right parties almost everywhere in Europe, which stress attachment to national identity.
Beyond any explicit political consciousness, a form of European identity has thus taken shape, lived through the movement of people, leisure and ways of life.
Many of those who rail against Europe probably can no longer imagine applying for visas and changing francs into pesetas to spend two weeks in Spain.
Yet demagogues accuse Europe of being the cause of their misfortunes, an attack that resonates ever more loudly with disadvantaged socioeconomic groups.
It cannot be ruled out that criticism of Europe stems more from disappointed love than from hostility. European identity exists far more than we believe. Europe would only have to implode for us to miss it, and not merely in the name of our own self-interest.
Nathalie Heinich, CNRS/EHESS: “Should we speak of a European identity?”
Speaking of “identity” in relation to an entity loaded with political connotations is never neutral, as the notion of “French identity” shows. Either one asserts the existence of this entity (“European identity”) while implicitly setting it apart from a larger collective (America or China, for example), in which case one is claiming support for the small (the “dominated”) against the large (the “dominant”); or one implicitly sets it apart from a smaller collective (the nation, France), in which case one is asserting the superiority of the large over the small. Everything therefore depends on the context and the premises.
An expression with two meanings
But if we want to avoid a normative answer and stick to a neutral description, free of value judgments, we must distinguish between two meanings of the term “European identity”. The first refers to the nature of the abstract entity called “Europe”: its borders, its institutions, its history, its culture or cultures, and so on. This is a classic exercise, and the historical and political-science literature on the subject is abundant, even if the word “identity” is not necessarily used.
The second meaning refers to the representations that concrete individuals form of their “identity as Europeans” – that is, the manner in which, and the degree to which, they attach themselves to this collective at a more general level than the usual national identity. Assessing this requires sociological inquiry into the three “moments” of identity – self-perception, presentation and designation – through which an individual feels, presents themselves and is designated as “European”. Such an inquiry can take a quantitative form, using a representative survey built around these three experiences. The question “Can we speak of a European identity?” can therefore only be answered once such an inquiry has been carried out.
A question for citizens and their representatives
But the political stakes of the question escape no one, which is why we must bear in mind the function that the word “identity” serves in thinking about Europe: it is a matter of transforming an economic and social project into a political programme that is acceptable to the greatest number – even desirable.
That is why the problem is not so much whether we can, but whether we should, make Europe a matter of identity rather than a merely economic and social one. Hence: “Should we speak of a European identity?”
The answer to that question belongs to citizens and their representatives – not to researchers.
Nikolaos Papadogiannis, Bangor University, UK: “European identity: a plurality of options”
The outcome of the UK’s 2016 referendum on EU membership sent shockwaves across Europe. Among other things, it sparked debates over whether a “European culture” or a “European identity” really exists, or whether national identities still dominate.
It would be wrong, in my view, to dismiss the identification of various people with “Europe”. This identification is the outcome of a long process, particularly in the second half of the 20th century, involving both the policies of EEC/EU institutions and grassroots initiatives.
Cross-border youth mobility since 1945 is a key example of the latter: it was often developed by groups with no formal link to the EEC/EU. They nevertheless helped foster an attachment to “Europe” in several countries of the continent.
As political scientist Ronald Inglehart showed in the 1960s, the younger people were and the more they travelled, the more likely they were to support an ever-closer political union in Europe. More recently, Erasmus exchange programmes have also helped develop forms of identification with Europe.
Feeling “European”
At the same time, feeling “European” and subscribing to a national identity are far from incompatible. In the 1980s, many West Germans were passionate about a reunified Germany forming part of a politically united Europe.
Attachment to “Europe” has also been a key component of regional nationalism in several European countries over the past three decades, such as Scottish, Catalan and Welsh nationalism. A rallying cry for Scottish nationalists since the 1980s has been “independence in Europe”, and this remains true today. It is telling that the main slogan of the centre-left Scottish National Party (SNP), the most powerful nationalist party in Scotland, for the 2019 European Parliament elections was “Scotland’s future belongs in Europe”.
Varied national goals under the starred banner
What deserves more attention, however, is the significance attached to the notion of European identity. Diverse social and political groups have used it, from the far left to the far right.
The meaning they attach to this identity also varies. For the SNP, it is compatible with Scotland’s membership of the EU, which the party combines with an inclusive understanding of the Scottish nation – one open to people born elsewhere in the world who live in Scotland.
In Germany, by contrast, the far-right AfD (Alternative für Deutschland, Alternative for Germany) identifies with “Europe” but criticises the EU, and combines the former with Islamophobia. A clear example of this mix is a poster published by the party before the 2019 elections, asking “Europeans” to vote for the AfD so that Europe does not become “Eurabia”.
Identification with Europe does exist, but it is a complex phenomenon, framed in many ways. It does not necessarily imply support for the EU. Likewise, European identities are not necessarily mutually exclusive with national identities. Finally, they may – though not always – rest on stereotypes against people regarded as “non-European”.
The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.
Is there such a thing as a 'European identity'?
Author: Nikolaos Papadogiannis, Lecturer in Modern and Contemporary History, Bangor University
The outcome of the UK’s 2016 referendum on EU membership has sent shockwaves across Europe. Among other impacts, it has prompted debates about whether a “European culture” or a “European identity” actually exists, or whether national identities still dominate.
It would be wrong, in my opinion, to write off the identification of various people with “Europe”. This identification has been the outcome of a long process, particularly in the second half of the 20th century, involving both the policies of the European Economic Community (EEC) and EU institutions and grassroots initiatives. Cross-border youth mobility since 1945 is a key example of the latter: it was often developed by groups that were not formally linked to the EEC/EU. They still helped develop an attachment to “Europe” in several countries of the continent.
As political scientist Ronald Inglehart showed in the 1960s, the younger people were, and the more they travelled, the more likely they were to support an ever-closer political union in Europe. More recently, Erasmus exchange programmes have also helped develop forms of identification with Europe.
Simultaneously, feeling “European” and subscribing to a national identity have been far from mutually exclusive. Numerous West Germans in the 1980s were passionate about a reunified Germany being part of a politically united Europe.
Attachment to “Europe” has also been a key component of regional nationalism in several European countries over the last three decades, such as Scottish or Catalan nationalism. A rallying cry of Scottish nationalists from the 1980s on has been “independence in Europe”, and it remains so today. Indeed, for the 2019 European Parliament elections, the primary slogan of the centre-left Scottish National Party (SNP), currently in power, is “Scotland’s future belongs in Europe”.
What requires further attention is the significance attached to the notion of European identity. Diverse social and political groups have used it, ranging from the far left to the far right, and the meanings they attach to it vary. For the SNP, European identity is compatible with EU membership for Scotland. The party combines the latter with an inclusive understanding of the Scottish nation, one that is open to people who were born elsewhere in the world but live in Scotland.
By contrast, Germany’s far-right AfD party (Alternative für Deutschland, Alternative for Germany) is critical of the EU, yet identifies with “Europe”, which it explicitly contrasts with Islam. A clear example is one of the party’s posters for the upcoming elections, which asks “Europeans” to vote for the AfD so that the EU doesn’t become “Eurabia”.
Identification with Europe does exist, but it is a complex phenomenon, framed in several ways, and it does not necessarily imply support for the EU. Similarly, European identities are not necessarily mutually exclusive with national identities. Finally, both European and national identities may rest upon stereotypes against people regarded as “non-European”.
Nikolaos Papadogiannis does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.