On our News pages
Our Research News pages contain an abundance of research-related articles, covering recent research output and topical issues.
Our researchers publish across a wide range of subjects and topics and across a range of news platforms. The articles below are a few of those published on TheConversation.com.
How the brain prepares for movement and actions
Author: Myrto Mantziara, PhD Researcher, Bangor University
Our behaviour is largely tied to how well we control, organise and carry out movements in the correct order. Take writing, for example. If we didn’t make one stroke after another on a page, we would not be able to write a word.
However, motor skills (single or sequences of actions which through practice become effortless) can become very difficult to learn and retrieve when neurological conditions disrupt the planning and control of sequential movements. When a person has a disorder – such as dyspraxia or stuttering – certain skills cannot be performed in a smooth and coordinated way.
Traditionally, scientists have believed that in a sequence of actions, each is tightly associated with the other in the brain, and one triggers the next. But if this is correct, then how can we explain errors in sequencing? Why do we mistype “form” instead of “from”, for example?
Some researchers argue that before we begin a sequence of actions, the brain recalls and plans all items at the same time. It prepares a map where each item has an activation stamp relative to its order in the sequence. These compete with each other until the item with the stronger activation wins. It “comes out” for execution as being more “readied” – so we type “f” in the word “from” first, for example – and then it is erased from the map. This process, called competitive queuing, is repeated for the rest of the actions until we execute all the items of the sequence in the correct order.
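The mechanism described above can be sketched as a short simulation. This is a minimal illustration of the competitive-queuing idea, not the model used in the research: each planned action gets an activation reflecting how soon it should occur, the strongest item wins, is executed, and is then erased from the plan.

```python
# Minimal sketch of competitive queuing (illustrative only).
# Earlier items in the planned sequence receive higher activation;
# on each step the most active item "wins", is executed, and is erased.

def competitive_queuing(sequence):
    """Return the execution order produced by a noise-free queuing process."""
    # A simple decreasing activation gradient: earlier items are more active.
    activations = {item: len(sequence) - i for i, item in enumerate(sequence)}
    executed = []
    while activations:
        winner = max(activations, key=activations.get)  # strongest item wins
        executed.append(winner)
        del activations[winner]  # erased from the map after execution
    return executed

print(competitive_queuing(list("from")))  # noise-free: order is preserved
```

With noise-free activations the original order always comes out intact; adding random jitter to the activation values would occasionally let a later item win early, reproducing transposition errors like typing “form” for “from”.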
This idea that the brain uses simultaneous activations of actions before any movement takes place was demonstrated in a 2002 study. As monkeys drew shapes (making three strokes for a triangle, for example), researchers found simultaneous neural patterns for each stroke before the movement began. The strength of each activation predicted the position of that particular action in the executed sequence.
Planning and queuing
What has not been known until now is whether this activation system is used in the human brain, nor how actions are queued during preparation based on their position in the sequence. However, recent research from neuroscientists at Bangor University and University College London has shown that there is simultaneous planning and competitive queuing in the human brain too.
For this study, the researchers were interested to see how the brain prepares for executing well-learned action sequences like typing or playing the piano. Participants were trained for two days to pair abstract shapes with five-finger sequences in a computer-based task. They learned the sequences by watching a small dot move from finger to finger on a hand image displayed on the screen, and pressing the corresponding finger on a response device. These sequences were combinations of two finger orders with two different rhythms.
On the third day, the participants had to produce the correct sequence entirely from memory, cued by the abstract shape presented briefly on the screen, while their brain activity was recorded.
Looking at the brain signals, the team was able to distinguish participants’ neural patterns as they planned and executed the movements. The researchers found that, milliseconds before the start of the movement, all the finger presses were queued and “stacked” in an ordered manner. The activation pattern of the finger presses reflected their position in the sequence that was performed immediately after. This competitive queuing pattern showed that the brain prepared the sequence by organising the individual actions in the correct order.
The researchers also looked at whether this preparatory queuing activity was shared across different sequences which had different rhythms or different finger orders, and found that it was. The competitive queuing mechanism acted as a template to guide each action into a position, and provided the base for the accurate production of new sequences. In this way the brain stays flexible and efficient enough to be ready to produce unknown combinations of sequences by organising them using this preparatory template.
Interestingly, the quality of the preparatory pattern predicted how accurately a participant produced a sequence. In other words, the better separated the planned actions were before execution, the more likely the participant was to perform the sequence without mistakes. Errors, on the other hand, were associated with preparatory queuing patterns that were less well defined and tended to blur into one another.
By knowing how our actions are pre-planned in the brain, researchers will be able to find out the parameters of executing smooth and accurate movement sequences. This could lead to a better understanding of the difficulties found in disorders of sequence learning and control, such as stuttering and dyspraxia. It could also help the development of new rehabilitation or treatment techniques which optimise movement planning in order for patients to achieve a more skilled control of action sequences.
Myrto Mantziara is a PhD researcher and receives funding from School of Psychology, Bangor University.
Can we speak of a European identity?
Author: François Dubet, Professor Emeritus, Université de Bordeaux; Nathalie Heinich, Sociologist, Centre national de la recherche scientifique (CNRS); Nikolaos Papadogiannis, Lecturer in Modern and Contemporary History, Bangor University
François Dubet, Université de Bordeaux: “Everyone perceives Europe from their own point of view”
The question of identity is always locked in the same paradox. On the one hand, identity seems insubstantial: a construction cobbled together from odds and ends, a narrative, an unstable set of imaginaries and beliefs that fall apart as soon as we try to grasp them. Yet on the other hand, these uncertain identities seem extremely solid, embedded in our most intimate subjectivities. Often, imagined collective identities need only come undone for individuals to feel threatened and wounded to their core.
After all, the hundreds of thousands of Her Majesty’s subjects who marched against Brexit on March 23 felt European because this tiny part of themselves risked being torn away from them, even though they could not define it precisely.
A European identity in motion
I imagine that historians and scholars of civilisations could easily define something like a European identity, rooted in the shared histories of the societies and states formed in the Latin, Christian and Germanic worlds: the repeated wars, the royal alliances, the revolutions, the trade, the circulation of elites and migration within Europe.
The histories of nation states are simply incomprehensible outside the history of Europe. That said, we would struggle to define this fractured, divided, shifting identity. Everyone perceives Europe from their own point of view, and indeed when European institutions venture to define a European identity, they struggle to do so.
Could European identity, then, be nothing but an illusion, a mere accumulation of national identities, the only truly solid ones because they are underpinned by institutions?
Living Europe in order to love it
Opinion polls, which must be handled with caution, show that individuals rank their feelings of belonging. One can feel Breton and French, and European, and a believer, and a woman or a man, and of this or that origin without, in most cases, these multiple identifications being experienced as dilemmas.
Even those who resent political Europe for being too liberal and too bureaucratic hardly seem eager to return to mass mobilisations to defend their country against their European neighbours. And this despite the rise of far-right parties almost everywhere in Europe, which underline an attachment to national identity.
Beyond any explicit political consciousness, a form of lived European identity has thus taken shape through the movement of people, leisure and ways of life.
Many of those who fight Europe probably can no longer imagine applying for visas and changing francs into pesetas in order to spend two weeks in Spain.
Yet demagogues accuse Europe of being the cause of their misfortunes, an attack that resonates ever more loudly in the ears of disadvantaged socio-economic groups.
It cannot be ruled out that criticism of Europe stems more from disappointed love than from hostility. European identity exists far more than we think. Europe would only have to implode for us to miss it, and not only in the name of our well-understood interests.
Nathalie Heinich, CNRS/EHESS: “Should we speak of a European identity?”
Speaking of “identity” in relation to an entity laden with political connotations is never neutral, as the notion of “French identity” shows. Either we assert the existence of this entity (“European identity”) while implicitly aiming at its distinction from a larger collective (America or China, for example), in which case we are straight away claiming support for the small (the “dominated”) against the big (the “dominant”); or we implicitly aim at its distinction from a smaller collective (the nation, France), in which case we are claiming the superiority of the big over the small. Everything therefore depends on context and intent.
An expression with two meanings
But if we want to avoid a normative answer and stick to a neutral description, free of value judgments, then we must distinguish between two senses of the term “European identity”. The first refers to the nature of the abstract entity called “Europe”: its borders, its institutions, its history, its culture or cultures, and so on. The exercise is a classic one, and the historical and political-science literature on the subject is abundant, even if the word “identity” is not necessarily used.
The second sense refers to the representations that concrete individuals form of their “identity as Europeans”, that is, the manner in which, and the degree to which, they attach themselves to this collective at a more general level than the usual national identity. The diagnosis then requires sociological inquiry into the three “moments” of identity – self-perception, presentation, designation – through which an individual feels, presents themselves and is designated as “European”. Such an inquiry can take a quantitative form, with a representative survey built around these three experiences. The question “Can we speak of a European identity?” can therefore only be answered at the end of such an inquiry.
A question for citizens and their representatives
But the political stakes of the question escape no one, which is why we must bear in mind the function that the introduction of the word “identity” serves in thinking about Europe: it is precisely about transforming an economic and social project into a political programme that the greatest number will find acceptable, or even desirable.
That is why the problem is not so much whether we can, but whether we should make Europe a matter of identity and no longer merely an economic and social one. Hence: “Should we speak of a European identity?”
The answer to that question belongs to citizens and their representatives – not to researchers.
Nikolaos Papadogiannis, Bangor University, UK: “European identity: a plurality of options”
The outcome of the UK’s 2016 referendum on EU membership sent shockwaves across Europe. Among other things, it prompted debates about whether a “European culture” or a “European identity” actually exists, or whether national identities still dominate.
It would be wrong, in my view, to dismiss the identification of various people with “Europe”. This identification is the outcome of a long process, particularly in the second half of the 20th century, involving both the policies of EEC/EU institutions and grassroots initiatives.
Cross-border youth mobility since 1945 is a key example of the latter: it was often developed by groups that were not formally linked to the EEC/EU, yet which still helped develop an attachment to “Europe” in several countries of the continent.
As political scientist Ronald Inglehart showed in the 1960s, the younger people were and the more they travelled, the more likely they were to support an ever-closer political union in Europe. More recently, Erasmus exchange programmes have also helped develop forms of identification with Europe.
Feeling “European”
At the same time, feeling “European” and subscribing to a national identity are far from incompatible. In the 1980s, many West Germans were passionate about a reunified Germany being part of a politically united Europe.
Attachment to “Europe” has also been a key component of regional nationalism in several European countries over the last three decades, such as Scottish, Catalan and Welsh nationalism. A rallying cry for Scottish nationalists since the 1980s has been “independence in Europe”, and it remains so today. It is quite telling that the main slogan of the centre-left Scottish National Party (SNP), the most powerful nationalist party in Scotland, for the 2019 European Parliament elections is “Scotland’s future belongs in Europe”.
Varied national goals under the banner of stars
What deserves more attention, however, is the significance attached to the notion of European identity. Diverse social and political groups have used it, from the far left to the far right.
The meaning they attach to this identity also varies. For the SNP, it is compatible with Scotland’s membership of the EU. The SNP combines the latter with an inclusive understanding of the Scottish nation, open to people born elsewhere in the world who live in Scotland.
In Germany, by contrast, the far-right AfD (Alternative für Deutschland, Alternative for Germany) identifies with “Europe” but is critical of the EU, and combines the former with Islamophobia. A clear example of this blend is a poster published by the party ahead of the 2019 elections, asking “Europeans” to vote for the AfD so that Europe does not become “Eurabia”.
Identification with Europe does exist, but it is a complex phenomenon, framed in several ways. It does not necessarily imply support for the EU. Likewise, European identities are not necessarily mutually exclusive with national identities. Finally, they may, though not always, rest on stereotypes against people regarded as “non-European”.
The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointments.
Is there such a thing as a 'European identity'?
Author: Nikolaos Papadogiannis, Lecturer in Modern and Contemporary History, Bangor University
The outcome of the UK’s 2016 referendum on EU membership has sent shockwaves across Europe. Among other impacts, it has prompted debates about whether a “European culture” or a “European identity” actually exists, or whether national identities still dominate.
It would be wrong, in my opinion, to write off the identification of various people with “Europe”. This identification has been the outcome of a long process, particularly in the second half of the 20th century, involving both the policies of the European Economic Community (EEC) and EU institutions and grassroots initiatives. Cross-border youth mobility since 1945 is a key example of the latter: it was often developed by groups that were not formally linked to the EEC/EU, yet which still helped develop an attachment to “Europe” in several countries of the continent.
As political scientist Ronald Inglehart showed in the 1960s, the younger people were, and the more they travelled, the more likely they were to support an ever-closer political union in Europe. More recently, Erasmus exchange programmes have also helped develop forms of identification with Europe.
Simultaneously, feeling “European” and subscribing to a national identity have been far from mutually exclusive. Numerous West Germans in the 1980s were passionate about a reunified Germany being part of a politically united Europe.
Attachment to “Europe” has also been a key component of regional nationalism in several European countries over the last three decades, such as Scottish or Catalan nationalism. A rallying cry for Scottish nationalists from the 1980s on has been “independence in Europe”, and it remains so today. Indeed, for the 2019 European Parliament elections, the primary slogan of the centre-left Scottish National Party (SNP), currently in power, is “Scotland’s future belongs in Europe”.
What requires further attention is the significance attached to the notion of European identity. Diverse social and political groups have used it, ranging from the far left to the far right, and the meaning they attach to it varies. For the SNP, it is compatible with Scotland’s EU membership. The party combines the latter with an inclusive understanding of the Scottish nation, open to people born elsewhere in the world who live in Scotland.
By contrast, Germany’s far-right AfD party (Alternative für Deutschland, Alternative for Germany) is critical of the EU, yet identifies with “Europe”, which it explicitly contrasts with Islam. A clear example is one of the party’s posters for the upcoming elections, which asks “Europeans” to vote for the AfD so that the EU doesn’t become “Eurabia”.
Identification with Europe does exist, but it is a complex phenomenon, framed in several ways, and it does not necessarily imply support for the EU. Similarly, European identities are not necessarily mutually exclusive with national identities. Finally, both the former and the latter identities may rest upon stereotypes against people regarded as “non-European”.
Nikolaos Papadogiannis does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond his academic appointment.
Climate change is putting even resilient and adaptable animals like baboons at risk
Author: Isabelle Catherine Winder, Lecturer in Zoology, Bangor University
Baboons are large, smart, ground-dwelling monkeys. They are found across sub-Saharan Africa in various habitats and eat a flexible diet including meat, eggs, and plants. And they are known opportunists – in addition to raiding crops and garbage, some even mug tourists for their possessions, especially food.
We might be tempted to assume that this ecological flexibility (we might even call it resilience) will help baboons survive on our changing planet. Indeed, the International Union for the Conservation of Nature (IUCN), which assesses extinction risk, labels five of six baboon species as “of Least Concern”. This suggests that expert assessors agree: the baboons, at least relatively speaking, are at low risk.
Unfortunately, my recent research suggests this isn’t the whole story. Even this supposedly resilient species may be at significant risk of extinction by 2070.
We know people are having huge impacts on the natural world. Scientists have gone as far as naming a new epoch, the Anthropocene, after our ability to transform the planet. Humans drive other species extinct and modify environments to our own ends every day. Astonishing television epics like Our Planet emphasise humanity’s overwhelming power to damage the natural world.
But so much remains uncertain. In particular, while we now have a good understanding of some of the changes Earth will face in the next decades – we’ve already experienced 1°C of warming as well as increases in the frequency of floods, hurricanes and wildfires – we still struggle to predict the biological effects of our actions.
In February 2019 the Bramble Cay melomys (a small Australian rodent) had the dubious honour of being named the first mammal extinct as a result of anthropogenic climate change. Others have suffered range loss, population decline and complex knock-on effects from their ecosystems changing around them. Predicting how these impacts will stack up is a significant scientific challenge.
We can guess at which species are most at risk and which are safe. But we must not fall into the trap of trusting our expectations of resilience, based as they are on a species’ current success. Our recent research aimed to test these expectations – we suspected that current success would not necessarily predict survival under changing climates, and we were right.
Baboons and climate change
Models of the effects of climate change on individual species are improving all the time. These are ecological niche models, which take information on where a species lives today and use it to explore where it might be found in future.
For the baboon study, my masters student Sarah Hill and I modelled each of the six baboon species separately, starting in the present day. We then projected their potential ranges under 12 different future climate scenarios. Our models included two different time periods (2050 and 2070), two different degrees of projected climate change (2.6°C and 6°C of warming) and three different global climate models, each with subtly different perspectives on the Earth system. These two different degrees of warming were chosen because they represent expected “best case” and “worst case” scenarios, as modelled by the Intergovernmental Panel on Climate Change.
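The 12 scenarios described above are simply every combination of time period, warming level and global climate model (2 × 2 × 3 = 12). A quick sketch shows the arithmetic; the climate-model labels here are placeholders, not the models actually used in the study:

```python
# Illustrative only: enumerate the 2 x 2 x 3 = 12 scenario combinations.
from itertools import product

periods = [2050, 2070]                         # two projection years
warming = [2.6, 6.0]                           # degrees C: "best" and "worst" case
climate_models = ["GCM-A", "GCM-B", "GCM-C"]   # hypothetical model labels

scenarios = list(product(periods, warming, climate_models))
print(len(scenarios))  # 12
```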
Our model outputs allowed us to calculate the change in the area of suitable habitat for each species under each scenario. Three of our species, the yellow, olive and hamadryas baboons, seemed resilient, as we initially expected. For yellow and olive baboons, suitable habitat expanded under all our scenarios. The hamadryas baboon’s habitat, meanwhile, remained stable.
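The habitat-change figures reported here are, at heart, percentage changes in modelled suitable area between the present day and a future scenario. A small sketch makes the calculation explicit; the area values are hypothetical, not the study's outputs:

```python
# Illustrative sketch (not the study's code): percent change in suitable
# habitat between present-day and projected future model outputs.

def habitat_change(current_km2, future_km2):
    """Percent change in suitable habitat area; negative values mean loss."""
    return (future_km2 - current_km2) / current_km2 * 100

# Hypothetical example: a range shrinking from 500,000 km^2 to 300,000 km^2
# corresponds to a 40% loss of suitable habitat.
print(round(habitat_change(500_000, 300_000), 1))  # -40.0
```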
Guinea baboons (the only one IUCN-labelled as Near Threatened) showed a small loss. Under scenarios predicting warmer, wetter conditions, they might even gain a little. Unfortunately, models projecting warming and drying predicted that Guinea baboons could lose up to 41.5% of their suitable habitat.
But Kinda baboons seemed sensitive to the same warmer and wetter conditions that might favour their Guinea baboon cousins. They were predicted to lose habitat under every model, though the loss ranged from a small one (0-22.7%) in warmer, drier conditions to 70.2% under the worst warm and wet scenario.
And the final baboon species, the chacma baboon from South Africa (the same species known for raiding tourist vehicles to steal treats), is predicted to suffer the worst habitat loss. Across our 12 scenarios, its predicted loss ranged from 32.4% to 83.5%.
The IUCN identifies endangered species using estimates of population and range size and how they have changed. Although climate change impacts are recognised as potentially causing important shifts in both these factors, climate change effect models like ours are rarely included, perhaps because they are often not available.
Our results suggest that in a few decades several baboon species might move into higher-risk categories. This depends on the extent of range (and hence population) loss they actually experience. New assessments will be required to see which category will apply to chacma, Kinda and Guinea baboons in 2070. It’s worth noting also that baboons are behaviourally flexible: they may yet find new ways to survive.
This also has wider implications for conservation practice. First, it suggests that we should try to incorporate more climate change models into assessments of species’ prospects. Second, having cast doubt on our assumption of baboon “resilience”, our work challenges us to establish which other apparently resilient species might be similarly affected. And given that the same projected changes act differently even on closely related baboon species, we presumably need to start to assess species more or less systematically, without prior assumptions, and to try to extract new general principles about climate change impacts as we work.
Sarah and I most definitely would not advocate discarding any of the existing assessment tools – the work the IUCN does is vitally important and our findings just confirm that. But our project may have identified an important additional factor affecting the prospects of even seemingly resilient species in the Anthropocene.
Isabelle Catherine Winder does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Replanting oil palm may be driving a second wave of biodiversity loss
Author: Simon Willcock, Senior Lecturer in Environmental Geography, Bangor University; Adham Ashton-Butt, Post-doctoral Research Associate, University of Hull
The environmental impact of palm oil production has been well publicised. Palm oil is found in everything from food to cosmetics, and the deforestation, ecosystem decline and biodiversity loss associated with its production are a serious cause for concern.
What many people may not know, however, is that oil palm trees – the fruit of which is used to create palm oil – have a limited commercial lifespan of 25 years. Once this period has ended, the plantation is cut down and replanted, as older trees start to become less productive and are difficult to harvest. Our research has now found that this replanting might be causing a second wave of biodiversity loss, further damaging the environment where these plantations have been created.
An often overlooked fact is that oil palm plantations actually have higher levels of biodiversity compared to some other crops. More species of forest butterflies would be lost if a forest were converted to a rubber plantation, than if it were converted to oil palm, for example. One reason for this is that oil palm plantations provide a habitat that is more similar to tropical forest than other forms of agriculture (such as soybean production). The vegetation growing beneath the oil palm canopy (called understory vegetation) also provides food and a habitat for many different species, allowing them to thrive. Lizard abundance typically increases when primary forests are converted to oil palm, for example.
This does not mean oil palm plantations are good for the environment. In South-East Asia, where 85% of palm oil is produced, the conversion of forest to oil palm plantations has caused declines in the number of several charismatic animals, including orangutans, sun bears and hornbills. Globally, palm oil production affects at least 193 threatened species, and further expansion could affect 54% of threatened mammals and 64% of threatened birds.
Banning palm oil would likely only displace, not halt, this biodiversity loss. Several large brands and retailers are already producing products using sustainably certified palm oil, as consumers reassess the impact of their purchasing. But as it is such a ubiquitous ingredient, if it were outlawed companies would need an alternative to keep producing products which include it, and developing countries would need to find something else to contribute to their economies. Production would shift to the cultivation of other oil crops elsewhere, such as rapeseed, sunflower or soybean, in order to meet global demand. In fact, since oil palm produces the highest yields per hectare – up to nine times more oil than any other vegetable oil crop – it could be argued that cultivating oil palm minimises deforestation.
That’s not to say further deforestation should be encouraged to create plantations though. It is preferable to replace plantations in situ, replanting each site so that land already allocated for palm oil production can be reused. This replanting is no small undertaking – 13m hectares of palm oil plantations are to be uprooted by the year 2030, an area nearly twice the size of Scotland. However, our study reveals that much more needs to be done in the management and processes around this replanting, in order to maximise productivity and protect biodiversity in plantations.
We found significant declines in the biodiversity and abundance of soil organisms as a consequence of palm replanting. While there was some recovery over the seven years it takes the new crop to establish, the samples we took still had nearly 20% less diversity of invertebrates (such as ants, earthworms, millipedes and spiders) than oil palm converted directly from forest.
We also found that second-wave mature oil palm trees had 59% fewer animals than the previous crop. This drastic change could have severe repercussions for soil health and the overall agro-ecosystem sustainability. Without healthy, well-functioning soil, crop production suffers.
It is likely that replanting drives these declines. Prior to replanting, heavy machinery is used to uproot old palms. This severely disrupts the soil, making upper layers vulnerable to erosion and compaction, reducing its capacity to hold water. This is likely to have a negative impact on biodiversity, which is then further reduced due to the heavy use of pesticides.
Ultimately, palm oil appears to be a necessary food product for growing populations. However, now that we have identified some of the detrimental consequences of replanting practices, it is clear that long-term production of palm oil comes at a higher cost than previously thought. The world needs to push for more sustainable palm oil, and those in the industry must explore more biodiversity-friendly replanting practices in order to lessen the long-term impacts of intensive oil palm cultivation.
Simon Willcock receives funding from the UK's Economic and Social Research Council (ESRC; ES/R009279/1 and ES/R006865/10). He is affiliated with Bangor University, and is on the Board of Directors of Alliance Earth. This article was written in collaboration with Anna Ray, a research assistant and undergraduate student studying Environmental Science at Bangor University.
Adham Ashton-Butt receives funding from The Natural Environment Research Council. He is affiliated with The University of Hull and the University of Southampton.
Game of Thrones: neither Arya Stark nor Brienne of Tarth is unusual — medieval romance heroines did it all before
Author: Raluca Radulescu, Professor of Medieval Literature and English Literature, Bangor University
Brienne of Tarth and Arya Stark are very unlike what some may expect of a typical medieval lady. The only daughter of a minor knight, Brienne has trained up as a warrior and has been knighted for her valour in the field of battle. Meanwhile Arya, a tomboyish teen when we first met her in series one, is a trained and hardened assassin. No damsels in distress, then – they’ve chosen to defy their society’s expectations and follow their own paths.
Yet while they are certainly enjoyable to watch, neither character is as unusual as modern viewers may think. While the books and television series play with modern perceptions (and misperceptions) of women’s roles, Arya and Brienne resemble the heroines of medieval times. In those days both real and fictional women took arms to defend cities and fight for their community – inspired by the courage of figures such as Boudicca or Joan of Arc. They went in disguise to look for their loved ones or ran away from home as minstrels or pilgrims. They were players, not bystanders.
Medieval audiences were regularly inspired by stories of women’s acts of courage and emotional strength. There was Josian, for example, the Saracen (Muslim) princess of the popular medieval romance Bevis of Hampton, who promises to convert to Christianity for love (fulfilling the wishes of the Christian audience). She also murders a man to whom she has been married against her wishes.
There was the lustful married lady who attempts to seduce Sir Gawain in the 14th-century poem Sir Gawain and the Green Knight, too. And there was Rymenhild, a princess who eventually marries King Horn in an early example of the romance genre – and who very much wants to break moral codes by having sex with her beloved before their wedding, which at that point has not been decided upon.
Medieval stories of such intense desire celebrate the young virgin heroine who woos the object of her desire and takes no notice of the personal, social, political and economic effects of sex before marriage. This is the case with both Arya and Brienne. Arya chooses her childhood friend Gendry to take her virginity on the eve of the cataclysmic battle against the undead. Brienne does the same with Jaime Lannister, the night after the cataclysmic battle – but only after he earns her trust over many adventures together.
Boldness and strength
It is the emotional strength and courage of these heroines that drives their stories forward rather than their relationship to the male hero. Throughout Game of Thrones, this emotional strength has also helped Arya and Brienne stay true to their missions. Arya’s continued strength has to be seen in the light of what has happened to her, however. Brienne began the story as a trained “knight” but Arya’s journey has seen her learning, through bitter experience, the skills she needs to survive.
A medieval audience would have been attuned to this message of self-reliance – especially given the everyday gendered experiences of women who ran businesses, households and countries, married unafraid of conventions, or chose not to marry.
It is not too far-fetched to think that Arya and Brienne could together lead the alliance against the evil queen Cersei, having both learned that fate reserves unlikely rewards for those who prepare well and carry on in the name of ideals rather than to improve their own status. The frequently (and most likely deliberately) unnamed heroines of medieval romance similarly prove to be resourceful – and often rose to power, leading countries or armies, without even a mention of prior training.
The medieval heroines that went unnamed provided a perfect model for women then to project themselves onto. The Duchess in the poem Sir Gowther, under duress (her husband threatens to leave her because she has not provided an heir), prays that she be given a son “no matter through what means”, and sleeps with the devil – producing the desired heir.
In the Middle English romance story of Sir Isumbras, his wife – whose name we are not told – transforms from a stereotypical courtly lady, kidnapped by a sultan, to a queen who fights against her captor. She becomes an empty shell onto which medieval women – especially those who do not come from the titled aristocracy – can project themselves. She battles alongside her husband and sons when his men desert him, with no training, only her own natural qualities to rely on.
These real and fictional heroines of the Middle Ages had no choice: they had to find solutions to seemingly impossible situations, just as Brienne and Arya have done. These two are unsung heroes, female warriors who stand in the background and don’t involve themselves in the “game”. While the men celebrate their victory against the undead White Walkers with a feast at Winterfell, Arya – whose timely assassination of their leader, the Night King, enabled the victory – shuns the limelight.
While the conclusion to the stories of Arya and Brienne is yet to be revealed, given the heroines that inspired these characters it will not be surprising if it is the women warriors – not the men – who will drive the game to its end.
Raluca Radulescu has nothing to disclose.
Grass allergies: the type of pollen may matter more than the quantity
Author: Simon Creer, Professor in Molecular Ecology, Bangor University; Georgina Brennan, Postdoctoral Research Officer, Bangor University
As the winter cold gives way to higher temperatures, longer days and the rebirth of plant life, nearly 400 million people worldwide suffer allergic reactions triggered by airborne pollen, whether from trees or from herbaceous plants. Symptoms range from itchy eyes with congestion and sneezing to the aggravation of asthma, with a cost to society that runs into the billions.
Since the 1950s, many countries around the world have kept pollen counts in order to produce forecasts for allergy sufferers. In the UK, these forecasts are provided by the Met Office in collaboration with the University of Worcester. (In France, the Réseau national de surveillance aérobiologique, a non-profit association, is responsible for monitoring the biological particles in the air that can affect allergy risk. Its bulletins are available online.)
Until now, pollen forecasts have been based on counting the total number of pollen grains present in the air: these are collected using air samplers that capture the particles on a slowly rotating sticky drum (2mm per hour).
The problem is that these forecasts cover the level of all pollens present in the air, whereas people suffer different allergic reactions depending on the type of pollen they encounter. Grass pollen, for example, is the most harmful aeroallergen – more people are allergic to it than to any other airborne allergen. Moreover, the preliminary data we have collected suggest that allergies to this pollen vary over the course of the flowering season.
Spotting the pollen
The pollen of a great many allergenic tree and plant species can be identified under the microscope. Unfortunately, this is not feasible for grass pollens, because their grains look very similar. This means it is almost impossible to determine which species they belong to by simple visual examination on a routine basis.
With the aim of improving the accuracy of counts and forecasts, we have set up a new project to develop methods for distinguishing the different types of grass pollen in the UK. The objective is to know which pollen species are present in Britain throughout the grass flowering season.
Over the past few years, our research team has explored several approaches to identifying grass pollens, among them molecular genetics. One of the methods employed by our team relies on DNA sequencing. It involves examining millions of short sections of DNA (or DNA barcode markers). These markers are specific to each species or genus of grass pollen.
This approach is called “metabarcoding” and can be used to analyse DNA from mixed communities of organisms, as well as DNA from different types of environmental sources (for example, soil, aquatic sources, honey and air). This means that we can use it to assess the biodiversity of hundreds or thousands of samples. It has thus been possible for us to analyse the DNA of pollen collected by aerial samplers placed on rooftops at 14 different locations across Britain.
Flowering season
By comparing the pollen we captured with samples from the UK plant DNA barcode library (a reference DNA database built from correctly identified grass species), we were able to identify different types of grass pollen from complex mixtures of airborne pollen. This allowed us to visualise how the different types of grass pollen are distributed across Britain over the course of the flowering season. Until now, it was not known whether the mixture of pollens present in the air changed over time, reflecting flowering on the ground, or whether the mixture steadily accumulated new species as the pollen season went on.
One might legitimately have expected the mixtures of pollens in the air to be highly varied and heterogeneous in composition – owing to the mobility of pollen grains and the fact that different species flower at different times of the season. Yet our work has revealed that this is not the case. Indeed, we found that the composition of airborne pollen reproduces the seasonal progression of grass diversity: early-flowering species first, then mid- and late-season flowering.
Thanks to complementary contemporary and historical data, we also found that as the grass flowering season progresses, the pollen present in the air closely reproduces, but with a delay, the flowering observed on the ground. In other words, over the course of the flowering season, the different types of pollen do not persist in the environment, but disappear.
The importance of this work goes beyond the mere understanding of plants. Indeed, we have accumulated evidence showing that sales of anti-allergy medications are not uniform over the grass flowering season either. We know that certain types of pollen can contribute more than others to allergies. It can therefore be supposed that when allergic symptoms are particularly severe, they result more from the presence of a given type of pollen in the air than from an increase in overall pollen quantities.
Over the coming months, we will be examining different types of pollen and the associated health data, in order to analyse the links between the biodiversity of airborne pollen and allergic symptoms. The main objective of our work is ultimately to improve forecasting, planning and prevention measures in order to limit grass allergies.
Simon Creer has received funding from the Natural Environment Research Council.
Georgina Brennan has received funding from the Natural Environment Research Council.
Ligue 1: France gets its first female top flight football referee, but the federation scores an own goal
Author: Jonathan Ervine, Senior Lecturer in French and Francophone Studies, Bangor University
As the end of the 2018-19 football season approaches, a match between Amiens and Strasbourg in France’s Ligue 1 would normally attract little attention. However, Sunday’s game has already created headlines as Stéphanie Frappart will become the first ever woman to act as a main referee in the top tier of French men’s football.
Initially, this appointment could be seen as a symbol of progress and inclusion. But the French Football Federation (FFF) announced that Frappart had been appointed as the main official for the Amiens-Strasbourg match in order to “prepare her for World Cup conditions” ahead of the 2019 Women’s World Cup in France.
The FFF’s explanation seems somewhat begrudging as it makes no reference to Frappart’s experience or talent as a match official. It arguably presents her nomination as a means to an end rather than a logical next step for someone who has officiated in Ligue 2 since 2014. Indeed, Frappart has also been a fourth official or video assistant referee in Ligue 1 several times.
Whether Frappart will establish herself as a leading referee within men’s football in France is uncertain. Pascal Garibian, technical director for refereeing in France, has said it is “still too early to say” if she will become a regular main referee in Ligue 1. In addition, it is unclear if she will referee any more top division matches this season.
It is also worth questioning to what extent officiating at Amiens-Strasbourg constitutes good preparation for this summer’s Women’s World Cup. Amiens’ home stadium can welcome 12,000 spectators, 8,000 fewer than the smallest 2019 Women’s World Cup venue. Seven of France’s nine World Cup stadiums have more than double the capacity of Amiens’ Stade de la Licorne. And Amiens has the third lowest average attendance of Ligue 1 teams during the current season.
Frappart becoming the first woman to referee a match in Ligue 1 is significant, but also somewhat paradoxical. In fact, it highlights the lack of career progression enjoyed by female officials within French men’s football – and across Europe, too.
In September 2017, Bibiana Steinhaus became the first female referee in a European main men’s football league (in Germany’s Bundesliga). But while Frappart’s appointment will see Ligue 1 become the second major European men’s league in which a woman has taken charge of a game, it has taken some time to get here.
In 1996, Nelly Viennot became the first female assistant referee in Ligue 1, yet it has taken another 23 years for the first female main referee. In a top-level career lasting from 1996-2007, Viennot was regularly an assistant referee in men’s football, but never a main referee.
Regrettably, it seems that the FFF has taken the sheen off a notable first. A request from FIFA that its member associations help match officials to “prepare in the best conditions possible” for the 2019 Women’s World Cup seems the main reason Frappart will officiate this Sunday. It is somewhat unusual for someone not selected as a top division referee at the start of the season to officiate in Ligue 1. In Germany, Bibiana Steinhaus had been listed as one of the top division referees prior to the 2017-18 season.
As a referee in Ligue 2, Frappart has at times encountered sexist attitudes. When coach of Valenciennes in 2015, David Le Frapper said that “when a woman referees in a man’s sport, things are complicated” following a match Frappart refereed. Such comments are reminiscent of Sky presenters Richard Keys and Andy Gray’s reaction to Sian Massey-Ellis’ presence as assistant referee at an English Premier League match in 2011, when they suggested that female officials “don’t know the offside rule”.
During the last decade, the FFF has provoked controversy when seeking to encourage more women to get involved in football. In 2010, they sought to boost the profile of women’s football in France via a campaign featuring Adrianna Karembeu. Several posters were based on obvious gender stereotypes.
One featured an image of female footballers in a changing room and the slogan “For once you won’t scream when seeing another girl wearing the same outfit”. The FFF had previously promoted women’s football via an image of three leading players posing naked alongside the question “Is this what we have to do for you to come to see us play?”
Nelly Viennot’s presence as the first female assistant referee in Ligue 1 did not herald the arrival of many more female officials in French men’s football. Stéphanie Frappart is still the only woman to have been the main referee in Ligue 2. It is unclear to what extent attitudes to female referees in French men’s football are evolving. It may well be several years before we realise the real impact of Frappart’s appointment as referee for the match between Amiens and Strasbourg.
Jonathan Ervine does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
How did the moon end up where it is?
Author: Mattias Green, Reader in Physical Oceanography, Bangor University; David Waltham, Professor of Geophysics, Royal Holloway
Nearly 50 years since man first walked on the moon, the human race is once more pushing forward with attempts to land on the Earth’s satellite. This year alone, China has landed a robotic spacecraft on the far side of the moon, while India is close to landing a lunar vehicle, and Israel continues its mission to touch down on the surface, despite the crash of its recent venture. NASA meanwhile has announced it wants to send astronauts to the moon’s south pole by 2024.
But while these missions seek to further our knowledge of the moon, we are still working to answer a fundamental question about it: how did it end up where it is?
On July 21, 1969, the Apollo 11 crew installed the first set of mirrors to reflect lasers targeted at the moon from Earth. The subsequent experiments carried out using these arrays have helped scientists to work out the distance between the Earth and moon for the past 50 years. We now know that the moon’s orbit has been getting larger by 3.8cm per year – it is moving away from the Earth.
This distance, and the use of moon rocks to date the moon’s formation to 4.51 billion years ago, are the basis for the giant impact hypothesis (the theory that the moon formed from debris after a collision early in Earth’s history). But if we assume that lunar recession has always been 3.8cm/year, we have to go back around ten billion years to find a time when the Earth and moon were close together (for the moon to form). This is much too long ago – but the mismatch is not surprising, and it might be explained by the world’s ancient continents and tides.
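That timescale is a one-line back-of-envelope calculation. A minimal sketch in Python, assuming a constant present-day rate (the exact headline figure depends on the assumed average rate and rounding):

```python
# Naive extrapolation: how long would the moon take to recede from
# zero separation to its present orbit at a constant 3.8 cm/yr?
# (Recession was not actually constant - that is the article's point.)
distance_cm = 384_000e5   # present Earth-moon distance, ~384,000 km in cm
rate_cm_per_yr = 3.8      # present recession rate

t_years = distance_cm / rate_cm_per_yr
print(f"{t_years / 1e9:.1f} billion years")  # far more than the moon's 4.51-billion-year age
```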
Tides and recession
The distance to the moon can be linked to the history of Earth’s continental configurations. The loss of tidal energy (due to friction between the moving ocean and the seabed) slows the planet’s spin, which forces the moon to move away from it – the moon recedes. The tides are largely controlled by the shape and size of the Earth’s ocean basins. When the Earth’s tectonic plates move around, the ocean geometry changes, and so does the tide – and with it, the rate of the moon’s retreat and how large it appears in the sky.
This means that if we know how Earth’s tectonic plates have changed position, we can work out where the moon was in relation to our planet at a given point in time.
We know that the strength of the tide (and so the recession rate) also depends on the distance between Earth and the moon. So we can assume that the tides were stronger when the moon was young and closer to the planet. As the moon rapidly receded early in its history, the tides will have become weaker and the recession slower.
The detailed mathematics that describe this evolution were first developed by George Darwin, son of the great Charles Darwin, in 1880. But his formula produces the opposite problem when we input our modern figures. It predicts that Earth and the moon were close together only 1.5 billion years ago. Darwin’s formula can only be reconciled with modern estimates of the moon’s age and distance if its typical recent recession rate is reduced to about one centimetre per year.
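Darwin’s backward integration reduces to a closed form under the standard assumption of tidal theory (not spelled out in the text) that the recession rate scales with distance as da/dt ∝ a^(-11/2). Integrating backwards, a^(13/2) decreases linearly in time, so the time back to zero separation is T = (2/13)·a₀/(da/dt)₀. A minimal sketch:

```python
# Time back to zero Earth-moon separation, assuming da/dt = k * a**(-11/2)
# (the standard tidal scaling behind George Darwin's 1880 analysis).
A0 = 3.844e8        # present Earth-moon distance, metres
RATE_NOW = 0.038    # present recession rate, metres per year (3.8 cm/yr)

def closure_time(a0, rate):
    """Integrate da/dt = k*a**(-11/2) backwards: a**(13/2) is linear in t."""
    return (2.0 / 13.0) * a0 / rate

print(f"{closure_time(A0, RATE_NOW) / 1e9:.2f} billion years")  # ~1.56: Darwin's problem

# Recession rate that would instead give the moon's radiometric age:
rate_needed = (2.0 / 13.0) * A0 / 4.51e9
print(f"{rate_needed * 100:.1f} cm/yr")  # ~1.3: today's 3.8 cm/yr looks anomalously fast
```

The first figure reproduces the 1.5-billion-year problem; the second shows that reconciling Darwin’s formula with the moon’s age needs a typical rate close to one centimetre per year.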
The implication is that today’s tides must be abnormally large, driving the 3.8cm-per-year recession rate. The reason for these large tides is that the present-day North Atlantic Ocean is just the right width and depth to be in resonance with the tide: its natural period of oscillation is close to the tidal period, allowing the tides to grow very large. This is much like a child on a swing, who moves higher if pushed with the right timing.
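The swing analogy is ordinary resonance in a damped, driven oscillator. A toy illustration (not a tidal model; the units and parameters are arbitrary, chosen purely to show the resonance peak):

```python
import math

def amplitude(omega, omega0=1.0, gamma=0.1, f0=1.0):
    """Steady-state amplitude of a damped oscillator driven at frequency omega."""
    return f0 / math.sqrt((omega0**2 - omega**2)**2 + (gamma * omega)**2)

on_resonance = amplitude(1.0)    # forcing period matches the basin's natural period
off_resonance = amplitude(0.5)   # basin "detuned" from the tide
print(round(on_resonance / off_resonance, 1))  # the matched basin responds ~7.5x more strongly
```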
But go back in time – a few million years is enough – and the North Atlantic is sufficiently different in shape that this resonance disappears, and so the moon’s recession rate will have been slower. As plate tectonics moved the continents around, and as the slowing of Earth’s rotation changed the length of days and the period of tides, the planet would have slipped in and out of similar strong-tide states. But we don’t know the details of the tides over long periods of time and, as a result, we cannot say where the moon was in the distant past.
One promising approach to resolve this is to try to detect Milankovitch cycles from physical and chemical changes in ancient sediments. These cycles come about because of variations in the shape and orientation of Earth’s orbit, and variations in the orientation of Earth’s axis. These produced climate cycles, such as the ice ages of the last few million years.
Most Milankovitch cycles don’t change their periods over Earth’s history but some are affected by the rate of Earth’s spin and the distance to the moon. If we can detect and quantify those particular periods, we can use them to estimate day-length and Earth-moon distance at the time the sediments were deposited. So far, this has only been attempted for a single point in the distant past. Sediments from China suggest that 1.4 billion years ago the Earth-moon distance was 341,000km (its current distance is 384,000km).
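Those two distances allow a rough consistency check: the average recession rate implied since 1.4 billion years ago comes out below today’s 3.8cm per year, in line with the idea that the present rate is unusually high. A back-of-envelope sketch:

```python
# Average recession rate implied by the Chinese sediment estimate.
d_now = 384_000e5    # present Earth-moon distance, cm (384,000 km)
d_then = 341_000e5   # distance 1.4 billion years ago, cm (341,000 km)

avg_rate = (d_now - d_then) / 1.4e9   # cm per year
print(f"{avg_rate:.1f} cm/yr")        # ~3.1, below today's 3.8 cm/yr
```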
Now we are aiming to repeat these calculations for sediments in hundreds of locations laid down at different time periods. This will provide a robust and near-continuous record of lunar recession over the past few billion years, and give us a better appreciation of how tides changed in the past. Together, these interrelated studies will produce a consistent picture of how the Earth-moon system has evolved through time.
Mattias Green receives funding from the Natural Environment Research Council.
David Waltham receives funding from NERC.
DNA analysis finds that type of grass pollen, not total count, could be important for allergy sufferers
Author: Simon Creer, Professor in Molecular Ecology, Bangor University; Georgina Brennan, Postdoctoral Research Officer, Bangor University
As the winter cold is replaced by warmer temperatures, longer days and an explosion of botanical life, up to 400m people worldwide will develop allergic reactions to airborne pollen from trees, grasses and weeds. Symptoms will range from itchy eyes, congestion and sneezing, to the aggravation of asthma and an associated cost to society that runs into the billions.
Ever since the 1950s, countries around the world have been recording pollen counts to create forecasts for allergy sufferers. In the UK this forecast is provided by the Met Office in collaboration with the University of Worcester. To date, pollen forecasts have been based on counting the total number of grains of pollen in the air from trees, weeds and grass. The pollen is collected using air sampling machines that capture the particles on a slowly rotating sticky drum.
However, while these forecasts focus on the level of all pollens in the air, people suffer from allergic reactions to different types of pollen. Grass pollen, for example, is the most harmful aeroallergen – more people are allergic to grass pollen than any other airborne allergen. And now our own preliminary health data suggests that allergies to this pollen vary across the grass flowering season.
In an effort to improve the accuracy of pollen counts and forecasts, we have been working on a new project to distinguish between different types of grass pollen in the UK. The aim is to find out what species of pollen are present across Britain throughout the grass flowering season.
Microscopes are used to identify the pollen of many allergenic tree and weeds, but unfortunately this can’t be done for grass pollen, since all grass pollen grains look highly similar underneath a microscope. This means it is almost impossible to routinely distinguish the species of grass they come from using visual observation.
So, over the past few years, our research team, PollerGEN, has been investigating whether a new wave of approaches, including molecular genetics, can be used to identify different airborne grass pollens instead. One method that our team has employed to identify the pollen relies on using DNA sequencing to examine millions of short sections of DNA (also called barcode markers). These markers are unique to each species or genus of grass pollen.
This approach is called “metabarcoding” and it can be used to analyse DNA derived from mixed communities of organisms, as well as DNA from many different types of environmental sources (for example, soil, aquatic sources, honey and the air). It means that we can assess the biodiversity of hundreds to thousands of samples. In particular, it has allowed us to analyse pollen DNA collected by aerial samplers at 14 rooftop locations across Britain.
By comparing the pollen we captured to samples in the UK plant DNA barcode library (an established reference DNA database of correctly identified grass species) we have been able to identify different types of grass pollen from complex mixtures of airborne pollen. This has allowed us to visualise how different types of grass pollen are distributed throughout Britain across the grass flowering season.
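In spirit, that comparison step reduces to looking up each sequenced marker in a reference library and tallying the taxa. The Python sketch below is a deliberately simplified toy: the barcode sequences and species assignments are invented, and real metabarcoding pipelines use curated databases and tolerant sequence matching rather than exact dictionary lookup.

```python
from collections import Counter

# Hypothetical reference library: barcode marker sequence -> grass taxon.
REFERENCE = {
    "ATCGGCTA": "Lolium perenne",      # perennial ryegrass
    "ATCGGCTT": "Holcus lanatus",      # Yorkshire fog
    "GGCATTCA": "Dactylis glomerata",  # cock's-foot
}

def classify(reads):
    """Tally marker reads from a mixed aerial sample by taxon."""
    return Counter(REFERENCE.get(read, "unclassified") for read in reads)

sample = ["ATCGGCTA", "ATCGGCTA", "GGCATTCA", "TTTTTTTT"]
print(classify(sample))  # two ryegrass reads, one cock's-foot, one unknown
```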
While there was a real chance that aerial pollen mixtures could be very varied and haphazard – due to the mobility of pollen in the environment and the fact that different grasses flower at different times of the season – our newly published study has found that this is not the case. We have found that the composition of airborne pollen resembles a seasonal progression of diversity, featuring early, then mid and late-season flowering grasses.
By combining other historical and contemporary data, we also found that as the grass flowering season progresses, airborne pollen appears in a predictable but delayed sequence, tracking the first flowering times recorded on the ground. This means that different types of grass pollen are not present throughout each period of the flowering season. They disappear from the environmental mixture.
This research is important to more than just our understanding of plants. Our own emerging evidence suggests that sales of over-the-counter allergy medications are not uniform throughout the grass flowering season. So certain types of grass pollen may be contributing more to allergenic disease than others. It could be that when symptoms are particularly bad, allergies are caused by the type of grass pollen in the air, not just the amount.
In the next few months, we will be looking into different forms of pollen and health data, to investigate links between the biodiversity of aerial pollen and allergenic symptoms. The overarching aim of our work is to eventually provide better forecasting, planning and prevention measures so that fewer people suffer from grass allergenic disease.
Simon Creer receives funding from The Natural Environment Research Council.
Georgina Brennan receives funding from The Natural Environment Research Council.
The power of language: words translate our thoughts and influence the way we think
Author: Guillaume Thierry, Professor of Cognitive Neuroscience, Bangor University
Have you ever worried, in your school years or later in life, that time may be running out to achieve all your goals? If so, would it be easier to convey this feeling to others if there were a word meaning exactly that? In German, there is. The feeling of panic associated with one’s opportunities appearing to run out is called Torschlusspanik.
German has a rich collection of such terms, made up of two, three or more words connected to form a superword or compound word. Compound words are particularly powerful because they mean more than the sum of their parts. Torschlusspanik, for example, is literally composed of “gate” – “closing” – “panic”.
If you arrive at the train station a little late and see your train’s doors still open, you may have experienced a concrete form of Torschlusspanik, prompted by the characteristic beeps as the train doors are about to close. But this German compound word is associated with more than its literal meaning. It evokes something more abstract, referring to the feeling that life is progressively shutting the door of opportunities as time goes by.
English, too, has many compound words. Some combine concrete words, such as “seahorse”, “butterfly” or “turtleneck”. Others are more abstract, such as “backwards” or “whatsoever”. And of course, as in German or French, English compound words are also superwords, because their meaning often differs from the meaning of their parts. A seahorse is not a horse, a butterfly is not a fly, turtles do not wear turtlenecks, and so on.
One remarkable feature of compound words is that they do not translate well from one language to another, at least when their parts are translated literally. Who would have thought that a “carry-sheets” is a wallet (porte-feuille in French), or that a “support-throat” is a bra (soutien-gorge)?
This raises the question of what happens when a word does not readily translate into another language. For example, what happens when a native German speaker tries to convey in English that they have just been rushing about because of Torschlusspanik? Naturally, they will resort to paraphrase; that is, they will construct a narrative with examples to make their interlocutor understand what they are trying to say.
But then, this raises another, bigger question: do people who have words that simply do not translate into other languages have access to different concepts? Take hiraeth, for example, a beautiful Welsh word famous for being essentially untranslatable. Hiraeth is meant to convey the feeling associated with the bittersweet memory of missing something or someone, while being grateful for their existence.
Hiraeth is not nostalgia, it is not anguish, or frustration, or melancholy, or regret. It also conveys the feeling one experiences when asking someone to marry them and being turned down.
Different words, different minds?
The existence of a word in Welsh to convey this particular feeling raises a fundamental question about the relationship between language and thought. Philosophers such as Herodotus (450 BC) asked this question in ancient Greece. It resurfaced in the middle of the last century, under the impetus of Edward Sapir and his student Benjamin Lee Whorf, and has become known as the linguistic relativity hypothesis.
Linguistic relativity is the idea that language, which most people agree originates in and expresses human thought, can feed back on thinking, influencing thought in return. Could different words or different grammatical constructions, then, “shape” thinking differently in speakers of different languages? The idea has caught the attention of popular culture, appearing in the science fiction film Arrival.
Although the idea is intuitive to some, exaggerated claims have been made about the extent of vocabulary diversity in some languages. Such claims have prompted famous linguists to write satirical essays such as “the great Eskimo vocabulary hoax”, in which Geoff Pullum denounces the fantasy about the number of words Eskimos use to refer to snow. Yet, whatever the actual number of words for snow in Eskimo languages, Pullum fails to address an important question: what do we actually know about Eskimos’ perception of snow?
Despite much criticism of the linguistic relativity hypothesis, experimental research seeking scientific evidence of differences between speakers of different languages has been accumulating. For instance, Panos Athanasopoulos at Lancaster University has made the striking observation that having particular words to distinguish colour categories goes hand in hand with the ability to appreciate colour contrasts.
Thus, he points out, native speakers of Greek, who have distinct terms for light and dark blue (ghalazio and ble respectively), tend to consider corresponding shades of blue as more different from one another than native speakers of English, who use the same term “blue” to describe them.
But thinkers including Steven Pinker at Harvard are unimpressed, arguing that such effects are trivial and uninteresting, because individuals taking part in experiments are likely to use language in their head when making judgements about colours – so their behaviour is superficially influenced by language, while everyone sees the world in the same way.
To advance this debate, I believe we need to study the human brain, by measuring perception more directly, preferably within the small window of time that precedes mental access to language. This is now possible thanks to neuroscientific methods, and – remarkably – early results lean in favour of Sapir and Whorf’s intuition.
So, yes, like it or not, it may well be that having different words means having differently structured minds.
Guillaume Thierry has received funding from the European Research Council, the Economic and Social Research Council, the British Academy, the Arts and Humanities Research Council, the Biotechnology and Biological Research Council, and the Arts Council of Wales.
Our Planet is billed as an Attenborough documentary with a difference but it shies away from uncomfortable truths
Author: Julia P G Jones, Professor of Conservation Science, Bangor University
Over six decades, Sir David Attenborough’s name has become synonymous with high-quality nature documentaries. But while for his latest project, the Netflix series Our Planet, he is once again explaining incredible shots of nature and wildlife – this series is a little different from his past films. Many of his previous smash hits have portrayed the natural world as untouched and perfect, but Our Planet is billed as putting the threats facing natural ecosystems front and centre of the narrative. In the opening scenes we are told: “For the first time in human history the stability of nature can no longer be taken for granted.”
This is a very significant departure – and one which is arguably long overdue. Those of us who study the pressures on wild nature have been frustrated that nature documentaries give the impression that everything is OK. Some argue that they may do more harm than good by giving viewers a sense of complacency.
Conservation scientists were expecting that the new series wouldn’t shy away from the awful truth: the wonders shown in these mesmerising nature programmes are tragically reduced – and many are at risk of being lost forever.
I had the privilege of seeing the Our Planet team at work back in 2015 (these films take years to make). I spent three weeks at the camp in western Madagascar where they were working on their forest film. While the camera crew were working day and night filming fossa (lemur-hunting carnivores), and trying to get the perfect footage of leaf bugs producing honeydew (the series is worth watching for this sequence alone), the team was also digging deep into the complex issues of what is happening to this wondrous biodiversity. Their researcher spent many hours with Malagasy conservation scientist Rio Heriniaina talking to local community leaders about the challenges they face and the reasons for the very rapid rate of forest loss in the region.
However, none of that fascinating footage made the final cut. Following a scene showing fossa mating, we are told that their forests have since been burnt. This was already happening in 2015. As Heriniaina told me:
Madagascar’s dry forests are vanishing before our eyes. Every burning season large areas of forest go up in flames to clear space for peanuts and corn. There is no simple answer as to why, and no simple solutions. Poverty plays a role but so does corruption and the influence of powerful people who profit from the destruction.
This is my main critique of Our Planet. Despite being billed as an unflinching look at the threats facing the intricate and endlessly fascinating ecosystems being depicted, it actually tends to shy away from showing these threats or, even more importantly, addressing the question of what can be done to resolve them. Like previous documentaries, shots have been carefully positioned to cut out evidence of human influence.
In my three decades of watching wildlife documentaries, I remember only one moment which broke from this tradition. In Simon Reeve’s 2012 series about the Indian Ocean, he showed people living in and around the habitats he was filming. He humanised them. He was also honest about how limited the picturesque natural habitats he was filming were. In a memorable sequence showing a sifaka leaping between trees, he asked the cameraman to turn around, revealing the miles of sisal plantation which surround the tiny remnant of forest where endless crews go to film these charismatic lemurs. When Planet Earth II came out in 2016 I was disappointed to see a return to more of the same – that same remnant forest in southern Madagascar appeared, but without the context.
As with previous documentaries, you could come away from Our Planet thinking the places being portrayed are completely separate from people. Human presence in and around many of these habitats has been erased. However, to be successful, conservation can’t ignore people.
Maybe it is churlish to complain that Our Planet, like other such films, avoids showing the uncomfortable truth about just how threatened so much of nature really is. Perhaps the pure and unsullied vision is what makes them so popular. So many of us working in conservation were drawn in through watching Sir David Attenborough’s other films as children. By introducing viewers to fascinating facts about ecology (who knew that winds blowing across deserts feed life in the ocean?) and the mind-boggling behaviours of birds (such as the manakins shown doing a shuffle dance), Our Planet will engage a whole new generation.
Researchers have shown time and time again that knowledge isn’t enough to change people’s behaviour. However feeling connected with nature does matter. One thing the series will certainly do is make people fall in love with the planet. That is certainly a good thing.
Julia P G Jones has received funding from NERC and the Leverhulme Trust to support her research in Madagascar.
Food banks are becoming institutionalised in the UK
Author: Dave Beck, Postdoctoral Teaching Fellow in Sociology, Bangor University
I was one of 58 academics, activists and food writers who published a stark open letter warning against food banks becoming institutionalised in the UK. We believe the country is now reaching a point where “left behind people” and retailers’ “leftover food” share a symbiotic relationship. Food banks are becoming embedded within welfare provision, fuelled by corporate involvement and ultimately creating an industry of poverty.
We advocate challenging this link between food waste and food poverty. The UK has a welfare system that should be there for people in their time of need. But instead food banks – of which there are at least 2,000 across the country – are in receipt of government subsidies supporting redistribution, and fresh food is being introduced through publicly funded corporate philanthropy.
While people are certainly being helped by food banks in their moments of need, we cannot accept that they solve long-term poverty. In the US and Canada, academic Andy Fisher has highlighted that food bank institutionalisation has been politically and corporately encouraged over the last 35 years, but this has done nothing to alleviate food poverty. It has, in fact, only served corporate interest and entrenched food poverty further.
How has this happened?
For my PhD research I looked into the rise of food banks and critically examined their role as a new and emerging provider of aid for people struggling with welfare reform. My work also assessed the structural causes of food poverty associated with the Welfare Reform Act 2012, and the changing language of social security.
Austerity policies provided the initial fertile ground which led to many more people needing to access food banks. Under welfare reform, access to welfare became subject to heavy conditions. People came under heightened sanctions if they failed to follow their claimant commitment, while the so-called bedroom tax saw some losing housing benefit entitlement if they had a spare bedroom in their council or housing association-owned property.
This paved the way for food banks to fill the void left behind by retrenched welfare. Now food banks are increasingly accepting large donations and working with big retailers and food redistribution organisations, as they become an accepted part of UK life.
For food banks to become part of an institutionalised provision, leading food poverty expert Graham Riches argues that there is a three-stage process. First, there needs to be a national food bank provider, for example Feeding America in the US, and Food Banks Canada. These organisations coordinate and support linked food pantries under their banner. Within the UK, the Trussell Trust, with a strong network of 427 food banks (plus associated distribution centres), has a similar role.
Second, this national provider must create partnership alliances with food companies and food redistribution organisations. For the last seven years, the Trussell Trust has worked with UK food retailer Tesco. Recently, it has also collaborated with Fareshare and Asda to increase redistribution to its food banks.
Contacted by The Conversation for this article, the Trussell Trust insists it is “campaigning to create a future without food banks”. Emma Revie, chief executive, highlighted its role in campaigning for changes to the benefits system to properly support people who need help. She added there was no desire for food banks to “become the new normal”.
But the engagement of retail giants serves to embed food banks, as it combines two socially distinct problems – food surplus and food poverty – while doing nothing to solve the structural issues of poverty. It serves the retailer well too, by improving their corporate social responsibility (large retailers are seen to be acting for the social good of their community), not to mention the increase in sales through their tills. Shoppers are purchasing their donations from these retailers and putting them in store donation bins to be taken to the food banks.
The third stage is an increasing influence and relationship with national government. A national food bank provider can then emerge as an accepted response to declining welfare. This has happened in the US and Canada, although UK food banks at present are still in a campaigning position.
Not the new normal
However, I think that food banks also need to complete two more stages for there to be complete institutionalisation. Fourth, through their partnership with larger organisations, food banks recognise the need to invest in facilities and transport to deal with redistributed food, especially if it includes fresh food. They also begin to invest in time and energy from dedicated volunteers who make food banks warm and welcoming places. This is common now in North America and has also already begun in the UK, potentially creating an air of permanence about them.
Fifth and finally, when food banks are truly institutionalised we will see them accepted by society as being an adequate substitute for welfare, especially for “less deserving” people. This recognition was evidenced when Asda removed all unmanned food bank collection baskets in February 2016, signalling the end of customers’ donations. Following a social media uproar, and a challenge put forward by the charities affected, Asda reinstated the baskets.
Food bank collection baskets in supermarkets are now commonplace. Their removal and subsequent disquiet shows how there is social acceptance of food banks. People realise the value of them for those in need, fuelling the process of embedding food banks, not just within society, but within our social conscience.
But we need to remember that food poverty has no place within our society. We should be campaigning for change, not acceptance of a new normal. As the US and Canada have seen, once food banks become embedded, they do not go away. Food banks may be vital in times of crisis, but they are not a substitute for proper support.
Editor’s note: This article was updated to amend the number of food banks in the UK from 3,000 to “at least 2,000”
Dave Beck does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Snake venom can vary in a single species — and it’s not just about adaptation to their prey
Authors: Wolfgang Wüster, Senior Lecturer in Zoology, Bangor University, and Giulia Zancolli, Associate Research Scientist, Université de Lausanne
Few sights and sounds are as emblematic of the North American southwest as a defensive rattlesnake, reared up, buzzing, and ready to strike. The message is loud and clear, “Back off! If you don’t hurt me, I won’t hurt you.” Any intruders who fail to heed the warning can expect to fall victim to a venomous bite.
But the consequences of that bite are surprisingly unpredictable. Snake venoms are complex cocktails made up of dozens of individual toxins that attack different parts of the target’s body. The composition of these cocktails is highly variable, even within single species. Biologists have come to assume that most of this variation reflects adaptation to what prey the snakes eat in the wild. But our study of the Mohave rattlesnake (Crotalus scutulatus, also known as the Mojave rattlesnake) has uncovered an intriguing exception to this rule.
A 20-minute drive can take you from a population of this rattlesnake species with a highly lethal neurotoxic venom, causing paralysis and shock, to one with a haemotoxic venom, causing swelling, bruising, blistering and bleeding. The neurotoxic venom (known as venom A) can be more than ten times as lethal as the haemotoxic venom (venom B), at least to lab mice.
The Mohave rattlesnake is not alone in having different venoms like this – several other rattlesnake species display the same variation. But why do we see these differences? Snake venom evolved to subdue and kill prey. One venom may be better at killing one prey species, while another may be more toxic to different prey. Natural selection should favour different venoms in snakes eating different prey – it’s a classic example of evolution through natural selection.
This idea that snake venom varies due to adaptation to eating different prey has become widely accepted among herpetologists and toxinologists. Some have found correlations between venom and prey. Others have shown prey-specific lethality of venoms, or identified toxins fine-tuned for killing the snakes’ natural prey. The venom of some snakes even changes along with their diet as they grow.
We expected the Mohave rattlesnake to be a prime example of this phenomenon. The extreme differences in venom composition, toxicity and mode of action (whether it is neurotoxic or haemotoxic) seem an obvious target for natural selection for different prey. And yet, when we correlated differences in venom composition with regional diet, we were shocked to find there is no link.
In the absence of adaptation to local diet, we expected to see a connection between gene flow (transfer of genetic material between populations) and venom composition. Populations with ample gene flow would be expected to have more similar venoms than populations that are genetically less connected. But once again, we drew a blank – there is no link between gene flow and venom. This finding, together with the geographic segregation of the two populations with different venoms, suggests that instead there is strong local selection for venom type.
The next step in our research was to test for links between venom and the physical environment. Finally, we found some associations. The haemotoxic venom is found in rattlesnakes which live in an area which experiences warmer temperatures and more consistently low rainfall compared to where the rattlesnakes with the neurotoxic venom are found. But even this finding is deeply puzzling.
It has been suggested that, as well as killing prey, venom may also help digestion. Rattlesnakes eat large prey in one piece, and then have to digest it in a race against decay. A venom that starts predigesting the prey from the inside could help, especially in cooler climates where digestion is more difficult.
But the rattlesnakes with haemotoxic venom B, which better aids digestion, are found in warmer places, while snakes from cooler upland deserts invariably produce the non-digestive, neurotoxic venom A. Yet again, none of the conventional explanations make sense.
Clearly, the selective forces behind the extreme venom variation in the Mohave rattlesnake are complex and subtle. A link to diet may yet be found, perhaps through different kinds of venom resistance in key prey species, or prey dynamics affected by local climate. In any case, our results reopen the discussion on the drivers of venom composition, and caution against the simplistic assumption that all venom variation is driven by the species composition of regional diets.
From a human perspective, variation in venom composition is the bane of anyone working on snakebite treatments, or antidote development. It can lead to unexpected symptoms, and antivenoms may not work against some populations of a species they supposedly cover. Anyone living within the range of the Mohave rattlesnake can rest easy though – the available antivenoms cover both main venom types.
Globally, however, our study underlines the unpredictability of venom variation, and shows again that there are no shortcuts to understanding it. Those developing antivenoms need to identify regional venom variants and carry out extensive testing to ensure that their products are effective against all intended venoms.
Wolfgang Wüster receives funding from The Leverhulme Trust.
Giulia Zancolli receives funding from Santander Early Career Research Scholarship.
The power of language: we translate our thoughts into words, but words also affect the way we think
Author: Guillaume Thierry, Professor of Cognitive Neuroscience, Bangor University
Have you ever worried in your student years or later in life that time may be starting to run out to achieve your goals? If so, would it be easier conveying this feeling to others if there was a word meaning just that? In German, there is. That feeling of panic associated with one’s opportunities appearing to run out is called Torschlusspanik.
German has a rich collection of such terms, made up of often two, three or more words connected to form a superword or compound word. Compound words are particularly powerful because they are (much) more than the sum of their parts. Torschlusspanik, for instance, is literally made of “gate”-“closing”-“panic”.
If you get to the train station a little late and see your train’s doors still open, you may have experienced a concrete form of Torschlusspanik, prompted by the characteristic beeps as the train doors are about to close. But this compound word of German is associated with more than the literal meaning. It evokes something more abstract, referring to the feeling that life is progressively shutting the door of opportunities as time goes by.
English too has many compound words. Some combine rather concrete words like “seahorse”, “butterfly”, or “turtleneck”. Others are more abstract, such as “backwards” or “whatsoever”. And of course in English too, compounds are superwords, as in German or French, since their meaning is often distinct from the meaning of its parts. A seahorse is not a horse, a butterfly is not a fly, turtles don’t wear turtlenecks, etc.
One remarkable feature of compound words is that they don’t translate well at all from one language to another, at least when it comes to translating their constituent parts literally. Who would have thought that a “carry-sheets” is a wallet – porte-feuille – or that a “support-throat” is a bra – soutien-gorge – in French?
This begs the question of what happens when words don’t readily translate from one language to another. For instance, what happens when a native speaker of German tries to convey in English that they just had a spurt of Torschlusspanik? Naturally, they will resort to paraphrasing, that is, they will make up a narrative with examples to make their interlocutor understand what they are trying to say.
But then, this begs another, bigger question: Do people who have words that simply do not translate in another language have access to different concepts? Take the case of hiraeth for instance, a beautiful word of Welsh famous for being essentially untranslatable. Hiraeth is meant to convey the feeling associated with the bittersweet memory of missing something or someone, while being grateful for their existence.
Hiraeth is not nostalgia, it is not anguish, or frustration, or melancholy, or regret. And no, it is not homesickness, as Google Translate may lead you to believe, since hiraeth also conveys the feeling one experiences when they ask someone to marry them and they are turned down, hardly a case of homesickness.
Different words, different minds?
The existence of a word in Welsh to convey this particular feeling poses a fundamental question on language–thought relationships. Asked in ancient Greece by philosophers such as Herodotus (450 BC), this question has resurfaced in the middle of the last century, under the impetus of Edward Sapir and his student Benjamin Lee Whorf, and has become known as the linguistic relativity hypothesis.
Linguistic relativity is the idea that language, which most people agree originates in and expresses human thought, can feed back to thinking, influencing thought in return. So, could different words or different grammatical constructs “shape” thinking differently in speakers of different languages? Being quite intuitive, this idea has enjoyed quite a bit of success in popular culture, lately appearing in a rather provocative form in the science fiction movie Arrival.
Although the idea is intuitive for some, exaggerated claims have been made about the extent of vocabulary diversity in some languages. Exaggerations have enticed illustrious linguists to write satirical essays such as “the great Eskimo vocabulary hoax”, where Geoff Pullum denounces the fantasy about the number of words used by Eskimos to refer to snow. However, whatever the actual number of words for snow in Eskimo, Pullum’s pamphlet fails to address an important question: what do we actually know about Eskimos’ perception of snow?
No matter how vitriolic critics of the linguistic relativity hypothesis may be, experimental research seeking scientific evidence for the existence of differences between speakers of different languages has started accumulating at a steady pace. For instance, Panos Athanasopoulos at Lancaster University has made striking observations that having particular words to distinguish colour categories goes hand-in-hand with appreciating colour contrasts. So, he points out, native speakers of Greek, who have distinct basic colour terms for light and dark blue (ghalazio and ble respectively) tend to consider corresponding shades of blue as more dissimilar than native speakers of English, who use the same basic term “blue” to describe them.
But scholars including Steven Pinker at Harvard are unimpressed, arguing that such effects are trivial and uninteresting, because individuals engaged in experiments are likely to use language in their head when making judgements about colours – so their behaviour is superficially influenced by language, while everyone sees the world in the same way.
To progress in this debate, I believe we need to get closer to the human brain, by measuring perception more directly, preferably within the small fraction of time preceding mental access to language. This is now possible, thanks to neuroscientific methods and – incredibly – early results lean in favour of Sapir and Whorf’s intuition.
So, yes, like it or not, it may well be that having different words means having differently structured minds. But then, given that every mind on earth is unique and distinct, this is not really a game changer.
Guillaume Thierry has received funding from the European Research Council, the Economic and Social Research Council, the British Academy, the Arts and Humanities Research Council, the Biotechnology and Biological Research Council, and the Arts Council of Wales.
Why Paris is the perfect city to introduce break dancing to the Olympics
Author: Jonathan Ervine, Senior Lecturer in French and Francophone Studies, Bangor University
Along with surfing, climbing and skateboarding, break dancing has been proposed for inclusion at the Paris 2024 Olympic Games. While fans of the sports have been delighted by the news, it has provoked some criticism too, not least from followers of sports such as squash and karate which will not be considered for the 2024 games.
But the inclusion of break dancing in Paris 2024 would not be a complete surprise. Indeed, there are several reasons why it would actually make sense. Firstly, break dancing proved itself as a popular event when it was included in the Youth Olympics for the first time at Buenos Aires in 2018. Secondly, the launch of break dancing as an Olympic sport in 2024 would fit with the very ethos of the Paris games.
The Paris 2024 organising committee plan to locate Olympic events in two key areas – a Central Paris zone and a Greater Paris zone. The Olympic Marathon will pass many central Parisian landmarks, archery will take place near the Eiffel Tower, at Esplanade des Invalides, and the road cycling will travel along the Champs-Elysees. Meanwhile, athletics events as well as the opening and closing ceremonies will take place outside central Paris, in the Stade de France.
So why does this mean that break dancing should have a place in the 2024 games? The Stade de France – like much of the Greater Paris zone – is located in Seine-Saint-Denis, a part of Paris’s suburban fringe that is said to be the birthplace of hip hop in France. Including an event like break dancing would not just be a big moment for urban culture worldwide, but important for French culture in the capital too.
Hip hop culture is big in France overall. Indeed, the hip hop market in France is now the second largest in the world, after the USA. And since the 1980s, break dancing, rap music, and graffiti have been particularly popular in the often-impoverished “banlieues” outside many major French cities.
However, French politicians have often been suspicious of break dancing. Within French rap music, there is an at times aggressive critique of French politicians and the police. Leading rap groups such as NTM, Sniper and La Rumeur have used their music to blame both groups for injustices and inequalities experienced by young people in the banlieues.
In an attempt to change negative perceptions, several films, including Jean-Pierre Thorn’s Génération hip hop ou le mouv’ des ZUP (1996), Faire kiffer les anges (1997) and On n'est pas des marques de vélo (2003), have shown how important hip hop culture has been in giving young people from such areas a powerful means of expression. Thorn’s 2010 film 93, La Belle Rebelle sought to reinforce the idea that areas such as Seine-Saint-Denis are characterised by cultural diversity and dynamism. The film showed how many varied performers have come from the often stigmatised area, including well-known figures such as Serge Teyssot-Gay from the rock group Noir Désir, slam artist Grand Corps Malade and members of the iconic French rap group NTM.
Professor Dayna Oscherwitz has argued too that hip hop culture has become the dominant vehicle for urban youth from the banlieues to articulate their vision of the world. She says that it allows them to describe the reality of life in the banlieues, and to highlight the problems they face.
Including break dancing at Paris 2024 would connect the games with the urban culture of the area surrounding the Stade de France. It would see the French capital embracing a discipline often associated with its outer suburbs rather than the city centre, and provide a means to engage with young people too. It may even go some way to dispelling the negative reports more often coming out of these areas.
Prior to London 2012, sports activist Mark Perryman argued that the Olympics can, and should, become more inclusive. Crucially, Perryman argued that the Olympics would be more successful if more events were free for spectators to attend. He cited the Tour de France as an example of a highly profitable major sporting event that is free for spectators. Perryman also argued that the Olympics should favour sports which are accessible to participants because they do not require expensive equipment. This last point provides a good argument for the inclusion of break dancing. No specialist equipment or professional training is necessarily needed to begin break dancing.
However, it is important to add a note of caution. If Olympic break dancing is to successfully engage young people from Paris’s banlieues, this will partially depend on them being able to buy tickets. The distribution and pricing of tickets for some Olympic events attracted criticism at Rio 2016 and London 2012. Empty seats were visible at several venues, notably due to tickets remaining unsold or being given to sponsors who did not use them.
On one hand, the symbolic importance of including break dancing in the Paris 2024 games should perhaps not be overstated. However, this one event could help anchor the games within the areas in which many venues will be located, as well as re-energise the Olympic movement for a young, urban audience both in France and worldwide.
Jonathan Ervine does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond those mentioned above.
Why the pine marten is not every red squirrel's best friend
Authors: Craig Shuttleworth, Honorary Visiting Research Fellow, Bangor University, and Matt Hayward, Associate Professor, University of Newcastle
Pine martens are returning to areas of the UK after an absence of nearly a century. Following releases in mid-Wales during 2015, reintroductions are proposed in north Wales and southern England for 2019.
The pine marten is a small native carnivore that inhabits a range of woodland habitats. It’s an excellent climber and often nests within tree cavities. This opportunistic predator has a varied diet including fruit, eggs, songbirds and small mammals.
By the 1920s, pine martens were virtually extinct in the UK after centuries of persecution to protect game birds and poultry. Only a population in north-west Scotland and small numbers in northern Wales and England survived. With UK legal protection, their range has expanded since the 1980s, increasing their encounters with the grey squirrel.
Since George Monbiot penned "how to eradicate grey squirrels without firing a shot" in 2015, the media has championed the charismatic mammal as the saviour of the UK's embattled red squirrels.
The media message is simple: the return of pine martens will herald the decline or even eradication of grey squirrels, which, since their arrival from North America in 1876, have caused regional extinctions of the native red squirrel. That's because pine martens supposedly prefer eating greys, while leaving reds alone.
The optimism around pine martens in the UK originated from research in Ireland and Scotland. In Scotland, scientists studied forests containing pine martens, red squirrels and grey squirrels. The more pine martens they recorded using a woodland area, the more likely they were to find red squirrels and the less likely grey squirrels were to be there. Like earlier Irish studies, this suggested that pine martens suppress grey squirrel populations to the overall benefit of red squirrels.
However, that’s not quite the whole story. There’s a desire in the media to find heroes and villains in nature which simplifies the situation and obscures the potential impact of a returning predator on British wildlife and livestock. Sadly, ecology and conservation are rarely simple and the restoration of pine martens will not always follow a script.
Red squirrels on the menu?
The Scottish pine marten researchers make clear that pine martens sometimes eat red squirrels. In a small number of other studies conducted elsewhere in Europe, reds were in fact a significant seasonal component of pine marten diet – up to 53% in one case.
It’s therefore incorrect to suggest, as some conservation groups have, that dietary studies show pine martens very rarely eat red squirrels. The reality is that predation rates reflect the relative abundance of red squirrels to other prey, encounter rates and local habitat characteristics.
Why grey squirrels have declined in the presence of pine martens remains uncertain. The impact of martens on greys may vary geographically and it’s unwise to simply extrapolate the findings from Scotland and Ireland to the rest of the British Isles without a note of caution. Suggesting the pine marten is the best long-term solution for grey squirrel control in England is premature and requires more research to confirm.
Pine martens have been absent from much of England for around 100 years, a period of significant agricultural and urban change. Landscapes have altered dramatically and many potential prey species have regionally declined. Pine marten predation upon these could therefore prove to be locally significant.
This should not be a barrier to reintroducing pine martens. Instead, it reinforces the need for informed discussions with all interest groups likely to be affected. We must acknowledge that as a last resort, lethal control of predators may be necessary to conserve rare species such as some ground nesting birds.
As the pine marten becomes more common in the UK and Ireland, there will inevitably be scenarios where lethal intervention is unavoidable. A pine marten preying on a seabird colony was shot in 2018 under licence to protect an internationally important breeding population.
Measures to prevent predation of poultry or game birds are frequently recommended where pine marten restoration is occurring. These include the installation of electric fencing, cutting back branches overhanging pens and ensuring that wire netting has no holes martens could get through.
While these management recommendations are useful, many people may find it difficult to implement them. As a result, any negative impacts of a returning arboreal predator will fall heavily upon a handful of poultry owners.
The return of the pine marten may also complicate the conservation or reintroduction of other species. Although the location and other details are confidential, there were concerns that a pine marten was adversely affecting a red squirrel conservation programme after an individual was found to be regularly visiting release enclosures.
As pine martens naturally spread from Scotland into northern England, adaptive and measured responses will be needed to responsibly manage their return. An approach to conservation that’s media-friendly but built on limited evidence rarely works, and certainly won’t in pine marten restoration.
The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.
UK Human Rights Act is at risk of repeal – here's why it should be protected
Author: Stephen Clear, Lecturer in Constitutional and Administrative Law, and Public Procurement, Bangor University
There have long been attempts to “scrap” the Human Rights Act 1998, which incorporates the European Convention on Human Rights (ECHR) into UK law. But while none have gained traction to date, parliamentarians have recently raised concerns that the government could be wavering in its commitment to the act post-Brexit.
The House of Lords’ EU justice sub-committee said in January that it was worried to see the government change the wording of the political declaration it agreed with the EU, which sketches out a non-binding vision for what the UK’s relationship with Europe will look like after Brexit.
In its draft form, the declaration said that the future relationship should incorporate the UK’s “commitment” to the convention. However, by the time the final version was published in November 2018, that had changed to a commitment to “respect the framework” of the convention.
The committee wrote to the government for clarification and received a response from Edward Argar, the parliamentary under-secretary of state for justice, who stated that the government would not repeal or replace the act while Brexit is ongoing but that “it is right that we wait until the process of leaving the EU concludes before considering the matter further”.
Responding publicly, committee chairman Helena Kennedy said that this was a “troubling” reply, noting: “Again and again we are told that the government is committed … but without a concrete commitment”.
Critics of the act say that reforms are needed to "restore" the supremacy of the UK courts, by limiting the interference of the European Court of Human Rights in domestic issues, such as voting rights for prisoners. This has long been a key issue for Conservative governments, which have wanted to ignore Strasbourg rulings. The idea is that the Human Rights Act could be replaced with a "British" bill of rights which would allegedly give the UK more control over the laws it implements.
The most cited criticism is that the act protects terrorists and hate preachers, such as Abu Hamza, who, at a time when he was advocating radical Islam and violence within UK cities, initially could not be deported on grounds that doing so would have contravened his right to freedom from torture.
The successes of human rights laws are less frequently celebrated, however. The act was relied upon by Hillsborough families, and the victims’ right to life, in order to secure a second inquiry. Individuals pursuing their freedom to manifest their religion have used it to enforce their right to wear religious symbols at work. Victims of the Stafford hospital scandal used the law to secure an inquiry, which led to major improvements in accountability and public safety. And it has helped those seeking LGBTQ+ equality, as well as British soldiers in their challenge for improved resources.
Dispelling the myths
The problem is that there are several misconceptions fuelling the drive to change the Human Rights Act. First, the ECHR is unrelated to the EU. But mistaken links between the two are causing misplaced animosity towards the convention. The convention and its related institutions were regularly mistaken for EU bodies during the referendum debates. Though the UK is due to leave the EU, it is not leaving – and does not necessarily have to leave – the Council of Europe. The council predates the EU, and has a larger membership (47 member states compared to the EU's 28). While the EU is concerned with matters such as the single market and free movement of people, the council addresses issues in relation to human rights and the rule of law.
Another point causing problems is the notion that the UK needs to move towards a supposedly “more British” and “less European” understanding of human rights. History tells us that in the aftermath of World War II the convention was actually partly written by the British. It was advocated by Winston Churchill and co-written by Conservative MP David Maxwell-Fyfe.
Britain was not just a supporter of the convention, but a leader in co-drafting the rules, and ensuring greater enforcement at a supranational level, via the European court. Furthermore, the UK was the very first country to ratify the convention in 1951. The irony is that the Conservative party is now questioning the role of human rights when it was the one that drafted the convention in 1950.
Even if the Human Rights Act were reformed or repealed now, the UK would still be subject to the convention as a signatory. UK citizens would still have access to the protections that the convention has introduced.
If the act is truly under threat of repeal, lessons must be learnt from Brexit. There needs to be an open and honest debate about what the act and convention actually do, and what they have achieved.
If, in repealing the act and introducing a “British bill of rights”, the UK leaves the Council of Europe, it could cause a dangerous unravelling of the UK’s constitution, and upset the devolution settlement. It could also remove another layer of international protection for the UK’s constitutional values. To do so at a time when much uncertainty remains (following the UK leaving the EU) would have far reaching consequences for protecting citizens’ rights against the state.
Stephen Clear does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Why people with anxiety and other mood disorders struggle to manage their emotions
Author: Leanne Rowlands, PhD Researcher in Neuropsychology, Bangor University
Regulating our emotions is something we all do, every day of our lives. This psychological process means that we can manage how we feel and express emotions in the face of whatever situation may arise. But some people cannot regulate their emotions effectively, and so experience difficult and intense feelings, often partaking in behaviours such as self-harm, using alcohol, and over-eating to try to escape them.
There are several strategies that we use to regulate emotions – for example, reappraisal (changing how you feel about something) and attentional deployment (redirecting your attention away from something). Underlying neural systems in the brain's prefrontal cortex are responsible for these strategies. However, dysfunction of these neural mechanisms can mean that a person is unable to manage their emotions effectively.
Emotion dysregulation does not simply occur when the brain neglects to use regulation strategies. It includes unsuccessful attempts by the brain to reduce unwanted emotions, as well as the counterproductive use of strategies that have a cost that outweighs the short term benefits of easing an intense emotion. For example, avoiding anxiety by not opening bills might make someone feel better in the short term, but comes with the long-term cost of ever increasing charges.
These unsuccessful attempts at regulation and counterproductive use of strategies are a core feature of many mental health conditions, including anxiety and mood disorders. But there is not one simple pathway that causes the dysregulation in these conditions. In fact, research has found several causes.
1. Dysfunctional neural systems
In anxiety disorders, dysfunction of the brain’s emotional systems is related to emotional responses being of a much higher intensity than usual, along with an increased perception of threat and a negative view of the world. These characteristics influence how effective emotion regulation strategies are, and result in an over-reliance on maladaptive strategies like avoiding or trying to suppress emotions.
In the brains of those with anxiety disorders, the system supporting the reappraisal does not work as effectively. Parts of the prefrontal cortex show less activation when this strategy is used, compared to non-anxious people. In fact, the higher the levels of anxiety symptoms, the less activation is seen in these brain areas. This means that the more intense the symptoms, the less they are able to reappraise.
Similarly, those with major depressive disorder (MDD) – characterised by an inability to regulate or repair emotions, resulting in prolonged episodes of low mood – struggle to use cognitive control to manage negative emotions and decrease emotional intensity. This is due to neurobiological differences, such as decreased density of grey matter and reduced volume in the brain's prefrontal cortex. During emotion regulation tasks, people who have depression show less brain activation and metabolism in this area.
People with MDD sometimes show less effective function in the brain's motivation systems – a network of neural connections between the ventral striatum, located in the middle of the brain, and the prefrontal cortex – too. This might explain their difficulty in regulating positive emotions, a problem known as anhedonia, which leads to a lack of pleasure and motivation for life.
2. Less effective strategies
There is little doubt that people have different abilities in using different regulation strategies. But for some they simply don’t work as well. It’s possible that people with anxiety disorders find reappraisal a less effective strategy because their attentional bias means they involuntarily pay more attention towards negative and threatening information. This can stop them from being able to come up with more positive meanings for a situation – a key aspect of reappraisal.
It’s possible that reappraisal doesn’t work as well for people with mood disorders either. Cognitive biases can lead people with MDD to interpret situations as being more negative, and make it difficult to think more positive thoughts.
3. Maladaptive strategies
Although maladaptive strategies might make people feel better in the short term, they come with the long-term cost of maintaining anxiety and mood disorders. Anxious people rely more on maladaptive strategies like suppression (trying to inhibit or hide emotional responses), and less on adaptive strategies like reappraisal. Though research into this is ongoing, it's thought that during intense emotional experiences these people find it very difficult to disengage – a necessary first step in reappraisal – so they turn to maladaptive suppression instead.
The use of maladaptive strategies like suppression and rumination (where people have repetitive negative and self-deprecating thoughts) is also a common feature of MDD. These, together with difficulties using adaptive strategies like reappraisal, prolong and exacerbate depressed mood. It means that people who have MDD are even less able to use reappraisal during a depressed episode.
It's important to note that mood disorders don't just come from neural abnormalities. Research suggests that a combination of brain physiology, psychological factors and environmental factors contributes to the disorders and their maintenance.
While researchers are pursuing promising new treatments, simple actions can help people loosen the influence of negative thoughts and emotions on mood. Positive activities like expressing gratitude, sharing kindness, and reflecting on character strengths really do help.
Leanne Rowlands receives funding from the European Social Fund through the Welsh Government.