On our News pages
Our Research News pages contain an abundance of research-related articles, covering recent research output and topical issues.
Our researchers publish across a wide range of subjects and on a variety of news platforms. The articles below are a few of those published on TheConversation.com.
Welsh schools: an approach to bilingualism that can help overcome division
Author: Peredur Webb-Davies, Senior Lecturer in Welsh Linguistics, Bangor University
Being a Welsh-English bilingual isn’t easy. For one thing, you hear that encouraging others to learn your language is detrimental both to their education and wellbeing. For another, to speak a minority language such as Welsh you need to constantly make the effort to be exposed to it and maintain your bilingualism.
A row has recently arisen in the Carmarthenshire village of Llangennech over plans to turn an English language school into a Welsh school. Parents who objected to the change told Guardian reporters that they have been labelled “anti-Welsh bigots”, in an article headlined “Welsh-only teaching – a political tool that harms children?”.
Needless to say, those who have gone through Welsh language schooling were not happy with the report. And for good reason too: though parents may have their own concerns, research has demonstrated the benefits of bilingualism. The heavily implied fear that sitting in a Welsh classroom somehow hermetically insulates a child from the English language is simply unfounded.
Schools in Wales need to deal with – and provide education for – children from two main backgrounds: those who speak Welsh at home and those who do not. The former benefit from Welsh-medium education in that they are able to broaden and improve their Welsh ability, as well as learning to read and write in it, while the latter need to be taught Welsh from the ground up. In most schools, a classroom will have a mixture of children from different backgrounds, although children will get different levels of exposure to Welsh depending on the school. Welsh is not treated as a foreign language like French or German, because children at schools in Wales will inevitably have some exposure to Welsh culturally and socially.
This means that teachers in nearly all schools in Wales have two different audiences: children who speak English as a first language, and children who speak Welsh as a first language.
But rather than this being a problem, teachers use different approaches in the classroom to deal with it. Few lessons are in just Welsh or English – the majority use a strategic bilingual approach such as code-switching (alternating between both languages as they teach), targeted translation (where specific terms or passages are translated as they are taught), or translanguaging (blending two languages together to help students learn a topic’s terminology in both).
One cannot simply divide Wales’s schools into Welsh-speaking or English-speaking. The former are bilingual schools – as well as ensuring that Welsh survives and flourishes, the aim of schools in Wales is to produce children who are bilingual when they finish their education.
It’s an obvious statement to make, but the more Welsh a child hears at home and school, the more proficient they become. It doesn’t have a negative effect on the rest of their education.
Like all languages, Welsh is evolving as time goes on, and schools are vital for not only nurturing speakers’ abilities, but for helping it stay relevant to the world. Similar to how there isn’t just one type of bilingual – speakers of two languages vary in proficiency – there also isn’t just one type of spoken Welsh.
My own research into grammar variation across age ranges found that younger generations are using certain innovative grammatical constructions much more frequently than older generations. The Welsh language that children hear from their peers is different to what they hear from their parents and grandparents. This includes grammatical features such as word order: where an older speaker might say “fy afal i” for “my apple”, a younger speaker is more likely to use “afal fi”. Similarly, research on code-switching by Welsh speakers has found that younger people are more likely than older speakers to mix Welsh and English in the same sentence. So schools and communities need to be able to expose children to Welsh of all registers for them to grow in proficiency and confidence, and learn these new social constructions.
Proficiency plays a big part in shaping language attitudes – and, for a nation like Wales, where fear of language death is common, support for Welsh is vital.
Research sourcing the views of teenagers from north Wales found that more proficient speakers had more positive attitudes towards Welsh. On the other hand, participants with lower Welsh proficiency reported that they reacted negatively towards Welsh at school because they felt pressure to match their more proficient peers.
One of the biggest ironies in contemporary Wales is that it would be easier just to use – and learn in – English, but doing so would unquestionably lead to the death of Welsh – and the end of a language is no small matter.
Identifying precisely why some speakers feel that they cannot engage in Welsh-medium education, or use their Welsh outside of school, would be beneficial to fostering a bilingual Wales and would help heal the kinds of social divisions reported in Llangennech.
The cognitive, cultural and economic benefits of bilingualism have been widely demonstrated. To become bilingual in Welsh you must be exposed to Welsh and, for the majority of Welsh children, the classroom is their main source of this exposure. As such, we should see Welsh schools as central to any community’s efforts to contribute to the bilingual future that’s in Wales’s best interests.
Peredur Webb-Davies receives funding from the RCUK as part of a jointly-funded project with the National Science Foundation (USA).
Confidence can be a bad thing – here's why
Authors: Stuart Beattie, Lecturer in Psychology, Bangor University; Tim Woodman, Professor and Head of the School of Sport, Health and Exercise Sciences, Bangor University
Have you ever felt 100% confident in your ability to complete a task, and then failed miserably? After losing in the first round at Queen’s Club for the first time since 2012, world number one tennis player, Andy Murray, hinted that “overconfidence” might have been his downfall. Reflecting on his early exit, Murray said: “Winning a tournament is great and you feel good afterwards, but you can also sometimes think that your game is in a good place and maybe become a little bit more relaxed in that week beforehand.”
There is no doubt that success breeds confidence, and in turn, the confidence gained from success positively influences performance – normally. Recently, however, this latter part of the relationship between confidence and performance has been called into doubt. High confidence can have its drawbacks. One need only look at the results of the recent general election to note that Theresa May called an early election partly based on her confidence that she would win an overall majority.
Our research at the Institute for the Psychology of Elite Performance at Bangor University has extensively examined the relationship between confidence and performance. So, what are the advantages and disadvantages of having high (or indeed low) levels of confidence for an upcoming task?
Confidence and performance
First, let’s look at the possible outcomes of having low confidence (some form of self-doubt). Low confidence is the state of thinking that we are not quite ready to face an upcoming task. In this case, one of two things happens: either we disengage from the task, or we invest extra effort into preparing for it. In one of our studies participants were required to skip with a rope continuously for one minute. Participants were then told that they had to repeat the task but using a more difficult rope to skip with (in fact it was the same type of rope). Results revealed that confidence decreased but performance improved. In this case, self-doubt can be quite beneficial.
Now let’s consider the role of overconfidence. A high level of confidence is usually helpful for performing tasks because it can lead you to strive for difficult goals. But high confidence can also be detrimental when it causes you to lower the amount of effort you give towards these goals. Overconfidence often makes people no longer feel the need to invest all of their effort – think of the confident student who studies less for an upcoming exam.
Interestingly, some of our research findings show that when people are faced with immediate feedback after a golf putting task (knowing exactly how well you have just performed), confidence expectations (number of putts they thought they could make next) far exceeded actual obtained performance levels by as much as 46%. When confidence is miscalibrated (believing you are better than you really are), it will have a negative effect on subsequent task performance.
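The miscalibration described above can be quantified in a simple way: compare predicted performance with obtained performance. The sketch below is purely illustrative (the putt counts are invented, and this is not the study’s actual scoring method); it just shows how a 46% overconfidence figure can arise from such a comparison.

```python
# Illustrative sketch: quantifying overconfidence as the percentage by
# which predicted performance exceeds actual performance.
# The numbers below are invented for illustration only.

def overconfidence(predicted: float, actual: float) -> float:
    """Return how far predicted performance exceeds actual, as a percentage."""
    if actual <= 0:
        raise ValueError("actual performance must be positive")
    return (predicted - actual) / actual * 100

# A golfer who expects to make 9.5 putts but actually makes 6.5
# is about 46% overconfident.
print(round(overconfidence(9.5, 6.5)))  # 46
```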
This overconfidence in our ability to perform a task seems to be a subconscious process, and it looks like it is here to stay. Fortunately, in the long term the pros of being overconfident (reaching for the stars) seem to far outweigh the cons (task failure) because if at first you do not succeed you can always try again. But miscalibrated confidence will be more likely to occur if vital performance information regarding your previous levels of performance accomplishments is either ignored or not available. When this happens people tend to overestimate rather than underestimate their abilities.
So, Andy Murray, this Queen’s setback is a great wake-up call – just in time for Wimbledon.
The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond the academic appointment above.
How operational deployment affects soldiers' children
Author: Leanne K Simpson, PhD Candidate, School of Psychology | Institute for the Psychology of Elite Performance, Bangor University
So many of us have seen delightful videos of friends and family welcoming their loved ones home from an operational tour of duty. The moment they are reunited is heartwarming, full of joy and tears – but, for military personnel who were deployed to Iraq and Afghanistan post 9/11, their time away came with unprecedented levels of stress for their whole family.
Military personnel faced longer and more numerous deployments, with short intervals in between. The impact of operational deployments on military personnel’s mental health is well reported. Far less is known, however, about how deployment affects military families, particularly those with young children.
Military families are often considered the “force behind the forces”, boosting soldiers’ morale and effectiveness during operational deployment. But this supportive role can come at a price.
Research has shown that deployments which last less than a total of 13 months in a three-year period will not harm military marriages. In fact, divorce rates are similar to the general population during service – although these marriages are more fragile when a partner exits the “military bubble”.
But studies have also found that children of service personnel have significantly more mental health problems – including anxiety and depression – than their civilian counterparts. Mental health issues are also particularly high among military spouses raising young children alone during deployment.
Our understanding of how younger children cope with deployment often stems from mothers’ retrospective reports, or from the children themselves when they become adolescents. Very little is known about the impact of deployment on young children who are at the greatest risk of social and emotional adjustment problems.
Unsurprisingly, the studies that have been conducted indicate that it is the currently deployed and post-deployed families that experience problematic family functioning.
A new study that I have co-authored with Dr Rachel Pye – soon to be published in Military Medicine – examines how UK military families with young children function during three of the five stages described in the “emotional cycle of deployment”, when their father is or has recently been on a tour of duty.
The emotional cycle of an extended deployment – six months or longer – consists of five distinct stages: pre-deployment, deployment, sustainment, re-deployment, and post-deployment. Each stage comes with its own emotional challenges for family members. The cycle can be painful to deal with, but those who know what to expect from each stage are more likely to maintain good mental health.
Strength in rules
Our research has found that all military families, regardless of deployment stage, have significantly more rules and structured routines than non-military families. Usually this would be indicative of poor family functioning – as it is associated with resistance to change – but we suggest that rigidity may actually be a strength for military families. It gives stability to an often uncertain way of life.
The findings also support previous research with similar US military families where a parent had been deployed. These families were highly resilient, with high levels of well-being, low levels of depression and high levels of positive parenting.
We used a unique way of examining the impact of deployment on young children. Each of the participants was asked to draw their family so that we could measure their perception of family functioning.
Pictures drawn by children of fathers who had returned from deployment within the last six months were quite distinctive. The father was often drawn larger and more detailed than other family members. But in the pictures drawn by children whose fathers were currently deployed, the father was often not included, or the child used less detail or colour.
When the pictures were re-analysed ignoring the physical distance between the child and parents – which is often used as an indicator of emotional distance, but could for this sample represent a real physical distance – the differences in how the fathers were drawn were still evident.
What all this means is that children who had a father return from deployment within the previous six months, or a father who was currently deployed, were part of the poorest-functioning families in our study.
This may seem like a negative result but our research also indicated that the effect is temporary. The children’s drawings showed differences between the currently deployed and the post-deployed families, but military children without a deployed parent scored similarly to non-military children.
So although military families are negatively affected by deployment, the impact doesn’t last. The vast majority successfully adapt to each stage of deployment.
Like any family, military families do experience problems – but this research highlights the robust, stoic nature of military families and their incredible ability to bounce back from adversity, demonstrating that they truly are the “force behind the forces”.
Leanne K Simpson receives funding from the British Ministry of Defence’s Defence Science and Technology Laboratory, through its PhD studentship scheme, for research into mental robustness in military personnel. This article does not reflect the views of the research councils or other publicly-funded bodies.
'Facts are not truth': Hilary Mantel goes on the record about historical fiction
Author: Michael Durrant, Lecturer in Early Modern Literature, Bangor University
In a recent talk at the Hay literary festival, Cambridge historian and biographer John Guy said he had seen an increasing number of prospective students citing Hilary Mantel’s Booker Prize-winning historical novels, Wolf Hall and Bring up the Bodies, as supporting evidence for their knowledge of Tudor history.
Guy suggested that Mantel’s as yet incomplete trilogy on Thomas Cromwell’s life and career – the third instalment, The Mirror and the Light, comes out later this year – has become something of a resource for a number of budding history undergraduates, despite the fact that the novels contain historical inaccuracies (casting, for example, Thomas More as a woman-hating tyrant and Anne Boleyn as a female devil, and having the wrong sheriff of London lead More to his execution).
The Guardian quotes Guy as saying that this “blur between fact and fiction is troubling”. In fact, Guy’s comments on the blurring of fact and fiction, and related concerns of authenticity, do read as a worrying prognosis. In the age of Trump and fake news, it seems particularly important that we call bullshit on so-called “alternative facts” and place an unquestionable fix on fiction.
Yet historical fiction, in all its varieties, can and frequently does raise vital questions about how we write, and conceptualise, historical processes. Indeed, when writers of historical fiction make stuff up about the past, they sometimes do so in an effort to sharpen, rather than dull, our capacities to separate fact from fiction.
‘There are no endings’
In the first of five Reith Lectures to be aired on BBC Radio 4, Mantel similarly argues that in death “we enter into fiction” and the lives of the dead are given shape and meaning by the living – whether that be the historian or the historical novelist. As the narrator of Bring up the Bodies puts it: “There are no endings.” Endings are, instead, “all beginnings”, the foundation of interpretative acts.
In Mantel’s view, the past is not something we passively consume, either, but that which we actively “create” in each act of remembrance. That’s not to say, of course, that Mantel is arguing that there are no historical “facts” or that the past didn’t happen. Rather, she reminds us that the evidence we use to give narrative shape to the past is “always partial”, and often “incomplete”. “Facts are not truth”, Mantel argues, but “the record of what’s left on the record.” It is up to the living to interpret, or, indeed, misinterpret, those accounts.
In this respect the writer of historical fiction is not working in direct opposition to the professional historian: both must think creatively about what remains, deploying – especially when faced with gaps and silences in the archive – “selection, elision, artful arrangement”, literary manoeuvres more closely associated with novelist Philippa Gregory than with Guy the historian. However, exceptional examples from both fields should, claims Mantel, be “self-questioning” and always willing to undermine their own claims to authenticity.
Mantel’s own theorising of history writing shares much with that other great Tudor storyteller: William Shakespeare.
While Shakespeare’s Richard III (1592), can be read as a towering achievement in historical propaganda – casting Richard, the last of the Plantagenets, as an evil usurper, and Richmond, first Tudor king and Elizabeth I’s grandfather, as prophetic saviour – the play invites serious speculation about the idiosyncratic nature of historical truth.
Take this exchange in Act II Scene IV of the play, which comes just before the doomed young princes are led to the tower. Here, the younger of the two, Richard, duke of York, asks his grandmother, the duchess of York, about stories he’s heard about his uncle’s birth:
York: Marry, they say my uncle grew so fast
That he could gnaw a crust at two hours old … Duchess of York: I pray thee, pretty York, who told thee this?
York: Grandam, his nurse.
Duchess of York: His nurse? Why, she was dead ere thou wast born.
York: If ’twere not she, I cannot tell who told me.
Fresh in the knowledge that his uncle’s nurse died before he was born, the boy has no idea who told him the story of his uncle’s gnashing baby teeth. Has he misremembered his source, blurring the lines between fact and fiction? Was the boy’s uncle born a monster, or is that a convenient fiction his enemies might wish to tell themselves? And why on earth would Shakespeare bother to include this digression?
In all other respects, Richard III invites straightforward historical divisions between good (the Tudors) and evil (the Plantagenet dynasty). But here, subversive doubts creep in about the provenance of the stories we tell about real historical people, with the “historical fact” briefly revealed as a messy, fallible concept, always on the edge of make-believe.
Richard III reminds us that historical facts can be fictionalised, but also that the fictional can just as easily turn into fact. Mantel’s Tudor cycle has been haunted by similar anxieties. In the often terrifying world of Henry VIII’s court, her novels show how paranoia breeds rumour, how rumour bleeds into and shapes fact and, as a result, “how difficult it is to get at the truth”. History isn’t just a different country for Mantel, it’s something intimately tied to the fictions we cling to.
And indeed in Wolf Hall that blurred relationship between fact and fiction, history and myth, is often front and centre. In Wolf Hall the past is somewhere above, between, and below the official record. History is not to be found in “coronations, the conclaves of cardinals, the pomp and processions.” Instead it’s in “a woman’s sigh”, or the smell she “leaves on the air”, a “hand pulling close the bed curtain”; all those things that are crucially absent from the archive.
The fact of history’s ephemerality opens a “gap” for the fictional, into which we “pour [our] fears, fantasies, desires”. As Mantel has asked elsewhere: “Is there a firm divide between myth and history, fiction and fact: or do we move back and forth on a line between, our position indeterminate and always shifting?”
For the Canadian novelist, Guy Gavriel Kay, fantasy is a necessary precondition of all forms of historical writing: “When we work with distant history, to a very great degree, we are all guessing.”
This is why Kay is at liberty to employ the conventions of fantasy to deal with the past, transposing real historical events, peoples, and places – medieval Spain and Roderigo Diaz (El Cid) in The Lions of Al-Rassan (1995), for example, or the Viking invasions of Britain in The Last Light of the Sun (2004) – into the realm of the fantastical.
Kay researches (he provides bibliographies in all his books) and then unravels history and historical evidence, putting a “quarter turn” on the assumed facts: renaming historical figures, reversing and collapsing the order of known events, substituting invented religions for real ones, introducing magic into the history of Renaissance Europe, or China. He has described the result of this process as “near-history”: alternative pasts that are at once radically strange and weirdly familiar.
Like Mantel’s, Kay’s (near-)historical fictions can be read as less an effort to evade the blur between fact and fiction than to honestly point towards that blur as a condition of history itself. After all, history is debatable and often impossible to verify. It’s a reminder, perhaps, that we sometimes need the tropes of fiction to smooth over those complexities, or render them legible, truthful, in the contemporary moment. We need metaphors, and similes, so that the dead can speak and act, live and die.
Michael Durrant does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.
Happy 100th birthday, Mr President: how JFK's image and legacy have endured
Author: Gregory Frame, Lecturer in Film Studies, Bangor University
John F Kennedy was born 100 years ago on May 29, 1917. While the achievements of his presidency and the content of his character have been subjects of contestation among historians and political commentators since the 1970s, there is little question regarding the enduring power of his image. As the youngest man to win election to the presidency, entering the White House with a beautiful wife and young children in tow, he projected the promise of a new era in American politics and society.
In Norman Mailer’s sprawling, seminal essay about Kennedy, published in Esquire in November 1960, Kennedy was the embodiment of what America wanted to be: young, idealistic, affluent and cosmopolitan. When America was faced with the choice between Kennedy and Richard Nixon in the 1960 presidential election, Mailer posed the question: “Would the nation be brave enough to enlist the romantic dream of itself, would it vote for the image in the mirror of its unconscious” – or would it opt for “the stability of the mediocre”?
Kennedy knew the importance of his image, which is why he placed so much emphasis on his performances in the televised debates. His success in this arena arguably tipped the very close election in his favour. According to journalist Theodore White, television transmogrified Nixon into a “glowering”, “heavy” figure; by contrast, Kennedy appeared glamorous, sophisticated – almost beautiful.
Master of the medium
Carrying this success into his presidency, Kennedy used television to communicate with the people to great effect through broadcast press conferences and interviews. As demonstrated by the miniseries Kennedy (1983), where Kennedy was played by perennial screen politician Martin Sheen, JFK’s presidency can be reduced to a series of televised moments: his oft-quoted inaugural address (“Ask not what your country can do for you…”); his tours of France and West Germany (“Ich bin ein Berliner”); and his calm, assured broadcasts to the nation during the civil rights demonstrations and the Cuban Missile Crisis.
As American historian Alan Brinkley wrote in 1998: “Even many of those who have become disillusioned with Kennedy over the years are still struck, when they see him on film [or on television], by how smooth, polished and spontaneously eloquent he was, how impressive a presence, how elegant a speaker.”
Most of the Kennedy miniseries is in colour. But in its reconstruction of monochrome images of Kennedy on television, it employs the medium as a means of memorialising him, infatuated with his image in its nostalgic reverie for a more stable and prosperous time.
Kennedy’s image on television (and in newsreel footage) is so seductive it is unsurprising Oliver Stone used it in the opening sequence to his controversial debunking of the official theories behind the president’s assassination in the film JFK (1991). As John Hellmann suggested, this footage establishes Kennedy “as the incarnation of the ideal America in the body of the beautiful man”.
The moving image played a fundamental role in establishing Kennedy as the image-ideal president. As I have argued elsewhere, other presidents have sought to establish their own images in relation to Kennedy’s, from Bill Clinton in 1992 to Barack Obama in 2008 and beyond. Kennedy is a seductive figure – not because of what he did or achieved, but because he cultivated the notion that he reflected the best the United States could be if it dared to dream.
Towards the conclusion of Oliver Stone’s Nixon, the eponymous president, played by Anthony Hopkins, stumbles drunkenly around the White House on the verge of resignation. He looks up to the portrait of Kennedy and says, rather forlornly: “When they [the people] look at you, they see what they want to be. When they look at me, they see what they are.”
Stone is here acknowledging Nixon’s frail humanity as the “ego” to Kennedy’s “ego-ideal”. Where Nixon is deficient and ordinary, Kennedy’s image retains the illusion of perfection in the collective memory.
Politics as reality TV
The 100th anniversary of Kennedy’s birth allows us to reflect upon this legacy. If Kennedy was the superhero and Nixon the flawed human, then Donald Trump is a compendium of some of the worst qualities a politician can have: impulsive, arrogant, narcissistic. In a chaotic, ephemeral and often trivial media environment, Trump, a man with an insatiable appetite for the spotlight and no discernible ideological convictions, has thrived. He believes – and he has not been disabused of this notion – that he can perform the presidency as he performed on reality television in The Apprentice, most recently firing the director of the FBI on television.
We may bemoan the idea that politics has become a television show, but it has. Is that Kennedy’s fault? Yes and no. His polished performances on television hid many questionable tactics and character flaws beneath the surface. But it is often said that we get the politicians we deserve: in allowing politics to become messily intertwined with the discourses of celebrity and, subsequently, the values of reality television, we fostered the conditions that created both Kennedy and Trump.
If Kennedy were alive today, would he be horrified by what politics has become? No, he’d be on Snapchat.
Gregory Frame does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.
Teaching students to survive a zombie apocalypse with psychology
Authors: John A Parkinson, Professor in Behavioural Neuroscience, Bangor University; Rebecca Sharp, Senior Lecturer in Psychology, Bangor University
Playing games is ubiquitous across cultures and time periods – mainly because most people enjoy it.
Games involve rules, points and systems, as well as a theme or storyline, and can be massively fun and engaging. And there is an increasing body of research that shows “gamification” – where other activities are designed to be like a game – can be successful in encouraging positive changes in behaviour.
Broadly speaking, games work effectively because they can make the world more fun to work in. They can also help to achieve “optimal functioning” – which basically means doing the best you can do.
This can be seen in Jane McGonigal’s game and app Superbetter, which helps people live better lives by living more “gamefully”. It does this by helping users adopt new habits, develop a talent, learn or improve a skill, strengthen a relationship, make a physical or athletic breakthrough, complete a meaningful project, or pursue a lifelong dream.
This is also exactly what we’ve done at Bangor University. Here, students on the undergraduate course in behavioural psychology had one of their modules fully gamified. And it started when they received this message, after they enrolled on the course:
Notice to all civilians: this module will run a little differently. The risk of infection is high, please report to the safe quarantine zone in Pontio Base Five at 1200 hours on Friday 30 September. Stay safe, stay alert, and avoid the Infected.
Curiosity piqued, the class arrived at their first lecture of the semester to be greeted by “military personnel” who demanded they be scanned for infection prior to entry.
They were given a brown envelope containing “top secret” documents about their mission fighting the infection. The documents explained the game, and that the module had been gamified to enhance their learning.
Then the immersion began. In addition to themed lectures and materials, the presence of actors, and a storyline influenced by choices made by the class, students were given weekly “missions” by key characters in the game.
These online quiz-based missions prompted students to study the module materials between lectures to earn points. Points gained allowed students to progress through levels – from “civilian” to “resurrection prevention leader”. Points could also be exchanged for powerful incentives, such as being able to choose the topic of their next assignment, or the topic of a future lecture.
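The points-and-levels mechanic described above can be sketched in a few lines of code. This is a hypothetical illustration: the point thresholds, the intermediate level names and the incentive costs are invented for the example (only “civilian” and “resurrection prevention leader” come from the module itself).

```python
# Hypothetical sketch of a gamified module's points-and-levels mechanic.
# Thresholds, middle level names and incentive costs are invented.

LEVELS = [
    (0, "civilian"),
    (100, "survivor"),
    (250, "squad leader"),
    (500, "resurrection prevention leader"),
]

INCENTIVES = {
    "choose assignment topic": 150,
    "choose lecture topic": 300,
}

def level_for(points: int) -> str:
    """Return the highest level whose threshold the points total meets."""
    current = LEVELS[0][1]
    for threshold, name in LEVELS:
        if points >= threshold:
            current = name
    return current

def can_redeem(points: int, incentive: str) -> bool:
    """Check whether a student has enough points for an incentive."""
    return points >= INCENTIVES[incentive]

print(level_for(260))                           # squad leader
print(can_redeem(260, "choose lecture topic"))  # False
```

The design point the sketch captures is that points serve two separate motivational loops: steady progression through status levels, and a spendable currency for one-off incentives.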
A life gamified
Part of our reasoning for teaching in this way is that although students enrol at university with good intentions, they don’t always perform optimally – those intentions are often derailed by distractions.
At a psychological level, there are multiple competing signals vying to drive behaviour – but only one can win. This discordance between goals and actual behaviour is called the “intention–action gap”, and gamification has the potential to close it.
This is because successful learning requires a student to set goals and then achieve them over and over again. Games use techniques, such as clear rules and rewards, to enhance motivation and promote goal-directed behaviour. And because education is about achieving specific learning goals, the use of games to clarify and promote engagement can be highly effective in providing clear guidance on goal-direction and action – which can make users less fearful of failure. In this way, gamification can result in students achieving better outcomes by optimising learning.
The application of gamification to a module on behavioural psychology was a novel (albeit ironic) approach to demonstrate to students the very concepts they were learning.
When compared to the previous year’s performance and to a matched same-year non-gamified module, the gamification had a large impact on attendance – which was higher than both the non-gamified module, and the previous year’s group.
Many of the class also engaged with the materials between lectures, completing the online “missions” to learn and review the content.
When asked their thoughts at the end of the semester, many students said they enjoyed the gamification and liked the immersive experience. Some even asked for more zombies.
Gamification is clearly well-suited to teaching behavioural psychology as it demonstrates directly some of the concepts students are learning. But it could also easily be adapted and applied to other subjects.
The psychologist Burrhus Frederic Skinner said that:
Education is what survives when what has been learnt has been forgotten.
So while the students may well forget the precise definition of “positive reinforcers” in years to come, they will know implicitly what they are and how to apply them, thanks to the game. In other words, they have learned how to learn. And hopefully, their gamified experience will help them survive future “apocalyptic” challenges.
The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond the academic appointment above.
Can environmental documentaries make waves?
Author: Michela Cortese, Associate Lecturer, Bangor University
Trump’s first 100 days in office were, among other things, marked by a climate march in Washington DC that attracted tens of thousands of demonstrators. No surprises there. Since the beginning of his mandate in January, Trump has signed orders to roll back the number of federally protected waterways, restart the construction of contentious oil pipelines, and cut the budget of the Environmental Protection Agency (EPA). Among the various orders and memoranda, the one signed to overhaul Obama’s Clean Power Plan is probably the most remarkable, along with moves to promote coal extraction across the US.
A good time, then, to follow up Al Gore’s iconic documentary An Inconvenient Truth, which was released 11 years ago in a similarly discouraging political climate. At that time George W Bush, who is remembered for undermining climate science and for strongly supporting oil interests, was in power. In his own first 100 days at the White House, Bush backed down from the promise of regulating carbon dioxide from coal power plants and announced that the US would not implement the Kyoto climate change treaty.
This summer sees the release of An Inconvenient Sequel: Truth to Power. More than ten years have passed and the documentary looks likely to be released in a very similar context. With Republicans in power, war in the Middle East, and environmental regulations being reversed, this inconvenient sequel is a reminder that the climate of the conversation about global warming has not changed much in the interim.
But the strategies needed to grab the attention of the public certainly have. In the fast-paced, ever-evolving media landscape of the 21st century, knowing how to engage the public on environmental matters is no easy thing. The tendency of the environmental films that have mushroomed since 2000 has been to use a rhetoric of fear. But how effective has this been? Certainly, environmental activism has grown, particularly with the help of social media, but the role of these productions is unclear, and there is a lack of research on audience response to these films.
The selling point of An Inconvenient Truth was its personal approach. Although it had a lecture-style tone, this was a documentary that was all about Gore. He told his story entwined with that of the planet. It was extraordinary that people paid to go to the cinema to watch a politician giving a lecture. This was a big shift in cinema. Arguably, this format was enlivened by the way in which Gore opened up about his personal history.
The documentary opened with the politician’s famous quip: “I am Al Gore, and I used to be the next president of the United States.” In November 2000, Gore had lost the presidential election to George W Bush by an extraordinarily narrow margin. The choice to run with a very personal rhetoric was certainly strategic – six years on from that unfortunate election, the time was right for the former vice president to open up. Gore told the story of global warming through his personal life, featuring his career disappointments and family tragedies, and constantly referring to the scientists he interviewed as “my friend”.
This was a very innovative way of approaching the matter of climate change. We are talking about a politician who decided to offer an insight into his private life for a greater cause: to engage the public on a vital scientific subject. The originality of the documentary led to An Inconvenient Truth winning two Oscars at the 2007 Academy Awards.
Today, An Inconvenient Truth is seen as the prototype of activist film-making. Founder of the Climate Reality Project in 2006 and co-recipient of the 2007 Nobel Peace Prize (with the IPCC), Gore and his movement soon became the core of environmental activism, gathering several environmental groups that, despite their differences, today march together for the greatest challenge of our time.
Eleven years on, the revolution under Gore’s lead that many expected has yet to materialise. The following decade was beset with disappointments. More recently, the 2015 Paris Agreement marked a new era for climate action, proving that both developed and developing countries are now ready to work together to reduce carbon emissions. But today there is a new protagonist – or antagonist – in the picture. The trailer for An Inconvenient Sequel shows Gore watching Trump shouting his doubts about global warming to the crowd and announcing his plans to strip back the EPA’s budget.
It will be interesting to see how the tone of the film departs from that of the original. The “personal reveal” tactic won’t work so well the second time round, and a change in the narrative is certainly evident from the trailer. The graphs of the previous documentary are replaced with more evocative images of extreme weather and disasters. While the first film relied predominantly on statistics about carbon dioxide emissions and sea-level rises to stir the audience, this time round Gore can show the results of his predictions. One example is the iconic footage of a flooded World Trade Center Memorial – a possibility discussed by Gore in the 2006 documentary, and criticised by many at the time as a “fictional” element rather than “evidence” of climate impact.
Unfortunately, I am not sure how much this shift will affect the public or whether the sequel will be the manifesto of that revolution that Gore and his followers have been waiting for. The role that the media have played in the communication of climate change issues has changed and developed alongside the evolution of the medium itself and people’s perception of the environment. The last decade has seen an explosion of sensational images and audiences are fatigued by this use of fear.
Many look for media that includes “positive” messages rather than the traditional onslaught of facts and images triggering negative emotions. It has never been more difficult for environmental communicators to please viewers and readers in the midst of a never-ending flow of information available to them.
Michela Cortese received funding from research councils in the past.
Is talking to yourself a sign of mental illness? An expert delivers her verdict
Author: Paloma Mari-Beffa, Senior Lecturer in Neuropsychology and Cognitive Psychology, Bangor University
Being caught talking to yourself, especially if using your own name in the conversation, is beyond embarrassing. And it’s no wonder – it makes you look like you are hallucinating. Clearly, this is because the entire purpose of talking aloud is to communicate with others. But given that so many of us do talk to ourselves, could it be normal after all – or perhaps even healthy?
We actually talk to ourselves silently all the time. I don’t just mean the odd “where are my keys?” comment – we actually often engage in deep, transcendental conversations at 3am with nobody else but our own thoughts to answer back. This inner talk is very healthy indeed, having a special role in keeping our minds fit. It helps us organise our thoughts, plan actions, consolidate memory and modulate emotions. In other words, it helps us control ourselves.
Talking out loud can be an extension of this silent inner talk, caused when a certain motor command is triggered involuntarily. The Swiss psychologist Jean Piaget observed that toddlers begin to control their actions as soon as they start developing language. When approaching a hot surface, the toddler will typically say “hot, hot” out loud and move away. This kind of behaviour can continue into adulthood.
Non-human primates obviously don’t talk to themselves but have been found to control their actions by activating goals in a type of memory that is specific to the task. If the task is visual, such as matching bananas, a monkey activates a different area of the prefrontal cortex than when matching voices in an auditory task. But when humans are tested in a similar manner, they seem to activate the same areas regardless of the type of task.
In a fascinating study, researchers found that our brains can operate much like those of monkeys if we just stop talking to ourselves – whether it is silently or out loud. In the experiment, the researchers asked participants to repeat meaningless sounds out loud (“blah-blah-blah”) while performing visual and sound tasks. Because we cannot say two things at the same time, muttering these sounds made participants unable to tell themselves what to do in each task. Under these circumstances, humans behaved like monkeys do, activating separate visual and sound areas of the brain for each task.
This study elegantly showed that talking to ourselves is probably not the only way to control our behaviour, but it is the one that we prefer and use by default. But this doesn’t mean that we can always control what we say. Indeed, there are many situations in which our inner talk can become problematic. When talking to ourselves at 3am, we typically try hard to stop thinking so we can go back to sleep. But telling yourself not to think only sends your mind wandering, activating all kinds of thoughts – including inner talk – in an almost random way.
This kind of mental activation is very difficult to control, but seems to be suppressed when we focus on something with a purpose. Reading a book, for example, should be able to suppress inner talk quite efficiently, making it a favourite activity for relaxing our minds before falling asleep.
But researchers have found that patients suffering from anxiety or depression activate these “random” thoughts even when they are trying to perform some unrelated task. Our mental health seems to depend on both our ability to activate thoughts relevant to the current task and to suppress the irrelevant ones – mental noise. Not surprisingly, several clinical techniques, such as mindfulness, aim to declutter the mind and reduce stress. When mind wandering becomes completely out of control, we enter a dreamlike state displaying incoherent and context-inappropriate talk that could be described as mental illness.
Loud vs silent chat
So your inner talk helps to organise your thoughts and flexibly adapt them to changing demands, but is there anything special about talking out loud? Why not just keep it to yourself, if there is nobody else to hear your words?
In a recent experiment in our laboratory at Bangor University, Alexander Kirkham and I demonstrated that talking out loud actually improves control over a task, above and beyond what is achieved by inner speech. We gave 28 participants a set of written instructions, and asked them to read the instructions either silently or out loud. We measured participants’ concentration and performance on the tasks, and both improved when the instructions had been read aloud.
Much of this benefit appears to come from simply hearing oneself, as auditory commands seem to be better controllers of behaviour than written ones. Our results demonstrated that, even if we talk to ourselves to gain control during challenging tasks, performance substantially improves when we do it out loud.
This can probably help explain why so many sports professionals, such as tennis players, frequently talk to themselves during competitions, often at crucial points in a game, saying things like “Come on!” to help them stay focused. Our ability to generate explicit self-instructions is actually one of the best tools we have for cognitive control, and it simply works better when said aloud.
So there you have it. Talking out loud, when the mind is not wandering, could actually be a sign of high cognitive functioning. Rather than being mentally ill, it can make you intellectually more competent. The stereotype of the mad scientist talking to themselves, lost in their own inner world, might reflect the reality of a genius who uses all the means at their disposal to increase their brain power.
Paloma Mari-Beffa does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.
Rhinos should be conserved in Africa, not moved to Australia
Author: Matt Hayward, Senior Lecturer in Conservation, Bangor University
Rhinos are one of the most iconic symbols of the African savanna: grey behemoths with armour plating and fearsome horns. And yet it is the horns that are leading to their demise. Poaching is so prolific that zoos cannot even protect them.
Some people believe rhino horns can cure several ailments; others see horns as status symbols. Given that horns are made of keratin, consuming them is about as effective as chewing your fingernails. Nonetheless, a massive increase in poaching over the past decade has led to rapid declines in some rhino species, and solutions are urgently needed.
One proposal is to take 80 rhinos from private game farms in South Africa and transport them to captive facilities in Australia, at a cost of over US$4m. Though it cannot be denied that this is a “novel” idea, I, and colleagues from around the world, have serious concerns about the project, and we have now published a paper looking into the problematic plan.
The first issue is whether the cost of moving the rhinos is justified. The $4m cost is almost double the anti-poaching budget for South African National Parks ($2.2m), the managers of the estate where most white rhinos currently reside in the country.
The money would be better spent on anti-poaching activities in South Africa to increase local capacity. Or, from an Australian perspective, given the country’s abysmal record of mammal extinctions, it could go towards protecting indigenous species there.
In addition, there is the time cost of using the expertise of business leaders, marketeers and scientists. All could be working on conservation issues of much greater importance.
Bringing animals from the wild into captivity introduces strong selective pressure for domestication. Essentially, those animals that are too wild don’t breed and so don’t pass on their genes, while the sedate (unwild) animals do. This is exacerbated for species like rhinos where predation has shaped their evolution: they have grown big, dangerous horns to protect themselves. So captivity will likely be detrimental to the survival of any captive bred offspring should they be returned to the wild.
It is not known yet which rhino species will be the focus of the Australian project, but it will probably be the southern white rhino subspecies – which is the rhino species least likely to go extinct. The global population estimate for southern white rhinos (over 20,000) is stable, despite high poaching levels.
This number stands in stark contrast to the number of northern white (three), black (4,880 and increasing), great Indian (2,575), Sumatran (275) and Javan (up to 66) rhinos. These latter three species are clearly of much greater conservation concern than southern white rhinos.
There are also well over 800 southern white rhinos currently held in zoos around the world.
With appropriate management, the population size of the southern white is unlikely to lose genetic diversity, so adding 80 more individuals to zoos is utterly unnecessary. By contrast, across the world there are 39 other large mammalian herbivore species that are threatened with extinction that are far more in need of conservation funding than the five rhino species.
Rhinos inhabit places occupied by other less high profile threatened species – like African wild dogs and pangolins – which do not benefit from the same level of conservation funding. Conserving wildlife in their natural habitat has many benefits for the creatures and plants they coexist with. Rhinos are keystone species, creating grazing lawns that provide habitats for other species and ultimately affect fire regimes (fire frequency and burn patterns). They are also habitats themselves for a range of species-specific parasites. Abandoning efforts to conserve rhinos in their environment means these ecosystem services will no longer be provided.
Finally, taking biodiversity assets (rhinos) from Africa and transporting them to foreign countries extends the history of exploitation of Africa’s resources. Although well-meaning, the safe-keeping of rhinos by Western countries is as disempowering and patronising as the historical appropriation of cultural artefacts by colonial powers.
Conservation projects are ultimately more successful when led locally. With its strong social foundation, community-based conservation has had a significant impact on rhino protection and population recovery in Africa. In fact, local capacity and institutions are at the centre of one of the world’s most successful conservation success stories – the southern white rhino was brought back from the brink, growing from a few hundred in South Africa at the turn of the last century to over 20,000 throughout southern Africa today.
In our opinion, this project is neo-colonial conservation that diverts money and public attention away from the fundamental issues necessary to conserve rhinos. It is unclear what will happen to the rhinos transported to Australia once the poaching crisis is averted. There appears to be no arrangement as robust as China’s “panda diplomacy”, under which pandas provided to foreign zoos – and any offspring they produce – remain the property of China for the duration of the arrangement, alongside a substantial annual payment.
With increased support, community-based rhino conservation initiatives can continue to lead the way. It is money that is missing, not the will to conserve rhinos nor the expertise necessary to do so. Using the funding proposed for the Australian Rhino Project to support locally led conservation, or to educate people in Asia to reduce consumer demand for rhino horn, would be a far more acceptable option.
The research that this article refers to was done in conjunction with William J. Ripple, Graham I. H. Kerley, Marietjie Landman, Roan D. Plotz and Stephen T. Garnett
Fact Check: do six million people earn less than the living wage?
Author: Tony Dobbins, Professor of Employment Studies, Bangor University
I’m angry and fed up with the way in which six million people earn less than the living wage.
Jeremy Corbyn, leader of the Labour Party, interviewed on the BBC’s Andrew Marr show on April 23.
To assess this claim by Jeremy Corbyn, it is important to distinguish between the various low-wage floors. In 2017, the Living Wage Foundation’s higher, voluntary Real Living Wage (RLW) stands at £9.75 an hour in London and £8.45 elsewhere, based on a calculation of living costs.
The government’s compulsory wage floor is lower and covers all employees. For employees aged 25 and over, it’s called the National Living Wage (NLW) and is £7.50 per hour. For younger employees, it’s called the National Minimum Wage, and ranges from £3.50 to £7.05.
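The three wage floors described above can be compared with a short, purely illustrative calculation. The figures are the 2017 rates cited in this article; the function names are invented for the example.

```python
# 2017 wage floors cited in the article (all in £ per hour).
REAL_LIVING_WAGE_LONDON = 9.75   # voluntary, Living Wage Foundation, London
REAL_LIVING_WAGE_UK = 8.45       # voluntary, outside London
NATIONAL_LIVING_WAGE = 7.50      # statutory, employees aged 25 and over

def below_rlw(hourly_wage: float, in_london: bool) -> bool:
    """True if an hourly wage falls below the applicable Real Living Wage."""
    floor = REAL_LIVING_WAGE_LONDON if in_london else REAL_LIVING_WAGE_UK
    return hourly_wage < floor

# An employee on the statutory National Living Wage earns less than the
# voluntary RLW both inside and outside London:
assert below_rlw(NATIONAL_LIVING_WAGE, in_london=True)
assert below_rlw(NATIONAL_LIVING_WAGE, in_london=False)
```

Note that a wage of, say, £8.50 an hour clears the RLW outside London but falls short of the £9.75 London rate, which is why location matters when counting jobs paid below the threshold.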
Corbyn’s claim concerns the RLW, and the Labour Party directed The Conversation to figures from the Office for National Statistics (ONS), which show that in 2014 an “estimated 5.9m jobs were paid below the Living Wage”.
But the underlying ONS data refers to the number of employee jobs with hourly earnings below the RLW in April 2014, so Corbyn should be referring to jobs rather than people when making this claim. The two are not identical because some people may hold more than one job. It has been estimated that, in 2014, 5.4m people with one job earned less than the RLW.
More recent ONS data from April 2016 estimates that the number of UK employee jobs paid below the RLW increased from 6.16m (22.8%) in 2015 to 6.22m (23.2%) in 2016. More jobs are now paid below the RLW, up from 19% in 2012. Many more part-time jobs are paid below it, compared to full-time jobs, and more women’s jobs than men’s are below the threshold.
Regarding the legal thresholds, in April 2016, when the NLW was introduced, an estimated 362,000 jobs were paid less than the statutory minimum – 1.3% of UK employee jobs. This includes those aged between 16 and 25.
The labour market (notably outside London and the South East) is still suffering from wage stagnation after the 2008 financial crisis and subsequent economic recession and austerity, with the low-paid hit hardest. The UK has drifted further towards a low wage, low productivity, low-quality employment model, while the membership density and bargaining power of trade unions to win higher wages has weakened.
Given ONS earnings projections, it would be more accurate for Jeremy Corbyn and others to refer to the number of employee jobs (rather than people) paid below the RLW. The latest available data indicates that 5.4m people with one job were earning less than the RLW in 2014. That said, in April 2016, 6.22m employee jobs were paid below the RLW, continuing a rising trend in recent years. So while Corbyn’s statement is somewhat misleading, it is true in essence.
Chris Grover, senior lecturer in social policy, Lancaster University
I agree with the verdict, and Corbyn should have referred to six million jobs, rather than six million people. The concept of a “living wage” is a handy device to highlight low pay. What is less clear is to what extent a person earning such a wage might expect to “live”. This is most visible in the government’s NLW, which aims to increase the wages of older workers to 60% of median hourly earnings by 2020. This approach relates wages to what others earn, rather than the cost of living.
The RLW is related to living costs, but it is calculated using weighted averages for a range of families. For this and other reasons, some families being paid the RLW – particularly those headed by lone mothers, and couples with more than three children – will face continuing poverty while in paid work. Such families being paid the NLW will face even deeper poverty.
The Conversation is checking claims made by public figures. Statements are checked by an academic with expertise in the area. A second academic expert then reviews an anonymous copy of the article. Please get in touch if you spot a claim you would like us to check by emailing us at email@example.com. Please include the statement you would like us to check, the date it was made, and a link if possible.
Chris Grover has previously received funding from the British Academy.
Tony Dobbins does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.
All aboard for a train ticket to bring Europe together again
Author: Nikolaos Papadogiannis, Lecturer in Modern and Contemporary History, Bangor University
In many countries, turning 18 marks the transition into adulthood. With it come the delights and difficulties of a host of new rights and responsibilities, from voting to drinking alcohol. Now, there’s talk that it could also be the beginning of an international adventure.
Last year, members of the European Parliament debated whether young Europeans should be given a free Interrail pass on their 18th birthday. The initiative was welcomed by representatives from across the political spectrum, and attracted grassroots support from over 33,000 petitioners. Although the idea has yet to become an official policy, the European Commission has shown interest.
Since Interrail launched in 1972, it has given young Europeans the opportunity to travel at low cost across most of the continent, including countries that don’t belong to the European Economic Community or the European Union. At the moment, a monthly Interrail pass costs between €43 and €493, depending on how far and how frequently one travels. Around 300,000 young Europeans use this programme each year, but if the free Interrail pass initiative is successful, it could attract a sizeable proportion of the 5.4m 18-year-old Europeans annually.
The argument goes that underwriting Interrail passes for young adults is good value for money, because it helps the next generation of European residents to experience and understand other cultures. In theory, meeting and making friends with people from other European countries will strengthen cultural and political ties across the continent. Yet this optimistic outlook deserves closer scrutiny: we shouldn’t simply assume that young Europeans will take up the offer, or that travel will build a common European identity.
Destination: Europe and beyond
This is not the first time that travel has been touted as a way of fostering good relations across Europe. From the ashes of World War II, diverse initiatives sprang up to promote reconciliation through youth tourism. For example, the International Youth Hostel Federation successfully persuaded European governments to ease restrictions on youth travel by changing or getting rid of passport, visa and currency requirements.
Such initiatives proved attractive, and young people increasingly engaged in cross-border travel. By the 1960s, the majority of people aged 20 to 24 in West Germany, Belgium and the Netherlands had visited two or more “foreign” countries. This trend continued in the following decades: in Germany, at the beginning of the 1990s, 17 to 19-year-olds had visited seven to eight countries on average, both within and outside of Europe.
But since then, the financial crisis in several European countries, together with high youth unemployment rates, has apparently taken its toll. Recent market research has shown that the number of foreign trips made by young Europeans fell by around 10% over the decade to 2015. Based on these observations, it seems that initiatives which make travel cheaper and easier can encourage young Europeans to venture across the continent – and that the time is ripe to introduce another such policy.
Ever closer union?
The question remains whether travelling would strengthen cultural or political ties across Europe. There is some basis for such a claim: young supporters of European unification in 1950 asserted that “our passport is the European flag”. And it was not just young people who were already pro-European who travelled across the continent. According to a study by Ronald Inglehart in the 1960s, the younger people were, and the more they travelled, the more likely they were to subscribe to the idea of an ever closer political union in Europe – though this did not necessarily mean that they approved of the existing European institutions.
Yet, historically, youth tourism has brought about frictions as well as friendships. For example, my own research shows moments of cultural misunderstanding in youth hostels as far back as the mid-1960s, when staff at one West German youth hostel bemoaned that many French guests drank too much alcohol. Other scholars have investigated why local men in Greece in the 1980s sought out women from Northern Europe, including young ones, in tourist resorts to have sex with. Those men saw themselves as part of a poorer society, and sought to “sexually conquer” women tourists from richer countries, in order to take “revenge”.
These experiences show that youth tourism has the potential to deepen divides in Europe by playing on some negative stereotypes.
Leaving the station
A free Interrail pass could increase the number of young people travelling across the continent. But if the European Commission is looking to build stronger ties across Europe, this scheme won’t necessarily be enough to challenge negative stereotypes, let alone save the European idea. The commission will need to seek out other ways to maximise the impact of the scheme.
Getting young tourists to narrate their Interrail experiences on social media could help achieve that. It wouldn’t be difficult: those who take up the pass could be asked to contribute to a blog, Instagram page or Facebook group. This would create a place for young travellers to describe how they feel about the people of different nationalities, ethnicities (including migrants) and genders they encounter on their travels, and where residents are given the chance to respond.
This would present an opportunity for all to honestly reflect on moments they shared together – both enjoyable and uncomfortable. Ideally, the commission would encourage all to think critically about the prejudices against one another that circulate throughout the media. Travel and the use of social media won’t eliminate racism. But they could well help people from across the continent to empathise with one another – and that is certainly a goal worth funding.
Nikolaos Papadogiannis does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.
Heat from the Atlantic Ocean is melting Arctic sea ice further eastwards than ever before
Author: Tom Rippeth, Professor of Physical Oceanography, Bangor University
The seasonal sea-ice retreat across the Arctic Ocean is perhaps one of the most conspicuous indicators of climate change. In September 2012, a record low was set for the era in which satellites have tracked sea ice: the minimum sea ice extent was some 50% below the climatic average for that month. Four years on, the September 2016 minimum tied with 2007 as the second lowest sea ice extent since measurements began in 1978.
The seasonal retreat of sea ice is largely because the atmosphere in the Arctic is heated under 24 hours of daylight in the summer, and this makes the ice melt. In the cold of the perpetual darkness of winter, the sea ice extent returns to its winter norm: the only heat available to slow sea ice growth is from winds and ocean currents moving warm air and water in from the south.
However, during the winter of 2016/17 the sea ice did not return to its winter norm. In fact, the sea ice extent was the lowest ever recorded for this time of year.
Though the Arctic is not exactly in the UK’s backyard, the changes in sea ice coverage are thought to be at least partly responsible for the recent run of severe weather events experienced across the northern hemisphere. These include unusually cold winter weather across parts of Europe and the US, and deadly smogs in parts of China.
The Arctic is warming about twice as fast as the rest of the world. As the difference between atmospheric temperatures in the Arctic and mid-latitudes (which includes the UK, part of North America, and a band of northern Europe and Asia) decreases, the speed at which weather systems (depressions) track across the Atlantic to northwestern Europe is reduced. This means that snow and rain can persist for longer, and high pressure systems are “harder to shift”, which can lead to further reductions in air quality.
The largest oceanic heat input to the Arctic comes from water that has been in the Atlantic Ocean, and has travelled through the Fram Strait and around Svalbard. This “Atlantic water” circulates around the Arctic in an anti-clockwise direction. This water is currently the warmest it has been for 2,000 years and now contains enough heat to completely melt the sea ice within a couple of years.
However, while this water is warmer than the ambient Arctic water, it is also saltier, and so heavier, too. It sits at depths of 100 to 400 metres across much of the Arctic Ocean. This means that the Atlantic water heat is insulated from the surface by a layer of lighter, colder and fresher Arctic Ocean water which sits above it.
Atlantic water contact with the sea surface – which then melts the sea ice impacting coverage and thickness – has previously been restricted to the region around Svalbard, where the Atlantic water enters the Arctic Ocean. However, new measurements reported by a team of international scientists have shown, for the first time, that previously insulated Atlantic water heat is now being stirred up to the sea surface. This results in enhanced sea ice melt, much further to the east, north of Siberia.
We previously measured the upward Atlantic water heat flux in this region in 2007 and 2008. At the time it was very modest. However, the new measurements estimate this flux to have increased by two to four times over the winters of 2013/14 and 2014/15. The result of this increase is that sea ice thickness has been reduced by between 18 and 40cm. This exceeds the impact of the atmospheric heat on sea ice melt alone (estimated to be 18cm).
The researchers attribute the change to a reduction in the vertical density gradient within the overlying Arctic water layer. The Atlantic water has moved closer to the sea surface, and created conditions much more like those found around Svalbard, where there is less sea ice. Lead researcher Igor Polyakov describes the change as the “atlantification” of this part of the Arctic Ocean.
These important new results highlight the increasing role of heat coming from the Atlantic Ocean in driving sea ice retreat in the Arctic Ocean. They are a profound sign of the planet’s changing climate, and show that there is a link between retreating Arctic sea ice and the severe weather that has been witnessed in mid-latitude countries.
Furthermore, they show that the impact of Atlantic water heat on sea ice is highly variable across the Arctic Ocean, with significant heat fluxes restricted to geographic “hot spots”. The identification of these hot spots will be key to improving how we forecast the weather in the northern hemisphere and understand how the retreat of Arctic sea ice impacts on it.
Tom Rippeth receives funding from the Natural Environmental Research Council and Bangor University. He is affiliated with the Liberal Democrats.
Bloomageddon: seven clever ways bluebells win the woodland turf war
Author: Vera Thoss, Lecturer in Chemistry, Bangor University
The appearance of vivid bluebell carpets in British woodlands is a sure and spectacular sign of spring. Bluebells – Hyacinthoides non-scripta (L.) Chouard ex Rothm. – are Britain’s favourite wildflower, and particularly fine carpets attract visitors to well-known sites such as Kew Gardens in London and Coed Cefn in Powys, Wales.
Bluebells also form carpets without a wooded canopy – for example, on Skomer Island in Wales – and point to the locations of ancient forests, long after the trees themselves have vanished. This is because, unlike trees, bluebells have most of their biomass and reproductive organs (the bulb) below ground where they are better protected.
They certainly are worth treasuring. It is estimated that Britain is home to half the world’s population of bluebells. But they are now threatened by the introduction of the related Spanish bluebell (Hyacinthoides hispanica), leading to hybridisation and loss of habitat. Once removed, it takes decades to establish a population of bluebells large enough to create the characteristic carpets.
They are beautiful flowers, but have you ever wondered how bluebells pull off an even more impressive feat: being in their flowering prime when other plants have only just started to grow? Here are seven of their cleverest tricks.
1) The cold triggers growth: While most plants require a number of hours above a certain temperature before they start growing again, bluebells are dormant during the heat of the summer. Instead, their seeds are triggered to germinate when the temperature drops below 10°C, allowing them to get a vital head start and be in full bloom when spring finally arrives.
2) They dig deep: Bluebells have contractile roots, which pull the bulb deeper and deeper into the soil with every year of growth. This protects the bulb from frost, which starts from the soil surface, and temperature fluctuations, and provides better access to water in drought conditions.
3) They use fructans as reserve carbohydrates: While most plants use glucose and build starch or cellulose, bluebells predominantly convert sunlight into fructose, from which they build fructans. This adaptation allows them to photosynthesise at temperatures below 10°C. The plant’s large bulb comprises up to 70% fructans, which fuel its winter growth.
Fructans also serve another purpose, minimising the formation of new cells and causing existing cells to elongate instead. This is an advantage because the plants can grow without biosynthesising all the material needed to make new cells. You can see the effects of this by looking at a bluebell’s leaves: at first, they are firm and upright, but gradually lose their rigidity as the cells elongate.
4) They spear through any obstacles: The leaves that emerge from the bulb are as close to each other as possible and shaped like a spear with a small, sharp tip. This allows them to find their way through any obstacle – both below and above ground. When the leaves start emerging in mid-winter, there tends to be a lot of dead leaf matter and other detritus lying on the forest floor. Having an arsenal of little spears is critical for punching your way through this into the sunlight.
5) They cooperate: Bluebells are known to cooperate with mycorrhizae – symbiotic fungi. The fungi obtain carbon from the bluebell in exchange for nutrients, particularly phosphorus. Both parties win, thanks to their use of a wood wide web.
6) … and compete: Phosphorus is an important resource for plants – and bluebells “know” it. As well as securing their supply of it with the help of mycorrhiza, they also restrict the supply available to other plants. They do this by storing phosphorus in the form of phytate, which can only be converted into a usable form with specialised enzymes.
7) They shape their surroundings: Bluebells engineer the soil and their environment to optimally support their own kind while making it harder for other species to grow. As well as storing phosphorus in the form of phytate, and using fructans instead of glucose-based polymers, they quite literally win the turf war by carpeting the space above ground.
Vera Thoss is director of Vera Bluebell limited.
Celebrated 'English' poet Edward Thomas was one of Wales' finest writers
Author: Andrew Webb, Senior Lecturer in English Literature, Bangor University
Shortly after 7am on April 9 1917, 39-year-old writer Edward Thomas was killed by a shell during the Battle of Arras in northern France. He left a body of mostly unpublished work that has since cemented his place as one of Britain’s greatest poets.
All of Thomas’s 144 poems were written in the two and a half years leading up to his death. Almost immediately on its posthumous publication, his poetry came to speak for a rural England whose surviving people and culture had been decimated by four years of war. In a foreword to the 1920 Collected Poems, Walter de la Mare described Thomas’s poetry as “a mirror of England”, suggesting that it offered readers a portrayal of a rural nation that had been “shattered” by the catastrophic experience of World War I.
Thomas has become one of the most widely read English language poets of the 20th century. His Collected Poems has gone through numerous editions, and poems such as “Adlestrop” and “Old Man” have been widely anthologised.
Thomas has a deserved reputation as a poet with an unparalleled eye for the details of the natural world, managing through these observations to make some profound reflections on the human and environmental cost of war. His influence on subsequent generations of English poets is hard to overstate: former poet laureate Ted Hughes famously called Thomas “the father of us all”.
There has been plenty of discussion of Thomas’s work over the past few decades and yet there is one major aspect that has remained largely unexamined: his association with Wales.
An English poet?
Calling Thomas an English poet belies his own complex national identity. Born in London to Welsh parents in 1878, Thomas made frequent trips back to Swansea and the Carmarthenshire areas of south Wales to stay with relatives. He had strong friendships with Welsh-language poets Watcyn Wyn and John Jenkins (“Gwili”), and later attended Lincoln College, Oxford from 1897 to 1900, where he was tutored by Owen M. Edwards, one of the most significant figures in nonconformist Welsh culture.
Edwards awakened Thomas’s sense of Welsh national identity – after graduating he asked his former tutor “to suggest any kind of work … to help you and the Welsh cause”. Three years earlier, Edwards had called for “a literature that will be English in language and Welsh in spirit”, and it seems that Thomas took up his challenge, declaring that: “in English I might do something by writing of Wales”.
Welsh in spirit
The visits to Gwili and Watcyn Wyn became more frequent and both poets feature in Thomas’s 1905 travel book Beautiful Wales. A description of Gwili fishing in a Carmarthenshire stream also features in one of three books of Wales-oriented sketches and short stories published by Thomas between 1902 and 1911: Horae Solitariae, Rest and Unrest, and Light and Twilight. These books are full of Welsh subject matter, including sketches, as well as adaptations of, and allusions to, Welsh folk material and literature.
In his review work for newspapers, Thomas lamented the lack of a widely circulated collection of Welsh folk tales, something that he himself put right in 1911 when he published Celtic Stories, an anthology of Welsh and Irish folk stories written “when Wales and Ireland were entirely independent of England”.
While Thomas’s reputation as a quintessentially English writer rests largely on his poetry, it is now clear that even this is not as English as we previously thought. Welsh subject matter clearly creeps into some of his poems. The following verse from Words is a riddle-like reference to the tradition of Welsh bardic poetry:
Make me content
With some sweetness
From Wales
Whose nightingales
Have no wings…
The lines below from Roads allude to Sarn Helen, the mythical Roman road linking fortresses in the north and south of Wales:
Helen of the roads,
The mountain ways of Wales
And the Mabinogion tales,
Is one of the true gods
Recently, however, we have realised that Thomas’s knowledge of Welsh-language poetic metres influenced his work too. Thomas’s poetry has long been regarded as innovative, but critics have tended to look for its origins in his relationship with American poet Robert Frost, the Imagism movement, or in the spoken voice.
What we have missed is the formal crossover between Welsh-language literary forms and Thomas’s use of intricate sound patterns. The opening lines of “Head and Bottle”, for example, repeat the consonant sounds of “l”, “s” and “m” across the first line, and again in the second line. There is also the internal rhyme in “sun”, “sum” and “hum”:
The downs will lose the sun, white alyssum
Lose the bees’ hum
This is a clear example of cynghanedd, the intricate system of consonantal repetition and internal rhyme which is unique to Welsh-language poems.
Thomas certainly was one of the greatest English-language poets but, one hundred years on, it is becoming clear that he belongs just as much to an Anglophone Welsh literary tradition as he does to the literature of England.
Andrew Webb does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.
Grey squirrels are bad for the British countryside – full stop
Author: Craig Shuttleworth, Honorary Visiting Research Fellow, Bangor University
According to some animal rights groups, the grey squirrel is a victim of circumstance. They say it has been made a scapegoat for regional red squirrel population extinctions and claim that the loss of the reds is caused, entirely coincidentally, by habitat change. They suggest the true facts are being hidden and that scientific research is being intentionally misinterpreted.
Well, no – put this argument to the test and you’ll see that the facts actually do stack up against the grey squirrel. The reality is that, while the grey squirrel is an important part of North American forest ecosystems, since being brought to Europe by the Victorians in 1876, the animal has had severe ecological and economic impacts on British woodlands.
Acrobatic and entertaining they may be, but the charge sheet against the grey squirrel is based on hundreds of peer-reviewed research papers. There really is no defence for it.
Greys vs reds in Europe
Today there are approximately 2.5m grey squirrels in Britain, but fewer than 140,000 reds. Grey squirrels out-compete native reds for food and space. They also dig up and consume seed that red squirrels have buried as a winter store. This behaviour reduces red squirrel skeletal growth rates and adult size, and greatly depresses juvenile survival rates too.
In addition, greys harbour infections – including squirrel pox, which can devastate red squirrel populations. They elevate local viral and nematode infection rates, and bring in new parasites, such as Strongyloides robustus, which are picked up by red squirrels.
Occasionally a healthy red squirrel is found with squirrel pox antibodies – some researchers have suggested that this is evidence of them evolving resistance to the pox. Unfortunately, 63% of red squirrels dying from pox have also been found to have this antibody response present and there is no evidence that these antibodies confer immunity. Even if they did, research has also shown that antibodies are gone within 18 months and, irrespective of any resistance, red populations would be replaced by grey via competition anyway.
Grey squirrels also damage and kill forest trees making it impossible for foresters to grow high-grade hardwood. This means such material is imported instead, bringing with it the risk of new tree pests and pathogens.
Tree damage is most frequently seen on the branches and trunks of oak, beech and maple; bark is stripped by squirrels eager to consume the sap underneath. Tree stems break or die following stripping, which in turn leads to changes in the structure and species composition of the high canopy in amenity woodlands.
Even songbirds are affected by grey squirrels. A recent study found evidence of a negative association between woodland songbird fledging rates and the presence of grey squirrels – though it must be noted that this was not observed annually and was only seen at some of the sites studied. Earlier studies didn’t find evidence to indicate that greys affect bird populations, but nor did they exclude the possibility – even for bird species whose populations are increasing overall.
The Wildlife Trust has recently started to recruit 5,000 volunteers to monitor and control grey squirrel populations. However, a look beyond the headlines will reveal thousands of people are already legally trapping and shooting greys across the country to control their numbers. Volunteer groups cull 6,000 grey squirrels per year in the north of England, for example. Even in areas where reds are absent, locals control grey squirrels to protect woodlands or prevent damage to property. This is not some dramatic new approach by the Wildlife Trust, but is simply reinforcing an established national movement.
The eradication of greys from the Welsh isle of Anglesey saw red squirrel numbers increase from 40 to 700 and there are other examples of grey control halting or reversing red squirrel decline. Research has also demonstrated that red squirrels do not prefer conifer to broadleaved habitat and are just as happy in either.
Future control may involve giving the squirrels contraception, but will almost certainly not rely solely on this because of logistical barriers. The pine marten may assist in some landscapes too: one Irish study found a strong negative correlation between pine martens and greys in the woodlands studied. However, the use of trapping and shooting will inevitably continue as part of an integrated national approach.
And so the grey squirrel stands guilty as charged. Their presence has decimated the British countryside since they were introduced from North America, and if we do not continue to control the species, the future for red squirrels and woodland ecosystems will be bleak.
Craig Shuttleworth is an independent advisor to the European Squirrel Initiative and is on the management board of the EU LIFE14 NAT/UK/000467 invasive species project. He is a Director of Red Squirrels Trust Wales which receives funding from Welsh Government to study viral infections in squirrel species including squirrelpox.
Devil rays get worldwide protection – and genetic tools could catch out illegal traders
Author: Jane Hosegood, PhD Candidate, Bangor University
Devil rays, close cousins of the enormous manta rays, are stars of nature documentaries. They tend to gather in large numbers, and some species leap from the water. Because of this they are popular with divers and, like mantas, important for tourism. But as is so often the case with some of our favourite species, these charismatic creatures are under threat from humans – specifically, from the gill plate trade.
Devil rays are pulled out of the sea in huge numbers, all over the world, and butchered on beaches for their gill plates, the feather-like organs that they use to filter plankton and small fish – their preferred prey – from the oceans. The gill plates are then sold in markets in parts of Asia as a purported health tonic, despite the fact that there is no scientific evidence whatsoever to support this claim.
Unfortunately, these rays are both vulnerable to targeted fishing and often victims of bycatch. To make matters worse, the nine described species of devil ray are known to be some of the slowest reproducing of any elasmobranch, the group which includes sharks and rays. Females take many years to reach maturity, and only produce a single live pup every few years. Huge declines of these rays have been documented all over the world – at rates of up to 99% in some places.
So what is being done? In September and October 2016, the Convention on International Trade in Endangered Species (CITES) met in Johannesburg, South Africa, for its 17th conference. This is the same organisation responsible for regulating trade in some of the world’s most infamous wildlife products, including elephant ivory and rhino horn. The meeting happens every three years, and delegates from the 183 signatory countries discussed listing all nine devil ray species under the convention, to regulate trade in the species and their parts.
I was lucky enough to be present at the meeting, and to see the devil ray proposal achieve the required two-thirds majority vote. These new regulations are being implemented in April 2017 – so it is becoming illegal to trade in devil rays, or any of their parts, such as gill plates, across international borders without permits certifying that the trade is not detrimental to the wild population.
One of the main concerns about enforcing the devil ray regulations is distinguishing between species, which are visually very similar. This is compounded by the fact that those monitoring fisheries – and customs officials – are often presented with gill plates rather than whole specimens. Despite these issues, the devil ray listing will also greatly benefit the existing protections for manta rays, as manta gill plates can no longer be hidden among devil ray gill plates.
A large part of my work focuses on developing traceability tools that can identify a devil ray, or any of its parts, and which region it has come from. The intention is that this will assist with enforcement and monitoring of the new CITES regulations. I am also doing the same for the manta rays, which were listed on CITES in 2013.
Essentially, we take tissue samples from individuals of known species and sequence short fragments of the DNA they contain. This allows us to build up a picture of the genetic signatures of each species and population, against which we can compare samples from an unknown individual or part. What we are looking for is a minimum set of regions within the genome that are distinctive enough within each species to give us confidence in assignment – that is, in identifying which species a sample came from.
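The matching step described above can be sketched in miniature. The species names below are real devil rays, but the five-base “signatures” and the panel of diagnostic positions are invented purely for illustration – real panels use many more markers, population-level data and formal assignment statistics.

```python
# Toy reference panel: the base each species carries at a small set of
# diagnostic genome positions. Sequences are hypothetical, for illustration.
REFERENCE_SIGNATURES = {
    "Mobula mobular":    "ACGTA",
    "Mobula tarapacana": "ACTTG",
    "Mobula japanica":   "GCGTA",
}

def assign_species(sample_signature):
    """Return the species whose diagnostic bases best match the sample,
    plus the fraction of matching sites as a crude confidence score."""
    def matches(ref):
        # Count positions where the sample agrees with the reference.
        return sum(a == b for a, b in zip(ref, sample_signature))

    best = max(REFERENCE_SIGNATURES,
               key=lambda sp: matches(REFERENCE_SIGNATURES[sp]))
    score = matches(REFERENCE_SIGNATURES[best]) / len(sample_signature)
    return best, score

# A gill-plate sample typed at the same five positions:
species, confidence = assign_species("ACGTA")
print(species, confidence)
```

In practice a sample would also be compared against population-level signatures within the best-matching species, to estimate which region it was taken from.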
The project is fortunate to have had a lot of support from international researchers and organisations, and therefore has access to one of the world’s most comprehensive sets of manta and devil ray tissue samples, which will allow the final tool to be as robust as possible. The hope is, that with regulations such as CITES effectively enforced, marine life will still be as vibrant and exciting for many generations to come.
Jane Hosegood is studying for a PhD within the Molecular Ecology and Fisheries Genetics Laboratory at Bangor University. The PhD project is funded as a CASE studentship by the Natural Environment Research Council through the ENVISION DTP with the Royal Zoological Society of Scotland as CASE partner. There are also links with TRACE Wildlife Forensics Network and Jane is Genetics Project Manager to the Manta Trust. Jane has also received funding from the Save Our Seas Foundation, the People’s Trust for Endangered Species, the Fisheries Society of the British Isles, and the Genetics Society.
How football’s richest clubs fail to pay staff a real living wage
Authors: Tony Dobbins, Professor of Employment Studies, Bangor University; Peter Prowse, Professor of Human Resource Management and Employment Relations, Sheffield Hallam University
English football’s top flight, the Premier League, dominates the sporting world’s league tables for revenue. Star players, managers and executives command lucrative wages. Thanks to the biggest TV deal in world football, the 20 Premier League clubs share £10.4 billion between them.
But this wealth bonanza is not being distributed fairly within clubs. Wages are dramatically lower for staff at the opposite end of the Premier League labour market to players and executives. Many encounter in-work poverty.
Indeed, Everton and Chelsea are the only two Premier League clubs fully accredited with the Living Wage Foundation to pay all lower-paid directly employed staff, as well as external contractors and agency staff, a real living wage. This is a voluntary wage that is higher than the legally required national living wage. It is calculated based on what employees and their families need to live, reflecting real rises in living costs. In London it’s £9.75 an hour; elsewhere it’s £8.45.
Of 92 clubs in England and Scotland’s football leagues, only three others – Luton Town, Derby County and Hearts – are also accredited with the Living Wage Foundation. And many club staff – cleaners, caterers, stewards and other match-day roles – are employed indirectly by agencies or contractors and not paid the real living wage.
In 2015, The Independent newspaper asked 20 Premier League clubs simple questions: Does your club pay the living wage to full-time staff? Does it pay, or is it committed to paying the living wage to part-time and contracted staff? Seven clubs failed to reply or said “no comment”.
Good business, good society?
Many football clubs are embedded in urban communities, some classified as among the most impoverished places in Western Europe. What does it say about ethics and employment practices, especially of wealthier Premier League clubs, when many match-day staff don’t receive a proper living wage?
Aside from moral factors relating to fairer distribution of wealth as the glue underpinning more equal societies, there is also a good business case for companies to pay a real living wage. According to the Living Wage Foundation, organisations among the 2,900 accredited as paying the voluntary living wage report significant improvements in quality of work, lower staff absence and turnover – and an improved corporate reputation as a result.
Everton FC, located in an area of Liverpool with high social deprivation, has announced that becoming an accredited Living Wage Foundation employer will significantly increase wages for contractors and casual, match-day staff. Denise Barrett-Baxendale, the club’s deputy chief executive, has said: “Supporting the accredited living wage is quite simply the right thing to do; it improves our employees’ quality of life but also benefits our business and society as a whole.” Everton’s neighbour Liverpool FC has yet to make a similar commitment.
Independent academic research suggests that while workers benefit from the real living wage, it’s not an automatic fix. Higher hourly pay does not necessarily translate into a better standard of living if working hours are too low. The problem is that there are large concentrations of part-time living wage jobs with few hours and so small income increases are offset by rising costs of living.
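The arithmetic behind this point is simple to illustrate. The two hourly rates below are the 2017 real living wage figures quoted above; the £7.50 statutory national living wage and the weekly hours are assumptions chosen for illustration.

```python
def weekly_pay(hourly_rate, hours):
    """Gross weekly pay at a given hourly rate and weekly hours."""
    return hourly_rate * hours

# A part-time match-day role on the real living wage (outside London),
# assuming 12 hours per week:
part_time = weekly_pay(8.45, 12)

# A full-time role on the (assumed) £7.50 statutory national living wage,
# at 37.5 hours per week:
full_time = weekly_pay(7.50, 37.5)

print(f"Part-time at real living wage: £{part_time:.2f}/week")
print(f"Full-time at statutory rate:   £{full_time:.2f}/week")
```

The higher hourly rate yields barely a third of the full-time weekly income – which is why campaigners argue that the living wage only works alongside adequate guaranteed hours.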
Ending foul pay
There has recently been growing mobilisation among the public, civil society, supporters groups and some politicians to pressure football clubs to pay the real living wage. The GMB, a big general workers union, launched the GMB End Foul Pay campaign. London’s mayor, Sadiq Khan, recently urged every London Premier League club to pay all staff the London living wage.
In Manchester, living wage campaigners have targeted the city’s two big clubs Manchester City and Manchester United. While progress has been reported at Manchester City, Manchester United has yet to commit to extending the living wage to its directly employed part-time match-day staff. By contrast, FC United of Manchester, the breakaway non-league club formed by Manchester United fans disenchanted with the Glazers’ ownership, pays the real living wage to all staff, setting an example to the much richer football giant. Manchester United presently ranks as the “richest club in the world”, having achieved record-breaking revenues of £515.3m in 2015-16.
But despite these grassroots campaigns and political exhortations, few football clubs are taking concrete measures to improve the wages and working conditions of lower-paid staff. It appears that leaving pay determination to the prerogative of club owners and executives is not working. Stronger regulation and political intervention may have to be contemplated – such as raising the legal national living wage and giving better legal rights and protections to indirectly employed staff on precarious contracts.
Such issues clearly go beyond football clubs in an economy that still hasn’t recovered from the 2008 financial crisis. The state of the UK labour market is currently being considered by the government’s review of modern employment practices, but we can expect little to change when the economic model remains fundamentally the same.
The misguided political ideology of self-regulating market forces has created stark inequalities as wealth continues to trickle up disproportionately to the top 1% and countervailing institutions, particularly trade unions, have been emasculated. Low pay in football clubs and elsewhere reflects this broader systemic context of contemporary capitalism.
The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond the academic appointment above.
Comment expliquer les échouages massifs de cétacés ?
Author: Peter Evans, Honorary Senior Lecturer, Bangor University
Environ 600 globicéphales se sont échoués en février dernier sur une plage de Nouvelle-Zélande et 400 d’entre eux ont péri. Ce genre d’échouage massif est observé depuis longtemps et se produit dans le monde de façon régulière.
Fin 2015, 337 rorquals boréaux avaient ainsi trouvé la mort dans un fjord du Chili après l’échouage le plus massif connu pour cette espèce. De tels événements peuvent également se produire en Europe du Nord. En février 2017, 29 cachalots ont été retrouvés échoués sur les côtes allemandes, néerlandaises, britanniques et françaises, un autre record en mer du Nord pour cette espèce. Et ces dernières semaines, on déplore sur la façade atlantique de l’Hexagone un nombre alarmant de dauphins échoués.
Surtout des causes naturelles
Comment expliquer que de telles créatures, évoluant dans un environnement 100 % aquatique, s’aventurent dans des zones côtières si inhospitalières où, inévitablement, beaucoup d’entre elles risquent de mourir ?
Ces échouages massifs concernent presque exclusivement des espèces océaniques. Parmi elles, les globicéphales noirs et tropicaux sont les plus touchés. Les autres espèces comprennent les fausses orques, les dauphins d’Électre, les baleines de Cuvier et les cachalots. Toutes ces espèces évoluent normalement dans des eaux à plus de 1 000 mètres de profondeur, sont très sociables, vivant au sein de groupes qui peuvent atteindre plusieurs centaines d’individus.
While it may be tempting to blame these mass strandings on human activities, they mostly involve deep-water whale species and often occur in the same places, so in many cases they can be explained by natural causes. These accidents usually happen in shallow areas with gently sloping sandy bottoms. Given such conditions, it is hardly surprising that animals accustomed to deep water get into difficulty, or that they often strand again after being refloated.
The echolocation these species use to navigate is also rather ineffective in such an environment. It is therefore entirely plausible that most of these strandings come down to navigational error, notably when cetaceans stray into dangerous territory while pursuing prey. This could explain the stranding of the North Sea sperm whales mentioned above, in whose stomachs squid were found.
Sperm whale strandings in the North Sea are accordingly more frequent south of the Dogger Bank, a shallow, sandy area. The same features are found at Farewell Spit and Golden Bay, on New Zealand's South Island, where the recent pilot whale strandings took place; similar accidents had already occurred there several times in recent years.
Both areas have been the scene of multiple strandings of these species in the past. In the southern North Sea, the first recorded observations of such events date back to 1577.
Navigational error and misjudged water depth are not, however, the only causes of strandings. Sick or weakened individuals tend to seek out shallower water, which lets them reach the surface to breathe more easily. Once their weight is resting on a hard surface, there is a strong chance that their rib cage will be compressed and their internal organs damaged.
Human activities also to blame
Since February, nearly 800 stranded dolphins have been counted on the French Atlantic coast. This worrying record is mainly explained by accidental capture in fishing gear. In more than 90% of cases the animals, driven towards the coast by the strong winds of February and March, washed up as carcasses, having died before they stranded.
Other human activities implicated are those involving sonar, as is often the case in military exercises. This cause-and-effect relationship was first highlighted in 1996, when a NATO military exercise off the Greek coast coincided with the stranding of 12 male Cuvier's beaked whales. Unfortunately, no veterinary analysis could be carried out.
But in May 2000, another mass stranding occurred in the Bahamas alongside naval operations using sonar. Some of the whales were examined, and haemorrhages were found, particularly around the inner ear, indicating acoustic trauma.
After a similar incident in the Canary Islands in September 2002, veterinary services identified the same symptoms, linked to decompression sickness. This suggests that the animals do not necessarily die as a result of stranding, but can arrive on shore already fatally injured. Many researchers now believe that naval sonar disrupts whales' ability to manage the gases in their bodies, affecting their capacity to dive and surface safely.
Marine noise has become a major problem, with human activities, from technology to explosions, introducing a whole range of sounds of varying intensity and frequency. Underwater earthquakes are another source of intense marine noise that may cause injuries and strandings, although data to confirm this are still lacking.
The role of social bonds
In the recent New Zealand strandings, which involved large numbers of individuals, we might also ask how far these animals can draw one another into dangerous waters.
A few years ago, I came to the aid of two short-beaked common dolphins that had stranded alive on the shore of the Teifi estuary in west Wales. One died fairly quickly, and a post-mortem examination revealed it had been suffering from a severe parasitic lung infection. The other stayed close by, utterly distressed and whistling regularly.
We managed to return it to the sea and it swam away, but the episode showed me how strong the social bonds between these animals are. So when we witness what can look like the mass suicide of whales or dolphins, it may well result from communication between them, underlining their deeply sociable nature.
Recent research also indicates that the individuals involved in these mass strandings are not necessarily related, which again points to the strength of social bonds among cetaceans.
Peter Evans does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.
Britons see volunteering as a hobby or a way to network rather than a chore
Author: Stephanie Jones, PhD student of sociology, studying civil society, volunteering and participation, Bangor University
Despite the UK being named Europe’s most generous country last year, new data from the Office for National Statistics has shown that volunteering for charities and other organisations in the country declined by 7% in the three years to 2015. Furthermore, over the past decade there has been a 15.4% fall in the total number of regular hours dedicated to volunteering, dropping from 2.28 billion to 1.93 billion hours.
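That 15.4% figure follows directly from the two totals; as a quick sanity check (a minimal sketch using only the rounded figures quoted above):

```python
# Sanity check on the decline quoted above: total regular volunteering
# hours fell from 2.28 billion to 1.93 billion over the decade.
hours_start = 2.28e9  # total regular hours a decade earlier
hours_end = 1.93e9    # total regular hours in the latest figures

pct_fall = (hours_start - hours_end) / hours_start * 100
print(f"{pct_fall:.1f}% fall")  # rounds to the 15.4% quoted
```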
This, according to the Office for National Statistics, resulted in a loss of more than £1 billion between 2012 and 2015.
This downturn doesn’t show the whole picture, however: the ONS also found that more young people are getting involved with volunteering initiatives. And that though the amount of time spent volunteering has declined, more people are signing up to volunteer.
These national stats also don’t show how volunteering is distributed across the country. Wales, for example, has a population of just over 3m people, 940,533 of whom are currently engaged in formal voluntary activity (approximately 32%). In England, 15.9m individuals volunteer frequently out of an overall population of 53.9m, while in Scotland 1.3m people volunteer out of a population of 5.3m.
But again these figures just can’t tell the whole story: in addition to the 940,533 formal volunteers in Wales – who work with community groups, raise funds for charities and work in charity shops – 1.6m volunteered informally, doing things like visiting elderly neighbours and running errands. Likewise, informal volunteering figures are also high in England (18m) and Scotland (1.9m).
While these informal volunteers tend to stick close to home and help their local community, my own research has found that formal volunteers go further afield. Looking at the role and experiences of heritage railway volunteers in North Wales, I found that people say they are substantially more willing to travel across the country to work on projects which mean a lot to them. Volunteering has declined considerably in rural locations over the past decade, but while participation in “traditional” forms of association – attending church or chapel, for example, and taking part in their charitable projects – has fallen, volunteering in the heritage industry has increased.
Community and heritage
The North Wales Ffestiniog railway is the oldest narrow-gauge railway, with approximately 200 years of history. Currently, it has nine societies – one is based in Gwynedd, North Wales, while the other eight are located across England, from Sussex up to Dee and Merseyside. Volunteers perform a vital role in preserving the railway’s heritage, and perform various roles such as driving, guarding and inspecting tickets on the trains.
While those who help neighbours tend to do so on a daily or weekly basis, the motivations of the Ffestiniog volunteers are very different. They often work with the railway while on holiday, to relax from a highly demanding job, to escape from their everyday lives, and to take part in an activity that is not available in many towns and cities. Some also do it to maintain links with an area where they once lived, or which they visited during childhood.
I’ve also found that volunteering is often seen as a way to make new connections. This is particularly prominent within the heritage sector where a significant proportion of its volunteers are either retired, or near retirement age. Their participation can help them maintain a social life, facilitate social connections, create new networks, and bridge the gap between employment and retirement.
Quite often, several members of the same family take part in the same volunteering activity, and it becomes a family tradition. One participant told me that his daughter, son-in-law, two nephews and three great-nephews all work with the Ffestiniog railway, and that “the whole atmosphere of the railway is of an extended family; other volunteers find the same thing”.
Just like the informal volunteers who help their neighbours or local community, the formal railway volunteers find a community and communal spirit in the organisation. While they may not be related by blood or marriage, volunteers’ strong friendships often result in feelings of family, belonging, camaraderie, community and identity. As another volunteer said:
It is fantastic sharing a hobby and interest with people who feel and enjoy the same things. I suppose in a way, it is seen as a peculiar hobby to outsiders, but there is a great sense of community and communal spirit amongst us, we are a team, probably a dysfunctional one, but a team nonetheless.
The truth is that facts and figures could never really represent the true picture of volunteering in Britain. Whether formal or informal, close by or far away, Britons are using their free time to make a difference – one that stats on their own could never truly portray.
Stephanie Jones receives funding from WISERD social research.
Should family members be given more power to help relatives dying at home?
Author: Clare Wilkinson, Deputy Head of Research, School of Healthcare Sciences and Professor of General Practice, Bangor University
It makes sense that people with terminal illnesses would want to be cared for, and die, at home. Familiar surroundings, with family and friends, are what people generally prefer and can bring comfort at the end of life.
But as people get weaker, in the last weeks or days of life, their care does become more challenging for an untrained carer. The patient might not be able to swallow, for example. When this happens, it is standard practice for medicines to be given by a syringe driver. This is a little needle under the skin attached to a pump so medicines can be given over a full 24 hours. The usual medicines that might be included are for pain, agitation, nausea or vomiting, and noisy breathing.
In some cases, however, not all symptoms can be easily relieved, and the patient may require extra medication to stop the symptoms above from breaking through. These breakthrough symptoms can occur even when a syringe driver is in place to deliver medication. When it happens, family members are advised to call a healthcare professional, usually a district nurse. The nurse will visit and give the patient an injection under the skin. But it can take a long time, often more than an hour, for the nurse to arrive and give the medicine.
District nurses have heavy caseloads, and from what we have seen this waiting time happens whether people live in urban or rural areas. It can be a distressing wait for both patient and carer, and the symptoms can worsen further by the time the nurse arrives. Carers have told us it makes them feel powerless to help their loved ones.
So what can be done to help those in their last moments of need? In some countries, such as Australia, carers are trained by nurses to give symptom-relieving medicine to their dying relatives at home. End of life care can be a controversial topic, so though family and friends being able to give medication at home may seem like an obvious choice, not everyone may welcome this approach. This is why we are now conducting a study into attitudes towards the idea, and developing a process for how it could work.
At present, nurses in the UK train families and carers to do many of the tasks that are needed. This may include basic nursing – lifting, turning and washing the person – as well as giving all the medicines the person can take by mouth, and deciding when more help is needed from a healthcare professional. Best care involves a team of support for the family or carers, and always includes a GP and district nurses who are key sources of advice and help, and often actively involved at home as needed.
Giving medicines for breakthrough symptoms is an extension of this role, and builds on excellent palliative care already in place. It is already a legal practice in Britain: carers can give strong painkillers to patients who are unable to make decisions for themselves. But as this is not yet a routine practice, the regulatory framework needs to be very clearly set out to ensure health professionals understand and are comfortable with extending this role into allowing carers to give injections.
The key issue is ensuring the families are chosen well, trained well, and feel competent to do the job. These injections will be “no-needle”, as they will go into a port already in the person’s skin, and so may perhaps be a bit less intimidating for a wary carer to administer.
This is not about hastening death, or pressuring carers to do more than they feel able to do. Studies have found that people in the UK place great emphasis on the empowerment of family carers and on symptom management in the last days of life, so we are confident that the new practice will be well received if it is brought in.
Some NHS organisations, such as Lincolnshire Community Health Services, have already started allowing “lay carers” to administer medication through a port. But we want to make this a more common practice, and give carers the power to help in the final moments of need. Speaking to patients, carers and healthcare professionals across the country, we have found that they are greatly in favour of allowing family carers to administer medications.
We are now working on a feasibility study which we hope could lead to a change in practice. We want to help make dying at home less physically and emotionally painful for both patients and carers.
Clare Wilkinson receives funding from the National Institute for Health Research for her role as chair of the Primary Care Panel. The CARer-ADministration of as-needed sub-cutaneous medication for breakthrough symptoms in home-based dying patients project was funded by the NIHR.