Research stories

On our News pages

Our Research News pages contain an abundance of research-related articles, covering recent research output and topical issues.

On TheConversation.com

Our researchers publish across a wide range of subjects and topics and across a range of news platforms. The articles below are a few of those published on TheConversation.com.

Investing in warmer housing could save the NHS billions

Authors: Dr Nathan Bray, Research Officer in Health Economics, Bangor University; Eira Winrow, PhD Research Candidate and Research Project Support Officer, Bangor University; Rhiannon Tudor Edwards, Professor of Health Economics, Bangor University

Bitterly cold. Ruslan Guzov/Shutterstock

British weather isn’t much to write home about. The temperate maritime climate makes for summers which are relatively warm and winters which are relatively cold. But despite rarely experiencing extremely cold weather, the UK has a problem with significantly more people dying during the winter compared to the rest of the year. In fact, 2.6m excess winter deaths have occurred since records began in 1950 – that’s equivalent to the entire population of Manchester.

Although the government has been collecting data on excess winter deaths – that is, the difference between the number of deaths that occur from December to March compared to the rest of the year – for almost 70 years, the annual statistics are still shocking. In the winter of 2014/15, there were a staggering 43,900 excess deaths, the highest recorded figure since 1999/2000. In the last 10 years, there has only been one winter in which fewer than 20,000 excess deaths occurred: 2013/14. Although excess winter deaths have been steadily declining since records began, in the winter of 2015/16 there were still 24,300.

According to official statistics, respiratory disease is the underlying cause for over a third of excess winter deaths, predominantly due to pneumonia and influenza. About three-quarters of these excess respiratory deaths occur in people aged 75 or over. Unsurprisingly, cold homes (particularly those below 16°C) cause a substantially increased risk of respiratory disease and older people are significantly more likely to have difficulty heating their homes.

Health and homes

The UK is currently in the midst of a housing crisis – and not just due to a lack of homes. According to a 2017 government report, a fifth of all homes in England fail to meet the Decent Homes Standard – which is aimed at bringing all council and housing association homes up to a minimum level. Despite the explicit guidelines, an astonishing 16% of private rented homes and 12% of housing association homes still have no form of central heating.

Even when people have adequate housing, the cost of energy and fuel can be a major issue. Government schemes, such as the affordable warmth grant, have been implemented to help low income households increase indoor warmth and energy efficiency. However, approximately 2.5m households in England (about one in nine) are still in fuel poverty – struggling to keep their homes adequately warm due to the cost of energy and fuel – and this figure is rising.

Poor housing costs the NHS a whopping £1.4 billion every year. Reports indicate that the health impact of poor housing is almost on a par with that of smoking and alcohol. Clearly, significant public health gains could be made through high quality, cost-effective home improvements, particularly for social housing. Take insulation, for example: evidence shows that properly fitted and safe insulation can increase indoor warmth, reduce damp, and improve respiratory health, which in turn reduces work and school absenteeism, and use of health services.

Warmth on prescription

In our recent research, we examined whether warmer social housing could improve population health and reduce use of NHS services in the northeast of England. To do this, we analysed the costs and outcomes associated with retrofitting social housing with new combi-boilers and double glazed windows.

After the housing improvements had been installed, NHS service use costs reduced by 16% per household – equating to an estimated NHS cost reduction of over £20,000 in just six months for the full cohort of 228 households. This reduction was offset by the initial expense of the housing improvements (around £3,725 per household), but if these results could be replicated and sustained, the NHS could eventually save millions of pounds over the lifetime of the new boilers and windows.

The benefits were not confined to NHS savings. We also found that the overall health status and financial satisfaction of main tenants significantly improved. Furthermore, over a third of households were no longer exhibiting signs of fuel poverty – households were subsequently able to heat all rooms in the home, where previously most had left one room unheated due to energy costs.

Perhaps it is time to think beyond medicines and surgery when we consider the remit of the NHS for improving health, and start looking into more projects like this. NHS-provided “boilers on prescription” have already been trialled in Sunderland with positive results. This sort of cross-government thinking promotes a nuanced approach to health and social care.

We shouldn’t assume that the NHS must foot the bill entirely for ill health related to housing. The Treasury, for instance, could establish a cross-government approach, investing in housing to simultaneously save the NHS money. A £10 billion investment into better housing could pay for itself in just seven years through NHS cost savings. With a growing need to prevent ill health and avoidable death, maybe it’s time for the government to think creatively right across the public sector, and adopt a new slogan: improving health by any means necessary.
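
To make the arithmetic behind that payback claim explicit, here is a minimal illustrative sketch. It simply divides a hypothetical £10 billion investment by the £1.4 billion annual cost of poor housing to the NHS cited earlier, and assumes – purely for illustration – that the whole of that annual cost could be saved:

```python
# Illustrative payback calculation only - not a figure from the study itself.
# Assumes (hypothetically) that better housing eliminates the full £1.4bn
# annual cost of poor housing to the NHS mentioned above.
investment = 10_000_000_000        # £10 billion invested in housing
annual_nhs_saving = 1_400_000_000  # £1.4 billion saved per year

payback_years = investment / annual_nhs_saving
print(f"Payback period: {payback_years:.1f} years")  # roughly 7 years
```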

The Conversation

Nathan Bray receives funding from Health and Care Research Wales and the EU Horizon 2020 Framework Programme for Research and Innovation.

Eira Winrow receives PhD funding from Health and Care Research Wales.

Rhiannon Tudor Edwards receives funding from the National Institute for Health Research, Health Technology Assessment (HTA), Health and Care Research Wales and the EU Horizon 2020 Framework Programme for Research and Innovation.

Why we taught psychology students how to run a marathon

Author: Rhi Willmot, PhD Researcher in Behavioural and Positive Psychology, Bangor University

Pavel1964/Shutterstock

Mike Fanelli, champion marathon runner and coach, tells his athletes to divide their race into thirds. “Run the first part with your head,” he says, “the middle part with your personality, and the last part with your heart.” Sage advice – particularly if you are a third year psychology student at Bangor University, preparing for one of the final milestones in your undergraduate experience: running the Liverpool Marathon.

For many students, the concluding semester of third year is a time of uncertainty. Not only are they tackling the demands of a dissertation and battling exams, but they are also teetering on the precipice of an unknown future, away from the comfort of university.

As spring draws to a close, the academic atmosphere provides a heady cocktail of sleep-deprivation, achievement and stress. Yet 22 of our students managed to do all this and train for a marathon as part of their “Born To Run” class. None of them had completed such a distance before – in fact, most had run no further than 5km prior to their module induction.

Practice runs. Will Philpin, whenitrainscreative.com

Rewind several months, and I am listening to my PhD supervisor, John Parkinson, and fellow academic Fran Garrad-Cole discuss their plans for “the running module”, which would coincide with more traditional lectures on positive and motivational psychology. I was greatly enthused by the idea, given the psychological benefits of physical activity. Exercise is related to improvements in mood, self-esteem and social integration, as well as reducing symptoms of depression.

Particularly relevant to those under pressure at work or school is the association between physical activity and the ability to cope with stress, as well as enhanced cognitive functioning. But despite these benefits, designing a class around running a marathon was no easy task.

Race to success

As neither module organiser nor student, it was easy for me to relish the gamble of this venture. My participation – assisting the classes and helping the students to train for the marathon – did not place my professional reputation on the line, nor did it have the potential to significantly impact the outcome of my degree. The danger with this kind of practical application is that when things fail, the failure is highly visible.

It would be easy to reduce “success” to a binary distinction of running or not running on race day. Yet this perspective would very much miss the point. The aim of the module wasn’t simply to complete a marathon, but to create graduates who set themselves huge challenges and nail them, whenever that may be.

Running seminars. Will Philpin, whenitrainscreative.com

Not every student ran the marathon, but for the 13 who did, the three who ran the half, and those who didn’t run at all, the lessons on perseverance and resilience demonstrate that failure can be an essential component of success.

The message from the Born to Run module was essentially one of courage. T. S. Eliot once said: “Only those who risk going too far can possibly find out how far one can go.” This statement rings true on multiple levels. It was visible in the students’ bravery in publicly committing to such a challenging goal, in John and Fran’s professional risk, and in the mental and physical ardour that training for a marathon demands.

What I saw was the incredible impact that setting high expectations, balanced with warm support and strategic expertise, can have on student engagement. Most importantly, I learnt how bringing your own passion into the classroom can transform the learning experience, reaching into students’ academic and personal lives alike.

So to return to Mike Fanelli, the final stages of the module, as well as the marathon, are about the heart. The technical strategies the students learnt saw them through the first few miles, and the traits they were encouraged to develop enabled them to cover the next third. But in the final part, when delirium sets in, it’s the emotional bond created by such a challenging yet supportive experience that gets you through.

The pleasure I felt at eventually crossing the line was multiplied immeasurably by sharing this experience with the others I have seen develop over the semester. I will be forever grateful to one student, Patrick, for pulling me through that last mile, and forever in awe of Fran, John and the first ever Born to Runners.

The Conversation

Rhi Willmot has nothing to disclose.

Documenting three good things could improve your mental well-being in work

Author: Kate Isherwood, PhD Student in Health and Well-being, Bangor University

Stressful day. Kuprevich/Shutterstock

The UK is facing a mental health crisis in the workplace. Around 4.6m working people – 7% of the British population – suffer from either depression or anxiety. In total, 25% of all EU citizens will report a mental health disorder at some point in their lives.

People who have been diagnosed with a mental health disorder, or show symptoms of one, and remain in work are known as “presentees”. These individuals may have trouble concentrating, memory problems, find it difficult to make decisions, and have a loss of interest in their work. They underperform and are non-productive.

Medication and/or talking therapies – like cognitive behavioural therapy (CBT) – have been shown to be highly effective in treating common mental health disorders. But these interventions are aimed at those who are already signed off sick due to a mental health diagnosis (“absentees”).

Stress and pressure at work are not the same as at home, so those with mental health issues who are still in work need a different kind of help. In the workplace, employees can be subject to tight deadlines and heavy workloads, and may potentially be in an environment where there is a stigma against talking about mental health.

Reframing mental health

So what can be done for those working people who have depression or anxiety? Research has found that simply treating a person before they are signed off sick will not only protect their mental health, but can actually result in increased workplace productivity and well-being. For example, when a group of Australian researchers introduced CBT sessions into a British insurance company, they found it greatly improved workplace mental health.

In the study, seven three-hour sessions of traditional CBT were offered to all staff in the company. The sessions focused on thinking errors, goal-setting, and time management techniques. At follow-up appointments seven weeks and three months after the sessions had ended, the participants showed significant improvements in areas such as job satisfaction, self-esteem and productivity. They had also improved on clinical measures such as attributional style – how a person explains life events to themselves – psychological well-being and psychological distress.

Positive notes. wavebreakmedia/Shutterstock

However, there have been concerns that using the types of treatment typically given to people outside work may be distracting to an employee. The worry is that they don’t directly contribute to company targets, instead offering more indirect benefits that can’t be as easily measured.

But there is an alternative that doesn’t take up too much company time and can still have a huge impact on employees’ mental health: positive psychology.

Three good things

In the last 15 years, psychological study has moved away from the traditional disease model, which looks at treating dysfunction or mental ill-health, towards the study of strengths that enable people to thrive. This research focuses on helping people to identify and utilise their own strengths, and encourages their ability to flourish.

Positive psychology concentrates on the development of “light-touch” methods – which take no longer than 10 to 15 minutes a day – to encourage people to stop, reflect and reinterpret their day.

Something as easy as writing down three good things that have happened to a person in one day has been shown to have a significant impact on happiness levels. In addition, previous research has found that learning how to identify and use one’s own strengths, or expressing gratitude for even the littlest things, can also reduce depression and increase happiness.

This is effective in the workplace as well: when a positive work-reflection diary system was put in place at a Swiss organisation, researchers found that it had a significant impact on employee well-being. Writing in diaries decreased employees’ depressive moods at bedtime, which had an effect on their mood the next morning. The staff members were going to work happier, simply by thinking positively about how their shift had gone the day before.

Added to this, when another group of researchers asked employees of an outpatient family clinic to spend ten minutes every day completing an online survey, stress levels and mental and physical complaints all significantly decreased. The questionnaire asked the participants to reflect on their day and write about large or small, personal or work-related events that had gone well, and to explain why they had occurred – similar to the three good things diary. The staff members reported events like a nice coffee with a co-worker, a positive meeting, or just the fact that it was Friday. It showed that even small events can have a huge impact on happiness.

The simple practice of positive reflection creates a real shift in what people think about, and can change how they perceive their work lives. And, as an added benefit, if people share positive events with others, it can boost social bonds and friendships, further reducing workplace stress.

Reframing the day can also create a feedback loop that enhances its impact. When we are happier, we are more productive; when we are more productive, we reach our goals, which helps us to focus on our achievements more, which in turn makes us happier.

The Conversation

Kate Isherwood does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

What language tells us about changing attitudes to extremism

Author: Josie Ryan, PhD Researcher, Bangor University

Words are more than their dictionary definition. Amir Ridhwan/Shutterstock

The words “extreme”, “extremist” and “extremism” carry so many connotations these days – far more than a basic dictionary definition could ever cover. Most would agree that Islamic State, the London Bridge and Manchester Arena attackers, as well as certain “hate preachers”, are extremists. But what about Darren Osborne, who attacked the Finsbury Park Mosque? Or Thomas Mair, who murdered Labour MP Jo Cox? Or even certain media outlets and public figures who thrive on stirring up hatred between people? Their acts are hateful and ideologically driven, but calls for them to be described in the same terms as Islamic extremists are more open to debate.

The word “extreme” comes from the Latin (as so many words do) “extremus”, meaning, literally, far from the centre. But the words “extremist” and “extremism” are relatively new to the English language.

Much language is metaphorical, especially when we talk about abstract things, such as ideas. So, when we use “extreme” metaphorically, we mean ideas and behaviour that are not moderate and do not conform to the mainstream. These are meanings we can find in a dictionary, but this is not necessarily how or when extreme, extremist, and extremism are used in everyday life.

Lingua

One way of finding out how words are used is to look at massive databases of language, called corpora. To find out more about how these words developed in Britain, I turned to the Hansard corpus, a collection of parliamentary speeches from 1803 to 2005. Political language is quite specific, but analysing it is a good way to see how the issues of the day are being described. In addition, having a record which covers two centuries shows us how words and their meanings have changed over time.

Apart from the adverb “extremely” – used in the same way as “really” and “very” – my search showed that the word extreme was used most frequently in its adjective form during this 200-year period. However, usage of extreme as an adjective has been declining since the mid-1800s, as has the noun form. At the same time, two new nouns, “extremist” and “extremism”, began to appear in the corpus in the late 1800s, and their usage gradually increased over time. No longer are certain views and opinions described as extreme; instead, extremist and extremism are used as a shorthand for complex ideas, characteristics, processes and even people.

Looking at the frequency of the noun extremist(s) over time, we can see three peaks. It is interesting to see which groups have been labelled as extremist in the past, as this can provide clues about who is considered an extremist these days, and also who is not.

In the 1920s, extremist and extremism were often used in connection with the Irish and Indian fights for independence from the British Empire. Fifty years on, they were linked with another particularly violent period in Irish history, while Rhodesia was also fighting for independence from Britain in the 1970s. The final increase in usage of the terms extremist and extremism comes, perhaps unsurprisingly, at the start of the 21st century.

However, the words have not been solely linked to violence: they were very often used to describe miners in the 1920s and animal rights activists in the 2000s. Both of these groups have had a lot of support from the British population if not from politicians speaking in parliament.

I also looked at the words that appear around the extreme words, or “collocates”. What I found is that the collocates of the search terms become increasingly negative over the period covered in the Hansard corpus. They also became less connected to situations, and more closely connected to political or religious ideas and violence. For example, in the late 20th century and early 2000s, “extremism” became more associated with Islam, and at the same time, it was collocated with words such as “threat”, “hatred”, “attack”, “terror”, “evil”, “destroy”, “fight”, and “xenophobic”.
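
For readers curious about what this involves in practice, here is a minimal, hypothetical sketch of the basic idea behind collocation analysis – counting the words that appear within a fixed window of a search term. It is illustrative only and is not the corpus software used for the Hansard analysis:

```python
from collections import Counter

def collocates(tokens, target, window=4):
    """Count words appearing within `window` tokens of each occurrence of `target`."""
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok == target:
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    counts[tokens[j]] += 1
    return counts

# Toy example on an invented sentence, not real corpus data:
text = "the threat of violent extremism and the fight against extremism".split()
print(collocates(text, "extremism").most_common(3))
```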

Extremism

After 2005, the extremist terms became much more frequently associated with the Islamic faith – to the point where the word “extremist” is now almost exclusively used to refer to a Muslim who has committed a terrorist act, and some have suggested there is reluctance to use it otherwise.

Looking at the collocates of extremist and extremism in a corpus of UK web news, which runs from 2010-2017, five of the top 10 collocates are related to Islam. “Right wing” and “far-right” also appear in the top 10. However, the top three collocates – “Islamic”, “Islamist” and “Muslim” – appear 50% more frequently than the other seven collocates in the list added together.

The most interesting thing to come out of this investigation is what has gone unsaid. Extremist and extremism are no longer being used, as they were in the past, to describe violent, hateful and ideologically driven acts with no reference to ethnicity or faith. Today, the terms have become almost solely reserved for use in reference to Muslims who perpetrate terrorist attacks.

The words we use can affect and reveal how we perceive the world around us. Word meanings change over time, but reluctance to use the same word for the same behaviour betrays a bias towards crimes that are, perhaps, uncomfortably mainstream.

The Conversation

Josie Ryan does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Forget Jon Snow, watch the young women to find out how Game of Thrones ends

Author: Raluca Radulescu, Professor of Medieval Literature and English Literature, Bangor University

Sky Atlantic/©2017 Home Box Office, Inc

For Game of Thrones fans, the current series has been a bit of a mystery. As the television writers have picked up the storyline beyond where author George R. R. Martin’s published A Song of Ice and Fire novels leave off, there is, for the first time, no original text to refer back to.

Much virtual ink has been spilled recently over the role of the female characters in the political struggle, yet one of the most crucial themes of this series is going largely undiscussed: the role of children, particularly young girls.

Arya Stark. Sky Atlantic/©2015 Home Box Office, Inc.

The children of Game of Thrones might form the thick-woven fabric of the tapestry we have been watching, but they have not really taken centre stage. There were little nods in past episodes towards the vital importance of the children in Game of Thrones: take the little orphans of King’s Landing, for example, who killed Grand Maester Pycelle of the Citadel – a rather unusual turn of the plot. Later episodes have been more obvious about the power of children, but it is only now that the series is being so explicit about it.

The latest episode to air, episode six, lays the central role of children and young people on a little more thickly. Without giving too much away, the struggle between Sansa and Arya, the Stark sisters, seemingly comes to a head, while a shocking event involving Daenerys Targaryen causes her once more to tearfully utter the phrase “they are my children”, and to tell Jon Snow that she is unable to bear a child of her own. We have also recently heard that the current queen of the Seven Kingdoms, Cersei, is pregnant once more with a new heir to the Lannister line.

Seen but not heard

From the start of the series, and indeed Martin’s novels, the struggle over dictating the future of the Seven Kingdoms has been very similar to that of the real-life Wars of the Roses. Cersei’s naked ambition and her son Joffrey’s stark cruelty (puns intended) are reminiscent of Margaret of Anjou. She was the 15th-century French queen of the mentally unstable Lancastrian king Henry VI, whose son – allegedly begotten in adultery, though not incest as in Game of Thrones – was Prince Edward.

Like Margaret of Anjou, Cersei uses her reputation – and children – to her advantage. She takes charge of the family fortunes and boldly looks at the future as an opportunity for herself. There’s every chance she’ll don armour at some point, as Queen Margaret herself was rumoured to have done during the Wars of the Roses.

Unlike Margaret, however, Cersei faces a battle with the upcoming dynasties of women. Cersei still believes that she is the most important woman in Westeros, but the younger females we first saw as children have come more into the limelight during this and the last series. Cersei’s power is waning, while other prominent women such as Daenerys, or indeed the young lady Lyanna Mormont – the head of one of the great families of the North – are unafraid to ride into battle. Even Sansa, who Cersei once tried to humiliate and oppress, is now standing in as ruler of the North while her half-brother Jon Snow seemingly prefers his place in the heart of the action.

Jon Snow rides to fight. Sky Atlantic/©2017 Home Box Office, Inc.

Since the first episodes, we have been watching these young women grow and change – but only now is their true significance being made clear. Where once they were shown in the more expected, traditional roles of a medieval female, now they are warriors in their own right. A feisty young Arya has transformed from the lively girl with her sword “Needle” to an assassin, a “Faceless Man” trained in the dark arts and haunting Winterfell. Sansa, meanwhile, has become a different kind of fighter, going from dreams of being a princess to overcoming years of abuse and ultimately emulating her own strong mother, Lady Catelyn Stark.

Valar Morghulis: All men must die

Yet Cersei is not that “old” – and potentially still has decades ahead of her to sit on the Iron Throne. If there’s one lesson that can be learned from Lady Olenna of House Tyrell – the wise older woman who tells Daenerys she has survived many powerful men – it is that even when women are no longer young and the focus of attention, they still have some influence to wield. Cersei may have lost her first three children – and the control she had in using them as pawns to her game – but her new pregnancy could very well serve to change that once more.

Ultimately lineages are the most important factor in winning the game of thrones – and it could very well be that Cersei’s new child grows to fight a ruling Daenerys, who, as of episode seven, had not yet named an heir to her throne.

As the battle comes to centre on the two – or three, if you count Sansa – queens, it has never been clearer that the young female combatants are now far more relevant than the adult male leaders, most of whom have been killed off. As children, these women signalled change in dynastic struggles – but now they are grown up, they are heralding a second echelon of much wiser, perhaps untainted rulers: theirs is the future of Westeros.

The Conversation

Raluca Radulescu has nothing to disclose.

Independent music labels are creating their own streaming services to give artists a fair deal

Author: Steffan Thomas, Lecturer in Film and Media, Bangor University

Kaspars Grinvalds/Shutterstock

Music streaming services are hard to beat. With millions of users – Spotify alone had 60m by July 2017, and is forecast to add another 10m by the end of the year – paying to access a catalogue of more than 30m songs, any initial concerns seem to have fallen by the wayside.

But while consumers enjoy streaming, tension is still bubbling away for the artists whose music is being used. There is a legitimacy associated with having music listed on major digital platforms, and a general acknowledgement that without being online you are not a successful business operation or artist.

Even the biggest stars are struggling to deny the power of Spotify, Apple Music and the like. Less than three years after pop princess Taylor Swift announced she would be removing her music from Spotify, the best-selling artist is back online, as it were. Swift’s initial decision came amid concerns that music streaming services were not paying artists enough for using their work – a view backed up by others including Radiohead’s Thom Yorke.

But while Yorke and Swift can survive without the power of streaming, independent production companies with niche audiences may not be able to.

Struggling artists

Though the music industry is starting to get used to streaming – streamed tracks count towards chart ratings, and around 100,000 tracks are added every month to Spotify’s distribution list – it is still proving difficult for independent music companies to compete for exposure on these platforms.

Coping with diminishing sales of CDs and other physical copies of music, independent labels are already in a tough place. Independent labels and artists are also unable to negotiate with large digital aggregators such as Spotify or Deezer for more favourable rates, and are forced to accept the terms given. Independent labels often lack negotiating expertise, but above all they lack the catalogue size that gives bargaining power. Major record labels, backed by industry organisations, on the other hand, can and have successfully negotiated more favourable terms for their artists based on the share of the catalogue that they represent.

Digital sounds. rawpixel.com/Shutterstock

There has also been a shift in industry approach that some independent labels may find difficult to follow. These days, major labels are focused less on the artists themselves and more on which music will do best on new platforms. This undermines the ethos of many culturally rich independent labels, which work hard to safeguard niche areas of their market. For them, it is about building up different genres, not simply releasing songs that will generate the most money.

So if niche labels can’t get a strong footing on large services, what can they do?

Independent streaming

Where once there were free sites such as SoundCloud, which gave emerging and niche musicians a place to share their music, indie labels are now developing their own streaming services to make sure their artists get the best exposure – and the best deal.

Wales in particular is leading the way for the minority-language independent music scene. Streaming service Apton, launched in March 2016, provides a curated service to its music fans. It operates at a competitive price point, with a more selective catalogue representing several Welsh labels. More importantly, it returns a much fairer price to its recording artists than Spotify’s reported £0.00429 per stream.
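
To put that per-stream figure in context, here is a rough, illustrative calculation; the £500 monthly income target is an arbitrary assumption, not a figure from the article:

```python
# Back-of-the-envelope illustration of what a per-stream rate means in practice.
per_stream_rate = 0.00429  # £ per stream - the figure reported above for Spotify
monthly_target = 500.0     # £ per month an artist might hope to earn (hypothetical)

streams_needed = monthly_target / per_stream_rate
print(f"Streams needed per month: {streams_needed:,.0f}")  # roughly 116,550
```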

By using a specialist, curated and targeted music service – such as Apton, or similar services The Overflow and Primephonic – consumers are better able to find the music they are looking for. Listeners are also more likely to value the service, as they can access and experience a greater percentage of a label’s catalogue, or remain within a niche genre of music, compared with mainstream streaming services, where recommendations are generated via popular mass-market playlists. Users of these streaming sites and apps also value the knowledge that the money they spend is being used to support the artists they follow.

Though they are certainly doing well as it is, streaming services at all levels need more work to become the default for music listening. In addition, it is vital that music publishers start using streaming as a gateway for consumers to engage with the music they want to hear, rather than with what publishers want to sell. If the latter strategy continues to be followed, it may have a devastating effect on budding artists.

Likewise, listeners need to feel that streaming offers transparency and value, and that there is a two-way relationship worthy of their time and attention – something the major players could certainly learn from the independents.

The Conversation

Steffan Thomas was previously affiliated with Sain Records. Apton is owned by Sain Records and was developed in response to research produced during his PhD. However, he has no ongoing role within the company and retains no commercial interest in the service.

Migrating birds use a magnetic map to travel long distances

Author: Richard Holland, Senior Lecturer in Animal Cognition, Bangor University

Anjo Kan/Shutterstock

Birds have an impressive ability to navigate. They can fly long distances, to places that they may never have visited before, sometimes returning home after months away.

Though there has been a lot of research in this area, scientists are still trying to understand exactly how they manage to find their intended destinations.

Much of the research has focused on homing pigeons, which are famous for their ability to return to their lofts after long distance displacements. Evidence suggests that pigeons use a combination of olfactory cues to locate their position, and then the sun as a compass to head in the right direction.

We call this “map and compass navigation”, as it mirrors human orienteering strategies: we locate our position on a map, then use a compass to head in the right direction.

But pigeons navigate over relatively short distances, in the region of tens to hundreds of kilometres. Migratory birds, on the other hand, face a much bigger challenge. Every year, billions of small songbirds travel thousands of kilometres between their breeding areas in Europe and winter refuges in Africa.

This journey is one of the most dangerous things the birds will do, and if they cannot pinpoint the right habitat, they will not survive. We know from displacement experiments that these birds can also correct their path from places they have never been to, sometimes from across continents, such as in a study on white-crowned sparrows in the US.

Over these vast distances, the cues that pigeons use may not work for migrating birds, and so scientists think they may require a more global mapping mechanism.

Navigation and location

To locate our position, we humans calculate latitude and longitude – that is, our position on the north-south and east-west axes of the earth. Human navigators have been able to calculate latitude from the height of the sun at midday for millennia, but it took us much longer to work out how to calculate longitude.

Eventually it was solved by having a highly accurate clock that could be used to tell the difference between local sunrise time and Greenwich Mean Time. Initially, scientists thought birds might use a similar mechanism, but so far no evidence suggests that shifting a migratory bird’s body clock affects its navigation ability.

There is another possibility, however, which has been proposed for some time, but never tested – until now.

The earth’s magnetic pole and the geographical north pole (true north) are not in the same place. This means that when using a magnetic compass, there is some angular difference between magnetic and true north, which varies depending on where you are on the earth. In Europe, this difference, known as declination, varies consistently along an east-west axis, and so could potentially serve as a clue to longitude.
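
As a toy illustration of how declination could act as a longitude cue, the sketch below assumes an invented, simplified linear relationship between declination and longitude across a region; real declination maps are considerably more complex:

```python
# Hypothetical illustration: treat declination as roughly linear in longitude
# across a region, then invert that relation to estimate longitude.
# The reference points below are invented for the sketch, not real survey values.
west_lon, west_decl = -5.0, -2.0   # a point in western Europe (degrees)
east_lon, east_decl = 40.0, 12.0   # a point in western Russia (degrees)

def longitude_from_declination(decl):
    # Linear interpolation between the two reference points.
    frac = (decl - west_decl) / (east_decl - west_decl)
    return west_lon + frac * (east_lon - west_lon)

print(longitude_from_declination(5.0))  # roughly midway between the two points
```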

A reed warbler. Rafal Szozda/Shutterstock

To find out whether declination is used by migrating birds, we tested the orientation of migratory reed warblers. Migrating birds that are kept in a cage will show increased activity, and they tend to hop in the direction they migrate. We used this technique to measure their orientation after we had changed the declination of the magnetic field by eight degrees.

First, the birds were tested at the Courish Spit in Russia, but the changed declination – in combination with unchanged magnetic intensity – indicated a location near Aberdeen in Scotland. All other cues were available and still told them they were in Russia.

If the birds were simply responding to the change in declination – like a magnetic compass would – they would have only shifted eight degrees. But we saw a dramatic reorientation: instead of facing their normal south-west, they turned to face south-east.

This was not consistent with a magnetic compass response, but was consistent with the birds thinking they had been displaced to Scotland, and correcting to return to their normal path. That is to say they were hopping towards the start of their migratory path as if they were near Aberdeen, not in Russia.

It seems, then, that declination is a cue to longitudinal position in these birds.

There are still some questions that need answering, however. We still don’t know for certain how birds detect the magnetic field, for example. And while declination varies consistently in Europe and the US, if you go east, it does not give such a clear picture of where the bird is, with many values potentially indicating more than one location.

There is definitely still more to learn about how birds navigate, but our findings could open up a whole new world of research.

The Conversation

Richard Holland receives funding from the Leverhulme Trust and the BBSRC.

Welsh language media could hold the solution to Wales's democratic deficit

Author: Ifan Morgan Jones, Lecturer in Journalism, Bangor University

Billy Stock/Shutterstock

For the people of Wales, the country’s democratic deficit has become almost part and parcel of everyday life. While the country has spent its nearly 20 years of devolution building up many of the political institutions that underpin a modern nation, Wales does not yet have a well-developed public sphere. The result is that the Welsh public are not only voting under a misapprehension of what the assembly and government are responsible for, but there is also a lack of public scrutiny.

The problem has been mostly blamed on the lack of political coverage by English language media in Wales. Major outlets like the Trinity Mirror-owned Media Wales, BBC Wales and ITV Cymru have all claimed they are working to remedy the situation, yet still the deficit remains.

The Assembly itself is keen to get to grips with the issue too: a taskforce – of which I was a member – recently recommended direct state investment in journalists who would report on Welsh politics. This may sound like a step into the unknown, but in truth it would not be a radical departure. Three Welsh-language websites that discuss public affairs – Golwg360, Barn magazine’s website and O’r Pedwar Gwynt – already receive grants from the Welsh government, via the Welsh Books Council. Another Welsh-language news website, BBC Cymru Fyw, is paid for by the licence fee.

Barn magazine, September 2007. CC BY-SA

The two most prominent of these sites, BBC Cymru Fyw and Golwg360, attract a small but committed audience of more than 57,000 unique weekly visitors between them. Around half of readers are under 40 years of age – a younger profile than that of Welsh-language print publications, television and radio.

Part of the success of these sites comes from reaching an audience that wouldn’t previously have made a conscious decision to seek out news stories about Wales, or in Welsh. Quite simply, because the content appears in their social media feeds, they are more likely to click on it than they ever would be to go out and buy a Welsh-language newspaper or magazine, or tune in to a Welsh-language TV or radio channel.

Though this audience also visits English-language outlets for news, readers visit Welsh-language sites in search of a certain kind of content that is not available in English. My own analysis of Golwg360’s statistics, as well as interviews with journalists from all four news sites, suggests that the most popular subjects are the Welsh language and arts, Welsh politics, education in Wales, the Welsh media and Welsh institutions.

Meanwhile, subjects that are already well covered by English-language news sites – such as British and international current affairs, or sport – tend to do poorly.

Resources

However, journalists working for Welsh sites other than the BBC’s Cymru Fyw did suggest that they did not feel they had sufficient resources to properly scrutinise Welsh institutions – so their ability to carry out in-depth, investigative journalism was severely limited. This problem was made worse by a demand for multimedia content that the journalists did not feel they had the time, resources or technological capability to deliver.

While the number of news platforms providing Welsh-language news is impressive, there may still be a lack of plurality. BBC Cymru Fyw and Golwg360 cover many of the same topics, for example. And the investigative journalism conducted by the numerous Welsh language print magazines does not always find an audience because it isn’t publicised online.

None of the journalists I interviewed felt that their dependence on the Welsh government or the licence fee for funding limited what they felt they could report. In fact, some felt that the commercial press was more likely to restrict what it covered because of commercial interests.

The funding of Welsh language journalism by the Welsh government has clearly been a success. It has created a lively public sphere of avid readers who take a great interest in news about the Assembly itself as well as other Welsh political institutions.

One would hope that funding English-language journalism in such a way will prove unnecessary – and that the commercial media in Wales will turn a corner and strengthen over the next few years. However, if it continues to weaken as it has over the past 20 years, the future of devolution could depend on a radical solution.

The Conversation

Ifan Morgan Jones does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.

Forest conservation approaches must recognise the rights of local people

Authors: Sarobidy Rakotonarivo, Postdoctoral Research Fellow, University of Stirling; Neal Hockley, Research Lecturer in Economics & Policy, Bangor University

Protected areas are being established without acknowledging the customary rights of local communities. Sarobidy Rakotonarivo

Until the 1980s, biodiversity conservation in the tropics focused on the “fines and fences” approach: creating protected areas from which local people were forcibly excluded. More recently, conservationists have embraced the notion of “win-win”: a dream world where people and nature thrive side by side.

But over and over, we have seen these illusions shattered and the need to navigate complicated trade-offs appears unavoidable.

To this day, protected areas are being established coercively. They exclude local communities without acknowledging their customary rights. Sadly, most conservation approaches are characterised by a model of “let’s conserve first, and then compensate later if we can find the funding”.

A new conservation model, Reducing Emissions from Deforestation and forest Degradation (REDD+) is an example of this. Finalised at the Paris climate conference in 2015, it seemed to offer something for everyone: supplying global ecosystem services – such as capturing and storing carbon dioxide and biodiversity conservation – while improving the lives of local communities.

Unfortunately, REDD+ is often built on protected area regimes that exclude local people. For example in Kenya, REDD+ led to the forceful eviction of forest dependent people and exacerbated inequality in access to land. The approach is underpinned by laws (often a legacy of the colonial era) that fail to recognise local people’s traditional claims to the forest. In doing so, REDD+ fails to provide compensation to the people it most affects and risks perpetuating the illusion of win-win solutions in conservation.

REDD+ is just one way in which forest conservation can disadvantage local people. In our research we set out to estimate the costs that local people will incur as a result of a REDD+ pilot project in Eastern Madagascar: the Corridor Ankeniheny-Zahamena.

Our aim was to see whether we could robustly estimate these costs in advance, so that adequate compensation could be provided using the funds generated by REDD+. Our research found that costs were very significant, but also hard to estimate in advance. Instead, we suggest that a more appropriate approach might be to recognise local people’s customary tenure.

Social costs of protected areas

Madagascar, considered one of the top global biodiversity hotspots, recently more than tripled the island’s protected area network, from 1.7 million hectares to 6 million hectares. This covers 10% of the country’s total land area.

Although the state has claimed ownership of these lands since colonial times, they are often the customary lands of local communities whose livelihoods are deeply entwined with forest use. The clearance of forests for cultivation has traditionally provided access to fertile soils for millions of small farmers in the tropics. Conservation restrictions obviously affect them negatively.

Swidden agriculture in the eastern rainforests of Madagascar. Sarobidy Rakotonarivo

Conservationists need to assess the costs of conservation before they start. This could help to design adequate compensation schemes and alternative policy options.

We set out to estimate the local welfare costs of conservation in the eastern rainforests of Madagascar using innovative multi-disciplinary methods which included qualitative as well as quantitative data. We asked local people to trade off access to forests for swidden agriculture (land cleared for cultivation by slashing and burning vegetation) with compensation schemes such as cash payments or support for improved rice farming.

Choice experiment surveys with local households in Madagascar. Sarobidy Rakotonarivo

We selected households that differed in their past experience of forest protection from two sites in the eastern rainforests of Madagascar.

The findings

We found that households have different views about the social costs of conservation.

When households had more experience of conservation restrictions, neither large cash payments nor support for improved rice farming were seen as enough compensation.

Less experienced households, on the other hand, had strong aspirations to secure forest tenure. Competition for new forest lands is becoming increasingly fierce, and government protection, despite undermining traditional tenure systems, is weakly enforced. These households therefore believed that legal forest tenure would be better, since it would enable them to establish claims over forest lands.

Unfortunately, knowing what would constitute “fair” compensation is extremely complex.

Firstly, local people have very different appraisals of the social costs of conservation. That makes it difficult to accurately estimate the potential costs of an intervention.

It’s also hard to evaluate how cash or agricultural projects will stimulate development. This makes it challenging to estimate how much, or what type of compensation should be given.

These challenges are compounded by the high transaction costs of identifying those eligible, as well as the lack of political power of communities to demand compensation.

The solution

Conservation approaches, particularly fair compensation for restrictions that are imposed coercively, need a major rethink.

One solution could be to formally recognise local people’s claims to the forest and then negotiate renewable conservation agreements with them. This is an approach already used successfully in many Western countries. In the US for example, conservation organisations negotiate “easements” with landowners, to protect wildlife. Agreements like this ensure that local people’s participation is genuinely voluntary and that compensation payments are sufficient.

Our research shows that there is strong demand from local people for secure local forest tenure. There is also evidence that granting it may better protect forest resources: without customary tenure, local people are likely to clear forests faster than they would if they were given secure rights.

We therefore argue that securing local tenure may be an essential part of social safeguards for conservation models like REDD+. It could also have the added benefit of helping to reduce poverty.

The social costs of forest conservation have been generally under-appreciated and advocacy for nature conservation reveals a lack of awareness of the high price that local people have to pay. As local forest dwellers have the greatest impact on resources and also the most to lose from non-sustainable uses of these resources, a radical change in current practices is needed.

The Conversation

Sarobidy Rakotonarivo received funding from the European Commission through the forest-for-nature-and-society (fonaso.eu) joint doctoral programme, and the Ecosystem Services for Poverty Alleviation (ESPA) programme (p4ges project: NE/K010220/1) funded by the Department for International Development (DFID), the Economic and Social Research Council (ESRC) and the Natural Environment Research Council (NERC).

Neal Hockley received funding for this work from the Ecosystem Services for Poverty Alleviation program (ESPA), funded by the UK Department for International Development, the Natural Environment Research Council and the Economic and Social Research Council.

Want to develop 'grit'? Take up surfing

Author: Rhi Willmot, PhD Researcher in Behavioural and Positive Psychology, Bangor University

Rhi Willmot, Author provided

My friend, Joe Weghofer, is a keen surfer, so when he was told he’d never walk again, following a 20ft spine-shattering fall, it was just about the worst news he could have received. Yet, a month later, Joe managed to stand. A further month, and he was walking. Several years on, he is back in the water, a board beneath his feet. Joe has what people in the field of positive psychology call “grit”, and I believe surfing helped him develop this trait.

Grit describes the ability to persevere with long-term goals, sustaining interest and energy over months or years. For Joe, this meant struggling through arduous physiotherapy exercises and remaining engaged and hopeful throughout his recovery.

Research suggests that gritty people are more likely to succeed in a range of challenging situations. Grittier high school students are more likely to graduate. Grittier novice teachers are more likely to remain in the profession and gritty military cadets are more likely to make it through intense mental and physical training. The secret to this success is found in the ability to keep going when things get tough. Gritty people don’t give up and they don’t get bored.

Joe shortly after his accident. Rhi Willmot, Author provided

Research also suggests that grit can be learned. Certain conditions can foster grit, allowing grit developed in one domain to transfer to other, more challenging, situations. Surfing is a good example of how grit can be gently cultivated, strengthened and then honed. So although getting back in the water itself was important to Joe, his previous surfing experience may well have developed his ability to persevere long before he became injured. Here’s how:

Effort

Gritty people have a strong appreciation of the connection between hard work and reward. In contrast to simply running onto a hockey pitch, or diving into a pool, surfing is unique in that you have to battle through the white water at the shoreline before you can even begin to enjoy the feeling of sliding down a glassy, green wave. This is difficult, but the adrenaline rush of riding a wave is worth the cost of paddling out.

The theory of learned industriousness suggests that pairing effort and reward doesn’t just reinforce behaviour but also makes the very sensation of effort rewarding in itself. Repeated cycles of paddling out and surfing in are particularly effective at developing an association between intense effort and potent reward. This is especially relevant given that grit is described as a combination of effort and enjoyment. Gritty people don’t just slave away, they eagerly chase difficult goals in a ferocious pursuit of success.

Passion

Surfers’ passion for their sport is well known – it may even be described as an addiction. One of the properties that makes surfing so addictive is its unpredictability.

The ocean is a constantly changing environment, making it difficult to know exactly when and where the next wave is about to break. This means watery reinforcement is delivered on something called a variable-interval schedule; any number of quality waves might arrive at any point in a given time frame. Importantly, we receive a stronger release of the motivating neurotransmitter dopamine when a reward is unexpected. So when a surfer is surprised by the next perfect wave, dopamine-sensitive pleasure centres in the brain become all the more stimulated.

Behaviour that is trained under a variable-interval schedule is much more likely to be maintained than behaviour that is rewarded more consistently, making surfers better able to persevere when the waves take a long time to materialise.
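
To make the contrast concrete, here is a small illustrative simulation, with arbitrary parameters, comparing a fixed-interval schedule, where rewards arrive at perfectly regular times, with a variable-interval one, where the wait for the next wave is unpredictable:

```python
import random

random.seed(1)

def fixed_interval(mean_gap=5.0, n=10):
    """Rewards arrive at perfectly regular intervals (minutes, say)."""
    return [round(mean_gap * (i + 1), 1) for i in range(n)]

def variable_interval(mean_gap=5.0, n=10):
    """Rewards arrive after random gaps with the same average length."""
    t, times = 0.0, []
    for _ in range(n):
        t += random.expovariate(1.0 / mean_gap)  # unpredictable wait
        times.append(round(t, 1))
    return times

print("Fixed:   ", fixed_interval())     # evenly spaced, fully predictable
print("Variable:", variable_interval())  # same average rate, but surprising
```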

Joe, enjoying the activity that made him who he is. Rhi Willmot, Author provided

Purpose

The final grit-honing element of surfing is its ability to provide a sense of purpose. Feeling purposeful – a state psychologists describe as a belief that life is meaningful and worthwhile – involves doing things that take us closer to our important goals. It usually means acting in line with our values and being part of something bigger than ourselves. This could refer to religious practice, connecting to nature or simply helping other people.

Research suggests that as levels of grit increase, so does a sense of purpose. But this doesn’t mean that gritty people are saints – just that they have an awareness of how their activities connect to a cause beyond themselves, as well as their own deeply held values.

The physical and mental challenge offered by surfing provides a sense of personal fulfilment. It’s always possible to paddle faster, ride for longer or try the next manoeuvre, but spending time waiting for the next wave also provides a valuable opportunity to reflect.

The ocean is a powerful beast. Serenity can quickly be replaced with chaos when an indomitable set of waves arrives: five-foot-high walls of water, stacked one after the other. Witnessing the power of nature in this way can certainly deliver a sense of perspective, helping you to feel connected to something meaningful and awe-inspiring.

Of course, surfing isn’t the only way to build grit. The important lesson here is that developing our passion and identifying our purpose can help us persevere with the activities we love. This provides a valuable reservoir of strength, to be used when we need it the most. And while coming back from such a serious injury requires more than just grit, Joe’s persistent effort and unwillingness to give in have undoubtedly helped him to once again enjoy the sport that made him who he is.

The Conversation

Rhi Willmot does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Artists and architects think differently to everyone else – you only have to hear them talk

Author: Thora Tenbrink, Reader in Cognitive Linguistics, Bangor University

How often have you thought that somebody talks just like an accountant, or a lawyer, or a teacher? In the case of artists, this goes a long way back. Artists have long been seen as unusual – people with a different way of perceiving reality. Famously, the French architect Le Corbusier argued in 1946 that painters, sculptors and architects are equipped with a “feeling of space” in a very fundamental sense.

Artists have to think about reality in different ways to other people every day in their jobs. Painters have to create an imaginary 3D image on a 2D plane, performing a certain magic. Sculptors turn a block of marble into something almost living. Architects can design buildings that would seem impossible.

Think of Edgar Mueller’s famous street art. Or Michelangelo’s Pietà. Or Frank Lloyd Wright’s Fallingwater, which seems to defy physics. All of these people are (or were) experts in rearranging the spatial relationships in their environment, each in their own way. This is a necessary skill for anyone who takes up these crafts as a profession. How could this not affect the ways in which they think – and talk – about space?

Our recent study, a collaboration of UCL and Bangor University, set out to test this. Do architects, painters, and sculptors conceive of spaces in different ways from other people and from each other? The answer is: yes, they do – in a range of quite subtle ways.

Painters, sculptors, architects (all “spatial” professionals with at least eight years of experience) and a group of people in unrelated (“non-spatial”) professions took part in the study. There were 16 people in each professional group, with a similar age range and an equal gender distribution. They were shown a Google Street View image, a painting of St Peter’s Basilica in the Vatican and a computer-generated surreal scene.

Michelangelo’s Pietà in St Peter’s Basilica in the Vatican.Stanislav Traykov via Wikimedia Commons

For each picture, they were given a few tasks that made them think about the spatial scene in certain ways: they were asked to describe the environment, explain how they would explore the space shown and suggest changes to it in the image. This picture-based task was chosen because of its simplicity – it doesn’t take an expert to describe a picture or to imagine exploring or changing it.

From the answers, we categorised elements of the responses for qualitative and quantitative analysis, using a new technique called Cognitive Discourse Analysis, which aims to highlight the aspects of thought that underlie linguistic choices beyond what speakers are consciously aware of. We also made a short film about the research.

Telltale language

Our analysis identified consistent and revealing patterns in the language used to talk about the pictures. Painters, sculptors and architects all gave more elaborate, detailed descriptions than the others.

Painters were more likely to describe the depicted space as a 2D image and said things like: “It’s obvious the image wants you to follow the boat off onto the horizon.” They tended to shift between describing the scene as a 3D space and as a 2D image. By contrast, architects were more likely to describe barriers and boundaries of the space – as in: “There are voids within walls which become spaces in their own right.” Sculptors’ responses were between the two – they were somewhat like architects except for one measure: with respect to the bounded descriptions of space, they appeared more like painters.

Painters and architects also differed in how they described the furthest point of the space, as painters called it the “back” and architects called it the “end”. The “non-spatial” group rarely used either one of these terms – instead they referred to the same location by using other common spatial terms such as “centre” or “bottom” or “there”. All of this had nothing to do with expert language or register – obviously people can talk in detail about their profession. But our study reflected the way they think about spatial relationships in a task that did not require their expertise.

The “non-spatial” group did not experience any problems with the task – but their language seemed less systematic and less rich than that of the three spatial professional groups.

Thinking and talking like a professional

Our career may well change the way we think, in somewhat unexpected ways. In the late 1930s, American linguist Benjamin Lee Whorf suggested that the language we speak affects the way we think – and this triggered extensive research into how culture changes cognition. Our study goes a step further – it shows that even within the same culture, people of different professions differ in how they appreciate the world.

Frank Lloyd Wright’s Fallingwater in Mill Run, Pennsylvania.Iam architect via Wikipedia Commons, CC BY-SA

The findings also raise the possibility that people who are already inclined to see the world as a 2D image, or who focus on the borders of a space, may be more inclined to pursue painting or architecture. This also makes sense – perhaps we develop our thinking in a particular way, for whatever reasons, and this paves our way towards a particular profession. Perhaps architects, painters and sculptors already talked in their own fashion about spatial relationships before they started their careers.

This remains to be looked at in detail. But it’s clear from our study that artists and architects have a heightened awareness of their surroundings which is reflected in the way they talk about spatial environments. So next time you are at dinner with an architect, painter, or sculptor, show them a photograph of a landscape and get them to describe it – and see if you can spot the telltale signs of their profession slipping out.

The Conversation

Thora Tenbrink's research was carried out with Claudia Cialone and Hugo Spiers.

How we're using ancient DNA to solve the mystery of the missing last great auk skins

Author: Jessica Emma Thomas, PhD Researcher, Bangor University

The great auk by John James Audubon.University of Pittsburgh/Wikimedia

On a small island off the coast of Iceland, 173 years ago, a sequence of tragic events took place that would lead to the loss of an iconic bird: the great auk.

The great auk, Pinguinus impennis, was a large, black and white bird that was found in huge numbers across the North Atlantic Ocean. It was often mistaken to be a member of the penguin family, but its closest living relative is actually the razorbill, and it is related to puffins, guillemots and murres.

Being flightless, the great auk was particularly vulnerable to hunting. Humans killed the birds in their thousands for meat, oil and feathers. By the start of the 19th century, the north-west Atlantic populations had been decimated, and the last few remaining breeding birds were to be found on the islands off the south-west coast of Iceland. But these faced another threat: due to their scarcity, the great auk had become a desirable item for both private and institutional collections.

The great auk’s breeding range across the North Atlantic. Maps were created using spatial data from BirdLife International/IUCN with National Geographic basemap in ArcGIS. Author provided

The fateful voyage of 1844

Between 1830 and 1841 several trips were taken to Iceland’s Eldey Island, to catch, kill, and sell the birds for exhibitions. Following a period of no reported captures, great auk dealer Carl Siemsen commissioned an expedition to Eldey to search for any remaining birds.

Between June 2 and 5, 1844, 14 men set sail in an eight-oared boat for the island. Three braved the dangerous landing and spotted two great auks among the smaller birds that also bred there. A chase began, but the birds ran at a slow pace, their small wings extended, expressing no call of alarm. They were caught with relative ease and killed; their egg, broken in the hunt, was discarded.

But the birds – a male and a female – were never to reach Siemsen. The expedition leader sold them to a man named Christian Hansen, who then sold them on to Herr Möller, an apothecary in Reykjavik. Möller skinned the birds and sent them, and their preserved body parts, to Denmark.

The last male great auk killed on Eldey Island, June 1844.Thierry Hubin/Royal Belgian Institute of Natural Sciences

The internal organs of these two birds now reside in the Natural History Museum of Denmark. The skins, however, were lost track of, and – despite considerable effort by numerous scholars – their location has remained unknown.

Missing skins

In 1999, great auk expert Errol Fuller proposed a list of candidate specimens, the origins of which were not known, which he believed could be from the last pair of great auks. But how to find which of these were the true skins? For this we turned to the field of ancient DNA (aDNA).

In the last 30 years, aDNA technology has progressed greatly, and has been used to address a wide range of ecological and evolutionary questions, providing insight into the pasts of countless species, including our own. Museum specimens play a key role in aDNA research and have been used to solve several issues of unidentified or misidentified specimens – for example Greenlandic Norse fur, rare kiwi specimens, Auckland Island shags, and mislabelled penguin samples.

We took things a step further, using aDNA techniques and a detective-like approach to try and resolve the mystery of what happened to the skins of the last two great auks.

Ancient DNA

We sampled the organs from the last birds, along with candidate specimens from Brussels, Belgium; Oldenburg and Kiel, in Germany; and Los Angeles. We then extracted and sequenced the mitochondrial genomes from each, and compared the sequences from the candidate skins to those that came from the organs of the last pair.
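
As a purely illustrative aside, the matching logic can be sketched in a few lines of code. This is not the study’s actual analysis pipeline – real ancient-DNA work involves whole mitochondrial genomes, damage patterns and careful authentication – and the sequences and candidate labels below are invented.

```python
# Toy illustration of matching candidate great auk skins to the preserved
# organs by comparing aligned DNA sequences. The short sequences and
# candidate names here are invented purely to show the idea.

def count_differences(seq_a: str, seq_b: str) -> int:
    """Number of positions at which two aligned sequences differ."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to the same length")
    return sum(1 for a, b in zip(seq_a, seq_b) if a != b)

# Hypothetical reference sequence from one bird's preserved organs.
organ_reference = "ACGTACGTTAGCCGTA"

# Hypothetical sequences recovered from candidate skins.
candidate_skins = {
    "candidate A": "ACGTACGTTAGCCGTA",  # identical, so a match
    "candidate B": "ACGTACGCTAGCCGTA",  # one difference, so not a match
}

for name, seq in candidate_skins.items():
    diffs = count_differences(organ_reference, seq)
    verdict = "match" if diffs == 0 else f"no match ({diffs} difference(s))"
    print(f"{name}: {verdict}")
```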

The hearts of the last two documented great auks. The female’s was sampled for our study.Natural History Museum of Denmark, Author provided

The results showed that the skin held in the museum in Brussels was a perfect match for the oesophagus from the male bird. Unfortunately, there was no match between the other candidate skins and the female’s organs.

The specimens from Brussels and Los Angeles were thought to be the most likely candidates due to their history: both birds were in the hands of a well-known great auk dealer, Israel of Copenhagen, in 1845. As the bird in Brussels was a match, we thought it likely that the one in Los Angeles would also be a match for the female’s organs, so it was surprising when it wasn’t. However, our research led us to speculate that a mix-up which occurred following the death in 1965 of Captain Vivian Hewitt – who owned four birds, now in Cardiff, Birmingham, Los Angeles and Cincinnati – was not resolved as once thought.

The identities of the birds now in Birmingham and Cardiff are known, after photographs were used to identify them – but those in Los Angeles and Cincinnati have been harder to determine. It was thought that their identities could be established from annotated photographs taken in 1871, but we speculate that they were not correctly identified, and that the bird in Cincinnati may be the original bird from Israel of Copenhagen. If this is the case, then it could explain why the Los Angeles bird fails to match either of the last great auk organs held in Copenhagen.

We now have permission to test the great auk specimen in the Cincinnati Museum of Natural History and Science, and hopefully solve this final piece of a centuries-old puzzle. There is no guarantee that this bird will be a match either, but if it is, we will finally know what happened to the last two specimens of the extinct great auk.

The Conversation

Jessica Thomas is a double-degree PhD student enrolled at Bangor University and the University of Copenhagen. She receives funding from NERC PhD Studentship (NE/L501694/1), the Genetics Society-Heredity Fieldwork Grant, and European Society for Evolutionary Biology–Godfrey Hewitt Mobility Award.

Chefs and home cooks are rolling the dice on food safety

Author: Paul Cross, Senior Lecturer in the Environment, Bangor University; Dan Rigby, Professor, Environmental Economics, University of Manchester

stock_photo_world/Shutterstock

Encouraging anyone to honestly answer an embarrassing question is no easy task – not least when it might affect their job.

For our new research project, we wanted to know whether chefs in a range of restaurants and eateries, from fast food venues and local cafes to famous city bistros and award-winning restaurants, were undertaking “unsafe” food practices. As some of these – such as returning to the kitchen within 48 hours of a bout of diarrhoea or vomiting – contravene Food Standards Agency guidelines, it was unlikely that all respondents would answer honestly if asked about them directly.

This was not just a project to catch specific food professionals in a lie; we wanted to find out the extent to which both the public and chefs handled food in unsafe ways. With up to 500,000 cases of food-borne diseases reported every year in the UK, costing approximately £1.5 billion in resource and welfare losses, the need to identify risky food handling is urgent.

The Food Standards Agency (FSA) is acutely aware of the problem and has instigated initiatives such as the Food Hygiene Rating Scheme (FHRS) that involves inspections and punishments following the identification of poor food handling behaviours in restaurants and eateries. However, such initiatives do not always manage to change the behaviour of the food handlers – and inadequate food handling practices frequently go unseen or unreported.

Dicing with destiny

Yet still, we were faced with the issue of getting honest answers to our research questions. So we rolled a dice, or to be precise, two of them. As part of our research, 132 chefs and 926 members of the public were asked to agree or disagree with the following four statements:

I always wash my hands immediately after handling raw meat, poultry or fish;

I have worked in a kitchen within 48 hours of suffering from diarrhoea and/or vomiting;

I have worked in a kitchen where meat that is “on the turn” has been served;

I have served chicken at a barbecue when I wasn’t totally sure that it was fully cooked.

Here, the dice rolling was part of a randomised response technique (RRT): interviewees secretly rolled two dice and gave “forced” responses if particular values resulted. If they rolled a 2, 3 or 4, they had to answer yes. If they rolled 11 or 12, they had to answer no. All other values required an honest answer.

Denying the first statement, or admitting to any of the other three, would be embarrassing for members of the public, and could possibly lead to dismissal for professional caterers. But because the interviewer could not know whether a “yes” had been forced by the dice, respondents were more willing to report a true, unforced “yes”.

We were unable to distinguish between individuals who had given a forced response and those who had answered truthfully. But we knew statistically that 75% of the dice rolls would lead to an honest response, and so were able to determine the proportion of the public and chefs who had admitted to performing one of the risky behaviours. We also looked at the results in terms of factors such as price, awards and FHRS ratings to find out how they were associated with the practices.
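
With two fair dice, totals of 2, 3 or 4 force a “yes” with probability 6/36, totals of 11 or 12 force a “no” with probability 3/36, and the remaining 27/36 (75%) of rolls demand honesty. That is enough to back out the true prevalence of a behaviour from the overall proportion of “yes” answers. The sketch below shows that arithmetic; it is a minimal illustration of the randomised response technique as described here, not the study’s actual analysis code, and the 40% observed “yes” rate is an invented example.

```python
# Minimal sketch of the randomised response arithmetic described above.
# With two fair dice: totals of 2-4 force a "yes" (probability 6/36),
# totals of 11-12 force a "no" (3/36), and the remaining 27/36 (75%)
# of rolls require an honest answer.

P_FORCED_YES = 6 / 36
P_FORCED_NO = 3 / 36
P_HONEST = 1 - P_FORCED_YES - P_FORCED_NO  # 0.75

def estimate_true_yes_rate(observed_yes_proportion: float) -> float:
    """Back out the honest 'yes' rate from the overall 'yes' proportion.

    observed = P(forced yes) + P(honest) * true_rate
    """
    return (observed_yes_proportion - P_FORCED_YES) / P_HONEST

if __name__ == "__main__":
    observed = 0.40  # invented example: 40% of all responses were "yes"
    print(f"Estimated true prevalence: {estimate_true_yes_rate(observed):.1%}")
```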

Outdoor cooking.Normana Karia/Shutterstock

Kitchen challenge

What we found from all of the responses was that it can be quite challenging for consumers to find an eatery where such unsafe practices are absent. Chefs working in award-winning kitchens were more likely (almost one in three) to have returned to work within 48 hours of suffering from diarrhoea and vomiting. This is a serious cause for concern, as returning to work in a kitchen too soon after illness is a proven way to spread infection and disease.

Not washing hands was also more likely in upmarket establishments – despite over one-third of the public agreeing that the more expensive a meal was, the safer they would expect it to be.

Chefs working in restaurants with a good Food Hygiene Rating Scheme score – a 3, 4 or 5 on a scale of one to five in England and Wales, or a “pass” in Scotland – were just as likely to have committed the risky practices, or to have worked with others who had.

We also found a high proportion of chefs across the board had served meat which was “on the turn”. This is equally worrying, as it is part of a long-established cost-cutting practice that often involves masking the flavour of meat that is going off by adding a sauce.

Meanwhile at home, 20% of the public admitted to serving meat on the turn, 13% had served barbecued chicken when unsure it was sufficiently cooked, and 14% admitted to not washing their hands after touching raw meat or fish.

That is not to say that all chefs – or members of the public – practise unsafe food handling; indeed, the majority did not admit to the poor food practices. But the number of professional kitchens where chefs admit to risky behaviour is still a cause for concern, and avoiding them is not easy. People opting for a “fine-dining” establishment which holds awards, demands high prices and has a good FHRS score might not be as protected, or as reassured, as they think.

The Conversation

Paul Cross receives funding from Natural Environment Research Council. The Enigma project is funded by the major UK Research Councils and this study was a collaboration between Bangor, Manchester and Liverpool Universities.

Dan Rigby, as part of the Enigma project (www.enigmaproject.org.uk), received funding for this work from the Medical Research Council, Natural Environment Research Council, Economic and Social Research Council, Biotechnology and Biosciences Research Council and the Food Standards Agency, through the Environmental & Social Ecology of Human Infectious Diseases Initiative (ESEI).

Brexit's impact on farming policy will take Britain back to the 1920s – but that's not necessarily a bad thing

Author: David Arnott, PhD Researcher, Bangor University

Howard Pimborough/Shutterstock

Not much regarding Brexit is clear. But one thing we do know is that the UK’s decision to leave the EU has triggered proposals to implement the most significant changes to agricultural policy since it joined the European Common Agricultural Policy (CAP) in 1973.

The CAP was designed to provide a stable, sustainably produced supply of safe, affordable food. It also ensured a decent standard of living for farmers and agricultural workers, providing support through subsidies.

Now, the UK’s main political parties agree direct subsidy provision has to be reviewed and fundamentally changed. The current system favours large landowners over the small and is seen by many as encouraging inefficiency in farming practices. At present, support comes in the form of a two-pillar system, one providing direct support payments, and the other giving payments which reward the farmer for conducting environmental practices through participation in agri-environment schemes.

In its election manifesto, the Conservative Party agreed to maintain all subsidy support until 2022. After that, it will move to a one-pillar system, providing payment for public goods, woodland regeneration, carbon sequestration and greenhouse gas reduction, among other things. It would shift towards a free market economy where payments would no longer directly support farming businesses without public good provision.

Speaking to Farming Today, environment secretary Michael Gove, said: “There’s a huge opportunity to design a better system for supporting farmers, but first I need to listen to environmentalists about how we can use that money to better protect the environment … and also to farmers to learn how to make the regime work better.”

Labour Party policy meanwhile aims to reconfigure funds for farming to support smaller traders, local economies, community benefits and sustainable practices. Both major parties through their manifestos seem to agree in principle that change must – and will – come, albeit for differing reasons.

When combined with exit from the single market and the customs union, these policies will create an agricultural playing field pretty similar to that of 100 years ago.

1921-1931

During World War I and the post-war reconstruction, the agriculture and food ministries controlled their respective industries. This culminated in the Agriculture Act (1920) which provided support for farmers in the form of guaranteed prices for agricultural products and minimum wages for farm labourers. But within six months of its implementation, falling prices and a struggling economy forced the repeal of the act, which returned the country to the laissez-faire economy that had existed before 1914, when there was a free market economy with little or no government involvement.

At this time, Labour and the Conservatives were united in their anti-subsidy approach, strongly believing agricultural issues should be solved in the open market.

Green and pleasant land.Jarek Kilian/Shutterstock

These sentiments – which eventually led to a free market period lasting from 1921-1931 – are reflected in the policies of today. The 1920s Labour Party opposed state support to farmers while land was privately owned – today, Labour wants to move subsidies away from wealthy landowners.

In the 1930s the Conservatives stated: “It is no longer national policy to buy all over the world in the cheapest markets”. Their ambition today is to: “make a resounding success of our world-leading food and farming industry; producing more, selling more, and exporting more of our great British food”.

However, there were some significant downsides when the Agriculture Act was repealed: agricultural wages fell by as much as 40%. Productivity fell too, rural poverty increased, small farms failed and land was abandoned through urban migration. Some described the countryside as a desolate waste.

Future rules

Not all see small-scale farm failure as bad, however. In the 1960s, agricultural economist Professor John F. Nash described farmer support as: “providing small or average farmers with what is considered a reasonable income, encouraging them to remain small or average farmers. They will remain in farms that would otherwise be unprofitable or use systems which otherwise might be too costly.” He argued that there were too many small farms and they needed to increase their output to survive without subsidies.

Though uncertainty remains around the precise nature of future policy, it will definitely affect the shape of agriculture in the UK. Small, unproductive farms may struggle to survive and tenancies may not be renewed. A reduction in land prices could see small farms bought out by larger enterprises.

Cutting subsidies could be the best thing for Britain environmentally: it could encourage more farmers to pursue sustainable practices. But in 1986, when New Zealand removed farming subsidies, it had the effect of changing farm structure from small to large-scale commercial units. This model, while viewed as a success in productivity and innovation terms, had a devastating effect on the environment.

But, if implemented, the Conservative manifesto pledge would work very differently to the New Zealand example, providing alternatives to increased production through support to farmers for the provision of environmental services. Nothing is definite. Uncertainty ensues – and farmers can only wait to see what happens and hope that a step into the past can make for a brighter future.

The Conversation

David Arnott is a PhD research student at Bangor University currently working on a Welsh European Funding Office Flexible Integrated Energy Systems (FLEXIS) project. The aim of this part of the project is to evaluate the impact of policy change on farmer decision-making and carbon management. Farmers of all types and farm sizes are currently being recruited to assist in the research, which will be conducted over the next two years. Participation will involve completion of a short survey and, if interested, involvement in a series of face-to-face interviews conducted on a six-monthly basis. If you are interested in participating in this topical, ground-breaking research project, or would like more information, please contact d.arnott@bangor.ac.uk or @DavidArnott10 on Twitter.

Tech firms want to detect your emotions and expressions, but people don't like it

Author: Andrew McStay, Reader in Advertising and Digital Media, Bangor University

Sergey Nivens

As revealed in a patent filing, Facebook is interested in using webcams and smartphone cameras to read our emotions, and track expressions and reactions. The idea is that by understanding emotional behaviour, Facebook can show us more of what we react positively to in our Facebook news feeds and less of what we do not – whether that’s friends’ holiday photos, or advertisements.

This might appear innocuous, but consider some of the detail. In addition to smiles, joy, amazement, surprise, humour and excitement, the patent also lists negative emotions. Possibly being read for signs of disappointment, confusion, indifference, boredom, anger, pain and depression is neither innocent, nor fun.

In fact, Facebook is no stranger to using data about emotions. Some readers might remember the furore when Facebook secretly tweaked users’ news feeds to understand “emotional contagion”. This meant that when users logged into their Facebook pages, some were shown content in their news feeds with a greater number of positive words and others were shown content deemed sadder than average. This changed the emotional behaviour of those users that were “infected”.

Given that Facebook has around two billion users, this patent to read emotions via cameras is important. But there is a bigger story, which is that the largest technology companies have been buying, researching and developing these applications for some time.

Watching you feel

For example, Apple bought Emotient, a firm that pioneered facial coding software to read emotions, in 2016. Microsoft offers its own “cognitive services”, and IBM’s Watson is also a key player in industrial efforts to read emotions. It’s possible that Amazon’s Alexa voice-activated assistant could soon be listening for signs of emotions, too.

This is not the end though: interest in emotions is not just about screens and worn devices, but also our environments. Consider retail, where increasingly the goal is to understand who we are and what we think, feel and do. Somewhat reminiscent of Steven Spielberg’s 2002 film Minority Report, eyeQ Go, for example, measures facial emotional responses as people look at goods at shelf-level.

What these and other examples show is that we are witnessing a rise of interest in our emotional lives, encompassing any situation where it might be useful for a machine to know how a person feels. Some less obvious examples include emotion-reactive sex toys, the use of video cameras by lawyers to identify emotions in witness testimony, and in-car cameras and emotion analysis to prevent accidents (and presumably to lower insurance rates).

How long till machines can tell what we can?jura-photography

Users are not happy

In a report assessing the rise of “emotion AI” and what I term “empathic media”, I point out that this is not innately bad. There are already games that use emotion-based biofeedback, which take advantage of eye-trackers, facial coding and wearable heart rate sensors. These are a lot of fun, so the issue is not the technology itself but how it is used. Does it enhance, serve or exploit? After all, the scope to make emotions and intimate human life machine-readable has to be treated cautiously.

The report covers views from industry, policymakers, lawyers, regulators and NGOs, but it’s useful to consider what ordinary people say. I conducted a survey of 2,000 people and asked questions about emotion detection in social media, digital advertising outside the home, gaming, interactive movies through tablets and phones, and using voice and emotion analysis through smartphones.

I found that more than half (50.6%) of UK citizens are “not OK” with any form of emotion capture technology, while just under a third (30.6%) feel “OK” with it, as long as the emotion-sensitive application does not identify the individual. A mere 8.2% are “OK” with having data about their emotions connected with personally identifiable information, while 10.4% “don’t know”. That such a small proportion are happy for emotion-recognition data to be connected with personally identifying information about them is pretty significant considering what Facebook is proposing.

But do the young care? I found that younger people are twice as likely to be “OK” with emotion detection as the oldest people. But we should not take this to mean they are “OK” with having data about emotions linked with personally identifiable information. Only 13.8% of 18- to 24-year-olds accept this. Younger people are open to new forms of media experiences, but they want meaningful control over the process. Facebook and others, take note.

New frontiers, new regulation?

So what should be done about these types of technologies? UK and European law is being strengthened, especially given the introduction of the General Data Protection Regulation. While this has little to say about emotions, there are strict codes on the use of personal data and information about the body (biometrics), especially when used to infer mental states (as Facebook have proposed to do).

This leaves us with a final problem: what if the data used to read emotions is not strictly personal? What if shop cameras pick out expressions in such a way as to detect emotion, but not identify a person? This is what retailers are proposing and, as it stands, there is nothing in the law to prevent them.

I suggest we need to tackle the following question: are citizens and the reputation of the industries involved best served by covert surveillance of emotions?

If the answer is no, then codes of practice need to be amended immediately. The ethics of emotion capture, and of rendering bodies passively machine-readable, are not contingent upon personal identification, but on something more important. Ultimately, this is a matter of human dignity, and of what kind of environment we want to live in.

There’s nothing definitively wrong with technology that interacts with emotions. The question is whether it can be shaped to serve, enhance and entertain, rather than exploit. And given that survey respondents of all ages are rightfully wary, it’s a question that the people should be involved in answering.

The Conversation

Andrew McStay receives funding from AHRC and ESRC.

The ATM at 50: how a hole in the wall changed the world

Author: Bernardo Batiz-Lazo, Professor of Business History and Bank Management, Bangor University

Back in the day...Lloyds Banking Group Archives & Museum,

Next time you withdraw money from a hole in the wall, consider singing a rendition of happy birthday. For on June 27, the Automated Teller Machine (or ATM) celebrates its half century. Fifty years ago, the first cash machine was put to work at the Enfield branch of Barclays Bank in London. Two days later, a Swedish device known as the Bankomat was in operation in Uppsala. And a couple of weeks after that, another one built by Chubb and Smith Industries was inaugurated in London by Westminster Bank (today part of RBS Group).

These events fired the starting gun for today’s self-service banking culture – long before the widespread acceptance of debit and credit cards. The success of the cash machine enabled people to make impromptu purchases, spend more money on weekend and evening leisure, and demand banking services when and where they wanted them. The infrastructure, systems and knowledge they spawned also enabled bankers to offer their customers point of sale terminals, and telephone and internet banking.

There was substantial media attention when these “robot cashiers” were launched. Banks promised their customers that the cash machine would liberate them from the shackles of business hours and banking at a single branch. But customers had to learn how to use – and remember – a PIN, perform a self-service transaction and trust a machine with their money.

People take these things for granted today, but when cash machines first appeared many had never before been in contact with advanced electronics.

And the system was far from perfect. Despite widespread demand, only bank customers considered to have “better credit” were offered the service. The early machines were also clunky, heavy (and dangerous) to move, insecure, unreliable, and seldom conveniently located.

Indeed, unlike today’s machines, the first ATMs could do only one thing: dispense a fixed amount of cash when activated by a paper token or bespoke plastic card issued to customers at retail branches during business hours. Once used, tokens would be stored by the machine so that branch staff could retrieve them and debit the appropriate accounts. The plastic cards, meanwhile, would have to be sent back to the customer by post. Needless to say, it took banks and technology companies years to agree common standards and finally deliver on their promise of 24/7 access to cash.

The globalisation effect

Estimates by RBR London concur with my research, suggesting that by 1970, there were still fewer than 1,500 of the machines around the world, concentrated in Europe, North America and Japan. But there were 40,000 by 1980 and a million by 2000.

A number of factors made this ATM explosion possible. First, sharing locations created more transaction volume at individual ATMs. This gave incentives for small and medium-sized financial institutions to invest in this technology. At one point, for instance, there were some 200 shared ATM networks in the US and 80 shared networks in Japan.

They also became more popular once banks digitised their records, allowing the machines to perform a host of other tasks, such as bank transfers, balance requests and bill payments. Over the last five decades, a huge number of people have made the shift away from the cash economy and into the banking system. Consequently, ATMs became a key way of avoiding congestion at branches.

ATM design began to accommodate people with visual and mobility disabilities, too. And in recent decades, many countries have allowed non-bank companies, known as Independent ATM Deployers (IADs), to operate machines. IADs were key to populating non-bank locations such as corner shops, petrol stations and casinos.

Indeed, while a large bank in the UK might own 4,000 devices and one in the US as many as 12,000, Cardtronics, the largest IAD, manages a fleet of 230,000 ATMs in 11 countries.

Ready cash? You can bank on it.Shutterstock

Bank to the future

The ATM has remained a relevant and convenient self-service channel for the last half century – and its history is one of invention and re-invention, evolution rather than revolution.

Self-service banking and ATMs continue to evolve. Instead of PIN authentication, some ATMs now use “tap and go” contactless payment technology using bank cards and mobile phones. Meanwhile, ATMs in Poland and Japan have used biometric recognition, which can identify a customer’s iris, fingerprint or voice, for some time, while banks in other countries are considering it.

So it’s a good time to consider what the history of cash dispensers can teach us. The ATM was not the result of a eureka moment of a single middle-aged man in a bath or garage, but from active collaboration between various groups of bankers and engineers to solve the significant challenges of a changing world. It took two decades for the ATM to mature and gain widespread, worldwide acceptance, but today there are 3.5m ATMs with another 500,000 expected by 2020.

Research I am currently undertaking suggests that ATMs may have reached saturation point in some Western countries. However, research by the ATM Industry Association suggests there is strong demand for them in China, India and the Middle East. In fact, while in the West people tend to use them for three self-service functions (cash withdrawal, balance enquiries, and purchasing mobile phone airtime), Chinese consumers regularly use them for as many as 100 different tasks.

Taken for granted?

Interestingly, people in most urban areas around the world tend to interact with the same five ATMs. But they shouldn’t be taken for granted. In many countries in Africa, Asia and South America, they offer services to millions of people otherwise excluded from the banking sector.

In most developed countries, meanwhile, the retail branch and the ATM are the only two channels over which financial institutions have 100% control. This is important when you need to verify the authenticity of your customer. Banks do not control the make and model of their customers’ smartphones, tablets or personal computers, which are vulnerable to hacking and fraud. While ATMs are targeted by thieves, mass cybernetic attacks on them have yet to materialise.

I am often asked whether the advent of a cashless, digital economy heralds the end of the ATM. My response is that while the world might do away with cash and call ATMs something else, the revolution of automated self-service banking that began 50 years ago is here to stay.

The Conversation

Bernardo Bátiz-Lazo has received funding to research ATM and payments history from the British Academy, Fundación de Estudios Financieros (Fundef-ITAM), Charles Babbage Institute and the Hagley Museum and Archives. He is also active in the ATM Industry Association, consults with KAL ATM Software and is a regular contributor to www.atmmarketplace.com.

Welsh schools: an approach to bilingualism that can help overcome division

Author: Peredur Webb-Davies, Senior Lecturer in Welsh Linguistics, Bangor University

Research has shown just how beneficial education in Welsh can be.National Assembly for Wales/Flickr, CC BY-SA

Being a Welsh-English bilingual isn’t easy. For one thing, you hear that encouraging others to learn your language is detrimental both to their education and wellbeing. For another, to speak a minority language such as Welsh you need to constantly make the effort to be exposed to it and maintain your bilingualism.

A row has recently arisen in the Carmarthenshire village of Llangennech over plans to turn an English language school into a Welsh school. Parents who objected to the change told Guardian reporters that they have been labelled “anti-Welsh bigots”, in an article headlined “Welsh-only teaching – a political tool that harms children?”.

Needless to say, those who have gone through Welsh language schooling were not happy with the report. And for good reason too: though parents may have their own concerns, research has proven the benefits of bilingualism. The fear, heavily implied in the article, that sitting in a Welsh classroom somehow hermetically insulates a child from the English language is simply unfounded.

Schools in Wales need to deal with – and provide education for – children from two main backgrounds: those who speak Welsh at home and those who do not. The former benefit from Welsh-medium education in that they are able to broaden and improve their Welsh ability, as well as learning to read and write in it, while the latter need to be taught Welsh from the ground up. In most schools, a classroom will have a mixture of children from different backgrounds, although children will get different levels of exposure to Welsh depending on the school. Welsh is not treated as a foreign language like French or German, because children at schools in Wales will inevitably have some exposure to Welsh culturally and socially.

This means that teachers in nearly all schools in Wales have two different audiences: children who speak English as a first language, and children who speak Welsh as a first language.

But rather than this being a problem, teachers use different approaches in the classroom to deal with it. Few lessons are in just Welsh or English – the majority use a strategic bilingual approach such as code-switching (alternating between both languages as they teach), targeted translation (where specific terms or passages are translated as they are taught), or translanguaging (blending two languages together to help students learn a topic’s terminology in both).

One cannot simply divide Wales’s schools into Welsh-speaking or English-speaking. The former are bilingual schools – as well as ensuring that Welsh survives and flourishes, the aim of schools in Wales is to produce children who are bilingual when they finish their education.

It’s an obvious statement to make, but the more Welsh a child hears at home and school, the more proficient they become. It doesn’t have a negative effect on the rest of their education.

Language death

Like all languages, Welsh is evolving as time goes on, and schools are vital for not only nurturing speakers’ abilities, but for helping it stay relevant to the world. Similar to how there isn’t just one type of bilingual – speakers of two languages vary in proficiency – there also isn’t just one type of spoken Welsh.

My own research into grammar variation across age ranges found that younger generations are using certain innovative grammatical constructions much more frequently than older generations. The Welsh language that children hear from their peers is different to what they hear from their parents and grandparents. This includes grammatical features such as word order: where an older speaker might say “fy afal i” for “my apple”, a younger speaker is more likely to use “afal fi”. Similarly, research on code-switching by Welsh speakers has found that younger people are more likely than older speakers to mix Welsh and English in the same sentence. So schools and communities need to be able to expose children to Welsh of all registers for them to grow in proficiency and confidence, and learn these new social constructions.

Proficiency plays a big part in shaping language attitudes – and, for a nation like Wales, where fear of language death is common, support for Welsh is vital.

Research sourcing the views of teenagers from north Wales found that more proficient speakers had more positive attitudes towards Welsh. On the other hand, participants with lower Welsh proficiency reported that they reacted negatively towards Welsh at school because they felt pressure to match their more proficient peers.

One of the biggest ironies in contemporary Wales is that it would be easier just to use – and learn in – English, but doing so would unquestionably lead to the death of Welsh – and the end of a language is no small matter.

Identifying precisely why some speakers feel that they cannot engage in Welsh-medium education, or use their Welsh outside of school, would be beneficial to fostering a bilingual Wales and would help heal the kinds of social divisions reported in Llangennech.

The cognitive, cultural and economic benefits of bilingualism have been widely demonstrated. To become bilingual in Welsh you must be exposed to Welsh and, for the majority of Welsh children, the classroom is their main source of this exposure. As such, we should see Welsh schools as central to any community’s efforts to contribute to the bilingual future that’s in Wales’s best interests.

The Conversation

Peredur Webb-Davies receives funding from the RCUK as part of a jointly-funded project with the National Science Foundation (USA).

Confidence can be a bad thing – here's why

Author: Stuart Beattie, Lecturer of Psychology, Bangor University; Tim Woodman, Professor and Head of the School of Sport, Health and Exercise Sciences, Bangor University

Have you ever felt 100% confident in your ability to complete a task, and then failed miserably? After losing in the first round at Queen’s Club for the first time since 2012, world number one tennis player, Andy Murray, hinted that “overconfidence” might have been his downfall. Reflecting on his early exit, Murray said: “Winning a tournament is great and you feel good afterwards, but you can also sometimes think that your game is in a good place and maybe become a little bit more relaxed in that week beforehand.”

There is no doubt that success breeds confidence and, in turn, the confidence gained from success positively influences performance – normally. Recently, however, this latter part of the relationship between confidence and performance has been called into doubt. High confidence can have its drawbacks. One need only look at the results of the recent general election to note that Theresa May called for an early election based partly on her confidence that she would win an overall majority.

Our research at the Institute for the Psychology of Elite Performance at Bangor University has extensively examined the relationship between confidence and performance. So, what are the advantages and disadvantages of having high (or indeed low) levels of confidence for an upcoming task?

Confidence and performance

First, let’s look at the possible outcomes of having low confidence (some form of self-doubt). Low confidence is the state of thinking that we are not quite ready to face an upcoming task. In this case, one of two things happens: either we disengage from the task, or we invest extra effort into preparing for it. In one of our studies participants were required to skip with a rope continuously for one minute. Participants were then told that they had to repeat the task but using a more difficult rope to skip with (in fact it was the same type of rope). Results revealed that confidence decreased but performance improved. In this case, self-doubt can be quite beneficial.

Now let’s consider the role of overconfidence. A high level of confidence is usually helpful for performing tasks because it can lead you to strive for difficult goals. But high confidence can also be detrimental when it causes you to lower the amount of effort you give towards these goals. Overconfidence often makes people no longer feel the need to invest all of their effort – think of the confident student who studies less for an upcoming exam.

‘There’s no way I’ll miss from here.’Jacob Lund/shutterstock.com

Interestingly, some of our research findings show that even when people were given immediate feedback after a golf putting task (so they knew exactly how well they had just performed), their confidence expectations (the number of putts they thought they could make next) exceeded their actual performance levels by as much as 46%. When confidence is miscalibrated (believing you are better than you really are), it will have a negative effect on subsequent task performance.

This overconfidence in our ability to perform a task seems to be a subconscious process, and it looks like it is here to stay. Fortunately, in the long term the pros of being overconfident (reaching for the stars) seem to far outweigh the cons (task failure) because if at first you do not succeed you can always try again. But miscalibrated confidence will be more likely to occur if vital performance information regarding your previous levels of performance accomplishments is either ignored or not available. When this happens people tend to overestimate rather than underestimate their abilities.

So, Andy Murray, this Queen’s setback is a great wake-up call – just in time for Wimbledon.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

How operational deployment affects soldiers' children

Author: Leanne K Simpson, PhD Candidate, School of Psychology | Institute for the Psychology of Elite Performance, Bangor University

So many of us have seen delightful videos of friends and family welcoming their loved ones home from an operational tour of duty. The moment they are reunited is heartwarming, full of joy and tears – but, for military personnel who were deployed to Iraq and Afghanistan post 9/11, their time away came with unprecedented levels of stress for their whole family.

Military personnel faced longer and more numerous deployments, with short intervals in between. The impact of operational deployments on military personnel’s mental health is well reported. Far less is known, however, about how deployment affects military families, particularly those with young children.

Military families are often considered the “force behind the forces”, boosting soldiers’ morale and effectiveness during operational deployment. But this supportive role can come at a price.

Research has shown that deployments which last less than a total of 13 months in a three-year period will not harm military marriages. In fact, divorce rates are similar to those of the general population during service – although these marriages are more fragile when a partner exits the “military bubble”.

But studies have also found that children of service personnel have significantly more mental health problems – including anxiety and depression – than their civilian counterparts. Mental health issues are also particularly high among military spouses raising young children alone during deployment.

Military children

Our understanding of how younger children cope with deployment often stems from mothers’ retrospective reports, or from the children themselves when they become adolescents. Very little is known about the impact of deployment on young children who are at the greatest risk of social and emotional adjustment problems.

Unsurprisingly, the studies that have been conducted indicate that it is the currently deployed and post-deployed families that experience problematic family functioning.

A new study that I have co-authored with Dr Rachel Pye – soon to be published in Military Medicine – examines how UK military families with young children function during three of the five stages described in the “emotional cycle of deployment”, when their father is or has recently been on a tour of duty.

The emotional cycle of an extended deployment – six months or longer – consists of five distinct stages: pre-deployment, deployment, sustainment, re-deployment, and post-deployment. Each stage comes with its own emotional challenges for family members. The cycle can be painful to deal with, but those who know what to expect from each stage are more likely to maintain good mental health.

Possible negative changes in child behaviour resulting from deployment.

Strength in rules

Our research has found that all military families, regardless of deployment stage, have significantly more rules and structured routines than non-military families. Usually this would be indicative of poor family functioning – as it is associated with resistance to change – but we suggest that rigidity may actually be a strength for military families. It gives stability to an often uncertain way of life.

The findings also support previous research with similar US military families where a parent had been deployed. These families were highly resilient, with high levels of well-being, low levels of depression and high levels of positive parenting.

We used a unique way of examining the impact of deployment on young children. Each of the participants was asked to draw their family so that we could measure their perception of family functioning.

Pictures drawn by children of fathers who had returned from deployment within the last six months were quite distinctive. The father was often drawn larger and more detailed than other family members. But in the pictures drawn by children whose fathers were currently deployed, the father was often not included, or the child used less detail or colour.

Example drawings from children whose fathers were either currently deployed, about to deploy or had recently returned from combat operations.Leanne K Simpson

When the pictures were re-analysed ignoring the physical distance between the child and parents – which is often used as an indicator of emotional distance, but could for this sample represent a real physical distance – the differences in how the fathers were drawn were still evident.

What all this means is that children who had a father return from deployment within the previous six months, or a father who was currently deployed, were part of the poorest-functioning families in our study.

This may seem like a negative result but our research also indicated that the effect is temporary. The children’s drawings showed differences between the currently deployed and the post-deployed families, but military children without a deployed parent scored similarly to non-military children.

So although military families are negatively affected by deployment, the impact doesn’t last. The vast majority successfully adapt to each stage of deployment.

Like any family, military families do experience problems – but this research highlights the robust, stoic nature of military families and their incredible ability to bounce back from adversity, demonstrating that they truly are the “force behind the forces”.

The Conversation

Leanne K Simpson receives funding from the British Ministry of Defence via the Defence Science and Technology Laboratory’s PhD studentship scheme, researching mental robustness in military personnel. This article does not reflect the views of the research councils or other publicly-funded bodies.

'Facts are not truth': Hilary Mantel goes on the record about historical fiction

Author: Michael Durrant, Lecturer in Early Modern Literature, Bangor University

In a recent talk at the Hay literary festival, Cambridge historian and biographer John Guy said he had seen an increasing number of prospective students citing Hilary Mantel’s Booker Prize-winning historical novels, Wolf Hall and Bring up the Bodies, as supporting evidence for their knowledge of Tudor history.

Guy suggested that Mantel’s as yet incomplete trilogy on Thomas Cromwell’s life and career – the third instalment, The Mirror and the Light, comes out later this year – has become something of a resource for a number of budding history undergraduates, despite the fact that the novels contain historical inaccuracies (for example, casting Thomas More as a woman-hating tyrant and Anne Boleyn as a female devil, and getting the wrong sheriff of London to lead More to his execution).

The Guardian quotes Guy as saying that this “blur between fact and fiction is troubling”. In fact, Guy’s comments on the blurring of fact and fiction, and related concerns of authenticity, do read as a worrying prognosis. In the age of Trump and fake news, it seems particularly important that we call bullshit on so-called “alternative facts” and place an unquestionable fix on fiction.

Yet historical fiction, in all its varieties, can and frequently does raise vital questions about how we write, and conceptualise, historical processes. Indeed, when writers of historical fiction make stuff up about the past, they sometimes do so in an effort to sharpen, rather than dull, our capacities to separate fact from fiction.

‘There are no endings’

In the first of five Reith Lectures to be aired on BBC Radio 4, Mantel similarly argues that in death “we enter into fiction” and the lives of the dead are given shape and meaning by the living – whether that be the historian or the historical novelist. As the narrator of Bring up the Bodies puts it: “There are no endings.” Endings are, instead, “all beginnings”, the foundation of interpretative acts.

In Mantel’s view, the past is not something we passively consume, either, but that which we actively “create” in each act of remembrance. That’s not to say, of course, that Mantel is arguing that there are no historical “facts” or that the past didn’t happen. Rather, she reminds us that the evidence we use to give narrative shape to the past is “always partial”, and often “incomplete”. “Facts are not truth”, Mantel argues, but “the record of what’s left on the record.” It is up to the living to interpret, or, indeed, misinterpret, those accounts.

Wolf Hall won the Booker Prize in 2009.

In this respect the writer of historical fiction is not working in direct opposition to the professional historian: both must think creatively about what remains, deploying – especially when faced with gaps and silences in the archive – “selection, elision, artful arrangement”, literary manoeuvres more closely associated with novelist Philippa Gregory than with Guy the historian. However, exceptional examples from both fields should, claims Mantel, be “self-questioning” and always willing to undermine their own claims to authenticity.

Richard’s teeth

Mantel’s own theorising of history writing shares much with that other great Tudor storyteller: William Shakespeare.

While Shakespeare’s Richard III (1592) can be read as a towering achievement in historical propaganda – casting Richard, the last of the Plantagenets, as an evil usurper, and Richmond, first Tudor king and Elizabeth I’s grandfather, as prophetic saviour – the play invites serious speculation about the idiosyncratic nature of historical truth.

Take this exchange in Act II Scene IV of the play, which comes just before the doomed young princes are led to the tower. Here, the younger of the two, Richard, duke of York, asks his grandmother, the duchess of York, about stories he’s heard about his uncle’s birth:

York: Marry, they say my uncle grew so fast
That he could gnaw a crust at two hours old …
Duchess of York: I pray thee, pretty York, who told thee this?
York: Grandam, his nurse.
Duchess of York: His nurse? Why, she was dead ere thou wast born.
York: If ’twere not she, I cannot tell who told me.

Having just learned that his uncle’s nurse died before he himself was born, the boy cannot say who told him the story of his uncle’s gnashing baby teeth. Has he misremembered his source, blurring the lines between fact and fiction? Was the boy’s uncle born a monster, or is that a convenient fiction his enemies might wish to tell themselves? And why on earth would Shakespeare bother to include this digression?

Bring up the Bodies won the Booker Prize in 2012.

In all other respects, Richard III invites straightforward historical divisions between good (the Tudors) and evil (the Plantagenet dynasty). But here, subversive doubts creep in about the provenance of the stories we tell about real historical people, with the “historical fact” briefly revealed as a messy, fallible concept, always on the edge of make-believe.

Near-history

Richard III reminds us that historical facts can be fictionalised, but also that the fictional can just as easily turn into fact. Mantel’s Tudor cycle has been haunted by similar anxieties. In the often terrifying world of Henry VIII’s court, her novels show how paranoia breeds rumour, how rumour bleeds into and shapes fact and, as a result, “how difficult it is to get at the truth”. History isn’t just a different country for Mantel, it’s something intimately tied to the fictions we cling to.

And indeed in Wolf Hall that blurred relationship between fact and fiction, history and myth, is often front and centre. There, the past lies somewhere above, between, and below the official record. History is not to be found in “coronations, the conclaves of cardinals, the pomp and processions.” Instead it’s in “a woman’s sigh”, or the smell she “leaves on the air”, a “hand pulling close the bed curtain”; all those things that are crucially absent from the archive.

Brought to life: Thomas Cromwell. Hans Holbein via the Frick Collection.

The fact of history’s ephemerality opens a “gap” for the fictional, into which we “pour [our] fears, fantasies, desires”. As Mantel has asked elsewhere: “Is there a firm divide between myth and history, fiction and fact: or do we move back and forth on a line between, our position indeterminate and always shifting?”

For the Canadian novelist, Guy Gavriel Kay, fantasy is a necessary precondition of all forms of historical writing: “When we work with distant history, to a very great degree, we are all guessing.”

Guy Gavriel Kay’s The Lions of Al-Rassan.

This is why Kay feels at liberty to employ the conventions of fantasy to deal with the past, transposing real historical events, peoples, and places – medieval Spain and Rodrigo Díaz (El Cid) in The Lions of Al-Rassan (1995), for example, or the Viking invasions of Britain in The Last Light of the Sun (2004) – into the realm of the fantastical.

Kay researches (he provides bibliographies in all his books) and then unravels history and historical evidence, putting a “quarter turn” on the assumed facts: renaming historical figures, reversing and collapsing the order of known events, substituting invented religions for real ones, introducing magic into the history of Renaissance Europe or China. He has described the result of this process as “near-history”: alternative pasts that are at once radically strange and weirdly familiar.

Like Mantel’s, Kay’s (near-)historical fictions can be read less as an effort to evade the blur between fact and fiction than as an attempt to point honestly towards that blur as a condition of history itself. After all, history is debatable and often impossible to verify. It’s a reminder, perhaps, that we sometimes need the tropes of fiction to smooth over those complexities, or render them legible, truthful, in the contemporary moment. We need metaphors, and similes, so that the dead can speak and act, live and die.


Michael Durrant does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.