Research stories

On our News pages

Our Research News pages contain an abundance of research-related articles, covering recent research output and topical issues.


Our researchers publish across a wide range of subjects and topics and across a range of news platforms. The articles below are a few of those published on The Conversation.

Independent music labels are creating their own streaming services to give artists a fair deal

Author: Steffan Thomas, Lecturer in Film and Media, Bangor University

Kaspars Grinvalds/Shutterstock

Music streaming services are hard to beat. With millions of users – Spotify alone had 60m by July 2017, and is forecast to add another 10m by the end of the year – paying to access a catalogue of more than 30m songs, any initial concerns seem to have fallen by the wayside.

But while consumers enjoy streaming, tension is still bubbling away for the artists whose music is being used. There is a legitimacy associated with having music listed on major digital platforms, and a general acknowledgement that without being online you are not a successful business operation or artist.

Even the biggest stars are struggling to deny the power of Spotify, Apple Music and the like. Less than three years after pop princess Taylor Swift announced she would be removing her music from Spotify, the best-selling artist is back online, as it were. Swift’s initial decision came amid concerns that music streaming services were not paying artists enough for using their work – a view backed up by others including Radiohead’s Thom Yorke.

But while Yorke and Swift can survive without the power of streaming, independent production companies with niche audiences may not be able to.

Struggling artists

Though the music industry is starting to get used to streaming – streamed tracks count towards chart ratings, and around 100,000 tracks are added every month to Spotify’s distribution list – it is still proving difficult for independent music companies to compete for exposure on these platforms.

Coping with diminishing sales of CDs and other physical copies of music, independent labels are already in a tough place. They are also unable to negotiate with large digital aggregators such as Spotify or Deezer for more favourable rates, and are forced to accept the terms offered. Independent labels often lack negotiating expertise, but above all they lack the catalogue size that confers bargaining power. Major record labels, backed by industry organisations, on the other hand, can and have successfully negotiated more favourable terms for their artists based on the share of the catalogue they represent.


There has also been a shift in industry approach that some independent labels may find difficult to follow. These days, major labels focus less on the artists themselves and more on which music will do best on new platforms. This undermines the ethos of many culturally rich independent labels, which work hard to safeguard niche areas of their market. For them, it is about building up different genres, not simply releasing the songs that will generate the most money.

So if niche labels can’t get a strong footing on large services, what can they do?

Independent streaming

Where once there were free sites such as SoundCloud, which gave emerging and niche musicians a place to share their music, indie labels are now developing their own streaming services to make sure their artists get the best exposure – and the best deal.

Wales in particular is leading the way for the minority-language independent music scene. Streaming service Apton, launched in March 2016, provides a curated service to its music fans. It operates at a competitive price point, with a more selective catalogue representing several Welsh labels. More importantly, it returns a much fairer rate to its recording artists than Spotify’s reported US$0.00429 per stream.
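Taking the reported average rate at face value (the exact figure, and even the currency in which it is quoted, are disputed – this is illustrative arithmetic only), a quick sketch shows how many plays a flat per-stream rate implies:

```python
# Illustrative only: real payouts vary by contract, territory and
# subscription type; 0.00429 is the widely reported average figure.
PER_STREAM_RATE = 0.00429  # reported average payout per stream

def streams_needed(target: float, rate: float = PER_STREAM_RATE) -> int:
    """Number of streams needed to gross a target amount at a flat rate."""
    return round(target / rate)

# Streams needed to gross 1,000 currency units at the reported rate:
print(streams_needed(1000))  # → 233100
```

At that rate, a niche artist needs roughly a quarter of a million plays to gross a four-figure sum – which is why catalogue size translates so directly into bargaining power.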

By using a specialist, curated and targeted music service – such as Apton, or similar services The Overflow and PrimePhonic – consumers are better able to find the music they are looking for. Listeners are also more likely to value the service, as they can access a greater share of a label’s catalogue or remain within a niche genre of music, compared with mainstream mass-market streaming services, where recommendations are generated via popular playlists. Users of these streaming sites and apps also value knowing that the money they spend supports the artists they follow.

Though they are certainly doing well as is, streaming services at all levels need more work to become the default for music listening. In addition, it is vital that music publishers start using streaming as a gateway for consumers to engage with the music they want to hear, rather than what publishers want to sell. If the latter strategy continues to be followed, it may have a devastating effect on budding artists.

Likewise, listeners need to feel that streaming offers transparency and value, and that there is a two-way relationship worthy of their time and attention – something the major players could certainly learn from the independents.

The Conversation

Steffan Thomas was previously affiliated with Sain Records. Apton is owned by Sain Records and was developed in response to research produced during his PhD. However, he has no ongoing role within the company and retains no commercial interest in the service.

Migrating birds use a magnetic map to travel long distances

Author: Richard Holland, Senior Lecturer in Animal Cognition, Bangor University

Anjo Kan/Shutterstock

Birds have an impressive ability to navigate. They can fly long distances, to places that they may never have visited before, sometimes returning home after months away.

Though there has been a lot of research in this area, scientists are still trying to understand exactly how they manage to find their intended destinations.

Much of the research has focused on homing pigeons, which are famous for their ability to return to their lofts after long distance displacements. Evidence suggests that pigeons use a combination of olfactory cues to locate their position, and then the sun as a compass to head in the right direction.

We call this “map and compass navigation”, as it mirrors human orienteering strategies: we locate our position on a map, then use a compass to head in the right direction.

But pigeons navigate over relatively short distances, in the region of tens to hundreds of kilometres. Migratory birds, on the other hand, face a much bigger challenge. Every year, billions of small songbirds travel thousands of kilometres between their breeding areas in Europe and winter refuges in Africa.

This journey is one of the most dangerous things the birds will do, and if they cannot pinpoint the right habitat, they will not survive. We know from displacement experiments that these birds can also correct their path from places they have never been to, sometimes from across continents, such as in a study on white-crowned sparrows in the US.

Over these vast distances, the cues that pigeons use may not work for migrating birds, and so scientists think they may require a more global mapping mechanism.

Navigation and location

To locate our position, we humans calculate latitude and longitude – that is, our position on the north-south and east-west axes of the earth. Human navigators have been able to calculate latitude from the height of the sun at midday for millennia, but it took much longer to work out how to calculate longitude.

Eventually it was solved by using a highly accurate clock to tell the difference between local sunrise time and Greenwich Mean Time. Initially, scientists thought birds might use a similar mechanism, but so far no evidence suggests that shifting a migratory bird’s body clock affects its navigation ability.
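The clock method is simple arithmetic: the Earth turns through 15° of longitude per hour, so the gap between the local time of a solar event and Greenwich time fixes your east-west position. A minimal sketch, using local solar noon (the same principle applies to sunrise):

```python
# The Earth rotates 360° in 24 h = 15° of longitude per hour, so the
# GMT time at which the sun is highest locally gives longitude directly.
def longitude_from_noon(local_noon_gmt_hours: float) -> float:
    """Longitude in degrees (east positive) from the GMT time of local solar noon."""
    return (12.0 - local_noon_gmt_hours) * 15.0

print(longitude_from_noon(14.0))   # local noon at 14:00 GMT → 30° west
print(longitude_from_noon(10.5))   # local noon at 10:30 GMT → 22.5° east
```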

There is another possibility, however, which has been proposed for some time, but never tested – until now.

The earth’s magnetic pole and the geographical north pole (true north) are not in the same place. This means that when using a magnetic compass, there is some angular difference between magnetic and true north, which varies depending on where you are on the earth. In Europe, this difference, known as declination, varies predictably along an east-west axis, and so could serve as a clue to longitude.

A reed warbler. Rafal Szozda/Shutterstock

To find out whether declination is used by migrating birds, we tested the orientation of migratory reed warblers. Migrating birds that are kept in a cage will show increased activity, and they tend to hop in the direction they migrate. We used this technique to measure their orientation after we had changed the declination of the magnetic field by eight degrees.

First, the birds were tested at the Courish Spit in Russia, but the changed declination – in combination with unchanged magnetic intensity – indicated a location near Aberdeen in Scotland. All other cues were still available and told the birds they were in Russia.

If the birds were simply responding to the change in declination – like a magnetic compass would – they would have only shifted eight degrees. But we saw a dramatic reorientation: instead of facing their normal south-west, they turned to face south-east.

This was not consistent with a magnetic compass response, but was consistent with the birds thinking they had been displaced to Scotland, and correcting to return to their normal path. That is to say they were hopping towards the start of their migratory path as if they were near Aberdeen, not in Russia.
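The direction flip falls straight out of great-circle geometry. As an illustration (coordinates are approximate, and the waypoint standing in for the birds’ normal south-westerly route is hypothetical), the initial bearing to the same waypoint differs sharply between the real and the virtual location:

```python
import math

def initial_bearing(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing in degrees (0° = north, clockwise)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(p2)
    y = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360

# Approximate, illustrative coordinates
courish_spit = (55.15, 20.85)   # actual test site, Baltic coast of Russia
aberdeen     = (57.15, -2.10)   # location implied by the shifted field
waypoint     = (47.0, 2.0)      # hypothetical point on the normal SW route

print(initial_bearing(*courish_spit, *waypoint))  # ≈ 243° (south-west)
print(initial_bearing(*aberdeen, *waypoint))      # ≈ 164° (south-east)
```

A bird aiming for the same stretch of its migratory corridor would head south-west from the Baltic but south-east from Scotland – exactly the reorientation observed in the cage tests.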

This suggests that declination is a cue to longitudinal position in these birds.

There are still some questions that need answering, however. We still don’t know for certain how birds detect the magnetic field, for example. And while declination varies consistently in Europe and the US, if you go east, it does not give such a clear picture of where the bird is, with many values potentially indicating more than one location.

There is definitely still more to learn about how birds navigate, but our findings could open up a whole new world of research.

The Conversation

Richard Holland receives funding from the Leverhulme Trust and BBSRC.

Welsh language media could hold the solution to Wales's democratic deficit

Author: Ifan Morgan Jones, Lecturer in Journalism, Bangor University

Billy Stock/Shutterstock

For the people of Wales, the country’s democratic deficit has become almost part and parcel of everyday life. While the country has spent its nearly 20 years of devolution building up many of the political institutions that underpin a modern nation, Wales does not yet have a well-developed public sphere. The result is that the Welsh public are not only voting under a misapprehension of what the assembly and government are responsible for, but there is also a lack of public scrutiny.

The problem has been mostly blamed on the lack of political coverage by English language media in Wales. Major outlets like the Trinity Mirror-owned Media Wales, BBC Wales and ITV Cymru have all claimed they are working to remedy the situation, yet still the deficit remains.

The Assembly itself is keen to get to grips with the issue too: a taskforce – of which I was a member – recently recommended direct state investment in journalists that would report on Welsh politics. This may sound like a step into the unknown, but in truth it would not be a radical departure. Three Welsh-language websites that discuss public affairs – Golwg 360, Barn magazine’s website and O’r Pedwar Gwynt – already receive grants from the Welsh government, via the Welsh Books Council. Another Welsh-language news website, BBC Cymru Fyw, is paid for by the licence fee.

Barn magazine, September 2007. CC BY-SA

The two most prominent of these sites, BBC Cymru Fyw and Golwg360, attract a small but committed audience of more than 57,000 unique weekly visitors between them. Around half of their readers are under 40 years of age – a younger audience than that of Welsh-language print publications, television and radio.

Part of the success of these sites comes from reaching an audience that wouldn’t have made a conscious decision to seek out news stories about Wales, or in Welsh, in the past. Quite simply, because the content appears in their social media feeds, they are more likely to click on it than they ever would be to go out and buy a Welsh-language newspaper or magazine, or tune in to a Welsh-language TV or radio channel.

Though this audience also visits English language outlets for news, readers visit Welsh language sites in search of a certain kind of content that is not available in English. My own analysis of Golwg 360’s statistics, as well as interviews with journalists from all four news sites, suggests that the most popular subjects are the Welsh language and arts, Welsh politics, education in Wales, the Welsh media and Welsh institutions.

Meanwhile, subjects that are already well covered by English-language news sites – such as British and international current affairs, or sport – tend to do poorly.


However, journalists working for Welsh sites other than the BBC’s Cymru Fyw said they did not feel they had sufficient resources to properly scrutinise Welsh institutions – so their ability to carry out in-depth, investigative journalism was severely limited. This problem was made worse by a demand for multimedia content that the journalists did not feel they had the time, resources or technological capability to deliver.

While the number of news platforms providing Welsh-language news is impressive, there may still be a lack of plurality. BBC Cymru Fyw and Golwg360 cover many of the same topics, for example. And the investigative journalism conducted by the numerous Welsh language print magazines does not always find an audience because it isn’t publicised online.

None of the journalists I interviewed felt that their dependence on the Welsh government or the licence fee for funding limited what they felt they could report. In fact, it was felt by some that the commercial press was more likely to restrict what they covered because of commercial interests.

The funding of Welsh language journalism by the Welsh government has clearly been a success. It has created a lively public sphere of avid readers who take a great interest in news about the Assembly itself as well as other Welsh political institutions.

One would hope that funding English-language journalism in such a way will prove unnecessary – and that the commercial media in Wales will turn a corner and strengthen over the next few years. However, if it continues to weaken as it has over the past 20 years, the future of devolution could depend on a radical solution.

The Conversation

Ifan Morgan Jones does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.

Forest conservation approaches must recognise the rights of local people

Author: Sarobidy Rakotonarivo, Postdoctoral Research Fellow, University of Stirling; Neal Hockley, Research Lecturer in Economics & Policy, Bangor University

Protected areas are being established without acknowledging the customary rights of local communities. Sarobidy Rakotonarivo

Until the 1980s, biodiversity conservation in the tropics focused on the “fines and fences” approach: creating protected areas from which local people were forcibly excluded. More recently, conservationists have embraced the notion of “win-win”: a dream world where people and nature thrive side by side.

But over and over, we have seen these illusions shattered and the need to navigate complicated trade-offs appears unavoidable.

To this day, protected areas are being established coercively. They exclude local communities without acknowledging their customary rights. Sadly, most conservation approaches are characterised by a model of “let’s conserve first, and then compensate later if we can find the funding”.

A new conservation model, Reducing Emissions from Deforestation and forest Degradation (REDD+), is an example of this. Finalised at the Paris climate conference in 2015, it seemed to offer something for everyone: supplying global ecosystem services – such as carbon capture and storage, and biodiversity conservation – while improving the lives of local communities.

Unfortunately, REDD+ is often built on protected area regimes that exclude local people. For example in Kenya, REDD+ led to the forceful eviction of forest dependent people and exacerbated inequality in access to land. The approach is underpinned by laws (often a legacy of the colonial era) that fail to recognise local people’s traditional claims to the forest. In doing so, REDD+ fails to provide compensation to the people it most affects and risks perpetuating the illusion of win-win solutions in conservation.

REDD+ is just one way in which forest conservation can disadvantage local people. In our research we set out to estimate the costs that local people will incur as a result of a REDD+ pilot project in Eastern Madagascar: the Corridor Ankeniheny-Zahamena.

Our aim was to see whether we could robustly estimate these costs in advance, so that adequate compensation could be provided using the funds generated by REDD+. Our research found that costs were very significant, but also hard to estimate in advance. Instead, we suggest that a more appropriate approach might be to recognise local people’s customary tenure.

Social costs of protected areas

Madagascar, considered one of the top global biodiversity hotspots, recently tripled the island’s protected area network from 1.7 million hectares to 6 million hectares. This covers 10% of the country’s total land area.

Although the state has claimed ownership of these lands since colonial times, they are often the customary lands of local communities whose livelihoods are deeply entwined with forest use. The clearance of forests for cultivation has traditionally provided access to fertile soils for millions of small farmers in the tropics. Conservation restrictions obviously affect them negatively.

Swidden agriculture in the eastern rainforests of Madagascar. Sarobidy Rakotonarivo

Conservationists need to assess the costs of conservation before they start. This could help to design adequate compensation schemes and alternative policy options.

We set out to estimate the local welfare costs of conservation in the eastern rainforests of Madagascar using innovative multi-disciplinary methods which included qualitative as well as quantitative data. We asked local people to trade off access to forests for swidden agriculture (land cleared for cultivation by slashing and burning vegetation) with compensation schemes such as cash payments or support for improved rice farming.

Choice experiment surveys with local households in Madagascar. Sarobidy Rakotonarivo

We selected households that differed in their past experience of forest protection from two sites in the eastern rainforests of Madagascar.

The findings

We found that households have different views about the social costs of conservation.

When households had more experience of conservation restrictions, neither large cash payments nor support for improved rice farming were seen as enough compensation.

Less experienced households, on the other hand, had strong aspirations to secure forest tenure. Competition for new forest lands is becoming increasingly fierce and government protection, despite undermining traditional tenure systems, is weakly enforced. They therefore believed that legal forest tenure is better since it would enable them to establish claims over forest lands.

Unfortunately, knowing what would constitute “fair” compensation is extremely complex.

Firstly, local people have very different appraisals of the social costs of conservation. That makes it difficult to accurately estimate the potential negative impacts of an intervention.

It’s also hard to evaluate how cash or agricultural projects will stimulate development. This makes it challenging to estimate how much, or what type of compensation should be given.

These challenges are compounded by the high transaction costs of identifying those eligible, as well as communities’ lack of political power to demand compensation.

The solution

Conservation approaches, particularly fair compensation for restrictions that are imposed coercively, need a major rethink.

One solution could be to formally recognise local people’s claims to the forest and then negotiate renewable conservation agreements with them. This is an approach already used successfully in many Western countries. In the US for example, conservation organisations negotiate “easements” with landowners, to protect wildlife. Agreements like this ensure that local people’s participation is genuinely voluntary and that compensation payments are sufficient.

Our research shows that there’s a strong demand from local people for securing local forest tenure. There’s also evidence that doing so may better protect forest resources: without secure rights, local people are likely to clear forests faster than they would if their tenure were recognised.

We therefore argue that securing local tenure may be an essential part of social safeguards for conservation models like REDD+. It could also have the added benefit of helping to reduce poverty.

The social costs of forest conservation have been generally under-appreciated and advocacy for nature conservation reveals a lack of awareness of the high price that local people have to pay. As local forest dwellers have the greatest impact on resources and also the most to lose from non-sustainable uses of these resources, a radical change in current practices is needed.

The Conversation

Sarobidy Rakotonarivo received funding from the European Commission through the forest-for-nature-and-society joint doctoral programme, and the Ecosystem Services for Poverty Alleviation (ESPA) programme (p4ges project: NE/K010220/1) funded by the Department for International Development (DFID), the Economic and Social Research Council (ESRC) and the Natural Environment Research Council (NERC).

Neal Hockley received funding for this work from the Ecosystem Services for Poverty Alleviation (ESPA) programme, funded by the UK Department for International Development, the Natural Environment Research Council and the Economic and Social Research Council.

Want to develop 'grit'? Take up surfing

Author: Rhi Willmot, PhD Researcher in Behavioural and Positive Psychology, Bangor University

Rhi Willmot, Author provided

My friend, Joe Weghofer, is a keen surfer, so when he was told he’d never walk again, following a 20ft spine-shattering fall, it was just about the worst news he could have received. Yet, a month later, Joe managed to stand. A further month, and he was walking. Several years on, he is back in the water, a board beneath his feet. Joe has what people in the field of positive psychology call “grit”, and I believe surfing helped him develop this trait.

Grit describes the ability to persevere with long-term goals, sustaining interest and energy over months or years. For Joe, this meant struggling through arduous physiotherapy exercises and remaining engaged and hopeful throughout his recovery.

Research suggests that gritty people are more likely to succeed in a range of challenging situations. Grittier high school students are more likely to graduate. Grittier novice teachers are more likely to remain in the profession and gritty military cadets are more likely to make it through intense mental and physical training. The secret to this success is found in the ability to keep going when things get tough. Gritty people don’t give up and they don’t get bored.

Joe shortly after his accident. Rhi Willmot, Author provided

Research also suggests that grit can be learned. Certain conditions can foster grit, allowing grit developed in one domain to transfer to other, more challenging, situations. Surfing is a good example of how grit can be gently cultivated, strengthened and then honed. So although getting back in the water itself was important to Joe, his previous surfing experience may well have developed his ability to persevere long before he became injured. Here’s how:


Gritty people have a strong appreciation of the connection between hard work and reward. In contrast to simply running onto a hockey pitch, or diving into a pool, surfing is unique in that you have to battle through the white water at the shoreline before you can even begin to enjoy the feeling of sliding down a glassy, green wave. This is difficult, but the adrenaline rush of riding a wave is worth the cost of paddling out.

The theory of learned industriousness suggests that pairing effort and reward doesn’t just reinforce behaviour but also makes the very sensation of effort rewarding in itself. Repeated cycles of paddling out and surfing in are particularly effective at developing an association between intense effort and potent reward. This is especially relevant given that grit is described as a combination of effort and enjoyment. Gritty people don’t just slave away, they eagerly chase difficult goals in a ferocious pursuit of success.


Surfers’ passion for their sport is well known – it may even be described as an addiction. One of the properties that makes surfing so addictive is its unpredictability.

The ocean is a constantly changing environment, making it difficult to know exactly when and where the next wave is about to break. This means watery reinforcement is delivered on something called a variable-interval schedule; any number of quality waves might arrive at any point in a given time frame. Importantly, we receive a stronger release of the motivating neurotransmitter dopamine when a reward is unexpected. So when a surfer is surprised by the next perfect wave, dopamine-sensitive pleasure centres in the brain become all the more stimulated.

Behaviour that is trained under a variable-interval schedule is much more likely to be maintained than behaviour that is rewarded more consistently, making surfers better able to persevere when the waves take a long time to materialise.
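A variable-interval schedule can be sketched as a toy simulation. Here waves are assumed to arrive with exponentially distributed gaps (a Poisson process) – an idealisation for illustration, not a model of real surf:

```python
import random

def variable_interval_sessions(mean_wait, session_length, n_sessions, seed=0):
    """Simulate surf sessions where rideable waves arrive at random
    (exponential gaps, i.e. a variable-interval schedule) and count
    the waves caught per session."""
    rng = random.Random(seed)
    counts = []
    for _ in range(n_sessions):
        t, waves = 0.0, 0
        while True:
            t += rng.expovariate(1.0 / mean_wait)  # unpredictable gap
            if t > session_length:
                break
            waves += 1
        counts.append(waves)
    return counts

# Ten 60-minute sessions with a wave every 8 minutes on average:
counts = variable_interval_sessions(mean_wait=8.0, session_length=60.0,
                                    n_sessions=10)
print(counts)
```

The mean rate is identical in every session, yet the payoff varies from session to session – the unpredictability that, on a variable-interval schedule, makes the reward so compelling.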

Joe, enjoying the activity that made him who he is. Rhi Willmot, Author provided


The final grit-honing element of surfing is its ability to provide a sense of purpose. Feeling purposeful – a state psychologists describe as a belief that life is meaningful and worthwhile – involves doing things that take us closer to our important goals. It usually means acting in line with our values and being part of something bigger than ourselves. This could refer to religious practice, connecting to nature or simply helping other people.

Research suggests that as levels of grit increase, so does a sense of purpose. But this doesn’t mean that gritty people are saints – just that they have an awareness of how their activities connect to a cause beyond themselves, as well as their own deeply held values.

The physical and mental challenge offered by surfing provides a sense of personal fulfilment. It’s always possible to paddle faster, ride for longer or try the next manoeuvre, but spending time waiting for the next wave also provides a valuable opportunity to reflect.

The ocean is a powerful beast. Serenity can quickly be replaced with chaos when an indomitable set of waves arrives, five-foot-high walls of water, stacked one after the other. Witnessing the power of nature in this way can certainly deliver a sense of perspective, helping you to feel connected to something meaningful and awe inspiring.

Of course, surfing isn’t the only way to build grit. The important lesson here is that developing our passion and identifying our purpose can help us persevere with the activities we love. This provides a valuable reservoir of strength, to be used when we need it the most. And while coming back from such a serious injury requires more than just grit, Joe’s persistent effort and unwillingness to give in have undoubtedly helped him to once again enjoy the sport that made him who he is.

The Conversation

Rhi Willmot does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.

Artists and architects think differently to everyone else – you only have to hear them talk

Author: Thora Tenbrink, Reader in Cognitive Linguistics, Bangor University

How often have you thought that somebody talks just like an accountant, or a lawyer, or a teacher? In the case of artists, this goes a long way back. Artists have long been seen as unusual – people with a different way of perceiving reality. Famously, the French architect Le Corbusier argued in 1946 that painters, sculptors and architects are equipped with a “feeling of space” in a very fundamental sense.

Artists have to think about reality in different ways to other people every day in their jobs. Painters have to create an imaginary 3D image on a 2D plane, performing a certain magic. Sculptors turn a block of marble into something almost living. Architects can design buildings that would seem impossible.

Think of Edgar Mueller’s famous street art. Or Michelangelo’s Pietà. Or Frank Lloyd Wright’s Fallingwater, which seems to defy physics. All of these people are (or were) experts in rearranging the spatial relationships in their environment, each in their own way. This is a necessary skill for anyone who takes up these crafts as a profession. How could this not affect the ways in which they think – and talk – about space?

Our recent study, a collaboration of UCL and Bangor University, set out to test this. Do architects, painters, and sculptors conceive of spaces in different ways from other people and from each other? The answer is: yes, they do – in a range of quite subtle ways.

Painters, sculptors, architects (all “spatial” professionals with at least eight years of experience) and a group of people in unrelated (“non-spatial”) professions took part in the study. There were 16 people in each professional group, with similar age range and equal gender distribution. They were shown a Google Street view image, a painting of St Peter’s Basilica in the Vatican and a computer-generated surreal scene.

Michelangelo’s Pietà in St Peter’s Basilica in the Vatican. Stanislav Traykov via Wikimedia Commons

For each picture, they were given a few tasks that made them think about the spatial scene in certain ways: they were asked to describe the environment, explain how they would explore the space shown and suggest changes to it in the image. This picture-based task was chosen because of its simplicity – it doesn’t take an expert to describe a picture or to imagine exploring or changing it.

From the answers, we categorised elements of the responses for both qualitative and quantitative analyses, using a new technique called Cognitive Discourse Analysis, which aims to highlight aspects of thought that underlie linguistic choices, beyond what speakers are consciously aware of. We also made a short film about the research.

Telltale language

Our analysis identified consistent and revealing patterns in the language used to talk about the pictures. Painters, sculptors and architects all gave more elaborate, detailed descriptions than the others.

Painters were more likely to describe the depicted space as a 2D image and said things like: “It’s obvious the image wants you to follow the boat off onto the horizon.” They tended to shift between describing the scene as a 3D space or as a 2D image. By contrast, architects were more likely to describe barriers and boundaries of the space – as in: “There are voids within walls which become spaces in their own right.” Sculptors’ responses were between the two – they were somewhat like architects except for one measure: with respect to the bounded descriptions of space, they appeared more like painters.

Painters and architects also differed in how they described the furthest point of the space, as painters called it the “back” and architects called it the “end”. The “non-spatial” group rarely used either one of these terms – instead they referred to the same location by using other common spatial terms such as “centre” or “bottom” or “there”. All of this had nothing to do with expert language or register – obviously people can talk in detail about their profession. But our study reflected the way they think about spatial relationships in a task that did not require their expertise.

The “non-spatial” group did not experience any problems with the task – but their language seemed less systematic and less rich than that of the three spatial professional groups.

Thinking and talking like a professional

Our career may well change the way we think, in somewhat unexpected ways. In the late 1930s, American linguist Benjamin Lee Whorf suggested that the language we speak affects the way we think – and this triggered extensive research into how culture changes cognition. Our study goes a step further – it shows that even within the same culture, people of different professions differ in how they perceive the world.

Frank Lloyd Wright’s Fallingwater in Mill Run, Pennsylvania. Iam architect via Wikimedia Commons, CC BY-SA

The findings also raise the possibility that people who are already inclined to see the world as a 2D image, or who focus on the borders of a space, may be more inclined to pursue painting or architecture. This also makes sense – perhaps we develop our thinking in a particular way, for whatever reason, and this paves our way towards a particular profession. Perhaps architects, painters and sculptors already talked in their own fashion about spatial relationships before they started their careers.

This remains to be looked at in detail. But it’s clear from our study that artists and architects have a heightened awareness of their surroundings which is reflected in the way they talk about spatial environments. So next time you are at dinner with an architect, painter, or sculptor, show them a photograph of a landscape and get them to describe it – and see if you can spot the telltale signs of their profession slipping out.

The Conversation

Thora Tenbrink's research was carried out with Claudia Cialone and Hugo Spiers.

How we're using ancient DNA to solve the mystery of the missing last great auk skins

Author: Jessica Emma Thomas, PhD Researcher, Bangor University

The great auk by John James Audubon. University of Pittsburgh/Wikimedia

On a small island off the coast of Iceland, 173 years ago, a sequence of tragic events took place that would lead to the loss of an iconic bird: the great auk.

The great auk, Pinguinus impennis, was a large, black and white bird that was found in huge numbers across the North Atlantic Ocean. It was often mistaken for a member of the penguin family, but its closest living relative is actually the razorbill, and it is related to puffins, guillemots and murres.

Being flightless, the great auk was particularly vulnerable to hunting. Humans killed the birds in their thousands for meat, oil and feathers. By the start of the 19th century, the north-west Atlantic populations had been decimated, and the last few remaining breeding birds were to be found on the islands off the south-west coast of Iceland. But these faced another threat: due to their scarcity, the great auk had become a desirable item for both private and institutional collections.

The great auk’s breeding range across the North Atlantic. Maps created using spatial data from BirdLife International/IUCN with a National Geographic basemap in ArcGIS. Author provided

The fateful voyage of 1844

Between 1830 and 1841 several trips were taken to Iceland’s Eldey Island, to catch, kill, and sell the birds for exhibitions. Following a period of no reported captures, great auk dealer Carl Siemsen commissioned an expedition to Eldey to search for any remaining birds.

Between June 2 and 5, 1844, 14 men set sail in an eight-oared boat for the island. Three braved the dangerous landing and spotted two great auks among the smaller birds that also bred there. A chase began, but the birds ran at a slow pace, their small wings extended, expressing no call of alarm. They were caught with relative ease and killed; their egg, broken in the hunt, was discarded.

But the birds – a male and a female – were never to reach Siemsen. The expedition leader sold them to a man named Christian Hansen, who then sold them on to Herr Möller, an apothecary in Reykjavik. Möller skinned the birds and sent them, and their preserved body parts, to Denmark.

The last male great auk killed on Eldey Island, June 1844. Thierry Hubin/Royal Belgian Institute of Natural Sciences

The internal organs of these two birds now reside in the Natural History Museum of Denmark. The skins, however, were lost track of, and – despite considerable effort by numerous scholars – their location has remained unknown.

Missing skins

In 1999, great auk expert Errol Fuller proposed a list of candidate specimens, the origins of which were not known, which he believed could be from the last pair of great auks. But how to find which of these were the true skins? For this we turned to the field of ancient DNA (aDNA).

In the last 30 years, aDNA technology has progressed greatly, and has been used to address a wide range of ecological and evolutionary questions, providing insight into countless species’ pasts, including humans. Museum specimens play a key role in aDNA research and have been used to solve several issues of unidentified or misidentified specimens – for example Greenlandic Norse fur, rare kiwi specimens, Auckland Island shags, and mislabelled penguin samples.

We took things a step further, using aDNA techniques and a detective-like approach to try and resolve the mystery of what happened to the skins of the last two great auks.

Ancient DNA

We sampled the organs from the last birds, along with candidate specimens from Brussels, Belgium; Oldenburg and Kiel, in Germany; and Los Angeles. We then extracted and sequenced the mitochondrial genomes from each, and compared the sequences from the candidate skins to those from the organs of the last pair.
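The matching logic can be illustrated with a toy sketch. Everything below – the sequences, the variable names and the simple identity measure – is invented for illustration; the study worked with full mitochondrial genomes and proper alignment. The principle, though, is the same: a candidate skin whose aligned sequence is identical to an organ’s sequence is consistent with coming from the same individual, while mismatched sites rule a candidate out.

```python
# Toy sketch of specimen matching by mitochondrial sequence comparison.
# All sequences and names here are hypothetical; the study compared full
# mitochondrial genomes, not short fragments like these.

def percent_identity(seq_a, seq_b):
    """Fraction of matching positions between two aligned, equal-length sequences."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to the same length")
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return matches / len(seq_a)

# Hypothetical aligned mtDNA fragments.
male_organs   = "ACGTTAGCCGATAGCT"
candidate_one = "ACGTTAGCCGATAGCT"   # identical: consistent with the same individual
candidate_two = "ACGTTAACCGTTAGCT"   # differs at two sites: excluded

print(percent_identity(male_organs, candidate_one))  # 1.0
print(percent_identity(male_organs, candidate_two))  # 0.875
```

In practice, mitochondrial genomes differ between individuals at enough sites that an exact match across the whole genome is strong evidence that two samples came from the same bird.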

The hearts of the last two documented great auks. The female’s was sampled for our study. Natural History Museum of Denmark, Author provided

The results showed that the skin held in the museum in Brussels was a perfect match for the oesophagus from the male bird. Unfortunately, there was no match between the other candidate skins and the female’s organs.

The specimens from Brussels and Los Angeles were thought to be the most likely candidates due to their history: both birds were in the hands of a well-known great auk dealer, Israel of Copenhagen, in 1845. As the bird in Brussels was a match, we thought it likely that the one in Los Angeles would also be a match for the female’s organs. It was surprising when it wasn’t. However, our research led us to speculate that a mix-up which occurred following the death of Captain Vivian Hewitt in 1965 – who owned four birds which are now in Cardiff, Birmingham, Los Angeles and Cincinnati – was not resolved as once thought.

The identities of the birds now in Birmingham and Cardiff are known, after photographs were used to identify them – but those in Los Angeles and Cincinnati have been harder to determine. It was thought that their identities could be established from annotated photographs taken in 1871, but we speculate that they were not correctly identified, and that the bird in Cincinnati may be the original bird from Israel of Copenhagen. If this is the case, it could explain why the Los Angeles bird fails to match either of the last great auk organs held in Copenhagen.

We now have permission to test the great auk specimen in the Cincinnati Museum of Natural History and Science, and hopefully solve this final piece of a centuries-old puzzle. There is no guarantee that this bird will be a match either, but if it is, we will finally know what happened to the last two specimens of the extinct great auk.

The Conversation

Jessica Thomas is a double-degree PhD student enrolled at Bangor University and the University of Copenhagen. She receives funding from NERC PhD Studentship (NE/L501694/1), the Genetics Society-Heredity Fieldwork Grant, and European Society for Evolutionary Biology–Godfrey Hewitt Mobility Award.

Chefs and home cooks are rolling the dice on food safety

Author: Paul Cross, Senior Lecturer in the Environment, Bangor University; Dan Rigby, Professor of Environmental Economics, University of Manchester


Encouraging anyone to honestly answer an embarrassing question is no easy task – not least when it might affect their job.

For our new research project, we wanted to know whether chefs in a range of restaurants and eateries, from fast food venues and local cafes to famous city bistros and award-winning restaurants, were undertaking “unsafe” food practices. As some of these – such as returning to the kitchen within 48 hours of a bout of diarrhoea or vomiting – contravene Food Standards Agency guidelines, it was unlikely that all respondents would answer honestly if asked about them.

This was not just a project to catch food professionals in a lie; we wanted to find out the extent to which the public and chefs handled food in unsafe ways. With up to 500,000 cases of food-borne disease reported every year in the UK, at an estimated cost of £1.5 billion in resource and welfare losses, the need to identify risky food handling is urgent.

The Food Standards Agency (FSA) is acutely aware of the problem and has instigated initiatives such as the Food Hygiene Rating Scheme (FHRS) that involves inspections and punishments following the identification of poor food handling behaviours in restaurants and eateries. However, such initiatives do not always manage to change the behaviour of the food handlers – and inadequate food handling practices frequently go unseen or unreported.

Dicing with destiny

Yet still, we were faced with the issue of getting honest answers to our research questions. So we rolled the dice – two of them, to be precise. As part of our research, 132 chefs and 926 members of the public were asked to agree or disagree with the following four statements:

I always wash my hands immediately after handling raw meat, poultry or fish;

I have worked in a kitchen within 48 hours of suffering from diarrhoea and/or vomiting;

I have worked in a kitchen where meat that is “on the turn” has been served;

I have served chicken at a barbecue when I wasn’t totally sure that it was fully cooked.

Here, the dice rolling was part of a randomised response technique (RRT): interviewees secretly rolled two dice and gave “forced” responses if particular values resulted. If they rolled a 2, 3 or 4, they had to answer yes. If they rolled 11 or 12, they had to answer no. All other values required an honest answer.

Denying the first statement, or admitting to the other three, would be embarrassing for members of the public, and could possibly lead to dismissal for professional caterers. Because interviewees knew that a “yes” could have been forced by the dice roll, they were more willing to report a true, unforced “yes”.

We were unable to distinguish between individuals who had given a forced response and those who had answered truthfully. But we knew statistically that 75% of the dice rolls would lead to an honest response, and so were able to estimate the proportion of the public and chefs who had performed each of the risky behaviours. We also looked at the results in terms of factors such as price, awards and FHRS ratings to find out how they were associated with the practices.
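The estimation step can be sketched in a few lines. This is an illustrative simulation, not the study’s actual analysis code: it applies the dice rules described above (a roll of 2, 3 or 4 forces a “yes”, 11 or 12 forces a “no”, anything else gets an honest answer) and then inverts the resulting probability mix to recover the underlying rate.

```python
import random

# Randomised response technique (RRT), as described in the article.
# With two dice: P(2, 3 or 4) = 6/36 forces "yes"; P(11 or 12) = 3/36 forces "no";
# the remaining 27/36 = 75% of rolls yield an honest answer.
P_FORCED_YES = 6 / 36
P_HONEST = 27 / 36

def simulate_responses(true_rate, n, rng=random.Random(42)):
    """Simulate n RRT answers, given the true prevalence of the risky behaviour."""
    answers = []
    for _ in range(n):
        roll = rng.randint(1, 6) + rng.randint(1, 6)
        if roll in (2, 3, 4):
            answers.append(True)                       # forced "yes"
        elif roll in (11, 12):
            answers.append(False)                      # forced "no"
        else:
            answers.append(rng.random() < true_rate)   # honest answer
    return answers

def estimate_true_rate(answers):
    """Invert P(yes) = P_FORCED_YES + P_HONEST * true_rate."""
    observed_yes = sum(answers) / len(answers)
    return (observed_yes - P_FORCED_YES) / P_HONEST

answers = simulate_responses(true_rate=0.20, n=100_000)
print(round(estimate_true_rate(answers), 2))  # close to 0.20
```

No individual answer reveals anything on its own, yet the aggregate rate is recoverable – which is exactly why respondents can afford to be honest.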

Outdoor cooking. Normana Karia/Shutterstock

Kitchen challenge

What we found from all of the responses was that it can be quite challenging for consumers to find an eatery where such unsafe practices are absent. Chefs working in award-winning kitchens were more likely than others (almost one in three) to have returned to work within 48 hours of suffering from diarrhoea and vomiting. This is a serious cause for concern, as returning to work in a kitchen too soon after illness is a proven way to spread infection and disease.

Not washing hands was also more likely in upmarket establishments – despite over one-third of the public agreeing that the more expensive a meal was, the safer they would expect it to be.

Chefs working in restaurants with a good Food Hygiene Rating Scheme score – a 3, 4 or 5 on a scale of zero to five in England and Wales, or a “pass” in Scotland – were just as likely to have committed the risky practices, or to have worked with others who had.

We also found a high proportion of chefs across the board had served meat which was “on the turn”. This is equally worrying, as it is part of a long-established cost-cutting practice that often involves masking the flavour of meat that is going off by adding a sauce.

Meanwhile at home, 20% of the public admitted to serving meat on the turn, 13% had served barbecued chicken when unsure it was sufficiently cooked, and 14% admitted to not washing their hands after touching raw meat or fish.

That is not to say that all chefs – or members of the public – practise unsafe food handling; indeed, the majority did not admit to the poor food practices. But the number of professional kitchens where chefs admit to risky behaviour is still a cause for concern, and avoiding them is not easy. People opting for a “fine-dining” establishment which holds awards, demands high prices and has a good FHRS score might not be as protected, nor as reassured, as they think.

The Conversation

Paul Cross receives funding from Natural Environment Research Council. The Enigma project is funded by the major UK Research Councils and this study was a collaboration between Bangor, Manchester and Liverpool Universities.

Dan Rigby, as part of the Enigma project, received funding for this work from the Medical Research Council, Natural Environment Research Council, Economic and Social Research Council, Biotechnology and Biosciences Research Council and the Food Standards Agency, through the Environmental & Social Ecology of Human Infectious Diseases Initiative (ESEI).

Brexit's impact on farming policy will take Britain back to the 1920s – but that's not necessarily a bad thing

Author: David Arnott, PhD Researcher, Bangor University

Howard Pimborough/Shutterstock

Not much regarding Brexit is clear. But one thing we do know is that the UK’s decision to leave the EU has triggered proposals to implement the most significant changes to agricultural policy since it joined the European Common Agricultural Policy (CAP) in 1973.

The CAP was designed to provide a stable, sustainably produced supply of safe, affordable food. It also ensured a decent standard of living for farmers and agricultural workers, providing support through subsidies.

Now, the UK’s main political parties agree direct subsidy provision has to be reviewed and fundamentally changed. The current system favours large landowners over the small and is seen by many as encouraging inefficiency in farming practices. At present, support comes in the form of a two-pillar system, one providing direct support payments, and the other giving payments which reward the farmer for conducting environmental practices through participation in agri-environment schemes.

In its election manifesto, the Conservative Party agreed to maintain all subsidy support until 2022. After that, it will move to a one-pillar system, providing payment for public goods, woodland regeneration, carbon sequestration and greenhouse gas reduction, among other things. It would shift towards a free market economy where payments would no longer directly support farming businesses without public good provision.

Speaking to Farming Today, environment secretary Michael Gove said: “There’s a huge opportunity to design a better system for supporting farmers, but first I need to listen to environmentalists about how we can use that money to better protect the environment … and also to farmers to learn how to make the regime work better.”

Labour Party policy meanwhile aims to reconfigure funds for farming to support smaller traders, local economies, community benefits and sustainable practices. Both major parties through their manifestos seem to agree in principle that change must – and will – come, albeit for differing reasons.

When combined with exit from the single market and the customs union, these policies will create an agricultural playing field pretty similar to that of 100 years ago.

1921–1931

During World War I and the post-war reconstruction, the agriculture and food ministries controlled their respective industries. This culminated in the Agriculture Act (1920), which provided support for farmers in the form of guaranteed prices for agricultural products and minimum wages for farm labourers. But within six months of its implementation, falling prices and a struggling economy forced the repeal of the act, returning the country to the laissez-faire free market economy that had existed before 1914, with little or no government involvement.

At this time, Labour and the Conservatives were united in their anti-subsidy approach, strongly believing agricultural issues should be solved in the open market.

Green and pleasant land. Jarek Kilian/Shutterstock

These sentiments – which eventually led to a free market period lasting from 1921 to 1931 – are reflected in the policies of today. The 1920s Labour Party opposed state support to farmers while land was privately owned – today, Labour wants to move subsidies away from wealthy landowners.

In the 1930s the Conservatives stated: “It is no longer national policy to buy all over the world in the cheapest markets”. Their ambition today is to: “make a resounding success of our world-leading food and farming industry; producing more, selling more, and exporting more of our great British food”.

However, there were some significant downsides when the Agriculture Act was repealed: agricultural wages fell by as much as 40%. Productivity fell too, rural poverty increased, small farms failed and land was abandoned through urban migration. Some described the countryside as a desolate waste.

Future rules

Not all see small-scale farm failure as bad, however. In the 1960s, agricultural economist Professor John F. Nash described farmer support as: “providing small or average farmers with what is considered a reasonable income, encouraging them to remain small or average farmers. They will remain in farms that would otherwise be unprofitable or use systems which otherwise might be too costly.” He argued that there were too many small farms and they needed to increase their output to survive without subsidies.

Though uncertainty remains around the precise nature of future policy, it will definitely affect the shape of agriculture in the UK. Small, unproductive farms may struggle to survive and tenancies may not be renewed. A reduction in land prices could see small farms bought out by larger enterprises.

Cutting subsidies could be the best thing for Britain environmentally: it could encourage more farmers to pursue sustainable practices. But in 1986, when New Zealand removed farming subsidies, it had the effect of changing farm structure from small to large-scale commercial units. This model, while viewed as a success in productivity and innovation terms, had a devastating effect on the environment.

But, if implemented, the Conservative manifesto pledge would work very differently to the New Zealand example, providing alternatives to increased production through support to farmers for the provision of environmental services. Nothing is definite. Uncertainty ensues – and farmers can only wait to see what happens and hope that a step into the past can make for a brighter future.

The Conversation

David Arnott is a PhD research student at Bangor University currently working on a Welsh European Funding Office Flexible Integrated Energy Systems (FLEXIS) project. The aim of this part of the project is to evaluate the impact of policy change on farmer decision-making and carbon management. Farmers of all types and farm sizes are currently being recruited to assist in the research, which will be conducted over the next two years. Participation will involve completing a short survey and, if interested, a series of face-to-face interviews conducted every six months. If you are interested in participating in this topical, ground-breaking research project, or would like more information, please get in touch via Twitter: @DavidArnott10

Tech firms want to detect your emotions and expressions, but people don't like it

Author: Andrew McStay, Reader in Advertising and Digital Media, Bangor University

Sergey Nivens

As revealed in a patent filing, Facebook is interested in using webcams and smartphone cameras to read our emotions, and track expressions and reactions. The idea is that by understanding emotional behaviour, Facebook can show us more of what we react positively to in our Facebook news feeds and less of what we do not – whether that’s friends’ holiday photos, or advertisements.

This might appear innocuous, but consider some of the detail. In addition to smiles, joy, amazement, surprise, humour and excitement, the patent also lists negative emotions. Possibly being read for signs of disappointment, confusion, indifference, boredom, anger, pain and depression is neither innocent, nor fun.

In fact, Facebook is no stranger to using data about emotions. Some readers might remember the furore when Facebook secretly tweaked users’ news feeds to understand “emotional contagion”. This meant that when users logged into their Facebook pages, some were shown content in their news feeds with a greater number of positive words and others were shown content deemed sadder than average. This changed the emotional behaviour of those users that were “infected”.

Given that Facebook has around two billion users, this patent to read emotions via cameras is important. But there is a bigger story, which is that the largest technology companies have been buying, researching and developing these applications for some time.

Watching you feel

For example, Apple bought Emotient in 2016, a firm that pioneered facial coding software to read emotions. Microsoft offers its own “cognitive services”, and IBM’s Watson is also a key player in industrial efforts to read emotions. It’s possible that Amazon’s Alexa voice-activated assistant could soon be listening for signs of emotions, too.

This is not the end though: interest in emotions is not just about screens and worn devices, but also our environments. Consider retail, where increasingly the goal is to understand who we are and what we think, feel and do. Somewhat reminiscent of Steven Spielberg’s 2002 film Minority Report, eyeQ Go, for example, measures facial emotional responses as people look at goods at shelf-level.

What these and other examples show is that we are witnessing a rise of interest in our emotional lives, encompassing any situation where it might be useful for a machine to know how a person feels. Some less obvious examples include emotion-reactive sex toys, the use of video cameras by lawyers to identify emotions in witness testimony, and in-car cameras and emotion analysis to prevent accidents (and presumably to lower insurance rates).

How long till machines can tell what we can? jura-photography

Users are not happy

In a report assessing the rise of “emotion AI” and what I term “empathic media”, I point out that this is not innately bad. There are already games that use emotion-based biofeedback, which take advantage of eye-trackers, facial coding and wearable heart rate sensors. These are a lot of fun, so the issue is not the technology itself but how it is used. Does it enhance, serve or exploit? After all, the scope to make emotions and intimate human life machine-readable has to be treated cautiously.

The report covers views from industry, policymakers, lawyers, regulators and NGOs, but it’s useful to consider what ordinary people say. I conducted a survey of 2,000 people and asked questions about emotion detection in social media, digital advertising outside the home, gaming, interactive movies through tablets and phones, and using voice and emotion analysis through smartphones.

I found that more than half (50.6%) of UK citizens are “not OK” with any form of emotion capture technology, while just under a third (30.6%) feel “OK” with it, as long as the emotion-sensitive application does not identify the individual. A mere 8.2% are “OK” with having data about their emotions connected with personally identifiable information, while 10.4% “don’t know”. That such a small proportion are happy for emotion-recognition data to be connected with personally identifying information about them is pretty significant considering what Facebook is proposing.

But do the young care? I found that younger people are twice as likely to be “OK” with emotion detection as the oldest people. But we should not take this to mean they are “OK” with having data about emotions linked with personally identifiable information. Only 13.8% of 18- to 24-year-olds accept this. Younger people are open to new forms of media experiences, but they want meaningful control over the process. Facebook and others, take note.

New frontiers, new regulation?

So what should be done about these types of technologies? UK and European law is being strengthened, especially given the introduction of the General Data Protection Regulation. While this has little to say about emotions, there are strict codes on the use of personal data and information about the body (biometrics), especially when used to infer mental states (as Facebook has proposed to do).

This leaves us with a final problem: what if the data used to read emotions is not strictly personal? What if shop cameras pick out expressions in such a way as to detect emotion, but not identify a person? This is what retailers are proposing and, as it stands, there is nothing in the law to prevent them.

I suggest we need to tackle the following question: are citizens and the reputation of the industries involved best served by covert surveillance of emotions?

If the answer is no, then codes of practice need to be amended immediately. The ethics of emotion capture, and of rendering bodies passively machine-readable, are not contingent upon personal identification, but upon something more important. Ultimately, this is a matter of human dignity, and about what kind of environment we want to live in.

There’s nothing definitively wrong with technology that interacts with emotions. The question is whether it can be shaped to serve, enhance and entertain, rather than exploit. And given that survey respondents of all ages are rightfully wary, it’s a question that the public should be involved in answering.

The Conversation

Andrew McStay receives funding from AHRC and ESRC.

The ATM at 50: how a hole in the wall changed the world

Author: Bernardo Batiz-Lazo, Professor of Business History and Bank Management, Bangor University

Back in the day... Lloyds Banking Group Archives & Museum

Next time you withdraw money from a hole in the wall, consider singing a rendition of happy birthday. For on June 27, the Automated Teller Machine (or ATM) celebrates its half century. Fifty years ago, the first cash machine was put to work at the Enfield branch of Barclays Bank in London. Two days later, a Swedish device known as the Bankomat was in operation in Uppsala. And a couple of weeks after that, another one built by Chubb and Smith Industries was inaugurated in London by Westminster Bank (today part of RBS Group).

These events fired the starting gun for today’s self-service banking culture – long before the widespread acceptance of debit and credit cards. The success of the cash machine enabled people to make impromptu purchases, spend more money on weekend and evening leisure, and demand banking services when and where they wanted them. The infrastructure, systems and knowledge they spawned also enabled bankers to offer their customers point of sale terminals, and telephone and internet banking.

There was substantial media attention when these “robot cashiers” were launched. Banks promised their customers that the cash machine would liberate them from the shackles of business hours and banking at a single branch. But customers had to learn how to use – and remember – a PIN, perform a self-service transaction and trust a machine with their money.

People take these things for granted today, but when cash machines first appeared many had never before been in contact with advanced electronics.

And the system was far from perfect. Despite widespread demand, only bank customers considered to have “better credit” were offered the service. The early machines were also clunky, heavy (and dangerous) to move, insecure, unreliable, and seldom conveniently located.

Indeed, unlike today’s machines, the first ATMs could do only one thing: dispense a fixed amount of cash when activated by a paper token or bespoke plastic card issued to customers at retail branches during business hours. Once used, tokens would be stored by the machine so that branch staff could retrieve them and debit the appropriate accounts. The plastic cards, meanwhile, would have to be sent back to the customer by post. Needless to say, it took banks and technology companies years to agree common standards and finally deliver on their promise of 24/7 access to cash.

The globalisation effect

Estimates by RBR London concur with my research, suggesting that by 1970, there were still fewer than 1,500 of the machines around the world, concentrated in Europe, North America and Japan. But there were 40,000 by 1980 and a million by 2000.

A number of factors made this ATM explosion possible. First, sharing locations created more transaction volume at individual ATMs. This gave incentives for small and medium-sized financial institutions to invest in this technology. At one point, for instance, there were some 200 shared ATM networks in the US and 80 shared networks in Japan.

They also became more popular once banks digitised their records, allowing the machines to perform a host of other tasks, such as bank transfers, balance requests and bill payments. Over the last five decades, a huge number of people have made the shift away from the cash economy and into the banking system. Consequently, ATMs became a key way of avoiding congestion at branches.

ATM design began to accommodate people with visual and mobility disabilities, too. And in recent decades, many countries have allowed non-bank companies, known as independent ATM deployers (IADs), to operate machines. IADs were key to populating non-bank locations such as corner shops, petrol stations and casinos.

Indeed, while a large bank in the UK might own 4,000 devices and one in the US as many as 12,000, Cardtronics, the largest IAD, manages a fleet of 230,000 ATMs in 11 countries.

Ready cash? You can bank on it. Shutterstock

Bank to the future

The ATM has remained a relevant and convenient self-service channel for the last half century – and its history is one of invention and re-invention, evolution rather than revolution.

Self-service banking and ATMs continue to evolve. Instead of PIN authentication, some ATMs now use “tap and go” contactless payment technology via bank cards and mobile phones. Meanwhile, ATMs in Poland and Japan have used biometric recognition – which can identify a customer’s iris, fingerprint or voice – for some time, while banks in other countries are considering them.

So it’s a good time to consider what the history of cash dispensers can teach us. The ATM was not the result of a eureka moment by a single middle-aged man in a bath or garage, but of active collaboration between various groups of bankers and engineers to solve the significant challenges of a changing world. It took two decades for the ATM to mature and gain widespread, worldwide acceptance, but today there are 3.5m ATMs with another 500,000 expected by 2020.

Research I am currently undertaking suggests that ATMs may have reached saturation point in some Western countries. However, research by the ATM Industry Association suggests there is strong demand for them in China, India and the Middle East. In fact, while in the West people tend to use them for three self-service functions (cash withdrawal, balance enquiries, and purchasing mobile phone airtime), Chinese customers regularly use them for as many as 100 different tasks.

Taken for granted?

Interestingly, people in most urban areas around the world tend to interact with the same five ATMs. But they shouldn’t be taken for granted. In many countries in Africa, Asia and South America, they offer services to millions of people otherwise excluded from the banking sector.

In most developed countries, meanwhile, the retail branch and the ATM are the only two channels over which financial institutions have 100% control. This is important when you need to verify the authenticity of your customer. Banks do not control the make and model of their customers’ smartphones, tablets or personal computers, which are vulnerable to hacking and fraud. While ATMs are targeted by thieves, mass cybernetic attacks on them have yet to materialise.

I am often asked whether the advent of a cashless, digital economy heralds the end of the ATM. My response is that while the world might do away with cash and call ATMs something else, the revolution of automated self-service banking that began 50 years ago is here to stay.

The Conversation

Bernardo Bátiz-Lazo has received funding to research ATM and payments history from the British Academy, Fundación de Estudios Financieros (Fundef-ITAM), Charles Babbage Institute and the Hagley Museum and Archives. He is also active in the ATM Industry Association, consults with KAL ATM Software and is a regular contributor to

Welsh schools: an approach to bilingualism that can help overcome division

Author: Peredur Webb-Davies, Senior Lecturer in Welsh Linguistics, Bangor University

Research has shown just how beneficial education in Welsh can be. National Assembly for Wales/Flickr, CC BY-SA

Being a Welsh-English bilingual isn’t easy. For one thing, you hear that encouraging others to learn your language is detrimental both to their education and wellbeing. For another, to speak a minority language such as Welsh you need to constantly make the effort to be exposed to it and maintain your bilingualism.

A row has recently arisen in the Carmarthenshire village of Llangennech over plans to turn an English language school into a Welsh school. Parents who objected to the change told Guardian reporters that they have been labelled “anti-Welsh bigots”, in an article headlined “Welsh-only teaching – a political tool that harms children?”.

Needless to say, those who have gone through Welsh language schooling were not happy with the report. And for good reason too: though parents may have their own concerns, research has proven the benefits of bilingualism. The fear, heavily implied in the report, that sitting in a Welsh classroom somehow hermetically insulates a child from the English language is simply unfounded.

Schools in Wales need to deal with – and provide education for – children from two main backgrounds: those who speak Welsh at home and those who do not. The former benefit from Welsh-medium education in that they are able to broaden and improve their Welsh ability, as well as learning to read and write in it, while the latter need to be taught Welsh from the ground up. In most schools, a classroom will have a mixture of children from different backgrounds, although children will get different levels of exposure to Welsh depending on the school. Welsh is not treated as a foreign language like French or German, because children at schools in Wales will inevitably have some exposure to Welsh culturally and socially.

This means that teachers in nearly all schools in Wales have two different audiences: children who speak English as a first language, and children who speak Welsh as a first language.

But rather than this being a problem, teachers use different approaches in the classroom to deal with it. Few lessons are in just Welsh or English – the majority use a strategic bilingual approach such as code-switching (alternating between both languages as they teach), targeted translation (where specific terms or passages are translated as they are taught), or translanguaging (blending two languages together to help students learn a topic’s terminology in both).

One cannot simply divide Wales’s schools into Welsh-speaking or English-speaking. The former are bilingual schools – as well as ensuring that Welsh survives and flourishes, the aim of schools in Wales is to produce children who are bilingual when they finish their education.

It’s an obvious statement to make, but the more Welsh a child hears at home and school, the more proficient they become. It doesn’t have a negative effect on the rest of their education.

Language death

Like all languages, Welsh is evolving as time goes on, and schools are vital for not only nurturing speakers’ abilities, but for helping it stay relevant to the world. Similar to how there isn’t just one type of bilingual – speakers of two languages vary in proficiency – there also isn’t just one type of spoken Welsh.

My own research into grammar variation across age ranges found that younger generations are using certain innovative grammatical constructions much more frequently than older generations. The Welsh language that children hear from their peers is different to what they hear from their parents and grandparents. This includes grammatical features such as word order: where an older speaker might say “fy afal i” for “my apple”, a younger speaker is more likely to use “afal fi”. Similarly, research on code-switching by Welsh speakers has found that younger people are more likely than older speakers to mix Welsh and English in the same sentence. So schools and communities need to be able to expose children to Welsh of all registers for them to grow in proficiency and confidence, and learn these new social constructions.

Proficiency is a big part in shaping language attitudes – and, for a nation like Wales, where fear of language death is common, support for Welsh is vital.

Research sourcing the views of teenagers from north Wales found that more proficient speakers had more positive attitudes towards Welsh. On the other hand, participants with lower Welsh proficiency reported that they reacted negatively towards Welsh at school because they felt pressure to match their more proficient peers.

One of the biggest ironies in contemporary Wales is that it would be easier just to use – and learn in – English, but doing so would unquestionably lead to the death of Welsh – and the end of a language is no small matter.

Identifying precisely why some speakers feel that they cannot engage in Welsh-medium education, or use their Welsh outside of school, would be beneficial to fostering a bilingual Wales and would help heal the kinds of social divisions reported in Llangennech.

The cognitive, cultural and economic benefits of bilingualism have been widely demonstrated. To become bilingual in Welsh you must be exposed to Welsh and, for the majority of Welsh children, the classroom is their main source of this exposure. As such, we should see Welsh schools as central to any community’s efforts to contribute to the bilingual future that’s in Wales’s best interests.

The Conversation

Peredur Webb-Davies receives funding from the RCUK as part of a jointly-funded project with the National Science Foundation (USA).

Confidence can be a bad thing – here's why

Author: Stuart Beattie, Lecturer of Psychology, Bangor University; Tim Woodman, Professor and Head of the School of Sport, Health and Exercise Sciences, Bangor University

Have you ever felt 100% confident in your ability to complete a task, and then failed miserably? After losing in the first round at Queen’s Club for the first time since 2012, world number one tennis player, Andy Murray, hinted that “overconfidence” might have been his downfall. Reflecting on his early exit, Murray said: “Winning a tournament is great and you feel good afterwards, but you can also sometimes think that your game is in a good place and maybe become a little bit more relaxed in that week beforehand.”

There is no doubt that success breeds confidence, and in turn, the confidence gained from success positively influences performance – normally. Recently, however, this latter part of the relationship between confidence and performance has been called into question. High confidence can have its drawbacks. One need only look at the results of the recent general election: Theresa May called an early election partly based on her confidence that she would win an overall majority.

Our research at the Institute for the Psychology of Elite Performance at Bangor University has extensively examined the relationship between confidence and performance. So, what are the advantages and disadvantages of having high (or indeed low) levels of confidence for an upcoming task?

Confidence and performance

First, let’s look at the possible outcomes of having low confidence (some form of self-doubt). Low confidence is the state of thinking that we are not quite ready to face an upcoming task. In this case, one of two things happens: either we disengage from the task, or we invest extra effort into preparing for it. In one of our studies participants were required to skip with a rope continuously for one minute. Participants were then told that they had to repeat the task but using a more difficult rope to skip with (in fact it was the same type of rope). Results revealed that confidence decreased but performance improved. In this case, self-doubt can be quite beneficial.

Now let’s consider the role of overconfidence. A high level of confidence is usually helpful for performing tasks because it can lead you to strive for difficult goals. But high confidence can also be detrimental when it causes you to lower the amount of effort you give towards these goals. Overconfidence often makes people no longer feel the need to invest all of their effort – think of the confident student who studies less for an upcoming exam.

‘There’s no way I’ll miss from here.’ Jacob Lund/

Interestingly, some of our research findings show that when people are faced with immediate feedback after a golf putting task (knowing exactly how well you have just performed), confidence expectations (number of putts they thought they could make next) far exceeded actual obtained performance levels by as much as 46%. When confidence is miscalibrated (believing you are better than you really are), it will have a negative effect on subsequent task performance.

This overconfidence in our ability to perform a task seems to be a subconscious process, and it looks like it is here to stay. Fortunately, in the long term the pros of being overconfident (reaching for the stars) seem to far outweigh the cons (task failure), because if at first you do not succeed you can always try again. But miscalibrated confidence is more likely to occur when vital information about your previous performance is either ignored or not available. When this happens, people tend to overestimate rather than underestimate their abilities.

So, Andy Murray, this Queen’s setback is a great wake-up call – just in time for Wimbledon.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond the academic appointment above.

How operational deployment affects soldiers' children

Author: Leanne K Simpson, PhD Candidate, School of Psychology | Institute for the Psychology of Elite Performance, Bangor University

So many of us have seen delightful videos of friends and family welcoming their loved ones home from an operational tour of duty. The moment they are reunited is heartwarming, full of joy and tears – but, for military personnel who were deployed to Iraq and Afghanistan post 9/11, their time away came with unprecedented levels of stress for their whole family.

Military personnel faced longer and more numerous deployments, with short intervals in between. The impact of operational deployments on military personnel’s mental health is well reported. Far less is known, however, about how deployment affects military families, particularly those with young children.

Military families are often considered the “force behind the forces”, boosting soldiers’ morale and effectiveness during operational deployment. But this supportive role can come at a price.

Research has shown that deployments which last less than a total of 13 months in a three-year period will not harm military marriages. In fact, divorce rates are similar to the general population during service – although these marriages are more fragile when a partner exits the “military bubble”.

But studies have also found that children of service personnel have significantly more mental health problems – including anxiety and depression – than their civilian counterparts. Mental health issues are also particularly high among military spouses raising young children alone during deployment.

Military children

Our understanding of how younger children cope with deployment often stems from mothers’ retrospective reports, or from the children themselves when they become adolescents. Very little is known about the impact of deployment on young children who are at the greatest risk of social and emotional adjustment problems.

Unsurprisingly, the studies that have been conducted indicate that it is the currently deployed and post-deployed families that experience problematic family functioning.

A new study that I have co-authored with Dr Rachel Pye – soon to be published in Military Medicine – examines how UK military families with young children function during three of the five stages described in the “emotional cycle of deployment”, when their father is or has recently been on a tour of duty.

The emotional cycle of an extended deployment – six months or longer – consists of five distinct stages: pre-deployment, deployment, sustainment, re-deployment, and post-deployment. Each stage comes with its own emotional challenges for family members. The cycle can be painful to deal with, but those who know what to expect from each stage are more likely to maintain good mental health.

Possible negative changes in child behaviour resulting from deployment.

Strength in rules

Our research has found that all military families, regardless of deployment stage, have significantly more rules and structured routines than non-military families. Usually this would be indicative of poor family functioning – as it is associated with resistance to change – but we suggest that rigidity may actually be a strength for military families. It gives stability to an often uncertain way of life.

The findings also support previous research with similar US military families where a parent had been deployed. These families were highly resilient, with high levels of well-being, low levels of depression and high levels of positive parenting.

We used a unique way of examining the impact of deployment on young children. Each of the participants was asked to draw their family so that we could measure their perception of family functioning.

Pictures drawn by children of fathers who had returned from deployment within the last six months were quite distinctive. The father was often drawn larger and more detailed than other family members. But in the pictures drawn by children whose fathers were currently deployed, the father was often not included, or the child used less detail or colour.

Example drawings from children whose fathers were either currently deployed, about to deploy or had recently returned from combat operations.Leanne K Simpson

When the pictures were re-analysed ignoring the physical distance between the child and parents – which is often used as an indicator of emotional distance, but could for this sample represent a real physical distance – the differences in how the fathers were drawn were still evident.

What all this means is that children who had a father return from deployment within the previous six months, or a father who was currently deployed, were part of the poorest-functioning families in our study.

This may seem like a negative result but our research also indicated that the effect is temporary. The children’s drawings showed differences between the currently deployed and the post-deployed families, but military children without a deployed parent scored similarly to non-military children.

So although military families are negatively affected by deployment, the impact doesn’t last. The vast majority successfully adapt to each stage of deployment.

Like any family, military families do experience problems – but this research highlights the robust, stoic nature of military families and their incredible ability to bounce back from adversity, demonstrating that they truly are the “force behind the forces”.

The Conversation

Leanne K Simpson receives funding from the British Ministry of Defence’s Defence Science and Technology Laboratory via their PhD studentship scheme, researching mental robustness in military personnel. This article does not reflect the views of the research councils or other publicly-funded bodies.

'Facts are not truth': Hilary Mantel goes on the record about historical fiction

Author: Michael Durrant, Lecturer in Early Modern Literature, Bangor University

In a recent talk at the Hay literary festival, Cambridge historian and biographer John Guy said he had seen an increasing number of prospective students citing Hilary Mantel’s Booker Prize-winning historical novels, Wolf Hall and Bring up the Bodies, as supporting evidence for their knowledge of Tudor history.

Guy suggested that Mantel’s as yet incomplete trilogy on Thomas Cromwell’s life and career – the third instalment, The Mirror and the Light, comes out later this year – has become something of a resource for a number of budding history undergraduates, despite the fact that the novels contain historical inaccuracies (for example, casting Thomas More as a woman-hating tyrant and Anne Boleyn as a female devil, and naming the wrong sheriff of London to lead More to his execution).

The Guardian quotes Guy as saying that this “blur between fact and fiction is troubling”. In fact, Guy’s comments on the blurring of fact and fiction, and related concerns of authenticity, do read as a worrying prognosis. In the age of Trump and fake news, it seems particularly important that we call bullshit on so-called “alternative facts” and place an unquestionable fix on fiction.

Yet historical fiction, in all its varieties, can and frequently does raise vital questions about how we write, and conceptualise, historical processes. Indeed, when writers of historical fiction make stuff up about the past, they sometimes do so in an effort to sharpen, rather than dull, our capacities to separate fact from fiction.

‘There are no endings’

In the first of five Reith Lectures to be aired on BBC Radio 4, Mantel similarly argues that in death “we enter into fiction” and the lives of the dead are given shape and meaning by the living – whether that be the historian or the historical novelist. As the narrator of Bring up the Bodies puts it: “There are no endings.” Endings are, instead, “all beginnings”, the foundation of interpretative acts.

In Mantel’s view, the past is not something we passively consume, either, but that which we actively “create” in each act of remembrance. That’s not to say, of course, that Mantel is arguing that there are no historical “facts” or that the past didn’t happen. Rather, she reminds us that the evidence we use to give narrative shape to the past is “always partial”, and often “incomplete”. “Facts are not truth”, Mantel argues, but “the record of what’s left on the record.” It is up to the living to interpret, or, indeed, misinterpret, those accounts.

Wolf Hall won the Booker Prize in 2009.

In this respect the writer of historical fiction is not working in direct opposition to the professional historian: both must think creatively about what remains, deploying – especially when faced with gaps and silences in the archive – “selection, elision, artful arrangement”, literary manoeuvres more closely associated with novelist Philippa Gregory than with Guy the historian. However, exceptional examples from both fields should, claims Mantel, be “self-questioning” and always willing to undermine their own claims to authenticity.

Richard’s teeth

Mantel’s own theorising of history writing shares much with that other great Tudor storyteller: William Shakespeare.

While Shakespeare’s Richard III (1592), can be read as a towering achievement in historical propaganda – casting Richard, the last of the Plantagenets, as an evil usurper, and Richmond, first Tudor king and Elizabeth I’s grandfather, as prophetic saviour – the play invites serious speculation about the idiosyncratic nature of historical truth.

Take this exchange in Act II Scene IV of the play, which comes just before the doomed young princes are led to the tower. Here, the younger of the two, Richard, duke of York, asks his grandmother, the duchess of York, about stories he’s heard about his uncle’s birth:

York: Marry, they say my uncle grew so fast
That he could gnaw a crust at two hours old …
Duchess of York: I pray thee, pretty York, who told thee this?
York: Grandam, his nurse.
Duchess of York: His nurse? Why, she was dead ere thou wast born.
York: If ’twere not she, I cannot tell who told me.

Fresh in the knowledge that his uncle’s nurse died before he was born, the boy has no idea who told him the story of his uncle’s gnashing baby teeth. Has he misremembered his source, blurring the lines between fact and fiction? Was the boy’s uncle born a monster, or is that a convenient fiction his enemies might wish to tell themselves? And why on earth would Shakespeare bother to include this digression?

Bring up the Bodies won the Booker Prize in 2012.

In all other respects, Richard III invites straightforward historical divisions between good (the Tudors) and evil (the Plantagenet dynasty). But here, subversive doubts creep in about the provenance of the stories we tell about real historical people, with the “historical fact” briefly revealed as a messy, fallible concept, always on the edge of make-believe.


Richard III reminds us that historical facts can be fictionalised, but also that the fictional can just as easily turn into fact. Mantel’s Tudor cycle has been haunted by similar anxieties. In the often terrifying world of Henry VIII’s court, her novels show how paranoia breeds rumour, how rumour bleeds into and shapes fact and, as a result, “how difficult it is to get at the truth”. History isn’t just a different country for Mantel, it’s something intimately tied to the fictions we cling to.

And indeed in Wolf Hall that blurred relationship between fact and fiction, history and myth, is often front and centre. In Wolf Hall the past is somewhere above, between, and below the official record. History is not to be found in “coronations, the conclaves of cardinals, the pomp and processions.” Instead it’s in “a woman’s sigh”, or the smell she “leaves on the air”, a “hand pulling close the bed curtain”; all those things that are crucially absent from the archive.

Brought to life: Thomas Cromwell. Hans Holbein via the Frick Collection.

The fact of history’s ephemerality opens a “gap” for the fictional, into which we “pour [our] fears, fantasies, desires”. As Mantel has asked elsewhere: “Is there a firm divide between myth and history, fiction and fact: or do we move back and forth on a line between, our position indeterminate and always shifting?”

For the Canadian novelist, Guy Gavriel Kay, fantasy is a necessary precondition of all forms of historical writing: “When we work with distant history, to a very great degree, we are all guessing.”

Guy Gavriel Kay’s The Lions of Al-Rassan.

This is why Kay is at liberty to employ the conventions of fantasy to deal with the past, transposing real historical events, peoples, and places – medieval Spain and Roderigo Diaz (El Cid) in The Lions of Al-Rassan (1995), for example, or the Viking invasions of Britain in The Last Light of the Sun (2004) – into the realm of the fantastical.

Kay researches (he provides bibliographies in all his books) and then unravels history and historical evidence, putting a “quarter turn” on the assumed facts: renaming historical figures, reversing and collapsing the order of known events, substituting invented religions for real ones, introducing magic into the history of Renaissance Europe, or China. He has described the result of this process as “near-history”: alternative pasts that are at once radically strange and weirdly familiar.

Like Mantel’s, Kay’s (near-)historical fictions can be read less as an effort to evade the blur between fact and fiction than as an honest attempt to point towards that blur as a condition of history itself. After all, history is debatable and often impossible to verify. It’s a reminder, perhaps, that we sometimes need the tropes of fiction to smooth over those complexities, or render them legible, truthful, in the contemporary moment. We need metaphors, and similes, so that the dead can speak and act, live and die.

The Conversation

Michael Durrant does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.

Happy 100th birthday, Mr President: how JFK's image and legacy have endured

Author: Gregory Frame, Lecturer in Film Studies, Bangor University

JFK remains among the most charismatic presidents in US history. Florida Memory, State Library of Florida

John F Kennedy was born 100 years ago on May 29, 1917. While the achievements of his presidency and the content of his character have been subjects of contestation among historians and political commentators since the 1970s, there is little question regarding the enduring power of his image. As the youngest man to win election to the presidency, entering the White House with a beautiful wife and young children in tow, he projected the promise of a new era in American politics and society.

In Norman Mailer’s sprawling, seminal essay about Kennedy, published in Esquire in November 1960, Kennedy was the embodiment of what America wanted to be: young, idealistic, affluent and cosmopolitan. When America was faced with the choice between Kennedy and Richard Nixon in the 1960 presidential election, Mailer posed the question: “Would the nation be brave enough to enlist the romantic dream of itself, would it vote for the image in the mirror of its unconscious” – or would it opt for “the stability of the mediocre”?

Kennedy knew the importance of his image, which is why he placed so much emphasis on his performances in the televised debates. His success in this arena arguably tipped the very close election in his favour. According to journalist Theodore White, television transmogrified Nixon into a “glowering”, “heavy” figure; by contrast, Kennedy appeared glamorous, sophisticated – almost beautiful.

Kennedy and Nixon TV debate, Associated Press, Creative Commons. Wikimedia Commons

Master of the medium

Carrying this success into his presidency, Kennedy used television to communicate with the people to great effect through broadcast press conferences and interviews. As demonstrated by the miniseries Kennedy (1983), where Kennedy was played by perennial screen politician Martin Sheen, JFK’s presidency can be reduced to a series of televised moments: his oft-quoted inaugural address (“Ask not what your country can do for you…”); his tours of France and West Germany (“Ich bin ein Berliner”); and his calm, assured broadcasts to the nation during the civil rights demonstrations and the Cuban Missile Crisis.

As American historian Alan Brinkley wrote in 1998: “Even many of those who have become disillusioned with Kennedy over the years are still struck, when they see him on film [or on television], by how smooth, polished and spontaneously eloquent he was, how impressive a presence, how elegant a speaker.”

Most of the Kennedy miniseries is in colour. But in its reconstruction of monochrome images of Kennedy on television, it employs the medium as a means of memorialising him, infatuated with his image in its nostalgic reverie for a more stable and prosperous time.

Kennedy (1983), DVD, Carlton International Media Ltd.

Kennedy’s image on television (and in newsreel footage) is so seductive it is unsurprising Oliver Stone used it in the opening sequence to his controversial debunking of the official theories behind the president’s assassination in the film JFK (1991). As John Hellmann suggested, this footage establishes Kennedy “as the incarnation of the ideal America in the body of the beautiful man”.

The moving image played a fundamental role in establishing Kennedy as the image-ideal president. As I have argued elsewhere, other presidents have sought to establish their own images in relation to Kennedy’s, from Bill Clinton in 1992 to Barack Obama in 2008 and beyond. Kennedy is a seductive figure – not because of what he did or achieved, but because he cultivated the notion that he reflected the best the United States could be if it dared to dream.

Towards the conclusion of Oliver Stone’s Nixon, the eponymous president, played by Anthony Hopkins, stumbles drunkenly around the White House on the verge of resignation. He looks up to the portrait of Kennedy and says, rather forlornly: “When they [the people] look at you, they see what they want to be. When they look at me, they see what they are.”

Stone is here acknowledging Nixon’s frail humanity as the “ego” to Kennedy’s “ego-ideal”. Where Nixon is deficient and ordinary, Kennedy’s image retains the illusion of perfection in the collective memory.

Nixon (1995), Buena Vista Pictures Ltd. Film International

Politics as reality TV

The 100th anniversary of Kennedy’s birth allows us to reflect upon this legacy. If Kennedy was the superhero and Nixon the flawed human, then Donald Trump is a compendium of some of the worst qualities a politician can have: impulsive, arrogant, narcissistic. In a chaotic, ephemeral and often trivial media environment, Trump, a man with an insatiable appetite for the spotlight and no discernible ideological convictions, has thrived. He believes – and he has not been disabused of this notion – that he can perform the presidency as he performed on reality television in The Apprentice, most recently firing the director of the FBI on television.

We may bemoan the idea that politics has become a television show, but it has. Is that Kennedy’s fault? Yes and no. His polished performances on television hid many questionable tactics and character flaws beneath the surface, but it is often said that we get the politicians we deserve, and in allowing politics to become messily intertwined with the discourses of celebrity and, subsequently, the values of reality television, human beings fostered the conditions that created Kennedy and Trump.

If Kennedy was alive today would he be horrified by what politics has become? No, he’d be on Snapchat.

The Conversation

Gregory Frame does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.

Teaching students to survive a zombie apocalypse with psychology

Author: John A Parkinson, Professor in Behavioural Neuroscience, Bangor University and Rebecca Sharp, Senior Lecturer in Psychology, Bangor University


Playing games is ubiquitous across all cultures and time periods – mainly because most people enjoy playing them.

Games involve rules, points and systems, as well as a theme or storyline, and can be massively fun and engaging. And there is an increasing body of research showing that “gamification” – where other activities are designed to be like a game – can be successful in encouraging positive changes in behaviour.

Gamification has previously been used to teach skills to nurses, as well as in wider health settings – such as with the use of the app Zombies, Run!.

Broadly speaking, games work effectively because they can make the world more fun to work in. They can also help to achieve “optimal functioning” – which basically means doing the best you can do.

This can be seen in Jane McGonigal’s game and app Superbetter, which helps people live better lives by living more “gamefully”. It does this by helping users adopt new habits, develop a talent, learn or improve a skill, strengthen a relationship, make a physical or athletic breakthrough, complete a meaningful project, or pursue a lifelong dream.

Ground zero

This is also exactly what we’ve done at Bangor University. Here, students on the undergraduate course in behavioural psychology had one of their modules fully gamified. And it started when they received this message, after they enrolled on the course:

Notice to all civilians: this module will run a little differently. The risk of infection is high, please report to the safe quarantine zone in Pontio Base Five at 1200 hours on Friday 30 September. Stay safe, stay alert, and avoid the Infected.

Curiosity piqued, the class arrived at their first lecture of the semester to be greeted by “military personnel” who demanded they be scanned for infection prior to entry.

They were given a brown envelope containing “top secret” documents about their mission fighting the infection. The documents explained the game, and that the module had been gamified to enhance their learning.

What commenced next was the immersion. In addition to themed lectures and materials, the presence of actors and a storyline that was influenced by choices made by the class, students were given weekly “missions” by key characters in the game.

These online quiz-based missions prompted students to study the module materials between lectures to earn points. Points gained allowed students to progress through levels – from “civilian” to “resurrection prevention leader”. Points could also be exchanged for powerful incentives, such as being able to choose the topic of their next assignment, or the topic of a future lecture.

A life gamified

Part of our thinking behind wanting to teach in this way is because although students enrol at university, they don’t always perform optimally – instead intentions are often derailed by distractions.

At a psychological level, there are multiple competing signals trying to drive behaviour – but only one can win. This discordance between goals and actual behaviour is called the “intention–action gap”, and gamification has the potential to close it.

This is because successful learning requires a student to set goals and then achieve them over and over again. Games use techniques, such as clear rules and rewards, to enhance motivation and promote goal-directed behaviour. And because education is about achieving specific learning goals, using games to clarify those goals and promote engagement can provide clear guidance on direction and action – which can make learners less fearful of failure. In this way, gamification can result in students achieving better outcomes by optimising learning.

Positive reaction

The application of gamification to a module on behavioural psychology was a novel (albeit ironic) approach to demonstrate to students the very concepts they were learning.

When compared to the previous year’s performance and to a matched same-year non-gamified module, the gamification had a large impact on attendance – which was higher than both the non-gamified module, and the previous year’s group.

Turns out zombies can teach students a thing or two. Shutterstock

Many of the class also engaged with materials between lectures – such as the online “missions” designed to help them learn and review the content.

When asked their thoughts at the end of the semester, many students said they enjoyed the gamification and liked the immersive experience. Some even asked for more zombies.

Surviving education

Gamification is clearly well-suited to teaching behavioural psychology as it demonstrates directly some of the concepts students are learning. But it could also easily be adapted and applied to other subjects.

The psychologist Burrhus Frederic Skinner said that:

Education is what survives when what has been learnt has been forgotten.

So while the students may well forget the precise definition of “positive reinforcers” in years to come, they will know implicitly what they are and how to apply them, thanks to the game. In other words, they have learned how to learn. And hopefully, their gamified experience will help them survive future “apocalyptic” challenges.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond the academic appointment above.

Can environmental documentaries make waves?

Author: Michela Cortese, Associate Lecturer, Bangor University

Trump’s first 100 days in office were, among other things, marked by a climate march in Washington DC that attracted tens of thousands of demonstrators. No surprises there. Since the beginning of his mandate in January, Trump has signed orders to roll back the number of federally protected waterways, restart the construction of contentious oil pipelines, and cut the budget of the Environmental Protection Agency (EPA). Among the various orders and memoranda, the one signed to overhaul Obama’s Clean Power Plan is probably the most remarkable, along with the promotion of coal extraction all over the US.

A good time, then, to follow up Al Gore’s iconic documentary An Inconvenient Truth, which was released 11 years ago in a similarly discouraging political climate. At that time George W Bush, who is remembered for undermining climate science and for strongly supporting oil interests, was in power. In his own first 100 days at the White House, Bush backed down from the promise of regulating carbon dioxide from coal power plants and announced that the US would not implement the Kyoto climate change treaty.

This summer sees the release of An Inconvenient Sequel: Truth to Power. More than ten years have passed and the documentary looks likely to be released in a very similar context. With Republicans in power, war in the Middle East, and regulations on the environment being reversed, this inconvenient sequel is a reminder that the climate of the conversation about global warming has not changed much in the interim.

But the strategies needed to grab the attention of the public certainly have. In the fast-paced, ever-evolving media landscape of the 21st century, knowing how to engage the public on environmental matters is no easy thing. The tendency of the environmental films that have mushroomed since 2000 has been to use a rhetoric of fear. But how effective has this been? Certainly, environmental activism has grown, particularly with the help of social media, but the role of these productions is unclear, and there is a lack of research on audience response to these films.

Personal planet

The selling point of An Inconvenient Truth was its personal approach. Although it had a lecture-style tone, this was a documentary that was all about Gore. He told his story entwined with that of the planet. It was extraordinary that people paid to go to the cinema to watch a politician giving a lecture. This was a big shift in cinema. Arguably, this format was enlivened by the way in which Gore opened up about his personal history.

The documentary opened with the politician’s famous quote: “I am Al Gore, and I used to be the next president of the United States.” In November 2000, Gore had lost the presidential election to George W Bush by an extraordinarily narrow margin. The choice to run with a very personal rhetoric was certainly strategic – six years on from that unfortunate election was the right time for the former vice president to open up. Gore told the story of global warming through his personal life, featuring his career disappointments and family tragedies, and constantly referring to the scientists he interviewed as “my friend”.

This was a very innovative way of approaching the matter of climate change. Here was a politician who decided to offer an insight into his private life for a greater cause: to engage the public on a vital scientific subject. The documentary’s originality helped An Inconvenient Truth win two Oscars at the 2007 Academy Awards.

Today, An Inconvenient Truth is seen as the prototype of activist film-making. Founder of the Climate Reality Project in 2006 and co-recipient of the 2007 Nobel Peace Prize (with the IPCC), Gore and his movement soon became the core of environmental activism, gathering several environmental groups that, despite their differences, today march together for the greatest challenge of our time.

New hope?

Eleven years on, the revolution under Gore’s lead that many expected has yet to be fulfilled. The next decade was beset with disappointments. More recently, the 2015 Paris Agreement has marked a new era for climate action, proving that both developed and developing countries are now ready to work together to reduce carbon emissions. But today there is a new protagonist – or antagonist – in the picture. The trailer for An Inconvenient Sequel shows Gore watching Trump shouting his doubts about global warming to the crowd and announcing his plans to strip back the EPA’s budget.

It will be interesting to see how the tone of the film departs from that of the original. The “personal reveal” tactic won’t work so well the second time round, and a change in the narrative is certainly evident from the trailer. The graphs of the previous documentary are replaced with more evocative images of extreme weather and disasters. While statistics about carbon dioxide emissions and sea-level rises were predominantly used to trigger emotions in the audience, this time round Gore can show the results of his predictions. One example is the footage of a flooded World Trade Centre Memorial – a possibility Gore discussed in the 2006 documentary, which many criticised at the time as a “fictional” element rather than “evidence” of climate impact.

Unfortunately, I am not sure how much this shift will affect the public or whether the sequel will be the manifesto of that revolution that Gore and his followers have been waiting for. The role that the media have played in the communication of climate change issues has changed and developed alongside the evolution of the medium itself and people’s perception of the environment. The last decade has seen an explosion of sensational images and audiences are fatigued by this use of fear.

Many look for media that includes “positive” messages rather than the traditional onslaught of facts and images triggering negative emotions. It has never been more difficult for environmental communicators to please viewers and readers in the midst of a never-ending flow of information available to them.

The Conversation

Michela Cortese received funding from research councils in the past.

Is talking to yourself a sign of mental illness? An expert delivers her verdict

Author: Paloma Mari-Beffa, Senior Lecturer in Neuropsychology and Cognitive Psychology, Bangor University

We have inner conversations all the time, so what difference does it make if we have them out loud? G Allen Penton/Shutterstock

Being caught talking to yourself, especially if using your own name in the conversation, is beyond embarrassing. And it’s no wonder – it makes you look like you are hallucinating. Clearly, this is because the entire purpose of talking aloud is to communicate with others. But given that so many of us do talk to ourselves, could it be normal after all – or perhaps even healthy?

We actually talk to ourselves silently all the time. I don’t just mean the odd “where are my keys?” comment – we actually often engage in deep, transcendental conversations at 3am with nobody else but our own thoughts to answer back. This inner talk is very healthy indeed, having a special role in keeping our minds fit. It helps us organise our thoughts, plan actions, consolidate memory and modulate emotions. In other words, it helps us control ourselves.

Talking out loud can be an extension of this silent inner talk, caused when a certain motor command is triggered involuntarily. The Swiss psychologist Jean Piaget observed that toddlers begin to control their actions as soon as they start developing language. When approaching a hot surface, the toddler will typically say “hot, hot” out loud and move away. This kind of behaviour can continue into adulthood.

Non-human primates obviously don’t talk to themselves but have been found to control their actions by activating goals in a type of memory that is specific to the task. If the task is visual, such as matching bananas, a monkey activates a different area of the prefrontal cortex than when matching voices in an auditory task. But when humans are tested in a similar manner, they seem to activate the same areas regardless of the type of task.

Macaque matching bananas. José Reynaldo da Fonseca/Wikipedia, CC BY-SA

In a fascinating study, researchers found that our brains can operate much like those of monkeys if we just stop talking to ourselves – whether it is silently or out loud. In the experiment, the researchers asked participants to repeat meaningless sounds out loud (“blah-blah-blah”) while performing visual and sound tasks. Because we cannot say two things at the same time, muttering these sounds made participants unable to tell themselves what to do in each task. Under these circumstances, humans behaved like monkeys do, activating separate visual and sound areas of the brain for each task.

This study elegantly showed that talking to ourselves is probably not the only way to control our behaviour, but it is the one that we prefer and use by default. But this doesn’t mean that we can always control what we say. Indeed, there are many situations in which our inner talk can become problematic. When talking to ourselves at 3am, we typically try hard to stop thinking so we can go back to sleep. But telling yourself not to think only sends your mind wandering, activating all kinds of thoughts – including inner talk – in an almost random way.

This kind of mental activation is very difficult to control, but seems to be suppressed when we focus on something with a purpose. Reading a book, for example, should be able to suppress inner talk quite efficiently, making it a favourite activity for relaxing our minds before falling asleep.

A mind-wandering rant could be seen as mad. Dmytro Zinkevych/Shutterstock

But researchers have found that patients suffering from anxiety or depression activate these “random” thoughts even when they are trying to perform some unrelated task. Our mental health seems to depend on both our ability to activate thoughts relevant to the current task and to suppress the irrelevant ones – mental noise. Not surprisingly, several clinical techniques, such as mindfulness, aim to declutter the mind and reduce stress. When mind wandering becomes completely out of control, we enter a dreamlike state displaying incoherent and context-inappropriate talk that could be described as mental illness.

Loud vs silent chat

So your inner talk helps to organise your thoughts and flexibly adapt them to changing demands, but is there anything special about talking out loud? Why not just keep it to yourself, if there is nobody else to hear your words?

In a recent experiment in our laboratory at Bangor University, Alexander Kirkham and I demonstrated that talking out loud actually improves control over a task, above and beyond what is achieved by inner speech. We gave 28 participants a set of written instructions, and asked them to read the instructions either silently or out loud. We measured participants’ concentration and performance on the tasks, and both improved when the instructions had been read aloud.

Much of this benefit appears to come from simply hearing oneself, as auditory commands seem to be better controllers of behaviour than written ones. Our results demonstrated that, even if we talk to ourselves to gain control during challenging tasks, performance substantially improves when we do it out loud.

This can probably help explain why so many sports professionals, such as tennis players, frequently talk to themselves during competitions, often at crucial points in a game, saying things like “Come on!” to help them stay focused. Our ability to generate explicit self-instructions is actually one of the best tools we have for cognitive control, and it simply works better when said aloud.

So there you have it. Talking out loud, when the mind is not wandering, could actually be a sign of high cognitive functioning. Rather than being mentally ill, it can make you intellectually more competent. The stereotype of the mad scientist talking to themselves, lost in their own inner world, might reflect the reality of a genius who uses all the means at their disposal to increase their brain power.

The Conversation

Paloma Mari-Beffa does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.

Rhinos should be conserved in Africa, not moved to Australia

Author: Matt Hayward, Senior Lecturer in Conservation, Bangor University

A southern white rhino in South Africa. Author provided

Rhinos are one of the most iconic symbols of the African savanna: grey behemoths with armour plating and fearsome horns. And yet it is the horns that are leading to their demise. Poaching is so prolific that zoos cannot even protect them.

Some people believe rhino horns can cure several ailments; others see horns as status symbols. Given that horns are made of keratin, consuming them is about as effective as chewing your fingernails. Nonetheless, a massive increase in poaching over the past decade has led to rapid declines in some rhino species, and solutions are urgently needed.

One proposal is to take 80 rhinos from private game farms in South Africa and transport them to captive facilities in Australia, at a cost of over US$4m. Though it cannot be denied that this is a “novel” idea, I, and colleagues from around the world, have serious concerns about the project, and we have now published a paper looking into the problematic plan.

Conservation cost

The first issue is whether the cost of moving the rhinos is justified. At $4m, it is almost double the anti-poaching budget for South African National Parks ($2.2m), the managers of the estate where most of the country’s white rhinos currently reside.

The money would be better spent on anti-poaching activities in South Africa to increase local capacity. Or, from an Australian perspective, given the country’s abysmal record of mammal extinctions, it could go towards protecting indigenous species there.

In addition, there is the time cost of using the expertise of business leaders, marketeers and scientists. All could be working on conservation issues of much greater importance.

Bringing animals from the wild into captivity introduces strong selective pressure for domestication. Essentially, those animals that are too wild don’t breed and so don’t pass on their genes, while the sedate (unwild) animals do. This is exacerbated for species like rhinos where predation has shaped their evolution: they have grown big, dangerous horns to protect themselves. So captivity will likely be detrimental to the survival of any captive bred offspring should they be returned to the wild.

Poaching is still a huge problem, despite a resurgence in the southern white rhino population. Author provided

It is not yet known which rhino species will be the focus of the Australian project, but it will probably be the southern white rhino subspecies – the rhino least likely to go extinct. The global population estimate for southern white rhinos (over 20,000) is stable, despite high poaching levels.

This number stands in stark contrast to the number of northern white (three), black (4,880 and increasing), great Indian (2,575), Sumatran (275) and Javan (up to 66) rhinos. These latter three species are clearly of much greater conservation concern than southern white rhinos.

There are also well over 800 southern white rhinos currently held in zoos around the world.

With appropriate management, the southern white rhino population is unlikely to lose genetic diversity, so adding 80 more individuals to zoos is utterly unnecessary. By contrast, across the world there are 39 other large mammalian herbivore species threatened with extinction that are far more in need of conservation funding than the five rhino species.


Rhinos inhabit places occupied by other less high profile threatened species – like African wild dogs and pangolins – which do not benefit from the same level of conservation funding. Conserving wildlife in their natural habitat has many benefits for the creatures and plants they coexist with. Rhinos are keystone species, creating grazing lawns that provide habitats for other species and ultimately affect fire regimes (fire frequency and burn patterns). They are also habitats themselves for a range of species-specific parasites. Abandoning efforts to conserve rhinos in their environment means these ecosystem services will no longer be provided.

Finally, taking biodiversity assets (rhinos) from Africa and transporting them to foreign countries extends the history of exploitation of Africa’s resources. Although well-meaning, the safe-keeping of rhinos by Western countries is as disempowering and patronising as the historical appropriation of cultural artefacts by colonial powers.

Conservation projects are ultimately more successful when led locally. With its strong social foundation, community-based conservation has had a significant impact on rhino protection and population recovery in Africa. In fact, local capacity and institutions are at the centre of one of the world’s most successful conservation success stories – the southern white rhino was brought back from the brink, growing from a few hundred in South Africa at the turn of the last century to over 20,000 throughout southern Africa today.

In our opinion, this project is neo-colonial conservation that diverts money and public attention away from the fundamental issues necessary to conserve rhinos. There is no indication of what will happen to the rhinos transported to Australia once the poaching crisis is averted, and there seems to be nothing as robust as China’s “panda diplomacy”, under which pandas provided to foreign zoos – and any offspring they produce – remain the property of China for the duration of the arrangement, alongside a substantial annual payment.

With increased support, community-based rhino conservation initiatives can continue to lead the way. It is money that is missing, not the will to conserve them nor the expertise necessary to do so. Using the funding proposed for the Australian Rhino Project to support locally-led conservation or to educate people to reduce consumer demand for rhino horn in Asia seem far more acceptable options.

The Conversation

The research that this article refers to was done in conjunction with William J. Ripple, Graham I. H. Kerley, Marietjie Landman, Roan D. Plotz and Stephen T. Garnett.