Research stories

On our News pages

Our Research News pages contain an abundance of research-related articles, covering recent research output and topical issues.

On TheConversation.com

Our researchers publish on a wide range of subjects and across a variety of news platforms. The articles below are a few of those published on TheConversation.com.

Emotions: how humans regulate them and why some people can't

Author: Leanne Rowlands, PhD Researcher in Neuropsychology, Bangor University

Gearstd/Shutterstock

Take the following scenario. You are nearing the end of a busy day at work, when a comment from your boss diminishes what’s left of your dwindling patience. You turn, red-faced, towards the source of your indignation. It is then that you stop, reflect, and choose not to voice your displeasure. After all, the shift is nearly over.

This may not be the most exciting plot, but it shows how we as humans can regulate our emotions.

Our regulation of emotions is not limited to stopping an outburst of anger – it means that we can manage the emotions we feel as well as how and when they are experienced and expressed. It can enable us to be positive in the face of difficult situations, or fake joy at opening a terrible birthday present. It can stop grief from crushing us and fear from stopping us in our tracks.

Because it allows us to enjoy positive emotions more and experience negative emotions less, regulation of emotions is incredibly important for our well-being. Conversely, emotional dysregulation is associated with mental health conditions and psychopathology. For example, a breakdown in emotional regulation strategies is thought to play a role in conditions such as depression, anxiety, substance misuse and personality disorders.

How to manage your emotions

By their very nature, emotions make us feel – but they also make us act. This is due to changes in our autonomic nervous system and associated hormones in the endocrine system that anticipate and support emotion-related behaviours. For example, adrenaline is released in a fearful situation to help us run away from danger.

Changing moods. Oksana Mizina/Shutterstock

Before an emotion arises there is first a situation, which can be external (such as a spider creeping nearer) or internal (thinking that you are not good enough). This is then attended to – we focus on the situation – before we appraise it. Put simply, the situation is evaluated in terms of the meaning it holds for us. This meaning then gives rise to an emotional response.

Psychologist and researcher James Gross has described a set of five strategies that we all use to regulate our emotions and that may be used at different points in the emotion generation process:

1. Situation selection

This involves looking to the future and taking steps to make it more likely that we end up in situations that give rise to desirable emotions, or less likely that we end up in situations that lead to undesirable emotions. For example, taking a longer but quieter route home from work to avoid road rage.

2. Situation modification

This strategy might be implemented when we are already in a situation, and refers to steps that might be taken to change or improve the situation’s emotional impact, such as agreeing to disagree when a conversation gets heated.

3. Attentional deployment

Ever distracted yourself in order to face a fear? This is “attentional deployment” and can be used to direct or focus attention on different aspects of a situation, or something else entirely. Someone scared of needles thinking of happy memories during a blood test, for example.

4. Cognitive change

This is about changing how we appraise something to change how we feel about it. One particular form of cognitive change is reappraisal, which involves thinking differently or thinking about the positive sides – such as reappraising the loss of a job as an exciting opportunity to try new things.

5. Response modulation

Response modulation happens late in the emotion generation process, and involves changing how we react or express an emotion, to decrease or increase its emotional impact – hiding anger at a colleague, for example.

How do our brains do it?

The mechanisms that underlie these strategies are distinct and exceptionally complex, involving psychological, cognitive and biological processes. The cognitive control of emotion involves an interaction between the brain’s ancient and subcortical emotion systems (such as the periaqueductal grey, hypothalamus and the amygdala), and the cognitive control systems of the prefrontal and cingulate cortex.

Take reappraisal, which is a type of cognitive change strategy. When we reappraise, cognitive control capacities that are supported by areas in the prefrontal cortex allow us to manage our feelings by changing the meaning of the situation. This leads to a decrease of activity in the subcortical emotion systems that lie deep within the brain. Not only this, but reappraisal also changes our physiology, by decreasing our heart rate and sweat response, and improves how we experience emotions. This goes to show that looking on the bright side really can make us feel better – but not everyone is able to do this.

Those with emotional disorders, such as depression, remain in difficult emotional states for prolonged durations and find it difficult to sustain positive feelings. It has been suggested that depressed individuals show abnormal activation patterns in the same cognitive control areas of the prefrontal cortex – and that the more depressed they are the less able they are to use reappraisal to regulate negative emotions.

However, though some may find reappraisal difficult, situation selection might be just a little easier. Whether it’s being in nature, talking to friends and family, lifting weights, cuddling your dog, or skydiving – doing the things that make you smile can help you see the positives in life.

The Conversation

Leanne Rowlands receives funding from the EU Social Fund through the Welsh Government.

We tracked coral feeding habits from space to find out which reefs could be more resilient

Authors: Michael D. Fox, Postdoctoral Scholar, University of California San Diego; Andrew Frederick Johnson, Researcher at Scripps Institution of Oceanography & Director of MarFishEco, University of California San Diego; Gareth J. Williams, Lecturer, Marine Biology, Bangor University

A healthy coral reef on Millennium Atoll, Southern Line Islands. Brian Zgliczynski, Author provided

Coral reefs are an invaluable source of food, economic revenue, and protection for millions of people worldwide. The three-dimensional structures built by corals also provide nourishment and shelter for over a quarter of all marine organisms.

But coral populations are threatened by a multitude of local and global stressors. Rising ocean temperatures are disrupting the 210m-year-old symbiosis between corals and microscopic algae. When temperatures rise, the coral animal becomes stressed and expels its algal partners, in a process known as coral bleaching.

These symbiotic algae are a critical food resource for corals, and without them corals lose their primary source of nutrition. Fortunately, corals are mixotrophs and not solely dependent on nutrition from their algal partners. Despite their sedentary appearance, corals are voracious predators capable of capturing a wide variety of prey using their tentacles and mucous nets.

The individual polyps of Pocillopora meandrina, which feed by capturing prey with tentacles. Michael D. Fox, Author provided

Knowing how much corals eat via predation is essential for understanding how they can persist in a warming ocean. Numerous laboratory studies have shown that if corals feed, they are more capable of surviving the stress associated with warming temperatures and decreasing pH levels. Feeding can also increase the reproductive capacity of corals, which is key to repopulating reefs that have suffered high levels of coral mortality. Yet, almost 90 years since one of the first published accounts of coral predation, we still do not know much about how coral feeding varies as a function of food availability in the wild.

However, our new study sheds light on this longstanding question. We combined field sampling with global satellite measurements and published data to reveal that corals respond to how much food is on their reef. This indicates that corals living in more productive (food-rich) waters consume more food, which changes our understanding of how corals survive and may aid in predictions of coral recovery in the face of climate change.

Unravelling coral diets from space

Studying variation in the diets of corals over large areas is no easy task. To determine if corals will change their feeding behaviour as a function of food availability, we sailed to the remote Southern Line Islands of Kiribati. These islands are ideal for studying variations in coral diets because they lack local direct human impacts (fishing and pollution) and are situated across a natural gradient of food availability fuelled by equatorial upwelling. This process delivers colder, nutrient- and plankton-rich waters to the surface ocean along the equator in the central Pacific.

We examined coral diets across five islands using stable isotope analysis. Stable isotopes are atoms of the same element (in this case carbon) that differ in mass due to the number of neutrons in their nucleus. This subtle mass difference allows scientists to determine what an organism is eating based on how similar the isotopic composition of the consumer (coral) is to its food (zooplankton).
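To make that logic concrete, here is a minimal sketch of the standard two-source mixing calculation that underlies this kind of analysis (a hedged illustration, not the study's actual model; the per-mil values below are hypothetical). Given the carbon isotope values of the two food sources (symbiotic algae and zooplankton) and of the coral tissue, it estimates the fraction of the diet derived from captured prey:

```python
# Illustrative two-endmember stable-isotope mixing model (not the study's
# actual analysis). All d13C values below are hypothetical per-mil numbers.

def zooplankton_fraction(d13c_coral, d13c_zooplankton, d13c_symbiont):
    """Linear mixing: the closer the coral tissue's isotope value sits to a
    food source, the larger that source's contribution to the diet."""
    return (d13c_coral - d13c_symbiont) / (d13c_zooplankton - d13c_symbiont)

# Example: coral tissue at -17.0 sits 40% of the way from the symbiont
# end-member (-15.0) to the zooplankton end-member (-20.0).
frac = zooplankton_fraction(d13c_coral=-17.0,
                            d13c_zooplankton=-20.0,
                            d13c_symbiont=-15.0)
print(f"Estimated dietary fraction from zooplankton: {frac:.0%}")
```

A coral whose tissue value sits closer to the zooplankton end-member would thus be inferred to rely more heavily on captured prey.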

The isotopic data showed that the corals on the more food-rich islands were capturing and consuming more planktonic prey than corals on islands with lower food availability. These findings suggested that the abundance of food might be important for corals in other locations, which inspired our team to evaluate if coral feeding habits can be used to track global food availability.

Lead author of the study Michael Fox collects coral tissue samples in the remote central Pacific Ocean. Brian Zgliczynski, Author provided

Satellites can reliably measure the amount of phytoplankton around tropical islands – a useful proxy for estimating food abundance for corals. So, using satellite data from 2004-2015, taken from 16 locations spanning the Pacific and Indian Oceans to the Red Sea and the Caribbean, we compared published isotopic values from corals at each location with these measurements of food availability.

What we found was a striking relationship between the chlorophyll content of the water and the feeding habits of corals. Essentially, corals in more productive regions consume more planktonic food.

Can well-fed corals survive the heat?

The seemingly simple observation that corals eat more where there is more food has important implications for our understanding of how coral reefs function. It underscores the importance of the physical environment around reefs and suggests that food availability may be an overlooked driver of coral recovery potential.

The capacity for corals to feed before or during thermal stress can improve their chances of survival. These findings lay the foundation to begin investigating the possibility that reefs in more naturally food-rich waters have a greater capacity to resist or recover from disturbance events such as thermally induced bleaching.

Reefs do show variations in how they respond to thermal stress events – some reefs bleach less than others – but the exact mechanisms behind these differences remain largely unclear. The relationship between coral feeding and ocean chlorophyll established in this study offers a roadmap to locating potentially more resilient coral reefs around the world. Such knowledge does not replace the need to urgently reduce greenhouse gas emissions and protect coral reefs from the increasing frequency of ocean warming events, however. Instead it should be used to guide strategic management actions in the inevitable interim.

The Conversation

Andrew Frederick Johnson receives funding from the National Science Foundation (USA)

Gareth J. Williams receives funding from The Bertarelli Foundation.

Michael D. Fox does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Are electric fences really the best way to solve human-elephant land conflicts?

Author: Liudmila Osipova, PhD Researcher, Bangor University

An elephant grazing in Kimana Conservancy, Kenya. Author provided

Conflict between humans and elephants has reached a crisis point in Kenya. As elephants have begun to regularly raid farms in search of food, local people have increasingly attacked and killed them in retaliation. Between 2013 and 2016, 1,700 crop-raiding incidents, 40 human deaths and 300 injuries caused by wildlife were reported in the Kajiado district alone.

The problem has come as vast parts of Kenya that are home to elephants have been subject to intensive agricultural development in the past few decades. The Maasai people who tend to the land are switching from their traditional nomadic lifestyle to seek a more permanent livelihood. But these lands have also been used by elephants and other wildlife for many generations, providing them with food, water and space for migration.

Tensions are running high, but a controversial solution is being put in place: electrified fencing.

A young Maasai woman, from a small household in Kenya. Author provided

In the 2016 Netflix documentary The Ivory Game, filmed in Kenya’s Kajiado district, the following exchange was caught on camera between a group of Maasai people and Craig Millar, head of security at the non-profit conservation foundation Big Life:

Farmer 1: You see this maize? It is for my children, not for elephants … we don’t want to see elephants on our farms.

Millar: And what do you think is the solution?

Farmer 1: The solution is to kill them!

Farmer 2: A fence. Electrification.

Millar: I agree, but … it is expensive. We will ask countries in Europe for help … everybody will have to contribute something. You will have to protect the fence once it is erected.

Farmer 1: We’ll take care of it. If you are lying about the fence, the elephants will be in danger. The elephants will die.

When the documentary was filmed, an electrified fence was believed to be the only solution to the conflict. So, with support from international investors, work in the borderlands between Kenya and Tanzania was started in 2016 and the foundation has reported that the 50km of fence built to date has already reduced elephant crop raids by more than 90%.

Unfortunately, this is not the only human-elephant conflict hotspot in the country. Kenya is experiencing rapid economic and industrial growth, and small-scale agriculture developments are spreading across Maasai lands, causing more and more problems.

Fenced in

Fencing is one of the most commonly used conservation tools in the world. And Big Life’s electrified fence is a great example of how fast and effective it can be. But fencing can have long-term consequences for animals – it can disturb wildlife migration routes, disrupt gene transfer through mating and alter population dynamics.

The possible costs to animals are unknown. South Africa is the only African country that legally requires an environmental impact assessment to be done prior to building fences. Generally speaking, there are no straightforward international policies or legal guidelines for fence planning. In most countries, fences are built in a random and uncontrolled way. But fencing can be an effective tool for conservation – in Australia, fencing is commonly used to save native mammals from introduced carnivores, while in Namibia fencing protects cattle from cheetahs and lions.

Elephants often use the same movement corridors for decades. Author provided

In our recently published paper we looked at how an electrified fence being built around crop fields in southern Kenya is affecting major elephant migration pathways. We used GPS collars on 12 elephants from the area where the fence was to be built, and tracked their movement and behaviour. All the elephants were from different families and were collared in various locations.

After two years of data collection we used the information to map where and how the elephants spent their time in the study area. We reconstructed their movement paths and built a connectivity model, highlighting the most important migration routes between large national parks.
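For readers curious about the mechanics, the sketch below shows one common way such connectivity analyses are built (a hedged illustration, not our actual model: the landscape grid, park locations and resistance values are all invented). It maps a least-cost corridor by summing travel costs from two parks across a resistance surface; cells with the lowest summed cost trace the likely movement route:

```python
# Minimal least-cost corridor sketch (illustrative only; not the study's
# model). A resistance raster encodes how costly each cell is to cross.
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import dijkstra

resistance = np.ones((50, 50))       # hypothetical landscape, uniform cost
resistance[20:30, 10:40] = 5.0       # e.g. farmland is costlier to cross

n_rows, n_cols = resistance.shape
n = n_rows * n_cols
graph = lil_matrix((n, n))

def idx(r, c):
    return r * n_cols + c

# Connect each cell to its right and lower neighbours (4-connected grid),
# weighting each step by the mean resistance of the two cells.
for r in range(n_rows):
    for c in range(n_cols):
        for dr, dc in ((0, 1), (1, 0)):
            r2, c2 = r + dr, c + dc
            if r2 < n_rows and c2 < n_cols:
                w = (resistance[r, c] + resistance[r2, c2]) / 2
                graph[idx(r, c), idx(r2, c2)] = w
                graph[idx(r2, c2), idx(r, c)] = w

park_a = idx(5, 5)                   # hypothetical national park locations
park_b = idx(45, 45)
dist = dijkstra(graph.tocsr(), indices=[park_a, park_b])
corridor = (dist[0] + dist[1]).reshape(n_rows, n_cols)
# Low values in `corridor` trace the least-cost route between the parks.
# Re-running with fence cells set to a very high resistance shows how a
# planned fence would reroute, or block, movement between the parks.
```

In practice, GPS fixes from collared animals are used to estimate the resistance values themselves, rather than asserting them as we do in this toy example.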

After validating our model, we included the fence plan and recalculated, to estimate whether the fence would change the elephants’ free movement between parks. The results showed that local managers were right: fencing did not disturb migration corridors or diminish connectivity between the national parks.

But more detailed examination gave us some food for thought. Areas with limited amounts of the resources that elephants need (wetlands, floodplains and conservancies) are predicted to be more intensively used after fencing because the elephants will no longer have access to their usual grounds – and this may lead to overgrazing and habitat destruction. In addition, fences will not stop elephants from moving – so the conflict will basically be shifted to unfenced areas.

These results raise a reasonable question: how much more land will have to be fenced to resolve human-wildlife conflicts? Besides high costs and difficulties in maintenance, the more land is fenced the less habitat remains for elephants. Long-term aerial monitoring in the Amboseli Ecosystem (a 5,700km² conservation area near the Tanzania-Kenya border) confirms that habitat loss to agriculture will become a bigger threat to elephants than illegal poaching in the near future.

There is no simple solution here. The benefits of electrified fencing are undeniable, but the lack of understanding of the long-term consequences for wildlife is worrying. We recommend that carrying out integrated impact assessments – as we did during our study – before fencing is built should become international policy.

Another approach could be using fences only as a temporary tool for mitigating critical conflicts and considering alternative management approaches – such as fencing which contains beehives, to deter elephants but not restrict their movement – to solve the problem in the long run.

The Conversation

Liudmila Osipova receives funding from the EU (FONASO programme). The research was accomplished with the support of the not-for-profit organisation the African Conservation Center.

Universities must look at local employment markets when building their graduates' skills

Author: Teresa Crew, Lecturer in Social Policy, Bangor University

Job seeking. Creatista/Shutterstock

Students are often reminded that a degree is “not enough”, and that they will also need “employability skills” – a complex combination of personal attributes, discipline-specific knowledge and generic talents – to succeed after university. They are encouraged while studying to develop skills such as problem solving, self-management and the ability to work as part of a team.

All valid attributes, yes, but this view is based on the idea that graduates are young and highly mobile. The truth is that not all graduates will want to – or be able to – leave their university town or city, especially women and graduates from low-income backgrounds.

As Brexit looms, advocacy organisation Universities UK has suggested that increased local graduate retention could ease current and potentially upcoming skills shortages in the UK. Yet the research to date shows that cities across the UK face a big challenge when it comes to attracting and retaining graduate talent. In 2016, only 58% of that year’s graduates went on to work in the area in which they took their degree.

One major hurdle to graduate retention comes down to the skills that local employers actually need from prospective staff. Just as it is not enough to have a degree, it is not enough to teach all graduates a generic skillset and hope for the best. Required skills can vary greatly from region to region, with some – like the ability to drive – proving pointless in areas with, for example, good public transport links. In north Wales, where I conducted my own research into the issue of graduate retention, the most valuable skills for a graduate to have on top of their degree are access to local networks, having their own transport and Welsh language skills.

Interview day. fizkes/Shutterstock

Staying local

Social contacts and contacts from former employment can help a graduate seeking to stay in their university town, but the close connections that come from going to school together and living in the same neighbourhoods are invaluable. When employers seek to fill vacancies, they can rely on who a candidate knows to infer the potential worker’s underlying ability.

That’s not to say “who you know” is always better than “what you know”. Not all members of a community will know the “right” people who can provide access to employment opportunities after all. And graduates from low income backgrounds often find their contacts are limited because their parents have no experience of the graduate labour market and the types of roles that they would be applying for.

This kind of social capital can be developed both as a student and a graduate. I have been working with Sociologists Outside Academia, a group within the British Sociological Association, to design an “applied sociology” curriculum. The aim of this curriculum is to equip students with the skills, knowledge and professional outlook required to improve workplaces, organisations and communities. One of our recommended assessments would see students working on a local community problem, with the opportunity to pitch a proposal to a client verbally and in writing.

After graduation from universities in Wales, there are schemes such as the Knowledge Economy Skills Scholarships (KESS 2), a project supported by European Social Funds (ESF) through the Welsh Government, led by Bangor University. KESS 2 provides opportunities for graduates to build professional networks, and for funded PhD and research masters study in collaboration with an active business or company partner.

Language skills

Another skill of particular importance to the graduates I spoke to in north Wales was the Welsh language. Over half of the population in some areas of north Wales speak Welsh. And there is concerted action by the Welsh government to double the number of Welsh speakers to one million by 2050.

On top of this, 71% of employers in Wales have stated that Welsh language skills (written and oral) were desirable for jobs in their companies. They also report a shortage of bilingually skilled staff in graduate occupations such as nursing, and in the tourism industry.

While current graduates who went to school in Wales will have had some form of Welsh language education, not all would regard themselves as speakers of the language. And even among bilinguals, proficiency in written and oral communication can vary widely. Research has suggested that while bilingualism is not the preserve of elites, disadvantaged households in Wales may believe that their form of bilingualism is inappropriate for professional environments.

Many of my interviewees felt a lack of confidence in their Welsh skills. They felt that the Welsh they spoke at home was not the same as the more formal Welsh needed for employment purposes. There may be further problems too for those graduates of Welsh universities who did not go to school in Wales, and have had no Welsh language education.

Clearly, universities need to support their graduates by not just focusing on generic employability skills, but by looking at the regional economy. By taking into account what local employers might want from graduates, institutions can start to address the financial, academic and social hurdles that modern graduates, particularly those who have reached university through a non-traditional route, have to face.

The Conversation

Teresa Crew receives funding from the Economic and Social Research Council (ESRC).

Why we should give prejudiced students a voice in the classroom

Author: Corinna Patterson, Lecturer in Sociology, Bangor University

Speaking freely. Photographee.eu/Shutterstock

In the space of a few years, Britain’s political landscape has changed. Now, generally, young people are proportionately more likely to have socially liberal and socialist views, and want to remain part of the EU. Meanwhile, older demographics proportionately voted for Brexit, and were said to be largely responsible for voting the Conservatives into office in 2017.

This polarisation was especially prevalent in university towns. But general trends do not pick up on the more complex and messy reality of perspectives and sympathies. One study of young people’s views on Brexit and the EU, for example, recently found they are actually less tolerant of immigration than is widely thought.

Up until 2016, the majority of students seemed remarkably apolitical. Many had no overt political stance, nor felt any affiliation with a formal political perspective. The last couple of years, however, have seen a distinct change in their knowledge of and engagement with current issues and debates – a change that is exciting for me as a teacher, but also concerning.

Protectionist views

Recently, student support for the Labour party has risen, thanks largely to the grassroots organisation Momentum, which has been credited with changing Labour’s narrative into a more relevant discussion of issues that are of direct concern to young people today.

Emerging in parallel to this, however, have been very protectionist views, spurred on of course by UKIP and Nigel Farage. The party and its former leader have been perceived by many as saying things “as they are”, again offering a refreshingly blunt change from the usual party-political rhetoric. This is an ideological position that is gaining support right across Europe and beyond, giving people the ability to legitimise racist attitudes.

Healthy discussion is vital in universities. ESB Professional/Shutterstock

These more protectionist views – many of which have been close to, or quite in line with, what we might call fascism – are also becoming more overt in schools and universities. It seems an extreme word to use in relation to a small minority of students’ views, but the values and perceptions that I have personally heard confidently argued at times have been worryingly in line with this ideology.

I had previously assumed (wrongly perhaps) that everyone in a class would be against fascism, having enough knowledge about World War Two and the Holocaust to see the dangers of the lies propagated by such ideas of supremacy.

It wasn’t until this year that I realised that I could no longer hold such an assumption, and by doing so I may well be alienating the students who hold such views from discussions. This could in itself serve to further entrench their views rather than aid the development of a critical, evidenced-based perspective of their own.

Challenging the challengers

Healthy debate, generated by a range of perspectives, is very welcome in classrooms and lecture halls, and necessary for a healthy democracy. The concerning issue that is arising in society at large, and becoming increasingly prevalent in universities, is the polarisation of views.

This is not simply about students developing racist, fascist or right-wing views, however. The emergence of these views reveals how globalisation has left many behind, especially those who feel disempowered, disconnected and threatened by the changes that have taken, and are taking, place around them. It is a backlash against much of the progress many feel has been made in recent years, in terms of gender and racial equality.

The problem we have is that young people now obtain their perspectives from a very narrow range of social media sources. And, because of social media algorithms, their political views can be formed and reinforced by a narrow range of perspectives. These views then go unchallenged and become recognised as legitimate. Leaders are hero-worshipped, and understanding of different perspectives, experiences and people can diminish, while evidence-based, independent, critical analysis (a skill lacking in society at the best of times) is lost, polarising perspectives and narrowing debate.

Academics and universities need – as journalist John Morgan points out – to work out how to approach the problem carefully “lest they portray themselves as part of the global elite resented by populist supporters”. Students need to feel able to express and explore their ideas. But we as teachers should be helping them to challenge their own preconceptions through evidence-based research, and develop the skills to critically analyse information for themselves.

The fear of non-conformity, of gender and racial equality and of diversity needs to be addressed so that cultures and global challenges become issues that are looked at from a position of understanding and contextuality, not from a reactive and defensive position. We can’t ignore any student who does not agree with a more liberal standpoint. Instead, we need to challenge them in a way that doesn’t further create defensive entrenchment of views, alienating those who perhaps already feel alienated.

The Conversation

Corinna Patterson does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Golf: the neuroscience of the perfect putt

Author: Andrew Michael Cooke, Lecturer in Performance Psychology, Bangor University

Listen to your brain. OtmarW/Shutterstock, CC BY-SA

Sports fans across the world watched the American golfer Tiger Woods roll in a putt to win the PGA Tour’s season-ending Tour Championship on September 23. His victory caps a remarkable comeback from personal struggles and injuries that caused him to plummet to 1,199 in the world rankings less than a year ago, and restores him as one of the world’s best.

With the PGA Tour finale now complete, the eyes of the golfing world are on Paris for the Ryder Cup – golf’s biennial team contest pitting the best players from the USA against the cream of Europe. But what makes a successful golfer? My research explores the neuroscience of golf putting – and ways that the brain can be trained to increase putting success.

Golfers carry 14 clubs, but the putter is by far the most used, accounting for around 41% of shots. Successfully striking the 1.68-inch diameter golf ball into the 4.25-inch golf hole requires precision programming of force and direction. You have to take into account factors such as slope, direction of the blades of grass and weather effects including temperature, wind and rain.

My research has identified a type of “brainwave”, produced by electrical pulses resulting from brain cells communicating with each other, that can predict golfing success. They can easily be recorded by simply putting sensors on the scalp. In a brain imaging study where 20 expert and novice golfers each hit 120 putts, I found that the intensity of activity of a brainwave at the frequency of 10-12 Hz, recorded before the backswing, could clearly distinguish putts that went in the hole from those that missed.
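As a rough illustration of the measurement itself (a minimal sketch, not the study's analysis pipeline; the sampling rate and the random placeholder signal are assumptions), power in the 10-12 Hz band can be estimated from a single pre-putt EEG epoch like this:

```python
# Hedged sketch: estimating 10-12 Hz band power from one EEG epoch using
# Welch's method. The signal here is a random placeholder; a real analysis
# would use the samples recorded from a scalp sensor before the backswing.
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

fs = 256                                # assumed sampling rate in Hz
epoch = np.random.randn(2 * fs)         # placeholder 2-second epoch

freqs, psd = welch(epoch, fs=fs, nperseg=fs)
band = (freqs >= 10) & (freqs <= 12)
band_power = trapezoid(psd[band], freqs[band])
print(f"10-12 Hz band power: {band_power:.4f}")
# Comparing this number across holed and missed putts is, in essence, how
# a predictive brainwave signature can be identified.
```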

More specifically, intense activity at sensors placed on frontal parts of the scalp, over the premotor cortex, was key for putting success. This finding has since been supported by other research, which also found that reduced activity at sensors placed on the left-temporal parts of the scalp (close to the left ear) can further contribute to the recipe for proficient putting.

This makes sense, as the premotor cortex is implicated in movement planning, and the left-temporal region is associated with verbal-analytic processing. So it looks as if the brain intently focuses on accurately planning force and direction, while blocking out verbal intrusions, immediately before successful putts.

Training the brain to drain putts

Having identified neural signatures associated with putting success, scientists are now exploring whether you can train golfers to produce this pattern of brain activity and recognise what it feels like. The trick is to only hit putts when the appropriate activation level is produced (when they are “in the zone”).

Such brain training can be achieved using a technique called “neurofeedback”, which involves measuring brain activity and displaying it back in real time (in the form of auditory tones, or graphs on a computer screen) so that recipients can develop ways of consciously controlling their brain activity levels. It may seem far fetched, but the technology and equipment are readily available, portable and relatively cheap – starting at less than £300 for a wireless electroencephalographic (EEG) neurofeedback headset.
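A toy version of that feedback loop might look like the following sketch (everything here is illustrative: a random generator stands in for the EEG headset, and the display is a printed bar rather than a tone or on-screen graph):

```python
# Illustrative neurofeedback loop: compute band power on short windows of
# "EEG" and feed it straight back to the user as a crude visual bar.
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

fs = 256  # assumed sampling rate in Hz

def fake_eeg_stream(n_windows):
    """Stand-in for an EEG amplifier: yields one-second sample windows."""
    for _ in range(n_windows):
        yield np.random.randn(fs)

def band_power(window, lo=10.0, hi=12.0):
    freqs, psd = welch(window, fs=fs, nperseg=len(window))
    mask = (freqs >= lo) & (freqs <= hi)
    return trapezoid(psd[mask], freqs[mask])

for window in fake_eeg_stream(10):
    power = band_power(window)
    bar = "#" * int(power * 2000)       # arbitrary scaling for display
    print(f"{power:.4f} {bar}")
# With a real headset, the user watches this feedback and, over repeated
# sessions, learns strategies that push the bar in the desired direction.
```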

Jason Day: brain trained. Keith Allison/Wikipedia, CC BY-SA

In a 2015 study, I used wireless neurofeedback technology to train 12 amateur golfers to produce the pattern of brainwaves that I’d previously associated with success before they hit putts. This took place during three separate one-hour training sessions. On their return to the laboratory a few days later, the golfers were able to reliably produce the pattern of 10-12 Hz brain activity that I had prescribed.

What’s more, their putting had improved (on average, 8ft putts finished 21% closer to the hole after the training). Admittedly, though, the improvement was not large enough to exclude the possibility of a placebo effect. Nevertheless, the results are encouraging, and have been bolstered by similar findings from researchers in other parts of the world.

From the lab to the golf course

While the scientists are still experimenting before making firm and unequivocal statements about neurofeedback’s effectiveness, there are some members of the golfing elite who are already convinced of the benefits of brain training. Australian Jason Day, the current world number 11, has used neurofeedback for a number of years and said that it has yielded “a 110% improvement” in his mental game. So it may be no coincidence that he was ranked as the best putter on the 2018 PGA Tour.

Meanwhile, a more recent convert who’ll be on show in Paris is American Bryson DeChambeau. The current world number seven revealed details of his brain training regime in August 2018, before winning two out of the four season-ending FedEx Cup playoff events. With 21 professional victories between them, Day and DeChambeau are certainly doing something right.

Much is made of the Ryder Cup being a team event, a stark departure from the individual contests that characterise regular tournaments on the PGA and European Tours. While this undoubtedly adds new dynamics that capture the attention of the sporting world, it will still, in all likelihood, boil down to an individual putt by an individual player to determine which continent lifts golf’s premier prize.

As a proud European, I hope that player is wearing European blue, and can optimally shape his 10-12 Hz brainpower during those crucial moments.

The Conversation

Andrew Michael Cooke has received funding from the Economic and Social Research Council grants PTA-02627-2696 and RES-000-22-4523.

Free school meal funds help pay for school trips too – but self-imposed stigma stops parents claiming

Author: Gwilym Siôn ap Gruffudd, Lecturer/Researcher in Education, School of Education and Human Development, Bangor University

Welsh funds for school meals are being used to expand pupils' education. Rawpixel.com/Shutterstock

Each and every one of us defines success in our own way. But in schools, it is mostly limited to a grading system, with pupils who achieve better marks considered to be more of a “success”. The barriers to this success are not just natural intelligence or a lack of hard work, however; they come from a variety of different places.

For our recently published study, we looked at how poverty and educational attainment are linked in rural Wales. We spoke to children, teachers and other key stakeholders to explore the problems that they experience and perceive. We also looked at national, regional and local plans and policies for combating poverty and increasing educational attainment in pupils.

Wales has the worst child poverty in the UK. One in three children aged up to 16 – approximately 200,000 children – live in poverty. An estimated 90,000 of these live in severe poverty, and forecasts show that this is not set to improve.

Much evidence has been presented as to why pupils in Wales slip behind the academic success of those in other countries. Correlations are often drawn between poverty and education, and the need to reduce the gap between the aloof affluent, the authentically austere and the adversely poor. But for our study, we wanted to analyse things from a different angle, from the perceived, actual and expressed needs of pupils and teachers, as they applied policies that were designed to help schools overcome the barriers of poverty.

We turned our attention to free school meal funding. Welsh government money is given to schools to provide all qualifying children and/or young people between the ages of four and 18 with free school meals. In rural primary and secondary schools, rather than solely providing food, the funding is used by schools in diverse and sensitive ways to help pupils engage in essential curricular and extra-curricular activities that would otherwise be beyond their means. This means that all rural pupils in the schools can, for example, go to science and discovery centres on trips, without the immediate worry of affordability.

Some families are going out of their way to avoid claiming free school meals. Africa Studio/Shutterstock

However, another finding of our research was that many parents were not claiming the free school meals to which their children were rightfully entitled. This phenomenon appears to be more prevalent in rural (as opposed to urban) schools across Wales.

Many families in rural Wales are identified as JAM (“just about managing”), with two parents working full-time and long hours in low paying jobs. They understandably find it difficult to spend time with their children and give them beneficial educational experiences. But they are also less likely to claim free school meals, despite being eligible.

Stigma

The problem, we found, is that there is a stigma attached to free school meals that causes parents to abstain from claiming them. Rural pride – coupled with beliefs and fears that children in receipt of free school meals are obvious to other pupils and teachers, or that schools can influence the eligibility criteria – is limiting claims of the additional resources and support available.

Some even prefer to go far out of their way to source sustenance from food banks outside their area of residence instead of claiming. This indicates that parents may experience limiting psychological barriers, or hold deeply ingrained beliefs about their ability, worth and values – beliefs that may be neither real nor accurate, but which act as a self-imposed socio-scholastic barrier for their children.

Since we published our research, the Welsh government has announced a further £90m for the Pupil Development Grant – the fund for school meals – which goes to all schools in Wales. Though this is welcome, there are evidently still issues that need to be addressed to ensure that poverty holds no child in Wales back from achieving their best while in education.

Schools are acutely aware of, and addressing, the barriers, for example by creating online payment systems for all parents, which has the added benefit of removing the physical stigma that comes with issuing tickets for meals.

Removing the stigma altogether cannot be done by changing processes or politics alone, however. We as a society need to change how we see free school meal funding. A significant number of school age children in general experience socio-economic disadvantage of one form or another. Funds like the one in Wales are not an indicator of poverty, but rather, often a vital resource for ensuring that each and every child has access to the same educational experiences.

The Conversation

This research was commissioned by GwE and ERW School Improvement Consortia.

Life's purpose rests in our mind's spectacular drive to extract meaning from the world

Author: Guillaume Thierry, Professor of Cognitive Neuroscience, Bangor University

Searching for meaning. agsandrew/Shutterstock

What is the purpose of life? Whatever you may think is the answer, you might, from time to time at least, find your own definition unsatisfactory. After all, how can one say why any living creature is on Earth in just one simple phrase?

For me, looking back on 18 years of research into how the human brain handles language, there seems to be only one, solid, resilient thread that prevails over all others. Humanity’s purpose rests in the spectacular drive of our minds to extract meaning from the world around us.

For many scientists, this drive to find sense guides every step they take; it defines everything that they do or say. Understanding nature and constantly striving to explain its underpinning principles, rules and mechanisms is the essence of the scientist’s existence. And this can be considered the most simplified version of their life’s purpose.

But this isn’t something that just applies to the scientifically minded. When healthy human minds are examined using techniques such as brain imaging and EEG, the brain’s relentless obsession with extracting meaning from everything is found in all kinds of people, regardless of status, education or location.

Language: a meaning-filled treasure chest

Take words, for instance, those mesmerising language units that package meaning with phenomenal density. When you show a word to someone who can read it, they not only retrieve the meaning of it, but all the meanings that this person has ever seen associated with it. They also rely on the meaning of words that resemble that word, and even the meaning of nonsensical words that sound or look like it.

And then there are bilinguals, who have the particular fate of having words in different languages for arguably overlapping concepts. Speakers of more than one language automatically access translation in their native language when they encounter a word in their second language. Not only do they do this without knowing, they do it even when they have no intention of doing so.

Recently, we have been able to show that even an abstract picture – one that cannot easily be taken as a depiction of a particular concept – connects to words in the mind in a way that can be predicted. It does not seem to matter how seemingly void of meaning an image, a sound, or a smell may be, the human brain will project meaning onto it. And it will do so automatically in a subconscious (albeit predictable) way, presumably because the bulk of us extract meaning in a somewhat comparable fashion, since we have many experiences of the world in common.

Consider the picture below, for example. It has essentially no distinctive features that could lead you to identify, let alone name, it in an instant.

Grace or violence? Alexandru Panoiu/Flickr, CC BY-SA

You would probably struggle to accurately describe the textures and colours it is composed of, or say what it actually represents. Yet your mind would be happier to associate it with the concept of “grace” than that of “violence” – even if you are not able to explain why – before a word is handed over to you as a tool for interpretation.

Beyond words

The drive of humans to understand is not limited to just language, however. Our species appears to be guided by this profound and inexorable impulse to understand the world in every aspect of our lives. In other words, the goal of our existence ultimately seems to be achieving a full understanding of this same existence, a kind of kaleidoscopic infinity loop in which our mind is trapped, from the emergence of proto-consciousness in the womb, all the way to our deathbed.

The proposal is compatible with theoretical standpoints in quantum physics and astrophysics, under the impetus of great scientists like John Archibald Wheeler, who proposed that information is the very essence of existence (“it from bit” – perhaps the best ever attempt to account for all meaning in the universe in one simple phrase).

Information – that is atoms, molecules, cells, organisms, societies – is self-obsessed, constantly looking for meaning in the mirror, like Narcissus looking at the reflection of the self, like the molecular biologist’s DNA playing with itself under the microscope, like AI scientists trying to give robots all the features that would make them indistinguishable from themselves.

Perhaps it does not matter if you find this proposal satisfying, because getting the answer to what the purpose of life is would equate to making your life purposeless. And who would want that?

The Conversation

Guillaume Thierry does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Extreme weather in Europe linked to less sea ice and warming in the Barents Sea

Authors: Yueng-Djern Lenn, Senior Lecturer in Physical Oceanography, Bangor University; Benjamin Barton, PhD Researcher, Bangor University; Camille Lique, Research Scientist in Physical Oceanography, Institut Français de Recherche pour l’Exploitation de la Mer (Ifremer)

Vladimir Lugai/Shutterstock

The cold, remote Arctic Ocean and its surrounding marginal seas have experienced climate change at a rate not seen at lower latitudes. Warming air, land and sea temperatures, and large declines in seasonal Arctic sea ice cover are all symptoms of the changing Arctic climate. Although these changes are occurring in relatively remote locations, there is growing evidence to link Arctic sea ice retreat to increasingly erratic weather patterns over the northern hemisphere.

As sea ice declines, areas of open water increase, allowing the ocean to lose more heat to the atmosphere. Heat lost from the ocean to the atmosphere reduces the atmospheric pressure, which provides more energy to storms and increases their cloud content through evaporation.

Water flowing north from the Atlantic Ocean provides a major source of heat to the Arctic Ocean and surrounding continental shelf seas. While the Atlantic Water (a particular water mass in the Arctic Ocean) carries enough heat to melt all the floating Arctic sea ice in less than five years, it is currently insulated from the surface by a lighter, fresher layer of water over most of the central Arctic Ocean.

However, this paradigm appears to be changing. North of Svalbard, Atlantic Water heat has been mixed up towards the surface, resulting in increased surface heat lost to the atmosphere over the ever greater area of open ocean. This change has recently been shown to enhance the rate of sea ice loss eastwards.

Barents Sea changes

Location of the Barents Sea. Wikimedia, CC BY-SA

A key Arctic region for Atlantic Water heat exchange with the atmosphere is the Barents Sea. Atlantic Water flowing east through the Barents Sea Opening – between Bear Island and northern Norway – remains exposed to the atmosphere as it circulates through the central Barents Sea. It gradually cools and becomes fresher (due to sea ice melting) as it moves eastwards to the Kara Sea.

In the Barents Sea, sea ice forms every autumn and melts in late spring/summer. In the northern part of the sea, a north-south change from cold to warm sea surface temperatures signals the presence of the Polar Front, which separates cold Arctic water from warm Atlantic Water. The location of this meeting point between the two water masses, and the temperature difference across it, reflect changes in Barents Sea circulation.

During years with low seasonal sea ice concentrations (when there’s more heat loss from more exposed open water), the north-south differences in atmospheric temperatures across the Barents Sea are reduced. These conditions have been linked to wintertime cyclones travelling further south into western Europe, instead of their tendency to move eastwards towards Siberia, as well as more frequent cold winter extremes at middle latitudes.

Ice and weather

For our recent study, we looked at satellite measurements of sea ice and sea surface temperature, to determine how ocean and ice conditions have evolved between 1985 and the end of 2016. We found that prior to 2005, sea ice extended south of the Polar Front every winter, but that since 2005 this has not been the case.

At the same time, the sea surface temperature difference across the Polar Front has increased, with southern temperatures increasing at a faster rate than those to the north. The average between 1985 and 2004 was -1.2°C in the north and 1.5°C in the south, while between 2005 and 2016 it was -0.6°C in the north and 2.6°C in the south. Clearly, from 2005 the Barents Sea has become too warm for sea ice to exist south of the Polar Front. The question, then, is why the Barents Sea is getting warmer.
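For the curious, the comparison behind those numbers amounts to a simple grouped average over the two periods. A hedged sketch of the computation (with random placeholder series standing in for the satellite record, and the winter-month definition an assumption on our part) might look like this:

```python
# Illustrative computation of winter-mean SST north and south of the Polar
# Front for the pre- and post-2005 periods. The series are placeholders;
# a real analysis would load the gridded satellite SST record.
import numpy as np
import pandas as pd

dates = pd.date_range("1985-01", "2016-12", freq="MS")
df = pd.DataFrame({
    "date": dates,
    "sst_north": np.random.randn(len(dates)),   # placeholder series
    "sst_south": np.random.randn(len(dates)),   # placeholder series
})

winter = df[df["date"].dt.month.isin([12, 1, 2, 3])]  # assumed winter months
period = np.where(winter["date"].dt.year < 2005, "1985-2004", "2005-2016")
print(winter.groupby(period)[["sst_north", "sst_south"]].mean())
```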

Winter-averaged sea surface temperature and sea ice extent as observed in the Barents Sea by satellites between 2005 and 2016. Author provided

Long-term oceanographic measurements of water temperature and salinity near the Barents Sea Opening have shown that inflowing Atlantic Water temperatures have increased over the last 30 years, with what appears to be a small but persistent rise around 2005 – likely to be due to upstream changes in the North Atlantic sources (though it must be noted that our study did not explore this question). An impact of the warmer water entering the Barents Sea is a warmer atmosphere, which in turn insulates the warmer surface water, allowing the Atlantic Water heat to penetrate further to the north and preventing winter sea ice formation and import (that is, sea ice that has formed farther north and drifted southwards) to the region south of the Polar Front.

We believe that this represents a long-term shift in the climate of the Barents Sea, a region already identified as influential on lower-latitude European weather. Furthermore, we believe that the 2005 regime shift we observed over the Barents Sea may have contributed to the increasingly frequent extreme weather events experienced over Europe in the past decade or so.

The Conversation

Yueng-Djern Lenn currently receives and has had previous research grant funding from the Natural Environment Research Council, UK. She has also previously been supported by the National Science Foundation, USA.

Benjamin Barton receives funding from the UK-France PhD programme managed by The Defence Science and Technology Laboratory (Dstl), UK and the Direction Générale de l’Armement (DGA), France.

Camille Lique works for Ifremer (Institut Francais de Recherche pour l'Exploitation de la Mer). She has received funding from the UK Defence Science and Technology Laboratory (Dstl), UK and the Direction Générale de l’Armement (DGA), as well as from the French INSU-LEFE programme, the European CMEMS programme and the French 'Agence Nationale de la Recherche'.

Humphrey Llwyd: the Renaissance scholar who drew Wales into the atlas, and wrote it into history books

Author: Huw Pryce, Professor of Welsh History, Bangor University

Abraham Ortelius's 1570 world map. The Library of Congress/Wikimedia

As a small country with less than 5% of the UK population, Wales faces major challenges in making its presence felt in the wider world – but this is something that scholars, politicians and the people themselves have been concerned about for centuries.

August 2018 marks the 450th anniversary of the death of Humphrey Llwyd, a remarkable Renaissance scholar who believed that Wales was fundamental to the history and identity of Britain. Llwyd not only drafted the first published map of Wales – which literally set the country on a global stage – but was also the first person to write a history of Wales and a topographical account of Britain.

Born to a gentry family in Denbigh in 1527 and educated at Oxford, Llwyd went on to make his career in England, being employed in the household of the cultured and book-loving Henry Fitzalan, the 12th Earl of Arundel. This gave Llwyd the opportunity to develop his interest in learning. It also led to his marriage to Barbara, sister of the earl’s son-in-law, Lord Lumley (who himself was another enthusiastic book collector).

Humphrey Llwyd, as depicted in the 1799 book The Royal Tribes of Wales. Philip Yorke/Wikimedia

By 1563 Llwyd had set up home back in Denbigh, within the walls of the town’s medieval castle. As MP for the borough, he reportedly facilitated the passage, through the parliament of 1563, of the bill authorising the translation of the Bible and Book of Common Prayer into Welsh.

In 1566–7 Llwyd joined Arundel on a journey to Italy. However, a little over a year after his return to Denbigh, he fell seriously ill, and died on August 21 1568. He was buried just outside the town at the church of Llanfarchell, where the fine monument erected to his memory can still be seen.

Mapping Wales

Like other Welsh Renaissance scholars, Llwyd welcomed the so-called “union” of Wales and England under Henry VIII. Yet precisely because the future of Wales lay in the wider orbit of Britain, Llwyd was determined to promote its history and culture as integral parts of the island’s heritage.

That determination was sharpened by his experiences outside Wales. It is no coincidence that the first work conceived of as a history of Wales – Llwyd’s Cronica Walliae (“The Chronicle of Wales”) of 1559 – was written in England, very probably at Arundel’s palace of Nonsuch near London for antiquarian-minded members of the earl’s circle. (Despite its Latin title, the work was written in English.)

The chronicle struck a defiant tone:

I was the first that tocke the province [Wales] in hande to put thees thinges into the Englishe tonge. For that I wolde not have the inhabitantes of this Ile ignorant of the histories and cronicles of the same, wherein I am sure to offende manye because I have oppenede ther ignorance and blindenes thereby …

Llwyd’s final works resulted from commissions by the great Flemish cartographer and “inventor” of the atlas, Abraham Ortelius, whom Llwyd met at Antwerp on his way home from Italy in 1567. These included two maps, one of Wales, the other of England and Wales, which were eventually published in a supplement to Ortelius’s atlas, Theatrum Orbis Terrarum (“Theatre of the World”), in 1573.

The map of Wales printed as part of Theatrum Orbis Terrarum. National Library of Wales

Llwyd sent drafts of these from his deathbed in Denbigh, along with notes on the topography of Britain – Commentarioli Britannicae descriptionis fragmentum (“A Fragment of a Little Commentary on the Description of Britain”) – written in Latin and published in Cologne in 1572. This was soon followed by Thomas Twyne’s English translation, The Breviary of Britayne (1573). Significantly, about half of the work was devoted to Wales.

Defending history

One aim of the Breviary was to defend the traditional British history popularised by Geoffrey of Monmouth – which traced the earliest kings of Britain to the Trojan exile Brutus – against the Italian humanist historian Polydore Vergil, “who sought not only to obscure the glory of the British name, but also to defame the Britons themselves with slanderous lies”. Like his compatriot Sir John Prise of Brecon, Llwyd not only cited numerous classical sources but stressed the importance of sources in Welsh, which Vergil could not read.

The Cronica Walliae also took the truth of British history for granted. The work drew heavily on the medieval Welsh chronicles known as Brut y Tywysogyon (“The Chronicle of the Princes”), which were designed as continuations of Geoffrey’s history, though Llwyd also used other sources and imposed his own shape on the whole. In particular, he divided the history by the reigns of the kings and princes whose deeds he related, from Cadwaladr the Blessed in the late seventh century to the failed revolt of Madog ap Llywelyn in 1294–5. This allowed Llwyd to present the history of medieval Wales as an unbroken succession of legitimate rulers. It also allowed him to insert the first account of Prince Madog’s alleged discovery of America in the 12th century.

His final sentence made clear, however, that a separate Welsh history was long over: after 1295 “there was nothinge done in Wales worthy memory, but that is to bee redde in the Englishe Chronicle”. Nevertheless, by commemorating their ancient and medieval history, Llwyd insisted that the Welsh could boast a unique pedigree and status as “the genuine Britons” in the Tudor realm.

The Conversation

Huw Pryce receives funding from the AHRC for his contribution to the major project, "Inventor of Britain: The Complete Works of Humphrey Llwyd", led by Professor Philip Schwyzer (Exeter University), in collaboration also with Professor Keith Lilley (Queen’s University Belfast), which will publish new critical editions of Llwyd’s works and throw fresh light on their significance.

Wales's tourism problem is down to a disconnect with its own people

Author: Euryn Rhys Roberts, Lecturer in Medieval and Welsh History, Bangor University

Harlech Castle, Gwynedd, north Wales.Valery Egorov/Shutterstock

Wales is a country bursting with ancient culture and beautiful landscapes. It is home to a vibrant people, who are intensely proud of their heritage. It sounds like the perfect place for many a traveller to visit – so why, then, has it long struggled to attract foreign tourism?

In 2017, more than one million trips were taken to Wales by overseas visitors. This very modest 0.5% increase on 2016 was accompanied by a steep drop in international tourists’ spending – down by 17% from £444m to £369m. These figures were in sharp contrast to London (up 14% to £13,546m) and Scotland (up 23% to £2,276m).

Dwelling too much on this disparity – when both London and Scotland are better connected and internationally more visible – would be a self-flagellating enterprise. But Wales may have expected better after a £5m Welsh government spend on a “Year of Legends” marketing campaign. Putting the heritage of Wales – its legends, landscapes and castles – at the fore was meant to highlight some of its unique selling points.

But while the nation tried to market its “Welshness” abroad, at home it was confused as to what this even meant. Proposals including a giant “iron ring” sculpture at Flint Castle and a nostalgic flirtation with marketing Wales internationally as a “principality” were met with anger and accusations that the devolved government had forgotten the very history it was trying to sell.

Sadly, however, none of this is a new problem – Wales has been struggling with foreign tourism for decades – and it is largely down to this disconnection.

Years of failed promises

During the 1970s and 1980s, Wales’s share of the total amount spent by international visitors to the UK never hovered much higher than about 2%. Then as now, focusing on heritage and culture was seen as a way of addressing the changing tastes and trends which had eaten away at the traditional rural and coastal resort market.

Much has been made of the series of themed years which began in 2016 with the “Year of Adventure”. But Wales has also done this before: 1976 was the “Welcome America Year” while 1983 was the “Year of the Castles”. Intended as an unproblematic tourist promotion, the Year of the Castles actually became a matter of some controversy in Wales – the castles were mainly built by invaders, leading some to criticise the festivities as a celebration of the 1282-3 conquest of the native principality of Wales and its subjection to the crown of England.

Conwy castle, built by Edward I during his conquest of native Wales, between 1283 and 1289.Pixabay

Nevertheless, the plan went ahead, with a year-long festival – Cestyll ’83 (Castles ’83) – at its heart. Though directed and publicised from above, it largely relied on the efforts of local authorities and voluntary organisations. The only directive was that any activities – from charity pram pushes to medieval pageants – should “take place in or near a Welsh castle”. The Wales Tourist Board would eventually claim that some 200 events in Wales during 1983 were inspired by the festival.

Festival of shame

Using a castle-shaped stand, the festival was launched at the World Travel Market in London in December 1982. This was followed, at the end of February 1983, with a domestic and royal launch attended by Charles and Diana, the Prince and Princess of Wales, at Caerphilly Castle. Like all commemoration it had a whiff of self-congratulation and a gratuitous swagger. It was also all too easy for the Wales Tourist Board to slip in that the festival was a celebration of the seventh centenary of the building of some of Wales’s most famous castles – such as Conwy, Caernarfon and Harlech – all of which were built by Edward I to secure his conquests.

As a result, the festival was dubbed a “festival of shame”. Modern grievances were transferred onto Edward’s castles. Weren’t these, questioned some, the first English holiday homes in Wales?

That’s not to say it wasn’t a success – on the commercial side, the increase in visitors and buzz it created played a key role in the establishment of the government’s historic environment service Cadw to maximise the tourist potential of the country’s heritage. On the cultural side, it highlighted that the medieval heritage of Wales could not be treated as unproblematic. While making mistakes and forgetting its history might be an indicator that Welsh nationhood is alive and kicking – under French historian Ernest Renan’s famous definition of what makes a nation – the castles of Wales remain saddled, it would seem, with a heritage which is both a blessing and a curse. In the present as in the past, Welsh castles have been a source of conflict and cultural exchange.

Tourism may be about commodifying locations – but if Wales wants its own people on board it needs to ask itself what it wants from the country’s heritage beyond potential economic gain. Locals and long-distance travellers might pay more attention to the country if its public history was known for its debate and controversy – and not as a bland footnote to English and British history.

Either way, Wales needs to come up with a solution that both the Welsh agree with and foreign visitors can engage with. The ongoing disconnect is evidently doing nothing to sell the nation to the world.

The Conversation

Euryn Rhys Roberts does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Five ways that natural nanotechnology could inspire human design

Author: John Thomas Prabhakar, Lecturer of Physical Chemistry (Nanocrystals and Nanoparticles), Bangor University

Michael Fitzsimmons/Shutterstock

Though nanotechnology is portrayed as a fairly recent human invention, nature is actually full of nanoscopic architectures. They underpin the essential functions of a variety of life forms, from bacteria to berries, wasps to whales.

In fact, deft use of the principles of nanoscience can be traced to natural structures that are over 500m years old. Below are just five sources of inspiration that scientists could use to create the next generation of human technology.

1. Structural colours

The colouration of several types of beetles and butterflies is produced by sets of carefully spaced nanoscopic pillars. Made of sugars such as chitosan, or proteins like keratin, these pillars are spaced so that the slits between them manipulate light to achieve certain colours or effects like iridescence.
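
How spacing translates into colour can be illustrated with textbook optics. The sketch below is a deliberately simplified model – it assumes the pillar array behaves like an ideal diffraction grating at normal incidence, which real wing scales only approximate – but it shows why spacings of several hundred nanometres put the reflected peak squarely in the visible range:

```python
import math

# Simplified grating model (an illustrative assumption - real structural colour
# combines diffraction, thin-film interference and photonic-crystal effects).
# At normal incidence, the first-order diffraction condition for a grating of
# spacing d is: d * sin(theta) = wavelength.

def first_order_wavelength_nm(spacing_nm, viewing_angle_deg):
    """Wavelength (in nm) diffracted towards a viewer at the given angle."""
    return spacing_nm * math.sin(math.radians(viewing_angle_deg))

for spacing in (800, 1000, 1200):  # hypothetical pillar spacings, in nanometres
    wl = first_order_wavelength_nm(spacing, 30)
    print(f"spacing {spacing} nm -> strongest colour near {wl:.0f} nm at 30 degrees")
```

Shifting the spacing by a few hundred nanometres sweeps the reflected peak from violet (around 400 nm) to orange (around 600 nm), which is why geometry, rather than pigment, sets the colour.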

One benefit of this strategy is resilience. Pigments tend to bleach with exposure to light, but structural colours are stable for remarkably long periods. A recent study of structural colouration in metallic-blue marble berries, for example, featured specimens collected in 1974, which had maintained their colour despite being long dead.

Complex slit architecture in the wings of the butterfly Thecla opisena.Science Advances/Wilts et al, CC BY-NC

Another advantage is that colour can be changed simply by varying the size and shape of the slits, or by filling the pores with liquids or vapours. In fact, often the first clue to the presence of structural colouration is a vivid colour change after the specimen has been soaked in water. Some wing structures are so sensitive to the air density in the slits that colour changes are seen in response to temperature too.

2. Long range visibility

In addition to simply deflecting light at an angle to achieve the appearance of colour, some ultra-thin layers of slit panels completely reverse the direction of travel of light rays. This deflection and blocking of light can work together to create stunning optical effects, such as a single butterfly’s wings being visible from half a mile away, and beetles with brilliant white scales measuring a slim five micrometres. In fact, these structures are so impressive that they can outperform artificially engineered structures that are 25 times thicker.

3. Adhesion

Gecko feet can bind firmly to practically any solid surface in milliseconds, and detach with no apparent effort. This adhesion is purely physical, with no chemical interaction between the feet and the surface.

Micro and nanostructure of Gecko feet.© 2005, The National Academy of Sciences

The active adhesive layer of the gecko’s foot is a branched nanoscopic layer of bristles called “spatulae”, which measure about 200 nanometres in length. Several thousand of these spatulae are attached to micron-sized “setae”. Both are made of very flexible keratin. Though research into the finer details of the spatulae’s attachment and detachment mechanism is ongoing, the very fact that they operate with no sticky chemical is an impressive feat of design.

Geckos’ feet have other fascinating features too. They are self-cleaning, resistant to self-matting (the setae don’t stick to each other) and are detached by default (including from each other). These features have prompted suggestions that in the future, glues, screws and rivets could all be made by a single process: casting keratin or a similar material into different moulds.

4. Porous strength

The strongest form of any solid is the single crystal state – think diamonds – in which atoms are present in near perfect order from one end of the object to the other. Things like steel rods, aircraft bodies and car panels are not single crystalline, but polycrystalline, similar in structure to a mosaic of grains. So, in theory, the strength of these materials could be improved by increasing the grain size, or by making the whole structure single crystalline.

Single crystals can be very heavy, but nature has a solution for this in the form of nanostructured pores. The resultant structure – a mesocrystal – is the strongest form of a given solid for its weight. Sea urchin spines and nacre (mother of pearl) are both made of mesocrystalline forms. These creatures have lightweight shells and yet can reside at great depths where the pressure is high.

In theory, mesocrystalline materials can be manufactured, although using existing processes would require a lot of intricate manipulation. Tiny nanoparticles would have to be spun around until they line up with atomic precision with other parts of the growing mesocrystal, and then gelled together around a soft spacer to eventually form a porous network.

5. Bacterial navigation

Magnetotactic bacteria possess the extraordinary ability to sense minute magnetic fields, including the Earth’s own, using small chains of nanocrystals called magnetosomes. These are grains between 30 and 50 nanometres in size, made of either magnetite (a form of iron oxide) or, less commonly, greigite (an iron-sulphur compound). Several features of magnetosomes work together to produce a foldable “compass needle” many times more sensitive than man-made counterparts.
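
A rough back-of-the-envelope calculation hints at why the grains work as a chain rather than singly. The sketch below uses textbook values only (magnetite’s saturation magnetisation, a typical Earth field, room temperature), not measurements from any particular bacterium:

```python
# Magnetic alignment energy of a magnetosome chain in Earth's field, compared
# with the thermal energy kT. All constants are generic textbook values.
K_B = 1.38e-23   # Boltzmann constant, J/K
T = 300          # approximate ambient temperature, K
B = 50e-6        # Earth's magnetic field, roughly 50 microtesla
M_S = 4.8e5      # saturation magnetisation of magnetite, A/m

edge = 50e-9                 # one 50 nm grain, modelled as a cube
moment = M_S * edge ** 3     # magnetic moment of a single grain, A*m^2

for n_grains in (1, 10, 20):  # a lone grain versus chains of grains
    ratio = n_grains * moment * B / (K_B * T)
    print(f"{n_grains:2d} grain(s): magnetic energy / kT = {ratio:.1f}")
```

For a lone grain the alignment energy sits below kT, so thermal jostling randomises its direction; a chain of ten or twenty grains is pinned firmly along the field line – which is essentially what the magnetosome chain achieves.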

Though these “sensors” are only used for navigating short distances (magnetotactic bacteria are pond-dwelling), their precision is incredible. Not only can they find their way, but varying grain size means that they can retain information, while growth is restricted to the most magnetically sensitive atomic arrangements.

However, as oxygen and sulphur combine voraciously with iron to produce magnetite, greigite or over 50 other compounds – only a few of which are magnetic – great skill is required to selectively produce the correct form and create the magnetosome chains. Such dexterity is currently beyond our reach, but future navigation could be revolutionised if scientists learn how to mimic these structures.

The Conversation

John Thomas Prabhakar does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Unused £321m trapped on dormant Oyster cards – and time may be running out to get it back

Author: Bernardo Batiz-Lazo, Professor of Business History and Bank Management, Bangor University; Prachandra Shakya, PhD Candidate, Bangor University

Topping up.shutterstock

It is 15 years since Transport for London (TfL) launched the Oyster card on London’s buses and tube trains, but Oyster hasn’t had a very happy birthday.

Instead of cake, candles and raised glasses, news broke that money trapped on dormant Oyster cards amounts to £321m – a princely sum that has effectively been loaned, interest-free, by the public to TfL. This “mountain of cash” exists as credit on cards that haven’t been used for at least a year – either lost, damaged, abandoned, or stashed away.

To followers of Oyster-nomics, this is just one more episode in a marked decline affecting Oyster and similar top-up based systems. More and more cards have been slipping into disuse, while the percentage of journeys using Oyster has plummeted. Where did these troubles come from, and might the so-called cash mountain be the final straw?

Oyster vs Octopus

To understand Oyster’s problems, we need to take a look at its history.

London was not the first world city to introduce labour-saving methods on its public transport, and there have been many attempts to use technology to ease the passage of commuters cramming into buses and trains. In the 1960s, the Japanese launched a cardboard ticket with a magnetic stripe on the back. The system is still used today, including on some British railway lines and the Mexico City metro.

In Hong Kong during the 1990s a diverse group of companies collaborated to develop Octopus – a payment card with a chip that dramatically reduced the city’s use of cash. Initially, the card solely served the city’s vast transport network – a direct forerunner of Oyster – but slowly expanded to include convenience stores, fast food restaurants and more.

They look better too.shutterstock

By December 2017, more than 10,000 Hong Kong shops and service providers were accepting Octopus payments from 34m cards – accounting for 15m transactions a day. These corresponded to a daily spend of around HK$194m (£18.7m).

The Oyster card seems brittle by comparison. While Octopus morphed into a contactless, stored-value smart card capable of online and offline transactions, Oyster remains a glorified travel card. TfL oversees 3.99 billion journeys every year, so it has easily had the influence and financial muscle to develop Oyster further if it had wanted to. Predominantly, it has chosen not to.

At one stage there were ambitions to expand the Oyster network to Britain’s ATMs, so that customers would be able to top up at any hole-in-the-wall. But in the course of our ATM research, interviewees in the banking sector suggested it was political infighting in LINK – the sole ATM network in the UK – that kept the plans on the shelf, rather than any technological or commercial concern. It was a clear missed opportunity for Oyster to develop, Octopus-esque, and establish similar schemes across the country.

As it is, while some global counterparts have evolved to keep up with the new applications of contactless technology, Oyster has been touching in and out the same way since 2003.

Going for gold

Perhaps unexpectedly, the 2012 London Olympics dealt the Oyster card a body blow. Preparations for the games included plans to make Olympic sites “cash-free zones” in a bid to cut queues and stop criminals targeting visitors.

After much lobbying, this led to TfL starting to accept EMV payments. “EMV” – “Europay, Mastercard, Visa” – refers to technical specifications which, within specific guidelines, make chips in payment cards and point-of-sale terminals compatible. This allowed contactless bank cards to be used instead of Oyster, initially on London’s 8,500-strong fleet of red buses.

The London Olympics delivered an unexpected blow to the Oyster monopoly.shutterstock

By the end of 2013, London’s entire network of buses, tube trains, trams, metropolitan rail lines, and TfL-operated river boats was open to EMV payments, and in 2014 TfL doubled down by banning cash payment for bus fares. At the time, fewer than 40% of the 96m debit cards and 58m credit cards in the UK were contactless, but by the end of 2017, 70% of all payment cards had contactless capabilities. Similar trends were expected in the wallets of many of London’s 15m or so annual overseas visitors.

Since the London Olympics in 2012, Oyster travel has dropped by 20%, while EMV journeys grew from 79,421 in 2014 to 723,098 in 2017 – a factor of more than nine. The number of unused Oyster cards doubled between 2013 and 2017, from 27m to 53m. As for the cash mountain, it has been growing by an average of 25% per year since 2014, from £123m to the whopping £321m now quoted in the press.
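
Those headline figures are easy to sanity-check with the numbers quoted above (assuming a four-year window from 2014 to the time of writing):

```python
# Sanity-checking the quoted figures (all inputs are from the article).
emv_2014, emv_2017 = 79_421, 723_098
print(f"EMV journey growth: x{emv_2017 / emv_2014:.1f}")  # ~x9.1, "more than nine"

cash_2014, cash_now = 123e6, 321e6   # the cash mountain, in pounds
years = 4                            # assumed: 2014 to the time of writing
compound_rate = (cash_now / cash_2014) ** (1 / years) - 1
print(f"Compound annual growth: {compound_rate:.0%}")     # ~27%, in line with ~25% a year
```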

An Oyster with no pearl

In effect, punters have loaned TfL this money, interest-free, and there’s no guarantee it will be fully returned. When breaking the news, Liberal Democrat London Assembly member Caroline Pidgeon stated that it was “time TfL devoted far more time and energy telling the public how they can get their own money back.”

But as with energy companies, TfL has no financial incentive to persuade the public to withdraw their balances. Gas bills at least are usually large enough to jerk the claimant into action, whereas Oyster balances are spread across 76m units (73% of which have lain dormant for a year or more), each containing an average of £2.86. One solution would be to imitate airlines and air miles – TfL could set a deadline by which to withdraw dormant money, or lose anything that goes unclaimed.

Punters can now pay for their morning torture in more ways than ever before.shutterstock

This dormant money has set off alarm bells across the pre-paid industry, and shows how non-financial organisations can heavily affect the way payment methods develop. In this case the bargaining power clearly lies with the transport operator (TfL) and not the user as, regardless of people’s preferences, they have to conform to the operator’s choice of payment method. Whether Oyster stays or goes will depend on TfL’s strategy, not on benefits to users.

Nevertheless, this is just one narrative in a much wider story: cash transactions are digitising, payment methods proliferating, and top-up systems like Oyster must evolve quickly or face extinction. Advances barely on the horizon a few years ago are now setting the industry standard, and the Oyster card has spent 15 years in stasis. One day soon it may be touching out for good.

The Conversation

Bernardo Bátiz-Lazo has received funding to research ATM and payments history from the British Academy, Fundación de Estudios Financieros (Fundef-ITAM), Charles Babbage Institute and the Hagley Museum and Archives. He is also active in the ATM Industry Association, consults with KAL ATM Software and is a regular contributor to www.atmmarketplace.com.

Prachandra Shakya does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Pristine Antarctic fjords contain similar levels of microplastics to open oceans near big civilisations

Author: Alexis Janosik, Assistant Professor of Biology, University of West Florida; David Barnes, Data Interpretation Ecologist, British Antarctic Survey; James Scourse, Professor of Physical Geography, University of Exeter; Katrien Van Landeghem, Senior Lecturer in Marine Geology, Bangor University

Author provided

In the middle of the last century, mass-produced, disposable plastic waste started washing up on shorelines and turning up in the middle of the oceans. This has since become an increasingly serious problem, spreading globally to even the most remote places on Earth. Just a few decades later, in the 1970s, scientists found the same problem occurring at a much less visible, microscopic level, with microplastics.

These particles of plastic are between 0.05mm and 5mm in size. Larger pieces of plastic can be broken down into microplastics, but these tiny bits of plastic also come from deliberate additions to all sorts of products, from toothpaste to washing powder.

Now, with major global sampling efforts, it has become clear that microplastics are dispersing all over the world – in the water column, sediments, and marine animal diets – even reaching as far south as the pristine environments of Antarctica.

Glacial retreat

As this plastic problem has become more prevalent, one of the most pristine ecosystems on Earth – the fjords of the Western Antarctic Peninsula – has been revealed by retreating glaciers.

Tucked between islands and the mainland, the coast along the Western Antarctic Peninsula has long, narrow inlets created by glaciers. During the last 50 years, these fjords have physically changed, due to reduced sea ice cover and because nearly 90% of glaciers have retreated in this region. These processes have exposed the ocean floor of many of the fjords for the first time.

The Antarctic fjords.Google Earth/US Geological Survey/DigitalGlobe/CNES/Airbus

The potential for microplastics to impact this environment and its marine life is huge – and we’re now working to figure out the depth of the effect that microplastic pollution is having on the newly colonised habitats. Any microplastics recovered in the Southern Ocean, particularly in newly formed ecosystems, raise alarm. They not only indicate that the area has been affected, but that plastic pollution is increasingly ubiquitous too.

New habitats

In November 2017, our multidisciplinary UK-Chile-US-Canada research team – known as ICEBERGS – joined the RRS James Clark Ross (an ice strengthened research ship) and headed to Antarctica’s northernmost fjords. Our goal was, and still is, to gain a better understanding of how the environment and organisms evolve in newly emerging and colonising habitats in Antarctica. We are particularly interested in the marine ecosystems on the ocean floor, so have been looking at areas such as Marian Cove and Börgen Bay on the Western Antarctic Peninsula, where communities have only developed in the last few decades – due to the retreating glaciers.

Thriving marine ecosystems can act as climate regulators. When ice retreats, new, pristine fjordic habitats are revealed and phytoplankton blooms occur. These help to counteract climate change because they take carbon dioxide gas out of the atmosphere. New productive seabed habitat also becomes available for the diverse shallow-water fauna that eat this algae and store the carbon long term. Working against this, however, is the fact that newly open water absorbs heat faster than the ice that would have reflected it.

The animals colonising the exposed fjords face challenging conditions. The sediment and fresh water flowing in the glacier melt runoff make it very difficult for many organisms to survive. And microplastics can be a serious concern for many marine animals exposed to them, especially filter-feeding organisms (for example krill and other zooplankton). As these creatures filter water to obtain food, they may ingest microplastics, which can clog and block their feeding appendages, limiting food intake. Ingested microplastics may be transferred to the circulatory system too, which can cause an increased immune response.

Microplastics may also carry new bacteria and chemical pollutants attached to their surfaces. So, because many filter-feeding organisms support the entire food web, any impact on them should be expected to have cascading effects on the ecosystem.

On board the RRS James Clark Ross.Author provided

In newly revealed habitats, creatures are less likely to have been impacted by marine pollutants previously, so they can help us learn about more recent changes in an environment. To our knowledge, microplastics have not been found in the Antarctic fjords before now, but our preliminary results have already found an alarmingly high presence – similar to levels found in the open water of the Atlantic and Pacific Oceans, near big civilisations.

These results came from samples taken directly from the fjords, and we are now looking further at the evidence of how micro-organisms are being affected by microplastics. During the next two Antarctic summers, we will collect more geophysical, oceanographic, sedimentological and biological data from the same locations, so we can compare changes over time in the habitats that colonise new ocean floor in Antarctic fjords.

Only after such rigorous data collection and analysis will we be able to tell the true impact of microplastics on pristine environments. Until then, we can all do our bit to cut down on potential pollution and protect what may very well be the last pristine environments on Earth.

The Conversation

David Barnes receives funding from Natural Environment Research Council grants.

James Scourse received funding from the Natural Environmental Research Council and CONICYT for this research.

Katrien Van Landeghem acknowledges the financial support provided by the Welsh Government and Higher Education Funding Council for Wales through the Sêr Cymru National Research Network for Low Carbon, Energy and Environment, and she receives funding from the Natural Environmental Research Council for this research.

Alexis Janosik does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Why football may still be coming home... to France

Author: Jonathan Ervine, Senior Lecturer in French and Francophone Studies, Bangor University

When England hosted the 1996 European Championships, a song by Frank Skinner, David Baddiel and the Lightning Seeds inspired the popular chant: “football’s coming home”. Ahead of England’s World Cup semi-final defeat by Croatia, many fans were again talking about football coming home. But were they right to do so? After all, there is a chance that football will still be coming home – despite England’s elimination.

Given their team’s recent performances and their country’s role in the history of football, the French also have reason to feel that football may soon be “coming home”. This idea may be hard to swallow for some English fans, not least those who are getting the lyrics wrong.

Jules Rimet – the World Cup founder mentioned in the chorus of Football’s coming home – was French. So was Henri Delaunay, who is generally seen as the brains behind the European Championships. So was Gabriel Hanot, the L'Equipe journalist credited with founding the European Cup (now Champions League). Indeed, football’s world governing body the Fédération Internationale de Football Association, better known as FIFA, was founded in Paris in 1904 and its first president was another French journalist, Robert Guérin.

The first World Cup trophy was named after Jules Rimet, FIFA president 1921-1954.BnF

France has had a long history of establishing international sports tournaments and organisations. This in part stems from influential Frenchmen in the late 19th century such as Philippe Tissié, Paschal Grousset, and Pierre de Coubertin who became convinced of the educational and physical benefits of sport.

De Coubertin is best known as the founder of the modern Olympics, and he initially wanted the first games to take place in Paris, to coincide with the city’s 1900 Exposition Universelle. For De Coubertin and others, the development of international sport provided France with an instrument of soft power.

England was at this time somewhat suspicious of international sporting organisations, as the football sociologist John Williams has noted. It didn’t send a team to the World Cup until 1950, fully 20 years after the first tournament in Uruguay.

Nonetheless, England is often perceived as the home of football due to its role in the early development of the game. Sheffield FC (founded in 1857) is heralded as the world’s first football club. The Football Association (FA), established in 1863, is the oldest national football association in the world, and it is the FA that helped create the basis for the rules of football that exist today.

France’s oldest football club, Le Havre, was in fact created in 1872 by Englishmen working in the city’s port. Their sky blue and navy halved shirts represent the alma maters of the club’s founders: the universities of Cambridge and Oxford. Le Havre’s club anthem even adopts the same tune as “God Save the Queen”.

Just Fontaine scored a record 13 goals for France at the 1958 World Cup.wiki

However, Williams was right that it is not easy to define where football’s true home is to be found. The line “football’s coming home” appears to hint at a sense of entitlement and ownership when it comes to England’s relationship with football.

Yet football is a global game. Its governing body FIFA may have been founded in Paris, but its headquarters are now located in Zurich, Switzerland. England is no longer home to the International Football Association Board (IFAB) that is responsible for the laws of football. Its headquarters are now also in Zurich.

‘Never understood anything about football’

Given the role that France has played in football becoming a major international sport, are many French people talking about football potentially “coming home” this summer? In short, they’re not. This is largely due to football occupying a very different place in French as opposed to English culture.

France has a larger population than England, but less than half as many professional football teams. Prior to the launch of cable channel Canal Plus in 1984, relatively little domestic football was shown on French television. Nevertheless, hosting and winning the 1998 World Cup led to increased interest in football.

Since then, high-profile failures in several major tournaments have led to France’s leading footballers facing lots of criticism back home over their bad attitudes. In 2012, French football magazine So Foot hit back and claimed that France was a “country that has never understood anything about football”. These comments appeared in a special issue on “Why France doesn’t like its footballers”. France was also described in the title of a book that year by the journalist Joachim Barbier as “This country that doesn’t like football”, or Ce pays qui n'aime pas le foot, subtitled “why France misunderstands football and its culture”.

At a time when France has faced economic challenges and an increased threat from terrorism, football has the potential to boost the national mood. This year’s World Cup final will take place the day after Bastille Day, France’s national holiday. A victory by Les Bleus would give France good reason to claim le football revient chez lui – football is coming home – two decades after its iconic 1998 World Cup victory.



The Conversation

Jonathan Ervine does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Theresa May was right to reimpose collective ministerial responsibility – it's the only way to govern

Author: Stephen Clear, Lecturer in Law, Bangor University

It lasted for 48 hours. Two days after Theresa May told Conservative ministers that they must adhere to the convention of collective responsibility and support the agreed Brexit plan, the prime minister had to accept the resignation of her Brexit secretary, David Davis, and foreign secretary, Boris Johnson.

In his resignation letter, Davis wrote that he did not support the new agreed strategy and was following the collective responsibility convention in resigning.

Collective responsibility only concerns ministers in government serving within the cabinet. Dating back to the 18th century, it is a constitutional convention which holds that members of the cabinet should support all governmental decisions. While it’s a convention rather than a legal requirement, ministers are nonetheless expected to show a “united front” for all government actions and policies.

In practice, this means that decisions taken by the cabinet are binding on all its members. While a minister may disagree in private, they must still publicly support the agreed position. According to the Cabinet Manual, should a minister feel they cannot abide by the public “united front” requirement, then they must resign.

Perhaps one of the most famous examples of the convention in practice was the resignation of Robin Cook in 2003 as leader of the House of Commons for Tony Blair’s Labour government. Under the collective responsibility rules, Cook was unable to publicly speak out about his objections to the war in Iraq. Following the tenets of the convention, he resigned from his office, and spoke from the backbenches of his disagreement with the government’s position.

Such a principled approach to collective responsibility saw Cook receive a standing ovation. Nonetheless, such resignations over not toeing government lines are rare, as more often than not individual ministers want to hold on to government office.

While it is largely up to the prime minister to enforce the convention, it is seen as more politically honourable – and better for the party – for a minister to resign when they want to speak out against the government’s collective position.

Agreeing to differ

The Cabinet Manual makes it clear that collective responsibility applies in all instances, “save where it is explicitly set aside”. As the Labour prime minister James Callaghan remarked in 1977: “I certainly think that the doctrine should apply, except in cases where I announce it does not.”

The suspension of collective responsibility – otherwise known as an “agreement to differ” – is rare. Within the UK, it has only been implemented on six previous occasions – ranging from the first on the issue of tariff policy in 1932, to proposals for alternative voting systems during general elections under the 2010 coalition agreement.

Both referendums pertaining to the European Union – the first in 1975 on UK membership of the European Economic Community, and the second in 2016 on Brexit – carried a temporary suspension of collective responsibility on the specific issues.

Since David Cameron gave his cabinet freedom to differ over Brexit, there has been a progressive (and very public) weakening of cabinet collective responsibility.

Even before his resignation as foreign secretary, Johnson had repeatedly criticised the government’s approach to Brexit. The treasury minister, Liz Truss, has openly criticised “male macho” cabinet colleagues – in particular, the perceived “hot air” coming out of the Department for the Environment, with the suggestion that “wood-burning Goves” are trying to tell us how to live our lives.

Cameron only gave his ministers freedom to differ over Brexit. However, reinstating collective responsibility has been a significant challenge for May’s administration. And she has now lost two ministers who could not adhere to it.

Why it must now endure

For May’s administration to survive, collective unity – alongside confidence and trust – is now needed. Remaining within the cabinet, and publicly speaking out against an agreed direction, weakens unity, causes confusion, and undermines the leadership of the prime minister.

The convention is crucial because it is the government that leads the policy and direction of the country. It rests on the premise that unity is needed to deliver the government’s agenda, and that a united government projects stability, strength and leadership both domestically and overseas.

A united front among ministers is necessary for political stability. Without it, the UK’s ability to negotiate with the EU suffers – with economic and trade implications too.

The Conversation

Stephen Clear does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Why we explored an undisturbed rainforest hidden on top of an African mountain

Author: Simon Willcock, Lecturer in Environmental Geography, Bangor University; Phil Platts, Research Fellow, University of York

Atop Mount Lico in northern Mozambique is a site that few have had the pleasure of seeing – a hidden rainforest, protected by a steep circle of rock. Though the mountain was known to locals, the forest itself remained a secret until six years ago, when Julian Bayliss spotted it on satellite imagery. It wasn’t until last year, however, that he revealed his discovery, at the Oxford Nature Festival.

We recently visited the 700 metre-high mountaintop rainforest in an expedition organised by Bayliss, in collaboration with Mozambique’s Natural History Museum and National Herbarium. As far as anyone knew (including the locals), we would be the first people to set foot there (spoiler: we weren’t).

Since the rainforest’s discovery, Lico has received worldwide attention. That it captured the public’s imagination speaks volumes about how rare such places are. Humans are nothing if not adventurous, pushing our range boundaries like no other species can. But when almost every corner of the planet now shows signs of human activity, how do conservation scientists justify visiting and publicising these last bastions of untrodden nature?

From our perspective, the answer depends on what expeditions like this can teach us about the natural world, our place in it, and how to shepherd the wildest of places through the Anthropocene. Standing back and crossing our collective fingers is not always a winning strategy. This expedition formed part of a long-standing research programme into these mountains, which aims to provide evidence to legally protect Mozambique’s mountain forests. Currently, none of northern Mozambique’s mountains are formally protected, either nationally or internationally. Finding new species is one way to highlight the importance of such sites and justify their protection.

As well as exploring Mount Lico, the expedition was the first to undertake a biological survey of nearby Mount Socone. Every bit as majestic and species rich as the iconic Lico, Socone highlights the threat faced by many forests in Mozambique, Africa and elsewhere. Globally, a football pitch’s worth of forest is lost every second, driving countless species to extinction. The removal of trees from mountain slopes also leads to soil erosion, flooding in the wet season and water shortages in the dry season.

On our first day on Socone, we set out to locate the middle of the forest using a satellite image and GPS. However, the difference between what this image was telling us and what we could see was vast. As we walked towards what the image showed as the heart of lush rainforest, we could see the warm glow of the African sun. Soon enough, we emerged from beneath the canopy and into newly established farmland. Without the protective cover of the forest, heavy rains will pound these exposed mountain soils, fresh cuts will need to be made, and so the cycle repeats. Media attention on neighbouring Lico, and the new species descriptions coming out of both sites, help to bring these conservation and livelihood issues to the world’s attention.

Time capsules

The traces of our brief visit to Lico will soon be overgrown, and the plants and animals that live there will continue to be protected by the same towering cliffs (more than 125 metres high) that have saved them up to now (without the help of world-class climbers, our expedition would not have been possible).

But the impact of people goes far beyond where we have actually managed to set foot. Since the industrial revolution, humans have increased the amount of carbon dioxide in the atmosphere to levels higher than at any time in the past 400,000 years, increasing temperatures and changing weather patterns. Despite being situated on a fortress of rock, Lico’s forest is vulnerable to climate change, like every other ecosystem on the planet.

The contrast between protection from direct human activities but exposure to climate change means that Lico has a lot to teach us. Most forests experience both of these processes simultaneously, and so it is difficult to unravel their relative and interacting impacts. Through the data collected on Lico, Socone and other forests worldwide, we gain a greater understanding of how human disturbance affects the ability of forests to respond to environmental change.

Lico is a rare data point on this map: millennia of climate change and ecological response, played out in the absence of direct human disturbance. Reconstructing this history meant digging a two metre-deep pit in the forest, so that we could sample the layers of soil in the order that they accumulated. We tried to minimise any lasting effects on the forest (the hole was filled and topsoil replaced) but nonetheless, reasonable objections can be made against our disturbing this previously pristine site.

What we gained was a series of time capsules: each little tin of soil contains information on the plants that grew, the fires that burned and the water that flowed – data that will be shared in open-access repositories, allowing people worldwide to investigate this unique site without the need for further disturbance. What we learn from Lico will help the world understand how forests might be affected by future changes in climate.

So were we really the first humans on Lico? Well, not quite. To everyone’s surprise, we found ancient pots, ceremonially placed near the source of a stream that flows to a waterfall down the side of the cliff. Were these placed there during a time of drought, as the waterfall ran dry and the crops failed?

Archaeologists and climate scientists are investigating. Given that the pots pre-date local knowledge, the site’s incredible inaccessibility and the lack of any other signs of human activity, Lico’s forest remains one of the least disturbed on the planet. One thing’s for sure though – humans really do get everywhere.

The Conversation

Simon Willcock received funding for this expedition from Bangor University. The expedition was part-funded by the TransGlobe Expedition Trust, Biocensus, the African Butterfly Research Institute, DMM Climbing, and Marmot tents.

Phil Platts receives funding from the University of York's Environment Department. The expedition was part-funded by the TransGlobe Expedition Trust, Biocensus, the African Butterfly Research Institute, DMM Climbing, and Marmot tents.

We're working on a more accurate pollen forecasting system using plant DNA

Author: Simon Creer, Professor in Molecular Ecology, Bangor University; Georgina Brennan, Postdoctoral Research Officer, Bangor University

Hayfever.Alex Cofaru/Shutterstock

Most people enjoy the warmer, longer days that the summer months bring – but plant allergy sufferers will have mixed emotions. Roughly one in five Europeans suffers from allergic reactions to tree, grass and weed pollen, causing pollinosis, hay fever and allergic asthma.

Allergies to substances such as pollen are driven by errors in the body’s immune system, which means it mounts a response to otherwise benign substances from plants. On first exposure to pollen, the body decides whether some of the otherwise harmless proteins in the pollen are dangerous. If it decides they are, the immune system produces immunoglobulin E (IgE) antibodies in a process called sensitisation.

The next time the body is exposed to pollen, it remembers the proteins and mounts another response. The IgE antibodies detect the pollen in, or on, the body, and cause cells to release histamine and a variety of other chemicals. This results in symptoms ranging from itchy eyes and nose to production of mucus, inflammation and sneezing fits.

But while we know that “pollen” causes this response, at present we still don’t know all the types of pollen that cause the body to react.

Forecasting hay fever

In the UK, a daily pollen forecast is generated by the UK Met Office in collaboration with the National Pollen and Aerobiology Research Unit (NPARU) to help allergy sufferers. This forecast is created using data from a network of pollen traps which operate throughout the main pollen season (March to September) and measure how many pollen grains are present each day.

Sweet vernal, an early flowering grass.Author provided

Pollen from different types of tree can be identified using microscopes, but grass pollen grains all look the same. As a result, the pollen forecast for grasses (of which there are 150 types in the UK alone) is based on the broad, undifferentiated category of “grass”. That is despite grass pollen being the single most important outdoor aeroallergen.

We already know that different species of grass pollinate at different times in the year, and allergic reactions can occur at different times throughout the allergy season. What we need to figure out is whether allergies are caused by all species, specific species, or a combination of species of grasses. We also need to learn how pollen grains change in composition in time and space. While pollen is known for being very tough and is often well preserved in sediments, it can be very fragile in certain circumstances, such as bursting when in contact with rain drops.

To find out which grasses are linked to the allergic response, we need to know many things, such as where and when species of grass are releasing pollen. We also need to uncover how the pollen moves through the atmosphere, quantify the exposure of grass pollen species in time and space, and work out how allergies develop across broad geographical and temporal scales.

The #PollerGEN project

Our Natural Environment Research Council (NERC) PollerGEN project team is now working on a way to detect airborne pollen from different species of allergenic grass. We’re also developing new pollen source maps, and modelling how pollen grains likely move across landscapes, as well as identifying which species are linked with the exacerbation of asthma and hay fever.

We’re going to be using a new UK plant DNA barcode library, as well as environmental genomic technologies to identify complex mixtures of tree and grass pollens from a molecular genetic perspective. By combining this information with detailed source maps and aerobiological modelling, we hope to redefine how pollen forecasts are measured and reported in the future.
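
To give a flavour of how barcode-based identification works, here is a minimal sketch. It is emphatically not the project’s pipeline – real analyses align millions of sequencing reads against curated reference libraries using tools that tolerate sequencing errors – and the barcode snippets below are invented for illustration:

```python
from collections import Counter
from typing import Optional

# Invented, toy barcode snippets keyed by species. Real plant barcodes are
# curated, much longer marker-gene sequences (such as rbcL or ITS2).
BARCODES = {
    "Anthoxanthum odoratum (sweet vernal grass)": "ATCGGCTATTGGA",
    "Lolium perenne (perennial ryegrass)": "ATCGGCAATCGTA",
    "Betula pendula (silver birch)": "GGCTTACCAGTTA",
}

def identify(read: str) -> Optional[str]:
    """Assign a read to the first species whose barcode contains it."""
    for species, barcode in BARCODES.items():
        if read in barcode:
            return species
    return None  # unidentified read

# A toy "air sample": short fragments recovered from trapped pollen.
reads = ["ATCGGCTATT", "GGCTTACCAG", "ATCGGCAATC", "ATCGGCTATT"]
print(Counter(identify(r) for r in reads))
```

The gain over microscopy is exactly this per-species resolution: where a microscope sees undifferentiated “grass”, sequence matching can tell ryegrass from sweet vernal.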

We have just started the third year of pollen collection and hope to road test the combined forecasting methods over the next year. In the long run, our vision is to be able to provide specific pollen forecasts for grass, and unravel which species of grass pollen are most likely causing allergic responses. More broadly, we also want to provide information to healthcare professionals and charities, who can translate this information to help pollen allergy sufferers live healthier and more productive lives.

In the meantime, if you suffer from pollen allergies, sneeze or wheeze during spring, speak to a doctor or pharmacist to prepare an action plan. You can also get support from Allergy UK, and information about the pollen forecast from the UK Met Office.

The Conversation

Simon Creer receives funding from the Natural Environment Research Council.

Georgina Brennan does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Investigation gets underway over Carl Sargeant tragedy and Welsh first minister

Author: Stephen Clear, Lecturer in Law, Bangor University

The circumstances surrounding the tragic, untimely death of former Welsh Assembly member Carl Sargeant in November 2017 are yet to fully emerge. But now that the terms of reference for an independent investigation have been announced, it is hoped that the truth will be uncovered.

But what will the outcome mean for Wales’s first minister Carwyn Jones, in what is being portrayed as something akin to a trial of his actions?

On November 2, 2017, Sargeant – the assembly member for Alyn and Deeside – was sacked as the Welsh government’s secretary for communities and children. He was also suspended from the Labour Party, pending an investigation into alleged “unwanted attention, inappropriate touching or groping”. Five days later, he was found dead at his home in Connah’s Quay, Flintshire.

Since then there has been mounting pressure on the first minister to fully explain the events surrounding Sargeant’s dismissal. Sargeant’s family has claimed that he was deprived of justice and not informed of the details of the allegations against him.

Fellow AMs meanwhile have criticised the way Sargeant was dismissed – calling it “trial by media”. Others have gone as far as saying that Sargeant was bullied by the first minister’s office.

Jones claims, however, that he had no choice but to dismiss Sargeant based on the evidence he had received. Responding to further accusations of a “toxic environment” of bullying within the Welsh Labour party, Jones made a public statement in November 2017 – and referred himself to an independent assessment board. He also agreed to an independent inquiry into his actions.

Despite claims that the first minister caused the Sargeant family considerable distress, Jones has repeated that he has nothing to hide.

What to expect

Last week, the independent investigators, led by public lawyer Paul Bowen QC, announced their remit as being to look into Jones’s “actions and decisions in relation to Carl Sargeant’s departure from his post … and thereafter”.

It is important to note that this is an investigation, not an inquiry. Its legal authority comes from the first minister’s functions, set out in the Government of Wales Act 2006. This is rather than a formal inquiry under the Inquiries Act 2005.

This distinction means that the investigators will operate on a specifically tailored operational protocol basis, agreed by both the Welsh government and Sargeant’s family. These protocols set out the way information will be shared by the Welsh government and include a provision for confidentiality and the redaction of sensitive information from public reports. These exceptions are not too dissimilar to the restrictions on public access under the inquiries legislation.

However, unlike an inquiry, the investigators will not have the legal power to compel attendance of third parties, or the production of specific documents of interest. While the Welsh government’s permanent secretary Dame Shan Morgan has given reassurances that all staff will fully cooperate and provide all necessary documentation, this is not strictly a legal requirement for those outside the Welsh government.

The investigators have already made an open call for evidence and testimonies relating to events. This extends to the civil service staff – although there has already been concern over the way information is being collected and some of the new evidence that is emerging.

Inside the Welsh government

The most revealing aspects of the investigation are likely to come from the first minister’s communications – including messages sent using his private email addresses to cabinet ministers – which will give insight into the real climate within the Welsh government.

We are also likely to see extraordinary reports as Jones is questioned directly over his actions and scrutinised over his personal leadership and administration. The first minister has already announced his intention to stand down from the role, following his “darkest of times” in office. Nonetheless he has pledged to comply and see the investigations through to their conclusion.

Importantly, if reports that the claims made against Sargeant were aired publicly in the media – before he was given the details of them – are true, such a disregard for justice and due process would likely lead to wider questions about the fairness of the first minister’s powers. Similarly, there will be concerns over the way allegations are handled within public sector organisations, as well as how the ministerial codes are enforced in Wales.

The Conversation

Stephen Clear does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Eight bedtime stories to read to children of all ages

Author: Raluca Radulescu, Professor of Medieval Literature and English Literature, Bangor University; Lisa Blower, Lecturer in Creative Writing, Bangor University

Evgeny Atamanenko/Shutterstock

Speaking at the 2018 Hay Festival, His Dark Materials author Philip Pullman said: “To share a bedtime story is one of the greatest experiences of childhood and parenthood.” This couldn’t be more true. Besides helping sleepyheads absorb language through the familiar voices that nurture them, and understand the complexities of their world and the reasons behind their feelings, bedtime stories show how childhood can be the greatest adventure of all.

1. Toddle Waddle by Julia Donaldson

Age range: two to five years

Toddle Waddle, by Julia Donaldson.Macmillan Children's Books

Even the youngest child can engage with sound, colour and fun, and this book (illustrated by Nick Sharratt) is filled with bright joy and wonderful onomatopoeia. From the sound of flip-flops to the excitement of slurping a drink at the beach and the music made by different instruments, the sounds, then words, are a wonderful introduction to the intricacies of language.

2. Mr Men & Little Miss books by Roger Hargreaves

Age range: three years+

Hargreaves’ colourful 2D characters behaving to type are a wonderful way to identify with basic emotions by interpreting colour as a feeling. As journalist and author Lucy Mangan puts it in her memoir Bookworm: “Of course uppitiness is purple. Of course happiness is yellow.” These are no-fuss, easy-to-follow collectables – and bitesize too, so you can gobble through second helpings before turning out the light.

3. The Lorax by Dr Seuss

Age range: three to eight years

The Lorax, by Dr Seuss.HarperCollins

No child should grow up without The Lorax. They’ll never be the same when they’ve learned about the Swomee-Swans, Humming-Fish and Bar-ba-loot bears, their Truffula trees being cut down by the mysterious and scruple-free Once-ler. While the environmental message of the book is even more urgent now than it was when The Lorax was first published in 1971, the story is just as entrancing, instructive – without preaching – and, above all, as hopeful as ever. A wonderful wise Lorax speaks for the trees, and for all the world’s children, who want to keep the future green.

4. My Big Shouting Day, by Rebecca Patterson

Age range: two to eight years

A funny picture book for younger readers that will resonate with many parents for its keen perspective on patience. It positively encourages under-fours to shout along with grumpy Bella, who gets up on the wrong side of the bed. It shows the child that it’s OK to feel angry – heck, they’ll be a teenager soon enough – but it also gives them permission to express it, and reminds them that tomorrow is always a new day.

5. The Moomin books by Tove Jansson

Age range: three to eight years

The Moomins’ home, Moominvalley, is a place of wonder and fun, populated by fairy-like, round creatures that resemble hippopotamuses, but enjoy human hobbies such as writing memoirs (Moomin papa), making jam (Moomin mama), and playing make-believe (Moomintroll and Snork Maiden). Their adventurous side comes out at all opportunities, stirred by friends Little My and Snufkin, or by mysterious intruders.

First published between 1945 and 1970, in recent years the stories have been tailored for both younger (soft and flap books) and older children (hardback storybooks). The Moomin books tell dream-like stories while tackling questions about love, friendships, encounters with strangers, and so on. An all-round winner.

6. Alice in Wonderland by Lewis Carroll

Alice, by John Tenniel.Wikimedia

Age range: four to 11 years

The first true book written for children about children never fails to bewitch and baffle. Young Alice-like readers can explore the topsy-turvy Wonderland, while the grown-ups reading to them will appreciate the metaphorical Mad Hatter and the role of the White Rabbit as leader in the adventure in a way they wouldn’t have been able to as a child. Carroll’s book is a celebration of a child’s wonder and curiosity, and fears of growing bigger too. It invites you to talk dreams and nightmares, to accept the weird and extraordinary and, best of all, to conjure up your own adventure down the rabbit hole. It’s a rite of passage, ideal for sharing.

7. Norse Myths: Tales of Odin, Thor and Loki, retold by Kevin Crossley-Holland

Age range: five to 12 years

In a world where comic book superheroes and heroines reign supreme, these legends can entrance a young mind forever. This selection of Norse myths brings all the gritty dark stuff about trickster Loki together with tales of hammer-wielding Thor, and the machinations of Asgardean king Odin and goddess of love, battle and death, Freyja. It tickles the imagination of the young and challenges the parent too. Fabulous illustrations by Jeffrey Alan Love accompany Crossley-Holland’s delightful retelling, bringing these ancient stories to life in a way that no other anthology has.

8. Charlie and The Chocolate Factory by Roald Dahl

Age range: eight to 12 years

Charlie and the Chocolate Factory, by Roald Dahl.Penguin Random House

This chocolate wonderland is the perfect read-aloud book, thanks to Dahl’s masterful use of the English language. Amid all the magic and invention is a wagging finger providing moral lessons on the perils of being greedy, a brat, or overly competitive – and that goes for the adult reader too. Thank goodness then for Willy Wonka, the man who really never grew up, and his band of Oompa-Loompas, who punish the bad, reward the good, and then provide a reason for it all through song.

In truth, there is no right book to share – there are plenty of them available these days – nor should there be any chronological order to how and what we read. These are just some suggestions on ways to make bedtime a little more magical. But never underestimate how marvellous it can be to reread a childhood favourite to the little one you’re now tucking in to bed. It could inspire a passion for reading and spark an interest that lasts a lifetime.

The age ranges used in this article are mostly based on interest and reading level ratings from Book Trust.

The Conversation

Nothing to disclose.

Lisa Blower does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.