On our News pages
Our Research News pages contain an abundance of research-related articles, covering recent research output and topical issues.
Our researchers publish across a wide range of subjects and on a range of news platforms. The articles below are a few of those published on TheConversation.com.
Administrative justice can make countries fairer and more equal – if it is implemented properly
Author: Sarah Nason, Lecturer in Administrative Law and Jurisprudence, Bangor University
There is a little-known but hugely important justice system which impacts everyone’s life – administrative justice. Made up of various bodies (including courts, tribunals, complaint handlers and more), it is concerned with the laws surrounding decision-making and dispute resolution by public bodies. In many countries, it deals with more cases than criminal or private civil justice.
The system also ensures that government officials make correct decisions in areas such as housing, education, health and social care, and immigration. These decisions often have the greatest impact on the most vulnerable in society. How a country enables people to seek redress against public bodies is an indicator of its approach to the values of equality and dignity. It is something which becomes crucially evident when an administrative justice system fails, such as in the cases of Grenfell Tower and the Windrush scandal.
For my newly published report, I looked at how improving an administrative justice system can make a nation fairer and more equal. The report, which focused on Wales, concludes that administrative justice is a cornerstone of social justice. It is an alternative means of holding the state to account, particularly when the effectiveness of traditional legal and political accountability methods is questioned.
But in many nations, administrative justice is complex. It can be hard to determine which level of government (federal, regional, devolved or even supranational) has responsibility for different aspects of law and redress. This complexity might be one reason why the system can be “unseen, ignored and unloved” by both policy-makers and the media. Another reason is that redress is seen as a one-off interaction to resolve a dispute. The broader potential to view disputes as symptoms rather than causes of unfairness and inequality can be missed.
The Welsh approach
Across the world, administrative justice systems have not been the product of design. Institutions and procedures have developed bit by bit over time in response to particular public administration problems, and as a consequence of deep-rooted social and economic changes. Take, for example, the growth of UK tribunals after the establishment of the welfare state, or the expansion of ombudsmanry as a non-court-based alternative for seeking efficient and effective justice, as well as the other ad hoc redress measures for discretionary welfare payments. This mix can make it hard for people to know what their rights are, and how to seek redress when public bodies make unfair – or just plain wrong – decisions.
There have been some UK initiatives to raise the profile of, and provide research into, administrative justice, and a new oversight body has recently been established. International efforts have meanwhile focused on harmonising general principles of law and good administration that apply to public bodies. But despite the good work these initiatives have done, what was actually needed was a consideration of whether the whole system of institutions and procedures is clear and accessible. In Wales, things are being handled very differently.
Wales has an emerging legal jurisdiction, and is currently questioning the future of its justice system. A Welsh view is that good administration is good for you, and that all public decision-making – from government ministers down to individual officers in local authorities – should respect principles of sustainability and equality, and the rights of children, older people, the disabled and future generations.
In the areas over which Wales has devolved power, it is beginning to focus on “ways of working” too. This requires public bodies to collaborate with each other to prevent problems occurring. They must also involve citizens throughout the planning and delivery of public services, and in resolving disputes.
Wales doesn’t have a perfect system just yet. It is facing challenges familiar to other legal jurisdictions seeking to ensure laws are properly implemented and checks on power are sufficiently robust. Globally, law and redress procedures are increasingly fragmented across a range of sources and institutions. For example, an array of “integrity” institutions have developed in Wales (and elsewhere), including ombudsmen, auditors, and inquiry processes. While integrity institutions have an important role to play, their work in promoting good decision making is not a substitute for effective legal redress. Yet in some jurisdictions their growth has coincided with significant austerity-related cuts to court and tribunal based remedies.
Across the world, people may have access to more types of institutions than ever before through which to seek redress. But there is still insufficient clarity about their powers, functions and accountability, and about how they are supposed to interact with each other. Accessing administrative justice can be a minefield – or, as the Public Services Ombudsman for Wales has put it: “Opaque justice is no justice”.
Wales is certainly on the right track but its administrative justice system still needs work. I have made several recommendations, including revising and consolidating legislation, ensuring the system is underpinned by a clear and consistent set of principles, and improving National Assembly oversight.
The United Nations describes the Welsh approach to the rights of future generations as “world leading”. Wales also now has an opportunity to lead international best practice in administrative justice.
Sarah Nason receives funding from the UK Economic and Social Research Council and the Nuffield Foundation
Psychotherapy can make you richer – especially if you are a man
Author: Noemi Mantovan, Senior Lecturer in Economics, Bangor University; Guido Cozzi, Professor of Macroeconomics, University of St. Gallen; Silvia Galli, University of St. Gallen
Psychotherapy is good for mental health, but it can be very expensive too. As economists we try to carefully model and evaluate the monetary effects of different actions and policies. So, for our recent study we decided to use our methodologies to look into psychotherapy, and work out how it can affect labour income.
We analysed British Household Panel Survey (BHPS) data collected between 1995 and 2008. This survey observes the characteristics and decisions of 2,943 men and 5,064 women over time. The participants are randomly selected so that they statistically represent a much larger UK population. Using this information, we looked both at the effect of psychotherapy on mental health (measured using the general health questionnaire, which is used to identify common psychiatric conditions) and income. The results are clear and robust: psychotherapy helps people improve not only their future mental health, but also their future income.
However, when looking at the data it soon became clear that results would differ according to gender. In fact, it turns out that men benefit significantly more economically than women from psychotherapy.
The BHPS data shows that men who reported having had stress and mental problems, and consulting a psychotherapist, experienced an income increase of 13% in the subsequent year. For women the income increase was only 8%. Though different, the boost is substantial for both genders and reflects an increase in productivity resulting from psychotherapy, with an associated reduction of poverty. Needless to say, in our analysis we filtered out the effect of several other factors affecting income – such as education, children, marital status, type of occupation, age, and more – to find a direct link between income and therapy.
Looking deeper into how and why the results vary by gender, we saw that even with the same level of mental health, women seek help more often than men. In the 13 years of data we examined, an average of 23% of women went into therapy at some point, compared to 15% of men. According to our data, consulting a psychotherapist helps women nearly twice as much as men in terms of mental health (1.2 points versus 0.5 points on the 36-point general health questionnaire scale). The logical conclusion would be that women benefit more from psychotherapy, but this is not so when looking at income.
The 13% and 8% estimates can also be used to calculate a gender wage gap decomposition. A decomposition is used to explain how different factors contribute to the difference in earnings between men and women. Here, the decomposition shows that psychotherapy accounts for 2% of the 5% difference between men’s and women’s boosts in earnings – even though more women than men seek treatment. To understand why, we need to look beyond economics and into psychology: gender discrimination in the workplace is highly correlated with poor mental health. If part of the reason why women suffer from poor mental health is a hostile work environment, then improving their mental health will not have the same effect as it does for men. Personal psychotherapy cannot solve the problems of a discriminatory workplace.
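As a stylized illustration – this is not the authors’ decomposition method, and uses only the percentages quoted above – the basic arithmetic behind comparing the two groups’ returns can be sketched like this:

```python
# Stylized sketch, NOT the study's estimation procedure. The 13%/8%
# returns and 15%/23% take-up rates are the figures quoted above.

men_return = 13.0     # % income increase after psychotherapy (men)
women_return = 8.0    # % income increase after psychotherapy (women)
men_takeup = 0.15     # share of men who consulted a psychotherapist
women_takeup = 0.23   # share of women who did

# The raw gap in returns: 5 percentage points.
gap_in_returns = men_return - women_return

# Take-up-weighted contribution of therapy to each group's
# average earnings growth (a simple expected-value calculation):
men_contrib = men_takeup * men_return        # 1.95 percentage points
women_contrib = women_takeup * women_return  # 1.84 percentage points

print(gap_in_returns, round(men_contrib, 2), round(women_contrib, 2))
```

Even with women’s higher take-up, the larger per-person return for men keeps therapy’s weighted contribution to average earnings growth slightly higher for men – the kind of asymmetry a formal decomposition quantifies.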
But our findings don’t mean that anyone should rush out to pay for therapy in the hope of an income boost. Increasing the provision of free or affordable mental health care is paramount to obtaining a healthy and productive society – income increases of 13% and 8% per capita are well worth the cost. What’s more, stigmatising social norms and gender discrimination in the workplace remain huge obstacles to a healthier society, and need to be properly addressed. The UK has taken several steps in recent years – including the government’s Five Year Forward View, with its recommendations for overhauling different areas of the NHS, including mental health – but a lot more can be done to remove the stigma, and to increase and improve mental health services.
The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.
Ocean acidification will increase the iodine content of seaweeds – with consequences for the billions of people who eat them
Author: Georgina Brennan, Postdoctoral Research Officer, Bangor University; Dong Xu, Associate Researcher, Yellow Sea Fisheries Research Institute, Chinese Academy of Fishery Sciences; Naihao Ye, Professor, Yellow Sea Fisheries Research Institute, Chinese Academy of Fishery Sciences
Evidence is rapidly accumulating that ocean acidification and elevated temperatures will have catastrophic consequences for marine organisms and ecosystems. In fact, it is something we are already witnessing. Coral reefs are bleaching, snails and other calcifying marine organisms struggle to build their shells, scales and skeletons, and juvenile marine animals even struggle to navigate to suitable habitats.
Yet many primary producers, including seaweeds, are predicted to thrive in the acidic oceans of the future – as they use CO₂ from the seawater to produce energy by photosynthesis.
Humans have eaten seaweeds for tens of thousands of years and today the diets of billions of people, especially in Asia, are based on cultivated seaweeds. However, while future ocean conditions may improve the yield of farmed seaweeds, we do not know how the nutritional content of seaweeds will be affected by climate change. To investigate this, we recently looked into how the iodine content of seaweeds will be affected by future climate change scenarios.
Seaweeds are one of the best natural sources of iodine, and this essential mineral is used by the body to make thyroid hormones. But both too much and too little iodine can change the way the body’s thyroid gland works. If climate change were to affect the amount of iodine in seaweed, humans – and other animals – who rely on it as a staple part of their diet may suffer serious health problems.
Creating acid oceans
For this recently published study, we simulated current and future ocean acidification conditions in laboratory and outdoor settings. To conduct the outdoor experiments, we enclosed seawater in cages made of very fine-mesh polythene nets, so that environmental conditions such as CO₂ and temperature could be manipulated and responses monitored, while all other conditions remained the same as in the natural environment.
We used three kelp species – Saccharina japonica, Undaria pinnatifida and Macrocystis pyrifera – as well as the coastal seaweeds Ulva pertusa, Ulva intestinalis, Gracilaria lemaneiformis and Gracilaria chouae for the research. With the exception of M. pyrifera, these seaweeds are widely consumed by humans across the world – for instance, in sushi, soups and the Welsh delicacy laverbread. M. pyrifera was selected as it is a preferred food source of marine invertebrates, such as sea urchins and abalone, which are harvested by the fishing industry.
In ocean acidification research like this, oceanographers monitor the partial pressure of CO₂ in seawater. This figure reflects the amount of dissolved CO₂; it is measured in microatmospheres (µatm) – numerically similar to parts per million – and is an indicator of how acidic the oceans are. The Intergovernmental Panel on Climate Change predicts that CO₂ in the oceans will more than double by the year 2100 – rising from current levels of 400 µatm to 1,000 µatm – if no mitigating action is taken against climate change.
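To put that projection in perspective, a back-of-the-envelope calculation (our own illustration, not part of the study – the IPCC scenarios are not a constant-growth projection, and the 80-year span from roughly the present to 2100 is an assumption) shows the steady annual growth rate that such an increase would imply:

```python
# Back-of-the-envelope: what constant annual growth rate takes
# seawater pCO2 from 400 µatm (quoted above) to 1,000 µatm by 2100?
# Illustrative only; real emissions scenarios are not constant-growth.

p_now = 400.0    # current seawater pCO2, µatm
p_2100 = 1000.0  # projected seawater pCO2 in 2100, µatm
years = 80       # assumed span from roughly the present to 2100

annual_growth = (p_2100 / p_now) ** (1 / years) - 1
print(f"{p_2100 / p_now:.2f}x increase, ~{annual_growth * 100:.2f}% per year")
```

A 2.5-fold rise over 80 years corresponds to growth of only a little over 1% a year – a reminder of how steadily compounding small changes produce the “more than double” headline figure.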
We created these future ocean acidification conditions by blowing CO₂ bubbles into the seawater, and measuring the µatm. We then grew seaweeds in eight climate scenarios in the lab and two climate scenarios in the field. These ranged from current levels of CO₂ and temperature to future ocean acidification and elevated temperature scenarios.
Iodine and seafood
We found that seaweeds grown in conditions which followed future ocean acidification predictions accumulated more iodine than seaweeds grown in present-day conditions. However, in the scenarios we tested, elevated temperature was not as important as ocean acidification in causing iodine accumulation in seaweeds. This means that while we expect the yield of a very important food crop to increase under future climate change, levels of iodine will also increase, affecting human nutrition.
We also traced elevated iodine content from seaweeds to their consumers. Natural consumers of seaweeds such as fish and shellfish are also a rich dietary source of iodine for humans. Using an outdoor feeding experiment, we examined the effect of consuming seaweeds under future ocean acidification conditions on the edible shellfish, abalone (Haliotis discus). We found that iodine concentrations increased in shellfish tissue after eating seaweeds with elevated iodine concentration. In addition, we saw that the concentration of thyroid hormones in the shellfish tissue decreased. This provides evidence that ocean acidification impacts the quality of seafood by changing the concentrations of an essential mineral with consequences for consumers.
There is a risk that, as the world’s climate continues to change, people who eat seaweed as a staple part of their diet may consume too much iodine, which can lead to a wide range of health problems. Since seaweeds and shellfish underpin the nutrition of billions of people around the world, it is essential to understand how the iodine content of seafood will change under global climate change. This information could, for instance, be used by the World Health Organization to provide recommendations on appropriate levels of seaweed consumption to maintain a sufficient daily iodine intake.
The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.
Agroforestry can help the UK meet climate change commitments without cutting livestock numbers
Author: Charlotte Pritchard, PhD Researcher, Bangor University
Some 12m hectares of the UK are currently covered by agricultural grasslands, which support a national lamb and beef industry worth approximately £3.7 billion. However, proposals have been made that this landscape should undergo radical changes to aid the country’s climate change commitments. A controversial advisory report recently produced by the independent Committee on Climate Change (CCC) calls for UK lamb and beef production to be reduced by up to 50%. It claims that by replacing grazing land with forestry the UK will be able to substantially decrease its greenhouse gas (GHG) emissions.
The National Farmers’ Union has responded to the report, stating that there are no plans to reduce livestock numbers. Lamb and beef production is an important part of the UK’s cultural heritage, and is vital for supporting rural communities. The industry also provides the country with a supply of high-welfare, locally sourced meat. In fact, the UK is the top lamb producer and the third largest beef producer in the EU, and in 2016 it was 76% self-sufficient in terms of its own food production. But lamb and beef production is also the greatest contributor to agricultural GHG emissions – the CCC report states that, in 2016, lamb, beef and dairy production combined accounted for around 58% of UK agricultural emissions.
Sheep and cattle grazing is also an integral part of how upland landscapes are currently managed. This is particularly true for Scotland, where managing the upland landscape is important for supporting other industries, such as game bird production. These upland systems have great potential for afforestation – the planting of trees in previously unforested areas – though this doesn’t necessarily have to result in a decrease in livestock numbers.
Planting trees is a crucial step in the fight against climate change. Trees act as a carbon sink for CO₂ and also provide a source of different biofuel products. Previous planting schemes have seen success: between 1990 and 2010, the area of the UK covered by woodland increased from 2.6 to 2.8 million hectares. But grazing land need not be taken away for the sake of this environmental initiative. Afforestation plans can be sensitive to the aforementioned socioeconomic and cultural factors if a balanced approach is taken.
So what can be done? Agroforestry might be a way to meet the Committee on Climate Change’s recommendation to release three to seven million hectares of grassland for afforestation without affecting the UK’s food supply.
Under agroforestry schemes, new woodlands are grown and existing trees are cultivated on farmlands. The aim is to optimise farming systems by incorporating woodland into them rather than replacing grazing land with trees. Planting trees and hedgerows improves grass growth, protects against flooding and topsoil erosion, increases farmland biodiversity and provides a source of natural shelter for livestock. And if the trees are used for biofuel or timber they can provide additional farm income.
Agroforestry schemes can improve animal welfare too. The 2018 lambing season resulted in an unprecedented lamb mortality rate. But it has been shown that, by providing a source of natural shelter, lamb mortality rates can be reduced by up to 50% during inclement weather.
Projects like this are already in place – for example, the Welsh government’s Glastir scheme. Launched in 2012, this pan-Wales sustainable land management scheme rewards farmers financially for adhering to environmental guidelines. It must be noted, though, that while Glastir has proven more effective than previous agri-environmental schemes, it has been criticised for its lack of measurable outcomes and its limited uptake by Welsh farmers.
With Brexit looming, now is the perfect time for agricultural reform, as the country revisits its current land use policies. For an industry that is currently so reliant on EU subsidies, there is a strong incentive to optimise production methods. Government discussions are already well under way over how to bring together the agriculture and forestry sectors in order to better manage pastoral landscapes. If agroforestry is incorporated into these new agricultural policies and subsidy schemes, there will be huge benefits for farmers, conservationists, the general public and the livestock they rely on.
Charlotte Pritchard receives funding from KESS 2, a pan-Wales higher level skills initiative led by Bangor University on behalf of the HE sector in Wales. It is part funded by the Welsh Government’s European Social Fund (ESF) convergence programme for West Wales and the Valleys.
Madagascar: fear and violence making rainforest conservation more challenging than ever
Author: Julia P G Jones, Professor of Conservation Science, Bangor University
People are too afraid to return to the village so they are sleeping in the forest or have left altogether. They have lost their stored grain and all their belongings. I don’t know how they will get by.
These are the words of Riana*, a young woman from Bevoahazo, a tiny village in the eastern rainforests of Madagascar. Bevoahazo sits on the edge of Ranomafana National Park in a UNESCO world heritage site teeming with endangered and endemic species. Security in the area has been deteriorating over the last few years but things have escalated recently.
On November 24, 50 men raided the village stealing stores of rice – vital food reserves for local people who are mostly subsistence farmers – and injuring anyone who tried to defend their property. A few days later the local police chief, Heritiana Emilson Rambeloson, who had come to the area with a small team to investigate, was shot dead.
I spent two years living in Bevoahazo in the early 2000s while researching the sustainability of crayfish harvesting. I have spoken to friends from the village, who are currently staying in the nearby town of Ranomafana for safety, and to researchers in the area, to get a better understanding of what is happening.
Bandits and biodiversity
Patricia Wright, a professor of anthropology, has spent more than 30 years working in Ranomafana. She directs the Centre Valbio, an internationally renowned conservation research centre situated on the edge of the forest. She said:
The security situation is at crisis point. This is leading to real human suffering in one of the most important places for biodiversity on the planet. The [murdered policeman] was smart, dedicated to his job and was interested in wildlife and the importance of the forest. A genuine friend. We will miss him.
The recent death comes just months after a member of Valbio staff was killed by bandits. Jean François Xavier Razafindraibe was killed when armed men raided his village close to the park entrance in June 2018.
Ranomafana National Park was established by the Malagasy government to protect its globally important biodiversity. As part of the Forests of Atsinanana it is home to a number of critically endangered endemic lemurs such as the golden bamboo lemur and the black-and-white ruffed lemur.
Ranomafana is a popular tourist spot in Madagascar, with stunning scenery, rare wildlife and a friendly, sleepy town nearby. So far the insecurity hasn’t affected tourism. As Wright says:
The bandits steer clear of tourists, but the villagers are living a life of fear.
Gold mining’s dark influence
Miners panning for gold illegally in the forest interior are a source of the insecurity. This has been an ongoing issue for many years but has become much more difficult for the park authorities to control. The miners pollute rivers, clear the rare swamp forest and hunt endangered wildlife for meat.
The situation is complicated. Armed cattle thieves known as dahalo are causing havoc in many areas of Madagascar. A recent estimate suggests they have caused 4,000 deaths in the last five years alone.
In 2017, the mayor of the neighbouring town of Ambalakindresy, Elysé Arsène Ratsimbazafy, was shot dead in what is widely believed to have been a hit. He had run for election on a platform of ridding the town of the bandits and had cooperated with efforts to get the miners expelled from the national park interior.
Mar Cabeza, a professor of biology at the University of Helsinki, returned from the area a few days ago. She said:
The gold mining has escalated in recent years and differs greatly from previous subsistence-related threats. The widespread fear has negatively affected both research and conservation management.
One of Cabeza’s PhD students, Marketta Vuola, had been due to conduct research in the attacked villages recently, but was warned of the danger and moved to another village. Vuola told me:
News spread fast, with all villages in the region being afraid. We spent last night hiding, with our day packs ready to escape to the forest.
There has been a robust response to the recent series of attacks. The district quickly sent reinforcements of 80 police. This will hopefully reassure the local population, allowing people to return to their village, and will reduce the immediate threat.
This reassurance is essential as my old friend Koto* told me over the phone:
People need to be able to get back home to tend their crops; if they can’t do this they will suffer even more.
However, the rise in insecurity reflects a wider problem of respect for the rule of law in Madagascar. Jonah Ratsimbazafy, a professor of paleontology at the University of Antananarivo in Madagascar, said:
If you focus on what is happening, then you will lose your hope for Madagascar. We must focus on the solutions. Good governance is crucial in order to develop the economy of Madagascar and for saving the irreplaceable biodiversity.
Madagascar will elect a new president on December 19. People in Bevoahazo, and throughout Madagascar, are hoping that the new government can bring the change so desperately needed.
*Names changed to protect identities.
Julia P G Jones does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
What will planet Earth be like when the next supercontinent forms?
Author: Mattias Green, Reader in Physical Oceanography, Bangor University; Hannah Sophia Davies, PhD Researcher, Universidade de Lisboa; Joao C. Duarte, Researcher and Coordinator of the Marine Geology and Geophysics Group, Universidade de Lisboa
The Earth’s outer layer – the solid crust we walk on – is made of broken pieces, rather like the shattered shell of an egg. These pieces, the tectonic plates, move around the planet at speeds of a few centimetres per year. Every so often they come together and form a supercontinent, which remains in place for more than 100 million years until it breaks up as the plates disperse. Then, after a gap of between 400 and 600 million years, the process repeats itself.
The last supercontinent, Pangea, formed around 310 million years ago and began to break up roughly 180 million years ago. The next is expected to form in 200-250 million years’ time, which means we are currently about halfway through the dispersal phase of the current supercontinent cycle. The question is: how and why will the next supercontinent form?
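Some quick arithmetic makes these timescales concrete (a sketch of our own: the 5 cm/yr figure is a hypothetical round number within the “few centimetres per year” range noted above, sustained over the lower end of the 200-250 million year estimate):

```python
# How far can a tectonic plate drift before the next supercontinent
# forms? Assumes a hypothetical round speed of 5 cm/yr held constant
# over 200 million years (the lower end of the estimate above).

speed_cm_per_year = 5.0
years = 200_000_000

distance_cm = speed_cm_per_year * years
distance_km = distance_cm / 100 / 1000  # cm -> m -> km
print(f"{distance_km:,.0f} km")
```

That works out to 10,000 km – about a quarter of the Earth’s circumference, which is why a few centimetres per year is ample to rearrange the continents within a single cycle.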
There are, fundamentally, four likely scenarios for its formation: Novopangea, Pangea Ultima, Aurica and Amasia. How each might come about depends on different factors, but all are linked to the way Pangea broke apart and to how the continents are moving today.
The break-up of Pangea led to the formation of the Atlantic Ocean, which is still opening and getting wider. As a consequence, the Pacific Ocean is getting narrower. The Pacific hosts a ring of subduction zones along its edges (the Pacific Ring of Fire), where the ocean floor is pulled down beneath the continental plates and into the planet’s interior. There, the old ocean floor is recycled and can feed volcanic plumes. The Atlantic, by contrast, has a large mid-ocean ridge producing new plate material, but hosts only two subduction zones: the Lesser Antilles Arc in the Caribbean, and the Scotia Arc between South America and Antarctica.
1. Novopangea
If current conditions persist – the Atlantic continuing to open and the Pacific closing – the next supercontinent would form on the opposite side of the planet from Pangea. The Americas would collide with a northward-drifting Antarctica, and then with the already merged Africa and Eurasia. The resulting supercontinent has been named Novopangea, or Novopangaea.
2. Pangea Ultima
The opening of the Atlantic may, however, slow down and even begin to reverse in the future. The two small subduction arcs in the Atlantic could then spread along the entire east coast of the Americas, leading to a re-formed Pangea as the Americas, Europe and Africa collide. The result would be a supercontinent called Pangea Ultima, surrounded entirely by a super Pacific Ocean.
3. Aurica
However, if the Atlantic were to develop new subduction zones – something that may already be happening – both the Pacific and the Atlantic could close. That would mean a new ocean basin would have to open to replace them.
In this scenario, the Pan-Asian rift that cuts through Asia, from west of India up to the Arctic, would open to form the new ocean. The result would be the supercontinent Aurica. Because of Australia’s current northward drift, it would sit at the centre of the new continent, with east Asia and the Americas closing the Pacific on either side. The European and African plates would then rejoin the Americas as the Atlantic closed.
4. Amasia
The fourth scenario predicts a completely different fate for the future Earth. Several tectonic plates, including Africa and Australia, are currently moving north. This drift is thought to be driven by anomalies in the Earth’s interior (specifically, in the mantle) left over from Pangea. Because of this northward drift, we can imagine a scenario in which all the continents except Antarctica keep travelling north, eventually gathering around the North Pole in a supercontinent called Amasia. In this scenario, both the Atlantic and the Pacific would remain mostly open.
Of these four scenarios, we consider Novopangea the most likely. It would follow logically from the directions in which the continental plates are drifting today, whereas the other three would each require an additional process to come about.
For Aurica to form, new subduction zones would have to appear in the Atlantic.
Pangea Ultima would only form if the opening of the Atlantic reversed.
And the birth of Amasia would depend on the Pangea-inherited anomalies in the Earth’s interior.
Investigating the Earth’s tectonic future forces us to push the limits of our knowledge and to think about the slow processes that shape our planet. It also leads us to look at the Earth system as a whole, and raises a series of questions: what will the climate of the next supercontinent be like? How will ocean circulation adjust? How will life evolve and adapt to its new environment? These are the kinds of questions that test the limits of science, because they test the limits of our imagination.
Mattias Green recibe fondos del Natural Environmental Research Council (Reino Unido).
Hannah Sophia Davies recibe fondos de la Fundação para a Ciência e a Tecnologia (Portugal).
Joao C. Duarte recibe fondos de la Fundação para a Ciência e a Tecnologia (Portugal).
Rare woodland wildlife at risk because of 50-year-old tree felling rules
Author: Craig Shuttleworth, Honorary Visiting Research Fellow, Bangor University
In the UK it is illegal to deliberately kill or injure red squirrels, disturb them while they are using a nest, or destroy their nests. Yet, although the 1981 Wildlife and Countryside Act provides these protections, there is a legal anomaly in England and Wales – one that can potentially undermine the conservation of the red squirrel, along with every other rare and endangered forest plant or animal species. Although rare woodland species are protected, the habitat they dwell in is generally not.
Timber harvesting requires a licence – although there are some very limited exceptions where this permission is not needed, for example due to public safety, or where small volumes of wood are being cut. But under the 1967 Forestry Act, applications in England and Wales cannot be refused for “the purpose of conserving or enhancing” flora or fauna (though they can be refused for this purpose in Scotland). Nor can licence conditions be imposed for this reason. No matter how rare, how vulnerable or how much effort has gone into the regional conservation of a species, there are no exceptions to this.
A timber felling licence does not sweep aside the legal protection that animals such as the red squirrel have – and a precautionary approach is advisable when felling in woodlands containing this species. Nevertheless, the possession of a felling licence opens a loophole, because the wildlife legislation protecting the red squirrel provides the defence of an “incidental result of an otherwise lawful operation”. So, with a licence in hand, woodlands containing this threatened species can be clearfelled because tree harvesting is a lawful operation.
Changing the rules
The solution is clearly to amend the Forestry Act to better align timber harvesting and wildlife protection laws. Harmonising UK forestry legislation would allow better timing, methods and patterns of tree harvesting to be guaranteed in habitats containing any rare species. Additionally, while licensing authorities currently can only assess each felling licence application in isolation, legislative change would enable the cumulative impact of granting a licence to be considered in relation to felling that had previously been approved. This would stop management of rare woodland species on specific sites from being at the mercy of timber prices and market economics.
Commercially managed forests provide jobs and produce valuable products. As the modernised laws in Scotland show, the forest industry operates quite successfully where timber harvesting licence applications can consider wildlife impacts. Amendment in England and Wales would deliver similar integration.
Consequently, the ethical credentials of the timber harvesting industry would be strengthened. In an age where consumers want confidence that timber products they purchase have not destroyed wildlife populations, this is essential. It is already commonplace for products made of UK-sourced wood to have the Forest Stewardship Council (FSC) logo. The FSC signifies the wood is from sustainable sources managed with a high regard for wildlife conservation. So amendment of the 1967 Forestry Act would give greater consumer confidence in supply chains and also reinforce the credibility of the global FSC forest certification scheme itself.
Since the 1980s, the forestry sector has increasingly balanced commercial, societal and environmental imperatives. Consequently, there will be times, should the law change, when refusal of a logging licence to conserve biodiversity is an unavoidable trade-off. Here it is important to stress that the forest industry receives state grants to support crop establishment and protection. The taxpayer therefore has a right to ensure that forests are managed sympathetically for wildlife. We should not forget that commercial plantations can be vitally important for wildlife, and without them many species would be much rarer.
On the other hand, some felling will inevitably still be licensed even though operations will adversely affect individual animals of a protected species through habitat loss or alteration. Although such decisions may be unpopular with local people, it is common for wildlife management strategies to focus on population level conservation targets rather than at the individual animal level.
I believe an amendment to the Forestry Act is overdue. Regulatory change will empower authorities with the legal tools to achieve a better balance between often competing forest management objectives. It will benefit wildlife and the UK timber industry too.
Craig Shuttleworth currently works in the Red Squirrels United project EU LIFE14 NAT/UK/000467. He is an advisor to European Squirrel Initiative, Red Squirrels Survival Trust and the Zoological Society of Wales. He is urging Government to amend the 1967 Forestry Act.
Chemsex and PrEP reliance are fuelling a rise in syphilis among men who have sex with men
Author: Simon Bishop, Lecturer in Public Health and Primary Care, Bangor University
No one is entirely sure about the origins of syphilis, a sexually transmitted infection caused by the bacterium Treponema pallidum. The first recorded outbreak in Europe appeared during the 1495 invasion of Naples, where it led to widespread disease and death, particularly among troops on the French side. Later, disbanded armies helped to spread syphilis, the “great pox”, across Europe, where the disease rapidly became endemic.
Syphilis is transmitted from person to person primarily through sexual contact. The first symptom to appear is usually a small, round and painless skin ulcer, referred to as a chancre, at the site of infection. This chancre will eventually heal and disappear, but the bacteria remain, circulating in the blood and potentially leading to severe health consequences, including heart disease, dementia and blindness.
Over the centuries many attempts have been made to treat the disease, ranging from superstitious but generally harmless folk medicine, through to the potentially more effective but dangerous use of mercury and arsenic compounds. However, the discovery of penicillin in 1928 by Alexander Fleming changed everything. For the first time the disease became not only treatable but curable. In response to this drug, and to the increased use of condoms as part of the fight against the emerging threat of HIV, cases of syphilis fell dramatically. By the 1980s the disease was virtually eradicated from the UK.
Unfortunately, this situation was short-lived: syphilis is back and spreading quickly. The UK is now seeing thousands of new cases of the disease every year, rising by 148% since 2008 in England alone. However, what makes the resurgence of syphilis somewhat different this time is that the vast majority of these new cases are being found among men who have sex with men (MSM).
There are a number of factors that may help to explain why the current syphilis epidemic is disproportionately affecting MSM. The growing use of pre-exposure prophylaxis (PrEP) drugs, which are used to prevent the transmission of HIV, has proved to be a very effective tool in the fight against HIV/AIDS among homosexual and bisexual men. However, rather than being used alongside condoms as a second line of defence, these drugs are increasingly being used alone as a condom substitute. While this may be sufficient to prevent the transmission of HIV, the removal of the physical barrier provided by condoms means that other STIs, including syphilis, are still easily transmitted from person to person.
A second factor – blamed by the European Commission for the rising trend – is the growing phenomenon of chemsex. Chemsex refers to the use of recreational drugs, such as crystal meth, during sexual encounters between individuals or in group gatherings. Though intended to enhance the sexual experience, such drug use, evidence suggests, lowers inhibitions and tends to reduce the likelihood of condom use. As with an over-reliance on PrEP, failure to consistently use condoms places people at substantial risk of acquiring STIs like syphilis.
The apparent riskiness of this behaviour is exacerbated further by just how common it appears to be. The 2014 Positive Voices survey found that roughly 30% of HIV-positive gay men in England and Wales said they had taken part in chemsex during the previous year, and that men who practised chemsex were far more likely to engage in anal sex without using a condom. Four years on, there is little doubt that practices such as these are fuelling much of the syphilis transmission that we are currently seeing among MSM.
While rates of syphilis are currently still low compared to other STIs – such as chlamydia, which is currently responsible for around 48% of all new STI diagnoses – the disease’s rapid resurgence has started to ring alarm bells throughout the medical community. Although no longer the terror of the sexually active – a role now occupied by HIV – syphilis is unfortunately too often viewed as being of only minor importance because it is so easily cured. Yet, while the disease still responds well to penicillin, there remains the very real risk that it will eventually become resistant to that antibiotic.
Some strains of syphilis have already developed drug resistance to another antibiotic, azithromycin. If the number of cases of syphilis continues to rise, and if penicillin remains the first line of treatment, then there is little doubt that at some point resistance to this drug will appear and then spread. Only by reinforcing the need to use condoms in all casual sexual encounters, and through emphasising the potential hazards associated with chemsex, may syphilis once again be consigned to history.
Simon Bishop does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
What planet Earth might look like when the next supercontinent forms – four scenarios
Author: Mattias Green, Reader in Physical Oceanography, Bangor University; Hannah Sophia Davies, PhD Researcher, Universidade de Lisboa; Joao C. Duarte, Researcher and Coordinator of the Marine Geology and Geophysics Group, Universidade de Lisboa
The outer layer of the Earth, the solid crust we walk on, is made up of broken pieces, much like the shell of a broken egg. These pieces, the tectonic plates, move around the planet at speeds of a few centimetres per year. Every so often they come together and combine into a supercontinent, which remains for a few hundred million years before breaking up. The plates then disperse and move away from each other, until they eventually – after another 400-600 million years – come back together again.
The last supercontinent, Pangea, formed around 310 million years ago, and started breaking up around 180 million years ago. It has been suggested that the next supercontinent will form in 200-250 million years, so we are currently about halfway through the scattered phase of the current supercontinent cycle. The question is: how will the next supercontinent form, and why?
There are four fundamental scenarios for the formation of the next supercontinent: Novopangea, Pangea Ultima, Aurica and Amasia. How each would form depends on different circumstances, but all are ultimately linked to how Pangea separated, and to how the world’s continents are still moving today.
The breakup of Pangea led to the formation of the Atlantic ocean, which is still opening and getting wider today. Consequently, the Pacific ocean is closing and getting narrower. The Pacific is home to a ring of subduction zones along its edges (the “ring of fire”), where ocean floor is brought down, or subducted, under continental plates and into the Earth’s interior. There, the old ocean floor is recycled and can go into volcanic plumes. The Atlantic, by contrast, has a large ocean ridge producing new ocean plate, but is only home to two subduction zones: the Lesser Antilles Arc in the Caribbean and the Scotia Arc between South America and Antarctica.
1. Novopangea
If we assume that present day conditions persist, so that the Atlantic continues to open and the Pacific keeps closing, we have a scenario where the next supercontinent forms in the antipodes of Pangea. The Americas would collide with the northward drifting Antarctica, and then into the already collided Africa-Eurasia. The supercontinent that would then form has been named Novopangea, or Novopangaea.
2. Pangea Ultima
The Atlantic opening may, however, slow down and actually start closing in the future. The two small arcs of subduction in the Atlantic could potentially spread all along the east coasts of the Americas, leading to a reforming of Pangea as the Americas, Europe and Africa are brought back together into a supercontinent called Pangea Ultima. This new supercontinent would be surrounded by a super Pacific Ocean.
3. Aurica
However, if the Atlantic were to develop new subduction zones – something that may already be happening – both the Pacific and Atlantic oceans may be fated to close. This means that a new ocean basin would have to form to replace them.
In this scenario the Pan-Asian rift currently cutting through Asia from west of India up to the Arctic opens to form the new ocean. The result is the formation of the supercontinent Aurica. Because of Australia’s current northwards drift it would be at the centre of the new continent as East Asia and the Americas close the Pacific from either side. The European and African plates would then rejoin the Americas as the Atlantic closes.
4. Amasia
The fourth scenario predicts a completely different fate for the future Earth. Several of the tectonic plates are currently moving north, including both Africa and Australia. This drift is believed to be driven by anomalies left by Pangea deep in the Earth’s interior, in the part called the mantle. Because of this northward drift, one can envisage a scenario where the continents, except Antarctica, keep drifting north. This means that they would eventually gather around the North Pole in a supercontinent called Amasia. In this scenario, both the Atlantic and the Pacific would mostly remain open.
Of these four scenarios we believe that Novopangea is the most likely. It is a logical progression of present day continental plate drift directions, while the other three assume that another process comes into play. There would need to be new Atlantic subduction zones for Aurica, the reversal of the Atlantic opening for Pangea Ultima, or anomalies in the Earth’s interior left by Pangea for Amasia.
Investigating the Earth’s tectonic future forces us to push the boundaries of our knowledge, and to think about the processes that shape our planet over long time scales. It also leads us to think about the Earth system as a whole, and raises a series of other questions – what will the climate of the next supercontinent be? How will the ocean circulation adjust? How will life evolve and adapt? These are the kind of questions that push the boundaries of science further because they push the boundaries of our imagination.
Mattias Green receives funding from the Natural Environment Research Council (UK).
Hannah Davies receives funding from the Portuguese Science Foundation – FCT.
Joao C. Duarte receives funding from the Portuguese Science Foundation - FCT.
Mangrove forests can rebound thanks to climate change – it's an opportunity we must take
Author: Christian Dunn, Lecturer in Wetland Science, Bangor University
Humans have become adept at destroying natural habitats. Indeed, we’re so good at it we’ve changed the very makeup and climate of our planet. But there may be signs the natural world is fighting back by protecting itself against rising temperatures and changing weather patterns, and we face the tantalising prospect of helping this process.
A recent study found that mangrove forests could be adapting to climate change by growing beyond their usual range. The risk of several days of continuous frost previously kept these trees confined to tropical and subtropical areas near the equator. As average global temperatures rise, that frost boundary is shifting towards the poles, and mangroves are able to increase their growth and expand their range beyond the equator.
Mangrove forests are coastal wetlands made up of a dense jumble of trees and shrubs capable of living in salt or brackish water. Famous for their tangle of roots sticking up from the ground and dropping down from branches, mangrove forests can grow out into the sea and create almost impenetrable mazes of narrow channels along shorelines.
Mangroves protect coastlines, treat polluted waters, provide livelihoods and resources for some of the world’s poorest people and are home to an impressive number of species – many of which are commercially important. It’s been suggested that the majority of the global fish catch relies, either directly or indirectly, on mangroves.
Despite their value, humans have also done an impressive job over the last century of destroying mangroves: clearing them to make way for coastal developments and aquaculture, and logging them for timber and fuel. Not to mention destroying their natural water courses and polluting the ground they grow in.
So the possibility that climate change could be benefiting these habitats is promising indeed. In the long run, this could help society adapt to climate change and even reduce the concentration of greenhouse gases in the atmosphere.
Adapting to climate change
One feature of mangroves that we’ve long benefited from is the protection they offer to our coastlines. Waves lose their power passing through dense mangrove forests, and they can offer protection from storms, typhoons, hurricanes and tsunamis.
Their mass of roots – both above and below ground – helps to bind and build sediments, meaning mangrove areas can grow vertically, which is a clear asset in the face of rising sea levels. Expanding mangrove forests could therefore help protect us from the devastating effects of extreme weather that become more likely with climate change.
Mangrove forests are also incredibly productive ecosystems, which means that lots of carbon dioxide is taken in and used by the trees and shrubs as they grow. When this organic matter dies, a proportion of it forms the sediment underneath the mangrove forest. As a result, carbon remains trapped as semi-decomposed plant matter, and is unable to re-enter the atmosphere as a greenhouse gas. This ensures mangroves can actually act as giant stores – or sinks – of carbon.
Research suggests that mangroves could be better carbon stores than the coastal habitats they are encroaching on – opening the possibility for mangroves to combat the very causes of global warming. In this way, mangroves act as Earth’s natural defences to climate change – protecting the planet by striking at the very cause of the problem.
Around the world, some mangrove forests are being given legal protection and large-scale restoration works are taking place with varying degrees of success, as one study in Sri Lanka found.
In America and Australia work is being undertaken to restore areas of mangrove dieback following ill-considered developments and the use of herbicides. Conservationists and academics are researching where mangrove restoration would be most beneficial, and developing the best methods for these projects around the world.
The knowledge that mangroves could both benefit from a changing climate and protect us from some of its worst effects demands a renewed vigour in promoting these wetlands. It also raises a question. Should resources be ploughed into maintaining ecosystems where regional changes in the climate are unlikely to help them prosper? Or should we concentrate our efforts on helping expand habitats that are not only resilient to climate change but can help mitigate climate change itself?
Perhaps it is time to move towards the latter and act as ecosystem physicians, giving healing and healable habitats like mangroves every opportunity to do what they do best.
Christian Dunn does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
How to help people with dementia retain the power of choice
Author: Rebecca Sharp, Senior Lecturer in Psychology, Bangor University; Zoe Lucock, PhD Researcher, Bangor University
Deterioration in the ability to produce complex speech or understand what people are asking can make it difficult for people with dementia to make choices in conventional ways. It can affect simple things like deciding which clothes to wear, or what to have for dinner. But when a person is in the more advanced stages of dementia, and may not be able to speak at all, it can be difficult for those caring for them to work out what their preferences would be.
To help the estimated 280,000 people with dementia who are living in UK care homes, family members are often asked what their loved ones would prefer and notes are made by staff. But we know that people’s preferences can change, sometimes on a daily basis, and are hard to predict even by people who know them really well.
Take the example of Mrs Jones. Care workers know that she likes both tea and coffee, but that she prefers tea. If Mrs Jones finds it difficult to tell them what she wants, how will they know that today is the day that Mrs Jones fancies a coffee?
Behavioural researchers have found that one way to figure out what a person would like is to measure how they respond when provided with different options at the same time. For example, to find out whether a person prefers a biscuit or a scone, the two treats are presented together for the person to choose.
As the person making the choice is unable to speak, physical behaviours such as reaching, touching, and picking up the item are watched to find out which they would like. Studies which use this method are usually done with people with dementia in their care home, and tailored to the individual taking part. While the researchers can find out what works best, it also means that people with dementia benefit directly from taking part in the study. Staff are also shown how to find out preferences – leading to immediate improvements in care.
Though it seems like a simple thing to put into practice, this “choice” method is not currently part of the UK care system. However, we have been testing to see whether it could be used in all care homes, to give everyone with dementia more choice in a place where it has traditionally been limited. By observing what people do rather than what they say, care staff can get a more objective idea of what people like, measure their preferences daily, track how they change, and – most importantly – give people with dementia and communication issues more of a voice in their daily lives.
Our work forms part of the first UK project of its kind in the field of behavioural gerontology. The preferences research is part of a series of studies all focused on using behaviour analysis to help improve the quality of life of people with dementia. In addition, students on Bangor University’s applied behaviour analysis programme are trained to specialise in this approach with older adults.
Though the project itself is due to go on for another year, we have already confirmed previous findings from US-based care home studies which showed that people with dementia prefer activities over food items when given a choice between them. For example, we found that people chose activities such as jigsaws, crosswords, and crochet over treats such as custard tarts and pork pies.
This might be because one risk for people with dementia in long-term care is that they can spend a lot of time unengaged. It can be difficult to find lots of meaningful activities for care settings, and opportunities for conversation can be reduced. So activities become more valuable because they give people something to do and to talk about with other people, while food might become less valuable due to sensory changes associated with dementia such as changes in ability to taste and swallow.
Putting this into practice, we now know that if a person with dementia is to be given food and activity choices, they should be offered separately – rather than at the same time, like the biscuit and scone example – as preference for taking part in an activity might overshadow a food choice. If the two are always offered together, staff never learn what people’s food preferences really are.
For the next stage of our research, we are going to work with people with developmental disabilities (for example, Down Syndrome) who develop dementia. People with developmental disabilities often develop dementia at a younger age, and are more likely to develop it than those who do not have a developmental disability. They are often diagnosed late, too, due to “diagnostic overshadowing”, where changes in behaviour are attributed to their disability rather than dementia.
Previous research has found that people with developmental disabilities will often choose food over activities when a choice between the two is offered (the opposite of people with dementia). However, no one has yet looked at whether this preference shifts when people with developmental disabilities develop dementia. If we know how preferences change, we can ensure that care settings tailor their support.
We all value having choices, and our work is focused on evaluating and developing ways to ensure that people with dementia and developmental disabilities continue to be offered choices, even in the smallest of ways.
The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.
Prince Charles: the conventions that will stop him from meddling as King
Author: Stephen Clear, Lecturer in Constitutional and Administrative Law, and Public Procurement, Bangor University
For decades, constitutional lawyers have debated whether Prince Charles – categorised by some as a “meddler” – will be a reformist when he succeeds the Queen. Specifically, his “spider memos” to government ministers – which evidence his views on political issues such as climate change – have been used as an indication that he would not be “politically neutral”, and would reformulate the relationship between the Crown and parliament.
Fictional speculation on what will happen when Charles becomes king has abounded too. In the 2017 television adaptation of the stage play King Charles III, the monarch tries to enforce his will by refusing royal assent to laws. It suggests Charles would be a king who would use ancient decree – and prerogative powers – to dissolve the UK parliament, while using the Crown’s role as commander in chief of the British armed forces to defend its position.
Recently, however, the “king in waiting” gave his strongest indication yet that the status quo will be maintained. When asked in an interview whether he would be a “meddling” or activist king, Prince Charles stated: “I’m not that stupid”. He referred to how suggestions that he would continue to make interventions – by lobbying parliamentarians – were “nonsense”, and that he would operate within the “constitutional parameters”.
His comments are a clear signal of respect for UK parliamentary sovereignty and go some way to quelling calls from those who want the UK to revisit the role of the royals. But if you look into what the UK monarchy can actually do now, it is unlikely in reality that the Crown would be able to reclaim historical powers – even if it wanted to.
The monarch’s powers
Since the reign of King John and the 1215 signing of the Magna Carta, the UK has had a system of monarchy limited by law – but it has not been smooth running. The long-standing tension between the Crown and its subjects was seen when King Charles I entered parliament in 1642 to arrest parliamentarians for treason. The civil war that followed, and the short period in which the UK became a “republic” under Oliver Cromwell, serve as reminders of the consequences of monarchs trying to arbitrarily enforce their powers. The restoration of the Crown in 1660 with King Charles II, the Case of Proclamations in 1610 – which stated that the king cannot make law without the consent of parliament – and the passing of the Bill of Rights in 1689 forced the monarch to accept the democratic will of parliament.
Today, the UK has a constitutional monarchy – a head of state limited by parliament. The ability to make laws resides with Westminster. While the monarch does have to give royal assent before a bill can become law, it is regarded as a rubber-stamp exercise. In fact, the last monarch to refuse assent was Queen Anne in 1708.
Although the monarch no longer has a direct legislative or executive role, they do nonetheless hold important powers and privileges. For example, they can dissolve or summon back parliament. They can also appoint and remove ministers, including the prime minister. The Queen also has the power to issue orders in council (a regulation made on advice of ministers), passports, and royal pardons – and declare war. The monarch also holds several other titles such as Head of the Church of England, Commonwealth and Civil Service. They also have power to award honours and peerages.
However, the reality is the monarch will, by convention, only exercise such powers on the advice of their government. Furthermore, some now argue that such conventions extend more widely to the advice of parliament, for example in relation to when the UK will go to war.
Not so reformist after all
These legal and political safeguards tell us that concerns about the future king forcing his will upon the people, by dissolving governments and shutting down parliaments, are unlikely to come to fruition. The reality is that – because of the system of conventions and checks on their power – the Crown only acts on the advice of its parliamentarians.
2018 has been an important year of change for the royal family, with Prince Charles taking over more of the Queen’s duties. In addition, the 53 leaders of the Commonwealth have unanimously decided that Charles will be its next leader (something that was not guaranteed). Some go as far as to suggest that plans are already in place for the Queen to abdicate within the next three years.
The Prince’s recent comments have likely come about due to the need to address his public image as an interventionist before he takes the throne. It was necessary for him to provide reassurance that stability will be maintained in the UK during his reign. And, though Charles is unlikely to publicly change the role of the monarch – in his own words – privately he would see it as his duty to “encourage and warn”, but publicly remain politically neutral, like the Queen.
Stephen Clear does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Poorer children priced out of learning instruments but school music programmes benefit the wider community
Author: Eira Winrow, PhD Research Candidate and Research Project Support Officer, Bangor University; Rhiannon Tudor Edwards, Professor of Health Economics, Bangor University
Years of austerity in the UK have bitten away at school budgets, and the arts have suffered heavily. When schools can no longer afford to employ teaching assistants, it is little wonder that local authorities have cut school music funding.
Schools are responsible for their own budgets, and musical instrument lessons that were traditionally subsidised by councils have been cut in some districts. Now, the Musicians’ Union has found that children living in the poorest areas are no longer getting the exposure to music and the arts that they so often only get in school. With parents being asked to subsidise instrument lessons, 41% of low-income families have said that they cannot do so due to their limited household budget.
Research shows that if music is made enjoyable for school-age children, there are more benefits than just being able to understand notes or play an instrument. There is evidence to suggest that children’s listening, reading, and language skills can all be developed by effective school music programmes. One study from the US also found that pupils engaged in music activities communicated more with teachers and parents, while the parents themselves built friendships with one another. And there is a well-being aspect to this too. A study of an American community choir for ten to 14-year-old boys found it had social, spiritual and emotional benefits.
Now our research has found that there is a huge economic benefit to children having musical education. For every £1 invested in a north Wales community programme, we saw as much as £6.69 created in social value. Social value allows us to consider the wider benefits of a programme, including those that are not usually valued in pounds and pence.
Raising the roof
For 18 months, we worked closely with Codi'r To (“Raise the Roof”), a Welsh community regeneration programme set up in 2014. More than 280 children from two socio-economically challenged communities in Gwynedd take part in the scheme, from nursery age through primary school. The team behind Codi'r To has been working on music projects in the area for several years and recently adapted a Venezuelan programme, El Sistema, which seeks to achieve social change through music, and improve educational and well-being outcomes for children.
These sessions take place during the school day at no cost to the participants. Professional music tutors work with the children and teachers, giving them the opportunity to learn to play brass and percussion instruments. They can then play in the school orchestra or samba group. Codi'r To also works to bring live music to the community, and create opportunities for the pupils to perform in public.
Our work was focused on producing a social return on investment evaluation of the programme. This measures a wide range of benefits, including social, environmental and economic factors. It identifies benefits for participants as well as those who may be directly affected by the results – the family members, wider school, local community and Codi'r To itself.
After identifying the cost of setting up the programme and the yearly running costs, and putting a value on the time and other inputs from all of the stakeholders, we calculated the social value of every pound invested in it as a ratio of almost 1:7. Surprisingly, only 48% of the social value generated by Codi'r To was for the pupils themselves. A further 51% was for family members of the children taking part. This percentage was driven by parents telling us that Codi'r To brought them closer to the community. The remaining 1% of benefit was divided between the wider school and the community.
We based this social value on findings that showed children’s confidence was raised (which we gave a proxy value of £238 per child), their behaviour improved at home (£711 per family) and in school (£132 per child), and that they were more engaged with the community. Teachers and the wider school benefited from more harmonious classrooms (£149 per classroom), improved behaviour and better relationships between the pupils (£419 per school). Families felt that they were more a part of the community (£2,925 per person), had better relationships with the school (£10.74 per hour of contact) and experienced better behaviour from their children at home.
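As a rough illustration of the arithmetic behind such a ratio, the sketch below combines the per-outcome proxy values quoted above with invented stakeholder counts and an invented investment figure – the real evaluation used the programme's actual costs and stakeholder data, so the resulting number here is purely illustrative.

```python
# Hypothetical sketch of a social return on investment (SROI) calculation.
# The proxy values per outcome come from the article; the stakeholder
# counts and the total investment figure are invented for illustration.

proxy_values = {                              # £ per unit, from the article
    "confidence_per_child": 238,
    "home_behaviour_per_family": 711,
    "school_behaviour_per_child": 132,
    "classroom_harmony_per_classroom": 149,
    "pupil_relationships_per_school": 419,
    "community_belonging_per_person": 2925,
}

counts = {                                    # invented stakeholder counts
    "confidence_per_child": 280,
    "home_behaviour_per_family": 200,
    "school_behaviour_per_child": 280,
    "classroom_harmony_per_classroom": 12,
    "pupil_relationships_per_school": 2,
    "community_belonging_per_person": 150,
}

# Total social value is each outcome's proxy value times how often it occurs.
total_social_value = sum(proxy_values[k] * counts[k] for k in proxy_values)
total_investment = 120_000                    # invented set-up + running costs, £

sroi_ratio = total_social_value / total_investment
print(f"Social value per £1 invested: £{sroi_ratio:.2f}")
```

The ratio is simply total valued benefits divided by total inputs; the hard part of a real SROI study is choosing defensible proxy values and counting only benefits attributable to the programme.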
Despite these varied benefits, Codi'r To depends on charity funding, and children from low-income families are less likely to find careers in the arts and creative industries, and less likely to be involved in after-school clubs that include arts or cultural elements. Going forward, charitable programmes such as Sistema Cymru’s Codi'r To may be the only chance some children have to experience music and creative activities.
Hans Christian Andersen once said: “Where words fail, music speaks.” Perhaps it’s time for less talk and more action when it comes to getting children involved in the arts. The value of schemes like Codi'r To clearly speaks for itself.
Eira Winrow receives PhD funding from Health and Care Research Wales.
Rhiannon Tudor Edwards does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
More experiments may help explore what works in conservation
Author: Julia P G Jones, Professor of Conservation Science, Bangor University
All over the world, countless conservation projects are taking place, attempting to achieve aims ranging from reducing habitat loss to restoring populations of threatened species. However, there is growing awareness that conservationists have not always done a good enough job of evaluating whether the things they do really work.
Efforts that fail to make things better for species and ecosystems waste the limited resources available for conservation, and result in missed opportunities to stem the loss of biodiversity. Given that monitored populations of wildlife species have declined by 60% in the last 50 years, and large scale loss of forest continues, this is bad news. So, research to show whether conservation efforts work really matters. And those doing conservation need easy access to the results of this vital evidence.
In many fields, when researchers want to know whether something works they conduct an experiment. For example, patients are often randomly assigned to receive a new drug (or not) and the results are compared to determine if the new treatment has the potential to help people. Despite calls for more use of experiments in conservation, they remain extremely rare.
Experiments changing practice
One common approach to conservation is encouraging owners to manage their land in a way which provides benefits for the environment. This has been done in the UK for decades. For example, farmers are paid to maintain hedgerows and leave stubble on fields to help farmland birds. These kinds of payments for ecosystem services schemes are increasingly used in the tropics as well.
In 2017, an experiment in Uganda revealed that paying farmers not to chop down trees was a cost effective way to slow deforestation. Now we have published the results of only the second experiment at such a scale. Our study evaluates whether providing incentives to farmers to protect forest and keep cattle out of streams improves water quality.
The research focuses on the efforts of the Bolivian NGO Natura, which has been working with communities in the Andes to help protect the area’s incredible forests. These are home to spectacled bears and other wonderful wildlife, and are seen locally as important for supplying clean water. In Natura’s Watershared programme, upstream landowners were offered incentives to shift their livelihood activities away from clearing forest or letting cattle graze untended in the forest. Natura wanted to know if their innovative approach to conservation was working, so they took the unusual step of setting up an experiment to find out.
In 2010, 129 communities were randomly placed in a control group, or given the chance to enrol their land in Watershared agreements. Households in the latter “treatment communities” could then choose to enrol as much of their land as they wished in the programme. Analysing the results of this experiment, we found that while keeping cattle out of rivers is (perhaps unsurprisingly) good for water quality at the location where it happens, the treatment communities did not have cleaner water in their taps. Further investigation revealed that this was at least in part because of the low level of uptake of the programme, and that the land most likely to be important for improving water quality was often not enrolled.
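The assignment step of such a trial is simple to sketch in code. The snippet below randomly splits 129 hypothetical community identifiers into a treatment group (offered Watershared agreements) and a control group; the identifiers, the roughly even split and the fixed seed are illustrative assumptions, not details of Natura's actual protocol.

```python
import random

# Hypothetical sketch of randomised assignment of communities to a
# treatment or control group. Community names, the 50/50 split and the
# seed are invented for illustration.

random.seed(42)  # fixed seed so the assignment is reproducible

communities = [f"community_{i:03d}" for i in range(1, 130)]  # 129 communities

# Shuffle, then split: the first half becomes the treatment group.
shuffled = random.sample(communities, k=len(communities))
treatment = sorted(shuffled[: len(shuffled) // 2])   # offered agreements
control = sorted(shuffled[len(shuffled) // 2 :])     # comparison group

print(len(treatment), len(control))  # prints "64 65"
```

Because assignment is random, any systematic difference later observed between the two groups (in water quality, say) can be attributed to the intervention rather than to pre-existing differences between communities.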
Natura is already implementing the results of this research to improve the design of Watershared. They are working with communities to ensure that protection is targeted to areas most likely to benefit water quality. And our experience with running such a large-scale experiment holds useful lessons for others interested in increasing knowledge about what works in conservation.
Doesn’t everyone like an experiment?
Away from conservation, there has been an explosion in the use of randomised experiments to evaluate the impact of other large-scale interventions – in development and education, for example. However, there has been a backlash from opponents who have pointed out, among other things, that these kinds of investigations will not always provide valid answers to the most important questions. Such experiments can normally only answer the question “does it work?”, rather than “why does it work?” – and so cannot really answer the other key question, “will it work in other situations?”. This debate has at times become quite heated, even acrimonious.
Running an experiment to evaluate the impact of a large-scale conservation intervention is certainly very challenging. It is often not possible to randomise which areas receive a new conservation project (can you imagine a government randomly allocating where it puts national parks?). There are also issues with achieving adequate replication, and there can be ethical concerns which prevent experimentation.
However, given the importance of knowing what works in conservation, more high quality evaluations (which won’t always be experiments) are certainly needed. Only by learning from current practice can the future effectiveness of conservation be improved.
Julia P G Jones was funded by The Leverhulme Trust to conduct her research on the Watershared Randomized Control Trial.
Family habit of inheriting volunteer roles could help small charities
Author: Stephanie Jones, PhD researcher in sociology, studying civil society, volunteering and participation, Bangor University
Though many of us live increasingly busy lives, the number of those actively involved in volunteering in the UK is growing. In fact, every year more than 21m people volunteer at least once. But for many people, volunteering is not just a one-off or infrequent thing. It can be a legacy: a form of tradition which is often passed down through family generations.
While we know that volunteering contributes significantly to well-being and community spirit, the legacy aspect of volunteering is often overlooked. Research has found that parents who volunteer significantly influence their children’s voluntary activity: having a parent who actively volunteers makes a child 53% more likely to volunteer in later life.
My own research into families who volunteer has also found a strong link between family history and current volunteering. In 2016, I spoke to 74 people who volunteered at three different heritage railways in north Wales, and 84% of them had a family history of railway volunteering. Often it was an activity that had been passed down from father to son across many generations. Of those interviewed, 41 were volunteering alongside a family member. One volunteer told me:
I grew up in the mid 1950s and I remember my dad and uncle volunteering at the railway at weekends. I used to tag along with them when I was young, but it wasn’t until I was in my 40s and they had both passed away that I really got involved. It was sort of expected of me, to be a volunteer and pass on the railways experiences to my children and their children.
The spillover effect between parental volunteering and their children’s subsequent volunteering has a wide influence. While many people volunteer with the same organisations that their parents did, not all do – 89% of the railway volunteers also volunteered for secondary organisations, such as the National Trust, Citizens Advice, religious institutions, the RSPB, and many local charities. A volunteer shared:
Both of my parents were volunteers, mum was involved with the Women’s Institute and dad worked here on the railway with my granddad. I’ve always wanted to give something back, so for me, volunteering was such a natural thing to do. I’ve been a volunteer for the National Trust, RSPB, neighbourhood watch, and the St John Ambulance. I think with my parents being volunteers it was just passed onto me.
I also found that the volunteers’ altruism was not just a trait that they had picked up from parents and grandparents. Volunteering was passed down to them as a form of inheritance similar to more tangible objects. It was a tradition which they had upheld over many years. While also being a way to protect the railways for the next generation of visitors, carrying on the custom of railway volunteering gave many of them a way to maintain a connection with family members who had since passed on, and carry on their legacy. Two of the railway volunteers confirmed this:
Yeah it was passed down from my dad to me, but before him it was my granddad. I think it’s really important to carry on the tradition, it’s our history, our legacy. Without people like us volunteering and passing it down, it’s just going to wither and die.
My parents were very active in the community, always helping other people, so that trait was passed onto me, and here I am.
One major hurdle to the continued success of the UK’s preserved railways – and to other smaller organisations which rely on volunteers – is the lack of younger people joining in. Currently, the majority of railway volunteers are retired, north of 50 years old, and generally male. They have usually grown up with, or have fond memories of, steam trains. Though this demographic is working hard to save the railways now, a potential time-bomb is looming: the wider railway preservation industry is concerned both about encouraging young people to engage and become volunteers, and about ultimately losing the engineering expertise of its older members.
But there is a potential solution. Simply by encouraging the intergenerational transmission of volunteering through current volunteers and their families, the next generation will be in place to take over, sustaining these organisations for years to come.
Stephanie Jones receives funding from WISERD.
More in depth data is required to reveal the true global footprint of fishing
Author: Michel Kaiser, Honorary Professor, Bangor University
There has been a lot of debate recently on the extent of the global fishing footprint. A recent paper claimed that fishing affects 55% of the world’s oceans. Given that many people in the developing world rely on fish as their main source of protein, and the increasing preference for luxury fish products in countries such as China, such statistics might seem plausible.
To calculate the 55% figure, the researchers relied on the automatic identification system (AIS). Primarily intended for safety purposes, AIS combines radio and satellite monitoring with other electronic data such as speed, heading and destination port, to track, monitor and even predict vessel activity. All vessels over a certain size must have an AIS transceiver, so this widespread monitoring produced the huge amounts of data that allowed the researchers to estimate the global fishing footprint.
However, when determining what proportion of the ocean is being fished, the scale at which the fishing activity is mapped makes a significant difference to the accuracy of the overall result. Using higher resolution data – with grid squares of between 1km² and 3km² rather than 1,000km², for example – produces a footprint which differs by a factor of more than ten.
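To see why resolution matters, the sketch below rasterises a set of simulated fishing positions onto a fine grid (1km² cells) and a coarse grid (roughly 1,000km² cells) and compares the resulting "footprints". Because fishing activity is clustered, a coarse grid counts whole 1,000km² cells as fished even when only a corner of each was visited. All positions and numbers are invented for illustration.

```python
import random

# Illustrative sketch: the same simulated fishing positions produce very
# different footprint estimates at different grid resolutions. The study
# area, positions and cell sizes are invented.

random.seed(0)

# Simulated positions clustered in two "fishing grounds" within a
# 100km x 100km study area.
positions = [(random.uniform(0, 20), random.uniform(0, 20)) for _ in range(500)]
positions += [(random.uniform(60, 70), random.uniform(60, 70)) for _ in range(500)]

def footprint_km2(points, cell_km):
    """Total area of grid cells containing at least one fishing position."""
    occupied = {(int(x // cell_km), int(y // cell_km)) for x, y in points}
    return len(occupied) * cell_km ** 2

fine = footprint_km2(positions, 1.0)     # 1km² cells
coarse = footprint_km2(positions, 31.6)  # 31.6km x 31.6km ≈ 1,000km² cells

print(f"fine-grid footprint:   {fine:.0f} km²")
print(f"coarse-grid footprint: {coarse:.0f} km²")
print(f"inflation factor:      {coarse / fine:.1f}x")
```

With clustered activity like this, the coarse grid inflates the apparent footprint several-fold, which is the effect the resolution argument above is making.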
We need to manage the impact of human activity on ecosystems, but doing so demands a more accurate understanding of fishing. In a similar way to other food production systems, fisheries can have a wide range of effects on the ecosystems and species that they interact with. Wild capture fisheries are also vulnerable to over-fishing unless they are well managed and regulated. For this management to be effective, we need to know how much fishing is taking place, where it is happening, and the exact type of fishing that it is.
Calculating the footprint of bottom trawling
Such a high level of accuracy is especially important when looking at the impact of bottom trawling. Many forms of fishing have a wide impact on the ecosystem. But no fishing activity exemplifies this better than bottom trawling, which sees large, heavy nets being towed across the seabed. It is associated with the removal of species from the seafloor and temporary or longer-lasting modification of habitats. This has become a particularly emotive issue with campaigns seeking to outlaw its use.
However, bottom trawl fishing is a key source of food, accounting for 25% of global landings. With such high demands, we can’t simply stop the practice dead in its tracks. The impacts can be managed, but first we need to understand where bottom trawling occurs and how often, so it can be done in a sustainable way.
For our newly published research, we looked into the true extent of bottom trawling around the world. We used high resolution vessel monitoring data, gridded at less than 9km² per cell, to reconstruct fishing footprints for 24 regions of the sea. We found that, on average, only 14% of this area was affected by bottom trawling. There were, however, major regional differences. For example, trawling affects less than 10% of the Australian and New Zealand seabed, compared to over 50% in some European seas.
Funded in part by the Marine Stewardship Council, this research will facilitate the implementation of sustainable fishing practices around the world. It also demonstrates that when fisheries are well-managed and sustainably fished, the associated impacts on the seabed are reduced, compared to other less well managed fisheries. In other words, if you manage the fishing of the target species appropriately you’ll probably also succeed in reducing other effects of fishing activity.
Zero footprint isn’t the end goal
Some habitats are also highly resilient to the effects of trawling, whereas others are more vulnerable and take decades to recover. Recent advances in our understanding of the effects of bottom trawls mean that it is now possible to predict their impact and to suggest pragmatic ways to reduce those impacts through different management options, such as by directing fishing activity away from sensitive areas of the seabed to more resilient areas.
These new advances in understanding pave the way for a truly ecosystem-based approach to fisheries management, in which we can manage target species and the wider effects of fishing on the seabed. This is the approach taken by the Marine Stewardship Council fisheries standard, which integrates the full range of ecosystem effects of fishing.
So, what challenges remain? While AIS data is publicly available, it covers only a proportion of the world’s fishing fleet. Vessel monitoring systems (VMS) are used for a larger selection of the world’s fishing fleets, but can be shrouded in confidentiality issues (the data is usually considered the property of fishermen). In addition, small-scale vessels, which in many places account for the largest proportion of fishing boats, are not monitored at present.
However, much data does still exist that can guide improvements in the management of fishing impact. It is critical that we interpret this data properly, using appropriate resolution mapping, to avoid inaccurate representations of the size and extent of fishing’s footprint on the oceans.
Michel Kaiser receives funding from the UK Department of Environment, Food and Rural Affairs (Grant/Award Number: project MF1225); the European Union (Grant/Award Number: project BENTHIS EU-FP7 312088); The Walton Family Foundation; David and Lucile Packard Foundation; American Seafoods Group US; Glacier Fish Company LLC US; Nippon Suisan (USA), Inc.; Pacific Andes International Holdings, Ltd.; Natural Environment Research Council, UK (Grant/Award Number: NE/L003279/1); Marine Ecosystems Research Programme; Marine Stewardship Council; Pesca Chile, S.A.; Sanford Ltd. N.Z.; Sealord Group Ltd. N.Z.; South African Trawling Association and Trident Seafoods; US National Oceanic and Atmospheric Administration (RAM); The Food and Agriculture Organisation of the UN; Blumar Seafoods Denmark; Clearwater Seafoods; International Council for the Exploration of the Sea (ICES) Science Fund; Espersen Group; Independent Fisheries Limited N.Z.; Gortons Inc.; San Arawa, S.A.; The Alaska Seafood Cooperative. He is the Science & Standards Director at the Marine Stewardship Council and a member of the IUCN-Fisheries Expert Group.
Edible crabs won't cope with the effects of climate change on seawater – new study
Author: Nia Whiteley, Reader in Zoology (Aquatic), Bangor University
We are only just beginning to learn how aquatic organisms will respond to climate change, and the effect that this will have on their communities and ecosystems. One way to find out more is to look at whether species will be able to compensate for changes in their environment – particularly whether they can survive immediate fluctuations in temperature, and reductions in ocean pH brought about by increasing levels of atmospheric CO₂.
Coastlines and estuaries are already challenging places for marine organisms to live. The physical properties of seawater – salinity, temperature, pH and oxygen levels – vary frequently. And with further fluctuations due to climate change, these environments are becoming even more demanding. Patterns of sea surface salinity are changing as freshwater input increases due to exceptional storm events and runoff from flooding.
Scientists have started to examine the combined effects of global warming and a reduction in seawater pH – otherwise known as ocean acidification – on marine communities. To date, it has appeared that multiple factors have more of an effect on these creatures than each factor in isolation. Together they influence the ability of species to compensate and survive the changes.
However, not much is known about the combined effects of ocean acidification and seawater dilution on these organisms. This is important as changes in salinity tolerance are known to influence distribution patterns of marine species and their community structures.
Comparing the plight of crabs
For our newly published study we decided to look at this combination of factors by focusing on two species of marine crab: the edible crab (Cancer pagurus) and the shore crab (Carcinus maenas). Both are common in UK waters, but experience different degrees of environmental variation in their natural habitats. For edible crabs, home is typically the shallow shelf waters of the low intertidal zone for juveniles, and depths down to 100 metres for adults, away from the influence of freshwater. Shore crabs, meanwhile, typically live in estuaries and experience dilute seawater on a regular basis.
We studied how the crabs reacted to what are predicted to be the business-as-usual levels of CO₂ in 2100 (1,000 micro-atmospheres), combined with a biologically relevant reduction in seawater salinity. We wanted to see whether the edible crab would be less capable of coping than the shore crab, which regularly experiences salinity variations. We were also keen to find out why one species is likely to be more vulnerable than the other, by investigating the ways they naturally compensate for environmental changes.
We exposed juveniles of both species to the different CO₂ and salinity conditions for up to one year. The crabs were fed regularly and they continued to grow by moulting throughout the exposure time. We found that the shore crab was fully capable of surviving the conditions for up to a year, but the edible crab struggled.
The shore crab – a widely invasive species in countries outside Europe – upregulated (increased) its capacity to exchange bicarbonate ions across the gills. This mechanism helps buffer the changes in body fluid pH associated with increased CO₂ in seawater. The edible crab, meanwhile, showed no such upregulation, and had limited ion transporting capacities. Instead, this species accumulated CO₂ within its haemolymph (crustacean blood).
There was some attempt at compensating for the conditions, but remarkably the edible crabs were better off in dilute seawater. This was a surprise as the edible crab typically spends all of its adult life living in marine environments separated from the influence of freshwater. The reason behind it is difficult to explain, but it may come down to passive changes associated with exposure to dilute seawater making the haemolymph more alkaline.
Our work demonstrates that the juvenile edible crabs could survive elevated CO₂ conditions by moving into freshening seawater – but only for limited periods. This species also proved to be vulnerable to longer term exposures to dilute seawater.
Our study helps us appreciate that there are fundamental differences in the biological capacities of marine species to compensate for climate change – even within a group of crustaceans that is generally regarded as relatively tolerant of change.
Fully marine species such as the edible crab, with their preference for stability, are poorly equipped for survival in a variable natural environment. They are likely to be more vulnerable to climate change, and further studies on this and similar species are urgently needed.
Nia Whiteley receives funding from NERC UK (grant no NE/J007544/1 and NE/J007951/1).
Emotions: how humans regulate them and why some people can't
Author: Leanne Rowlands, PhD Researcher in Neuropsychology, Bangor University
Take the following scenario. You are nearing the end of a busy day at work, when a comment from your boss diminishes what’s left of your dwindling patience. You turn, red-faced, towards the source of your indignation. It is then that you stop, reflect, and choose not to voice your displeasure. After all, the shift is nearly over.
This may not be the most exciting plot, but it shows how we as humans can regulate our emotions.
Our regulation of emotions is not limited to stopping an outburst of anger – it means that we can manage the emotions we feel as well as how and when they are experienced and expressed. It can enable us to be positive in the face of difficult situations, or fake joy at opening a terrible birthday present. It can stop grief from crushing us and fear from stopping us in our tracks.
Because it allows us to enjoy positive emotions more and experience negative emotions less, regulation of emotions is incredibly important for our well-being. Conversely, emotional dysregulation is associated with mental health conditions and psychopathology. For example, a breakdown in emotional regulation strategies is thought to play a role in conditions such as depression, anxiety, substance misuse and personality disorders.
How to manage your emotions
By their very nature, emotions make us feel – but they also make us act. This is due to changes in our autonomic nervous system and associated hormones in the endocrine system that anticipate and support emotion-related behaviours. For example, adrenaline is released in a fearful situation to help us run away from danger.
Before an emotion arises there is first a situation, which can be external (such as a spider creeping nearer) or internal (thinking that you are not good enough). This is then attended to – we focus on the situation – before we appraise it. Put simply, the situation is evaluated in terms of the meaning it holds for us. This meaning then gives rise to an emotional response.
Psychologist and researcher James Gross has described a set of five strategies that we all use to regulate our emotions, which may be used at different points in the emotion generation process:
1. Situation selection
This involves looking to the future and taking steps to make it more likely that we end up in situations which give rise to desirable emotions, or less likely that we end up in situations which lead to undesirable ones. For example, taking a longer but quieter route home from work to avoid road rage.
2. Situation modification
This strategy might be implemented when we are already in a situation, and refers to steps that might be taken to change or improve the situation’s emotional impact, such as agreeing to disagree when a conversation gets heated.
3. Attentional deployment
Ever distracted yourself in order to face a fear? This is “attentional deployment” and can be used to direct or focus attention on different aspects of a situation, or something else entirely. Someone scared of needles thinking of happy memories during a blood test, for example.
4. Cognitive change
This is about changing how we appraise something to change how we feel about it. One particular form of cognitive change is reappraisal, which involves thinking differently or thinking about the positive sides – such as reappraising the loss of a job as an exciting opportunity to try new things.
5. Response modulation
Response modulation happens late in the emotion generation process, and involves changing how we react or express an emotion, to decrease or increase its emotional impact – hiding anger at a colleague, for example.
How do our brains do it?
The mechanisms that underlie these strategies are distinct and exceptionally complex, involving psychological, cognitive and biological processes. The cognitive control of emotion involves an interaction between the brain’s ancient and subcortical emotion systems (such as the periaqueductal grey, hypothalamus and the amygdala), and the cognitive control systems of the prefrontal and cingulate cortex.
Take reappraisal, which is a type of cognitive change strategy. When we reappraise, cognitive control capacities that are supported by areas in the prefrontal cortex allow us to manage our feelings by changing the meaning of the situation. This leads to a decrease of activity in the subcortical emotion systems that lie deep within the brain. Not only this, but reappraisal also changes our physiology, by decreasing our heart rate and sweat response, and improves how we experience emotions. This goes to show that looking on the bright side really can make us feel better – but not everyone is able to do this.
Those with emotional disorders, such as depression, remain in difficult emotional states for prolonged durations and find it difficult to sustain positive feelings. It has been suggested that depressed individuals show abnormal activation patterns in the same cognitive control areas of the prefrontal cortex – and that the more depressed they are the less able they are to use reappraisal to regulate negative emotions.
However, though some may find reappraisal difficult, situation selection might be just a little easier. Whether it’s being in nature, talking to friends and family, lifting weights, cuddling your dog, or skydiving – doing the things that make you smile can help you see the positives in life.
Leanne Rowlands receives funding from EU Social fund through the Welsh Government.
We tracked coral feeding habits from space to find out which reefs could be more resilient
Author: Michael D. Fox, Postdoctoral Scholar, University of California San Diego; Andrew Frederick Johnson, Researcher at Scripps Institution of Oceanography & Director of MarFishEco, University of California San Diego; Gareth J. Williams, Lecturer, Marine Biology, Bangor University
Coral reefs are an invaluable source of food, economic revenue, and protection for millions of people worldwide. The three-dimensional structures built by corals also provide nourishment and shelter for over a quarter of all marine organisms.
But coral populations are threatened by a multitude of local and global stressors. Rising ocean temperatures are disrupting the 210m-year-old symbiosis between corals and microscopic algae. When temperatures rise, the coral animal becomes stressed and expels its algal partners, in a process known as coral bleaching.
These symbiotic algae are a critical food resource for corals, and without them corals lose their primary source of nutrition. Fortunately, corals are mixotrophs and not solely dependent on nutrition from their algal partners. Despite their sedentary appearance, corals are voracious predators capable of capturing a wide variety of prey using their tentacles and mucous nets.
Knowing how much corals eat via predation is essential for understanding how they can persist in a warming ocean. Numerous laboratory studies have shown that if corals feed, they are more capable of surviving the stress associated with warming temperatures and decreasing pH levels. Feeding can also increase the reproductive capacity of corals, which is key to repopulating reefs that have suffered high levels of coral mortality. Yet, almost 90 years since one of the first published accounts of coral predation, we still do not know much about how coral feeding varies as a function of food availability in the wild.
However, our new study sheds light on this longstanding question. We combined field sampling with global satellite measurements and published data to reveal that corals respond to how much food is on their reef. This indicates that corals living in more productive (food-rich) waters consume more food, which changes our understanding of how corals survive and may aid in predictions of coral recovery in the face of climate change.
Unravelling coral diets from space
Studying variation in the diets of corals over large areas is no easy task. To determine if corals will change their feeding behaviour as a function of food availability, we sailed to the remote Southern Line Islands of Kiribati. These islands are ideal for studying variations in coral diets because they lack local direct human impacts (fishing and pollution) and are situated across a natural gradient of food availability fuelled by equatorial upwelling. This process delivers colder, nutrient- and plankton-rich waters to the surface ocean along the equator in the central Pacific.
We examined coral diets across five islands using stable isotope analysis. Stable isotopes are atoms of the same element (in this case carbon) that differ in mass due to the number of neutrons in their nucleus. This subtle mass difference allows scientists to determine what an organism is eating based on how similar the isotopic composition of the consumer (coral) is to its food (zooplankton).
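The logic of a stable isotope diet study can be illustrated with a simple two-endmember mixing model. This is a hedged sketch of the general technique, not the study's actual analysis, and all isotope values below are made up for illustration:

```python
def heterotrophy_fraction(d13c_coral, d13c_symbiont, d13c_zooplankton):
    """Estimate the fraction of coral carbon derived from captured zooplankton.

    Assumes the coral tissue's carbon isotope value (d13C) is a linear mix
    of two food sources: symbiont-derived carbon and zooplankton prey.
    """
    if d13c_symbiont == d13c_zooplankton:
        raise ValueError("source endmembers must differ to solve the mixing model")
    f = (d13c_coral - d13c_symbiont) / (d13c_zooplankton - d13c_symbiont)
    return min(1.0, max(0.0, f))  # clamp to a valid proportion

# Illustrative (hypothetical) per-mil values: symbiont-derived carbon ~ -16,
# zooplankton ~ -20, measured coral tissue -17.
print(heterotrophy_fraction(-17.0, -16.0, -20.0))  # 0.25
```

A coral whose tissue sits closer to the zooplankton endmember is inferred to be feeding more heavily on plankton; real analyses use more sources and account for trophic fractionation.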
The isotopic data showed that the corals on the more food-rich islands were capturing and consuming more planktonic prey than corals on islands with lower food availability. These findings suggested that the abundance of food might be important for corals in other locations, which inspired our team to evaluate if coral feeding habits can be used to track global food availability.
Satellites can reliably measure the amount of phytoplankton around tropical islands – a useful proxy for estimating food abundance for corals. So, using satellite data from 2004-2015, we compared chlorophyll measurements with published isotopic values from corals at 16 locations spanning the Pacific and Indian Oceans to the Red Sea and the Caribbean.
What we found was a striking relationship between the chlorophyll content of the water and the feeding habits of corals. Essentially, corals in more productive regions consume more planktonic food.
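A relationship like this is typically quantified with a simple regression across sites. The sketch below, with entirely hypothetical numbers, shows the shape of such a comparison – site-level chlorophyll concentration against an isotope-based feeding proxy:

```python
from statistics import mean

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept (minimal sketch)."""
    mx, my = mean(xs), mean(ys)
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical data: mean chlorophyll-a (mg/m^3) per site, and an
# isotope-derived feeding proxy (higher = more planktonic feeding).
chl = [0.05, 0.10, 0.20, 0.35, 0.50]
feeding_proxy = [0.10, 0.18, 0.35, 0.55, 0.75]

slope, intercept = linear_fit(chl, feeding_proxy)
```

A positive slope across sites is what "corals in more productive regions consume more planktonic food" looks like in such data; the actual study's statistics are more involved.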
Can well-fed corals survive the heat?
The seemingly simple observation that corals eat more where there is more food has important implications for our understanding of how coral reefs function. It underscores the importance of the physical environment around reefs and suggests that food availability may be an overlooked driver of coral recovery potential.
The capacity for corals to feed before or during thermal stress can improve their capacity for survival. These findings lay the foundation to begin investigating the possibility that reefs in more naturally food-rich waters have a greater capacity to resist or recover from disturbance events such as thermally induced bleaching.
Reefs do show variations in how they respond to thermal stress events – some reefs bleach less than others – but the exact mechanisms behind these differences remain largely unclear. The relationship between coral feeding and ocean chlorophyll established in this study offers a roadmap to locating potentially more resilient coral reefs around the world. Such knowledge does not replace the need to urgently reduce greenhouse gas emissions and protect coral reefs from the increasing frequency of ocean warming events, however. Instead it should be used to guide strategic management actions in the inevitable interim.
Andrew Frederick Johnson receives funding from the National Science Foundation (USA)
Gareth J. Williams receives funding from The Bertarelli Foundation.
Michael D. Fox does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Are electric fences really the best way to solve human-elephant land conflicts?
Author: Liudmila Osipova, PhD Researcher, Bangor University
Conflict between humans and elephants has reached a crisis point in Kenya. As the elephants have begun to regularly raid farms in search of food, it is now not uncommon for local people to attack and kill them in retaliation. Between 2013 and 2016, 1,700 crop raiding incidents, 40 human deaths and 300 injuries caused by wildlife were reported in the Kajiado district alone.
The problem has come as vast parts of Kenya that are home to elephants have been subject to intensive agricultural development in the past few decades. The Maasai people who tend to the land are switching from their traditional nomadic lifestyle to seek a more permanent livelihood. But these lands have also been used by elephants and other wildlife for many generations, providing them with food, water and space for migration.
Tensions are running high, but a controversial solution is being put in place: electrified fencing.
In the 2016 Netflix documentary The Ivory Game, filmed in Kenya’s Kajiado district, the following exchange was caught on camera, between a group of Maasai people and Craig Millar, head of security at non-profit conservation foundation Big Life:
Farmer 1: You see this maize? It is for my children, not for elephants … we don’t want to see elephants on our farms.
Millar: And what do you think is the solution?
Farmer 1: The solution is to kill them!
Farmer 2: A fence. Electrification.
Millar: I agree, but … it is expensive. We will ask countries in Europe for help … everybody will have to contribute something. You will have to protect the fence once it is erected.
Farmer 1: We’ll take care of it. If you are lying about the fence, the elephants will be in danger. The elephants will die.
When the documentary was filmed, an electrified fence was believed to be the only solution to the conflict. So, with support from international investors, work in the borderlands between Kenya and Tanzania was started in 2016 and the foundation has reported that the 50km of fence built to date has already reduced elephant crop raids by more than 90%.
Unfortunately, this is not the only human-elephant conflict hotspot in the country. Kenya is experiencing rapid economic and industrial growth, and small-scale agriculture developments are spreading across Maasai lands, causing more and more problems.
Fencing is one of the most commonly used conservation tools in the world. And Big Life’s electrified fence is a great example of how fast and effective it can be. But fencing can have long-term consequences for animals – it can disturb wildlife migration routes, disrupt gene transfer through mating and alter population dynamics.
The possible costs to animals are unknown. South Africa is the only African country that legally requires an environmental impact assessment to be done prior to building fences. Generally speaking, there is no straightforward international policy or legal guidelines for fence planning. In most countries, fences are built in a random and uncontrolled way. But fencing can be an effective tool for conservation – in Australia, fencing is commonly used to save native mammals from introduced carnivores, while in Namibia fencing protects cattle from cheetahs and lions.
In our recently published paper we looked at how an electrified fence being built around crop fields in southern Kenya is affecting major elephant migration pathways. We used GPS collars on 12 elephants from the area where the fence was to be built, and tracked their movement and behaviour. All the elephants were from different families and were collared in various locations.
After two years of data collection we used the information to map where and how the elephants spent their time in the study area. We reconstructed their movement paths and built a connectivity model, highlighting the most important migration routes between large national parks.
After validating our model, we included the fence plan and recalculated, to estimate if the fence would change the elephants’ free movement between parks. The results showed that local managers were right: fencing did not disturb migration corridors nor diminish connectivity between the national parks.
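One basic ingredient of this kind of assessment is checking where recorded movement paths would intersect a planned fence line. The following is a simplified, hypothetical sketch of that screening step (not the study's connectivity model), treating GPS fixes and the fence as line segments in a projected coordinate system:

```python
def _ccw(a, b, c):
    """Signed area test: >0 if a->b->c turns counter-clockwise."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_cross(p1, p2, q1, q2):
    """True if segment p1-p2 strictly crosses segment q1-q2."""
    d1, d2 = _ccw(q1, q2, p1), _ccw(q1, q2, p2)
    d3, d4 = _ccw(p1, p2, q1), _ccw(p1, p2, q2)
    return d1 * d2 < 0 and d3 * d4 < 0

def count_fence_crossings(track, fence):
    """Count GPS movement steps that would cross a planned fence polyline.

    track: ordered list of (x, y) GPS fixes; fence: polyline of (x, y) points.
    """
    crossings = 0
    for a, b in zip(track, track[1:]):
        for f1, f2 in zip(fence, fence[1:]):
            if segments_cross(a, b, f1, f2):
                crossings += 1
    return crossings

# Hypothetical elephant track crossing a straight fence along x = 5:
track = [(0, 0), (3, 1), (7, 2), (9, 0)]
fence = [(5, -10), (5, 10)]
print(count_fence_crossings(track, fence))  # 1
```

Many crossings along a corridor would flag that the fence cuts an important migration route; in the study's case, the recalculated connectivity model showed the planned fence avoided the key corridors.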
But more detailed examination gave us some food for thought. Areas with limited amounts of the resources that elephants need (wetlands, floodplains and conservancies) are predicted to be more intensively used after fencing because the elephants will no longer have access to their usual grounds – and this may lead to overgrazing and habitat destruction. In addition, fences will not stop elephants from moving – so the conflict will basically be shifted to unfenced areas.
These results raise a reasonable question: how much more land will have to be fenced to resolve human-wildlife conflicts? Besides high costs and difficulties in maintenance, the more land is fenced the less habitat remains for elephants. Long-term aerial monitoring in the Amboseli Ecosystem (a 5,700km² conservation area near the Tanzania-Kenya border) confirms that habitat loss to agriculture will become a bigger threat to elephants than illegal poaching in the near future.
There is no simple solution here. The benefits of electrified fencing are undeniable, but the lack of understanding of the long-term consequences for wildlife is worrying. We recommend that it become international policy for integrated impact assessments – like the one we carried out in our study – to be made prior to fencing.
Another approach could be to use fences only as a temporary tool for mitigating critical conflicts, and to consider alternative management approaches – such as beehive fencing, which deters elephants without restricting their movement – to solve the problem in the long run.
Liudmila Osipova receives funding from the EU (FONASO programme). The research was accomplished with the support of the not-for-profit organisation the African Conservation Center.