On our News pages
Our Research News pages contain an abundance of research-related articles, covering recent research output and topical issues.
Our researchers publish across a wide range of subjects and topics and across a range of news platforms. The articles below are a few of those published on TheConversation.com.
Starfish can see in the dark (among other amazing abilities)
Author: Coleen Suckling, Lecturer in Marine Biology, Bangor University
If you go down to the shore today, you’re sure of a big surprise. Many will have witnessed the presence of a starfish or two when visiting the seashore or a public aquarium. Starfish come in an exciting range of colours and sizes, but have you ever given a thought to how this multi-armed wonder manages to exist in our oceans when it’s so unlike the other animals we know?
Recent research published in the journal Proceedings of the Royal Society B not only highlighted that starfish have eyes but also revealed that they can even see in the dark. Starfish may appear rather inanimate, as if they were simple pointy organisms that sit around on the seabed absorbing nutrients from the water. But in reality there’s a lot more going on beneath their spiny exteriors.
Seeing and glowing in the dark
Most starfish possess a crude eye at the tip of each arm. These compound eyes are made up of multiple units called ommatidia, each contributing one pixel of the total image the animal sees. Tropical starfish eyes have been shown to be capable of forming crude images, which allow these animals to stay close to their homes.
Now scientists have shown that several deep-sea starfish species, found up to 1km beneath the water’s surface where no sunlight can penetrate, can still see despite the dark. Most species that can see in the dark depths of the ocean like this have more sensitive eyes but see cruder images. But these starfish appear to see more clearly than their tropical counterparts living in the shallows where there is light.
The researchers suggested different reasons for this. Some species appear to see clearly in a horizontal direction but less so in a vertical direction, which would make sense for an organism that lies on the sea floor. Others appear to have less ability to detect changes in what they’re seeing over time.
Two of these visual species are also bioluminescent, which means they can produce short glowing flashes across the surface of their bodies. It’s likely that the combination of these light flashes and the ability to see clearly allows these deep-sea starfish to communicate with potential mates.
Regrowing lost arms
Hungry predators, such as fish or crabs, can bite off the arms of starfish. If there is a struggle then some species of starfish will voluntarily break off their own arms, giving the rest of their body time to escape. More amazingly, they can then regenerate a whole new arm. If you find a starfish with one (or more) arm smaller than the rest, it’s very likely that this will be the new regenerating limb.
Powered by seawater
Starfish don’t have a typical set of muscles. Instead, they are able to move by pressurising seawater inside their body through a water vascular system. They draw in seawater through a porous spot called a madreporite located on the top surface of the body. The water then passes through a series of internal canals to reach the arms, which have thousands of small tube-like feet hanging below them.
Muscles and valves inside each tube foot pressurise water, enabling it to extend and retract and creating a walking movement, much like human legs but multiplied hundreds of times. At the end of each tube foot is a tiny suction cup, much like a kitchen plunger, which can stick to surfaces and allow the starfish to gain traction.
Starfish are extremely effective predators on the sea floor, feeding on a wide range of food such as mussels, clams and oysters. They will hunch over their prey and use their tube feet to simultaneously grip the prey and to clamp down onto the seabed to prevent any escapes.
If the prey is small enough the starfish will then swallow the entire animal, internally inflating its stomach, which is located in the centre where the arms meet. While holding this death grip position, the starfish will then gradually dissolve the edible soft tissues using enzymes inside the stomach before ejecting the inedible hard shell parts.
But if the prey is too big to swallow, this won’t stop the starfish from fighting to get its meal. Instead it will use its powerful arms and tube feet to pull the two shells slightly apart and then eject its stomach into the gap so it can break down the soft tissue inside the prey and slurp it up, much like using a straw.
So when you next see a starfish, whether it’s on the shore or in an aquarium, please give it a humble nod for being such a specialised member of our ocean club.
Coleen Suckling does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Positive psychology helps brain injury survivors recover with a better outlook on life
Author: Leanne Rowlands, PhD researcher in Neuropsychology, Bangor University
In the UK alone, nearly 350,000 people are admitted to hospital each year with an acquired brain injury, caused by anything from road traffic accidents, falls and assaults to vascular disorders such as strokes. And this number is growing.
As they physically recover from their injury, survivors and their families also face psychologically adjusting to a lasting impairment. Often, this includes cognitive and communicative difficulties. But the social and emotional factors can present a greater burden, with high rates of depression among survivors. This is not only difficult to experience, but can slow down the person’s overall recovery.
But not all of those with acquired brain injuries experience depression. And contrary to what some might expect, brain injury can actually be a source of positive personal growth. Some survivors recover with a better perception of themselves, an improved philosophy of life, and stronger personal relationships. Similarly, some survivors report improved quality of life and enhanced personal satisfaction.
So why the difference? Why do some brain injury survivors recover with a better frame of mind, while others struggle with depression? Trying to simply be happier doesn’t work – brain injury or not – but research suggests that appreciating the positive things in life is key.
In one study, researchers found that appreciation of life, new possibilities, and a patient’s own personal strength, greatly contributed to positive personal growth after a brain injury. It can seem like a difficult task, building internal strength after such a serious event, but there is an area of psychological research that has found it can be fairly simple to do.
In recent years, the field of positive psychology has been helping researchers and psychiatrists to better understand what causes happiness and encourages wellbeing. This study of positive emotions, optimism, strengths and understanding looks at “building what’s strong” – rather than “fixing what’s wrong”.
Positive psychology can be practised using any of five simple methods, and it’s something we can all benefit from. Even though the focus is on building rather than fixing, this includes people with brain injuries, too.
Professor Jonathan Evans wrote in 2011 about how positive psychology could help those with brain injuries, suggesting that it may be used alongside other rehabilitation programmes, to support them with adjusting to life after injury in a positive and hopeful way.
More recently, a trial project – the Positive PsychoTherapy in ABI Rehab (PoPsTAR) programme – put this idea into practice. The researchers incorporated therapeutic exercises based on positive psychology methods, such as setting realistic goals and focusing on positive events, with a rehabilitation programme. They found that Evans’s idea worked, and now we are working on a new project to take this method forward.
Of the five positive psychology methods, one of the most effective is “three good things”. The idea is that you write down three things that have gone well every day for a week, with a short explanation for each. This exercise has been shown to increase happiness and decrease symptoms of depression for up to six months in healthy control participants. And it has been shown to effectively improve happiness in a group of people with ABI, too.
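Purely as an illustration of the exercise’s structure (the function and field names here are invented for the sketch, not part of any clinical tool), a daily “three good things” record could look like this:

```python
from datetime import date

def record_good_things(journal, things):
    """Append today's entry: three good things, each with a short explanation."""
    if len(things) != 3:
        raise ValueError("The exercise asks for exactly three good things")
    journal[date.today().isoformat()] = [
        {"event": event, "why_it_went_well": reason} for event, reason in things
    ]
    return journal

# One day's entry; the exercise is repeated every day for a week.
journal = {}
record_good_things(journal, [
    ("Walked by the sea", "Made time for myself"),
    ("Finished a difficult form", "Broke it into small steps"),
    ("Phone call with a friend", "Felt connected"),
])
```

Recording the explanation alongside each event matters: the reflection on *why* something went well is what the exercise uses to build positive self-perceptions.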
It is thought that “three good things” helps people to focus on, and be more likely to notice, positive events and aspects of life after brain injury. For survivors with memory or attention impairment, reflecting on positive events may be more difficult. This can lead to an inaccurate sense of self, or negative perceptions of life and situations, causing some to feel that their life is lacking in positivity. But keeping a three good things diary can help them to recollect positive things in order to develop positive self-perceptions and self-esteem.
We have been running a pilot study with brain injury survivors which backs up the “three good things” research. The Brain Injury Solutions and Emotions Programme (BISEP) was developed to help survivors deal with any difficulties while they recover. But rather than doing it alone, we’re taking the three good things method one step further and asking them to share one good thing with a group of fellow survivors in a weekly meeting.
Though it’s early days, so far we have received positive anecdotes, with participants using the “things” to reformulate how they feel about their day. As group interventions have been shown to provide social support, the idea is to use the “good things” to help the participants engage with other survivors and motivate them to continue the positive method.
The two-hour weekly meetings are therapeutic. Each week, we discuss a different topic and different strategies, but always start with a good things reflection. Once again, it is a simple way to build a positive psychology method into recovery but one, we hope, that will help the survivors to build a new enthusiasm for life.
Leanne Rowlands does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
UK criminal justice is at breaking point after years of unstable leadership
Author: Stephen Clear, Lecturer in Law, Bangor University
The criminal justice system in England and Wales is failing victims and witnesses to such an extent that MPs say it is now “close to breaking point”. Years of budget cuts and changes have led to a justice system that is in meltdown.
With such a crisis at hand, one would expect some kind of “strong and stable” leadership from the UK government. Yet, in the most recent cabinet reshuffle, the prime minister, Theresa May, once again appointed a new lord chancellor and secretary of state for justice, David Gauke. Gauke is the sixth justice secretary since 2010, and Theresa May’s third. He replaced David Lidington, who had held the role for just six months. Prior to that, Liz Truss held the position for less than a year.
The Ministry of Justice is considered a major government department. Supported by 32 agencies and public bodies, its core purpose is to protect and advance the principles of justice, while upholding the rule of law. In fact, the UK justice system has long been “the envy of the world”. An “independent judiciary” with “global lawyers”, the “brand” is recognised as “internationally outstanding”. But the lack of consistent leadership is causing it to stall. Though there are permanent secretaries working within the ministry, it is the secretary of state who “steers the ship”, and maintains relationships and trust between the government and the judiciary.
The post of lord chancellor – now more commonly known as the secretary of state for justice – dates back to medieval times, when they were responsible for the supervision, preparation and dispatch of the king’s letters, using the sovereign’s seal. Prior to the Constitutional Reform Act 2005, the lord chancellor held roles in all three arms of the state: they were a senior judge, a member of cabinet, and presided over the House of Lords.
Today the lord chancellor is an elected MP who holds the cabinet position of head of the Ministry of Justice. While they still have the ancient title of lord chancellor, the role focuses on responsibility for the efficient functioning and independence of the courts, along with other important constitutional roles.
But constant changes at the top mean that the secretaries of state for justice have not fulfilled these roles. In the meantime, judges have been branded “enemies of the people” – with only a slow response to defend them – and their diversity has been called into question.
On this latter point, in early 2018, David Lidington said that judicial diversity targets were “not the answer” to the issue. So what is? While the judicial appointments commission has a role to play in diversity matters, a secretary of state must be in place to set out the government’s position on what is a pressing matter. The judiciary should represent the people of society, and right now it is not doing so.
Cuts and closures
Looking to the front of house, England and Wales also needs a secretary to lead on a meaningful review of the consequences of the £450m a year legal aid cuts, as well as their impact on the cost of justice. Much of what is now being recognised as bringing the justice system to breaking point is the consequence of years of these cuts. Again, a secretary with longevity in the role could lead on the future direction of justice policy within the UK, as well as keep justice issues top of the government’s agenda.
Similarly, there is the impact of the extensive programme of court closures, which must be headed up by consistent ministerial leadership. The country needs someone to ensure adequate responsibility is taken for the decisions being made, and to ensure that access to justice is not restricted.
However, none of this should be taken to mean that just any MP should be handed the role of justice secretary simply because they will last in the job. The Ministry of Justice requires a secretary with an understanding of the wider profession as it is today and the challenges lawyers face.
In recent years, secretaries have not even had legal backgrounds – although it must be noted that the appointment of Gauke, a former solicitor, has broken this recent trend, a fact which could see policies being led by his more intricate understanding of the law.
Without heeding these glaring warning signs now, the “breaking point” could very quickly develop into a crack in England and Wales’s legal system. Only with someone at the helm who can take long-term responsibility for overhauling the country’s legal system can justice be truly served at all levels.
Stephen Clear does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Five ingenious ways snakes manipulate their bodies to hunt and survive
Author: Tom Major, PhD candidate in Biological Sciences, Bangor University
Do a quick search for “snakes” in the news and you’ll find people terrified, bitten or, sadly, killed by these creatures. Many of us fear their slithering ways and researchers have found evidence which suggests that humans have evolved a tendency to spot snakes more easily than other animals.
But there are more than 3,500 species of snake in the world, and they have been around for 167m years – so they must be doing something right.
Although it seems strange to us, snakes’ lack of legs means that they have evolved numerous fantastic techniques to survive, making ingenious use of their cylindrical forms.
1. Some snakes can travel in straight lines
The majority of snakes bend their spines and exert force on the ground, trees, or water with the bends in their body or the edges of their coils to move. But some can travel in a perfectly straight line. Until recently, it was a mystery how they accomplished this, but new research demonstrates that boa constrictors and other heavy-bodied snakes use their belly scales like a tyre tread to seamlessly progress in a straight line.
Three sets of muscles work in unison, with the first yanking the belly skin and scales forward. Meanwhile, the second shortens the skin as the belly scales move forward and come together, before pinning them in place as the third set brings the spinal column forward. This allows the snake to move forward at a nearly constant speed, but snakes only do it when they are relaxed. A frightened snake in need of speed will revert to a more typical mode of locomotion.
Moving like this is thought to benefit snakes which spend time underground in narrow holes, allowing them to squeeze into animal burrows in search of refuge or prey.
2. Puff adders use their tongues as bait
Widespread across the grassy woodlands of sub-Saharan Africa and parts of the Arabian Peninsula is a chunky venomous snake called the puff adder (Bitis arietans), so named for its habit of hissing loudly when disturbed. Puff adders are successful predators of small mammals, lizards, frogs and birds, but until recently one secret to their success was unknown.
Upon spotting a frog nearby, the puff adder begins flicking its tongue unusually slowly, seemingly mimicking a small worm. To frogs, juicy worms are irresistible, and their eagerness to eat them leads them straight into the waiting mouth of the viper. This hunting strategy is known as lingual luring.
3. Mock viper eyes change shape
While it is gifted with one of the most impressive scientific names of any snake – Psammodynastes pulverulentus, a mixture of ancient Greek and Latin meaning “dusty sand ruler” – the mock viper, unlike the puff adder, does not possess deadly venom. Living in the forested areas of south and southeast Asia, the mock viper is surrounded by dangerous animals such as leopard cats and is subject to the possibility of being eaten on a daily basis. To counter this and intimidate would-be predators, the mock viper earns its name by physically resembling a viper, possessing the well-defined triangular head that characterises real vipers in the area.
This disguise is not enough for these snakes, though. When threatened with imminent danger, the mock viper alters the shape of its pupil from round to a thin, vertical slit. These “elliptical pupils” are typical of actual vipers in the area. It is thought that this last-ditch defence may be enough to persuade a predator to think twice and allow the mock viper to slither to safety.
4. Boas line up to catch prey
In Cuba’s Desembarco del Granma national park, Jamaican fruit bats have found their ideal home in the chambers of sinkhole caves – deep holes sunk vertically into the ground. Unfortunately, it is no easy life: Cuban boas (Chilabothrus angulifer), large, constricting snakes with striking zigzag patterns, also live around these caves, and have developed a taste for the bats.
Though the bats spend the daytime comfortably roosted deep in the caves, they leave every evening to forage for fruit. The boas take up position on the cave ceiling late in the evening and wait for this nightly passage to take place. But their positioning is not random. The boas spread themselves in a line, forming a rudimentary barrier. This coordinated hunting increases their chances of catching a bat because their prey has no choice but to fly past a snake to exit the cave.
5. Sea snakes tie themselves in knots
Sea snakes spend their entire lives in water, even giving birth to live young in the ocean. They have many adaptations to survive including a flat, paddle shaped tail, and an ability to excrete salt using a gland under the tongue.
Despite their name, yellow-bellied sea snakes (Hydrophis platurus) are not cowardly, but rather possess bright yellow undersides. These snakes have developed a bizarre strategy to help them shed their old skin. Because there is not much in the open sea to rub up against to loosen the skin, they actually tie themselves in a knot, using their own bodies as a scratching post to remove it in one piece, much like peeling off a sock.
Tom Major does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
How open data can help the world better manage coral reefs
Authors: Adel Heenan, Postdoctoral Fellow, Bangor University; Ivor D. Williams, Coral Reef Ecologist, National Oceanic and Atmospheric Administration
Coral reefs are critically important to the world but despite the ongoing efforts of scientists and campaigners, these stunningly beautiful ecosystems still face a variety of threats. The most pervasive is, of course, climate change, which is putting their very future in jeopardy.
Climate change is a complex, worldwide problem that needs a global solution. One part of that solution is good monitoring systems that operate at a large scale. Broad-scale datasets from these systems are required to understand how vulnerable ecosystems like coral reefs are changing, and to separate those changes from natural variation.
Often, however, scientists who collect coral reef monitoring data do so in isolation. They work on independent research projects, or for relatively small programmes with specific local agendas, and so don’t always make their data available to the scientific community. The pressure on academic researchers to be the first to publish their findings also disincentivises data sharing. So there can be a conflict of interest between the motivations of an individual scientist and the larger advancement of science.
More practically, getting data ready to share is time consuming, particularly when there aren’t standardised monitoring procedures or a good data management infrastructure in place. In the absence of good management, data can simply be lost as people move on, taking lab books, data sheets and external hard drives with them.
But these barriers can be overcome – through, for example, open access journals that publish scientifically valuable datasets. Peer-reviewed, citable datasets with standardised metadata promote sharing and reusability, while also recognising the researchers behind them.
Given the now urgent need to find science-based solutions for coral reefs, we believe the benefits of open data far outweigh the costs. This is one of the reasons we recently published our entire dataset of coral reef habitats and fish assemblages in the western central Pacific.
Our dataset was collected by scientific divers from the US National Oceanic and Atmospheric Administration (NOAA) between 2010 and 2017. They were part of the interdisciplinary team that operates from NOAA ships to collect physical, chemical and biological data for the Pacific Reef Assessment and Monitoring Programme. For seven years, these researchers surveyed fish assemblages and coral reef habitats at 39 islands and atolls in the United States-affiliated western central Pacific.
The areas studied ranged from the remotest islands in the central Pacific – hundreds of kilometres from the nearest human civilisations – to highly populated, developed and urbanised islands such as Oahu and Guam.
These islands also have different biophysical conditions, such as temperature. This means that we have been able to quantify different threats relative to the natural background variability caused by environmental conditions. For instance, we can now understand the true effect of human depletion on coral reef fishes. We have also been able to set reasonable expectations for what a healthy reef looks like in different locations.
When multiple large data sets like this are pooled, they become even more powerful, allowing researchers to tackle key questions, such as where coral reef “bright spots” are and why they are thriving.
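As a loose sketch of why shared standards make pooling possible (the field names, programme names and values below are invented for illustration, not drawn from the NOAA dataset), survey records from a large and a small programme can only be combined once they agree on a common schema:

```python
# Pooling survey records from two hypothetical monitoring programmes.
# Combining them is only meaningful once both use the same field names
# and units - which is why community data standards matter.

SCHEMA = {"island", "year", "fish_biomass_g_m2", "coral_cover_pct"}

def pool_surveys(*datasets):
    """Merge (name, records) pairs into one list, keeping provenance."""
    pooled = []
    for name, records in datasets:
        for rec in records:
            if set(rec) != SCHEMA:
                raise ValueError(f"{name}: record does not match the shared schema")
            # Tag each record with its source so contributors stay citable.
            pooled.append({**rec, "source": name})
    return pooled

large_programme = [
    {"island": "Oahu", "year": 2015, "fish_biomass_g_m2": 21.4, "coral_cover_pct": 18.0},
]
local_programme = [
    {"island": "Oahu", "year": 2016, "fish_biomass_g_m2": 19.8, "coral_cover_pct": 17.5},
]

pooled = pool_surveys(("large_programme", large_programme),
                      ("local_programme", local_programme))
```

Keeping the source label on every record reflects the point about recognition: pooled data remains traceable to the programme that collected it.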
By making all data easily available, as ours now is, and working to improve comparability, we can speed up the scientific pace to better understand and manage coral reefs. Though we were required to make the NOAA data available under the United States Open Data Policy, we believe it is important for the wider coral reef community to fully embrace this ideal. Coral reefs are so widespread that no one programme can hope to gather data across most of their range. Linking large and small-scale programmes will improve the value of both: large datasets can give the big picture context, while localised programmes can be more intensive or regularly repeated.
One landmark study, for example – which used open datasets from different sources – found that the majority of coral reefs are fished to under half of their maximum population, and from this a range of management target benchmarks was established. Another compiled 25 different datasets to report on the status of coral reef fish biomass at 37 different districts in Hawaii, covering almost the entire archipelago’s coastline. Not only does this collated data help local reef management, but it can be used for marine spatial planning and for assessing the effectiveness of reef management elsewhere.
There are certainly a number of challenges to bringing different datasets together. Scientists will have to work together to create a core set of community standards for how to calibrate across different methods, and what to monitor. But by doing this, the information we gather will be far more useful in addressing the coral reef crisis. A commitment to open data is an important part of this.
Adel Heenan received funding from the NOAA coral reef conservation programme.
Ivor D. Williams receives funding from the NOAA coral reef conservation programme.
What supplements do scientists use, and why?
Authors: Simon Bishop, Lecturer in Public Health and Primary Care, Bangor University; Graeme Close, Professor of Human Physiology, Liverpool John Moores University; Haleh Moravej, Senior Lecturer in Nutritional Sciences, Manchester Metropolitan University; Justin Roberts, Senior Lecturer, Anglia Ruskin University; Neil Williams, Lecturer in Exercise Physiology and Nutrition, Nottingham Trent University; Tim Spector, Professor of Genetic Epidemiology, King's College London
Supplements are a multi-billion dollar industry. But, unlike pharmaceutical companies, manufacturers of these products don’t have to prove that their products are effective, only that they are safe – and that’s for new supplements only.
We wanted to know which supplements are worth our attention (and money) so we asked six scientists – experts in everything from public health to exercise physiology – to name a supplement they take each day and why they take it. Here is what they said.
Simon Bishop, lecturer in public health and primary care, Bangor University
Turmeric is more familiar as an ingredient in South Asian cooking, adding an earthy warmth and fragrance to curried dishes, but, in recent years, it has also garnered attention for its potential health benefits. I have been taking ground turmeric root as a dietary supplement for around two years, but I have been interested in its use in Ayurvedic medicine for far longer.
Turmeric is used as a traditional remedy in many parts of Asia to reduce inflammation and help wounds heal. Now, mounting evidence suggests that curcumin, a substance in turmeric, may also help to protect against a range of diseases, including rheumatoid arthritis, cardiovascular disease, dementia and some cancers.
The evidence underpinning these claims of health-giving properties is not conclusive, but it is compelling enough for me to continue to take turmeric each morning, along with my first cup of coffee – another habit that may help me live a bit longer.
Graeme Close, professor of human physiology, Liverpool John Moores University
Vitamin D is a peculiar vitamin in that it is synthesised in our bodies with the aid of sunlight, so people who live in cold countries, or who spend a lot of time indoors, are at risk of a deficiency. People with darker skin tone are also more at risk of vitamin D deficiency as melanin slows down skin production of vitamin D. It is estimated that about a billion people are deficient in the vitamin.
Most people are aware that we need enough vitamin D to maintain healthy bones, but, over the past few years, scientists have become increasingly aware of other important roles of vitamin D. We now believe vitamin D deficiencies can result in a less efficient immune system, impaired muscle function and regeneration, and even depression.
Vitamin D is one of the cheapest supplements and is a really simple deficiency to correct. I used to test myself for deficiencies, but now – because I live in the UK, where sunlight is scarce between October and April and doesn’t contain enough UVB radiation during these cold months – I supplement with a dose of 50 micrograms, daily, throughout the winter. I also advise the elite athletes I provide nutrition support to, to do the same.
Justin Roberts, senior lecturer in sport and exercise nutrition, Anglia Ruskin University
Having diverse beneficial gut bacteria is important for your physical and mental health. However, the balance of bacterial species can be disrupted by poor diet, being physically inactive and being under constant stress. One way to support the health of the gut is to consume dietary probiotics (live bacteria and yeasts), such as yogurt, kefir and kombucha.
I first came across probiotics after years of triathlon training, often experiencing gastrointestinal symptoms – such as nausea and stomach cramps – after training and races. I was also more susceptible to colds. After researching the area, I was surprised at how many people experience similar gastrointestinal problems after exercise. Now I have found that taking a probiotic regularly lessens my symptoms after training and benefits my general health.
A recent study we conducted showed that taking a probiotic in the evening with food, over 12 weeks of exercise training, reduced gastrointestinal problems in novice triathletes.
There is also a wealth of research supporting the use of probiotics for general health benefits, including improving intestinal health, enhancing the immune response and reducing serum cholesterol.
Neil Williams, lecturer in exercise physiology and nutrition, Nottingham Trent University
Prebiotics are non-digestible carbohydrates that act as a “fertiliser”, increasing the growth and activity of beneficial bacteria in the gut. This in turn can have positive effects on inflammation and immune function and metabolic syndrome, increase mineral absorption, reduce traveller’s diarrhoea and improve gut health.
I first came across prebiotics in my research to target the gut microbiota in athletes suffering from exercise-induced asthma. Previous research had shown asthma patients to have altered gut microbiota, and feeding prebiotics to mice had been shown to improve their allergic asthma. Taking this as our launching point, we showed that taking prebiotics for three weeks could reduce the severity of exercise-induced asthma in adults by 40%. Participants in our study also noted improvements in eczema and allergic symptoms.
I add prebiotic powder to my coffee every morning. I have found that it reduces my hayfever symptoms in the summer and my likelihood of getting colds in the winter.
Haleh Moravej, senior lecturer in nutritional sciences, Manchester Metropolitan University
I started taking omega 3 after attending a Nutrition Society winter conference in 2016. The scientific evidence that omega 3 could improve my brain function, prevent mood disorders and help to prevent Alzheimer’s disease was overwhelming. After analysing my diet it was obvious that I wasn’t getting enough omega 3 fatty acids. A healthy adult should get a minimum of 250-500mg, daily.
Omega 3 is a form of fatty acid. It comes in many forms, two of which are very important for brain development and mental health: EPA and DHA. These types are primarily found in fish. Another type of omega 3 – ALA (alpha-linolenic acid) – is found in plant-based foods, such as nuts and seeds, including walnuts and flax seeds. Due to my busy schedule as a lecturer, during term time my diet is not as varied and enriched with omega 3 fatty acids as I would like, forcing me to choose a supplement. I take one 1,200mg capsule daily.
Nothing but real food
Tim Spector, professor of genetic epidemiology, King’s College London
I used to take supplements, but six years ago I changed my mind. After researching my book I realised that the clinical studies, when properly carried out and independent of the manufacturers, clearly showed they didn’t work, and in many cases could be harmful. Studies of multivitamins show regular users are more likely to die of cancer or heart disease, for example. The only exception is supplements for preventing blindness due to macular degeneration, where randomised trials have been generally positive for a minor effect with a mixture of antioxidants.
In many cases, there is some experimental evidence these chemicals in supplements work naturally in the body or as foods, but no good evidence that when given in concentrated form as tablets they have any benefit. Recent evidence shows that high doses of some supplements can even be harmful – a case in point being calcium and vitamin D. Rather than taking expensive and ineffective synthetic products, we should get all the nutrients, microbes and vitamins we need from eating a range of real foods, as evolution and nature intended.
Graeme Close consults for Gatorade Sport Science Institute (GSSI) and Healthspan Elite. He has received funding from the MRC, BBSRC, Aliment Nutrition, GSK and GSSI.
Tim Spector receives funding from the MRC, Wellcome Trust, CDRF, NIHR and EU Horizon grants, and is the author of “The Diet Myth: The Real Science Behind What We Eat” (Orion, 2016).
Haleh Moravej, Justin Roberts, Neil Williams, and Simon Bishop do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointments.
You are more likely to deny the truth in your second language
Authors: Manon Jones, Senior Lecturer of Psychology, Bangor University; Ceri Ellis, Research Associate, University of Manchester
Whether you’re speaking in your native tongue, or in another language, being understood and believed is fundamental to good communication. After all, a fact is a fact in any language, and a statement that is objectively true should just be considered true, whether presented to you in English, Chinese or Arabic.
However, our research suggests that the perception of truth is slippery when viewed through the prism of different languages and cultures. So much so that people who speak two languages can accept a fact in one of their languages, while denying it in the other.
Bilingual people often report that they feel different when switching from one language to another. Take Karin, a fictitious bilingual, for example. She might use German informally at home with family, in the pub, and while watching football. But she uses English for more structured, professional aspects of her life as an international lawyer.
This contextual change of language is not simply superficial, it goes hand-in-hand with a host of perceptual, cognitive and emotional trends. Research shows that language linked to experiences shapes the way we process information. So if someone was to utter the words “Ich liebe dich” to Karin, she might well blush, but by the same token, “I love you” might not alter her cheek colour at all. It’s not a matter of proficiency: Karin is equally fluent in German and English, but her emotional experiences are bound more strongly to her mother tongue, simply because she experienced more fundamental, defining emotions as a child.
A substantial number of psychology experiments have shown that languages shape aspects of our visual perception, the way we categorise objects in our environment, and even the way we perceive events. In other words, our very sense of reality is constructed by the confines of the language we speak.
Less is known of whether language also shapes our higher-level knowledge, relating to concepts and facts. Until recently, it was commonly assumed that one’s understanding of meaning is shared across all the languages one speaks. However, we have been able to observe that this is not the case. Bilinguals actually interpret facts differently depending on the language they are presented with, and depending on whether the fact makes them feel good or bad about their native culture.
During one such study from our group, we asked Welsh-English bilinguals – who had spoken Welsh from birth and considered themselves culturally Welsh – to rate sentences as true or false. The sentences had either a positive or negative cultural connotation, and were factually either true or false. For example, “mining was celebrated as a core and fruitful industry in our country” has a positive connotation and is a true statement. Another similar yet subtly different example is “Wales exports prime quality slate to every single country”, which is a positive yet false statement. The statement “historians have shown that miners were heavily exploited in our country” is negative and true. And finally, “the poor work ethic of miners ruined the mining industry in our country” is negative and false.
Our bilingual participants read these sentences in both English and Welsh, and as they categorised each one, we used electrodes attached to their scalps to record the implicit interpretation of each sentence.
We found that when sentences were positive, bilinguals showed a bias towards categorising them as true – even when they were false – and that they did this in both languages. So far, no surprise. But when sentences were negative, bilinguals responded to them differently depending on whether they were presented in Welsh or in English, even though the exact same information was presented in both of the languages.
In Welsh they tended to be less biased and more truthful, and so they often correctly identified some unpleasant statements as true. But in English, their bias resulted in a surprisingly defensive reaction: they denied the truth of unpleasant statements, and so tended to categorise them as false, even though they were true.
This research shows the way in which language interacts with emotions to trigger asymmetric effects on our interpretation of facts. While participants’ native language is closely tied to their emotions – which perhaps comes with greater honesty and vulnerability – their second language is associated with more distant, rational thinking.
Make no mistake, our bilingual participants knew what was factually true and what was factually false – as revealed by the brain activity measures – but functioning in the second language appeared to protect them against unpalatable truths, allowing them to deal with these more strategically.
Manon Jones receives funding from the Coleg Cymraeg Cenedlaethol.
Ceri Ellis does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Is fishing with electricity less destructive than digging up the seabed with beam trawlers?
Author: Michel Kaiser, Chair of Marine Conservation Ecology, Bangor University
While many people may be interested in the sustainability and welfare of the fish they eat, or the health of the environment, fewer probably worry about the effect that trawl fishing – which accounts for 20% of landings – has on the ocean.
For a long time researchers and the industry have been trying to improve trawl fishing practices. Things have moved on from practices such as beam trawling – where a large net is dragged across the ocean floor – to potentially less invasive and newer methods like electric pulse trawling. This sees electrical pulses being sent into the seawater to flush out bottom-dwelling fish like plaice and sole, causing them to swim into the path of trawl nets.
Beam trawls have been the focus of environmental concern for decades, as they cause a substantial reduction in the abundance of animals living on the seabed. These effects can be long lasting if the fishing occurs in areas which are inhabited by long-lived seabed dwelling species such as oysters and sponges. Beam trawls are also associated with high amounts of bycatch – unwanted fish and other organisms – although the industry and researchers are working on ways to reduce this.
However, the relatively newer electric pulse fishing is not necessarily a perfect solution either. Though it does not dig into the seabed to the same extent as traditional beam trawling, research has found it can fatally injure other species which may not be the target catch.
So why use this method if it still has its faults? High fuel costs and EU legislation, which has reduced the discarding of fish at sea, have renewed interest in the use of electricity in fishing. Across the world, millions are fed by the fish caught by trawlers, so it is unrealistic for trawling to just be stopped altogether, but the variety of negative impacts on the marine ecosystem remain a cause for concern.
For and against
The UK government recently announced a review into the use of electric pulses by foreign trawlers in British waters due to concerns about their potential effects on the environment and bycatch. Campaign groups have also called on the EU to reinstate a ban on the electrical pulse method, calling it “destructive”.
The current pulse trawls are fine-tuned to catch larger fish (the spine of the fish acts as a conductor), so that bigger fish respond more strongly to the electric stimulus and are more likely to be caught in the nets. This reduces catch of unwanted species that are less likely to respond to the electric pulse, and also reduces contact with the seabed.
Traditional beam trawls, on the other hand, are fitted with heavy “tickler chains” – horizontal chains strung across the mouth of the trawl – designed to “dig” fish like Dover sole out of the seabed. Soles curl into a “c” shape in response to the electric stimulation used by pulse trawls, so they can be caught without the use of these “tickler chains”.
Dispensing with the chains means that the gear is lighter, creates less disruption of the seabed, and substantially reduces the amount of other seabed organisms caught – by 75-80% per unit area of the seabed fished. Not catching these unwanted species also improves the quality of the landed catch, because there is less skin abrasion in the net. Together, improved catch quality and reduced fuel consumption mean greater profitability for the fishermen.
Electric pulse seems like a good idea from this perspective, but studies of its effects on other species of fish – that are not the intended catch – show that larger cod in particular are prone to spinal fractures when in contact with the electric pulses. Small cod appear to be unaffected. Cod typically have a low survival rate if they are unintentionally caught in most trawls, so this issue of spinal fracture may be irrelevant if they are caught using either method.
Additionally, though fewer seabed organisms end up in the trawl net when using electricity compared to traditional beam trawling, it is too early to tell whether the creatures remaining on the seabed are affected negatively by contact with the electric stimuli. Aquarium experiments have shown that worms and shrimps, for example, recover within seconds following the application of an electric shock. However, these controlled laboratory experiments take place without the natural predators that may take advantage of a shocked creature.
The issues here are not solely environmental. The pulse trawl fleet has encroached on grounds that historically were fished by fishermen using low impact netting methods, leading to some resentment and conflict with others in the fishing community.
Societal acceptance of any food production method is vital, and at present – for pulse trawling – this is a greater challenge than answering the ecological questions. This issue could be resolved by more formal zoning of the sea so that pulse trawling is restricted to areas that do not impinge upon traditional low impact fisheries – initiatives which are currently in negotiation.
Taking both society and environment into account, electric pulse trawling may not be an infallible solution, but it might be a better way of trawling than traditional beam trawling.
Michel Kaiser is the Chair of the International Scientific Advisory Committee (ISAC) for the Dutch pulse trawling project. The ISAC is an independent body whose function is to scrutinise the science undertaken as part of this project.
2018 must be the year that we reimagine judicial diversity
Author: Stephen Clear, Lecturer in Law, Bangor University
Shortly before his retirement at the end of 2016, the then supreme court president, Lord Neuberger, stated that “the higher echelons of the judiciary in the UK suffer from a marked lack of diversity and … the supreme court does not score at all well”.
In a year where equality has been more at the forefront of the public consciousness than ever before, one would hope that this stark commentary from Britain’s top judge would have sparked some change. And yet, more than a year later, little progress has been made.
There have been plenty of opportunities for the judiciary to become more diverse in recent months. However, between 2016 and 2017, official figures show that female, and black, Asian and minority ethnic (BAME) judicial appointments either remained largely consistent or saw minuscule improvements. By the close of 2017, 28% of judges were female (consistent with 2016), and only 7% were BAME.
In addition, despite it now being 30 years since Lady Butler-Sloss became the first woman to be appointed a justice of appeal, there are only nine women out of 39 in the court of appeal.
But despite this and more, the secretary of state for justice has recently rejected calls for “judicial diversity targets”, saying it is the “wrong approach”.
That’s not to say there were no changes, however. While overall there was limited improvement, there were some positive signs of progress elsewhere. In total, 45% (806) of tribunal judges are now female. And BAME representation now accounts for 10% of court judges under 40, and 14% of tribunal judges under 40.
But still the judiciary is mostly living up to its older, white male stereotype. 2017 saw the first BAME judge being appointed to the court of appeal – Justice Singh – and though it is a significant step, his presence only amounts to a 2.56% BAME representation on this court’s benches (one out of 39). Meanwhile, no supreme court positions have been filled with BAME appointments to date.
Another improvement was Lord Neuberger’s replacement by Lady Hale as president of the supreme court, with Lady Black also joining the highest court of England and Wales. But this has only improved the supreme court’s gender balance from 8.3% to 16.6% female.
Looking at international figures, it becomes clear that this lack of diversity is not just a British problem. Council of Europe research from 2016 shows that 72% of countries are failing to meet its target of at least 40% female judges.
However, compared to other states, the UK was, at the time, the second worst for gender balance within their respective high/supreme courts, with only Italy scoring lower. Though this situation has now marginally improved, the UK still has among the worst gender balance within Europe. It also falls behind the US supreme court, where female justices make up 33.3% of the court, and which has 11.1% BAME representation.
Though the situation cannot be changed overnight, there are significant opportunities coming in 2018 that can further redress the balance. Lords Mance, Hughes and Sumption will all reach retirement age, and their positions will be filled by judges from lower down the ranks. If the supreme court follows the precedent it set with the appointments of Lady Black and Lords Lloyd Jones and Briggs in 2017, it is likely that justices of the court of appeal will be in the running for the vacancies. And with nine female judges sitting in that court at present, there is evident potential for more women to progress.
With only one very recent BAME representative appointed to the court of appeal, it is very unlikely that the supreme court will be able to do anything too radical on this issue this year. But the movement of appeal court judges will leave further positions to be filled, which can and should be taken by BAME and female judges.
Though appointments to the supreme court, court of appeal and high court should rightly always be reserved for the most capable jurists based on merit, questions have to be asked as to why diverse candidates have not naturally filtered to the pinnacles of the profession. Diversity targets are off the table for now, but the justice secretary should revisit the merits of having them imposed at the lower levels only, so as to ensure equal opportunities for the UK’s talented BAME and female lawyers at the start of their judicial careers. Though it won’t make for any immediate extreme changes higher up, the more diverse group will eventually progress.
Ensuring that there is a pool of judges that reflects society at the lower levels could make 2018 the year that the UK starts to radically overhaul judicial diversity.
Stephen Clear does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Tory attack on Working Time Directive signals a post-Brexit race to the bottom
Author: Tony Dobbins, Professor of Employment Studies, Bangor University
Pro-Brexit Conservative government ministers like Michael Gove are demanding the EU Working Time Directive be scrapped, according to reports. In a Sunday Times interview, foreign secretary Boris Johnson urged prime minister Theresa May to negotiate a Brexit trade deal enabling Britain to ditch EU laws, warning about being a “vassal state” of Brussels.
EU employment rights have faced prolonged opposition from Conservative MPs. They are seen to impede the flexibility of UK business and labour markets. It is no surprise then that the directive is being attacked by right-wing Brexiteers, with The Sun newspaper claiming they have widespread cabinet support to axe it after Brexit. This is despite previous promises by the prime minister that: “The Conservatives will guarantee all rights that workers currently enjoy as we leave the European Union.”
What is the Working Time Directive?
The EU Working Time Directive enshrines minimum health and safety requirements for organising working time in EU member states. Workers have legal rights to a weekly working time limit of 48 hours, minimum paid holidays and statutory rest periods.
Significantly, however, it also permits member states like the UK “opt-outs” from the maximum 48-hour week, as long as individual workers agree.
The directive was transposed into UK law under The Working Time Regulations 1998, implementing the following main rights:
- A limit of 48 hours that a worker can be required to work in a week – though individuals may choose to work longer by opting out.
- Paid annual leave of 5.6 weeks a year.
- 11 consecutive hours’ rest in any 24-hour period.
- A 20-minute rest break if the working day is longer than six hours.
- One day off each week.
- A limit on the normal working hours of night workers to an average eight hours in any 24-hour period.
The Sun quotes a government minister claiming that axing the directive “will give employers the added flexibility they will need once we have left the EU” and enable British workers to earn higher wages by allowing “millions of people to earn vital overtime cash”.
But this claim is contestable. It fails to recognise that many individuals can already voluntarily opt out of the directive and work more hours if they want to. Plus, according to a government impact report looking into the matter, “no clear conclusions can be drawn” about the directive’s impact on wages.
Various commentators warn of the adverse consequences for workers’ health and well-being if the directive is axed. Doing so would remove legal rights to paid holidays, maximum working hours and rest breaks – potentially opening the door to further employer exploitation of workers who have weak bargaining power and/or no collective trade union representation.
Trades Union Congress general secretary, Frances O’Grady commented:
This is a straight-up attack on our rights at work. Millions could lose their paid holidays, and be forced to work ridiculously long hours.
Charlotte Cross, director of the Better Health at Work Alliance, told The Chartered Institute of Personnel and Development (CIPD) that it “could be a damaging step for employee well-being in a workforce already plagued by high stress and poor mental health”.
Further, current research suggests that restricting working time is beneficial for everyone – workers, employers, citizens, customers. Workers have stronger health and safety protection; citizens and customers are less exposed to dangerously stressed workers. Employers benefit from more productive workers, avoid deteriorating quality of service associated with overworked staff, and reduce risk of workplace accidents.
So there are valid arguments for working fewer – not more – hours, or for distributing working hours more equitably. The UK has a polarised labour market: some employees work too many hours, whereas others experience underemployment and zero hours contracts. Scrapping the Working Time Directive protections would be detrimental for workers, especially those who already experience precarious working conditions.
Did workers vote for this?
It seems doubtful that many who voted for Brexit voted to remove legal rights to paid holidays, rest breaks, and maximum hours, hence the spinning of facts by right-wing politicians and media. Many workers may be unaware that these rights originated from EU employment legislation, and what they will lose if they are removed.
Even before Brexit kicks in, the UK labour market is one of the most lightly regulated in the EU. This is despite EU employment laws, which have often been implemented in a minimalist way – the UK opt-out from the 48-hour week is a prime example of this.
If the Conservatives remain in power and the Brexiteers broaden their deregulation of EU employment rights post-Brexit, the attack on the Working Time Directive could signal a more extensive dismantling of protections for vulnerable workers. The concern is that a race to the bottom will unfold with a continued quest for even greater labour market flexibility.
Tony Dobbins has received research funding from The British Academy, the Economic and Social Research Council, the European Commission.
Why PrEP takers should still use condoms with HIV+ partners
Author: Simon Bishop, Lecturer in Public Health and Primary Care, Bangor University
In the film The Matrix, the lead character Neo is given the choice to take one of two pills that will determine his fate. The red pill promises to open his eyes to the true nature of reality, while the blue pill will perpetuate his ignorance and shield him with a comfortable illusion. Neo takes the red pill in a moment that has become one of the most retold film analogies of all time.
Neo’s pill taking is also useful for delving into a worrying trend that has arisen in recent months with regards to HIV-prevention drugs. The medications that have been licensed in recent years to reduce HIV transmission among homosexual men run the risk of being nothing more than a blue pill for other groups, luring users into a false sense of security.
Condoms have been the mainstay of safer sex messages for 30 years as the best way of reducing HIV transmission. In 2012, however, the US food and drug administration licensed a drug to prevent people from contracting HIV, which had previously only been used to treat the infection. This small blue pill was called Truvada, and so pre-exposure prophylaxis (or PrEP) was born. By this stage, evidence of the safety and effectiveness of Truvada in reducing HIV transmission was already strong, especially among men who have sex with men. The US decision to licence the drug was quickly followed by World Health Organisation guidelines also supporting the use of Truvada for PrEP, not as an alternative to condom use, but rather as part of a broader HIV prevention approach that included condoms.
With US and WHO approval, the use of PrEP has now become commonplace and widespread. In the UK, Scotland currently offers PrEP for free on the NHS to high-risk individuals, and both England and Wales are trialling its use through selected sexual health clinics. For those unable to obtain an NHS prescription for Truvada, there are a number of online sellers willing to provide the drug via mail-order for as little as £35 (US$48) a month, making PrEP hugely accessible.
On the face of it, these might seem like welcome developments – an available and affordable way to reduce the number of people contracting HIV. The problem is that some men who have sex with men appear not to be using PrEP in addition to condoms, but rather as an alternative. The wide availability of PrEP, and its promise to protect against HIV, also appears to be leading heterosexual men and women towards using Truvada in order to avoid condoms, particularly within the context of commercial sex. Indeed, guidance from both the US and the UK suggests that PrEP may be considered appropriate for use by heterosexuals who are sexually promiscuous but tend not to (or perhaps do not want to) use condoms.
There are a number of problems with this position. First, condoms protect against more than just HIV. Dispensing with their use risks exposure to other sexually transmitted infections, including gonorrhoea, chlamydia and syphilis. Although these infections are usually curable, there are strains that are resistant to antibiotics and so difficult to treat. Drug-resistant gonorrhoea in particular represents a major public health concern.
Even leaving aside other sexually transmitted infections, PrEP still represents a poor alternative to condom use, particularly in protecting heterosexuals. Comparisons of the two approaches vary in terms of effectiveness, but studies of PrEP to prevent HIV transmission in women have often shown disappointing results in preventing new infections.
The situation is made more complicated because – though injectable alternatives are now being trialled – Truvada is usually provided as a pill that needs to be taken regularly, ideally every day, in order to provide the best protection. Anyone who has ever been prescribed a course of antibiotics knows just how easy it can be to forget to take a dose, but in the case of PrEP this forgetfulness can have particularly serious consequences.
And finally, not all HIV is prevented by Truvada. The drug has been licensed for use to treat the disease for well over a decade and over time some strains of HIV have become resistant to it. One consequence of this resistance is that we have started to see failures of PrEP to prevent HIV infection, even when the drug is used consistently. Although the prevalence of Truvada-resistant HIV is currently thought to be very low, its very existence underlines the danger of relying on PrEP alone.
PrEP continues to be a valuable tool in the arsenal of HIV prevention, especially among high-risk groups, such as men and transgender women who have sex with men. Despite this, immense care needs to be taken to prevent Truvada becoming viewed as an alternative to using condoms by the wider population. Unfortunately, when used on its own, as in The Matrix, the blue pill risks offering an illusion of safety. It may be argued that it is ethical to make any new advancement in HIV prevention available to all. In reality, doing so may ultimately do more harm than good, fuelling an epidemic in sexually transmitted infections and speeding up drug resistance in HIV.
Simon Bishop does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Yoga in the workplace can reduce back pain and sickness absence
Authors: Dr Ned Hartfiel, Research Officer, Centre for Health Economics and Medicines Evaluation, Bangor University; Rhiannon Tudor Edwards, Professor of Health Economics, Bangor University
Back pain is the single leading cause of disability in the world. In the US, four out of every five people experience back pain at some point in their life. In the UK, back pain is one of the most common reasons for visits to the doctor, and missed work. In fact, absence from work due to back problems costs British employers more than £3 billion every year.
But there is a potentially easy way to prevent this problem: yoga. Our new research has found that exercises from the ancient Indian practice can have very positive benefits for back problems. Our findings suggest that yoga programmes consisting of stretching, breathing, and relaxation methods can reduce sickness absence due to back pain and musculoskeletal conditions.
Wellness at work
There has already been plenty of research demonstrating the benefits of yoga for NHS patients, showing that patients with chronic back pain who regularly practice yoga take fewer sick days than those who don’t. But very little research has looked into the benefits of implementing workplace yoga programmes, as we did.
We worked with 150 NHS employees from three hospitals in North Wales. The staff were randomly assigned to either a yoga group or an education group. The yoga group received a total of eight 60-minute yoga sessions, once a week for eight weeks. In addition to this, the yoga participants were given a DVD and a poster for home practice. They were invited to practice yoga at home for ten minutes a day for six months. The education group meanwhile received two instructional booklets on how to manage back pain and reduce stress at work.
The yoga programme was based on Dru Yoga – which emphasises soft, flowing movements – and consisted of four parts. To start each session, there was a series of gentle warm-up movements, followed by eight stretches to release tension from the shoulders and hips. Then participants did four back care postures to develop suppleness in the spine and improve posture. Each session was completed with relaxation techniques to create an overall feeling of positive health and well-being.
After eight weeks, the results showed that most yoga participants had larger reductions in back pain compared to the education group. After six months, employee staff records showed that the yoga participants had 20 times less sick leave due to musculoskeletal conditions (including back pain) than the education group. We also found that the yoga participants visited health professionals for back pain only half as often as education participants during the six month study.
Those who improved the most were participants who also practised yoga at home for an average of 60 minutes or more each week. Ten minutes or more a day of home practice was associated with doubling the reduction in back pain, and many participants noted that it helped them to better manage stress too.
Gains in productivity
In the US, about a quarter of all major employers offer some form of meditation or yoga, but the practice has yet to be taken up as widely in the UK or elsewhere in Europe. Insurance company Aetna, for example, offers free yoga classes to its 55,000 employees, with reported annual savings of US$2,000 (£1,520) per head in healthcare costs and a US$3,000 (£2,280) gain per person in productivity. Preventing back pain makes economic sense all round: yoga seems to be good not only for employees and employers, but for the economy as well.
With more and more research confirming the health benefits of yoga, the National Institute for Health and Care Excellence (NICE) in the UK now recommends stretching, strengthening and yoga exercises as the first step in managing low back pain. Public Health England also advises yoga classes in the workplace.
Since our initial work with the NHS proved to be such a success, the Dru Yoga healthy back programme used in the study has been delivered to staff at Merseyside Police, Great Ormond Street Hospital, the Institute of Chartered Accountants, Siemens, Barclays, Santander and many other private and public organisations. We now hope that many more will take up yoga to improve the health and well-being of their employees.
Dr Ned Hartfiel is a Research Officer at Bangor University and Director of the Healthy Back Programme Ltd. He is also a volunteer at the Dru International Training Centre in North Wales. This study was funded by a grant from the Welsh Health Economics Support Services.
Professor Rhiannon Tudor Edwards is co-director of the Centre for Health Economics and Medicines Evaluation in the School of Healthcare Sciences and Bangor Institute for Health and Medical Research at Bangor University. She receives funding from Health and Care Research Wales and Public Health Wales, both Welsh Government bodies.
Lessons from the Beeching cuts in reviving Britain's railways
Author: Andrew Edwards, Dean of Arts and Humanities and Senior Lecturer in Modern History, Bangor University
More than 50 years ago the Beeching Report was published, spelling the end of hundreds of miles of British railway lines and stations. Pretty much immediately, local campaigns sprang up to protest what became infamously known as the “Beeching Axe”. Now, the transport secretary Chris Grayling has announced that some of the lines could be re-opened.
The proposals, aimed to “reverse decades of decline” in the railways, have been praised as the “rebirth of the railways”. Yet huge investment is needed to truly revitalise the railways. Now, as in Beeching’s time, Britain’s railways are in need of updating. And if we want to see a rail system that is both economically viable and socially beneficial there are some lessons to be learned from the wrongs of past policy.
Back in 1963, Dr Richard Beeching’s plans to cut 5,000 miles of line and some 300 stations were outlined in a British Railways Board report, The Reshaping of British Railways. From an economic perspective, the urgent need to identify savings in the railways was hard to challenge. Nationalised in 1948, the railways had struggled to pay their way for most of the following decade and had, by the early 1960s, accumulated significant operating deficits.
The railways were also in drastic need of modernisation, not least of their rolling stock. Many stations built in the Victorian era had fallen into disrepair. Hit by substantial rises in the cost of coal and steel, the railways were further hampered by management inertia and the lack of a clear government strategy in attempts to place them on a sustainable footing. At a time when rail workers were poorly paid, there was even a reluctance to raise fares to offset operating losses.
Consequently, modernisation was slow to materialise and Britain’s railways still largely ran on steam. Despite the more efficient opportunities afforded by diesel locomotion and electrification, British Railways still purchased steam engines well into the 1950s.
Beeching’s remit in 1961 was to lead the railways back into profitability by the end of the decade. With Britain’s economic fortunes on the wane by the early 1960s, the time was ripe for a thorough rationalisation of Britain’s most prominent nationalised industry.
Economic vs social cost
The clinical and ruthless assessment of what was required to put the railways back on a stable footing won many admirers in the then Conservative government. It embraced Beeching’s proposals and was quick to implement his report’s recommendations. Few alternatives were offered. When Labour returned to power in 1964, it did little to reverse the cuts – although Beeching was removed as chairman of British Railways in 1965. The reality was that, for both main parties at the time, the vision of modernisation was framed around a transport system dominated by roads.
As prime minister at the time, Harold Macmillan confided in his diary in 1963: “In ten years we have gone from 2m to 6m motor cars. In another ten years we may go to 12 and eventually 18m cars.” The opening of the new M1 motorway in 1959 – eventually connecting the city of Leeds in the north of England to London in the south – had already provided an iconic symbol of a new vision that was to be pursued vigorously in the decades that followed.
The main opposition to Beeching’s proposals focused on the social impact of the proposed cuts. Opponents argued that Beeching had paid scant attention to the social importance of the railways. Many argued that the closure of many lines in rural Britain would isolate communities.
In regions of Britain such as rural north Wales – where tourism was widely viewed as an alternative to the fast-declining extractive industries and where depopulation was a significant social, cultural and economic problem – opposition to Beeching was voiced across the political spectrum. As a local Labour MP argued at the time, the railways were “a form of social service, which is as essential as the supply of electricity, gas, water and the NHS”.
Beeching did recognise these concerns, but it was outside his remit to find a solution to the social issue. Although local campaigns slowed down the rate of closures, the vast majority of the report’s recommendations were enacted. Across Wales, of the 1,500 miles of line in operation in 1951, only 670 miles remained in 1965. By 1975, the figure had fallen to less than 500.
The Beeching legacy
Since the closures, more than 50 lines have already reopened. In parts of the UK, many former lines were resurrected as part of the new “Heritage Rail” sector, while many successful community rail partnerships have also flourished. Elsewhere, disused lines have become popular cycle and walking tracks.
The former line from Bangor to Caernarfon along the North Wales coastline encapsulates the Beeching legacy. A significant proportion of the former line is now a popular cycle track, the site of the former station in Caernarfon now hosts a supermarket, while the line south of Caernarfon has been developed as part of the hugely successful Welsh Highland Railway. To reopen that line would, no doubt, stimulate a vigorous debate among local cyclists, environmental campaigners, the local industry and heritage conservationists.
Contrary to the apocalyptic narrative that accompanies any discussion of his infamous “axe”, there was life for the railways after Beeching. But the reality was that the majority of Britons did view the roads as a more convenient, economical and practical mode of travel from the 1950s onwards.
Today, with those roads now overcrowded and motoring costs escalating, the railways are once again providing a viable alternative. Rail passenger numbers have risen dramatically over the past two decades. For that reason alone, there is logic in revisiting Beeching.
Regular rail users may well have a different view. Overcrowded trains, idiosyncratic timetabling and frequent delays are just some of the problems that need to be addressed. Moreover, rail fares have risen rapidly in real terms since the recession more than a decade ago. The rise of 3.4% in prices in 2018 will compound that problem. And the problems that faced Beeching back in the early 1960s are still there.
Whether nationalised or in private hands, Britain’s railways are still in desperate need of investment, modernisation and coordination. Meanwhile, Beeching’s elusive search for a more efficient railway goes on.
Andrew Edwards previously received UK funding body grants.
Exercise alone does not lead to weight loss in women – in the medium term
Author: Hans-Peter Kubis, Director of the Health Exercise and Rehabilitation Group, Bangor University
Knowing whether or not exercise causes people to lose weight is tricky. When people take up exercise, they often restrict their diet – consciously or unconsciously – and this can mask the effects of the exercise. In our latest study, we avoided this bias and discovered that exercise, on its own, does not lead to weight loss in women.
For our research, we concealed the true objective of our investigation (investigating weight loss response to exercise) from the participants, and used bogus objectives instead (cognitive performance and cardiovascular fitness improvement). We also excluded women who intended to lose weight from the study because there was a higher risk that they would restrict their diet.
In two training studies, over four and eight weeks, women aged 18 to 32 attended circuit-training classes three times a week. We recorded the women’s body weight, muscle and fat mass at the start and at the end of the study. We also took blood samples so that we could measure appetite hormones (insulin, leptin, amylin, ghrelin and PYY), as they can alter appetite and food intake.
Results showed that neither lean nor obese women lost weight – neither the 34 finishers of the four-week training programme nor the 36 finishers of the eight-week exercise programme. Lean women did, however, gain muscle mass.
When we looked at individual weight responses to the exercise programmes, we noticed that the levels of appetite hormones leptin and amylin helped explain why some people gained or lost weight by the end of the study. Changes in appetite hormones as a result of exercise make it much harder for some people to lose weight than for others. In other words, the energy they burned during the exercise class was replaced in their diet. Their body was effectively defending against weight loss, regardless of whether they were lean or obese.
This somewhat frustrating outcome does not mean that exercise is not good for people. There is no doubt that exercise has health benefits on many levels, whether it is for prevention of lifestyle diseases, such as type 2 diabetes or cardiovascular disease, or mental health issues, like depression. But we need to consider that our ancestors evolved to survive over millennia in environments where food was scarce, so our bodies are better adapted to defending against weight loss than defending against weight gain. Our bodies adjust and try to preserve our body weight if we take up exercise, but they don’t adjust to help us lose weight if we gain a few pounds.
However, exercise can help to control weight in indirect ways. It may help us develop more self-control and not give in to food temptations easily. We can also transfer some skills learned from regularly taking part in exercise, such as time management and overcoming periods of low motivation, to other behaviours, such as eating.
People need to work on their diet if they want to achieve weight loss. Combining a healthy diet – such as avoiding processed and sugary foods, eating lots of veg and other high-fibre foods, avoiding snacking and having regular meals – with exercise will certainly produce results.
Hans-Peter Kubis does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond his academic appointment.
What causes mass whale strandings?
Author: Peter Evans, Honorary Senior Lecturer, Bangor University
Recently, ten sperm whales stranded in the waters off Ujong Batee, in Aceh Besar regency, Indonesia. Six of the ten whales were saved, but the other four died.
Earlier this year, meanwhile, 600 pilot whales stranded in New Zealand. Around 400 of them died before volunteers could return them to the sea.
Mass strandings like these have occurred for as long as humans have kept records, and they still happen regularly today.
At the end of 2015, for example, 337 sei whales died in a fjord in Chile. In February 2016, 29 sperm whales were found stranded on beaches in Germany, the Netherlands, eastern England and northern France – a record for the species in the North Sea.
Why do these animals, so supremely adapted to life in the water, move into a hostile land environment – with fatal results?
Mass strandings occur in almost all species of oceanic whale. Long-finned and short-finned pilot whales tend to be the most frequent victims. Other species include false killer whales, melon-headed whales, Cuvier’s beaked whales and sperm whales.
These species normally live at depths of 1,000 metres or more and are social creatures, forming groups that can number in the hundreds.
The species that strand most often are deep-sea dwellers, and they strand at the same locations time and again, suggesting that nature plays a greater role than humans. Whales often strand in very shallow areas where the sea floor slopes gently and is often sandy.
In such conditions, it is little wonder that animals accustomed to swimming in deep water can get into difficulty – and even strand again after being refloated.
The echolocation they use to navigate also works poorly in such environments. So it is quite possible that most strandings are the result of navigational error, for example when whales chase prey into unfamiliar and dangerous territory.
In the southern North Sea, mass strandings have been recorded since at least 1577.
Getting lost or misjudging the water depth is not the only cause of mass strandings. One or more of the whales may be sick and, as they weaken, seek shallower water where it is easier to reach the surface to breathe.
But when their bodies rest on a hard surface for long periods, their chest cavities become compressed and their internal organs are damaged.
Sometimes human activity can cause whales to strand, particularly military activity involving the use of sonar. This link was first revealed in 1996, after a NATO military exercise off the coast of Greece coincided with the stranding of 12 Cuvier’s beaked whales. Unfortunately, the animals could not be examined by vets in time.
In May 2000, a stranding in the Bahamas coincided with naval activity using similar sonar. Haemorrhaging was found in several of the whales examined, particularly in the inner ear – a sign of acoustic trauma.
After a similar event in the Canary Islands in September 2002, vets also identified symptoms of decompression sickness, meaning the whales did not necessarily die from stranding, but may have been injured or already dead at sea.
Many researchers believe that sonar may trigger behaviour in whales that disrupts how they manage the gases in their bodies, impairing their ability to dive and surface safely.
Underwater noise is a major problem. It arises from human activities that introduce sound (of varying intensities and frequencies) into the sea, from a range of technologies and even explosions.
Undersea earthquakes are another source of underwater noise, and could also cause physical damage or behavioural changes that lead to strandings, although no one has yet established a statistical link between the two.
The strandings in Aceh and New Zealand, in which significant numbers of whales were rescued, also raise the question of whether some healthy animals simply follow sick ones into dangerous areas.
Years ago, I helped with two short-beaked common dolphins that had stranded alive in the Teifi Estuary in Wales. One died quickly, and a post-mortem showed it had a severe parasitic lung infection that probably made breathing difficult. The other stayed close to its dying companion, whistling constantly and appearing highly distressed.
We managed to refloat this dolphin and eventually it swam away. For me, the episode demonstrated the strength of the social bonds between these animals. When we see large numbers of whales or dolphins apparently committing mass suicide, the likelihood is that they are responding vocally to one another, reflecting their strong social ties.
Research shows that whales in mass strandings are not even necessarily related to one another. So mass strandings may well reflect just how strong the social bonds between whales are.
Peter Evans does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond his academic appointment.
Blue Planet II: can we really halt the coral reef catastrophe?
Author: John Turner, Professor & Dean of Postgraduate Research, Bangor University
The third episode of the BBC’s Blue Planet II spectacularly described a series of fascinating interactions between species on some of the most pristine reefs in the world. These reefs, analogous to bustling cities, are powered by sunlight, and provide space and services for a wealth of marine life.
Competition is rife, as exemplified by the ferocious jaws of the metre-long bobbit worm, ready to pounce on unsuspecting fish by night from its lair in the sand, or the pulsating show of colours of the cuttlefish as it stalks a mesmerised crab. Other reef species team up in unlikely partnerships to improve the outcome of a hunt for fish amongst the coral, as shown by the pointing display of an octopus working in cahoots with a grouper.
Inevitably, the episode described how these cities are under threat, as warming oceans destroy the symbiotic relationship between the corals and the algae living within them, causing the corals to lose their algae, and become bleached.
Prolonged bleaching leads to the death of the colonies that build the reef, leaving behind lifeless ruins. Since 2014, an unprecedented series of consecutive warming events driven by climate change has affected many reefs, including Australia’s Great Barrier Reef, and annual bleaching is predicted to become more frequent, leaving no time for the reefs to recover between these extreme events. In the final scenes, narrator David Attenborough offers a glimmer of hope as he describes corals and other reef species spawning en masse to produce new generations of life to build new reefs.
What’s really going on?
The producers understandably visit the best and most pristine reefs in the world to capture these wonderful sequences. But we must remember that the majority of coral reefs, especially those close to large human populations, are already degraded by localised impacts: overfishing and destructive fishing practices, nutrient run-off from urban and agricultural land, and coastal development.
The most severely threatened reefs are in South-East Asia and the Atlantic, but even the Indian Ocean, Middle East and wider Pacific are now suffering from direct human impact. Estimates indicate that 75% of the world’s reefs are already threatened by local threats combined with rising sea surface temperatures and mortality from coral bleaching.
Even the remote reefs of the central Indian Ocean and north-west Pacific are now weakened, and vulnerable to disease. On current trajectories, bleaching episodes are predicted to become annual events affecting most reefs by mid-century, and by the end of the century atmospheric carbon dioxide levels will have changed ocean chemistry, causing acidification that weakens the calcium carbonate skeletons of corals and slows their growth. In this weakened state, coral reefs will be further compromised by more frequent tropical storms and rising sea levels.
Resilient reefs may have some ability to resist climate change and adapt to the changing conditions, or to recover from these disturbances. Corals in the Gulf experience high seasonal temperatures of up to 35°C without bleaching, having adapted to these conditions over evolutionary time, although sustained high temperatures, such as those experienced in 2010, can still cause them to bleach.
Some corals grow in near shore murky waters, where they may receive protection from high solar irradiation; even cloudy conditions can protect corals during warming events. Strong water currents and upwelling may also mitigate bleaching on seaward reefs.
Calm conditions, on the other hand, appear to increase susceptibility to bleaching. The remote and protected reefs of the Chagos Archipelago in the central Indian Ocean suffered 90% mortality in shallow waters during the severe warming event of 1998. Compared with many other reefs, they recovered relatively rapidly, over 12 years, with fast growth of branching and tabular corals. But consecutive warming events in 2015, 2016 and 2017 have devastated the shallow (less than 15 metres deep) areas of these uninhabited and isolated reefs once more, and recovery may prove more challenging this time.
What can be done?
Coral recruits can already be observed, probably originating from slightly deeper water, but they are settling on dead, collapsing colonies and will be washed off the reefs in storms. Successful recolonisation may depend on the availability of stable substrates and on being able to compete with the algae that are replacing the live coral.
Although global action is required to reduce greenhouse gas emissions (and this will have little effect until mid-century), management intervention at a local level can build resilience on reefs by reducing direct human impact. In a study in Belize, localised fishing was controlled in a Marine Reserve in which grazing of algae by parrotfish was maintained, halving the rate of reef decline.
By maintaining the organisation and complexity of reefs, we can ensure that these reef cities thrive, even in the most threatened regions.
At the end of the Blue Planet II reef episode, thousands of groupers gathered at the drop off on a pristine and remote reef in French Polynesia, risking gatherings of hundreds of sharks to swim out into the tidal stream to spawn.
Off the Cayman Islands, in the central Caribbean, similar groups of spawning Nassau grouper were once heavily exploited by local fishers but are now legally protected. Acoustic techniques have been used to show that they are now once more gathering in their thousands to spawn.
As Blue Planet II made clear, our planet’s reefs are both beautiful and in peril. We do, however, still have time to save them – but only if we act now.
John Turner receives funding from DEFRA Darwin Initiative and Bertarelli Foundation, and is a Trustee of the Chagos Conservation Trust
Why Holocaust jokes can only be told by a Jewish comedian
Author: Nathan Abrams, Professor of Film Studies, Bangor University
When Larry David joked about chatting up women in Nazi concentration camps recently he caused a minor storm of outrage. As part of a monologue on Saturday Night Live, David mused:
I’ve always been obsessed with women – and I’ve always wondered: If I’d grown up in Poland when Hitler came to power and was sent to a concentration camp, would I still be checking out women in the camp? I think I would.
“Of course,” he continued, “the problem is there are no good opening lines in a concentration camp. ‘How’s it going? They treating you OK? You know, if we ever get out of here, I’d love to take you out for some latkes. You like latkes?’”
David has joked about the Holocaust before. In Seinfeld, the comedy show he co-created, an entire episode is devoted to Schindler’s List. In his own show, Curb Your Enthusiasm, he plays Wagner (a favourite composer of Adolf Hitler) to a co-religionist who accuses him of being a self-hater. He invites a cast member of the reality show Survivor to meet a Holocaust survivor and they proceed to argue over who had it worse. Many suggested David’s jokes weren’t in good taste, that he had crossed a line this time. But had he?
David is building upon a tradition of Holocaust humour which is nothing new. In the early 1960s, following the kidnap, trial, and execution of Adolf Eichmann, legendary Jewish comic, Lenny Bruce, had a joke in which he’d say in a redneck used car salesman’s voice: “Here’s a Volkswagen pickup truck that was just used slightly during the war carrying the people back and forth to the furnaces.” Or he held up a newspaper with the headline: “Six Million Jews Found Alive in Argentina.”
In 1964, Stanley Kubrick’s movie Dr Strangelove or: How I Learned to Stop Worrying and Love the Bomb parodied contemporary fears of nuclear destruction by conflating them with the Holocaust through its title character, a pantomime Nazi played by Peter Sellers. Three years later, in 1967, Mad Magazine’s “Mein Kamp Humor Dept” produced the parody Hokum’s Heroes: “And here it is … the brand new weekly TV situation comedy featuring that gay, wild, zany, irrepressible bunch of World War II concentration camp prisoners … those happy inmates of ‘Buchenwald’ known as … ‘Hokum’s Heroes’.”
Then, in that same year, Mel Brooks directed The Producers, a film which featured a bad-taste musical named Springtime for Hitler, complete with Busby Berkeley-style routines of SS troops dancing in swastika formation.
Knowledge beats outrage
Such Holocaust humour has grown exponentially in recent decades. This is particularly evident in mainstream American cinema where the Holocaust often appears as an incidental, gratuitous, superfluous throwaway line, or in-joke. Take Woody Allen – who has had a career-long fascination with the Holocaust. When asked in Deconstructing Harry (1997): “Do you care even about the Holocaust or do you think it never happened?” Allen has his protagonist Harry Block respond: “Not only do I know that we lost six million, but the scary thing is records are made to be broken.”
As Holocaust scholar Lawrence Baron has pointed out in his book Projecting the Holocaust into the Present, images and themes from the Holocaust permeate popular culture like particles of dust filling the air. The Holocaust has become the benchmark and paradigm for evil. It is constantly invoked – and the more the term is used, the less powerful it becomes. This saturation has its consequences: the Holocaust becomes ripe for humour. It is no longer taboo.
But it is also generational. For those born towards the end or soon after World War II, the Holocaust was a narrative they heard secondhand. For those born later, it is an historical event. They don’t know anyone who was murdered by the Nazis.
At the same time, Holocaust education has worked. In mainstream politics, it’s considered unacceptable to publicly deny the Holocaust – and is illegal to do so in many countries. For their part, younger Jews have learned that a low profile is useless, given that anti-Semites aren’t so discerning in their discrimination. At the same time, anti-Jewish prejudice has been on the decline in many countries – particularly towards the end of the 20th century and beginning of the 21st.
A generation of Jewish producers, directors, actors, actresses and screenwriters emerged that was less anxious, less afraid of stoking an antisemitic backlash. This is evidenced by the lack of outrage to so many of these jokes over the years, many of which have passed by barely noticed.
Larry David’s shtick on SNL is merely the latest in a 60-year trend. He is locating himself in a venerable tradition of gallows humour at which Jews have historically excelled. We have joked about pogroms before so why not the worst of them all? It does not mean that we are forgetting the Holocaust – on the contrary, the jokes are a form of remembrance. Having said that, I think that younger Jews are more likely to laugh than older Jewish people or non-Jews – we are more familiar with this humour and hence it’s less shocking.
But the key thing is: who is doing the telling? All the examples noted above are by Jews and that’s the principal point – if someone non-Jewish were to engage in this type of humour, it would have an entirely different connotation. It would not be appropriate.
Nathan Abrams receives funding from The British Academy.
Want to become self-compassionate? Run a marathon
Author: Rhi Willmot, PhD Researcher in Behavioural and Positive Psychology, Bangor University
Unsurprisingly, running a marathon is tough. It takes months of training before runners even make it to the starting line and this preparation can, at times, feel like punishment. The marathon runner in training can often be found limping around with blisters, sore muscles and blackened or lost toenails. Not, perhaps, an image we might naturally associate with the idea of “self-compassion”.
A relatively new concept, self-compassion has been hailed as a more robust alternative to self-esteem. While compassion refers to the demonstration of sympathy and concern for others in times of suffering, self-compassion entails showing this same understanding to ourselves.
One of the first skills needed for self-compassion is self-kindness – extending compassion to yourself even when you feel you have failed, which can be challenging to say the least. When faced with failure, we often implicitly assume that self-criticism is necessary to motivate strong future performance. But in reality this strategy often falls flat. Giving ourselves a harsh talking-to doesn’t just make us feel bad, it also interferes with our ability to calmly examine a situation and identify what to change in order to improve – an essential component of psychological resilience.
But what does all of this have to do with running a marathon?
Training for a marathon can revolutionise self-perception, making kind self-talk – where you speak directly to yourself, either mentally or out loud – easier for even the most reluctant of individuals. This shift is prompted not by changes in physique, but of mind. After dedicating yourself to a marathon, your body receives a perceptual upgrade, transforming from a mere body into an essential tool. You begin to see the true value of your own body and the strength that it has.
Research suggests that working towards purposeful goals enhances our sense of self-worth, so under the conditions of marathon training, self-care – looking after ourselves physically – is not only viewed as essential for performance, but as something we deserve. Commit to a goal, invest time, energy and emotion in that goal, and anything that threatens the performance of the body – literally the vehicle needed to carry you to your end target – is unacceptable.
This relates to the second element of self-compassion: a balanced perspective. Described as caring for ourselves in an enduring way, a balanced perspective ensures happiness and health in the long-term. This can also be tricky, given we are typically geared toward instant gratification and struggle to connect the immediate rewards of pleasurable items such as food, alcohol and cigarettes, with their long-term consequences. In fact, neurological research suggests that we literally see our future selves as different people.
However, training for a marathon can help perceptual balance, because it directs our attention away from our immediate concerns and towards the future. Research suggests that goals cognitively activate stimuli which help us achieve them. This means the motivation to complete a marathon makes objects and activities which are relevant to our long-term health implicitly attractive and easier to engage with.
More specifically, setting a goal which requires us to plan and monitor progress over weeks or months can help to bridge the gap between current and future happiness. Sticking to a schedule and receiving feedback, such as identifying weekly mileage goals and achieving new distance targets, can make us more willing to make choices that will benefit us later on. This might be resisting the instant pleasure of one too many drinks on a Friday night, or getting enough sleep so that we feel at our best when training.
The third and final component of self-compassion is common humanity. This refers to the understanding that suffering is a natural and shared part of being human. Based on the idea that feeling isolated in our pain exacerbates perceptions of inadequacy and insecurity, common humanity is an important part of avoiding negative cycles of self-pity.
Running is sometimes considered an isolated and fiercely competitive sport, but this isn’t necessarily true. Runners step in to help one another in times of difficulty – just look at Matthew Rees who helped fellow runner David Wyeth complete the last 300m of the 2017 London Marathon, to the detriment of his own timing. Running provides a sense of human connection, because it shows that struggle is normal. Being one in a field of thousands, communally suffering in the pursuit of a common goal, is paradoxically satisfying. Perhaps because it allows us to appreciate just how small we are in the scheme of things.
So, while marathon training may be painful, sometimes we have to experience a degree of suffering in order to truly value ourselves, to appreciate others, and to learn what it means to be self-compassionate.
Rhi Willmot does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Investing in warmer housing could save the NHS billions
Authors: Dr Nathan Bray, Research Officer in Health Economics, Bangor University; Eira Winrow, PhD Research Candidate and Research Project Support Officer, Bangor University; Rhiannon Tudor Edwards, Professor of Health Economics, Bangor University
British weather isn’t much to write home about. The temperate maritime climate makes for summers which are relatively warm and winters which are relatively cold. But despite rarely experiencing extremely cold weather, the UK has a problem with significantly more people dying during the winter compared to the rest of the year. In fact, 2.6m excess winter deaths have occurred since records began in 1950 – that’s equivalent to the entire population of Greater Manchester.
Although the government has been collecting data on excess winter deaths – that is, the difference between the number of deaths that occur from December to March compared to the rest of the year – for almost 70 years, the annual statistics are still shocking. In the winter of 2014/15, there were a staggering 43,900 excess deaths, the highest recorded figure since 1999/2000. In the last 10 years, there has only been one winter with fewer than 20,000 excess deaths: 2013/14. Although excess winter deaths have been steadily declining since records began, in the winter of 2015/16 there were still 24,300.
According to official statistics, respiratory disease is the underlying cause for over a third of excess winter deaths, predominantly due to pneumonia and influenza. About three-quarters of these excess respiratory deaths occur in people aged 75 or over. Unsurprisingly, cold homes (particularly those below 16°C) cause a substantially increased risk of respiratory disease and older people are significantly more likely to have difficulty heating their homes.
Health and homes
The UK is currently in the midst of a housing crisis – and not just due to a lack of homes. According to a 2017 government report, a fifth of all homes in England fail to meet the Decent Homes Standard – which is aimed at bringing all council and housing association homes up to a minimum level. Despite the explicit guidelines, an astonishing 16% of private rented homes and 12% of housing association homes still have no form of central heating.
Even when people have adequate housing, the cost of energy and fuel can be a major issue. Government schemes, such as the affordable warmth grant, have been implemented to help low income households increase indoor warmth and energy efficiency. However, approximately 2.5m households in England (about one in nine) are still in fuel poverty – struggling to keep their homes adequately warm due to the cost of energy and fuel – and this figure is rising.
Poor housing costs the NHS a whopping £1.4 billion every year. Reports indicate that the health impact of poor housing is almost on a par with that of smoking and alcohol. Clearly, significant public health gains could be made through high quality, cost-effective home improvements, particularly for social housing. Take insulation, for example: evidence shows that properly fitted and safe insulation can increase indoor warmth, reduce damp, and improve respiratory health, which in turn reduces work and school absenteeism, and use of health services.
Warmth on prescription
In our recent research, we examined whether warmer social housing could improve population health and reduce use of NHS services in the northeast of England. To do this, we analysed the costs and outcomes associated with retrofitting social housing with new combi-boilers and double-glazed windows.
After the housing improvements had been installed, NHS service use costs reduced by 16% per household – equating to an estimated NHS cost reduction of over £20,000 in just six months for the full cohort of 228 households. This reduction was offset by the initial expense of the housing improvements (around £3,725 per household), but if these results could be replicated and sustained, the NHS could eventually save millions of pounds over the lifetime of the new boilers and windows.
The benefits were not confined to NHS savings. We also found that the overall health status and financial satisfaction of main tenants significantly improved. Furthermore, over a third of households were no longer exhibiting signs of fuel poverty – households were subsequently able to heat all rooms in the home, where previously most had left one room unheated due to energy costs.
Perhaps it is time to think beyond medicines and surgery when we consider the remit of the NHS for improving health, and start looking into more projects like this. NHS-provided “boilers on prescription” have already been trialled in Sunderland with positive results. This sort of cross-government thinking promotes a nuanced approach to health and social care.
We don’t need to assume that the NHS should foot the bill entirely for ill health related to housing. For instance, the Treasury could establish a cross-government approach by investing in housing to simultaneously save the NHS money. A £10 billion investment into better housing could pay for itself in just seven years through NHS cost savings. With a growing need to prevent ill health and avoidable death, maybe it’s time for the government to think creatively right across the public sector, and adopt a new slogan: improving health by any means necessary.
Nathan Bray receives funding from Health and Care Research Wales and the EU Horizon 2020 Framework Programme for Research and Innovation
Eira Winrow receives PhD funding from Health and Care Research Wales.
Rhiannon Tudor Edwards receives funding from the National Institute for Health Research, Health Technology Assessment (HTA), Health and Care Research Wales and the EU Horizon 2020 Framework Programme for Research and Innovation.
Why we taught psychology students how to run a marathon
Author: Rhi Willmot, PhD Researcher in Behavioural and Positive Psychology, Bangor University
Mike Fanelli, champion marathon runner and coach, tells his athletes to divide their race into thirds. “Run the first part with your head,” he says, “the middle part with your personality, and the last part with your heart.” Sage advice – particularly if you are a third year psychology student at Bangor University, preparing for one of the final milestones in your undergraduate experience: running the Liverpool Marathon.
For many students, the concluding semester of third year is a time of uncertainty. Not only are they tackling the demands of a dissertation and battling exams, but they are also teetering on the precipice of an unknown future, away from the comfort of university.
As spring draws to a close, the academic atmosphere provides a heady cocktail of sleep-deprivation, achievement and stress. Yet 22 of our students managed to do all this and train for a marathon as part of their “Born To Run” class. None of them had completed such a distance before – in fact, most had run no further than 5km prior to their module induction.
Rewind several months, and I am listening to my PhD supervisor, John Parkinson, and fellow academic Fran Garrad-Cole discuss their plans for “the running module”, which would coincide with more traditional lectures on positive and motivational psychology. I was greatly enthused by the idea given the psychological benefits of physical activity. Exercise is related to improvements in mood, self-esteem and social integration, as well as reducing symptoms of depression.
Particularly relevant to those under pressure at work or school is the association between physical activity and the ability to cope with stress, as well as enhanced cognitive functioning. But despite these benefits, designing a class around running a marathon was no easy task.
Race to success
As neither module organiser nor student, it was easy for me to relish the gamble of this venture. My participation – assisting the classes and helping the students to train for the marathon – did not place my professional reputation on the line, nor did it have the potential to significantly impact the outcome of my degree. The danger with this kind of practical application is that when things fail, the failure is highly visible.
It would be easy to reduce “success” into a binary distinction of running or not running on race day. Yet this perspective would very much miss the point. The aim of the module wasn’t to complete a marathon, but to create graduates who set huge challenges, and nail them, whenever that may be.
Not every student ran the marathon, but for the 13 who did, the three who ran the half, and those who didn’t run at all, the lessons on perseverance and resilience demonstrate that failure can be an essential component of success.
The message from the Born to Run module was essentially one of courage. T. S. Eliot once said, “Only those who risk going too far can possibly find out how far one can go.” This statement rings true on multiple levels. It was visible in the students’ bravery in publicly committing to such a challenging goal, in John and Fran’s professional risk, and in both the mental and physical ardour that training for a marathon takes.
What I saw was the incredible impact that setting high expectations, balanced with warm support and strategic expertise, can have on student engagement. Most importantly, I learnt how bringing your own passion into the classroom can transform the learning experience, transcending students’ academic and personal lives.
So to return to Mike Fanelli, the final stages of the module, as well as the marathon, are about the heart. The technical strategies the students learnt saw them through the first few miles, and the traits they were encouraged to develop enabled them to cover the next third. But in the final part, when delirium sets in, it’s the emotional bond created by such a challenging yet supportive experience that gets you through.
The pleasure I felt at eventually crossing the line was multiplied immeasurably by sharing this experience with the others I have seen develop over the semester. I will be forever grateful to one student, Patrick, for pulling me through that last mile, and forever in awe of Fran, John and the first ever Born to Runners.
Rhi Willmot has nothing to disclose.