On our News pages
Our Research News pages contain an abundance of research-related articles, covering recent research output and topical issues.
Our researchers publish across a wide range of subjects and topics and across a range of news platforms. The articles below are a few of those published on TheConversation.com.
Can a brain injury change who you are?
Author: Leanne Rowlands, PhD researcher in Neuropsychology, Bangor University
Who we are, and what makes us “us”, has been the topic of much debate throughout history. At the individual level, the ingredients for the unique essence of a person consist mostly of personality traits – things like kindness, warmth, hostility and selfishness. Deeper than this, however, is how we react to the world around us, how we respond socially, our moral reasoning, and our ability to manage emotions and behaviours.
Philosophers, including Plato and Descartes, attributed these experiences to non-physical entities, quite separate from the brain. “Souls”, they argued, are where human experiences take place. According to this belief, souls house our personalities and enable moral reasoning to occur. This idea still enjoys substantial support today. Many are comforted by the thought that the soul does not need the brain, and that mental life can continue after death.
If who we are is attributed to a non-physical substance independent of the brain, then physical damage to this organ should not change a person. But there is an overwhelming amount of neuropsychological evidence to suggest that this is, in fact, not only possible, but relatively common.
The perfect place to start explaining this is the curious case of Phineas Gage.
In 1848, 25-year-old Gage was working as a construction foreman for a railroad company. During the works, explosives were required to blast away rock. This intricate procedure involved explosive powder and a tamping iron rod. In a moment of distraction, Gage detonated the powder and the charge went off, sending the rod through his left cheek. It pierced his skull, and travelled through the front of his brain, exiting the top of his head at high speed. Modern-day methods have since revealed that the likely site of damage was to parts of his prefrontal cortex.
Gage was thrown to the floor, stunned, but conscious. His body eventually recovered well, but Gage’s behavioural changes were extraordinary. Previously a well-mannered, respectable, smart business man, Gage reportedly became irresponsible, rude and aggressive. He was careless and unable to make good decisions. Women were advised not to stay long in his company, and his friends barely recognised him.
A similar case was that of photographer and forerunner of motion pictures Eadweard Muybridge. In 1860, Muybridge was involved in a stagecoach accident and sustained a brain injury to the orbitofrontal cortex (part of the prefrontal cortex). He had no recollection of the crash, and developed traits that were quite unlike his former self. He became aggressive, emotionally unstable, impulsive and possessive. In 1874, upon discovering his wife’s infidelity, he shot and killed the man involved. His attorney pled insanity, due to the extent of the personality changes following the accident. Sworn testimonies emphasised that “he seemed like a different man”.
Perhaps an even more controversial example is that of a 40-year-old school teacher who, in the year 2000, developed a strong interest in pornography, particularly child pornography. The patient went to great lengths to conceal this interest, which he acknowledged was unacceptable. But unable to refrain from his urges, he continued to act on his sexual impulses. When he began making sexual advances towards his young stepdaughter, he was legally removed from the home and diagnosed with paedophilia. Later, it was discovered that he had a brain tumour displacing part of his orbitofrontal cortex, disrupting its function. The symptoms resolved with the removal of the tumour.
All these cases have one thing in common: damage to areas of the prefrontal cortex, in particular the orbitofrontal cortex. Although they may be extreme examples, the idea that damage to these parts of the brain results in severe personality changes is now well-established. The prefrontal cortex has a role in managing behaviours, regulating emotions and responding appropriately. So it makes sense that disinhibited and inappropriate behaviour, psychopathy, criminal behaviour, and impulsivity have all been linked to damage of this area.
However, changes after injury can be more subtle than those previously described. Consider the case of Mr. L, who suffered a severe traumatic brain injury after falling off a roof while supervising a building construction. His later aggressive behaviour and delusional jealousy about his wife’s apparent infidelity caused a breakdown in their relationship. To her, he was not the same man anymore.
Difficulties with emotion management like this are not only distressing, but are predictive of lower psychological adjustment, negative social changes and greater caregiver distress. Many brain injury survivors also suffer with depression, anxiety and social isolation, while struggling to adjust to post-injury life.
But with a growing appreciation of the relevance of emotional adjustment in rehabilitation, treatments have been developed to help manage these changes. In our lab, we have developed the BISEP (Brain Injury Solutions and Emotions Programme), which is a cost-effective, education-based, group therapy. This addresses several common complaints of brain injury survivors and has a strong emphasis on emotion regulation. It teaches attendees strategies that can be used adaptively and independently, to help manage their emotions and associated behaviours. Although it is early days, we have obtained some positive preliminary results.
From a neuropsychological perspective, it’s clear that who we are is dependent on the brain, and not the soul. Damage to the prefrontal cortex can change who we are, and though people have become unrecognisable from it in the past, new strategies will make a big difference to their lives. It may be too late for Gage, Muybridge and others, but brain injury survivors of the future will have the help they need to go back to living their lives as they did before.
Leanne Rowlands does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Supercontinent formation may be linked to a cycle of supertides
Author: Mattias Green, Reader in Physical Oceanography, Bangor University
Earth’s crust is made up of fractured slabs of rock, like the broken shell of an egg. These plates move around at speeds of about 5cm per year – and eventually this movement brings all the continents together, forming what is known as a supercontinent. The last supercontinent on Earth was Pangaea, which existed between 300m and 180m years ago.
This collection and dispersion of the continents is known as a supercontinent cycle, and the world now is 180m years into the current cycle. It is predicted that the next supercontinent will form in about 250m years, when the Atlantic and Pacific oceans both close and a new ocean forms where the large Asian plate splits. Because the plates move around, ocean basins change their shape and size. For example, the Atlantic is currently expanding at about the rate your fingernails grow (a couple of centimetres per year), whereas the Pacific is slowly closing.
These changes in the ocean basins can have a very large impact on the tides over millions of years. This is because the tide moves around the oceans like a very long wave, with more than 1,000km between two peaks. The way this wave moves is largely controlled by the shape of the ocean basin and its depth, and if the basin is the right size – if its length is half that of the wave, making it “resonant” – the tides can become very large.
Resonance can happen in any system that swings or oscillates if you force it at its natural period. For example, if you give a child on a swing a small push at the right time, they will swing higher and higher, because you are forcing them at the natural period of the swing. The period of the tide is set by the motions of the Earth, moon and sun – and the natural period of an ocean basin is set by its geometry. For example, today, the north Atlantic is very near resonance because these two periods are almost the same. This is why the tides in the Atlantic are much larger than those in the Pacific or Indian Oceans.
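This resonance condition can be made concrete with Merian's formula, which relates a closed basin's fundamental natural period to its length and depth. The sketch below is illustrative only: the basin dimensions are round numbers chosen for the example, not values from the study.

```python
import math

def natural_period_hours(length_m, depth_m, g=9.81):
    """Fundamental seiche period of a closed basin (Merian's formula).

    A shallow-water wave travels at c = sqrt(g * depth). Resonance occurs
    when the basin length is half the tidal wavelength, which gives a
    natural period of T = 2 * length / c.
    """
    c = math.sqrt(g * depth_m)       # shallow-water wave speed (m/s)
    return 2 * length_m / c / 3600   # period in hours

# Illustrative numbers: a basin roughly 4,500 km long and 4 km deep
T = natural_period_hours(4_500e3, 4_000)
print(f"Natural period: {T:.1f} h (semidiurnal tide: ~12.4 h)")
```

With these round numbers the basin's natural period lands close to the 12.4-hour semidiurnal tide, which is the kind of near-match that makes the North Atlantic tides so large today.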
But this has not always been the case. From experiments with computer models which can simulate the tides with great accuracy, we know that the tides were weak for long periods of the current supercontinent cycle, because the shape and size of the basins couldn’t support large tides. In fact, of the past 250m years, it is only the last 2m years or so that have seen large tides on Earth. Since we are approaching the halfway point of the supercontinent cycle, we asked ourselves a question: what will happen to the tides as the next supercontinent assembles in 250m years or so? Is it possible that there is a supertidal cycle linked to the supercontinent cycle?
Using the computer model, we have now found that there is indeed a supertidal cycle linked to the supercontinent cycle. In fact, there are two: we are currently at the start of one “tidal maximum”, a period of time when the tides are very large. They will then weaken significantly, before briefly becoming large again in around 150m years from now. After that, the tides will again drop down to less than half of the energy levels they have at present as the next supercontinent forms. This will happen because the basins go in and out of resonance as their shape changes. The tidal maxima are brief in geological terms and only last 20m years or so. For most of the time, the tides are less energetic than they are today and, over the 400-600m years between the formations of the two supercontinents, the tides are only large for 50m years in total.
Tides are a major energy source for the ocean: the energy pumped into the tide by the sun and the moon is lost, or dissipates, within the ocean. This energy helps stir the ocean – much like a spoon stirs a cup of coffee. In the same way as the spoon moves sugar and milk around in the cup, the tide can drive movements of nutrients, heat and salt between the deep ocean and the surface. Fluxes of heat and salt are key to the large-scale climate controlling ocean circulation and fluxes of nutrients help sustain biological production, especially in shallow seas.
Changes in tides on any timescale can have large effects on the whole Earth system. While the changes described here may not have an impact on us in the immediate future, they add to our understanding of how the tides interact with other processes – including plate tectonics, the climate system, nutrient recycling and, eventually, the ocean’s ability to evolve and host life.
Mattias Green receives funding from The Natural Environmental Research Council (grants NE/F014821/1 and NE/I030224/1).
New styles of strikes and protest are emerging in the UK
Author: Emma Sara Hughes, PhD Candidate in Employment Relations, Bangor University; Tony Dundon, Professor of HRM & Employment Relations, University of Manchester
The image of strikers picketing outside factory gates is usually seen as something from the archives. Official statistics show an almost perennial decline in formal strikes. In the month of January 2018 there were 9,000 recorded working days lost due to strikes – a tiny fraction of the 3m recorded in January 1979.
Yet there has been a noticeable increase in private sector working days lost from strike action. In January 2018, the figure stood at 231,000 working days lost. That is 146,000 more days than in January 2017 and 166,000 more than in January 2016.
And it’s not just those on the left who are striking. Workers are also agitated in modern and union-free enterprises. For example, Ryanair was forced to bargain with trade unions after pilots across Europe threatened industrial action, despite its flamboyant CEO, Michael O’Leary, once proclaiming that “hell would freeze over” before his company recognised a union. McDonald’s workers in Cambridge and London also went on strike over pay and zero-hours contracts late last year, with talk of more action to come.
The beginning of 2018 witnessed some high profile strikes in key sectors: at a number of railways over safety; at water company United Utilities over pay and working conditions; at IT giant Fujitsu over job losses; and thousands of lecturers across more than 60 universities have been striking over pensions.
What’s all the fuss about?
People are worried about their pay, working conditions, future earnings and security at a time when the world of work is changing.
University lecturers are angry not only at the reduced pension deal being offered by their employers’ group, Universities UK. Views are mixed, but many are also aggrieved at their own union, the University and College Union, for recommending an offer that some local activists and members view as falling short of their demands.
Evidently conflict has not been eradicated from modern workplaces. Employees in multiple sectors also protest in other ways such as absenteeism, minor acts of defiance, mischief or sabotage. The Centre of Economic and Business consultancy reports year-on-year increases in absenteeism since 2011. Short disputes and other types of protest are excluded from official strike statistics – hence, many go unnoticed.
Newer patterns of resistance include social media campaigns over precarious zero-hours contracts, “lunchtime protests” such as those at HM Revenue & Customs and Bentley cars, government lobbying by workers at engineering firm GKN over a takeover, or worker sit-ins as staged by hundreds of Hinkley Point power station workers over pay. Meanwhile students have occupied university premises in solidarity with striking lecturers.
The shadow of Brexit
Predicting cause and effect for social phenomena is difficult. Protests are often attributed to employment and economic cycles, combined with changing social values of younger people.
The emergent wave of dissent may indicate we are approaching what some economists call a long-wave “Kondratieff cycle” – named after the Russian economist Nikolai Kondratieff. Here economic cycles can stretch over longer periods – say ten, 20 or 40 years.
If the mid-1990s was an “upswing”, slumping with the 2008 financial crisis, growing dissent may signal another long-wave turning point, fuelled by fears of the UK’s fragile future. For instance, in July 2017 the UK’s fiscal watchdog warned that Britain’s public finances were worse than on the eve of the financial crash. Coupled with the Conservatives losing their majority in government, it may be that the real effect of Brexit is only now materialising and compounding the ill effects of austerity.
Another reason may be that real wages have plummeted, while unemployment is at its lowest since the peak of strike activity in the mid-1970s (now 4.3%), giving workers greater confidence in pressing their demands.
Changing social values
Another possible explanation is that people now expect more and want immediate change. This is exemplified in shock votes for Donald Trump in the US, Brexit or even Jeremy Corbyn’s popularity.
A new moral consciousness may even have replaced a former industrial working-class ideology. Younger and female labour market participation rates have burgeoned, but so too have the gender pay gap and inequality. Multiculturalism, social inclusion and global employment issues are all catalysts for pioneering human rights values in business.
People are not satisfied with this status quo and are calling for change. So, as well as the more traditional style of organised action, some workers are expressing this new potential moral consciousness with subtle active protest such as the lunchtime protests, worker sit-ins and social media campaigns.
Analysis also points to “conflict benefits”: for example striking lecturers report lower stress levels, renewed energy and an enjoyment of the solidarity that comes from protest. Research also shows that conflict can support creativity and open disagreement can incite productive outcomes.
Whether we are entering a Kondratieff upswing or witnessing a new active moral consciousness is unclear. Nevertheless, it may be that protest can produce positive outcomes not only for workers, but also help companies to better engage with their workforce.
The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.
AI like HAL 9000 can never exist because real emotions aren't programmable
Author: Guillaume Thierry, Professor of Cognitive Neuroscience, Bangor University
HAL 9000 is one of the best-known artificial intelligence characters of modern film. This superior form of sentient computer embarks on a mission to Jupiter, along with a human crew, in Stanley Kubrick’s iconic film 2001: A Space Odyssey, which is currently celebrating the 50th anniversary of its release.
HAL is capable of speech production and comprehension, facial recognition, lip reading – and playing chess. Its superior computational ability is boosted by uniquely human traits, too. It can interpret emotional behaviour, reason and appreciate art.
By giving HAL emotions, writer Arthur C. Clarke and filmmaker Stanley Kubrick made it one of the most human-like fictional technologies ever created. In one of the most beautiful scenes in sci-fi history, it says it is “afraid” when mission commander Dr David Bowman starts disconnecting its memory modules following a series of murderous events.
HAL is programmed to deliver optimal assistance to the crew of the spaceship Discovery. It has control over the entire vessel, and staggering intelligence to aid it in its task. Yet soon after we become acquainted with HAL, we cannot help feeling that it is worried – it even claims it is experiencing fear – and that it has an ability to empathise, however small. But while there is nothing to preclude the idea that such an emotional AI could see the light of day, if such depth of feelings were to be included in real world technology, they would have to be entirely fake.
A ‘perfect’ AI
When, during the film, Bowman starts to manually override HAL’s functions, it asks him to stop, and after we witness a fascinating obliteration of HAL’s “mental” faculties, the AI seemingly tries to comfort itself by singing Daisy Bell – reportedly the first ever song produced by a computer.
In fact, viewers begin to feel that Bowman is killing HAL. The disconnection feels like a vengeful termination, after witnessing the film’s earlier events. But though HAL makes emotional statements, a real world AI would certainly be limited to having only the ability to reason, and make decisions. The cold, hard truth is that – despite what computer scientists say – we will never be able to program emotions in the way HAL’s fictional creators did because we do not understand them. Psychologists and neuroscientists are certainly trying to learn how emotions interact with cognition, but still they remain a mystery.
Take our own research, for example. In a study conducted with Chinese-English bilinguals, we explored how the emotional value of words can change unconscious mental operation. When we presented our participants with positive and neutral words, such as “holiday” or “tree”, they unconsciously retrieved these word forms in Chinese. But when the words had a negative meaning, such as “murder” or “rape”, their brain blocked access to their mother tongue – without their knowledge.
Reason and emotion
On the other hand, we know a lot about reasoning. We can describe how we come to rational decisions, write rules and turn these rules into process and code. Yet emotions are a mysterious evolutionary legacy. Their source is the source of everything, and not simply an attribute of the mind that can be implemented by design. To program something, you not only need to know how it works, you need to know what the objective is. Reason has objectives, emotions don’t.
In an experiment conducted in 2015, we were able to put this to the test. We asked native speakers of Mandarin Chinese studying at Bangor University to play a game of chance for money. In each round, they had to take or leave a proposed bet shown on the screen – for example, a 50% chance of winning 20 points, and a 50% chance of losing 100 points.
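For intuition, the example bet above has a negative expected value, so a purely rational player should decline it – which is what makes any feedback-driven increase in risk-taking notable. A minimal check (the function name is ours, not from the study):

```python
def expected_value(p_win, win, lose):
    """Expected points gained from accepting a two-outcome bet."""
    return p_win * win - (1 - p_win) * lose

# The bet from the example: 50% chance to win 20 points,
# 50% chance to lose 100 points
ev = expected_value(0.5, 20, 100)
print(ev)  # prints -40.0: on average, taking the bet loses 40 points
```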
We hypothesised that giving them feedback in their mother tongue would be more emotional to them and so lead them to behave differently, compared to when they received feedback in their second language, English. Indeed, when they received positive feedback in native Chinese, they were 10% more likely to take a bet in the next round, irrespective of risk. This shows that emotions influence reasoning.
Going back to AI, as emotions cannot be truly implemented in a program – no matter how sophisticated it may be – the reasoning of the computer can never be changed by its feelings.
One possible interpretation of HAL’s strange “emotional” behaviour is that it was programmed to simulate emotions in extreme situations, where it would need to manipulate humans not on the basis of reasoning but by calling upon their emotional self, when human reason fails. This is the only way I can see that real world AI could convincingly simulate emotions in such circumstances.
In my opinion, we will not, ever, build a machine that feels, hopes, is scared, or happy. And because that is an absolute prerequisite to any claim that we have engendered artificial general intelligence, we will never create an artificial mind outside life.
This is precisely where the magic of 2001: A Space Odyssey lies. For a moment, we are led to believe the impossible, that pure science fiction can override the facts of the world we live in.
Guillaume Thierry does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
The English language is the world's Achilles heel
Author: Guillaume Thierry, Professor of Cognitive Neuroscience, Bangor University
English has achieved prime status by becoming the most widely spoken language in the world – if one disregards proficiency – ahead of Mandarin Chinese and Spanish. English is spoken in 101 countries, while Arabic is spoken in 60, French in 51, Chinese in 33, and Spanish in 31. From one small island, English has gone on to acquire lingua franca status in international business, worldwide diplomacy, and science.
But the success of English – or indeed any language – as a “universal” language comes with a hefty price, in terms of vulnerability. Problems arise when English is a second language to either speakers, listeners, or both. No matter how proficient they are, their own understanding of English, and their first (or “native”) language can change what they believe is being said.
When someone uses their second language, they seem to operate slightly differently than when they function in their native language. This phenomenon has been referred to as the “foreign language effect”. Research from our group has shown that native speakers of Chinese, for example, tended to take more risks in a gambling game when they received positive feedback in their native language (wins), when compared to negative feedback (losses). But this trend disappeared – that is, they became less impulsive – when the same positive feedback was given to them in English. It was as if they were more rational in their second language.
While reduced impulsiveness when dealing in a second language can be seen as a positive thing, the picture is potentially much darker when it comes to human interactions. In a second language, research has found that speakers are also likely to be less emotional and show less empathy and consideration for the emotional state of others.
For instance, we showed that Chinese-English bilinguals exposed to negative words in English unconsciously filtered out the mental impact of these words. And Polish-English bilinguals who are normally affected by sad statements in their native Polish appeared to be much less disturbed by the same statements in English.
In another recent study by our group, we found that second language use can even affect one’s inclination to believe the truth, especially when conversations touch on culture and intimate beliefs.
Since second language speakers of English are a huge majority in the world today, native English speakers will frequently interact with non-native speakers in English, more so than any other language. And in an exchange between a native and a foreign speaker, the research suggests that the foreign speaker is more likely to be emotionally detached and can even show different moral judgements.
And there is more. While English provides a phenomenal opportunity for global communication, its prominence means that native speakers of English have low awareness of language diversity. This is a problem because there is good evidence that differences between languages go hand-in-hand with differences in conceptualisation of the world and even perception of it.
In 2009, we were able to show that native speakers of Greek, who have two words for dark blue and light blue in their language, see the contrast between light and dark blue as more salient than native speakers of English. This effect was not simply due to the different environments in which the two groups were brought up, because the native speakers of English showed similar sensitivity to blue contrasts and green contrasts, the latter being very common in the UK.
On the one hand, operating in a second language is not the same as operating in a native language. But, on the other, language diversity has a big impact on perception and conceptions. This is bound to have implications on how information is accessed, how it is interpreted, and how it is used by second language speakers when they interact with others.
We can come to the conclusion that a balanced exchange of ideas, as well as consideration for others’ emotional states and beliefs, requires a proficient knowledge of each other’s native language. In other words, we need truly bilingual exchanges, in which all involved know the language of the other. So, it is just as important for English native speakers to be able to converse with others in their languages.
The US and the UK could do much more to engage in rectifying the world’s language balance, and foster mass learning of foreign languages. Unfortunately, the best way to achieve near-native foreign language proficiency is through immersion, by visiting other countries and interacting with local speakers of the language. Doing so might also have the effect of bridging some current political divides.
Guillaume Thierry has received funding from the BBSRC, ESRC, AHRC, British Academy, and ERC.
We're mapping wartime shipwrecks to explore the past – and help develop green energy projects
Author: Michael Roberts, SEACAMS R&D Project Manager, Centre for Applied Marine Sciences, Bangor University
Wartime shipwrecks such as the USS Juneau – recently discovered in the Pacific Ocean by philanthropist Paul Allen and his team – are of great interest to both military historians and the general public. The USS Juneau was holed by a Japanese torpedo off the Solomon Islands in November 1942, and sank in more than 13,000 feet of water with the loss of 687 lives. Its discovery offers a hugely valuable insight into the fate of both the ship and its crew.
Many such wrecks lie in extremely deep, relatively clear waters and are the legacy of naval battles fought far out to sea. But some of the technologies and methods that are being used to locate and identify such sites are now being employed by scientists in shallower, sediment-rich UK waters for similar – and very different – purposes.
During both world wars, Britain relied heavily on shipping convoys to supply the nation via well-established maritime routes into major ports such as Liverpool, Cardiff and Bristol. But these busy marine “corridors” were also well known to enemy forces, and losses due to German U-boat attacks, mines and collisions due to enforced “blackouts” in the Irish Sea were significant throughout both conflicts. There are more than 200 such wreck sites around Wales and many have yet to be examined in any great detail.
Since 2014, via the SEACAMS project funded by the Wales European Funding Office (WEFO), scientists from the School of Ocean Sciences at Bangor University have been using their research vessel Prince Madog – which is equipped with state-of-the-art multibeam sonar technology – to locate and survey vessels from both world wars. And in the Irish Sea alone, there are plenty to choose from.
How it works
The modern multibeam sonar systems on the Prince Madog generate very high resolution, three-dimensional models of the seafloor as the research vessel moves through the water over it. Depending on conditions and the specific systems used, these models can allow surveyors and scientists to identify objects at near centimetre scale. In water depths of 100 metres, typically found in the Irish Sea, researchers are generating models and images of wrecks that can help marine archaeologists to confirm their identity and even provide evidence of their demise. So far, more than 70 individual sites have been studied and it’s hoped that the project will survey around 100 new wreck sites this year.
While these wartime relics can provide valuable information to historians and archaeologists, they may also help lead to the birth of a new industry. The data being collected are providing scientists with unique insights into how these wrecks influence physical and biological processes in the ocean and this information is now being used to support the ambitions of the marine renewable energy (MRE) sector via research and development projects with developers such as Minesto in North Wales and Wave Hub in Pembrokeshire.
A number of MRE projects –– some being planned, some already underway – aim to capitalise on Wales’ excellent wave and tidal resources to create a sustainable energy supply. To assist in this, scientists at Bangor are now using shipwrecks as models and laboratories for predicting what will happen when key MRE-related infrastructure, such as foundations, turbines and cabling, are placed on the seabed at various locations.
Wrecks provide information on how the tide and currents have removed or deposited sediments, and how the presence of these structures on the seabed has influenced these processes over time. Researchers are also looking at how these structures can act as artificial reefs, potentially increasing the number of fish in an area and attracting whales, dolphins and diving birds. Through repeat sonar surveys, the research is also examining how different wrecks are degrading and how these vessels may ultimately pose a risk of pollution to nearby coastlines.
The data gathered will be hugely useful to those behind MRE projects, allowing them to better predict how green energy infrastructure will affect – and be affected by – their undersea locations.
As with the surveys underway in the South Pacific, such as the one that discovered the USS Juneau, the research being conducted in the Irish Sea is also driven by a desire to improve our understanding of past conflicts.
The Heritage Lottery funded project, Commemorating the Forgotten U-boat War around the Welsh Coast, 1914-18, for example, is being led by the Royal Commission on the Ancient and Historical Monuments in Wales in partnership with Bangor University and the Nautical Archaeology Society. It aims to highlight the fact that not all World War I battles and losses occurred along the Western Front – indeed, many raged within sight and sound of the UK coastline.
They were also truly international incidents. Many of the ships sunk were British, French, Irish, Norwegian, Portuguese and Russian – with crews from all over the world. Many German vessels were sunk, too.
The surveys are also solving scores of mysteries. Of the shipwreck sites in the Irish Sea examined so far, we have found that 40% of the vessels have been incorrectly identified on maps and charts. Using the detailed models produced by the sonar technology – as well as naval archives, shipyard records and a little detective work – we can hopefully ensure these mistakes are corrected and that we know exactly what was sunk where. This will give us a far clearer picture of what now lies beneath the waves – and what such wrecks can tell us about the turbulent past of these oceans.
Michael Roberts receives funding from Wales European Funding Office (WEFO) via the SEACAMS2 project and through the Heritage Lottery Fund via the Royal Commission on the Ancient and Historical Monuments of Wales led 'U-Boat project'.
A great year for signed languages in film – and what we can learn from it
Author: Sara Louise Wheeler, Lecturer in Social Policy (Welsh medium), Bangor University
Looking back at the films released in 2017, and those honoured at the Oscars, it is quite remarkable to note the prominence of signed languages. Three films in particular stand out for their sensitive portrayals of signed languages as bona fide languages: Baby Driver, The Shape of Water and The Silent Child. Two of these films, Baby Driver and The Silent Child, also make an important contribution – both onscreen and off – towards recognising and respecting Deaf culture, identity, and community; they both have Deaf actors playing characters that demonstrate the importance of signed languages in their everyday lives.
Baby Driver – which had two of its three 2018 Academy Award nominations for sound editing and mixing – contains a beautiful portrayal of American Sign Language (ASL) and its role in everyday life. Central character Baby/Miles and his Deaf foster father Joseph discuss relationships, Baby’s involvement in criminal activity, and the slightly more mundane topic of preparing dinner to Joseph’s liking.
Baby Driver is also a landmark film because actor CJ Jones, who plays the role of Joseph, is Deaf and a native ASL user. Studios often cast hearing actors in Deaf roles, but writer and director Edgar Wright stated that, having auditioned Jones, he found watching hearing actors pretend to be Deaf “difficult”. This is an interesting point in terms of cultural appropriation of Deaf identity and has been the subject of campaigning, including #DeafTalent. Meanwhile, Ansel Elgort, who played Baby/Miles, undertook lessons in ASL in order to “do it justice”.
The Shape of Water
The Shape of Water, which took four of the 2018 Academy Awards including best picture, is a magical realism tale, which follows a janitor at a government laboratory as she falls in love with a humanoid amphibian creature being held there. While protagonist Elisa Esposito (played by Sally Hawkins) is not deaf, she is unable to verbalise. So Elisa communicates exclusively through ASL with her co-worker Zelda Delilah Fuller (Octavia Spencer), neighbour Giles (Richard Jenkins), and her amphibian lover (Doug Jones) who also cannot verbalise.
Through the unfolding love story, and her conversations about it with her friends, the power of ASL as a full and vibrant language is demonstrated. Elisa befriends the creature and teaches him ASL, as one would any language. She begs and convinces Giles to help her rescue the creature, conveying the depth of her emotional attachment and love. Having copulated with the creature, Elisa confides in Zelda the intimate and explicit details of the encounter. All of this complexity of everyday life is conveyed through ASL.
In an interview, Hawkins explained that Elisa’s signed language was a deliberate mixture of period-specific ASL and an “amalgamation of things Elisa would have cobbled together” because of where she’d probably have learnt it. This is a good representation of the reality of many signers in this period and beyond. Signed languages were historically suppressed worldwide, in favour of lip reading and use of voice.
The Silent Child
The Silent Child, which was awarded best live action short film at the Oscars, conveys the isolation and dissociation of a profoundly deaf child, Libby, living with her hearing family, who assume she “follows things really well”. However, as we witness a family meal through Libby’s eyes, we can begin to appreciate that this is far from being the case. The situation improves with the arrival of a social worker, Joanne, who teaches Libby British Sign Language (BSL), enabling her to communicate and express herself. However, the film ends on an uncertain and emotional note, as Libby’s parents decide to cease her BSL sessions.
The film’s main character, Libby, is played by six-year-old Deaf actor Maisie Sly, a native BSL user. Rachel Shenton, who wrote and starred in the film as Joanne, made her Oscar acceptance speech in BSL.
The Silent Child is pro-BSL, while still illustrating the tensions between hearing and Deaf cultures, as BSL is still seen by many parents as a threat to family communication.
So, the films of 2017-18 have placed signed languages centre stage and generated much debate. But while attitudes towards them and Deaf culture may be improving, there is still a long way to go before anything nearing equality is achieved. It is thus gratifying to learn that the team behind The Silent Child are planning a full-length movie follow up, amid campaigning for children to be taught BSL in schools. Let’s hope that the momentum from this year can be maintained and built upon.
Sara Louise Wheeler does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
We've located the part of the brain which understands social interactions
Author: Jon Walbrin, PhD Researcher in Cognitive Neuroscience, Bangor University
The ability to quickly detect and recognise the purpose of a social interaction is as indispensable today as it would have been to our ancient ancestors – but how does the brain do it?
Figuring out the meaning of an interaction between other humans enables us to respond to, and think about, others accordingly. It means we can intervene at the point where a disagreement between friends disintegrates into a bitter argument. Or recognise a nuanced power struggle between two opposing politicians.
Our research team recently identified the part of the brain that plays a crucial role in this complex, yet effortless, process. What’s more, we’ve found that this region is not only sensitive to the presence of interactive behaviour, but also to the contents of interactions (for example whether people are helping each other or not).
In our study, we used a brain imaging technique – functional magnetic resonance imaging (fMRI) – to measure the responses of our participants to brief video clips. These depicted two human figures either interacting or not interacting with each other.
Intriguingly, a single brain region – the right side of the posterior superior temporal sulcus (pSTS) – was the only part of the brain that could differentiate between interactive and non-interactive videos. That is, the right pSTS was the only brain region that was reliably more activated by the interactive videos than by the non-interactive ones. This strongly suggests that this region plays an important role in perceiving interactive behaviour.
Measuring social interactions
This isn’t the first time that the pSTS has been linked with processing social information, however. The pSTS and broader temporal lobe region (the blue part of the brain in the image above) are known to be sensitive to other categories of social visual information. This includes faces and bodies, as well as theory of mind processing – that is, when actively thinking about what’s going on in another person’s mind. These types of social information are often at play during social encounters, so could it be that interaction sensitivity in the pSTS is actually due to differences in face, body, or theory of mind processing instead?
To figure out whether this was the case, we ran a second fMRI experiment. Here the videos that participants watched were very carefully controlled. These new videos did not contain sources of social information other than actions, so any pSTS sensitivity should have been from the interactive information.
Removing faces and bodies from videos about human social interactions seems almost impossible, if not inherently eerie. But we designed a special set of animations that showed 2D shapes moving “purposefully” around a scene.
Previous research has shown that carefully controlling how shapes like these move can create the strong impression that the shapes are “alive” and moving in an intentional way. These are much like the meaningful actions that humans might perform, for example, opening a door, or pulling a lever.
Competition and cooperation
Using these videos, we looked at two aspects of how the pSTS processes visual interactive information. First, we analysed whether it could tell two interacting shapes from two non-interacting shapes, similar to our first experiment. Then we looked at whether the pSTS could tell the difference between two types of interaction: competition, where the shapes worked against each other (for example, one tried to open a door while the other tried to close it), and cooperation, where the shapes worked together (for example, both tried to open the door).
As we predicted, the right pSTS was able to reliably discriminate between interaction and non-interaction videos, a finding that complements what we saw in our first experiment. Similarly, the pSTS could also reliably discriminate between competitive and cooperative interactions. Together, these findings demonstrate that the pSTS is a region centrally involved in processing visual social interaction information.
However, it seems unlikely that one small chunk of brain could be entirely responsible for such a complex and dynamic process. So we also compared responses in a neighbouring brain region – the temporo-parietal junction – that is associated with theory of mind processing, and so may also contribute to interaction perception. We observed that, like the pSTS, the junction could reliably tell apart interactions from non-interactions, and competition from cooperation, although to a weaker extent.
To bring it all together, these findings show the crucial role that the pSTS – and, to some extent, the temporo-parietal junction – play in perceiving visual social interactions. However, our results also open up a lot of interesting questions for future interaction research. We don’t yet know which type of interactive information is most important in detecting whether two people are interacting. Nor do we know how pSTS interaction responses differ in people who tend to show atypical responses to social visual information, such as those with autism spectrum disorders.
This research was funded by a European Research Council 'Becoming Social' grant, awarded to Dr Kami Koldewyn, lead researcher of the Developmental Social Vision Lab at Bangor University, Wales. Data from the first experiment presented here were collected at Massachusetts Institute of Technology.
Six common misconceptions about meditation
Author: Dusana Dorjee, Honorary Lecturer, Bangor University
Even with all the current interest in meditation, misconceptions about what this ancient practice can do for human health and well-being are still circulating.
1. There is only one type of meditation
Only some meditations involve sitting quietly with legs crossed. Qi Gong and Tai Chi, for example, focus on meditative movement. This combines a relaxed but alert state of mind with slow movements and gentle breathing. Others, like Tibetan Buddhist meditation, involve visualisations and/or mantras. There is also “thinking meditation”, where one reflects on topics such as impermanence while staying relaxed yet focused and reflective.
Many types also encourage bringing meditation into ordinary daily activities – such as mindful dish washing involving paying attention to the sensations of the water and hand movements. Similarly, there is eating meditation, where one expresses gratitude for the food and wishes for others less fortunate.
2. It’s all about being still and quiet
Stable non-reactive attention is developed in all meditation types, but it is particularly targeted in mindfulness practices. Other meditation types cultivate qualities such as compassion, generosity or forgiveness. Another form – sometimes called deconstructive meditation – specifically develops contemplative insight into the workings and nature of our minds.
Meditation training typically progresses from practices which stabilise attention to cultivating compassion and other related qualities, then insight. Importantly, at each of these stages the meditator reflects on their motivation and intentions for the practice, which is likely to affect the outcomes too. While some may meditate to reduce anxiety or back pain, others seek spiritual awakening, for example.
3. You have to be able to “empty the mind”
While meditating does often involve quieting of the mind, this doesn’t mean the mind goes blank. Meditation involves developing the ability to observe one’s thoughts, emotions and sensations with the quality of non-reactivity – that is being able to notice and pause rather than react – and develop a wider compassionate perspective.
The idea that one needs to empty the mind has probably come from misunderstandings about some advanced meditation types such as meditative absorptions, awareness of awareness practices or some Dzogchen meditations. These are accompanied by very few ordinary thoughts, sensations and emotions. But even with limited thinking, these meditative states have qualities of ease, clarity, compassion, alertness and reflective awareness. Forcefully trying to limit thinking would be unhealthy at any stage of meditation training.
4. Meditation will put you at ease from day one
Meditation isn’t simply a smooth ride to a quiet mind. Increased awareness of unhealthy mental habits and behaviour is common at the beginning of practice, and during transitions towards more advanced stages of meditation. These challenging experiences can actually give rise to some adverse effects – such as increased anxiety or disorientation. This is why it is important to practice under the guidance of an experienced and qualified meditation teacher who is able to provide advice on how to work with such experiences.
5. We know all there is to know about the benefits
Research has already supported the benefits of some types of meditation on things like depression and to some extent stress reduction. However, some other common claims aren’t backed up by scientific research. There is mixed or insufficient evidence on the effects of meditation on reduction in stress hormone levels, for example, and on ageing too.
Though research into how meditation affects the human brain continues, at present our understanding of the long-term effects of meditation is very limited. Most studies tend to follow the effects of meditation from before to after an eight-week course, or one-month retreat, rather than years or potentially a lifetime of meditation.
Neither have the benefits been defined by type of meditation. Different meditation styles – and even different types of mindfulness – have different forms and aims and so might have different impacts on human psychology and physiology.
6. It is only for reducing pain, stress or anxiety
The aim of meditation in its traditional context – including and beyond Buddhism – has been the exploration of meaning and purpose in life, and connecting with deeper existential awareness. This core aspect is often neglected in the current teaching. Research mostly – but not always – focuses on immediate health benefits of meditation, rather than existential well-being.
The existential awareness dimension of meditation practice is closely intertwined with the motivation and intentions behind meditation practice. So if we want to truly understand meditation, perhaps there needs to be a greater focus on this essential aspect. Learning more about this would also help address some current concerns about the use of meditation techniques outside traditional contexts as a means to increase productivity and reduce stress.
Meditation certainly has potential to contribute to our health and well-being, and its real power is still unexplored and unharnessed. If you are considering taking on or continuing with meditation practice, do your research and work out which practice (under proper guidance) will work best for you personally.
Dusana Dorjee has received funding in support of her research on meditation from the British Academy, the Mind and Life Institute and the ESRC. She also works for a community interest company providing training on a mindfulness and well being course for primary schools.
Russian spy attack: how toxic chemicals can cause widespread contamination
Author: Vera Thoss, Lecturer in Sustainable Chemistry, Bangor University
The recent attempted poisoning of the former Russian spy Sergei Skripal and his daughter has led to warnings about the spread of the toxic chemical used in the attack. Hundreds of people who visited the restaurant where the attack is thought to have taken place have been told to wash their clothes to avoid any chance of contamination with the suspected “Novichok” nerve agent.
The danger to the public is thankfully thought to be minimal, with only a small risk coming from prolonged, repeated exposure to the tiny amounts of the chemical. But how do experts know what the danger really is in a situation like this? In order to assess the situation, they need to consider how much of the chemical was released, how it came into contact with people, and how it spreads and degrades in the environment.
We can be exposed to chemicals through our skin, by breathing them in, eating them, or injecting them into our blood. And the exact route can make a huge difference, just as breathing in oxygen keeps us alive but injecting it can kill us.
The most toxic compounds are lethal even in tiny doses. For example, the botulinum toxins, the most toxic substances ever discovered, can kill with just a few nanograms per kilogram of bodyweight if injected into veins or muscles. If inhaled, the lethal dose is in the tens of nanograms per kilogram of bodyweight.
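Because these lethal doses are quoted per kilogram of bodyweight, the total dose scales with the victim's mass. A minimal sketch of that arithmetic, assuming an illustrative figure of 2 ng/kg as an example of the "few nanograms per kilogram" mentioned above (real toxicological values vary by toxin serotype and exposure route):

```python
# Illustrative only: scaling a per-kilogram lethal dose to a total body dose.
# The 2 ng/kg figure is an assumed example, not a precise toxicological value.
def total_dose_ng(dose_ng_per_kg: float, body_mass_kg: float) -> float:
    """Total dose in nanograms for a given body mass."""
    return dose_ng_per_kg * body_mass_kg

dose = total_dose_ng(2.0, 70.0)
print(f"{dose} ng")          # 140.0 ng for a 70 kg adult
print(f"{dose / 1e9} g")     # i.e. 1.4e-07 grams - an invisibly small amount
```

The point of the calculation is how vanishingly small the absolute quantity is: roughly a tenth of a microgram can be fatal.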
Many of the best-known lethal substances, such as cyanide or arsenic, must be ingested to take effect. But other deadly compounds can be absorbed simply by touching them. This was what happened in the case of Karen Wetterhahn, a professor of chemistry who accidentally dropped a small amount of dimethylmercury onto her latex-gloved hand. As this compound easily diffuses through latex, it was taken up by her body through the skin. She died of mercury poisoning five months later.
Sergei Skripal was poisoned with one of a class of nerve agents known as Novichok agents and chemically described as organophosphorus compounds. They act as acetylcholinesterase inhibitors, which means that they disrupt the central nervous system. These compounds can come in solid, liquid or gas form, and we know nerve agents work when ingested or inhaled. But it’s not yet clear what specific chemical compound was used in this case and how it was administered. Because of this, we don’t know how much of the agent was needed or how the victims were exposed.
How dangerous a chemical can be also depends on how easily it can spread and contaminate the environment. The physicochemistry of a substance plays an important role here. Arsenic has a melting point of over 600℃ so if it were sprinkled into food it would be unlikely to travel far from the plate because it is solid at room temperature.
But lethal compounds dispersed as gases, like the alleged use of chlorine gas in the Syrian civil war, can result in the instant spread of the chemical across a wide area. This means they can affect many more people, although as they become more widely dispersed they become less harmful to individuals because the doses people receive are lower. Similarly, poisons in liquid or aerosol form, or radioactive solutions can be easily transferred from one surface to another.
Once they’ve entered the environment, chemicals often begin to change or break down, rendering them less harmful over time. For example, when chlorine gas comes into contact with an oxidisable material, such as wood or clothing, it changes into a harmless, inert chloride compound.
In the case of radioactive material, how long the substance is dangerous depends on how quickly its atoms lose energy, a process known as radioactive decay and measured by what’s called a half-life. When another former Russian spy, Alexander Litvinenko, was assassinated in the UK in 2006, the murder weapon was radioactive polonium-210 put into his cup of tea. Polonium-210 has a half life of 138 days, meaning after this time half of its atoms have emitted an alpha particle and decayed into lead-206 atoms.
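The decay described here follows a simple exponential law: after each half-life, half of the remaining atoms have decayed. A short sketch of the calculation, using the 138-day half-life quoted above:

```python
# Fraction of the original polonium-210 atoms still undecayed after t days,
# given a half-life of 138 days: N(t)/N0 = 0.5 ** (t / 138).
def fraction_remaining(t_days: float, half_life_days: float = 138.0) -> float:
    return 0.5 ** (t_days / half_life_days)

print(f"after 138 days: {fraction_remaining(138):.2f}")  # 0.50 - one half-life
print(f"after 276 days: {fraction_remaining(276):.2f}")  # 0.25 - two half-lives
print(f"after 30 days:  {fraction_remaining(30):.3f}")   # 0.860
```

So roughly 14% of the polonium would have decayed, each decay emitting an alpha particle, in the month between the poisoning and Litvinenko's death.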
This alpha radiation emitted inside his body after he had drunk the poisoned tea was what made Litvinenko ill and eventually killed him a month later. But those who came into close contact with him, such as his nurses, would have been much less exposed to the radiation. Alpha particles do not travel a long way and are stopped by even minor obstacles such as a piece of paper or human skin.
Organophosphorus nerve agents including Novichok and sarin, which was used in the Tokyo subway attack that resulted in 13 deaths, are unstable and break down gradually over time or when exposed to water. This is why washing your clothes after being exposed to such a compound could be enough to get rid of it. In fact, organophosphorus-based nerve agents are so unstable that they are often stored as two or more separate compounds and then combined when needed.
The ability to react easily with other substances is what makes lethal chemicals so dangerous, to both their intended victims and innocent bystanders. As a result, these aggressive substances do not typically linger for long. But if they encounter something that holds them on its surface until it releases them again, this can extend their potentially damaging lifetime. Metallic door handles are a good example of a surface that can pass material from one person to another.
For those cleaning up a contaminated site, all these factors are vital to understanding what they are facing and how they can prevent anyone else falling victim to a deadly chemical.
This article has been amended to state that polonium-210 has a half-life of 138 days and decays into lead-206, not 139 days and polonium-206 as originally stated.
Vera Thoss does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Stanley Kubrick's films all had one thing in common: Jewishness
Author: Nathan Abrams, Professor of Film Studies, Bangor University
Legendary director Stanley Kubrick was known to have said that he was not really a Jew, he just happened to have two Jewish parents. But though he may have tried to deflect attention from this fact, Kubrick, who died in 1999 at the age of 70, was born and died a Jew, and Jewishness threads through and underpins all 13 of his films.
Kubrick was famously silent on the meaning of his movies, so their messages are open to interpretation on a number of levels. He covered many genres and topics – starting with war movie Fear and Desire in 1953, and ending with marital drama Eyes Wide Shut (1999) – and his films broke new ground in cinematic style.
But Kubrick, who is possibly the most written about film director after Alfred Hitchcock, has rarely been thought of as a Jewish director. This is because few dedicated researchers have bothered to probe his ethnic background in any detail.
Kubrick the Jewish man
Kubrick had a history of working with Jewish actors as his leading men and women. Notably Paul Mazursky, Joe Turkel (three times), Kirk Douglas (twice), Peter Sellers (twice), Shelley Winters, Aubrey Morris, Miriam Karlin, and Sydney Pollack. He also worked with Jewish writers, including Howard Fast, Michael Herr and Frederic Raphael, and considered adapting the work of such Jewish authors as Arthur Schnitzler, Stefan Zweig and Louis Begley. He adored the work of Sigmund Freud, the founder of psychoanalysis, and writer Franz Kafka too. But this alone does not make him a Jewish filmmaker.
Although Kubrick was never a practising Jew and the Jewish references and viewpoint are not explicit or obvious in his films, once you consider his films from the standpoint of his ethnicity, as well as his cultural and intellectual milieu, then some resonant themes emerge.
Though Kubrick famously worked on a Holocaust film, The Aryan Papers, which never came to fruition, his body of work went far beyond that in terms of Jewish references.
His first feature, Fear and Desire (1953) is his spin on the World War II platoon movie, which typically contained a range of ethnicities and races. True to form, Kubrick cast Mazursky as the shaky (Jewish) recruit Private Sidney. Killer’s Kiss in 1955 is very much moulded in the tradition of the Jewish boxing movie – features such as Body and Soul (1947), directed by Robert Rossen. Kubrick’s film noir, The Killing (1956), could well have as its tagline the Yiddish proverb, “Man plans, God laughs”. All three of these early films could also be described as existentialist, a philosophy popular with Jewish intellectuals in the postwar era, especially in Greenwich Village, New York City, where Kubrick then lived.
In dealing with a major incident of French military injustice during World War I, Paths of Glory (1957), recalls the antisemitic Dreyfus affair, a major cause célèbre of the 19th century. The epic Spartacus (1960) posits a Moses-like liberator who leads Roman slaves out of bondage while also considering such issues as McCarthyism, the Hollywood blacklist, civil rights, the Holocaust and the birth of the State of Israel – all issues of Jewish concern in the 1950s.
In 1964, Dr. Strangelove conflated nuclear holocaust with the Holocaust, particularly through its titular character, the former Nazi Dr. Strangelove, at a time when the Adolf Eichmann Trial was fresh in people’s memories.
Looking further at Kubrick’s later films, 2001: A Space Odyssey – which celebrates its 50th anniversary this year – plays with the Hebrew Bible, Jewish liturgy and Kabbalah (Jewish mysticism). It is full of numerological references, with the number four recurring frequently. A Clockwork Orange (1971) explores Judeo-Christian ideas of choice and conveys a very traditional Jewish viewpoint on the issue of free will. And Barry Lyndon (1975) warns of the dangers of social climbing in places where you don’t belong – a traditional Jewish fear, particularly in the 19th and 20th centuries.
The Shining (1980) – Kubrick’s contribution to the horror genre – deals with the very biblical theme of the sacrifice of the son by the father, as found in Genesis 22. And Full Metal Jacket (1987), while ostensibly about Vietnam, is, on one level, about the Holocaust and man’s propensity for evil and genocide.
This is all capped off by Eyes Wide Shut, possibly Kubrick’s most Jewish film – given it was adapted from the work of Jewish author Arthur Schnitzler and heavily influenced by the theories of his Jewish contemporary Sigmund Freud. It also contains the most explicitly Jewish character in any Kubrick film, Victor Ziegler (played by Sydney Pollack).
Kubrick’s films never offer up anything easy or obvious. He made few statements about them. But he spent a long time working on his movies. He was meticulous and paid great attention to detail. He was extremely cultured, well read, and cultivated. He certainly had views that he wanted to share but did so in the least obvious ways. He wanted to make viewers work to understand his deeper messages.
Kubrick’s films were never just about Jews, Jewishness and Judaism – they are far wider than any single theme. But even though the man himself tried to play down his Jewish roots, it is hard to deny that some of this material was intentional.
Nathan Abrams receives funding from The British Academy. He is the author of Stanley Kubrick: New York Jewish Intellectual (Rutgers University Press, 2018).
Thousands of starfish have washed up dead after the 'Beast from the East' – here's why
Author: Coleen Suckling, Lecturer in Marine Biology, Bangor University
Many Europeans have been assessing the damage from the recent wintery weather dramatically nicknamed the “Beast from the East”. But people visiting certain parts of the English coast found a particularly unwelcome surprise. Thousands of dead starfish and other sea creatures were washed up along the shores in Kent and East Yorkshire, creating surreal scenes reminiscent of post-apocalyptic horror movies. So how did a blizzard cause such marine destruction?
Mass starfish strandings aren’t completely unheard of. For example, several million were found on the coast of Worcester County, Maryland, USA in 1960. Up to 10,000 were found along the strandline on the Isle of Man in the British Isles in 1999. And 50,000 were stranded on the Irish coastline in 2009.
These events are not exclusive to starfish but have also included other marine animals that live on the seabed, including crabs and molluscs. Michael Symmons Roberts has even written a poem about these events.
We don’t yet know the exact reasons for mass strandings but they are often blamed on very cold weather or storms. The Beast from the East was a polar vortex that brought freezing temperatures and high onshore wind gusts onto the eastern coastline of the UK. High winds can disturb the seas along the coast, creating large waves that churn up the seabed where many animals reside. Sediments on the seabed are disturbed and can smother these animals.
Animals picked up by these disturbances can be moved high up the shore during the high tides and are left stranded as the tide retreats. Coincidentally, during the peak of the storm, UK shores also experienced low spring tides, which likely made the marine effects of these high winds worse.
As if this wasn’t enough, the Beast from the East brought very cold temperatures several degrees below freezing over several days across some parts of the country. Such low temperatures can have dramatic effects on marine life. Previous mass strandings have been blamed on cold temperatures making marine life severely lethargic. Once stranded, any surviving animals would have been exposed to potentially lethal low temperatures.
Starfish may be at particular risk of strandings after storms because of a behaviour known as “starballing”. By curling each of their multiple arms to create a large spherical balloon shape with their body, they can essentially roll over the seabed in fast-moving water and cover much greater distances. But during a storm they could be rolled out of control and left stranded on the beach.
It seems most likely that the mass strandings that occurred after the Beast from the East were driven by high wind speeds combined with low spring tides and extreme cold temperatures (for the UK). Luckily, the event is unlikely to have a long-term impact on UK starfish populations.
Starfish are abundant and widespread within the shallow waters of the UK and Atlantic Ocean. They are particularly resilient creatures that live in highly dynamic habitats, can regenerate limbs and are highly effective predators. What’s more worrying is that these kind of extreme weather events may become more common thanks to climate change.
Coleen Suckling does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Martial arts can improve your attention span and alertness long term – new study
Author: Ashleigh Johnstone, PhD Researcher in Cognitive Neuroscience, Bangor University
Martial arts require a good level of physical strength, but those who take up training need to develop an incredible amount of mental acuity, too.
Mental strength is so important to martial arts that researchers have found karate experts’ stronger punching force may be down to better control of muscle movement in the brain, rather than increased muscular strength. Other studies have also found that children who practice taekwondo show improved maths test scores and behaviour.
Which leads to an interesting question – does taking part in martial arts cause the brain to develop better control, or do people with these brain characteristics choose to do martial arts? It is something that our team has been researching, with interesting results.
We’ve been specifically measuring attention to assess mental control, as previous research has suggested that mindfulness and exercise can both have beneficial effects on attention. You could argue that martial arts are a combination of both – active sports that involve aspects of meditation and mindfulness.
In our recently published study, we recruited 21 amateur adults who practice martial arts (karate, judo and taekwondo, among others) and 27 adults with no experience in the sports, to take part in an attention network test. This test assesses three different types of attention: alerting (maintaining a sense of alertness), orienting (the shifting of attention), and executive (involved in choosing the correct response when there’s conflicting information).
We were particularly interested in the alert network, which can reveal how vigilant a person is. If a person has a high alert score on this test, it would suggest that they are better able to respond to unpredictably timed targets than those with a low score.
While there are differences across each martial art in terms of their core philosophies, whether they’re more of a “fighting” martial art or more “meditative”, and their intensity, we did not discriminate between the types our participants took part in. Future research could compare the different types, but for this study we were more interested in general martial artists’ attention compared to non-martial artists’.
We invited the participants to our lab, and recorded details of their martial arts experience (including the type, how often they practice, and how many years they’ve been involved in the sport) before asking them to take part in the computer-based task. This involved participants seeing a row of five arrows, and having to respond to the direction of the central arrow by pressing a letter button on a keyboard (“c” for left-facing arrows, and “m” for right) as quickly as possible. In some trials, they were given a warning cue that told them the arrows would appear soon, and in others they weren’t.
Typically, in most martial arts training, there’s an element of sparring – a form of simulated fighting with a partner. One aim of sparring is to remain focused and avoid the partner making contact. After all, nobody wants to be punched in the face. It is rare for a sparring opponent to give a clear warning of the exact timing of a punch, so the defending partner needs to stay alert, or vigilant, at all times so that they are ready to dodge the hit.
During our research, the martial arts participants produced higher alert scores than our non-martial artists. This means that the martial artists responded to the arrows faster, especially when they were not given a warning. This signifies that they have a greater level of vigilance, which could reflect stronger cognitive control.
We also looked at the effects of long-term martial arts practice, and found that alertness was better in the martial artists with the most experience. Several of our participants with more than nine years’ experience in the sport showed the best alertness in our tests. This suggests that the longer a person sticks at martial arts, the bigger the reward. Taking this a step further, it appears that the effects of improved attention may be long lasting, rather than just a short boost after training.
While it could be argued that martial arts are simply among many activities that can lead to better health, what we and other researchers have found is that their practice is one of those rare crossovers that significantly improves the brain just as much as the body.
Ashleigh Johnstone receives funding from the Economic and Social Research Council (ESRC).
Starfish can see in the dark (among other amazing abilities)
Author: Coleen Suckling, Lecturer in Marine Biology, Bangor University
If you go down to the shore today, you’re sure of a big surprise. Many will have witnessed the presence of a starfish or two when visiting the seashore or a public aquarium. Starfish come in an exciting range of colours and sizes, but have you ever given a thought to how this multi-armed wonder manages to exist in our oceans when it’s so unlike the other animals we know?
Recent research published in the journal Proceedings of the Royal Society B not only highlighted that starfish have eyes but also revealed that they can even see in the dark. Starfish may appear rather inanimate, as if they were simple pointy organisms that sit around on the seabed absorbing nutrients from the water. But in reality there’s a lot more going on beneath their spiny exteriors.
Seeing and glowing in the dark
Most starfish possess a crude eye at the tip of each arm. These compound eyes are made up of multiple units called ommatidia, each creating one pixel of the total image the animal sees. Tropical starfish eyes have been shown to be capable of forming crude images, which allow these animals to stay close to their homes.
Now scientists have shown that several deep-sea starfish species, found up to 1km beneath the water’s surface where no sunlight can penetrate, can still see despite the dark. Most species that can see in the dark depths of the ocean like this have more sensitive eyes but see cruder images. But these starfish appear to see more clearly than their tropical counterparts living in the shallows where there is light.
The researchers suggested different reasons for this. Some species appear to see clearly in a horizontal direction but less so in a vertical direction, which would make sense for an organism that lies on the sea floor. Others appear to have less ability to detect changes in what they’re seeing over time.
Two of these visual species are also bioluminescent, which means they can produce short glowing flashes across the surface of their bodies. It’s likely that the combination of these light flashes and the ability to see clearly allows these deep-sea starfish to communicate with potential mates.
Hungry predators, such as fish or crabs, can bite off the arms of starfish. If there is a struggle then some species of starfish will voluntarily break off their own arms, giving the rest of their body time to escape. More amazingly, they can then regenerate a whole new arm. If you find a starfish with one (or more) arm smaller than the rest, it’s very likely that this will be the new regenerating limb.
Powered by seawater
Starfish don’t have a typical set of muscles. Instead, they are able to move by pressurising seawater inside their body through a water vascular system. They draw in seawater through a porous spot called a madreporite located on the top surface of the body. The water then passes through a series of internal canals to reach the arms, which have thousands of small tube-like feet hanging below them.
Muscles and valves inside each tube foot pressurise water, enabling it to extend and retract and creating a walking movement, much like human legs but multiplied hundreds of times. At the end of each tube foot is a tiny suction cup, much like a kitchen plunger, which can stick to surfaces and allow the starfish to gain traction.
Starfish are extremely effective predators on the sea floor, feeding on a wide range of food such as mussels, clams and oysters. They will hunch over their prey and use their tube feet to simultaneously grip the prey and to clamp down onto the seabed to prevent any escapes.
If the prey is small enough the starfish will then swallow the entire animal, internally inflating its stomach, which is located in the centre where the arms meet. While holding this death grip position, the starfish will then gradually dissolve the edible soft tissues using enzymes inside the stomach before ejecting the inedible hard shell parts.
But if their prey is too big to insert into the stomach, this won’t stop the starfish from fighting to get its meal. Instead it will use its powerful arms and tube feet to pull the two shells slightly apart and then eject its stomach into the gap so it can break down the soft tissue inside the prey and slurp it up, much like using a straw.
So when you next see a starfish, whether it’s on the shore or in an aquarium, please give it a humble nod for being such a specialised member of our ocean club.
Coleen Suckling does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Positive psychology helps brain injury survivors recover with a better outlook on life
Author: Leanne Rowlands, PhD researcher in Neuropsychology, Bangor University
In the UK alone, nearly 350,000 people are admitted to hospital each year with an acquired brain injury, caused by anything from road traffic accidents, falls and assaults to vascular disorders such as strokes. And this number is growing.
As they physically recover from their injury, survivors and their families also face psychologically adjusting to a lasting impairment. Often, this includes cognitive and communicative difficulties. But the social and emotional factors can present a greater burden, with high rates of depression among survivors. This is not only difficult to experience, but can slow down the person’s overall recovery.
But not all of those with acquired brain injuries experience depression. And contrary to what some might expect, brain injury can actually be a source of positive personal growth. Some survivors recover with a better perception of themselves, an improved philosophy of life, and stronger personal relationships. Similarly, some survivors report improved quality of life and enhanced personal satisfaction.
So why the difference? Why do some brain injury survivors recover with a better frame of mind, while others struggle with depression? Trying to simply be happier doesn’t work – brain injury or not – but research suggests that appreciating the positive things in life is key.
In one study, researchers found that appreciation of life, new possibilities, and a patient’s own personal strength, greatly contributed to positive personal growth after a brain injury. It can seem like a difficult task, building internal strength after such a serious event, but there is an area of psychological research that has found it can be fairly simple to do.
In recent years, the field of positive psychology has been helping researchers and psychiatrists to better understand what causes happiness and encourages well being. This study of positive emotions, optimism, strengths, and understanding, looks at “building what’s strong” – rather than “fixing what’s wrong”.
Positive psychology can be put into practice using one of five simple methods, and it’s something we can all benefit from. Even though the focus is on building rather than fixing, this includes people with brain injuries, too.
Professor Jonathan Evans wrote in 2011 about how positive psychology could help those with brain injuries, suggesting that it may be used alongside other rehabilitation programmes to support them in adjusting to life after injury in a positive and hopeful way.
More recently, a trial project – the Positive PsychoTherapy in ABI Rehab (PoPsTAR) programme – put this idea into practice. The researchers incorporated therapeutic exercises based on positive psychology methods, such as setting realistic goals and focusing on positive events, with a rehabilitation programme. They found that Evans’s idea worked, and now we are working on a new project to take this method forward.
Of the five positive psychology methods, one of the most effective is “three good things”. The idea is that every day for a week you write down three things that have gone well, with a short explanation for each. This exercise has been shown to increase happiness and decrease symptoms of depression for up to six months in healthy control participants. And it has been shown to effectively improve happiness in a group of people with ABI, too.
It is thought that “three good things” helps people to focus on, and be more likely to notice, positive events and aspects of life after brain injury. For survivors with memory or attention impairment, reflecting on positive events may be more difficult. This can lead to an inaccurate sense of self, or negative perceptions of life and situations, causing some to feel that their life is lacking in positivity. But keeping a three good things diary can help them to recollect positive things in order to develop positive self-perceptions and self-esteem.
We have been running a pilot study with brain injury survivors which backs up the “three good things” research. The Brain Injury Solutions and Emotions Programme (BISEP) was developed to help survivors deal with any difficulties while they recover. But rather than doing it alone, we’re taking the three good things method one step further and asking them to share one good thing with a group of fellow survivors in a weekly meeting.
Though it’s early days, so far we have received positive anecdotes, with participants using the “things” to reformulate how they feel about their day. As group interventions have been shown to provide social support, the idea is to use the “good things” to help the participants engage with other survivors and motivate them to continue the positive method.
The two-hour weekly meetings are therapeutic. Each week, we discuss a different topic and different strategies, but always start with a good things reflection. Once again, it is a simple way to build a positive psychology method into recovery but one, we hope, that will help the survivors to build a new enthusiasm for life.
Leanne Rowlands does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
UK criminal justice is at breaking point after years of unstable leadership
Author: Stephen Clear, Lecturer in Law, Bangor University
The criminal justice system in England and Wales is failing victims and witnesses to such an extent that MPs say it is now “close to breaking point”. Years of budget cuts and changes have led to a justice system that is in meltdown.
With such a crisis at hand, one would expect some kind of “strong and stable” leadership from the UK government. Yet, in the most recent cabinet reshuffle, the prime minister, Theresa May, once again appointed a new lord chancellor and secretary of state for justice, David Gauke. Gauke is the sixth justice secretary since 2010, and Theresa May’s third. He replaced David Lidington just six months after Lidington took up the role. Prior to that, Liz Truss held the position for less than a year.
The Ministry of Justice is considered a major government department. Supported by 32 agencies and public bodies, its core purpose is to protect and advance the principles of justice, while upholding the rule of law. In fact, the UK justice system has long been “the envy of the world”. With an “independent judiciary” and “global lawyers”, the “brand” is recognised as “internationally outstanding”. But the lack of consistent leadership is causing it to stall. Though there are permanent secretaries working within the ministry, it is the secretary of state who “steers the ship”, and maintains relationships and trust between the government and the judiciary.
The post of lord chancellor – now more commonly known as the secretary of state for justice – dates back to medieval times, when the holder was responsible for the supervision, preparation and dispatch of the king’s letters, using the sovereign’s seal. Prior to the Constitutional Reform Act 2005, the lord chancellor held roles in all three arms of the state: they served as a senior judge, sat as a member of cabinet, and presided over the House of Lords.
Today the lord chancellor is an elected MP who holds the cabinet position of head of the ministry of justice. While they still have the ancient title of lord chancellor, the role focuses on responsibility for the efficient functioning and independence of the courts, along with other important constitutional roles.
But constant changes at the top mean that the secretaries of state for justice have not fulfilled these roles. In the meantime, judges have been branded “enemies of the people” – with only a slow response to defend them – and their diversity has been called into question.
On this latter point, in early 2018, David Lidington said that judicial diversity targets were “not the answer” to the issue. So what is? While the judicial appointments commission has a role to play in diversity matters, a secretary of state must be in place to set out the government’s position on what is a pressing matter. The judiciary should be representative of society, and right now it is not.
Cuts and closures
Looking to the front of house, England and Wales also needs a secretary to lead a meaningful review of the consequences of the £450m a year legal aid cuts, as well as their impact on the cost of justice. Much of what is now being recognised as bringing the justice system to breaking point is the consequence of years of these cuts. Again, a secretary with longevity in the role could lead on the future direction of justice policy within the UK, as well as keep justice issues at the top of the government’s agenda.
Similarly, there is the impact of the extensive programme of court closures, which must be headed up by consistent ministerial leadership. The country needs someone to ensure adequate responsibility is taken for the decisions being made, and to ensure that access to justice is not restricted.
However, none of this should be taken to mean that just any MP should be handed the role of justice secretary simply because they will last in the job. The ministry of justice requires a secretary with an understanding of the wider profession as it is today and the challenges lawyers face.
In recent years, secretaries have not even had legal backgrounds – although it must be noted that the appointment of Gauke, a former solicitor, has broken this recent trend, a fact which could see policies being led by a more intricate understanding of the law.
Without heeding these glaring warning signs now, the “breaking point” could very quickly develop into a crack in England and Wales’s legal system. Only with someone at the helm who can take long-term responsibility for overhauling the country’s legal system can justice be truly served at all levels.
Stephen Clear does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Five ingenious ways snakes manipulate their bodies to hunt and survive
Author: Tom Major, PhD candidate in Biological Sciences, Bangor University
Do a quick search for “snakes” in the news and you’ll find people terrified, bitten or, sadly, killed by these creatures. Many of us fear their slithering ways and researchers have found evidence which suggests that humans have evolved a tendency to spot snakes more easily than other animals.
But there are more than 3,500 species of snake in the world, and they have been around for 167m years – so they must be doing something right.
Although it seems strange to us, snakes’ lack of legs means that they have evolved numerous fantastic techniques to survive, making ingenious use of their cylindrical forms.
1. Some snakes can travel in straight lines
The majority of snakes bend their spines and exert force on the ground, trees, or water with the bends in their body or the edges of their coils to move. But some can travel in a perfectly straight line. Until recently, it was a mystery how they accomplished this, but new research demonstrates that boa constrictors and other heavy-bodied snakes use their belly scales like a tyre tread to seamlessly progress in a straight line.
Three sets of muscles work in unison: the first pulls the belly skin and scales forward; the second shortens the skin as the belly scales move forward and come together, pinning them in place; and the third brings the spinal column forward. This allows the snake to move forward at nearly constant speed, but they only do it when they are relaxed. A frightened snake in need of speed will revert to a more typical mode of locomotion.
Moving like this is thought to benefit snakes which spend time underground in narrow holes, allowing them to squeeze into animal burrows in search of refuge or prey.
2. Puff adders use their tongues as bait
Widespread across the grassy woodlands of sub-Saharan Africa and parts of the Arabian Peninsula is a chunky venomous snake called the puff adder (Bitis arietans), so named for its habit of hissing loudly when disturbed. Puff adders are successful predators of small mammals, lizards, frogs and birds, but until recently one secret to their success was unknown.
Upon spotting a frog nearby, the puff adder begins flicking its tongue unusually slowly, seemingly mimicking a small worm. To frogs, juicy worms are irresistible, and their eagerness to eat them leads them straight into the waiting mouth of the viper. This hunting strategy is known as lingual luring.
3. Mock viper eyes change shape
While it is gifted with one of the most impressive scientific names of any snake – Psammodynastes pulverulentus, a mixture of ancient Greek and Latin meaning “dusty sand ruler” – the mock viper, unlike the puff adder, does not possess deadly venom. Living in the forested areas of south and southeast Asia, the mock viper is surrounded by dangerous animals such as leopard cats, and faces the daily possibility of being eaten. To counter this and intimidate would-be predators, the mock viper earns its name by physically resembling a viper, possessing the well-defined triangular head that characterises real vipers in the area.
This disguise is not enough for these snakes, though. When threatened with imminent danger, the mock viper alters the shape of its pupil from round to a thin, vertical slit. These “elliptical pupils” are typical of actual vipers in the area. It is thought that this last-ditch defence may be enough to persuade a predator to think twice and allow the mock viper to slither to safety.
4. Boas line up to catch prey
In Cuba’s Desembarco del Granma national park, Jamaican fruit bats have found their ideal home in the chambers of sinkhole caves – deep holes sunk vertically into the ground. Unfortunately, it is no easy life: Cuban boas (Chilabothrus angulifer), large, constricting snakes with striking zigzag patterns, also live around these caves, and have developed a taste for the bats.
Though the bats spend the daytime comfortably roosted deep in the caves, they leave every evening to forage for fruit. The boas take up position on the cave ceiling late in the evening and wait for this nightly passage to take place. But their positioning is not random. The boas spread themselves in a line, forming a rudimentary barrier. This coordinated hunting increases their chances of catching a bat because their prey has no choice but to fly past a snake to exit the cave.
5. Sea snakes tie themselves in knots
Sea snakes spend their entire lives in water, even giving birth to live young in the ocean. They have many adaptations to survive, including a flat, paddle-shaped tail and an ability to excrete salt using a gland under the tongue.
Despite their name, yellow-bellied sea snakes (Hydrophis platurus) are not cowardly, but rather possess bright yellow undersides. These snakes have developed a bizarre strategy to help them shed their old skin. Because there is not much in the open sea to rub up against to loosen the skin, they actually tie themselves in a knot, using their own bodies as a scratching post to remove it in one piece, much like peeling off a sock.
Tom Major does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
How open data can help the world better manage coral reefs
Author: Adel Heenan, Postdoctoral Fellow, Bangor University; Ivor D. Williams, Coral Reef Ecologist, National Oceanic and Atmospheric Administration
Coral reefs are critically important to the world but despite the ongoing efforts of scientists and campaigners, these stunningly beautiful ecosystems still face a variety of threats. The most pervasive is, of course, climate change, which is putting their very future in jeopardy.
Climate change is a complex, worldwide problem that needs a global solution. One part of that solution is good monitoring systems that operate at a large scale. Broad-scale datasets from these systems are required to understand how vulnerable ecosystems like coral reefs are changing, and to separate real change from natural variation.
Often, however, scientists who collect coral reef monitoring data do so in isolation. They work on independent research projects, or for relatively small programmes with specific local agendas, and so don’t always make their data available to the scientific community. The pressure on academic researchers to be the first to publish their findings also disincentivises data sharing. So there can be a conflict of interest between the motivations of an individual scientist and the larger advancement of science.
More practically, getting data ready to share is time consuming, particularly when there aren’t standardised monitoring procedures or a good data management infrastructure in place. In the absence of good management, data can simply be lost as people move on, taking lab books, data sheets and external hard drives with them.
But these barriers can be overcome – through, for example, open access journals that publish scientifically valuable datasets. Peer-reviewed, citable datasets with standardised metadata promote sharing and reusability, while also recognising the researchers behind them.
Given the now urgent need to find science-based solutions for coral reefs, we believe the benefits of open data far outweigh the costs. This is one of the reasons we recently published our entire dataset of coral reef habitats and fish assemblages in the western central Pacific.
Our dataset was collected by scientific divers from the US National Oceanic and Atmospheric Administration (NOAA) between 2010 and 2017. They were part of the interdisciplinary team that operates from NOAA ships to collect physical, chemical and biological data for the Pacific Reef Assessment and Monitoring Programme. For seven years, these researchers surveyed fish assemblages and coral reef habitats at 39 islands and atolls in the US-affiliated western central Pacific.
The areas studied ranged from the remotest islands in the central Pacific – hundreds of kilometres from the nearest human civilisations – to highly populated, developed and urbanised islands such as Oahu and Guam.
These islands also have different biophysical conditions, such as temperature. This means that we have been able to quantify different threats relative to the natural background variability caused by environmental conditions. For instance, we can now understand the true effect of human depletion on coral reef fishes. We have also been able to set reasonable expectations for what a healthy reef looks like in different locations.
When multiple large data sets like this are pooled, they become even more powerful, allowing researchers to tackle key questions, such as where coral reef “bright spots” are and why they are thriving.
By making all data as easily available as ours, and working to improve comparability, we can speed up the scientific effort to better understand and manage coral reefs. Though we were required to make the NOAA data available under the United States Open Data Policy, we believe it is important for the wider coral reef community to fully embrace this ideal. Coral reefs are so widespread that no one programme can hope to gather data across most of their range. Linking large and small-scale programmes will improve the value of both: large datasets can give big-picture context, while localised programmes can be more intensive or regularly repeated.
One landmark study, for example – which used open datasets from different sources – found that the majority of coral reefs are fished to under half of their maximum population. So a range of management target benchmarks were established. Another compiled 25 different datasets to report on the status of coral reef fish biomass at 37 different districts in Hawaii, covering almost the entire archipelago’s coastline. Not only does this collated data help local reef management, but it can be used for marine spatial planning and for assessing effectiveness of reef management elsewhere.
There are certainly a number of challenges to bringing different datasets together. Scientists will have to work together to create a core set of community standards for how to calibrate across different methods, and for what to monitor. But by doing this, the information we gather will be far more useful in addressing the coral reef crisis. A commitment to open data is an important part of this.
Adel Heenan received funding from the NOAA coral reef conservation programme.
Ivor D. Williams receives funding from the NOAA coral reef conservation programme.
What supplements do scientists use, and why?
Author: Simon Bishop, Lecturer in Public Health and Primary Care, Bangor University; Graeme Close, Professor of Human Physiology, Liverpool John Moores University; Haleh Moravej, Senior Lecturer in Nutritional Sciences, Manchester Metropolitan University; Justin Roberts, Senior Lecturer, Anglia Ruskin University; Neil Williams, Lecturer in Exercise Physiology and Nutrition, Nottingham Trent University; Tim Spector, Professor of Genetic Epidemiology, King's College London
Supplements are a multi-billion dollar industry. But, unlike pharmaceutical companies, manufacturers of these products don’t have to prove that their products are effective, only that they are safe – and that’s for new supplements only.
We wanted to know which supplements are worth our attention (and money) so we asked six scientists – experts in everything from public health to exercise physiology – to name a supplement they take each day and why they take it. Here is what they said.
Simon Bishop, lecturer in public health and primary care, Bangor University
Turmeric is more familiar as an ingredient in South Asian cooking, adding an earthy warmth and fragrance to curried dishes, but, in recent years, it has also garnered attention for its potential health benefits. I have been taking ground turmeric root as a dietary supplement for around two years, but I have been interested in its use in Ayurvedic medicine for far longer.
Turmeric is used as a traditional remedy in many parts of Asia to reduce inflammation and help wounds heal. Now, mounting evidence suggests that curcumin, a substance in turmeric, may also help to protect against a range of diseases, including rheumatoid arthritis, cardiovascular disease, dementia and some cancers.
The evidence underpinning these claims of health-giving properties is not conclusive, but it is compelling enough for me to continue to take turmeric each morning, along with my first cup of coffee – another habit that may help me live a bit longer.
Graeme Close, professor of human physiology, Liverpool John Moores University
Vitamin D is a peculiar vitamin in that it is synthesised in our bodies with the aid of sunlight, so people who live in cold countries, or who spend a lot of time indoors, are at risk of a deficiency. People with darker skin tone are also more at risk of vitamin D deficiency as melanin slows down skin production of vitamin D. It is estimated that about a billion people are deficient in the vitamin.
Most people are aware that we need enough vitamin D to maintain healthy bones, but, over the past few years, scientists have become increasingly aware of other important roles of vitamin D. We now believe vitamin D deficiencies can result in a less efficient immune system, impaired muscle function and regeneration, and even depression.
Vitamin D is one of the cheapest supplements and its deficiency is really simple to correct. I used to test myself for deficiencies, but now – because I live in the UK, where sunlight is scarce between October and April and doesn’t contain enough UVB radiation during these cold months – I supplement with a dose of 50 micrograms, daily, throughout the winter. I also advise the elite athletes I provide nutrition support to that they should do the same.
Justin Roberts, senior lecturer in sport and exercise nutrition, Anglia Ruskin University
Having diverse beneficial gut bacteria is important for your physical and mental health. However, the balance of bacterial species can be disrupted by poor diet, being physically inactive and being under constant stress. One way to support the health of the gut is to consume dietary probiotics (live bacteria and yeasts), such as yogurt, kefir and kombucha.
I first came across probiotics after years of triathlon training, often experiencing gastrointestinal symptoms – such as nausea and stomach cramps – after training and races. I was also more susceptible to colds. After researching the area, I was surprised at how many people experience similar gastrointestinal problems after exercise. Now I have found that taking a probiotic regularly lessens my symptoms after training and benefits my general health.
A recent study we conducted showed that taking a probiotic in the evening with food, over 12 weeks of exercise training, reduced gastrointestinal problems in novice triathletes.
There is also a wealth of research supporting the use of probiotics for general health benefits, including improving intestinal health, enhancing the immune response and reducing serum cholesterol.
Neil Williams, lecturer in exercise physiology and nutrition, Nottingham Trent University
Prebiotics are non-digestible carbohydrates that act as a “fertiliser” to increase the growth and activity of beneficial bacteria in the gut. This in turn can have positive effects on inflammation, immune function and metabolic syndrome, increase mineral absorption, reduce traveller’s diarrhoea and improve gut health.
I first came across prebiotics in my research to target the gut microbiota in athletes suffering from exercise-induced asthma. Previous research had shown asthma patients to have altered gut microbiota, and feeding prebiotics to mice had been shown to improve their allergic asthma. Taking this as our launching point, we showed that taking prebiotics for three weeks could reduce the severity of exercise-induced asthma in adults by 40%. Participants in our study also noted improvements in eczema and allergic symptoms.
I add prebiotic powder to my coffee every morning. I have found that it reduces my hayfever symptoms in the summer and my likelihood of getting colds in the winter.
Haleh Moravej, senior lecturer in nutritional sciences, Manchester Metropolitan University
I started taking omega 3 after attending a Nutrition Society winter conference in 2016. The scientific evidence that omega 3 could improve my brain function, prevent mood disorders and help to prevent Alzheimer’s disease was overwhelming. After analysing my diet it was obvious that I wasn’t getting enough omega 3 fatty acids. A healthy adult should get a minimum of 250-500mg, daily.
Omega 3 is a form of fatty acid. It comes in many forms, two of which are very important for brain development and mental health: EPA and DHA. These types are primarily found in fish. Another type of omega 3 – ALA (alpha-linolenic acid) – is found in plant-based foods, such as nuts and seeds, including walnuts and flax seeds. Due to my busy schedule as a lecturer, during term time my diet is not as varied and enriched with omega 3 fatty acids as I would like, forcing me to choose a supplement. I take one 1,200mg capsule, daily.
Nothing but real food
Tim Spector, professor of genetic epidemiology, King’s College London
I used to take supplements, but six years ago I changed my mind. After researching my book I realised that the clinical studies, when properly carried out and independent of the manufacturers, clearly showed they didn’t work, and in many cases could be harmful. Studies of multivitamins show regular users are more likely to die of cancer or heart disease, for example. The only exception is supplements for preventing blindness due to macular degeneration, where randomised trials have been generally positive for a minor effect with a mixture of antioxidants.
In many cases, there is some experimental evidence these chemicals in supplements work naturally in the body or as foods, but no good evidence that when given in concentrated form as tablets they have any benefit. Recent evidence shows that high doses of some supplements can even be harmful – a case in point being calcium and vitamin D. Rather than taking expensive and ineffective synthetic products, we should get all the nutrients, microbes and vitamins we need from eating a range of real foods, as evolution and nature intended.
Graeme Close consults to Gatorade Sport Science Institute (GSSI) and Healthspan Elite. He has received funding from The MRC, BBSRC, Aliment Nutrition, GSK and GSSI.
Tim Spector receives funding from The MRC, Wellcome Trust, CDRF, NIHR, EU Horizon Grants, and is author of “The Diet Myth: The Real Science Behind What We Eat” (Orion, 2016).
Haleh Moravej, Justin Roberts, Neil Williams, and Simon Bishop do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointments.
You are more likely to deny the truth in your second language
Author: Manon Jones, Senior Lecturer of Psychology, Bangor University; Ceri Ellis, Research Associate, University of Manchester
Whether you’re speaking in your native tongue, or in another language, being understood and believed is fundamental to good communication. After all, a fact is a fact in any language, and a statement that is objectively true should just be considered true, whether presented to you in English, Chinese or Arabic.
However, our research suggests that the perception of truth is slippery when viewed through the prism of different languages and cultures. So much so that people who speak two languages can accept a fact in one of their languages, while denying it in the other.
Bilingual people often report that they feel different when switching from one language to another. Take Karin, a fictitious bilingual, for example. She might use German informally at home with family, in the pub, and while watching football. But she uses English for more structured, professional aspects of her life as an international lawyer.
This contextual change of language is not simply superficial, it goes hand-in-hand with a host of perceptual, cognitive and emotional trends. Research shows that language linked to experiences shapes the way we process information. So if someone was to utter the words “Ich liebe dich” to Karin, she might well blush, but by the same token, “I love you” might not alter her cheek colour at all. It’s not a matter of proficiency: Karin is equally fluent in German and English, but her emotional experiences are bound more strongly to her mother tongue, simply because she experienced more fundamental, defining emotions as a child.
A substantial number of psychology experiments have shown that languages shape aspects of our visual perception, the way we categorise objects in our environment, and even the way we perceive events. In other words, our very sense of reality is constructed by the confines of the language we speak.
Less is known of whether language also shapes our higher-level knowledge, relating to concepts and facts. Until recently, it was commonly assumed that one’s understanding of meaning is shared across all the languages one speaks. However, we have been able to observe that this is not the case. Bilinguals actually interpret facts differently depending on the language they are presented with, and depending on whether the fact makes them feel good or bad about their native culture.
During one such study from our group, we asked Welsh-English bilinguals – who had spoken Welsh from birth and considered themselves culturally Welsh – to rate sentences as true or false. The sentences had either a positive or negative cultural connotation, and were factually either true or false. For example, “mining was celebrated as a core and fruitful industry in our country” has a positive connotation and is a true statement. Another similar yet subtly different example is “Wales exports prime quality slate to every single country”, which is a positive yet false statement. The statement “historians have shown that miners were heavily exploited in our country” is negative and true. And finally, “the poor work ethic of miners ruined the mining industry in our country” is negative and false.
Our bilingual participants read these sentences in both English and Welsh, and as they categorised each one, we used electrodes attached to their scalps to record the implicit interpretation of each sentence.
We found that when sentences were positive, bilinguals showed a bias towards categorising them as true – even when they were false – and that they did this in both languages. So far, no surprise. But when sentences were negative, bilinguals responded to them differently depending on whether they were presented in Welsh or in English, even though the exact same information was presented in both of the languages.
In Welsh they tended to be less biased and more truthful, and so they often correctly identified some unpleasant statements as true. But in English, their bias resulted in a surprisingly defensive reaction: they denied the truth of unpleasant statements, and so tended to categorise them as false, even though they were true.
This research shows the way in which language interacts with emotions to trigger asymmetric effects on our interpretation of facts. While our participants’ native language is closely tied to their emotions – which perhaps comes with greater honesty and vulnerability – their second language is associated with more distant, rational thinking.
Make no mistake, our bilingual participants knew what was factually true and what was factually false – as revealed by the brain activity measures – but functioning in the second language appeared to protect them against unpalatable truths, allowing them to deal with such information more strategically.
Manon Jones receives funding from the Coleg Cymraeg Cenedlaethol.
Ceri Ellis does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.