On our News pages
Our Research News pages contain an abundance of research-related articles, covering recent research output and topical issues.
Our researchers publish across a wide range of subjects and topics and across a range of news platforms. The articles below are a few of those published on TheConversation.com.
Why PrEP takers should still use condoms with HIV+ partners
Author: Simon Bishop, Lecturer in Public Health and Primary Care, Bangor University
In the film, The Matrix, lead character Neo is given the choice to take one of two pills that will determine his fate. The red pill promises to open his eyes to the true nature of reality, while the blue pill will perpetuate his ignorance and shield him with a comfortable illusion. Neo takes the red pill in a moment that has become one of the most retold film analogies of all time.
Neo’s pill taking is also useful for delving into a worrying trend that has arisen in recent months with regards to HIV-prevention drugs. The medications that have been licensed in recent years to reduce HIV transmission among homosexual men run the risk of being nothing more than a blue pill for other groups, luring users into a false sense of security.
Condoms have been the mainstay of safer sex messages for 30 years as the best way of reducing HIV transmission. In 2012, however, the US Food and Drug Administration licensed a drug to prevent people from contracting HIV – a drug that had previously only been used to treat the infection. This small blue pill was called Truvada, and so pre-exposure prophylaxis (or PrEP) was born. By this stage, evidence of the safety and effectiveness of Truvada in reducing HIV transmission was already strong, especially among men who have sex with men. The US decision to licence the drug was quickly followed by World Health Organisation guidelines also supporting the use of Truvada for PrEP, not as an alternative to condom use, but rather as part of a broader HIV prevention approach that included condoms.
With US and WHO approval, the use of PrEP has now become commonplace and widespread. In the UK, Scotland currently offers PrEP for free on the NHS to high-risk individuals, and both England and Wales are trialling its use through selected sexual health clinics. For those unable to obtain an NHS prescription for Truvada, there are a number of online sellers willing to provide the drug via mail-order for as little as £35 (US$48) a month, making PrEP hugely accessible.
On the face of it, these might seem like welcome developments – an available and affordable way to reduce the number of people contracting HIV. The problem is that some men who have sex with men seem not to be using PrEP in addition to condoms, but rather as an alternative. The wide availability of PrEP, and its promise to protect against HIV, also appears to be leading heterosexual men and women towards using Truvada in order to avoid condoms, particularly within the context of commercial sex. Indeed, guidance from both the US and the UK suggests that PrEP may be considered appropriate for use by heterosexuals who are sexually promiscuous but tend not to (or perhaps do not want to) use condoms.
There are a number of problems with this position. First, condoms protect against more than just HIV. Dispensing with their use risks exposure to other sexually transmitted infections, including gonorrhoea, chlamydia and syphilis. Although these infections are usually curable, there are strains that are resistant to antibiotics and so difficult to treat. Drug-resistant gonorrhoea in particular represents a major public health concern.
Even leaving aside other sexually transmitted infections, PrEP still represents a poor alternative to condom use, particularly in protecting heterosexuals. Estimates of the relative effectiveness of the two approaches vary, but studies that have looked at the use of PrEP to prevent HIV transmission in women have often been disappointing in terms of their ability to prevent new infections.
The situation is made more complicated because – though injectable alternatives are now being trialled – Truvada is usually provided as a pill that needs to be taken regularly, ideally every day, to provide the best protection. Anyone who has ever been prescribed a course of antibiotics knows just how easy it can be to forget a dose, but in the case of PrEP this forgetfulness can have particularly serious consequences.
And finally, not all HIV is prevented by Truvada. The drug has been licensed for use to treat the disease for well over a decade and over time some strains of HIV have become resistant to it. One consequence of this resistance is that we have started to see failures of PrEP to prevent HIV infection, even when the drug is used consistently. Although the prevalence of Truvada-resistant HIV is currently thought to be very low, its very existence underlines the danger of relying on PrEP alone.
PrEP continues to be a valuable tool in the arsenal of HIV prevention, especially among high-risk groups, such as men and transgender women who have sex with men. Despite this, immense care needs to be taken to prevent Truvada from becoming viewed as an alternative to condoms by the wider population. Unfortunately, when used on its own, as in The Matrix, the blue pill risks offering an illusion of safety. It may be argued that it is ethical to make any new advancement in HIV prevention available to all. In reality, doing so may ultimately do more harm than good, fuelling an epidemic of sexually transmitted infections and speeding up drug resistance in HIV.
Simon Bishop does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Yoga in the workplace can reduce back pain and sickness absence
Authors: Dr Ned Hartfiel, Research Officer, Centre for Health Economics and Medicines Evaluation, Bangor University; Rhiannon Tudor Edwards, Professor of Health Economics, Bangor University
Back pain is the single leading cause of disability in the world. In the US, four out of every five people experience back pain at some point in their life. In the UK, back pain is one of the most common reasons for visits to the doctor, and missed work. In fact, absence from work due to back problems costs British employers more than £3 billion every year.
But there is a potentially easy way to prevent this problem: yoga. Our new research has found that exercises from the ancient Indian practice can have very positive benefits for back problems. Our findings suggest that yoga programmes consisting of stretching, breathing, and relaxation methods can reduce sickness absence due to back pain and musculoskeletal conditions.
Wellness at work
There has already been plenty of research demonstrating the benefits of yoga for NHS patients, showing that patients with chronic back pain who regularly practice yoga take fewer sick days than those who don’t practice yoga. But very little research has been done which looks into the benefits of implementing workplace programmes, like we did.
We worked with 150 NHS employees from three hospitals in North Wales. The staff were randomly assigned to either a yoga group or an education group. The yoga group received eight 60-minute yoga sessions, one a week for eight weeks. The yoga participants were also given a DVD and a poster for home practice, and were invited to practise yoga at home for ten minutes a day for six months. The education group, meanwhile, received two instructional booklets on how to manage back pain and reduce stress at work.
The yoga programme was based on Dru Yoga – which emphasises soft, flowing movements – and consisted of four parts. Each session began with a series of gentle warm-up movements, followed by eight stretches to release tension from the shoulders and hips. Participants then did four back care postures to develop suppleness in the spine and improve posture. Each session concluded with relaxation techniques to create an overall feeling of positive health and well-being.
After eight weeks, the results showed that most yoga participants had larger reductions in back pain than the education group. After six months, staff records showed that the yoga participants had taken 20 times less sick leave due to musculoskeletal conditions (including back pain) than the education group. We also found that, during the six-month study, the yoga participants visited health professionals for back pain only half as often as the education participants.
Those who improved the most were participants who also practised yoga at home for an average of 60 minutes or more each week. Ten minutes or more a day of home practice was associated with doubling the reduction in back pain, and many participants noted that it helped them to better manage stress too.
Gains in productivity
In the US, about a quarter of all major employers offer some form of meditation or yoga. Insurance company Aetna, for example, offers free yoga classes to its 55,000 employees, with reported annual savings of US$2,000 (£1,520) per head in healthcare costs and a US$3,000 (£2,280) gain per person in productivity. The practice has yet to be taken up as widely in the UK, or elsewhere in Europe, but preventing back pain makes economic sense all round: yoga is good not only for employees and employers, but for the economy as well.
With more and more research confirming the health benefits of yoga, the National Institute for Health and Care Excellence (NICE) in the UK now recommends stretching, strengthening and yoga exercises as the first step in managing low back pain. Public Health England also advises yoga classes in the workplace.
Since our initial work with the NHS proved to be such a success, the Dru Yoga healthy back programme used in the study has been delivered to staff at Merseyside Police, Great Ormond Street Hospital, the Institute of Chartered Accountants, Siemens, Barclays, Santander and many other private and public organisations. We now hope that many more will take up yoga to improve the health and well-being of their employees.
Dr Ned Hartfiel is a Research Officer at Bangor University and Director of the Healthy Back Programme Ltd. He is also a volunteer at the Dru International Training Centre in North Wales. This study was funded by a grant from the Welsh Health Economics Support Services.
Professor Rhiannon Tudor Edwards is co-director of the Centre for Health Economics and Medicines Evaluation, School of Healthcare Sciences, and of the Bangor Institute for Health and Medical Research at Bangor University. She receives funding from Health and Care Research Wales and Public Health Wales, both Welsh Government bodies.
Lessons from the Beeching cuts in reviving Britain's railways
Author: Andrew Edwards, Dean of Arts and Humanities and Senior Lecturer in Modern History, Bangor University
More than 50 years ago the Beeching Report was published, spelling the end of hundreds of miles of British railway lines and stations. Almost immediately, local campaigns sprang up to protest against what became infamously known as the “Beeching Axe”. Now, the transport secretary Chris Grayling has announced that some of the lines could be re-opened.
The proposals, which aim to “reverse decades of decline” in the railways, have been praised as the “rebirth of the railways”. Yet huge investment is needed to truly revitalise them. Now, as in Beeching’s time, Britain’s railways are in need of updating. And if we want to see a rail system that is both economically viable and socially beneficial, there are lessons to be learned from the wrongs of past policy.
Back in 1963, Dr Richard Beeching’s plans to cut 5,000 miles of line and some 300 stations were outlined in a British Railways Board report, The Reshaping of British Railways. From an economic perspective, the urgent need to identify savings in the railways was hard to challenge. Nationalised in 1948, the railways had struggled to pay their way for most of the following decade and had, by the early 1960s, accumulated significant operating deficits.
The railways were then in drastic need of modernisation of their rolling stock, and many of the stations built in the Victorian era had fallen into disrepair. Hit by substantial rises in the cost of coal and steel, the railways also saw attempts to place them on a sustainable footing hampered by a combination of management inertia and the lack of a clear government strategy. At a time when rail workers were poorly paid, there was even a reluctance to raise fares to offset operating losses.
Consequently, modernisation was slow to materialise and Britain’s railways still largely ran on steam. Despite the more efficient opportunities afforded by diesel locomotion and electrification, British Railways still purchased steam engines well into the 1950s.
Beeching’s remit in 1961 was to lead the railways back into profitability by the end of the decade. With Britain’s economic fortunes on the wane by the early 1960s, the time was ripe for a thorough rationalisation of Britain’s most prominent nationalised industry.
Economic vs social cost
The clinical and ruthless assessment of what was required to put the railways back on stable footing won many admirers in the then Conservative government. It embraced Beeching’s proposals and was quick to implement his report’s recommendations. Few alternatives were offered. When Labour returned to power in 1964, it did little to reverse the cuts – although Beeching was removed as chairman of British Railways in 1965. The reality was that – for both the main parties at the time – the vision of modernisation was framed around a transport system dominated by roads.
As prime minister at the time, Harold Macmillan confided in his diary in 1963: “In ten years we have gone from 2m to 6m motor cars. In another ten years we may go to 12 and eventually 18m cars.” The opening of the new M1 motorway in 1959 – eventually connecting the city of Leeds in the north of England to London in the south – had already provided an iconic symbol of a new vision that was to be pursued vigorously in the decades that followed.
The main opposition to Beeching’s proposals focused on the social impact of the proposed cuts. Opponents argued that Beeching had paid scant attention to the social importance of the railways. Many argued that the closure of many lines in rural Britain would isolate communities.
In regions of Britain such as rural north Wales – where tourism was widely viewed as an alternative to the fast-declining extractive industries and where depopulation was a significant social, cultural and economic problem – opposition to Beeching was voiced across the political spectrum. As a local Labour MP argued at the time, the railways were “a form of social service, which is as essential as the supply of electricity, gas, water and the NHS”.
Beeching did recognise these concerns, but it was outside his remit to find a solution to the social issue. Although local campaigns slowed down the rate of closures, the vast majority of the report’s recommendations were enacted. Across Wales, of the 1,500 miles of line in operation in 1951, only 670 miles remained in 1965. By 1975, the figure had fallen to less than 500.
The Beeching legacy
Since the closures, more than 50 lines have already reopened. In parts of the UK, many former lines were resurrected as part of the new “Heritage Rail” sector, while many successful community rail partnerships have also flourished. Elsewhere, disused lines have become popular cycle and walking tracks.
The former line from Bangor to Caernarfon along the North Wales coastline encapsulates the Beeching legacy. A significant proportion of the former line is now a popular cycle track, the site of the former station in Caernarfon now hosts a supermarket, while the line south of Caernarfon has been developed as part of the hugely successful Welsh Highland Railway. To reopen that line would, no doubt, stimulate a vigorous debate among local cyclists, environmental campaigners, local industry and heritage conservationists.
Contrary to the apocalyptic narrative that accompanies any discussion of his infamous “axe”, there was life for the railways after Beeching. But the reality was that the majority of Britons did view the roads as a more convenient, economical and practical mode of travel from the 1950s onwards.
Today, with those roads now overcrowded and motoring costs escalating, the railways are once again providing a viable alternative. Rail passenger numbers have risen dramatically over the past two decades. For that reason alone, there is logic in revisiting Beeching.
Regular rail users may well have a different view. Overcrowded trains, idiosyncratic timetabling and frequent delays are just some of the problems that need to be addressed. Moreover, rail fares have risen rapidly in real terms since the recession more than a decade ago. The rise of 3.4% in prices in 2018 will compound that problem. And the problems that faced Beeching back in the early 1960s are still there.
Whether nationalised or in private hands, Britain’s railways are still in desperate need of investment, modernisation and coordination. Meanwhile, Beeching’s elusive search for a more efficient railway goes on.
Andrew Edwards previously received UK funding body grants.
Exercise alone does not lead to weight loss in women – in the medium term
Author: Hans-Peter Kubis, Director of the Health Exercise and Rehabilitation Group, Bangor University
Knowing whether or not exercise causes people to lose weight is tricky. When people take up exercise, they often restrict their diet – consciously or unconsciously – and this can mask the effects of the exercise. In our latest study, we avoided this bias and discovered that exercise, on its own, does not lead to weight loss in women.
For our research, we concealed the true objective of our investigation (investigating weight loss response to exercise) from the participants, and used bogus objectives instead (cognitive performance and cardiovascular fitness improvement). We also excluded women who intended to lose weight from the study because there was a higher risk that they would restrict their diet.
In two training studies, over four and eight weeks, women aged 18 to 32 attended circuit-training classes three times a week. We recorded the women’s body weight, muscle and fat mass at the start and at the end of the study. We also took blood samples so that we could measure appetite hormones (insulin, leptin, amylin, ghrelin and PYY), as they can alter appetite and food intake.
Results showed that neither lean nor obese women lost weight: neither the 34 who finished the four-week training programme nor the 36 who finished the eight-week exercise programme. Lean women did, however, gain muscle mass.
When we looked at individual weight responses to the exercise programmes, we noticed that the levels of appetite hormones leptin and amylin helped explain why some people gained or lost weight by the end of the study. Changes in appetite hormones as a result of exercise make it much harder for some people to lose weight than for others. In other words, the energy they burned during the exercise class was replaced in their diet. Their body was effectively defending against weight loss, regardless of whether they were lean or obese.
This somewhat frustrating outcome does not mean that exercise is not good for people. There is no doubt that exercise has health benefits on many levels, whether it is for prevention of lifestyle diseases, such as type 2 diabetes or cardiovascular disease, or mental health issues, like depression. But we need to consider that our ancestors evolved to survive over millennia in environments where food was scarce, so our bodies are better adapted to defending against weight loss than defending against weight gain. Our bodies adjust and try to preserve our body weight if we take up exercise, but they don’t adjust to help us lose weight if we gain a few pounds.
However, exercise can help to control weight in indirect ways. It may help us develop more self-control and not give in to food temptations easily. We can also transfer some skills learned from regularly taking part in exercise, such as time management and overcoming periods of low motivation, to other behaviours, such as eating.
People need to work on their diet if they want to achieve weight loss. Combining a healthy diet – such as avoiding processed and sugary foods, eating lots of veg and other high-fibre foods, avoiding snacking and having regular meals – with exercise will certainly produce results.
Hans-Peter Kubis does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
What causes pods of whales to strand so often?
Author: Peter Evans, Honorary Senior Lecturer, Bangor University
Recently, ten sperm whales stranded in the waters of Ujong Batee, in Aceh Besar regency. Six of the ten were successfully rescued, but the other four died.
Meanwhile, at the start of this year, 600 pilot whales stranded in New Zealand. Around 400 of them died before volunteers could return them to the sea.
Mass strandings of whales like these have occurred since human records began, and they still happen regularly today.
At the end of 2015, for example, 337 sei whales died in a fjord in Chile. In February 2016, 29 sperm whales were found stranded on beaches in Germany, the Netherlands, eastern England and northern France – a record for this species in the North Sea.
Why do these animals, which are so supremely adapted to life in the water, move into a hostile terrestrial environment – with fatal results?
Mass strandings occur in almost all oceanic whale species. Long-finned and short-finned pilot whales tend to be the most frequent victims. Other species include false killer whales, melon-headed whales, Cuvier’s beaked whales and sperm whales.
These species normally live at depths of more than 1,000 metres and are social creatures, forming groups that can number hundreds of individuals.
The whale species that strand most often are deep-water dwellers, and strandings recur at the same locations, suggesting that nature plays a greater role than humans in causing them. Whales frequently strand in very shallow areas, where the sea floor slopes gently and is often sandy.
In conditions like these, it is no surprise that animals used to swimming in deep water can get into difficulty, and may even strand again if they are successfully refloated.
The echolocation they use to help them navigate also works poorly in such environments. So it is quite possible that most whales strand through navigational error, for example when chasing prey into unfamiliar and dangerous waters.
In the southern North Sea, mass strandings of whales have been recorded since at least 1577.
Mass strandings are not caused only by getting lost or misjudging the depth of the water. One or more of the whales may be sick and, as they grow weaker, seek shallower waters where it is easier to reach the surface to breathe.
But when their bodies rest on a hard surface for a long period, their chest cavities become compressed and their internal organs are damaged.
Sometimes human activity can cause whales to strand, particularly military activity involving the use of sonar. This link was first revealed in 1996, after a NATO military exercise off the coast of Greece coincided with the stranding of 12 Cuvier’s beaked whales. Unfortunately, those animals were never examined by vets.
In May 2000, a stranding in the Bahamas coincided with naval activity using similar sonar. Haemorrhaging was found in several of the whales that were examined, particularly in the inner ear – a sign of acoustic trauma.
After a similar incident in the Canary Islands in September 2002, vets also identified symptoms of decompression sickness, which means the whales did not necessarily die from stranding, but may have been injured, or already dead, at sea.
Many researchers believe sonar may trigger certain behaviours in whales that disrupt how they manage the gases in their bodies, impairing their ability to dive and surface safely.
Underwater noise is a major problem, created by human activities that introduce sound (of varying intensities and frequencies) into the sea, from a range of technologies and even from explosions.
Undersea earthquakes are another source of underwater noise, and could also cause physical damage or behavioural changes that lead to strandings, although no one has yet established a statistical link between the two.
The strandings in Aceh and New Zealand, in which significant numbers of whales were saved, also raise the question of whether some healthy animals simply follow sick ones into dangerous areas.
Many years ago, I helped with two short-beaked common dolphins that had stranded alive in the Teifi Estuary in Wales. One died quickly, and a post-mortem showed it had a severe parasitic lung infection, which was thought to have made breathing difficult. The other stayed close to its dying companion and appeared highly distressed, whistling constantly.
We managed to refloat this dolphin and eventually it swam away. For me, the incident demonstrated the strength of the social bonds between these animals. When we see large numbers of whales or dolphins apparently committing mass suicide, the likelihood is that they are responding to one another vocally, reflecting their strong social ties.
Research shows that whales involved in mass strandings are not necessarily even related to one another. So mass strandings may well be a reflection of just how strong the social bonds between whales are.
Peter Evans does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment mentioned above.
Blue Planet II: can we really halt the coral reef catastrophe?
Author: John Turner, Professor & Dean of Postgraduate Research, Bangor University
The third episode of the BBC’s Blue Planet II spectacularly described a series of fascinating interactions between species on some of the most pristine reefs in the world. These reefs, analogous to bustling cities, are powered by sunlight, and provide space and services for a wealth of marine life.
Competition is rife, as exemplified by the ferocious jaws of the metre-long bobbit worm, ready to pounce on unsuspecting fish by night from its lair in the sand, or the pulsating show of colours of the cuttlefish as it stalks a mesmerised crab. Other reef species team up in unlikely partnerships to improve the outcome of a hunt for fish amongst the coral, as shown by the pointing display of an octopus working in cahoots with a grouper.
Inevitably, the episode described how these cities are under threat, as warming oceans destroy the symbiotic relationship between the corals and the algae living within them, causing the corals to lose their algae, and become bleached.
Prolonged bleaching leads to the death of the colonies that build the reef, leaving behind lifeless ruins. Since 2014, an unprecedented series of consecutive warming events driven by climate change has affected many reefs, including Australia’s Great Barrier Reef, and annual bleaching is predicted to become more frequent, leaving no time for the reefs to recover between these extreme events. In the final scenes, narrator David Attenborough provides a glimmer of hope as he describes corals and other reef species spawning en masse to produce new generations of life to build new reefs.
What’s really going on?
The producers understandably visited the best and most pristine reefs in the world to capture these wonderful sequences. But we must remember that the majority of coral reefs, especially those close to large human populations, are already degraded by localised impacts: overfishing and destructive fishing practices, nutrient run-off from urban and agricultural land, and coastal development.
The most severely threatened reefs are in South-East Asia and the Atlantic, but even the Indian Ocean, Middle East and wider Pacific are now suffering from direct human impact. Estimates indicate that 75% of the world’s reefs are already threatened by local threats combined with rising sea surface temperatures and mortality from coral bleaching.
Even the remote reefs of the central Indian Ocean and north-west Pacific are now weakened, and vulnerable to disease. On current trajectories, bleaching episodes are predicted to be annual events affecting most reefs by mid-century; and by the end of the century, atmospheric carbon dioxide levels will have changed ocean chemistry, causing acidification, weakening the calcium carbonate skeletons of corals and slowing their growth. In their weakened state, these coral reefs will be further compromised by more frequent tropical storms and rising sea levels.
Resilient reefs may have some ability to resist climate change, adapt to the changing conditions, or recover from these disturbances. Corals in the Gulf experience high seasonal temperatures of up to 35°C without bleaching, having adapted to these conditions over evolutionary time, although sustained high temperatures, such as those experienced in 2010, can still cause them to bleach.
Some corals grow in near shore murky waters, where they may receive protection from high solar irradiation; even cloudy conditions can protect corals during warming events. Strong water currents and upwelling may also mitigate bleaching on seaward reefs.
Calm conditions, on the other hand, appear to enhance bleaching susceptibility. The remote and protected reefs of the Chagos Archipelago in the central Indian Ocean suffered 90% mortality in shallow waters during the severe warming event of 1998, but recovered relatively rapidly over 12 years compared to many other reefs, with rapid growth of branching and tabular corals. Consecutive warming events in 2015, 2016 and 2017, however, have devastated the shallow (less than 15 metres deep) reefs of these uninhabited and isolated atolls once more, and recovery may be more challenging this time.
What can be done?
Coral recruits can already be observed, probably arriving from slightly deeper water, but they are settling on dead, collapsing colonies and will be washed off the reefs in storms. Successful recolonisation may depend on the availability of stable substrates and on being able to compete with the algae that are replacing the live coral.
Although global action is required to reduce greenhouse gas emissions (and this will have little effect until mid-century), management intervention at a local level can build resilience on reefs by reducing direct human impact. In a study in Belize, localised fishing was controlled in a Marine Reserve in which grazing of algae by parrotfish was maintained, halving the rate of reef decline.
By maintaining the organisation and complexity of reefs, we can ensure that these reef cities thrive, even in the most threatened regions.
At the end of the Blue Planet II reef episode, thousands of groupers gathered at the drop off on a pristine and remote reef in French Polynesia, risking gatherings of hundreds of sharks to swim out into the tidal stream to spawn.
Off the Cayman Islands, in the central Caribbean, similar groups of spawning Nassau grouper were once heavily exploited by local fishers but are now legally protected. Acoustic techniques have been used to show that they are now once more gathering in their thousands to spawn.
As Blue Planet II made clear, our planet’s reefs are both beautiful and in peril. We do, however, still have time to save them – but only if we act now.
John Turner receives funding from the DEFRA Darwin Initiative and the Bertarelli Foundation, and is a Trustee of the Chagos Conservation Trust.
Why Holocaust jokes can only be told by a Jewish comedian
Author: Nathan Abrams, Professor of Film Studies, Bangor University
When Larry David joked about chatting up women in Nazi concentration camps recently he caused a minor storm of outrage. As part of a monologue on Saturday Night Live, David mused:
I’ve always been obsessed with women – and I’ve always wondered: If I’d grown up in Poland when Hitler came to power and was sent to a concentration camp, would I still be checking out women in the camp? I think I would.
“Of course,” he continued, “the problem is there are no good opening lines in a concentration camp. ‘How’s it going? They treating you OK? You know, if we ever get out of here, I’d love to take you out for some latkes. You like latkes?’”
David has joked about the Holocaust before. In the comedy show he co-created, Seinfeld, an entire episode is devoted to Schindler’s List. In his own show, Curb Your Enthusiasm, he plays Wagner (a favourite composer of Adolf Hitler) to a co-religionist who accuses him of being a self-hater. He invites a cast member of the reality show Survivor to meet a Holocaust survivor and they proceed to argue over who had it worse off. Many suggested David’s jokes weren’t in good taste, that he had crossed a line this time. But had he?
David is building upon a tradition of Holocaust humour which is nothing new. In the early 1960s, following the kidnap, trial, and execution of Adolf Eichmann, legendary Jewish comic, Lenny Bruce, had a joke in which he’d say in a redneck used car salesman’s voice: “Here’s a Volkswagen pickup truck that was just used slightly during the war carrying the people back and forth to the furnaces.” Or he held up a newspaper with the headline: “Six Million Jews Found Alive in Argentina.”
In 1964, Stanley Kubrick’s movie Dr Strangelove or: How I Learned to Stop Worrying and Love the Bomb parodied contemporary fears of nuclear destruction by conflating them with the Holocaust through its title character, a pantomime Nazi played by Peter Sellers. Three years later, in 1967, Mad Magazine’s Mein Kamp Humor Dept produced the parody Hokum’s Heroes. “And here it is … the brand new weekly TV situation comedy featuring that gay, wild, zany, irrepressible bunch of World War II concentration camp prisoners … those happy inmates of ‘Buchenwald’ known as … ‘Hokum’s Heroes’.”
Then, in that same year, Mel Brooks directed The Producers, a film which featured a bad-taste musical named Springtime for Hitler, complete with Busby Berkeley-style routines of SS troops dancing in swastika formation.
Knowledge beats outrage
Such Holocaust humour has grown exponentially in recent decades. This is particularly evident in mainstream American cinema where the Holocaust often appears as an incidental, gratuitous, superfluous throwaway line, or in-joke. Take Woody Allen – who has had a career-long fascination with the Holocaust. When asked in Deconstructing Harry (1997): “Do you care even about the Holocaust or do you think it never happened?” Allen has his protagonist Harry Block respond: “Not only do I know that we lost six million, but the scary thing is records are made to be broken.”
As Holocaust scholar Lawrence Baron has pointed out in his book, Projecting the Holocaust into the Present, images and themes from the Holocaust permeate popular culture like particles of dust filling the air. The Holocaust has become the benchmark and paradigm for evil. It is constantly invoked – and the more the term is used, the less powerful it becomes. This saturation has its consequences: the subject becomes ripe for humour. It is no longer taboo.
But it is also generational. For those born towards the end or soon after World War II, the Holocaust was a narrative they heard secondhand. For those born later, it is an historical event. They don’t know anyone who was murdered by the Nazis.
At the same time, Holocaust education has worked. In mainstream politics, it’s considered unacceptable to publicly deny the Holocaust – and is illegal to do so in many countries. For their part, younger Jews have learned that a low profile is useless, given that anti-Semites aren’t so discerning in their discrimination. At the same time, anti-Jewish prejudice has been on the decline in many countries – particularly towards the end of the 20th century and beginning of the 21st.
A generation of Jewish producers, directors, actors, actresses and screenwriters emerged that was less anxious, less afraid of stoking an antisemitic backlash. This is evidenced by the lack of outrage to so many of these jokes over the years, many of which have passed by barely noticed.
Larry David’s shtick on SNL is merely the latest in a 60-year trend. He is locating himself in a venerable tradition of gallows humour at which Jews have historically excelled. We have joked about pogroms before so why not the worst of them all? It does not mean that we are forgetting the Holocaust – on the contrary, the jokes are a form of remembrance. Having said that, I think that younger Jews are more likely to laugh than older Jewish people or non-Jews – we are more familiar with this humour and hence it’s less shocking.
But the key thing is: who is doing the telling? All the examples noted above are by Jews and that’s the principal point – if someone non-Jewish were to engage in this type of humour, it would have an entirely different connotation. It would not be appropriate.
Nathan Abrams receives funding from The British Academy.
Want to become self-compassionate? Run a marathon
Author: Rhi Willmot, PhD Researcher in Behavioural and Positive Psychology, Bangor University
Unsurprisingly, running a marathon is tough. It takes months of training before runners even make it to the starting line and this preparation can, at times, feel like punishment. The marathon runner in training can often be found limping around with blisters, sore muscles and blackened or lost toenails. Not, perhaps, an image we might naturally associate with the idea of “self-compassion”.
A relatively new concept, self-compassion has been hailed as a more robust alternative to self-esteem. While compassion refers to the demonstration of sympathy and concern for others in times of suffering, self-compassion entails showing this same understanding to ourselves.
One of the first skills needed for self-compassion is self-kindness – extending compassion to yourself, even when you feel like you have failed, which can be challenging to say the least. Often when faced with failure, we implicitly assume self-criticism is necessary in order to motivate strong future performance. But in reality this strategy often falls flat. Giving ourselves a harsh talking-to doesn’t just make us feel bad, it also interferes with our ability to calmly examine a situation and identify what to change in order to improve – an essential component of psychological resilience.
But what does all of this have to do with running a marathon?
Training for a marathon can revolutionise self-perception, making kind self-talk – where you speak directly to yourself either mentally or out loud – easier for even the most reluctant of individuals. This shift isn’t prompted by changes in physique, but of mind. After dedicating oneself to a marathon, the body receives a perceptual upgrade, transforming from a mere body into an essential tool. You begin to see the true value in your own body and the strength that it has.
Research suggests that working towards purposeful goals enhances our sense of self-worth, so under the conditions of marathon training, self-care – looking after ourselves physically – is not only viewed as essential for performance, but as something we deserve. Commit to a goal, invest time, energy and emotion in that goal, and anything that threatens the performance of the body – literally the vehicle needed to carry you to your end target – is unacceptable.
This relates to the second element of self-compassion: a balanced perspective. Described as caring for ourselves in an enduring way, a balanced perspective ensures happiness and health in the long-term. This can also be tricky, given we are typically geared toward instant gratification and struggle to connect the immediate rewards of pleasurable items such as food, alcohol and cigarettes, with their long-term consequences. In fact, neurological research suggests that we literally see our future selves as different people.
However, training for a marathon can help perceptual balance, because it directs our attention away from our immediate concerns and towards the future. Research suggests that goals cognitively activate stimuli which help us achieve them. This means the motivation to complete a marathon makes objects and activities which are relevant to our long-term health implicitly attractive and easier to engage with.
More specifically, setting a goal which requires us to plan and monitor progress over weeks or months can help to bridge the gap between current and future happiness. Sticking to a schedule and receiving feedback, such as identifying weekly mileage goals and achieving new distance targets, can make us more willing to make choices that will benefit us later on. This might be resisting the instant pleasure of one too many drinks on a Friday night, or getting enough sleep so that we feel at our best when training.
The third and final component of self-compassion is common humanity. This refers to the understanding that suffering is a natural and shared part of being human. Based on the idea that feeling isolated in our pain exacerbates perceptions of inadequacy and insecurity, common humanity is an important part of avoiding negative cycles of self-pity.
Running is sometimes considered an isolated and fiercely competitive sport, but this isn’t necessarily true. Runners step in to help one another in times of difficulty – just look at Matthew Rees, who helped fellow runner David Wyeth complete the last 300m of the 2017 London Marathon at the cost of his own finishing time. Running provides a sense of human connection, because it shows that struggle is normal. Being one in a field of thousands, communally suffering in the pursuit of a common goal, is paradoxically satisfying. Perhaps because it allows us to appreciate just how small we are in the scheme of things.
So, while marathon training may be painful, sometimes we have to experience a degree of suffering in order to truly value ourselves, to appreciate others, and to learn what it means to be self-compassionate.
Rhi Willmot does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Investing in warmer housing could save the NHS billions
Authors: Dr Nathan Bray, Research Officer in Health Economics, Bangor University; Eira Winrow, PhD Research Candidate and Research Project Support Officer, Bangor University; Rhiannon Tudor Edwards, Professor of Health Economics, Bangor University
British weather isn’t much to write home about. The temperate maritime climate makes for summers which are relatively warm and winters which are relatively cold. But despite rarely experiencing extremely cold weather, the UK has a problem with significantly more people dying during the winter compared to the rest of the year. In fact, 2.6m excess winter deaths have occurred since records began in 1950 – that’s equivalent to the entire population of Manchester.
Although the government has been collecting data on excess winter deaths – that is, the difference between the number of deaths that occur from December to March compared to the rest of the year – for almost 70 years, the annual statistics are still shocking. In the winter of 2014/15, there were a staggering 43,900 excess deaths, the highest recorded figure since 1999/2000. In the last 10 years, there has only been one winter with fewer than 20,000 excess deaths: 2013/14. Although excess winter deaths have been steadily declining since records began, in the winter of 2015/16 there were still 24,300.
According to official statistics, respiratory disease is the underlying cause for over a third of excess winter deaths, predominantly due to pneumonia and influenza. About three-quarters of these excess respiratory deaths occur in people aged 75 or over. Unsurprisingly, cold homes (particularly those below 16°C) cause a substantially increased risk of respiratory disease and older people are significantly more likely to have difficulty heating their homes.
Health and homes
The UK is currently in the midst of a housing crisis – and not just due to a lack of homes. According to a 2017 government report, a fifth of all homes in England fail to meet the Decent Homes Standard – which is aimed at bringing all council and housing association homes up to a minimum level. Despite the explicit guidelines, an astonishing 16% of private rented homes and 12% of housing association homes still have no form of central heating.
Even when people have adequate housing, the cost of energy and fuel can be a major issue. Government schemes, such as the affordable warmth grant, have been implemented to help low income households increase indoor warmth and energy efficiency. However, approximately 2.5m households in England (about one in nine) are still in fuel poverty – struggling to keep their homes adequately warm due to the cost of energy and fuel – and this figure is rising.
Poor housing costs the NHS a whopping £1.4 billion every year. Reports indicate that the health impact of poor housing is almost on a par with that of smoking and alcohol. Clearly, significant public health gains could be made through high quality, cost-effective home improvements, particularly for social housing. Take insulation, for example: evidence shows that properly fitted and safe insulation can increase indoor warmth, reduce damp, and improve respiratory health, which in turn reduces work and school absenteeism, and use of health services.
Warmth on prescription
In our recent research, we examined whether warmer social housing could improve population health and reduce use of NHS services in the northeast of England. To do this, we analysed the costs and outcomes associated with retrofitting social housing with new combi-boilers and double glazed windows.
After the housing improvements had been installed, NHS service use costs reduced by 16% per household – equating to an estimated NHS cost reduction of over £20,000 in just six months for the full cohort of 228 households. This reduction was offset by the initial expense of the housing improvements (around £3,725 per household), but if these results could be replicated and sustained, the NHS could eventually save millions of pounds over the lifetime of the new boilers and windows.
The benefits were not confined to NHS savings. We also found that the overall health status and financial satisfaction of main tenants significantly improved. Furthermore, over a third of households were no longer exhibiting signs of fuel poverty – households were subsequently able to heat all rooms in the home, where previously most had left one room unheated due to energy costs.
Perhaps it is time to think beyond medicines and surgery when we consider the remit of the NHS for improving health, and start looking into more projects like this. NHS-provided “boilers on prescription” have already been trialled in Sunderland with positive results. This sort of cross-government thinking promotes a nuanced approach to health and social care.
We don’t need to assume that the NHS should foot the bill entirely for ill health related to housing. For instance, the Treasury could establish a cross-government approach by investing in housing to simultaneously save NHS money. A £10 billion investment in better housing could pay for itself in just seven years through NHS cost savings. With a growing need to prevent ill health and avoidable death, maybe it’s time for the government to think creatively right across the public sector, and adopt a new slogan: improving health by any means necessary.
Nathan Bray receives funding from Health and Care Research Wales and the EU Horizon 2020 Framework Programme for Research and Innovation
Eira Winrow receives PhD funding from Health and Care Research Wales.
Rhiannon Tudor Edwards receives funding from the National Institute for Health Research, Health Technology Assessment (HTA), Health and Care Research Wales and the EU Horizon 2020 Framework Programme for Research and Innovation.
Why we taught psychology students how to run a marathon
Author: Rhi Willmot, PhD Researcher in Behavioural and Positive Psychology, Bangor University
Mike Fanelli, champion marathon runner and coach, tells his athletes to divide their race into thirds. “Run the first part with your head,” he says, “the middle part with your personality, and the last part with your heart.” Sage advice – particularly if you are a third year psychology student at Bangor University, preparing for one of the final milestones in your undergraduate experience: running the Liverpool Marathon.
For many students, the concluding semester of third year is a time of uncertainty. Not only are they tackling the demands of a dissertation and battling exams, but they are also teetering on the precipice of an unknown future, away from the comfort of university.
As spring draws to a close, the academic atmosphere provides a heady cocktail of sleep-deprivation, achievement and stress. Yet 22 of our students managed to do all this and train for a marathon as part of their “Born To Run” class. None of them had completed such a distance before – in fact, most had run no further than 5km prior to their module induction.
Rewind several months, and I am listening to my PhD supervisor, John Parkinson, and fellow academic Fran Garrad-Cole discuss their plans for “the running module”, which would coincide with more traditional lectures on positive and motivational psychology. I was greatly enthused by the idea, given the psychological benefits of physical activity. Exercise is related to improvements in mood, self-esteem and social integration, as well as reducing symptoms of depression.
Particularly relevant to those under pressure at work or school is the association between physical activity and the ability to cope with stress, as well as enhanced cognitive functioning. But despite these benefits, designing a class around running a marathon was no easy task.
Race to success
As neither module organiser nor student, it was easy for me to relish the gamble of this venture. My participation – assisting the classes and helping the students to train for the marathon – did not place my professional reputation on the line, nor did it have the potential to significantly impact the outcome of my degree. The danger with this kind of practical application is that when things fail, the failure is highly visible.
It would be easy to reduce “success” into a binary distinction of running or not running on race day. Yet this perspective would very much miss the point. The aim of the module wasn’t to complete a marathon, but to create graduates who set huge challenges, and nail them, whenever that may be.
Not every student ran the marathon, but for the 13 who did, the three who ran the half, and those who didn’t run at all, the lessons on perseverance and resilience demonstrate that failure can be an essential component of success.
The message from the Born to Run module was essentially one of courage. T S Eliot once said: “Only those who risk going too far can possibly find out how far one can go.” This statement rings true on multiple levels. It was visible in the students’ bravery in publicly committing to such a challenging goal, in John and Fran’s professional risk, and in the mental and physical demands of training for a marathon.
What I saw was the incredible impact that setting high expectations, balanced with warm support and strategic expertise, can have on student engagement. Most importantly, I learnt how bringing your own passion into the classroom can transform the learning experience, reaching into both students’ academic and personal lives.
So to return to Mike Fanelli, the final stages of the module, as well as the marathon, are about the heart. The technical strategies the students learnt saw them through the first few miles, and the traits they were encouraged to develop enabled them to cover the next third. But in the final part, when delirium sets in, it’s the emotional bond created by such a challenging yet supportive experience that gets you through.
The pleasure I felt at eventually crossing the line was multiplied immeasurably by sharing this experience with the others I have seen develop over the semester. I will be forever grateful to one student, Patrick, for pulling me through that last mile, and forever in awe of Fran, John and the first ever Born to Runners.
Rhi Willmot has nothing to disclose.
Documenting three good things could improve your mental well-being in work
Author: Kate Isherwood, PhD Student in Health and Well-being, Bangor University
The UK is facing a mental health crisis in the workplace. Around 4.6m working people – 7% of the British population – suffer from either depression or anxiety. In total, 25% of all EU citizens will report a mental health disorder at some point in their lives.
People who have been diagnosed with a mental health disorder, or show symptoms of one, and remain in work are known as “presentees”. These individuals may have trouble concentrating, memory problems, find it difficult to make decisions, and lose interest in their work. They underperform and are unproductive.
Medication and/or talking therapies – like cognitive behavioural therapy (CBT) – have been shown to be highly effective in treating common mental health disorders. But these interventions are aimed at those who are already signed off sick due to a mental health diagnosis (“absentees”).
Stress and pressure in work is not the same as at home, so those with mental health issues who are still in work need a different kind of help. In the workplace, employees can be subject to tight deadlines and heavy workloads, and may potentially be in an environment where there is a stigma against talking about mental health.
Reframing mental health
So what can be done for those working people who have depression or anxiety? Research has found that simply treating a person before they are signed off sick will not only protect their mental health, but can actually result in increased workplace productivity and well-being. For example, when a group of Australian researchers introduced CBT sessions into a British insurance company, they found it greatly improved workplace mental health.
In the study, seven three-hour sessions of traditional CBT were offered to all staff in the company. The sessions focused on thinking errors, goal-setting, and time management techniques. At follow-up appointments seven weeks and three months after the sessions had ended, the participants showed significant improvements in things like job satisfaction, self-esteem, and productivity. They had also improved on clinical measures such as attributional style – how a person explains life events to themselves – psychological well-being and psychological distress.
However, there have been concerns that using the types of treatment typically given to people outside work may be distracting to an employee. The worry is that they don’t directly contribute to company targets, instead offering more indirect benefits that can’t be as easily measured.
But there is an alternative that doesn’t take up too much company time and can still have a huge impact on employees’ mental health: positive psychology.
Three good things
In the last 15 years, psychological study has moved away from the traditional disease model, which looks at treating dysfunction or mental ill-health, towards the study of strengths that enable people to thrive. This research focuses on helping people to identify and utilise their own strengths, and encourages their ability to flourish.
Positive psychology concentrates on the development of “light-touch” methods – that take no longer than 10 to 15 minutes a day – to encourage people to stop, reflect and reinterpret their day.
Something as easy as writing down three good things that have happened to a person in one day has been shown to have a significant impact on happiness levels. Previous research has also found that learning how to identify and use one’s own strengths, or to express gratitude for even the littlest things, can reduce depression and increase happiness.
This is effective in the workplace as well: when a positive work-reflection diary system was put in place at a Swiss organisation, researchers found that it had a significant impact on employee well-being. Writing in diaries decreased employees’ depressive moods at bedtime, which had an effect on their mood the next morning. The staff members were going to work happier, simply by thinking positively about how their shift had gone the day before.
Added to this, when another group of researchers asked employees of an outpatient family clinic to spend ten minutes every day completing an online survey, stress levels, mental and physical complaints all significantly decreased. The questionnaire asked the participants to reflect on their day, and write about large or small, personal or work-related events that had gone well and explain why they had occurred – similar to the three good things diary. The staff members reported events like a nice coffee with a co-worker, a positive meeting, or just the fact that it was Friday. It showed that even small events can have a huge impact on happiness.
The simple practice of positive reflection creates a real shift in what people think about, and can change how they perceive their work lives. And, as an added benefit, if people share positive events with others, it can boost social bonds and friendships, further reducing workplace stress.
Reframing the day can also create a feedback loop that enhances its impact. When we are happier, we are more productive; when we are more productive, we reach our goals, which helps us to focus on our achievements more, which in turn makes us happier.
Kate Isherwood does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
What language tells us about changing attitudes to extremism
Author: Josie Ryan, PhD Researcher, Bangor University
The words “extreme”, “extremist” and “extremism” carry so many connotations these days – far more than a basic dictionary definition could ever cover. Most would agree that Islamic State, the London Bridge and Manchester Arena attackers, as well as certain “hate preachers”, are extremists. But what about Darren Osborne, who attacked the Finsbury Park Mosque? Or Thomas Mair, who murdered Labour MP Jo Cox? Or even certain media outlets and public figures who thrive on stirring up hatred between people? Their acts are hateful and ideologically driven, but calls for them to be described in the same terms as Islamic extremists are more open to debate.
The word “extreme” comes from the Latin (as so many words do) “extremus”, meaning, literally, far from the centre. But the words “extremist” and “extremism” are relatively new to the English language.
Much language is metaphorical, especially when we talk about abstract things, such as ideas. So, when we use “extreme” metaphorically, we mean ideas and behaviour that are not moderate and do not conform to the mainstream. These are meanings we can find in a dictionary, but this is not necessarily how or when extreme, extremist, and extremism are used in everyday life.
One way of finding out how words are used is to look at massive databases of language, called corpora. To find out more about how these words developed in Britain, I turned to the Hansard corpus, a collection of parliament speeches, from 1803 to 2005. Political language is quite specific, but analysing it is a good way to see how the issues of the day are being described. In addition, having a record which covers two centuries shows us how words and their meanings have changed over time.
Apart from the adverb “extremely” – used in the same way as “really” and “very” – my search showed that the word extreme was used most frequently in its adjective form during this 200-year period. However, usage of extreme as an adjective has been declining since the mid-1800s, as has the noun form. At the same time, two new nouns, “extremist” and “extremism” begin to appear in the corpus in the late 1800s, and usage gradually increases as time goes on. No longer are certain views and opinions described as extreme, instead extremist and extremism are used as a shorthand for complex ideas, characteristics, processes and even people.
In the graph above, we can see three peaks in the frequency of the noun extremist(s). It is interesting to see which groups have been labelled as extremist in the past as this can provide clues about who is considered an extremist these days, and also who is not.
In the 1920s, extremist and extremism were often used in connection with the Irish and Indian fights for independence from the British Empire. Fifty years on, they were linked with another particularly violent period in Irish history, while Rhodesia was also fighting for independence from Britain in the 1970s. The final increase in usage of the terms extremist and extremism comes, perhaps unsurprisingly, at the start of the 21st century.
However, the words have not been solely linked to violence: they were very often used to describe miners in the 1920s and animal rights activists in the 2000s. Both of these groups have had a lot of support from the British population if not from politicians speaking in parliament.
I also looked at the words that appear around the extreme words, or “collocates”. What I found is that the collocates of the search terms become increasingly negative over the period covered in the Hansard corpus. They also became less connected to situations, and more closely connected to political or religious ideas and violence. For example, in the late 20th century and early 2000s, “extremism” became more associated with Islam, and at the same time, it was collocated with words such as “threat”, “hatred”, “attack”, “terror”, “evil”, “destroy”, “fight”, and “xenophobic”.
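The idea behind collocate analysis can be sketched in a few lines of code. This is only an illustrative toy, not the tooling used in the study: real corpus work on Hansard uses dedicated software and statistical association measures, but the core step – counting the words that appear within a small window around a search term – looks roughly like this (the example sentence and window size are invented for demonstration):

```python
from collections import Counter

def collocates(tokens, target, window=4):
    """Count words appearing within `window` tokens either side of `target`."""
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok == target:
            lo = max(0, i - window)
            context = tokens[lo:i] + tokens[i + 1:i + 1 + window]
            counts.update(context)
    return counts

# Toy illustration (not the Hansard corpus):
tokens = ("the threat of extremism and the hatred behind extremism "
          "must be fought").split()
top = collocates(tokens, "extremism", window=2)
print(top.most_common(3))
```

In practice, raw co-occurrence counts would then be weighted by an association measure (such as mutual information or log-likelihood) so that frequent function words like “the” do not dominate the collocate list.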
After 2005, the extremist terms became much more frequently associated with the Islamic faith – to the point where the word “extremist” is now almost exclusively used to refer to a Muslim who has committed a terrorist act, and some have suggested there is reluctance to use it otherwise.
Looking at the collocates of extremist and extremism in a corpus of UK web news, which runs from 2010-2017, five of the top 10 collocates are related to Islam. “Right wing” and “far-right” also appear in the top 10. However, the top three collocates – “Islamic”, “Islamist” and “Muslim” – appear 50% more frequently than the other seven collocates in the list added together.
The most interesting thing to come out of this investigation is what has gone unsaid. Extremist and extremism are not being used as they were in the past to describe violent, hateful, and ideologically-driven acts, with no reference to ethnicity or faith. Today, the terms have become almost solely reserved for use in reference to Muslims who perpetrate terrorist attacks.
The words we use can affect and reveal how we perceive the world around us. Word meanings change over time, but reluctance to use the same word for the same behaviour betrays a bias towards crimes that are, perhaps, uncomfortably mainstream.
Josie Ryan does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Forget Jon Snow, watch the young women to find out how Game of Thrones ends
Author: Raluca Radulescu, Professor of Medieval Literature and English Literature, Bangor University
For Game of Thrones fans, the current series has been a bit of a mystery. As the television writers have picked up the storyline where author George R. R. Martin’s A Song of Ice and Fire novels left off, there is, for the first time, no original text to refer back to.
Much virtual ink has been spilled recently over the role of the female characters in the political struggle, yet one of the most crucial themes of this series is going largely undiscussed: the role of children, particularly young girls.
The children of Game of Thrones might form the thick-woven fabric of the tapestry we have been watching, but they have not really taken centre stage. There were little nods in past episodes towards the vital importance of the children in Game of Thrones: take the little orphans of King’s Landing, for example, who killed Grand Maester Pycelle of the Citadel – a rather more unusual turn of the plot. Later episodes have been more obvious about the power of children, but it is only now that the series is being so explicit about it.
The latest episode to air, episode six, lays on the central role of children and young people a little more thickly. Without giving too much away, the struggle between Sansa and Arya, the Stark sisters, seemingly comes to a head, while a shocking event involving Daenerys Targaryen causes her once more to tearfully utter the phrase “they are my children”, as she tells Jon Snow that she is unable to bear a child of her own. We have also recently heard that current queen of the Seven Kingdoms Cersei is pregnant once more with a new heir to the Lannister line.
Seen but not heard
From the start of the series, and indeed Martin’s novels, the struggle over dictating the future of the Seven Kingdoms has been very similar to that of the real-life Wars of the Roses. Cersei’s naked ambition and her son Joffrey’s stark cruelty (puns intended) recall Margaret of Anjou, the 15th-century French queen of the mentally unstable Lancastrian king Henry VI, whose son – allegedly begotten in adultery, though not incest as in Game of Thrones – was Prince Edward.
Like Margaret of Anjou, Cersei uses her reputation – and children – to her advantage. She takes charge of the family fortunes and boldly looks at the future as an opportunity for herself. There’s every chance she’ll don armour at some point, as Queen Margaret herself was rumoured to have done during the Wars of the Roses.
Unlike Margaret, however, Cersei faces a battle with the upcoming dynasties of women. Cersei still believes that she is the most important woman in Westeros, but the younger females we first saw as children have come more into the limelight during this and the last series. Cersei’s power is waning, while other prominent women such as Daenerys, or indeed the young lady Lyanna Mormont – the head of one of the great families of the North – are unafraid to ride into battle. Even Sansa, who Cersei once tried to humiliate and oppress, is now standing in as ruler of the North while her half-brother Jon Snow seemingly prefers his place in the heart of the action.
Since the first episodes, we have been watching these young women grow and change – but only now is their true significance being made clear. Where once they were shown in the more expected, traditional roles of a medieval female, now they are warriors in their own right. A feisty young Arya has transformed from the lively girl with her sword “Needle” into an assassin, a “Faceless Man” trained in the dark arts and haunting Winterfell. Sansa, meanwhile, has become a different kind of fighter, going from dreams of being a princess to overcoming years of abuse and ultimately emulating her own strong mother, Lady Catelyn Stark.
Valar Morghulis: All men must die
Yet Cersei is not that “old” – and potentially still has decades ahead of her to sit on the Iron Throne. If there’s one lesson that can be learned from Lady Olenna of House Tyrell – the wise older woman who tells Daenerys she has survived many powerful men – it is that even when women are no longer young and the focus of attention, they still have some influence to wield. Cersei may have lost her first three children – and the control she had in using them as pawns to her game – but her new pregnancy could very well serve to change that once more.
Ultimately lineages are the most important factor in winning the game of thrones – and it could very well be that Cersei’s new child grows to fight a ruling Daenerys, who, as of episode seven, had not yet named an heir to her throne.
As the battle narrows to the two – or three, if you count Sansa – queens, it has never been more clear that the young female combatants are now far more relevant than the adult male leaders – most of whom have been killed off. As children these women signalled change in dynastic struggles – but now they are grown up, they are heralding a second echelon of much wiser, perhaps untainted rulers: theirs is the future of Westeros.
Raluca Radulescu has nothing to disclose.
Independent music labels are creating their own streaming services to give artists a fair deal
Author: Steffan Thomas, Lecturer in Film and Media, Bangor University
Music streaming services are hard to beat. With millions of users – Spotify alone had 60m by July 2017, and is forecast to add another 10m by the end of the year – paying to access a catalogue of more than 30m songs, any initial concerns seem to have fallen by the wayside.
But while consumers enjoy streaming, tension is still bubbling away for the artists whose music is being used. There is a legitimacy associated with having music listed on major digital platforms, and a general acknowledgement that without being online you are not a successful business operation or artist.
Even the biggest stars are struggling to deny the power of Spotify, Apple Music and the like. Less than three years after pop princess Taylor Swift announced she would be removing her music from Spotify, the best-selling artist is back online, as it were. Swift’s initial decision came amid concerns that music streaming services were not paying artists enough for using their work – a view backed up by others including Radiohead’s Thom Yorke.
But while Yorke and Swift can survive without the power of streaming, independent production companies with niche audiences may not be able to.
Though the music industry is starting to get used to streaming – streamed tracks count towards chart ratings, and around 100,000 tracks are added every month to Spotify’s distribution list – it is still proving difficult for independent music companies to compete for exposure on these platforms.
Coping with diminishing sales of CDs and other physical copies of music, independent labels are already in a tough place. They are also unable to negotiate with large digital aggregators such as Spotify or Deezer for more favourable rates, and are forced to accept the terms given: they lack the expertise, but above all the catalogue size, needed for bargaining power. Major record labels, backed by industry organisations, on the other hand, can and have successfully negotiated more favourable terms for their artists based on the share of the catalogue that they represent.
There’s also been a shift in industry approach that some independent labels may find difficult to follow. These days, major labels are focused less on the artists themselves and more on which music will do best on new platforms. This undermines the ethos of many culturally rich independent labels who work hard to safeguard niche areas of their market. For them, it is about building up different genres, not simply releasing songs that will generate the most money.
So if niche labels can’t get a strong footing on large services, what can they do?
Where once there were free sites such as SoundCloud, which gave emerging and niche musicians a place to share their music, indie labels are now developing their own streaming services to make sure their artists get the best exposure – and the best deal.
Wales in particular is leading the way for the minority-language independent music scene. Streaming service Apton, launched in March 2016, provides a curated service to its music fans. It operates at a competitive price point, with a more selective catalogue representing several Welsh labels. More importantly, it returns a much fairer price to its recording artists than Spotify’s reported £0.00429 per stream.
By using a specialist, curated and targeted music service – such as Apton, or similar services The Overflow and PrimePhonic – consumers are better able to find the music they are looking for. Listeners are also more likely to value the service, as they can access and experience a greater percentage of a label’s catalogue, or remain within a niche genre, compared with mainstream mass-market streaming services, where recommendations are generated via popular playlists. Users of these streaming sites and apps also value the knowledge that the money they spend is being used to support the artists they follow.
Though they are certainly doing well as is, streaming services at all levels need more work to become the default for music listening. In addition, it is vital that music publishers start using streaming as a gateway for consumers to engage with the music they want to hear, rather than the music publishers want to sell. If the latter strategy continues to be followed, it may have a devastating effect on budding artists.
Likewise, listeners need to feel that streaming offers a level of transparency, value and that there is a two-way relationship worthy of their time and attention – something the major players could certainly learn from the independents.
Steffan Thomas was previously affiliated with Sain Records. Apton is owned by Sain Records and was developed in response to research produced during his PhD. However, he has no ongoing role within the company and retains no commercial interest in the service.
Migrating birds use a magnetic map to travel long distances
Author: Richard Holland, Senior Lecturer in Animal Cognition, Bangor University
Birds have an impressive ability to navigate. They can fly long distances, to places that they may never have visited before, sometimes returning home after months away.
Though there has been a lot of research in this area, scientists are still trying to understand exactly how they manage to find their intended destinations.
Much of the research has focused on homing pigeons, which are famous for their ability to return to their lofts after long distance displacements. Evidence suggests that pigeons use a combination of olfactory cues to locate their position, and then the sun as a compass to head in the right direction.
We call this “map and compass navigation”, as it mirrors human orienteering strategies: we locate our position on a map, then use a compass to head in the right direction.
But pigeons navigate over relatively short distances, in the region of tens to hundreds of kilometres. Migratory birds, on the other hand, face a much bigger challenge. Every year, billions of small songbirds travel thousands of kilometres between their breeding areas in Europe and winter refuges in Africa.
This journey is one of the most dangerous things the birds will do, and if they cannot pinpoint the right habitat, they will not survive. We know from displacement experiments that these birds can also correct their path from places they have never been, sometimes from across continents, such as in a study on white-crowned sparrows in the US.
Over these vast distances, the cues that pigeons use may not work for migrating birds, and so scientists think they may require a more global mapping mechanism.
Navigation and location
To locate our position, we humans calculate latitude and longitude – that is, our position on the north-south and east-west axes of the earth. Human navigators have been able to calculate latitude from the height of the sun at midday for millennia, but it took us much longer to work out how to calculate longitude.
Eventually it was solved by having a highly accurate clock that could be used to tell the difference between local solar time and Greenwich Mean Time. Initially, scientists thought birds might use a similar mechanism, but so far no evidence suggests that shifting a migratory bird’s body clock affects its navigation ability.
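The clock method works because the earth rotates 15 degrees every hour, so the gap between local solar noon and noon at Greenwich translates directly into longitude. A minimal sketch of that arithmetic (the example times are hypothetical):

```python
def longitude_from_noon(local_noon_utc_hours):
    """Longitude in degrees from the UTC time of local solar noon.

    The earth turns 15 degrees per hour, so every hour that local noon
    lags behind Greenwich noon (12:00 UTC) places the observer
    15 degrees further west; every hour it leads, further east.
    """
    return (12.0 - local_noon_utc_hours) * 15.0  # positive = east

# The sun peaks overhead at 10:00 UTC: noon arrived two hours early,
# so this observer is 30 degrees east of Greenwich.
print(longitude_from_noon(10.0))  # → 30.0
```

This is exactly the calculation a marine chronometer made possible; the point in the text is that clock-shifting experiments give no sign that birds do anything equivalent.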
There is another possibility, however, which has been proposed for some time, but never tested – until now.
The earth’s magnetic pole and the geographic north pole (true north) are not in the same place. This means that when using a magnetic compass, there is some angular difference between magnetic and true north, which varies depending on where you are on the earth. In Europe, this difference, known as declination, varies consistently along an east-west axis, and so could be a clue to longitude.
To find out whether declination is used by migrating birds, we tested the orientation of migratory reed warblers. Migrating birds that are kept in a cage will show increased activity, and they tend to hop in the direction they migrate. We used this technique to measure their orientation after we had changed the declination of the magnetic field by eight degrees.
First, the birds were tested at the Curonian Spit in Russia, but the changed declination – in combination with unchanged magnetic intensity – indicated a location near Aberdeen in Scotland. All other cues were available and still told them they were in Russia.
If the birds were simply responding to the change in declination – like a magnetic compass would – they would have only shifted eight degrees. But we saw a dramatic reorientation: instead of facing their normal south-west, they turned to face south-east.
This was not consistent with a magnetic compass response, but was consistent with the birds thinking they had been displaced to Scotland, and correcting to return to their normal path. That is to say they were hopping towards the start of their migratory path as if they were near Aberdeen, not in Russia.
This suggests that declination serves as a cue to longitudinal position in these birds.
There are still some questions that need answering, however. We still don’t know for certain how birds detect the magnetic field, for example. And while declination varies consistently in Europe and the US, if you go east, it does not give such a clear picture of where the bird is, with many values potentially indicating more than one location.
There is definitely still more to learn about how birds navigate, but our findings could open up a whole new world of research.
Richard Holland receives funding from the Leverhulme Trust and the BBSRC.
Welsh language media could hold the solution to Wales's democratic deficit
Author: Ifan Morgan Jones, Lecturer in Journalism, Bangor University
For the people of Wales, the country’s democratic deficit has become almost part and parcel of everyday life. While the country has spent its nearly 20 years of devolution building up many of the political institutions that underpin a modern nation, Wales does not yet have a well-developed public sphere. The result is that the Welsh public not only vote under a misapprehension of what the assembly and government are responsible for; there is also a lack of public scrutiny.
The problem has been mostly blamed on the lack of political coverage by English language media in Wales. Major outlets like the Trinity Mirror-owned Media Wales, BBC Wales and ITV Cymru have all claimed they are working to remedy the situation, yet still the deficit remains.
The Assembly itself is keen to get to grips with the issue too: a taskforce – of which I was a member – recently recommended direct state investment in journalists that would report on Welsh politics. This may sound like a step into the unknown, but in truth it would not be a radical departure. Three Welsh-language websites that discuss public affairs – Golwg360, Barn magazine’s website and O’r Pedwar Gwynt – already receive grants from the Welsh government, via the Welsh Books Council. Another Welsh-language news website, BBC Cymru Fyw, is paid for by the licence fee.
The two most prominent of these sites, BBC Cymru Fyw and Golwg360, attract a small but committed audience of more than 57,000 unique weekly visitors between them. Around half of their readers are under 40 – a younger audience than that of Welsh-language print publications, television and radio.
Part of the success of these sites comes from reaching an audience that wouldn’t have made a conscious decision to seek out news stories about Wales, or in Welsh, in the past. Quite simply, because the content appears in their social media feeds, they are more likely to click on it than they ever would be to go out and buy a Welsh-language newspaper or magazine, or tune in to a Welsh-language TV or radio channel.
Though this audience also visits English-language outlets for news, readers visit Welsh-language sites in search of content that is not available in English. My own analysis of Golwg360’s statistics, as well as interviews with journalists from all four news sites, suggests that the most popular subjects are the Welsh language, Welsh politics, education in Wales, the Welsh media, the arts in Wales and Welsh institutions.
Meanwhile, subjects that were already well covered by English-language news sites – such as British and international current affairs, or sport – tend to do poorly.
However, journalists working for Welsh sites other than the BBC’s Cymru Fyw did suggest that they did not feel they had sufficient resources to properly scrutinise Welsh institutions – so their ability to carry out in-depth, investigative journalism was severely limited. This problem was made worse by a demand for multimedia content that the journalists did not feel they had the time, resources or technological capability to deliver.
While the number of news platforms providing Welsh-language news is impressive, there may still be a lack of plurality. BBC Cymru Fyw and Golwg360 cover many of the same topics, for example. And the investigative journalism conducted by the numerous Welsh language print magazines does not always find an audience because it isn’t publicised online.
None of the journalists I interviewed felt that their dependence on the Welsh government or the licence fee for funding limited what they could report. In fact, some felt that the commercial press was more likely to restrict what it covered because of commercial interests.
The funding of Welsh language journalism by the Welsh government has clearly been a success. It has created a lively public sphere of avid readers who take a great interest in news about the Assembly itself as well as other Welsh political institutions.
Ideally, funding English-language journalism in such a way would be unnecessary – and the commercial media in Wales would turn a corner and strengthen over the next few years. However, if it continues to weaken as it has over the past 20 years, the future of devolution could depend on a radical solution.
Ifan Morgan Jones does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Forest conservation approaches must recognise the rights of local people
Authors: Sarobidy Rakotonarivo, Postdoctoral Research Fellow, University of Stirling; Neal Hockley, Research Lecturer in Economics & Policy, Bangor University
Until the 1980s, biodiversity conservation in the tropics focused on the “fines and fences” approach: creating protected areas from which local people were forcibly excluded. More recently, conservationists have embraced the notion of “win-win”: a dream world where people and nature thrive side by side.
But over and over, we have seen these illusions shattered and the need to navigate complicated trade-offs appears unavoidable.
To this day, protected areas are being established coercively. They exclude local communities without acknowledging their customary rights. Sadly, most conservation approaches are characterised by a model of “let’s conserve first, and then compensate later if we can find the funding”.
A new conservation model, Reducing Emissions from Deforestation and forest Degradation (REDD+) is an example of this. Finalised at the Paris climate conference in 2015, it seemed to offer something for everyone: supplying global ecosystem services – such as capturing and storing carbon dioxide and biodiversity conservation – while improving the lives of local communities.
Unfortunately, REDD+ is often built on protected area regimes that exclude local people. For example in Kenya, REDD+ led to the forceful eviction of forest dependent people and exacerbated inequality in access to land. The approach is underpinned by laws (often a legacy of the colonial era) that fail to recognise local people’s traditional claims to the forest. In doing so, REDD+ fails to provide compensation to the people it most affects and risks perpetuating the illusion of win-win solutions in conservation.
REDD+ is just one way in which forest conservation can disadvantage local people. In our research we set out to estimate the costs that local people will incur as a result of a REDD+ pilot project in Eastern Madagascar: the Corridor Ankeniheny-Zahamena.
Our aim was to see whether we could robustly estimate these costs in advance, so that adequate compensation could be provided using the funds generated by REDD+. Our research found that costs were very significant, but also hard to estimate in advance. Instead, we suggest that a more appropriate approach might be to recognise local people’s customary tenure.
Social costs of protected areas
Madagascar, considered one of the top global biodiversity hotspots, has recently tripled its protected area network from 1.7 million hectares to 6 million hectares. This covers 10% of the country’s total land area.
Although the state has claimed ownership of these lands since colonial times, they are often the customary lands of local communities whose livelihoods are deeply entwined with forest use. The clearance of forests for cultivation has traditionally provided access to fertile soils for millions of small farmers in the tropics. Conservation restrictions obviously affect them negatively.
Conservationists need to assess the costs of conservation before they start. This could help to design adequate compensation schemes and alternative policy options.
We set out to estimate the local welfare costs of conservation in the eastern rainforests of Madagascar using innovative multi-disciplinary methods which included qualitative as well as quantitative data. We asked local people to trade off access to forests for swidden agriculture (land cleared for cultivation by slashing and burning vegetation) with compensation schemes such as cash payments or support for improved rice farming.
We selected households that differed in their past experience of forest protection from two sites in the eastern rainforests of Madagascar.
We found that households have different views about the social costs of conservation.
When households had more experience of conservation restrictions, neither large cash payments nor support for improved rice farming was seen as sufficient compensation.
Less experienced households, on the other hand, had strong aspirations to secure forest tenure. Competition for new forest lands is becoming increasingly fierce and government protection, despite undermining traditional tenure systems, is weakly enforced. They therefore believed that legal forest tenure would serve them better, since it would enable them to establish claims over forest lands.
Unfortunately, knowing what would constitute “fair” compensation is extremely complex.
Firstly, local people have very different appraisals of the social costs of conservation. That makes it difficult to estimate accurately the potential negative costs of an intervention.
It’s also hard to evaluate how cash or agricultural projects will stimulate development. This makes it challenging to estimate how much, or what type of, compensation should be given.
These challenges are compounded by the high transaction costs of identifying those eligible as well as the lack of political power of communities to demand compensations.
Conservation approaches, particularly fair compensation for restrictions that are imposed coercively, need a major rethink.
One solution could be to formally recognise local people’s claims to the forest and then negotiate renewable conservation agreements with them. This is an approach already used successfully in many Western countries. In the US for example, conservation organisations negotiate “easements” with landowners, to protect wildlife. Agreements like this ensure that local people’s participation is genuinely voluntary and that compensation payments are sufficient.
Our research shows that there’s a strong demand from local people for secure local forest tenure. There’s also evidence that granting it may better protect forest resources: without customary tenure, local people are likely to clear forests faster than they would if they had secure rights.
We therefore argue that securing local tenure may be an essential part of social safeguards for conservation models like REDD+. It could also have the added benefit of helping to reduce poverty.
The social costs of forest conservation have been generally under-appreciated and advocacy for nature conservation reveals a lack of awareness of the high price that local people have to pay. As local forest dwellers have the greatest impact on resources and also the most to lose from non-sustainable uses of these resources, a radical change in current practices is needed.
Sarobidy Rakotonarivo received funding from the European Commission through the forest-for-nature-and-society (fonaso.eu) joint doctoral programme, and the Ecosystem Services for Poverty Alleviation (ESPA) programme (p4ges project: NE/K010220/1) funded by the Department for International Development (DFID), the Economic and Social Research Council (ESRC) and the Natural Environment Research Council (NERC).
Neal Hockley received funding for this work from the Ecosystem Services for Poverty Alleviation program (ESPA), funded by the UK Department for International Development, the Natural Environment Research Council and the Economic and Social Research Council.
Want to develop 'grit'? Take up surfing
Author: Rhi Willmot, PhD Researcher in Behavioural and Positive Psychology, Bangor University
My friend, Joe Weghofer, is a keen surfer, so when he was told he’d never walk again, following a 20ft spine-shattering fall, it was just about the worst news he could have received. Yet, a month later, Joe managed to stand. A further month, and he was walking. Several years on, he is back in the water, a board beneath his feet. Joe has what people in the field of positive psychology call “grit”, and I believe surfing helped him develop this trait.
Grit describes the ability to persevere with long-term goals, sustaining interest and energy over months or years. For Joe, this meant struggling through arduous physiotherapy exercises and remaining engaged and hopeful throughout his recovery.
Research suggests that gritty people are more likely to succeed in a range of challenging situations. Grittier high school students are more likely to graduate. Grittier novice teachers are more likely to remain in the profession and gritty military cadets are more likely to make it through intense mental and physical training. The secret to this success is found in the ability to keep going when things get tough. Gritty people don’t give up and they don’t get bored.
Research also suggests that grit can be learned. Certain conditions can foster grit, allowing grit developed in one domain to transfer to other, more challenging, situations. Surfing is a good example of how grit can be gently cultivated, strengthened and then honed. So although getting back in the water itself was important to Joe, his previous surfing experience may well have developed his ability to persevere long before he became injured. Here’s how:
Gritty people have a strong appreciation of the connection between hard work and reward. In contrast to simply running onto a hockey pitch, or diving into a pool, surfing is unique in that you have to battle through the white water at the shoreline before you can even begin to enjoy the feeling of sliding down a glassy, green wave. This is difficult, but the adrenaline rush of riding a wave is worth the cost of paddling out.
The theory of learned industriousness suggests that pairing effort and reward doesn’t just reinforce behaviour but also makes the very sensation of effort rewarding in itself. Repeated cycles of paddling out and surfing in are particularly effective at developing an association between intense effort and potent reward. This is especially relevant given that grit is described as a combination of effort and enjoyment. Gritty people don’t just slave away, they eagerly chase difficult goals in a ferocious pursuit of success.
Surfers’ passion for their sport is well known – it may even be described as an addiction. One of the properties that makes surfing so addictive is its unpredictability.
The ocean is a constantly changing environment, making it difficult to know exactly when and where the next wave is about to break. This means watery reinforcement is delivered on something called a variable-interval schedule; any number of quality waves might arrive at any point in a given time frame. Importantly, we receive a stronger release of the motivating neurotransmitter dopamine when a reward is unexpected. So when a surfer is surprised by the next perfect wave, dopamine-sensitive pleasure centres in the brain become all the more stimulated.
Behaviour that is trained under a variable-interval schedule is much more likely to be maintained than behaviour that is rewarded more consistently, making surfers better able to persevere when the waves take a long time to materialise.
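As a toy illustration of a variable-interval schedule (my own sketch, not from the article), wave arrivals can be modelled as random, memoryless events with exponentially distributed gaps, so a reward may turn up at any moment and there is never a safe point at which to stop watching the water:

```python
import random

def wave_arrivals(mean_gap_min=10.0, session_min=120.0, seed=1):
    """Simulate surfable-wave arrival times (in minutes) over a session,
    with exponentially distributed gaps: a variable-interval schedule."""
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    while True:
        t += rng.expovariate(1.0 / mean_gap_min)  # mean gap of 10 minutes
        if t > session_min:
            return arrivals
        arrivals.append(round(t, 1))

waves = wave_arrivals()
gaps = [b - a for a, b in zip([0.0] + waves, waves)]
print(f"{len(waves)} waves; shortest gap {min(gaps):.1f} min, "
      f"longest {max(gaps):.1f} min")
```

Running this shows gaps ranging from well under a minute to far beyond the 10-minute average, which is exactly the unpredictability that makes the behaviour so persistent.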
The final grit-honing element of surfing is its ability to provide a sense of purpose. Feeling purposeful – a state psychologists describe as a belief that life is meaningful and worthwhile – involves doing things that take us closer to our important goals. It usually means acting in line with our values and being part of something bigger than ourselves. This could refer to religious practice, connecting to nature or simply helping other people.
Research suggests that as levels of grit increase, so does a sense of purpose. But this doesn’t mean that gritty people are saints – just that they have an awareness of how their activities connect to a cause beyond themselves, as well as their own deeply held values.
The physical and mental challenge offered by surfing provides a sense of personal fulfilment. It’s always possible to paddle faster, ride for longer or try the next manoeuvre, but spending time waiting for the next wave also provides a valuable opportunity to reflect.
The ocean is a powerful beast. Serenity can quickly be replaced with chaos when an indomitable set of waves arrives: five-foot-high walls of water, stacked one after the other. Witnessing the power of nature in this way can certainly deliver a sense of perspective, helping you to feel connected to something meaningful and awe-inspiring.
Of course, surfing isn’t the only way to build grit. The important lesson here is that developing our passion and identifying our purpose can help us persevere with the activities we love. This provides a valuable reservoir of strength, to be used when we need it the most. And while coming back from such a serious injury requires more than just grit, Joe’s persistent effort and unwillingness to give in have undoubtedly helped him to once again enjoy the sport that made him who he is.
Rhi Willmot does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Artists and architects think differently to everyone else – you only have to hear them talk
Author: Thora Tenbrink, Reader in Cognitive Linguistics, Bangor University
How often have you thought that somebody talks just like an accountant, or a lawyer, or a teacher? In the case of artists, this goes a long way back. Artists have long been seen as unusual – people with a different way of perceiving reality. Famously, the French architect Le Corbusier argued in 1946 that painters, sculptors and architects are equipped with a “feeling of space” in a very fundamental sense.
Artists have to think about reality in different ways to other people every day in their jobs. Painters have to create an imaginary 3D image on a 2D plane, performing a certain magic. Sculptors turn a block of marble into something almost living. Architects can design buildings that would seem impossible.
Think of Edgar Mueller’s famous street art. Or Michelangelo’s Pietà. Or Frank Lloyd Wright’s Fallingwater, which seems to defy physics. All of these people are (or were) experts in rearranging the spatial relationships in their environment, each in their own way. This is a necessary skill for anyone who takes up these crafts as a profession. How could this not affect the ways in which they think – and talk – about space?
Our recent study, a collaboration between UCL and Bangor University, set out to test this. Do architects, painters and sculptors conceive of spaces differently from other people, and from each other? The answer is: yes, they do – in a range of quite subtle ways.
Painters, sculptors and architects (all “spatial” professionals with at least eight years of experience) and a group of people in unrelated (“non-spatial”) professions took part in the study. There were 16 people in each professional group, with similar age range and equal gender distribution. They were shown a Google Street View image, a painting of St Peter’s Basilica in the Vatican and a computer-generated surreal scene.
For each picture, they were given a few tasks that made them think about the spatial scene in certain ways: they were asked to describe the environment, explain how they would explore the space shown and suggest changes to it in the image. This picture-based task was chosen because of its simplicity – it doesn’t take an expert to describe a picture or to imagine exploring or changing it.
From the answers, we categorised elements of the responses for both qualitative and quantitative analysis, using a new technique called Cognitive Discourse Analysis. The aim was to highlight aspects of thought that underlie linguistic choices, beyond what speakers are consciously aware of. We made a short film about the research, which you can watch below.
Our analysis revealed consistent patterns in the language people used to talk about the pictures. Painters, sculptors and architects all gave more elaborate, detailed descriptions than the others.
Painters were more likely to describe the depicted space as a 2D image, saying things like: “It’s obvious the image wants you to follow the boat off onto the horizon.” They tended to shift between describing the scene as a 3D space and as a 2D image. By contrast, architects were more likely to describe the barriers and boundaries of the space – as in: “There are voids within walls which become spaces in their own right.” Sculptors’ responses fell between the two: they were broadly like architects, except on one measure – in their bounded descriptions of space, they appeared more like painters.
Painters and architects also differed in how they described the furthest point of the space, as painters called it the “back” and architects called it the “end”. The “non-spatial” group rarely used either one of these terms – instead they referred to the same location by using other common spatial terms such as “centre” or “bottom” or “there”. All of this had nothing to do with expert language or register – obviously people can talk in detail about their profession. But our study reflected the way they think about spatial relationships in a task that did not require their expertise.
The “non-spatial” group did not experience any problems with the task – but their language seemed less systematic and less rich than that of the three spatial professional groups.
Thinking and talking like a professional
Our career may well change the way we think, in somewhat unexpected ways. In the late 1930s, American linguist Benjamin Lee Whorf suggested that the language we speak affects the way we think – and this triggered extensive research into how culture changes cognition. Our study goes a step further – it shows that even within the same culture, people of different professions differ in how they appreciate the world.
The findings also raise the possibility that people who are already inclined to see the world as a 2D image, or who focus on the borders of a space, may be more inclined to pursue painting or architecture. This also makes sense – perhaps we develop our thinking in a particular way, for whatever reasons, and this paves our way towards a particular profession. Perhaps architects, painters and sculptors already talked in their own fashion about spatial relationships before they started their careers.
This remains to be looked at in detail. But it’s clear from our study that artists and architects have a heightened awareness of their surroundings which is reflected in the way they talk about spatial environments. So next time you are at dinner with an architect, painter, or sculptor, show them a photograph of a landscape and get them to describe it – and see if you can spot the telltale signs of their profession slipping out.
Thora Tenbrink's research was carried out with Claudia Cialone and Hugo Spiers.
How we're using ancient DNA to solve the mystery of the missing last great auk skins
Author: Jessica Emma Thomas, PhD Researcher, Bangor University
On a small island off the coast of Iceland, 173 years ago, a sequence of tragic events took place that would lead to the loss of an iconic bird: the great auk.
The great auk, Pinguinus impennis, was a large, black and white bird found in huge numbers across the North Atlantic Ocean. It was often mistaken for a member of the penguin family, but its closest living relative is actually the razorbill, and it is also related to puffins, guillemots and murres.
Being flightless, the great auk was particularly vulnerable to hunting. Humans killed the birds in their thousands for meat, oil and feathers. By the start of the 19th century, the north-west Atlantic populations had been decimated, and the last few remaining breeding birds were to be found on the islands off the south-west coast of Iceland. But these faced another threat: due to their scarcity, the great auk had become a desirable item for both private and institutional collections.
The fateful voyage of 1844
Between 1830 and 1841 several trips were taken to Iceland’s Eldey Island, to catch, kill, and sell the birds for exhibitions. Following a period of no reported captures, great auk dealer Carl Siemsen commissioned an expedition to Eldey to search for any remaining birds.
Between June 2 and 5 1844, 14 men set sail in an eight-oared boat for the island. Three braved the dangerous landing and spotted two great auks among the smaller birds that also bred there. A chase began, but the birds ran at a slow pace, their small wings extended, uttering no call of alarm. They were caught with relative ease and killed; their egg, broken in the hunt, was discarded.
But the birds – a male and a female – were never to reach Siemsen. The expedition leader sold them to a man named Christian Hansen, who then sold them on to Herr Möller, an apothecary in Reykjavik. Möller skinned the birds and sent them, and their preserved body parts, to Denmark.
The internal organs of these two birds now reside in the Natural History Museum of Denmark. The skins, however, were lost, and – despite considerable effort by numerous scholars – their location has remained unknown.
In 1999, great auk expert Errol Fuller proposed a list of candidate specimens, the origins of which were not known, which he believed could be from the last pair of great auks. But how to find which of these were the true skins? For this we turned to the field of ancient DNA (aDNA).
In the last 30 years, aDNA technology has progressed greatly, and has been used to address a wide range of ecological and evolutionary questions, providing insight into countless species’ pasts, including our own. Museum specimens play a key role in aDNA research and have been used to resolve several cases of unidentified or misidentified specimens – for example Greenlandic Norse fur, rare kiwi specimens, Auckland Island shags, and mislabelled penguin samples.
We took things a step further, using aDNA techniques and a detective-like approach to try and resolve the mystery of what happened to the skins of the last two great auks.
We sampled the organs from the last birds, along with candidate specimens from Brussels, Belgium; Oldenburg and Kiel, in Germany; and Los Angeles. We then extracted and sequenced the mitochondrial genomes from each, and compared the sequences from the candidate skins to those from the organs of the last pair.
The results showed that the skin held in the museum in Brussels was a perfect match for the oesophagus from the male bird. Unfortunately, there was no match between the other candidate skins and the female’s organs.
The specimens from Brussels and Los Angeles were thought to be the most likely candidates due to their history: both birds were in the hands of a well-known great auk dealer, Israel of Copenhagen, in 1845. As the bird in Brussels was a match, we thought it likely that the one in Los Angeles would be a match for the female’s organs – so it was surprising when it wasn’t. However, our research led us to speculate that a mix-up which occurred following the death of Captain Vivian Hewitt in 1965 – who owned four birds, now held in Cardiff, Birmingham, Los Angeles and Cincinnati – was not resolved as once thought.
The identities of the birds now in Birmingham and Cardiff are known, after photographs were used to identify them – but those in Los Angeles and Cincinnati have been harder to determine. It was thought that their identities could be established from annotated photographs taken in 1871, but we speculate that the birds were not correctly identified, and that the one in Cincinnati may be the original bird from Israel of Copenhagen. If so, this could explain why the Los Angeles bird fails to match either of the last great auk organs held in Copenhagen.
We now have permission to test the great auk specimen in the Cincinnati Museum of Natural History and Science, and hopefully solve this final piece of a centuries-old puzzle. There is no guarantee that this bird will be a match either, but if it is, we will finally know what happened to the last two specimens of the extinct great auk.
Jessica Thomas is a double-degree PhD student enrolled at Bangor University and the University of Copenhagen. She receives funding from NERC PhD Studentship (NE/L501694/1), the Genetics Society-Heredity Fieldwork Grant, and European Society for Evolutionary Biology–Godfrey Hewitt Mobility Award.