
Tech

£1m support for automated brain scan detector

The tool would enable the immediate automated triage of abnormalities matching that of a consultant neuroradiologist, say researchers


A deep learning tool that can automatically identify abnormalities on a brain MRI scan has been backed with £1 million for its further development.

The tool would enable the immediate automated triage of abnormalities, matching the accuracy of a consultant neuroradiologist, and will help to address the fact that 330,000 patients in the UK currently wait more than 30 days for their MRI reports.

That number is forecast to rise further due to growing demand for MRI and the limited availability of radiologists – a situation in the UK which is mirrored globally.

Researchers from King’s College London say that, in testing, their algorithm can detect more than 90 distinct abnormalities while identifying the scans that are normal.

Their work has now been given support from the Medical Research Council Development Pathway Funding Scheme (MRC DPFS), with the award of £1 million to progress the technology.

The work follows earlier findings that brain MRI image labelling can be automated: the researchers found that more than 100,000 exams could be labelled in less than 30 minutes.

This was an important step in overcoming the bottleneck of labelling scans at scale, which in turn allowed the researchers to build a hugely promising brain MRI abnormality detection triage tool.

Without a large number of labelled scans, a deep learning tool could not have learnt the categories of normal or abnormal with such accuracy. 
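The triage idea itself is simple to picture. The following is a minimal, hypothetical sketch – not the King’s College London system – of how a reporting queue might be reordered once a classifier has assigned each scan a probability of being abnormal; all names and the threshold are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Scan:
    scan_id: str
    p_abnormal: float  # classifier's predicted probability of abnormality

def triage(scans, threshold=0.5):
    """Split scans into an urgent list (likely abnormal, reported first)
    and a routine list, each ordered by the model's confidence."""
    urgent = sorted((s for s in scans if s.p_abnormal >= threshold),
                    key=lambda s: s.p_abnormal, reverse=True)
    routine = sorted((s for s in scans if s.p_abnormal < threshold),
                     key=lambda s: s.p_abnormal, reverse=True)
    return urgent, routine

queue = [Scan("A", 0.92), Scan("B", 0.08), Scan("C", 0.61)]
urgent, routine = triage(queue)
print([s.scan_id for s in urgent])   # likely-abnormal scans first: ['A', 'C']
```

In practice the ordering would feed a radiologist worklist rather than replace the read itself; the model only decides which scans a consultant sees soonest.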

Dr Tom Booth, senior lecturer in neuroimaging at the School of Biomedical Engineering & Imaging Sciences and Consultant Diagnostic and Interventional Neuroradiologist at King’s College Hospital, and Professor Sebastien Ourselin, head of School of Biomedical Engineering & Imaging Sciences, lead the project, which has been building up since its first Health Research Authority/Research Ethics Committee approval in 2018.

Dr Booth said simulations show the tool would effectively triage outpatient scans at two London hospitals and reduce the time to report abnormal scans.

“Specifically, abnormal scans can be reported in five days rather than nine days in one large hospital and 14 days rather than 28 days in another large hospital,” he said. 

The researchers will now look to improve performance accuracy and ensure generalisability, where the algorithm works well with new data in new hospitals, across the UK.

To achieve generalisability, the researchers are ingesting data from across the UK to refine the model and ensure it works with high accuracy in different hospitals, with different scanner manufacturers and imaging sequences.

The research is a strong first step towards automating the triage process.

“Immediate triage of a brain MRI into normal or abnormal allows early intervention to improve short and long-term clinical outcomes,” Dr Booth said.

“Detecting illnesses earlier in the patient pathway would result in lower costs for the healthcare system given that less specialised medicine, and fewer hours of treatment, are needed for patient recovery.”

MND

New AI model helps discover causes of MND

Using the RefMap tool, the number of known risk genes for MND has risen from around 15 to 690


A new machine learning model has been developed to discover genetic risk factors in diseases such as Motor Neurone Disease (MND).

The tool, named RefMap, has already been used by the research team to discover 690 risk genes for MND, the vast majority of which are new discoveries.

One of the genes highlighted as a new MND gene, called KANK1, has been shown by the team to produce neurotoxicity in human neurons very similar to that observed in the brains of patients. 

Although at an early stage, this discovery has been hailed as a potential target for the design of new drugs. It could also pave the way for new targeted therapeutics and genetic testing for MND.

Researchers from the University of Sheffield and the Stanford University School of Medicine have led the research. 

“This new tool will help us to understand and profile the genetic basis of MND,” said Dr Johnathan Cooper-Knock, from the University of Sheffield’s Neuroscience Institute.

“Using this model we have already seen a dramatic increase in the number of risk genes for MND, from approximately 15 to 690.

“Each new risk gene discovered is a potential target for the development of new treatments for MND and could also pave the way for genetic testing for families to work out their risk of disease.”

The 690 genes identified by RefMap led to a five-fold increase in discovered heritability, a measure which describes how much of the disease is due to a variation in genetic factors.

“RefMap identifies risk genes by integrating genetic and epigenetic data. It is a generic tool and we are applying it to more diseases in the lab,” said Dr Sai Zhang, instructor of genetics at Stanford University School of Medicine.

Dr Michael Snyder, professor and chair of the department of genetics at the Stanford School of Medicine and also the corresponding author of this work, added: “By doing machine learning for genome analysis, we are discovering more hidden genes for human complex diseases such as MND, which will eventually power personalised treatment and intervention.”


Brain injury

Can VR help with sight problems after brain injury?

The development of new immersive game-based technology could help with visual neglect, researchers believe


Research is underway to discover the role virtual reality (VR) could play in the rehabilitation of sight after traumatic brain injury (TBI).

TBI can have a significant impact on vision, causing impaired visual attention – also known as visual neglect – even when there is no injury to the eye.

Individuals with visual neglect lose the ability to explore the full extent of their surroundings and have difficulty with reading, locating personal belongings, finding their way to destinations, and many other daily activities.

Visual neglect is caused by disconnected neural networks and has been studied extensively in stroke but remains largely unexplored in other types of brain injury.

Now, Kessler Foundation is embarking on a two-year study, A Virtual Reality (VR) Exercise for Restoring Functional Vision after Head Trauma, to look into how technology can assist. 

The project uses immersive VR technology developed with the armed services and provided by Virtualware, an award-winning VR technology company based in Spain. 

The planned treatment is an intensive, game-like rehabilitation program that combines VR and eye-tracking technologies to implement an oculomotor exercise protocol based on smooth eye pursuit.

Dr Peii Chen, senior research scientist in the Center for Stroke Rehabilitation Research at Kessler Foundation, said: “Our study will fill this knowledge gap by exploring visual neglect in TBI and developing a new treatment modality.”

Smooth eye pursuit exercise is an evidence-based treatment that improves patients’ ability to move their eyes toward the neglected side of space and voluntarily pay attention to the entire workspace relevant to a given task.

This ability is fundamental to spatial explorations that are required in learning, reading, and way finding. 


Conventionally, smooth eye pursuit exercise for treating visual neglect requires intensive and close supervision from therapists. VR technology combined with eye tracking can reduce therapist burden. 
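For illustration, one metric an eye-tracking system of this kind might record is “pursuit gain” – the ratio of eye velocity to target velocity, where values near 1.0 indicate smooth tracking and lower values indicate the eye lagging behind. This is a hypothetical sketch under that assumption, not Kessler Foundation’s or Virtualware’s code:

```python
def velocities(positions, dt):
    """Finite-difference velocities from a list of sampled positions."""
    return [(b - a) / dt for a, b in zip(positions, positions[1:])]

def pursuit_gain(eye_pos, target_pos, dt=1/60):
    """Mean |eye velocity| divided by mean |target velocity|.
    ~1.0 means the eye tracks the target smoothly; <1.0 means it lags."""
    eye_v = velocities(eye_pos, dt)
    tgt_v = velocities(target_pos, dt)
    mean_abs = lambda vs: sum(abs(v) for v in vs) / len(vs)
    return mean_abs(eye_v) / mean_abs(tgt_v)

target = [i * 0.5 for i in range(60)]   # target moving at constant speed
eye = [p * 0.8 for p in target]         # eye covering 80% of the target's path
print(round(pursuit_gain(eye, target), 2))   # → 0.8
```

In a VR headset the same calculation would run on gaze samples from the built-in eye tracker, letting the software score each exercise without a therapist watching the patient’s eyes directly.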

Research participants will experience a VR session of smooth eye pursuit exercise and share their feedback. 

The study will reveal the feasibility and benefits of applying new technologies to rehabilitative treatment activities.

Research participants will also undergo functional and structural neuroimaging studies of the brain. 

The study outcomes will broaden the understanding of spatial processing and visual cognition as functions of brain connectivity and advance the development of treatments targeting head trauma-related visual dysfunction.

“Knowledge gained from this clinical study will advance patient care by identifying the neural basis of visual neglect due to TBI at rest and during smooth pursuit eye exercise,” said Dr Chen. 

“Reaching our goals will lead to improved visual health and quality of life for civilians, as well as active-duty military and veterans with trauma-related visual dysfunction.”

Dr Chen has been awarded a $376,109 grant from the US Department of Defense, US Army Medical Research & Development Command, Congressionally Directed Medical Research Programs (CDMRP), Vision Research Program.


Tech

Care provider to develop sector-leading VR training

Newcross Healthcare is sharing its in-house expertise for the benefit of the wider healthcare sector


A care provider is building on its experience of using virtual reality (VR) for in-house training to create the first programme of its kind for healthcare, which is set to be rolled out across the sector. 

Newcross Healthcare first began to adopt VR around three years ago, but over the past year has upskilled its in-house learning and development team to create its own bespoke content to deliver new and engaging staff training. 

The team has created a number of programmes and ‘virtual shifts’ to enable staff to learn more about the Newcross business and its continuing staff development to deliver the best possible client care. 

Now, Newcross plans to take its expertise in VR to a new level, by expanding into learning and training through the creation of an ‘extended reality’ programme. 

The pioneering new training – which will be a first for the healthcare sector – will enable the recreation of emergency situations, including life-saving first aid, to help staff develop their skills, confidence and ability to remain calm when confronted with such an event in real life.

The system – which Newcross hopes to deliver within the next six to nine months – will be used in-house initially, but will also be made available to clients and other care groups, using VR training to help raise standards and innovation throughout the health and social care sector.

“Our experience of creating these environments has enabled us to define how we want to use this in a learning environment,” says Mark Story, head of learning and development at Newcross.

“We want to use technology – it might be VR, immersive 360 videos, augmented reality – to allow people to experience stressful environments so they can get over the initial shock factor and be able to think clearly when it happens in real life.

“Our focus initially will be on those topics for learning that people might feel shocked by when they first come in or that, by their nature, are stressful things.

“If you’re training someone in basic life support or in seizures, for example, you can only really go so far. But by being able to recreate that environment, for if and when it happens to them, they will be more prepared. 

“It also has a role to play in keeping people’s skills fresh and up to date. With something like CPR, you hope that you never have to use it, but this could provide the opportunity to keep the skills at a simmer, as opposed to letting them go cold.

“That will be for internal healthcare staff – nurses and carers – and shortly after that it will be made available to external healthcare professionals. Our ambition is to offer this learning to anybody that wants to access it.

Having invested in the development of its own in-house capability, Newcross is now able to deliver something new for the sector, building on the training currently available and creating a cost-effective new option for the marketplace. 

“There is other training out there, but what we found is that it’s either immersive, so gives a bit of an immersion into the situation, or it’s something that has a simulation and assessment of a physical activity, or it’s something that is kind of broadly virtual reality. 

“There doesn’t seem to be anything in there that brings these three pieces together, which immerses you, assesses you and keeps you in that virtual reality environment.

“So we think we can bring those things together in a new way for healthcare.

“What we’re creating isn’t entirely new, and certainly exists in high-end learning activities, like training pilots, for example, or training surgeons. We know these sorts of learning interventions do work but we’re looking to create them for the mass market, to get thousands of people engaged as opposed to a small number. 

“We want it to be democratised, to be cheap enough for us to develop and offer out to anybody who needs it. That’s what we’re working towards.”
