App optimizes meditation length to improve attention and memory

Adam Gazzaley and UCSF colleagues have developed a focus-driven digital meditation program that improved attention and memory in healthy adults in a recent study.

MediTrain tailors meditation session length to each participant's ability and challenges users to gradually extend their sessions. Participants showed significant benefits within six weeks: on their first day, they focused on their breath for an average of 20 seconds; after 30 days of training, they sustained focus for an average of six minutes.

According to Gazzaley: “We took an ancient experiential treatment of focused meditation, reformulated it and delivered it through a digital technology, and improved attention span in millennials, an age group that is intimately familiar with the digital world, but also faces multiple challenges to sustained attention.”

At the end of each segment, participants were asked whether they had paid continuous attention for the allotted time. The app adapted accordingly, assigning slightly longer meditation periods to those who said yes and shorter ones to those who said no. The team believes this active user participation contributed to the app's usefulness.
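The study does not publish MediTrain's exact adaptation rule, but the behavior described above, lengthening after a reported success and shortening after a reported lapse, resembles a simple staircase procedure. A minimal sketch in Python, with the step sizes and bounds as illustrative assumptions rather than the app's actual parameters:

```python
# Hypothetical staircase adaptation of meditation segment length.
# Step sizes, bounds, and the asymmetric up/down ratio are illustrative
# assumptions, not values from the MediTrain study.

def next_segment_seconds(current: float, sustained_attention: bool) -> float:
    """Return the next segment length given the user's self-report."""
    if sustained_attention:
        new_length = current * 1.10   # lengthen slightly after success
    else:
        new_length = current * 0.80   # shorten more aggressively after a lapse
    return min(max(new_length, 10.0), 600.0)  # clamp to 10 s .. 10 min

# Example: a user who starts at 20 seconds and succeeds five times in a row.
length = 20.0
for _ in range(5):
    length = next_segment_seconds(length, sustained_attention=True)
print(f"segment length after 5 successes: {length:.1f} s")
```

An asymmetric rule of this kind, shrinking faster than it grows, is a common way to keep an adaptive task near the edge of a user's ability.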



Sensor glove identifies objects

In a Nature paper, MIT researchers described a sensor glove whose system identified objects, including a soda can, scissors, a tennis ball, a spoon, a pen, and a mug, 76 percent of the time.

The tactile sensing system could be used in combination with traditional computer vision and image-based datasets to give robots a more human-like understanding of object interaction. The dataset also captured cooperation between regions of the hand during interactions, which could be used to customize prosthetics.

Similar sensor-based gloves typically cost thousands of dollars and carry only about 50 sensors. The STAG glove costs approximately $10 to produce.
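The paper trains a convolutional network on the glove's pressure readings; the sketch below shows the general shape of that approach in PyTorch. The 32x32 input grid, layer sizes, and 10-way output are assumptions for illustration, not the published architecture.

```python
# Minimal PyTorch sketch: classify objects from tactile pressure maps.
# The 32x32 input grid, layer sizes, and 10-way output are illustrative
# assumptions; the published model and dataset differ in detail.
import torch
import torch.nn as nn

class TactileNet(nn.Module):
    def __init__(self, n_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

# One fake batch of pressure frames: (batch, channel, height, width).
frames = torch.rand(4, 1, 32, 32)
logits = TactileNet()(frames)
print(logits.argmax(dim=1))  # predicted object class per frame
```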




Study: Noninvasive BCI improves function in paraplegia

Miguel Nicolelis has developed a noninvasive system for lower-limb neurorehabilitation.

Study subjects wore an EEG headset that recorded brain activity and detected movement intention. Eight electrodes were attached to each leg to stimulate muscles involved in walking. After training, patients used their own brain activity to send electric impulses to their leg muscles, imposing a physiological gait. With a walker and a support harness, they learned to walk again and increased their sensorimotor skills. A wearable haptic display delivered tactile feedback to the forearms, providing continuous proprioceptive feedback about walking.
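The published decoding pipeline is more sophisticated than can be shown here; the sketch below only illustrates the closed-loop structure described above, where detected movement intention gates muscle stimulation. The mu-band desynchronization detector, thresholds, and channel count are all assumptions.

```python
# Illustrative closed-loop sketch: EEG movement intention gates leg-muscle
# stimulation. The band-power threshold detector stands in for whatever
# classifier the study actually used; all numbers are assumptions.
import numpy as np

FS = 250           # EEG sampling rate (Hz), assumed
MU_BAND = (8, 13)  # mu-rhythm band; desynchronization suggests intent

def mu_power(window: np.ndarray) -> float:
    """Mean mu-band power of one EEG window (samples x channels)."""
    spectrum = np.abs(np.fft.rfft(window, axis=0)) ** 2
    freqs = np.fft.rfftfreq(window.shape[0], d=1.0 / FS)
    band = (freqs >= MU_BAND[0]) & (freqs <= MU_BAND[1])
    return spectrum[band].mean()

def stimulation_command(window: np.ndarray, baseline: float) -> bool:
    """True -> trigger stimulation of the walking muscles for this step."""
    return mu_power(window) < 0.7 * baseline  # desynchronization vs. rest

rest = np.random.randn(FS, 8)     # 1 s of 8-channel resting EEG
baseline = mu_power(rest)
window = np.random.randn(FS, 8)   # 1 s of live EEG to classify
print("stimulate" if stimulation_command(window, baseline) else "hold")
```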

The system was tested on two patients with chronic paraplegia. Both were able to move with less dependency on walking assistance, and one displayed motor improvement. Cardiovascular capacity and muscle volume also improved.




Deep learning mammography model detects breast cancer up to five years in advance

MIT CSAIL professor Regina Barzilay and Harvard/MGH professor Constance Lehman have developed a deep learning model that can predict, from a mammogram, breast cancer up to five years in the future. From the mammograms and known outcomes of 90,000 MGH patients, the model learned the subtle breast tissue patterns that precede malignant tumors.

The goal is to individualize screening and prevention programs.

Barzilay said that “rather than taking a one-size-fits-all approach, we can personalize screening around a woman’s risk of developing cancer.  For example, a doctor might recommend that one group of women get a mammogram every other year, while another higher-risk group might get supplemental MRI screening.”

The algorithm accurately placed 31 percent of all cancer patients in its highest-risk category, compared to 18 percent for traditional models.

Lehman hopes to change screening strategies from age-based to risk-based: “This is because before we did not have accurate risk assessment tools that worked for individual women.”

Current risk assessment models, based on age, family history of breast and ovarian cancer, hormonal and reproductive factors, and breast density, are only weakly correlated with breast cancer. As a result, many organizations believe that risk-based screening is not possible.

Rather than manually identifying the patterns in a mammogram that drive future cancer, the algorithm deduced patterns directly from the data, detecting abnormalities too subtle for the human eye to see.
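The article does not release the model itself; as a sketch of the general approach, a convolutional network trained end to end on mammograms labeled with known five-year outcomes, assuming a standard ResNet-18 backbone in PyTorch:

```python
# Sketch of risk prediction from mammograms with a CNN backbone.
# The ResNet-18 backbone, image size, and training details are assumptions;
# the MIT/MGH model's architecture and data pipeline are more involved.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=None)
backbone.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2,
                           padding=3, bias=False)     # grayscale input
backbone.fc = nn.Linear(backbone.fc.in_features, 1)   # logit: cancer within 5 years

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)

# One fake training step: mammograms plus binary five-year outcome labels.
images = torch.rand(2, 1, 512, 512)
labels = torch.tensor([[0.0], [1.0]])
loss = criterion(backbone(images), labels)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.3f}")
```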

Lehman said that “since the 1960s radiologists have noticed that women have unique and widely variable patterns of breast tissue visible on the mammogram. These patterns can represent the influence of genetics, hormones, pregnancy, lactation, diet, weight loss, and weight gain. We can now leverage this detailed information to be more precise in our risk assessment at the individual level.”

Unlike prior models, the MIT/MGH model is equally accurate for white and black women. Black women have been shown to be 42 percent more likely to die from breast cancer, due to a wide range of factors that may include differences in detection and access to health care.

Barzilay believes the system could, in the future, determine from mammograms whether patients are at greater risk for cardiovascular disease or other cancers.



Atrial fibrillation-detecting ring

Eue-Keun Choi and Seoul National University colleagues have developed an atrial fibrillation-detecting ring with functionality similar to AliveCor and other watch-based monitors. The researchers claim performance comparable to medical-grade pulse oximeters.

In a study, Soonil Kwon and colleagues analyzed data from 119 patients with AF who underwent simultaneous ECG and photoplethysmography before and after direct-current cardioversion. A total of 27,569 photoplethysmography samples were analyzed by an algorithm built on a convolutional neural network, and rhythms were then interpreted with the wearable ring.

The convolutional neural network was 99.3% accurate in diagnosing AF and 95.9% accurate in diagnosing sinus rhythm. After filtering low-quality samples, the wearable device was 98.3% accurate for sinus rhythm and 100% for AF.
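The study's network architecture is not described in detail here; a minimal 1-D convolutional classifier over fixed-length PPG segments, with the segment length and layer sizes as assumptions, looks roughly like this:

```python
# Minimal 1-D CNN sketch for classifying PPG segments as AF vs. sinus rhythm.
# Segment length (30 s at 100 Hz) and layer sizes are assumptions, not the
# study's architecture.
import torch
import torch.nn as nn

class PPGNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, 2),  # logits: [sinus rhythm, AF]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

segments = torch.rand(8, 1, 3000)  # 8 segments of 30 s sampled at 100 Hz
print(PPGNet()(segments).argmax(dim=1))  # predicted rhythm per segment
```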

According to Choi: “Deep learning or [artificial intelligence] can overcome formerly important problems of [photoplethysmography]-based arrhythmia diagnosis. It not only improves diagnostic accuracy in great degrees, but also suggests a metric [of] how this diagnosis will be likely true without ECG validation. Combined with wearable technology, this will considerably boost the effectiveness of AF detection.”



AI detects depression in children’s voices

University of Vermont researchers have developed an algorithm that detects anxiety and depression in children’s voices with 80 percent accuracy, according to a recent study.

Standard diagnosis involves a 60-to-90-minute semi-structured interview involving a trained clinician and the child’s primary caregiver. AI could make diagnosis faster and more reliable.

The researchers used an adapted version of the Trier Social Stress Test, which is intended to induce feelings of stress and anxiety in the subject. Seventy-one children between the ages of three and eight were asked to improvise a three-minute story and were told that they would be judged on how interesting it was. The researcher acting as judge remained stern throughout the speech and gave only neutral or negative feedback, to create stress. After 90 seconds, and again with 30 seconds left, a buzzer would sound and the judge would announce how much time remained.

The children were also diagnosed using a structured clinical interview and parent questionnaire.

The algorithm analyzed statistical features of the audio recording of each child’s story and related them to the diagnosis, achieving 80 percent accuracy. The middle phase of the recordings, between the two buzzers, was the most predictive of a diagnosis.

Eight audio features were identified. Three stood out as highly indicative of internalizing disorders: a low-pitched voice, repeatable speech inflections and content, and a higher-pitched response to the surprising buzzer.
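The study's exact eight features and toolchain are not specified in the article; a sketch of extracting comparable acoustic features (mean pitch and pitch variability) with librosa, where the sampling rate, pitch range, and file path are illustrative assumptions:

```python
# Sketch: extract pitch statistics from a recorded story with librosa.
# The study's actual eight features and toolchain are not specified in the
# article; mean pitch and pitch variability here are illustrative stand-ins.
import librosa
import numpy as np

def pitch_features(path: str) -> dict:
    y, sr = librosa.load(path, sr=16000)
    f0, voiced, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
    )
    f0 = f0[voiced]  # keep voiced frames only
    return {
        "mean_pitch_hz": float(np.nanmean(f0)),
        "pitch_std_hz": float(np.nanstd(f0)),  # low variability was indicative
    }

# Usage (hypothetical file path):
# print(pitch_features("child_story_middle_segment.wav"))
```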



Personalized, gamified, inhibitory control training for weight loss

Evan Forman, Michael Wagner, and Drexel colleagues have developed Diet DASH, a brain-training game meant to inhibit sugar-eating impulses. A recent study using the game examined the impact of highly personalized and/or gamified inhibitory control training on weight loss, using repeated at-home training.

The trial randomized 109 overweight, sweet-eating participants, who attended a workshop on why sugar is bad for their health. The training was customized to focus on the sweets each participant enjoyed, and difficulty was adjusted according to how well they resisted. Participants played the game for a few minutes every day for six weeks, and then once a week for two weeks.

In the game, players moved quickly through a grocery store with the goal of putting healthy food in a cart while refraining from choosing sweets. Points were awarded for choosing healthy items. Half of the participants lost as much as 3.1 percent of their body weight over the eight-week study.
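The study's customization and difficulty rules are not given in detail; a minimal sketch of the kind of performance-driven difficulty adjustment described above, with all thresholds and multipliers as assumptions:

```python
# Hypothetical difficulty adjustment for the grocery-store training game.
# The resistance-rate thresholds and speed multipliers are illustrative
# assumptions, not the study's parameters.

def adjust_speed(current_speed: float, sweets_resisted: int,
                 sweets_shown: int) -> float:
    """Speed up the game when the player reliably resists sweets."""
    resist_rate = sweets_resisted / max(sweets_shown, 1)
    if resist_rate > 0.85:
        return current_speed * 1.15   # harder: less time to inhibit
    if resist_rate < 0.60:
        return current_speed * 0.90   # easier: more time to inhibit
    return current_speed

print(adjust_speed(1.0, sweets_resisted=18, sweets_shown=20))  # -> 1.15
```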


Study: Blood + spinal fluid test detects Alzheimer’s 8 years before symptoms

Klaus Gerwert at Ruhr-Universität Bochum has developed a blood + CSF test that he claims can detect Alzheimer’s disease 8 years before the onset of symptoms. The goal is early-stage therapy that achieves better results than current treatment protocols.

To reduce the false positive results of the initial study, the researchers first used a blood test to identify high-risk individuals. For participants shown to have Alzheimer’s in this first step, they then measured a dementia-specific biomarker, tau protein. This second analysis was carried out on cerebrospinal fluid drawn from the spinal canal, an invasive procedure that the team is working to eliminate in the next phase of research. If both biomarkers were positive, the presence of Alzheimer’s disease was deemed highly likely.

According to Gerwert: “Through the combination of both analyses, 87 of 100 Alzheimer’s patients were correctly identified in our study. And we reduced the number of false positive diagnoses in healthy subjects to 3 of 100.  Now, new clinical studies with test participants in very early stages of the disease can be launched. Recently, two major promising studies have failed, especially Crenezumab and Aducanumab – not least because it had probably already been too late by the time therapy was taken up. The new test opens up a new therapy window.”

Researcher Andreas Nabers added: “Once amyloid plaques have formed, it seems that the disease can no longer be treated. We are now conducting in-depth research to detect the second biomarker, namely tau protein, in the blood, in order to supply a solely blood-based test in future.”
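Gerwert’s figures correspond to a sensitivity of 87% and a specificity of 97%. What a positive result means for an individual still depends heavily on disease prevalence in the screened population, as a quick Bayes calculation shows; the 10% prevalence below is an illustrative assumption:

```python
# Positive predictive value of the combined test from the quoted figures:
# sensitivity 87/100, specificity 97/100. Prevalence is an assumption.
sensitivity = 0.87
specificity = 0.97
prevalence = 0.10  # assumed share of the screened population with the disease

true_pos = sensitivity * prevalence
false_pos = (1 - specificity) * (1 - prevalence)
ppv = true_pos / (true_pos + false_pos)
print(f"PPV at {prevalence:.0%} prevalence: {ppv:.1%}")  # ~76.3%
```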



Embryo stem cells created from skin cells

Yossi Buganim of The Hebrew University of Jerusalem has discovered a set of genes that can transform murine skin cells into all three of the cell types that make up the early embryo: the embryo itself, the placenta, and extra-embryonic tissues such as the umbilical cord.

Buganim and colleagues identified a combination of five genes that, when inserted into skin cells, reprogram them into the three early embryonic cell types: iPS cells, which create fetuses; placental stem cells; and stem cells that develop into other extra-embryonic tissues. The transformation takes about one month.

To uncover the molecular mechanisms activated during the formation of these cell types, the researchers analyzed changes to genome structure and function inside the cells when the five genes are introduced. They discovered that during the first stage, skin cells lose their cellular identity and then slowly acquire the new identity of one of the three early embryonic cell types, and that this process is governed by the levels of two of the five genes.

This discovery may enable the creation of entire human embryos from skin cells, without the need for sperm or eggs. It could also advance the modeling of embryonic defects and the understanding of placental dysfunction, and may eventually address fertility problems by enabling the creation of human embryos in a petri dish.



Thought generated speech

Edward Chang and UCSF colleagues are developing technology to translate signals from the brain into synthetic speech. The research team believes the resulting sounds could be nearly as sharp and natural as a real person’s voice. Sounds made by the human lips, jaw, tongue, and larynx would be simulated.
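The simulation of lip, jaw, tongue, and larynx movements suggests a two-stage decoder: brain activity to vocal-tract kinematics, then kinematics to acoustics. A minimal sketch of that structure, in which every dimension, layer choice, and the use of LSTMs are assumptions rather than the UCSF system’s actual design:

```python
# Two-stage decoding sketch: brain activity -> articulator movements -> audio
# features. Dimensions, layer types, and the LSTM choice are assumptions;
# the actual system is trained on intracranial recordings.
import torch
import torch.nn as nn

class ArticulatoryDecoder(nn.Module):
    """Stage 1: neural features per time step -> vocal-tract kinematics."""
    def __init__(self, n_electrodes: int = 256, n_articulators: int = 33):
        super().__init__()
        self.lstm = nn.LSTM(n_electrodes, 128, batch_first=True,
                            bidirectional=True)
        self.out = nn.Linear(256, n_articulators)

    def forward(self, x):
        h, _ = self.lstm(x)
        return self.out(h)

class AcousticDecoder(nn.Module):
    """Stage 2: kinematics -> acoustic features for a vocoder."""
    def __init__(self, n_articulators: int = 33, n_acoustic: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(n_articulators, 128, batch_first=True,
                            bidirectional=True)
        self.out = nn.Linear(256, n_acoustic)

    def forward(self, x):
        h, _ = self.lstm(x)
        return self.out(h)

neural = torch.rand(1, 200, 256)             # 200 time steps of neural features
kinematics = ArticulatoryDecoder()(neural)   # (1, 200, 33)
acoustics = AcousticDecoder()(kinematics)    # (1, 200, 32), vocoder input
print(acoustics.shape)
```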

The goal is a communication method for people who cannot speak because of disease or paralysis.

According to Chang: “For the first time, this study demonstrates that we can generate entire spoken sentences based on an individual’s brain activity.”

Berkeley’s Bob Knight has developed related technology that uses high-frequency band (HFB) activity to decode imagined speech, with the goal of a BCI for treating disabling language deficits. He described this work at the 2018 ApplySci conference at Stanford.


Join ApplySci at the 12th Wearable Tech + Digital Health + Neurotech Boston conference on November 14, 2019 at Harvard Medical School and the 13th Wearable Tech + Neurotech + Digital Health Silicon Valley conference on February 11-12, 2020 at Stanford University