Wearable Tech + Digital Health + Neurotech Boston

Join ApplySci at the 12th Wearable Tech + Digital Health + Neurotech Boston conference on November 14, 2019 at Harvard Medical School, featuring talks by Brad Ringeisen, DARPA – Joe Wang, UCSD – Carlos Pena, FDA – George Church, Harvard – Diane Chan, MIT – Giovanni Traverso, Harvard | Brigham & Women's – Anupam Goel, UnitedHealthcare – Nathan Intrator, Tel Aviv University | Neurosteer – Arto Nurmikko, Brown – Constance Lehman, Harvard | MGH – Mikael Eliasson, Roche – Nicola Neretti, Brown – R. Jacob Vogelstein, Camden Partners – Yael Mandelblat-Cerf, Biogen

Deep learning mammography model detects breast cancer up to five years in advance

MIT CSAIL professor Regina Barzilay and Harvard/MGH professor Constance Lehman have developed a deep learning model that can predict, from a mammogram, whether a woman will develop breast cancer up to five years in the future. The model learned the subtle breast tissue patterns that precede malignant tumors from the mammograms and known outcomes of 90,000 MGH patients.
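The post does not specify the model's architecture, so the following is only a rough, hypothetical sketch of the training setup: a convolutional network that maps a mammogram to a five-year risk probability, trained against known outcomes. All layer choices, names, and dimensions are assumptions, not the published MIT/MGH design.

```python
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical sketch, NOT the published MIT/MGH architecture:
# a CNN mapping a grayscale mammogram to a 5-year cancer-risk probability.
class MammogramRiskModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = models.resnet18(weights=None)
        self.backbone.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2,
                                        padding=3, bias=False)  # 1-channel input
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 1)

    def forward(self, x):
        return torch.sigmoid(self.backbone(x))  # risk score in [0, 1]

model = MammogramRiskModel()
criterion = nn.BCELoss()  # trained against known 5-year outcomes (0 or 1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

images = torch.randn(8, 1, 224, 224)            # stand-in mammogram batch
outcomes = torch.randint(0, 2, (8, 1)).float()  # 1 = developed cancer within 5 years

optimizer.zero_grad()
loss = criterion(model(images), outcomes)
loss.backward()
optimizer.step()
```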

The goal is to individualize screening and prevention programs.

Barzilay said that “rather than taking a one-size-fits-all approach, we can personalize screening around a woman’s risk of developing cancer. For example, a doctor might recommend that one group of women get a mammogram every other year, while another higher-risk group might get supplemental MRI screening.”

The algorithm accurately placed 31 percent of all cancer patients in its highest-risk category, compared to 18 percent for traditional models.
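For concreteness, this kind of statistic can be computed by ranking patients by predicted risk, labeling the top decile “highest-risk,” and measuring what share of eventual cancer cases falls in it. The example below uses synthetic data and an assumed 10 percent cutoff purely for illustration.

```python
import numpy as np

# Toy illustration of the metric: share of eventual cancer cases that the
# model's "highest-risk" category (here, the top risk decile) captures.
rng = np.random.default_rng(0)
risk_scores = rng.random(10_000)  # model-predicted 5-year risk per patient
developed_cancer = rng.random(10_000) < 0.02 * (1 + 4 * risk_scores)  # toy outcomes

threshold = np.quantile(risk_scores, 0.9)   # top 10% = "highest-risk" category
in_top_decile = risk_scores >= threshold
captured = (in_top_decile & developed_cancer).sum() / developed_cancer.sum()
print(f"Share of future cancers in highest-risk category: {captured:.0%}")
```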

Lehman hopes to change screening strategies from age-based to risk-based: “This is because, before, we did not have accurate risk assessment tools that worked for individual women.”

Current risk assessment, based on age, family history of breast and ovarian cancer, hormonal and reproductive factors, and breast density, is only weakly correlated with breast cancer incidence. This has led many organizations to believe that risk-based screening is not possible.

Rather than manually identifying the patterns in a mammogram that drive future cancer, the algorithm deduced patterns directly from the data, detecting abnormalities too subtle for the human eye to see.

Lehman said that “since the 1960s radiologists have noticed that women have unique and widely variable patterns of breast tissue visible on the mammogram. These patterns can represent the influence of genetics, hormones, pregnancy, lactation, diet, weight loss, and weight gain. We can now leverage this detailed information to be more precise in our risk assessment at the individual level.”

The MIT/MGH model is equally accurate for white and black women, unlike prior models. Black women have been shown to be 42 percent more likely to die from breast cancer, due to a wide range of factors that may include differences in detection and access to health care.

Barzilay believes the system could, in the future, determine, based on mammograms, whether patients are at greater risk for cardiovascular disease or other cancers.


Professor Constance Lehman will discuss this technology at ApplySci’s 12th Wearable Tech + Digital Health + Neurotech Boston conference on November 14, 2019 at Harvard Medical School.

Study: AI accurately predicts childhood disease from health records

Xia Huimin and colleagues at the Guangzhou Women and Children’s Medical Center used AI to read 1.36 million pediatric health records, diagnosing disease as accurately as doctors, according to a recent study.

Common childhood diseases were detected after processing symptoms, medical history, and other clinical data from this massive sample. The goal is to diagnose complex or rare diseases by providing more diagnostic predictions, and to assist in triaging patients.
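The study’s methods are not described in detail here; as a loose sketch of the general idea of predicting a diagnosis from clinical text, a toy pipeline might look like the following (the notes, labels, and model choice are invented for illustration, not the published system):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented toy example, not the published system: predict a pediatric
# diagnosis from free-text symptoms and history in a health record.
notes = [
    "fever cough wheezing for three days",
    "rash itching after eating new food",
    "fever cough chest retractions",
    "hives swelling after peanuts",
]
diagnoses = ["bronchiolitis", "allergy", "bronchiolitis", "allergy"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(notes, diagnoses)
print(model.predict(["wheezing with fever"]))  # -> ['bronchiolitis']
```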


Join ApplySci at the 12th Wearable Tech + Digital Health + Neurotech Boston conference on November 14, 2019 at Harvard Medical School

Alzheimer’s detected by AI 6 years before diagnosis

In a recent study, Jae Ho Sohn and UCSF colleagues used AI to analyze FDG-PET scans, which track glucose metabolism in the brain, to detect early-stage Alzheimer’s disease six years before diagnosis.

The algorithm was trained on PET scans from patients who were eventually diagnosed with Alzheimer’s disease, mild cognitive impairment (MCI), or no disorder. It identified 92% of patients who developed Alzheimer’s disease in the first test set and 98% in the second, an average of 75.8 months before diagnosis.
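For concreteness, a sensitivity figure like those above is the fraction of true future cases the classifier flags on a held-out test set. A toy computation (synthetic labels and scores, with an assumed 0.5 decision threshold):

```python
import numpy as np

# Toy computation of sensitivity (recall) on a held-out test set.
y_true = np.array([1, 1, 1, 0, 0, 1, 0, 1])   # 1 = later diagnosed with Alzheimer's
y_score = np.array([0.9, 0.8, 0.7, 0.3, 0.4, 0.85, 0.2, 0.6])  # model outputs

y_pred = (y_score >= 0.5).astype(int)            # assumed decision threshold
sensitivity = (y_pred[y_true == 1] == 1).mean()  # fraction of true cases flagged
print(f"Sensitivity: {sensitivity:.0%}")
```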


Join ApplySci at the 10th Wearable Tech + Digital Health + Neurotech Silicon Valley conference on February 21-22 at Stanford University — Featuring: Zhenan Bao – Christof Koch – Vinod Khosla – Walter Greenleaf – Nathan Intrator – John Mattison – David Eagleman – Unity Stoakes – Shahin Farshchi – Emmanuel Mignot – Michael Snyder – Joe Wang – Josh Duyan – Aviad Hai – Anne Andrews – Tan Le – Anima Anandkumar – Hugo Mercier – Shea Balish – Kareem Ayyad – Mehran Talebinejad – Liam Kaufman – Scott Barclay

AI predicts response to antipsychotic drugs, could distinguish between disorders

Lawson Health Research Institute, Mind Research Network and Brainnetome Center researchers have developed an algorithm that analyzes brain scans to classify illness in patients with complex mood disorders and help predict their response to medication.

A recent study analyzed and compared fMRI scans of people with major depressive disorder (MDD), bipolar I disorder, and no history of mental illness, and found that each group’s brain networks differed, including in regions of the default mode network and the thalamus.

When tested against participants with a known MDD or bipolar I diagnosis, the algorithm correctly classified illness with 92.4 percent accuracy.
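The published pipeline is not detailed in this post; a common approach to this kind of problem, shown here as a hedged sketch with entirely synthetic data, is to cross-validate a linear classifier on vectorized functional-connectivity features:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Hedged sketch, not the study's pipeline: classify MDD vs. bipolar I from
# vectorized fMRI functional-connectivity features (synthetic stand-ins).
rng = np.random.default_rng(42)
n_subjects, n_features = 80, 4950   # e.g. upper triangle of a 100x100 matrix
X = rng.normal(size=(n_subjects, n_features))  # connectivity features
y = rng.integers(0, 2, size=n_subjects)        # 0 = MDD, 1 = bipolar I

clf = SVC(kernel="linear", C=1.0)
scores = cross_val_score(clf, X, y, cv=5)      # cross-validated accuracy
print(f"Mean accuracy: {scores.mean():.1%}")
```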

The team also imaged the brains of 12 complex mood disorder patients without a clear diagnosis, to predict their diagnoses and examine medication response.

The researchers hypothesized that participants classified by the algorithm as having MDD would respond to antidepressants while those classified as having bipolar I would respond to mood stabilizers. When tested with the complex patients, 11 out of 12 responded to the medication predicted by the algorithm.

According to lead researcher Elizabeth Osuch: “This study takes a major step towards finding a biomarker of medication response in emerging adults with complex mood disorders. It also suggests that we may one day have an objective measure of psychiatric illness through brain imaging that would make diagnosis faster, more effective and more consistent across health care providers.”


Join ApplySci at the 9th Wearable Tech + Digital Health + Neurotech Boston conference on September 24, 2018 at the MIT Media Lab. Speakers include: Rudy Tanzi – Mary Lou Jepsen – George Church – Roz Picard – Nathan Intrator – Keith Johnson – Juan Enriquez – John Mattison – Roozbeh Ghaffari – Poppy Crum – Phillip Alvelda – Marom Bikson – Ed Simcox – Sean Lane

AI speeds MRI scans

Facebook and NYU’s fastMRI project, led by Larry Zitnick, uses AI in an attempt to make MRI imaging ten times faster. Neural networks will be trained to fill in missing or degraded parts of scans, turning low-resolution images into high-resolution ones. The goal is to significantly reduce the time patients must lie motionless inside an MRI machine.
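A minimal sketch of the underlying idea, with an assumed 1D undersampling pattern: acquire only a fraction of k-space (the frequency domain the scanner samples), reconstruct naively by zero-filling, and train a network to map that aliased baseline to a clean image. This illustrates the concept only; it is not fastMRI’s code.

```python
import numpy as np

# Concept sketch: undersample k-space to save scan time, then reconstruct.
image = np.random.rand(256, 256)  # stand-in fully sampled image
kspace = np.fft.fft2(image)       # frequency-domain data the scanner acquires

rng = np.random.default_rng(0)
mask = np.zeros(256, dtype=bool)
mask[rng.choice(256, size=64, replace=False)] = True  # keep 25% of the lines
undersampled = kspace * mask[:, None]  # 4x fewer phase-encode lines acquired

zero_filled = np.abs(np.fft.ifft2(undersampled))  # aliased naive reconstruction
# A trained network would map `zero_filled` to a clean, high-resolution image.
```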



AI-optimized glioblastoma chemotherapy

Pratik Shah, Gregory Yauney, and MIT Media Lab researchers have developed an AI model that could make glioblastoma chemotherapy regimens less toxic while remaining effective. It analyzes current regimens and iteratively adjusts doses to optimize treatment, seeking the lowest potency and frequency that still reduce tumor size.

In simulated trials of 50 patients, the machine-learning model designed treatment cycles that reduced potency to a quarter or half of most doses. It often skipped doses altogether, scheduling administrations only twice a year instead of monthly.

Reinforcement learning was used to teach the model to favor behaviors that lead to a desired outcome. Regimens of temozolomide (TMZ), and of combined procarbazine, lomustine, and vincristine (PCV), administered over weeks or months, were studied.

As the model explored the regimen, it decided on an action at each planned dosing interval: initiate or withhold a dose. If it administered a dose, it then decided whether the entire dose, or only a portion, was necessary. With each action, it queried another clinical model to see whether the mean tumor diameter had shrunk.

When full doses were given, the model was penalized, so it instead chose fewer, smaller doses. According to Shah, harmful actions were reduced while still reaching the desired outcome.
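A hedged sketch of this reward shaping, not the MIT model: a tabular Q-learning agent picks a fraction of the full dose at each interval, earning reward for tumor shrinkage minus a penalty proportional to dose potency. The tumor-response model and every constant below are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
actions = [0.0, 0.25, 0.5, 1.0]     # fraction of the full dose to administer
n_intervals = 20                    # planned dosing intervals in the regimen
Q = np.zeros((n_intervals, len(actions)))
alpha, gamma, eps = 0.1, 0.95, 0.1  # learning rate, discount, exploration

def step(diameter, dose):
    """Invented toy stand-in for the clinical tumor-response model."""
    shrink = 0.05 * dose + rng.normal(0, 0.01)
    new_d = max(diameter - shrink, 0.0)
    reward = (diameter - new_d) - 0.03 * dose  # penalize potent doses
    return new_d, reward

for episode in range(2000):
    d = 1.0                                    # normalized mean tumor diameter
    for t in range(n_intervals):
        a = rng.integers(len(actions)) if rng.random() < eps else int(Q[t].argmax())
        d, r = step(d, actions[a])
        future = Q[t + 1].max() if t + 1 < n_intervals else 0.0
        Q[t, a] += alpha * (r + gamma * future - Q[t, a])

print([actions[int(a)] for a in Q.argmax(axis=1)])  # learned dosing schedule
```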

The J. Craig Venter Institute’s Nicholas Schork said that the model offers a major improvement over the conventional “eye-balling” method of administering doses, observing how patients respond, and adjusting accordingly.

