Department of Biophysics

Science News
IEEE Spectrum

  • GPT Protein Models Speak Fluent Biology


    Artificial intelligence has already shaved years off research into protein engineering. Now, for the first time, scientists have synthesized proteins predicted by an AI model in the lab, and found them to work just as well as their natural counterparts.

    The research used a deep learning language model for protein engineering called ProGen, which was developed by the company Salesforce AI Research in 2020. ProGen was trained on 280 million raw protein sequences from publicly available databases of sequenced natural proteins to generate artificial protein sequences from scratch.

    To evaluate if the AI could generate functional artificial protein sequences, the researchers primed the model with 56,000 sequences from five different families of lysozymes, a type of enzyme found in human tears, saliva, and milk, capable of dissolving the cell walls of certain bacteria. The fine-tuned model produced a million sequences, from which 100 were selected to create artificial proteins to test and compare with naturally occurring lysozymes.
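
    The overall workflow (fine-tune a protein language model on a family of sequences, sample a large pool of candidates, and keep a small shortlist for synthesis) can be sketched in a few lines of Python. The snippet below is a minimal illustration using the Hugging Face Transformers API; the checkpoint name, start token, and log-likelihood ranking are assumptions made for illustration, not the released ProGen model or the study’s actual selection criteria.

    ```python
    # Minimal sketch of a generate-then-filter loop for a fine-tuned protein
    # language model. The checkpoint name, start token, and ranking heuristic are
    # illustrative assumptions, not the ProGen release or the study's pipeline.
    import torch
    from transformers import AutoTokenizer, AutoModelForCausalLM

    MODEL = "example-org/protein-lm-lysozyme"  # hypothetical fine-tuned checkpoint

    tokenizer = AutoTokenizer.from_pretrained(MODEL)
    model = AutoModelForCausalLM.from_pretrained(MODEL).eval()

    def generate_sequences(n, max_len=300):
        """Sample n candidate protein sequences from the fine-tuned model."""
        prompt = tokenizer("M", return_tensors="pt")  # assumed start token/residue
        out = model.generate(
            **prompt,
            do_sample=True,
            top_p=0.95,
            max_length=max_len,
            num_return_sequences=n,
        )
        return [tokenizer.decode(ids, skip_special_tokens=True) for ids in out]

    def log_likelihood(seq):
        """Rank candidates by model log-likelihood (one possible screening signal)."""
        ids = tokenizer(seq, return_tensors="pt").input_ids
        with torch.no_grad():
            loss = model(ids, labels=ids).loss  # mean negative log-likelihood per token
        return -loss.item() * ids.shape[1]

    candidates = generate_sequences(1000)
    shortlist = sorted(candidates, key=log_likelihood, reverse=True)[:100]
    ```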

    Some 70 percent of the artificial protein sequences worked, too. That, says James Fraser of the University of California San Francisco’s School of Pharmacy, one of the study’s co-authors, was “not just our top one or two favorites, but actually a statistically meaningful, large number of them.” In fact, Fraser reports, the activity of the natural proteins was a little lower than that of the artificial ones. The latter were also active when their resemblance to natural proteins was as low as 31.4 percent.

    “What that tells me is that when we use ProGen to generate artificial sequences, those proteins have as good a shot at being active as if we were to choose random natural proteins from the database,” he says. “That, I think, is a big breakthrough.” Researchers, in other words, now have a broader and deeper design palette for protein engineering.

    While both physics-based and evolutionary approaches to protein design have worked well so far, Fraser says, these methods have been limited in scope and in the catalytic activity the resulting proteins can display. The language-model approach opens up a new way to design proteins with different types of activities, in regions of protein sequence space that evolution has not yet explored, he adds.

    A key feature of ProGen is that it can be fine-tuned using property tags such as protein family, biological process, or molecular function. “So we can say, give us [protein sequences] that are, for example, more likely to be thermostable, less likely to interact with other proteins, or potentially better to work with under acidic conditions,” Fraser says. “Having that control, rather than starting from a natural [protein] sequence and trying to coax it to have those properties…is a big dream of protein engineering.”
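
    As a rough continuation of the sketch above, conditioning amounts to prepending control tags to the prompt before sampling. The tag strings here are placeholders; they are not ProGen’s actual control-tag vocabulary.

    ```python
    # Continuing the sketch above: bias sampling with property tags prepended to
    # the prompt. The tag strings are placeholders, not ProGen's real vocabulary.
    tags = "<family:lysozyme><property:thermostable>"
    prompt = tokenizer(tags + "M", return_tensors="pt")
    out = model.generate(**prompt, do_sample=True, top_p=0.95, max_length=300)
    print(tokenizer.decode(out[0], skip_special_tokens=True))
    ```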

    The researchers hope that with ProGen generating sequences within milliseconds, a large database can be created to expand protein sequence diversity beyond naturally occurring libraries. This would help find proteins capable of novel catalytic reactions that are related to natural protein activities. For example, says Fraser, “catalyzing a related reaction that might have great attributes for degrading plastic…[or in the] synthesis of a drug… Being able to go out into sequence space increases the probability of finding that novelty.”

    He predicts that the next exciting step for the field will be combining deep learning language models with other protein engineering approaches to get the best of both worlds—and in the process help researchers find novel activities faster. In the near future, he says, the applications coming out of this research are likely to be about creating new enzymes that could be useful for making small molecule drugs more cleanly, as well as in the natural process of removing contamination from waste, a.k.a. bioremediation.

    Nikhil Naik, director of research at Salesforce, says that their goal was to demonstrate that it is possible to apply large language models to the problem of protein design, making use of publicly available protein data. “Now that we have demonstrated that [ProGen] has the capability to generate novel proteins, we have publicly released the models so other people can build on our research.”

    Meanwhile, they continue to work on ProGen, addressing limitations and challenges. One of these is that it is a very data-dependent approach. “We have explored incorporating structure-based information to improve sequence design,” Naik says. “We’re also looking into how [to] improve generation capabilities when you don’t have too much data available for a particular protein family or domain.”

    The researchers reported their results in the 26 January issue of Nature Biotechnology.



  • Roboticists Want to Give You a Third Arm


    What could you do with an extra limb? Consider a surgeon performing a delicate operation, one that needs her expertise and steady hands—all three of them. As her two biological hands manipulate surgical instruments, a third robotic limb that’s attached to her torso plays a supporting role. Or picture a construction worker who is thankful for his extra robotic hand as it braces the heavy beam he’s fastening into place with his other two hands. Imagine wearing an exoskeleton that would let you handle multiple objects simultaneously, like Spider-Man’s nemesis Doctor Octopus. Or contemplate the out-there music a composer could write for a pianist who has 12 fingers to spread across the keyboard.

    Such scenarios may seem like science fiction, but recent progress in robotics and neuroscience makes extra robotic limbs conceivable with today’s technology. Our research groups at Imperial College London and the University of Freiburg, in Germany, together with partners in the European project NIMA, are now working to figure out whether such augmentation can be realized in practice to extend human abilities. The main questions we’re tackling involve both neuroscience and neurotechnology: Is the human brain capable of controlling additional body parts as effectively as it controls biological parts? And if so, what neural signals can be used for this control?


    We think that extra robotic limbs could be a new form of human augmentation, improving people’s abilities on tasks they can already perform as well as expanding their ability to do things they simply cannot do with their natural human bodies. If humans could easily add and control a third arm, or a third leg, or a few more fingers, they would likely use them in tasks and performances that went beyond the scenarios mentioned here, discovering new behaviors that we can’t yet even imagine.

    Levels of human augmentation

    Robotic limbs have come a long way in recent decades, and some are already used by people to enhance their abilities. Most are operated via a joystick or other hand controls. For example, that’s how workers on manufacturing lines wield mechanical limbs that hold and manipulate components of a product. Similarly, surgeons who perform robotic surgery sit at a console across the room from the patient. While the surgical robot may have four arms tipped with different tools, the surgeon’s hands can control only two of them at a time. Could we give these surgeons the ability to control four tools simultaneously?

    Robotic limbs are also used by people who have amputations or paralysis. That includes people in powered wheelchairs controlling a robotic arm with the chair’s joystick and those who are missing limbs controlling a prosthetic by the actions of their remaining muscles. But a truly mind-controlled prosthesis is a rarity.

    The pioneers in brain-controlled prosthetics are people with tetraplegia, who are often paralyzed from the neck down. Some of these people have boldly volunteered for clinical trials of brain implants that enable them to control a robotic limb by thought alone, issuing mental commands that cause a robot arm to lift a drink to their lips or help with other tasks of daily life. These systems fall under the category of brain-machine interfaces (BMI). Other volunteers have used BMI technologies to control computer cursors, enabling them to type out messages, browse the Internet, and more. But most of these BMI systems require brain surgery to insert the neural implant and include hardware that protrudes from the skull, making them suitable only for use in the lab.

    Augmentation of the human body can be thought of as having three levels. The first level increases an existing characteristic, in the way that, say, a powered exoskeleton can give the wearer super strength. The second level gives a person a new degree of freedom, such as the ability to move a third arm or a sixth finger, but at a cost—if the extra appendage is controlled by a foot pedal, for example, the user sacrifices normal mobility of the foot to operate the control system. The third level of augmentation, and the least mature technologically, gives a user an extra degree of freedom without taking mobility away from any other body part. Such a system would allow people to use their bodies normally by harnessing some unused neural signals to control the robotic limb. That’s the level that we’re exploring in our research.

    Deciphering electrical signals from muscles

    Third-level human augmentation can be achieved with invasive BMI implants, but for everyday use, we need a noninvasive way to pick up brain commands from outside the skull. For many research groups, that means relying on tried-and-true electroencephalography (EEG) technology, which uses scalp electrodes to pick up brain signals. Our groups are working on that approach, but we are also exploring another method: using electromyography (EMG) signals produced by muscles. We’ve spent more than a decade investigating how EMG electrodes on the skin’s surface can detect electrical signals from the muscles that we can then decode to reveal the commands sent by spinal neurons.

    Electrical signals are the language of the nervous system. Throughout the brain and the peripheral nerves, a neuron “fires” when a certain voltage—some tens of millivolts—builds up within the cell and causes an action potential to travel down its axon, releasing neurotransmitters at junctions, or synapses, with other neurons, and potentially triggering those neurons to fire in turn. When such electrical pulses are generated by a motor neuron in the spinal cord, they travel along an axon that reaches all the way to the target muscle, where they cross special synapses to individual muscle fibers and cause them to contract. We can record these electrical signals, which encode the user’s intentions, and use them for a variety of control purposes.

    How the Neural Signals Are Decoded

    Block diagram showing how the system decodes neural signals read by electromyography (EMG) using a training module and a real-time decoding module. A training module [orange] takes an initial batch of EMG signals read by the electrode array [left], determines how to extract signals of individual neurons, and summarizes the process mathematically as a separation matrix and other parameters. With these tools, the real-time decoding module [green] can efficiently extract individual neurons’ sequences of spikes, or “spike trains” [right], from an ongoing stream of EMG signals. Chris Philpot

    Deciphering the individual neural signals based on what can be read by surface EMG, however, is not a simple task. A typical muscle receives signals from hundreds of spinal neurons. Moreover, each axon branches at the muscle and may connect with a hundred or more individual muscle fibers distributed throughout the muscle. A surface EMG electrode picks up a sampling of this cacophony of pulses.

    A breakthrough in noninvasive neural interfaces came with the discovery in 2010 that the signals picked up by high-density EMG, in which tens to hundreds of electrodes are fastened to the skin, can be disentangled, providing information about the commands sent by individual motor neurons in the spine. Such information had previously been obtained only with invasive electrodes in muscles or nerves. Our high-density surface electrodes provide good sampling over multiple locations, enabling us to identify and decode the activity of a relatively large proportion of the spinal motor neurons involved in a task. And we can now do it in real time, which suggests that we can develop noninvasive BMI systems based on signals from the spinal cord.

    The current version of our system consists of two parts: a training module and a real-time decoding module. To begin, with the EMG electrode grid attached to their skin, the user performs gentle muscle contractions, and we feed the recorded EMG signals into the training module. This module performs the difficult task of identifying the individual motor neuron pulses (also called spikes) that make up the EMG signals. The module analyzes how the EMG signals and the inferred neural spikes are related, which it summarizes in a set of parameters that can then be used with a much simpler mathematical prescription to translate the EMG signals into sequences of spikes from individual neurons.

    With these parameters in hand, the decoding module can take new EMG signals and extract the individual motor neuron activity in real time. The training module requires a lot of computation and would be too slow to perform real-time control itself, but it usually has to be run only once each time the EMG electrode grid is fixed in place on a user. By contrast, the decoding algorithm is very efficient, with latencies as low as a few milliseconds, which bodes well for possible self-contained wearable BMI systems. We validated the accuracy of our system by comparing its results with signals obtained concurrently by two invasive EMG electrodes inserted into the user’s muscle.
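
    As a rough illustration of this train-once, decode-fast split, the sketch below uses a generic linear source-separation method (scikit-learn’s FastICA) as a stand-in for the study’s decomposition algorithm, which is considerably more sophisticated. The sampling rate, number of sources, and spike threshold are assumptions.

    ```python
    # Minimal sketch of the two-module split: a slow training step that learns a
    # linear separation from high-density EMG, and a fast decoding step that
    # applies it to new samples and thresholds the result into spike trains.
    # FastICA is a generic stand-in for the study's decomposition algorithm.
    import numpy as np
    from sklearn.decomposition import FastICA

    FS = 2048  # EMG sampling rate in Hz (assumed)

    def train_separation(emg_calibration, n_sources=20):
        """Fit a separation model on a calibration recording (channels x samples)."""
        ica = FastICA(n_components=n_sources, max_iter=1000, random_state=0)
        sources = ica.fit_transform(emg_calibration.T).T   # sources x samples
        thresholds = 3.0 * sources.std(axis=1)             # crude per-source spike threshold
        return ica, thresholds

    def decode_realtime(ica, thresholds, emg_block):
        """Apply the learned separation to a new EMG block; return binary spike trains."""
        sources = ica.transform(emg_block.T).T
        return (np.abs(sources) > thresholds[:, None]).astype(int)

    # Demo with synthetic data: 64 electrodes, 10 s of calibration, 100 ms block
    rng = np.random.default_rng(0)
    calibration = rng.standard_normal((64, 10 * FS))
    ica, thr = train_separation(calibration)
    spikes = decode_realtime(ica, thr, rng.standard_normal((64, FS // 10)))
    print(spikes.shape)  # (n_sources, samples in the block)
    ```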

    Exploiting extra bandwidth in neural signals

    Developing this real-time method to extract signals from spinal motor neurons was the key to our present work on controlling extra robotic limbs. While studying these neural signals, we noticed that they have, essentially, extra bandwidth. The low-frequency part of the signal (below about 7 hertz) is converted into muscular force, but the signal also has components at higher frequencies, such as those in the beta band at 13 to 30 Hz, which are too high to control a muscle and seem to go unused. We don’t know why the spinal neurons send these higher-frequency signals; perhaps the redundancy is a buffer in case of new conditions that require adaptation. Whatever the reason, humans evolved a nervous system in which the signal that comes out of the spinal cord has much richer information than is needed to command a muscle.

    That discovery set us thinking about what could be done with the spare frequencies. In particular, we wondered if we could take that extraneous neural information and use it to control a robotic limb. But we didn’t know if people would be able to voluntarily control this part of the signal separately from the part they used to control their muscles. So we designed an experiment to find out.

    Neural Control Demonstrated

    Illustration showing an experimental setup to study neural control. A user looks at a computer monitor that is connected to the electrode array on her shin. A volunteer exploits unused neural bandwidth to direct the motion of a cursor on the screen in front of her. Neural signals pass from her brain, through spinal neurons, to the muscle in her shin, where they are read by an electromyography (EMG) electrode array on her leg and deciphered in real time. These signals include low-frequency components [blue] that control muscle contractions, higher frequencies [beta band, yellow] with no known biological purpose, and noise [gray]. Chris Philpot; Source: M. Bräcklein et al., Journal of Neural Engineering

    In our first proof-of-concept experiment, volunteers tried to use their spare neural capacity to control computer cursors. The setup was simple, though the neural mechanism and the algorithms involved were sophisticated. Each volunteer sat in front of a screen, and we placed an EMG system on their leg, with 64 electrodes in a 4-by-10-centimeter patch stuck to their shin over the tibialis anterior muscle, which flexes the foot upward when it contracts. The tibialis has been a workhorse for our experiments: It occupies a large area close to the skin, and its muscle fibers are oriented along the leg, which together make it ideal for decoding the activity of spinal motor neurons that innervate it.

    Four diagrams showing examples of a user’s ability to control a cursor on a screen via neural signals. These are some results from the experiment in which low- and high-frequency neural signals, respectively, controlled horizontal and vertical motion of a computer cursor. Colored ellipses (with plus signs at centers) show the target areas. The top three diagrams show the trajectories (each one starting at the lower left) achieved for each target across three trials by one user. At bottom, dots indicate the positions achieved across many trials and users. Colored crosses mark the mean positions and the range of results for each target. Source: M. Bräcklein et al., Journal of Neural Engineering

    We asked our volunteers to steadily contract the tibialis, essentially holding it tense, and throughout the experiment we looked at the variations within the extracted neural signals. We separated these signals into the low frequencies that controlled the muscle contraction and spare frequencies at about 20 Hz in the beta band, and we linked these two components respectively to the horizontal and vertical control of a cursor on a computer screen. We asked the volunteers to try to move the cursor around the screen, reaching all parts of the space, but we didn’t, and indeed couldn’t, explain to them how to do that. They had to rely on the visual feedback of the cursor’s position and let their brains figure out how to make it move.
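
    The band-splitting step can be illustrated with standard filters: a low-pass component of the decoded neural drive maps to one cursor axis, and the beta-band amplitude envelope maps to the other. The cutoff frequencies and scaling below are illustrative choices, not the exact mapping used in the study.

    ```python
    # Sketch of splitting the decoded neural drive into the two control channels
    # described above: a low-pass component (muscle contraction, horizontal axis)
    # and a beta-band envelope (~13-30 Hz, vertical axis). Cutoffs, sampling rate,
    # and scaling are illustrative assumptions.
    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    FS = 2048  # Hz, assumed sampling rate of the cumulative spike train

    def neural_drive(spike_trains):
        """Cumulative spike train: sum over decoded motor neurons."""
        return spike_trains.sum(axis=0).astype(float)

    def split_bands(drive):
        low_sos = butter(4, 7, btype="lowpass", fs=FS, output="sos")
        beta_sos = butter(4, [13, 30], btype="bandpass", fs=FS, output="sos")
        low = sosfiltfilt(low_sos, drive)
        beta_env = np.abs(sosfiltfilt(beta_sos, drive))  # beta amplitude envelope
        return low, beta_env

    def to_cursor(low, beta_env):
        """Map the two components to screen coordinates in [0, 1] (illustrative)."""
        x = (low - low.min()) / (np.ptp(low) + 1e-9)
        y = (beta_env - beta_env.min()) / (np.ptp(beta_env) + 1e-9)
        return x, y

    # Demo on synthetic spike trains: 20 neurons, 2 seconds of data
    rng = np.random.default_rng(0)
    spikes = (rng.random((20, 2 * FS)) < 0.01).astype(int)
    x, y = to_cursor(*split_bands(neural_drive(spikes)))
    ```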

    Remarkably, without knowing exactly what they were doing, these volunteers mastered the task within minutes, zipping the cursor around the screen, albeit shakily. Beginning with one neural command signal—contract the tibialis anterior muscle—they were learning to develop a second signal to control the computer cursor’s vertical motion, independently from the muscle control (which directed the cursor’s horizontal motion). We were surprised and excited by how easily they achieved this big first step toward finding a neural control channel separate from natural motor tasks. But we also saw that the control was not accurate enough for practical use. Our next step will be to see if more accurate signals can be obtained and if people can use them to control a robotic limb while also performing independent natural movements.

    We are also interested in understanding more about how the brain performs feats like the cursor control. In a recent study using a variation of the cursor task, we concurrently used EEG to see what was happening in the user’s brain, particularly in the area associated with the voluntary control of movements. We were excited to discover that the changes happening to the extra beta-band neural signals arriving at the muscles were tightly related to similar changes at the brain level. As mentioned, the beta neural signals remain something of a mystery since they play no known role in controlling muscles, and it isn’t even clear where they originate. Our result suggests that our volunteers were learning to modulate brain activity that was sent down to the muscles as beta signals. This important finding is helping us unravel the potential mechanisms behind these beta signals.

    Meanwhile, at Imperial College London we have set up a system for testing these new technologies with extra robotic limbs, which we call the MUlti-limb Virtual Environment, or MUVE. Among other capabilities, MUVE will enable users to work with as many as four lightweight wearable robotic arms in scenarios simulated by virtual reality. We plan to make the system open for use by other researchers worldwide.

    Next steps in human augmentation

    Connecting our control technology to a robotic arm or other external device is a natural next step, and we’re actively pursuing that goal. The real challenge, however, will not be attaching the hardware, but rather identifying multiple sources of control that are accurate enough to perform complex and precise actions with the robotic body parts.

    We are also investigating how the technology will affect the neural processes of the people who use it. For example, what will happen after someone has six months of experience using an extra robotic arm? Would the natural plasticity of the brain enable them to adapt and gain a more intuitive kind of control? A person born with six-fingered hands can have fully developed brain regions dedicated to controlling the extra digits, leading to exceptional abilities of manipulation. Could a user of our system develop comparable dexterity over time? We’re also wondering how much cognitive load will be involved in controlling an extra limb. If people can direct such a limb only when they’re focusing intently on it in a lab setting, this technology may not be useful. However, if a user can casually employ an extra hand while doing an everyday task like making a sandwich, then that would mean the technology is suited for routine use.

    Other research groups are pursuing the same neuroscience questions. Some are experimenting with control mechanisms involving either scalp-based EEG or neural implants, while others are working on muscle signals. It is early days for movement augmentation, and researchers around the world have just begun to address the most fundamental questions of this emerging field.

    Two practical questions stand out: Can we achieve neural control of extra robotic limbs concurrently with natural movement, and can the system work without the user’s exclusive concentration? If the answer to either of these questions is no, we won’t have a practical technology, but we’ll still have an interesting new tool for research into the neuroscience of motor control. If the answer to both questions is yes, we may be ready to enter a new era of human augmentation. For now, our (biological) fingers are crossed.



  • “Brain Age” AI Discovers Early Alzheimer’s Signs


    Although your brain has chronologically the same number of years in the world as the rest of you, determining its medical “age,” a measure of how processes of aging impact the brain, is far from obvious or straightforward. Now, researchers are testing an AI-based model that determines brain age via magnetic resonance imaging (MRI) data, potentially helping identify early signs of dementia and Alzheimer’s disease.

    This latest research is hardly the first machine-learning model to be applied to the “brain age” problem. But the new research, published in the Proceedings of the National Academy of Sciences, looks like it could help buy high-risk patients crucial time.

    “If we do identify people at high risk early, we can mitigate the risk through lifestyle changes or potential treatments,” said Andrei Irimia, an assistant professor of gerontology at the University of Southern California and the senior author of the study.

    Irimia and his colleagues trained their model using MRIs from 4,681 cognitively normal patients sampled from a combination of databases, with the largest number coming from the UK Biobank. The researchers used a different set of 1,170 MRIs from the same databases to test their network. For each MRI, the neural network produces an estimate of a patient’s chronological age. The researchers referred to this value as the patient’s brain age, which should be as close to a person’s chronological age as possible when it is given scans from cognitively normal adults.

    The algorithm predicted chronological age with an average error of about 2.3 years, which the researchers said is more accurate by about a year than other comparable brain-age techniques.
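
    For readers who want the arithmetic, the evaluation boils down to comparing predicted and chronological ages on held-out scans, reporting the mean absolute error, and computing the per-subject brain-age gap that is used later as a risk marker. The numbers below are placeholders, not the study’s data.

    ```python
    # Minimal sketch of the evaluation: compare predicted "brain age" with
    # chronological age on held-out scans, report the mean absolute error, and
    # compute the per-subject brain-age gap. Values below are placeholders.
    import numpy as np

    def evaluate_brain_age(predicted_age, chronological_age):
        predicted = np.asarray(predicted_age, dtype=float)
        actual = np.asarray(chronological_age, dtype=float)
        mae = np.mean(np.abs(predicted - actual))   # study reports roughly 2.3 years
        gap = predicted - actual                    # positive gap = "older-looking" brain
        return mae, gap

    # Hypothetical predictions for three test subjects
    mae, gap = evaluate_brain_age([71.2, 64.8, 80.1], [69.0, 66.0, 77.5])
    print(f"MAE = {mae:.2f} years; brain-age gaps = {gap}")
    ```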

    “I think their improvement is meaningful,” said Han Peng, formerly a postdoctoral researcher at Oxford University, who helped develop a similar brain-age neural network that was previously the most accurate.

    Of course, as with much of AI, brain-age algorithms are often black boxes—inscrutable to inspection and unrevealing about how they arrive at their findings. Irimia’s team, however, wanted to make their algorithm interpretable. So it also generates what are called saliency maps—showing the areas of the MRIs the network relies on most to make its decisions.
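
    Gradient-based saliency, the general idea behind such maps, can be sketched in a few lines: take the gradient of the predicted age with respect to the input voxels and use its magnitude as an importance map. The toy network below is a stand-in; the study’s architecture and exact saliency method may differ.

    ```python
    # Sketch of a gradient-based saliency map for a 3D regression network. The
    # tiny untrained model here is a placeholder for the brain-age CNN; the
    # study's architecture and exact saliency method may differ.
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(8, 1),
    )

    def saliency_map(model, mri_volume):
        """Gradient of the predicted age w.r.t. voxel intensities, as importance."""
        model.eval()
        x = mri_volume.clone().requires_grad_(True)  # shape: (1, 1, D, H, W)
        model(x).sum().backward()
        return x.grad.abs().squeeze()                # high values = influential voxels

    sal = saliency_map(model, torch.rand(1, 1, 32, 32, 32))
    print(sal.shape)  # torch.Size([32, 32, 32])
    ```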

    The researchers used 650 more MRIs from cognitively normal patients, as well as 359 from patients who had Alzheimer’s dementia and 351 from patients with less severe effects of the disease, a condition called mild cognitive impairment (MCI). About half of the patients with MCI later developed dementia. The saliency maps supported findings from other studies, including ones on normal aging in the brain and how Alzheimer’s might impact the brains of male and female people differently.

    “We found that our method could confirm and reproduce other findings from other studies that use completely different methods,” Irimia said.

    The researchers found that a larger gap between brain age and chronological age increased the risk that someone with MCI would eventually develop dementia, which previous studies have also found. They also found that, across patients with either MCI or dementia, brain age was more strongly correlated with their level of cognitive functioning than their chronological age was. The same held within the MCI group on its own, but not within the dementia group. That may be because the model was trained using data from cognitively normal patients, the researchers write, although there could be multiple explanations, Irimia said.

    Though this study used a large data set, the sample still skews heavily toward white people of European descent, said James Cole, a professor of neuroimage computing at University College London. Specific racial demographics are not given for the study because the authors didn’t have access to that information, said Irimia. However, the UK Biobank, the study’s largest data source, is about 95 percent white. It’s crucial that researchers be able to show that their research applies to a diverse group of people, said Cole.

    It’s hard to say how meaningful the one-year increase in accuracy is, said Eran Dayan, an associate professor in the department of radiology at the University of North Carolina, Chapel Hill’s school of medicine.

    To know this, Dayan said, future research would have to incorporate more clinical data. Though this study did incorporate some data on cognitive function, Dayan says more studies using longitudinal data, or patient data over time, would be needed to eventually use this technology with real patients.

    Irimia doesn’t disagree. “I think we need more research to really understand how these [saliency map] patterns are different and how we could leverage the information we have gained to improve risk assessment,” he said.



  • Wearable Ultrasound Patch Images the Heart in Real Time


    A wearable ultrasound imager for the heart that is roughly the size of a postage stamp, can be worn for up to 24 hours, and works even during exercise may one day help doctors spot cardiac problems that current medical technology might miss, a new study finds.

    Heart disease is the leading cause of death among the elderly, and is increasingly becoming a problem among those who are younger as well because of unhealthy diets and other factors. The signs of heart disease are often brief and unpredictable, so long-term cardiac imaging may help spot heart anomalies that might otherwise escape detection.

    For instance, patients with heart failure may at times seem fine at rest, “as the heart sacrifices its efficiency to maintain the same cardiac output,” says study colead author Hongjie Hu, a nanoengineer at the University of California, San Diego. “Pushing the heart towards its limits during exercise can make the lack of efficiency become apparent.”


    In addition, the heart can quickly recover from problems it may experience during exercise. This means doctors may fail to detect these issues, since cardiac imaging conventionally happens after exercise, not during it, Hu says.

    However, cardiac imaging techniques are often limited in terms of when they can analyze the heart, due in large part to the bulkiness of the equipment. For example, traditional cardiac ultrasound imaging “evaluates images of the heart right before and right after intensive exercise, as holding an ultrasound probe over the chest by hand and maintaining a stable position for it is impossible during this process,” Hu says. Previous wearable heart sensors could capture signals only at the skin’s surface.

    Now scientists have developed a wearable ultrasound device that can enable safe, continuous, real-time, long-term, and highly detailed imaging of the heart. They detailed their findings online on 25 January in the journal Nature.

    “Potential applications include continuously monitoring the heart in daily life, during exercise, during surgery, and much more,” says study coauthor Ray Wu, a nanoengineer at UC San Diego. “This will open up the possibility to detect previously undetectable symptoms of disease, identify symptoms in their very early stages, and greatly improve patient outcomes.”

    The new device is a patch 1.9 centimeters long by 2.2 cm wide and only 0.9 millimeters thick. It uses an array of piezoelectric transducers to send and receive ultrasound waves in order to generate a constant stream of images of the structure and function of the heart. The researchers were able to get such images even during exercise on a stationary bike. No skin irritation or allergy was seen after 24 hours of continuous wear.

    “The most exciting result is that our patch performs well when an individual is moving,” Hu says. “Our patch allows us to evaluate heart performance throughout exercise, providing valuable information about the heart when it is under high stress.”

    The new patch is about as flexible as human skin. It can also stretch up to 110 percent of its size, which means it can survive far more strain than typically experienced on human skin. These features help it stick onto the body, something not possible with the rigid equipment often used for cardiac imaging.

    “The wearable imager removes much of the bulkiness of traditional imagers and adheres to the body on its own,” Wu says. “This enables the heart to be imaged in ways and scenarios that were never possible before while also freeing up the hands of the doctor from having to continuously operate a traditional ultrasound probe.”

    The wearable ultrasound sensor, shown here attached to a person’s chest, is roughly the size of a postage stamp, can be worn for up to 24 hours, and works even during strenuous exercise. Xu Laboratory/UC San Diego Jacobs School of Engineering

    In the new study, the researchers focused on imaging the left ventricle, the largest of the heart’s four chambers “and strongly considered to be the most important in terms of cardiovascular health, as it is responsible for pumping oxygenated blood to the entire body,” Wu says. Cardiac imaging generally focuses on the left ventricle, but the new device can image all of the heart’s four chambers simultaneously, “so it may be possible for future research to focus on other or multiple chambers,” he adds. In addition, “the imager can be applied to image various other organs, such as the stomach, kidney, or liver.”

    Traditional cardiac ultrasound imaging constantly rotates an ultrasound probe to analyze the heart in multiple dimensions. To eliminate the need for this rotation, the array of ultrasound sensors and emitters in the new device is shaped like a cross so that ultrasonic waves can travel at right angles to each other.

    The scientists developed a custom deep-learning AI model that can analyze the data from the patch and automatically and continuously estimate vital details, such as the percentage of blood pumped out of the left ventricle with each beat, and the volume of blood the heart pumps out with each beat and every minute. The root of most heart problems is the heart not pumping enough blood, issues that often manifest only when the body is moving, the researchers note.
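
    These quantities follow from standard formulas: stroke volume is the difference between end-diastolic and end-systolic left-ventricular volumes, ejection fraction is that difference as a fraction of end-diastolic volume, and cardiac output is stroke volume times heart rate. The numbers in the sketch below are illustrative, not measurements from the study.

    ```python
    # Standard formulas for the quantities the model estimates; the inputs here
    # are illustrative values, not measurements from the study.
    def cardiac_metrics(edv_ml, esv_ml, heart_rate_bpm):
        stroke_volume = edv_ml - esv_ml                           # mL ejected per beat
        ejection_fraction = 100.0 * stroke_volume / edv_ml        # percent of EDV ejected
        cardiac_output = stroke_volume * heart_rate_bpm / 1000.0  # L pumped per minute
        return stroke_volume, ejection_fraction, cardiac_output

    sv, ef, co = cardiac_metrics(edv_ml=120.0, esv_ml=50.0, heart_rate_bpm=70)
    print(f"SV = {sv:.0f} mL, EF = {ef:.0f} %, CO = {co:.1f} L/min")
    ```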

    The scientists did not begin investigating heart imaging until after they developed a wearable ultrasound device.

    “After learning in a casual chat that one of our colleague’s relatives had died of a sudden heart attack, we realized our technology could make real impacts on people’s lives,” says study colead author Hao Huang, a nanoengineer at UC San Diego. “We did not expect the imaging quality of the imager to be very good. However, it turned out we could achieve a comparable quality to commercial probes after careful engineering. We knew we were onto something big when we saw the images full of details after many rounds of improvement, and we thought we should keep pursuing this path to not waste the capabilities of the imager.”

    In the beginning, “we were always struggling with focusing of the ultrasound beam,” Huang recalls. “Debugging involved intensive trial and error based on the readings. After many failures, we used an automatic alignment strategy to make the transducer array with a high uniformity and perfect alignment so that it guarantees beam focusing.”

    Another set of problems involved noise in the signals. “We did lots of trial and error to identify all the sources of noise and isolate the imager from all of them but were still left with the most major one, the human body,” Huang says. They ultimately developed a stretchable electromagnetic shield made of the same liquid metal used in the device’s electrodes “to remove the noise from the human body while maintaining the flexibility and stretchability of the imager. It turned out that the optimized shielding layer worked very well, and we could finally remove the noise from the images.”

    The researchers note that right now, the device is connected through cables to a computer. They have developed a wireless version of the patch, details of which are in a study currently under review, Hu says.

    The study’s senior author, Sheng Xu, a professor at UC San Diego, plans to commercialize this technology through Softsonics, a company cofounded with another of the study’s coauthors, Shu Xiang. Further work is needed to develop this device into a fully integrated wearable system with a built-in power source, Hu says.



  • Portable Life-Support Device Provides Critical Care in Conflict and Disaster Zones


    This is a sponsored article brought to you by LEMO.

    A bomb explodes — medical devices spring into action.

    It is only in war that both sides of human ingenuity coexist so brutally. On the one side, it innovates to wound and kill, on the other it heals and saves lives. Side by side, but viscerally opposed.

    Dr. Joe Fisher is devoted to the light side of human ingenuity, medicine. His research at Toronto’s University Health Network has made major breakthroughs in understanding the absorption and use of oxygen by the body. Then, based on the results, he developed new, highly efficient methods of delivering oxygen to patients.

    In 2004, together with other physicians and engineers, he created a company to develop solutions based on his innovations. He named it after the Toronto neighborhood where he still lives — Thornhill Medical.



    Meanwhile, the studies conducted by Dr. Fisher started drawing attention from the U.S. Marines. They had been looking for solutions to reduce the use of large, heavy, and potentially explosive oxygen tanks transported by their medical teams to military operation sites.

    “At first, they asked us if we could prove that it was possible to ventilate patients using much less oxygen,” says Veso Tijanic, COO of Thornhill Medical. “We proved it. Then, they asked us whether we could develop a device for this. Finally, whether we could integrate other functionalities into this device.”

    These back-and-forths lasted about five years, gradually combining science and technology. They resulted in a first product, launched in 2011: MOVES, an innovative portable life support unit.

    This cooperation has also deeply transformed Thornhill Medical.

    “We used to see ourselves as an R&D laboratory, we have now also become a medical device manufacturer!” says Tijanic.

    Whilst the U.S. Marines started using MOVES, Thornhill Medical continued to innovate. In 2017, it launched an enhanced version, MOVES SLC.

    Today, the Canadian company employs a staff of about 70. It continues to do research and development with its own team and partners around the world, publishing regularly in scientific journals. It has sold MOVES SLC around the world and launched two other solutions, MADM and ClearMate.

    MADM is a portable device (capable of functioning on extreme terrain) that connects to any ventilator to deliver gas anaesthesia. ClearMate is an instrument — also portable and requiring no electricity — that allows quick action to be taken in case of carbon monoxide poisoning. This is the most common respiratory poisoning, and every second without treatment worsens the consequences for the brain and other organs.

    An innovative ventilator design

    Just like these two products, the heart of MOVES SLC is a technology stemming directly from Dr. Fisher’s research in breathing sciences. It includes a ventilator operating as a circle circuit: it recovers the oxygen the patient exhales, carefully controls its concentration (high FiO2), and returns only the strict minimum to the patient.

    MOVES SLC operates with significantly less oxygen than traditional open-circuit ventilators require. It needs so little that a small oxygen concentrator integrated into MOVES SLC, which extracts oxygen from ambient air, is sufficient; there is no need for supplies from large oxygen tanks.
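
    A back-of-the-envelope comparison shows why a closed circuit needs so little oxygen: an open circuit vents oxygen-enriched gas with every breath, while an ideal circle circuit only has to replace what the body actually consumes. The figures below are generic textbook values assumed for illustration, not MOVES SLC specifications.

    ```python
    # Back-of-the-envelope oxygen demand: open circuit versus closed (circle)
    # circuit. All figures are generic textbook values assumed for illustration,
    # not MOVES SLC specifications.
    minute_ventilation = 7.0   # L/min of gas delivered to the patient (assumed)
    fio2 = 0.60                # target fraction of inspired oxygen (assumed)
    ambient_o2 = 0.21          # oxygen fraction of room air
    metabolic_uptake = 0.25    # L/min of O2 a resting adult actually consumes (typical)

    # Open circuit: supplemental O2 needed to enrich room air to the target FiO2,
    # essentially all of which is vented with each exhalation.
    open_circuit_o2 = minute_ventilation * (fio2 - ambient_o2) / (1.0 - ambient_o2)

    # Closed circuit: ideally, only the oxygen the body consumes must be replaced.
    closed_circuit_o2 = metabolic_uptake

    print(f"Open circuit:   ~{open_circuit_o2:.1f} L O2/min")
    print(f"Closed circuit: ~{closed_circuit_o2:.2f} L O2/min")
    ```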

    Yet MOVES SLC is more than an innovative, ultra-efficient ventilator, says Tijanic: “It is a complete life support device.” In addition to its integrated oxygen concentrator, it also includes suction and several sensors that monitor vital signs, and it brings everything together in a single interface that can be operated on the device or from a mobile touch screen.


    The MOVES SLC portable life-support unit.


    The user can intubate a patient and monitor their ventilation (FiO2, ETCO2, SpO2, ABP and other indicators) in addition to the patient’s temperature (two sensors), blood pressure (internal and external) and 12-lead ECG. The evolution of these measurements can be reviewed over the previous 24 hours.

    All of this fits in a device measuring only 84 cm x 14 cm x 25 cm and weighing about 21 kilograms (including interchangeable batteries), which can be slung across the shoulder.

    “MOVES SLC represents no more than 30 percent of the volume and weight of traditional equipment — ventilator, concentrator, suction, monitoring device,” adds the COO. Integrating various technologies into such a lightweight, compact package was, unsurprisingly, a major challenge for the engineers. Still, it was not the most difficult one.

    Making medical device components capable of withstanding extreme conditions proved even more complex. “Traditional technologies were designed to function in hospitals,” explains Tijanic. “MOVES must function in the middle of military operations, and be resistant to vibrations, crashes and shock, continue operating smoothly in sandstorms or in the rain, in temperatures between -26°C and +54°C.”

    Sometimes, the engineers could take existing components and develop protective features for them. Occasionally, they repurposed components from other markets (oxygen sensors, for instance) and integrated them into the device. And in other cases, they had to start from scratch, creating their own robust components.

    Military-grade ruggedness

    The challenge was successfully overcome: “MOVES is designed under the highest industry standards and has been tested and fully certified by various regulatory bodies.” It has been certified to MIL-STD-810G, a U.S. military ruggedness standard, verified by over twenty different tests (acoustic vibration, explosive atmosphere, etc.).

    The device is hence approved for use — not only transported, but actually used on a patient — in various helicopters, aircraft and land vehicles. And this makes a world of difference for Tijanic. “Critical care, such as we provide, normally requires specially equipped facilities or vehicles. With MOVES SLC, any place or vehicle — even civilian — of sufficient size, is an opportunity for treatment.”

    Thornhill’s fully integrated mobile life support has been used by military medical teams for five years already. The device is currently saving lives in Ukraine, Thornhill Medical having donated a number of them as well as its mobile anesthesia delivery module MADM.

    An Introduction to MOVES SLC



    In July 2022, the U.S. Army published a report summarizing its medical modernization strategy. The 22-page report confirms the need for ever more lightweight, compact, and cost-effective technology. It also mentions the use of artificial intelligence for more autonomous monitoring of the patients’ medical condition. Thornhill is exploring the AI angle.

    “There isn’t always a qualified expert available everywhere,” explains Tijanic. “AI could ensure the optimum settings of the device, and then modify these depending on how the patient’s condition evolves.”

    Thornhill is also exploring another solution for cases where no experts are available on the spot. Last April, a MOVES SLC was used in a demonstration of “remote control of ventilators and infusion pumps to support disaster care.” Operators based in Seattle successfully controlled a device located in Toronto. Science fiction thus becomes science, and turns into reality.

    The Canadian company continues innovating to heal and save lives on rough, chaotic terrain and in the most extreme and unpredictable circumstances. It is driven by medical and technological progress. It is also driven by a many-thousand-year-old trend: humans will likely never stop waging war.




  • Designing a Miniaturized Wastewater Treatment Plant for Micropollutant Degradation


    This sponsored article is brought to you by COMSOL.

    The 1985 action-adventure TV series MacGyver showcased the life of Angus MacGyver, a secret agent who solved problems using items he had on hand. For example, in one episode, he made a heat shield out of used refrigerator parts. In another, he made a fishing lure with a candy wrapper. More than three decades later, the show still has relevance. The verb MacGyver, to design something in a makeshift or creative way, was added to the Oxford English Dictionary in 2015.

    Try putting your MacGyver skills to the test: If you were handed some CDs, what would you make out of them? Reflective wall art, mosaic ornaments, or a wind chime, perhaps? What about a miniaturized water treatment plant?

    This is what a team of engineers and researchers is doing at Eden Tech, a company based in Paris, France, that specializes in the development of microfluidics technology. Within its R&D department, Eden Cleantech, the team is developing a compact, energy-saving water treatment system to help tackle the growing presence of micropollutants in wastewater. To analyze the performance of its AKVO system (named after the Latin word for water, aqua), which is made from CDs, Eden Tech turned to multiphysics simulation.


    Contaminants of Emerging Concern

    “There are many ways micropollutants make it into wastewater,” says Wei Zhao, a senior chemical engineer and chief product officer at Eden Tech. The rise of these microscopic chemicals in wastewater worldwide is a result of daily human activities. For instance, when we wash our hands with soap, wipe down our sinks with cleaning supplies, or flush medications out of our bodies, various chemicals are washed down the drain and end up in sewage systems. Some of these chemicals are classified as micropollutants, or contaminants of emerging concern (CECs). In addition to domestic waste, agricultural pollution and industrial waste are also to blame for the rise of micropollutants in our waterways.

    Micropollutants are added to the world’s lakes, rivers, and streams every day. Many conventional wastewater treatment plants are not equipped to remove these potentially hazardous chemical residues from wastewater.

    Unfortunately, many conventional wastewater treatment plants (WWTPs, Figure 1) are not designed to remove these contaminants. Therefore, they are often reintroduced to various bodies of water, including rivers, streams, lakes, and even drinking water. Although the risk they pose to human and environmental health is not fully understood, the increasing amount of pollution found in the world’s bodies of water is cause for concern.


    Figure 1: A wastewater treatment plant seen from above, with multiple tanks and channels filled with water.


    With this growing problem in mind, Eden Tech got to work on developing a solution, and thus AKVO was born. Each AKVO CD core is designed to have a diameter of 15 cm and a thickness of 2 mm. One AKVO cartridge is composed of a variable number of stacked CDs, combined to create a miniaturized treatment factory. One AKVO core treats 0.5 to 2 m³ of water per day, which means that an AKVO system composed of 10,000 CDs can cover average municipal needs. This raises the question: How can a device made from CDs decontaminate water?
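
    The quoted figures allow a quick sizing estimate: dividing a target daily flow by the per-core throughput gives the number of CDs a cartridge stack would need, as in the sketch below. The per-person wastewater volume is an assumption for illustration, not a number from the article.

    ```python
    # Quick sizing arithmetic from the figures quoted above: each CD core treats
    # roughly 0.5-2 m^3 of water per day. Per-person wastewater volume is assumed.
    def cores_needed(daily_flow_m3, per_core_m3=(0.5, 2.0)):
        worst, best = per_core_m3
        return int(daily_flow_m3 / best), int(daily_flow_m3 / worst)

    # A town of ~50,000 people producing ~0.2 m^3 of wastewater per person per day
    low, high = cores_needed(50_000 * 0.2)
    print(f"Roughly {low} to {high} CD cores")  # brackets the ~10,000-CD figure cited above
    ```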

    A Sustainable Wastewater Treatment Method

    A single AKVO system (Figure 2) consists of a customizable cartridge filled with stacked CDs that each have a microchannel network inscribed on them. It removes undesirable elements in wastewater, like micropollutants, by circulating the water through its microchannel networks. These networks are energy efficient because they only require a small pump to circulate and clean large volumes of water. The AKVO system’s cartridges can easily be replaced, with Eden Tech taking care of their recycling.


    Figure 2: The AKVO device, which consists of a transparent cylinder filled with a stack of CDs.


    AKVO’s revolutionary design combines photocatalysis and microfluidics into one compact system. Photocatalysis, a type of advanced oxidation process (AOP), is a fast and effective way to remove micropollutants from wastewater. Compared to other AOPs, it is considered safer and more sustainable because it is powered by a light source. During photocatalysis, light is absorbed by photocatalysts that create electron-hole pairs, which generate free hydroxyl radicals able to react with target pollutants and degrade them. The combination of photocatalysis and microfluidics for the treatment of wastewater has never been done before. “It is a very ambitious project,” said Zhao. “We wanted to develop an innovative method in order to provide an environmentally friendly, efficient way to treat wastewater.” AKVO’s current design did not come easily, as Zhao and his team faced several design challenges along the way.

    Overcoming Design Challenges

    When in use, a chemical agent (catalyst) and wastewater are dispersed through AKVO’s microchannel walls. The purpose of the catalyst, titanium dioxide in this case, is to react with the micropollutants and help remove them in the process. However, AKVO’s fast flow rate complicates this action. “The big problem is that [AKVO] has microchannels with fast flow rates, and sometimes when we put the chemical agent inside one of the channels’ walls, the micropollutants in the wastewater cannot react efficiently with the agent,” said Zhao. To increase the chance of contact between the micropollutants and the immobilized chemical agent, Zhao and his team opted to use a staggered herringbone micromixer (SHM) design for AKVO’s microchannel networks (Figure 3).


    Figure 3: Simulation of the microchannel network system, which has V-shaped channels along a rectangular container where the water enters the inlet and exits the outlet.


    To analyze the performance of the SHM design to support chemical reactions for micropollutant degradation, Zhao used the COMSOL Multiphysics software.

    Simulating Chemical Reactions for Micropollutant Degradation

    In his work, Zhao built two different models in COMSOL Multiphysics (Figure 4), named the Explicit Surface Adsorption (ESA) model and the Converted Surface Concentration (CSC) model. Both of these models account for chemical and fluid phenomena.


    Figure 4: Screenshot of the COMSOL simulation software showing the microfluidic system.


    In both models, Zhao found that AKVO’s SHM structure creates vortices in the flow moving through it, which gives the micropollutants and the chemical agent a longer reaction period and enhances the mass transfer between fluid layers. However, the results of the ESA model showed that the design purified only about 50 percent of the micropollutants under treatment, less than Zhao expected.


    Figure 5: Screenshot of the COMSOL simulation showing the water behavior inside the microchannels.


    Unlike the ESA model (Figure 5), in the CSC model, it is assumed that there is no adsorption limitation. Therefore, as long as a micropollutant arrives at the surface of a catalyst, a reaction happens, which has been discussed in existing literature (Ref. 1). In this model, Zhao analyzed how the design performed for the degradation of six different micropollutants, including gemfibrozil, ciprofloxacin, carbamazepine, clofibric acid, bisphenol A, and acetaminophen (Figure 6). The results of this model were in line with what Zhao expected, with more than 95 percent of the micropollutants being treated.


    Figure 6: Plot showing the photodegradation of pollutants and the flow of water through the device over time, with the photodegradation yield increasing as the flow rate decreases.
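
    The trend in Figure 6, higher photodegradation yield at lower flow rates, follows from residence time in the illuminated channel. The sketch below assumes pseudo-first-order kinetics with a made-up channel volume and rate constant; it is a simple stand-in for intuition, not Eden Tech’s COMSOL models.

    ```python
    # Pseudo-first-order sketch of photodegradation yield versus flow rate: slower
    # flow means longer residence time in the illuminated microchannel. Channel
    # volume and rate constant are assumptions, not Eden Tech's COMSOL models.
    import numpy as np

    CHANNEL_VOLUME_UL = 50.0   # illuminated microchannel volume, microliters (assumed)
    RATE_CONSTANT = 0.5        # pseudo-first-order rate constant, 1/s (assumed)

    def degradation_yield(flow_ul_per_s):
        residence_time = CHANNEL_VOLUME_UL / flow_ul_per_s                # seconds
        return 100.0 * (1.0 - np.exp(-RATE_CONSTANT * residence_time))    # percent degraded

    for q in (1.0, 5.0, 25.0):  # flow rates in microliters per second
        print(f"flow {q:5.1f} uL/s -> yield {degradation_yield(q):5.1f} %")
    ```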


    “We are really satisfied with the results of COMSOL Multiphysics. My next steps will be focused on laboratory testing [of the AKVO prototype]. We are expecting to have our first prototype ready by the beginning of 2022,” said Zhao. The prototype will eventually be tested at hospitals and water treatment stations in the south of France.

    Using simulation for this project has helped the Eden Tech team save time and money. Developing a prototype of a microfluidic system, like AKVO, is costly. To imprint microchannel networks on each of AKVO’s CDs, a microchannel photomask is needed. According to Zhao, to fabricate one photomask would cost about €3000 (3500 USD). Therefore, it is very important that they are confident that their system works well prior to its fabrication. “COMSOL Multiphysics has really helped us validate our models and our designs,” said Zhao.

    Pioneer in the Treatment of Micropollutants

    In 2016, Switzerland introduced legislation mandating that wastewater treatment plants remove micropollutants from wastewater. Their goal? Filter out over 80 percent of micropollutants at more than 100 Swiss WWTPs. Following their lead, many other countries are currently thinking of how they want to handle the growing presence of these contaminants in their waterways. AKVO has the potential to provide a compact, environmentally friendly way to help slow this ongoing problem.

    The next time you go to throw out an old CD, or any other household item for that matter, ask yourself: What would MacGyver do? Or, better yet: What would Eden Tech do? You might be holding the building blocks for their next innovative design.

    Reference

    1. C. S. Turchi, D. F. Ollis, “Photocatalytic degradation of organic water contaminants: Mechanisms involving hydroxyl radical attack,” Journal of Catalysis, Vol. 122, p. 178, 1990.

    MacGyver is a registered trademark of CBS Studios Inc. COMSOL AB and its subsidiaries and products are not affiliated with, endorsed by, sponsored by, or supported by CBS Studios Inc.



  • RF-Induced Heating of Implanted Medical Devices in MRI Systems


    This sponsored article is brought to you by COMSOL.

    Over 80 million magnetic resonance imaging (MRI) scans are conducted worldwide every year. MRI systems come in many different shapes and sizes, and are identified by their magnetic field strength. These scanners can range from below 0.55 tesla (T) to 3 T and beyond, where tesla is the unit for the static magnetic field strength. For patients with implanted metallic medical devices, the strong magnetic fields generated by MRI systems can pose several safety concerns.

    For instance, high-powered magnets generate forces and torques that can cause the implant to migrate and potentially harm the patient. In addition, the gradient coils in MRI systems, used for spatial localization, can cause gradient-induced heating, vibrations, stimulation of the tissue, and device malfunction. Lastly, the large radiofrequency (RF) coil in MRI systems can cause the electrically conductive implant to electromagnetically resonate (called the “antenna effect”), resulting in RF-induced heating that can potentially burn the patient (Ref. 1).

    MED Institute, a full-service contract research organization (CRO) for the medical device industry, is using multiphysics simulation to better understand the effects of RF-induced heating of medically implanted devices for patients who need MRI scans (Ref. 2).


    Figure 1: A woman and a man in medical aprons next to an MRI machine.


    Standardized Test Methods for Medical Devices

    MED Institute provides support throughout the entire product development cycle. Its MRI Safety team helps manufacturers evaluate and perform physical testing of their medical devices for safety and compliance in the MRI environment (Figure 1). The team works closely with the Food and Drug Administration (FDA), which oversees the development of medical products to ensure safe and effective use. Furthermore, the team complies with the standards of the American Society for Testing and Materials (ASTM) and International Organization for Standardization (ISO). Specifically, it follows the ASTM F2182 standard to measure RF-induced heating of a medical implant within a gel phantom (Figure 2) and follows ISO/TS 10974 to evaluate electrically active implantable medical devices (AIMD) during MRI.

    The gel phantom used for testing is a rectangular acrylic container filled with a conductive gel that approximates the thermal and electrical properties of average human tissue (Ref. 3). The phantom is placed on the patient table inside the RF coil of an MRI scanner and fiber optic temperature probes (1 mm in diameter) are attached to the device before submerging it into the gel. The probes measure the temperature changes experienced by the device during the MRI scan. This type of physical experiment is used often, but it poses some potential problems. For instance, movement within the phantom can introduce uncertainty into the experiment, and inaccurate probe placement can lead to invalid results. In addition, depending on the materials of construction and their magnetic susceptibility, magnetic force could also be an issue (Ref. 4).


    Two images, on the left two people performing a test using wires on a plastic bin with devices inside, and on the right an illustration of a test device inside an RF body coil.


    To help address these issues, the team at MED Institute uses computational modeling and simulation as an alternative to physical testing. David Gross, PhD, PE, Director of MRI Safety Evaluations and Engineering Simulations, leads a team of analysts that use simulation to gain a better understanding of physics-based problems. He says, “The simulation provides us with 3D temperature contours anywhere within a volume of interest; we are not limited to discrete point-probe measurements, and we do not have to worry about the inaccuracies of the equipment or uncertainty of probe placement from the experiment.”

    The team has experience conducting these simulations for closed-bore MRI systems, in which a patient is contained in a compact tube. The team is now using simulation to perform these same analyses for open-bore systems (Figure 3), which have wider physical access, making them beneficial for “imaging pediatric, bariatric, geriatric and claustrophobic patients”, as is explained on the MED Institute website (Ref. 5).


    Three images, on the left an MRI system, in the middle a coil with a virtual human model inside, and on the right a knee implant inside two coils.


    Multiphysics Simulation for RF-Induced Heating

    With COMSOL Multiphysics, MED Institute is able to evaluate the RF-induced temperature rise of implants and compare the results for various sizes and constructs of a device within a product family to determine a worst-case configuration. The analysts at MED can import a CAD file of a client’s device using the CAD Import Module, an add-on to COMSOL Multiphysics. For RF-induced heating, the team uses the RF Module and Heat Transfer Module add-on products to couple electromagnetics with transient heat transfer. The RF Module solves Maxwell’s wave equation at every point in the model affected by electromagnetic fields. This is done as a steady-state frequency-domain analysis, which is then sequentially coupled to the transient heat transfer. With the Heat Transfer Module, the team is also able to solve the heat conduction equations.
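    To give a feel for this sequential coupling, the sketch below (not COMSOL code; the grid, material values, and field data are all invented for illustration) converts a frequency-domain electric-field solution into a volumetric heat source via the specific absorption rate, SAR = σ|E|²/(2ρ), and then steps a transient heat-conduction equation forward in time.

```python
import numpy as np

# --- Step 1: frequency-domain EM result -> volumetric heat source (SAR) ---
# Assume |E| (V/m) on a coarse 2D grid is already available from an EM solver;
# here it is synthetic data standing in for the frequency-domain solution.
nx, ny = 60, 60
dx = 2e-3                                   # grid spacing, m
x, y = np.meshgrid(np.arange(nx) * dx, np.arange(ny) * dx, indexing="ij")
E_mag = 300.0 * np.exp(-((x - 0.06) ** 2 + (y - 0.06) ** 2) / 1e-4)   # V/m

sigma = 0.47                                # gel electrical conductivity, S/m
rho = 1000.0                                # density, kg/m^3
sar = sigma * E_mag ** 2 / (2.0 * rho)      # specific absorption rate, W/kg

# --- Step 2: transient heat conduction driven by the SAR field ---
k, c = 0.5, 4150.0                          # conductivity (W/m/K), specific heat (J/kg/K)
alpha = k / (rho * c)                       # thermal diffusivity, m^2/s
dt = 0.05 * dx ** 2 / alpha                 # well under the explicit stability limit
T = np.zeros((nx, ny))                      # temperature rise above baseline, K

t_end = 360.0                               # simulate a 6-minute RF exposure
for _ in range(int(t_end / dt)):
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
           np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4 * T) / dx ** 2
    T += dt * (alpha * lap + sar / c)
    T[0, :] = T[-1, :] = T[:, 0] = T[:, -1] = 0.0   # fixed-temperature boundary

print(f"Peak temperature rise: {T.max():.2f} K")
```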

    In the example below, MED Institute imported a CAD file of a knee implant into the COMSOL Multiphysics software. The geometry of the implant included a stem extension, tibial tray, femoral tray, and other components. All of these components can have various sizes and can be assembled in various ways, and patients with the implant can be scanned in various MRI systems that create different electromagnetic fields. With the overwhelming number of permutations these variables can produce, it is often not clear which configuration would result in the worst-case RF-induced heating.

    “With our Medical Device Development Tool (MDDT), we can not only augment physical testing but even replace it with simulation in some cases. The immediate, positive results are that our clients are able to have their products evaluated quicker and at less cost because we are able to rely on the simulation.”
    —David Gross, MED Institute Director of MRI Safety Evaluations and Engineering Simulations

    “This is where the use of simulation comes in; you focus your efforts on the primary factors that can change the resonance of a particular implant,” Gross says. By using the COMSOL software, the organization is able to better understand the relative bounds of where it would expect to see resonance and how the device behaves under different electromagnetic fields. This helps with performing sensitivity analyses, where the team can test what causes the change in resonance, such as modifying the diameter of the stem or other components of the implant. For this particular case, the team ran hundreds of simulations to determine the worst-case device size and worst-case RF frequency.

    Using worst-case analysis is crucial in the verification process because it allows manufacturers to test different factors for a wide range of devices — such as determining which size brings the most complications — rather than conducting physical testing for every variant of one product (Ref. 6). “Performing multiple physical experiments becomes very expensive and time-consuming, especially when you account for the hourly cost of using a physical MRI scanner,” says Gross.
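    The sweep itself is conceptually simple; what is expensive is each underlying simulation. A toy sketch of the search over device sizes and RF frequencies is shown below, where simulate_peak_temperature_rise is a hypothetical stand-in for a full coupled EM–thermal run and its response surface is invented.

```python
import itertools

def simulate_peak_temperature_rise(stem_length_mm: float, rf_freq_mhz: float) -> float:
    """Hypothetical stand-in for a full EM + thermal simulation run.

    A real run would solve the coupled model for this device size and RF
    frequency and return the peak temperature rise in kelvin.
    """
    # Toy response surface: heating peaks near an (arbitrary) resonant
    # stem length that shrinks as the RF frequency increases.
    resonant_length_mm = 10_000.0 / rf_freq_mhz
    return 4.0 / (1.0 + ((stem_length_mm - resonant_length_mm) / 25.0) ** 2)

stem_lengths = [50, 75, 100, 125, 150, 175, 200]   # mm
frequencies = [51, 64, 128]                        # MHz (roughly 1.2 T, 1.5 T, 3 T systems)

worst = max(
    ((size, freq, simulate_peak_temperature_rise(size, freq))
     for size, freq in itertools.product(stem_lengths, frequencies)),
    key=lambda result: result[2],
)
print(f"Worst case: {worst[0]} mm stem at {worst[1]} MHz -> {worst[2]:.2f} K rise")
```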


    Four images showing simulations of a knee implant in a gel during an MRI scan.


    As shown in Figure 4, the electric field in the gel phantom of a 1.2 T open-bore system (upper left) is very different from a 1.5 T closed-bore system (upper right). The knee implant was simulated in both systems, where the results show a different resonance and maximum temperature rise at the end of the stem (lower images).
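    As background (not stated in the article), the RF coil of an MRI scanner operates near the proton Larmor frequency, which scales linearly with the static field strength, so the two systems in Figure 4 excite the implant at different frequencies:

```python
GAMMA_MHZ_PER_T = 42.58        # proton gyromagnetic ratio divided by 2*pi, MHz per tesla

for b0 in (1.2, 1.5, 3.0):     # static field strengths, tesla
    print(f"{b0} T scanner -> RF frequency ~ {GAMMA_MHZ_PER_T * b0:.0f} MHz")
# 1.2 T ~ 51 MHz, 1.5 T ~ 64 MHz, 3.0 T ~ 128 MHz
```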

    Using COMSOL allowed the team to better understand how a device behaves under electromagnetic fields. With these results, the team was then able to determine where they should place temperature probes while physically testing the device in an actual MRI system to obtain temperature rise results.

    FDA Qualification of MED Institute’s Virtual MRI Safety Evaluations

    MED Institute’s experience with using simulation to test RF-induced heating of medical devices has inspired development of a promising new simulation tool that accelerates the product development cycle. The MED Institute team submitted this simulation tool to the FDA’s Medical Device Development Tool (MDDT) program, which allows the FDA to evaluate new tools with the purpose of furthering medical products and studies. As stated on the FDA website, “The MDDT program is a way for the FDA to qualify tools that medical device sponsors can choose to use in the development and evaluation of medical devices.” (Ref. 7) Once qualified, the FDA recognizes the tool as an official MDDT.

    In November 2021, MED Institute was granted FDA qualification of its MDDT, “Virtual MRI Safety Evaluations of Medical Devices”. This is an evaluation process that involves using multiphysics modeling and simulation to test the interactions of medical devices in an MRI environment. The tool is used for modeling an RF coil of an MRI system, an ASTM gel phantom, and a medical device placed within the gel. Simulation is then used to analyze the electromagnetics and the heat generated around the device (Ref. 8).

    After testing is complete, the labeling of the device is determined per ASTM F2503 or, if it is an electrically active implant, per ISO/TS 10974 testing. The labeling is placed on the device packaging and inside the instructions for use (IFU) so that an MRI technologist or radiologist can see the relevant information for a patient with an implanted device.

    “With our MDDT, we can not only augment physical testing but even replace it with simulation in some cases,” says Gross.


    Modeling and Simulation Support from the FDA

    Over the years, MED Institute has evaluated many medical devices for MRI safety with COMSOL Multiphysics simulations. It has found that COMSOL is a powerful and efficient platform for solving complex multiphysics problems. “The immediate, positive results are that our clients are able to have their products evaluated quicker and at less cost because we are able to rely on the simulation. It does not require them to send us the actual product to test for RF-induced heating,” says Gross.

    The FDA has been supportive of computational modeling and is willing to evaluate and accept data from simulation in lieu of physical testing. “It is important for medical device sponsors to know that they have the encouragement and support of the Agency,” Gross says. MED Institute has had the privilege of working alongside the FDA for many years for the benefit of patients. “It goes to show that they are invested and believe in the power of modeling and simulation,” Gross adds.


    References

    1. “Thermal Injuries,” Questions and Answers in MRI; https://mri-q.com/rf-burns.html
    2. D. Gross, “Top 10 Challenges for MRI Safety Evaluation,” MED Institute Inc., June 2020; https://medinstitute.com/blog/top-10-challenges-for-mri-safety-evaluation/
    3. “Medical Device MRI Safety Testing,” MED Institute Inc., April 2016; https://medinstitute.com/blog/medical-device-mri-safety-testing-where-should-a-hip-implant-be-placed-in-an-astm-f2182-test-to-measure-the-maximum-rf-induced-heating/
    4. “Keynote: RF-Induced Heating of Medical Devices in Open-Bore MRI,” COMSOL; https://www.comsol.com/video/keynote-rf-induced-heating-of-medical-devices-in-open-bore-mri
    5. “Radiofrequency-Induced Heating in Open Bore MRI,” MED Institute Inc., Aug. 2020; https://medinstitute.com/download/radiofrequency-induced-heating-in-open-bore-mri/
    6. “The Worst-case Scenario,” Packaging Compliance Labs; https://pkgcompliance.com/the-worst-case-scenario/
    7. “Medical Device Development Tools (MDDT),” U.S. Food and Drug Administration; https://www.fda.gov/medical-devices/science-and-research-medical-devices/medical-device-development-tools-mddt
    8. “MDDT Summary of Evidence and Basis of Qualification Decision for Virtual MRI Safety Evaluations of Medical Devices,” Apr. 2021; https://www.fda.gov/media/154181/download



  • ChatGPT’s AI Can Help Screen For Alzheimer’s


    The AI-powered chatbot ChatGPT is taking the Internet by storm with its impressive language capabilities, helping to draw up legal contracts as well as write fiction. But it turns out that the underlying technology could also help spot the early signs of Alzheimer’s disease, potentially making it possible to diagnose the debilitating condition sooner.

    Catching Alzheimer’s early can significantly improve treatment options and give patients time to make lifestyle changes that could slow progression. Diagnosing the disease, though, typically requires brain imaging or lengthy cognitive evaluations, which can be both expensive and time-consuming and therefore unsuitable for widespread screening, says Hualou Liang, a professor of biomedical engineering at Drexel University, in Philadelphia.

    A promising avenue for early detection of Alzheimer’s is automated speech analysis. One of the most common and noticeable symptoms of the disease is problems with language, such as grammatical mistakes, pausing, repetition, or forgetting the meaning of words, says Liang. This has led to growing interest in using machine learning to spot early signs of the disease in the way people talk.

    “The hope is that we can use machine learning to pick up these kinds of signals that allow us to do early diagnostics.”
    —Hualou Liang, Drexel University

    Normally this relies on purpose-built models, but Liang and his colleagues wanted to see if they could repurpose the technology behind ChatGPT, OpenAI’s large language model GPT-3, to spot the telltale signs of Alzheimer’s. They discovered it could discriminate between transcripts of speech from Alzheimer’s patients and healthy volunteers well enough to predict the disease with 80 percent accuracy, which represents state-of-the-art performance.

    “These large language models like GPT-3 are so powerful they can pick up these kind of subtle differences,” says Liang. “If the subject has some kind of issue [involving] Alzheimer’s, and that’s already reflected in the language, the hope is that we can use machine learning to pick up these kinds of signals that allow us to do early diagnostics.”

    The researchers tested their approach on a collection of 237 audio recordings taken from healthy volunteers and Alzheimer’s patients, which were converted to text using a pre-trained speech recognition model. To enlist the help of GPT-3, the researchers made use of one of its less well-known capabilities. Its API makes it possible to feed a chunk of text into the model and get it to spit out what is known as an “embedding”—a numerical representation of a piece of text that encodes its meaning and can be used to assess its similarity to other text.
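    A minimal sketch of that embedding step is shown below, assuming the pre-1.0 openai Python client; the model name and placeholder key are illustrative, not the study’s exact configuration.

```python
import openai

openai.api_key = "YOUR_API_KEY"          # placeholder credential

def embed_transcript(text: str) -> list[float]:
    """Return a single embedding vector for a whole speech transcript."""
    response = openai.Embedding.create(
        model="text-embedding-ada-002",  # illustrative GPT-3-era embedding model
        input=text,
    )
    return response["data"][0]["embedding"]

vector = embed_transcript("Well, the boy is taking the cookie jar from the shelf...")
print(len(vector))                       # dimensionality of the embedding
```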

    While most machine learning models deal with word embeddings, one of the novel features of GPT-3, says Liang, is that it’s powerful enough to produce embeddings for entire paragraphs. And because of the model’s vast size and the huge amount of data used to train it, it is able to produce very rich representations of the text.

    The researchers used this capability to create embeddings for all of the transcripts from both Alzheimer’s patients and healthy individuals. They then took a selection of these embeddings, combined with labels to say which group they came from, and used them to train machine-learning classifiers to distinguish between the two groups. When tested on unseen transcripts the best classifier achieved an accuracy of 80.3 percent, as reported in a paper in PLOS Digital Health.
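    The downstream step can be sketched in a few lines of scikit-learn (an illustrative choice, not necessarily the classifiers used in the paper): train on labeled embeddings, then score held-out transcripts. Random vectors stand in for the real embeddings here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# X: one GPT-3 embedding per transcript; y: 1 = Alzheimer's, 0 = healthy control.
# Random data stands in for the real embeddings and labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(237, 1536))
y = rng.integers(0, 2, size=237)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Held-out accuracy: {accuracy_score(y_test, clf.predict(X_test)):.3f}")
```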

    That was significantly better than the 74.6 percent the researchers achieved when they applied a more conventional approach to the speech data, which relies on acoustic features that have to be painstakingly identified by experts. They also compared their technique to several cutting-edge machine-learning approaches that use large language models too but include an extra step in which the model is laboriously fine-tuned using some of the transcripts from the training data. They matched the performance of the top model and outperformed the other two.

    Interestingly, when the researchers tried fine-tuning, the GPT-3 model performance actually dropped. This might seem counter-intuitive, but Liang points out that this is probably due to the mismatch in size between the vast amount of data used to train GPT-3 and the small amount of domain-specific training data available for fine-tuning.

    While the team does achieve state-of-the-art results, Frank Rudzicz, an associate professor of computer science at the University of Toronto, says relying on privately owned models to carry out this kind of research does raise some problems. “Part of the reason these closed APIs are limiting is that we also can’t inspect or deeply modify the internals of those models or do a more complete set of experiments that would help elucidate potential sources of error that we need to avoid or correct,” he says.

    Liang is also open about the limitations of the approach. The model is nowhere near accurate enough to properly diagnose Alzheimer’s, he says, and any real-world deployment of this kind of technology would be as an initial screening step designed to direct people toward a specialist for a full medical evaluation. As with many AI-based approaches, it’s also hard to know exactly what the model is picking up on when it detects Alzheimer’s, which may be a problem for medical staff. “The doctor, very naturally, would ask why you get these results,” says Liang. “They want to know what feature is really important.”

    Nonetheless, Liang thinks the approach holds considerable promise and he and his colleagues are planning to build an app that can be used at home or in a doctor’s office to simplify screening of the disease.



  • A New Treatment for Arthritis: Vagus-Nerve Stimulation


    Monique Robroek once had such crippling arthritis that, even with the best available medications, she struggled to walk across a room. But thanks to an electronic implant fitted under her skin, she managed to wean herself off all her drugs and live pain-free for nearly a decade—until recently, when a viral illness made her rheumatoid arthritis (RA) flare up again.

    This article is part of our special report Top Tech 2023.

    Robroek’s long remission is “very impressive” and rare among patients with RA, says her doctor Frieda Koopman, a rheumatologist at Amsterdam UMC, in the Netherlands. Robroek’s experience highlights the immense potential of so-called bioelectronic medicine, also known as electroceuticals, an emerging field of treatment for diseases that have traditionally been managed with pharmaceuticals alone.


    Robroek is also an outlier, though. Koopman led a landmark 17-person trial that tested whether modulating the nervous system’s electrical-signaling patterns could tamp down inflammation and joint pain in RA. Robroek was one of only a handful who achieved appreciable and sustained reductions in disease severity, according to the 2016 paper.

    An illustrated profile of a woman showing a branching nerve in her neck and a blue rectangle beside it. The SetPoint implant is inserted beside the patient’s vagus nerve, which travels down from the brain to innervate the spleen and other vital organs. Chris Philpot

    Pilot studies like Koopman’s are one thing, but scientific certainty demands randomized, sham-controlled trials. Doctors, neuroscientists, and bioengineers should soon get a better sense of the performance of electroceutical devices. In late 2023, SetPoint Medical, the Valencia, Calif., company that sponsored Koopman’s initial trial, will report preliminary findings from Reset-RA, the first large-scale examination of nerve stimulation for an autoimmune condition. Like the earlier trial, the Reset-RA study targets the vagus nerve, the main conduit of brain–body communication, in an attempt to fight inflammation.

    Expectations are charged. Although devices that harness electrical impulses are already widespread in medicine, these platforms all tap into neural circuits that directly impact diseased tissues; for example, deep-brain stimulators help with symptoms of Parkinson’s disease by hacking the brain’s motor control center. None take aim at what Kevin Tracey, in an influential 2002 article, termed the “inflammatory reflex,” a neural network that indirectly regulates immune responses to infection and injury through the vagus nerve and its connected organs.

    Tracey, a former neurosurgeon who leads the Feinstein Institutes for Medical Research in Manhasset, N.Y., was the first to show that vagus nerve stimulation in rats could suppress the release of immune-signaling molecules. He later linked the effect to vagus nerve signals running into the spleen, a fist-size organ in the abdomen where immune cells are activated. In 2007, Tracey cofounded SetPoint to bring the treatment to the clinic.

    The company first repurposed an off-the-shelf implant used to control seizures in people with epilepsy. SetPoint optimized the stimulation parameters, using rodent studies for guidance, before giving the devices to patients like Robroek. She and the other recipients each had a cookie-size pulse generator surgically placed inside their chests. A wire snaked up the left side of the neck, where an electrode wrapped around the vagus nerve. It gave a gentle, 1-minute buzz of stimulation up to four times every day.

    The study targets the vagus nerve, the main conduit of brain-body communication, in an attempt to fight inflammation.

    Paul Peter Tak, an immunologist and biotech entrepreneur who led the trial with Koopman, was worried that patients with RA might not want to undergo surgery and have hardware implanted under their skin. But after publicizing the study on Dutch television, Tak was inundated with requests from patients who were sick of endless regimens of pills and injections. “This was my unplanned market research,” Tak says. “To my surprise, there are many patients who might prefer a one-and-done surgery.”

    While the study’s results were promising, the device itself was cumbersome. So SetPoint overhauled the platform, shrinking it down to a peanut-size neurostimulator with integrated electrodes and a wirelessly rechargeable battery, all encased inside a silicone holding pod that sits directly atop the vagus nerve in the neck. “It’s like going from an old car to a Tesla—it’s completely redesigned,” says SetPoint’s chief medical officer, David Chernoff.

    A small trial performed in 2018 demonstrated that this miniaturized device was safe. The 250-person Reset-RA study, in which half the participants receive no stimulation for the first 12 weeks after implantation, is now evaluating efficacy. If it works, trials for other autoimmune diseases could follow.

    A silver and white capsule-shaped object lies on a blue background. Writing is on the white part. SetPoint shrank the vagus nerve stimulator so that it can be implanted in a patient’s neck instead of the chest. SetPoint Medical

    Other companies, meanwhile, are testing devices that target nerves closer to the site of immune activation—“at the business end,” says Kristoffer Famm, president of the British company Galvani Bioelectronics. This end-organ approach to nerve zapping, argues Famm, should allow for more precise, disease-specific neuromodulation, without the off-target effects of shocking the vagus nerve, which is central to many bodily processes.

    A joint venture between Google’s parent company, Alphabet, and the British pharmaceutical company GSK, Galvani is now evaluating its implantable splenic nerve stimulator in small numbers of patients with RA. Another company called SecondWave Systems, headquartered in Minneapolis, is also testing whether spleen-directed ultrasound waves can offer the same immune-quelling effects without the burden of invasive surgery. Both Galvani and SecondWave expect to announce first-in-human data within the next year.

    “Neuromodulation is definitely having a moment,” says Gene Civillico, a neurotechnologist at Northeastern University, in Boston, who previously oversaw bioelectronics research efforts at the U.S. National Institutes of Health. “Controlling nervous tissue in a spatially and temporally precise way is going to be the way that we cure or modify a lot of disease states,” Civillico contends. In the coming year, SetPoint and other companies hope to prove him right.

    This article appears in the January 2023 print issue as “Arthritis Gets a Jolt.”





  • Self-Powered Electronic Pill Monitors Gut Chemistry


    A battery-free electronic pill that can help doctors wirelessly analyze molecules generated during vital bodily chemical reactions in the gut has been demonstrated for the first time, say the researchers behind a new study. The team of engineers, from the University of California at San Diego (UCSD), detailed their findings online in the 1 December edition of the journal Nature Communications.

    Previously, scientists had developed camera pills to help doctors look for cancer and other problems in the gut. Swallowable electronic capsules also exist that can report body temperature and heart rate, as well as other vital signs, and gases linked with inflammation and changes in diet.

    However, until now there was no electronic pill that could monitor glucose (the main sugar found in blood) and other metabolites—chemicals that are key to or formed during bodily metabolism—in the gut in real time. Instead, doctors have been analyzing metabolites using procedures that can result in significant discomfort, such as sticking probes into the stomach or intestines to collect fluids, while only generating snapshots of a constantly shifting environment.

    Now researchers have created an electronic pill to help analyze gut metabolites in real time.

    “Real-time data is always better—we can use the data to make real-time interventions,” says study cosenior author Patrick Mercier, an electrical engineer at UCSD. “If we are doing nutrition monitoring, we can better determine real-time intake of food, real-time uptake of glucose into the body, and so on. If we extend our solution to measure other parameters such as pH, we could do real-time quantified intervention in the form of antacids, for example.”

    This pill could help analyze the gastrointestinal disorders that affect roughly one in five people at some point in their lives, the scientists note. These may include diabetes, inflammatory bowel disease, obesity, and other chronic diseases caused by abnormal intestinal processes involving the absorption or digestion of gut metabolites.

    “I think the big takeaway here is that this allows us to see a new window into the body,” Mercier says. “The gut and its microbiome are such an integral part in human health, and yet we have no real way to sample its operation beyond stool samples or via invasive endoscopies, neither of which capture dynamics. The ability to study these dynamics will potentially revolutionize our understanding of the gut microbiome, and offer opportunities for on-demand personalized therapies.”

    Most prior electronic pills made use of batteries. However, batteries often contain toxic elements that could result in serious complications. In contrast, the new battery-free device runs off a tiny fuel cell that consumes glucose in the intestines for energy, while simultaneously monitoring changing glucose concentrations.

    Batteries had been an important element of electronic pill design, in part because of the power demands of wireless communications; the small antennas used in those capsules were not good at long-range transmissions, and the body can absorb a significant amount of electromagnetic power. In contrast, the new device relies on energy-efficient magnetic human body communication, which transmits signals using magnetic fields.

    An outer coating on the new pill helps protect it from stomach acids, which previously were a major barrier to creating an electronic capsule to analyze intestinal metabolites. This coating dissolves in the gut once the pill is out of harm’s way. Once that layer is gone, the pill is free to monitor the intestines.

    The battery-less design and ultralow-power circuitry used in the capsule helped conserve space and enable significant miniaturization. All in all, the prototype device measures 2.6 centimeters long and 0.9 centimeters in diameter.

    The scientists experimented on pigs, which have a gastrointestinal tract similar in size to that in humans. The pill continuously monitored glucose levels for 14 hours after it was swallowed, transmitting data every 5 seconds for 2 to 5 hours.

    If doctors could use this pill in humans, “we would be able to study clinical glucose uptake by the body upon ingestion of food in real time,” Mercier says. “This could be quite interesting in applications concerning nutrition, diabetes, and so on.”

    The researchers now plan to shrink the pill to make it easier to swallow, potentially enabling human use. In addition, they aim to add more sensors to “enable all sorts of other interesting clinical applications,” Mercier says. “The clinical applications are potentially vast.”



  • NYU Tandon Exploring “Megabase-Scale” Genetic Engineering


    This is a sponsored article brought to you by NYU Tandon School of Engineering.

    The human genome is built from 23 chromosomes. Within those chromosomes are around 3 billion base pairs of DNA. Within these base pairs are every subtlety of what makes you uniquely you — the way your eyes change color in different lighting, the sound of your laugh, your freckles.

    It also encodes dangers. A host of genetic diseases and disorders can lurk among those 3 billion base pairs, in sequences that can cause multiple sclerosis, Alzheimer’s disease, and more.

    Since the completion of the Human Genome Project in 2003, scientists and researchers have pored over the map of our shared genetics to pick apart the clues and find the root causes of a host of human health problems. Their research has resulted in an explosion of knowledge that represents one of the great scientific advancements in human history. It also kickstarted the field of genetic engineering — the study of altering the genome to fight disease and change human health for the better.


    Portrait of David Truong, staring at the camera against a gray background.


    With advancements like CRISPR, genetic engineering is entering its own renaissance. But while most geneticists are focusing on a few thousand base pairs at a time, some researchers are thinking bigger. Researchers like David Truong, Assistant Professor of Biomedical Engineering at the NYU Tandon School of Engineering and Associated Faculty of Pathology at the NYU School of Medicine. Truong is pushing the boundaries of biomedical engineering when it comes to genetics, building the technology necessary to change not just thousands of base pairs at a time, but millions. And that engineering work could have profound implications for the future of healthcare.

    Truong is conducting his research with funding from a prestigious National Institute of Allergy and Infectious Diseases DP2 New Innovator Award, and his other laurels include a Delil Nasser Award for Professional Development from the Genetics Society of America, and a National Institutes of Health Ruth L. Kirschstein National Research Service Award.

    We spoke with Truong about his work on genetics, and what it takes to work with huge amounts of genetic data at a time.

    When did you become interested in genetic engineering?

    When I was an undergraduate, I didn’t actually major in bioengineering. I was specifically interested in manipulating the genome, but in the early 2000s, bioengineering wasn’t that interested in human genomics. I pursued a molecular biology degree, but that was more of a means toward an end, which was the emerging field of genetic engineering.

    Around that time, the Human Genome Project was finally published. Once we had the map of human DNA, it was on everyone’s mind. And suddenly everyone understood that knowing the sequence of the human genome meant that we could start to manipulate that genome towards different ends, to cure diseases, to change medicine. At that point, bioengineering began to explode, and I was there to ride that wave of interest.


    Diagram showing three steps of a genetic engineering process.


    I did my Ph.D. at The University of Texas at Austin, producing a technique to directly edit genes. It was actually very similar to CRISPR-Cas9 — it combined a protein and a strand of RNA just like CRISPR. CRISPR came out right at the end of my Ph.D. studies, and unfortunately, I was never able to get my technology to work in human cells as well as the more well-known gene editing technique. So once I published my findings, like most genetic engineers, I moved over to CRISPR.

    What brought you to NYU Tandon?

    After my Ph.D. I was brought into the lab of Jef Boeke. He is a Professor of Biochemistry and Molecular Pharmacology at NYU Langone, as well as a Professor of Biomedical Engineering at Tandon. As a postdoc, I continued doing research into genomics.

    The Boeke Lab is largely interested in the DNA of yeast and using that model as a way to explore synthetic genomics and building entirely new genomes. In his lab, yeast is used as a platform for exploring the construction of fully synthetic chromosomes.

    But from that kind of cultural environment I started thinking about “How do we manipulate the genome at this much larger scale?” — building larger sections of the human genome from scratch or modifying sections, or building whole chromosomes. So by the end of my post-doc work, I was already transitioning to working on what I had dreamed about as an undergraduate — building human cell therapies by using synthetic genomics and technologies.

    I also helped during the early founding of a company while I was at the Boeke Lab called Neochromosome. I had written a small business grant to work on mammalian genome engineering, which would be a big focus of the company. It grew quite quickly once we had the funding, and it was acquired by Opentrons, who scaled Neochromosome up.

    Eventually though, we realized that the market didn’t have the patience for the type of work I do. The kind of large-scale genetic engineering I was pursuing was maybe 15 years away, and that’s before clinical trials and regulatory approvals start. So they tracked back towards engineering yeast for more immediate gains, and I began to look back towards academia to really focus on the future of my work.

    When a position in the Biomedical Engineering department opened up, I jumped at the chance. I was already familiar with NYU from my work with Boeke’s lab, and I was eager to rejoin the community.


    “We had gotten pretty good at building designer segments of one hundred thousand base pairs, or even 1 million base pairs. That’s what we call the mega-base scale … It allows you to make huge adjustments to the way the genome acts and you can reimagine the chromosome in important ways.”
    —David Truong


    When you say “a larger scale,” how large are we talking?

    Most genetic engineers are focusing on small amounts of DNA — a portion of a gene, or a couple of nucleotides. Mostly looking to change a mutation here or there in a very limited capacity. That can involve five-to-ten thousand base pairs.

    We had gotten pretty good at building designer segments of one hundred thousand base pairs, or even 1 million base pairs. That’s what we call the mega-base scale, where you’re changing many, many genes, as well as encoding the information that turns them off and on on-demand. It obviously allows you to make huge adjustments to the way the genome acts and you can reimagine the chromosome in important ways.

    Doing this requires some specific technologies that we had to develop. CRISPR is our baseline, which we use to remove large sections of DNA. But in order to replace it, we use what we call “landing pad” technology, which is optimized to receive these large pieces of genetic data.

    And going back to my work in Professor Boeke’s lab, we still use yeast to manufacture these large segments. Because most commercial DNA producers can’t put such large strings of synthetic DNA together, we utilize yeast’s unique abilities to take those small strands and combine them into something much larger than was previously possible. And those large chunks of DNA can be used in incredible ways.


    Image shows large group of stem cells illuminated with red fluorescence.


    A lot of your research focuses on human induced Pluripotent Stem Cells (iPSCs), which have the potential to transform from an embryonic state into any type of cell needed. What makes these cells so attractive to genetic engineering?

    So iPSCs are cells typically taken from blood or skin, and reprogrammed to take on the properties of embryonic stem cells — those are the cells that can morph into the various kinds of cells that populate our body, from heart cells to brain cells to anything you can imagine. And because those cells have such remarkable plasticity, we can make these genetic adjustments to have them grow into cells with a very specific purpose.

    One of the challenges, even with these cells, is that they can be highly personalized. So cells harvested from one person might be rejected by another. But what I’ve helped create was the technology to swap out the genetic coding in these cells that personalized them to an individual, and swap in genetic code that matches potential patients. And that is much cheaper and less invasive than having to start from scratch for each patient.

    We’re also engineering what we call synthetic genetic circuits. This is an idea in the synthetic biology field which takes from electrical engineering and computer science, where you actually use genes and the way they activate to actually power cells and change the way they act. The larger the amount of DNA you can swap out, the more complicated the function you can convince these cells to produce.

    For example, one of the biggest things we’re working on is turning iPSCs into cancer fighting cells. T-cells can hunt down tumors, but they are susceptible to the self-defense mechanisms of the tumors themselves, which also involves shutting down the body’s natural immune system surrounding it. So with this technology, we can “train” these cells to get around those suppression mechanisms and turn the immune system around the tumor back on, recruiting the body’s natural defense system to help defeat the cancerous cells.


    Eight images of stem cells.


    But we have a bunch of different potential uses for these cells. We’re working with various kinds of immune cells. For example, dendritic cells move around the body, picking up bits of protein and other molecules that they find, and when they find something that’s not supposed to be there, they put it on their surface. And then T-cells can ‘learn’ from these molecules what they should be attacking in the body. So we can harvest these T-cells from patients that are adapted to attack certain types of cancer, and recreate synthetic T-cells that can be introduced to new patients to kick-start their body’s defense system.
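    The “synthetic genetic circuit” idea Truong describes can be illustrated with a toy model: two input signals drive a gene through Hill-type activation, and the output is strongly expressed only when both inputs are present, like a logic AND gate. The parameter values below are arbitrary and purely illustrative.

```python
import numpy as np

def hill(signal: float, k: float = 1.0, n: float = 2.0) -> float:
    """Hill activation: fraction of promoter activity driven by one input signal."""
    return signal ** n / (k ** n + signal ** n)

def and_gate_output(input_a: float, input_b: float, max_expression: float = 100.0) -> float:
    """Toy genetic AND gate: the output gene is strongly expressed only
    when both input signals are high, mimicking a synthetic circuit."""
    return max_expression * hill(input_a) * hill(input_b)

for a, b in [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0), (5.0, 5.0)]:
    print(f"inputs ({a}, {b}) -> expression {and_gate_output(a, b):.1f}")
```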

    How does your work dovetail with others at NYU Tandon?

    There’s a lot of opportunity for collaboration in the NYU Tandon BME department [Editor's note: The department was covered in the pages of this website earlier this year]. Irene de Lázaro, the newest Assistant Professor of Biomedical Engineering, also works with iPSCs. She’s doing very similar work, using these stem cells to produce therapies, specifically rejuvenating heart cells. There’s a lot of complementary overlap there. Alesha Castillo is working on stem cells, and Thorsten Kirsch is working on how cellular interactions affect things like osteoarthritis. So cell and tissue engineering is becoming an ever-growing field of study at the school, and that makes it a very exciting place to be right now.

    Then there’s research like Weiqiang Chen’s, who is currently working on cancer-on-a-chip technology. Essentially, he’s producing miniature, personalized cancer samples that represent an individual’s tumor. Theoretically, if he was looking to test a very specific treatment involving specialized genetically-modified cells, he could come to our lab and we could produce them for him, and quickly test in a controlled sample whether a treatment had the potential to work for any given patient.

    That’s what really excites me about this work. As bioengineering grows, it’s interfacing more with medicine, and we’re getting to the place where we can use cells and genomes as a technology itself. When I was thinking about bioengineering as an undergraduate, it wasn’t a thing. It’s only now that we have learned enough about nature, enough about the rules, that we can really start to take these things and put them together in new ways for new medical treatments and human health in general. I think it’s just a huge opportunity to grow.




  • Tools and Strategies for 3D EM Modeling and Design of Microwave Imaging Systems for Medical Applications


    Microwave Imaging (MWI) has attracted massive attention in medical research over the last decade thanks to its standout qualities: it uses harmless non-ionizing radiation and affordable components. At present, conventional technologies (CT and MRI), which provide high-resolution images, still have several limitations, such as long examination times, lack of portability, high cost, and, in the case of CT, ionizing radiation.

    MWI has several potential applications, and one of the most promising is malignant tissue detection, since malignant tissue presents a contrast in permittivity with respect to healthy tissue inside the human body. To detect malignancy in different organs with MWI, particular imaging scenarios need to be considered.

    An indispensable tool for the design, testing, and operation of medical devices based on MWI algorithms is a powerful 3D EM simulation environment built around WIPL-D Pro, a general-purpose 3D EM solver with libraries of phantoms and appropriate antennas.

    Download this free whitepaper now.



  • Are You Ready for Workplace Brain Scanning?


    Get ready: Neurotechnology is coming to the workplace. Neural sensors are now reliable and affordable enough to support commercial pilot projects that extract productivity-enhancing data from workers’ brains. These projects aren’t confined to specialized workplaces; they’re also happening in offices, factories, farms, and airports. The companies and people behind these neurotech devices are certain that they will improve our lives. But there are serious questions about whether work should be organized around certain functions of the brain, rather than the person as a whole.

    To be clear, the kind of neurotech that’s currently available is nowhere close to reading minds. Sensors detect electrical activity across different areas of the brain, and the patterns in that activity can be broadly correlated with different feelings or physiological responses, such as stress, focus, or a reaction to external stimuli. These data can be exploited to make workers more efficient—and, proponents of the technology say, to make them happier. Two of the most interesting innovators in this field are the Israel-based startup InnerEye, which aims to give workers superhuman abilities, and Emotiv, a Silicon Valley neurotech company that’s bringing a brain-tracking wearable to office workers, including those working remotely.

    The fundamental technology that these companies rely on is not new: Electroencephalography (EEG) has been around for about a century, and it’s commonly used today in both medicine and neuroscience research. For those applications, the subject may have up to 256 electrodes attached to their scalp with conductive gel to record electrical signals from neurons in different parts of the brain. More electrodes, or “channels,” mean that doctors and scientists can get better spatial resolution in their readouts—they can better tell which neurons are associated with which electrical signals.

    What is new is that EEG has recently broken out of clinics and labs and has entered the consumer marketplace. This move has been driven by a new class of “dry” electrodes that can operate without conductive gel, a substantial reduction in the number of electrodes necessary to collect useful data, and advances in artificial intelligence that make it far easier to interpret the data. Some EEG headsets are even available directly to consumers for a few hundred dollars.

    While the public may not have gotten the memo, experts say the neurotechnology is mature and ready for commercial applications. “This is not sci-fi,” says James Giordano, chief of neuroethics studies at Georgetown University Medical Center. “This is quite real.”

    How InnerEye’s TSA-boosting technology works

    Video: InnerEye Security Screening Demo (youtu.be)

    In an office in Herzliya, Israel, Sergey Vaisman sits in front of a computer. He’s relaxed but focused, silent and unmoving, and not at all distracted by the seven-channel EEG headset he’s wearing. On the computer screen, images rapidly appear and disappear, one after another. At a rate of three images per second, it’s just possible to tell that they come from an airport X-ray scanner. It’s essentially impossible to see anything beyond fleeting impressions of ghostly bags and their contents.

    “Our brain is an amazing machine,” Vaisman tells us as the stream of images ends. The screen now shows an album of selected X-ray images that were just flagged by Vaisman’s brain, most of which are now revealed to have hidden firearms. No one can knowingly identify and flag firearms among the jumbled contents of bags when three images are flitting by every second, but Vaisman’s brain has no problem doing so behind the scenes, with no action required on his part. The brain processes visual imagery very quickly. According to Vaisman, the decision-making process to determine whether there’s a gun in complex images like these takes just 300 milliseconds.

    Brain data can be exploited to make workers more efficient—and, proponents of the technology say, to make them happier.

    What takes much more time are the cognitive and motor processes that occur after the decision making—planning a response (such as saying something or pushing a button) and then executing that response. If you can skip these planning and execution phases and instead use EEG to directly access the output of the brain’s visual processing and decision-making systems, you can perform image-recognition tasks far faster. The user no longer has to actively think: For an expert, just that fleeting first impression is enough for their brain to make an accurate determination of what’s in the image.

    An illustration of a person in front of screens with suitcases above it. InnerEye’s image-classification system operates at high speed by providing a shortcut to the brain of an expert human. As an expert focuses on a continuous stream of images (from three to 10 images per second, depending on complexity), a commercial EEG system combined with InnerEye’s software can distinguish the characteristic response the expert’s brain produces when it recognizes a target. In this example, the target is a weapon in an X-ray image of a suitcase, representing an airport-security application. Chris Philpot

    Vaisman is the vice president of R&D of InnerEye, an Israel-based startup that recently came out of stealth mode. InnerEye uses deep learning to classify EEG signals into responses that indicate “targets” and “nontargets.” Targets can be anything that a trained human brain can recognize. In addition to developing security screening, InnerEye has worked with doctors to detect tumors in medical images, with farmers to identify diseased plants, and with manufacturing experts to spot product defects. For simple cases, InnerEye has found that our brains can handle image recognition at rates of up to 10 images per second. And, Vaisman says, the company’s system produces results just as accurate as a human would when recognizing and tagging images manually—InnerEye is merely using EEG as a shortcut to that person’s brain to drastically speed up the process.
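    InnerEye’s actual pipeline is proprietary, but the general recipe described above can be sketched as follows: cut the EEG into short epochs time-locked to each image and train a classifier to separate “target” from “nontarget” brain responses. The sketch uses synthetic data and a simple linear model rather than deep learning.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_images, n_channels, n_samples = 600, 7, 150   # ~0.6 s epochs at 250 Hz, 7-channel EEG
labels = rng.integers(0, 2, size=n_images)      # 1 = target image, 0 = nontarget

# Synthetic EEG epochs: targets get a small added deflection ~300 ms after image
# onset, standing in for the brain's characteristic recognition response.
epochs = rng.normal(size=(n_images, n_channels, n_samples))
epochs[labels == 1, :, 70:90] += 0.8

X = epochs.reshape(n_images, -1)                # flatten each epoch into a feature vector
scores = cross_val_score(LinearDiscriminantAnalysis(), X, labels, cv=5)
print(f"Cross-validated target/nontarget accuracy: {scores.mean():.2f}")
```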

    While using the InnerEye technology doesn’t require active decision making, it does require training and focus. Users must be experts at the task, well trained in identifying a given type of target, whether that’s firearms or tumors. They must also pay close attention to what they’re seeing—they can’t just zone out and let images flash past. InnerEye’s system measures focus very accurately, and if the user blinks or stops concentrating momentarily, the system detects it and shows the missed images again.

    Can you spot the manufacturing defects?

    Examine the sample images below, and then try to spot the target among the nontargets.

    Ten images are displayed every second for five seconds on loop. There are three targets.

    A pair of black and white images. The left is labelled “nontarget” and the right “target.” There is a red circle around a black line on the right image.

    A gif of a black and white static image

    Can you spot the weapon?

    Three images are displayed every second for five seconds on loop. There is one weapon.

    A GIF of X-rayed pieces of luggage. InnerEye

    Having a human brain in the loop is especially important for classifying data that may be open to interpretation. For example, a well-trained image classifier may be able to determine with reasonable accuracy whether an X-ray image of a suitcase shows a gun, but if you want to determine whether that X-ray image shows something else that’s vaguely suspicious, you need human experience. People are capable of detecting something unusual even if they don’t know quite what it is.

    “We can see that uncertainty in the brain waves,” says InnerEye founder and chief technology officer Amir Geva. “We know when they aren’t sure.” Humans have a unique ability to recognize and contextualize novelty, a substantial advantage that InnerEye’s system has over AI image classifiers. InnerEye then feeds that nuance back into its AI models. “When a human isn’t sure, we can teach AI systems to be not sure, which is better training than teaching the AI system just one or zero,” says Geva. “There is a need to combine human expertise with AI.” InnerEye’s system enables this combination, as every image can be classified by both computer vision and a human brain.
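    The “teach the AI to be unsure” idea Geva describes maps naturally onto training with soft labels. The sketch below, using synthetic data and plain NumPy, trains a tiny logistic-regression classifier with hedged targets (for example 0.7 instead of 1) in place of hard 0/1 labels; it is an illustration of the concept, not InnerEye’s system.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 16))                  # image features (synthetic)
w_true = rng.normal(size=16)
p_true = 1 / (1 + np.exp(-X @ w_true))          # "expert confidence" per image

hard_labels = (p_true > 0.5).astype(float)      # forced 0/1 decisions
soft_labels = 0.1 + 0.8 * p_true                # hedged targets in [0.1, 0.9]

def train_logreg(X, targets, lr=0.1, steps=500):
    """Gradient descent on binary cross-entropy; targets may be fractional."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (p - targets) / len(targets)
    return w

for name, targets in [("hard", hard_labels), ("soft", soft_labels)]:
    w = train_logreg(X, targets)
    p = 1 / (1 + np.exp(-X @ w))
    # Soft targets keep predictions less extreme, mirroring the expert's uncertainty.
    print(f"{name} labels -> mean prediction confidence: {np.abs(p - 0.5).mean() + 0.5:.2f}")
```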

    Using InnerEye’s system is a positive experience for its users, the company claims. “When we start working with new users, the first experience is a bit overwhelming,” Vaisman says. “But in one or two sessions, people get used to it, and they start to like it.” Geva says some users do find it challenging to maintain constant focus throughout a session, which lasts up to 20 minutes, but once they get used to working at three images per second, even two images per second feels “too slow.”

    In a security-screening application, three images per second is approximately an order of magnitude faster than an expert can manually achieve. InnerEye says their system allows far fewer humans to handle far more data, with just two human experts redundantly overseeing 15 security scanners at once, supported by an AI image-recognition system that is being trained at the same time, using the output from the humans’ brains.

    InnerEye is currently partnering with a handful of airports around the world on pilot projects. And it’s not the only company working to bring neurotech into the workplace.

    How Emotiv’s brain-tracking technology works

    Workers wearing earbuds sit in an office in front of computers. Emotiv’s MN8 earbuds collect two channels of EEG brain data. The earbuds can also be used for phone calls and music. Emotiv

    When it comes to neural monitoring for productivity and well-being in the workplace, the San Francisco–based company Emotiv is leading the charge. Since its founding 11 years ago, Emotiv has released three models of lightweight brain-scanning headsets. Until now the company had mainly sold its hardware to neuroscientists, with a sideline business aimed at developers of brain-controlled apps or games. Emotiv started advertising its technology as an enterprise solution only this year, when it released its fourth model, the MN8 system, which tucks brain-scanning sensors into a pair of discreet Bluetooth earbuds.

    Tan Le, Emotiv’s CEO and cofounder, sees neurotech as the next trend in wearables, a way for people to get objective “brain metrics” of mental states, enabling them to track and understand their cognitive and mental well-being. “I think it’s reasonable to imagine that five years from now this [brain tracking] will be quite ubiquitous,” she says. When a company uses the MN8 system, workers get insight into their individual levels of focus and stress, and managers get aggregated and anonymous data about their teams.

    The Emotiv Experience

    Illustration of a head with an earpiece in, with columns of data on either side. The Emotiv Experience. Chris Philpot

    Emotiv’s MN8 system uses earbuds to capture two channels of EEG data, from which the company’s proprietary algorithms derive performance metrics for attention and cognitive stress. It’s very difficult to draw conclusions from raw EEG signals [top], especially with only two channels of data. The MN8 system relies on machine-learning models that Emotiv developed using a decade’s worth of data from its earlier headsets, which have more electrodes.

    To determine a worker’s level of attention and cognitive stress, the MN8 system uses a variety of analyses. One shown here [middle, bar graphs] reveals increased activity in the low-frequency ranges (theta and alpha) when a worker’s attention is high and cognitive stress is low; when the worker has low attention and high stress, there’s more activity in the higher-frequency ranges (beta and gamma). This analysis and many others feed into the models that present simplified metrics of attention and cognitive stress [bottom] to the worker.
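    Emotiv’s models are proprietary, but the band-power idea described above can be sketched in a few lines: estimate the power spectrum of each earbud channel and compare low-frequency (theta and alpha) activity with high-frequency (beta and gamma) activity. The two-channel data here is synthetic, and the mapping to “attention” and “stress” scores is purely illustrative.

```python
import numpy as np
from scipy.signal import welch

fs = 128                                   # sample rate, Hz
rng = np.random.default_rng(3)
eeg = rng.normal(size=(2, fs * 30))        # 30 s of two-channel EEG (synthetic)

bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_powers(signal):
    """Integrate the Welch power spectrum over each EEG frequency band."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in bands.items()}

powers = [band_powers(channel) for channel in eeg]
low = sum(p["theta"] + p["alpha"] for p in powers)    # higher when attention is high
high = sum(p["beta"] + p["gamma"] for p in powers)    # higher when stress is high

attention_proxy = low / (low + high)       # illustrative only, not Emotiv's metric
stress_proxy = high / (low + high)
print(f"attention ~ {attention_proxy:.2f}, cognitive stress ~ {stress_proxy:.2f}")
```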

    Emotiv launched its enterprise technology into a world that is fiercely debating the future of the workplace. Workers are feuding with their employers about return-to-office plans following the pandemic, and companies are increasingly using “bossware” to keep tabs on employees—whether staffers or gig workers, working in the office or remotely. Le says Emotiv is aware of these trends and is carefully considering which companies to work with as it debuts its new gear. “The dystopian potential of this technology is not lost on us,” she says. “So we are very cognizant of choosing partners that want to introduce this technology in a responsible way—they have to have a genuine desire to help and empower employees.”

    Lee Daniels, a consultant who works for the global real estate services company JLL, has spoken with a lot of C-suite executives lately. “They’re worried,” says Daniels. “There aren’t as many people coming back to the office as originally anticipated—the hybrid model is here to stay, and it’s highly complex.” Executives come to Daniels asking how to manage a hybrid workforce. “This is where the neuroscience comes in,” he says.

    Emotiv has partnered with JLL, which has begun to use the MN8 earbuds to help its clients collect “true scientific data,” Daniels says, about workers’ attention, distraction, and stress, and how those factors influence both productivity and well-being. Daniels says JLL is currently helping its clients run short-term experiments using the MN8 system to track workers’ responses to new collaboration tools and various work settings; for example, employers could compare the productivity of in-office and remote workers.

    “The dystopian potential of this technology is not lost on us.” —Tan Le, Emotiv CEO

    Emotiv CTO Geoff Mackellar believes the new MN8 system will succeed because of its convenient and comfortable form factor: The multipurpose earbuds also let the user listen to music and answer phone calls. The downside of earbuds is that they provide only two channels of brain data. When the company first considered this project, Mackellar says, his engineering team looked at the rich data set they’d collected from Emotiv’s other headsets over the past decade. The company boasts that academics have conducted more than 4,000 studies using Emotiv tech. From that trove of data—from headsets with 5, 14, or 32 channels—Emotiv isolated the data from the two channels the earbuds could pick up. “Obviously, there’s less information in the two sensors, but we were able to extract quite a lot of things that were very relevant,” Mackellar says.

    Once the Emotiv engineers had a hardware prototype, they had volunteers wear the earbuds and a 14-channel headset at the same time. By recording data from the two systems in unison, the engineers trained a machine-learning algorithm to identify the signatures of attention and cognitive stress from the relatively sparse MN8 data. The brain signals associated with attention and stress have been well studied, Mackellar says, and are relatively easy to track. Although everyday activities such as talking and moving around also register on EEG, the Emotiv software filters out those artifacts.
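    That calibration step can be sketched as a simple supervised-learning problem: record the same sessions with both devices, compute a reference metric from the 14-channel headset, and fit a model that predicts it from earbud-channel features alone. The data and the ridge-regression model below are stand-ins for Emotiv’s proprietary pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n_windows = 500

# Features from the 2 earbud channels (e.g., band powers per analysis window) ...
earbud_features = rng.normal(size=(n_windows, 8))
# ... and a reference attention score derived from the 14-channel headset; here it
# is just a noisy function of the sparse features, purely for illustration.
reference_attention = (earbud_features @ rng.normal(size=8)
                       + 0.3 * rng.normal(size=n_windows))

X_train, X_test, y_train, y_test = train_test_split(
    earbud_features, reference_attention, test_size=0.25, random_state=0)

model = Ridge(alpha=1.0).fit(X_train, y_train)
print(f"Held-out R^2 against the 14-channel reference: {model.score(X_test, y_test):.2f}")
```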

    The app that’s paired with the MN8 earbuds doesn’t display raw EEG data. Instead, it processes that data and shows workers two simple metrics relating to their individual performance. One squiggly line shows the rise and fall of workers’ attention to their tasks—the degree of focus and the dips that come when they switch tasks or get distracted—while another line represents their cognitive stress. Although short periods of stress can be motivating, too much for too long can erode productivity and well-being. The MN8 system will therefore sometimes suggest that the worker take a break. Workers can run their own experiments to see what kind of break activity best restores their mood and focus—maybe taking a walk, or getting a cup of coffee, or chatting with a colleague.

    What neuroethicists think about neurotech in the workplace

    While MN8 users can easily access data from their own brains, employers don’t see individual workers’ brain data. Instead, they receive aggregated data to get a sense of a team or department’s attention and stress levels. With that data, companies can see, for example, on which days and at which times of day their workers are most productive, or how a big announcement affects the overall level of worker stress.

    Emotiv emphasizes the importance of anonymizing the data to protect individual privacy and prevent people from being promoted or fired based on their brain metrics. “The data belongs to you,” says Emotiv’s Le. “You have to explicitly allow a copy of it to be shared anonymously with your employer.” If a group is too small for real anonymity, Le says, the system will not share that data with employers. She also predicts that the device will be used only if workers opt in, perhaps as part of an employee wellness program that offers discounts on medical insurance in return for using the MN8 system regularly.
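
    That sharing rule can be read as a simple aggregation policy: only opted-in scores leave the device, only as a group average, and only when the group is large enough for anonymity. A minimal sketch of such a policy follows; the threshold of five people is an assumed figure, since Emotiv has not published its cutoff.

    # Minimal sketch of the sharing policy Le describes: employers see only
    # group-level averages, and groups too small for meaningful anonymity are
    # suppressed entirely. The minimum group size of 5 is an assumption, not a
    # figure published by Emotiv.
    from statistics import mean

    MIN_GROUP_SIZE = 5  # assumed anonymity threshold

    def team_report(individual_scores, min_group=MIN_GROUP_SIZE):
        """Return an aggregate score, or None if the group is too small to
        share without risking re-identification."""
        consented = [s for s in individual_scores if s is not None]  # opt-in only
        if len(consented) < min_group:
            return None
        return mean(consented)

    print(team_report([0.61, 0.72, 0.55, 0.68, 0.70]))  # shared aggregate
    print(team_report([0.61, 0.72]))                    # suppressed: None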

    However, workers may still be worried that employers will somehow use the data against them. Karen Rommelfanger, founder of the Institute of Neuroethics, shares that concern. “I think there is significant interest from employers” in using such technologies, she says. “I don’t know if there’s significant interest from employees.”

    Both she and Georgetown’s Giordano doubt that such tools will become commonplace anytime soon. “I think there will be pushback” from employees on issues such as privacy and worker rights, says Giordano. Even if the technology providers and the companies that deploy the technology take a responsible approach, he expects questions to be raised about who owns the brain data and how it’s used. “Perceived threats must be addressed early and explicitly,” he says.

    Giordano says he expects workers in the United States and other western countries to object to routine brain scanning. In China, he says, workers have reportedly been more receptive to experiments with such technologies. He also believes that brain-monitoring devices will really take off first in industrial settings, where a momentary lack of attention can lead to accidents that injure workers and hurt a company’s bottom line. “It will probably work very well under some rubric of occupational safety,” Giordano says. It’s easy to imagine such devices being used by companies involved in trucking, construction, warehouse operations, and the like. Indeed, at least one such product, an EEG headband that measures fatigue, is already on the market for truck drivers and miners.

    Giordano says that using brain-tracking devices for safety and wellness programs could be a slippery slope in any workplace setting. Even if a company focuses initially on workers’ well-being, it may soon find other uses for the metrics of productivity and performance that devices like the MN8 provide. “Metrics are meaningless unless those metrics are standardized, and then they very quickly become comparative,” he says.

    Rommelfanger adds that no one can foresee how workplace neurotech will play out. “I think most companies creating neurotechnology aren’t prepared for the society that they’re creating,” she says. “They don’t know the possibilities yet.”

    This article appears in the December 2022 print issue.



  • Blood Test Only Needs a Drop and a Smartphone for Results


    The phrase “from a single drop of blood” is full of both promise and peril for researchers trying to integrate clinical-quality medical testing technology with consumer devices like smartphones. While university researchers and commercial startups worldwide continue to introduce innovative new consumer-friendly takes on tests that have resided in laboratories for decades, the collective memory of the fraud perpetrated by those behind Theranos’s discredited blood-testing platform is still pervasive.

    “What are you claiming from a single drop of blood?” says Shyamnath Gollakota, director of the mobile intelligence lab at the University of Washington’s Paul G. Allen School of Computer Science and Engineering. Gollakota and colleagues have developed a proof-of-concept test that analyzes how quickly a person’s blood clots from a single drop, using a smartphone’s camera, its haptic motor, a small attached cup, and a floating piece of copper about the size of a ballpoint pen’s writing tip.


    To activate the system, the user adds a drop of blood from a finger prick to a small cup attached to a bracket that fits over the phone. Then the phone’s motor shakes the cup while the camera monitors the movement of the copper particle, which slows down and eventually stops as the clot forms. To calculate the time it takes the blood to clot, the phone collects two time stamps: the first when the user inserts the blood, and the second when the particle stops moving. In the original study (published in Nature Communications), conducted in a medical facility, the technology performed in line with commercial coagulation tests; Gollakota’s team is now studying how it works in at-home environments.
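
    The clot time itself is just the difference between the two time stamps; the subtler part is deciding, frame by frame, when the particle has stopped. The sketch below shows one hypothetical way to do that with simple frame differencing; the thresholds and the stop-detection logic are illustrative assumptions, not the published pipeline.

    # Hypothetical sketch: track how much the copper particle moves between
    # video frames and declare a clot once motion stays below a threshold for
    # a sustained stretch. Thresholds and plain frame differencing are
    # assumptions, not the UW group's actual processing.
    import cv2
    import numpy as np

    MOTION_THRESHOLD = 2.0   # mean absolute pixel difference treated as "still"
    STILL_SECONDS = 3.0      # how long motion must stay below threshold

    def clot_time_seconds(video_path):
        cap = cv2.VideoCapture(video_path)
        fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
        still_needed = int(STILL_SECONDS * fps)

        prev, still_run, frame_idx, stop_frame = None, 0, 0, None
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            if prev is not None:
                motion = np.mean(cv2.absdiff(gray, prev))
                still_run = still_run + 1 if motion < MOTION_THRESHOLD else 0
                if still_run >= still_needed and stop_frame is None:
                    stop_frame = frame_idx - still_needed  # first still frame
            prev = gray
            frame_idx += 1
        cap.release()

        # Time stamp 1 is when blood is added (frame 0 here); time stamp 2 is
        # when the particle stops. The clot time is their difference.
        return None if stop_frame is None else stop_frame / fps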

    If the technology ever enters the commercial realm, those with conditions such as atrial fibrillation or who have mechanical heart valves might be able to test their coagulation times quickly and simply themselves instead of making frequent trips to doctors’ offices or going without testing at all—they would have to visit a doctor only when their home tests are out of range. Gollakota is careful not to claim the technology can do too much, but he is also dedicated to making its potentially lifesaving capabilities available to anyone with a smartphone.

    Blood clot testing using smartphones

    “We are not trying to say we can do miracles from a single drop of blood, but we are trying to say the devices that exist in hospitals to test for this haven’t changed much for 20 or 30 years,” Gollakota said. “But smartphones have been changing a lot. They have vibration motors, they have a camera, and these sensors exist on almost any smartphone.”

    Ron Paulus, executive in residence at venture capital firm General Catalyst, said the Gollakota team’s technology hews to a trio of ongoing trends he sees with smartphones in health care. The first is the ability to interact with current lab infrastructure for things like ordering and scheduling tests and receiving results directly instead of relying on a doctor as middleman. The second trend is using the phone in the field as a power source for a separate plug-in or bridge to a wireless module with the analyzing intelligence built into that. The third trend is using the phone as both a power source and an analyzing platform.

    There is no shortage of devices that inhabit the second category in Paulus’s triumvirate. One example he cited was a dongle that plugged into a phone’s headphone jack and performed tests for HIV and syphilis, returning results in 15 minutes; however, the project’s senior author, Columbia University vice provost and professor of biomedical engineering Samuel Sia, said it did not advance to commercialization.

    Another similar device is being developed by Sudbury, Ontario–based Verv Technologies, which is perfecting a platform that uses a drop of blood from a finger prick, a disposable test cartridge, a Bluetooth-enabled analyzer, and a connected smartphone app that will give the user results in 15 minutes. The company recently received C$3.8 million in seed funding from Crumlin, Northern Ireland–based Randox Laboratories, and a C$314,000 grant with McMaster University from the Natural Sciences and Engineering Research Council of Canada; the grant will allow the McMaster research team to validate and derisk the technologies, according to Canadian Healthcare Technology.

    Paulus said consumer-ready smartphone-enabled tests are promising but not ready for mass market adoption yet.

    “We’re getting closer, but we’re still not there,” he said. “People can’t go through an eight-step process that requires any kind of technology expertise. It has to be made so any normal, regular person can just do it and can’t really make an error, and it has to be a reliable test. But there is no reason why in three to seven years, people should have to go out for a routine test, the kind of things people go to urgent care for. There is going to be a relentless push into this democratization.”

    Ironically, both Paulus and Gollakota think the widespread at-home testing precipitated by the COVID pandemic has made user-run tests that involve swabbing, dipping indicators, and reading results commonplace for a large audience while developers perfect more streamlined devices.

    “With COVID tests there were a lot of things we ended up doing ourselves and people are used to it in the home scenario now,” Gollakota said. “So I don’t think it’s completely far-fetched to expect people to be able to do testing themselves with multipart tests. But I also think the idea of going forward is to roll the whole thing into one simple attachment.”



  • App Identifies Parkinson’s, COVID-19 Based on User’s Voice


    During the early stages of some illnesses, there may be limited evidence that a health complication is developing, yet some subtle clues may still be present. Two recent studies show that AI algorithms can successfully analyze people’s voices to identify those in the early stages of Parkinson’s disease or a severe COVID-19 infection of the lungs. The researchers incorporated these algorithms into a smartphone app called Aum, which can be used to diagnose and monitor the respective conditions.

    “[People’s voices are] routinely used by clinicians to evaluate their patients, especially for neurodegenerative disease, such as Parkinson’s disease, and pulmonary disease, whether it be COVID-19 or other lung diseases,” explains Dinesh Kumar, a professor at the Royal Melbourne Institute of Technology (RMIT), in Australia, who was involved in both studies. “This [approach] is not only to detect the disease, but to evaluate the effects of medicine and to optimize the dosage.”

    Kumar and his colleagues sought to explore whether these subtleties in a person’s voice could be detected by machine-learning algorithms. In their first study, they recruited 36 people with Parkinson’s disease and 36 healthy volunteers.

    Importantly, there are natural differences between people’s voices, which could make it difficult to differentiate who does and does not have an underlying disorder. “To overcome this problem, we looked for nonlanguage sounds that could detect the disease despite the natural differences in voice of people,” explains Kumar.

    Therefore, his team asked study participants to say different phonemes that require sounds from the throat (/a/), the mouth (/o/), and the nose (/m/). Recordings were made using an iOS-based smartphone. They then developed and applied a machine-learning algorithm that could differentiate between people with Parkinson’s disease and the healthy volunteers, despite ambient background noise. In their study published 12 September in IEEE Access, they report that the algorithm can identify people in the study population who have Parkinson’s disease with 100 percent accuracy.
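
    As a rough illustration of the general recipe (record a sustained phoneme, extract spectral features, train a classifier), something like the following could be assembled from open-source audio tools. The MFCC features and support-vector machine here are stand-ins chosen for familiarity, not the feature set or model reported in IEEE Access.

    # Illustrative recipe only: sustained-phoneme recordings -> spectral
    # features -> binary classifier. MFCCs and an SVM are stand-ins; the RMIT
    # study used its own features and model.
    import librosa
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def phoneme_features(wav_path):
        """Mean and variance of MFCCs over a sustained /a/, /o/, or /m/ clip."""
        y, sr = librosa.load(wav_path, sr=16000)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
        return np.concatenate([mfcc.mean(axis=1), mfcc.var(axis=1)])

    def evaluate(wav_paths, labels):
        """Cross-validated accuracy for Parkinson's (1) vs. control (0)."""
        X = np.vstack([phoneme_features(p) for p in wav_paths])
        y = np.array(labels)
        model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
        return cross_val_score(model, X, y, cv=5).mean()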

    An App That Detects Parkinson’s and COVID-19

    “More importantly, we can differentiate between PD with active medication and without active medication,” Kumar notes. “This is very important because the neurologist can evaluate their patients even when the patients are unable to go to the clinic, and hence is suitable for people living in remote regions.”

    One of the researchers on this initial study, RMIT associate professor Nemuel Pah, had an opportunity to conduct a similar study in COVID-19 patients in Indonesia during June and July of 2021. In that case, Pah recorded 40 hospitalized COVID-19 patients and 48 healthy subjects over the course of 22 days, asking them to say six phonemes (/a/, /e/, /i/, /o/, /u/, and /m/).

    The researchers then applied a different machine-learning algorithm to that data, finding that the features extracted from the vowel /i/ during the first three days after admittance to the hospital were the most effective at differentiating between people with a COVID-19 lung infection and healthy controls. The algorithm did so with 94 percent accuracy. These results were published 20 September in the IEEE Journal of Translational Engineering in Health and Medicine.

    An advantage of using an app to identify COVID-19 patients, Kumar says, is that voice recordings do not require doctors to make physical contact with a patient, and therefore would be a suitable way to limit exposure. “Voice recordings can be obtained from patients located remotely,” he says, noting that this approach to phoneme analysis is also not dependent on language.

    Kumar says his team is interested in applying these approaches in the real world, but more work needs to be done. Specifically, they are looking to register their technology with Australia’s Therapeutic Goods Administration, the U.S. Food and Drug Administration (FDA), and China’s National Medical Products Administration. “That needs time and money; we are hoping to get funds from [people and] organizations focused on Parkinson’s disease, such as Michael J. Fox, or investors for this purpose,” Kumar says.

    This article appears in the January 2023 print issue as “App Senses COVID, Parkinson’s in a Speaker’s Voice.”



  • NYU Biomedical Engineering Speeds Research from Lab Bench to Bedside


    This is a sponsored article brought to you by NYU’s Tandon School of Engineering.

    When Andreas H. Hielscher, the chair of the biomedical engineering (BME) department at NYU’s Tandon School of Engineering, arrived at his new position, he saw raw potential. NYU Tandon had undergone a meteoric rise in its U.S. News & World Report graduate ranking in recent years, skyrocketing 47 spots since 2009. At the same time, the NYU Grossman School of Medicine had shot from the thirties to the #2 spot in the country for research. The two scientific powerhouses, sitting on opposite banks of the East River, offered Hielscher a unique opportunity: to work at the intersection of engineering and healthcare research, with the unmet clinical needs and clinician feedback from NYU’s world-renowned medical program directly informing new areas of development, exploration, and testing.



    “There is now an understanding that technology coming from a biomedical engineering department can play a big role for a top-tier medical school,” said Hielscher. “At some point, everybody needs to have a BME department.”

    In the early days of biomedical engineering departments nationwide, there was some resistance even to the notion of biomedical engineering: either you were an electrical engineer or a mechanical engineer. “That’s no longer the case,” said Hielscher. “The combining of the biology and medical aspects with the engineering aspects has been proven to be the best approach.”


    Dr. Andreas Hielscher, NYU Tandon Biomedical Engineering Department Chair and head of the Clinical Biophotonics Laboratory, speaks with IEEE Spectrum about his work leveraging optical tomography for early detection and treatment monitoring for breast cancer.


    Proof of this can be seen in the fact that an undergraduate biomedical engineering degree has become one of the most sought-after engineering degrees, according to Hielscher. He also noted that the current Dean of NYU’s Tandon School of Engineering, Jelena Kovačević, has a biomedical engineering background, having recently received the 2022 IEEE Engineering in Medicine and Biology Society career achievement award for her pioneering research on signal-processing applications for biomedical imaging.

    Mary Cowman, a pioneer in joint and cartilage regeneration, began laying the foundations for NYU Tandon’s biomedical engineering department in the 2010s. Since her retirement in 2020, Hielscher has continued to grow the department through innovative collaborations with the medical school and medical center, including the recently-announced Translational Healthcare Initiative, on which Hielscher worked closely with Daniel Sodickson, the co-director of the medical school’s Tech4Health.




    “The fundamental idea of the Initiative is to have one physician from Langone Medical School, and one engineer at least—you could have multiple—and have them address some unmet clinical needs, some particular problem,” explained Hielscher. “In many cases they have already worked together, or researched this issue. What this initiative is about is to give these groups funding to do some experimentation to either prove that it won’t work, or demonstrate that it can and prioritize it.”

    With this funding of further experimentation, it becomes possible to develop the technology to a point where you could begin to bring investors in, according to Hielscher. “This mitigates the risk of the technology and helps attract potential investors,” added Hielscher. “At that point, perhaps a medical device company comes in, or some angel investor, and then you can get to the next level of investment for moving the technology forward.”

    Biophotonics for Cancer Diagnosis

    Hielscher himself has been leading research on developing new technologies within the Clinical Biophotonics Laboratory. One of the latest areas of research has been investigating the application of optical technologies to breast cancer diagnosis.


    Six images showing tomographic cross sections of a breast with tumor


    Hielscher and his colleagues have built a system that shines light through both breasts at the same time. By measuring how much light is reflected back, it’s possible to generate maps of locations with high levels of oxygen and total hemoglobin, which may indicate tumors.

    “We look at where there’s blood in the breast,” explained Hielscher. “Because breast tumors recruit new blood vessels, or, once they grow, they generate their own vascular network requiring more oxygen, wherever there is a tumor you will see an increase in total blood volume, and you will see more oxygenated blood.”
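
    The step from raw light measurements to hemoglobin maps rests on the fact that oxygenated and deoxygenated hemoglobin absorb near-infrared light differently at different wavelengths. The toy calculation below applies the modified Beer-Lambert law at two wavelengths; the extinction coefficients and path length are placeholder values, and the real instrument performs a full tomographic reconstruction rather than this two-equation solve.

    # Simplified illustration of converting attenuation at two near-infrared
    # wavelengths into oxy- and deoxyhemoglobin changes (modified Beer-Lambert
    # law). The coefficients and path length are rough placeholders, not the
    # instrument's calibration.
    import numpy as np

    # Rows: wavelengths (~760 nm, ~850 nm); columns: [HbO2, Hb] extinction
    # coefficients in arbitrary but consistent units (placeholder values).
    EPSILON = np.array([[0.6, 1.6],
                        [1.1, 0.8]])
    PATH_LENGTH_CM = 6.0   # assumed effective optical path through tissue

    def hemoglobin_changes(delta_od_760, delta_od_850):
        """Solve the 2x2 Beer-Lambert system for [dHbO2, dHb]."""
        delta_od = np.array([delta_od_760, delta_od_850])
        return np.linalg.solve(EPSILON * PATH_LENGTH_CM, delta_od)

    d_hbo2, d_hb = hemoglobin_changes(0.12, 0.15)
    total = d_hbo2 + d_hb          # rises where a tumor recruits new vessels
    print(f"dHbO2={d_hbo2:.4f}, dHb={d_hb:.4f}, total={total:.4f}")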

    Initially, this diagnostic tool was targeted for early detection, since mammograms can only detect calcification in lower density breast tissue of women over a certain age. But it soon became clear in collaboration with clinical partners that it was also highly effective in monitoring treatment.


    “Technology coming from a biomedical engineering department can play a big role for a top-tier medical school”
    —Andreas H. Hielscher, Biomedical Engineering Department Chair, NYU Tandon


    This realization came in part because of a recent change in cancer treatment that has moved towards what is known as neoadjuvant chemotherapy, in which chemotherapy drugs are administered before surgical extraction of the tumor. One of the drawbacks of this approach is that only around 60 percent of patients respond favorably to the chemotherapy, resulting in a large percentage of patients suffering through a grueling six-month-long chemotherapy treatment with minimal-to-no impact on the tumor.

    With the optical technique, Hielscher and his colleagues have found that if they can detect a noticeable decrease of blood in targeted areas after two weeks, it’s very likely that the patient will respond to the chemotherapy. On the other hand, if they see that the amount of blood in that area stays the same, then there’s a very high likelihood that the patient will not respond to the therapy.

    This same fundamental technique can also be applied to what is known as peripheral artery disease (PAD), which affects many patients with diabetes and involves the narrowing or blockage of the vessels that carry blood from the heart to the legs. An Israel-based company called VOTIS has licensed the technology for diagnosing and treating PAD.


    Medical equipment image with blue background showing in bright yellow a finger joint affected by lupus arthritis


    While Hielscher’s work is in biophotonics, he recognized that the department has also quickly been developing a reputation in other emerging areas, including wearables, synthetic biology, and neurorehabilitation and stroke prediction.

    Hielscher highlighted the recent work of Rose Faghih, working in smart wearables and data for mental health, Jef Boeke, a synthetic biology pioneer, and S. Farokh Atashzar, doing work in neurorehabilitation and stroke prediction. Atashzar’s work was highlighted last year in the pages of IEEE Spectrum.

    “Rose Faghih is leveraging all kinds of sensors to make inferences about the mental state of patients, to determine if someone is depressed or schizophrenic, and then possibly have a feedback loop where you actually also treat them,” said Hielscher. “Jef Boeke is involved in what I term ‘wet engineering,’ and is currently involved in efforts to take cancer cells outside of the body to find a way to attack them, or reprogram them.”

    Future Collaborations

    As NYU Tandon’s BME department goes forward, Hielscher’s aim is that the department becomes a trusted source for the medical school, and that partnership enables key technologies to go from an unmet clinical need or an idea in a lab to a patient’s bedside in a 3-5 year timeframe.

    “What I really would like,” Hielscher concluded, “is that if somebody in the medical school has a problem, the first thing they would say is, ‘Oh, I’ll call the engineering school. I bet there’s somebody there that can help me.’ We can work together to benefit patients, and we’re starting this already.”





  • Nanoparticles in Medicine—Microbots to Blood Clots


    As nanotechnology burrows into an increasing number of medical technologies, new developments in nanoparticles point to the ways that treatments can today be nanotechnologically targeted. In one case, would-be end effectors on microrobots are aimed at clearing up cases of bacterial pneumonia. In another, a smart-targeting system may decrease clotting risks in dangerous cases of thrombosis.

    Scientists from the University of California, San Diego, demonstrated antibiotic-filled nanoparticles that hitch a ride on microbots made of algae to deliver targeted therapeutics. Their paper was recently published in Nature Materials. As a proof of concept, the researchers administered antibiotic-laden microbots to mice infected with a potentially fatal variety of pneumonia (a strain that is common in human patients who are receiving mechanical ventilation in intensive-care settings). All infections in the treated mice cleared up within a week, while untreated mice died within three days.

    The algae–nanoparticle hybrid microbots were effectively distributed to infected tissue through lung fluid and showed negligible toxicity. “Our goal is to do targeted drug delivery into more challenging parts of the body, like the lungs,” said bioengineering professor Liangfang Zhang in a press statement. “And we want to do it in a way that is safe, easy, biocompatible, and long lasting.”

    The nanoparticle treatment was also shown to be more effective than an IV injection because of its targeted delivery. “With an IV injection, sometimes only a very small fraction of antibiotics will get into the lungs,” said coresearcher Victor Nizet in the press release. This results in high mortality for patients with current antibiotic treatments for pneumonia. “Based on these mouse data, we see that the microrobots could potentially improve antibiotic penetration to kill bacterial pathogens and save more patients’ lives.”

    Nanoparticles target neutrophils

    Thrombosis, or the obstruction of blood flow in the body because of clot (thrombus) formation, is one of the leading killers of humans globally. Even so, the molecular drivers of thrombosis are poorly understood. A recent paper published in Science Translational Medicine identifies neutrophils as playing a key role in thrombosis. Targeting neutrophils using nanoparticle therapy, the researchers report, decreases clotting risk without increasing the risk of bleeding.

    Lalitha Nayak, associate professor at the Case Western Reserve School of Medicine, in Cleveland, first author of the study, says this is the first time overactive neutrophils have been identified as a key driver of both venous and arterial thrombosis.

    Diseases associated with arterial clots are not the same as those with venous clots. Arteries are thicker, as are their walls, while veins are collapsible with thinner walls; the endothelial linings of these vessels are different, as are the pressures of the blood running through them. Myocardial infarction, or heart attack, for instance, is an arterial clot event, while deep-vein thrombosis is due to venous clots. Therefore, treatments for these two types of thromboses have also been different.

    However, there are some diseases where you see both arterial and venous clots, Nayak says, one of those being antiphospholipid antibody syndrome (APS). This is what the researchers used as a model in their study. They identified key molecular events that serve as potential targets for treatment of thrombosis in APS.

    Neutrophils are white blood cells that serve as first responders in the body’s immune system. Ordinarily, they rush to a site of injury or infection, where they capture and destroy (by ingesting) infection-causing microorganisms.

    In the present paper, a culmination of 10 years of research, Nayak and colleagues used mouse models to show how overactive neutrophils participate in thrombosis because of their tendency to migrate and adhere to sites of injury, increasing the production of key factors used as building blocks of clots.

    Theoretically, blocking neutrophils should make the thrombosis go away, but as these cells play an important immune role, that wouldn’t be practical. “That’s why we developed nanoparticles that specifically [identify and] target one receptor on an activated neutrophil,” Nayak says.

    The nanoparticles are synthetically engineered particles and are coated with different proteins of interest, she adds. For this study, they were coated with an antibody that would target specific receptors on neutrophils. “We did a lot of ex vivo studies to show that this is very specific and targets only activated neutrophils,” she says. “And we showed that if the animals were injected with this nanoparticle, thrombosis was significantly mitigated.”

    At this point, as Nayak points out, their study is a proof of principle to show that if they could develop something like this for human patients, then they could mitigate thrombosis for them as well, irrespective of whether it was caused by cancer, APS, or any other disease.

    “For us now, the [next] challenge would be to try and develop something that would be translational, to take it to the bench [and then] from the bench to the bedside,” she says. “My next study would include patients’ samples, patients with antiphospholipid antibodies. We want to show that what we did in the mouse can be done in humans.”

    Nayak’s study could be significant not just in treating thrombosis in general, but also could play a crucial role in the treatment regimes of other diseases that have blood clots as a common complication, such as cancer. Testing their nanoparticle therapy on cancer-associated thrombosis in mice is also among Nayak’s future plans.



  • This Implant Turns Brain Waves Into Words


    A computer screen shows the question “Would you like some water?” Underneath, three dots blink, followed by words that appear, one at a time: “No I am not thirsty.”

    It was brain activity that made those words materialize—the brain of a man who has not spoken for more than 15 years, ever since a stroke damaged the connection between his brain and the rest of his body, leaving him mostly paralyzed. He has used many other technologies to communicate; most recently, he used a pointer attached to his baseball cap to tap out words on a touchscreen, a method that was effective but slow. He volunteered for my research group’s clinical trial at the University of California, San Francisco in hopes of pioneering a faster method. So far, he has used the brain-to-text system only during research sessions, but he wants to help develop the technology into something that people like himself could use in their everyday lives.


    In our pilot study, we draped a thin, flexible electrode array over the surface of the volunteer’s brain. The electrodes recorded neural signals and sent them to a speech decoder, which translated the signals into the words the man intended to say. It was the first time a paralyzed person who couldn’t speak had used neurotechnology to broadcast whole words—not just letters—from the brain.

    That trial was the culmination of more than a decade of research on the underlying brain mechanisms that govern speech, and we’re enormously proud of what we’ve accomplished so far. But we’re just getting started. My lab at UCSF is working with colleagues around the world to make this technology safe, stable, and reliable enough for everyday use at home. We’re also working to improve the system’s performance so it will be worth the effort.

    How neuroprosthetics work

    The first version of the brain-computer interface gave the volunteer a vocabulary of 50 practical words. University of California, San Francisco

    Neuroprosthetics have come a long way in the past two decades. Prosthetic implants for hearing have advanced the furthest, with designs that interface with the cochlear nerve of the inner ear or directly into the auditory brain stem. There’s also considerable research on retinal and brain implants for vision, as well as efforts to give people with prosthetic hands a sense of touch. All of these sensory prosthetics take information from the outside world and convert it into electrical signals that feed into the brain’s processing centers.

    The opposite kind of neuroprosthetic records the electrical activity of the brain and converts it into signals that control something in the outside world, such as a robotic arm, a video-game controller, or a cursor on a computer screen. That last control modality has been used by groups such as the BrainGate consortium to enable paralyzed people to type words—sometimes one letter at a time, sometimes using an autocomplete function to speed up the process.

    For that typing-by-brain function, an implant is typically placed in the motor cortex, the part of the brain that controls movement. Then the user imagines certain physical actions to control a cursor that moves over a virtual keyboard. Another approach, pioneered by some of my collaborators in a 2021 paper, had one user imagine that he was holding a pen to paper and was writing letters, creating signals in the motor cortex that were translated into text. That approach set a new record for speed, enabling the volunteer to write about 18 words per minute.

    In my lab’s research, we’ve taken a more ambitious approach. Instead of decoding a user’s intent to move a cursor or a pen, we decode the intent to control the vocal tract, comprising dozens of muscles governing the larynx (commonly called the voice box), the tongue, and the lips.

    The seemingly simple conversational setup for the paralyzed man is enabled by both sophisticated neurotech hardware and machine-learning systems that decode his brain signals. University of California, San Francisco

    I began working in this area more than 10 years ago. As a neurosurgeon, I would often see patients with severe injuries that left them unable to speak. To my surprise, in many cases the locations of brain injuries didn’t match up with the syndromes I learned about in medical school, and I realized that we still have a lot to learn about how language is processed in the brain. I decided to study the underlying neurobiology of language and, if possible, to develop a brain-machine interface (BMI) to restore communication for people who have lost it. In addition to my neurosurgical background, my team has expertise in linguistics, electrical engineering, computer science, bioengineering, and medicine. Our ongoing clinical trial is testing both hardware and software to explore the limits of our BMI and determine what kind of speech we can restore to people.

    The muscles involved in speech

    Speech is one of the behaviors that sets humans apart. Plenty of other species vocalize, but only humans combine a set of sounds in myriad different ways to represent the world around them. It’s also an extraordinarily complicated motor act—some experts believe it’s the most complex motor action that people perform. Speaking is a product of modulated air flow through the vocal tract; with every utterance we shape the breath by creating audible vibrations in our laryngeal vocal folds and changing the shape of the lips, jaw, and tongue.

    Many of the muscles of the vocal tract are quite unlike the joint-based muscles such as those in the arms and legs, which can move in only a few prescribed ways. For example, the muscle that controls the lips is a sphincter, while the muscles that make up the tongue are governed more by hydraulics—the tongue is largely composed of a fixed volume of muscular tissue, so moving one part of the tongue changes its shape elsewhere. The physics governing the movements of such muscles is totally different from that of the biceps or hamstrings.

    Because there are so many muscles involved and they each have so many degrees of freedom, there’s essentially an infinite number of possible configurations. But when people speak, it turns out they use a relatively small set of core movements (which differ somewhat in different languages). For example, when English speakers make the “d” sound, they put their tongues behind their teeth; when they make the “k” sound, the backs of their tongues go up to touch the ceiling of the back of the mouth. Few people are conscious of the precise, complex, and coordinated muscle actions required to say the simplest word.

    Team member David Moses looks at a readout of the patient’s brain waves [left screen] and a display of the decoding system’s activity [right screen]. University of California, San Francisco

    My research group focuses on the parts of the brain’s motor cortex that send movement commands to the muscles of the face, throat, mouth, and tongue. Those brain regions are multitaskers: They manage muscle movements that produce speech and also the movements of those same muscles for swallowing, smiling, and kissing.

    Studying the neural activity of those regions in a useful way requires both spatial resolution on the scale of millimeters and temporal resolution on the scale of milliseconds. Historically, noninvasive imaging systems have been able to provide one or the other, but not both. When we started this research, we found remarkably little data on how brain activity patterns were associated with even the simplest components of speech: phonemes and syllables.

    Here we owe a debt of gratitude to our volunteers. At the UCSF epilepsy center, patients preparing for surgery typically have electrodes surgically placed over the surfaces of their brains for several days so we can map the regions involved when they have seizures. During those few days of wired-up downtime, many patients volunteer for neurological research experiments that make use of the electrode recordings from their brains. My group asked patients to let us study their patterns of neural activity while they spoke words.

    The hardware involved is called electrocorticography (ECoG). The electrodes in an ECoG system don’t penetrate the brain but lie on the surface of it. Our arrays can contain several hundred electrode sensors, each of which records from thousands of neurons. So far, we’ve used an array with 256 channels. Our goal in those early studies was to discover the patterns of cortical activity when people speak simple syllables. We asked volunteers to say specific sounds and words while we recorded their neural patterns and tracked the movements of their tongues and mouths. Sometimes we did so by having them wear colored face paint and using a computer-vision system to extract the kinematic gestures; other times we used an ultrasound machine positioned under the patients’ jaws to image their moving tongues.

    The system starts with a flexible electrode array that’s draped over the patient’s brain to pick up signals from the motor cortex. The array specifically captures movement commands intended for the patient’s vocal tract. A port affixed to the skull guides the wires that go to the computer system, which decodes the brain signals and translates them into the words that the patient wants to say. His answers then appear on the display screen. Chris Philpot

    We used these systems to match neural patterns to movements of the vocal tract. At first we had a lot of questions about the neural code. One possibility was that neural activity encoded directions for particular muscles, and the brain essentially turned these muscles on and off as if pressing keys on a keyboard. Another idea was that the code determined the velocity of the muscle contractions. Yet another was that neural activity corresponded with coordinated patterns of muscle contractions used to produce a certain sound. (For example, to make the “aaah” sound, both the tongue and the jaw need to drop.) What we discovered was that there is a map of representations that controls different parts of the vocal tract, and that together the different brain areas combine in a coordinated manner to give rise to fluent speech.

    The role of AI in today’s neurotech

    Our work depends on the advances in artificial intelligence over the past decade. We can feed the data we collected about both neural activity and the kinematics of speech into a neural network, then let the machine-learning algorithm find patterns in the associations between the two data sets. It was possible to make connections between neural activity and produced speech, and to use this model to produce computer-generated speech or text. But this technique couldn’t train an algorithm for paralyzed people because we’d lack half of the data: We’d have the neural patterns, but nothing about the corresponding muscle movements.

    The smarter way to use machine learning, we realized, was to break the problem into two steps. First, the decoder translates signals from the brain into intended movements of muscles in the vocal tract, then it translates those intended movements into synthesized speech or text.

    We call this a biomimetic approach because it copies biology; in the human body, neural activity is directly responsible for the vocal tract’s movements and is only indirectly responsible for the sounds produced. A big advantage of this approach comes in the training of the decoder for that second step of translating muscle movements into sounds. Because those relationships between vocal tract movements and sound are fairly universal, we were able to train the decoder on large data sets derived from people who weren’t paralyzed.
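
    In software terms, the biomimetic approach amounts to composing two separately trained models: one mapping neural activity to intended vocal-tract kinematics, the other mapping kinematics to text or sound. The sketch below shows that composition schematically; the layer sizes, the plain recurrent architecture, and the 33 articulator channels are illustrative assumptions, not the decoders actually used in the trial.

    # Schematic sketch of a two-stage "biomimetic" decoder: stage 1 maps ECoG
    # features to intended vocal-tract kinematics; stage 2 maps kinematics to
    # an output (here, a distribution over a small word vocabulary). Sizes and
    # the plain GRU layers are illustrative assumptions.
    import torch
    import torch.nn as nn

    class NeuralToKinematics(nn.Module):
        """Stage 1: trained on paired neural and articulator data."""
        def __init__(self, n_channels=256, n_articulators=33):
            super().__init__()
            self.rnn = nn.GRU(n_channels, 128, batch_first=True)
            self.out = nn.Linear(128, n_articulators)

        def forward(self, ecog):            # (batch, time, channels)
            h, _ = self.rnn(ecog)
            return self.out(h)              # (batch, time, articulators)

    class KinematicsToWords(nn.Module):
        """Stage 2: trainable on data from speakers who are not paralyzed."""
        def __init__(self, n_articulators=33, vocab_size=50):
            super().__init__()
            self.rnn = nn.GRU(n_articulators, 128, batch_first=True)
            self.out = nn.Linear(128, vocab_size)

        def forward(self, kinematics):
            h, _ = self.rnn(kinematics)
            return self.out(h[:, -1])       # word logits from final time step

    # Composing the stages: ECoG window -> kinematics -> word probabilities.
    stage1, stage2 = NeuralToKinematics(), KinematicsToWords()
    ecog = torch.randn(1, 200, 256)         # 1 trial, 200 time steps, 256 channels
    probs = torch.softmax(stage2(stage1(ecog)), dim=-1)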

    A clinical trial to test our speech neuroprosthetic

    The next big challenge was to bring the technology to the people who could really benefit from it.

    The National Institutes of Health (NIH) is funding our pilot trial, which began in 2021. We already have two paralyzed volunteers with implanted ECoG arrays, and we hope to enroll more in the coming years. The primary goal is to improve their communication, and we’re measuring performance in terms of words per minute. An average adult typing on a full keyboard can type 40 words per minute, with the fastest typists reaching speeds of more than 80 words per minute.

    Edward Chang was inspired to develop a brain-to-speech system by the patients he encountered in his neurosurgery practice. Barbara Ries

    We think that tapping into the speech system can provide even better results. Human speech is much faster than typing: An English speaker can easily say 150 words in a minute. We’d like to enable paralyzed people to communicate at a rate of 100 words per minute. We have a lot of work to do to reach that goal, but we think our approach makes it a feasible target.

    The implant procedure is routine. First the surgeon removes a small portion of the skull; next, the flexible ECoG array is gently placed across the surface of the cortex. Then a small port is fixed to the skull bone and exits through a separate opening in the scalp. We currently need that port, which attaches to external wires to transmit data from the electrodes, but we hope to make the system wireless in the future.

    We’ve considered using penetrating microelectrodes, because they can record from smaller neural populations and may therefore provide more detail about neural activity. But the current hardware isn’t as robust and safe as ECoG for clinical applications, especially over many years.

    Another consideration is that penetrating electrodes typically require daily recalibration to turn the neural signals into clear commands, and research on neural devices has shown that speed of setup and performance reliability are key to getting people to use the technology. That’s why we’ve prioritized stability in creating a “plug and play” system for long-term use. We conducted a study looking at the variability of a volunteer’s neural signals over time and found that the decoder performed better if it used data patterns across multiple sessions and multiple days. In machine-learning terms, we say that the decoder’s “weights” carried over, creating consolidated neural signals.


    Because our paralyzed volunteers can’t speak while we watch their brain patterns, we asked our first volunteer to try two different approaches. He started with a list of 50 words that are handy for daily life, such as “hungry,” “thirsty,” “please,” “help,” and “computer.” During 48 sessions over several months, we sometimes asked him to just imagine saying each of the words on the list, and sometimes asked him to overtly try to say them. We found that attempts to speak generated clearer brain signals and were sufficient to train the decoding algorithm. Then the volunteer could use those words from the list to generate sentences of his own choosing, such as “No I am not thirsty.”

    We’re now pushing to expand to a broader vocabulary. To make that work, we need to continue to improve the current algorithms and interfaces, but I am confident those improvements will happen in the coming months and years. Now that the proof of principle has been established, the goal is optimization. We can focus on making our system faster, more accurate, and—most important—safer and more reliable. Things should move quickly now.

    Probably the biggest breakthroughs will come if we can get a better understanding of the brain systems we’re trying to decode, and how paralysis alters their activity. We’ve come to realize that the neural patterns of a paralyzed person who can’t send commands to the muscles of their vocal tract are very different from those of an epilepsy patient who can. We’re attempting an ambitious feat of BMI engineering while there is still lots to learn about the underlying neuroscience. We believe it will all come together to give our patients their voices back.

    This article appears in the February 2023 print issue.



  • This AI Watches You Walk to Diagnose Parkinson’s, MS


    This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.

    When it’s suspected that a person may have a certain neurological disorder, such as multiple sclerosis or Parkinson’s disease, doctors will often assess the person’s ability to walk. Simply by looking at someone’s gait, clues may emerge about an underlying neurological disorder.

    In a recent study, a team of researchers at the University of Illinois explored a technique using standard video cameras combined with AI that can assess a person’s gait and identify those who may have Parkinson’s disease or MS. The results, which show the approach can reach accuracies as high as 79 percent, were published on 20 September in the IEEE Journal of Biomedical and Health Informatics.

    Neurological disorders can often cause subtle changes in a person’s gait, even during the early-to-mid stages of disease. Often health care professionals will use specialized equipment such as a lab-based motion-capture system, force plates, or electromyography sensors to assess a person’s gait for neurological abnormalities, which can be expensive and require skilled personnel to analyze the results.

    “The integration of video of people walking and AI may allow for a wider range of health care providers in rural or underserved communities to identify early gait changes from neurological conditions and more efficiently provide a potential diagnosis,” explains Manuel Enrique Hernandez, an assistant professor in the Department of Kinesiology and Community Health at the University of Illinois at Urbana-Champaign.

    “Properly developed, this could be a game changer.” —Richard Sowers

    In their study, Hernandez and his colleagues recruited a total of 33 volunteers—10 with MS, 9 with Parkinson’s disease, and 14 who did not have any neurological disease. All of the volunteers were asked to walk on a treadmill while two standard RGB cameras recorded their movements from side and front angles.

    “We looked at the body coordinates for hips, knees, ankles, the big and small toes and the heels,” explains Rachneet Kaur, a Ph.D. student at the University of Illinois who was involved in the research. “We analyzed how these coordinates moved over time to look for differences between adults with and without MS or Parkinson’s disease.”

    In total, the researchers developed and validated 16 different AI algorithms to assess these gait movements. Several of the algorithms were more than 75 percent accurate in predicting a person’s neurological status, with the top-performing algorithm—a convolutional deep-learning model—achieving 79 percent accuracy.
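
    As a rough outline of that pipeline (per-frame joint coordinates in, a class prediction out), the sketch below feeds coordinate time series from a pose estimator into a small 1-D convolutional network. The window length, joint count, and architecture are illustrative assumptions, not any of the 16 models evaluated in the study.

    # Rough sketch of the gait-classification stage: a 1-D convolutional
    # network over time series of joint coordinates (hips, knees, ankles,
    # toes, heels) already extracted from video by a pose estimator. The
    # window length, channel count, and architecture are assumptions.
    import torch
    import torch.nn as nn

    N_JOINTS = 14            # assumed: 7 landmarks per side
    N_COORDS = 2 * N_JOINTS  # x, y per joint
    WINDOW = 300             # assumed: ~10 s of video at 30 frames/s

    class GaitClassifier(nn.Module):
        def __init__(self, n_classes=3):   # healthy, MS, Parkinson's
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(N_COORDS, 64, kernel_size=7, padding=3), nn.ReLU(),
                nn.MaxPool1d(2),
                nn.Conv1d(64, 128, kernel_size=5, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),
            )
            self.classify = nn.Linear(128, n_classes)

        def forward(self, x):              # x: (batch, N_COORDS, WINDOW)
            return self.classify(self.features(x).squeeze(-1))

    model = GaitClassifier()
    clip = torch.randn(8, N_COORDS, WINDOW)   # a batch of 8 walking clips
    logits = model(clip)                       # (8, 3) class scores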

    “We were pleasantly surprised with the validation results of using somewhat inexpensive video equipment and open-source image processing software to get the performance we saw,” says Richard Sowers, a professor in the departments of Mathematics and of Industrial and Enterprise Systems Engineering at Urbana-Champaign who was also involved in the study. “Properly developed, this could be a game changer.”

    Although commercialization of such an approach is still a few years away, the team says they have made their work available for free online for other researchers to use. In future work, they hope to explore how the inclusion of people with other neurological disorders could improve the accuracy of this approach. They also hope to experiment with the number and positioning of the cameras.



  • Pong-in-a-Dish


    Ever hear of the Turk—the 19th-century mechanism topped by a turbaned head that played chess against all comers? In fact, hidden inside was a diminutive chessmaster, one you might imagine deadpanning, “Eh, it’s a living.”

    Then there’s its namesake, the Mechanical Turk—a 21st-century service offered by Amazon to mark up images on the Web with the help of crowdsourced freelancers. They, too, might intone, glassy-eyed, “It’s a living.”

    Now we have a kind of Biological Turk. A mass of neurons acts as a computer that mimics a human being playing the classic computer game Pong. The neurons, some taken from mouse embryos, others grown from human precursor cells, spread out into a one-layer, 800,000-cell mesh called a biological neural network, which lives in a giant petri dish called the DishBrain. There it meshes with arrays of electrodes that form an interface to silicon hardware. Software mounted on that hardware provides stimulation and feedback, and the minibrain learns how to control a paddle on a simulated ping-pong table.

    The work was described recently in the journal Neuron by Brett Kagan, the chief scientific officer of Cortical Labs, a startup in Melbourne, Australia, and nine colleagues at that company.

    The authors talk hopefully about the emergence of sentience, a notion that other brain-in-a-dish researchers have also recently floated. But they seem to stand on solid ground when they say their method will help to advance brain science, on the one hand, and computer science, on the other. A bio-neuro-network might model the effects of drugs on the brain in ways that single-cell neurons can’t. Also, neurons may show themselves to be not just protoplasmic logic switches but something more like entire computers.

    The question before us, though, is how does the thing play Pong?


    First, the electronic scaffolding hits the minibrain with electrical signals that represent the position and movement of the virtual ball. It’s rather like the action potential that a firing neuron would use to convey, say, a sensory signal from the eye to the brain. Because the electrodes are placed at different points in the cell network, the system physically represents the different possible locations. Further information comes from the frequency of the signals, which varies with the distance of the ball to the virtual paddle.

    The network responds to these stimuli like a motor neuron, sending out a signal that moves the virtual paddle. If the resulting movement causes the ball to bounce, the neural network gets a “reward.” Failure results in a signal that has the opposite effect.

    “Reward” is put in sneer quotes because these cells don’t have feelings. They can’t experience the joy of victory, the agony of defeat. There’s no dopamine, no salted popcorn. Instead, the researchers say, the network is working to minimize unpredictability. In this view, the so-called reward is a predictable signal, the anti-reward is an unpredictable one.
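
    Restated as a toy control loop: the ball’s position selects which sensory electrodes fire, its distance from the paddle sets the stimulation frequency, and the feedback after each rally is either a predictable or an unpredictable stimulus. The electrode count, frequency range, and feedback patterns in the sketch below are illustrative assumptions, not Cortical Labs’ published protocol.

    # Toy restatement of the DishBrain closed loop. Electrode layout, frequency
    # range, and feedback patterns are illustrative assumptions, not Cortical
    # Labs' actual protocol.
    import random

    N_SENSORY_SITES = 8          # assumed number of "sensory" electrode columns

    def encode_ball(ball_y, ball_distance):
        """Place code: which sensory site fires. Rate code: how fast it fires."""
        site = min(int(ball_y * N_SENSORY_SITES), N_SENSORY_SITES - 1)
        freq_hz = 4 + 36 * (1 - ball_distance)   # closer ball -> higher frequency
        return site, freq_hz

    def feedback(hit):
        """'Reward' is a predictable stimulus; a miss gets an unpredictable one."""
        if hit:
            return {"pattern": "fixed burst, all sites", "duration_ms": 100}
        return {"pattern": f"random-site noise, seed {random.random():.3f}",
                "duration_ms": 4000}

    # One time step of the loop: stimulate, read the culture's motor output
    # (supplied by the real hardware), move the paddle, then give feedback.
    site, freq = encode_ball(ball_y=0.4, ball_distance=0.2)
    print(site, round(freq, 1))
    print(feedback(hit=True))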

    Kagan tells IEEE Spectrum that the system as a whole then reorganizes to become better at playing the game. The most marked improvement came in the first five minutes of play.

    It seems amazing that a mere 800,000 neurons can model the world, even a simplified world. But, Kagan says, such feats are seen in nature. "Flies have even fewer neurons but must be able to do some modeling—although perhaps not in a way a human may—to navigate a complex and changing 3D world," he says.

    As he and his colleagues point out in their report, the ability of neurons to adapt to external stimuli is well established in vivo; it forms the basis for all animal learning. But theirs, they say, is the first in vitro demonstration involving a goal-directed behavior.

    The current version of Pong is forgiving. The paddle is broad, the volley slow, the ball unspinning. Even a neophyte would crush DishBrain. Then again, the same was true of all of AI’s early assays in game playing.

    The early chess machines would sometimes senselessly give up first a pawn, then a piece, then the queen—all because they were attempting to put off a disagreeable action to a point beyond the built-in planning horizon. Poker-playing programs got good pretty fast, but the early ones sometimes played too well—that is, too cautiously—against weak human opponents, which reduced their winnings. Car navigation programs would send you into a vacant lot.

    You might think that just getting a machine to play a decent game is the hard part, and that further improving it to perfection ought to be a snap. Edgar Allan Poe made that judgement when he called the Turk a fraud because it occasionally erred. His conclusion was correct but his reasoning was faulty.

    It’s not easy turning a barely there machine into a world champion at chess or Go. And yet it has been done.


