-
New Neurotech Eschews Electricity for Ultrasound

2023 has been a year chock-full of developments in brain-computer interface (BCI) technology: devices that can decode brain signals and use them to control an external device, or use a device to alter brain signals. BCI developers Neuralink, Paradromics, and Synchron all reached important milestones in the clinical trial process, each bringing a fully implantable BCI a little closer to clinical reality in the United States. Those companies are developing BCIs that interact with the brain electrically, but a recently announced research collaboration is developing a BCI that uses a radically different medium: ultrasound.
In October, the companies Forest Neurotech and Butterfly Network announced a US $20 million joint research effort to pursue the development of an ultrasound-based BCI. Forest Neurotech will license Butterfly Network’s compact ultrasound-on-a-chip technology to develop a minimally invasive ultrasound BCI device for use in academic and research settings.
Most BCI systems measure brain activity by reading out electrical signals, and some also electrically stimulate the brain. Forest’s proposed device will instead use high-frequency sound waves to interface with the brain. The scientific principles guiding the approach are unusual but simple: focused ultrasound (FUS) waves directed at neural tissue can change neurons’ action potentials—the ionic currents the brain cells use to communicate with each other. Ultrasound can also be used to estimate neural activity within brain regions by measuring local changes in blood flow via the Doppler effect; that technique is known as functional ultrasound imaging (fUSI).
Forest Neurotech CTO William Biederman says that Butterfly’s technology will allow his team to build a BCI that, when implanted in the user’s skull, will use ultrasound to stimulate and record from the brain “with submillimeter accuracy.”
Why use ultrasound in a BCI?
Ultrasound has certain benefits over other neural stimulation and imaging techniques. For stimulation, focused ultrasound waves can flexibly target specific regions throughout the brain from outside the skull. In contrast, electrical stimulation techniques are more spatially limited because electricity does not travel very far through brain tissue, so electrically stimulating deep regions of the brain requires invasive surgical procedures that place electrodes near the relevant sites. Electrically recording from the brain requires either deep implants or electrodes on the scalp that can record activity only from brain regions near the surface. Recording deep neural activity via ultrasound can’t be done through the skull, but it can be done by removing a piece of skull and laying the device on the surface of the brain.
Butterfly’s hardware packs the ultrasound generation, steering, and recording capabilities of larger clinical ultrasound systems onto a single chip-size device. Forest plans to use these ultrasound chips to stimulate brain regions with steered and focused ultrasound waves, and also to measure neural activity with fUSI.
Butterfly Network’s ultrasound-on-a-chip technology will be used in a new brain-computer interface.
The fUSI technique estimates changes in neural activity by measuring changes in blood flow. Neurons, like all cells, need blood to function. Increases in neural activity require increases in blood flow, which fUSI techniques measure by projecting ultrasound at a brain region of interest and recording the waves that bounce back. When sound bounces off of a mass of flowing blood, the returning sound waves wiggle at a different frequency than those emitted. fUSI uses this Doppler shift phenomenon to estimate changes in blood flow and, by proxy, the electrochemical chatter of neurons.
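The arithmetic behind that estimate is simple enough to sketch in a few lines of code. The Python snippet below is purely illustrative: the constants and example numbers are textbook values, not Forest’s or Butterfly’s software.

```python
import math

SPEED_OF_SOUND_TISSUE = 1540.0  # meters per second, typical for soft tissue

def doppler_shift_hz(emitted_freq_hz, blood_speed_m_per_s, angle_rad=0.0):
    """Frequency shift of an ultrasound echo off blood moving toward the probe.

    The factor of 2 appears because the wave is shifted twice: once on the
    way out to the moving blood, and again on the echo back to the sensor.
    """
    return (2.0 * emitted_freq_hz * blood_speed_m_per_s * math.cos(angle_rad)
            / SPEED_OF_SOUND_TISSUE)

# Example: a 15 MHz pulse echoing off blood flowing at 2 cm/s shifts by ~390 Hz
print(f"{doppler_shift_hz(15e6, 0.02):.0f} Hz")
```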
“Using blood flow to image neural activity with functional ultrasound works really well,” says Mikhail Shapiro, an advisor to Forest Neurotech and professor at the California Institute of Technology. “It works much better than anybody expected when the idea was first put forward.”
Ultrasound can both record and control neural activity. While the latter phenomenon has been known since at least the 1950s, scientists still aren’t sure exactly why FUS makes neurons fire. Recent experiments, in which ultrasound energy was directed at isolated brain tissue from rodents, suggest that particular calcium ion channels are thrown open by high-frequency sound waves, though the exact physics of that interaction remain a mystery.
To both stimulate and record with its BCI, Forest’s system will require implanting multiple Butterfly ultrasound chips in a user’s skull. While ultrasound stimulation is effective through bone, fUSI recording techniques are not: sound is muffled when passing through the skull, and the attenuation of both the outgoing and returning waves makes a fully noninvasive ultrasound recording system untenable. Instead, the company plans to place the transducers into the skull, flush with the surface of the dura, a protective membrane that covers the brain.
The collaboration between Forest and Butterfly is a part of Butterfly Network’s larger Butterfly Gardens program, which seeks to make its technology available to medical-device teams. Forest Neurotech, a part of the nonprofit group Convergent Research, is focusing on technological development rather than creating and marketing a clinical device. “Within the lifetime of our nonprofit research organization, we don’t have plans to submit to the FDA,” says Biederman. “We are trying to drive fundamental scientific discovery and uses of this type of technology.” Once the technology is more developed, the company plans to make it available to other research organizations in academia and industry for further product and scientific development.
-
Butterfly-Eyed Sensors Capture UV Images

The compound eyes of an insect are structurally very different from the eyes of a human—almost a separate category of organ altogether from the more familiar ocular arrangement of lens, cornea, iris, and retina. For instance, Papilio xuthus, the Asian swallowtail butterfly, can see wavelengths of light in the ultraviolet spectrum, far beyond the limits of human vision or the capabilities of standard optical electronics in cameras and mobile phones.
Researchers at the University of Illinois Urbana-Champaign (UIUC), inspired by P. xuthus’ unique eyes, have developed a new kind of camera sensor that can detect ultraviolet (UV) light. Described recently in Science Advances, the new device extends the capabilities of a real-time optical sensor with a layer of fluorescent crystal that makes it sensitive to UV light. This design, which mimics the function of fluorescent compounds in the eyes of P. xuthus, may enable the development of new medical devices and imaging technologies.
“As an engineer, you always wonder: what principles can we mimic from [nature’s] visual systems to make better cameras?”
—Viktor Gruev, University of Illinois Urbana-Champaign
P. xuthus can see UV light because of two complementary mechanisms. One is a set of cells in its eyes that, much like the short-wavelength cone cells in human eyes, are most sensitive to blue light but are also partially sensitive to near-blue UV light. However, to see shorter wavelengths of light in the UVA and UVB ranges—280 to 400 nm—the butterfly uses a clever trick: cells containing fluorescent dyes within the insect’s eyes radiate a band of green light when excited by UV radiation. These dyes translate otherwise invisible UV light into a signal that the butterfly can see.
Viktor Gruev, a professor at UIUC and an author of the new study, says that the idea to mimic P. xuthus’ visual system in silicon resulted from years of collaboration between him and biologists studying how different animals see. “I’ve been working with visual biologists for over ten years,” says Gruev, whose previous collaborations have produced other bio-inspired cameras. “As an engineer, you always wonder: what principles can we mimic from those visual systems to make better cameras?”
The UIUC team built the sensor using a layered CMOS design with a top layer of fluorescent perovskite crystal. Similar to a Foveon chip, the three photosensitive layers of the sensor divide the spectrum of visible light into three bands with peak sensitivities at blue, green, and red. These bands are determined by the width and position of each layer, with red light penetrating down to the lowest sensor layer and blue light only activating the highest layer.
The UV imaging sensor is seen here under white light [top] and under UV light [bottom]. The green appearance comes from the fluorescence of the perovskite nanocrystal (PNC) layer. University of Illinois Urbana-Champaign/Science Advances
While this CMOS sensor arrangement does mimic the partial UV sensitivity of the layered photoreceptor cells in P. xuthus’ eyes, the real magic in the UIUC chip comes from a layer of perovskite crystal deposited on the sensor’s surface. The perovskite nanocrystal (CsPbBr3) is both fluorescent under UV radiation and transparent to visible light. This fluorescence causes the crystal layer to glow green when irradiated with UV light.
Just like the fluorescent dyes found in P. xuthus eyes, the perovskite nanocrystals translate incoming UV light into a wavelength that the CMOS sensor already responds to. Paired with the partial sensitivity of the blue-band sensor layer to near-UV light, the perovskite fluorescence enables the sensor to resolve exact wavelengths of UV light—as short as 250 nm—in real time and at any point on its surface. “The UV light is detected in two components,” says Shuming Nie, a UIUC professor and another author on the paper. “The first is the UV absorbed by the nanocrystals that convert it into a visible signal—in our case, green—and that’s picked up by the photodiode. There’s also a residual UV picked up by the top-layer photodiode. We can use the ratio of those two to calculate the UV wavelength.”
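A minimal sketch of that two-channel ratio idea follows. Note that the calibration values below are invented for illustration; they are not the UIUC team’s measured calibration.

```python
import numpy as np

# Hypothetical calibration: ratio of (green fluorescence signal) to
# (residual UV in the top photodiode), measured at known UV wavelengths
calib_wavelengths_nm = np.array([250, 280, 310, 340, 370, 400])
calib_ratios = np.array([5.2, 3.9, 2.8, 1.9, 1.2, 0.7])  # invented, monotonic

def estimate_uv_wavelength(green_signal, residual_uv_signal):
    """Interpolate the measured signal ratio back to a UV wavelength."""
    ratio = green_signal / residual_uv_signal
    # np.interp needs ascending x values, so flip the descending calibration
    return np.interp(ratio, calib_ratios[::-1], calib_wavelengths_nm[::-1])

print(f"{estimate_uv_wavelength(3.0, 1.0):.0f} nm")  # ~305 nm for a ratio of 3
```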
The researchers say this is a big step forward in UV sensing technology. “There is no, that I’m aware of, UV sensing technology that gives you a 2D image in real time with exact wavelength resolution all at once,” says Gruev. “UV spectrometers can give you more spectral information, but only at a single point.”
The researchers propose that their fast, flexible sensor has direct biomedical applications. Cancer cells overproduce certain kinds of proteins and amino acids that fluoresce in the UV spectrum when irradiated. Clinical devices can use this principle to detect cancerous tissues quickly and directly. The researchers show in the paper that the device can resolve the distinct fluorescent UV signatures of the cancer-indicating proteins and amino acids tyrosine, elastin, tryptophan, and nicotinamide adenine dinucleotide (NADH). They also show that the device can distinguish between lab cultures of cancerous and non-cancerous cells based on their UV fluorescence.
In future work, the researchers plan to expand on their design by adding multiple perovskite crystal layers that fluoresce in different bands, which would provide better UV wavelength resolution. “A better way to resolve wavelengths is with two-color nanocrystals,” says Nie. “This gives you more real estate, more space, to resolve the UV wavelengths.” Adding more color-tuned crystal layers would heighten the device’s cancer detection capabilities, Nie says. “With only one layer of nanocrystal,” he continues, “some of the UV signals from amino acids in tumor cells are clustered together. If we use the two-color crystals, they are better separated.”
-
Zoom Fatigue Is Real, According to Brain Scans

Since the pandemic began in 2020, Zoom video teleconferences have become a new normal for many people’s working life. Now a study finds that these virtual meetings exhaust the brain more than face-to-face communication.
Lockdowns due to COVID-19 led to skyrocketing adoption of Zoom, Skype, and other videoconferencing tools as substitutes for face-to-face meetings. For example, as the new research points out, the number of Zoom.us monthly visits went from 71.6 million in December 2019 to an all-time high of 2.8 billion in October 2020, before falling to 943 million in March 2023.
This rise in remote work and online learning has revealed a phenomenon often dubbed “Zoom fatigue,” a feeling of exhaustion linked with videoconferencing.
“The personal implications of Zoom fatigue extend far beyond mere tiredness,” says study co–senior author René Riedl, an information systems researcher at the University of Applied Sciences Upper Austria in Steyr and Johannes Kepler University Linz in Austria. “Individuals grapple with increased stress levels, a reduction in productivity, and a pervasive sense of disconnection.”
In addition, “On a broader social scale, the consequences materialize as a potential deterioration in the quality of communication and collaboration, impacting both professional and personal relationships,” says study co–senior author Gernot Müller-Putz, a biomedical engineer at Graz University of Technology, in Austria. “The absence of nuanced nonverbal cues in virtual interactions inhibits the richness of communication, making it challenging for participants to fully engage and connect in a meaningful way.”
Previous research into videoconference fatigue depended on questionnaires. In the new study, Riedl, Müller-Putz, and their colleagues examined the effects of videoconferencing on the brain and heart.
The scientists had 35 university students all take part in 50-minute lectures that were held both in person in a conventional lecture hall and online via videoconferencing. The researchers analyzed the effects of these meetings using electroencephalography (EEG) and electrocardiography (ECG). These methods noninvasively record brain and heart activity using electrodes stuck on the head and chest, respectively. They also gave the volunteers questionnaires asking about fatigue and mood.
“We found that after 50 minutes of videoconferencing, significant changes in physiological and subjective fatigue could be observed.”
—René Riedl, Johannes Kepler University Linz, Austria
The brain and heart readings suggested that videoconferencing led to significantly greater signs of fatigue, sadness, drowsiness, and negative feelings, as well as less attention and engagement, than a face-to-face lecture did. The questionnaires likewise showed that the volunteers felt significantly more tired, drowsy, and fed up, and less lively, happy, and active, after videoconferencing than after face-to-face sessions.
When it comes to finding ways to reduce videoconferencing fatigue, “individuals and organizations can adopt practices such as scheduling regular breaks,” Riedl says. “Based on our research results, we recommend a break after 30 minutes, because we found that after 50 minutes of videoconferencing, significant changes in physiological and subjective fatigue could be observed. Moreover, utilizing features like ‘speaker view’ to mitigate the intensity of perceived continuous eye contact could be helpful.”
Videoconferencing tool developers may also investigate “designing platforms that not only facilitate natural communication, but also replicate the nuanced dynamics of face-to-face interactions, thereby reducing the cognitive load associated with virtual meetings,” Müller-Putz says. However, “in our opinion, it is not possible to fully replicate natural face-to-face interactions, for which human evolution has endowed us with specific capabilities, in particular those related to brain function.”
All in all, the scientists do note that it is unrealistic to completely abstain from videoconferencing. Still, they say these findings suggest that videoconferencing should be viewed not as a wholesale substitute for face-to-face communication, but as a complementary tool.
“It is important to underscore how a supportive work culture, rooted in understanding the nuances of virtual interactions, can substantially alleviate the impact of prolonged reliance on digital communication platforms,” Müller-Putz says. “Videoconferencing is just one form of electronic communication. Several other forms, like email communication, social-collaboration platforms in organizations, the extensive use of mobile phones, and communication via avatars in the metaverse, should be studied more extensively in the future. In particular, we recommend investigating the various implications of the use of such communication forms, including stress, cognitive workload, well-being, health, development, satisfaction, and productivity, among others.”
One potential criticism of this work is how much these findings might apply beyond the context of university lectures. “Other research groups are invited to replicate our findings in other contexts, such as the business context,” Riedl says. “Moreover, future studies could also complement our findings by the use of other neuroscience and physiological measurement tools.”
The scientists detailed their findings online 26 October in the journal Scientific Reports.
-
Spinal Stimulator’s Gentle Zaps Help Treat Parkinson’s

Marc Gauthier was 36 years old when he was diagnosed with Parkinson’s disease. At the age of 44, Gauthier, who lives in Pessac, France, was treated with a deep brain stimulator device that helps to control tremors in his arms. As he grew older, the disease made it difficult for him to walk, causing him to lose balance and fall up to six times a day.
In 2021, at the age of 61, Gauthier underwent an experimental medical procedure that has given him back his mobility. Scientists and doctors with the NeuroRestore group at the Swiss Federal Institute of Technology (EPFL) in Lausanne, Switzerland, have surgically implanted an array of epidural electrodes within Gauthier’s vertebrae. These electrodes can stimulate neurons in Gauthier’s spine in a way that enables the nerves and muscles that control his gait and balance to function more as they did before he developed Parkinson’s. The team documented the procedure in a paper published 6 November in the journal Nature Medicine.
Parkinson’s disease is a neurological disorder characterized by the progressive loss of motor control. While the disease develops differently in every individual, Parkinson’s generally leads to a nervous-system-wide loss of the neurons that produce and transmit the neurotransmitter dopamine. These dopaminergic neurons are necessary for many brain functions, including fine motor control. As these neurons are lost, Parkinson’s patients develop the tremors commonly associated with the disease.
“You’re taking advantage of the circuitry of the spinal cord. The spinal cord is doing it for you.”
—Eduardo Moraud, NeuroRestore
To counteract these tremors, the research team at EPFL developed a novel intervention: targeted electrical stimulation to the patient’s spine. The researchers had previously seen success using the same stimulator system for spinal injury patients. The team found that in primate models of Parkinson’s, tuned patterns of electrical stimulation applied to the spinal cord could restore a smooth and steady gait to primates with impaired mobility. The team then sought to translate these results to a clinical treatment, and found a willing participant in Gauthier through their connections to the Lausanne and Bordeaux medical community. “He was never scared,” says researcher and NeuroRestore group member Eduardo Moraud. “He was very courageous in that sense.”
The technology works by electrically stimulating structures on the spine called dorsal root ganglia, where sensory nerves return to the spine from the body’s extremities. Normally, the dorsal roots coordinate the delicate balance of muscles required for smooth, easy walking. By stimulating the neurons in the dorsal ganglia, the epidural stimulator gets those reflex-driven circuits working as they were before the onset of Parkinson’s disease. “You’re taking advantage of the circuitry of the spinal cord,” says Moraud. “The spinal cord is doing it for you.”
The stimulator array—a pad of electrodes arranged in two columns—was surgically inserted alongside the back of Gauthier’s spinal cord. The array was positioned to specifically stimulate the dorsal ganglia that most directly controlled Gauthier’s gait. Once inserted between the vertebrae and the spinal cord, the stimulator was fastened to nearby vertebrae to keep it in place.
After surgical implantation and months of physical rehabilitation, Gauthier has recovered his ability to walk in a smooth, controlled manner. The device is capable of delivering different stimulation patterns, depending on the situation. The standard pattern is a constant stimulation to compensate for Gauthier’s baseline imbalances. While this baseline stimulation is enough to keep Gauthier walking at an easy pace, the stimulator system can drive more effortful leg motions—climbing stairs, getting up from chairs, or walking on uneven terrain—with a supplementary stimulation pattern driven by signals collected from a set of sensors placed on Gauthier’s feet and legs that measure his intended motions.
Speaking through a translator at a recent press conference, Gauthier stated, “I experienced a first rebirth 20 years ago when I received a deep-brain stimulation implant. I then experienced a second rebirth two years ago when I received the spinal stimulator. I can now walk with much more confidence and my daily life has profoundly improved.”
With support from the Michael J. Fox Foundation for Parkinson’s Research and the medical device developer Onward, the research team at EPFL hopes to repeat the treatment in six more patients, beginning early next year.
-
Organic Electronics Mimic Retinal Neurons

Finding ways to interface electronics with our biology could help treat neurodegenerative diseases, improve prosthetics, and even lead to novel computing technologies. A chip made from organic electronics that can imitate the way the retina works brings that goal a step closer, its creators say.
Hooking conventional electronics up to biological systems can be challenging because they are not biocompatible and their rigid structures don’t marry well with soft, living tissue. Organic electronics have emerged as a promising alternative as they can be made from flexible polymers that aren’t toxic and don’t elicit an immune response. Now, researchers have created a new organic electronic device that can react to light, transmit signals using ions, and remember the amount of signal that has previously flowed through it, which mimics the way neurons in the retina operate.
Described in a paper in Nature Communications, the device is currently just a proof of concept. Ottavia Bettucci, an assistant professor of organic chemistry at the University of Milano-Bicocca and a coauthor of the paper, says that in the future the technology could potentially be used to create neural implants designed to treat a variety of conditions such as Parkinson’s disease or Alzheimer’s disease. Or it could be used for retinal implants to treat some forms of blindness.
“In all the pathologies in which the neural communication is not working in a proper way, our material and our device could help in restoring this interaction,” she says.
The materials used to build the device were chosen with the goal of being implanted in the body.
The researchers describe the device they’ve built as an organic photoelectrochemical transistor. It consists of two thin films of conductive polymer deposited onto a glass substrate, which are connected by an electrolyte. One of the two polymer films acts as a channel between source and drain electrodes, while the other acts as a gate.
Applying a voltage to the gate causes ions to migrate from the electrolyte into the channel, changing its conductivity. But the researchers also modified the polymer at the gate by integrating photosensitive molecules, which means the gate can also be controlled by light. Crucially, the materials used to build the device were chosen with the goal of being implanted in the body.
“The gate terminal is just a complete organic biocompatible material, which potentially can make these be interfaced with biological cells,” says Francesca Santoro, a professor of neuroelectronic interfaces at RWTH Aachen University, in Germany, and a coauthor of the new study. “The fact that it’s made of polymer materials also makes it very good for being integrated into flexible substrates.”
The device shares key characteristics with biological neurons, says Santoro. In addition to using ions to modulate the electrical behavior of the device, the response to light is intensity dependent, much like biological photoreceptors. And the device was able to mimic the way connections between neurons strengthen or weaken over time based on the amount of activity they see—a key ingredient in learning.
The researchers showed that if the gate was subjected to two light pulses less than 5 seconds apart, a higher current was seen after the second pulse. But if they applied 500 light pulses in a row, 1 second apart, the conductivity of the channel fell with each pulse. Once the light signal was removed, the conductivity gradually recovered, but it still hadn’t returned to normal after more than half an hour.
This is similar to how things work in biology, says Santoro, where two complementary processes known as short-term plasticity and long-term plasticity operate over different timescales to modulate the strength of neural connections. “This is important because a number of patterns that we have in our visual recognition system are also the result of a long-term interaction, not just short-term,” she says.
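The reported pulse-train behavior lends itself to a compact toy model. The sketch below is purely illustrative: its time constants and depletion rate are assumptions chosen to reproduce the qualitative pattern described above (facilitation for closely spaced pulse pairs, gradual depression over a long train), not fitted device parameters.

```python
import math

def simulate(pulse_times_s, tau_fast=5.0, tau_slow=2000.0, depletion=0.01):
    """Toy plasticity model: each pulse adds a fast-decaying conductance boost
    scaled by a slowly depleting (and slowly recovering) resource."""
    conductance = 0.0  # short-term component, decays between pulses
    reserve = 1.0      # long-term resource, depleted a little by each pulse
    last_t = pulse_times_s[0]
    responses = []
    for t in pulse_times_s:
        dt = t - last_t
        conductance *= math.exp(-dt / tau_fast)                     # fast decay
        reserve = 1.0 - (1.0 - reserve) * math.exp(-dt / tau_slow)  # slow recovery
        conductance += reserve                                      # pulse response
        reserve -= depletion * reserve                              # depletion
        responses.append(conductance)
        last_t = t
    return responses

# Two pulses 3 s apart: the second response is larger (facilitation)
pair = simulate([0.0, 3.0])
print(f"pair: {pair[0]:.2f} -> {pair[1]:.2f}")

# 500 pulses 1 s apart: responses climb briefly, then sink far below their
# peak as the reserve depletes (depression)
train = simulate([float(i) for i in range(500)])
print(f"train peak: {max(train):.2f}, final: {train[-1]:.2f}")
```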
While going from a proof of concept to a practical device will be a long road, Bettucci says the team made sure to use cheap, common materials and also significantly simplified the process of synthesizing the light-sensitive polymer. The main barrier to creating devices that can be implanted in the body will be switching from a glass substrate to something that is flexible and biocompatible, says Bettucci.
The current device also operates much slower than biological neurons, says Santoro, and it is only able to react to biological signals, not transmit them. That’s why her group is already working on a follow-up device that can release chemical signals to create a feedback loop with biological systems.
As well as holding promise for developing fully organic neural implants, Santoro thinks the technology could also be used to interface conventional electronics with biological systems. She has several projects looking at how these artificial neurons could be used to drive prosthetics, and the team is also working with an ophthalmologist who has experience with retinal implants.
“He can really give us good direction on where we can make improvements to the platform, so it could potentially be integrated into existing or new classes of implantable retinal devices,” she says.
-
Will Their Bionic Eyes Be Obsolete and Unsupported Too?

Pixium Vision SA, a Paris-based company that was developing high-tech retinal implants to treat certain visual conditions, is headed into receivership while it hunts for a buyer.
Pixium’s PRIMA system, currently implanted in 47 users in Europe and the United States, comprises a tiny implant in the retina, connected to a pair of video glasses and an external computing unit. The system is currently undergoing human clinical trials for the treatment of dry age-related macular degeneration (dry AMD).
If that sounds familiar, it’s a similar set-up to the one offered by Second Sight, a U.S. retinal implant company that, in facing looming bankruptcy proceedings, suddenly left its community of users in the lurch in 2020. The fallout from the company’s abandonment of its implantees was detailed in an award-winning investigation by IEEE Spectrum last year.
“You want to...think critically about what good innovation is if people can only use it for a small period of time.”
—Michele Friedner, University of Chicago
With two leading retinal implant companies now having run out of money, does that signal wider problems with the technology? And what does it mean for people using—or hoping to use—their systems?
Focus on the tech
There are important differences between Pixium’s and Second Sight’s systems. Despite being much smaller than Second Sight’s Argus II device, Pixium’s PRIMA has over six times as many electrodes, meaning it can produce a more detailed artificial image. And where the Argus II system sent visual data from the glasses to the implant via radio waves, PRIMA literally beams information from the glasses into the eye using near-infrared radiation. The same infrared signal also powers the tiny implant.
Dry AMD affects hundreds of thousands more people in the United States, and millions more worldwide, than the retinitis pigmentosa targeted by the Argus II, giving it a larger potential market.
Daniel Palanker, a professor of ophthalmology at Stanford University who licensed his technology to Pixium, remains bullish on its potential. “Many startups fail for many reasons. In some the technology doesn’t work, but in our case, the technology actually did work,” he told Spectrum. “It’s just the climate of fundraising has now turned very difficult.”
A feasibility study with a handful of patients in France indicated that the PRIMA system could be safely implanted, and that it restored some light perception in parts of the retina where users had lost all vision, according to the company. The results of a similar feasibility study in the United States, and of a much larger pivotal trial in Europe, are due early next year, Pixium says. The company then hopes to move forward with a pivotal trial in the United States and to begin selling the PRIMA in Europe as soon as 2025, followed by the United States a year or two later.
But that all depends on what happens with Pixium now. The company is currently in a legal state similar to U.S. Chapter 11 bankruptcy, heading toward a state similar to U.S. Chapter 7. That would begin a liquidation process, leaving Pixium to settle its debts as best it could, while its main assets—its staff, intellectual property, and the clinical trials—would hopefully be acquired by a buyer.
“The most important thing is that no patients will be hurt by this,” Lloyd Diamond, Pixium’s CEO, told Spectrum. “All patients have been implanted and the postsurgical period is over. There’s always a potential that there’s a latent problem later on. But these patients will continue to be followed, and they will be taken care of by the public health systems in the countries in which they reside.”
Diamond points out that all 47 PRIMA users were implanted under clinical trial conditions, rather than as paying customers, as with Second Sight, and thus they enjoy more protection and support from the hospitals that operated on them.
However, he says that if Pixium cannot find a buyer in the coming weeks, the users’ long-term prospects would be uncertain. “That’s where it becomes tricky,” says Diamond. “If there’s a problem with the device, there of course wouldn’t be individuals that could service it.”
An engineer at French startup Pixium Vision holds a tiny PRIMA retinal chip. Eric Piermont/AFP/Getty Images
That worries Michele Friedner, a medical anthropologist at the University of Chicago. “There’s all this focus on innovation and not on maintenance, the afterlife of that innovation,” she says. “You want to encourage innovation, but you also want to think critically about what good innovation is if people can only use it for a small period of time. When somebody uses it and then has to stop, it can be devastating on so many levels.”
This was the experience for many Second Sight patients, as Spectrum reported in 2022, some of whom resorted to home repairs or sourcing Argus II components from other users.
Retinal frenemies
The connection between Second Sight and Pixium is more than incidental, too. After Second Sight stopped making or supporting the Argus II, the company planned a merger with Pixium in 2021 that was intended “to create a global leader with the potential to treat nearly all forms of blindness,” according to a press release at the time.
That merger fell through, leading Pixium to sue Second Sight for pulling out of the deal. A French court awarded Pixium over 2.5 million euros (US $2.6 million) in December, although the U.S. company is appealing the judgment.
“We don’t in any way take any pleasure in Pixium’s troubles,” says Jonathan Adams, CEO of Cortigent, the company that inherited Second Sight’s assets. “I hope Pixium is going to find a white knight.”
Second Sight’s own white knight, Cortigent’s parent company, Vivani, has been able to help 24 stranded Argus II users with replacement glasses, computing units, and batteries from its dwindling stock, the company says. But rather than resurrecting Second Sight’s retinal technology, Cortigent is now pursuing a brain implant to help people with blindness. “We believe cortical stimulation could address a much larger number of people and turn the business model of retinal stimulation, which was not able to succeed, into something that could be very commercially attractive,” says Adams.
Pixium’s endgame
If Pixium Vision is unable to find a single buyer for its assets, there is a chance that Cortigent might bid for some of its intellectual property, says Adams. “But the application of it to the brain does provide some difference, and as we look into whether or not any of the Pixium IP could be of value to us, that difference may be important,” he says.
Diamond remains hopeful that Pixium will find a buyer prepared to invest the 60 million euros ($64 million) he estimates it will take to bring PRIMA to the market. But others see the future of vision implants firmly in the brain. Elon Musk’s brain implant company Neuralink, which has proposed using its technology to restore vision, raised $280 million in August.
“By going directly to the vision cortex, it’s a much easier surgery than putting an electrode array in the retina,” says Adams. “And then what’s exciting is that you can take that array and move it to the motor cortex for recovery from stroke paralysis. We see much bigger market opportunities in the brain than trying to fine-tune particular applications for small groups of patients.”
As of press time, the deadline for potential purchasers to submit offers for Pixium’s assets was 20 November.
-
This Pulsing DNA Nanomotor Can Power Molecular Machines

Taking advantage of DNA’s ability to store information, researchers have over the past decade been working on data storage systems and digital logic circuits built on the biological molecule. DNA’s complex physical structure has also led to the exciting technique of DNA origami, with which researchers have made tiny nanoscale machines.
In the latest example of clever DNA origami, an international team of researchers has now developed a nanoengine that can perform pulsing movements. The pulsing nanomotor should be easy to couple to other tiny molecular machines, driving them to construct drugs and electronics one building block at a time. “We have designed the nanoengine as a broadly applicable module that can be coupled to all possible origami structures to set them in motion,” says Michael Famulok, a professor of chemical biology at the University of Bonn.
The tiny nanomotors look like the spring handgrips used for forearm-strengthening exercises. Each has two rigid arms connected by a spring in a V-shaped structure that rhythmically opens and closes as long as the surrounding solution contains a chemical fuel.
DNA origami involves carefully designing DNA sequences that can fold, twist, and staple together at the right places to assemble into preconceived structures. Using DNA origami tricks, scientists have made many different types of DNA nanomotors and machines. Many of these nanoassemblies are driven by Brownian motion: the constant, random movement of molecules.
Famulok and his colleagues have been working on building DNA nanosystems that are driven by chemical energy. Specifically, the fuel they use is RNA polymerase. This is the natural enzyme that transcribes DNA molecules into single-stranded RNA molecules, which are then used to synthesize proteins. In 2018, the researchers reported a nanomachine made up of a DNA ring, driven by RNA polymerase, that could be set into motion like a wheel.
In the new work, two stiff DNA origami arms are connected by a short DNA sequence that serves as a release spring. The whole structure is 70 by 70 by 12 nanometers in size. The researchers attached RNA polymerase to one of the arms, near the DNA spring strand. The polymerase grabs the DNA strand and starts transcribing its A, C, G, and T building blocks letter by letter, in the process pulling the strand and bringing the arms closer together.
When it reaches the end of the DNA strand, a special termination sequence instructs the polymerase to let go of the strand. “The energy stored in the spring causes it to snap open,” Famulok says, and the arms return to their original positions. “Then the whole process starts over again. Once you have set it in motion, as long as [RNA polymerase] is present, it will continue to close and open.”
The biological molecules are robust, and there was no indication that the DNA spring would wear out in the laboratory experiments, Famulok says.
As a demonstration, the researchers attached the nanoengine to another V-shaped DNA origami arm that did not have a spring. As the nanoengine closed and snapped open, it drove the second “follower” unit to shut and release along with it.
Famulok’s vision is to use the pulsing nanomotor to drive several DNA origami machines “in a sort of molecular assembly line” that could build complex chemicals such as artificial proteins, or drive chemical reactions that are otherwise difficult to achieve. To do that, the researchers are now trying to devise a clutch-type mechanism that would detach itself from one follower unit and couple to another one in response to a chemical or other trigger signal.
-
Magnetic Gel Can Help Wounds Heal Faster

Researchers have developed a magnetic gel that can help diabetic wounds heal roughly three times as fast as they would with a conventional wound dressing. The magnetic gel, in combination with an external device that generates a dynamic magnetic field, gives the skin cells around the wound a gentle workout.
In a new study, researchers at the National University of Singapore focused on how to encourage chronic wounds, especially wounds in people with diabetes, to heal more quickly. Diabetes often compromises natural healing, so people with diabetes frequently develop chronic wounds that are slow to close.
For example, diabetic foot ulcers afflict 15 to 25 percent of the roughly 30 million people living with diabetes in the United States. These and other chronic wounds linked to diabetes are the second leading cause of lower-limb amputations worldwide, after amputations necessitated by accidents or injuries. Currently, more than 540 million people globally are living with diabetes, according to the World Health Organization, and this number is expected to rise significantly over time.
“Approximately 10 to 15 percent of diabetic foot ulcers fail to heal,” says study senior author Andy Tay, a biomedical engineer at the National University of Singapore. “Shockingly, about a quarter of these nonhealing cases eventually necessitate amputation.”
Conventional dressings for wounds “fail to fundamentally accelerate the healing of wounds. They mainly serve to prevent wounds from deteriorating, requiring patients to follow a schedule for dressing changes every two to three days over an extended period,” Tay explains. “The fundamental reason behind their limited effectiveness lies in their inability to simultaneously address multiple critical factors associated with diabetic wounds, as well as their incapacity to modify the local microenvironment.”
Andy Tay [center] holds an adhesive bandage loaded with magnetic gel, while Shou Yufeng [right] holds a device that provides magnetic stimulation. Le Zhicheng [left] is holding a sample of the magnetic gel in liquid form. National University of Singapore
In the new study, Tay and his colleagues investigated strategies to more actively treat chronic wounds, rather than wait for the body to take its time. They focused on the way in which the mechanical forces that a person’s skin normally experiences during daily activity can help spur healing. Those mechanical forces can trigger cell proliferation and migration, the secretion of growth factors, and the formation of new blood vessels. However, patients with wounds are usually advised not to walk around or undertake more rigorous activities.
The scientists found another way to create a mechanical force by experimenting with a gel that had been loaded with tiny magnetic particles. When exposed wirelessly to a dynamic magnetic field, the gel exerts a gentle, rhythmic mechanical force.
Aside from the magnetic particles, the researchers loaded the magnetic gel with two kinds of U.S. Food and Drug Administration-approved skin cells—keratinocytes, which are essential for skin repair, and fibroblasts, which help form connective tissue. The mechanical stimulation from the gel can help encourage the skin cells to become more active. The scientists also loaded this gel with insulin, which can influence blood-sugar levels in wounds to help guide the rate of healing.
“One of the most compelling features of this platform is its holistic ‘all-in-one’ approach to wound healing, targeting multiple aspects of the healing process simultaneously,” Tay says.
The scientists placed this gel in a bandage to help treat wounds in diabetic mice. They found the bandages helped wounds heal roughly three times as fast as current approaches, after applying magnetic stimulation for about 1 or 2 hours.
The novel dressings also promoted healing in other ways, the researchers say. For example, the bandages improved fibroblast growth rate by about 240 percent, and more than doubled the production of collagen, a key protein for wound healing. In addition, the researchers found that magnetically directed mechanical stimulation of the gel could help control the rate at which insulin was delivered into wounds.
“Our technology takes a comprehensive approach, effectively targeting several critical factors linked to diabetic wounds,” Tay says. “It not only manages elevated glucose levels within the wound area, but also reactivates dormant skin cells near the wound, restores damaged blood vessels, and repairs the disrupted vascular network within the wound.”
The researchers noted that this new technology may have applications beyond healing diabetic foot ulcers. For instance, it may help treat complex wounds such as burns, Tay says.
Currently, the researchers are conducting additional tests to refine the gel to enhance its effectiveness, Tay says. “In addition, we are collaborating with a clinical partner to assess the gel’s effectiveness using human tissues from diabetic patients,” he adds. “Together, we are working on addressing key challenges related to regulation, industrialization, and clinical implementation.”
The scientists detailed their findings online on 8 September in the journal Advanced Materials. They have also filed a patent for the technology.
-
Smart Glasses Make Human Echolocation Possible

Blind and low-vision (BLV) people often use sound to navigate, such as via echolocation or within various assistive technologies. Scientists at the University of Technology Sydney (UTS) and Aria Research, a company that makes bionic devices, decided to blend the two to develop a technique called “acoustic touch.” When used in combination with smart glasses, acoustic touch converts objects in front of the user into auditory icons.
Acoustic touch uses head movement—head position also being key in echolocation—to dictate what sound icons play to support the exploration of a surrounding environment. Howe Yuan Zhu, one of the paper’s authors, describes acoustic touch as a user interface based on sensory-motor coupling. In this case, sensory feedback is generated by the wearer’s head movement.
Imagine that there is a virtual cone extending out in front of one’s head, he says. Any object in the region of this cone will be represented with a unique audio signature. “The wearable glasses might still see more objects, but it will only relay to the user the objects within this narrower field of view.” Their paper was published in PLOS One last month. The research team also included two blind researchers.
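In code, the cone test Zhu describes amounts to an angular threshold relative to the head’s facing direction. Here is a minimal Python sketch; the 20-degree half-angle, coordinates, and object format are invented for illustration and are not taken from the paper or from Aria’s software.

```python
import numpy as np

CONE_HALF_ANGLE_DEG = 20.0  # hypothetical half-angle of the auditory cone

def objects_in_cone(head_pos, head_dir, objects):
    """Return the objects inside the virtual cone extending from the head.

    head_dir is a unit vector for where the wearer is facing; each object is
    a (name, xyz position) pair. Only these objects get their audio icon played.
    """
    cos_limit = np.cos(np.radians(CONE_HALF_ANGLE_DEG))
    audible = []
    for name, pos in objects:
        to_obj = np.asarray(pos, dtype=float) - head_pos
        dist = np.linalg.norm(to_obj)
        if dist > 0 and np.dot(to_obj / dist, head_dir) >= cos_limit:
            audible.append(name)
    return audible

head = np.array([0.0, 0.0, 1.6])    # wearer's head, 1.6 m above the floor
facing = np.array([1.0, 0.0, 0.0])  # looking along +x
scene = [("cup", (1.0, 0.1, 1.5)), ("book", (0.5, 1.0, 1.2))]
print(objects_in_cone(head, facing, scene))  # ['cup']; 'book' is outside the cone
```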
The researchers tested the interface with 14 participants—7 BLV individuals and 7 blindfolded sighted people—wearing acoustic touch-enabled smart glasses. Participants had to identify objects on a table in front of them. The team found that the BLV participants performed well at recognizing and reaching for objects, without added mental exertion.
“We were focused on understanding, is this learnable?”
—Howe Yuan Zhu, University of Technology Sydney
Wearables traditionally relay information using computer vision along with computer-generated speech. With acoustic touch, however, even though the glasses can detect everything before the wearer, the “head-scanning” movement creates a “middle layer.” This allows users to decide what they want to explore first.
“One of the key questions we weren’t sure about is how intuitive head movement was,” Zhu says. “We know in vision, it plays a key role in how we observe a room…but [with] audio, we know it plays a role, but not how much.” Their observations suggested that the head-scanning movement wasn’t something that required a significant amount of added effort. “Even though it was a bit more physical effort, the participants were still able to pick it up, and still found it somewhat intuitive,” Zhu says.
For their research, they also built a benchmark platform that used computer vision and object-recognition algorithms to recognize 120-odd objects. However, in the tests, they used only four objects—to understand whether users preferred using the interface to play icons for all the objects, or if it was better for them to use head movement to selectively explore. “We were focused on understanding, is this learnable? Can someone build mental associations between a similar sound to the object? And then we compared that against speech,” Zhu says. “We don’t want to necessarily dismiss speech-based interfaces [either].” For example, for identifying a specific person, or to describe more complex objects, it might be simpler to use speech.
He also clarifies that while their research is about the technique of acoustic touch rather than building smart glasses (which is something Aria Research does), they did make some technical observations that could be relevant in the future. For example, they noticed that the speed of head rotation could be quite important. “If [the user] sweeps too fast, they’re more likely to miss objects,” Zhu says, “because the computer vision is just not fast enough.”
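That constraint is easy to put rough numbers on. The back-of-envelope sketch below uses entirely hypothetical values for the cone width and the recognition frame rate; it simply illustrates why fast head sweeps can outrun the detector.

```python
# Hypothetical values, for illustration only
cone_width_deg = 40.0  # assumed full width of the auditory cone
detector_fps = 10.0    # assumed object-recognition frame rate

# An object must stay inside the cone for at least one detector frame, so
# the miss-free sweep speed is bounded by cone width times frame rate.
max_sweep = cone_width_deg * detector_fps
print(f"Miss-free head sweep limit: ~{max_sweep:.0f} degrees per second")
```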
One of the main limitations of the current paper is that the study was heavily controlled, taking place in a closed, indoor environment. Those conditions don’t hold in the real world, Zhu says. Another technical challenge was accurate object recognition. “But that’s something that’s continually improving,” he says. “And even between the time of the study and now…object recognition is a lot better.”
Since the publication of the paper, he says, they have been continuing their research using more real-world environments, wider contexts, and more objects. This includes navigating a maze using sound icons to dictate a path or the way around an obstacle. Zhu notes that using computer speech is more likely to slow people down, as the person would need to stop to process it. “Whereas,” he adds, “if we played just the audio icon that pings for a lane, the person actually could just follow along, using it like a cue that you just walk along with.”
-
This Durable Strand of Jelly Can Block Pain

Is there a better way for scientists to shine a light on nerve cells throughout the body? When researchers Xinyue Liu and Siyuan Rao first began their collaboration at MIT, they treated this question literally.
Optogenetics is an interdisciplinary branch of science in which cells are genetically altered to be light-sensitive, making it possible to inhibit or excite cells and to study their function by applying colored light. Typically, the light-transmitting wires that reach target cells are made of materials that work well when stationary in the brain. However, if they’re implanted elsewhere in a test animal’s body, they could break, damage tissue, or affect behavior, making it difficult to study the peripheral nervous system and pain in particular.
“This flexible fiber expands the toolbox of approaches we have.” —Rob Bonin, University of Toronto
Now, Liu, Rao, and colleagues have developed a soft, flexible, durable optical fiber capable of delivering an optogenetic light signal away from the brain or spine using a new material: hydrogel. The filament consists of an inner core and an outer cladding of two versions of the hydrogel with different refractive properties, yet the fiber is only about one millimeter in diameter.
Researchers described the optical fiber and a variety of ways it was put through its paces in model mice in a paper in Nature Methods, published 19 October. The work adds another technique—and a bit of flexibility—to the repertoire of optogenetics, the study of the peripheral nervous system, and possibly future translational medicine, including the treatment of pain, chronic pain, and nerve disorders.
“This flexible fiber expands the toolbox of approaches we have for peripheral optogenetic work,” said Rob Bonin, a pain researcher at the University of Toronto, who was not involved in the research, citing flexibility and durability as two major advantages of the new approach.
Broadly, hydrogels are soft networks of polymers and water, such as tofu or jelly. “Our body is also made of hydrogels. Except for bones and teeth, our muscles and other organs are all actually hydrogels,” said Liu, a materials scientist now at Michigan State University. The fiber uses a polyvinyl alcohol hydrogel, selected for its combination of optical properties and durability under repeated mechanical stress.
The investigation of soft materials was initiated with optogenetic pain research in the peripheral nervous system specifically in mind. “If your implant itself is causing pain, how are you going to use this technology to study pain?” said Rao, a neuroscientist now at the University of Massachusetts Amherst.
And although at the moment the hydrogel fiber primarily figures as a research tool in mice, the same qualities that set this new technology apart for basic science—it’s durable and apparently comfortable in a freely moving body with no compromises in optical performance—are also positives for potential therapeutic purposes. “We are working towards that direction,” said Rao.
The technology promises a wide range of potential applications beyond just the brain and spine.
Researchers anchored one end of their fiber to the mice’s skulls, threaded it underneath the skin, and wrapped a cuff at the other end around the sciatic nerve in the leg. From a practical standpoint, this made the implant compatible with existing external light sources and kept mice from scratching at any element of the device. It also demonstrated that the implant permitted the subject a full range of motion. At mouse scale, the fiber needed to be only 6 centimeters long, but the authors said it could be extended for other uses.
A series of tests showed that the fiber transmitted light and also how it performed in the mice, blocking pain caused by a hot plate on the foot and inducing movement in the leg. Critically, the fiber performed well after several weeks of voluntary exercise-wheel use, which researchers estimated added up to thousands of bends and twists.
Other optogenetic studies of the peripheral nervous system in mice have attempted various methods of light delivery that don’t use an optical fiber at all, instead shining light through the skin or implanting miniaturized remote devices. In comparison, the new hydrogel fiber should be able to more precisely target specific cells, said Rao.
For Bonin, the external light source has both pros and cons: higher-intensity light on the one hand, and on the other the possibility that a tether could affect behavior.
Federico Iseppon, a pain researcher at University College London, who was not involved in the study, said that although the fiber may be relatively easy to use, it will still require specialized knowledge to fabricate and surgically implant. It promises a wide range of potential applications beyond just the brain and spine. “Its plasticity lies in the multiple different tissues that could be targeted with this technology,” he said.
Liu is currently working on an interface, such as a patch, between the hydrogel and organs that would enable connections that the current cuff design doesn’t allow. Ideally, the fiber will eventually also let scientists record activity as well as send signals to cells.
-
Virtual Superlensing Made a Reality

A little bit of distance can make all the difference. That’s what two researchers at the University of Sydney Nano Institute recently concluded in their study on a new method for “superlensing,” a way of seeing things that are smaller than the wavelength of light, and doing so without any actual lenses.
The researchers found a new way to crash through the diffraction limit, which prevents you from discerning any feature smaller than the wavelength of the light that reflects off it. Features smaller than that are encoded only in what are called evanescent waves, which have amplitudes that die off exponentially and are almost completely lost within a few wavelengths.
Previously, evanescent waves have been detected by placing a photoconductive probe right next to the object being imaged, which can distort the results. It had been assumed that when the probe is too far from the object—even a tenth of the wavelength farther back—the subwavelength information is completely lost. “We realized that it’s not lost. It’s just really, really dim,” says author Boris Kuhlmey.
To regain that high-resolution information, researchers need to amplify the dim signal with a superlens, made of specially engineered metamaterials. But instead of a physical lens, Kuhlmey and coauthor Alessandro Tuniz used numerical calculations to achieve the same result. They measured tiny fluctuations in the electromagnetic field caused by the evanescent waves and virtually amplified these by applying equations that describe how the waves die out. Then, they could reconstruct the original field during postprocessing, achieving a resolution one-fourth the size of the diffraction limit.
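Conceptually, the trick is to undo a known exponential decay, component by component, in the spatial-frequency domain. Here is a one-dimensional, noise-free toy in Python (the real experiment used measured terahertz fields, and noise is what ultimately limits how far the virtual amplification can be pushed; the phase evolution of the propagating components is also neglected here for brevity):

```python
import numpy as np

wavelength = 1.0             # arbitrary units (~1 mm in the terahertz work)
k0 = 2 * np.pi / wavelength  # free-space wavenumber
z = 0.2 * wavelength         # distance between sample and measurement plane

x = np.linspace(-4, 4, 512)
kx = 2 * np.pi * np.fft.fftfreq(x.size, d=x[1] - x[0])  # spatial frequencies

# Hypothetical sample field with subwavelength (0.15-wavelength) structure
field_at_sample = np.exp(-(x / 0.15) ** 2)

# Evanescent components (|kx| > k0) decay as exp(-kappa * z) en route to the probe
kappa = np.sqrt(np.maximum(kx**2 - k0**2, 0.0))
spectrum_at_probe = np.fft.fft(field_at_sample) * np.exp(-kappa * z)

# "Virtual superlensing": multiply by exp(+kappa * z) to undo the known decay
recovered = np.fft.ifft(spectrum_at_probe * np.exp(kappa * z)).real

print(np.max(np.abs(recovered - field_at_sample)))  # ~0: structure recovered
```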
What matters, says Kuhlmey, is not the specific technology they used but the ability to pick up high-resolution information farther from the sample than had been thought possible, even without a superlens.
A researcher demonstrates how a photoconductive antenna scans the sample, which has features as small as 0.15 millimeters. The University of Sydney
That’s important because they were working with wavelengths of around 1 millimeter, which is in the terahertz frequency range. And although physical superlensing has been demonstrated at other frequencies, it had not been achieved in this range.
The terahertz region of the spectrum is a relatively new area of research, in part because the frequencies are too high to be used with electronics and too low for photonics, says Kuhlmey. But such frequencies could be important in biological research, owing to water’s strong absorption of terahertz frequencies, and for peering through the ceramic coatings used on semiconductor chips, among other things. “In every part of the spectrum, you find new physics,” says Kuhlmey.
Tuniz and Kuhlmey note that their method is not the first to break through the diffraction limit, nor does it represent the highest resolution achieved. Other techniques, such as those using a very close probe, offer better resolution. But that resolution comes at a cost. These methods are slow and can be used only to scan small areas. “We’re adding to the library of available high-resolution techniques that someone could consider,” says Tuniz.
To achieve high-resolution terahertz imaging without superlensing, Tuniz and Kuhlmey would have to place an expensive probe less than a millimeter away from the sample being observed. Tuniz was always afraid of scraping the instrument against that sample and damaging either object. To make matters worse, the probe can distort the very field it’s there to measure. Alternatively, using a physical lens to amplify the evanescent waves blocks out some of the light. By performing superlensing virtually, researchers eliminate that loss. “It’s niche, in a sense, but laboratories all over the world have equipment such as this to understand really complicated things,” says Tuniz.
“There will be really beautiful applications,” adds Kuhlmey, though he acknowledges that it’s unlikely to revolutionize microscopy as a whole.
In fact, the postprocessing central to the Australian approach is similar to techniques routinely used in other areas of microscopy, according to Durdu Guney, who studies superlensing at Michigan Technological University. Although the application to terahertz imaging is new, Guney says, “conceptually, I think the idea is not very novel.” His research has used similar techniques at higher, optical frequencies, for which superlensing is more advanced. Guney also questions whether the approach will be effective for more complicated objects, some of whose features may be overwhelmed by noise.
After the research was published, Tuniz and Kuhlmey found that it had been posted on social media, where commenters made joking allusions to a trope in police-procedural TV shows in which characters “enhance” blurry CCTV footage to reveal a key detail. Tuniz acknowledges that, although it relies on real physics, the end result is quite similar.
“It is turning the absurdity into a reality.”
-
Nanoparticles Show Quick and Easy Way to Target Cancer

This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.
Researchers have been exploring the use of injectable nanoparticles that can quickly home in on a microscopic tumor. It’s a novel technique that could pave the way for the early detection of small tumors that may not show up on traditional imaging technologies. In a study published in the October issue of the IEEE Internet of Things Journal, one team has found a way to guide cancer-detecting nanoparticles to a tumor faster, while using fewer resources.
An estimated 13 million people worldwide may die from cancer in 2023, according to the World Health Organization. One key way to reduce the mortality rate is through early detection of the disease, yet existing medical imaging techniques offer limited resolution when it comes to detecting microscopic tumors less than 0.5 millimeters in diameter.
“The rise of nanotechnologies provides a strong hope to solve this problem, where the small sizes of nanoparticles enable them to leak out of blood vessels and accumulate within tumors,” explains Yifan Chen, a professor at the University of Electronic Science and Technology of China, in Chengdu.
However, it is difficult to develop “nanoswimmers” that can efficiently disperse throughout a patient’s body while still accumulating in sufficient numbers at the cancer site. Past studies show that only 0.7 percent of injected nanoparticles reach their target.
There are two solutions to help nanoswimmers better target tumors. The first is to guide them to a suspected cancer site using a magnetic field applied outside the patient’s body. This approach helps the particles move relatively quickly through the body—but requires a lot of oversight, as the nanoswimmers must be continuously monitored and guided throughout the process.
Another option is to develop self-propelled nanoswimmers, which autonomously move inside the human body and have a chemical tendency to accumulate in tumors. For example, nanoswimmers designed to seek out acidic environments will gravitate toward tumors, which tend to be more acidic than healthy tissues. But autonomous nanoswimmers tend to move much more slowly than magnetically guided ones.
Chen’s solution is to combine the advantages of each approach for a more efficient way to target tumors. His team is proposing a fleet of semi-autonomous nanoswimmers that begin by gravitating toward the cancer site on their own. In their theoretical scenario, the speed and aggregation pattern of the entire swarm are occasionally measured to reveal a general pattern of where the nanoswimmers are converging. Using this information, the semi-autonomous swarm can then be magnetically guided more quickly in the optimal direction: toward the tumor.
In their study, the researchers use simulations to show that this “spot sampling technique” provides sufficiently precise data to steer the semi-autonomous swarm toward the target using 90 percent fewer monitoring resources.
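A toy simulation conveys the control loop under simple assumptions: the swimmers drift weakly toward the target on their own, and the guidance field is re-aimed only at sparse sampling instants, using the swarm’s own aggregation trend. Everything below, from the dynamics to the parameters, is an illustrative sketch rather than the study’s actual model.

```python
import numpy as np

# Toy 2D sketch of "spot sampling": measure the swarm only occasionally, infer
# where it is aggregating from its drift, and aim the magnetic field there.
rng = np.random.default_rng(0)
tumor = np.array([5.0, 3.0])                  # target, unknown to the controller
swarm = rng.normal(0.0, 1.0, size=(500, 2))   # injected nanoswimmer positions
field = np.zeros(2)                           # magnetic guidance, per-step push
prev_center = swarm.mean(axis=0)
k_chemo, interval = 5e-4, 300                 # weak autonomous drift; sample spacing

for step in range(1, 3001):
    chemo = k_chemo * (tumor - swarm)         # autonomous pull toward acidity
    swarm += chemo + rng.normal(scale=0.02, size=swarm.shape) + field

    if step % interval == 0:                  # one sparse "spot sample"
        center = swarm.mean(axis=0)
        drift = center - prev_center - interval * field  # remove our own input
        # Under these dynamics, drift is roughly interval * k_chemo * (target -
        # center), so the controller can estimate the target and steer toward it.
        target_est = center + drift / (interval * k_chemo)
        field = np.clip(0.002 * (target_est - center), -0.01, 0.01)
        prev_center = center

print("swarm center:", swarm.mean(axis=0), "target:", tumor)
```

The point of the sketch is the sampling economy: the controller looks at the swarm only once per interval, mirroring the study’s claim of steering with far fewer monitoring resources.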
“Our studies have demonstrated that a hundredfold increase in the targeting efficiency can be achieved by using [our semi-autonomous approach] compared to techniques where nonautonomous nanoparticles are injected into the body with zero guidance,” says Chen.
While magnetically guided nanoswimmers already exist, Chen’s group is working on developing the semi-autonomous fleet. “We foresee that commercialization of the technology will happen in the next three to five years after completion of proof-of-concept animal experiments,” Chen says, noting his team has filed several patents. “We are also in the process of communicating with several med-tech companies in China for commercialization plans.”
-
Particle Accelerator on a Chip Hits Penny-Size

Particle accelerators range in size from a room to a city, but now scientists are taking a close look at chip-size electron accelerators, a new study finds. Potential near-term applications for the technology include radiation therapy for zapping skin cancer and, longer term, new kinds of laser and light sources.
Particle accelerators generally propel particles within metal tubes or rings. The rate at which they can accelerate particles is limited by the peak fields the metallic surfaces can withstand. Conventional accelerators range in size from a few meters for medical applications to kilometers for fundamental research. The fields they use are often on the scale of millions of volts per meter.
“The idea for this is almost as old as the laser. Sometimes, research just takes time.”
—Peter Hommelhoff, University of Erlangen-Nuremberg, Germany
In contrast, electrically insulating dielectric materials (stuff that doesn’t conduct electricity well but does support electrostatic fields well) can withstand light fields thousands of times stronger. This has led scientists to investigate creating dielectric accelerators that rely on lasers to hurl particles. Such devices could lead to “accelerators on a chip” many times smaller and cheaper than conventional machines.
“The idea for this is almost as old as the laser,” says study senior author Peter Hommelhoff, a physicist at the University of Erlangen-Nuremberg in Germany. “Sometimes, research just takes time, and we see that realizing something is much more demanding than just throwing the idea out.”
In the new study, physicists fabricated a tiny channel 225 nanometers wide and up to 0.5 millimeters long. An electron beam entered one end of the channel and exited the other end.
The researchers shone infrared laser pulses 250 femtoseconds long on top of the channel to help accelerate electrons down it. Inside the channel, two rows of up to 733 silicon pillars, each 2 micrometers high, interacted with these laser pulses to generate accelerating forces.
The electrons entered the accelerator with an energy of 28,400 electron-volts, traveling at roughly one-third the speed of light. They exited with an energy of 40,700 electron-volts, a 43 percent boost in energy.
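Those numbers are easy to sanity-check with the relativistic kinetic-energy relation, taking the electron’s rest energy to be about 511,000 electron-volts; a minimal check is below.

```python
import math

M_E_C2 = 511_000.0                       # electron rest energy, eV

def beta(kinetic_ev: float) -> float:
    """Speed as a fraction of c, from relativistic kinetic energy."""
    gamma = 1.0 + kinetic_ev / M_E_C2
    return math.sqrt(1.0 - 1.0 / gamma**2)

e_in, e_out = 28_400.0, 40_700.0         # eV, from the study
print(f"energy gain: {(e_out - e_in) / e_in:.1%}")  # ~43.3%
print(f"entry speed: {beta(e_in):.2f} c")           # ~0.32 c, about one-third
print(f"exit speed:  {beta(e_out):.2f} c")          # ~0.38 c
```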
This new type of particle accelerator can be built using standard cleanroom techniques, such as electron beam lithography. “This is why we think that our results represent a big step forward,” Hommelhoff says. “Everyone can go ahead and start engineering useful machines from this.”
Particle accelerators not only propel particles, but also confine them, minimizing particle loss. Previous research found that bunches of charged particles cannot be focused in all three dimensions at once. As such, the scientists designed their accelerator structure to alternate the directions in which they focused bunches of electrons, leading to a net result of confinement in all three dimensions.
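This alternating strategy echoes the strong-focusing principle used in conventional accelerators, where a lens that focuses in one transverse plane defocuses in the other, yet alternating the two yields net confinement. The transfer-matrix toy below, with arbitrary illustrative values, checks the standard stability condition; it is an analogy, not the team’s actual structure design.

```python
import numpy as np

# One period of an alternating focusing-defocusing ("FODO"-like) lattice,
# in the usual 2x2 transfer-matrix picture for one transverse plane.
f, L = 0.8, 0.5                          # focal length, drift length (arbitrary)
focus   = np.array([[1.0, 0.0], [-1.0 / f, 1.0]])
defocus = np.array([[1.0, 0.0], [ 1.0 / f, 1.0]])
drift   = np.array([[1.0, L], [0.0, 1.0]])

cell = drift @ defocus @ drift @ focus   # rightmost element acts first
# A periodic lattice confines the beam when |trace| of the one-period map < 2.
print("trace =", np.trace(cell), "-> stable:", abs(np.trace(cell)) < 2)
```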
Still, the researchers expected this confinement strategy would not work over relatively long distances.
“We had hoped that the 0.1- and 0.2-millimeter-long accelerator structures work, and after we saw that they did, we just went on and tried also the longer ones we made,” Hommelhoff says. “It was fantastic to see that also the 0.3-, 0.4- and even 0.5-millimeter-long structures worked.”
“People telling us that this was never going to work, and if so, this was never going to be relevant or useful, we took as a motivation.”
—Peter Hommelhoff
Hommelhoff credits their success to fabrication quality, removing nearly every piece of dust or any other obstacle that might kill the electron beam.
“Several of our colleagues told us that this was never going to work because the level of fabrication accuracy required might be too high, but fortunately this was not the case,” Hommelhoff says. “Other people telling us that this was never going to work, and if so, this was never going to be relevant or useful, we took as a motivation.”
Applications for these nanophotonic electron accelerators depend on the energies they can reach. Electrons of up to about 300,000 electron-volts are typical for electron microscopy, Hommelhoff says. For treatment of skin cancer, 10 million electron-volt electrons are needed. Whereas such medical applications currently require an accelerator 1 meter wide, as well as additional large, heavy and expensive parts to help drive the accelerator, “we could in principle get rid of both and have just a roughly 1-centimeter chip with a few extra centimeters for the electron source,” adds study lead author Tomáš Chlouba, a physicist at the University of Erlangen-Nuremberg in Germany.
Applications such as synchrotron light sources, free-electron lasers, and searches for lightweight dark matter open up with billion electron-volt electrons. With trillion electron-volt electrons, high-energy colliders become possible, Hommelhoff says.
Hommelhoff notes it is difficult for their structures to accelerate relatively slow electrons, such as the roughly 30,000 electron-volt electrons they experimented with. “The structures become more efficient when the electrons approach the speed of light, so when their energy is approaching 1 million electron-volts,” he says.
The scientists note there are many ways to improve their device beyond their initial proof-of-concept structures. They now aim to experiment with greater acceleration and higher electron currents to help enable applications, as well as boosting output by fabricating many accelerator channels next to each other that can all be driven by the same laser pulses.
In addition, although the new study experimented with structures made from silicon due to the relative ease of working with it, “silicon is not really a high-damage threshold material,” Hommelhoff says. Structures made of glass or other materials may allow much stronger laser pulses and thus more powerful acceleration, he says.
The researchers are interested in building a small-scale accelerator, “maybe with skin cancer treatment applications in mind first,” Hommelhoff says. “This is certainly something that we should soon transfer to a startup company.”
The scientists detailed their findings in the 19 October issue of the journal Nature.
-
The Space-Based Drug Factory That Can’t Come Home

Five hundred kilometers above the Earth, a small spacecraft is waiting patiently for permission to return home. The autonomous return capsule, made by startup Varda Space Industries, of Torrance, Calif., was meant to have landed in the remote Utah desert early in September.
Varda would have been the first commercial space company to return a drug made in space to Earth, in this case a few grams of the HIV and hepatitis C antiviral ritonavir. Instead, the satellite, about the size of a large trash can and code-named Winnebago 1, continues to orbit the planet at nearly 30,000 kilometers per hour.
The FAA may still regulate re-entry operations of US space missions, even in Australia.
The delay has nothing to do with the satellite itself, which appears to be operating perfectly, and everything to do with an ongoing struggle between Varda and U.S. government agencies back on the ground.
According to a Varda public filing with the U.S. Federal Communications Commission (FCC), Winnebago 1 will now re-enter the atmosphere no sooner than January next year—at least a four-month delay in discovering whether Varda’s proof of concept space factory has delivered the goods.
This stand-off highlights the tension between regulators and commercial space companies in the U.S., which are becoming increasingly vocal in their criticisms of agencies responsible for overseeing private space missions.
Years in the planning
Varda’s mission is to design and build the infrastructure needed to make low Earth orbit accessible to industry, beginning with pharmaceuticals that should be easier to make in microgravity conditions. Planning for Winnebago 1 began two and a half years ago, says Delian Asparouhov, Varda’s co-founder and president. It is the first of four planned missions that will use identical satellites, launched into space by rideshare partners such as Rocket Lab or SpaceX.
But while many thousands of private satellites have been launched on such commercial rockets, none have yet made it back to Earth in one piece. Virtually all satellites are designed to burn up completely on re-entry once their useful life is over, to avoid colliding with active satellites in orbit or endangering property and people on Earth. Varda was the first company to apply for a re-entry license for space-made medicines.
Documentation shows possible landing locations for Varda’s space factory, at a military range in Utah. Varda Space Industries
“We are absolutely trailblazers here,” Asparouhov tells Spectrum. “And you can imagine how difficult the coordination has been.” The first step was to select a landing site for the 90-kilogram capsule, which would plunge through the atmosphere at hypersonic speeds before releasing a parachute to slow for landing. The company settled on the Utah Test and Training Range (UTTR), two million acres of desert controlled by the Pentagon, about 130 kilometers west of Salt Lake City.
As well as getting the military’s buy-in, Varda had to work with two offices at the U.S. Federal Aviation Administration (FAA): one that deals with air traffic control, to keep the capsule away from aircraft during re-entry, and another that deals with the safety of the re-entry process itself. The FAA would issue a re-entry license only when it decided that the company had met all the legal and safety requirements.
Flight safety
These include making a Flight Safety Analysis document that imagines all the things that could go wrong during re-entry, and the subsequent risks posed to people in aircraft, on the ground, and even on boats. The Flight Safety Analysis for Winnebago 1 is not publicly available, but Spectrum did obtain the safety analysis for the Winnebago 2 mission, planned for next year with an identical spacecraft and originally also intended to land at UTTR.
Safety analyses show possible impact locations if Varda’s upcoming Winnebago 2 spacecraft were to malfunction on re-entry [left], and where people in aircraft might be affected [right]. Varda Space Industries
The document shows that the most dangerous events are those that might happen early in the re-entry process, if the re-entry rocket accidentally shoots the small capsule in the wrong direction. One map shows a range of possible impact locations stretching from northern Mexico, through California, to near Las Vegas. Other risky scenarios include the capsule breaking up during the intense heat of re-entry, or its parachute opening too early.
But the casualty expectation from all the mishaps combined remains extremely small. For Winnebago 1, the risk of a human casualty was calculated to be 1 in 14,600, comfortably below the 1-in-10,000 maximum risk that NASA requires.
“I think it is unquestionable that we meet the regulatory requirements that have been laid out for re-entry,” says Asparouhov. “Ultimately, the challenge we face today has nothing to do with safety and regulatory departments. It comes down to the coordination between military ranges that haven’t done this type of commercial activity.”
No room for error
When Winnebago 1 launched in June on a Rocket Lab Photon mission, Varda had still not received its license for re-entry at UTTR. The company continued to communicate with the FAA and the Department of Defense while its space factory went to work, but the days quickly ticked down to its planned 7 September re-entry. And that date had little wiggle room, says Asparouhov: “If you think about a launch delay, you can get ready for a delay of an hour or a day. But with the orbital mechanics of re-entry, you really have to all be aligned on a narrow operational window.”
Varda’s on-orbit space factory satellite is codenamed Winnebago. Varda Space Industries
On 6 September, the FAA denied Varda its re-entry license “because the company did not demonstrate compliance with the regulatory requirements, including not having an authorized landing location,” the FAA told Spectrum.
“There was no one thing that made it not work,” says Asparouhov. “It was everything from the military range’s schedule to FAA’s AST office, which handles licensing, to FAA’s ATO, the air traffic office. This was ultimately a question of coordination.”
On 8 September, Varda requested that the FAA reconsider its decision. But nothing happened immediately. In mid-September, Varda asked the FCC for a six-month extension on being able to communicate with Winnebago 1 via radio. “We do not expect to need that much time,” it wrote. “We will deorbit as soon as conditions permit.”
On 12 October, Varda sent another hopeful message to the FCC: “We are actively engaged with the FAA to keep them up to date. This week UTTR has suggested January for reentry, and our discussions with UTTR to schedule specific landing date(s) will continue through October and November, in coordination with the FAA.”
Moving operations to Australia
But even as it struggled to get Winnebago 1 back down to Earth, Varda was shifting its plans for future missions. On 19 October, Varda announced a partnership to use the Koonibba Test Range in South Australia for some future re-entry operations, possibly even as soon as Winnebago 2 in 2024. Asparouhov told Spectrum that using Koonibba, which has fewer nearby population centers and fewer commercial flights overhead, might mean fewer constraints on operations.
The FAA, however, would still regulate re-entry operations of US space missions, even in Australia. “We just need a more responsive agency from the FAA,” says Asparouhov. “And obviously that has to do with funding and staffing levels not lining up to the huge increase in activity in commercial space.”
That refrain was echoed this week by SpaceX, Blue Origin, and Virgin Galactic at a Senate hearing, where SpaceX vice president Bill Gerstenmaier testified that the FAA’s commercial space office “needs at least twice the resources that they have today” for licensing rocket launches.
Any shift to overseas operations would come too late for Winnebago 1, says Asparouhov, as the mission was designed to land in Utah.
For now, while the capsule circles the Earth at thousands of kilometers per hour, the licensing process on the surface seems to be proceeding at a snail’s pace. Varda continues to negotiate with UTTR, and the FAA has not yet begun to review its decision to deny the space factory a license to land.
On 20 October, the FAA told Spectrum: “Varda still has not submitted the required revised license application that is necessary for the reconsideration process to begin.”
Update 25 Oct. 2023: This story was updated to remove the link to the flight safety analysis document described in this story, which the FCC posted to a public database in error.
-
A Shapeshifting Molecule Leads to a Tiny Piezoresistor

Electronic sensors that use piezoresistance—that is, a change in the electric resistance of a material due to mechanical activity—are common in many devices, including cars, medical wearables, and smartphones. Now, researchers in Australia have developed a tiny version of a piezoresistor, as small as a single molecule, that could enable an entirely new host of applications.
Aside from the piezoresistor’s small size, what’s new is its molecular base, says one of the study’s authors, Nadim Darwish, a molecular electronics researcher at Curtin University in Perth, Australia. “The base of existing piezoresistors is made from solid-state materials, silicon, or other types of crystals…but we use organic molecules, made mainly from carbon…[which means they] can be coupled to other molecules and to biological molecules [like proteins and enzymes].”
This would make the sensor useful in biosensors to detect diseases, for instance. There are other possible applications in chemical sensors, human–machine interfaces, and health monitoring devices. The researchers’ paper was published in Nature Communications on 3 October.
“We found a use for something that would normally be a disaster.” —Jeffrey Reimers, University of Technology Sydney
The sensor is based around a hydrocarbon molecule called bullvalene, with 10 carbon and 10 hydrogen atoms. The bullvalene appears to be a stable molecule, but displays some peculiar behavior—its carbon atoms are constantly changing places with one another internally. Study co-author Thomas Fallon, an organic synthesis expert at Newcastle University, likens it to a wire that changes its shape, length, and properties simultaneously and spontaneously.
When mechanically strained, the bullvalene molecule shape-shifts into different forms, called isomers; this happens very fast, and the isomers have different electrical resistance, which alters electricity flow. “You wouldn’t want something in your body changing like that, or the parts of a car engine rearranging [themselves] 3,000 times a second,” co-author Jeffrey Reimers, a physical chemistry researcher at the University of Technology Sydney, says. “We found a use for something that would normally be a disaster.”
The speed at which bullvalene shape-shifts is the underlying principle by which the sensor operates. However, the molecule’s “dance” would be meaningless without being able to take sensible measurements of its resistance on a millisecond timescale. This was where the experiment began, Fallon says, to see if the researchers could detect the molecule changing shape in real time. “Because if we can do that, we can do many things in the future.”
The researchers used scanning tunneling microscopy (STM) to take their measurements. An STM is a highly specialized instrument that can resolve features at the atomic scale. Instead of using light or electron beams, it exploits a quantum mechanical process called electron tunneling. An ultrasharp conductive tip (in this case, made of gold) is positioned less than 1 nanometer from the specimen and scanned across its surface. Applying a small voltage to the STM tip makes electrons tunnel across the gap, and the changes in the tunneling current can be resolved into an image of the atomic surface.
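The exponential sensitivity that makes this possible is easy to illustrate. In a simple one-dimensional barrier model, the tunneling current falls off as exp(-2κd), with κ set by the barrier height; the numbers below use a typical textbook work function, not values from the paper.

```python
import math

HBAR = 1.0545718e-34      # J*s
M_E = 9.10938e-31         # electron mass, kg
EV = 1.602177e-19         # J per electronvolt

phi = 4.5 * EV            # assumed barrier height (~a metal work function)
kappa = math.sqrt(2 * M_E * phi) / HBAR   # decay constant, 1/m

for gap_angstrom in (1.0, 2.0, 3.0):
    d = gap_angstrom * 1e-10
    print(f"gap {gap_angstrom:.0f} A -> relative current {math.exp(-2 * kappa * d):.1e}")
# Each extra angstrom of gap cuts the current by roughly an order of magnitude,
# which is why the tip can register single-molecule changes in conductance.
```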
“You can think about it as an alternative way of sensing,” says Darwish. By being able to detect how the resistance changes in this molecular sensor, when it is coupled with a biomolecule, like a protein or enzyme, it would be possible to detect changes in those proteins or enzymes.
At this point, the researchers stress, this is fundamental research, and not a functional device that can go into the market. Their work comprised multiple specializations, which Reimers sums up: “Thomas [Fallon] makes the molecule; Nadim [Darwish] puts them on the STM and takes the measurements; I simulate all the chemical structures that could be involved, and the rates and the processes; and Daniel [Kosov] does the quantum physics to work out the conductivity based upon the structures that I generate.”
Making the device practically useful involves getting rid of the STM and transferring the technology to a cheap platform that can be used in a pathology lab or a doctor’s office, on a spacecraft to analyze the stresses on its body in real time, and so on. The researchers are already working on some of this.
“There are other technologies being developed right now, like nano arrays that have electrodes separated by a few nanometers, [which] can be combined with our bullvalenes,” says Darwish. He feels they are not far from creating a functional device. Since the publication of their current paper, the team has made progress in developing a thin film made of bullvalenes that can be put on a small chip. They call it the next logical step from the single-molecule level.
-
Exploring Sydney’s Deep Tech Ecosystem

This sponsored article is brought to you by BESydney.
In the dynamic landscape of Australian technology, market advancements are often attributed to consumer-focused products like Canva and Afterpay. Capturing headlines and attention with their renowned success stories, these, along with other global companies like Atlassian, Facebook, and Apple, have become the face of the tech industry.
The accomplishments of these companies are remarkable. They generate immense wealth for stakeholders and employees and boast a staggering market value. But this high-profile side of the industry is just the tip of the iceberg. Deep tech – characterised by breakthrough scientific innovations – is where hidden impacts take place. Beneath the surface of these tech giants lies a thriving industry dedicated to researching and developing solutions that address large-scale problems, with a profound effect on society.
The power of deep tech
The tech industry in Australia is a powerhouse, employing one in 16 Australians and ranking as the country’s third-largest industry. In 2021, it accounted for 8.5 percent of the GDP, an undeniably significant contribution to the nation’s economy.
For nearly two decades, Sydney has also nurtured a thriving community of resilient problem solvers, quietly pushing the boundaries of scientific discovery. While consumer-focused tech giants often steal the spotlight, it is imperative to recognize the profound impact of deep tech solutions that operate behind the scenes.
From eco-friendly fabric manufacturing and hydrogen storage to molecular diagnostics and sustainable alternatives to plastics, Sydney’s brightest minds are tackling some of the world’s most pressing challenges.
The transformation of deep tech startups
Navigating the deep tech landscape is no small feat. These enterprises offer long-term solutions to pressing global challenges – a benefit that cannot be ignored – but deep tech innovations require significant time for research and development, often incubating for years before reaching the market.
They demand substantial investment and unwavering focus. Finding the right path to commercialization is paramount. Thankfully, incubators are emerging as champions in successfully transforming deep tech startups into thriving businesses.
“Sydney’s DNA demands a deep-rooted vision, an unwavering belief in problem-solving, and the determination to persevere despite challenges.” —Sally-Ann Williams, Cicada Innovations
Cicada Innovations is Australia’s oldest and largest deep tech incubator. It knows better than anyone the extent to which Australia’s deep tech evolution hinges on the power of startups. With over 365 resident companies incubated, over $1.7 billion raised, over $1.4 billion in exits, and over 900 patents filed, these dynamic ventures are already spearheading groundbreaking advancements.
These ventures are creating intelligent robots and pioneering scaled drone delivery to minimize environmental impacts in transportation. They’re slashing the cost of cancer drugs, offering hope for prolonged lifespans and alleviating suffering. And they’re crafting innovative farming tools to enhance agricultural yields and contribute to global food security.

A thriving hub for deep tech innovation
With its vibrant ecosystem, Sydney emerges as an ideal hub for unveiling and further developing deep tech innovations. The Australian spirit, shaped by resilience and problem-solving, thrives in this city. Sally-Ann Williams, chief executive of Cicada Innovations, affirms that “Sydney’s DNA demands a deep-rooted vision, an unwavering belief in problem-solving, and the determination to persevere despite challenges.”
The city offers a supportive community, facilitating connections and access to the talent necessary for entrepreneurs to pursue their dreams. It’s this unique blend of ingredients that fuels the growth of deep tech companies, propelling them toward success.

Discover deep tech at Tech Central
Deep tech is just one facet of what’s happening at Tech Central. While we shed light on these industry accomplishments and celebrated breakthroughs, it’s crucial to support and foster the growth of a wider industry: one that thrives on resilience, problem-solving, and visionary entrepreneurship.
Sydney – with its unique blend of community, talent, and resources – stands at the forefront of this transformative revolution, ready to propel tech innovation for the benefit of all.
For more information on Sydney’s Tech Industry and hosting your next conference in Sydney, visit besydney.com.au.
A Closer Look at Deep Tech Innovators
To truly grasp the essence of deep tech, we must explore the stories of individuals and companies that are driving change. Here are a few examples of how deep tech is flourishing at Tech Central:
Xefco: A sustainable textile revolution
Xefco is a groundbreaking new materials company revolutionizing fabric manufacturing. Its innovative process significantly reduces water usage by up to 90% and eliminates the need for dyes and harsh chemicals. Traditionally, textile mills worldwide have polluted rivers and harmed local communities – Xefco aims to transform the textile industry, benefitting both the environment and economically disadvantaged communities worldwide.
Rux: Empowering the hydrogen economy
Another trailblazing company in Sydney’s deep tech ecosystem, Rux Energy, is tackling the challenge of hydrogen storage. Hydrogen presents immense potential in the energy transition movement, but efficient and scalable storage solutions are essential for its widespread adoption. Rux is developing new materials and technologies to store hydrogen more effectively, paving the way for a cleaner and more sustainable future.
SpeeDX: Revolutionising molecular diagnostics
Amidst the global pandemic, SpeeDX, a Sydney-based company, emerged as a key player in molecular diagnostic testing and antimicrobial resistance. SpeeDX aims to address the rising concern of antibiotic overuse by providing personalized recommendations for effective treatment. This groundbreaking technology has far-reaching implications, reducing unnecessary antibiotic usage, minimizing the risk of antimicrobial resistance, and safeguarding public health on a global scale.
-
Superficial Brain Implant Could Have a Deep Impact

Two types of neural stimulation have proved themselves in the clinic. There’s deep brain stimulation (DBS), which requires implanting electrodes deep in the brain tissue and wiring them to a battery-powered pulse generator in the chest. It involves major surgery, but once the technology is in place, patients can go about their lives without issue—until the battery needs replacing in a few years. And there’s transcranial magnetic stimulation (TMS), in which a wand delivers powerful magnetic pulses from outside the head to alter the activity of brain cells. No surgery is required, but patients need to be close to a clinic, because they’ll go in for daily treatments for about six weeks.
Now a new technology could offer the best of both worlds: a minimally invasive neurostimulation device that patients could operate in their own homes.
In a preprint paper posted to the server medRxiv, neural engineers describe a tiny wireless device that could be implanted below the skull but above a protective membrane called the dura mater that surrounds the brain. Study coauthor Jacob Robinson, a professor at Rice University, tells IEEE Spectrum that he expects the procedure would take less than 30 minutes. Afterward, the patient could do at-home treatments with the help of a wearable headset that transmits power and instructions to the implant. Robinson is also the cofounder of Motif Neurotech, a startup that’s trying to commercialize the technology.
The “real breakthrough,” he says, is the device’s wireless power delivery system. It starts with a transmitter in the wearable headset that emits a magnetic field. Within the implant is a magnetoelectric material that vibrates in the presence of a magnetic field, and another component that turns those vibrations’ mechanical energy into electricity.
This technique is somewhat similar, Robinson says, to the work that’s been done pairing ultrasound with a piezoelectric crystal, which turns sound-wave vibrations into electricity and can power a tiny neural implant. “The challenge with ultrasound is that it’s tough to align with the focal spot, and it’s tough to get through air or bone because there are all these reflective interfaces,” says Robinson.
The new implant is powered by a magnetoelectric film that vibrates in the presence of a magnetic field. Motif Neurotech
In a proof-of-concept experiment with a human volunteer who was already undergoing brain surgery, Robinson’s team demonstrated that their tiny device (about 9 millimeters across) could stimulate the brain through the dura mater. They placed the implant on the dura and above a part of the brain called the motor cortex, which controls movement; when they turned the device on, the volunteer’s hand contracted. The experiment was similar to how TMS clinics calibrate their devices: At the beginning of a patient’s treatment, the technician positions the wand over the motor cortex and turns up the power until a muscle in the patient’s hand contracts.
However, it’s a far cry from making a hand twitch to providing a patient lasting relief from an ailment, such as depression. Indeed, depression is already treated with TMS, and it has also been a target application in DBS trials. Helen Mayberg, a leading DBS researcher and a neurologist in the Mount Sinai Health System, says she looks forward to seeing “how this technology evolves for use in brain locations outside of the motor cortex.”
The researchers’ next step—and it’s a big one—will be an FDA-approved clinical trial to see if their brain stimulator has a therapeutic effect on people with depression. If it passes that test, their technology could be considered for many other neuropsychiatric disorders that have been the focus of recent promising trials with TMS, including obsessive-compulsive disorder, addiction, Alzheimer’s, and PTSD.
Maysam Chamanzar, an associate professor at Carnegie Mellon University who works on next-generation neural interfaces, says he’s impressed with the results in the preprint. “Using the magnetoelectric effect to transfer energy into the tissue is an interesting idea,” he tells Spectrum. “The wirelessly transferred power and the achieved voltage levels at the implant are encouraging.” Chamanzar says he’d like to see long-term data about both the reliability and safety of the device. He also wonders whether the device can be miniaturized further without sacrificing too much efficiency, and whether it can precisely target small brain regions through the dura mater. But “this work shows a great first step,” he says.
Robinson believes that Motif’s technology has several advantages over TMS. He notes that some studies have shown that increasing the duration or frequency of TMS treatments may yield better results, and Motif’s at-home treatment regime would make it easy to change those variables. What’s more, future clinical trials could put several implants in different places to change brain activity at multiple locations in a neural circuit. Finally, the device can do more than just stimulate the brain; it can also record its electrical activity, allowing for data collection on the effects of the treatment. There’s a great deal of interest in “closed loop” systems, which use a patient’s response to stimulation to adjust that stimulation.
Robinson says the technology’s design leaves plenty of room for future improvements. “The exciting thing about separating the implant and the wearable,” he says, “is that the implant can have features like stimulation and recording, and later we can update the wearable to support things like closed-loop neuromodulation. Our vision is that over time we unlock patient-specific and adaptive therapies using that same implant, or perhaps a network of implants.”
-
Nanopore Sequencing Detects Dozens of Disease Biomarkers

Nanopore sequencing has, over the past three decades, become the most affordable, efficient way to sequence DNA. The method, which involves pulling DNA strands through a membrane via channels just nanometers wide, can read much longer DNA sequences than is possible with traditional techniques. Nanopore sequencing allows scientists to analyze complex parts of the genome, but while the technique is excellent at parsing DNA, it does not work with other biological molecules that can be markers for diseases.
Now, researchers at Imperial College London have found a way around that limitation. By using small pieces of DNA as a barcode for other biomolecules, the researchers have harnessed the power of nanopore sequencing to identify dozens of different disease biomarkers in blood samples. In a paper published on 25 September in the journal Nature Nanotechnology, the team demonstrates that the strategy can simultaneously detect 40 biomarkers from human blood serum, including proteins, small RNA pieces called microRNA, and chemicals transmitted by brain cells (neurotransmitters).
The researchers show that their technique can detect 40 different biomarkers in human blood samples. But they are now working on detecting 100.
The advance opens a door to personalized medicine, and could allow doctors to accurately diagnose cancer and heart disease early and customize treatment for patients. “A lot of components of this technology existed before,” says Aleksandar Ivanov, a professor of chemistry at Imperial College London and a member of the research team behind the advancement. “It was the synergy of bringing it all together that makes this work important. This includes the precision with which you sequence, the ability to control the transport of the analyte, and also elements of machine learning to be able to reconstruct the signal.”
Ivanov, chemistry professor Joshua Edel, and their team at Imperial College worked with researchers at Oxford Nanopore Technologies, which makes the world’s only commercial nanopore sequencer. The machine generates an electric field to pull long DNA strands through pores narrower than 3 nanometers. As a strand moves through a pore, it restricts the passage of ions present in the sample, causing tiny changes in the ionic current that can be used to identify individual nucleotides—the DNA bases commonly known by the letters A, C, G, and T.
But other biomolecules are either too small to be detected by nanopore sequencing, says Ivanov, or so big that they don’t pass through. Even if they can squeeze through the pores, the signals produced by the sequencer can be nonspecific, he says. Two proteins, for instance, can have similar electronic signatures but completely different biological functions.
To address the problem, the team made small, 30-nucleotide-long DNA snippets that acted like unique barcodes for different biomarkers. They connected each barcode to special “probe” molecules that bind to specific disease biomarkers, whether that’s a protein or a neurotransmitter or something else. After adding these DNA barcode probes to a blood sample, the researchers put it through the nanopore sequencer.
That’s when the machine-learning algorithms kick in. The team trained algorithms to identify the nucleic acid sequence of each unique barcode from the jumble of signals created as multiple barcodes from different biomarkers are read simultaneously.
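In spirit, the decoding step resembles matching each read against a library of known barcodes while tolerating read errors. The sketch below does this with a simple Hamming-distance rule; the barcode sequences, probe names, and error threshold are all hypothetical, and the published pipeline relies on trained machine-learning models rather than this naive matcher.

```python
# Hypothetical 30-nt barcodes mapped to the probes they are attached to.
BARCODES = {
    "ATCGGCTAAGGTTCACGATCCGTTAGGCCA": "protein probe",
    "GGATCCTTAACGCGTATTGGCACTAGGTTC": "microRNA probe",
    "TTGACGGATCCATTGGCAAGCTTACGGACT": "neurotransmitter probe",
}

def hamming(a: str, b: str) -> int:
    return sum(x != y for x, y in zip(a, b))

def assign(read: str, max_errors: int = 4):
    """Assign a read to the closest barcode, or None if nothing is close enough."""
    best = min(BARCODES, key=lambda bc: hamming(read, bc))
    return BARCODES[best] if hamming(read, best) <= max_errors else None

noisy_read = "ATCGGCTAAGCTTCACGATCCGTTAGGCCA"   # one error vs. the first barcode
print(assign(noisy_read))                        # -> "protein probe"
```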
“We use these probes to fish out molecules of interest in the sample,” says Ivanov. “We know which barcode corresponds to which probe. This allows us to detect whether a molecule is present in solution or not. In a way we are fingerprinting what’s inside the solution.”
“It was the synergy of bringing it all together that makes this work important. This includes the precision with which you sequence, the ability to control the transport of the analyte, and also elements of machine learning to be able to reconstruct the signal.” —Aleksandar Ivanov, professor of chemistry, Imperial College London
Combining DNA bar coding and nanopore sequencing makes for a powerful technique by pairing both selectivity and sensitivity to more effectively screen for diseases, says Edel. “In the context of personalized [medicine] and being able to perform more effective diagnostics and screening, there are two aspects at play. One is getting a better snapshot of a patient by detecting an array of biomarkers. And for some diseases, biomarkers—especially in the early stages—are found at very low concentration. So being able to detect on an individual basis at low concentrations is absolutely critical.”
In their paper, the researchers show that their technique can detect 40 different biomarkers in human blood samples. But they are now working on detecting 100, says Edel. “In terms of upper limit, it’s likely closer to 1,000,” he says. “That’s the bigger picture, long-term vision.”
-
This Robot Could Be the Key to Empowering People With Disabilities

In 2010, Henry Evans saw a robot on TV. It was a PR2, from the robotics company Willow Garage, and Georgia Tech robotics professor Charlie Kemp was demonstrating how the PR2 was able to locate a person and bring them a bottle of medicine. For most of the people watching that day, the PR2 was little more than a novelty. But for Evans, the robot had the potential to be life changing. “I imagined PR2 as my body surrogate,” Evans says. “I imagined using it as a way to once again manipulate my physical environment after years of just lying in bed.”
Eight years earlier, at the age of 40, Henry was working as a CFO in Silicon Valley when he suffered a strokelike attack caused by a birth defect, and overnight, became a nonspeaking person with quadriplegia. “One day I was a 6’4”, 200 Lb. executive,” Evans wrote on his blog in 2006. “I had always been fiercely independent, probably to a fault. With one stroke I became completely dependent for everything…. Every single thing I want done, I have to ask someone else to do, and depend on them to do it.” Evans is able to move his eyes, head, and neck, and slightly move his left thumb. He can control a computer cursor using head movements and an onscreen keyboard to type at about 15 words per minute, which is how he communicated with IEEE Spectrum for this story.
Henry Evans shaves with the assistance of a PR2 robot in 2012.
Georgia Tech
After getting in contact with Kemp at Georgia Tech, and in partnership with Willow Garage, Evans and his wife Jane began collaborating with the roboticists on a project called Robots for Humanity. The goal was to find ways of extending independence for people with disabilities, helping them and, just as importantly, their caregivers live better and more fulfilling lives. The PR2 was the first of many assistive technologies developed through Robots for Humanity, and Henry was eventually able to use the robot to (among other things) help himself shave and scratch his own itch for the first time in a decade.
“Robots are something that was always science fiction for me,” Jane Evans told me. “When I first began this journey with Henry, it never entered my mind that I’d have a robot in my house. But I told Henry, ‘I’m ready to take this adventure with you.’ Everybody needs a purpose in life. Henry lost that purpose when he became trapped in his body, and to see him embrace a new purpose—that gave my husband his life back.”
Even simple tasks like repositioning a blanket require a caregiver, but Henry can use Stretch to move it on his own. Peter Adams
Henry stresses that an assistive device must not only increase the independence of the disabled person but also make the caregiver’s life easier. “Caregivers are super busy and have no interest in (and often no aptitude for) technology,” he explains. “So if it isn’t dead simple to set up and it doesn’t save them a meaningful amount of time, it very simply won’t get used.”
While the PR2 had a lot of potential, it was too big, too expensive, and too technical for regular real-world use. “It cost $400,000,” Jane recalls. “It weighed 400 pounds. It could destroy our house if it ran into things! But I realized that the PR2 is like the first computers—and if this is what it takes to learn how to help somebody, it’s worth it.”
For Henry and Jane, the PR2 was a research project rather than a helpful tool. It was the same for Kemp at Georgia Tech—a robot as impractical as the PR2 could never have a direct impact outside of a research context. And Kemp had bigger ambitions. “Right from the beginning, we were trying to take our robots out to real homes and interact with real people,” he says. To do that with a PR2 required the assistance of a team of experienced roboticists and a truck with a powered lift gate. Eight years into the Robots for Humanity project, they still didn’t have a robot that was practical enough for people like Henry and Jane to actually use. “I found that incredibly frustrating,” Kemp recalls.
In 2016, Kemp started working on the design of a new robot. The robot would leverage years of advances in hardware and computing power to do many of the things that the PR2 could do, but in a way that was simple, safe, and affordable. Kemp found a kindred spirit in Aaron Edsinger, who like Kemp had earned a Ph.D. at MIT under Rodney Brooks. Edsinger then cofounded a robotics startup that was acquired by Google in 2013. “I’d become frustrated with the complexity of the robots being built to do manipulation in home environments and around people,” says Edsinger. “[Kemp’s idea] solved a lot of problems in an elegant way.” In 2017, Kemp and Edsinger founded Hello Robot to make their vision real.
Stretch is a relatively small robot that one person can easily move, but it has enough range of motion to reach from the floor to countertop height. Hello Robot
The robot that Kemp and Edsinger designed is called Stretch. It’s small and lightweight, easily movable by one person. And with a commercial price of US $20,000, Stretch is a tiny fraction of the cost of a PR2. The lower cost is due to Stretch’s simplicity—it has a single arm, with just enough degrees of freedom to allow it to move up and down and extend and retract, along with a wrist joint that bends back and forth. The gripper on the end of the arm is based on a popular (and inexpensive) assistive grasping tool that Kemp found on Amazon. Sensing is focused on functional requirements, with basic obstacle avoidance for the base along with a depth camera on a pan-and-tilt head at the top of the robot. Stretch is also capable of performing basic tasks autonomously, like grasping objects and moving from room to room.
This minimalist approach to mobile manipulation has benefits beyond keeping Stretch affordable. Robots can be difficult to manually control, and each additional joint adds extra complexity. Even for non-disabled users, directing a robot with many different degrees of freedom using a keyboard or a game pad can be tedious, and requires substantial experience to do well. Stretch’s simplicity can make it a more practical tool than robots with more sensors or degrees of freedom, especially for novice users, or for users with impairments that may limit how they’re able to interact with the robot.
A Stretch robot under Henry Evans’s control helps his wife, Jane, with meal prep and cleanup.
Vy Nguyen/Hello Robot
“The most important thing for Stretch to be doing for a patient is to give meaning to their life,” explains Jane Evans. “That translates into contributing to certain activities that make the house run, so that they don’t feel worthless. Stretch can relieve some of the caregiver burden so that the caregiver can spend more time with the patient.” Henry is acutely aware of this burden, which is why his focus with Stretch is on “mundane, repetitive tasks that otherwise take caregiver time.”
Vy Nguyen [left] is an occupational therapist at Hello Robot who has been working extensively with both Henry and Jane to develop useful applications for Stretch in their home. Peter Adams
Vy Nguyen is an occupational therapist who has been working with Hello Robot to integrate Stretch into a caregiving role. With a $2.5 million Small Business Innovation Research grant from the National Institutes of Health and in partnership with Wendy Rogers at the University of Illinois Urbana-Champaign and Maya Cakmak at the University of Washington, Nguyen is helping to find ways that Stretch can be useful in the Evanses’ daily lives.
To scratch an itch on his head, Henry uses a hairbrush that has been modified with a soft sleeve to make it easier for the robot to grasp it. Vy Nguyen/Hello Robot
There are many tasks that can be frustrating for the patient to depend on the caregiver for, says Nguyen. Several times an hour, Henry suffers from itches that he cannot scratch, and which he describes as debilitating. Rather than having to ask Jane for help, Henry can instead have Stretch pick up a scratching tool and use the robot to scratch those itches himself. While this may seem like a relatively small thing, it’s hugely meaningful for Henry, improving his quality of life while reducing his reliance on family and caregivers. “Stretch can bridge the gap between the things that Henry did before his stroke and the things he aspires to do now by enabling him to accomplish his everyday activities and personal goals in a different and adaptable way via a robot,” Nguyen explains. “Stretch becomes an extension of Henry himself.”
This is a unique property of a mobile robot that makes it especially valuable for people with disabilities: Stretch gives Henry his own agency in the world, which opens up possibilities that go far beyond traditional occupational therapy. “The researchers are very creative and have found several uses for Stretch that I never would have imagined,” Henry notes. Through Stretch, Henry has been able to play poker with his friends without having to rely on a teammate to handle his cards. He can send recipes to a printer, retrieve them, and bring them to Jane in the kitchen as she cooks. He can help Jane deliver meals, clear dishes away for her, and even transport a basket of laundry to the laundry room. Simple tasks like these are perhaps the most meaningful, Jane says. “How do you make that person feel like what they’re contributing is important and worthwhile? I saw Stretch being able to tap into that. That’s huge.”
Using Stretch to manipulate cards, Henry can play games with friends and family without having to be on a team with someone else. Vy Nguyen/Hello Robot
One day, Henry used Stretch to give Jane a rose. Before that, she says, “Every time he would pick flowers for me, I’m thanking Henry along with the caregiver. But when Henry handed me the rose through Stretch, there was no one else to thank but him. And the joy in his face when he handed me that rose was unbelievable.”
Henry has also been able to use Stretch to interact with his three-year-old granddaughter, who isn’t quite old enough to understand his disability and previously saw him, says Jane, as something like a piece of furniture. Through Stretch, Henry has been able to play little games of basketball and bowling with his granddaughter, who calls him “Papa Wheelie.” “She knows it’s Henry,” says Nguyen, “and the robot helped her see him as a person who can play with and have fun with her in a very cool way.”
Through Stretch, Henry can play games with his granddaughter, like this version of bowling adapted for both small children and robots. Vy Nguyen/Hello Robot
The person working the hardest to transform Stretch into a practical tool is Henry. That means “pushing the robot to its limits to see all it can do,” he says. While Stretch is physically capable of doing many things (and Henry has extended those capabilities by designing custom accessories for the robot), one of the biggest challenges for the user is finding the right way to tell the robot exactly how to do what you want it to do.
The graphical user interface that Henry (in collaboration with the researchers) developed to control Stretch uses multiple camera views and large onscreen buttons to make it easier for Henry to do tasks like feeding himself. Julian Mehu/Hello Robot
Henry collaborated with the researchers to develop his own graphical user interface to make manual control of Stretch easier, with multiple camera views and large onscreen buttons. But Stretch’s potential for partially or fully autonomous operation is ultimately what will make the robot most successful. The robot relies on “a very particular kind of autonomy, called assistive autonomy,” Jane explains. “That is, Henry is in control of the robot, but the robot is making it easier for Henry to do what he wants to do.” Picking up his scratching tool, for example, is tedious and time consuming under manual control, because the robot has to be moved into exactly the right position to grasp the tool. Assistive autonomy gives Henry higher-level control, so that he can direct Stretch to move into the right position on its own. Stretch now has a menu of prerecorded movement subroutines that Henry can choose from. “I can train the robot to perform a series of movements quickly, but I’m still in complete control of what those movements are,” he says.
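To give a flavor of what one of those prerecorded subroutines might look like in software, here is a sketch written against Hello Robot’s open-source stretch_body Python interface. The waypoints and the routine itself are invented for illustration; Henry’s actual menu lives behind the custom GUI described above.

```python
import time
import stretch_body.robot

robot = stretch_body.robot.Robot()
robot.startup()

# A recorded "pick up the scratching tool" routine: (lift height, arm extension),
# both in meters. Hypothetical values; a real routine would be taught in place.
PICK_UP_TOOL = [(0.60, 0.00), (0.60, 0.25), (0.45, 0.25), (0.60, 0.10)]

for lift_m, arm_m in PICK_UP_TOOL:
    robot.lift.move_to(lift_m)
    robot.arm.move_to(arm_m)
    robot.push_command()        # queued joint commands execute together
    time.sleep(2.0)             # crude pause; production code would poll joint state

robot.stop()
```

The appeal of this pattern is that the hard part, teaching the waypoints, happens once; afterward, triggering the routine is a single menu selection.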
Henry adds that getting the robot’s assistive autonomy to a point where it’s functional and easy to use is the biggest challenge right now. Stretch can autonomously navigate through the house, and the arm and gripper can be controlled reliably as well. But more work needs to be done on providing simple interfaces (like voice control), and on making sure that the robot is easy to turn on and doesn’t shut itself off unexpectedly. It is, after all, still research hardware. Once the challenges with autonomy, interfaces, and reliability are addressed, Henry says, “the conversation will turn to cost issues.”
Henry Evans uses a Stretch robot to feed himself scrambled eggs.
Vy Nguyen/Hello Robot
A $20,000 price tag for a robot is substantial, and the question is whether Stretch can become useful enough to justify its cost for people with cognitive and physical impairments. “We’re going to keep iterating to make Stretch more affordable,” says Hello Robot’s Charlie Kemp. “We want to make robots for the home that can be used by everyone, and we know that affordability is a requirement for most homes.”
But even at its current price, if Stretch is able to reduce the need for a human caregiver in some situations, the robot will start to pay for itself. Human care is very expensive—the nationwide average is over $5,000 per month for a home health aide, which is simply unaffordable for many people, and a robot that could reduce the need for human care by a few hours a week would pay for itself within just a few years. And this isn’t taking into account the value of care given by relatives. Even for the Evanses, who do have a hired caregiver, much of Henry’s daily care falls to Jane. This is a common situation for families to find themselves in, and it’s also where Stretch can be especially helpful: by allowing people like Henry to manage more of their own needs without having to rely exclusively on someone else’s help.
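For a rough sense of that arithmetic (the hours saved, and the hourly rate derived from the article’s monthly figure, are assumptions):

```python
# Back-of-the-envelope payback estimate for Stretch, under stated assumptions.
robot_cost = 20_000                        # US $, Stretch's commercial price
monthly_aide_cost = 5_000                  # US $, full-time aide (from the article)
hourly_rate = monthly_aide_cost / (40 * 52 / 12)   # ~ $29/hour, assuming full-time
hours_saved_per_week = 5                   # assumed

yearly_savings = hourly_rate * hours_saved_per_week * 52
print(f"payback: {robot_cost / yearly_savings:.1f} years")   # roughly 2.7 years
```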
Henry Evans uses his custom graphical user interface to control the Stretch robot to pick up a towel, place the towel in a laundry basket, and then tow the laundry basket to the laundry room.
Vy Nguyen/Hello Robot
Stretch still has some significant limitations. The robot can lift only about 2 kilograms, so it can’t manipulate Henry’s body or limbs, for example. It also has no way of going up and down stairs, is not designed to go outside, and still requires a lot of technical intervention. And no matter how capable Stretch and robots like it become, Jane Evans is sure they will never be able to replace human caregivers, nor would she want them to. “It’s the look in the eye from one person to another,” she says. “It’s the words that come out of you, the emotions. The human touch is so important. That understanding, that compassion—a robot cannot replace that.”
Stretch may still be a long way from becoming a consumer product, but there’s certainly interest in it, says Nguyen. “I’ve spoken with other people who have paralysis, and they would like a Stretch to promote their independence and reduce the amount of assistance they frequently ask their caregivers to provide.” Perhaps we should judge an assistive robot’s usefulness not by the tasks it can perform for a patient but by what the robot represents for that patient, and for their family and caregivers. Henry and Jane’s experience shows that even a robot with limited capabilities can have an enormous impact on the user. As robots get more capable, that impact will only increase.
“I definitely see robots like Stretch being in people’s homes,” says Jane. “When is the question. I don’t feel like it’s eons away. I think we are getting close.” Helpful home robots can’t come soon enough, as Jane reminds us: “We are all going to be there one day, in some way, shape, or form.” Human society is aging rapidly. Most of us will eventually need some assistance with activities of daily living, and before then, we’ll be assisting our friends and family. Robots have the potential to ease that burden for everyone.
And for Henry Evans, Stretch is already making a difference. “They say the last thing to die is hope,” Henry says. “For the severely disabled, for whom miraculous medical breakthroughs don’t seem feasible in our lifetimes, robots are the best hope for significant independence.”
This article appears in the November 2023 print issue as “A Robot for Humanity.”
-
Programmable DNA Machines Offer General-Purpose Computing

What may be the first programmable DNA computer is capable of running billions of different circuits, according to a new study published in the journal Nature. The Chinese scientists who created the liquid machine say it could solve math problems and may one day find use in the diagnosis of diseases.
Whereas regular computers depend on silicon microchips, DNA computers rely on the molecules that nature has used to encode the blueprints for life for billions of years. DNA computing uses lab operations to perform calculations, with data in the form of DNA strands as the inputs and outputs.
One potential advantage that DNA computing might have over regular computing is the density of data it can store—in theory, DNA can store up to one exabyte, or 1 billion gigabytes, per cubic millimeter. In addition, trillions of DNA molecules can fit in a drop of water, suggesting that DNA computing is capable of performing a huge number of computations in parallel while requiring very little energy.
How DNA computers work
DNA consists of strands made up of four different molecules known as bases: adenine, thymine, cytosine, and guanine, abbreviated as A, T, C, and G. In electronics, data is typically encoded as strings of zeros and ones. In DNA computing, the bit pairs 00, 01, 10, and 11 can be encoded as A, T, C, and G.
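As a rough illustration, that mapping is just a lookup table. The sketch below follows the article’s example encoding; real schemes add error correction and avoid sequences that fold or mispair.

```python
# Sketch: pack binary data into DNA, two bits per base, using the
# article's example mapping (00->A, 01->T, 10->C, 11->G). Practical
# encodings add error correction and avoid troublesome sequences.
BITS_TO_BASE = {"00": "A", "01": "T", "10": "C", "11": "G"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(bits: str) -> str:
    """Convert an even-length bit string to a DNA sequence."""
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> str:
    """Convert a DNA sequence back to bits."""
    return "".join(BASE_TO_BITS[base] for base in strand)

assert encode("00011011") == "ATCG"
assert decode("ATCG") == "00011011"
```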
DNA computing typically performs computations based on the specific way in which bases bind to each other. Adenine pairs with thymine, and cytosine with guanine; a short strand made up of ATCG, for example, would bind to TAGC and not other sequences.
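Expressed in code, the pairing rule is equally simple. This sketch follows the article’s base-by-base picture; in reality strands bind antiparallel and can tolerate partial matches.

```python
# Sketch of Watson-Crick pairing: A binds T, and C binds G. This
# follows the article's simplified base-by-base picture; real strands
# bind antiparallel and can tolerate partial matches.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand: str) -> str:
    return "".join(PAIR[base] for base in strand)

def binds(a: str, b: str) -> bool:
    """True when the strands are exact base-wise complements."""
    return len(a) == len(b) and complement(a) == b

assert binds("ATCG", "TAGC")      # complementary strands hybridize
assert not binds("ATCG", "ATCG")  # identical strands do not
```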
When DNA molecules with specially designed sequences are mixed with each other, they can bind together and come apart in ways that make them serve as logic gates—devices that carry out logic operations such as AND, OR, and NOT. Logic gates are the building blocks of the digital circuits at the heart of regular computers.
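The idea can be caricatured in a few lines: treat a signal as the presence of a strand in the tube, and let a gate release its output strand only when its inputs permit. This is a toy abstraction of strand-displacement logic, ignoring concentrations and reaction kinetics.

```python
# Toy abstraction of DNA logic: a signal is "present" when its strand
# is in the tube, and a gate releases its output strand only when its
# inputs allow. Real gates work by strand displacement, with
# concentrations and kinetics this sketch ignores. (NOT is awkward in
# DNA and is usually handled by dual-rail encoding.)

def and_gate(tube, a, b, out):
    if a in tube and b in tube:
        tube.add(out)

def or_gate(tube, a, b, out):
    if a in tube or b in tube:
        tube.add(out)

tube = {"x1", "x2"}               # both input strands present
and_gate(tube, "x1", "x2", "y")   # the gate fires...
assert "y" in tube                # ...releasing the output strand
```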
A major problem that DNA computing has faced is developing programmable arrays of logic gates. Most DNA computers are designed to perform only specific algorithms or a limited number of computational tasks. In contrast, regular computers are general-purpose machines that run software that helps them perform many tasks.
“Our team has been working in the field of DNA computing for many years,” says study coauthor Fei Wang, a molecular engineer at Shanghai Jiao Tong University. “During our work, we gradually realized that existing DNA circuit design processes were application-specific. We always needed to design a set of molecules for a new function, which is time-consuming and not friendly to nonexperts, limiting the development and application of DNA computing.”
Now Wang and his colleagues have created DNA-based programmable gate arrays for general-purpose DNA computing. They say they can program a single array to implement more than 100 billion distinct circuits.
The technical challenges of DNA computing
A key obstacle standing in the way of DNA computing is that DNA molecules can flow in essentially any direction. This makes it challenging to bring logic gates together to perform computations in programmed sequences.
To surmount this problem, the researchers built so-called DNA origami. By designing a DNA sequence just right, one can get the resulting floppy strand to stick to itself and bend into virtually whatever 2D or 3D shape is desired. The DNA origami can fold and hold together because each DNA base binds to a specific partner.
The scientists generated DNA origami that could act like registers—devices that guide the flow of data and instructions within computers. These helped control the intrinsically random collisions of DNA molecules.
In the new DNA computers, oligonucleotides—short segments of DNA—shuffle around in test tubes much as electrons are shuttled inside regular computers. In their experiments, the researchers used a DNA computer made up of 30 logic gates with about 500 DNA strands to accurately find square roots. They also used it to identify three genetic molecules related to kidney cancer. When the computer was given 18 diseased and five healthy samples, it correctly detected and reported which were which in about 2 hours.
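The paper’s exact circuit isn’t described here, but the flavor of computing square roots with logic gates shows up even in a classic small case: the floor of the square root of a 4-bit number, built from AND, OR, and NOT alone. This is an illustrative Boolean construction, not the study’s DNA implementation.

```python
# Gate-level sketch: floor(sqrt(n)) for a 4-bit input n, built from
# AND, OR, and NOT alone, the kind of circuit a DNA computer can
# realize. An illustrative construction, not the Nature study's circuit.
import math

def sqrt4(x3, x2, x1, x0):
    y1 = x3 or x2
    y0 = ((not x3) and (not x2) and (x1 or x0)) or (x3 and (x2 or x1 or x0))
    return int(y1), int(y0)

# Check the two-bit output against the true integer square root.
for n in range(16):
    x3, x2, x1, x0 = [(n >> i) & 1 for i in (3, 2, 1, 0)]
    y1, y0 = sqrt4(x3, x2, x1, x0)
    assert 2 * y1 + y0 == math.isqrt(n)
```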
The researchers stressed that DNA computers will not supplant regular computers at conventional tasks. For one thing, it took the new devices hours to carry out computations.
Nevertheless, the new DNA computers “will be useful in biomedical applications—for example, cellular programming and molecular diagnostics,” Wang says. Since DNA computers use DNA as both input and output material, one can design them to, say, respond to a gene they detect by releasing a strand of DNA that can in turn have biological effects. Wang suggests that DNA computers may find use in programming cells to respond to pollutants for environmental monitoring, or to cancer-related molecules for disease treatment.
One weakness the scientists noted is that programming and running these DNA computers requires manual operations, “somewhat like the human programmers of the early programmable electronic general-purpose computer ENIAC,” Wang says. “We are now working on the automation of DNA computing by combining molecular reactions with electrically controlled liquid transfer.”
The researchers next hope to run some complex algorithms on their new devices “to show the advantages of DNA computing,” Wang says. They also aim to use their creations to diagnose different kinds of diseases, he adds.