Current feed - IEEE Spectrum
Imagine playing a new, slightly altered version of the game GeoGuessr. You’re faced with a photo of an average U.S. house, maybe two floors with a front lawn in a cul-de-sac and an American flag flying proudly out front. But there’s nothing particularly distinctive about this home, nothing to tell you the state it’s in or where the owners are from.
You have two tools at your disposal: your brain, and 44,416 low-resolution, bird’s-eye-view photos of random places across the United States and their associated location data. Could you match the house to an aerial image and locate it correctly?
I definitely couldn’t, but a new machine learning model likely could. The software, created by researchers at China University of Petroleum (East China), searches a database of remote sensing photos with associated location information to match the streetside image—of a home or a commercial building or anything else that can be photographed from a road—to an aerial image in the database. While other systems can do the same, this one is pocket-size compared to others and super accurate.
At its best (when faced with a picture that has a 180-degree field of view), it succeeds up to 97 percent of the time in the first stage of narrowing down location. That's better than or within two percentage points of all the other models available for comparison. Even under less-than-ideal conditions, it performs better than many competitors. When pinpointing an exact location, it's correct 82 percent of the time, which is within three points of the other models.
But this model is novel for its speed and memory savings. It is at least twice as fast as similar ones and uses less than a third the memory they require, according to the researchers. The combination makes it valuable for applications in navigation systems and the defense industry.
“We train the AI to ignore the superficial differences in perspective and focus on extracting the same ‘key landmarks’ from both views, converting them into a simple, shared language,” explains Peng Ren, who develops machine learning and signal processing algorithms at China University of Petroleum (East China).
The software relies on a method called deep cross-view hashing. Rather than try to compare each pixel of a street view picture to every single image in the giant bird’s-eye-view database, this method relies on hashing, which means transforming a collection of data—in this case, street-level and aerial photos—into a string of numbers unique to the data.
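Conceptually, the matching side of this reduces to comparing short binary codes. Here's a minimal sketch, assuming 64-bit codes and a random stand-in for the learned hash function (the actual code length and network in the paper may differ):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in hash codes: 64 bits for each of the 44,416 aerial photos.
# In the real system these come from a trained deep network, not randomness.
aerial_codes = rng.integers(0, 2, size=(44_416, 64), dtype=np.uint8)

def binarize(embedding: np.ndarray) -> np.ndarray:
    """Collapse a real-valued embedding into a compact binary hash code."""
    return (embedding > 0).astype(np.uint8)

query_code = binarize(rng.standard_normal(64))  # hashed street-view photo

# Hamming distance: count of differing bits. Comparing short binary codes
# like this is what makes hashing fast and memory-light versus raw pixels.
distances = (aerial_codes != query_code).sum(axis=1)
print("best aerial match:", int(distances.argmin()))
```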
To produce those number strings, the China University of Petroleum research group employs a type of deep learning model called a vision transformer that splits images into small units and finds patterns among the pieces. The model may find in a photo what it's been trained to identify as a tall building or circular fountain or roundabout, and then encode its findings into number strings. ChatGPT is based on similar architecture, but finds patterns in text instead of images. (The "T" in "GPT" stands for "transformer.")
The number that represents each picture is like a fingerprint, says Hongdong Li, who studies computer vision at the Australian National University. The number code captures unique features from each image that allow the geolocation process to quickly narrow down possible matches.
In the new system, the code associated with a given ground-level photo gets compared to those of all of the aerial images in the database (for testing, the team used satellite images of the United States and Australia), yielding the five closest candidates for aerial matches. Data representing the geography of the closest matches is averaged using a technique that weighs locations closer to each other more heavily to reduce the impact of outliers, and out pops an estimated location of the street view image.
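Here is a sketch of that final averaging stage, assuming the five nearest aerial matches are already in hand. The specific weighting below (inverse squared mean distance to the other candidates) is our guess at the kind of proximity weighting described, not the paper's formula:

```python
import numpy as np

# Latitude/longitude of the five closest aerial matches; four cluster in
# Oklahoma, one is a spurious New York hit.
candidates = np.array([
    [35.22, -97.44], [35.21, -97.45], [35.23, -97.43],
    [35.20, -97.46], [40.71, -74.01],
])

# Weight each candidate by closeness to the other candidates, so a lone
# geographic outlier contributes little to the final estimate.
pairwise = np.linalg.norm(candidates[:, None] - candidates[None, :], axis=-1)
mean_dist = pairwise.sum(axis=1) / (len(candidates) - 1)
weights = 1.0 / (1.0 + mean_dist) ** 2

estimate = (candidates * weights[:, None]).sum(axis=0) / weights.sum()
print(estimate)  # ~[35.3, -97.0]: near the cluster, barely pulled eastward
```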
The new mechanism for geolocation was published last month in IEEE Transactions on Geoscience and Remote Sensing.
“Though not a completely new paradigm,” this paper “represents a clear advance within the field,” Li says. Because this problem has been solved before, some experts, like Washington University in St. Louis computer scientist Nathan Jacobs, are not as excited. “I don’t think that this is a particularly groundbreaking paper,” he says.
But Li disagrees with Jacobs—he thinks this approach is innovative in its use of hashing to make finding image matches faster and more memory efficient than conventional techniques. It uses just 35 megabytes, while the next smallest model Ren's team examined requires 104 megabytes, about three times as much space.
The method is more than twice as fast as the next fastest one, the researchers claim. When matching street-level images to a dataset of aerial photography of the United States, the runner-up’s time to match was around 0.005 seconds—the Petroleum group was able to find a location in around 0.0013 seconds, almost four times faster.
“As a result, our method is more efficient than conventional image geolocalization techniques,” says Ren, and Li confirms that these claims are credible. Hashing “is a well-established route to speed and compactness, and the reported results align with theoretical expectations,” Li says.
Though these efficiencies seem promising, more work is required to ensure this method will work at scale, Li says. The group did not fully study realistic challenges like seasonal variation or clouds blocking the image, which could impact the robustness of the geolocation matching. Down the line, this limitation can be overcome by introducing images from more distributed locations, Ren says.
Still, long-term applications (beyond a super advanced GeoGuessr) are worth considering now, experts say.
There are some trivial uses for efficient image geolocation, such as automatically geotagging old family photos, says Jacobs. But on the more serious side, navigation systems could also exploit a geolocation method like this one. If GPS fails in a self-driving car, another way to quickly and precisely find location could be useful, Jacobs says. Li also suggests it could play a role in emergency response within the next five years.
There may also be applications in defense systems. Finder, a 2011 project from the Office of the Director of National Intelligence, aimed to help intelligence analysts learn as much as they could about photos without metadata using reference data from sources including overhead images, a goal that could be accomplished with models similar to this new geolocation method.
Jacobs puts the defense application into context: If a government agency sent a photo of a terrorist training camp without metadata, how can the site be geolocated quickly and efficiently? Deep cross-view hashing might be of some help.
After developing a prototype robot that was effective at cleaning both hard floors and carpets using a relatively simple carpet-sweeping mechanism, iRobot vice president Winston Tao and the iRobot marketing team have organized a focus group so that Roomba’s engineers can witness the reaction of potential first customers.
One pleasant midsummer day in 2001, Roomba’s engineers, Winston Tao, and several other iRobot folk rendezvoused at an unremarkable, multistory office building on the Cambridge side of the Charles River, across from Boston. We assembled in a narrow room. A long table occupied the room’s center. Snacks and sodas were set out along the back wall; the lighting was subdued. The dominant feature of this cramped chamber was a big one-way mirror occupying almost the entire front wall. Sitting at the table, one could see through the looking glass into a wonderland of market research on the other side. In that much larger, brightly lit room were comfortable chairs, an easel with a large pad of paper, and our hired facilitator. Although this was a familiar trope I’d seen a hundred times on TV, actually lurking in an observation room like this felt a touch surreal.
We’d paid maybe US $10,000 for the privilege of setting up some focus groups—probably the most the company had ever spent on a market research event. But we needed to know how potential customers would react to our Roomba prototype when they saw one in the (plastic) flesh cleaning the floor at their feet. At the appointed hour, our facilitator welcomed eight to 10 bona fide ordinary people as they filed into the large room and sat in the chairs. Our mind-child was about to receive its first critical judgment from strangers.
This article was adapted from the author's new book, Dancing with Roomba: Cracking the Robot Riddle and Building an Icon (Routledge, 2025). Joe Jones
The facilitator prepared participants by encouraging them to state their honest views and not to be swayed by the comments of others. “You are the world’s expert in your own opinion,” she told them.
At first the facilitator described Roomba without showing the group any photos or the device itself. She was met with skepticism that such a thing would actually work. Then she demonstrated one of the prototypes we had prepared for the event. As participants watched Roomba go about its business on both carpets and hard floors, their doubts ebbed. Even those who stated that they would never purchase such a device couldn’t help being intrigued. As the group discussion proceeded, soccer moms (representing “early mass-market adopters”) emerged as the most interested. They saw Roomba as a time-saver. This surprised and pleased us, as we’d expected the much smaller market of gadget geeks would be the first to fall in love.
iRobot built about 20 of its third major Roomba prototype, the T100, all with 3D-printed shells. Joe Jones
But we could take neither interest nor love to the bank. We needed to know how much customers would pay. Our facilitator eased into that part of the gathering’s proceedings. She did not inquire directly but rather asked, “If you saw this product in a store, what would you expect the price to be?”
The focus group’s responses were all over the map. Some people mentioned a price close to the $200 we intended to charge. A few folks we regarded as saints-in-training expected an even higher number. But most were lower. One woman said she’d expect Roomba to be priced at $25. Later when asked what she thought a replacement battery might cost, she said, “$50.” That hurt. For this lady, attaching our robot to a battery devalued the battery.
Throughout the proceedings our facilitator had been careful to leave a couple of things unmentioned. First, she never referred to Roomba as a robot, calling it instead an “automatic floor cleaner.” Three separate groups, comprising an aggregate of around two dozen people, gave their opinions that day. Of these, only two individuals spontaneously applied the term “robot” to Roomba.
The second unmentioned characteristic was the nature of Roomba's cleaning mechanism. That is, the facilitator had revealed no details about how it worked. Participants had seen the demo, observed Roomba cleaning effectively, and given their opinions about the price. They'd all assumed that a vacuum was at work; several used that term to refer to the robot. But now the facilitator told them, "Roomba is a carpet sweeper, not a vacuum." Then she asked again what they would expect to pay. On average, focus-group members from all three groups cut their estimates in half. Participants who had previously said $200 now said $100.
The focus group’s brutal revaluation exploded our world. The enabling innovation that made the energy budget work, that made Roomba technically and economically feasible, was cleaning with a carpet sweeper rather than a vacuum. People had seen that the carpet-sweeper-Roomba really did work. Yet they chose to trust conventional wisdom about vacuums versus carpet sweepers rather than their own apparently lying eyes. If we were forced to cut the robot’s price in half, we would lose money on every unit sold, and there would be no Roomba.
At the end of the evening, before any member of our stunned team could stagger out the door, Winston said simply, “Roomba has to have a vacuum.” A shotgun wedding was in the offing for bot and vac.
Scamp, the earliest Roomba prototype, was built in 1999. Joe Jones
The next day at work we gathered to discuss the focus group’s revelation. A half-hearted attempt or two to deny reality quickly faded—electrical engineer Chris Casey saw to that—and we accepted what we needed to do. But changing things now would be a huge challenge in multiple ways. We were deep into development, closer to launch than kickoff. All the electrical power our battery could supply was already spoken for. None was available for a new system that would likely be more power hungry than all the robot’s other systems combined. And where could we put a vacuum? All the space in the robot was also fully assigned. Our mandate to clean under furniture and between chair legs wouldn’t let us make the robot any bigger.
One escape hatch beckoned, but no one was eager to leap through it. Chris articulated what we were all thinking. “We could build a vestigial vacuum,” he said. That is, we could design a tiny, pico-power vacuum—one that consumes almost no power and does almost nothing—strap it on the robot, and call it done. Perversely, that seemed reasonable. The robot already cleaned the floor well; our cleaning tests proved it. Customers, however, didn’t know that. They were all steeped in the dogma of vacuum supremacy. Reeducating the masses wasn’t possible—we didn’t have the funds. But if we could assert on the box that Roomba had a vacuum, then everyone would be satisfied. We could charge the price that makes our economics work. Customers would deem that cost reasonable and wouldn’t have to unlearn their vacuum bias.
But it felt wrong. If we must add a new system to the robot, we wanted it—like all the other systems—to earn its keep honestly, to do something useful. Through further discussion and calculation, we concluded that we could afford to devote about 10 percent of the robot’s 30-watt power budget to a vacuum. Conventional manual vacuums typically gorged themselves on 1,200 watts of power, but if we could develop a system that provided useful cleaning while consuming only 3 W (0.25 percent of 1,200) then we would feel good about adding it to the robot. It just didn’t seem very likely.
iRobot built two identical second-generation Roomba prototypes, named Kipper and Tipper, one of which is shown here. Joe Jones
I sometimes find that solving a problem is largely a matter of staring at the problem’s source. Gaze long and intently enough at something and, Waldo-like, the solution may reveal itself. So I took one of the team’s manual vacuums and stared at it. What exactly made it use as much power as it did? I knew the answer was partly marketing rather than reality. There was no simple, objective way to compare cleaning efficacy between vacuums. Lacking a results-based method, shoppers looked at inputs. For example, a vacuum with a 10-ampere motor sounds as though it should clean better than a vacuum with a 6-amp motor. But the bigger number might only mean that the manufacturer with the 10-amp claim was using a less-efficient motor—the 6-amp (720-W) motor might clean just as well.
But even when you corrected for the amperage arms race, a vacuum was still a power glutton. Staring at the vacuum cleaner, I began to see why. The vacuum fixed in my gaze that day used the standard configuration: a cylindrical beater brush occupied the center of a wide air inlet. A motor, attached by a belt, spun the brush. Another motor, deeper in the machine, drove a centrifugal blower that drew air in through the inlet. To keep dirt particles kicked up by the beater brush entrained in the airstream, the air needed to move fast. The combination of a wide inlet and high velocity meant that every second the vacuum motor had to gulp a huge volume of air.
Accelerating all that air took considerable power—the physics was inescapable. If we wanted a vacuum that sipped power rather than guzzled it, we had to move a much smaller volume of air per second. We could accomplish that—without reducing air velocity—if, instead of a wide inlet, we used a narrow one. To match the manual vacuum’s air velocity using only a 3-W motor, I computed that we would need a narrow opening indeed: only a millimeter or two.
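A rough numerical check of that computation: the kinetic power needed to move air through an opening of area A at speed v is P = ½ρAv³. The target air speed and inlet length below are our illustrative assumptions, not figures from the book:

```python
rho = 1.2      # kg/m^3, density of room-temperature air
P = 3.0        # watts available for the vacuum
v = 30.0       # m/s, assumed airstream speed of a household vacuum
length = 0.25  # m, an inlet spanning a Roomba-size brush (assumed)

area = 2 * P / (rho * v**3)  # solve P = 0.5 * rho * area * v**3 for area
gap = area / length
print(f"inlet gap = {gap * 1000:.1f} mm")  # ~0.7 mm: a millimeter or two
```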
That instantly disqualified Roomba from using the standard vacuum configuration—we could not put our bristle brush in the middle of the air inlet. That would require an inlet maybe 20 times too wide. We’d have to find another arrangement.
To test the narrow-inlet idea I turned to my favorite prototyping materials: cardboard and packing tape. Using these, I mocked up my idea. The inlet for my test vacuum was as long as Roomba’s brush but only about 2 millimeters wide. To provide suction I repurposed the blower from a defunct heat gun. Then I applied my jury-rigged contraption to crushed Cheerios and a variety of other dirt stand-ins. My novel vacuum was surprisingly effective at picking up small debris from a hard surface. Using an anemometer to measure the speed of the air rushing through my narrow inlet showed that it was, as desired, as fast as the airstream in a standard vacuum cleaner.
The next step was to somehow shoehorn our microvacuum into Roomba. To form the narrow inlet we used two parallel vanes of rubber. Small rubber bumps protruding from one vane spanned the inlet, preventing the vanes from collapsing together when vacuum was applied. We placed the air inlet parallel to and just behind the brush. The only plausible space for the vacuum impeller, motor, and filter (needed to separate the dirt from the flowing air) was to take over a corner of the dust cup. Drawing on his now well-honed skills of packing big things into tiny spaces where they had no business fitting, mechanical engineer Eliot Mack managed somehow to accomplish this. But we did get help from an outside consultant to design the intricate shape the impeller needed to move air efficiently.
In general, regular vacuums perform better on carpet than on hard floors. But Roomba inverted that relationship. Our vacuum operated like a squeegee, pulling dirt from tile, linoleum, and wooden floors, but it was less effective on carpet, where the sweeper mechanism did the heavy lifting.
iRobot released its first production version of the Roomba in September 2002. Joe Jones
Despite the team’s reluctance to add a vacuum and despite the unit’s low power, the vacuum genuinely improved Roomba’s cleaning ability. We could demonstrate this convincingly. First, we disabled Roomba’s new vacuum by disconnecting the power and then cleaned a hard floor relying only on the carpet-sweeper mechanism. If we then walked across the floor barefoot, we would feel a certain amount of grit underfoot. If we repeated the exercise with vacuum power on, the floor was pristine. Bare feet would detect no grit whatsoever.
The Roomba contributors present on the occasion of the 500,000th Roomba include Steve Hickey, Eliot Mack [front row], Paul Sandin, Chris Casey, Phil Mass, Joe Jones, and Jeff Ostaszewski [back row]. Joe Jones
Years later I learned that the focus group had a back story no one mentioned at the time. While the Roomba team had swallowed the carpet-sweeper concept hook, line, and sinker, Winston had not. He was uneasy with the notion that customers would be cleaning-mechanism agnostic—thinking instead that they simply wouldn’t believe our robot would clean their floors if it didn’t have a vacuum. He found at least indirect support for that position when he scoured marketing data from our earlier collaboration with SC Johnson.
But Winston, well-attuned to the engineering psyche, knew he couldn’t just declare, “Roomba has to have a vacuum.” We’d have pushed back, probably saying something like, “What your business-school-addled brain doesn’t appreciate is that it’s the carpet sweeper that makes the whole concept work!” Winston had to show us. That was a key purpose of the focus group, to demonstrate to the Roomba team that we had made a deal-breaking omission.
Dancing With Roomba is now available for preorder.
This is a sponsored article brought to you by MBZUAI.
If you’ve ever tried to guess how a cell will change shape after a drug or a gene edit, you know it’s part science, part art, and mostly expensive trial-and-error. Imaging thousands of conditions is slow; exploring millions is impossible.
A new paper in Nature Communications proposes a different route: simulate those cellular “after” images directly from molecular readouts, so you can preview the morphology before you pick up a pipette. The team calls their model MorphDiff, and it’s a diffusion model guided by the transcriptome, the pattern of genes turned up or down after a perturbation.
At a high level, the idea flips a familiar workflow. High-throughput imaging is a proven way to discover a compound's mechanism or spot bioactivity, but profiling every candidate drug or CRISPR target isn't feasible. MorphDiff learns from cases where both gene expression and cell morphology are known, then uses only the L1000 gene expression profile as a condition to generate realistic post-perturbation images, either from scratch or by transforming a control image into its perturbed counterpart. The claim: competitive fidelity on held-out (unseen) perturbations across large drug and genetic datasets, plus gains on mechanism-of-action (MOA) retrieval that rival results from real images.
This research, led by MBZUAI researchers, starts from a biological observation: gene expression ultimately drives the proteins and pathways that shape what a cell looks like under the microscope. The mapping isn't one-to-one, but there's enough shared signal for learning. Conditioning on the transcriptome offers a practical bonus too: there's simply far more publicly accessible L1000 data than paired morphology, making it easier to cover a wide swath of perturbation space. In other words, when a new compound arrives, you're likely to find its gene signature, which MorphDiff can then leverage.
Under the hood, MorphDiff blends two pieces. First, a Morphology Variational Autoencoder (MVAE) compresses five-channel microscope images into a compact latent space and learns to reconstruct them with high perceptual fidelity. Second, a Latent Diffusion Model learns to denoise samples in that latent space, steering each denoising step with the L1000 vector via attention.
Wang et al., Nature Communications (2025), CC BY 4.0
Diffusion is a good fit here: it’s intrinsically robust to noise, and the latent space variant is efficient enough to train while preserving image detail. The team implements both gene-to-image (G2I) generation (start from noise, condition on the transcriptome) and image-to-image (I2I) transformation (push a control image toward its perturbed state using the same transcriptomic condition). The latter requires no retraining thanks to an SDEdit-style procedure, which is handy when you want to explain changes relative to a control.
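Here's a minimal sketch of the two modes, assuming toy dimensions and a plain MLP denoiser in place of the paper's transformer; the stubs and names below are illustrative, not the authors' code:

```python
import torch, torch.nn as nn

LATENT, GENES, T = 64, 978, 1000   # L1000 measures 978 landmark genes

class Denoiser(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT + GENES + 1, 256), nn.SiLU(),
            nn.Linear(256, LATENT))
    def forward(self, z, t, gene_expr):
        # condition every denoising step on the transcriptome and timestep
        tt = t.float().unsqueeze(-1) / T
        return self.net(torch.cat([z, gene_expr, tt], dim=-1))

betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1 - betas, dim=0)

@torch.no_grad()
def denoise_from(z, t_start, model, gene_expr):
    """Shared reverse loop: G2I starts from pure noise (t_start = T-1);
    I2I (SDEdit-style) starts from a partially noised control latent."""
    for t in range(t_start, -1, -1):
        eps = model(z, torch.tensor([t]), gene_expr)
        a, ab = 1 - betas[t], alphas_bar[t]
        z = (z - betas[t] / torch.sqrt(1 - ab) * eps) / torch.sqrt(a)
        if t > 0:
            z = z + torch.sqrt(betas[t]) * torch.randn_like(z)
    return z

model = Denoiser()
gene_expr = torch.randn(1, GENES)   # stand-in L1000 profile

# G2I: generate a morphology latent from scratch, guided by the transcriptome
z_g2i = denoise_from(torch.randn(1, LATENT), T - 1, model, gene_expr)

# I2I: noise a control image's latent partway, then denoise under the
# perturbed transcriptome -- no retraining needed
z_control = torch.randn(1, LATENT)  # stub for an MVAE-encoded control image
t0 = int(0.6 * T)
z_noisy = (torch.sqrt(alphas_bar[t0]) * z_control
           + torch.sqrt(1 - alphas_bar[t0]) * torch.randn_like(z_control))
z_i2i = denoise_from(z_noisy, t0, model, gene_expr)
```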
It’s one thing to generate photogenic pictures; it’s another to generate biologically faithful ones. The paper leans into both: on the generative side, MorphDiff is benchmarked against GAN and diffusion baselines using standard metrics like FID, Inception Score, coverage, density, and a CLIP-based CMMD. Across JUMP (genetic) and CDRP/LINCS (drug) test splits, MorphDiff’s two modes typically land first and second, with significance tests run across multiple random seeds or independent control plates. The result is consistent: better fidelity and diversity, especially on out-of-distribution (OOD) perturbations, where the practical value lives.
The bigger picture is that generative AI has finally reached a fidelity level where in-silico microscopy can stand in for first-pass experiments.
More interesting for biologists, the authors step beyond image aesthetics to morphology features. They extract hundreds of CellProfiler features (textures, intensities, granularity, cross-channel correlations) and ask whether the generated distributions match the ground truth.
In side-by-side comparisons, MorphDiff’s feature clouds line up with real data more closely than baselines like IMPA. Statistical tests show that over 70 percent of generated feature distributions are indistinguishable from real ones, and feature-wise scatter plots show the model correctly captures differences from control on the most perturbed features. Crucially, the model also preserves correlation structure between gene expression and morphology features, with higher agreement to ground truth than prior methods, evidence that it’s modeling more than surface style.
Wang et al., Nature Communications (2025), CC BY 4.0
The drug results scale up that story to thousands of treatments. Using DeepProfiler embeddings as a compact morphology fingerprint, the team demonstrates that MorphDiff’s generated profiles are discriminative: classifiers trained on real embeddings also separate generated ones by perturbation, and pairwise distances between drug effects are preserved.
Wang et al., Nature Communications (2025), CC BY 4.0
That matters for the downstream task everyone cares about: MOA retrieval. Given a query profile, can you find reference drugs with the same mechanism? MorphDiff’s generated morphologies not only beat prior image-generation baselines but also outperform retrieval using gene expression alone, and they approach the accuracy you get using real images. In top-k retrieval experiments, the average improvement is 16.9 percent over the strongest baseline and 8.0 percent over transcriptome-only retrieval, with robustness shown across several values of k and metrics like mean average precision and folds-of-enrichment. That’s a strong signal that simulated morphology contains information complementary to chemical structure and transcriptomics, enough to help find look-alike mechanisms even when the molecules themselves look nothing alike.
MorphDiff’s generated morphologies not only beat prior image-generation baselines but also outperform retrieval using gene expression alone, and they approach the accuracy you get using real images.
The paper also lists some current limitations that hint at potential future improvements. Inference with diffusion remains relatively slow; the authors suggest plugging in newer samplers to speed generation. Time and concentration (two factors that biologists care about) aren’t explicitly encoded due to data constraints; the architecture could take them as additional conditions when matched datasets become available. And because MorphDiff depends on perturbed gene expression as input, it can’t conjure morphology for perturbations that lack transcriptome measurements; a natural extension is to chain with models that predict gene expression for unseen drugs (the paper cites GEARS as an example). Finally, generalization inevitably weakens as you stray far from the training distribution; larger, better-matched multimodal datasets will help, as will conditioning on more modalities such as structures, text descriptions, or chromatin accessibility.
What does this mean in practice? Imagine a screening team with a large L1000 library but a smaller imaging budget. MorphDiff becomes a phenotypic copilot: generate predicted morphologies for new compounds, cluster them by similarity to known mechanisms, and prioritize which to image for confirmation. Because the model also surfaces interpretable feature shifts, researchers can peek under the hood. Did ER texture and mitochondrial intensity move the way we’d expect for an EGFR inhibitor? Did two structurally unrelated molecules land in the same phenotypic neighborhood? Those are the kinds of hypotheses that accelerate mechanism hunting and repurposing.
The bigger picture is that generative AI has finally reached a fidelity level where in-silico microscopy can stand in for first-pass experiments. We’ve already seen text-to-image models explode in consumer domains; here, a transcriptome-to-morphology model shows that the same diffusion machinery can do scientifically useful work such as capturing subtle, multi-channel phenotypes and preserving the relationships that make those images more than eye candy. It won’t replace the microscope. But if it reduces the number of plates you have to run to find what matters, that’s time and money you can spend validating the hits that count.
Nokia Bell Labs has a lot to celebrate. The research giant marked its 100th anniversary in May at its venerable campus in Murray Hill—part of New Providence, N.J.—where major technological developments have occurred, such as the Bellmac-32 microprocessor and the satellite Earth station known as the Horn Antenna, which helped prove the big bang theory.
The company also held a groundbreaking ceremony on 4 September for its new headquarters in New Brunswick, N.J., about 32 kilometers south of Murray Hill and 10 km from IEEE’s Piscataway office.
Construction of the 10-story, 34,374-square-meter building is scheduled to be completed by the end of 2027. The Health and Life Science Exchange 2 building, known as HELIX 2, is the second of three planned edifices being constructed in the city’s new innovation district, which is designed to attract research labs, workspaces, and office suites for startups.
Attendees at the ceremony included Thierry E. Klein, the Bell Labs solutions research president, and Peter Vetter, the Bell Labs core research president. Both men are IEEE Fellows. New Jersey’s governor, Phil Murphy, was there too, as were New Brunswick Mayor James Cahill and other state and local officials.
“Today’s groundbreaking marks a new and exciting chapter in Bell Labs’ long history in New Jersey,” Klein said. “As we build and move into the HELIX, this continues our legacy of excellence, pioneering spirit, and commitment to breakthrough research on the East Coast. The location offers unique advantages that will accelerate our innovation capabilities and provide greater proximity to academic centers of excellence and fantastic new startups and ventures.”
The new location, he said, “will give access to a vibrant and urban environment that will help us attract the next generation of talent. Access to universities such as Princeton, Rutgers, the New Jersey Institute of Technology, and the Stevens Institute of Technology is incomparable. This is not just a move for the next two, three, four, or five years; this is going to be home for Bell Labs for a very, very long time.”
Nokia Bell Labs could have relocated its headquarters anywhere in the world, Murphy noted, but it chose to remain in New Jersey.
“Our illustrious history of innovation in New Jersey could be summarized in two words: Bell Labs,” the governor said. “For over a century, Bell Labs has transformed our state, our nation, and the world. This is literally an iconic and globally unique institution. We break ground and start to establish a new foundation for quantum physics, generative artificial intelligence, and optical communications. Through partnerships, joint ventures, and spinoffs, Nokia Bell Labs will facilitate new products and companies that will [continue to] drive the innovation economy in New Jersey.”
To ensure New Jersey would be at the forefront of innovation, the governor in 2018 announced his intent to establish 12 innovation hubs throughout the state as a way to attract entrepreneurs, startups, and early-stage companies. The first hub—the HELIX 1 building, adjacent to Nokia Bell Labs’ new headquarters—is expected to open next year and include Rutgers’s medical school and translational research institute.
New Jersey’s governor, Phil Murphy, at the podium addresses attendees at the groundbreaking ceremony. Nokia
The facilities will offer furnished offices and labs outfitted with scientific equipment, officials say. Tenants will include Hackensack Meridian Health and Robert Wood Johnson Barnabas Health.
New Brunswick is no stranger to innovators, Cahill noted. The Johnson & Johnson pharmaceutical company is headquartered in the city and got its start in a local wallpaper factory. The Johnson brothers and Thomas Edison often ate at a nearby drugstore lunch counter, where they discussed advancements in manufacturing, the mayor said. Edison’s laboratory was in Menlo Park. Cahill also said that Albert Einstein, who worked at Princeton University and lived in the town, was often spotted strolling the streets of New Brunswick, about 30 km away.
The new Nokia Bell Lab offices will cater to the needs of research scientists and specialists in focused areas, Klein said.
“It’s an efficient, modern, and low-carbon facility providing sustainable power, heating, and cooling capabilities,” he said. “Our researchers will have access to the best facility possible. That is our dream.”
This is not the first time Bell Labs has moved its headquarters, Vetter noted. The primary R&D activities were set up in New York City in 1925. They moved to Murray Hill in 1941. Some of the biggest innovations were developed there during the following decade, including the transistor and the cellular network.
“I want to think that our move will again be a catalyst for breakthrough innovations to happen in the decade after we move in and will be in a variety of areas such as 7G, AI, quantum computing, and quantum network security,” Vetter said.
“As we build and move into the HELIX, this continues our legacy of excellence, pioneering spirit, and commitment to breakthrough research on the East Coast.” —Thierry Klein
“We also need to make sure the research goes into the real world,” he said. “We like to say that if somebody has a problem in the real world and you solve it in the lab but you don’t make that leap of technology into the real world, the problem still exists.
“It’s not just research or breakthrough technologies,” he added. “It’s also creating the companies that will commercialize these technologies and lead the next century of innovation.”
Another celebratory event is scheduled for 21 October in Murray Hill. Several technologies developed there are to be designated as IEEE Milestones. They include three that earned Nobel Prizes: super-resolved microscopy, the charge-coupled device, and the fractional quantum Hall effect. IEEE Region 1 and the IEEE North Jersey Section sponsored the nominations.
Administered by the IEEE History Center and supported by donors, the Milestones program recognizes outstanding technical developments around the world.
Watch for The Institute’s article on the Nokia Bell Labs Milestone achievement ceremony in November.

Walk into a typical data center and one of the first things that jumps out at you is the noise—the low, buzzing sound of thousands of fans: fans next to individual computer chips, fans on the back panels of server racks, fans on the network switches. All of those fans are pushing hot air away from the temperature-sensitive computer chips and toward air-conditioning units.
But those fans, whirr as they might, are no longer cutting it. Over the past decade, the power density of the most advanced computer chips has exploded. In 2017, Nvidia came out with the V100 GPU, which draws 300 watts of power. Most of that power dissipates back out as heat. Three years later, in 2020, Nvidia’s A100 came out, drawing up to 400 W. The now-popular H100 arrived in 2022 and consumes up to 700 W. The newest Blackwell GPUs, revealed in 2024, consume up to 1,200 W.
“Road maps are looking at over 2,000 watts [per chip] over the next year or two,” says Drew Matter, president and CEO of the liquid-cooling company Mikros Technologies. “In fact, the industry is preparing for 5-kilowatt chips and above in the foreseeable future.”
This power explosion is driven by the obvious culprit—AI. And all the extra computations consuming all that added power from advanced chips are generating unmanageable amounts of heat.
“The average power density in a rack was around 8 kW,” says Josh Claman, CEO of the startup Accelsius. “For AI, that’s growing to 100 kW per rack. That’s an order of magnitude. It’s really AI adoption that’s creating this real urgency” to figure out a better way to cool data centers.
Specifically, the urgency is to move away from fans and toward some sort of liquid cooling. For example, water has roughly four times the specific heat of air and is about 800 times as dense, meaning it can absorb around 3,200 times as much heat as a comparable volume of air can. What’s more, the thermal conductivity of water is 23.5 times as high as that of air, meaning that heat transfers to water much more readily.
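A back-of-the-envelope check of those ratios, using approximate textbook properties at room temperature:

```python
water = {"rho": 998.0, "cp": 4186.0}  # density kg/m^3; specific heat J/(kg*K)
air   = {"rho": 1.2,   "cp": 1005.0}

cp_ratio  = water["cp"] / air["cp"]     # ~4x specific heat
rho_ratio = water["rho"] / air["rho"]   # ~800x density
vol_ratio = (water["rho"] * water["cp"]) / (air["rho"] * air["cp"])
print(f"{cp_ratio:.0f}x, {rho_ratio:.0f}x, {vol_ratio:.0f}x per unit volume")
# -> roughly 4x, 832x, 3465x; the article's ~3,200x figure is the same
#    estimate with slightly different property values
```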
“You can stick your hand into a hot oven and you won’t get burned. You stick your hand into a pot of boiling water and you can instantly get third-degree burns,” says Seamus Egan, general manager of immersion cooling at Airedale by Modine. “That’s because the liquid transfers heat much, much, much, much more quickly.”
The data-center industry by and large agrees that cooling chips with liquid is the future, at least for AI-focused data centers. “As AI has made racks denser and hotter, liquid cooling has become the de facto solution,” Karin Overstreet, president of Nortek Data Center Cooling, said via email.
But there are a number of ways to do liquid cooling, from the simple and straightforward to the complex and slightly weird.
At the simple end, there’s circulating chilled water through cold plates attached to the hottest chips. Then there’s circulating not water but a special dielectric fluid that boils inside the cold plate to take away the heat. A third approach is dunking the entire server into a fluid that keeps it cool. And, last and most splashy, is dunking the server into a boiling vat of liquid.
Which method will end up being the industry standard for the high-end AI factories of the future? At this point, it’s anyone’s guess. Here’s how the four methods work, and where they might find the most use.
The most technologically mature approach is to use water. Already, many AI data centers are employing such direct-to-chip liquid cooling for their hottest chips.
In this scheme, metal blocks, called cold plates, with channels in them for coolant to circulate, are placed directly on top of the chips. The cold plates match the size of the chips and go inside the server. The liquid is usually water, with some glycol added to prevent bacterial growth, stabilize the temperature, protect against freezing and corrosion, and increase the viscosity of the liquid. The glycol-water mixture is forced through the cold plate, whisking away heat right from the source.
Companies like Mikros Technologies are pursuing single-phase direct-to-chip liquid cooling. In this technique, a cold plate is placed on top of the hottest chips. Liquid is circulated through the cold plate, whisking away heat. Marvell Technology
One difficulty with this approach is that putting a cold plate on every single heat-producing component in a server is unfeasible. It only makes sense to put cold plates on the most energy-dense components—namely GPUs and some CPUs—leaving smaller components, like power supplies and memory units, to be cooled the old-fashioned way, with fans.
“The trend is moving toward a hybrid-cooling solution,” Overstreet says. “So liquid cooling does about 80 percent of the cooling for the server room or the data hall, and about 20 percent is the existing air-cooling solution.”
With GPU power densities showing no signs of leveling off, direct-to-chip water cooling is hitting a limit. You can, of course, increase the flow of water, but that will use more energy. Or you can operate the chips at a higher temperature, which will cut into their performance and in the long run degrade the chips. Fortunately, there’s a third option: to squeeze a bit more out of the physics of heat exchange.
The extra cooling power offered by physics comes from latent heat—that is, the energy it takes to change phase, in this case from liquid to gas. As the liquid boils off the GPU, it absorbs that extra latent heat as it turns into gas, without increasing temperature.
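To put numbers on that advantage, here's a quick per-kilogram comparison, using properties merely representative of a low-boiling dielectric coolant (our assumed values, not any vendor's datasheet):

```python
cp   = 1100.0    # J/(kg*K), liquid specific heat (assumed)
h_fg = 90_000.0  # J/kg, latent heat of vaporization (assumed)
dT   = 10.0      # K, allowable temperature rise in single-phase mode

q_sensible = cp * dT  # heat absorbed per kg without boiling
q_latent   = h_fg     # heat absorbed per kg by the phase change alone
print(q_sensible, q_latent, q_latent / q_sensible)
# -> 11,000 J/kg vs 90,000 J/kg: boiling moves ~8x the heat per kilogram,
#    with essentially no temperature rise in the fluid
```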
Companies like Accelsius are proposing two-phase direct-to-chip liquid cooling. Here, a cold plate is also placed on top of the hottest chips, and the liquid circulating through the cold plate boils directly atop the chip. Big Idea Productions
“It’s really boiling to cool,” says My Truong, chief technology officer of the startup ZutaCore, which makes two-phase direct-to-chip cooling systems.
Water boils at 100 °C (at atmospheric pressure), which is too high for proper chip operation. So you need a specially formulated fluid with a lower boiling point. ZutaCore’s chief evangelist, Shahar Belkin, explains that the fluid they use is sourced from chemical suppliers like Honeywell and Chemours, and boils at a temperature as low as 18 °C, which can be adjusted up or down by tweaking the pressure in the loop. In addition, the fluid is dielectric, meaning it is electrically insulating. So, unlike water, if some of the fluid spills onto the electronics, it won’t damage the costly equipment.
With water, the temperature increases drastically as it flows over the hot chips. That means the incoming water needs to be kept cold, and so the facility water requires cooling with chillers in most climates.
With boiling dielectric fluid, however, the fluid remains roughly the same temperature and simply changes phase into a vapor. That means both the liquid and the facility water can be kept at a higher temperature, resulting in significant energy savings.
When liquid boils on top of a hot chip, the chip is cooled not only through contact with the cooler liquid, but also through the latent heat it takes to induce a phase change. Accelsius
“Because of the really efficient boiling process that happens on the cold plate, we can accept facility water that’s 6 to 8 degrees warmer than [with] single phase,” says Lucas Beran, director of product marketing at Accelsius, another startup working on two-phase direct-to-chip liquid cooling.
The two-phase setup also requires lower liquid flow rates than the traditional single-phase water approach, so it uses less energy and runs less risk of damaging the equipment. The flow rate of two-phase cooling is about one-fifth that of single-phase cooling, Belkin says.
With single-phase water cooling, he says, “you’ll have to flow a gallon per minute into the cold plate” for the most advanced chips running at 2,000 W. “This means very, very high pressure, very, very high flow. It means that pumping will be expensive, and [the cooling system] will actually harm itself with the high flow.”
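Belkin's gallon-per-minute figure checks out against the standard single-phase heat balance, mass flow = P / (cp * dT); the 8 K temperature rise below is our assumption:

```python
P, cp, dT = 2000.0, 4186.0, 8.0  # watts; J/(kg*K) for water; kelvin rise

m_dot = P / (cp * dT)            # kg/s of water needed for a 2,000-W chip
liters_per_min = m_dot * 60      # water is ~1 kg per liter
gallons_per_min = liters_per_min / 3.785
print(f"{gallons_per_min:.2f} gal/min")  # ~0.95: about a gallon per minute
```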
Direct-to-chip liquid cooling offers much more cooling capacity than just blowing air, but it still relies on cold plates as intermediaries to do the cooling.
What if you could bypass the cold plate altogether and just dunk the entire computer server in coolant? Some companies are doing just that.
In this approach, the data center is arranged around immersion tanks rather than racks, each tank roughly the size of a refrigerator. The immersion tanks are filled with a dielectric fluid, usually an oil, which must be nonconductive and have strong thermal transfer properties, says Rachel Bielstein, global sales manager of immersion cooling at Baltimore Aircoil Co. The fluid also requires long-term stability and low environmental and fire risk.
Sustainable Metal Cloud is advocating for single-phase immersion cooling, in which an entire server is submerged in a vat of liquid to keep it cool. Firmus Technologies
With immersion cooling, everything gets cooled by the same fluid. After the oil has whisked away the heat, there are various approaches to cooling the immersion fluid. Baltimore Aircoil, for one, has designed a heat exchanger that circulates facility water through coils and plates inside the tank, Bielstein explains. “The heated water is then pumped to an outside cooler that releases the heat into the air, cools the water, and sends it back to the heat exchanger to absorb more heat from the tank. This process uses up to 51 percent less energy versus traditional designs.”
The team at Singapore-based Sustainable Metal Cloud (SMC), which builds immersion-cooling systems for data centers, has figured out the modifications that need to be made to servers to make them compatible with this cooling method. Beyond removing the built-in fans, the company swaps out the thermal-interface materials that connect chips to their heat sinks, as some of those materials degrade in the oil. Oliver Curtis, co-CEO of SMC and its sister company Firmus, told IEEE Spectrum the modifications they make are small but important to the functioning of SMC’s setup.
“We’ve created the perfect operating environment for a computer,” Curtis says. “There’s no dust, no movement, no vibration, because there’s no fans. And it’s a perfect operating temperature.”
There are some chips whose power density is still too high to be completely cooled by the slow-moving oil. In those cases, it’s necessary to add cold plates to increase the oil flow over them. “Single-phase immersion has already hit the limits” for cooling these advanced chips, says Egan of Airedale by Modine. Adding cold plates to immersion cooling, he says, “will definitely provide support for more advanced chip architectures and reduce the heat load on the single-phase dielectric fluid. The new challenge is that I now need two separate cooling-loop systems.”
If no one cooling method is enough on its own, how about putting all of them together, and dunking your data center into a vat of boiling oil?
Some companies already are.
“Two-phase immersion is probably the most moon-shot technology when it comes to data-center liquid cooling,” says Beran, of Accelsius.
But Brandon Marshall, global marketing manager of data-center liquid cooling at Chemours, says this is where the industry is headed. “We believe from the research that we’ve done that two-phase immersion is going to come up in a pretty reasonable way.”
At their lab in Newark, Del., the Chemours team is developing a specially formulated liquid for two-phase immersion cooling. In this approach, the server is dunked into a vat of liquid, and the liquid boils atop the hot components, cooling the system. Chemours
Marshall argues that a two-phase—also known as boiling—liquid has 10 to 100 times as much cooling capacity as a single-phase liquid, due to its latent heat. And while two-phase direct-to-chip cooling may work for the chips of today, it still leaves many components, such as memory modules and power supplies, to be air cooled. As CPUs and GPUs grow more powerful, these memory modules and power supplies will also require liquid cooling.
“That list of problems is not going anywhere,” Marshall says. “I think the immersion-cooling piece is going to continue to grow in interest as we move forward. People are going to get more comfortable with having a two-phase fluid inside of a rack just like they have [with] putting water in a rack through single-phase direct-to-chip technology.”
In their lab in Newark, Del., the Chemours team has placed several high-power servers in tanks filled with a proprietary, specially formulated fluid. The fluid is dielectric, so as not to cause shorts, and it’s also noncorrosive and designed to boil at the precise temperature at which the chips are to be held. The fluid boils directly on top of the hot chips. Then the vapor condenses on a cooled surface, either at the top or the back panel of the tank.
In their lab in Newark, Del., the Chemours team is testing their two-phase immersion-cooling fluid. In this approach, the whole server is dunked into a tank with dielectric liquid. The heat from the server boils the liquid, resulting in cooling. Chemours
That condenser is cooled with circulating facility water. “All we need is water sent directly to the tank that’s about 6 degrees lower than our boiling point, so about 43 °C,” Marshall says. “The fluid condenses [back to a liquid] right inside of the tank. The temperature required to condense our fluid can eliminate the need for chillers and other complex mechanical infrastructure in most cases.”
According to a recent case study by Chemours researchers, two-phase immersion cooling is more cost effective than single-phase immersion or single-phase direct-to-chip in most climates. For example, in Ashburn, Va., the 10-year total cost of ownership was estimated at US $436 million for a single-phase direct-to-chip setup, $491 million for a single-phase immersion setup, and $433 million for a two-phase immersion-cooling setup, mostly due to lower power requirements and a simplified mechanical system.
Critics argue that two-phase immersion makes it hard to maintain the equipment, especially since the oils are so specialized, expensive, and prone to evaporating. “When you’re in an immersion tank, and there’s dollar signs evaporating from it, that can make it a bit of a challenge to service,” Beran says.
However, Egan of Airedale by Modine says his company has developed a way to mostly avoid this issue with its immersion tanks, which are intended for edge applications. “Our EdgeBox is specifically designed to maintain the vapor layer lower down in the tank with a layer of air above it and closer to the tank lid. When the tank is opened (for a short maintenance period), the vapor layer does not ‘flow out’ of the tank,” Egan wrote via email. “The vapor is much heavier than air and therefore stays lower in the tank. The minimal vapor loss is offset by a buffer tank of fluid within the system.”
For the foreseeable future, people in the industry agree that the power demands of AI will keep going up, and the need for cooling along with them.
“Unless the floor falls out from under AI and everybody stops building these AI clusters, and stops building the hardware to perform training for large language models, we’re going to need to keep advancing cooling, and we’re going to need to solve the heat problem,” Marshall says.
Which cooling technology will dominate in the coming AI factories? It’s too soon to say. But the rapidly changing nature of data centers is opening up the field to a lot of inventiveness and innovation.
“There’s not only a great market for liquid cooling,” says Drew Matter, of Mikros Technologies, “but it’s also a fun engineering problem.”
The bacterium Geobacter sulfurreducens came from humble beginnings: it was first isolated from dirt in a ditch in Norman, Okla. But now, these remarkable microbes are the key to the first-ever artificial neurons that can directly interact with living cells.
The G. sulfurreducens microbes communicate with one another through tiny, protein-based wires that researchers at the University of Massachusetts Amherst harvested and used to make artificial neurons. These neurons can, for the first time, process information from living cells without an intermediary device amplifying or modulating the signals, the researchers say.
While some artificial neurons already exist, they require electronic amplification to sense the signals our bodies produce, explains Jun Yao, who works on bioelectronics and nanoelectronics at UMass Amherst. That amplification inflates both power usage and circuit complexity, undercutting the very efficiencies the brain is prized for.
The neuron created by Yao’s team can understand the body’s signals at their natural amplitude of around 0.1 volts. This is “highly novel,” says Bozhi Tian, a biophysicist who studies living bioelectronics at the University of Chicago and was not involved in the work. This work “bridges the long-standing gap between electronic and biological signaling” and demonstrates interaction between artificial neurons and living cells that Tian calls “unprecedented.”
Biological neurons are the fundamental building blocks of the brain. If external stimuli are strong enough, charge builds up in a neuron, triggering an action potential, a spike of voltage that travels down the neuron’s body to enable all types of bodily functions, including emotion and movement.
Scientists have been working to engineer a synthetic neuron for decades, chasing after the efficiency of the human brain, which has so far seemed to escape the abilities of electronics.
Yao’s group has designed new artificial neurons that mimic how biological neurons sense and react to electrical signals. They use sensors to monitor external biochemical changes and memristors—essentially resistors with memory—to emulate the action-potential process.
As voltage from the external biochemical events increases, ions accumulate and begin to form a filament across a gap in the memristor—which in this case was filled with protein nanowires. If there is enough voltage, the filament completely bridges the gap. Current shoots through the device and the filament then dissolves, dispersing the ions and stopping the current. The complete process mimics a neuron’s action potential.
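The spiking behavior can be caricatured as an integrate-and-fire loop. The growth, decay, and threshold numbers below are invented for illustration; the real device's dynamics come from ionic filament physics in the protein-nanowire gap:

```python
def simulate(voltages, growth=1.0, decay=0.02, threshold=1.0):
    """Toy integrate-and-fire stand-in for the memristor's spiking."""
    filament, spikes = 0.0, []
    for t, v in enumerate(voltages):
        # ions accumulate under voltage; the filament slowly relaxes
        filament = max(0.0, filament + growth * v - decay)
        if filament >= threshold:  # filament bridges the gap:
            spikes.append(t)       # current spike = "action potential"
            filament = 0.0         # filament dissolves, ions disperse
    return spikes

baseline = [0.01] * 100  # resting contractions: below threshold
dosed    = [0.10] * 100  # norepinephrine-boosted signal
print(simulate(baseline))  # [] -- never fires
print(simulate(dosed))     # fires roughly every 13 steps
```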
The team tested its artificial neurons by connecting them to cardiac tissue. The devices measured a baseline amount of cellular contraction, which did not produce enough signal to cause the artificial neuron to fire. Then the researchers took another measurement after the tissue was dosed with norepinephrine—a drug that increases how frequently cells contract. The artificial neurons triggered action potentials only during the medicated trial, proving that they can detect changes in living cells.
The experimental results were published 29 September in Nature Communications.
The group has G. sulfurreducens to thank for the breakthrough.
The microbes synthesize miniature cables, called protein nanowires, that they use for intraspecies communication. These cables are charge conductors that survive for long periods of time in the wild without decaying. (Remember, they evolved for Oklahoma ditches.) They’re extremely stable, even for device fabrication, Yao says.
To the engineers, the most notable property of the nanowires is how efficiently ions move along them. The nanowires offer a low-energy means of transferring charge between human cells and artificial neurons, thus avoiding the need for a separate amplifier or modulator. “And amazingly, the material is designed for this,” says Yao.
The group developed a method to shear the cables off bacterial bodies, purifying the material and suspending it in a solution. The team laid the mixture out and let the water evaporate, leaving a one-molecule-thin film made from the protein nanowire material.
This efficiency allows the artificial neuron to yield huge power savings. Yao’s group integrated the film into the memristor at the core of the neuron, lowering the energy barrier for the reaction that causes the memristor to respond to signals recognized by the sensor. With this innovation, the researchers say, the artificial neuron uses one-tenth the voltage and one-hundredth the power of other artificial neurons.
Chicago’s Tian thinks this “extremely impressive” energy efficiency is “essential for future low-power, implantable, and biointegrated computing systems.”
The power advantages make this synthetic-neuron design attractive for all kinds of applications, the researchers say.
Responsive wearable electronics, like prosthetics that adapt to stimuli from the body, could make use of these new artificial neurons, Tian says. Eventually, implantable systems that rely on the neurons could “learn like living tissues, advancing personalized medicine and brain-inspired computing” to “interpret physiological states, leading to biohybrid networks that merge electronics with living intelligence,” he says.
The artificial neurons could also be useful in electronics outside the biomedical field. Millions of them on a chip could replace transistors, completing the same tasks while decreasing power usage, Yao says. The fabrication process for the neurons does not involve high temperatures and utilizes the same kind of photolithography that silicon chip manufacturers do, he says.
Yao does, however, point out two possible bottlenecks producers could face when scaling up these artificial neurons for electronics. The first is obtaining more of the protein nanowires from G. sulfurreducens. His lab currently works for three days to generate only 100 micrograms of material—about the mass of one grain of table salt. And that amount can coat only a very small device, so Yao questions how this step in the process could scale up for production.
His other concern is how to achieve a uniform coating of the film at the scale of a silicon wafer. “If you wanted to make high-density small devices, the uniformity of film thickness actually is a critical parameter,” he explains. But the artificial neurons his group has developed are too small to do any meaningful uniformity testing for now.
Tian doesn’t expect artificial neurons to replace silicon transistors in conventional computing, but instead sees them as a parallel offering for “hybrid chips that merge biological adaptability with electronic precision,” he says.
In the far future, Yao hopes that such bioderived devices will also be appreciated for not contributing to e-waste. When a user no longer wants a device, they can simply dump the biological component in the surrounding environment, Yao says, because it won’t cause an environmental hazard.
“By using this kind of nature-derived, microbial material, we can create a greener technology that’s more sustainable for the world,” Yao says.
This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.
The rapid build-out of fast-charging stations for electric vehicles is testing the limits of today’s power grid. With individual chargers drawing 350 to 500 kilowatts (or more)—enough to make charging an EV take about as long as filling a gasoline or diesel vehicle’s tank—full charging sites can reach megawatt-scale demand. That’s enough to strain medium-voltage distribution networks—the segment of the grid that links high-voltage transmission lines with the low-voltage lines that serve end users in homes and businesses.
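The megawatt figure is easy to sanity-check. A back-of-the-envelope sketch in Python, where the charger count is an illustrative assumption and the per-charger draw is the low end of the range above:

```python
# Back-of-the-envelope site demand. The charger count is an
# illustrative assumption; the per-charger draw is the article's
# low-end figure.
chargers = 8
kw_per_charger = 350
site_demand_kw = chargers * kw_per_charger
print(f"Peak site demand: {site_demand_kw / 1000:.1f} MW")  # 2.8 MW
```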
DC fast-charging stations tend to be clustered in urban centers, along highways, and in fleet depots. Because the load is not spread evenly across the network, particular substations are overworked—even when overall grid capacity is rated to accommodate the load. Overcoming this problem as more charging stations, with ever greater power demands, come online requires power electronics that are not only compact and efficient but also capable of managing local storage and renewable inputs.
One of the most promising technologies for modernizing the grid so it can keep up with the demands of vehicle electrification and renewable generation is the solid-state transformer (SST). An SST performs the same basic function as a conventional transformer—stepping voltage up or down—but it does so through high-frequency conversion with silicon carbide or gallium nitride semiconductor switches and digital control, instead of passive magnetic coupling alone. That architecture allows an SST to control power flow dynamically.
For decades, charging infrastructure has relied on line-frequency transformers (LFTs)—massive assemblies of iron and copper that step down medium-voltage AC to low-voltage AC, with separate equipment on one side or the other converting alternating current to the direct current that EV batteries require. A typical LFT can contain as much as a few hundred kilograms of copper windings and a few tonnes of iron, and all that metal is costly and increasingly difficult to source. These systems are reliable but bulky and inefficient, especially when energy flows between local storage and vehicles. SSTs are much smaller and lighter than the LFTs they are designed to replace.
“Our solution achieves the same semiconductor device count as a single-port converter while providing multiple independently controlled DC outputs.” —Shashidhar Mathapati, Delta Electronics
But most multiport SSTs developed so far have been too complex or too costly (between five and 10 times the upfront cost of LFTs). That difference—plus the reliance of many designs on auxiliary battery banks, which add expense and reduce reliability—explains why solid-state’s obvious benefits have not yet driven a shift away from LFTs.
Surjakanta Mazumder, Saichand Kasicheyanula, Harisyam P.V., and Kaushik Basu hold their SST prototype in a lab. Photo: Harisyam P.V., Saichand Kasicheyanula, et al.
In a study published on 20 August in IEEE Transactions on Power Electronics, researchers at the Indian Institute of Science and Delta Electronics India, both in Bengaluru, proposed what’s called a cascaded H-bridge (CHB)–based multiport SST that eliminates those compromises. “Our solution achieves the same semiconductor device count as a single-port converter while providing multiple independently controlled DC outputs,” says Shashidhar Mathapati, the chief technology officer of Delta Electronics. “That means no additional battery storage, no extra semiconductor devices, and no extra medium-voltage insulation.”
The team built a 1.2-kilowatt laboratory prototype to validate the design, achieving 95.3 percent efficiency at rated load. They also modeled a full-scale 11-kilovolt, 400-kW system divided into two 200-kW ports.
At the heart of the system is a multiwinding transformer located on the low-voltage side of the converter. This configuration avoids the need for costly, bulky medium-voltage insulation and allows power balancing between ports without auxiliary batteries. “Previous CHB-based multiport designs needed multiple battery banks or capacitor networks to even out the load,” the authors wrote in their paper. “We’ve shown you can achieve the same result with a simpler, lighter, and more reliable transformer arrangement.”
A new modulation and control strategy maintains a unity power factor at the grid interface, meaning that none of the current coming from the grid goes to waste by oscillating back and forth between the source and the load without doing any work. The SST described by the authors also allows each DC port to operate independently. In practical terms, each vehicle connected to the charger would be able to receive the appropriate voltage and current, without affecting neighboring ports or disturbing the grid connection.
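To put the power-factor idea in concrete terms: for sinusoidal voltage and current, the power factor equals the cosine of the phase angle between them, which is also the ratio of real power (the part that does work) to apparent power. A quick illustration with made-up numbers:

```python
import math

# Power factor = cos(phase angle) for sinusoidal voltage and current,
# and equals real power / apparent power. Numbers are made up.
v_rms, i_rms = 230.0, 100.0              # volts, amps (illustrative)
apparent_va = v_rms * i_rms              # 23 kVA drawn from the grid

for phase_deg in (0, 30):                # 0 degrees = unity power factor
    pf = math.cos(math.radians(phase_deg))
    real_w = apparent_va * pf            # power that actually does work
    print(f"phase {phase_deg:2d} deg: PF = {pf:.2f}, "
          f"{real_w / 1000:.1f} kW useful of {apparent_va / 1000:.1f} kVA")
```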
Using silicon carbide switches connected in series, the system can handle medium-voltage inputs while maintaining high efficiency. An 11-kV grid connection would require just 12 cascaded modules per phase, which is roughly half as many as some modular multilevel converter designs. Fewer modules ultimately means lower cost, simpler control, and greater reliability.
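That module count squares with simple voltage-sharing arithmetic. A hedged sketch, in which the 1.2-kilovolt device rating is our illustrative assumption rather than a figure from the paper:

```python
import math

# Voltage-sharing arithmetic for a cascaded H-bridge string on an
# 11-kV (line-to-line) grid. The 1.2-kV device rating is an assumed,
# illustrative value, not necessarily the paper's design choice.
v_ll_rms = 11_000.0
v_phase_peak = v_ll_rms / math.sqrt(3) * math.sqrt(2)  # about 8.98 kV

modules_per_phase = 12                                 # from the paper
v_per_module = v_phase_peak / modules_per_phase        # about 750 V peak
print(f"Each module blocks about {v_per_module:.0f} V peak")
print(f"Utilization of an assumed 1.2-kV device: {v_per_module / 1200:.0%}")
```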
Although still at the laboratory stage, the design could enable a new generation of compact, cost-effective fast-charging hubs. By removing the need for intermediate battery storage—which adds cost, complexity, and maintenance—the proposed topology could extend the operational lifespan of EV charging stations.
According to the researchers, this converter is not just for EV charging. Any application that needs medium-voltage to multiport low-voltage conversion—such as data centers, renewable integration, or industrial DC grids—could benefit.
For utilities and charging providers facing megawatt-scale demand, this streamlined solid-state transformer could help make the EV revolution more grid-friendly, and faster for drivers waiting to charge.
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
Enjoy today’s videos!
There are two things that I really appreciate about this video on grippers from Boston Dynamics. First, building a gripper while keeping in mind that the robot will inevitably fall onto it, because I’m seeing lots of very delicate-looking five-fingered hands on humanoids, and I’m very skeptical of their ruggedness. And second, understanding that not only is a five-fingered hand very likely unnecessary for the vast majority of tasks, but also robot hands don’t have to be constrained by a human hand’s range of motion.
[ Boston Dynamics ]
Yes, okay, it’s a fancy-looking robot, but I’m still stuck on this: What useful, practical things can it reliably, cost-effectively, and safely DO?
[ Figure ]
Life on Earth has evolved in constant relation to gravity, yet we rarely consider how deeply it shapes living systems until we imagine a place without it. In MycoGravity, pink oyster mushrooms grow inside a custom-built bioreactor mounted on a KUKA robotic arm. Inspired by NASA’s random positioning machines, the robot’s programmed movement simulates altered gravity. Over time, sculptural mushrooms emerge, shaped by their environment without a stable gravitational direction.
[ MycoGravity ]
A new technological advancement gives robotic systems a natural sense of touch without extra skins or sensors. With advanced force sensing and deep learning, this robot can feel where you touch, recognize symbols, and even use virtual buttons—paving the way for more natural and flexible human-robot interaction.
[ Science Robotics ]
Thanks, Maged!
The creator of Mini Pupper introduces HeySanta, which can be yours for under $60.
I think humanoid robotics companies are starting to realize that they’re going to need to differentiate themselves somehow.
[ DEEP Robotics ]
Drone swarm performances—synchronized, expressive aerial displays set to music—have emerged as a captivating application of modern robotics. Yet designing smooth, safe choreographies remains a complex task requiring expert knowledge. We present SwarmGPT, a language-based choreographer that leverages the reasoning power of large language models (LLMs) to streamline drone performance design.
[ SwarmGPT ]
Dr. Mark Draelos, assistant professor of robotics and ophthalmology, received the National Institutes of Health (NIH) Director’s New Innovator Award for a project that seeks to improve how delicate microsurgeries are conducted by scaling up tissue to a size where surgeons could “walk across the retina” in virtual reality and operate on tissue as if “raking leaves.”
The intricate mechanisms of the most sophisticated laboratory on Mars are revealed in Episode 4 of the ExoMars Rosalind Franklin series, called “Sample Processing.”
There’s currently a marketplace for used industrial robots, and it makes me wonder what’s next. Used humanoids, anyone?
[ Kuka ]
On October 2, 2025, the 10th “Can We Build Baymax?” Workshop Part 10: What Can We Build Today? & BYOB (Bring Your Own Baymax) was held in Seoul, Korea. To celebrate the 10th anniversary, Baymax delivered a special message from his character designer, Jin Kim.
[ Baymax ]
I am only sharing this to declare that iRobot has gone off the deep end with their product names: Meet the “Roomba® Max 705 Combo Robot + AutoWash™ Dock.”
[ iRobot ]
Daniel Piedrahita, Navigation Team Lead, presents his team’s recent work rebuilding Digit’s navigation stack, including a significant upgrade to footstep path planning.
[ Agility Robotics ]
A bunch of videos from ICRA@40 have just been posted, and here are a few of my favorites.
[ ICRA@40 ]
This is a sponsored article brought to you by ADIPEC.
Returning to Abu Dhabi between 3 and 6 November, ADIPEC 2025 – the world’s largest energy event – aims to show how AI is turning ideas into real-world impact across the energy value chain and redrawing the global opportunity map. At the same time, it addresses how the world can deliver more energy – by adding secure supply, mobilizing investment, deploying intelligent solutions, and building resilient systems.
Across heavy industry and utilities, AI is cutting operating costs, lifting productivity, and improving energy efficiency, while turning data into real-time decisions that prevent failures and optimize output. Clean-energy and enabling-technology investment is set to reach US$2.2 trillion this year out of US$3.3 trillion going into the energy system, highlighting a decisive swing toward grids, renewables, storage, low-emissions fuels, efficiency and electrification.
Taking place in Abu Dhabi from 3 to 6 November 2025, ADIPEC will host 205,000+ visitors and 2,250+ exhibiting companies from the full spectrum of the global energy ecosystem, showcasing the latest breakthroughs shaping the future of energy. Under the theme “Energy. Intelligence. Impact.”, the event is held under the patronage of H.H. Sheikh Mohamed Bin Zayed Al Nahyan, President of the United Arab Emirates, and hosted by ADNOC.
With a conference program featuring 1,800+ speakers across 380 sessions and its most expansive exhibition ever, ADIPEC 2025 examines how scaling intelligent solutions like AI and building resilience can transform the energy sector to achieve inclusive global progress.
Two flagship programs anchor the engineering agenda at ADIPEC’s Technical Conferences: the SPE-organized Technical Conference and the Downstream Technical Conference.
Technical Conference attendees can expect upwards of 1,100 technical experts across more than 200 sessions focused on field-proven solutions, operational excellence, and AI-powered optimization. From cutting-edge innovations reshaping the hydrogen and nuclear sectors to AI-driven digital technologies embedded across operations, the Conference showcases practical applications and operational successes across the upstream, midstream, and downstream sectors.
Technical pioneers demonstrate solutions that transform operations, enhance grid reliability, and enable seamless coordination between energy and digital infrastructure through smart integration technologies. In 2025, submissions hit a record 7,086, with about 20% centered on AI and digital technologies, and contributions arriving from 93 countries.
Running in parallel to the engineering deep-dive, the ADIPEC Strategic Conference convenes ministers, CEOs, investors, and policymakers across 10 strategic programs to tackle geopolitics, investment, AI, and energy security with practical, long-term strategies. Over four days, a high-level delegation of 16,500+ participants will join a future-focused dialogue that links policy, capital, and technology decisions.
Core program areas include Global Strategy, Decarbonization, Finance and Investment, Natural Gas and LNG, Digitalization and AI, Emerging Economies, and Hydrogen, with additional themes spanning policy and regulation, downstream and chemicals, diversity and leadership, and maritime and logistics. The result is a system-level view that complements the Technical Conference by translating boardroom priorities into roadmaps that operators can execute.
ADIPEC’s agenda addresses a central balancing act – how to harness intelligence to decarbonize operations while ensuring the grid keeps up with compute.
Curated in partnership with ADNOC, the AI Zone is an immersive showcase of how intelligence – both human and artificial – is redefining energy systems, empowering people, and enabling bold, cross-sector disruption.
It brings together tech giants such as Microsoft, Honeywell, ABB, Hexagon, Cognite, DeepOcean, and SUPCON, with AI innovators such as Bechtel, Clean Connect AI, and Gecko Robotics. Fast-scaling startups, data analytics firms, system integrators, and academic labs will demonstrate AI-enhanced hardware, predictive analytics, and smart energy-management platforms.
The goal is practical: to make the full set of AI building blocks for energy clear – from sensors and data platforms to models and control systems – so operators can integrate them with confidence; to accelerate adoption and deployment; and to connect decision-makers with innovators and investors.
In addition to the AI Zone, dedicated digitalization and AI conference content explores secure automation, cost-reduction playbooks, and real-time platforms that can help cut downtime and emissions.
ADIPEC 2025 arrives at precisely the right moment. With its scale, technical depth and curated focus on AI, ADIPEC serves as a catalyst for the next chapter of energy progress.
Whether you lead operations, build digital platforms, allocate capital, or shape policy, ADIPEC 2025 is where conversation becomes coordination and ideas turn into action. Join the global community in Abu Dhabi to transform vision into reality and ambition into impact.
Without support from her family, Mini Thomas says, she would not have had a successful career in academia.
The IEEE senior member has held several leadership positions in India, including dean of engineering at the Delhi Technological University (formerly the Delhi College of Engineering) and (the first female) president of the National Institute of Technology, Tiruchirappalli. Today she is a professor of electrical engineering at Jamia Millia Islamia University in New Delhi, where she formerly was a dean.
Employer: Jamia Millia Islamia, in New Delhi
Title: Professor of electrical engineering
Member grade: Senior member
Alma maters: University of Kerala; the Indian Institute of Technology, Madras; the Indian Institute of Technology, New Delhi
Thomas, an expert in power systems and smart grids, is working to get more women into the power and energy industry.
She is an active IEEE volunteer, having worked with student branches and membership recruitment. As a member of the IEEE Technology for a Sustainable Climate Matrix Organization, she shares her knowledge about energy, climate-resilient infrastructure, and ozone-layer recovery.
“For a woman to succeed, she needs a lot of family support,” Thomas says, because many women’s careers are interrupted by caretaking and child-rearing responsibilities. She acknowledges that not all women have the same support system she has—which is part of the reason why she is dedicated to helping others succeed.
Thomas was born and raised in Kerala, India. Kerala students who excelled at school were expected to choose a career in either medicine or engineering, she says. Medicine wasn’t an option for her, she says, because she faints at the sight of blood. She was good at mathematics, though, so she chose to pursue engineering.
Although both her parents were teachers (her father taught chemistry; her mother was a language instructor), she wasn’t inspired to pursue a similar path until she was an undergraduate at the University of Kerala. Her extensive note-taking during class made her popular among her classmates, she says, and some would ask her to tutor them during exam season.
“My friends would come over to my home so I could explain the material to them using my notes,” she says. “Afterward, they would tell me that they were able to understand the subject much better than how the professor had explained it. That’s what inspired me to become a teacher.”
After earning her bachelor’s degree in electrical engineering in 1984, Thomas continued her education at the Indian Institute of Technology, Madras. Shortly after earning her master of technology degree in electrical engineering in 1986, she began her first teaching job at the National Institute of Technology, Calicut, also in Kerala.
That year was a whirlwind for Thomas, who got married, left her job, and moved to New Delhi, where her husband lived. Instead of searching for another teaching job, she decided to pursue a doctoral degree in the electrical engineering program at the Indian Institute of Technology, New Delhi.
“By the time I was 28, I had a Ph.D. in electrical engineering, which I earned in 1990,” she says. “I soon got a job at Delhi Technological University, the only other college in New Delhi that had an engineering school at that time, other than IIT. From there, I never looked back.”
She taught at the university for five years, then left in 1995 to join Jamia Millia Islamia. She eventually was promoted to lead the electrical engineering department.
During her 11 years there, she established labs to conduct research in supervisory control and data acquisition (SCADA) and substation automation, collaborating with industry on projects. In 2003 she created a curriculum for—and led the launch of—a master of technology program in electrical power system management, as well as a training program for industry professionals. For her work, she received a 2015 IEEE Educational Activities Board Meritorious Achievement Award.
In 2014 she founded the school’s Center for Innovation and Entrepreneurship to help startups turn their ideas into prototypes and launch businesses.
She received an offer she couldn’t refuse in 2016: become president of the National Institute of Technology, Tiruchirappalli.
“This was a great honor to become the first woman president of that institute,” she says. “I was the only woman among 90 presidents of all the institutions of national importance at that time.”
But, she says, as president, she didn’t have much time to teach, and after five years, she began to miss her time in the classroom. After her five-year term was completed, she returned to Jamia Millia Islamia in 2021 as engineering dean. Since then, she has led the launch of five programs: three undergraduate programs (in data science, electrical and computer engineering, and VLSI) and graduate programs in data science and environmental sciences.
This year she stepped down after completing her three-year term as dean and is focusing more on teaching.
She teaches at least one class each semester because, she says, she finds joy in “imparting and giving knowledge to young minds.”
Thomas mentors doctoral students, as well as professors who aspire to become deans or to hold other high-level positions.
In addition, she trains mid-career women in the power industry on the skills they need to get promoted to technical and senior management roles, through the South Asia WePOWER network’s South Asia Region (SAR) 100 professional development program. WePOWER is a coalition of nonprofit and government organizations that aim to increase the number of women working in the power and energy sectors through education. A 2020 World Bank study found that the percentage of women in technical roles in the industry in South Asia ranges from 0.1 percent to 21 percent.
The six-month-long program provides technical training, mentorship, and networking opportunities to 100 women from Bangladesh, Bhutan, India, the Maldives, Nepal, Pakistan, and Sri Lanka. Thomas is one of 40 experts who remotely teach topics such as transmission details, distribution, renewable energy, and the importance of women in leadership.
She also mentors women to give them the confidence and tools to reach leadership positions because “mentorship is what changed my career trajectory,” she says. When she first began teaching, she says, she was reluctant to take on high-level positions. But after participating in a six-day leadership training program at Jawaharlal Nehru University, hosted by the Government of India’s University Grants Commission, she felt confident in her ability to move up the career ladder.
“Many women take a break from their careers to raise their children, struggle to balance their personal and professional lives, or don’t have a support system,” she says. “I want to impart the lessons I learned from my experiences and the training I received. Whenever I get a chance, I get involved.”
Thomas joined IEEE in 1990 as a graduate student member and says she continues renewing her membership to stay up to date on emerging technologies, specifically SCADA systems.
“I learned everything about SCADA from a tutorial developed by the IEEE Power & Energy Society. There was no such material available at that time,” she says.
Years later, in 2015, Thomas cowrote Power System SCADA and Smart Grids with her friend John McDonald, whom she met through the organization. McDonald is an IEEE Life Fellow and the founder and CEO of JDM Associates in Duluth, Ga.
Thomas became an active volunteer for the Delhi Technological University’s student branch, where she helped organize technical talks and other events. When Thomas joined Jamia Millia Islamia, she revived the inactive student branch there and served as its counselor for 14 years.
During her 35 years with IEEE, she has served as chair of the Region 10 student activities committee and vice chair of membership development for IEEE Member and Geographic Activities. She was a member of the IEEE Educational Activities and the IEEE Publication Services and Products boards.
“Creating programs that benefit members makes me feel satisfied,” Thomas says. “Volunteering has also boosted my confidence.”
She is also a member of IEEE Spectrum’s editorial advisory board.
Not only does she attribute much of her professional growth to the organization, but she also has formed lifelong friendships through IEEE, she says. One friend is 2023 IEEE President Saifur Rahman, whom she met in 2000 when he spoke to the Jamia Millia Islamia student branch.
“Our friendship has grown so much that Saifur is like family,” she says.
When Rahman launched the IEEE Technology for a Sustainable Climate Matrix Organization in 2022, he asked Thomas to become a member. She helped create the IEEE Climate Change Collection on the IEEE Xplore Digital Library. The following year, she led the development of a climate change taxonomy; its 620 terms are included in the IEEE Thesaurus, which defines almost 12,500 engineering, technical, and scientific terms. Now she is working with a team to expand the taxonomy by defining hundreds more climate-change terms.
“You should always do what you enjoy. For me, that’s teaching and volunteering with IEEE,” she says. “I could just be a member, access the technical content, and be happy with just that, but I volunteer because I can do things that help others.”