dailynewspick.dpoisn.com : Collection of RSS News Feeds

Daily News Pick Main page


Scroll down to read. Use the menu above to choose a different RSS feed. Note: As of 9/2022, CNN has been leaning to the right, so it has been removed as the default feed. For now I’m going to use Google as the default.

The available RSS feeds are valid news sites that are all considered to be neutral. Nothing leaning too far left, nothing leaning too far right. Plus some fun stuff. Hope you find the page useful.

If you want to know more about how this works, please visit the Tutorial page to learn to make your own RSS reader.
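The Tutorial page walks through the details, but at its core an RSS reader just fetches a feed URL and parses the XML. Here is a minimal sketch in Python using only the standard library; the inline feed snippet and the `parse_feed` helper are illustrative, not this site's actual code.

```python
import xml.etree.ElementTree as ET

# A tiny hypothetical RSS 2.0 snippet. A real reader would fetch this
# text from a feed URL (e.g. with urllib.request) instead.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Feed</title>
    <item><title>First story</title><link>https://example.com/1</link></item>
    <item><title>Second story</title><link>https://example.com/2</link></item>
  </channel>
</rss>"""

def parse_feed(xml_text):
    """Return a list of (title, link) pairs, one per <item> in the feed."""
    root = ET.fromstring(xml_text)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

for title, link in parse_feed(SAMPLE_FEED):
    print(title, "->", link)
```

Rendering those (title, link) pairs as HTML anchors is essentially all a page like this one does.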

Current feed - IEEE Spectrum

Are There Enough Engineers for the AI Boom?


The AI data center construction boom continues unabated, with the demand for power in the United States potentially reaching 106 gigawatts by 2035, according to a December report from research and analysis company BloombergNEF. That’s a 36 percent jump from the company’s previous outlook, published just seven months earlier. But severe constraints in power availability, materials, and equipment—and, perhaps most significantly, a shortage of engineers, technicians, and skilled craftsmen—could turn the data center boom into a bust.

The power grid engineering workforce is currently shrinking, and data center operators are also hurting for trained electrical engineers. Laura Laltrello, the chief operating officer for Applied Digital, says demand has accelerated in recent months for civil, mechanical, and electrical engineers, as well as for construction management and oversight positions. (Applied Digital is a data center developer and operator that is building two data center campuses near Harwood, North Dakota, that will require 1.4 GW of power when completed.) The growing demand for skilled workers has forced her company to cast a wider recruiting net.

“As we anticipate a shortage of traditional engineering talent, we are sourcing from diverse industries,” says Laltrello. “We are finding experts who understand power and cooling from sectors like nuclear energy, the military, and aerospace. Expertise doesn’t have to come from a data center background.”

Growing Demand for Data Center Engineers

For every engineer needed to design, specify, build, inspect, commission, or run a new AI data center, dozens of other positions are in short supply. According to the Association for Computer Operations and Management’s (AFCOM) State of the Data Center Report 2025, 58 percent of data center managers identified multi-skilled data center operators as the top area of growth, while 50 percent signaled increasing demand for data center engineers. Security specialists are also a critical need.

The U.S. Bureau of Labor Statistics projects a need for almost 400,000 more construction workers by 2033. By far the biggest needs are for power-infrastructure workers, electricians, and plumbing and HVAC specialists, along with roughly 17,500 additional electrical and electronics engineers. These categories map directly to the skills required to design, build, commission, and operate modern data centers.

“The challenge is not simply the absolute number of workers available, but the timing and intensity of demand,” says Bill Kleyman, author of the AFCOM report and the CEO of AI infrastructure firm Apolo. “Data centers are expanding at the same time that utilities, manufacturing, renewables, grid infrastructure, and construction are all competing for the same skilled labor pool and AI is amplifying this pressure.”

Data center developers like Lancium and construction firms like Crusoe face enormous pressure to build faster, bigger, and more power-dense facilities. For example, they’re developing the Stargate project in Abilene, Texas, for Oracle and OpenAI. The project’s first two buildings went live in October 2025, with another six scheduled for completion by the middle of 2026. Once completed, the entire AI data center campus will require 1.2 GW of power.

Michael McNamara, the CEO of Lancium, says his company can currently build enough AI data center infrastructure in one year to require one gigawatt of power. Big tech firms, he says, want this raised to 1 GW a quarter and eventually to 1 GW a month or faster.

That kind of ramp-up in construction pace calls for tens of thousands more engineers. The shortage of engineering talent is paralleled by persistent shortages of data center operations and facility management professionals, electrical and mechanical technicians, high-voltage and power-systems engineers, skilled HVAC technicians with experience in high-density or liquid cooling, and construction specialists familiar with complex mechanical, electrical, and plumbing (MEP) integration, says Matthew Hawkins, the director of education for Uptime Institute.

“Demand for each category is rising significantly faster than supply,” says Hawkins.

Technical colleges and applied education programs are among the most effective engines for workforce growth in the data center industry. They focus on hands-on skills, facilities operations, power and cooling systems, and real-world job readiness. With so many new data centers being built in Texas, workforce programs are popping up all over that state. One example is the SMU Lyle School of Engineering’s Master of Science in Datacenter Systems Engineering (MS DSE) in Dallas. The program blends electrical engineering, IT, facilities management, business continuity, and cybersecurity. There is also a 12-week AI data center technician program at Dallas College and a similar program at Texas State Technical College near Waco.

“Technical colleges are driving the charge in bringing new talent to an industry undergoing exponential growth with an almost infinite appetite for skilled workers,” says Wendy Schuchart, an association manager at AFCOM.

Vendors and industry associations are actively addressing the talent gap too. Microsoft’s Datacenter Academy is a public-private partnership involving community colleges in regions where Microsoft operates data center facilities. Google supports local nonprofits and colleges offering training in IT and data center operations, and Amazon offers data center apprenticeships.

The Siemens Educates America program has surpassed 32,000 apprenticeships across 32 states, 36 labs, and 72 partner industry labor organizations. The company has committed to training 200,000 electricians and electrical manufacturing workers by 2030. Similarly, the National Electrical Contractors Association (NECA) operates the Electrical Training Alliance; the Society of Manufacturing Engineers (SME) offers ToolingU-SME, aimed at expanding the manufacturing workforce; and Uptime Institute Education programs look to accelerate the readiness of technicians and operators.

“Every university we speak with is thinking about this challenge and shifting its curriculum to prepare students for the future of digital infrastructure,” said Laltrello. “The best way to predict the future is to build it.”


IEEE Medal of Honor Recipient Is Nvidia’s CEO Jensen Huang


Jensen Huang, cofounder and CEO of Nvidia, is the 2026 IEEE Medal of Honor recipient. The IEEE honorary member is being recognized for his “leadership in the development of graphics processing units and their application to scientific computing and artificial intelligence.” The news was announced on 6 January by IEEE’s president and CEO, Mary Ellen Randall, at the Consumer Electronics Show in Las Vegas.

Huang helped found Nvidia in 1993. Under his direction, the company introduced the programmable GPU six years later. The device sparked extraordinary advancements that have transformed fields including artificial intelligence, computing, and medicine—influencing how technology improves society.

“[Receiving the IEEE Medal of Honor] is an incredible honor,” Huang said at the CES event. “I thank [IEEE] for this incredible award that I receive on behalf of all the great employees at Nvidia.”

With a US $2 million prize, the award underscores IEEE’s commitment to celebrating visionaries who drive the future of technology for the benefit of humanity.

“The IEEE Medal of Honor is the pinnacle of recognition and our most prestigious award,” Randall said at the event. “[Jensen] Huang’s leadership and technical vision have unlocked a new era of innovation.

“His vision and subsequent development of [Nvidia’s first GPU hardware] is emblematic of the [award].”

Huang’s impact on technology

Huang’s impact has been acknowledged beyond the realm of engineering. He was named one of the “Architects of AI,” a group of eight tech leaders collectively named Time magazine’s 2025 Person of the Year. He was also featured on a 2021 cover of Time, was named the world’s top-performing CEO for 2019 by Harvard Business Review, and was Fortune’s 2017 Businessperson of the Year.

He is also an IEEE–Eta Kappa Nu eminent member.

This year’s IEEE Medal of Honor, along with other high-profile IEEE awards, will be presented during the IEEE Honors Ceremony, to be held in April in New York City. To follow news and updates on IEEE’s most prestigious awards, follow IEEE Awards on LinkedIn.


Video Friday: Bipedal Robot Stops Itself From Falling


Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2026: 1–5 June 2026, VIENNA

Enjoy today’s videos!

This is one of the best things I have ever seen.

[ Kinetic Intelligent Machine LAB ]

After years of aggressive testing and pushing the envelope with U.S. Army and Marine Corps partners, the Robotic Autonomy in Complex Environments with Resiliency (RACER) program approaches its conclusion. But the impact of RACER will reverberate far beyond the program’s official end date, leaving a legacy of robust autonomous capabilities ready to transform military operations and inspire a new wave of private-sector investment.

[ DARPA ]

Best-looking humanoid yet.

[ Kawasaki ]

COSA (Cognitive OS of Agents) is a physical-world-native Agentic OS that unifies high-level cognition with whole-body motion control, enabling humanoid robots to think while acting in real environments. Powered by COSA, Oli becomes the first humanoid agent with both advanced loco-manipulation and high-level autonomous cognition.

[ LimX Dynamics ]

Thanks, Jinyan!

The 1X World Model’s latest update is a paradigm shift in robot learning: NEO now uses a physics-grounded video model (World Model) to turn any voice or text prompt into fully autonomous action, even for completely novel tasks and objects NEO has never seen before. By leveraging internet-scale video data fine-tuned on real robot experience, NEO can visualize future actions, predict outcomes, and execute them with humanlike understanding—all without prior examples. This marks the critical first step in NEO being able to collect data on its own to master new tasks all by itself.

[ 1X ]

I’m impressed by the human who was mocapped for this.

[ PNDbotics ]

We introduce the GuideData Dataset, a collection of qualitative data focusing on the interactions between guide dog trainers, blind and low-vision (BLV) individuals, and their guide dogs. The dataset captures a variety of real-world scenarios, including navigating sidewalks, climbing stairs, crossing streets, and avoiding obstacles. By providing this comprehensive dataset, the project aims to advance research in areas such as assistive technologies, robotics, and human-robot interaction, ultimately improving the mobility and safety of visually impaired people.

[ DARoS Lab ]

Fourier’s desktop Care-Bot prototype is gaining much attention at CES 2026! Even though it’s still in the prototype stage, we couldn’t wait to share these adorable and fun interaction features with you.

[ Fourier ]

Volcanic gas measurements are critical for understanding eruptive activity. However, harsh terrain, hazardous conditions, and logistical constraints make near-surface data collection extremely challenging. In this work, we present an autonomous legged robotic system for volcanic gas monitoring, validated through real-world deployments on Mount Etna. The system combines a quadruped robot equipped with a quadrupole mass spectrometer and a modular autonomy stack, enabling long-distance missions in rough volcanic terrain.

[ ETH Zurich RSL ]

Humanoid and Siemens successfully completed a POC testing humanoid robots in industrial logistics. This is the first step in the broader partnership between the companies. The POC focused on a tote-to-conveyor destacking task within Siemens’s logistics process. HMND 01 autonomously picked, transported, and placed totes in a live production environment during a two-week on-site deployment at the Siemens Electronics Factory in Erlangen.

[ Humanoid ]

Four Growers, a category leader in intelligent ag-tech platforms, developed the GR-200 robotic harvesting platform, powered by FANUC’s LR Mate robot. The system combines AI-driven vision and motion planning to identify and harvest ripe tomatoes with quick precision.

[ FANUC ]

Columbia Engineers built a robot that, for the first time, is able to learn facial lip motions for tasks such as speech and singing. In a new study published in Science Robotics, the researchers demonstrate how their robot used its abilities to articulate words in a variety of languages, and even sing a song out of its AI-generated debut album, “hello world_.” The robot acquired this ability through observational learning rather than via rules. It first learned how to use its 26 facial motors by watching its own reflection in the mirror before learning to imitate human lip motion by watching hours of YouTube videos.

[ Columbia ]

Roborock has some odd ideas about what lawns are like.

[ Roborock ]

DEEP Robotics’ quadruped robots demonstrate coordinated multi-module operations under unified command, tackling complex and dynamic firefighting scenarios with agility and precision.

[ DEEP Robotics ]

Unlike statically stable wheeled platforms, humanoids are dynamically stable, requiring continuous active control to maintain balance and prevent falls. This inherent instability presents a critical challenge for functional safety, particularly in collaborative settings. This presentation will introduce Synapticon’s POSITRON platform, a comprehensive solution engineered to address these safety-critical demands. We will explore how its integrated hardware and software enable robust, certifiable safety functions that meet the highest industrial standards, providing key insights into making the next generation of humanoid robots safe for real-world deployment.

[ Synapticon ]

The University of California, Berkeley, is world-famous for its AI developments, and one big name behind them is Ken Goldberg. Longtime professor and lifelong artist, Ken is all about deep learning while staying true to “good old-fashioned engineering.” Hear Ken talk about his approach to vision and touch for robotic surgeries and how robots will evolve across the board.

[ Waymo ]


How to Gain Footing in AI as the Ground Keeps Shifting


The newly released Preparing for a Career as an AI Developer guide from the IEEE Computer Society argues that the most durable path to artificial intelligence jobs is not defined by mastering any single tool or model. Instead, it depends on cultivating a balanced mix of technical fundamentals and human-centered skills—capabilities that machines are unlikely to replace.

AI is reshaping the job market faster than most academic programs and employers can adapt, according to the guide. AI systems can now analyze cybercrime, predict equipment failures in manufacturing, and generate text, code, and images at scale, leading to mass layoffs across much of the technology sector. The upheaval has unsettled recent graduates about to enter the job market, as well as early-career professionals.

Yet the demand for AI expertise remains strong in the banking, health care, retail, and pharmaceutical industries, whose businesses are racing to deploy generative AI tools to improve productivity and decision-making—and keep up with the competition.

The uneven landscape leaves many observers confused about how best to prepare for a career in a field that is redefining itself. Addressing that uncertainty is the focus of the guide, which was written by San Murugesan and Rodica Neamtu.

Murugesan, an IEEE life senior member, is an adjunct professor at Western Sydney University, in Penrith, Australia. Neamtu, an IEEE member, is a professor of teaching and a data-mining researcher at Worcester Polytechnic Institute, in Massachusetts.

The downloadable 24-page PDF outlines what aspiring AI professionals should focus on, which skills are most likely to remain valuable amid rapid automation, and why AI careers are increasingly less about building algorithms in isolation and more about applying them thoughtfully across domains.

The guide emphasizes adaptability as the defining requirement for entering the field, rather than fluency in any particular programming language or framework.

Why AI careers are being redefined

AI systems perform tasks that once required human intelligence. What distinguishes the current situation from when AI was introduced, the authors say, is not just improved performance but also expanded scope. Pattern recognition, reasoning, optimization, and machine learning are now used across nearly every sector of the economy.

Although automation is expected to reduce the number of human roles in production, office support, customer service, and related fields, demand is rising for people who can design, guide, and integrate AI systems, Murugesan and Neamtu write.

The guide cites surveys of executives about AI’s effect on their hiring and retention strategies, including those conducted by McKinsey & Co. The reports show staffing shortages in advanced IT and data analytics, as well as applicants’ insufficient critical thinking and creativity: skills that are difficult to automate.

The authors frame the mismatch as an opportunity for graduates and early-career professionals to prepare strategically, focusing on capabilities that are likely to remain relevant as AI tools evolve.

Developing complementary skills

The strategic approach aligns with advice from Neil Thompson, director of FutureTech research at MIT’s Computer Science and Artificial Intelligence Laboratory, who was quoted in the guide. Thompson encourages workers to develop skills that complement AI rather than compete with it.

“When we see rapid technological progress like this, workers should focus on skills and occupations that apply AI to adjacent domains,” he says. “Applying AI in science, in particular, has enormous potential right now and the capacity to unlock significant benefits for humanity.”

The technical foundation still matters

Adaptability, the guide stresses, is not a substitute for technical rigor. A viable AI career still requires a strong foundation in data, machine learning, and computing infrastructure.

Core knowledge areas include data structures, large-scale data handling, and tools for data manipulation and analysis, the authors say.

Foundational machine-learning concepts, such as supervised and unsupervised learning, neural networks, and reinforcement learning, remain essential, they say.
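As a concrete illustration of the supervised-learning idea, the sketch below trains a perceptron, the simplest neural network, on labeled examples of the logical AND function. It is a toy example in plain Python; real projects would reach for a framework such as scikit-learn or PyTorch.

```python
# Minimal supervised learning: a perceptron adjusts its weights using
# the error between each label and its own prediction.

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of ((x1, x2), label) pairs with labels 0 or 1."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in samples:
            pred = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = y - pred              # the supervised signal: label minus prediction
            w1 += lr * err * x1
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

def predict(weights, x1, x2):
    w1, w2, b = weights
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

# Labeled training data for logical AND: output 1 only for (1, 1).
AND_DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights = train_perceptron(AND_DATA)
print([predict(weights, x1, x2) for (x1, x2), _ in AND_DATA])  # [0, 0, 0, 1]
```

The same loop — predict, compare with the label, nudge the parameters — is the pattern that scales up, via gradient descent, to the neural networks the guide describes.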

Because many AI systems depend on scalable computing, familiarity with cloud platforms such as Amazon Web Services, Google Cloud, and Microsoft Azure is important, according to the guide’s authors.

Mathematics underpins all of it. Linear algebra, calculus, and probability form the basis of most AI algorithms.

Python has emerged as the dominant language for building and experimenting with models.

From algorithms to frameworks

The authors highlight the value of hands-on experience with widely used development frameworks. PyTorch, developed by Meta AI, is commonly used for prototyping deep-learning models in academia and industry. Scikit-learn provides open-source tools for classification, regression, and clustering within the Python ecosystem.


TensorFlow, a software library for machine learning and AI created by Google, supports building and deploying machine-learning systems at multiple levels of abstraction.

The authors emphasize that such tools matter less as résumé keywords than as vehicles for understanding how models behave within real-world constraints.

Soft skills as career insurance

Because AI projects often involve ambiguous problems and interdisciplinary teams, soft skills play an increasingly central role, according to the guide. Critical thinking and problem-solving are essential, but communication has become more important, the authors say. Many AI professionals must explain system behavior, limitations, and risks to nontechnical stakeholders.

Neamtu describes communication and contextual thinking as timeless skills that grow more valuable as automation expands, particularly when paired with leadership, resilience, and a commitment to continuous learning.

Murugesan says technical depth must be matched with the ability to collaborate and adapt.

Experience before titles

The guide recommends that students work on research projects in college and pursue paid internships, which offer hands-on exposure to real AI workflows and job roles.

Building an AI project portfolio is critical. Open-source repositories on platforms such as GitHub allow newcomers to demonstrate applied skills including work on AI security, bias mitigation, and deepfake detection. The guide recommends staying current by reading academic papers, taking courses, and attending conferences. Doing so can help students get a solid grounding in the basics and remain relevant in a fast-moving field after beginning their career.

Entry-level roles that open doors

Common starting positions include AI research assistant, junior machine-learning engineer, and junior data analyst. The roles typically combine support tasks with opportunities to help develop models, preprocess data, and communicate results through reports and visualizations, according to the guide.

Each starting point reinforces the guide’s central message: AI careers are built through collaboration and learning, not merely through isolated technical brilliance.

Curiosity as a long-term strategy

Murugesan urges aspiring AI professionals to embrace continuous learning, seek mentors, and treat mistakes as part of the learning process.

“Always be curious,” he says. “Learn from failure. Mistakes and setbacks are part of the journey. Embrace them and persist.”

Neamtu echoes that perspective, noting that AI is likely to affect nearly every profession, making passion for one’s work and compatibility with organizational aims more important than chasing the latest technology trend.

In a field where today’s tools can become obsolete in a year, the guide’s core argument is simple: The most future-proof AI career is built not on what you know now but on how well you continue learning when things change.


Lessons for Your Career From 2025


This article is crossposted from IEEE Spectrum’s careers newsletter. Sign up now to get insider tips, expert advice, and practical strategies, written in partnership with tech career development company Taro and delivered to your inbox for free!

As we enter 2026, we’re taking a look back at the top pieces of advice we shared in the Career Alert newsletter last year. Whether you’re looking for a new job or seeking strategies to excel in your current role, read on for the three most popular recommendations that could help advance your career.

1. Getting Past Procrastination

Across a decade working at hypergrowth tech companies like Meta and Pinterest, I constantly struggled with procrastination. I’d be assigned an important project, but I simply couldn’t get myself to start it. The source of my distraction varied—I would constantly check my email, read random documentation, or even scroll through my social feeds. But the result was the same: I felt a deep sense of dread that I was not making progress on the things that mattered.

At the end of the day, time is the only resource that matters. With every minute, you are making a decision about how to spend your life. Most of the ways people spend their time are ineffective. Especially in the tech world, our tasks and tools are constantly changing, so we must be able to adapt. What separates the best engineers from the rest of the pack is that they create systems that allow them to be consistently productive.

Here’s the core idea that changed my perspective on productivity: Action leads to motivation, not the other way around. You should not check your email or scroll Instagram while you wait for motivation to “hit you.” Instead, just start doing something, anything, that makes progress toward your goal, and you’ll find that motivation will follow.…

Read the full newsletter here.

2. Improve Your Chances of Landing That Job Interview

One of my close friends is a hiring manager at Google. She recently posted about an open position on her team and was immediately overwhelmed with applications. We’re talking about thousands of applicants within days.

What surprised me most, however, was the horrendous quality of the average submission. Most applicants were obviously unqualified or had concocted entirely fake profiles. The use of generative AI to automatically fill out (and, in some cases, even submit) applications is harmful to everyone; employers are unable to filter through the noise, and legitimate candidates have a harder time getting noticed—much less advancing to an interview.

So how can job seekers stand out among the deluge of candidates? When there are hundreds or thousands of applicants, the best way to distinguish yourself is by leveraging your network.

With AI, anyone with a computer can trivially apply to thousands of jobs. On the other hand, people are restricted by Dunbar’s number—the idea that humans can maintain stable social relationships with only about 150 people. Being one of those 150 people is harder, but it also carries more weight than a soulless job application.

Read the full newsletter here.

3. Learning to Code Still Matters in the Age of AI

Cursor, the AI-native code editor, recently reported that it writes nearly a billion lines of code daily. That’s one billion lines of production-grade code accepted by users every single day. If we generously assume that a strong engineer writes a thousand lines of code in a day, Cursor is doing the equivalent work of a million developers. (For context, while working at Pinterest and Meta, I’d typically write fewer than 100 lines of code per day.)
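That back-of-the-envelope comparison is easy to reproduce; the figures below are the approximations from the text, not measured data.

```python
# Back-of-envelope check of the figures in the text (all approximate).
cursor_lines_per_day = 1_000_000_000    # ~1 billion accepted lines per day
strong_engineer_lines_per_day = 1_000   # the text's generous assumption

equivalent_developers = cursor_lines_per_day // strong_engineer_lines_per_day
print(equivalent_developers)            # 1000000 -- "a million developers"

worldwide_developers = 25_000_000       # ~25 million, per the text
share = equivalent_developers / worldwide_developers
print(f"{share:.0%}")                   # 4% of the global developer workforce
```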

There are only about 25 million software developers worldwide! Naively, it appears that Cursor is making a meaningful percentage of coders obsolete.

This raises the question: Is it even worth learning to code anymore?

The answer is a resounding “yes.” The above fear-based analysis of Cursor misses several important points.…

Read the full newsletter here.

—Rahul


The Ultimate 3D Integration Would Cook Future GPUs


Peek inside the package of AMD’s or Nvidia’s most advanced AI products, and you’ll find a familiar arrangement: The GPU is flanked on two sides by high-bandwidth memory (HBM), the most advanced memory chips available. These memory chips are placed as close as possible to the computing chips they serve in order to cut down on the biggest bottleneck in AI computing—the energy and delay in getting billions of bits per second from memory into logic. But what if you could bring computing and memory even closer together by stacking the HBM on top of the GPU?

Imec recently explored this scenario using advanced thermal simulations, and the answer—delivered in December at the 2025 IEEE International Electron Device Meeting (IEDM)—was a bit grim. 3D stacking doubles the operating temperature inside the GPU, rendering it inoperable. But the team, led by Imec’s James Myers, didn’t just give up. They identified several engineering optimizations that ultimately could whittle down the temperature difference to nearly zero.

2.5D and 3D Advanced Packaging

Imec started with a thermal simulation of a GPU and four HBM dies as you’d find them today, inside what’s called a 2.5D package. That is, both the GPU and the HBM sit on a substrate called an interposer, with minimal distance between them. The two types of chips are linked by thousands of micrometer-scale copper interconnects built into the interposer’s surface. In this configuration, the model GPU consumes 414 watts and reaches a peak temperature of just under 70 °C—typical for a processor. The memory chips consume an additional 40 W or so and get somewhat less hot. The heat is removed from the top of the package by the kind of liquid cooling that’s become common in new AI data centers.

“While this approach is currently used, it does not scale well for the future—especially as it blocks two sides of the GPU, limiting future GPU-to-GPU connections inside the package,” Yukai Chen, a senior researcher at Imec, told engineers at IEDM. In contrast, “the 3D approach leads to higher bandwidth, lower latency.… The most important improvement is the package footprint.”

Unfortunately, as Chen and his colleagues found, the most straightforward version of stacking, in which the HBM chips simply sit on top of the GPU with a block of blank silicon filling a gap at the center, sent temperatures in the GPU soaring to a scorching 140 °C—well past a typical GPU’s 80 °C limit.

System Technology Co-optimization

The Imec team set about trying a number of technology and system optimizations aimed at lowering the temperature. The first thing they tried was throwing out a layer of silicon that was now redundant. To understand why, you have to first get a grip on what HBM really is.

This form of memory is a stack of as many as 12 high-density DRAM dies. Each has been thinned down to tens of micrometers and is shot through with vertical connections called through-silicon vias. These thinned dies are stacked one atop another and connected by tiny balls of solder, and this stack of memory is vertically connected to another piece of silicon, called the base die. The base die is a logic chip designed to multiplex the data—to pack it into the limited number of wires that can fit across the millimeter-scale gap to the GPU.

But with the HBM now on top of the GPU, there’s no need for such a data pump. Bits can flow directly into the processor without regard for how many wires happen to fit along the side of the chip. Of course, this change means moving the memory control circuits from the base die into the GPU and therefore changing the processor’s floorplan, says Myers. But there should be ample room, he suggests, because the GPU will no longer need the circuits used to demultiplex incoming memory data.

Cutting out this middleman of memory cooled things down by only a little less than 4 °C. But, importantly, it should massively boost the bandwidth between the memory and the processor, which is important for another optimization the team tried—slowing down the GPU.

That might seem contrary to the whole purpose of better AI computing, but in this case, it’s an advantage. Large language models are what are called “memory-bound” problems. That is, memory bandwidth is the main limiting factor. But Myers’s team estimated 3D stacking HBM on the GPU would boost bandwidth fourfold. With that added headroom, even slowing the GPU’s clock by 50 percent still leads to a performance win, while cooling everything down by more than 20 °C. In practice, the processor might not need to be slowed down quite that much. Increasing the clock frequency to 70 percent led to a GPU that was only 1.7 °C warmer, Myers says.
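The memory-bound argument can be illustrated with a toy roofline model, in which achieved throughput is capped by either peak compute or by how fast memory bandwidth can feed the compute units. The absolute numbers below are invented for illustration; only the 4x-bandwidth and 50-percent-clock ratios come from the article.

```python
# Toy roofline model: achieved throughput is the lesser of what the
# compute units can do and what memory bandwidth can deliver.
# All units are arbitrary; only the ratios matter.

def throughput(compute_peak, mem_bandwidth, intensity):
    """intensity: operations performed per byte fetched from memory."""
    return min(compute_peak, mem_bandwidth * intensity)

# Baseline 2.5D package: an LLM-style workload with low arithmetic
# intensity is memory-bound, so bandwidth (10 * 2 = 20) is the ceiling.
base = throughput(compute_peak=100, mem_bandwidth=10, intensity=2)

# 3D stack: 4x the bandwidth, but clock (and compute peak) halved.
# The workload stops being memory-bound, and throughput still rises.
stacked = throughput(compute_peak=50, mem_bandwidth=40, intensity=2)

print(base, stacked)  # the slower, stacked GPU wins on this workload
```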

Optimized HBM

Another big drop in temperature came from making the HBM stack and the area around it more conductive. That included merging the four stacks into two wider stacks, thereby eliminating a heat-trapping region; thinning out the top—usually thicker—die of the stack; and filling in more of the space around the HBM with blank pieces of silicon to conduct more heat.

With all of that, the stack now operated at about 88 °C. One final optimization brought things back to near 70 °C. Generally, some 95 percent of a chip’s heat is removed from the top of the package, where in this case water carries the heat away. But adding similar cooling to the underside as well drove the stacked chips down a final 17 °C.
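The benefit of two-sided cooling can be sketched as thermal resistances in parallel: heat splits between the two paths, so the temperature rise for a given power drops. The resistance and power values below are assumptions for illustration, not Imec’s measurements:

```python
# Two cooling paths act like thermal resistances (kelvin per watt) in
# parallel. The values here are illustrative assumptions, not Imec's
# measured numbers.

def parallel(*resistances):
    return 1.0 / sum(1.0 / r for r in resistances)

power_w = 500.0   # hypothetical heat load from the stacked chips
r_top = 0.06      # K/W through the package top, where water cooling sits
r_bottom = 0.30   # K/W through the weaker path under the package

rise_top_only = power_w * r_top                       # 30 K above the coolant
rise_two_sided = power_w * parallel(r_top, r_bottom)  # 25 K above the coolant
```

Even a comparatively weak second path lowers the combined resistance, which is why cooling the underside bought the team another sizable drop.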

Although the research presented at IEDM shows it might be possible, HBM-on-GPU isn’t necessarily the best choice, Myers says. “We are simulating other system configurations to help build confidence that this is or isn’t the best choice,” he says. “GPU-on-HBM is of interest to some in industry,” because it puts the GPU closer to the cooling. But it would likely be a more complex design, because the GPU’s power and data would have to flow vertically through the HBM to reach it.


Stretchable OLEDs Just Got a Huge Upgrade


Wearable displays are catching up with phones and smartwatches. For decades, engineers have sought OLEDs that can bend, twist, and stretch while maintaining bright and stable light. These displays could be integrated into a new class of devices—woven into clothing fabric, for example, to show real-time information, like a runner’s speed or heart rate, without breaking or dimming.

But engineers have always encountered a trade-off: The more you stretch these materials, the dimmer they become. Now, a group co-led by Yury Gogotsi, a materials scientist at Drexel University in Philadelphia, has found a way around the problem by employing a special class of materials called MXenes—which Gogotsi helped discover—that maintain brightness while being significantly stretched.

The team developed an OLED that can stretch to twice its original size while keeping a steady glow. It also converts electricity into light more efficiently than any stretchable OLED before it, reaching a record 17 percent external quantum efficiency—a measure of how efficiently a device turns electricity into light.

The “Perfect Replacement”

Gogotsi didn’t have much experience with OLEDs when, about five years ago, he teamed up with Tae-Woo Lee, a materials scientist at Seoul National University, to develop better flexible OLEDs, driven by the ever-increasing use of flexible electronics like foldable phones.

Traditionally, the displays are built from multiple stacked layers. At the base, a cathode supplies electrons that enter the adjacent organic layers, which are designed to conduct this charge efficiently. As the electrons move through these layers, they meet positive charge injected by an indium tin oxide (ITO) film. The moment these charges combine, the organic material releases energy as light, creating the illuminated pixels that make up the image. The entire structure is sealed with a glass layer on top.

The ITO film—adhered to the glass—serves as the anode, allowing current to pass through the organic layers without blocking the generated light. “But it’s brittle. It’s ceramic, basically,” so it works well for flat surfaces, but can’t be bent, Gogotsi explains. Flexible OLEDs have been attempted many times before, but earlier efforts failed to overcome both the flexibility and brightness limitations at once.

Gogotsi’s students started by creating a transparent, conducting film out of a MXene, a type of ultrathin and flexible material with metal-like conductivity. The material is unique in its inherent ability to bend because it’s made from many two-dimensional sheets that can slide relative to each other without breaking. The film—only 10 nanometers thick—“appeared to be this perfect replacement for ITO,” Gogotsi says.

Through experimentation, Gogotsi and Lee’s shared team found that a mix of the MXene and silver nanowire would actually stretch the most while maintaining stability. “We were able to double the size, achieving 200 percent stretching without losing performance,” Gogotsi says.

A bi-axially twisted exciplex-assisted phosphorescent film deposited on a small stretchable substrate. The new material can also be twisted without losing its glow. Source image: Huanyu Zhou, Hyun-Wook Kim, et al.

And the new MXene film was not only more flexible than ITO but also increased brightness by almost an order of magnitude by making the contact between the topmost light-emitting organic layer and the film more efficient.

Unlike ITO, the surface of MXenes can be chemically adjusted to make it easier for electrons to move from the electrode into the light-emitting layer. This more efficient electron flow significantly increases the brightness of the display, as evidenced by an external quantum efficiency of 17 percent, which the group claims is a record for stretchable OLEDs.

“Achieving those numbers in intrinsically stretchable OLEDs under substantial stretching is quite significant,” says Seunghyup Yoo, who runs the Integrated Organic Electronics Laboratory at South Korea’s KAIST. An external quantum efficiency of 20 percent is an important benchmark for this kind of device because it is the upper limit of efficiency dictated by the physical properties of light generation, Yoo explains.
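External quantum efficiency is simply the ratio of photons leaving the device to electrons injected into it. A minimal sketch of the arithmetic, in which the photon rate is a made-up illustration and only the elementary-charge constant is a real physical value:

```python
# External quantum efficiency (EQE) = photons emitted out of the device
# per electron injected. The photon rate below is a made-up illustration;
# only the elementary-charge constant is a real physical value.

E_CHARGE = 1.602176634e-19  # coulombs per electron

def external_quantum_efficiency(photons_per_second, current_amps):
    electrons_per_second = current_amps / E_CHARGE
    return photons_per_second / electrons_per_second

# A hypothetical device drawing 1 milliampere while emitting ~1.06e15 photons/s:
eqe = external_quantum_efficiency(1.06e15, 1e-3)
print(f"EQE = {eqe:.0%}")  # about 17 percent, the record figure cited above
```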

To increase illumination, the researchers went beyond working with MXene. Lee’s group developed two additional organic layers to add into the middle of their OLED—one that directs positive charges to the light-emitting layer, ensuring that electricity is used more efficiently, and one that recycles wasted energy that would normally be lost, boosting overall brightness.

Together, the MXene layer and two organic layers allow for a notably bright and stable OLED, even when stretched. Gogotsi thinks the resulting OLED is “very successful” because it combines both brightness and stretchability, while, historically, engineers have only been able to achieve one or the other.

“The performance that they are able to achieve in this work is an important advancement,” says Sihong Wang, a molecular engineer at the University of Chicago who also develops stretchable OLED materials. Wang also notes that the 200 percent stretchability that Gogotsi’s group attained is more than robust enough for wearable applications.

Wearables and Health Care

A stretchable OLED that maintains its brightness has uses in many settings, including industrial environments, robotics, wearable clothing and devices, and communications, Gogotsi says, although he’s most excited about its adoption in health-monitoring devices. He sees a near future in which displays for diagnostics and treatment become embedded in clothing or “epidermal electronics,” comparing their function to smartwatches.

Before these displays can come to market, however, stability issues inherent to all stretchable OLEDs need to be solved, Wang says. Current materials cannot sustain light emission long enough for practical, customer-ready devices.

Finding housings to protect them is also a problem. “You need a stretchable encapsulation material that can protect the central device without allowing oxygen and moisture to permeate,” Wang says.

Yoo agrees: He says it’s a tough problem to solve because the best protective layers are rigid and not very stretchable. He notes yet another challenge in the way of commercialization, which is “developing stretchable displays that do not exhibit image distortion.”

Regardless, Gogotsi is excited about the future of stretchable OLEDs. “We started with computers occupying the room, then moved to our desktops, then to laptops, then we got smartphones and iPads, but still we carry stuff with us,” he says. “Flexible displays can be on the sleeve of your jacket. They can be rolled into a tube or folded and put in your pocket. They can be everywhere.”


Meet the Two Members Petitioning to Be President-Elect Candidates


The IEEE Board of Directors has received petition intentions from IEEE Senior Member Gerardo Barbosa and IEEE Life Senior Member Timothy T. Lee as candidates for 2027 IEEE president-elect. The petitioners are listed in alphabetical order; the ordering indicates no preference.

The winner of this year’s election will serve as IEEE president in 2028. For more information about the petitioners and Board-nominated candidates, visit ieee.org/pe27. You can sign their petitions at ieee.org/petition.

Signatures for IEEE president-elect candidate petitions are due 10 April at 12:00 p.m. EST/16:00 UTC.

IEEE Senior Member Gerardo Barbosa

Gerardo Barbosa smiling in a suit jacket. Gerardo Sosa

Barbosa is an expert in information technology management and technology commercialization, with a career spanning innovation, entrepreneurship, and an international perspective. He began his career designing radio-frequency identification systems for real-time asset tracking and inventory management. In 2014 he founded CLOUDCOM, a software company that develops enterprise software to improve businesses’ billing and logistics operations, and serves as its CEO.

Barbosa’s IEEE journey began in 2009 at the IEEE Monterrey (Mexico) Section, where he served as chair and treasurer. He led grassroots initiatives with students and young professionals. His leadership positions in IEEE Region 9 include technical activities chair and treasurer.

As the 2019–2020 vice chair and 2021–2023 treasurer of IEEE Member and Geographic Activities, Barbosa became recognized as a trusted, data-driven, and collaborative leader.

He has been a member of the IEEE Finance Committee since 2021 and is now its chair due to his role as IEEE treasurer on the IEEE Board of Directors. He is deeply committed to the responsible stewardship of IEEE’s global resources, ensuring long-term financial sustainability in service of IEEE’s mission.

IEEE Life Senior Member Timothy T. Lee

Timothy Lee smiling. Nikon/CES

Lee is a Technical Fellow at Boeing in Southern California with expertise in microelectronics and advanced 2.5D and 3D chip packaging for AI workloads, 5G, and SATCOM systems for aerospace platforms. He leads R&D projects, including work funded by the Defense Advanced Research Projects Agency. He previously held leadership roles at MACOM Technology Solutions and COMSAT Laboratories.

Lee was the 2015 president of the IEEE Microwave Theory and Technology Society. He has served on the IEEE Board of Directors as 2025 IEEE-USA president and 2021–2022 IEEE Region 6 director. He has also been a member of several IEEE committees including Future Directions, Industry Engagement, and New Initiatives.

His vision is to deliver societal value through trust, integrity, ownership, innovation, and customer focus, while strengthening the IEEE member experience. Lee also wants to help prepare members for AI-enabled work in the future.

He earned bachelor’s and master’s degrees in electrical engineering from MIT and a master’s degree in systems architecting and engineering from the University of Southern California in Los Angeles.


This $5,200 Conductive Suit Could Make Power-Line Work Safer


In 2018, Justin Kropp was working on a transmission circuit in Southern California when disaster struck. Grid operators had earlier shut down the 115-kilovolt circuit, but six high-voltage lines that shared the corridor were still operating, and some of their power snuck onto the deenergized wires he was working on. That rogue current shot to the ground through Kropp’s body and his elevated work platform, killing the 32-year-old father of two.

“It went in both of his hands and came out his stomach, where he was leaning against the platform rail,” says Justin’s father, Barry Kropp, who is himself a retired line worker. “Justin got hung up on the wire. When they finally got him on the ground, it was too late.”

Budapest-based Electrostatics makes conductive suits that protect line workers from unexpected current. Electrostatics

Justin’s accident was caused by induction: a hazard that occurs when an electric or magnetic field causes current to flow through equipment whose intended power supply has been cut off. Safety practices seek to prevent such induction shocks by grounding all conductive objects in a work zone, giving electricity alternative paths. But accidents happen. In Justin’s case, his platform unexpectedly swung into the line before it could be grounded.

Conductive Suits Protect Line Workers

Adding a layer of defense against induction injuries is the motivation behind Budapest-based Electrostatics’ specialized conductive jumpsuits, which are designed to protect against burns, cardiac fibrillation, and other ills. “If my boy had been wearing one, I know he’d be alive today,” says the elder Kropp, who purchased a line-worker safety training business after Justin’s death. The Mesa, Ariz.–based company, Electrical Safety Consulting International (ESCI), now distributes those suits.

The lower half of a man’s legs clothed in pants and socks that are connected by straps. Conductive socks that are connected to the trousers complete the protective suit. BME HVL

Eduardo Ramirez Bettoni, one of the developers of the suits, dug into induction risk after a series of major accidents in the United States in 2017 and 2018, including Justin Kropp’s. At the time, he was principal engineer for transmission and substation standards at Minneapolis-based Xcel Energy. In talking to Xcel line workers and fellow safety engineers, he sensed that the accident cluster might be the tip of an iceberg. And when he and two industry colleagues scoured data from the U.S. Bureau of Labor Statistics, they found 81 induction accidents and 60 deaths between 1985 and 2021, which they documented in a 2022 report.

“Unfortunately, it is really common. I would say there are hundreds of induction contacts every year in the United States alone,” says Ramirez Bettoni, who is now technical director of R&D for the Houston-based power-distribution equipment firm Powell Industries. He bets that such “contacts”—exposures to dangerous levels of induction—are increasing as grid operators boost grid capacity by squeezing additional circuits into transmission corridors.


Electrostatics’ suits are an enhancement of the standard protective gear that line workers wear when their tasks involve working close to or even touching energized lines, known as “bare-hands” work. Both are interwoven with conductive materials such as stainless steel threads, which form a Faraday cage that shields the wearer against the lines’ electric fields. But the standard suits have limited capacity to shunt current because usually they don’t need to. Like a bird on a wire, bare-hands workers are electrically floating, rather than grounded, so current largely bypasses them via the line itself.

Induction Safety Suit Design

Backed by a US $250,000 investment from Xcel in 2019, Electrostatics adapted its standard suits by adding low-resistance conductive straps that pass current around a worker’s body. “When I’m touching a conductor with one hand and the other hand is grounded, the current will flow through the straps to get out,” says Bálint Németh, Electrostatics’ CEO and director of the High Voltage Laboratory at Budapest University of Technology and Economics.

A man holds one side of his jacket open, revealing conductive straps inside. A strapping system links all the elements of the suit—the jacket, trousers, gloves, and socks—and guides current through a controlled path outside the body. BME HVL

The company began selling the suits in 2023, and they have since been adopted by over a dozen transmission operators in the United States and Europe, as well as in other countries, including Canada, Indonesia, and Turkey. They cost about $5,200 in the United States.

Electrostatics’ suits had to meet a crucial design threshold: keeping body exposure below the 6-milliampere “let-go” threshold, beyond which a shocked worker’s muscles lock up and they cannot remove themselves from the circuit. “If you lose control of your muscles, you’re going to hold onto the conductor until you pass out or possibly die,” says Ramirez Bettoni.

The gear, which includes the suit, gloves, and socks, protects against 100 amperes for 10 seconds and 50 A for 30 seconds. It also has insulation to protect against heat created by high current and flame retardants to protect against electric arcs.
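Ratings quoted as a current sustained for a duration can be compared through the adiabatic I²t heating figure of merit, since short bursts of current heat a conductor before much heat can escape. A minimal sketch of that comparison, not the suit’s actual qualification test:

```python
# Short bursts of current heat a conductor roughly adiabatically, so a
# rating of "X amps for Y seconds" can be compared via I^2 * t. This is a
# sketch of that figure of merit, not the suit's actual test spec.

def i2t(current_amps, seconds):
    return current_amps**2 * seconds

rating_10s = i2t(100, 10)  # 100,000 A^2*s for the 100 A / 10 s rating
rating_30s = i2t(50, 30)   #  75,000 A^2*s for the 50 A / 30 s rating

def within_rating(current_amps, seconds, limit_a2s):
    """True if a hypothetical fault stays under a given I^2*t limit."""
    return i2t(current_amps, seconds) <= limit_a2s

ok = within_rating(70, 15, rating_10s)  # 73,500 A^2*s -> True
```

Note that the 30-second rating corresponds to a lower I²t than the 10-second one, consistent with longer exposures demanding extra margin beyond the purely adiabatic picture.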

Kropp, Németh, and Ramirez Bettoni are hoping that developing industry standards for induction safety gear, including ones published in October, will broaden their use. Meanwhile, the recently enacted Justin Kropp Safety Act in California, for which the elder Kropp lobbied, mandates automated defibrillators at power-line work sites.

This article was updated on 14 January 2026.


Researchers Beam Power From a Moving Airplane


On a blustery November day, a Cessna turboprop flew over Pennsylvania at 5,000 meters, in crosswinds of up to 70 knots—nearly as fast as the little plane was flying. But the bumpy conditions didn’t thwart its mission: to wirelessly beam power down to receivers on the ground as it flew by.

The test flight marked the first time power has been beamed from a moving aircraft. It was conducted by the Ashburn, Va.-based startup Overview Energy, which emerged from stealth mode in December by announcing the feat.

But the greater purpose of the flight was to demonstrate the feasibility of a much grander ambition: to beam power from space to Earth. Overview plans to launch satellites into geosynchronous orbit (GEO) to collect unfiltered solar energy where the sun never sets and then beam this abundance back to humanity. The solar energy would be transferred as near-infrared waves and received by existing solar panels on the ground.

The far-flung strategy, known as space-based solar power, has become the subject of both daydreaming and serious research over the past decade. Caltech’s Space Solar Power Project launched a demonstration mission in 2023 that transferred power in space using microwaves. And terrestrial power beaming is coming along too. The U.S. Defense Advanced Research Projects Agency (DARPA) in July 2025 set a new record for wirelessly transmitting power: 800 watts over 8.6 kilometers for 30 seconds using a laser beam.

But until November, no one had actively beamed power from a moving platform to a ground receiver.

Wireless Power Beaming Goes Airborne

Overview’s test transferred only a sprinkling of power, but it did it with the same components and techniques that the company plans to send to space. “Not only is it the first optical power beaming from a moving platform at any substantial range or power,” says Overview CEO Marc Berte, “but also it’s the first time anyone’s really done a power beaming thing where it’s all of the functional pieces all working together. It’s the same methodology and function that we will take to space and scale up in the long term.”

The approach was compelling enough that power-beaming expert Paul Jaffe left his job as a program manager at DARPA to join the company as head of systems engineering. Prior to DARPA, Jaffe spent three decades with the U.S. Naval Research Laboratory.

“This actually sounds like it could work.” –Paul Jaffe

It was hearing Berte explain Overview’s plan at a conference that helped to convince Jaffe to take a chance on the startup. “This actually sounds like it could work,” Jaffe remembers thinking at the time. “It really seems like it gets around a lot of the showstoppers for a lot of the other concepts. I remember coming home and telling my wife that I almost felt like the problem had been solved. So I thought: Should [I] do something which is almost unheard of—to leave in the middle of being a DARPA program manager—to try to do something else?”

For Jaffe, the most compelling reason was Overview’s solution to space-based solar’s power-density problem. A beam with low power density is safer because it’s not blasting too much concentrated energy onto a single spot on the Earth’s surface, but it’s less efficient for the task of delivering usable solar energy. A higher-density beam does the job better, but then the researchers must engineer some way to maintain safety.
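The trade-off can be made concrete with a little geometry. Assuming a uniform beam held to roughly everyday sunlight intensity (about 1 kilowatt per square meter), a hypothetical safety target rather than Overview’s stated design value, delivering utility-scale power implies a very large receiver:

```python
# Beam power density = delivered power / footprint area. Holding density
# near everyday sunlight (~1 kW/m^2) keeps the beam benign, but delivering
# utility-scale power then implies a very large spot on the ground.
# Values are illustrative, not Overview's design numbers.

import math

SUNLIGHT_W_PER_M2 = 1000.0

def footprint_diameter_m(power_w, density_w_per_m2):
    area_m2 = power_w / density_w_per_m2
    return 2.0 * math.sqrt(area_m2 / math.pi)

# A gigawatt at sunlight-level intensity needs a spot over a kilometer wide:
d = footprint_diameter_m(1e9, SUNLIGHT_W_PER_M2)  # ~1,128 meters
```

Shrinking that footprint means raising the density above sunlight levels, which is exactly where the safety engineering problem begins.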

Startup Overview Energy demonstrates how space-based solar power could be beamed to Earth from satellites. Overview Energy

Space-Based Solar Power Makes Waves

Many researchers have settled on microwaves as their beam of choice for wireless power. But, in addition to the safety concerns about shooting such intense waves at the Earth, Jaffe says there’s another problem: Microwaves are part of what he calls the “beachfront property” of the electromagnetic spectrum—a range from 2 to 20 gigahertz that is set aside for many other applications, such as 5G cellular networks.

“The fact is,” Jaffe says, “if you somehow magically had a fully operational solar power satellite that used microwave power transmission in orbit today—and a multi-kilometer-scale microwave power satellite receiver on the ground magically in place today—you could not turn it on because the spectrum is not allocated to do this kind of transmission.”

Instead, Overview plans to use less-dense, wide-field infrared waves. Existing utility-scale solar farms would be able to receive the beamed energy just like they receive the sun’s energy during daylight hours. So “your receivers are already built,” Berte says. The next major step is a prototype demonstrator for low Earth orbit, after which he hopes to have GEO satellites beaming megawatts of power by 2030 and gigawatts by later that decade.

Doubts about the feasibility of space-based solar power abound. It is an exotic technology with much left to prove, including the ability to survive orbital debris and the exorbitant cost of launching the power stations. (Overview’s satellite will be built on Earth in a folded configuration, and it will unfold after it’s brought to orbit, according to the company.)

“Getting down the cost per unit mass for launch is a big deal,” Jaffe says. “Then, it just becomes a question of increasing the specific power. A lot of the technologies we’re working on at Overview are squarely focused on that.”


 
