Current feed - IEEE Spectrum

This sponsored article is brought to you by NYU Tandon School of Engineering.
The traditional approach to academic research goes something like this: Assemble experts from a discipline, put them in a building, and hope something useful emerges. Biology departments do biology. Engineering departments do engineering. Medical schools treat patients.
NYU is turning that model inside out. At its new Institute for Engineering Health, the organizing principle centers around disease states rather than traditional disciplines. Instead of asking “what can electrical engineers contribute to medicine?,” they’re asking “what would it take to cure allergic asthma?,” and then assembling whoever can answer that question, whether they’re immunologists, computational biologists, materials scientists, AI researchers, or wireless communications engineers.
Jeffrey Hubbell, NYU’s vice president for bioengineering strategy and professor of chemical and biomolecular engineering at NYU’s Tandon School of Engineering. New York University
The early results suggest they’re onto something. A chemical engineer and an electrical engineer collaborated to build a device that detects airborne threats — including disease pathogens — that’s now a startup. A visually impaired physician teamed with mechanical engineers to create navigation technology for blind subway riders. And Jeffrey Hubbell, the Institute’s leader, is advancing “inverse vaccines” that could reprogram immune systems to treat conditions from celiac disease to allergies — work that requires equal fluency in immunology, molecular engineering, and materials science.
The underlying problem these collaborations address is conceptual as much as organizational. Hubbell argues that modern medicine has optimized around a single strategy: developing drugs that block specific molecules or suppress targeted immune responses. Antibody technology has been the workhorse of this approach. “It’s really fit for purpose for blocking one thing at a time,” he says. The pharmaceutical industry has become extraordinarily good at creating these inhibitors, each designed to shut down a particular pathway.
But Hubbell asks a different question: Rather than inhibit one bad thing at a time, what if you could promote one good thing and generate a cascade that contravenes several bad pathways simultaneously? In inflammation, could you bias the system toward immunological tolerance instead of blocking inflammatory molecules one by one? In cancer, could you drive pro-inflammatory pathways in the tumor microenvironment that would overcome multiple immune-suppressive features at once?
This shift from inhibition to activation requires a fundamentally different toolkit — and a different kind of researcher. “We’re using biological molecules like proteins, or material-based structures — soluble polymers, supramolecular structures of nanomaterials — to drive these more fundamental features,” Hubbell explains. You can’t develop those approaches if you only understand biology, or only understand materials science, or only understand immunology. You need an understanding and a mastery of all three.
Which logically leads to the question: How do you create researchers with that kind of cross-disciplinary depth?
The answer isn’t what you might expect. “There may have been a time when the objective was to have the bioengineer understand the language of biology,” Hubbell says. “But that time is long, long gone. Now the engineer needs to become a biologist, or become an immunologist, or become a neuroscientist.”
Hubbell isn’t talking about engineers learning enough biology to collaborate with biologists. He’s describing something more radical: training people whose disciplinary identity is genuinely ambiguous. “The neuroengineering students — it’s very difficult to know that they’re an engineer or a neuroscientist,” Hubbell says. “That’s the whole idea.”
His own students exemplify this. They publish in immunology journals, present at immunology conferences. “Nobody knows they’re engineers,” he says. But they bring engineering approaches — computational modeling, materials design, systems thinking — to immunological problems in ways that traditional immunologists wouldn’t.
The mechanism for creating these hybrid researchers is what Hubbell calls a “milieu.” “To learn it all on your own is hopeless,” he acknowledges, “but to learn it in a milieu becomes very, very efficient.”
NYU is expanding its facilities to include a science and technology hub designed to force encounters between people across various schools and disciplines who wouldn’t naturally cross paths. Tracey Friedman/NYU
NYU is making that milieu physical. The university has acquired a large building in Manhattan that will serve as its science and technology hub — a deliberate co-location strategy designed to force encounters between people across various schools and disciplines who wouldn’t naturally cross paths.
Juan de Pablo is the Anne and Joel Ehrenkranz Executive Vice President for Global Science and Technology and Executive Dean of the NYU Tandon School of Engineering. Steve Myaskovsky, Courtesy of NYU Photo Bureau
“There will be people doing AI, data science, computational science theory, people doing immunoengineering and other biological engineering, people doing materials science and quantum engineering, all really in close proximity to each other,” Hubbell explains.
The strategy mirrors what Juan de Pablo, NYU’s Anne and Joel Ehrenkranz Executive Vice President for Global Science and Technology and Executive Dean at the NYU Tandon School of Engineering, describes as organizing around “grand challenges” rather than traditional disciplines. “What drives the recruitment and the spaces and the people that we’re bringing in are the problems that we’re trying to solve,” he says. “Great minds want to have a legacy, and we are making that possible here.”
But physical proximity alone isn’t enough. The Institute is also cultivating what Hubbell calls an “explicit” rather than “tacit” approach to translation — thinking about clinical and commercial pathways from day one.
“It’s a terrible thing to solve a problem that nobody cares about,” Hubbell tells his students. To avoid that, the Institute runs “translational exercises” — group sessions where researchers map the entire path from discovery to deployment before launching multi-year research programs. Where could this fail? What experiments would prove the idea wrong quickly? If it’s a drug, how long would the clinical trial take? If it’s a computational method, how would you roll it out safely?
The new cross-institutional initiative represents a major investment in science and technology, and includes adding new faculty, state-of-the-art facilities, and innovative programs. NYU Tandon
The approach contrasts sharply with typical academic practice. “Sometimes academics tend to think about something for 20 minutes and launch a 5-year PhD program,” Hubbell says. “That’s probably not a good way to do it.” Instead, the Institute brings together people who have actually developed drugs, built algorithms, or commercialized devices — importing their hard-won experience into the planning phase before a single experiment is run.
The timing may be fortuitous. De Pablo notes that AI is compressing timelines dramatically. “What we thought was going to take 10 years to complete, we might be able to do in 5,” he says.
But he’s quick to note AI’s limitations. While tools like AlphaFold can predict how a single protein folds — a breakthrough of the last five years — biology operates at much larger scales. “What we really need to do now is design not one protein, but collections of them that work together to solve a specific problem,” de Pablo explains.
Hubbell agrees: “Biology is much bigger — many, many, many systems.” The liver and kidney are in different places but interact. The gut and brain are connected neurologically in ways researchers are just beginning to map. “AI is not there yet, but it will be someday. And that’s our job — to develop the data sets, the computational frameworks, the systems frameworks to drive that to the next steps.”
It’s a moment of unusual ambition. “At a time when we’re seeing some research institutions retrench a little bit and limit their ambitions,” de Pablo says, “we’re doing just the opposite. We’re thinking about what are the grand challenges that we want to, and need to, tackle.”
The bet is that the breakthroughs worth making can’t emerge from any single discipline working alone. They require collisions — sometimes planned, sometimes accidental — between people who speak different technical languages and are willing to develop a shared one. NYU is engineering those collisions at scale.

This webinar covers power system modeling and simulation across multiple timescales, from quasi-static 8760 analysis through EMT studies, fault classification, and inverter-based resource grid integration.

When Yong Wang recently received one of the highest honors for early-career data visualization researchers, it marked a milestone in an extraordinary journey that began far from the world’s technology hubs.
Wang was born in a small farming village in southern China to parents with limited formal education. Today the IEEE member and associate editor of IEEE Transactions on Visualization and Computer Graphics is an assistant professor in the College of Computing and Data Science at Nanyang Technological University, in Singapore. He studies how people can employ data visualization techniques to get more out of large-scale datasets and advanced artificial intelligence techniques.
“Visualization helps people understand complex ideas,” he says. “If we design these tools well, they can make advanced technologies accessible to everyone.”
For his work in the field, the IEEE Computer Society visualization and graphics technical committee presented him with its 2025 Significant New Researcher Award. The recognition highlights his growing influence in fields including data visualization, human-computer interaction, and human-AI collaboration—areas becoming more important as the world generates more data than humans can easily interpret.
EMPLOYER
Nanyang Technological University, in Singapore
POSITION
Assistant professor of computing and data science
IEEE MEMBER GRADE
Member
ALMA MATERS
Harbin Institute of Technology in China; Huazhong University of Science and Technology in Wuhan, China; Hong Kong University of Science and Technology
Wang was born in a small farming village in southern China. China’s economy was still developing, and life in his village was modest. Most families in Hunan grew rice, vegetables, and fruit to support themselves.
Wang’s parents worked in agriculture too, and his father often traveled to cities to earn money working in a factory or on construction jobs. The extra income helped support the family and made it possible for Wang to attend college.
“I’m very grateful to my parents,” Wang says. “They never attended university, but they strongly supported my education.”
Technology was scarce in the village, he says. Computers were almost nonexistent, and televisions were considered precious, expensive household possessions.
One childhood memory still makes him laugh: During a summer vacation, he and his brother spent so many hours playing video games on a simple console connected to the family’s television that the TV screen eventually burned out.
“My mother was very angry,” he recalls. “At that time, a TV was a very valuable thing.”
He says that despite never having used a laptop or experimenting with electronic equipment, he was fascinated by the technologies he saw on TV shows.
His parents encouraged a practical career such as medicine or civil engineering, but he felt drawn to robotics and computing, he says.
“I didn’t really understand what computer science involved,” he says. “But from what I saw on TV, it looked exciting and advanced.”
He enrolled at Harbin Institute of Technology, in northeastern China. The esteemed university is known for its engineering programs. His major—automation—combined elements of electrical engineering, robotics, and control systems.
One of the defining experiences of his undergraduate years, he says, was a university robotics competition. Wang and his teammates designed a robot capable of autonomously navigating around obstacles.
The design was simple compared with professional systems, he acknowledges. But, he says, the experience was exhilarating. His team placed second, and Wang began to see engineering as both creative and collaborative.
He graduated with a bachelor’s degree in 2011, and then pursued a master’s degree in pattern recognition and image processing from the Huazhong University of Science and Technology, in Wuhan, China.
In 2014 he took a position as a research intern working at a technology company in Shenzhen, China.
That experience helped him clarify his future, he says: “I realized I didn’t enjoy doing repetitive work or simply following instructions. I wanted to explore ideas that interested me, and I wanted to conduct research.” The realization pushed him toward graduate school, he says.
He enrolled in the computer science Ph.D. program at the Hong Kong University of Science and Technology and earned the degree in 2018. He remained there as a postdoctoral researcher until 2020, when he moved to Singapore to join Singapore Management University as an assistant professor of computing and information systems. He moved over to Nanyang Technological University as an assistant professor in 2024.
His research focuses on a challenge facing nearly every business: how to make sense of the enormous amounts of data being generated.
“We live in an era of information explosions,” Wang says. “Huge amounts of data are generated, and it’s difficult for people to interpret all of it to make better business decisions.”
Data visualization offers a solution by turning complex information into images, patterns, and diagrams that people can more readily understand.
But many visualizations still must be designed manually by experts, Wang notes. It’s a time-consuming process that creates a bottleneck, he says.
His solution is to use large language models and multimodal AI systems, which can work with text, images, video, and sensor data, to automate parts of the process.
One system developed by his research group lets users design complex infographics through natural-language instructions combined with simple interactions such as drawing on a touchscreen with a finger. It allows nontechnical people to generate visualizations instead of hiring professional designers.
Another focus of his research is human-AI collaboration. AI systems can analyze data at enormous scale, but people still need to be the final decision-makers, he says.
Visualization helps bridge the gap between human intention and AI’s complex calculations by making the process an AI system uses to reach a result more transparent and understandable.
“If people understand how the AI system works,” he says, “they can collaborate with it more effectively.”
He recently explored how visualization techniques could help researchers understand quantum computing, a field where core concepts—such as superposition, where a bit can be in more than one state at a time—are abstract. In classical computing, the bit state is binary: It’s either 1 or 0. A quantum bit, or qubit, can be 1, 0, or both. The differences get more dizzying from there.
Visualization tools could help scientists monitor quantum systems and interpret quantum machine-learning models, he says.
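The superposition idea Wang is trying to visualize can also be sketched numerically. A qubit’s state is just two complex amplitudes whose squared magnitudes give the measurement probabilities — a minimal Python sketch of my own, not drawn from Wang’s tools:

```python
import math

# A qubit's state is a pair of complex amplitudes (alpha, beta)
# with |alpha|^2 + |beta|^2 = 1; measuring it yields 0 with
# probability |alpha|^2 and 1 with probability |beta|^2.

def measurement_probs(alpha: complex, beta: complex) -> tuple[float, float]:
    norm = abs(alpha) ** 2 + abs(beta) ** 2  # renormalize defensively
    return abs(alpha) ** 2 / norm, abs(beta) ** 2 / norm

# A classical bit corresponds to a basis state:
print(measurement_probs(1, 0))  # always measures 0

# An equal superposition is "both at once" until measured:
h = 1 / math.sqrt(2)
print(measurement_probs(h, h))  # roughly 50/50 between 0 and 1
```

Visualization research like Wang’s aims to turn state vectors like these, and far larger ones, into pictures people can actually reason about.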
Teaching and mentoring students remain among the most meaningful parts of Wang’s career, he says.
Professional communities such as the IEEE Computer Society, he says, play a major role in helping him transform early-stage graduate students, unsure of which lines of inquiry they will pursue, into independent researchers with a solid technical focus. Through conferences, publications, and technical committees, IEEE connects Wang with other researchers working in visualization, AI, and human-computer interaction, he says.
Those connections have helped him share ideas, collaborate, and stay up to date on innovations in the research community.
Receiving the Significant New Researcher award motivates him to continue pushing the field forward, he says.
Looking back, he says, the distance between his rural village in Hunan and an international research career still feels remarkable. But, he says, the journey reflects something larger about his chosen field: “If we build tools that help people understand information, then more people can participate in science and innovation.
“That’s the real power of visualization.”

Two weeks ago, Anthropic announced that its new model, Claude Mythos Preview, can autonomously find and weaponize software vulnerabilities, turning them into working exploits without expert guidance. These were vulnerabilities in key software like operating systems and internet infrastructure that thousands of software developers working on those systems failed to find. This capability will have major security implications, compromising the devices and services we use every day. As a result, Anthropic is not releasing the model to the general public, but instead to a limited number of companies.
The news rocked the internet security community. There were few details in Anthropic’s announcement, angering many observers. Some speculate that Anthropic doesn’t have the GPUs to run the thing, and that cybersecurity was the excuse to limit its release. Others argue Anthropic is holding to its AI safety mission. There’s hype and counterhype, reality and marketing. It’s a lot to sort out, even if you’re an expert.
We see Mythos as a real but incremental step, one in a long line of incremental steps. But even incremental steps can be important when we look at the big picture.
We’ve written about shifting baseline syndrome, a phenomenon that leads people—the public and experts alike—to discount massive long-term changes that are hidden in incremental steps. It has happened with online privacy, and it’s happening with AI. Even if the vulnerabilities found by Mythos could have been found using AI models from last month or last year, they couldn’t have been found by AI models from five years ago.
The Mythos announcement reminds us that AI has come a long way in just a few years: The baseline really has shifted. Finding vulnerabilities in source code is the type of task that today’s large language models excel at. Regardless of whether it happened last year or will happen next year, it’s been clear for a while this kind of capability was coming soon. The question is how we adapt to it.
We don’t believe that an AI that can hack autonomously will create permanent asymmetry between offense and defense; it’s likely to be more nuanced than that. Some vulnerabilities can be found, verified, and patched automatically. Some vulnerabilities will be hard to find but easy to verify and patch—consider generic cloud-hosted web applications built on standard software stacks, where updates can be deployed quickly. Still others will be easy to find (even without powerful AI) and relatively easy to verify, but harder or impossible to patch, such as IoT appliances and industrial equipment that are rarely updated or can’t be easily modified.
Then there are systems whose vulnerabilities will be easy to find in code but difficult to verify in practice. For example, complex distributed systems and cloud platforms can be composed of thousands of interacting services running in parallel, making it difficult to distinguish real vulnerabilities from false positives and to reliably reproduce them.
So we must separate the patchable from the unpatchable, and the easy to verify from the hard to verify. This taxonomy also provides us guidance for how to protect such systems in an era of powerful AI vulnerability-finding tools.
Unpatchable or hard-to-verify systems should be protected by wrapping them in more restrictive, tightly controlled layers. You want your fridge or thermostat or industrial control system behind a restrictive and constantly updated firewall, not freely talking to the internet.
Distributed systems that are fundamentally interconnected should be traceable and should follow the principle of least privilege, where each component has only the access it needs. These are bog-standard security ideas that we might have been tempted to throw out in the era of AI, but they’re still as relevant as ever.
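Least privilege can be reduced to something as simple as a deny-by-default allowlist that every request is checked against. The component names and permissions below are invented for illustration:

```python
# Least privilege as an explicit allowlist: each component is
# granted only the operations it needs, and anything not listed
# is denied by default. (Names are invented for illustration.)

ALLOWED = {
    "billing-service": {"read:invoices", "write:invoices"},
    "report-service": {"read:invoices"},  # read-only by design
}

def authorize(component: str, permission: str) -> bool:
    # Deny-by-default: unknown components get an empty grant set.
    return permission in ALLOWED.get(component, set())

print(authorize("report-service", "read:invoices"))   # True
print(authorize("report-service", "write:invoices"))  # False
print(authorize("unknown-service", "read:invoices"))  # False
```

The point is the shape, not the code: an AI-discovered bug in the report service can’t be leveraged into invoice tampering if that grant was never issued.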
This also raises the salience of best practices in software engineering. Automated, thorough, and continuous testing was always important. Now we can take this practice a step further and use defensive AI agents to test exploits against a real stack, over and over, until the false positives have been weeded out and the real vulnerabilities and fixes are confirmed. This kind of VulnOps is likely to become a standard part of the development process.
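One way to picture such a VulnOps loop — a hypothetical sketch, not a description of any real pipeline — is to re-run every AI-reported finding against a disposable test deployment several times and promote only the findings that reproduce every time:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Finding:
    ident: str
    reproduce: Callable[[], bool]  # attempt the exploit in a sandbox

def triage(findings: list[Finding], runs: int = 3) -> list[str]:
    """Keep only findings that reproduce on every run, weeding out
    flaky false positives from the scanner."""
    return [f.ident for f in findings
            if all(f.reproduce() for _ in range(runs))]

# Toy stand-ins for sandboxed exploit attempts:
stable = Finding("candidate-1", lambda: True)
flaky_results = iter([True, False, True])
flaky = Finding("candidate-2", lambda: next(flaky_results))

print(triage([stable, flaky]))  # ['candidate-1']
```

A real pipeline would replace the lambdas with actual redeployment and exploit replay, but the control flow — repeat, confirm, then escalate — is the part that becomes standard practice.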
Documentation becomes more valuable, as it can guide an AI agent on a bug-finding mission just as it does developers. And following standard practices and using standard tools and libraries allows AI and engineers alike to recognize patterns more effectively, even in a world of individual and ephemeral instant software—code that can be generated and deployed on demand.
Will this favor offense or defense? The defense eventually, probably, especially in systems that are easy to patch and verify. Fortunately, that includes our phones, web browsers, and major internet services. But today’s cars, electrical transformers, fridges, and lampposts are connected to the internet. Legacy banking and airline systems are networked.
Not all of those are going to get patched as fast as needed, and we may see a few years of constant hacks until we arrive at a new normal: where verification is paramount and software is patched continuously.
Tom Burick has always considered himself a builder. Over the years he’s designed robots, constructed a vintage teardrop trailer, and most recently, led a group of students in building a full-scale replica of a pivotal 1940s computer.
Burick is a technology instructor at PS Academy in Gilbert, Ariz., a middle and high school for students with autism and other specialized learning needs. At the start of the 2025–26 school year, he began a project with his students to build a full-scale replica of the Electronic Numerical Integrator and Computer, or ENIAC, for the 80th anniversary of the historic computer’s construction. ENIAC was one of the world’s first programmable electronic computers. When it was built, it was about one thousand times as fast as other machines.
Before becoming a teacher, Burick owned a robotics company for a decade in the 2000s. But when a financial downturn forced him to close the business, he turned to teaching. “I had so many amazing people help me when I was young [who] really gave me their time and resources, and really changed the trajectory of my life,” Burick says. “I thought I need to pay that forward.”
As a young child in Latrobe, Pa., Burick watched the television show Lost in Space, which includes a robot character who protects the family. “He was the young boy’s best friend, and I was so captivated by that. I remember thinking to myself, I want that in my life. And that started that lifelong love affair with robotics and technology.”
He started building toy robots out of anything he could find, and in junior high school, he began adding electronics. “By early high school, I was building full-fledged autonomous, microprocessor-controlled machines,” he says. At age 15, he built a 150-pound steel firefighting robot, for which he won awards from IEEE and other organizations.
Burick kept building robots and reached out for help from local colleges and universities. He first got in touch with a student at Carnegie Mellon University, who invited him to visit campus. “My parents drove me down the next weekend, and he gave me a tour of the robotics lab. I was mesmerized. He sent me home with college textbooks and piles of metal and gears and wires,” Burick says. He would read the textbook a page at a time, reading it again and again until he felt he had an understanding of it. Then, to help fill gaps in his understanding, he got in touch with a robotics instructor at Saint Vincent College, in his hometown of Latrobe, who let him sit in on classes. Each of these adults, he says, “helped change the trajectory of my life.”
Toward the end of high school, Burick realized that college wouldn’t be the right environment for him. “I was drawn to real-world problem-solving rather than structured coursework and I chose to continue along that path,” he says. Additionally, Burick has dyscalculia, which makes traditional mathematics more challenging for him. “It pushed me to develop alternative methods of engineering.”
The ENIAC replica Burick’s students built precisely matches what the original computer would have looked like before it was disassembled in the 1950s. Robert Gamboa
When he graduated, he worked in several tech jobs before starting his own company. In 2000, he opened a computer retail store and adjacent robotics business, White Box Robotics. The idea for the company came when Burick was building a “white box” PC from standard, off-the-shelf components, and realized there was no comparable product for robotics.
So, he started developing a modular, general-purpose platform that applied white box PC standards to mobile robots. “The robot’s chassis was like a box of Legos,” he says. You could click together two torsos to double its payload, switch out the drive system, or swap its head for a different set of sensors. He filed utility and design patents for the platform, called the 914 PC-Bot, and after merging with a Canadian defense robotics company called Frontline Robotics, started production. They sold about 200 robots in 17 countries, Burick says.
Then the 2008 financial crisis hit. White Box Robotics held on for a couple of years, shuttering in late 2010. “I got to live my life’s dream for 10 years,” he says. After closing White Box, “there was some soul searching” about what to do next. He recalled the impact his own mentors had, and decided to pay it forward by teaching.
In 2013, Burick started working in a vocational training program for young adults living with autism. The program didn’t have a technical arm, so he started one and ran it until 2019, when he was hired to be a technology instructor at PS Academy Arizona.
Burick and one of his students assemble the base for one of ENIAC’s three portable function tables, which contained banks of switches that stored numerical constants. Bri Mason
Burick feels he can connect with his students, because he is also neurodivergent. Throughout his childhood, he was told what he wasn’t able to do because of his dyscalculia diagnosis. “People tell you what it takes, but they never tell you what it gives,” Burick says.
In adulthood, he realized that some of his strengths are linked to dyscalculia, too, like strong 3D spatial reasoning. “I have this CAD program that runs in my head 24 hours a day,” he says. “I think the reason I was successful in robotics, truly, was because of the dyscalculia…. To me, [it] has always been a superpower.”
Whenever his students say something disparaging about living with autism, he shares his own experience. “You need to have maybe just a bit more tenacity than others, because there are parts of it you do have to fight through, but you come through with gifts and strengths,” he tells them.
And Burick’s classes aim to play to those strengths. “I didn’t want my technology program to feel like craft hour,” he says. Instead, through projects like the ENIAC replica, students can leverage traits many of them share, like the abilities to hyperfocus and to precisely repeat tasks.
Burick has taught his students about ENIAC for several years. While reading about it, he learned that the massive, 27-tonne computer was dismantled and partially destroyed after being decommissioned in 1955. Although a few of ENIAC’s 40 original panels are on display at museums, “there was no hope of ever seeing it together again. We wanted to give the world that experience,” Burick says.
He and his students started by learning about ENIAC, and even Burick was surprised by how complex the 80-year-old computer was. They built a one-twelfth scale model to help the students better understand what it looked like. Seeing the students light up, Burick became confident in their ability to move onto the full-scale model, and he started ordering supplies.
ENIAC was composed of 40 large metal panels arranged in a U-shape that housed its many vacuum tubes, resistors, capacitors, and switches. Twenty of the panels were accumulators with the same design, so the students started with these, then worked through smaller groupings of panels. The repeating panels brought symmetry to ENIAC, Burick says, but it was also one of the main challenges of recreating it. If one part was slightly out of place, the next one would be too and the mistake would compound.
The students installed 500 simulated vacuum tubes in each of the panels here, for a total of 18,000 vacuum tubes. Robert Gamboa
Once they constructed the panels, they added ENIAC’s three function tables, which stored numerical constants in banks of switches, then two punch-card machines. Finally, they installed 18,000 simulated vacuum tubes. In total, the project used nearly 300 square meters of thick-ream cardboard, 1,600 hot-glue-gun sticks, and 7 gallons of black paint.
The scale of the machine—and his students’ work—left Burick in awe. “By the time we were done, I felt like I was in a room full of scientists,” he says.
Previously, Burick’s students built an 8-foot-long drivable Tesla Cybertruck (“complete with a 400-watt stereo system and a subwoofer”), and he plans to keep the momentum going with another recreation, maybe something from the Apollo moon missions.
“I go to work every day, and I feel passionate about robotics [and] technology. I get to share that passion with the students,” Burick says. “I get to feel what it’s like to be in the position of the people that helped me. It closes that loop, and I find that really rewarding.”

Once upon a time in Europe, television remote controls had a magic teletext button. Years before the internet stole into homes, pressing that button brought up teletext digital information services with hundreds of constantly updated pages. Living in Ireland in the 1980s and ’90s, my family accessed the national teletext service—Aertel—multiple times a day for weather and news bulletins, as well as things like TV program guides and updates on airport flight arrivals.
It was an elegant system: fast, low bandwidth, unaffected by user load, and delivering readable text even on analog television screens. So when I recently saw it was the 40th anniversary of Aertel’s test transmissions, it reactivated a thought that had been rolling around in my head for years. Could I make a ham-radio version of teletext?
First developed in the United Kingdom and rolled out to the public by the BBC under the name Ceefax, teletext exploited a quirk of analog television signals. These signals transmitted video frames as lines of luminosity and color, plus some additional blank lines that weren’t displayed. Teletext piggybacked a digital signal onto these spares, transmitting a carousel of pages over time. Using their remotes, viewers typed in the three-digit code of the page they wanted. Generally within a few seconds, the carousel would cycle around and display the desired page.
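The carousel mechanism described above can be modeled in a few lines of Python. This is a toy sketch for illustration only; the page numbers and contents are hypothetical, and real teletext interleaves rows of many pages rather than sending whole pages atomically.

```python
import itertools

# Toy model of the teletext carousel: pages are broadcast in an endless
# loop, and the receiver simply waits until the requested page number
# comes around again. Page numbers and contents here are made up.
pages = {100: "INDEX", 150: "NEWS HEADLINES", 888: "SUBTITLES"}

def wait_for(page_no, broadcast):
    # Block until the desired page appears in the broadcast stream.
    for num, content in broadcast:
        if num == page_no:
            return content

carousel = itertools.cycle(pages.items())
print(wait_for(150, carousel))  # prints "NEWS HEADLINES"
```

This also shows why response time depends on where the carousel happens to be when you key in a page number: in the worst case you wait for a full cycle.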
Teletext created unusually legible text in the 8-bit era by enlarging alphanumeric characters and interpolating new pixels by looking for existing pixels touching diagonally, and adding whitespace between characters. Graphic characters were not interpolated, and featured blocky chunks known as sixels for their 2-by-3 arrangement. My modern recreation uses the open-source font Bedstead, which replicates the look of teletext, including the graphics characters. James Provost
Teletext is composed of characters that can be one of eight colors. Control codes in the character stream select colors and can also produce effects like flashing text and double-height characters. The text’s legibility was better than most computers could manage at the time, thanks to the SAA5050 character-generator chip at the heart of teletext. Although characters are internally stored on this chip in 6-by-10-pixel cells—fewer pixels than the typical 8-by-8-pixel cell used in 1980s home computers—the SAA5050 interpolates additional pixels for alphanumeric characters on the fly, making the effective resolution 10 by 18 pixels. The trade-off is very low-resolution graphics, comprising characters that use a 2-by-3 set of blocky pixels.
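The chip’s interpolation trick, often called character rounding, can be approximated in a short sketch: every pixel is doubled, and wherever two set pixels touch only diagonally, the two half-pixels bridging the gap are filled in. This is a simplified model under stated assumptions; the glyph format (strings of “.” and “X”) and the plain 2x doubling are illustrative choices, not the SAA5050’s actual internal representation, which maps 6-by-10 cells to an effective 10 by 18 pixels.

```python
def smooth(glyph):
    # glyph: list of equal-length strings of '.' (off) and 'X' (on).
    h, w = len(glyph), len(glyph[0])

    def get(r, c):
        return 0 <= r < h and 0 <= c < w and glyph[r][c] == "X"

    out = [["."] * (2 * w) for _ in range(2 * h)]
    for r in range(h):
        for c in range(w):
            if not get(r, c):
                continue
            # Double every set pixel into a 2x2 block.
            for dr in (0, 1):
                for dc in (0, 1):
                    out[2 * r + dr][2 * c + dc] = "X"
            # Where a diagonal neighbour is set but both orthogonal
            # neighbours are not, fill the two half-pixels in the gap.
            for dr, dc in ((-1, -1), (-1, 1), (1, -1), (1, 1)):
                if get(r + dr, c + dc) and not get(r, c + dc) and not get(r + dr, c):
                    out[2 * r + (1 if dr == 1 else 0)][2 * (c + dc) + (0 if dc == 1 else 1)] = "X"
                    out[2 * (r + dr) + (0 if dr == 1 else 1)][2 * c + (1 if dc == 1 else 0)] = "X"
    return ["".join(row) for row in out]

# A lone diagonal pair gains bridging half-pixels:
for row in smooth(["X.", ".X"]):
    print(row)
```

The effect is that diagonal strokes render as smooth staircases rather than disconnected blocks, which is much of why teletext characters looked so crisp for the era.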
Teletext screens use a 40-by-24-character grid. This means that a kilobyte of memory can store a full page of multicolor text, half the memory required for a similar amount of text on, for example, the Commodore 64. The BBC Microcomputer took advantage of this by putting an SAA5050 on its motherboard, which could be accessed in one of the computer’s graphics modes. Despite the crude graphics, some educational games used this mode, most notably Granny’s Garden, which filled the same cultural niche among British schoolchildren that The Oregon Trail did for their U.S. counterparts.
By the 2010s, most teletext services had ceased broadcasting. But teletext is still remembered fondly by many, and enthusiasts are keeping it alive, recovering and archiving old content, running internet-based services with current newsfeeds, and developing systems that make it possible to create and display teletext with modern TVs.
I wanted to do something a little different. Inspired by how the BBC Micro co-opted teletext for its own purposes, I thought it might make a great radio protocol. In particular, I thought it could be a digital counterpart to slow-scan television (SSTV).
SSTV is an analog method of transmitting pictures, typically including banners with ham-radio call signs and other messages. SSTV is fun, but, true to its name, it’s slow—the most popular protocols take a little under 2 minutes to send an image—and it can be tricky to get a complete picture with legible text. For that reason, SSTV images are often broadcast multiple times.
I decided to send the teletext using the AX.25 protocol, which encodes ones and zeros as audible tones. For VHF and UHF transmissions at a rate of 1,200 baud, it would take 11 seconds to send one teletext screen. Over HF bands, AX.25 data is normally sent at 300 baud, which would result in a still-acceptable 44 seconds per screen. When a teletext page is sent repeatedly, any rows missed or corrupted in one pass are filled in by later repeats. So in a little over 2 minutes, I could send a screen three times over HF, and the receiver would automatically combine the data. I also wanted to build the system in Python for portability, with an editor for creating pages, an AX.25 encoder and decoder, and a monitor for displaying received images.
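The arithmetic behind those airtime figures can be sketched as follows. The 10-bits-per-byte framing and the roughly 35 percent allowance for AX.25 headers, flags, and bit stuffing are assumptions chosen for illustration, not measured protocol overhead:

```python
PAGE_BYTES = 40 * 24  # one 40-by-24-character teletext screen: 960 bytes

def airtime(baud, overhead=0.35):
    """Rough seconds to send one page: 10 bits per byte (start/stop
    framing) plus an assumed fraction for AX.25 headers, flags, and
    bit stuffing. The 0.35 overhead figure is a guess."""
    return PAGE_BYTES * 10 * (1 + overhead) / baud

print(round(airtime(1200)))  # ~11 s at VHF/UHF packet rates
print(round(airtime(300)))   # ~43 s on HF
```

Whatever the exact overhead, the timing scales inversely with baud rate, so the HF figure is always four times the VHF/UHF one.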
The reason I hadn’t done this before is that it requires digesting the details of the AX.25 standard and teletext’s official spec, and then translating them into a suite of software, which I never seemed to have the time to do. So I tried an experiment within an experiment and turned to vibe coding.
Despite the popularity of vibe coding with developers, I have reservations. Even if concerns about AI slop, the environment, and memory hoarding were not on the table, I would still worry about the reliance on centralized systems that vibe coding brings. The whole point of a DIY project is to, well, do it yourself. A DIY project lets you craft things for your own purposes, not just operate within someone else’s profit margins and policies.
Still, criticizing a technology from afar isn’t ideal, so I directed Anthropic’s Claude toward the AX.25 and teletext specs and told it what I wanted. After about 250,000 to 300,000 tokens and several nights of back and forth about bugs and features, I had the complete system running without writing a single line of code. Being honest with myself, I doubt this system—which I’m calling Spectel—would ever have come about without vibe coding.
But I didn’t learn anything new about how teletext works, and only a little bit more about AX.25. Updates are contingent on my paying Anthropic’s fees. So I remain deeply ambivalent about vibe coding. And one final test remains in any case: trying Spectel out on HF bands. Of course, that means I’ll need willing partners out in the ether. So if you’re a ham who’d like to help out, let me know in the comments below!

Examining how a U.S. Interregional Transmission Overlay could address aging grid infrastructure, surging demand, and renewable integration challenges.

This article is crossposted from IEEE Spectrum’s careers newsletter. Sign up now to get insider tips, expert advice, and practical strategies, written in partnership with tech career development company Parsity and delivered to your inbox for free!
When I was promoted to engineering manager of a mid-sized team at Clorox, I thought I had made it.
More money. More stock. More visibility. More proximity to senior leadership. From the outside, and on paper, it was clearly a promotion.
I had often heard the phrase, “Management isn’t a promotion. It’s a job switch.” I brushed it off as cliché advice engineers tell each other to sound wise.
It turns out both things were true. It was a promotion. It was also an entirely different job.
And I was nowhere near ready for what that meant.
There’s surprisingly little training for new managers. As engineers, we’re highly technical and used to mastering complex systems. Many of us assume managing people will be easier than distributed systems. Or we assume it’s just “more meetings.”
Both assumptions are wrong.
Yes, I had more meetings. But what changed most wasn’t my calendar; it was how my impact was measured. As an individual contributor, my output was visible. Code shipped. Features delivered. Bugs fixed.
As a manager, my impact became indirect. It flowed through other people.
That shift was disorienting.
So I fell back into my comfort zone. I started writing more code. I tried to be the strongest engineer on the team. It felt productive and measurable.
It was also a mistake.
By trying to be the number one engineer, I was neglecting my actual job. I wasn’t supporting senior engineers. I wasn’t unblocking systemic problems. I wasn’t building career paths. I was competing with the very people I was supposed to enable.
Management is about amplification.
The turning point came when I began each week with a simple question:
What is the single most impactful thing I can do right now?
Often, it wasn’t code. It was writing a document that clarified direction. It was fixing a broken process with a single point of failure. It was redistributing ownership so that knowledge wasn’t concentrated in one person.
I started deliberately removing myself from implementation work. I committed to writing almost no code. That forced trust. It also revealed gaps in the system that I could address at the right level: through coaching, documentation, hiring, or process changes.
Another major shift was taking one-on-one meetings seriously.
Many engineers dislike one-on-ones. They can feel awkward or devolve into status updates. I scheduled them every other week and approached them with a mix of tactical alignment and human check-in.
I rarely started with engineering questions; I opened with the human check-in.
Burnout doesn’t show up in Jira tickets. Neither does quiet disengagement.
Those conversations helped me anticipate turnover, redistribute workload, and build trust.
I also spent more time thinking about career ladders. Was I giving my team the kind of work that would help them grow? Was I hoarding high-visibility projects? Was I clear about what senior-level impact looked like?
That work felt less tangible than code, but it moved the needle far more.
Ultimately, I returned to the individual contributor track.
Part of it was practical: I was laid off from my management role, and the market rewarded senior IC roles more strongly at the time. But if I’m honest, the deeper reason was simpler.
I love writing code.
I enjoy improving systems and helping people, but the part of my day that energized me most was still building. Management required relinquishing that. You can’t be absorbed in technical implementation and deeply people-focused at the same time. Something has to give.
Personally, I don’t need to climb the corporate ladder to feel successful, and you might not either. Many organizations offer technical leadership tracks with salary bands truly on par with management. Staff and principal engineers steer strategy without managing people.
If you want to remain deeply technical, you should think very carefully before moving into people management. It requires surrendering control over implementation and focusing on alignment, growth, and long-range planning. If you don’t genuinely care about those things, you won’t just be unhappy, you’ll make your team unhappy.
Before taking a management role, ask yourself what kind of work energizes you. There’s no right answer.
The IC/manager fork isn’t about prestige. It’s about what kind of work you want your days to consist of.
Choose based on energy, not ego.
—Brian
Stanford University’s AI Index is out for 2026, tracking trends and notable developments in artificial intelligence. This year, China has taken a clear lead in AI model releases and industrial robotics compared with previous years. AI models are rapidly hitting benchmarks and achieving high levels of compute, but public trust in AI and confidence in government regulation of AI are mixed.
Much like large language models have learned from existing texts, new AI physics models are being trained on simulation results. This results in “large physics models” that can simulate situations in transportation, aerospace, or semiconductor engineering much faster than traditional physics simulations. Using new AI physics models “can be anywhere between 10,000 to close to a million times faster,” says Jacomo Corbo, CEO and co-founder of PhysicsX.
Kyle McGinley is an IEEE Student Member pursuing a bachelor’s degree in electrical and computer engineering at Temple University. Joining IEEE helped him to develop the skills necessary for real-world teams. “In school, they don’t teach you how to communicate with people. They only teach you how to remember stuff,” he says.

Why does a chocolatier build a railroad? For Milton S. Hershey, it was a logical response to a sugar shortage brought on by World War I. The Hershey Chocolate Co. was by then a chocolate-making powerhouse, having refined the automation and mass production of its products, including the eponymous Hershey’s Milk Chocolate Bar and the bite-size Hershey’s Kiss. To satisfy its many customers, the company needed a steady supply of sugar. Plus, it wanted a way to circumvent the American Sugar Refining Co., also known as the Sugar Trust, which had a virtual monopoly on sugar processing in the United States.
Beginning in 1916, Hershey looked to Cuba to secure his sugar supply. According to historian Thomas R. Winpenny, the chocolate magnate had a “personal infatuation” with the lush, beautiful island. What’s more, U.S. business interests there were protected by a treaty known as the Platt Amendment, which made Cuba a satellite state of the United States.
Like many industrialists of the day, Hershey believed in vertical integration, and the company’s Cuban operation eventually expanded to include five sugar plantations, five modern sugar mills, a refinery, several company towns, and an oil-fired power plant with three substations to run it all.
A 1943 rail pass entitled the holder to travel on all ordinary passenger trains of the Hershey Electric Railway. Hershey Community Archives
The company also built a railroad. To maximize the sugar yield, the cane needed to be ground promptly after being cut. The rail system offered an efficient means of transporting the cane to the mills and ensured that they operated around the clock during the harvest. By 1920, one of Hershey’s three main sites was processing 135,000 tonnes of cane, yielding 14.4 million kilograms of sugar.
Initially, the Hershey Cuban Railway consisted of a single 56-kilometer-long standard-gauge track on which ran seven steam locomotives that burned coal or oil. But because of the high cost of the imported fuel and the inefficiency of the locomotives, Hershey began electrifying the line in 1920. Although it was the first electrified rail line in Cuba, lines in Europe and the United States were already being electrified.
In addition to powering the various Hershey entities, the generating station supplied Matanzas and the smaller towns with electricity. F.W. Peters of General Electric’s Railway and Traction Engineering Department published a detailed account of the system in the April 1920 General Electric Review.
The company town of Central Hershey became the headquarters for Hershey’s Cuba operations. (“Central” is the Cuban term for a mill and the surrounding settlement.) It sat on a plateau overlooking the port of Santa Cruz del Norte, about halfway between Havana and Matanzas in the heart of Cuba’s sugarcane region.
Hershey imported the industrial utopian model he had established in Hershey, Penn., which was itself inspired by Richard and George Cadbury’s Bournville Village outside Birmingham, England.
The chocolate magnate Milton S. Hershey had a “personal infatuation” with Cuba. Underwood Archives/Getty Images
In Cuba as in Pennsylvania, Hershey’s factory complex was complemented by comfortable homes for his workers and their families, as well as swimming pools, baseball fields, and affordable medical clinics staffed with doctors, nurses, and dentists. Managers had access to a golf course and country club in Central Hershey. Schools provided free education for workers’ children.
Milton Hershey himself had very little formal education, and so in 1909 he and his wife, Catherine, established the Hershey Industrial School in Hershey, Penn. There, white, male orphans received an education until they were 18 years old. Now known as the Milton Hershey School, the school has broadened its admission criteria considerably over the years.
Hershey duplicated this concept in the Cuban company town of Central Rosario, founding the Hershey Agricultural School. The first students were children whose parents had died in a horrific 1923 train accident on the Hershey Electric Railway. The high-speed, head-on collision between two trains killed 25 people and injured 50 more.
Milton Hershey was a generous philanthropist, and by most accounts he truly cared for his employees and their welfare, and yet his early 20th-century paternalism was not without fault. He was a fierce opponent of union activity, and any hard-won pay increases for workers often came at the expense of profit-sharing benefits. Like other U.S. businessmen in Cuba, Hershey employed migrant seasonal labor from neighboring Caribbean islands, undercutting the wages of local workers. Historians are still wrangling with how to capture the long-lasting effects of U.S. economic imperialism on Cuba.
Hershey continued to acquire new sugar plantations in Cuba throughout the 1920s, eventually owning about 24,300 hectares and leasing another 12,000 hectares. In 1946, a year after Milton Hershey’s death and amid growing political uncertainty on the island, the company sold its Cuban interests to the Cuban Atlantic Sugar Co. In addition to Hershey’s sugar operations, the sale included a peanut oil plant, four electric plants, and 404 km of railroad track plus locomotives and train cars.
Service on the Hershey Electric Railway in Cuba continued into at least the 2010s but became increasingly sporadic, with aging equipment like this car at the Central Hershey station. Hershey Community Archives
The Central Hershey sugar refinery continued to operate even after the Cuban Revolution but eventually closed in 2002. Passenger service, meanwhile, continued on the Hershey Electric Railway, albeit sporadically: By 2012, there were only two trips a day between Havana and Matanzas. A video from 2013 gives a good sense of the route.
A colleague of mine who studies Cuban history told me that in his travels to the country over almost 30 years, he has never been able to ride the Hershey electric train. It was always out of service or had restricted service due to the island’s chronic electricity shortages, which have only gotten worse in recent years. I’ve been trying to find out if any part of the line is still operating. If you happen to know, please add a comment below.
Cuba’s frequent power outages make it difficult to operate the Hershey Electric Railway. In this 2009 photo, passengers await the restoration of electricity so they can continue their journey. Adalberto Roque/AFP/Getty Images
A 2024 analysis of the economic potential and challenges of reactivating Cuba’s Hershey Electric Railway noted that an electric railway could be a hedge against climate change and geopolitical factors. But it also acknowledged that frequent power outages and damaged infrastructure argue against reactivating the electrified railway, and it favored the diesel engines used on most of Cuba’s rail network.
Cuba has been mostly off-limits to U.S. tourists for my entire life, but it was one of my grandmother’s favorite vacation spots. I would love to imagine a future where political ties are restored, the power grid is stabilized, and the Hershey Electric Railway is reopened to the Cuban public and to curious visitors like me.
Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology.
An abridged version of this article appears in the May 2026 print issue as “This Chocolate Empire Ran on Electric Rails.”
In April 1920, F.W. Peters of General Electric’s Railway and Traction Engineering Department wrote a detailed account called “Electrification of the Hershey Cuban Railway” in the General Electric Review, which was later abstracted in Scientific American Monthly to reach a broader audience.
Thomas R. Winpenny’s article “Milton S. Hershey Ventures into Cuban Sugar” in Pennsylvania History: A Journal of Mid-Atlantic Studies, Fall 1995, provided background to the business side of Hershey’s Cuba enterprise.
Florian Wondratschek’s 2024 article “Between Investment Risk and Economic Benefit: Potential Analysis for the Reactivation of the Hershey Railway in Cuba” in Transactions on Transport Sciences brought the story up to the present.
And if you’re interested in a visual take on the Hershey operation on Cuba, check out the documentary Milton Hershey’s Cuba by Ric Morris, a professor of Spanish and linguistics at Middle Tennessee State University.

The robotics engineering field that Maja Matarić wanted to work in didn’t exist, so she helped create it, defining the new area of socially assistive robotics in 2005.
As an associate professor of computer science, neuroscience, and pediatrics at the University of Southern California, in Los Angeles, she developed robots to provide personalized therapy and care through social interactions.
Employer: University of Southern California, Los Angeles
Job title: Professor of computer science, neuroscience, and pediatrics
Member grade: Fellow
Alma maters: University of Kansas and MIT
The robots could have conversations, play games, and respond to emotions.
Today the IEEE Fellow is a professor at USC. She studies how robots can help students with anxiety and depression undergo cognitive behavioral therapy. CBT focuses on changing a person’s negative thought patterns, behaviors, and emotional responses.
For her work, she received a 2025 Robotics Medal from MassRobotics, which recognizes female researchers advancing robotics. The Boston-based nonprofit provides robotics startups with a workspace, prototyping facilities, mentorship, and networking opportunities.
When receiving the award at the ceremony in Boston, Matarić was overcome with joy, she says.
“I’ve been very fortunate to be honored with several awards, which I am grateful for. But there was something very special about getting the MassRobotics medal, because I knew at least half the people in the room,” she says. “Everyone was just smiling, and there was a great sense of love.”
Matarić grew up in Belgrade, Serbia. Her father was an engineer, and her mother was a writer. After her father died when she was 16, Matarić and her mother moved to the United States.
She credits her father for igniting her interest in engineering, and her uncle who worked as an aerospace engineer for introducing her to computer science.
Matarić says she didn’t consider herself an engineer until she joined USC’s faculty, since she always had worked in computer science.
“In retrospect, I’ve always been an engineer,” Matarić says. “But I didn’t set out specifically thinking of myself as one—which is just one of the many things I like to convey to young people: You don’t always have to know exactly everything in advance.”
Maja Matarić and her lab are exploring how socially assistive robots can help improve the communication skills of children with autism spectrum disorder. National Science Foundation News
While pursuing her bachelor’s degree in computer science at the University of Kansas in Lawrence, she was introduced to industrial robotics through a textbook. After earning her degree in 1987, she had an opportunity to continue her education as a graduate student at MIT’s AI Lab (now the Computer Science and Artificial Intelligence Lab). During her first year, she explored the different research projects being conducted by faculty members, she said in a 2010 oral history conducted by the IEEE History Center. She met IEEE Life Fellow Rodney Brooks, who was working on novel reactive and behavior-based robotic systems. His work so excited her that she joined his lab and conducted her master’s thesis under his tutelage.
Inspired by the way animals use landmarks to navigate, Matarić developed Toto, the first navigating behavior-based robot. Toto used distributed models to map the AI Lab building where Matarić worked and plan its path to different rooms. Toto used sonar to detect walls, doors, and furniture, according to Matarić’s book The Robotics Primer.
After earning her master’s degree in AI and robotics in 1990, she continued to work under Brooks as a doctoral student, pioneering distributed algorithms that allowed a team of up to 20 robots to execute complex tasks in tandem, including searching for objects and exploring their environment.
Matarić earned her Ph.D. in AI and robotics in 1994 and joined Brandeis University, in Waltham, Mass., as an assistant professor of computer science. There she founded the Interaction Lab, where she developed autonomous robots that work together to accomplish tasks.
Three years later, she relocated to California and joined USC’s Viterbi School of Engineering as an assistant professor in computer science and neuroscience.
In 2002 she helped to found the Center for Robotics and Embedded Systems (now the Robotics and Autonomous Systems Center). The RASC focuses on research into human-centric and scalable robotic systems and promotes interdisciplinary partnerships across USC.
The shift in Matarić’s research came after she gave birth to her first child in 1998. As her daughter got older, Matarić anticipated being asked why she worked with robots, and she wanted to be able to “say something better than ‘I publish a lot of research papers,’ or ‘it’s well-recognized,’” she says.
“Kids don’t consider those good answers, and they’re probably right,” she says. “This made me realize I was in a position to do something different. And I really wanted the answer to my daughter’s future question to be, ‘Mommy’s robots help people.’”
Matarić and her doctoral student David Feil-Seifer presented a paper defining socially assistive robotics at the 2005 International Conference on Rehabilitation Robotics. It was the only paper that talked about helping people complete tasks and learn skills by speaking with them rather than by performing physical jobs, she says.
Feil-Seifer is now a professor of computer science and engineering at the University of Nevada in Reno.
At the same time, she refocused her Interaction Lab, now at USC, on creating robots that provide social, rather than physical, support.
“At this point in my career journey, I’ve matured to a place where I don’t want to do just curiosity-driven research alone,” she says. “Plenty of what my team and I do today is still driven by curiosity, but it is answering the question: ‘How can we help someone live a better life?’”
In 2006 she was promoted to full professor and made the senior associate dean for research in USC’s Viterbi School of Engineering. In 2012 she became vice dean for research.
“In academia, you can be in a leadership role and still do research,” she says. “It’s a wonderful and important opportunity that lets academics be on top of our field and also train the next generation of students and help the next generation of faculty colleagues.”
One of the longest research projects Matarić has led at her Interaction Lab is exploring how socially assistive robots can help improve the communication skills of children with autism spectrum disorder. ASD is a lifelong neurological condition that affects the way people interact with others, and the way they learn. Children with ASD often struggle with social behaviors such as reading nonverbal cues, playing with others, and making eye contact.
Matarić and her team developed a robot, Bandit, that can play games with a child and give the youngster words of affirmation. Bandit is 56 centimeters tall and has a humanlike head, torso, and arms. Its head can pan and tilt. The robot uses two FireWire cameras as its eyes, and it has a movable mouth and eyebrows, allowing it to exhibit a variety of facial expressions, according to IEEE Spectrum’s robots guide. Its torso is attached to a wheeled base.
The study showed that when interacting with Bandit, children with ASD exhibited social behaviors that were out of the ordinary for them, such as initiating play and imitating the robot.
Matarić and her team also studied how the robot could serve as a social and cognitive aid for elderly people and stroke patients. Bandit was programmed to instruct and motivate users to perform daily movement exercises such as seated aerobics.
Maja Matarić and doctoral student Amy O’Connell testing Blossom, which is being used to study how it can aid students with anxiety or depression. University of Southern California
Over the years, Matarić’s lab developed other robots including Kiwi and Blossom. Kiwi, which looked like an owl, helped children with ASD learn social and cognitive skills, helped motivate elderly people living alone to be more physically active, and mediated discussions among family members. Blossom, originally developed at Cornell, was adapted by the Interaction Lab to make it less expensive and easier to personalize for individuals. The robot is being used to study how it can aid students with anxiety or depression in practicing cognitive behavioral therapy.
Matarić’s latest line of research began when she learned that large language model (LLM) chatbots were being promoted to help people with mental health struggles, she said in an episode of the AMA Medical News podcast.
“It is generally not easy to get [an appointment with a] therapist, or there might not be insurance coverage,” she said. “These, combined with the rates of anxiety and depression, created a real need.”
That made the chatbot idea appealing, she says, but she wanted to see how effective chatbots were compared with a friendly robot such as Blossom.
Matarić and her team used the same LLMs to power CBT practice with a chatbot and with Blossom. They ran a two-week study in the USC dorms, where students were randomly assigned to complete CBT exercises daily with either a chatbot or the robot. Participants filled out a clinical assessment to measure their psychiatric distress before and after each session.
The study showed that students who interacted with the robot experienced a significant decrease in psychiatric distress, Matarić said in the podcast, while students who interacted with the chatbot did not.
“Joining an [IEEE] society has an impact, and it can be personal. That’s why I recommend my students join the organization—because it’s important to get out there and get connected.”
She and her team also reviewed transcripts of conversations between the students and the robot to evaluate how well the LLM responded to the participants. They found the robot was more effective than the chatbot, even though both were using the same model.
Based on those findings, in 2024 Matarić received a grant from the U.S. National Institute of Mental Health to conduct a six-week clinical trial exploring how effective a socially assistive robot can be at delivering CBT practice. The trial, currently underway, is also expected to study how Blossom can be personalized to adapt to each user’s preferences and progress, including the way the robot moves, which exercises it recommends, and what feedback it gives.
During the trial, the 120 participating students wear Fitbits so the researchers can track their physiological responses. As in the earlier study, participants fill out a clinical assessment to measure their psychiatric distress before and after each session.
Data including the participants’ feelings of relating to the robot, intrinsic motivation, engagement, and adherence will be assessed by the research team, Matarić says.
She says she is proud of the graduate students working on the project, and that watching them grow as engineers is one of the most rewarding parts of working in academia.
“Engineers generally don’t anticipate having to work with human study participants and needing to understand psychology in addition to the hardcore engineering,” she says. “So the students who choose to do this research are just wonderful, caring people.”
Matarić joined IEEE as a graduate student in 1992, the year she published her first paper in IEEE Transactions on Robotics and Automation. The paper, “Integration of Representation Into Goal-Driven Behavior-Based Robots,” described her work on Toto.
As a member of the IEEE Robotics and Automation Society, she says she has gained a community of like-minded people. She enjoys attending conferences including the IEEE International Conference on Robotics and Automation, the IEEE/RSJ International Conference on Intelligent Robots and Systems, and the ACM/IEEE International Conference on Human-Robot Interaction, which is closest to her field of research.
Matarić credits IEEE Life Fellow George Bekey, the founding editor in chief of the IEEE Transactions on Robotics, with recruiting her for the USC engineering faculty position. He knew of her work through her graduate advisor, Brooks, who had published a paper in the journal introducing reactive control and the subsumption architecture, which became the foundation of a new way to control robots. It remains Brooks’s most cited paper. Bekey, editor in chief at the time, helped guide the paper through a challenging review process. Matarić joined Brooks’s lab at MIT two years after its publication, and her work on Toto built on that foundation.
“Joining a society has an impact, and it can be personal,” she says. “That’s why I recommend my students join the organization—because it’s important to get out there and get connected.”