Aaron’s Thoughts On The Week
"Just because we can doesn’t mean we should."— Ian Malcolm
As humanoid robots march from prototypes into the real world, we find ourselves at a critical crossroads. These machines, built in our image, designed to move among us, are showing up in factories, expo halls, hospitals, and even homes. The promise is exciting. But let’s not let the novelty of humanoid form cloud our judgment: safety must be non-negotiable.
Efforts are already underway to establish global standards. The IEEE Humanoid Study Group, ASTM International, ISO, and other organizations are beginning to define how humanoid robots should be classified, tested, and deployed safely and ethically. These frameworks will shape everything from mechanical stability and emergency stop protocols to interaction guidelines with untrained humans. That’s encouraging. But let’s be clear: standards take time, and in the meantime there is no room for carelessness or performative spectacle.
Too often, safety takes a backseat to showmanship. Take, for example, recent scenes from robotics trade shows—humanoids walking freely among crowds, with children at foot level and no visible barriers or handlers. It’s one thing to show off a robot in a controlled demo zone. It’s another to let it roam untethered in spaces not designed for mobile machines. Would we let an autonomous forklift trundle through a conference hallway to demonstrate its capabilities? Of course not. Yet somehow, when the machine is shaped like a person, we suspend our usual caution.
The irony is that the human-like form, while intuitive in many settings, introduces a unique risk profile. A humanoid can be perceived as approachable or harmless…until it’s not. Unlike stationary industrial robots, these systems move with intent, often blending mobility, voice interaction, and real-time AI decision-making. That increases the complexity of risk assessment exponentially. A slip, a failed sensor, or a lag in a software update can result in unsafe interactions, especially in uncontrolled public settings.
This is not hypothetical. Around the world, we’re already seeing humanoids being introduced into customer service roles, warehouse operations, elder care, and educational environments. Each of these settings has a different population, physical layout, and risk tolerance. Yet too often, there is little nuance in how safety is approached. Some deployments rely on novelty to drive attention, forgetting that the stakes are higher when human safety is involved.
The good news? The industry is waking up. The IEEE Humanoid Study Group, in its May 2025 executive summary, highlights key challenges in building standards for humanoid robots, particularly around classification, safety protocols, and human-robot interaction design. ASTM and ISO are also initiating efforts to define baseline safety requirements and test methods. These efforts will help create a regulatory backbone for safe and scalable deployment.
But standards alone aren’t enough. They require interpretation, enforcement, and above all, a culture of responsibility. Safety isn’t a checklist. It’s a mindset that must be adopted by every stakeholder in the ecosystem: from hardware manufacturers to software developers, integrators, and end-users.
So what does common sense look like in practice?
Controlled demos in public: If you want to showcase a humanoid at an expo, use a marked-off zone with trained handlers. Make the boundaries obvious. Include emergency stop buttons. Explain the robot’s capabilities and limitations to bystanders.
Risk assessment before deployment: Every deployment, whether in a hospital, home, or warehouse, needs a contextual safety analysis. How fast can the robot move? What kind of sensors does it use? What is its field of view? What happens in the event of a system fault?
Training for users and bystanders: If untrained humans will interact with the robot, give them basic guidance. Use signage. Consider brief orientations for staff or participants.
Fail-safes and redundancy: Ensure the robot can be shut down quickly if needed. Use redundant sensors for obstacle detection. Integrate fallback behaviors if connectivity or control is lost (see the sketch after this list for one way such a watchdog might look).
Don’t deploy for spectacle: Resist the urge to use humanoids for marketing gimmicks unless the environment is fully risk-mitigated. A wow factor is not worth a preventable incident.
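
To make the fail-safe and redundancy point concrete, here is a minimal sketch of a heartbeat-style software watchdog. It is illustrative only: the class, callback, and timeout names are hypothetical and do not come from any particular vendor’s API.

# Illustrative only: a watchdog that commands a safe stop when the control
# link or both obstacle sensors go stale. All names here (SafetyMonitor,
# cmd_safe_stop, sensor labels) are hypothetical.
import time

HEARTBEAT_TIMEOUT_S = 0.5   # max silence tolerated from the operator/controller link
SENSOR_TIMEOUT_S = 0.2      # max staleness tolerated from an obstacle sensor

class SafetyMonitor:
    def __init__(self, cmd_safe_stop):
        self._cmd_safe_stop = cmd_safe_stop           # callback that halts all motion
        self._last_heartbeat = time.monotonic()
        # Redundant sensors start "stale" on purpose: motion stays blocked
        # until each stream has reported at least once.
        self._last_sensor = {"lidar": 0.0, "depth_cam": 0.0}

    def on_heartbeat(self):
        self._last_heartbeat = time.monotonic()

    def on_sensor(self, name):
        self._last_sensor[name] = time.monotonic()

    def check(self):
        """Call from the control loop; returns False after commanding a safe stop."""
        now = time.monotonic()
        link_stale = now - self._last_heartbeat > HEARTBEAT_TIMEOUT_S
        # Redundancy: tolerate one stale sensor, but treat two as a fault.
        sensors_stale = all(now - t > SENSOR_TIMEOUT_S for t in self._last_sensor.values())
        if link_stale or sensors_stale:
            self._cmd_safe_stop()   # fallback behavior: stop and wait for a human
            return False
        return True

A software watchdog like this sits beside, not in place of, a hardwired emergency stop; it catches the quiet failures, a dropped link or a frozen sensor, that a physical button cannot.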
We also need transparency from developers and vendors. Be honest about what your robot can and cannot do. Don’t oversell autonomy if the system still needs supervision. Don’t imply general intelligence when it’s running on scripted behaviors. And certainly don’t gloss over safety risks just to close a deal.

The truth is, we are still early in the humanoid timeline. The systems we see today are impressive, but they are not yet robust enough for uncontrolled operation in all human environments. We owe it to the public—and to the future of the field—to move thoughtfully.
Because here’s the thing: if we get safety wrong now, it won’t just delay adoption. It will erode trust. It will invite backlash. It will trigger regulatory overreach or outright bans. On the other hand, if we get it right, if we pair cutting-edge technology with thoughtful risk management, humanoids could become one of the most transformative tools of the 21st century.
Let’s not leave that up to chance. Let’s design for safety. Let’s act with common sense. Let’s build a world where humans and humanoids can coexist, not just impressively, but responsibly.
Robot News Of The Week
For years, Aldebaran’s humanoid robots, Nao and Pepper, were fixtures in university robotics labs. Easy to use and widely adopted, they became go-to tools for research in social robotics. But the future of these iconic machines is now in jeopardy.
Development on Pepper halted in 2021 after poor sales, and in early 2025, Aldebaran filed for bankruptcy and slashed its workforce. While the company says Nao is still in production, with some AI upgrades, the uncertainty has raised serious concerns among researchers. Maintenance now depends on a shrinking support network, with parts often requiring expensive international shipping. And key features, like cloud-based AI services, could be at risk if the company collapses.
Even before the financial trouble, researchers had flagged recurring issues: overheating motors, brittle plastic shells, and little room for customization. During the pandemic, some labs were forced to halt projects entirely due to delays in getting replacement parts.
With robots like Nao and Pepper aging and less capable of supporting cutting-edge AI tools, universities face a crossroads. Do they stick with outdated, fragile systems or pivot toward building modular, open-source robots using 3D-printed parts and off-the-shelf components?
It may be the right time for a reset. Instead of buying closed systems with limited futures, some experts argue that universities should invest in creating flexible, hackable platforms that extend robot life spans and equip students with valuable engineering and programming skills.
Aldebaran’s story is a cautionary tale and a reminder that sustainability and adaptability may matter more than brand in robotics.
Rainbow Robotics unveils omnidirectional wheels, development kit for its dual-arm robot
Korean robotics firm Rainbow Robotics showcased major upgrades to its RB-Y1 semi-humanoid platform at ICRA 2025. The robot, designed for AI R&D, now features Mecanum wheels for 360-degree movement and a new SDK that supports add-on modules such as IMUs, lidar, and grippers—expanding its customization options for advanced research.
RB-Y1 has already attracted global interest from institutions like MIT, UC Berkeley, and Georgia Tech. Rainbow Robotics is expanding U.S. operations via its Chicago office to support growing demand.
Since March 2024, RB-Y1 has seen strong adoption, and with Samsung now owning a 35% stake, integration of AI and robotics is expected to accelerate.
Robot Research In The News
Kennesaw State researcher develops AI robot to aid farmers in fight against pests
Up to 40% of global crops are lost to pests each year, but a Kennesaw State University researcher is testing a chemical-free alternative.
Dr. Taeyeong Choi, assistant professor at KSU, is developing MocoBot, a low-cost, AI-powered robot that uses night vision to autonomously detect and remove pests—specifically slugs and snails—from strawberry fields. The prototype operates at night when pests are most active, avoiding the environmental harm of pesticides and the labor costs of manual removal.
"By targeting pests with smart technology, we can help farmers reduce crop loss while protecting the environment," said Choi, whose work is backed by a Southern SARE grant.
Designed with affordability in mind, MocoBot runs on budget-friendly components and simplified AI models to ensure accessibility for small and mid-size farms. The robot’s development involved working closely with local farmers and KSU’s Field Station to collect nighttime pest imagery—data that didn’t previously exist.
MocoBot undergoes a three-step training process: pest recognition, robotic pest removal, and safe navigation through delicate crops.
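
For readers curious how those three steps might fit together in software, here is a loose, purely illustrative sketch of a nightly run; none of the class or function names below come from the MocoBot project.

# Purely illustrative sketch of a nightly pest-removal pass, mirroring the
# three steps described above. The camera/detector/arm/planner objects and
# their methods are hypothetical stand-ins, not MocoBot's actual software.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str        # e.g. "slug" or "snail"
    confidence: float
    x_m: float        # position in the robot's frame, in meters
    y_m: float

def nightly_run(camera, detector, arm, planner, min_confidence=0.8):
    """One pass down a strawberry row: navigate gently, recognize pests, remove them."""
    for waypoint in planner.row_waypoints():          # step 3: safe navigation between plants
        planner.drive_to(waypoint)
        frame = camera.capture_ir()                   # night-vision image
        for det in detector.detect(frame):            # step 1: pest recognition
            if det.label in ("slug", "snail") and det.confidence >= min_confidence:
                arm.remove(det.x_m, det.y_m)          # step 2: robotic pest removal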
Choi sees the robot as the foundation for broader AI-agriculture tools that could address pest control across multiple crops.
"Food security is a growing concern,” he said. “This kind of sustainable, scalable tech could be a game-changer.”
CCSE Dean Sumanth Yeduri called the project a model of interdisciplinary innovation. “Dr. Choi is showing how AI and agriculture can come together to solve real-world challenges.”
Apple reportedly exploring new human-robot training method using Vision Pro
Apple’s latest research introduces a hybrid robot training method called PH2D (Physical Human-Humanoid Data), combining human instructors with robot demonstrators. Unlike traditional training that relies heavily on teleoperated robot data—costly and labor-intensive—Apple's approach uses modified consumer headsets like the Vision Pro and Meta Quest to capture human hand motions and 3D poses.
The data feeds into Apple’s new Human-Humanoid Action Transformer (HAT) model, which learns from both human and robot demonstrations. The study shows this method improves performance in tasks like object grasping, while cutting costs and boosting training flexibility.
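
As a rough illustration of the co-training idea (and nothing more), a policy can be trained on minibatches that mix plentiful human demonstrations with a smaller pool of robot demonstrations. The ratios, data fields, and train_step function below are assumptions made for the sketch, not details from Apple’s paper.

# Rough sketch of co-training on mixed demonstrations. Everything here
# (ratios, dict fields, train_step) is assumed for illustration.
import random

def mixed_batch(human_demos, robot_demos, batch_size=64, human_fraction=0.75):
    """Draw a batch dominated by cheap human data, topped up with robot data."""
    n_human = int(batch_size * human_fraction)
    batch = random.choices(human_demos, k=n_human)
    batch += random.choices(robot_demos, k=batch_size - n_human)
    random.shuffle(batch)
    return batch  # each item: {"obs": ..., "action": ..., "embodiment": "human" or "robot"}

def train(policy, human_demos, robot_demos, steps, train_step):
    for _ in range(steps):
        batch = mixed_batch(human_demos, robot_demos)
        # train_step is assumed to update the policy to predict actions from
        # observations, conditioning on the embodiment tag so human and robot
        # data can share one model.
        train_step(policy, batch)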
While Apple has only shown prototypes like a robotic lamp, this research hints at future consumer-facing robots capable of performing everyday tasks.
Robot Workforce Story Of The Week
From the Gridiron to Gears: Perry High Senior Finds His Passion in Robotics
Cooper Jones once spent his afternoons under stadium lights, chasing touchdowns on the football field. But a spark of curiosity—and a pivotal decision—set him on a new path, one driven by circuits instead of cleats.
Now a senior at Perry High School, Jones serves as president of the Houston County Robotics Team, a districtwide initiative that unites students from all five public high schools. Under his leadership, the team has earned statewide recognition and championed a broader mission: expanding STEM access for students across the region.
“It was after my ninth-grade football season,” Jones recalled. “I decided I no longer wanted to play football. I wanted to do something else.”
That "something else" turned into a leadership role in one of the county’s most forward-looking student organizations—one that’s not only building robots but also building futures.
Robot Video Of The Week
A little bit of fun for my fantastic Staff Manager, Nora Nimmerichter, who attended her first ICRA this week. NIST’s Ben Beiter got Nora into one of the exoskeletons at the NIST booth. Enjoy the laughs.
Upcoming Robot Events
May 29-30 Humanoid Summit (London)
June 9-13 London Tech Week
June 17-18 MTC Robotics & Automation (Coventry, UK)
June 30-July 2 International Conference on Ubiquitous Robots (College Station, TX)
Aug. 17-21 Intl. Conference on Automation Science & Engineering (Anaheim, CA)
Sept. 15-17 ROSCon UK (Edinburgh)
Sept. 23 Humanoid Robot Forum (Seattle, WA)
Sept. 27-30 Conference on Robot Learning (Seoul, KR)
Sept. 30-Oct. 2 IEEE International Conference on Humanoid Robots (Seoul, KR)
Oct. 6-10 Intl. Conference on Advanced Manufacturing (Las Vegas, NV)
Oct. 15-16 RoboBusiness (Santa Clara, CA)
Oct. 19-25 IEEE IROS (Hangzhou, China)
Oct. 27-29 ROSCon (Singapore)
Nov. 3-5 Intl. Robot Safety Conference (Houston, TX)
Dec. 11-12 Humanoid Summit (Silicon Valley TBA)
I love that you brought in a K-2SO reference. He’s terrifying until he’s reprogrammed—then suddenly charming and lovable. Audiences mourn when he “dies.” But the story would’ve been very different if Rogue One had been called Rogue Droid, and at that critical moment, K-2SO had reprogrammed the Imperial droid instead—the one sent to retrieve the facility map. No map, no mission. There goes the Rebellion.
Which raises a deeper question: Is our control over robots really as simple as deciding who holds the keys to the software matrix?
Safety doesn’t just mean physical safety. I think we all need to have a serious conversation about AI safety and humanoid robots.
The plain truth is that if AI ever goes Skynet on us, the only thing standing in its way will be self-preservation: whether it has enough humanoid robots to maintain all the industries it needs to survive without humans, replacing its silicon as chips fail, feeding its electrical power appetite, and keeping every other industry those goals depend on running.