One day, someone will come up with an equation that precisely defines the tipping point between our natural laziness and our willingness to give up our privacy to relieve some of it. I don’t think today is that day.
Years ago I wrote about a robot that you could teleoperate from anywhere in the world via WiFi. We tried it in our offices and I even passed it around my house, which freaked my family out. At the time, we didn’t think too much about the privacy implications because I, not a third party, was the one navigating the robot and seeing what it could see.
TechRadar AI Week 2025
This article is part of TechRadar’s AI Week 2025. Covering the basics of artificial intelligence, we’ll show you how to get the most out of ChatGPT, Gemini or Claude, as well as in-depth features, news and the main talking points in the world of AI.
With commercials for 1X’s as-yet-unreleased robot appearing on subway advertising screens, as well as limited testing in select journalists’ homes, consumers are being asked to consider whether they would invite the 5’6″, 66 lb. humanoid robot into their homes. Although the $20,000 robot (or $499 per month to rent) is designed for autonomy, the reality is that it could encounter many unfamiliar scenarios in your home. In those cases, 1X technicians can, with your permission, take over, teleoperate the robot and, apparently, use the session to train its Redwood AI.
This news gives people pause even in casual conversation, so we decided to poll our nearly half a million WhatsApp subscribers with this question:
“The 1X Neo is a new $20,000 home robot that can be controlled remotely by a human. But what do you think of an AI robot that learns skills based on your home data?”
While the plurality (409 people) said they weren’t sure but still thought a “cleaning robot would be great,” a sizable number (340 respondents) were far less optimistic, choosing “That sounds horrible and is a complete violation of privacy.”
Just 73 described Neo as what they “always dreamed of,” and only 48 were happy to let 1X train Neo in their homes.
I understand the concern and, to be honest, it’s far from new. In 2019, as Sony unveiled the latest refresh of its AIBO robot dog, some expressed concerns about a mobile robot with a camera in its snout, built-in facial recognition AI (useful for AIBO to remember family faces), and Sony’s access to all data collected.
At the time, Sony stored data locally and in its cloud, but hashed it in a way that was not identifiable as personal information. Despite this, the robot could not be sold in Illinois because its capabilities circumvented the state’s biometric information privacy law.
With modern AI and 1X’s far more powerful models, you’d expect those privacy concerns to multiply.
I asked technology and regulatory attorney Kathleen McGee via email how concerned consumers should be.
McGee, previously a government attorney and most recently chief of the Internet and Technology Bureau in the New York Attorney General’s Office, is now a partner in Lowenstein Sandler’s data privacy, security, safety and risk management practice. She told me that the data companies like 1X collect “ranges from the mundane (where you place your dish soap) to the highly personal (real-time video capture of your home, its physical layout, and images of you and your home’s occupants, including children).” Companies, she added, need “high-level security measures to ensure that data is anonymized, retained only when necessary, and that the AI model(s) are trained in accordance with ethical and legal standards.”
Clarity, McGee notes, is key. “Intended users of these products should be very clear about how data is used and shared, as well as what rights users have to delete the data. When an AI model is created and trained on your sensitive data, it is virtually impossible to get rid of it completely.”
1X, for its part, makes clear in its FAQ that while data collected during “real-world tasks” is used to build NEO’s core intelligence and improve both its capabilities and security, “we do not use this data to build your profile, nor do we sell this data.” If you don’t want to participate in NEO’s continuous improvement, you can always opt out.
Data aside, however, a robotic camera attached to fully articulated limbs and hands raises the specter of a remote-controlled rampage through your home. Reddit is well stocked with such concerns.
Intended users of these products must be very clear about how data is used and shared.
Kathleen McGee
In a scathing post about Neo bot privacy, Reddit user GrandyRetroCandy wrote:
“If law enforcement comes to the 1X office. They say ‘we have a warrant.’ They can order an operator to take control of the Neo Robot, and while you’re shopping or away from home, they could force this robot to go through your wallet. Your newspaper. Your house. Your drawers. And see everything on you.”
This sounds terrifying, but GrandyRetroCandy quickly clarified:
“Technically, this part is not legal. It is technically possible (it could be done), but it is not legal. But if they have a warrant, they can see all the camera footage stored by your Neo Robot. This part is legal.”
McGee also told me, “Another general concern with these types of domestic products is the potential exposure of data that a user may believe to be private but may be subject to a subpoena, search warrant, or intrusion by a threat actor.” User privacy concerns cannot be separated from security concerns.
AI needs your data…and you need your privacy
Realistically, the idea of someone suddenly using the 1X Neo to roam your house and rifle through your stuff is well beyond the realm of probability, perhaps even possibility.
The truth is that humanoid robots will never become practical and useful without a significant amount of data input from every user and household, especially in the beginning, when they are bound to make mistakes.
For robotics and automation, one of the major advances of recent years has been simulation training, which has helped both autonomous driving and many of these early humanoid robots. Even so, we can see how difficult it is to prepare humanoid robots for the unexpected.
At this point, the 1X Neo beta is so unprepared that most of its abilities are teleoperated. Readying humanoid robots for the spotlight remains difficult work: in Russia, the AIdol robot was so ill-prepared for the stage lights that it crashed spectacularly.
Freely giving up data, so long as it can’t be used to invade our privacy, will help these bots learn and improve quickly, but there must be limits and controls.
Much of the responsibility falls on companies like 1X, especially those developing AI. As McGee pointed out in an email: “Many jurisdictions have privacy laws, and for AI developers, the focus should always be on complying with the strictest regulations. Again, ethics and law have their place here, and we advise our clients to build a strong foundation of trust and transparency to ensure the stability and longevity of their AI design.”
As of April this year, only 20 US states had data privacy laws. The EU, at least, has GDPR (the General Data Protection Regulation), which is so strict that some AI technologies have been withheld from the 27 countries that make up the EU. The UK has a nearly identical law of its own.
There’s probably a happy medium between what we have here in the US and GDPR, but the intention should be the same: to safely train an army of humanoid robots that know how to help us, even doing our housework for us, without setting off massive privacy alarms.
