Menlo’s Solution
Menlo enables developers to embody an AI agent in a humanoid robot through a rapid iteration and deployment loop.
In an industry where most robotics companies sell deployment as bespoke engineering, Menlo must turn deployment into a product, not a project. We commoditize the “deployment gap” into a product capability: repeatable processes, packaging, regression tests, and reproducible “site onboarding”.
The Menlo Stack
The Menlo Stack is our core product: an integrated stack for building, training, validating, and deploying agentic behavior into humanoids.
Menlo’s cloud-based platform integrates these components into a single deployment loop (sketched below):
- developers create Agents on the Agent Platform,
- which are tested against scenarios in Uranus (World Simulator),
- refined with new motor skills via Cyclotron,
- deployed to Asimov for real-world execution,
- with telemetry flowing to the Data Engine to improve policies, cycle after cycle, faster than any competitor.
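In code, one pass through this loop might look like the following minimal Python sketch. The types and functions (`Agent`, `simulate`, `train`, `deploy_and_observe`) are illustrative stand-ins for the Agent Platform, Uranus, Cyclotron, Asimov, and the Data Engine, not the actual Menlo SDK:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of one pass through the Menlo deployment loop.
# All names below are illustrative stand-ins, not the actual Menlo SDK.

@dataclass
class Agent:
    name: str
    version: int = 1
    skills: list[str] = field(default_factory=list)

def simulate(agent: Agent, scenarios: list[str]) -> list[str]:
    """Uranus stand-in: run the agent against scenarios, return the ones it fails."""
    return [s for s in scenarios if s not in agent.skills]

def train(agent: Agent, failures: list[str]) -> Agent:
    """Cyclotron stand-in: train new motor skills covering the failed scenarios."""
    return Agent(agent.name, agent.version + 1, agent.skills + failures)

def deploy_and_observe(agent: Agent) -> list[str]:
    """Asimov + Data Engine stand-in: deploy, then return telemetry as new scenarios."""
    return [f"edge-case-seen-by-v{agent.version}"]

agent = Agent("pallet-mover")
scenarios = ["walk-to-shelf", "lift-box"]
for cycle in range(3):                      # each pass makes the next one smarter
    failures = simulate(agent, scenarios)   # test before hardware is at risk
    agent = train(agent, failures)          # close the gaps in simulation first
    scenarios += deploy_and_observe(agent)  # real-world telemetry feeds back in
```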
A platform wins even if hardware commoditizes, and we focus on the cost-collapse levers that enable humanoid robotics to be deployed as an economically viable labor force, not a novelty demo.
Menlo’s Agent Platform
Traditional robotics often treats autonomy as a tightly engineered program. Menlo treats autonomy as an agent payload:
- packaged,
- permissioned,
- constrained by safety envelopes,
- continuously deployed with rollbacks and versioning,
- observable through operational telemetry.
This is a software-native approach to embodied systems. The core idea is that robustness is achieved through iteration, and iteration can only be fast if deployment is standardized. Robotic deployments evolve from once-deployed code and pre-baked intelligence into self-improving agentic payloads.
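One way to picture an agent payload is as a declarative manifest that travels with every release. The sketch below is hypothetical; the field names are our own illustration, not an actual Menlo schema:

```python
# Hypothetical agent-payload manifest: packaged, permissioned, constrained by a
# safety envelope, versioned for rollback, and observable. Field names are
# illustrative only, not an actual Menlo schema.
PAYLOAD = {
    "agent": {"name": "warehouse-picker", "version": "2.4.1"},
    "package": {"image": "registry.example.com/warehouse-picker:2.4.1"},
    "permissions": ["arm.manipulate", "base.navigate"],  # capabilities the agent may use
    "safety_envelope": {
        "max_joint_velocity_rad_s": 1.5,
        "max_end_effector_force_n": 40.0,
        "geofence": "zone-b",
    },
    "rollout": {"strategy": "canary", "rollback_to": "2.4.0"},  # continuous deployment
    "telemetry": {"metrics": ["fall_events", "task_success_rate"], "sink": "data-engine"},
}
```

Treating the payload as data rather than bespoke code is what makes rollbacks, permission audits, and safety reviews routine operations instead of one-off engineering.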
Uranus (World Simulator)
Uranus is our world simulator and digital twin engine for rapid scenario testing and validation. We built Uranus because simulation is where iteration happens, before hardware is ever at risk.
Uranus serves three purposes:
- Scenario generation — Produces high-fidelity, customer-relevant scenarios to stress-test agents before deployment.
- Pre-deployment validation — Enables regression testing and safe exploration in a risk-free environment.
- Hardware-in-the-loop testing — Supports virtual commissioning to reduce real-world testing time.
The point is not realism for its own sake; it is to compress the feedback loop from weeks to hours.
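As an illustration of the pre-deployment gate, a regression suite over Uranus scenarios might be wired up roughly as follows. The `run_scenario` helper and the scenario names are assumptions made for this sketch, not Uranus APIs:

```python
from dataclasses import dataclass

# Hypothetical regression gate: a candidate agent build must clear every
# scenario in the suite before it is allowed near real hardware.

@dataclass
class ScenarioResult:
    name: str
    passed: bool

def run_scenario(agent_build: str, scenario: str) -> ScenarioResult:
    # Placeholder for a Uranus simulation run; always passes in this sketch.
    return ScenarioResult(scenario, passed=True)

REGRESSION_SUITE = ["cluttered-aisle", "slippery-floor", "occluded-shelf"]

def gate(agent_build: str) -> bool:
    results = [run_scenario(agent_build, s) for s in REGRESSION_SUITE]
    failed = [r.name for r in results if not r.passed]
    if failed:
        print(f"blocked before deployment: {failed}")
        return False
    return True

assert gate("warehouse-picker:2.4.1")
```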
Cyclotron (Motor Control Pipeline)
Cyclotron is our motor-control and locomotion training pipeline for robust full-body behaviors. We built Cyclotron because locomotion and manipulation failures are among the most expensive in the field: time-consuming to debug, hazardous to hardware, and blocking to operations.
Cyclotron solves this through domain and dynamics randomization, training behaviors across a wide range of simulated conditions so they transfer reliably to the real world. This bridges the “reality gap” between simulation and hardware, creating a key moat for endurance, safety, and deployment readiness.
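Domain and dynamics randomization is simple to state: sample physical parameters from wide ranges on every training episode, so the learned policy cannot overfit to any single simulator configuration. A minimal sketch follows, with parameter names and ranges invented for illustration rather than taken from Cyclotron:

```python
import random

# Minimal domain/dynamics randomization sketch: every training episode sees a
# different plausible physics configuration, so the policy must generalize
# rather than overfit to one simulator setup. Ranges are invented for illustration.

def sample_dynamics() -> dict:
    return {
        "ground_friction": random.uniform(0.4, 1.2),
        "payload_mass_kg": random.uniform(0.0, 5.0),
        "motor_torque_scale": random.uniform(0.85, 1.15),
        "sensor_latency_ms": random.uniform(5.0, 40.0),
    }

def run_episode(physics: dict) -> None:
    pass  # placeholder for the simulated rollout and policy update

def train_policy(episodes: int) -> None:
    for _ in range(episodes):
        physics = sample_dynamics()  # a new "world" every episode
        run_episode(physics)

train_policy(episodes=1000)
```

The wider the range of conditions a policy survives in simulation, the smaller the reality gap it faces on hardware.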
Asimov (Reference Humanoid)
Asimov is an open-source humanoid reference design that implements the Menlo Stack. We built Asimov for three reasons:
- Prove the development loop — The platform needs a system designed to be deployed repeatedly, with telemetry feeding back into Uranus and Cyclotron.
- Make the platform concrete — Customers can see, touch, and deploy agents to Asimov, turning abstract capabilities into tangible value.
- Enable an open, permissionless supply chain — By open-sourcing the design, we invite any manufacturer to produce Asimov units. This is the PC approach to humanoid robotics: Menlo provides the “Windows” (the platform stack), and a diverse, competitive supply chain brings the BOM cost down.
Our goal is a sub-$30,000 humanoid, achievable not through vertical integration but through competition and commoditization across the supply-chain ecosystem.
Data Engine
The Data Engine is our telemetry and continuous improvement system for closing the development loop. We built it because real-world data is irreplaceable: no simulation can capture every edge case, and the only way to achieve true reliability is to learn from actual deployments.
The Data Engine serves two purposes:
- Operational evidence capture — Records what failed, where, under what conditions, and with what impact, turning failures into training data.
- Closed-loop improvement — Feeds real-world evidence back into Uranus scenarios and Cyclotron training pipelines, so every deployment makes the next one smarter.
Robustness is achieved through iteration; the Data Engine ensures that iteration is informed by reality.
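As a concrete illustration of operational evidence capture, a failure event can be recorded as a structured record that is replayable as a Uranus scenario and usable as Cyclotron training data. The schema below is a hypothetical sketch, not the Data Engine's actual format:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical failure-event record: enough context to reproduce the failure in
# simulation and to fold it into the next training run. Not the actual schema.

@dataclass
class FailureEvent:
    robot_id: str
    agent_version: str
    task: str
    failure_mode: str  # e.g. "grasp_slip", "loss_of_balance"
    conditions: dict   # floor type, payload, lighting, and so on
    impact: str        # e.g. "task_aborted", "hardware_damage"
    timestamp: str

event = FailureEvent(
    robot_id="asimov-007",
    agent_version="2.4.1",
    task="pallet-unload",
    failure_mode="grasp_slip",
    conditions={"floor": "wet_concrete", "payload_kg": 4.2},
    impact="task_aborted",
    timestamp=datetime.now(timezone.utc).isoformat(),
)

new_scenario = asdict(event)  # replayable in Uranus, trainable against in Cyclotron
```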
Internal Capabilities
We take a deliberate approach to building internal capabilities that accelerate our own iteration cycles. Speed is a moat. By building Tokamak, Menlo Cloud, and other infrastructure in-house, we move faster and at a lower burn rate than competitors.
Tokamak
Tokamak is our internal software factory for compressing iteration cycles.
Tokamak serves two purposes:
- Accelerated development — Automates build, test, and deployment pipelines so developers spend time on code, not bureaucracy.
- Closed-loop iteration — Rapid deploy → observe → fix (sketched below). Intelligence gathered from every deployment feeds the next.
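A minimal sketch of that deploy → observe → fix loop as an ordered set of pipeline stages; the stage names and descriptions are illustrative assumptions, not Tokamak's actual configuration:

```python
# Hypothetical pipeline stages for the deploy -> observe -> fix loop.
# Stage names and descriptions are illustrative, not Tokamak's actual config.
PIPELINE = [
    ("build",   "compile firmware and package the agent payload"),
    ("test",    "run unit tests plus the Uranus regression suite"),
    ("deploy",  "canary rollout to a small fleet, with automatic rollback"),
    ("observe", "stream telemetry into the Data Engine"),
    ("fix",     "turn new failure events into issues and retraining jobs"),
]

def run_pipeline() -> None:
    for stage, description in PIPELINE:
        print(f"[{stage}] {description}")  # each stage would invoke real tooling

run_pipeline()
```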
Menlo Cloud
Menlo Cloud is our private cloud for robotics training and development. We built Menlo Cloud because humanoid robotics requires specialized hardware that hyperscaler clouds don’t offer: ARM-based edge compute boards for firmware testing, robot-specific acceleration, and hardware-in-the-loop validation at scale.
Menlo Cloud serves two purposes:
- Specialized robotics infrastructure — Provides compute and edge devices optimized for firmware testing, sensor processing, and real-time control; hardware that public clouds simply don’t stock.
- Predictable unit economics — Eliminates surprise pricing from hyperscalers while enabling us to scale testing hardware as fast as we need it.
Menlo turns deployment into a product, not a project.