Netralis is the OS layer that brings large language models to low-cost microcontrollers — voice assistants, AI companions, and smart controllers on hardware anyone can afford.
Platform Capabilities
Four core layers — from hardware abstraction to cloud LLM routing — so you ship the experience, not the plumbing.
Local models handle simple tasks; complex queries auto-route to cloud LLMs — optimizing cost and latency in real time.
Streaming ASR + LLM + TTS with wake-word, speaker recognition, and multi-language support out of the box.
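The data flow of such a pipeline can be sketched as a chain of streaming stages: audio is gated on the wake word, partial ASR text feeds the LLM, and LLM tokens are synthesized to audio as they arrive. The function names and stub stages below are placeholders under assumed interfaces, not the Netralis SDK.

```python
# Illustrative shape of a streaming voice pipeline:
# wake word -> ASR -> LLM -> TTS, each stage consuming the previous
# stage incrementally instead of waiting for the full utterance.
from typing import Callable, Iterator

def wake_word_gate(frames: Iterator[bytes],
                   detector: Callable[[bytes], bool]) -> Iterator[bytes]:
    """Drop audio until the wake word fires, then pass frames through."""
    awake = False
    for frame in frames:
        if not awake:
            awake = detector(frame)
            continue
        yield frame

def pipeline(frames, detector, asr, llm, tts) -> Iterator[bytes]:
    """Every stage streams: partial ASR text feeds the LLM, and LLM
    tokens are synthesized to audio chunks as they arrive."""
    speech = wake_word_gate(frames, detector)
    for text_chunk in asr(speech):
        for token in llm(text_chunk):
            yield tts(token)

# Stub stages, just to show the end-to-end data flow.
frames = iter([b"noise", b"hey-device", b"what time is it"])
detector = lambda f: f == b"hey-device"
asr = lambda speech: (f.decode() for f in speech)   # audio -> text chunks
llm = lambda text: iter([f"reply:{text}"])          # text -> tokens
tts = lambda token: token.encode()                  # token -> audio bytes

print(list(pipeline(frames, detector, asr, llm, tts)))
```

Because each stage is a generator, first audio out can start before the user has finished speaking, which is what keeps perceived latency low on constrained hardware.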
One OS, many chips. Unified APIs for sensors, displays, audio, and OTA across ESP32-S3, C3, and P4.
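The "one OS, many chips" idea amounts to a single driver interface with per-chip backends chosen at build or boot time, so app code never sees which ESP32 variant it runs on. The class and function names below are illustrative assumptions, not the actual Netralis HAL.

```python
# Sketch of a unified hardware-abstraction layer: app code asks for an
# AudioDriver; the OS selects the chip-specific backend. Illustrative only.
from abc import ABC, abstractmethod

class AudioDriver(ABC):
    @abstractmethod
    def capture(self, ms: int) -> bytes: ...

class Esp32S3Audio(AudioDriver):
    def capture(self, ms: int) -> bytes:
        return b"\x00" * (16 * ms)   # stand-in for I2S DMA capture

class Esp32C3Audio(AudioDriver):
    def capture(self, ms: int) -> bytes:
        return b"\x00" * (8 * ms)    # smaller chip, lower sample budget

BACKENDS = {"esp32s3": Esp32S3Audio, "esp32c3": Esp32C3Audio}

def audio_for(chip: str) -> AudioDriver:
    """Application code calls this once; everything after is portable."""
    return BACKENDS[chip]()

print(len(audio_for("esp32s3").capture(10)))  # 160
```

The same pattern extends to sensors, displays, and OTA: one interface per subsystem, one backend per chip.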
MCP-based agent framework lets third-party developers build AI apps that run on any Netralis-powered device. No firmware expertise required — just write your agent logic and deploy over-the-air.
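A developer-facing agent in such a framework might look like the sketch below: register tools with a decorator, let the model invoke them by name, and ship the whole thing over-the-air. The `Agent` class and its methods are hypothetical stand-ins, not the real Netralis or MCP API surface.

```python
# Hypothetical minimal agent: the developer exposes functions as
# MCP-style tools; the OS wires them to the model and deploys OTA.
from typing import Callable

class Agent:
    def __init__(self, name: str):
        self.name = name
        self.tools: dict[str, Callable[..., str]] = {}

    def tool(self, fn: Callable[..., str]) -> Callable[..., str]:
        """Decorator: register a function as a callable tool."""
        self.tools[fn.__name__] = fn
        return fn

    def call(self, tool_name: str, **kwargs) -> str:
        """In production the model picks the tool; here we call directly."""
        return self.tools[tool_name](**kwargs)

agent = Agent("thermostat")

@agent.tool
def set_temperature(celsius: float) -> str:
    return f"target set to {celsius}C"

print(agent.call("set_temperature", celsius=21.5))  # target set to 21.5C
```

The point of the framework is that this is all the developer writes: no drivers, no RTOS tasks, no flashing tools.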
Cloud Infrastructure
Netralis's cloud layer runs end-to-end on Google Cloud — from large-model inference to device fleet telemetry — so we focus on the edge while Google handles the planet-scale infrastructure.
Roadmap
Concrete, dated milestones from private SDK to commercial release. Updated as we ship.
Founder
15 years of software engineering experience, including a decade at Ant Group (Alipay) — one of the world's largest fintech platforms, serving over 1 billion users and processing trillions of transactions annually. Now based in Edmonton, Canada, building Netralis to make LLM intelligence accessible on the cheapest hardware on the planet.
Early Access
Building on ESP32 hardware? Investing in edge AI? Drop your email — we'll reach out when the SDK alpha opens in Q2 2026.