ThoxOS™
Purpose-built edge AI operating system
Overview
A custom operating system optimized for AI workloads, featuring hardware-accelerated inference for maximum performance on large models.
ThoxOS™ is engineered from the ground up to deliver uncompromising AI performance while maintaining the security, reliability, and developer experience that professionals demand. Every component has been carefully optimized for the unique requirements of edge AI inference.
Key Features
Hardware-Accelerated Inference
Optimized inference engine for maximum performance across all model sizes.
Performance Optimized
The hybrid runtime pairs every model with its fastest backend, delivering blazing-fast performance even on large models.
Secure by Design
Hardware root of trust and a verified boot chain secure the platform from power-on. HIPAA and GDPR ready.
Silent Operation
Intelligent thermal management maintains whisper-quiet operation below 25 dBA. Perfect for any workspace.
Seamless Updates
Over-the-air updates with automatic rollback protection for reliability.
User Friendly
Intuitive web dashboard, easy setup, and full API compatibility for any workflow integration.
Technical Highlights
System Components
Thermal Management
Intelligent cooling algorithms maintain optimal performance while keeping noise levels below 25 dBA during typical workloads.
Security Framework
Hardware root of trust, secure boot chain, and encrypted storage protect your data and models.
Connectivity Stack
Full support for WiFi 6E, Bluetooth 5.3, 2.5 Gbps Ethernet, and USB 3.2 with optimized drivers for low latency.
Hybrid AI Runtime
Ollama Backend (7B Models)
- 45-72 tokens/s inference speed
- Quick model swapping
- 100+ compatible models
- Port 11434
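Since the 7B backend exposes Ollama's standard HTTP API on port 11434, it can be queried directly with Ollama's generate endpoint. A minimal sketch in Python; the model identifier "thox-coder-7b" is an assumed name for the pre-installed THOX.ai Coder 7B model, not confirmed by this document:

```python
import json
import urllib.request

def build_payload(model: str, prompt: str) -> dict:
    """Build a non-streaming request body for Ollama's /api/generate."""
    return {"model": model, "prompt": prompt, "stream": False}

def ollama_generate(model: str, prompt: str,
                    host: str = "http://localhost:11434") -> str:
    """POST a prompt to the Ollama backend and return the generated text."""
    data = json.dumps(build_payload(model, prompt)).encode()
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

if __name__ == "__main__":
    # Requires a running ThoxOS Ollama backend on port 11434.
    print(ollama_generate("thox-coder-7b", "Explain quantization in one line."))
```

Calling the backend directly bypasses the smart router, which is useful when you want to pin a request to the 7B runtime regardless of routing rules.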
Hardware-Accelerated Backend (14B+)
- 60-100% faster inference than the Ollama backend
- Compressed model execution
- Native edge AI compute execution
- Port 11435
Smart Router (Port 8080)
Automatically routes requests to the optimal backend based on model size. 7B models → Ollama, 14B+ models → hardware-accelerated inference. OpenAI-compatible API with backend info in responses.
Pre-installed Models
- THOX.ai Coder 7B (Ollama)
- THOX.ai Coder 14B (Accelerated)
- THOX.ai Coder 32B (Accelerated)
- Model Context Protocol (MCP) support
Developer Tools
- Web dashboard with inference engine status
- THOX CLI + inference engine builder
- SSH access enabled
- systemd services for boot
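Because the backends run as systemd services, their health can be checked programmatically. A minimal sketch; the unit names are taken from the boot log below and may differ on your build:

```python
import subprocess

# Service names as shown in the ThoxOS boot log (an assumption here).
SERVICES = [
    "thox-ollama.service",       # Ollama runtime, port 11434
    "thox-accelerated.service",  # hardware-accelerated backend, port 11435
    "thox-api-hybrid.service",   # smart router, port 8080
]

def parse_active(output: str) -> bool:
    """Interpret `systemctl is-active` output: only 'active' means healthy."""
    return output.strip() == "active"

def is_active(service: str) -> bool:
    """Ask systemd whether a unit is currently active."""
    result = subprocess.run(
        ["systemctl", "is-active", service],
        capture_output=True, text=True, check=False,
    )
    return parse_active(result.stdout)

if __name__ == "__main__":
    # Requires a ThoxOS system with systemd; run over SSH or locally.
    for svc in SERVICES:
        print(svc, "active" if is_active(svc) else "NOT active")
```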
Boot Experience
[THOX.AI ASCII banner]

ThoxOS v1.1 - Hybrid AI Inference Engine
Platform: Nova edge AI compute module

Starting services...
  ✓ thox-ollama.service       [11434] Ollama Runtime
  ✓ thox-accelerated.service  [11435] Hardware-Accelerated Inference
  ✓ thox-api-hybrid.service   [8080]  Smart Router

Ready for inference in 8.2 seconds
ThoxOS™ boots directly into an optimized AI-ready state with hybrid inference. The smart router automatically selects Ollama (7B) or hardware-accelerated inference (14B+) for optimal performance.
Legal Notice
ThoxOS™ is a trademark of THOX.ai LLC. All rights reserved. © 2026 THOX.ai LLC.
The ThoxOS™ software, including its architecture, design, source code, object code, algorithms, data structures, user interface, APIs, and all associated documentation, is proprietary and confidential information of THOX.ai LLC. This software is protected by U.S. and international copyright laws, trade secret laws, and other intellectual property laws.