Performance Optimization

Get the best speed and quality from your Thox.ai device.


Thox.ai is optimized out of the box, but you can fine-tune performance for your specific workflow. This guide covers hardware, network, and software optimizations to get the fastest responses with the best quality.

Expected Performance (Hybrid Architecture)

With the hybrid Ollama + TensorRT-LLM architecture, you should expect:

45-72 tok/s (7B models on Ollama)
45-56 tok/s (14B models on TensorRT-LLM)
20-24 tok/s (32B models on TensorRT-LLM)
<50ms first-token latency

TensorRT-LLM provides 60-100% faster inference on 14B+ models compared to Ollama alone.
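In practical terms, at 50 tok/s a 500-token completion from a 14B model streams in roughly 10 seconds, with under 50ms before the first token appears.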

Hybrid Backend Selection

Let the router auto-select backends

The smart router automatically routes 7B models to Ollama and 14B+ models to TensorRT-LLM for optimal performance.

Use TensorRT-LLM for large models

TensorRT-LLM provides 60-100% faster inference on 14B/32B models with INT4/INT8 quantization.

Pre-load TensorRT engines

Use thox tensorrt load <model> to pre-load engines into GPU memory for instant inference.

Check backend in API responses

API responses include "backend": "ollama" or "backend": "tensorrt" to confirm which engine was used.
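For example, you can confirm routing from the command line. The snippet below assumes the device exposes an Ollama-compatible /api/generate endpoint on thox.local port 11434 and uses a placeholder model name; substitute your device's actual API path and an installed model:

# Endpoint, port, and model name are assumptions; adjust to your setup
curl -s http://thox.local:11434/api/generate \
  -d '{"model": "<14b-model>", "prompt": "hello", "stream": false}' | jq .backend
# Expect "tensorrt" for a 14B+ model and "ollama" for a 7B model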

Network Configuration

Use Ethernet for lowest latency

Wired connections add ~5ms latency vs 20-50ms for Wi-Fi. Essential for real-time completions.

Optimize network path

Place the device on the same network segment as your development machine. Avoid routing through VPNs.

Use local DNS

Configure your router to resolve thox.local locally, or use the IP address directly in IDE settings.
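If thox.local doesn't resolve reliably, a static hosts entry on your development machine is a simple alternative. The IP below is a placeholder; use your device's actual address (for example, from your router's client list):

# Add to /etc/hosts (Linux/macOS) or C:\Windows\System32\drivers\etc\hosts (Windows)
192.168.1.50    thox.local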

Thermal Management

Ensure proper ventilation

Leave at least 2 inches (5 cm) of clearance on all sides. Don't stack or enclose the device, and place it on a hard, flat surface.

Monitor thermal status

Run thox thermal status to check temperatures. Throttling begins at 80°C sustained. (For continuous monitoring, see the example at the end of this section.)

Consider ambient temperature

Best performance at 0-35°C (32-95°F). In warm environments, a small fan can help.
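If you have shell access wherever the thox CLI runs, a simple way to keep an eye on temperatures during a long session is to poll the documented status command (the 10-second interval is arbitrary):

watch -n 10 thox thermal status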

Context Optimization

Minimize context size

Close unnecessary files in your IDE. Smaller context = faster processing.

Use .thoxignore

Exclude build directories, node_modules, and large files from indexing (see the example at the end of this section).

Target specific files

Use @filename references in chat instead of project-wide context when possible.
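As an illustration, a minimal .thoxignore covering the exclusions above might look like the following. This assumes gitignore-style patterns; check your device's documentation for the exact syntax:

# Build output and dependencies
node_modules/
build/
dist/
# Large binary assets
*.bin
*.zip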

Useful Commands

thox status

View overall system and hybrid backend status

thox tensorrt status

Check TensorRT-LLM engines and GPU memory

thox tensorrt load <model>

Pre-load a TensorRT engine into GPU

thox router status

View smart router configuration and backends

thox thermal status

Check current temperatures and throttle state

thox benchmark

Run performance benchmark on both backends

thox tensorrt build --all

Build TensorRT engines for all models

Advanced Tuning

Adjust Thread Count

By default, the device uses all available cores. Reduce threads if you need to reserve CPU for other tasks:

thox config set inference.threads 4

Adjust Context Length

Reduce the context length for faster processing if you don't need the full window:

thox config set inference.context_length 2048

Enable Flash Attention

Faster attention mechanism for compatible models (enabled by default):

thox config set inference.flash_attention true
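Putting these together, a conservative tuning pass might reserve some CPU, shorten the context window, and then re-measure. The values below are illustrative; all three commands are documented above:

thox config set inference.threads 6
thox config set inference.context_length 4096
thox benchmark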

Benchmarking Your Device

Run the built-in benchmark to measure your device's performance:

thox benchmark --full

This tests inference speed, memory bandwidth, and network latency. Results are compared to expected baselines and saved to /var/log/thox/benchmark.log.
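To review past runs, you can read the saved log directly (this assumes shell access to the device):

tail -n 50 /var/log/thox/benchmark.log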
