Model Compatibility

Model compatibility catalog, server-rendered from the compat-mesh manifest: verified entries plus a watchlist of models still under evaluation.

Manifest status: Stable

Last updated 4 months ago

Manifest-driven model compatibility

This catalog is generated from the published compat-mesh manifest and re-fetched every five minutes. Verified entries have passed at least one evidence check; watchlist entries are under evaluation and not yet recommended for production.
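As a sketch of how a client might consume this feed, the snippet below filters verified entries out of a small manifest sample. The three-column layout (model tag, status, evidence) is a hypothetical simplification; the real compat-mesh schema may differ.

```shell
# Hypothetical slice of the compat-mesh manifest, one entry per line:
# <model tag> <status> <evidence>. Real manifest fields may differ.
manifest='gemma3:8b verified benchmark
qwen3:14b verified jetson-ai-lab
mixtral:8x22b watchlist pending'

# Keep only entries that have passed at least one evidence check.
verified=$(printf '%s\n' "$manifest" | awk '$2 == "verified" { print $1 }')
printf '%s\n' "$verified"
```

Watchlist entries would be selected the same way by matching `$2 == "watchlist"`.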

Vision Models

Native image understanding, medical imaging, and OCR in 32+ languages.

Multilingual

Support for 12 to 32+ languages, including English, Spanish, Chinese, and Arabic.

Performance

Optimized with on-device compression for Nova hardware.

Verified Compatibility

Models that have passed Jetson AI Lab validation, benchmark, or manual review.

Showing 8 compatible models

Coordinator

Alibaba · Qwen 3

Lightweight cluster orchestration and management model.

Size: 4B
Speed: <100ms
Memory: 3GB
Context: 16K tokens
Min Devices: 1x
Evidence: Jetson AI Lab
Device Fit: thox-nova · magstack-2x · magstack-4x · magstack-8x
THOX-ai/thox-cluster-coordinator
Recommended

Ministral-3 8B

Mistral · Ministral-3

Edge-optimized vision model with 32+ languages. Perfect for single devices with vision needs.

Vision · Multilingual · Tools

Size: 8B
Speed: 40-60 tok/s
Memory: 10GB
Context: 256K tokens
Min Devices: 1x
Evidence: Jetson AI Lab
Device Fit: thox-nova · magstack-2x · magstack-4x · magstack-8x
Best For: Healthcare · Education · Legal
ministral-3:8b

Gemma 3 8B

Google · Gemma 3

Google's efficient vision model optimized for single GPU. Excellent balance of performance and capability.

Vision · Tools

Size: 8B
Speed: 38-55 tok/s
Memory: 10GB
Context: 128K tokens
Min Devices: 1x
Evidence: Benchmark
Device Fit: thox-nova · magstack-2x · magstack-4x · magstack-8x
Best For: Enterprise · Research · Development
gemma3:8b
Recommended

Qwen 3 14B

Alibaba · Qwen 3

Advanced reasoning model with vision and multilingual support. Excellent for complex professional tasks.

Vision · Multilingual · Tools · Thinking

Size: 14B
Speed: 30-45 tok/s
Memory: 14GB
Context: 128K tokens
Min Devices: 1x
Evidence: Jetson AI Lab
Device Fit: thox-nova · magstack-2x · magstack-4x · magstack-8x
Best For: Research · Legal · Finance
qwen3:14b

Phi-4 Mini (3.8B)

Microsoft · Phi-4 Mini

Microsoft's compact model with exceptional performance. Multilingual with function calling.

Multilingual · Tools

Size: 3.8B
Speed: 70-95 tok/s
Memory: 4GB
Context: 128K tokens
Min Devices: 1x
Evidence: Benchmark
Device Fit: thox-nova · magstack-2x · magstack-4x · magstack-8x
Best For: Education · Business · Development
phi4:mini

Llama 3.2 8B

Meta · Llama 3.2

Meta's reliable foundation model. Excellent for general professional use.

Size: 8B
Speed: 42-65 tok/s
Memory: 10GB
Context: 128K tokens
Min Devices: 1x
Evidence: Vendor Docs
Device Fit: thox-nova · magstack-2x · magstack-4x · magstack-8x
Best For: General
llama3.2:8b

Qwen 2.5 Coder 14B

Alibaba · Qwen 2.5 Coder

State-of-the-art coding model with reasoning improvements and 128K context.

Tools · Thinking

Size: 14B
Speed: 28-42 tok/s
Memory: 14GB
Context: 128K tokens
Min Devices: 1x
Evidence: Benchmark
Device Fit: thox-nova · magstack-2x · magstack-4x · magstack-8x
Best For: Software Development
qwen2.5-coder:14b

DeepSeek-Coder-V2 16B

DeepSeek · DeepSeek-Coder-V2

Advanced coding model with MoE architecture. Excellent for software engineering.

Tools

Size: 16B
Speed: 25-38 tok/s
Memory: 16GB
Context: 64K tokens
Min Devices: 1x
Evidence: Benchmark
Device Fit: thox-nova · magstack-2x · magstack-4x · magstack-8x
Best For: Software Development
deepseek-coder-v2:16b

Compatibility Watchlist

Unverified — under evaluation. Not yet recommended for production workloads.

Cluster 70B

Alibaba · Qwen 3

Performance benchmarks pending Jetson AI Lab validation.

Size: 72B
Memory: 140GB
Min Devices: 2x
Status: Review required
THOX-ai/thox-cluster-70b
Cluster 100B

Alibaba · Qwen 3

Memory footprint requires 4x stack; quantization profile pending.

Size: 110B
Memory: 220GB
Min Devices: 4x
Status: Review required
THOX-ai/thox-cluster-100b
Cluster 200B

Meta · Llama 3.3

Exceeds standard MagStack memory; awaiting 8x cluster validation.

Size: 405B
Memory: 810GB
Min Devices: 8x
Status: Review required
THOX-ai/thox-cluster-200b
GPT-OSS 120B

OpenAI · GPT-OSS

Memory footprint exceeds 8x MagStack budget; cloud-only until quantization or 16x stack lands.

Size: 120B
Memory: 240GB
Min Devices: 12x
Status: Cloud-only, under evaluation
gpt-oss:120b
Mixtral 8x22B

Mistral · Mixtral

Promising community reports; awaiting first-party Jetson AI Lab benchmark before approval.

Size: 141B
Memory: 90GB
Min Devices: 4x
Status: Review required
mixtral:8x22b

Compatibility Guide

Single Device (16GB RAM)

Best for 3-14B parameter models with optimized quantization.

MagStack 2x (32GB RAM)

Unlocks frontier models with extended context and multimodal capabilities.

MagStack 4x+ (64GB+ RAM)

Enterprise-grade frontier models for professional workflows.
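The RAM tiers above follow a simple back-of-envelope rule: at roughly 8-bit quantization, a model needs about 1GB per billion parameters plus a couple of GB of runtime and KV-cache overhead. A minimal sketch of that check; the constants are assumptions, and real usage varies with quantization level and context length:

```shell
# Rough fit check: ~1GB per billion parameters (≈8-bit quantization)
# plus ~2GB runtime/KV-cache overhead. Constants are assumptions.
fits() {
  params_b=$1   # parameter count in billions
  ram_gb=$2     # device RAM in GB
  need_gb=$((params_b + 2))
  if [ "$need_gb" -le "$ram_gb" ]; then
    echo "fits (~${need_gb}GB needed, ${ram_gb}GB available)"
  else
    echo "needs ~${need_gb}GB; exceeds ${ram_gb}GB"
  fi
}

fits 8 16    # an 8B model on a single 16GB device
fits 72 16   # a 72B model clearly needs a multi-device stack
```

Note that unquantized or lightly quantized weights can roughly double this estimate, which is why the watchlist entries above carry much larger memory figures.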

Quick Start Guide

# Pull a model from the Ollama registry
ollama pull ministral-3:8b

# Run the model interactively
ollama run ministral-3:8b

# For vision tasks, include the image path in the prompt
ollama run ministral-3:8b "Analyze this medical image: /path/to/image.jpg"
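The ollama CLI is backed by a local REST API on port 11434, whose /api/generate endpoint accepts the same model tag and prompt as JSON. A sketch of the programmatic equivalent; the model tag is this catalog's example, and the guard skips the request when no local server is running:

```shell
# Same prompt as the CLI example, sent to Ollama's local REST API.
payload='{"model":"ministral-3:8b","prompt":"Analyze this medical image","stream":false}'

# Only issue the request if an Ollama server is actually listening.
if curl -s -o /dev/null --max-time 1 http://localhost:11434/ 2>/dev/null; then
  curl -s http://localhost:11434/api/generate -d "$payload"
fi
```

With `"stream":false` the server returns one JSON object containing the full response; omit it to receive newline-delimited partial responses instead.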

CONFIDENTIAL AND PROPRIETARY INFORMATION

This documentation is provided for informational and operational purposes only. The specifications and technical details herein are subject to change without notice. THOX.ai LLC reserves all rights in the technologies, methods, and implementations described.

Nothing in this documentation shall be construed as granting any license or right to use any patent, trademark, trade secret, or other intellectual property right of THOX.ai LLC, except as expressly provided in a written agreement.

Patent Protection

The MagStack™ magnetic stacking interface technology is proprietary technology of THOX.ai LLC, protected by trade secrets and intellectual property laws....

Reverse Engineering Prohibited

You may not reverse engineer, disassemble, decompile, decode, or otherwise attempt to derive the source code, algorithms, data structures, or underlying ideas of any THOX.ai hardwa...

THOX.ai™, ThoxOS™, MagStack™, MeshStack™, ThoxMigrate™, the THOX Edge Series™, the THOX Nova Series™, and the THOX.ai logo are trademarks or registered trademarks of THOX.ai LLC in the United States and other countries. WireGuard® is a registered trademark of Jason A. Donenfeld.

All other trademarks are the property of their respective owners.

© 2026 THOX.ai LLC. All Rights Reserved.