Cloud (Groq)
Fastest. Requires internet. Uses optimized Llama models.
Local (WebLLM)
Unlimited & Free. Runs on your device. Offline capable.
The Local engine is disabled on mobile devices due to the large model download and
inconsistent performance. Use Cloud (Groq) instead.
This will download a compressed AI model (~600MB) to your browser cache. Subsequent
runs load the model from the cache, so they start much faster.