Documentation Index

Fetch the complete documentation index at: https://wp.farlabs.ai/llms.txt

Use this file to discover all available pages before exploring further.

Node operators don't choose which models to run. The orchestrator assigns models based on network demand and each operator's hardware capabilities. Models are downloaded and cached locally on the node, so load times drop to near zero for frequently requested models, and the orchestrator can remotely load or unload models as demand patterns shift. As a result, operators always serve the models where they add the most value, and developers always have access to the models they need, with no manual coordination between the two.
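
For illustration, here is a minimal sketch of how demand-driven assignment could work. The names (`Orchestrator`, `NodeSpec`, `ModelSpec`) and the greedy scoring are assumptions made for this example, not the network's actual implementation.

```python
# Illustrative sketch only: these class and field names are hypothetical
# and not part of the FAR Labs API; they exist to show the idea of
# demand-driven assignment with local caching.
from dataclasses import dataclass, field


@dataclass
class ModelSpec:
    name: str
    vram_gb: int    # memory the model needs in order to be served
    demand: float   # recent requests per second across the network


@dataclass
class NodeSpec:
    node_id: str
    vram_gb: int                               # hardware capability reported by the node
    cached: set = field(default_factory=set)   # models already downloaded locally


class Orchestrator:
    """Assigns models to nodes by demand and capability; operators do not choose."""

    def assign(self, nodes: list, models: list) -> dict:
        plan = {n.node_id: [] for n in nodes}
        free = {n.node_id: n.vram_gb for n in nodes}
        # Place the highest-demand models first.
        for m in sorted(models, key=lambda m: m.demand, reverse=True):
            # Prefer nodes that already cache the model (near-zero load time),
            # then nodes with the most free memory.
            candidates = sorted(
                (n for n in nodes if free[n.node_id] >= m.vram_gb),
                key=lambda n: (m.name not in n.cached, -free[n.node_id]),
            )
            if candidates:
                chosen = candidates[0]
                plan[chosen.node_id].append(m.name)
                free[chosen.node_id] -= m.vram_gb
        return plan


if __name__ == "__main__":
    nodes = [NodeSpec("node-a", vram_gb=80, cached={"llama-70b"}),
             NodeSpec("node-b", vram_gb=24)]
    models = [ModelSpec("llama-70b", vram_gb=70, demand=120.0),
              ModelSpec("small-chat", vram_gb=16, demand=300.0)]
    print(Orchestrator().assign(nodes, models))
    # As demand shifts, re-running assign() yields a new plan: models dropped
    # from the plan would be unloaded, newly assigned ones downloaded and cached.
```

In this sketch the re-assignment step stands in for the orchestrator's remote load/unload behaviour described above; the real network would also account for latency, pricing, and reliability signals that this toy example omits.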