The journey of an inference request through the FAR AI network is designed to be fast, transparent, and verifiable at every step.
  • A developer submits a chat or completion request to the FAR AI API in standard format.
  • The orchestrator identifies eligible nodes: those with the right hardware and the model already loaded in memory.
  • The node with the highest Reliability Score for that model and hardware tier receives the job.
  • The node processes the request and streams tokens back through the orchestrator to the developer in real time.
  • Upon completion, the node reports timing and energy metrics. These are verified and recorded for billing and analytics.
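The node-selection step above can be sketched in a few lines. This is a minimal illustration, not the orchestrator's actual implementation: the `Node` fields, the single per-node `reliability_score`, and the `select_node` helper are all hypothetical names chosen for this example, assuming eligibility means matching hardware tier plus the model resident in memory.

```python
from dataclasses import dataclass

@dataclass
class Node:
    node_id: str
    models_loaded: set          # models currently resident in memory
    hardware_tier: str          # e.g. a GPU class identifier
    reliability_score: float    # hypothetical score for this model/tier

def select_node(nodes, model, tier):
    """Pick the eligible node with the highest Reliability Score.

    Eligible = right hardware tier AND the model already loaded.
    """
    eligible = [n for n in nodes
                if n.hardware_tier == tier and model in n.models_loaded]
    if not eligible:
        raise LookupError(f"no eligible node for {model} on tier {tier}")
    return max(eligible, key=lambda n: n.reliability_score)

nodes = [
    Node("node-a", {"llama-3-70b"}, "h100", 0.97),
    Node("node-b", {"llama-3-70b"}, "h100", 0.91),
    Node("node-c", {"mixtral-8x7b"}, "a100", 0.99),
]
winner = select_node(nodes, "llama-3-70b", "h100")
print(winner.node_id)  # node-a
```

Note that `node-c` has the highest raw score but is filtered out first: eligibility (hardware tier and model residency) is checked before the Reliability Score is ever compared.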