

Open by design
No proprietary hardware, no vendor lock-in. FAR AI runs on GPUs already owned by millions of people worldwide.

Affordable by structure
Routing inference through existing hardware removes data center overhead from the cost equation entirely.
Trustworthy by cryptography
Continuous verification means developers get cloud-grade reliability without centralized infrastructure.

Fair by construction
Tiered routing ensures that reliability, not just raw GPU power, determines which operators earn the most.
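
The idea of reliability-weighted routing can be illustrated with a minimal sketch. The function name, the weights, and the scoring formula below are illustrative assumptions, not FAR AI's actual routing logic; the point is only that a highly reliable operator can outrank a faster but flakier one.

```python
# Hypothetical sketch of tiered routing: operators are ranked by a score
# that weights measured reliability above raw GPU throughput.
# Weights and normalization are illustrative, not FAR AI's actual formula.

def route_score(reliability: float, throughput_tflops: float,
                reliability_weight: float = 0.7) -> float:
    """Blend reliability (0..1) with throughput normalized to a 0..1 scale."""
    normalized_throughput = min(throughput_tflops / 100.0, 1.0)
    return (reliability_weight * reliability
            + (1 - reliability_weight) * normalized_throughput)

operators = {
    "steady_node": route_score(reliability=0.99, throughput_tflops=40),
    "fast_flaky_node": route_score(reliability=0.60, throughput_tflops=100),
}

# The steady node outranks the faster but less reliable one.
best = max(operators, key=operators.get)
```

Under these assumed weights, `steady_node` scores about 0.81 against roughly 0.72 for `fast_flaky_node`, so uptime, not peak compute, wins the routing decision.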
Scalable by nature
Capacity grows every time a new node joins. There is no bottleneck at the infrastructure layer.

Private by roadmap
ZK proofs ensure prompts stay hidden from the FAR AI orchestrator. A confidentiality layer (TEE today, FHE as the long-term target) extends that guarantee to the executing node.
FAR AI is not just a cheaper way to access AI. It is a fundamentally different infrastructure model, one where the network grows stronger with every node that joins.