

Node operators who want to serve larger models can connect multiple machines on the same local network to pool their GPU memory. All machines in the group must be connected by wired gigabit Ethernet; Wi-Fi does not provide the low-latency communication required between nodes in a distributed inference session.
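
The node software handles forming the pooled group, but a minimal sketch of the kind of multi-node setup involved may help illustrate why link quality matters. The example below is an assumption, not part of the FAR Labs stack: it uses PyTorch's `torch.distributed` with the NCCL backend to join two machines into one process group and time a tiny all-reduce, the collective operation whose latency dominates distributed inference. The script name, addresses, and environment values are hypothetical.

```python
# check_cluster.py -- hypothetical connectivity check, not part of the FAR Labs node software.
# Run the same script on every machine in the group, e.g.:
#   machine 0: MASTER_ADDR=192.168.1.10 MASTER_PORT=29500 RANK=0 WORLD_SIZE=2 python check_cluster.py
#   machine 1: MASTER_ADDR=192.168.1.10 MASTER_PORT=29500 RANK=1 WORLD_SIZE=2 python check_cluster.py
import time

import torch
import torch.distributed as dist


def main() -> None:
    # NCCL is the usual backend for GPU collectives; with the default env://
    # init method it reads MASTER_ADDR, MASTER_PORT, RANK, and WORLD_SIZE
    # from the environment and communicates over the local network.
    dist.init_process_group(backend="nccl")
    rank = dist.get_rank()
    world_size = dist.get_world_size()

    # Simplification: assumes one GPU per rank on each machine.
    torch.cuda.set_device(rank % torch.cuda.device_count())

    # Time a small all-reduce to get a rough feel for inter-node latency.
    tensor = torch.ones(1, device="cuda")
    dist.barrier()
    start = time.perf_counter()
    dist.all_reduce(tensor)
    torch.cuda.synchronize()
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"rank {rank}/{world_size}: all-reduce of 1 float took {elapsed_ms:.2f} ms")

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

Because every token generated in a distributed session waits on collectives like this, the round-trip time of the slowest link sets the pace for the whole group, which is why a wired gigabit connection is required rather than Wi-Fi.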