Running a large language model is resource-intensive. The specialized hardware required, high-end data center GPUs, costs tens of thousands of dollars per unit and consumes hundreds of watts under load. Cloud providers that own these GPUs price their inference services accordingly, putting advanced AI out of reach for many developers, startups, and researchers. Meanwhile, the same AI workloads that strain cloud budgets could realistically run on consumer and prosumer GPUs already owned by millions of people worldwide. These cards sit idle for most of the day (during work hours, overnight, and between gaming sessions), representing enormous untapped compute potential.