A decentralized AI network must ensure that every node is genuinely performing computation—not faking outputs to save electricity or GPU cycles. FAR AI solves this problem with a cryptographically enforced Proof of Compute layer.

The “Lazy Node” Threat

In open networks, malicious nodes may attempt to:
  • Pretend to run the model while returning random or low-effort outputs
  • Shortcut inference by using smaller models internally
  • Replay old outputs instead of generating fresh responses
  • Drop computations entirely to save power
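The replay attack in particular has a classic countermeasure: bind each response to a fresh, single-use challenge nonce so that a cached old output can never satisfy a new request. FAR AI's actual Proof of Compute protocol is not detailed here; the following is a minimal illustrative sketch using a hypothetical `Verifier` and `honest_node`, with a stand-in `run_model` function in place of real inference.

```python
import hashlib
import secrets


def run_model(prompt: str) -> str:
    # Stand-in for real model inference (hypothetical).
    return f"output-for:{prompt}"


class Verifier:
    def __init__(self) -> None:
        self.issued: set[str] = set()

    def challenge(self) -> str:
        # Fresh random nonce per request: a replayed old response
        # cannot contain a nonce the node has never seen.
        nonce = secrets.token_hex(16)
        self.issued.add(nonce)
        return nonce

    def check(self, nonce: str, output: str, digest: str) -> bool:
        # The nonce must be one we issued and still unused, and the
        # digest must bind that nonce to this specific output.
        if nonce not in self.issued:
            return False
        self.issued.remove(nonce)  # single-use: blocks replay
        expected = hashlib.sha256((nonce + output).encode()).hexdigest()
        return digest == expected


def honest_node(prompt: str, nonce: str) -> tuple[str, str]:
    # An honest node runs the model and commits to (nonce, output).
    output = run_model(prompt)
    digest = hashlib.sha256((nonce + output).encode()).hexdigest()
    return output, digest
```

Note that this only defeats replay; catching nodes that run a smaller model or return low-effort outputs requires verifying the computation itself, which is what the Proof of Compute layer described below addresses.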