
Fluence Launches Global and Affordable GPU Compute for AI
Zurich, Switzerland, October 3rd, 2025
Fluence, a cloudless computing platform designed for enterprise use, has announced the availability of GPU compute for AI workloads. The new service offers an alternative to traditional centralized cloud providers, aiming to provide cost efficiency and flexibility. GPU containers are now accessible through the Fluence Platform, with support for GPU virtual machines and bare metal expected to follow in the coming weeks. The rollout is supported by Spheron Network, which joins as a key compute infrastructure provider for Fluence.
Addressing AI’s Compute Bottleneck
AI projects and companies face rising compute costs and hidden fees from hyperscalers, which force teams into long-term, rigid pricing structures. Responding to customer demand for open, low-cost, short-term GPU access, Fluence is expanding its offering from CPU-based virtual servers into GPUs, giving customers direct access to high-performance hardware at up to 85% lower cost than the large clouds. The GPU offering builds on Fluence’s existing CPU business and gives the company a key product for serving the growing AI ecosystem.
Fluence’s CPU marketplace currently generates over $1 million in annual recurring revenue (ARR), with a reported pipeline exceeding $8 million in the third-party node provider market. According to Fluence, customers have collectively saved $3.5 million by utilizing its services compared to traditional centralized cloud providers.
Fluence’s decentralized infrastructure currently supports thousands of active blockchain nodes. Its customer base includes Antier, one of the largest blockchain service providers, along with NEO, RapidNode, Zeeve, dKloud, AR.IO, Tashi, and Nodes Garden.
Fluence’s Vision 2026 calls for scaling enterprise-grade decentralized compute and building a global GPU-powered marketplace to support a wide range of features requested by customers. The partnership with Spheron expands the Fluence provider network, which already includes Kabat, Piknik, and other top-tier data center facilities.
“Meeting the exponentially growing demand for AI requires cost-efficient access to enterprise-grade GPUs. By expanding our network using Spheron’s decentralized GPUs, we give developers that access immediately, making our platform the go-to choice for serious AI builders scaling to the next level,” said Evgeny Ponomarev, Co-Founder of Fluence.
“Access to GPUs has been gated by scarcity and cost. Partnering with Fluence removes those barriers, giving AI teams dependable, decentralized compute power to move faster from research to deployment,” added Prashant Maurya, Co-Founder of Spheron Network.
GPU Containers Live Today, VMs and Bare Metal Coming Next
GPU containers are live now on the Fluence Console, optimized for fine-grained AI workloads. Support for GPU VMs and bare metal will follow in the coming weeks, expanding options for AI projects and companies seeking decentralized, enterprise-grade performance. Developers can start deploying today at fluence.network/gpu and review documentation at fluence.dev/docs.
With GPU support, Fluence takes a step toward expanding its decentralized physical infrastructure (DePIN) offerings, providing cost-effective, enterprise-grade compute resources through a decentralized marketplace.
About Fluence Network
Fluence is a DePIN cloudless computing platform that delivers resilient, enterprise-grade compute at lower cost than centralized clouds. The network aggregates capacity from top-tier, enterprise-grade data centers worldwide, giving builders open access to the resources they need for AI, Web3, and general-purpose applications. Fluence is governed by the Fluence DAO, and its native token FLT powers governance, staking, and coordination across the network.