Vultr and Exostellar partner to deliver unified AI infrastructure

Exostellar has joined the Vultr Cloud Alliance, bringing a unified orchestration platform for heterogeneous AI infrastructure to teams running workloads across Vultr’s global cloud data center regions. The partnership enables customers to use Exostellar’s orchestration and optimization capabilities to schedule and place AI workloads across the GPU types available on Vultr, improving utilization, reducing overhead, and establishing a consistent operational model for distributed compute.

Meet Exostellar and see how they enhance AI operations on Vultr

Exostellar is an AI infrastructure orchestration platform built for enterprises, research groups, and platform engineering teams that run distributed GPU environments. The platform unifies heterogeneous GPU resources into a single shared pool through multi-cluster federation, allowing teams to view and manage all Vultr regions from a single control plane. Once resources are pooled, the platform uses topology-aware scheduling to place workloads where they fit best, based on memory requirements and workload patterns. To govern this shared environment, Exostellar applies hierarchical quota management, giving administrators precise control over resource sharing and queuing to ensure fair access.
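To make those ideas a bit more concrete, the sketch below shows, in rough Python, how hierarchical quotas and topology-aware placement can interact: a job is admitted only if every level of its quota path has headroom, and it is then matched to the tightest-fitting GPU pool. This is a minimal illustration under assumed names and data structures, not Exostellar's actual interface or algorithm.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: all names and structures here are assumptions,
# not Exostellar's actual API.

@dataclass
class GpuPool:
    region: str          # Vultr region hosting the pool
    gpu_type: str        # accelerator model offered in this pool
    mem_gb: int          # memory per device
    free_devices: int    # devices currently unallocated

@dataclass
class QuotaNode:
    name: str
    limit: int                      # max devices this org or team may hold
    used: int = 0
    children: dict = field(default_factory=dict)

    def headroom(self) -> int:
        return self.limit - self.used

def team_headroom(root: QuotaNode, path: list[str]) -> int:
    """Hierarchical quota: a team's effective headroom is the minimum
    headroom along its path from the root (org -> department -> team)."""
    node, slack = root, root.headroom()
    for name in path:
        node = node.children[name]
        slack = min(slack, node.headroom())
    return slack

def place(job_mem_gb: int, devices: int, pools: list[GpuPool],
          root: QuotaNode, team_path: list[str]) -> GpuPool | None:
    """Topology-aware placement: among pools with enough free devices and
    per-device memory, prefer the tightest memory fit so large-memory GPUs
    stay available for the jobs that actually need them."""
    if team_headroom(root, team_path) < devices:
        return None  # quota exhausted somewhere up the hierarchy
    candidates = [p for p in pools
                  if p.mem_gb >= job_mem_gb and p.free_devices >= devices]
    return min(candidates, key=lambda p: p.mem_gb, default=None)
```

The point the sketch captures is that placement decisions respect both the shared pool's characteristics (memory per device, free capacity per region) and the organization's sharing rules, rather than treating each cluster in isolation.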

Finally, to maximize efficiency, the platform supports flexible GPU fractionalization, allowing several inference jobs or AI agents to share a single device without stranding capacity. These features deliver substantial gains, with many teams achieving 14 times higher efficiency, more than 50% cost savings, and 10 times more compute availability. Vultr is partnering with Exostellar to give customers an efficient, consistent, and scalable way to operate mixed GPU workloads across Vultr’s global cloud data center regions.
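As a rough illustration of the fractionalization idea (again a hypothetical sketch, not Exostellar's implementation), a simple first-fit packer shows how several small inference jobs can share a device's memory instead of each occupying a whole GPU:

```python
# Illustrative sketch only: a first-fit packer showing the idea behind GPU
# fractionalization, not Exostellar's implementation.

def pack_fractional(jobs_mem_gb: list[int], gpu_mem_gb: int) -> list[list[int]]:
    """Assign each job a fraction of a device, opening a new device only
    when the job does not fit alongside jobs already placed."""
    devices: list[list[int]] = []   # jobs colocated on each GPU
    free: list[int] = []            # remaining memory per GPU
    for mem in jobs_mem_gb:
        for i, slack in enumerate(free):
            if mem <= slack:
                devices[i].append(mem)
                free[i] -= mem
                break
        else:
            devices.append([mem])
            free.append(gpu_mem_gb - mem)
    return devices

# Example: five small inference jobs fit on two 80 GB devices instead of five.
print(pack_fractional([24, 24, 16, 40, 30], gpu_mem_gb=80))
# -> [[24, 24, 16], [40, 30]]
```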

Read the full press release here.