Placement Groups

Placement groups control how VMs are distributed across physical hardware. By adding VMs to a placement group, you can ensure they are scheduled according to a specific placement strategy to improve availability and resilience.

How placement groups work

By default, a VM can be scheduled on any physical host, and nothing prevents several of your VMs from ending up on the same host. Placement groups let you control this behavior: each placement group has a placement strategy that determines how the VMs in the group are distributed across physical infrastructure.

You can add VMs to a placement group when creating them or afterward. To change a VM's placement group membership, the VM must be stopped first.
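The exact calls depend on your API or tooling, but the workflow typically looks like the sketch below. Everything in it is hypothetical and for illustration only: the endpoint, header, field names, and VM/group names are assumptions, not a documented API. It shows the two flows described above: creating a VM directly into a placement group, and stopping an existing VM before changing its membership.

    import requests

    API_BASE = "https://api.example.com/v1"        # hypothetical API endpoint
    HEADERS = {"Authorization": "Bearer <token>"}  # placeholder credentials

    # Create a VM directly into a placement group (field names are illustrative).
    requests.post(
        f"{API_BASE}/vms",
        headers=HEADERS,
        json={"name": "web-1", "placement_group": "web-frontends"},
    ).raise_for_status()

    # To change an existing VM's placement group membership, the VM must be
    # stopped first, then updated, then started again.
    requests.post(f"{API_BASE}/vms/web-2/stop", headers=HEADERS).raise_for_status()
    requests.patch(
        f"{API_BASE}/vms/web-2",
        headers=HEADERS,
        json={"placement_group": "web-frontends"},
    ).raise_for_status()
    requests.post(f"{API_BASE}/vms/web-2/start", headers=HEADERS).raise_for_status()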

Spread placement strategy

Placement groups currently support one placement strategy: spread.

The spread strategy ensures that VMs in the same placement group run on different physical hosts. This provides high availability by preventing multiple VMs from being affected by a single hardware failure.

Key characteristics of spread placement groups:

  • Maximum of 5 VMs per placement group.
  • Hard anti-affinity - VMs in the group are never scheduled on the same physical host under any circumstances.
  • Strict enforcement - if the group contains more VMs than there are distinct physical hosts available, the excess VMs will not be scheduled. Limited host availability does not stop you from adding VMs to the group (up to the limit of 5); it only limits how many of them can be scheduled at a time.

For example, if you have a web application running on 3 VMs in a spread placement group, each VM will run on a separate physical host. If one host fails, only one VM is affected, and the other two VMs continue running.
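To make the strict-enforcement behavior concrete, here is a toy model of spread placement. This is not the scheduler's actual algorithm; it only illustrates the constraint that each VM in the group needs its own distinct host, so with more VMs than hosts the surplus VMs stay unscheduled.

    def schedule_spread(vms, hosts):
        """Toy model of spread placement: each VM in the group must land on a
        distinct physical host, so at most len(hosts) VMs can be scheduled."""
        placements = dict(zip(vms, hosts))  # one VM per distinct host
        unscheduled = vms[len(hosts):]      # the rest stay pending
        return placements, unscheduled

    placements, pending = schedule_spread(
        vms=["web-1", "web-2", "web-3", "web-4"],
        hosts=["host-a", "host-b", "host-c"],
    )
    print(placements)  # {'web-1': 'host-a', 'web-2': 'host-b', 'web-3': 'host-c'}
    print(pending)     # ['web-4'] stays unscheduled until another distinct host is available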

Use cases

Placement groups are recommended for:

  • Stateless services running behind a load balancer, such as web frontends or API servers, where losing a host shouldn't take down the service
  • Stateful distributed systems like databases, message queues, or consensus clusters (e.g., etcd, Kafka) that replicate data across nodes for availability

Next steps