dstack is an open-source alternative to Kubernetes and Slurm, designed to simplify GPU allocation and AI workload orchestration for ML teams across top clouds and on-prem clusters.
dstack supports NVIDIA, AMD, Google TPU, and Intel Gaudi accelerators out of the box.
- [2025/02] dstack 0.18.41: GPU blocks, Proxy jump, inactivity duration, and more
- [2025/01] dstack 0.18.38: Intel Gaudi
- [2025/01] dstack 0.18.35: Vultr
- [2024/12] dstack 0.18.30: AWS Capacity Reservations and Capacity Blocks
- [2024/10] dstack 0.18.21: Instance volumes
- [2024/10] dstack 0.18.18: Hardware metrics monitoring
Before using dstack through the CLI or API, set up a dstack server. If you already have a running dstack server, you only need to set up the CLI.
To use dstack with cloud providers, configure backends via the ~/.dstack/server/config.yml file. For more details on how to configure backends, check Backends.
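As an illustration, here's a minimal ~/.dstack/server/config.yml sketch that enables a single AWS backend with default credentials. The project name, backend type, and credential method are assumptions for this example; see Backends for the full set of supported providers and options.

```yaml
# ~/.dstack/server/config.yml (minimal sketch, assuming AWS)
projects:
  - name: main
    backends:
      - type: aws
        creds:
          type: default  # picks up credentials from the standard AWS config/env
```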
To use dstack with on-prem servers, create SSH fleets once the server is up.
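For example, an SSH fleet is described by a fleet configuration listing the hosts dstack should attach over SSH. The fleet name, user, key path, and IP addresses below are placeholders for this sketch:

```yaml
# Sketch of an SSH fleet configuration (names and hosts are placeholders)
type: fleet
name: my-on-prem-fleet
ssh_config:
  user: ubuntu
  identity_file: ~/.ssh/id_rsa  # key with passwordless access to the hosts
  hosts:
    - 192.168.1.10
    - 192.168.1.11
```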
You can install the server on Linux, macOS, and Windows (via WSL 2). It requires Git and OpenSSH.
Via pip:

$ pip install "dstack[all]" -U

Or via uv:

$ uv tool install "dstack[all]" -U
Once it's installed, go ahead and start the server.
$ dstack server
Applying ~/.dstack/server/config.yml...
The admin token is "bbae0f28-d3dd-4820-bf61-8f4bb40815da"
The server is running at http://127.0.0.1:3000/
For more details on server configuration options, see the Server deployment guide.
Once the server is up, you can access it via the dstack CLI.
The CLI can be installed on Linux, macOS, and Windows. It requires Git and OpenSSH.
Via pip:

$ pip install dstack -U

Or via uv:

$ uv tool install dstack -U
To point the CLI to the dstack server, configure it with the server address, user token, and project name:
$ dstack config \
--url http://127.0.0.1:3000 \
--project main \
--token bbae0f28-d3dd-4820-bf61-8f4bb40815da
Configuration is updated at ~/.dstack/config.yml
dstack supports the following configurations:
- Dev environments — for interactive development using a desktop IDE
- Tasks — for scheduling jobs (incl. distributed jobs) or running web apps
- Services — for deployment of models and web apps (with auto-scaling and authorization)
- Fleets — for managing cloud and on-prem clusters
- Volumes — for managing persistent volumes
- Gateways — for configuring the ingress traffic and public endpoints
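As an illustration, a minimal task configuration might look like the sketch below. The name, commands, and resource spec are placeholders for this example, not a prescribed layout:

```yaml
# Sketch of a task configuration (name, commands, and resources are placeholders)
type: task
name: train
python: "3.11"
commands:
  - pip install -r requirements.txt
  - python train.py
resources:
  gpu: 1
```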
Configuration can be defined as YAML files within your repo.
Apply the configuration either via the dstack apply CLI command or through a programmatic API.
dstack automatically manages provisioning, job queuing, auto-scaling, networking, volumes, run failures, out-of-capacity errors, port-forwarding, and more — across clouds and on-prem clusters.
For additional information, see the following links:
You're very welcome to contribute to dstack.
Learn more about how to contribute to the project at CONTRIBUTING.md.