How to manage storage pools

See the following sections for instructions on how to create, configure, view, and resize storage pools.

View storage pools

You can display a list of all available storage pools and check their configuration.

To list all available storage pools, run:

lxc storage list

The storage pool created during initialization is usually called default or local.

To show detailed information about a specific pool, run:

lxc storage show <pool_name>

To see usage information for a specific pool, run:

lxc storage info <pool_name>
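For example, to inspect the pool created during initialization (assuming it is named default):

```shell
# Show the driver and configuration of the default pool
lxc storage show default

# Show space usage and which entities use the pool
lxc storage info default
```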

Create a storage pool

LXD creates a storage pool during initialization. You can add more storage pools later, using the same driver or different drivers.

By default, LXD sets up loop-based storage with a sensible default size/quota: 20% of the free disk space, with a minimum of 5 GiB and a maximum of 30 GiB.

When using a Ceph storage driver, first see the Requirements for Ceph-based storage pools section below.

To create a storage pool, run:

lxc storage create <pool_name> <driver> [configuration_options...]

See the Storage drivers documentation for a list of available configuration options for each driver.

After creating a storage pool, back up its configuration for future recovery.

Examples

The following CLI syntax examples show how to create a storage pool using different storage drivers.

Create a directory pool named pool1:

lxc storage create pool1 dir

Use the existing directory /data/lxd for pool2:

lxc storage create pool2 dir source=/data/lxd
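A couple of sketches with other drivers (the pool names, the 10GiB size, and the /dev/sdX device are illustrative; see the Storage drivers documentation for each driver's options):

```shell
# Create a loop-backed Btrfs pool with an explicit size quota
lxc storage create pool3 btrfs size=10GiB

# Create a ZFS pool on an empty block device
# (any existing data on the device is destroyed)
lxc storage create pool4 zfs source=/dev/sdX
```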

Create a storage pool in a cluster

If you want to add a storage pool to a LXD cluster, you must create the storage pool for each cluster member separately. This is because the configuration might differ among cluster members (for example, the storage location or the size of the pool).

If any cluster members use disks that already contain a LXD storage pool, or you want to recover an existing remote storage pool, refer to the Recover a storage pool section.

To create a storage pool via the CLI, start by creating a pending storage pool on each member with the --target=<cluster_member> flag and the appropriate configuration for the member.

Make sure to use the same storage pool name for all members. Then create the storage pool without specifying the --target flag to actually set it up.

For further details, see How to configure storage for a cluster.

Ceph-based storage pools in clusters

For most storage drivers, the storage pools exist locally on each cluster member. That means if you create a storage volume in a storage pool on one member, it is not available for other cluster members.

This behavior is different for Ceph-based storage drivers (ceph, cephfs, and cephobject). When using these drivers, each storage pool exists in one central location, and all cluster members access the same storage pool with the same storage volumes.

After creating a storage pool, back up its configuration for future recovery.

Examples

The following CLI syntax examples show how to create a storage pool in a cluster using different storage drivers.

Create a storage pool named my-pool using the ZFS driver at different locations and with different sizes on three cluster members:

user@host:~$ lxc storage create my-pool zfs source=/dev/sdX size=10GiB --target=vm01
Storage pool my-pool pending on member vm01
user@host:~$ lxc storage create my-pool zfs source=/dev/sdX size=15GiB --target=vm02
Storage pool my-pool pending on member vm02
user@host:~$ lxc storage create my-pool zfs source=/dev/sdY size=10GiB --target=vm03
Storage pool my-pool pending on member vm03
user@host:~$ lxc storage create my-pool zfs
Storage pool my-pool created

Back up storage pool configuration

To assist future recovery in case a storage pool malfunctions, maintain a record of your storage pools as a backup. For each pool, record the driver type and the configuration options shown by running:

lxc storage show <pool_name>

The config options vary by driver type. Keep this record in a safe place, and update it if you update a storage pool’s configuration.
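One simple way to keep such a record is to redirect the command's output to a file (the pool and file names are illustrative):

```shell
# Save each pool's configuration to a file for safekeeping
lxc storage show my-pool > my-pool-backup.yaml
```

Store the resulting files somewhere outside the LXD server itself, so they remain available if the server needs to be rebuilt.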

For pools in a cluster

For local storage pools in a cluster, the source value is member-specific and must be obtained from each cluster member. For non-local storage pools that use the source config option, the value is shared across all cluster members.

Recover a storage pool

You might need to recover a storage pool when setting up a new LXD server or cluster with non-pristine storage disks, or when trying to access remote storage that was previously used by another LXD deployment.

Using recovery, you can restore instances, custom volumes, and buckets that are still located on those storage pools.

Get storage pool configuration

Before recovering a storage pool, you need to know its original configuration: the driver type and any config options that differ from the default. Ideally, you have access to a record of the configuration as described in Back up storage pool configuration.

If you do not have access to this information, try alternative ways to retrieve it. If the pool is still available in the LXD database, you can use lxc storage show:

lxc storage show <pool_name>

You can also try this command, which provides hints about missing storage pools and their original configuration, if such information can be discovered:

lxd recover

See the Storage drivers documentation for a list of available configuration options for each driver.

Recover a pool

To recover a storage pool, use the lxc storage create command with the source.recover=true configuration option and the pool’s original, non-default configuration options:

lxc storage create <pool_name> <driver> source.recover=true [original_pool_configuration_options...]

Examples

The following CLI syntax examples show how to recover different types of storage pools.

Recover a pool named pool1:

lxc storage create pool1 dir source.recover=true source=/data/lxd
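The same pattern applies to other drivers: supply the original driver and the original non-default options alongside source.recover=true. As a sketch for a ZFS pool (the pool name and the zpool name my-tank are illustrative):

```shell
# Recover a ZFS pool whose data lives on an existing zpool named "my-tank"
lxc storage create pool2 zfs source.recover=true source=my-tank
```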

Configure a storage pool

See the Storage drivers page for the available configuration options for each storage driver.

General keys for a storage pool (like source) are top-level. Driver-specific keys are namespaced by the driver name.

Use the following command to set configuration options for a storage pool:

lxc storage set <pool_name> <key> <value>

For example, to turn off compression during storage pool migration for a dir storage pool, use the following command:

lxc storage set my-dir-pool rsync.compression false

You can also edit the storage pool configuration by using the following command:

lxc storage edit <pool_name>

We recommend that you maintain a backup of your storage pool configuration for future recovery. Make sure to update this backup after you edit the configuration.

Resize a storage pool

If you need more storage, you can increase the size (quota) of your storage pool. You can only grow a pool, not shrink it.

You can only resize loop-backed storage pools that are managed by LXD, meaning they must use the Btrfs, LVM, or ZFS storage drivers.

In the CLI, resize a storage pool by changing the size configuration key:

lxc storage set <pool_name> size=<new_size>
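For example, to grow a pool to 30 GiB (the pool name and size are illustrative):

```shell
lxc storage set my-pool size=30GiB
```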

If you later need to recover a storage pool and the pool has a non-default size configuration option, that option must be included for recovery. If needed, update the size in your backup of the storage pool configuration.

Requirements for Ceph-based storage pools

For Ceph-based storage pools, the requirements below must be met before you can Create a storage pool or Create a storage pool in a cluster.

Ceph cluster

Before you can create a storage pool that uses the Ceph RBD, CephFS, or Ceph Object driver, you must have access to a Ceph cluster.

To deploy a Ceph cluster, we recommend using MicroCloud. MicroCeph is a lightweight way of deploying and managing a Ceph cluster. If you have completed the default MicroCloud setup, you already have a Ceph cluster deployed through MicroCeph, so this requirement is met.

If you do not use MicroCloud, set up a standalone deployment of MicroCeph before you continue.

Ceph Object and radosgw

Storage pools that use the Ceph Object driver require a Ceph cluster with the RADOS Gateway (also known as RGW or radosgw) enabled.

Check if radosgw is already enabled

To check if the RADOS Gateway is already enabled in MicroCeph, run this command from one of its cluster members:

microceph status

In the output, look for a cluster member with rgw in its Services list.

Example:

root@micro1:~# microceph status
MicroCeph deployment summary:
- micro1 (192.0.2.10)
  Services: mds, mgr, mon, rgw, osd
  Disks: 1
- micro2 (192.0.2.20)
  Services: mds, mgr, mon, osd
  Disks: 1

In the output above, notice rgw in the list of Services for micro1. This means that this cluster member is running the RADOS Gateway.

If you do not see rgw in your own output, you must Enable radosgw.

If you do see it, you’ll need the corresponding port number. On the cluster member with the rgw service, run:

sudo ss -ltnp | grep radosgw

Example:

root@micro1:~# sudo ss -ltnp | grep radosgw
LISTEN 0      4096         0.0.0.0:8080      0.0.0.0:*    users:(("radosgw",pid=11345,fd=60))
LISTEN 0      4096            [::]:8080         [::]:*    users:(("radosgw",pid=11345,fd=61))

The output above shows that the radosgw port number is 8080.

Enable radosgw

If you did not find rgw in the Services list for any of your cluster members in the output from microceph status, then you must enable the RADOS Gateway. On one of the Ceph cluster members, run:

sudo microceph enable rgw --port 8080

We include the --port 8080 flag because the default port is 80, a commonly used port that often conflicts with other services. You are not required to use 8080; if needed, use a different port number.

The RADOS Gateway endpoint

The full RADOS Gateway endpoint includes the HTTP protocol, the IP address of the Ceph cluster member where the rgw service is enabled, and the port number specified. Example: http://192.0.2.10:8080.
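This endpoint is what you pass to LXD when creating a Ceph Object storage pool. A minimal sketch, reusing the example address above (the pool name is illustrative, and cephobject.radosgw.endpoint is the relevant driver option):

```shell
# Create a Ceph Object pool pointing at the RADOS Gateway endpoint
lxc storage create my-objects cephobject \
    cephobject.radosgw.endpoint=http://192.0.2.10:8080
```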