Using Additional Disks
Assuming you have created a VM with more than one associated Disk (including one to boot
from), there are some steps you must carry out before you can use the non-boot Disks on the
VM.  These steps are OS-dependent and must be carried out on the VM (or specified in custom
cloud-init).
In particular, if a non-boot Disk is being used for the first time and has no file system
present, the Disk must be formatted with one.  Then, for all Disks (even those which have
been used before on other VMs), the Disk must be mounted into the VM's file system.
Your VM's operating system will offer configuration to mount Disks on subsequent boots, so that this process does not need to be carried out each time the VM boots.
Initial Formatting and Mounting
This is a worked example using an Ubuntu image.  The example VM has three Disks: the first
Disk specified in the VM Spec's list of Disks is the boot Disk, and the second and third are
storage Disks.
View Disk Devices
Initially, Disks specified on your VM Spec under diskRefs with boot-from: false will
appear as unpartitioned devices that your VM can see.  Once you have SSH'd into your VM,
you can run the lsblk command to view attached devices.  The output will look similar
to:
$ lsblk -o NAME,MAJ:MIN,RM,SIZE,RO,TYPE,MOUNTPOINTS,LABEL
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS LABEL
vda     253:16   0    5G  0 disk
├─vda1  253:17   0    4G  0 part /           cloudimg-rootfs
├─vda14 253:30   0    4M  0 part
├─vda15 253:31   0  106M  0 part /boot/efi   UEFI
└─vda16 259:0    0  913M  0 part /boot       BOOT
vdb     253:0    0  954M  0 disk
vdc     253:32   0  954M  0 disk
vdd     253:48   0    1M  0 disk             cidata
In this instance, the output indicates that there are four devices: vda, vdb, vdc
and vdd.  The order of disk devices under lsblk corresponds to the order of Disks
on the VM Spec - so vda will be the first specified Disk, vdb the second, and so on.
We can see from the output that this VM booted from the first listed disk - corresponding
to vda - as this disk is already partitioned, and some of its partitions are mounted under
/boot (while the largest partition is mounted at /).
Devices vdb and vdc correspond to the two 1 GB storage Disks.  Neither is ready for use on
the VM, as they do not have a mount point.  These were the Disks specified second and third
on the VM Spec respectively.  In general, Disks will have a device name of the form vd_,
where the final character is the alphabetical character corresponding to the position of
the Disk in the list of Disks specified on the VM.
The final device, here vdd, will always correspond to a small volume used for configuring
the VM on first boot, and should be ignored. It is always labelled cidata.
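For illustration, a VM Spec shaped along these lines would produce the device layout
above.  The diskRefs and boot-from fields appear elsewhere in this guide; the surrounding
structure, the boot-from: true value, and the third Disk's name are assumptions:

spec:
  diskRefs:
  - name: mybootdisk       # first in the list  -> vda
    boot-from: true        # assumed counterpart of boot-from: false
  - name: mystoragedisk    # second in the list -> vdb
    boot-from: false
  - name: mystoragedisk2   # third in the list  -> vdc
    boot-from: false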
View Disk to Disk Device Mappings
You can confirm the mappings between Disks and disk devices using the Kubernetes API.  evroc
VMs will report the mapping on their Status, in the AttachedDisks field:
  status:
    attachedDisks:
    - device: vda
      name: mybootdisk
    - device: vdb
      name: mystoragedisk
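If you have kubectl access to the API, you can read this mapping directly.  A sketch,
assuming the VM resource kind is vm (adjust to the actual resource name in your cluster):

kubectl get vm <vm_name> -o jsonpath='{.status.attachedDisks}'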
Format and Mount Disks
On Ubuntu, formatting a disk to use a file system (typically ext4) is as simple as:
sudo mkfs.ext4 /dev/vdb
You can check whether a device is formatted or not using the blkid command (optionally
specifying the device of interest).  Formatted devices will appear in the output and report
a file system type:
$ sudo blkid /dev/vdb
/dev/vdb: UUID="125e9d68-1091-42a8-98ec-53ee10754835" BLOCK_SIZE="4096" TYPE="ext4"
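By contrast, a device with no file system produces no blkid output, and blkid exits
with a non-zero status (2 when nothing is found):

$ sudo blkid /dev/vdc
$ echo $?
2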
To then mount the disk, you need to first create a mount point (typically a directory
under /mnt), and then mount the disk to the mount point:
sudo mkdir /mnt/mydisk
sudo mount /dev/vdb /mnt/mydisk
If you wish to verify that this has worked, you can check the output of lsblk.
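For example, using the device and mount point from above, you would expect output along
these lines:

$ lsblk -o NAME,SIZE,TYPE,MOUNTPOINTS /dev/vdb
NAME SIZE TYPE MOUNTPOINTS
vdb  954M disk /mnt/mydisk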
Mounting Disks at Boot using fstab
If you wish to mount the disk automatically at boot, you can add configuration to the /etc/fstab
file.  This requires root access.  The general format for entries for storage Disks in the
fstab file is:
UUID=125e9d68-1091-42a8-98ec-53ee10754835 /mnt/mydisk ext4 defaults 0 2
Using the UUID of the disk device is strongly recommended, as the device name (e.g. /dev/vdb)
may change if the disks are reordered in the VM Spec.  Alternatively, you can apply a label
to the device and use that label in fstab:
LABEL=<label> /mnt/mydisk ext4 defaults 0 2
Note that the label must be 16 characters or fewer (the ext4 limit).
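To apply a label to an existing ext4 file system, you can use e2label (part of e2fsprogs),
and confirm it with blkid:

sudo e2label /dev/vdb mydisklabel
sudo blkid /dev/vdb

After editing fstab, you can test the entry without rebooting - unmount the device if it is
currently mounted, then ask mount to process fstab and check the result:

sudo umount /mnt/mydisk
sudo mount -a
findmnt /mnt/mydisk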
Attaching and Detaching non-boot Disks from a VM
To attach or detach a non-boot Disk, the VM must be stopped.  You can stop the VM by
setting running: false.  You must unmount the corresponding Disk device and remove it
from fstab before stopping the VM and detaching the Disk.
If you have added Disk devices to fstab using UUIDs, no further fstab changes are required
for the remaining Disks.  As noted above, if you instead added Disk devices to fstab using
device names, then changing the order of Disks on the VM Spec (by adding or removing Disks)
will also invalidate those fstab entries.
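For example, to prepare the Disk mounted earlier for detachment (paths from this guide;
adjust to your own), unmount it and drop its fstab entry - either by editing /etc/fstab
by hand or with a one-liner such as:

sudo umount /mnt/mydisk
sudo sed -i '\|/mnt/mydisk|d' /etc/fstab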
Automating Formatting and Mounting of Storage Disks using Custom Cloud-Init
It is possible to automate the formatting and mounting of additional disks using custom
cloud-init.  However, you will have to carefully establish the expected device names of
the non-boot Disks, which are determined by the order in which the Disks are specified on
the VM Spec.
If you are using partitioned disks, note that this will affect the device names required
in your custom cloud-init (e.g. /dev/vdb1 rather than /dev/vdb).
Formatting Storage Disks using Cloud-Init
Disks can be formatted using disk_setup and fs_setup, which are responsible for formatting
devices, configuring file systems, and optionally partitioning devices.
The example below demonstrates config which will format the Disk with an ext4 file system
if none is present, and give the Disk device a label (required for automated mounting),
but will not set up any partitions on the Disk:
disk_setup:
  /dev/vdb:
    table_type: gpt
    layout: false
    overwrite: false
fs_setup:
- device: /dev/vdb
  filesystem: ext4
  label: mydisklabel
If you wish to partition the disk into a single partition, you can set layout: true.  You
will then need to adjust the device in fs_setup to include the partition number - which
will be 1 for a single partition (giving /dev/vdb1), as in the sketch below.
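For reference, a single-partition variant of the config above might look like this (a
sketch; verify the behaviour against your cloud-init version):

disk_setup:
  /dev/vdb:
    table_type: gpt
    layout: true         # create a single partition spanning the disk
    overwrite: false
fs_setup:
- device: /dev/vdb1      # note the partition number
  filesystem: ext4
  label: mydisklabel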
Note: For ext4 filesystems, the label must be 16 characters or less.
Mounting Disks at First Boot
If you wish for the device to be automatically mounted at first boot, you will need to
create a mount point using cloud-init's runcmd module:
runcmd:
- mkdir -p /mnt/mymountpoint
You can then use the label specified in fs_setup to add the device to fstab using the
mounts module:
mounts:
  - [ "LABEL=mydisklabel", /mnt/mymountpoint, ext4, "defaults", "0", "2" ]
Creating and Attaching a non-boot Disk to an Existing VM using the evroc CLI
To create a Disk suitable for use as a storage disk, create a Disk with a size specified:
evroc compute disk create <disk_name> --disk-storage-class="persistent" --disk-size-amount=5 --disk-size-unit=GB
You can then add the Disk to an existing VM using the update --append command:
evroc compute vm update <vm_name> --append --disk=<disk_name>
You will need to stop and start the VM for the Disk to be attached and become visible to
the OS.  Stop the VM by running:
evroc compute vm update <vm_name> --running=false
and once the VM has finished stopping (reporting VM is stopped), start it again by running:
evroc compute vm update <vm_name> --running=true
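As a worked end-to-end sequence using only the commands above (Disk and VM names are
placeholders):

evroc compute disk create mystoragedisk2 --disk-storage-class="persistent" --disk-size-amount=5 --disk-size-unit=GB
evroc compute vm update myvm --append --disk=mystoragedisk2
evroc compute vm update myvm --running=false
# wait until the CLI reports "VM is stopped", then:
evroc compute vm update myvm --running=true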