In this series of posts, I’ll cover the difference between ephemeral and persistent storage as far as Kubernetes containers are concerned and discuss the latest developments in ephemeral storage. I’ll also occasionally mention Pure Service Orchestrator™ to show how it can provide storage to your applications, no matter what type is required.
Back in the mists of time, when Kubernetes and containers in general were young, storage was only ephemeral. There was no concept of persistence for your storage because the applications running in container environments were inherently ephemeral themselves, so there was no need for data persistence.
With the development of FlexVolume plugins, and more recently CSI-compliant drivers, persistent storage has become a mainstream offering, enabling applications that require state for their data. Persistent storage will be covered in the second blog in this series.
Ephemeral Storage
Ephemeral storage can come from several different locations, the most popular and simplest being emptyDir. This is, as the name implies, an empty directory exposed as a volume in the pod that can be mounted by one or more containers in that pod. When the pod terminates, whether that be cleanly or through a failure event, the mounted emptyDir storage is erased and all its contents are lost forever.
emptyDir
You might wonder where this “storage” used by emptyDir comes from, and that is a great question. It can come from one of two places. The most common is the physical storage available to the Kubernetes node running the pod, usually from the root partition. This space is finite and completely dependent on the available free capacity of the disk partition the directory is present on. This partition is also used for lots of other dynamic data, such as container logs, image layers, and container-writable layers, so it is potentially an ever-decreasing resource.
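If you’re curious how much local ephemeral storage a particular node is advertising to the scheduler, one quick way to check (the node name below is just a placeholder) is:

# Show the ephemeral-storage figures the node reports under Capacity and Allocatable
kubectl describe node <node-name> | grep -i ephemeral-storage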
To create this type of ephemeral storage for the containers running in a pod, ensure the pod specification has the following section:
volumes:
  - name: demo-volume
    emptyDir: {}
Note that the {} states that we are not providing any further requirements for the ephemeral volume. The name parameter is required so that containers can mount the emptyDir volume, like this:
volumeMounts:
  - mountPath: /demo
    name: demo-volume
If multiple containers are running in the pod, they can all access the same emptyDir if they mount the same volume name.
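Putting the two fragments together, a minimal illustrative pod manifest could look like the following. The pod name, container names, and busybox image are purely placeholders; the point is that both containers mount the same demo-volume and therefore see the same directory contents:

apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
    - name: writer
      image: busybox
      # Write a file into the shared emptyDir, then stay alive
      command: ["sh", "-c", "echo hello > /demo/hello.txt && sleep 3600"]
      volumeMounts:
        - mountPath: /demo
          name: demo-volume
    - name: reader
      image: busybox
      # The same file is visible here at /demo/hello.txt
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - mountPath: /demo
          name: demo-volume
  volumes:
    - name: demo-volume
      emptyDir: {}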
From the pod’s perspective, the emptyDir is a real filesystem mapped to the root partition, which is already partly utilised, so you will see it in a df command, executed in the pod, as follows (this example has the pod running on a Red Hat CoreOS worker node):
# df -h /demo
Filesystem                            Size    Used   Available  Use%  Mounted on
/dev/mapper/coreos-luks-root-nocrypt  119.5G  28.3G  91.2G      24%   /demo
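If you’d rather not open a shell in the pod first, the same check can be run from outside; here I’m assuming the placeholder pod and container names from the example above:

# Run df against the emptyDir mount point from outside the pod
kubectl exec demo-pod -c writer -- df -h /demo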
If you want to limit the size of your ephemeral storage, this can be achieved by adding resource requests and limits to the container in the pod as follows:
requests:
  ephemeral-storage: "2Gi"
limits:
  ephemeral-storage: "4Gi"
Here the container requests 2GiB of local ephemeral storage and is limited to a maximum of 4GiB.
Note that if you use this method and exceed the ephemeral-storage limits value, the Kubernetes eviction manager will evict the pod, so this is a very aggressive way of enforcing space limits.
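For context, here is roughly how those requests and limits sit inside the container section of a pod specification; the container name and image are the same illustrative placeholders used earlier:

containers:
  - name: writer
    image: busybox
    resources:
      requests:
        # The scheduler uses this to place the pod on a node with enough free space
        ephemeral-storage: "2Gi"
      limits:
        # Exceeding this value triggers eviction of the pod
        ephemeral-storage: "4Gi"
    volumeMounts:
      - mountPath: /demo
        name: demo-volume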
emptyDir from RAM
There might be instances where you only need a minimal scratch space area for your emptyDir and you don’t want to use any of the root partition. In this case, resources permitting, you can create it in RAM. The only difference is that a little more information is passed in the pod specification when the emptyDir is created, as follows:
volumes:
  - name: demo-volume
    emptyDir:
      medium: Memory
In this case, the directory is mounted on tmpfs and its default size is half of the RAM of the node it is running on. For example, here the worker node has just under 32GB of RAM and therefore the emptyDir is 15.7GB, about half:
# df -h /demo
Filesystem  Size   Used  Available  Use%  Mounted on
tmpfs       15.7G  0     15.7G      0%    /demo
You can use the concept of a sizeLimit for the RAM-based emptyDir, but (at the time of writing) this does not work as you would expect. Rather than capping the size of the mounted volume, the sizeLimit is used by the Kubernetes eviction manager to evict any pods that exceed the sizeLimit specified in the emptyDir.
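If you do want to set a sizeLimit on a RAM-backed emptyDir, it goes in the volume definition; the 1Gi figure below is just an example, and as described above, exceeding it results in eviction rather than a hard cap on the tmpfs mount:

volumes:
  - name: demo-volume
    emptyDir:
      medium: Memory    # back the volume with tmpfs (RAM) instead of the root partition
      sizeLimit: 1Gi    # exceeding this leads to pod eviction, not a smaller tmpfs mount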
Check back for Part 2 of this series, where I’ll discuss persistent storage in Kubernetes.