This article is produced by scandiweb

scandiweb is the most certified Adobe Commerce (Magento) team globally, being a long-term official Adobe partner specializing in Commerce in EMEA and the Americas. eCommerce has been our core expertise for 20+ years.

Kubernetes Cluster Failure Resolved

By Switching To The Fastest Drives AWS Can Offer

Kubernetes is an open-source container orchestration system, originally developed by Google, designed to automate much of the work involved in deploying, scaling, and managing containerized applications.

One of scandiweb's products, ReadyMage, is built with this functionality in mind. The basic idea behind ReadyMage is that it combines a Magento back end with a ScandiPWA front end and launches projects in a Kubernetes cluster. The cluster is managed by AWS, and the service we use for this purpose is Amazon EKS.

Recently, we encountered an unexpected Kubernetes problem that affected the integrity of our containerized setup and caused project deployments to crash the entire cluster.

The problem

Throughout spring and early summer 2020, after loading the Kubernetes cluster with a few heavy projects, we came across an unexpected issue: Kubernetes nodes (AWS instances) started freezing at random, for no obvious reason.

To mitigate such an issue, a Kubernetes cluster normally relocates pods (sets of application containers) from frozen nodes to healthy ones. This was acceptable up to the point when the problem began to reappear immediately upon relocation: the healthy nodes started freezing as well, leading to a cascade of failed pods.

Investigation

We contacted the AWS Premium Support service for assistance, but after a month of back-and-forth communication, no real solution was proposed. 

To investigate the issue, we set up monitoring to track the exact moment a node froze. Whenever this happened, the faulty node was removed from load balancing, removed from the Kubernetes cluster, and stopped. The node's disk was then mounted onto a healthy instance so that we could examine precisely what had happened.
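
For reference, the isolation steps looked roughly like the sketch below. This is a minimal illustration using standard kubectl and AWS CLI commands, not the exact procedure we ran; the node name, instance ID, volume ID, and device name are hypothetical placeholders.

```python
# isolate_node.py -- minimal sketch of the node-forensics workflow described above.
# Assumes kubectl and the AWS CLI are installed and configured; all IDs are placeholders.
import subprocess

NODE = "ip-10-0-1-23.eu-west-1.compute.internal"   # frozen Kubernetes node (placeholder)
INSTANCE_ID = "i-0123456789abcdef0"                # its EC2 instance ID (placeholder)
VOLUME_ID = "vol-0123456789abcdef0"                # its root EBS volume (placeholder)
HEALTHY_INSTANCE_ID = "i-0fedcba9876543210"        # instance used for inspection (placeholder)

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Take the node out of scheduling/load balancing and evict its pods.
run("kubectl", "cordon", NODE)
run("kubectl", "drain", NODE, "--ignore-daemonsets", "--force")

# 2. Remove it from the cluster and stop the underlying instance.
run("kubectl", "delete", "node", NODE)
run("aws", "ec2", "stop-instances", "--instance-ids", INSTANCE_ID)
run("aws", "ec2", "wait", "instance-stopped", "--instance-ids", INSTANCE_ID)

# 3. Detach the disk and attach it to a healthy instance for inspection
#    (mount it there and read the system logs, kernel messages, etc.).
run("aws", "ec2", "detach-volume", "--volume-id", VOLUME_ID)
run("aws", "ec2", "attach-volume", "--volume-id", VOLUME_ID,
    "--instance-id", HEALTHY_INSTANCE_ID, "--device", "/dev/sdf")
```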

Eventually, with the new data coming in, some light was shed on the nature of the problem. As it turned out, the faulty instance had lost network connectivity due to a kernel panic caused by the network interface driver.

This explained why the Kubernetes cluster lost control of the node, and why the AWS health check failed, with the instance eventually being terminated.

A further search revealed that we were not alone in facing this problem. Other users of AWS-managed Kubernetes clusters had encountered similar issues, as can be seen from this GitHub thread.

The community's suggestion was that the issue appeared under intense disk input/output, in other words, during heavy reads or writes. This is exactly what happens during application deployment.

A fresh look at the setup

At the time, we were using fast SSD disks with burstable performance (AWS EBS GP2). EBS volumes are attached over the network: AWS maintains huge disk arrays from which volumes are attached on demand to the required instances.

These volumes offer pretty decent performance:

  • a write speed of 270 MB/s for a prolonged period of time (until the burst balance is drained; see the monitoring sketch after this list),
  • up to 3000 input/output operations per second (IOPS).
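
The burst balance mentioned above is exposed by AWS as a CloudWatch metric, which is how a drained volume can be spotted. Below is a minimal sketch using boto3, assuming AWS credentials are already configured; the volume ID is a hypothetical placeholder.

```python
# burst_balance.py -- hedged sketch: read the BurstBalance metric of a GP2 EBS volume.
# Requires boto3 with AWS credentials configured; the volume ID is a placeholder.
from datetime import datetime, timedelta, timezone
import boto3

VOLUME_ID = "vol-0123456789abcdef0"  # hypothetical GP2 volume

cloudwatch = boto3.client("cloudwatch")
now = datetime.now(timezone.utc)

response = cloudwatch.get_metric_statistics(
    Namespace="AWS/EBS",
    MetricName="BurstBalance",           # percentage of I/O burst credits remaining
    Dimensions=[{"Name": "VolumeId", "Value": VOLUME_ID}],
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    Period=300,                          # 5-minute datapoints
    Statistics=["Average"],
)

for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], f"{point['Average']:.1f}%")
```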

An alternative type of AWS instance has SSD disks attached directly to the physical host hardware. This type is called an “instance store” and is marked with the letter “d” in the instance type suffix, as in “c5d.xlarge”. An instance store is not a persistent disk, meaning that all stored data is lost when the instance is shut down. At the same time, such disks are much faster, not so much in raw throughput as in latency: each I/O operation completes sooner.
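
As a side note, whether a given instance type ships with an instance store, and how large it is, can be checked programmatically. A minimal sketch with boto3 follows; the instance types listed are just examples.

```python
# instance_store_check.py -- hedged sketch: check which instance types have an instance store.
# Requires boto3 with AWS credentials configured.
import boto3

ec2 = boto3.client("ec2")

# Example types: the plain c5.xlarge vs. its "d" sibling with local SSDs.
response = ec2.describe_instance_types(InstanceTypes=["c5.xlarge", "c5d.xlarge"])

for itype in response["InstanceTypes"]:
    name = itype["InstanceType"]
    if itype.get("InstanceStorageSupported"):
        info = itype["InstanceStorageInfo"]
        print(f"{name}: instance store, {info['TotalSizeInGB']} GB "
              f"({info['Disks'][0]['Type']})")
    else:
        print(f"{name}: EBS-only")
```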

The findings

To see how the two types of hardware compare in terms of performance, we measured both EBS and instance store disks by repeatedly copying files of different sizes, thousands of files at a time:

File size | Number of files | Total size | EBS GP2 speed, MB/s | EBS GP2 time, s | Instance store speed, MB/s | Instance store time, s | Speedup, x
1 KB      | 100 000         | 102 MB     | 1.6                 | 65.5            | 22.2                       | 4.6                    | 14.2
10 KB     | 100 000         | 1 GB       | 14.9                | 68.6            | 196                        | 5.2                    | 13.1
100 KB    | 100 000         | 10 GB      | 92.7                | 110.4           | 362                        | 28.3                   | 3.9
1 MB      | 10 000          | 10 GB      | 269                 | 38              | 368                        | 27.8                   | 1.4
10 MB     | 1 000           | 10 GB      | 269                 | 38              | 368                        | 27.8                   | 1.4
100 MB    | 100             | 10 GB      | 269                 | 38.1            | 363                        | 28.2                   | 1.3
1 GB      | 10              | 10 GB      | 267                 | 38.4            | 363                        | 28.2                   | 1.4

Unsurprisingly, the physically attached disk is faster (on average, by 40%). But the difference becomes especially pronounced when small files are being transferred – a direct result of reduced latency, i.e. time added to each I/O operation.

As seen from the table above, with files of 1 MB and larger the choice of disk matters much less: throughput plateaus for both disk types, at about 269 MB/s for EBS and 368 MB/s for the instance store, respectively.

However, when copying files under 10 KB in size, the gap widens considerably, with the instance store being up to 14x faster.
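
For reproducibility, the measurements above can be approximated with a short copy loop along the lines of the sketch below. This is a simplified illustration rather than the exact tool we used; the mount points are hypothetical, and page-cache effects (which real measurements should flush between runs) are ignored here.

```python
# copy_benchmark.py -- hedged sketch of the file-copy benchmark described above.
# Simplified illustration: a real run should also flush the page cache between passes.
import os
import shutil
import time

def benchmark_copy(src_dir, dst_dir, file_size, file_count):
    """Create `file_count` files of `file_size` bytes in src_dir,
    copy them to dst_dir, and report throughput in MB/s."""
    os.makedirs(src_dir, exist_ok=True)
    os.makedirs(dst_dir, exist_ok=True)

    payload = os.urandom(file_size)
    for i in range(file_count):
        with open(os.path.join(src_dir, f"file_{i}"), "wb") as f:
            f.write(payload)

    start = time.monotonic()
    for i in range(file_count):
        shutil.copy(os.path.join(src_dir, f"file_{i}"),
                    os.path.join(dst_dir, f"file_{i}"))
    elapsed = time.monotonic() - start

    total_mb = file_size * file_count / 1024 / 1024
    print(f"{file_count} x {file_size} B: {total_mb:.0f} MB "
          f"in {elapsed:.1f} s = {total_mb / elapsed:.1f} MB/s")

# Example: 100 000 files of 10 KB, copied within the volume under test
# (mount points are hypothetical placeholders).
benchmark_copy("/mnt/ebs/src", "/mnt/ebs/dst", 10 * 1024, 100_000)
benchmark_copy("/mnt/instance-store/src", "/mnt/instance-store/dst", 10 * 1024, 100_000)
```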

Solution

The described difference in performance is crucial for Magento-based projects, since the Magento codebase consists of a large number of small files.

For example, assume a sample project with a codebase of 1.6 GB spread across 123K files (a quick back-of-the-envelope check follows the list):

  • an EBS disk will require 68 seconds to copy the codebase, while
  • an instance store disk will require only 5 seconds to perform the same task.
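
As a rough cross-check against the table above, these times imply effective throughputs of roughly 24 MB/s on EBS and around 330 MB/s on the instance store, consistent with an average file size of about 13 KB, i.e. between the 10 KB and 100 KB rows. The trivial calculation:

```python
# codebase_copy_check.py -- back-of-the-envelope check of the numbers quoted above.
CODEBASE_MB = 1.6 * 1024      # 1.6 GB codebase
FILE_COUNT = 123_000          # ~123K files

print(f"average file size: {CODEBASE_MB * 1024 / FILE_COUNT:.1f} KB")
print(f"EBS effective throughput:            {CODEBASE_MB / 68:.0f} MB/s (68 s)")
print(f"instance store effective throughput: {CODEBASE_MB / 5:.0f} MB/s (5 s)")
```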

With these findings in mind, we revised our setup and fully switched to using instance store disks within the Kubernetes cluster. After this, the node-freezing stopped.

As a result, our Kubernetes deployments are now faster and no longer crash the entire cluster, benefits that comfortably justify the 13% cost increase for the faster hardware.

Have you found this information helpful? Curious to learn more about ScandiPWA and ReadyMage? Looking for a Magento solution? Let us help you!

Any questions? Feel free to get in touch: book a free consultation or schedule a call!
