
Configuring Kubernetes for 3D Workloads

Mohit Gupta • 5 min read • Published on December 16, 2020

Achieving high-quality 3D renders is no easy feat from a software perspective: rendering is an extremely computationally intensive operation. Large 3D workloads are typically run on dedicated render farms that carry both high upfront costs and long-term maintenance costs, so setting up a system that maximizes resource utilization and minimizes cost is paramount to building a render farm that scales. So what is the best software option for supporting large 3D workloads? What software enables both scalability and automation, and how do you configure it to best support 3D rendering?

At FNX, we’ve opted to use Kubernetes to power our rendering platform, a technology traditionally used in web application architectures to automate deployments and maximize hardware utilization. With containerized application runtimes having emerged over the past few years as an extremely portable and effective way to package applications, Kubernetes stands out as the most effective system for managing applications at scale on all types of hardware.

Kubernetes handles everything from web servers to stateful applications to cron jobs and batch processing. There's something for every type of workload, and it can readily be applied to 3D rendering. Rendering large 3D workloads generally requires optimized resource utilization, fault tolerance, and the ability to handle large files effectively, and Kubernetes can deliver all of these in a scalable manner.

The core of FNX’s rendering systems runs on Amazon Web Services’ (AWS) managed Kubernetes service, EKS, orchestrated with Terraform, and the following post will highlight some ways in which Kubernetes can be configured for 3D rendering. This guide assumes you have an EKS cluster running that has been set up with Terraform. A sample repository to set up a similar cluster with Terraform is available here.


GPU Rendering

Kubernetes supports NVIDIA GPUs via the NVIDIA Device Plugin for Kubernetes. We use the community-maintained EKS Terraform module to manage our clusters. To enable a GPU worker pool, add a new worker group to the Terraform code for your EKS cluster using the EKS-optimized GPU machine image:
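For reference, a sketch of what that worker group might look like. The input names (worker_groups, kubelet_extra_args), the module source, and the AMI filter all depend on the module and cluster versions you are pinned to, and the hardware=gpu label is just an example name, so treat this as a starting point rather than a drop-in:

    # Look up the EKS-optimized GPU AMI for your cluster's Kubernetes version
    data "aws_ami" "eks_gpu" {
      owners      = ["602401143452"] # Amazon EKS AMI account ID
      most_recent = true

      filter {
        name   = "name"
        values = ["amazon-eks-gpu-node-1.18-*"]
      }
    }

    module "eks" {
      source       = "terraform-aws-modules/eks/aws"
      cluster_name = "render-farm"
      subnets      = module.vpc.private_subnets
      vpc_id       = module.vpc.vpc_id

      worker_groups = [
        {
          name                 = "gpu-workers"
          instance_type        = "p2.xlarge"
          ami_id               = data.aws_ami.eks_gpu.id
          asg_desired_capacity = 1
          # Label the node so GPU workloads can target it
          kubelet_extra_args   = "--node-labels=hardware=gpu"
        },
      ]
    }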

If you plan to run other non-GPU workloads, as we do at FNX, it can make sense to add a label to the node, as we did above, to indicate that this is a GPU node. Applying this change will launch a p2.xlarge instance and add it as a node to your cluster. Once these workers are up, the NVIDIA Kubernetes device plugin (https://github.com/NVIDIA/k8s-device-plugin) must be added to the cluster so that Kubernetes recognizes the GPU resources available. Later on we will leverage this hardware to do GPU rendering with Blender.
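At the time of writing, the plugin's README installs it as a DaemonSet with a single command; substitute the current release tag for v0.7.0:

          kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.7.0/nvidia-device-plugin.yml

Once the DaemonSet is running, kubectl describe node on a GPU worker should list nvidia.com/gpu under its allocatable resources.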


EFS For Render Assets

It’s no secret that any 3D workflow needs access to large amounts of persistent storage. Common 3D formats such as .obj approximate curved surfaces through tessellation into smaller and smaller geometric shapes, often leading to massive file sizes. AWS provides the Elastic File System (EFS) to manage large files, and Kubernetes provides a way to mount EFS drives to worker nodes. Files on this EFS drive can then be mounted into the pods running your rendering engine.

To do so with Terraform, first set up your Elastic File System, along with a security group that allows access from your cluster.
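A minimal sketch of that Terraform, assuming the VPC and cluster come from the vpc and eks modules used above (the security group source and subnets are assumptions; adjust them to match your own setup):

    resource "aws_efs_file_system" "render_assets" {
      creation_token = "render-assets"

      tags = {
        Name = "render-assets"
      }
    }

    # Allow NFS traffic from the EKS worker nodes
    resource "aws_security_group" "efs" {
      name   = "render-assets-efs"
      vpc_id = module.vpc.vpc_id

      ingress {
        from_port       = 2049
        to_port         = 2049
        protocol        = "tcp"
        security_groups = [module.eks.worker_security_group_id]
      }
    }

    # One mount target per private subnet so every node can reach the file system
    resource "aws_efs_mount_target" "render_assets" {
      for_each = toset(module.vpc.private_subnets)

      file_system_id  = aws_efs_file_system.render_assets.id
      subnet_id       = each.value
      security_groups = [aws_security_group.efs.id]
    }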

Be sure to note the id of the EFS file system you have provisioned. You will place this in the manifest used to deploy the efs-provisioner (hosted on Quay.io), which exposes space on the EFS drive as a Kubernetes persistent volume. Below are the key parts you will need to change depending on your setup.
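The pieces to edit look roughly like the following, modeled on the upstream efs-provisioner example; the file system id, region, provisioner name, and claim name are placeholders for your own values:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: efs-provisioner
    data:
      file.system.id: fs-0123456789abcdef0   # your EFS id from the Terraform output
      aws.region: us-east-1                  # the region the EFS lives in
      provisioner.name: example.com/aws-efs
    ---
    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: aws-efs
    provisioner: example.com/aws-efs          # must match provisioner.name above
    ---
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: assets-efs-pvc
    spec:
      storageClassName: aws-efs
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 10Gi                       # EFS is elastic; this value is largely nominal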


Download this file to get the full manifest and save it. Change the resources above to match your setup and apply it to your cluster. You should see the persistent volume claim appear and bind in the namespace you deployed the provisioner into.
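For example (the claim name follows the sketch above; substitute your own namespace):

          kubectl get pvc assets-efs-pvc -n <namespace>
          kubectl get pv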


The next step is to mount this volume into our pod running Blender, allowing us to run a rendering workload in Kubernetes.

Running Blender in Kubernetes

Blender is an open-source computer graphics toolkit that is widely used within the 3D space. From animated films to computer games to fashion and apparel, Blender serves a variety of use cases. The Blender install also comes with a command line interface, which can be leveraged for headless rendering in Kubernetes.

To build our Blender pod we can either build our own Docker image for Blender or use one of the many open-source ones on Docker Hub. For this example we will use this one. Our pod needs to leverage EFS for access to files and GPU cores for rendering.

One caveat is that you will need to configure Blender a bit to get it to run smoothly on these GPU cores. The following Python script settings should run well on the p2.xlarge instance above:
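A minimal version of that script, written against the Blender 2.8+ Python API (older releases use bpy.context.user_preferences instead), saved as enable_gpu.py:

    import bpy

    # Render with Cycles on the GPU
    bpy.context.scene.render.engine = 'CYCLES'
    bpy.context.scene.cycles.device = 'GPU'

    # Tell Cycles to use CUDA and enable every device it finds
    prefs = bpy.context.preferences.addons['cycles'].preferences
    prefs.compute_device_type = 'CUDA'
    prefs.get_devices()
    for device in prefs.devices:
        device.use = True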


Next we need to mount this script into our pod via a config map. To do so, save the above file locally and run:

          kubectl create configmap enable-gpu --from-file=<path to enable_gpu.py>


Finally, let’s put everything together and take a look at the pod that will be running our rendering workload:
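A sketch of that pod, saved as my-blender-pod.yaml. The image placeholder, scene path, render output path, node label, and claim name follow the examples above and are assumptions; substitute your own values:

    apiVersion: v1
    kind: Pod
    metadata:
      name: blender-render
    spec:
      restartPolicy: Never
      nodeSelector:
        hardware: gpu                      # the example label added to the GPU worker group
      containers:
        - name: blender
          image: <blender-image>           # the Docker Hub Blender image referenced above
          command:
            - blender
            - --background
            - /assets/scene.blend          # scene file stored on the EFS drive
            - --python
            - /config/enable_gpu.py        # GPU settings from the config map
            - --render-output
            - /assets/renders/frame_####
            - --render-frame
            - "1"
          resources:
            limits:
              nvidia.com/gpu: 1            # a p2.xlarge exposes a single GPU
          volumeMounts:
            - name: assets-efs-pvc
              mountPath: /assets
            - name: enable-gpu
              mountPath: /config
      volumes:
        - name: assets-efs-pvc
          persistentVolumeClaim:
            claimName: assets-efs-pvc      # the claim bound to the EFS persistent volume
        - name: enable-gpu
          configMap:
            name: enable-gpu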


  • Our pod is scheduled onto a GPU node by requesting nvidia.com/gpu in the resources block (one GPU, which is what a p2.xlarge exposes)
  • Our Elastic File System drive is mounted into the pod using a Kubernetes volume named assets-efs-pvc, backed by the persistent volume claim created by the provisioner
  • Our Python script that configures Blender for GPU usage is mounted via a config map volume named enable-gpu


Save this pod specification on your local machine and apply it to your cluster:

           kubectl apply -f my-blender-pod.yaml

 
Once the pod has run to completion, check the EFS drive; your rendered output should be there.


Scalable Software for 3D

With its resource optimization, fault tolerance, and effective handling of large files, Kubernetes has quickly become the software of choice for scaling 3D workloads. Configuration needs vary, but there are numerous options for tailoring the system to your specific needs: GPU resource configuration, combined with an elastic file system and computer graphics software such as Blender, can go a long way toward powering a state-of-the-art 3D platform.

Kubernetes is a battle-tested technology known to scale to tens of thousands of machines, powering some of the most performant and well-known applications in the world. Furthermore, the cost benefits of optimizing hardware usage can be drastic. In the 3D world, an industry that often requires vast computational power, saving money with Kubernetes and open-source systems can be a game changer.

At FNX, these open-source technologies, along with our custom software to manage and automate these processes, provide the core of our platform, allowing us to render thousands of images each day for customers all over the world.

Don’t want to configure Kubernetes manually? Don’t have the software team needed to scale out a complex system like this? We've got you covered! Try out FNX and begin automating and scaling your 3D workloads today.


