Machine learning is increasingly used to solve problems across many domains, driving a surge in demand for GPU computation to accelerate training and inference. Containerization technologies such as Docker make it possible to manage resource isolation and utilization on a GPU cloud compute server; however, containerization alone cannot handle scaling and failover of a running application. Kubernetes is an open-source platform for automating the deployment, scaling, and management of containerized applications, and it helps distributed systems run more resiliently. This thesis reports on an exploration of applying Kubernetes to manage GPU resources for accelerating AI workloads. The study began with setting up a Kubernetes cluster to manage computation jobs. In application, the cluster is integrated into RAI, a project-submission system designed as a configurable programming environment for parallel programming courses.
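To illustrate the kind of GPU scheduling Kubernetes provides, the following is a minimal sketch of a Pod specification that requests one GPU through the standard `nvidia.com/gpu` extended resource. It assumes the NVIDIA device plugin is installed on the cluster; the image and pod name are placeholders, not part of the thesis's actual configuration.

```yaml
# Hypothetical Pod spec: asks the Kubernetes scheduler to place this
# container on a node with a free GPU. The device plugin advertises
# nvidia.com/gpu as an allocatable resource on each GPU node.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-training-job        # placeholder name
spec:
  restartPolicy: OnFailure
  containers:
    - name: trainer
      image: nvidia/cuda:12.2.0-base-ubuntu22.04   # example CUDA base image
      command: ["nvidia-smi"]   # replace with the actual training command
      resources:
        limits:
          nvidia.com/gpu: 1     # request exactly one GPU; the scheduler
                                # will not co-locate another pod on it
```

Because the GPU is expressed as a schedulable resource limit, Kubernetes can queue, place, and restart such jobs across the cluster, which is the capability plain Docker containerization lacks.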