A recent survey reports that the vast majority of software development teams expect to use microservices and containers in 2021 to improve the speed, efficiency, and flexibility of their software development tools and applications.
As reported by ZDNet, a study conducted by Kong, a microservices specialist, found that 86% of the software and services organizations participating in the survey are either already using — or are planning to use — Kubernetes within the next 12 months. “Kubernetes has clearly emerged as the standard operating environment for modern distributed architectures,” states the Kong report.
(Image source: Kubernetes.io)
According to Kong’s survey, “the sky-high adoption rate of this container orchestration tool also highlights the need for APIM or services meshes to integrate with the ingress of Kubernetes so that not only containers, but also microservices and APIs can be managed natively through Kubernetes CRDs.” Additionally, the report adds that “only 5% overall are not using and have no plans to use Kubernetes, although slightly more respondents from Europe (8%) say this than their U.S. colleagues (3%).” Download Kong’s “2021 Digital Innovation Benchmark” study here [PDF download].
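As a concrete illustration of managing an API "natively through Kubernetes CRDs," the sketch below shows how Kong's own Kubernetes Ingress Controller pairs a standard Ingress with a KongPlugin custom resource. The resource names (`example-api`, `api-rate-limit`) and the rate limit are illustrative placeholders, not from the report:

```yaml
# A KongPlugin custom resource (from Kong's Kubernetes Ingress Controller)
# that applies rate limiting to an API. Names here are placeholders.
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: api-rate-limit
plugin: rate-limiting
config:
  minute: 60
---
# A standard Ingress routing to the API's backing Service, annotated so
# Kong's ingress controller handles it and applies the plugin above.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-api
  annotations:
    kubernetes.io/ingress.class: kong
    konghq.com/plugins: api-rate-limit
spec:
  rules:
    - http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: example-api
                port:
                  number: 80
```

Because both objects are ordinary Kubernetes resources, API policies like rate limiting can be versioned, reviewed, and applied with `kubectl` alongside the rest of a cluster's configuration.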
Chris Lamb, NVIDIA’s VP of GPU computing software platforms, recently revealed that NVIDIA has been using Kubernetes internally for its large-scale AI training. “AI serving becomes a GPU-accelerated workload, which is just at the inflection point of taking off,” said Lamb. NVIDIA is no stranger to Kubernetes: partners such as Linux veteran Wind River introduced an NVIDIA Kubernetes Plugin last year.
Last week, NVIDIA released “NVIDIA vGPU January 2021,” a new version of its Virtual GPU (vGPU) technology for enterprises. The release equips developers with “more power and flexibility through GPU-accelerated virtual machines (VMs), from the data center or cloud.” It also supplies the NVIDIA GPU Operator, “a software framework that simplifies GPU deployment and management,” which runs on Kubernetes. NVIDIA’s blog post includes more details, including a performance benchmark on a virtual workstation (pictured below).
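To give a sense of what "simplifies GPU deployment and management" means in practice: once the GPU Operator has rolled out the driver, container toolkit, and device plugin across a cluster's nodes, a pod can consume a GPU through the standard `nvidia.com/gpu` resource. This is a minimal sketch; the pod name and image tag are examples, not from NVIDIA's announcement:

```yaml
# Minimal pod that requests one GPU via the resource the device plugin
# advertises. With the stock device plugin, GPUs are allocated in whole units.
apiVersion: v1
kind: Pod
metadata:
  name: cuda-smoke-test
spec:
  restartPolicy: Never
  containers:
    - name: cuda
      image: nvidia/cuda:11.0-base   # example CUDA base image
      command: ["nvidia-smi"]        # print visible GPU(s) and exit
      resources:
        limits:
          nvidia.com/gpu: 1
```

Without the operator, an administrator would have to install compatible drivers and the container runtime hooks on every GPU node by hand before such a pod could schedule.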
Docker, the open source containerization technology whose containers are commonly orchestrated by Kubernetes, has not attracted the same level of interest, despite the project’s recent announcement of support for running Docker on workstations in conjunction with Azure-based technologies. Docker’s offering is currently available for download.
Technologies aimed at making GPU workloads more interoperable, manageable, and cost-effective have also recently begun to emerge. For example, startup Run.AI debuted a hypervisor-less approach, said to enable users to “fractionalize GPUs without a need for virtual machines.” Virtualized and containerized approaches to GPUs like this are reminiscent of the way virtualization transformed CPUs and servers in an earlier generation of data centers.
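The appeal of fractionalizing is that Kubernetes ordinarily hands out GPUs only in whole units, so a small inference job ties up an entire card. The sketch below is a hypothetical illustration of what a fractional request can look like; the `gpu-fraction` annotation and `runai-scheduler` name are assumptions modeled loosely on Run.AI's public documentation, not a verbatim API:

```yaml
# Hypothetical fractional-GPU request. The annotation and scheduler name
# are assumptions for illustration; consult Run.AI's docs for the real API.
apiVersion: v1
kind: Pod
metadata:
  name: shared-gpu-job
  annotations:
    gpu-fraction: "0.5"            # assumed: claim half of one physical GPU
spec:
  schedulerName: runai-scheduler   # assumed custom scheduler name
  containers:
    - name: train
      image: nvidia/cuda:11.0-base # example image
      command: ["python", "train.py"]
```

Under such a scheme, two half-GPU pods can be packed onto one card, which is where the cost-effectiveness claim comes from.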