D2iQ Releases DKP 2.0 to Run Kubernetes Applications at Scale


D2iQ recently released version 2.0 of the D2iQ Kubernetes Platform (DKP), a platform to help organizations run Kubernetes workloads at scale.

The new version provides a single pane of glass for managing multi-cluster environments and running applications on any infrastructure, including private cloud, public cloud, and the network edge.


DKP 2.0 is built on Cluster API, a Kubernetes sub-project that simplifies the creation, configuration, and management of multiple clusters, and supports Day 2 operations out of the box. In addition, it adds workload auto-scaling to improve availability and support for immutable operating systems such as Flatcar Container Linux.

InfoQ met Tobi Knaup, CEO of D2iQ, at KubeCon + CloudNativeCon NA 2021 and spoke about DKP 2.0, its relevance to developers, and the future of Kubernetes.

InfoQ: Why is DKP 2.0 an important release for D2iQ?

Knaup: Version 2.0 is always a special release for any software vendor. It's the culmination of everything we've learned from our customers who have been using the platform in production since version 1.0 was released. We learned a lot there and built the 2.0 roadmap together with our clients.

DKP 2.0 is a significant re-architecture of the platform. We did this because we want DKP and especially Kommander, which is one of the platform’s products, to be the central point of control for the company to manage the entire Kubernetes fleet.

Today we see the world moving towards multi-cloud, hybrid cloud and the edge. It is important to have this central point of control. Kommander and DKP 2.0 are built on the Cluster API so that they can manage the lifecycle of any Kubernetes cluster on any infrastructure. Once these clusters are in place, Kommander has a whole set of Day 2 operations capabilities.
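Cluster API represents clusters themselves as declarative Kubernetes resources that a management plane reconciles. As a general illustration of that model (this is not D2iQ's own configuration; the names and the AWS infrastructure provider are assumptions for the sketch), a cluster definition looks roughly like this:

```yaml
# A minimal Cluster API (v1beta1) cluster object. The management cluster's
# controllers reconcile this into a real workload cluster.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: demo-cluster            # hypothetical cluster name
  namespace: default
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  controlPlaneRef:              # delegates control-plane lifecycle
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: demo-control-plane
  infrastructureRef:            # provider-specific details live here;
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: AWSCluster            # swap for the provider matching your infrastructure
    name: demo-cluster
```

Because every supported infrastructure exposes the same `Cluster` abstraction, one control plane can manage the lifecycle of clusters across clouds and on-premises environments.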

The other major new feature in 2.0 is Flux for continuous delivery. We adopted it because we believe it is a powerful technology: it is native to Kubernetes, integrates well with other systems, ties into Kubernetes RBAC for authorization, and can operate on a per-namespace basis.
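As a general sketch of how namespace-scoped GitOps with Flux works (the repository URL, names, and service account below are hypothetical, not DKP-specific; API versions reflect recent Flux releases):

```yaml
# Flux watches a Git repository and applies manifests from it.
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: app-config
  namespace: team-a             # Flux objects can be scoped to a team's namespace
spec:
  interval: 1m
  url: https://github.com/example/app-config   # hypothetical repo
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: app
  namespace: team-a
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: app-config
  path: ./deploy
  prune: true                   # remove resources deleted from Git
  # The reconciliation is impersonated as this service account, so the sync
  # can only do what that account's RBAC roles allow.
  serviceAccountName: team-a-reconciler
```

Setting `serviceAccountName` is what binds the delivery pipeline to Kubernetes RBAC: each team's namespace can reconcile its own repository under its own, limited permissions.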

The third major element is that we have added support for immutable operating systems. This was motivated by conversations with customers who are very security-conscious. We work with large companies, primarily federal government agencies, and support for immutable operating systems helps them improve their security posture.

InfoQ: How can developers benefit from the new features in DKP 2.0?

Knaup: For developers, I think it's exciting to have Flux integrated. Another thing we have done, which is not part of DKP 2.0 itself but is one of our other products, is Kaptain, an end-to-end machine learning (ML) platform based on Kubeflow.

For ML developers, engineers, and data scientists, it's a seamless way to build models without ever leaving their laptop environment. Kaptain includes a Python SDK that lets you train your models in a distributed fashion without having to know anything about Kubernetes.

The great thing about Kaptain is that it is built in a modular fashion: we know many organizations will run some components at the edge and some in the cloud, and you can configure it that way. A user may decide to train a model in the cloud on a particular cluster and later deploy it to the edge on another fleet of clusters.

InfoQ: Where do you think Kubernetes is heading?

Knaup: What I think is going to happen next is that many organizations that have been running data services and stateful applications for quite some time are starting to ask themselves: what should we do with all this data, and how do we derive insights from it? Often, that means building machine learning models and AI that harness that data. We are seeing many organizations create next-generation products that include AI components.

For example, we are working with a healthcare company that manufactures MRI and CT scanners with Kubernetes integration and plans to embed machine learning models. It makes perfect sense to then run those machine learning workloads on the same cluster as the microservices.

I think the other interesting thing about these machine learning applications is that the data and the models have to run where the new data comes in, and increasingly that is at the edge. For most businesses, most of the data they consume and process originates at the edge, not inside their cloud or data center. Teams can now decide to run some workloads at the edge and others in the cloud, using the same platform and user experience.

The third thing I see is that multi-cloud is becoming a reality, as many companies want to deploy to multiple cloud providers for a variety of reasons. We help them by providing a control plane they can use to manage their workloads across multiple cloud providers, Kubernetes clusters, the edge, and private clouds.

D2iQ, formerly known as Mesosphere, moved away from the Mesos-based DC/OS a few years ago to focus on Kubernetes and Day 2 operations for cloud-native applications and platforms.

A free trial of DKP 2.0 can be requested through the company’s website.


