Description:

Between the end of 2015 and the beginning of 2016 I was evaluating Openshift for conformance with our requirements for automated software testing. At that time k8s didn't have all the tools we have today, like kubeadm, kubespray etc. Even CNI hadn't reached GA status and only kubenet was available. k8s was taking its first steps in the public space.

But even back then Openshift provided some unique features that let us work effectively with containers. Things like Deployments, Jenkins integration, the web interface, security policies, the internal docker registry etc. made Openshift much better than bare k8s was. With Openshift we could just concentrate on our own duties and didn't spend much time on ops stuff; everything worked right out of the box.

But everything changes (sooner or later), and I decided to move my home infrastructure (a good way to bump into all the real problems before prod) to k8s. Here is why I did it:

  1. Version lag between Openshift and Kubernetes. Yep, you have to wait for new k8s features to become available in Openshift.

  2. The Openshift web interface is not needed. After years of using Openshift I can say for sure: it's a nice addition, but you stop using it pretty quickly, because most of the time you interact with clusters through CI/CD tools. As for regular users who just need some services on top of the clusters, they prefer to push a couple of buttons in Jenkins (high-level tasks with minimal cognitive tension) instead of going through all the steps in tools like openshift-templates or operatorhub and then figuring out why things don't work as expected.

  3. Lack of supported CNI/Ingress options like Cilium, Calico, Contour etc. You cannot just tap into the wealth and diversity of the k8s ecosystem, because of the "out of the box" nature of Openshift. On vanilla k8s, picking your own CNI is a routine operation (see the sketch after this list).

  4. The overcomplicated installation process and restrictions of the recent 4.5 GA version of OKD. Where earlier you could fix any problem in openshift-ansible, now you have to deal with the "black box" of openshift-install. The new 4th version assumes only FCOS as the underlying OS for the control plane and FCOS/RHEL 7 for compute nodes, which doesn't fit my needs.
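
To illustrate point 3: on vanilla k8s, swapping in a CNI like Cilium is basically a single Helm release. This is a minimal sketch, not the exact setup from this post; the chart version and checks below are assumptions you would adjust for your cluster.

```bash
# Add the official Cilium chart repository and refresh the index
helm repo add cilium https://helm.cilium.io/
helm repo update

# Install Cilium as the cluster CNI into kube-system
# (the version here is illustrative; pin whatever matches your cluster)
helm install cilium cilium/cilium \
  --namespace kube-system \
  --version 1.8.2

# Verify that the Cilium agents come up on every node
kubectl -n kube-system get pods -l k8s-app=cilium
```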

As a result, while participants of the #openshift-users Slack channel are trying to solve installation problems, I just switched my clusters and all services to k8s and improved a few things along the way (network security and performance, docker image security checking with the help of Harbor etc.).