February 27, 2020
Author: Aaren DeJong

OpenShift 4 On Your Turf

OpenShift 4 is Here!

If you're in the container/devops/k8s/openshift space and haven't yet heard any news about OpenShift 4 (OCP4), then you must be living under a rock. OCP4 has been GA for a while now and we're eagerly waiting for 4.4 to drop; 4.2 and 4.3 make up the majority of our OCP4 services (as of January 2020), and many of our customers with 3.x clusters have already asked how OCP4 will play in their on-prem environments.

{% include image name="ocp-logo.png" %}

There's no question that this version of OCP is more 'opinionated' as a product, in that the Red Hat engineers have focused on making the platform efficient yet capable, and much more cloud-native than ever before. I'll get into that part a little more in this blog, since we know more than a few organizations are going to insist on running OCP on-prem as well as in the cloud.

Architecture Change!

First and foremost, there are major architecture changes to understand before planning any vast migrations or cluster builds. OCP4 is even more "kubernetes-ey" than it was in the v3 days. How can a k8s distro become more kubernetes than before, you ask? Well, there's much less Ansible involved in the cluster build and configuration stages... in fact, there's zero Ansible. Instead we see custom resource definitions (CRDs) and more native k8s objects defining the things that make OCP what it is, wrapped around k8s. There's also the full-on integration of the Operator SDK and the model of capabilities it brings to the platform. Operators and CRDs essentially run the cluster, instead of Ansible being used to make changes to it as we were familiar with before.
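To see what that looks like day to day, here's a quick sketch using the standard oc CLI; the resources shown are the ones OCP4 ships with, and the exact output will vary by release:

```bash
# cluster components are owned by operators, each reporting its health as a ClusterOperator
oc get clusteroperators

# even the platform version is a custom resource, reconciled by the cluster-version operator
oc get clusterversion

# component config lives in CRD-backed objects instead of Ansible variables, e.g. the default ingress controller
oc get ingresscontroller default -n openshift-ingress-operator -o yaml
```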

Another major absence is typical RHEL as the host OS for the cluster nodes. Now we have Ignition-built Red Hat CoreOS nodes and only two node types per cluster, looking more natively like k8s: 'masters' and 'workers'. Being CoreOS, we also spend 99% less time SSHing to the nodes (shocking for some of you, I know).
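Assuming a running 4.x cluster, this is roughly what that looks like from the CLI; the node name is a placeholder, and oc debug is the supported replacement for habitual SSH:

```bash
# only two node roles out of the box: master and worker
oc get nodes -o wide

# instead of SSHing to a CoreOS node, start a debug pod that chroots into the host
oc debug node/<node-name>
# then, inside the debug pod:
chroot /host
```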

What does this mean for on-prem clusters then?

  • It means there's no more notion of having "gold images" of RHEL or "certified RHEL" templates to muddy the waters of what should be thought of as appliance-like operating system nodes that become part of the OCP4 k8s cluster. This will present a new security stage for many enterprises, and deserves consideration and research, given that CoreOS will likely be very new to most organizations.
  • It means the prior methods of spawning VMs in your hypervisor will need to change to accommodate the unique way that CoreOS systems are built. Red Hat CoreOS (RHCOS) is built by Ignition, and on-prem deployments typically leverage PXE-boot (TFTP) methods to make VMs out of minimal boot media (see the sketch after this list).
  • It definitely means that new considerations are due for all major infrastructure and deployment departments in an organization considering OCP4: security stance, storage handling, networking, and various other aspects deserve a fresh outlook rather than resting on the laurels of legacy practice.
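To make that second point concrete, here's a rough sketch of a PXE entry for an RHCOS worker in the 4.2/4.3 timeframe. The coreos.inst kernel arguments are the ones RHCOS expects; the hostnames, file names and paths are placeholders for illustration.

```text
# pxelinux.cfg entry for an RHCOS worker (illustrative)
LABEL worker
  KERNEL rhcos-installer-kernel
  APPEND initrd=rhcos-installer-initramfs.img \
    coreos.inst=yes coreos.inst.install_dev=sda \
    coreos.inst.image_url=http://webserver.example.com/rhcos-metal.raw.gz \
    coreos.inst.ignition_url=http://webserver.example.com/worker.ign \
    ip=dhcp
```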

Cloud-Centric vs On-Premise?

Earlier I noted that OCP4 is much more kubernetes than before, and that it is opinionated and more cloud-centric. This is calling out that the platform is even more of a prime choice for a portable distribution of kubernetes than before, in that it will absolutely give the same experience to its users on ANY cloud or hypervisor it lives on. What this provides is a standard, a baseline, a foundation for fast container workload deployment for developers.

If your organization isn't yet in the cloud, that's fine, because what you put into OCP4 while it's living in VMware or RHV or on bare-metal hardware can be the exact same workload when you make the move to cloud and put OCP4 there. It's just that you'd be tied to the UPI install method as things stand. **There is a sacrifice of features**, however, that should be taken into consideration. Before any panic, let's establish what you might be giving up in an on-premise UPI OCP4 deploy.

First of all, the 2 install methods:

  • UPI - semi-automated stack; the cluster is built upon pre-built VMs (you provision/provide the VMs and the load balancer).
  • IPI - fully-automated stack; the cluster infra is self-built AND MANAGED (take note of that last piece in caps).

{% include image name="install-compare.png" %}

So now that you understand the install methods, you understand that given the current release version, there is no IPI installer for on-premise hypervisors (except for OpenStack). This means you must pre-build the RHCOS virtual machines and provide the appropriately configured load-balancer. This also means that the cluster can't manage its own nodes as we enjoy with the IPI (fully automated, Installer Provisioned Infra) method. To be clear, when OCP can manage its own infra, we get node auto-scaling, as well as the ability to diversify the sizing of the nodes if you like. Now, depending on how advanced your in-house VM provisioning automation is, you might not have a problem with UPI OCP clusters. If you prefer the taste of a more self-managed infra layer as part of your enterprise k8s cluster, then IPI is the thing to do, and you keep all the features that OCP4 brings to the game of automated infra management.
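For reference, node management and auto-scaling on an IPI cluster are driven by the Machine API. Something along these lines (the resource kinds are standard OCP4; the names and replica counts are illustrative) attaches an autoscaler to an installer-created worker MachineSet:

```yaml
apiVersion: autoscaling.openshift.io/v1beta1
kind: MachineAutoscaler
metadata:
  name: worker-autoscaler            # illustrative name
  namespace: openshift-machine-api
spec:
  minReplicas: 2
  maxReplicas: 6
  scaleTargetRef:
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    name: mycluster-abc12-worker     # an existing MachineSet created by the IPI installer
```

(A cluster-wide ClusterAutoscaler resource also needs to be enabled for this to take effect.)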

{% include image name="install-flow.png" %}

The IPI installer is coming (per the roadmap) for VMware and RHV, but it's just not here yet. It will be awesome when it lands, but until then, it's definitely possible to automate VM provisioning, though a few special scenarios are now at play with how RHCOS is built and used for OCP4. If you're doing UPI and choose to set static IPs for the VMs, it becomes a matter of linking MACs to IPs a-la DHCP reservations, since the RHCOS systems need an IP at ignition time when they instantiate. There's also the requirement of the pre-configured load-balancer, a TFTP/httpd server for ignition images and bare-metal BIOS delivery, and other handy things you will want to feed or debug via curl commands. In short, it's definitely doable to put UPI OCP4 into your on-premise datacenter, and with the right people at the helm (inadvertent k8s pun) it can be automated pretty well.
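To give a sense of that supporting infrastructure, here's a minimal sketch; the MACs, hostnames, IPs and URLs are made up, while the ports are the ones the OCP4 load balancer needs to expose:

```text
# dnsmasq-style DHCP reservations so each RHCOS VM gets a predictable IP at ignition time
dhcp-host=52:54:00:aa:bb:01,master-0.ocp4.example.com,10.0.0.11
dhcp-host=52:54:00:aa:bb:02,master-1.ocp4.example.com,10.0.0.12

# load balancer frontends the cluster expects
#   6443   -> Kubernetes API        (bootstrap + masters)
#   22623  -> machine-config server (bootstrap + masters)
#   80/443 -> ingress routers       (workers)
```

And a quick curl to confirm the httpd server is actually handing out the ignition files:

```bash
curl -I http://webserver.example.com/bootstrap.ign
```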

If you're planning to lab or sandbox it, due to the number of components you need to control, many UPI install tests have used the "facility VM" method to centralize what needs to be automated so everything plays nicely with the installer once the infra is in place.
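Once that facility VM is serving its pieces and the RHCOS VMs are booting, the installer itself will tell you when the cluster is ready. Run from wherever your install assets live (the directory name is a placeholder):

```bash
# wait for the bootstrap node to hand control to the masters, then for the install to finish
openshift-install wait-for bootstrap-complete --dir=./mycluster --log-level=info
openshift-install wait-for install-complete --dir=./mycluster
```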

{% include image name="components.png" %}

If you're curious about more details of either install method or anything OCP4, I highly recommend the official documentation for OCP4 or just get in touch with us at Arctiq as we've done a number of OCP4 deploys already and just as many are currently in progress. Even better, we'd really be happy to see you at one of our upcoming events about anything in the kubernetes space. See you then!
