Considerations for evolving traditional virtualization platforms: introducing Red Hat OpenShift Virtualization
Looking to unify your infrastructure? OpenShift Virtualization allows you to manage containers and virtual machines on one platform, reducing complexity and improving efficiency. In this blog, we'll explore its architecture and deployment, helping you integrate it into your environment seamlessly.
The introduction of virtualization in computing revolutionized how systems were designed, managed, and utilized. Virtualization allowed multiple operating systems to run concurrently on a single physical machine, creating an abstraction layer between the hardware and software. This innovation dramatically improved data center efficiency, resource utilization, and flexibility. Despite what some may think, virtualization is not a new concept; it dates back to the 1960s, when IBM introduced virtual machines on mainframe computers to improve hardware efficiency and resource sharing.
Fast forward to the late 1990s, when VMware (founded in 1998) changed the game by introducing the first x86 virtualization platform in 1999, making it possible to virtualize widely used Intel-based hardware. Later, in 2001, VMware made another giant leap by introducing the first x86 server virtualization product. From the start, VMware’s platform allowed businesses to consolidate their physical servers, reducing hardware costs, energy consumption, and administrative overhead while enhancing scalability and disaster recovery. This innovation laid the foundation for the modern cloud computing era, transforming how IT infrastructure is deployed and managed across industries, and it established VMware as the leader in the x86 virtualization space.
While powerful, traditional virtualization became limited in handling containerized applications and microservices architectures. Kubernetes emerged as the right solution for this evolution by providing a robust platform for automating containerized applications' deployment, scaling, and management. It enables dynamic scaling, self-healing, and better resource utilization, making it ideal for cloud-native environments and the next generation of application development and infrastructure management.
One thing is sure: times evolve, and with them, there is a need for traditional virtualization to evolve to keep up with current demands. In my opinion, Kubernetes with KubeVirt is the answer. KubeVirt (created by Red Hat in 2016) is an innovative open-source project that extends Kubernetes' capabilities by enabling it to manage and run traditional virtual machines (VMs) alongside containerized workloads. KubeVirt allows organizations to leverage Kubernetes' powerful orchestration, scaling, and automation features for containers and legacy applications that still rely on VMs. KubeVirt bridges the gap between virtualization and containerization, providing a unified platform for managing both types of workloads, making it easier to transition to cloud-native architectures without abandoning existing virtualized environments.
After this introduction, you may wonder: Where does Red Hat OpenShift fit in this idea of evolving virtualization platforms? What is Red Hat OpenShift Virtualization?
Red Hat OpenShift is an enterprise-grade Kubernetes platform that provides a comprehensive solution for automating containerized applications' deployment, scaling, and management. It enhances Kubernetes with developer-friendly tools, security features, and integrated DevOps capabilities, making it easier for organizations to build, deploy, and manage applications across hybrid and multi-cloud environments. OpenShift also includes a wide range of enterprise features, such as built-in CI/CD pipelines, monitoring, and enhanced security.
Red Hat OpenShift is a proven choice for enterprises looking for reliable, scalable, and secure Kubernetes platforms to accelerate their adoption of cloud-native development. However, OpenShift goes beyond containers and cloud-native development; it provides serverless and virtualization capabilities.
Why should you use Red Hat OpenShift Virtualization to replace a traditional virtualization platform? Here are a few reasons to do so:
- Provides one unified platform for virtual machines, containers, and serverless workloads.
- Takes advantage of the performance and stability of Linux and the KVM (Kernel-based Virtual Machine) hypervisor.
- Leverages KubeVirt, a top-10 CNCF project with over two hundred contributing companies.
- Benefits from the large and diverse Red Hat and partner ecosystem.
- Preserves traditional VM behavior and administrative actions, such as live migration, and can support your business-critical applications.
One important thing to note is that OpenShift Virtualization is not Red Hat's first venture into the virtualization space. Red Hat has been deeply involved in virtualization technologies since the early 2000s, consistently driving innovation and leading the field through projects and strategic acquisitions.
Considerations for Architecture and Deployment
The first thing to understand when planning a Red Hat OpenShift Virtualization deployment is that it is simply a feature of Red Hat OpenShift, deployed by an operator. The only added complexity compared with more traditional OpenShift implementations is that OpenShift Virtualization must be installed on bare-metal servers.
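To make the operator-based deployment concrete, here is a minimal sketch of the two objects commonly used to enable the feature: a Subscription to the OpenShift Virtualization operator and a HyperConverged resource that switches it on. The channel and namespace values below follow the commonly documented defaults; verify them against your OpenShift release. In practice, many teams do this through the OperatorHub console, and the YAML form is mainly useful for GitOps.

```yaml
# Sketch: subscribe to the OpenShift Virtualization operator in the
# openshift-cnv namespace (the namespace and OperatorGroup are assumed
# to already exist), then enable it with a HyperConverged resource.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  channel: stable              # verify the channel for your OpenShift release
  name: kubevirt-hyperconverged
  source: redhat-operators
  sourceNamespace: openshift-marketplace
---
# Once the operator is running, the HyperConverged CR deploys the
# virtualization components with default settings.
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec: {}
```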
Here are a few essential considerations when planning the architecture and deployment:
Installation Method
In my opinion, the best deployment method for bare-metal nodes is the Agent-based Installer.
The agent-based installation method offers flexibility by allowing you to boot your on-premises servers in any way you prefer. It combines the user-friendly experience of the Assisted Installer with the ability to operate offline, even in air-gapped environments.
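To give a flavor of what this looks like, here is a minimal, hypothetical agent-config.yaml sketch (the hostname, MAC address, IP address, and disk are placeholders). Together with a standard install-config.yaml, it is consumed by `openshift-install agent create image` to produce a bootable ISO for the bare-metal nodes.

```yaml
# Hypothetical agent-config.yaml, used together with install-config.yaml by:
#   openshift-install agent create image --dir ./cluster
# All names and addresses below are placeholders for your environment.
apiVersion: v1beta1
kind: AgentConfig
metadata:
  name: ocp-virt-cluster
rendezvousIP: 192.168.10.11       # node that coordinates the installation
hosts:
  - hostname: control-plane-0
    role: master
    interfaces:
      - name: eno1
        macAddress: "52:54:00:aa:bb:01"
    rootDeviceHints:
      deviceName: /dev/sda        # disk where RHCOS will be installed
```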
Networking
Kubernetes uses a Container Network Interface (CNI) framework to configure network resources dynamically. Choosing the right CNI is an important architecture decision since Kubernetes will connect all the workloads (containers or virtual machines) to the cluster software-defined network (SDN) configured by the CNI.
Here are a few choices for a CNI to evaluate for deployment with OpenShift Virtualization:
- OVN-Kubernetes: The default CNI network provider in OpenShift; provides an overlay-based networking implementation.
- Tigera Calico: A more advanced CNI network provider with multi-cluster, security, and observability capabilities.
- Cilium by Isovalent: An eBPF-based CNI provider with advanced observability, security, and performance capabilities.
Another decision to make when using the default SDN is how the virtual machines will connect to the network fabric. In OpenShift Virtualization, there are a few options:
- Pod Network: Virtual machines connect to the pod network by default. A service object must be configured to expose the machines outside the cluster.
- Linux bridge on a Layer 2 network: Using the NMState Operator, VLANs and bonding can be configured (a sketch follows this list).
- SR-IOV network: You can attach virtual machines to Single Root I/O Virtualization (SR-IOV) network devices.
- Service Mesh: You can attach a virtual machine to OpenShift Service Mesh.
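As an example of the Linux bridge option above, here is a sketch of an NMState NodeNetworkConfigurationPolicy that creates a bridge on the worker nodes, plus a NetworkAttachmentDefinition that virtual machines can reference to attach to that bridge with a VLAN tag. The interface name, bridge name, and VLAN ID are assumptions for illustration only.

```yaml
# Sketch: create a Linux bridge "br1" on top of a physical NIC ("eno2" is
# a placeholder) on all worker nodes via the NMState Operator.
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br1-worker-policy
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  desiredState:
    interfaces:
      - name: br1
        type: linux-bridge
        state: up
        bridge:
          options:
            stp:
              enabled: false
          port:
            - name: eno2
---
# VMs that reference this NetworkAttachmentDefinition get an interface
# bridged onto br1 with VLAN tag 100 (example value).
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: vlan100-net
  annotations:
    k8s.v1.cni.cncf.io/resourceName: bridge.network.kubevirt.io/br1
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "vlan100-net",
      "type": "cnv-bridge",
      "bridge": "br1",
      "vlan": 100
    }
```

A virtual machine then lists this attachment under its networks, alongside or instead of the default pod network.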
Storage
Selecting the proper storage backend for your cluster is another important architecture decision. OpenShift uses the Container Storage Interface (CSI) to consume storage from different backends. In the case of OpenShift Virtualization, there are a few essential requirements to consider:
- The CSI driver and backing device should support ReadWriteMany (RWX) volumes; this is what enables virtual machine live migration in the environment.
- The CSI driver should support snapshots and clones.
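To illustrate the RWX requirement, here is a sketch of a CDI DataVolume that imports a disk image into an RWX block volume; with RWX access, the disk can be attached on the destination node during a live migration. The image URL, namespace, and storage class name are placeholders for your environment.

```yaml
# Hypothetical DataVolume: imports a qcow2 image into an RWX block PVC.
# The URL, namespace, and storageClassName are placeholders.
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: rhel9-root-disk
  namespace: vm-workloads
spec:
  source:
    http:
      url: "https://example.com/images/rhel9.qcow2"
  storage:
    accessModes:
      - ReadWriteMany                               # required for live migration
    volumeMode: Block
    storageClassName: ocs-storagecluster-ceph-rbd   # example ODF/Ceph RBD class
    resources:
      requests:
        storage: 30Gi
```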
Selecting the right storage type is essential, and there are two main choices:
- Software-defined storage:
  - IBM Fusion/OpenShift Data Foundation (Ceph RBD based)
  - Portworx
  - Dell PowerFlex
- Traditional SAN/NAS devices:
  - With a supported CSI driver that meets the virtualization requirements listed above
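Whichever backend you choose, a quick way to verify the snapshot requirement is to check that the CSI driver ships a VolumeSnapshotClass (for example, with `oc get volumesnapshotclass`). Here is a sketch of what such a class looks like; the driver name is a placeholder for the vendor's actual CSI driver.

```yaml
# Sketch: a VolumeSnapshotClass exposed by the storage vendor's CSI driver.
# If no such class exists for your driver, VM snapshots will not work.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: vendor-snapclass
driver: csi.vendor.example.com   # placeholder CSI driver name
deletionPolicy: Delete
```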
Backups
An important consideration when adopting a new virtualization platform is, without a doubt, the ability to back up and restore virtual machines. There are two main ways to achieve this:
- Using the OpenShift APIs for Data Protection (OADP), provided as an operator (a minimal example follows this list)
- Using a third-party data protection vendor.
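If you go the OADP route, backups are expressed as Velero custom resources once the operator and its DataProtectionApplication have been configured. Below is a minimal sketch of a Backup that protects everything, including VirtualMachines, in a single namespace; the namespace names are placeholders. A Velero Schedule resource can be used for recurring backups.

```yaml
# Hypothetical Velero Backup (managed through the OADP operator) that
# protects all objects, including VirtualMachines, in one namespace.
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: vm-workloads-daily
  namespace: openshift-adp       # namespace where OADP is installed
spec:
  includedNamespaces:
    - vm-workloads               # placeholder VM namespace
  ttl: 720h0m0s                  # retain for 30 days
```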
When making your choice, there are multiple points that you need to evaluate, including, but not limited to:
- Does the vendor support Red Hat OpenShift Virtualization?
- Ensure the vendor offers robust encryption methods for data at rest and in transit, multi-factor authentication, and compliance with relevant security standards such as GDPR or HIPAA.
- Choose a solution that can scale as your data grows and adapt to various environments (on-premises, cloud, hybrid). The software should support different operating systems, databases, and applications.
- Look for comprehensive recovery capabilities, including point-in-time recovery, fast restores, and disaster recovery options. Consider how easily the solution can restore data in case of a system failure or cyberattack.
- A user-friendly interface, rich API, and straightforward deployment are crucial to success. The software should also provide automated backups, easy monitoring, and transparent reporting tools to simplify management tasks.
- Evaluate the software’s impact on system performance, backup speed, and storage efficiency. Features like deduplication, compression, and incremental backups can help reduce storage requirements and bandwidth usage.
- Check the quality of customer support, response times, and availability. Some vendors offer 24/7 support and personalized assistance, which is critical during emergencies.
- Research the vendor’s reputation in the market, customer reviews, and how long they have been providing backup solutions. A reliable vendor with a proven track record is essential for long-term security.
Physical Design
Now that the installation method, networking, and storage decisions have been made, let’s discuss some physical design considerations when implementing OpenShift Virtualization.
- Pick the proper server hardware:
  - Consider the impact on your failure domain when sizing your servers. For example, a server with two 64-core sockets and 2 TB of RAM will host many virtual machines and will take longer to evacuate or recover during a failure or a day-2 operation.
  - Choose a suitable network interface controller (NIC), considering throughput and storage traffic requirements (e.g., if software-defined storage is in scope, traffic for access and replication must be considered in the NIC selection and network topology design).
- Choose suitable drives for software-defined storage solutions:
  - Ensure you have the proper drives based on your I/O pattern requirements. For example, do I need all-SSD or NVMe drives for my workload?
  - Do I have the correct network topology, e.g., a dedicated storage network?
- Are you re-using existing hardware? Do you have a procedure in place for re-use? Have you evaluated whether your hardware is compatible with OpenShift Virtualization?
Here is a logical representation of an OpenShift Virtualization node with proposed networks and specifications:
Here is a high-level design of an OpenShift Virtualization cluster:
Day-2 Operations
Here are key considerations for Day-2 operations in Red Hat OpenShift environments:
- Monitoring & Alerting: Continuously monitor the performance of OpenShift clusters, nodes, and pods. Implement comprehensive health monitoring using OpenShift’s built-in observability tools or integrate with a third-party solution to track node health, pod performance, and overall system stability. Set up proactive alerts to identify pod failures, node problems, or network disruptions before they affect your applications.
- Security and Compliance: Ensure strict security measures such as Role-Based Access Control (RBAC), Security Context Constraints (SCC), and network policies are in place. Regularly update OpenShift components and container images, and use OpenShift’s compliance operator to enforce security standards and industry regulations like GDPR, HIPAA, or PCI.
- Backup and Disaster Recovery: Create backup and disaster recovery procedures that can be executed in the event of a failure.
- Automation and Orchestration: Leverage OpenShift’s built-in automation tools, such as the OpenShift API and OpenShift Pipelines (Tekton) for continuous integration and deployment (CI/CD), and GitOps workflows through OpenShift GitOps (Argo CD), to automate virtual machine deployments and infrastructure management, improving consistency and reducing manual intervention.
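To illustrate the GitOps point, a virtual machine is just another Kubernetes object that can live in a Git repository and be reconciled by Argo CD. Here is a minimal, hypothetical VirtualMachine manifest; the names, sizing, and the referenced DataVolume are placeholders that would need to match your environment.

```yaml
# Hypothetical VirtualMachine definition suitable for a GitOps repository.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: rhel9-app-vm
  namespace: vm-workloads
spec:
  running: true
  template:
    metadata:
      labels:
        kubevirt.io/domain: rhel9-app-vm
    spec:
      domain:
        cpu:
          cores: 2
        memory:
          guest: 4Gi
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
          interfaces:
            - name: default
              masquerade: {}        # attach to the default pod network
      networks:
        - name: default
          pod: {}
      volumes:
        - name: rootdisk
          dataVolume:
            name: rhel9-root-disk   # placeholder DataVolume with the VM disk
```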
Culture Transformation
Culture is crucial to technology adoption because it shapes how teams embrace change, collaborate, and adapt to new tools and processes. A supportive, open-minded culture fosters innovation, encourages learning, and reduces resistance to adopting new technologies. When employees feel empowered, trust the vision behind the technology, and are encouraged to experiment without fear of failure, they are more likely to embrace change and drive successful implementation. Conversely, a rigid or resistant culture can stall progress, create silos, and hinder the full potential of new technology investments. A culture promoting adaptability and continuous learning is essential for successfully adopting technology.
For VMware administrators, changing to a new platform is more than a technical challenge; it is a process that touches on their day-to-day lives and what is familiar to them. Executive sponsorship will be crucial to driving change and shaping the cultural impact during the adoption process of OpenShift Virtualization. You will need leaders who actively champion the initiative, set a clear vision, and allocate necessary resources. Their visible support will help with team alignment, reduce resistance to change, and ensure that transformation efforts are prioritized across the organization.
Another thing to consider is enablement. VMware administrators transitioning to the new virtualization platform will need to upskill by first understanding the architecture, tools, and best practices of Red Hat OpenShift Virtualization. This will involve a learning curve on the new hypervisors, management tools, and automation frameworks. Leadership support during this re-training motion is, without a doubt, critical for mission success.
A few resources for adopting Red Hat OpenShift Virtualization
Training
Red Hat has excellent training courses on OpenShift and OpenShift Virtualization; here are a few that I recommend for learning the technology:
- DO316 - Managing Virtual Machines with Red Hat OpenShift Virtualization
- DO180 - Red Hat OpenShift Administration I: Operating a Production Cluster
- DO080 - Containers, Kubernetes and Red Hat OpenShift Technical Overview
Red Hat Consulting
Red Hat Consulting can help your organization adopt Red Hat OpenShift Virtualization using a hands-on, mentor-based approach. Their Virtualization Migration Assessment offering is an excellent tool for starting the journey.
Now is the time to explore OpenShift Virtualization and transform your hybrid cloud strategy. Please comment to let us know how OpenShift Virtualization is powering your transformation process.