How to run RHEL5/CentOS5 in Google Cloud
Google Cloud instances run on top of the KVM hypervisor. In newer KVM versions, such as the one used in Google Cloud, virtio-scsi is used instead of the older virtio-blk storage backend.
The virtio-scsi disk devices look like:
virtio-scsi vendor='Google' product='PersistentDisk' rev='1' type=0 removable=0
In RHEL6, the virtio-scsi module was added in the 2.6.32-244.el6 kernel, as the changelog entry below shows. But virtio-scsi is not shipped in RHEL5 or CentOS5.
* Tue Feb 28 2012 Aristeu Rozanski <arozansk@redhat.com> [2.6.32-244.el6]
- [virt] virtio-scsi: add power management (Paolo Bonzini) [782027]
- [virt] virtio-scsi: add driver (Paolo Bonzini) [782027]
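A quick way to confirm whether a given kernel ships the driver is to look for virtio_scsi in its modules.dep file. The helper below is our own sketch, not part of any standard tool; the modules.dep path is the standard location for the running kernel:

```shell
# has_virtio_scsi: succeed if the given modules.dep lists virtio_scsi.
# The path is passed explicitly so the check can be pointed at any
# kernel's module tree, e.g. /lib/modules/$(uname -r)/modules.dep.
has_virtio_scsi() {
    grep -q 'virtio_scsi\.ko' "$1"
}

# On a stock RHEL5/CentOS5 kernel this prints the "missing" branch.
if has_virtio_scsi "/lib/modules/$(uname -r)/modules.dep"; then
    echo "virtio_scsi present"
else
    echo "virtio_scsi missing - the backported module is needed"
fi
```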
The "Lift and Shift" Challenge for Legacy Systems
Legacy CentOS5/RHEL5 servers moved to Google Cloud with a lift-and-shift migration will fail to boot with an error similar to the following:
Unable to access resume device (LABEL=ROOT)
Creating root device.
Mounting root filesystem.
mount: could not find filesystem '/dev/root'
Setting up other filesystems.
Setting up new root fs
setuproot: moving /dev failed: No such file
Kernel panic - not syncing: Attempted to kill init!
Google has graciously backported the driver code for the 2.6.18* kernels. To fix the issue, we need to build the virtio_scsi driver from the backported code and install it on the system before migration.
Phase 1: Build the Kernel Module
Setting Up the Environment
Clone the backported driver code onto a RHEL5/CentOS5 build system:
git clone https://github.com/GoogleCloudPlatform/compute-centos5-drivers.git
Get the source RPM package for the kernel version (this may require redhat.com access); it is also available from other mirrors.
Compiling the Source
Install the package and build the kernel source code:
rpm -ivh kernel-2.6.18-194.el5.src.rpm
cd /usr/src/redhat/SPECS
rpmbuild -bp --target=x86_64 kernel.spec
cd /usr/src/redhat/BUILD/kernel-2.6.18/linux-2.6.18.x86_64
make oldconfig
make prepare
make scripts
make modules
Finalizing the Driver
Build the virtio_scsi module:
make M=/root/compute-centos5-drivers-master/third_party/virtio_scsi
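Before shipping the module anywhere, it is worth checking that its vermagic string targets the kernel it will be loaded on. The helper below is our own sketch; it takes plain strings as arguments so the comparison is easy to exercise offline, while the real inputs would come from `modinfo -F vermagic` and `uname -r`:

```shell
# check_vermagic: sanity-check that a freshly built module targets the
# intended kernel release.
#   $1: vermagic string, e.g. from `modinfo -F vermagic virtio_scsi.ko`
#   $2: target kernel release, e.g. from `uname -r`
check_vermagic() {
    mod="$1"
    kernel="$2"
    case "$mod" in
        "$kernel"*) echo "vermagic matches $kernel" ;;
        *)          echo "vermagic mismatch: module built for '$mod'" ;;
    esac
}

# Typical use on the build host (paths are illustrative):
# check_vermagic "$(modinfo -F vermagic virtio_scsi.ko)" "$(uname -r)"
```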
Phase 2: Install the Module in the Source System
Deploying the Module via Google Cloud Storage
The easiest way to install the module is to upload it to Google Cloud Storage, then download it on the source system and rebuild the initrd there. Once that is done, the source system can be synced to Google Cloud.
Create a new bucket and upload the module:
gsutil cp 2.6.18-194.el5-virtio_scsi.ko gs://rhel5test/
In the source system, download and copy the module to the relevant kernel module path:
wget https://storage.googleapis.com/rhel5test/`uname -r`-virtio_scsi.ko
cp `uname -r`-virtio_scsi.ko /lib/modules/`uname -r`/extra/virtio_scsi.ko
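Since the module is fetched over HTTP, verifying the copy before installing it is a cheap safeguard. A minimal sketch, assuming a checksum was recorded with `md5sum` on the build host before upload (the helper name is ours):

```shell
# verify_copy: compare a downloaded file against a checksum recorded
# on the build host (e.g. `md5sum virtio_scsi.ko` run before upload).
verify_copy() {
    file="$1"
    expected="$2"
    actual=$(md5sum "$file" | awk '{print $1}')
    if [ "$actual" = "$expected" ]; then
        echo "checksum OK"
    else
        echo "checksum FAILED for $file" >&2
        return 1
    fi
}

# Typical use (checksum value is environment-specific):
# verify_copy "$(uname -r)-virtio_scsi.ko" "<md5 from the build host>"
```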
Updating the Boot Configuration
Run depmod to generate modules.dep and map files:
depmod -a
Create a new initrd file using the virtio modules:
mkinitrd -f --with=virtio_pci --with=virtio_scsi --preload virtio --preload scsi_mod /boot/initrd-gcp-$(uname -r).img $(uname -r)
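Before rebooting, it is prudent to confirm the rebuilt initrd actually contains the driver. A RHEL5 initrd is a gzipped cpio archive, so its file list can be piped into a small check (the helper function below is our own sketch; it just greps a listing on stdin):

```shell
# initrd_lists_module: read a file listing on stdin, as produced by
# `zcat initrd.img | cpio -t`, and succeed if the named module appears.
initrd_lists_module() {
    grep -q "$1\.ko" -
}

# Typical use, matching the mkinitrd step above:
# zcat /boot/initrd-gcp-$(uname -r).img | cpio -t \
#     | initrd_lists_module virtio_scsi && echo "virtio_scsi is in the initrd"
```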
Update grub.conf with the new initrd. For example:
title GCP Red Hat Enterprise Linux Server (2.6.18-194.el5)
        root (hd0,0)
        kernel /vmlinuz-2.6.18-194.el5 ro root=UUID=eff36034-b712-468f-8e6e-0816973a866d rhgb edd=off quiet console=ttyS0
        initrd /initrd-gcp-2.6.18-194.el5.img
Update the default parameter so the new entry boots by default. GRUB numbers title entries from 0, so default=1 selects the second entry:
default=1
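This edit can also be scripted. A sketch that rewrites the default line (the helper name is ours; running it against a copy of grub.conf first makes for a safe dry run):

```shell
# set_grub_default: point GRUB's `default` at the given entry index.
# Operates on a file path so it can be tried on a copy of grub.conf first.
set_grub_default() {
    conf="$1"
    index="$2"
    sed -i "s/^default=.*/default=$index/" "$conf"
}

# Typical use:
# set_grub_default /boot/grub/grub.conf 1
```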
Conclusion
Once the initrd is built and the GRUB configuration is updated, the system can be synced to Google Cloud. Both RHEL5 and CentOS5 have reached end of life, so moving them to a later operating system release is strongly recommended. Until the application can be refactored to run on a modern OS, the workaround above is a practical way to migrate these systems to the cloud.