Thursday, September 12, 2013

live linux p2v

so. vmware sometimes hates me. sometimes i have to do this to p2v a live system. boot from knoppix and off you go. from: http://pleasedonttouchthescreen.blogspot.com/2011/12/live-linux-p2v.html
Live Linux P2V
Here's how to clone a live, powered-on machine to a VMware VM.

The test scenario is a conversion from an HP BL460 to a hardware version 4 ESX VM, with an LSI Logic parallel virtual SCSI controller and one e1000 vNIC.
The Linux version is Red Hat 5.5 x64.
UPDATE: I've also had some encouraging results with 6.2.

Important: this method does not guarantee data consistency, since the disk is copied while active, but it's usually good enough to bring up a bootable copy of the source machine, onto which you can later restore a consistent backup.
On the other hand, this procedure causes far less downtime than a cold clone, and you can test the cloning process without halting the source machine.

From here on I'll call the source machine "P" and the destination machine "V".

Take note of the P disk configuration

# fdisk -l /dev/cciss/c0d0

Disk /dev/cciss/c0d0: 146.7 GB, 146778685440 bytes
255 heads, 32 sectors/track, 35132 cylinders
Units = cylinders of 8160 * 512 = 4177920 bytes

           Device Boot      Start         End      Blocks   Id  System
/dev/cciss/c0d0p1               1          25      101984   83  Linux
/dev/cciss/c0d0p2              26       23961    97658880   8e  Linux LVM

This is an HP machine with a SmartArray controller.

Create a new VM, called V, with a disk at least as big as the P one: in this case I've created a 147GB vmdk.
Boot the V VM into rescue mode.

boot: linux rescue

Bring up the network interface and assign an IP address that can reach the P machine over SSH.
Skip searching for existing installations.
All the following commands are to be executed on the V machine.
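If the rescue environment didn't configure the network for you, a minimal manual sketch (addresses are hypothetical):

# ifconfig eth0 192.168.0.50 netmask 255.255.255.0 up
# route add default gw 192.168.0.1
# ping -c 1 P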
Now dump the P hard drive to the V vmdk:

# ssh root@P "gzip -1 -c /dev/cciss/c0d0" | gzip -d >/dev/sda

Note that the data is compressed over the wire, which usually speeds up the transfer, even more so if the disk has many empty sectors.
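On a fast LAN where the gzip processes would just burn CPU, an uncompressed variant (my own variation, not from the original) is simply:

# ssh root@P "cat /dev/cciss/c0d0" >/dev/sda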

After completion, fdisk on the V machine should display the same partition table as the P machine:

# fdisk -l /dev/sda

Disk /dev/sda: 157.8 GB, 157840048128 bytes
255 heads, 32 sectors/track, 37779 cylinders
Units = cylinders of 8160 * 512 = 4177920 bytes

           Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1          25      101984   83  Linux
/dev/sda2              26       23961    97658880   8e  Linux LVM


Reboot V into rescue mode again

boot: linux rescue

This time choose to search for existing installations, as it should find the new disk content.

# chroot /mnt/sysimage

Make sure LVM has seen the new disk:

# pvscan

PV /dev/sda2       VG VolGroup00   lvm2 [93.13 GB / 59.13 GB free]

If you don't see the PV, check /etc/lvm/lvm.conf for any filter other than the default one.
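For reference, the stock filter accepts every block device; a stricter filter copied over from P could hide /dev/sda:

# grep '^ *filter' /etc/lvm/lvm.conf
    filter = [ "a/.*/" ]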

Modify the boot module list, using an identically configured VM as a reference.
This is the modprobe.conf of a typical ESX VM:

# vi /etc/modprobe.conf

alias eth0 e1000
alias scsi_hostadapter mptbase
alias scsi_hostadapter1 mptspi
alias scsi_hostadapter2 ata_piix

Rebuild the initial ramdisk referenced in grub.conf:

# mkinitrd -v -f /boot/initrd-2.6.18-194.el5.img 2.6.18-194.el5
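The kernel version string must match what's installed on the cloned disk, not the rescue kernel that uname -r would report; if in doubt, derive it inside the chroot (this sketch assumes a single installed kernel):

# KVER=$(ls /lib/modules)
# mkinitrd -v -f /boot/initrd-$KVER.img $KVER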

Reinstall GRUB, just to be safe (it should already be there from the disk cloning):

# grub-install /dev/sda
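If grub-install complains that it cannot find the device, the cloned /boot/grub/device.map probably still points at the cciss device; rewriting it for the new disk should help:

# echo "(hd0) /dev/sda" >/boot/grub/device.map
# grub-install /dev/sda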

Note that, after the reboot, the V machine will come up with the same IP address as the original P machine.
If that doesn't suit your needs, change the IP address in /etc/sysconfig/network-scripts/ifcfg-eth0.
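The cloned config may also carry the physical NIC's MAC address in a HWADDR line, which won't match the virtual NIC and can keep eth0 down; a cleaned-up example with no HWADDR line (all values hypothetical):

# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.0.51
NETMASK=255.255.255.0
ONBOOT=yes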
Now you can reboot the V machine. 

If the P machine was at runlevel 5, the V machine may start with corrupted graphics because the VM uses a different video driver.
Switch to runlevel 3 and reconfigure X with:

# system-config-display --reconfig
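To make runlevel 3 the default until X is sorted out, the classic inittab edit applies (RHEL 5 syntax):

# vi /etc/inittab
id:3:initdefault: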

Now you can proceed with the usual VMware Tools installation.

is it or is it not x64?

yes.
root@here:/root# getconf LONG_BIT
64
root@here:/root# uname -m
x86_64
no.
root@here:/root# getconf LONG_BIT
32
root@here:/root# uname -m
i686
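A trivial wrapper for both checks, if you need to script this across many hosts (my own sketch, not from the original):

#!/bin/sh
# LONG_BIT reflects the userland word size, uname -m the kernel architecture
echo "userland: $(getconf LONG_BIT)-bit"
echo "kernel: $(uname -m)"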

Wednesday, September 4, 2013

openstack cloud in the cloud

from: http://devstack.org/guides/single-vm.html. works fine with ubuntu 12.04 lts on vmware esx 5.1
Running a Cloud in a VM
Use the cloud to build the cloud! Use your cloud to launch new versions of OpenStack in about 5 minutes. When you break it, start over! The VMs launched in the cloud will be slow as they are running in QEMU (emulation), but their primary use is testing OpenStack development and operation. Speed not required.

Prerequisites Cloud & Image
Virtual Machine
DevStack should run in any virtual machine running a supported Linux release. It will perform best with 2 GB or more of RAM.

OpenStack Deployment & cloud-init
If the cloud service has an image with cloud-init pre-installed, use it. You can get one from Ubuntu's Daily Build site if necessary. This will enable you to launch VMs with userdata that installs everything at boot time.

If you are using a hypervisor directly, such as Xen, KVM, or VirtualBox, you can manually kick off the script below in a bare-bones server installation.

Installation shake and bake
Launching with Userdata
The userdata script grabs the latest version of DevStack via git, creates a minimal localrc file and kicks off stack.sh.

#!/bin/sh
# works on both Debian- and RPM-based images
apt-get update || yum update -y
apt-get install -qqy git || yum install -y git
# fetch DevStack, seed a minimal localrc, then kick off the install
git clone https://github.com/openstack-dev/devstack.git
cd devstack
echo ADMIN_PASSWORD=password > localrc
echo MYSQL_PASSWORD=password >> localrc
echo RABBIT_PASSWORD=password >> localrc
echo SERVICE_PASSWORD=password >> localrc
echo SERVICE_TOKEN=tokentoken >> localrc
./stack.sh
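One plausible way to feed this script to a new instance on an existing OpenStack cloud (flavor and image names are assumptions):

nova boot --flavor m1.medium --image ubuntu-12.04-cloudimg --user-data userdata.sh devstack-vm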
Launching By Hand
Using a hypervisor directly, launch the VM, then either perform the steps in the script above by hand or copy the script into the VM and run it.

Using OpenStack
At this point you should be able to access the dashboard. Launch VMs, and if you give them floating IPs, you can access those VMs from other machines on your network.
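A sketch of that floating-IP workflow with the nova client of the era (instance name and address are placeholders):

nova floating-ip-create
nova add-floating-ip my-instance 172.24.4.1
ssh cirros@172.24.4.1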

One interesting use case is developers working on a VM on their laptop. After stack.sh has completed once, all of the prerequisite packages are installed in the VM and the source trees are checked out. Setting OFFLINE=True in localrc lets stack.sh run again without an Internet connection. DevStack, making hacking at the lake possible since 2012!
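The offline switch is just one more localrc line before re-running the script:

echo OFFLINE=True >> localrc
./stack.sh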