Monday, March 25, 2013

netsh & editing dhcp scopes for subnet mask changes

for fsck's sake.  the subnet mask i'm handing out from my ms server 2008 dhcp server does not match reality.
not at all.

well.  ms says delete and re-create.  please no.

my server has name:  invernonuclear (192.168.1.2)
my old scope is:  192.168.1.0/255.255.255.0 "poi"
my new scope is: 192.168.1.0/255.255.240.0 "poi"

run an administrative cmd shell.  then...


c:\>netsh dhcp server \\invernonuclear scope 192.168.1.0 dump>c:\dhcp.txt

then, open the saved file and edit it to suit.  in my case:

Dhcp Server 192.168.1.2 add scope 192.168.1.0 255.255.240.0 "poi" "poi"

save. delete old scope and then import...

C:\>netsh exec c:\dhcp.txt
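
if you'd rather nuke the old scope from the same shell instead of the dhcp console, netsh can do that too.  syntax from memory, so sanity-check it with netsh dhcp server delete scope /? first:

c:\>netsh dhcp server \\invernonuclear delete scope 192.168.1.0 dhcpfullforce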

now.  if you are silly, you might have dhcp scopes stacked really really close together.  you change your mask and try to run the above.
whoops!  error.  the widened scope now overlaps a neighbor, so stay in your netmask bounds.

now if you're being fancy dancy, you'll need to create superscopes.

Wednesday, March 20, 2013

forcibly remove McAfee

Sometimes you need to forcibly uninstall McAfee.  Go to the program directory, Common Files and issue:

Frminst.exe /forceuninstall

Sometimes the forceuninstall fails, and you see an error having to do with MFEagent.msi. This may be due to a corrupt MSI installation.  You'll need to remove references to the MSI like so:
Open regedit, navigate to HKLM > SOFTWARE > Classes > Installer > Products, find the key that references the "McAfee Agent", and delete that whole key.
Then re-install the agent as normal. In effect, what we're doing is telling Windows Installer that the McAfee Agent is no longer installed, and then we're overwriting all the McAfee Agent files.
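
If you want to spot the offending key from the command line before firing up regedit, a plain reg query search should turn it up (the display name to search for may vary slightly by agent version):

reg query "HKLM\SOFTWARE\Classes\Installer\Products" /s /f "McAfee Agent"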

Tuesday, March 19, 2013

dhcp stanzas

um yeah.

so.  dhcpd3 sub-stanzas do not inherit settings from a parent stanza once that stanza's braces are closed.

my man from botswana is correct...
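
the shape of the fix is to stop trusting inheritance and declare the options you care about explicitly inside each subnet stanza.  a minimal dhcpd.conf sketch, addresses made up:

option domain-name-servers 192.168.5.1;

subnet 192.168.5.0 netmask 255.255.255.0 {
  range 192.168.5.100 192.168.5.200;
  option routers 192.168.5.1;
  option domain-name-servers 192.168.5.1;
}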

sed -i licious

i decided to mirror a site.  but it has all these gross links in it.  
i needed to delete them to match reality.

find ./ -type f -exec sed -i 's/ftp:\/\/barf.site\/pub\/barf//' {} \;

and that was that.
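
side note: if escaping all those slashes bugs you, sed will take just about any character as the delimiter, so this is equivalent:

find ./ -type f -exec sed -i 's|ftp://barf.site/pub/barf||' {} \;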

large data transfers on a gig switch with jumbo frames

let's transfer lots of data from ubuntubox1 to ubuntubox2.  this is a lot of data.  
like gigs and gigs of massive vm images.
10/100/1000 auto-neg is fine, but i really want to burst it.

i'm using an hp switch.  i go to the config:

config
vlan 1 jumbo

my mtu is now 9220 yay.
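
a quick show vlans on the switch should confirm it took (there's a jumbo column in the listing):

show vlans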

let's go to one of my ubuntu systems and see what its ethernet card is set to.

ubuntubox1:

ifconfig -a

eth0      Link encap:Ethernet  HWaddr 00:a0:d1:e2:fc:67    
          inet addr:192.168.5.32  Bcast:192.168.5.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          &c.

let's change that.

nano -w /etc/network/interfaces

in the eth0 stanza i fix it:
mtu 9000
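
for reference, the whole eth0 stanza ends up looking roughly like this (static addressing assumed, to match the ifconfig output above):

auto eth0
iface eth0 inet static
    address 192.168.5.32
    netmask 255.255.255.0
    mtu 9000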

restart networking.  nice.

eth0      Link encap:Ethernet  HWaddr 00:a0:d1:e2:fc:67  
          inet addr:192.168.5.32  Bcast:192.168.5.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
          &c.


if it were a centos box:

nano -w /etc/sysconfig/network-scripts/ifcfg-eth0
MTU=9000
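
the rest of that file would look roughly like this (again assuming a static address), followed by a service network restart:

DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.5.32
NETMASK=255.255.255.0
ONBOOT=yes
MTU=9000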

Let's do a check:

[ubuntubox1 ~]# tracepath ubuntubox2 (192.168.5.117)
 1:  192.168.5.32 (192.168.5.32)                            0.065ms pmtu 9000
 1:  192.168.5.117 (192.168.5.117)                          4.232ms reached
 1:  192.168.5.117 (192.168.5.117)                          0.451ms reached
     Resume: pmtu 9000 hops 1 back 64 

cool.

now, time for nfs fun.

on ubuntubox2, the one who has an nfs export i need to mount, i need to do some nfs tuning.
here's /etc/exports:
/opt/repo    *(async,rw,no_root_squash,insecure,no_subtree_check)

async is the bees knees.
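
after editing /etc/exports, re-export so the change actually takes effect:

[ubuntubox2 ~]# exportfs -ra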

on ubuntubox1, i futz around in /etc/fstab
in fstab, i change the read/write window sizes (rsize/wsize).

192.168.5.117:/opt/repo      /opt/repo         nfs     rsize=65535,wsize=65535 0 0
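
nfs won't pick up new rsize/wsize values on a live mount, so unmount and remount the share:

[ubuntubox1 ~]# umount /opt/repo
[ubuntubox1 ~]# mount /opt/repo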

let's do a nice test:

[ubuntubox1 /opt/test]# ls -lah
total 4.5G
drwxr-xr-x  2 root root 4.0K 2013-03-14 13:55 .
drwxr-xr-x 10 root root 4.0K 2013-03-14 14:48 ..
-rw-r--r--  1 root root 4.5G 2013-03-14 14:18 bigvm.gz

[ubuntubox1 /opt/test]# time cp bigvm.gz /opt/repo/

la la la

real    1m36.646s
user    0m0.050s
sys     0m9.140s

nice.  4.5G in about 97 seconds works out to roughly 47MB/s sustained.

you could do this and get a pointless dummy bar:

[ubuntubox1 /opt/test]# cat /opt/test/bigvm.gz |pv -p -e -r > /opt/repo/bigvm.gz
[ 105MB/s] [==========> ] 13% ETA 0:54:35


nota bene:

so.  what about those disks? well.  i tuned the raid5 write caching for speed, not safety.
this improves performance dramatically, 
but if there's a crash, we could end up with scrambled data.  do we care?  today, no.  
we just need to move stuff around quick fast.

so.  raid5 on a 3ware 9550 sata raid card:

[ubuntubox1 ~]# hdparm -tT /dev/sda

/dev/sda:
 Timing cached reads:   5900 MB in  2.00 seconds = 2952.30 MB/sec
 Timing buffered disk reads:  880 MB in  3.00 seconds = 292.87 MB/sec

and plain ol' sata connected to the board:

[ubuntubox2 ~]# hdparm -tT /dev/sdb

/dev/sdb:
 Timing cached reads:   1848 MB in  2.00 seconds = 923.53 MB/sec
 Timing buffered disk reads:  358 MB in  3.01 seconds = 119.06 MB/sec

remember your data transfer rate will be affected by caching mechanisms and 
network throughput.  
tune them both correctly and you'll have a lovely day.  really.

an esxi 3.1 interlude

# esxcfg-vswitch -m 9000 vSwitch[0-20]

Tuesday, March 12, 2013

high performance data transfers

so, if it isn't the encryption algos, it has got to be the window size, right?
right.

this made my day. hopefully it'll make yours, too.

go here:
http://psc.edu/index.php/hpn-ssh

get this:
http://psc.edu/index.php/component/remository/HPN-SSH/OpenSSH-6.1-Patches/HPN-SSH-Kitchen-Sink-Patch-for-OpenSSH-6.1/

get this:
http://mirror.jmu.edu/pub/OpenBSD/OpenSSH/portable/openssh-6.1p1.tar.gz

do this:
#  cd /usr/local/src/
#  gunzip openssh-6.1p1-hpn13v14.diff.gz 
#  tar xvfz openssh-6.1p1.tar.gz 
#  cd openssh-6.1p1
#  patch < ../openssh-6.1p1-hpn13v14.diff 
#  ./configure && make
#  make install
your magic is in /usr/local/bin.  this may not get you love or riches, but it will speed up data transfers once you've realized jumbo frames aren't the answer to all things.
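
a quick sanity check and a sample transfer, assuming the remote end is also running the hpn build (the none cipher only works if the far side's sshd_config has NoneEnabled yes):

# /usr/local/bin/ssh -V
# /usr/local/bin/scp -oNoneEnabled=yes -oNoneSwitch=yes bigvm.gz user@ubuntubox2:/opt/repo/

ssh -V should show the hpn tag in the version string.  the none switch skips payload encryption entirely, so only use it on a network you trust.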

Monday, March 11, 2013

coldclone & sol

i stole this from here:
http://www.horizonsystems.com/forum/13-virtualization-central/54-solaris-10-x8664-conversion-to-vmware-howto

and the iso is here:
https://www.dropbox.com/sh/gcidv950p2ekkgg/CGI67G9Lcx/coldclone.3.03.iso
yeah...
Solaris 10 X86/64 conversion to VMware - Howto
I recently had to migrate several Solaris 10 X86/64 systems from Xen Server to VMware. 
This same procedure should work for doing either a P2V or V2V. 

Unlike with Windows or Linux, which enjoy a greater level of migration support, I struggled to find
a procedure that worked consistently. 

After much research I settled on the following method, and have found it to be very reliable
and successful. 

Obtain a copy of "Coldclone" from VMware; it is a bootable ISO image that will do the heavy lifting
when it comes to migrating the source system. 

After Coldclone finished converting, I found I had to do a LOT of manual steps to get the Solaris 10 X86/64 environment 
to successfully recognize its new surroundings and run cleanly. The majority of this document 
details the steps I found necessary in order to properly get the Solaris 10 systems to run correctly under VMware. 

I did search the web, and found several entries that were Extremely helpful, but incomplete. 


Get the "coldclone.iso" which is aptly named: VMware vCenter Converter BootCD. I found it in 
a zip file named "VMware-convertercd-4.1.1-206170.zip" which is a individual item/part of 
"VMware vCenter Server 4 Update 2" 

At the time of this writing, it was located in my.vmware.com/web/vmware/details/vc40u2/ZHcqYmRoZXRiZHR3ZA==

This was a chore all its own, but well worth the quest to find. Now that you hopefully have 
the image, you need to burn a copy, or use the iso as a file in the hypervisor to be able to boot 
the Solaris system with it. 

Get started: 

1. Gracefully shut down Solaris 10 on the source system you are converting to VMware. 
Plan on it being down for a while. Size of the original system will determine how long it will take. 

2. Boot the system from the ISO/CD/DVD that has the "coldclone" package on it. 

3. Start the networking and make sure you have a network path to the VMware 
server that will host the new guest Solaris image. 

4. Start a new import task and select the drives you want copied over
(most likely all of them). There may be an error message that pops up
about not finding the source system; that is okay. Make sure it finds the drive(s)
after clicking OK. 

5. Answer the rest of the task's questions; remember you'll need the credentials 
of the VMware host server to be able to add the new machine to the guests on that system.
I chose not to have the machine boot automatically, since there are some pre-start steps to be taken. 

6. Start the task and wait for the conversion to take place. 

7. Once the conversion is 100% complete, log on to the VMware server with the VI Client and find 
the new machine you just created from the cold clone process.

8. Edit the virtual machine properties, and go to the options tab on the top of the panel. 
Choose "other" and then sub-select Oracle Solaris 10 - 64 or 32 bit depending on source system's
installation type. The Solaris boot up banner will tell you if it's 64 or 32 bit. This is critical 
to get right. 

9. Under Hardware properties: If you selected 64 bit Solaris, change the Bus Logic SCSI controller to LSI Parallel; 
Bus Logic is not supported in 64 bit mode. 

10. Adjust your network adapters as appropriate for the networks the system is 
attached to. By default, ours came up inside Solaris as a type of "pcn". If we deleted them, 
we had the option to add them as "e1000". Depends on what hardware you actually have 
and what VMware offers. You need to adjust accordingly inside Solaris to address 
the right hardware. 

Now the hard work starts and the following steps are crucial to have a properly booted and 
running system. 

11. Start the guest and select to boot to failsafe mode. 

It will start up, and give you a message similar to the following: 

Solaris 10 * X86 was found on /dev/dsk/cXtXdXsX 
Do you wish to have it mounted read-write on /a? 

Choose "y" for yes. 

Take note of where the system identified "/a" as being mounted on, this is important information for later. 

Once you get a command prompt, I like to do the following to help with getting 
the system usable for editing. 

# stty rows 24 cols 80
# exec /bin/ksh -o emacs ## optional choice - handles backspace well. personal preference
# TERM=ansi # Acceptable emulation for console in vmware. Not perfect. 
# export TERM # So "vi" editor can see the variable. 



# cd /a/etc
# cp path_to_inst path_to_inst.save
# cp /etc/path_to_inst path_to_inst # Roughly an empty file, it gets rebuilt later from scratch. 

# cd /a/boot/solaris 
# cp bootenv.rc bootenv.rc.save

Using the value you noted for where "/a" is mounted, do: 

# ls -l /dev/dsk/cXtXdXsX | cut -c89- 

This should yield a value similar to /pci@0,0/pci15ad,1976@10/sd@0,0:a [ yours most likely will vary]

Next, store the string at the bottom of the bootenv.rc file so we have it handy to use inside the vi editor. 

# ls -l /dev/dsk/cXtXdXsX | cut -c89- >> bootenv.rc # Remember append mode with '>>' 

This will put the text you need at the bottom of the file. Use 'vi' to edit the /a/boot/solaris/bootenv.rc file and 
fix the "setprop bootpath '{your value goes here}'" line. The goal here is to remove the old device path and 
put in the current, correct one. 
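
Using the example value from above (yours will differ), the finished line would read: 

setprop bootpath '/pci@0,0/pci15ad,1976@10/sd@0,0:a'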

Save the file and you may want to do a diff on the two files just to ensure only that one line has changed. 
Getting this right is critical to being able to boot the system with the appropriate kernel. 

Next you'll need to fix the vfstab file to reflect current and correct cXtXdXsX devices. The individual slices 
should be the same as before. Please keep in mind that once Solaris knows of a controller, it doesn't want 
to let go of it. This may result in your controller being labeled as "c1" now instead of "c0". Remember we changed 
it in the properties field of VMware earlier. 

Also, depending on how the system was first built, you may have to introduce a target component to each device name.

Every line in vfstab that references /dev/dsk and /dev/rdsk (usually two places each line) will need to be modified
to reference the correct device name in the new operating environment. 

"c0d0s0" will most likely be "c1t0d0s0" now. Please note the two core changes in the previous names. It most likely 
will be "c1" and need a target component, most likely "t0". 

Solaris doesn't give up known controllers very well; it increments the controller number. This is part of the persistent 
binding built into the system. Be careful here: if it's not right, you'll need to reboot the system to failsafe mode
to fix it. Sorry for the complexity here, there's no easy way to state it otherwise. 

Save off the file, and again do a "diff" command on it to verify the changes you made. 

Next fix up the networking files: 

Rename your /a/etc/hostname.* files to reflect the proper network device. pcn* was the device type on our systems. 
# cd /a/etc/
# mv hostname.rstl0 hostname.pcn0 # Your device names may vary - but do this for all interfaces connected to your system. 

Next, clean up old entries so they can be regenerated properly: 

# rm /a/etc/devices/* 

# rm /a/dev/rdsk/c* 

# rm /a/dev/dsk/c* 

# rm /a/dev/cfg/c* 

The next two lines will regenerate the device entries we just deleted and update the boot archive appropriately. 

# devfsadm -v -r /a 

# reboot -- -arvs # the '--' passes the -arvs boot flags through to the new boot

If everything went well, the system should come up cleanly and you should be back in business. Verify your network assignments 
to make sure everything is connected properly, and you should be in a good workable state. 

Hope this helps! - RSH