Tuesday, March 19, 2013

large data transfers on a gig switch with jumbo frames

let's transfer lots of data from ubuntubox1 to ubuntubox2.  and i mean a lot of data:
gigs and gigs of massive vm images.
10/100/1000 auto-neg is fine, but i really want to burst it.

i'm using an hp switch.  i go to the config:

config
vlan 1 jumbo

the switch now accepts frames up to 9220 bytes.  yay.
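
to double-check from the switch side (procurve-flavored cli; other hp models word it slightly differently), the vlan listing should show jumbo enabled for vlan 1:

show vlans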

let's hop onto one of the ubuntu boxes and see what mtu its ethernet card is set to.

ubuntubox1:

ifconfig -a

eth0      Link encap:Ethernet  HWaddr 00:a0:d1:e2:fc:67    
          inet addr:192.168.5.32  Bcast:192.168.5.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          &c.

let's change that.

nano -w /etc/network/interfaces

in the eth0 stanza i fix it:
mtu 9000
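
for reference, the whole stanza ends up looking something like this (assuming a static address like the one above; if the box is on dhcp, the mtu line goes in the same place):

auto eth0
iface eth0 inet static
        address 192.168.5.32
        netmask 255.255.255.0
        # gateway, dns, etc. omitted
        mtu 9000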

restart networking.  nice.
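
the restart can be a bounce of the interface or the whole networking init script, or you can set it on the fly without touching the config at all:

ifdown eth0 && ifup eth0
# or, just until the next reboot:
ip link set dev eth0 mtu 9000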

eth0      Link encap:Ethernet  HWaddr 00:a0:d1:e2:fc:67  
          inet addr:192.168.5.32  Bcast:192.168.5.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
          &c.


if it were a centos box:

nano -w /etc/sysconfig/network-scripts/ifcfg-eth0
MTU=9000
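
the rest of that file would look roughly like this (the address here is made up, not one of these boxes), followed by a restart the red hat way:

DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.5.40
NETMASK=255.255.255.0
ONBOOT=yes
MTU=9000

service network restart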

let's do a check:

[ubuntubox1 ~]# tracepath ubuntubox2 (192.168.5.117)
 1:  192.168.5.32 (192.168.5.32)                            0.065ms pmtu 9000
 1:  192.168.5.117 (192.168.5.117)                          4.232ms reached
 1:  192.168.5.117 (192.168.5.117)                          0.451ms reached
     Resume: pmtu 9000 hops 1 back 64 

cool.
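
another quick sanity check: ping with the don't-fragment flag and a payload sized right at the limit (8972 bytes of data plus 28 bytes of ip/icmp headers = 9000).  if jumbo frames aren't making it end to end, you get errors instead of replies:

ping -M do -s 8972 192.168.5.117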

now, time for nfs fun.

on ubuntubox2, the box with the nfs export i need to mount, i need to do some nfs tuning.
here's /etc/exports:
/opt/repo    *(async,rw,no_root_squash,insecure,no_subtree_check)

async is the bee's knees.
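
after editing /etc/exports, re-export and double-check what's actually in effect:

exportfs -ra    # re-read /etc/exports
exportfs -v     # list exports with their active options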

on ubuntubox1, i futz around in /etc/fstab and bump the nfs read and write sizes:

192.168.5.117:/opt/repo      /opt/repo         nfs     rsize=65535,wsize=65535 0 0
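
mount it and see what the client actually negotiated (the server is free to round rsize/wsize to values it likes better):

mount /opt/repo
nfsstat -m      # shows each nfs mount with the rsize/wsize in use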

let's do a nice test:

[ubuntubox1 /opt/test]# ls -lah
total 4.5G
drwxr-xr-x  2 root root 4.0K 2013-03-14 13:55 .
drwxr-xr-x 10 root root 4.0K 2013-03-14 14:48 ..
-rw-r--r--  1 root root 4.5G 2013-03-14 14:18 bigvm.gz

[ubuntubox1 /opt/test]# time cp bigvm.gz /opt/repo/

la la la

real    1m36.646s
user    0m0.050s
sys     0m9.140s

nice.
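
back of the envelope: 4.5G in roughly 97 seconds works out to about 48 MB/s.  that's well under gigabit wire speed, so the disks on the receiving end are probably the bottleneck here, not the network.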

or you could do this and get a progress bar, mostly for show:

[ubuntubox1 /opt/test]# cat /opt/test/bigvm.gz |pv -p -e -r > /opt/repo/bigvm.gz
[ 105MB/s] [==========> ] 13% ETA 0:54:35
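
(pv can also read the file directly, which skips the extra cat and lets it figure the percentage out from the file size on its own: pv -p -e -r /opt/test/bigvm.gz > /opt/repo/bigvm.gz)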


nota bene:

so.  what about those disks? well.  i tuned the raid5 array's write cache for speed rather than safety.
this improves performance dramatically,
but if there's a crash or power loss, we could end up with scrambled data.  do we care?  today, no.
we just need to move stuff around fast.
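
for the record, on a 3ware card the unit write cache is toggled through tw_cli.  the controller and unit numbers below are guesses, so run a bare tw_cli show first to find yours:

tw_cli /c0/u0 show           # current unit status, including the cache setting
tw_cli /c0/u0 set cache=on   # write-back for speed; cache=off for safety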

so.  raid5 on a 3ware 9550 sata raid card:

[ubuntubox1 ~]# hdparm -tT /dev/sda

/dev/sda:
 Timing cached reads:   5900 MB in  2.00 seconds = 2952.30 MB/sec
 Timing buffered disk reads:  880 MB in  3.00 seconds = 292.87 MB/sec

and plain ol' sata connected straight to the motherboard:

[ubuntubox2 ~]# hdparm -tT /dev/sdb

/dev/sdb:
 Timing cached reads:   1848 MB in  2.00 seconds = 923.53 MB/sec
 Timing buffered disk reads:  358 MB in  3.01 seconds = 119.06 MB/sec

remember your data transfer rate will be affected by caching mechanisms and 
network throughput.  
tune them both correctly and you'll have a lovely day.  really.
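
if you want the page cache out of the picture while you benchmark, flush it between runs (linux, as root):

sync
echo 3 > /proc/sys/vm/drop_caches
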
an esxi 3.1 interlude
# esxcfg-vswitch -m 9000 vSwitch[0-20]
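
the [0-20] bit is shorthand for every vswitch you have; in practice you run it once per switch and then list them to make sure the mtu took.  if storage traffic rides a vmkernel port, that port wants a 9000 mtu too (esxcfg-vmknic has an -m flag for that when the port is created, if memory serves):

# esxcfg-vswitch -m 9000 vSwitch0
# esxcfg-vswitch -m 9000 vSwitch1
# esxcfg-vswitch -l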
