
PCIe, Raid, Gigabit

Published: August 03, 2008

Hmm, it looks like as long as the PCIe slot is physically big enough, any card will work. For example, I just ordered an Intel EXPI9300PTBLK, which is a PCIe x1 card - and it is going into the one (and only) PCIe x16 slot in my server!
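If you're curious what the slot actually negotiates, Linux exposes the link width in sysfs. Here's a quick sketch (assuming a Linux box with the usual sysfs PCIe attributes; not every device exposes them) that prints the negotiated width next to each device's maximum - an x1 card in an x16 slot will just show up running at x1:

```python
#!/usr/bin/env python3
"""Print negotiated vs. maximum PCIe link width for every PCI device."""
import glob
import os

def read_attr(dev_path, name):
    """Return a sysfs attribute's contents, or None if it doesn't exist."""
    try:
        with open(os.path.join(dev_path, name)) as f:
            return f.read().strip()
    except OSError:
        return None

for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
    current = read_attr(dev, "current_link_width")
    maximum = read_attr(dev, "max_link_width")
    if current and maximum:
        print(f"{os.path.basename(dev)}: running x{current} (device max x{maximum})")
```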

That aside, I spent the past week running a very real DR drill - and things are looking good, but there were a few hiccups. The drill was caused by me playing with my RAID 1 array. I converted it to a Linux-style RAID 10 "f2" (far 2) layout, which roughly doubles sequential read performance at the expense of write performance. It's still a two-disk mirror, but the data is now laid out so that reads can be striped across both disks at once.
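To see where the read speedup comes from, here is a rough sketch of how a two-disk "far 2" layout places chunks (my own illustration, not mdadm's actual code, and the chunk counts are made up): the first half of each disk holds a normal stripe, and the second half holds the mirror copies rotated by one disk, so sequential reads can pull from both first halves in parallel while every write has to touch both halves.

```python
def far2_locations(chunk, disks=2, chunks_per_half=1024):
    """Map a logical chunk number to its two copies in a RAID 10 'far 2' layout.

    Copy 1 lives in the first half of the disks, striped normally.
    Copy 2 lives in the second half, rotated by one disk, so a chunk's
    two copies never land on the same disk.
    """
    stripe, disk = divmod(chunk, disks)
    near_copy = (disk, stripe)                                   # first half
    far_copy = ((disk + 1) % disks, chunks_per_half + stripe)    # second half
    return near_copy, far_copy

# Sequential reads of chunks 0..3 alternate between disk 0 and disk 1,
# which is where the ~2x read speedup on a two-disk mirror comes from.
for c in range(4):
    print(c, far2_locations(c))
```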

Of course, I did the RAID "upgrade" the _silly_ way and didn't bother to bring one disk over at a time into a new array - instead I just dropped the existing md0 array and recreated it. Bye bye, data. The worst part was restoring the VMware server, but it really wasn't that bad since I had a full list of installed packages and a copy of the /etc folder :) Otherwise, it was just waiting a few days for the backup to copy from my parents' place back to my apartment.
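For what it's worth, the "package list plus /etc" safety net is easy to automate. A minimal sketch, assuming a Debian-style system with dpkg and tar available (run it as root so tar can read everything), might look like this:

```python
#!/usr/bin/env python3
"""Capture the two things that made the rebuild painless:
a list of installed packages and an archive of /etc."""
import datetime
import subprocess

stamp = datetime.date.today().isoformat()

# Record every installed package (Debian/Ubuntu style).
with open(f"packages-{stamp}.txt", "w") as f:
    subprocess.run(["dpkg", "--get-selections"], stdout=f, check=True)

# Snapshot the system configuration.
subprocess.run(["tar", "czf", f"etc-{stamp}.tar.gz", "/etc"], check=True)
```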

How does this all relate? Well, I upgraded to a gigabit LAN, and after trying a couple of PCI cards based on the Realtek RTL8169S-32 chipset (a Linksys EG1032 and a Netgear GA311), I found they seem to be limited to ~218Mb/sec upstream instead of getting anywhere near gigabit speed (nominally 1000Mb/sec; in practice something around 700Mb/sec, maybe a bit better or worse). Not to mention that this chip tops out at a 7200 byte MTU instead of the usual 9000 byte jumbo frame. I thought I was hitting a hard drive physical limit - so improving the RAID performance seemed to be the answer (and it still helps). However, that wasn't the only limit - when I measured with iperf, I found that the network card could stream around 218Mb/sec up and take in around 420Mb/sec down...
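If you want to sanity-check a link without installing iperf, a crude throughput test is just a socket that shovels bytes one way and times it. Here's a rough Python stand-in (my own sketch, not iperf itself - the port and transfer size are arbitrary values for the example): run it with `server` on the receiving box and with the receiver's IP on the sending box, then swap roles to measure the other direction.

```python
#!/usr/bin/env python3
"""Crude one-way TCP throughput test: 'server' on one box, client on the other."""
import socket
import sys
import time

PORT = 5001                       # arbitrary port for this example
PAYLOAD = b"\0" * 65536           # 64 KiB send buffer
TOTAL_BYTES = 512 * 1024 * 1024   # push 512 MiB per run

def server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind(("", PORT))
        s.listen(1)
        conn, addr = s.accept()
        received = 0
        start = time.time()
        with conn:
            while True:
                chunk = conn.recv(65536)
                if not chunk:
                    break
                received += len(chunk)
        elapsed = time.time() - start
        print(f"received {received * 8 / elapsed / 1e6:.0f} Mb/sec from {addr[0]}")

def client(host):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.connect((host, PORT))
        sent = 0
        start = time.time()
        while sent < TOTAL_BYTES:
            s.sendall(PAYLOAD)
            sent += len(PAYLOAD)
        elapsed = time.time() - start
        print(f"sent {sent * 8 / elapsed / 1e6:.0f} Mb/sec")

if __name__ == "__main__":
    if sys.argv[1] == "server":
        server()
    else:
        client(sys.argv[1])
```

Measuring each direction separately is the same kind of test iperf does, and it's how an asymmetry like ~218Mb/sec up versus ~420Mb/sec down shows itself.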