Port bonding/trunking with 3com gear

When setting up a new server in late 2001, I found that it would need more bandwidth to the network backbone than one 100baseTX connection could provide. So I pondered the options:

  1. A high-speed network interface, and appropriate upgrades to the backbone
  2. Several 100baseTX connections to the backbone

Item 1 seemed prohibitively expensive. We use mainly 3Com network gear, and upgrading would have meant going to gigabit ethernet, either by buying a new gigabit switch or by getting a gigabit ethernet module for one of our existing switches. We use 3Com® SuperStack® 3 Switch 3300 units, connected via matrix ports. The only gigabit ethernet module I could find for those was $455, to which should be added the price of a gigabit ethernet NIC ($181).

Alternatively, we could get a gigabit ethernet switch and connect it to the backbone, just as we would have connected the single server, but we didn't need that many high-speed ports (yet, anyway).

Item 2, on the other hand, only seemed to require an extra NIC ($58) and an extra switch port, of which we had plenty.
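Summing the prices quoted above makes the gap plain (these are just the late-2001 figures from this article, not current prices):

```shell
# Option 1: gigabit module for the SuperStack 3300 plus a gigabit NIC.
module=455
gig_nic=181
# Option 2: one extra 100baseTX NIC; the switch port was already free.
extra_nic=58

echo "gigabit: \$$(( module + gig_nic ))  bonding: \$$extra_nic"
```

That is $636 versus $58, before even pricing a whole gigabit switch.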

A web search turned up that you can do port bonding (a.k.a. port trunking, or EtherChannel) with standard Linux software and a Cisco switch. The documentation for our 3Com switches also indicated that they supported bonding.

As an aside, I've just been installing a newer switch, the 3Com® SuperStack® 3 Switch 4226T.

It only supports trunking (now called link aggregation by 3Com) on its two 10/100/1000 ports, so beware of that if you want to connect it to older 3Com switches.

Two NICs (3Com 3c905C-TX) were installed in the server, and the kernel was recompiled after enabling Network device support -> Bonding driver support. To set up the bonding device, I used the setup script shown below (I put it in /etc/sysconfig/network-scripts/ifup-bond). ifenslave, which is needed for the bonding, is from iputils-20001010-1.


cd /etc/sysconfig/network-scripts
. network-functions

if [ "${ONBOOT}" = "yes" -a "${2}" = "boot" ]; then
    # Bring up both physical interfaces, then the bonding device itself
    ifconfig ${IF0} up
    ifconfig ${IF1} up
    ifconfig ${DEVICE} ${IPADDR} netmask ${NETMASK} up

    # Attach the two NICs as slaves of the bonding device
    ifenslave ${DEVICE} ${IF0}
    ifenslave ${DEVICE} ${IF1}
fi

and created a matching config file, /etc/sysconfig/network-scripts/ifcfg-bond0:


This may no longer be needed; newer Red Hat distributions (7.3?) may ship with scripts that already do this.

On the switch, I logged in and bonded the two ports the server's NICs were connected to, under feature -> trunk -> addPort:

Menu options: --------------3Com SuperStack 3 Switch 3300---------------        
 addPort            - Add a port to a trunk                                     
 detail             - Display detailed information                              
 removePort         - Remove a port from a trunk                                
 summary            - Display summary information                               
Type "q" to return to the previous menu or ? for help.                          
-----------------------------------Switch 3300 (1)----------------------

One /etc/rc.d/init.d/network restart later, the server was on the backbone, with both interfaces transmitting and receiving packets. A quick benchmark showed that it could receive and transmit at at least 170 Mbit/s, which was deemed satisfactory for the task.
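If you want to turn a timed transfer into a comparable Mbit/s figure yourself, the arithmetic is simple; the byte count and duration below are made-up illustrative numbers, not the measurements from this benchmark:

```shell
# Hypothetical figures: 212,500,000 bytes moved in 10 seconds
bytes=212500000
secs=10

# bytes * 8 bits, divided by seconds, divided by 10^6 for Mbit/s
echo "$(( bytes * 8 / secs / 1000000 )) Mbit/s"
```

With these example numbers that works out to 170 Mbit/s, i.e. comfortably more than a single 100baseTX link can carry.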

The price of gigabit ethernet seems to be falling rapidly though (much more rapidly than fast ethernet's did), and you might want to look at a no-name gigabit switch and a no-name NIC. There's an interesting look at the performance of various NICs at http://www.cs.uni.edu/~gray/gig-over-copper/.

I don't expect to be able to specify any gear from Extreme Networks anytime soon, though :-)

Last updated: 2002-11-14 00:04