IPMP
Internet Protocol Multi Pathing

IPMP (Internet Protocol Multi Pathing) is a standard feature of Solaris 10. As soon as you have installed the operating system, you can configure this great tool immediately.

It's easy to configure, and I will show you simple ways to set up and test your network configuration.

IP multi pathing is still available in Oracle Solaris 10, but the link aggregation feature has become very popular and could eventually replace Internet Protocol Multi Pathing.

I will briefly touch on link aggregation in this section and how to set it up.

What is Solaris Internet Protocol Multi Pathing?

Internet Protocol Multi Pathing provides failover and outbound load spreading for network interfaces. It's built into the Solaris network stack, and you use the normal Solaris ifconfig command to set it up.

It's really simple and easy to setup. If you understand how to configure a network interface in Solaris, then you will easily be able to configure IP Multi Pathing.

Let me explain this with a diagram. Below is a typical setup of IP multi pathing on a Solaris 10 host. You have two network interfaces, or NICs (Network Interface Cards), connected to the LAN. These two interfaces are placed in an IPMP group. If the interface that carries the IP address fails, the other interface will take over the IP address and happily continue working.

Typical setup of IP multi pathing
IPMP setup

In the above diagram, e1000g0 is the interface that holds the active IP address, 192.168.102.10. As you can see, this interface can send and receive traffic. Interface e1000g1 is the failover interface for e1000g0. At this stage, e1000g1 will only be used if e1000g0 becomes too busy.

If that happens, e1000g1 will start to do outbound load spreading; it will help to distribute the outbound load. That's why its arrow only points outwards.

Let's say that interface e1000g0 fails. The IPMP daemon, in.mpathd, will check if there is another interface in the group. If there is, it will fail the IP address over from e1000g0 to e1000g1. Look at the diagram below.

Interface e1000g0 fails
IPMP fail

Interface e1000g1 will now continue to provide network access to the Solaris 10 server.

If we fix the problem on e1000g0, the IP address will automatically fail back from e1000g1 to e1000g0. This automatic failback can be changed in /etc/default/mpathd. There is a parameter called FAILBACK=yes; change it to FAILBACK=no if you DON'T want the interface to fail back automatically.
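As a sketch (the comments in the shipped file vary by release), the change looks like this; in.mpathd re-reads its configuration file when it receives a SIGHUP:

```shell
# Edit /etc/default/mpathd so the failback line reads:
#
#   FAILBACK=no
#
# Then make the running in.mpathd daemon re-read the file:
pkill -HUP in.mpathd
```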

Below is a diagram where the e1000g0 interface is fixed, and the IP address has failed back automatically.

Interface e1000g0 fixed
IPMP repaired

Very easy and efficient. Let's look at the two ways to set up IP multi pathing.

You get link based and probe based IPMP.

Link based IP multi pathing

In this mode, the link between the NIC on the host and the switch is checked; the physical connection between the two devices is tested for link status. Link based IPMP only requires one IP address to be configured on an interface.

The failover interface does not need an IP address; it just needs to be plumbed. The physical link will be used for testing. This setup is very easy and quick.

IP multi pathing Link based
IPMP Link based

Probe based IP multi pathing

In this mode, we need to set up test IP addresses for the interfaces so that in.mpathd can check whether they are working. The in.mpathd daemon uses the test addresses to determine if an interface is up or down.

It sends pings to the defaultrouter or to known hosts and waits for a response. If no response is received, the in.mpathd daemon fails the IP address over to the failover interface in the group.

IP multi pathing Probe based
IPMP probe based

Enough talk. Let me show you a couple of examples.

I have two interfaces, e1000g0 and e1000g1, and I will use them to show you how to set up IPMP. I will first do a link based setup and then a probe based one.

Very important!

Before configuring IP multi pathing, you must set the interfaces to use their own unique MAC (Media Access Control) addresses. You do this by setting the local-mac-address? OBP parameter to true, either with the eeprom command or directly at the OBP.

Below is an example using eeprom. This method works for both SPARC and x86 systems, and you can run the command while Solaris is running.

bash-3.00# eeprom "local-mac-address?=true"
bash-3.00# eeprom local-mac-address?
local-mac-address?=true
bash-3.00#

Just remember to reboot your system to make these changes effective.
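After the reboot you can quickly verify that the interfaces no longer share a MAC address; every "ether" line in the output should be different:

```shell
# Each interface should now show its own unique MAC address
ifconfig -a | grep ether
```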

Link Based IP multi pathing setup


I will do the setup from the command line and then make it permanent across reboots.

We will plumb the interfaces first and then configure them; I'm doing it this way to demonstrate the whole process of setting up IP multi pathing from scratch. On a live system you would not need to plumb the active interface, because it should already be running.

So, let's plumb the interfaces and assign an IP address to e1000g0. Then I will assign the groupname ipmp0 to both interfaces.

bash-3.00# ifconfig e1000g0 plumb
bash-3.00# ifconfig e1000g1 plumb
bash-3.00# ifconfig e1000g0 192.168.102.40 netmask + broadcast + up
Setting netmask of e1000g0 to 255.255.255.0
bash-3.00# ifconfig e1000g0 group ipmp0
bash-3.00# ifconfig e1000g1 group ipmp0
bash-3.00# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
e1000g0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 8
        inet 192.168.102.40 netmask ffffff00 broadcast 192.168.102.255
        groupname ipmp0
        ether 8:0:27:6f:c5:6e
e1000g1: flags=1000842<BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 9
        inet 0.0.0.0 netmask 0
        groupname ipmp0
        ether 8:0:27:1e:1d:e6

Finished! The interfaces are now set up for link based IP multi pathing. How easy was that? We can confirm this by looking at the /var/adm/messages output.

Jun 26 23:15:37 qserver in.mpathd[1238]: [ID 975029 daemon.error] 
No test address configured on interface e1000g0; disabling probe-based failure detection on it

Let's test it to see if it works. I will use an Ubuntu system and ping my test setup.

pieter@pieter-VirtualBox:~$ ping  192.168.102.40
PING 192.168.102.40 (192.168.102.40) 56(84) bytes of data.
64 bytes from 192.168.102.40: icmp_req=1 ttl=255 time=3.83 ms
64 bytes from 192.168.102.40: icmp_req=2 ttl=255 time=0.955 ms
64 bytes from 192.168.102.40: icmp_req=3 ttl=255 time=0.523 ms

The ping works with the setup as is. Let's fail interface e1000g0 to check if the ping still works. I will use a command called if_mpadm to simulate the failure; it takes the interface offline as if you had physically disconnected it.

bash-3.00# if_mpadm -d e1000g0

bash-3.00# tail -f /var/adm/messages
Jun 26 23:23:34 qserver in.mpathd[1238]: [ID 832587 daemon.error] 
Successfully failed over from NIC e1000g0 to NIC e1000g1

Ok, so it looks like the IP has failed over. Let's see if we can ping from the Ubuntu system.

pieter@pieter-VirtualBox:~$ ping  192.168.102.40
PING 192.168.102.40 (192.168.102.40) 56(84) bytes of data.
64 bytes from 192.168.102.40: icmp_req=1 ttl=255 time=1.18 ms
64 bytes from 192.168.102.40: icmp_req=2 ttl=255 time=0.580 ms

Yep, it still works. Let's have a look at the interfaces with the ifconfig -a command.

bash-3.00# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
e1000g0: flags=89000842<BROADCAST,RUNNING,MULTICAST,IPv4,NOFAILOVER,OFFLINE> mtu 0 index 8
        inet 0.0.0.0 netmask 0
        groupname ipmp0
        ether 8:0:27:6f:c5:6e
e1000g1: flags=1000842<BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 9
        inet 0.0.0.0 netmask 0
        groupname ipmp0
        ether 8:0:27:1e:1d:e6
e1000g1:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 9
        inet 192.168.102.40 netmask ffffff00 broadcast 192.168.102.255

Cool stuff! e1000g1 now has the IP address that was previously plumbed on e1000g0. So it worked as advertised.

Let's fix the interface and see if the IP fails back again.

bash-3.00# if_mpadm -r e1000g0
bash-3.00# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
e1000g0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 8
        inet 192.168.102.40 netmask ffffff00 broadcast 192.168.102.255
        groupname ipmp0
        ether 8:0:27:6f:c5:6e
e1000g1: flags=1000842<BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 9
        inet 0.0.0.0 netmask 0
        groupname ipmp0
        ether 8:0:27:1e:1d:e6

bash-3.00# tail -f /var/adm/messages
Jun 24 15:04:02 qserver in.mpathd[1241]: [ID 620804 daemon.error]
Successfully failed back to NIC e1000g0

pieter@pieter-VirtualBox:~$ ping 192.168.102.40
PING 192.168.102.40 (192.168.102.40) 56(84) bytes of data.
64 bytes from 192.168.102.40: icmp_req=1 ttl=255 time=4.34 ms
64 bytes from 192.168.102.40: icmp_req=2 ttl=255 time=0.503 ms

Yep, it failed back and the ping still works.

This is all well and good, but if we reboot the system the config will be gone. So we need to make it permanent by editing the /etc/hostname.e1000g0 and /etc/hostname.e1000g1 files.

bash-3.00# cat /etc/hostname.e1000g0
qserver group ipmp0
bash-3.00# cat /etc/hostname.e1000g1
group ipmp0

We just add group ipmp0 to both files. Now we can reboot the system and IP multi pathing will be configured in link based mode on the interfaces.

How easy was that!

I really like the link based setup. It's easy and quick and I prefer this over the probe based setup.

Next we will look at probe based IP multipathing.

Probe based IP multipathing


For probe based multipathing, we need at least three IP addresses: two test addresses and one active address.

We will use the following IP addresses.

192.168.102.40 on e1000g0 and this will be the active IP.
192.168.102.41 on e1000g0:1 will be the test IP for e1000g0
192.168.102.42 on e1000g1 will be the test IP for the e1000g1 interface.

Great stuff. Let's configure probe based IP multi pathing. I will do this step by step, because it's very easy to get wrong, and then you'll wonder why it did not work.

TIP - Always configure probe based IPMP from the server's console. If you do it via a telnet or ssh session and you get it wrong, the session will freeze and you won't know what's happening.

Again, I'll start from scratch plumbing both interfaces. I will first do e1000g0 and then I will move on to e1000g1.

bash-3.00# ifconfig e1000g0 plumb
bash-3.00# ifconfig e1000g1 plumb

bash-3.00# ifconfig e1000g0 192.168.102.40 netmask + broadcast + up
Setting netmask of e1000g0 to 255.255.255.0
bash-3.00# ifconfig e1000g0 group ipmp0

Now e1000g0 is setup with an IP address and put in the ipmp0 group.

Next, we will configure the test IP address on e1000g0.

bash-3.00# ifconfig e1000g0 addif 192.168.102.41 netmask + broadcast + -failover deprecated up
Created new logical interface e1000g0:1
Setting netmask of e1000g0:1 to 255.255.255.0

bash-3.00# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
e1000g0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 5
        inet 192.168.102.40 netmask ffffff00 broadcast 192.168.102.255
        groupname ipmp0
        ether 8:0:27:6f:c5:6e
e1000g0:1: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 5
        inet 192.168.102.41 netmask ffffff00 broadcast 192.168.102.255
e1000g1: flags=1000842<BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
        inet 0.0.0.0 netmask 0
        ether 8:0:27:1e:1d:e6
bash-3.00#

Notice that I did not use the word test anywhere in the ifconfig command. So, how did I configure it then? There are two things I want to point out here.

1) I used the addif option to create a logical interface. addif means "add interface": it checks which logical interfaces already exist and assigns the next available number to the new one.

Some people put the data (failover) IP on the logical interface instead. It doesn't really matter: you can make either the logical or the physical interface the test interface, as long as one of them is.

2) To create a test interface you specify two flags, -failover and deprecated. These two flags tell IPMP that this is a test interface.
-failover means: don't fail this IP address over when the interface fails. This makes sense, because you don't want the test IP to fail over when the physical interface fails; you want it to keep pinging so in.mpathd can detect when the interface is fixed.
deprecated means: do not use this address as the source for packets initiated from this system. This is useful and needed when client software on this host initiates traffic to a server that expects packets from a known address.

Now for the e1000g1 interface. It's already plumbed so I just need to configure an IP address, put it in the same group as e1000g0 and specify it as a test IP.

bash-3.00# ifconfig e1000g1 192.168.102.42 netmask + broadcast + up
Setting netmask of e1000g1 to 255.255.255.0
bash-3.00# ifconfig e1000g1 group ipmp0 -failover deprecated
bash-3.00# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
e1000g0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 5
        inet 192.168.102.40 netmask ffffff00 broadcast 192.168.102.255
        groupname ipmp0
        ether 8:0:27:6f:c5:6e
e1000g0:1: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 5
        inet 192.168.102.41 netmask ffffff00 broadcast 192.168.102.255
e1000g1: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 4
        inet 192.168.102.42 netmask ffffff00 broadcast 192.168.102.255
        groupname ipmp0
        ether 8:0:27:1e:1d:e6

I know it looks complicated, but it really isn't. All we did was give e1000g1 an IP address, place it in the ipmp0 group and make it a test interface.

Let's test it.

Again, I will disable the e1000g0 physical interface. Then I will ping from Ubuntu and check with the ifconfig command whether the IP failed over.

bash-3.00# if_mpadm -d e1000g0

bash-3.00# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
e1000g0: flags=89000842<BROADCAST,RUNNING,MULTICAST,IPv4,NOFAILOVER,OFFLINE> 
mtu 0 index 2
        inet 0.0.0.0 netmask 0
        groupname ipmp0
        ether 8:0:27:6f:c5:6e
e1000g0:1: flags=89040842<BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,OFFLINE> 
mtu 1500 index 2
        inet 192.168.102.41 netmask ffffff00 broadcast 192.168.102.255
e1000g1: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> 
mtu 1500 index 3
        inet 192.168.102.42 netmask ffffff00 broadcast 192.168.102.255
        groupname ipmp0
        ether 8:0:27:1e:1d:e6
e1000g1:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
        inet 192.168.102.40 netmask ffffff00 broadcast 192.168.102.255
bash-3.00#

Look at the output of ifconfig -a. You will notice that e1000g1 now has a logical interface called e1000g1:1, and that the active IP address, 192.168.102.40, is plumbed and working. The test IP on e1000g0 keeps pinging in the background to see if the interface has been fixed.

Let me ping from Ubuntu to see if it's still working and let's have a look at the messages file.

pieter@pieter-VirtualBox:~$ ping 192.168.102.40
PING 192.168.102.40 (192.168.102.40) 56(84) bytes of data.
64 bytes from 192.168.102.40: icmp_req=1 ttl=255 time=0.537 ms
64 bytes from 192.168.102.40: icmp_req=2 ttl=255 time=0.482 ms

Jun 27 09:22:09 qserver in.mpathd[1248]: [ID 832587 daemon.error] 
Successfully failed over from NIC e1000g0 to NIC e1000g1

Cool, it's working as planned. Now, let's fix the e1000g0 and see if the IP fails back.

bash-3.00# if_mpadm -r e1000g0
bash-3.00# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
e1000g0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet 192.168.102.40 netmask ffffff00 broadcast 192.168.102.255
        groupname ipmp0
        ether 8:0:27:6f:c5:6e
e1000g0:1: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 2
        inet 192.168.102.41 netmask ffffff00 broadcast 192.168.102.255
e1000g1: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 3
        inet 192.168.102.42 netmask ffffff00 broadcast 192.168.102.255
        groupname ipmp0
        ether 8:0:27:1e:1d:e6

bash-3.00# tail -f /var/adm/messages
Jun 27 09:29:49 qserver in.mpathd[1248]: [ID 620804 daemon.error] 
Successfully failed back to NIC e1000g0

pieter@pieter-VirtualBox:~$ ping 192.168.102.40
PING 192.168.102.40 (192.168.102.40) 56(84) bytes of data.
64 bytes from 192.168.102.40: icmp_req=1 ttl=255 time=0.433 ms
64 bytes from 192.168.102.40: icmp_req=2 ttl=255 time=0.482 ms
64 bytes from 192.168.102.40: icmp_req=3 ttl=255 time=0.490 ms

Yep, still works. The interface has failed over and back and everything is still working.

Again we need to place this setup in the hostname files, otherwise the config is lost across reboots. Let's set that up.

For /etc/hostname.e1000g0

192.168.102.40 netmask + broadcast + group ipmp0 up 
addif 192.168.102.41 netmask + broadcast + deprecated -failover up

For /etc/hostname.e1000g1

192.168.102.42 netmask + broadcast + group ipmp0 deprecated -failover up

So, there you have it: IPMP made simple. If you're not sure about this setup, practice setting it up. All you need is an Oracle VM VirtualBox VM with Solaris 10 installed.

One last IMPORTANT thing when using probe based IPMP with defaultrouter

It's rare, but it happens. I have come across this once and it made my life very difficult.

When you use probe based IP multi pathing, the in.mpathd daemon pings the defaultrouter to check that the link is up. So, what happens if the defaultrouter goes down or fails? IP multi pathing fails all interfaces in the group. Yep, that's right: it fails all of them.

Keep this in mind. In most cases this is not a problem, because if the defaultrouter is down, nobody can work anyway. That's true, but what if you don't use a defaultrouter?

Ah, trick question. Not really. If you don't use a defaultrouter, then IP multi pathing will just pick the first couple of hosts it finds and use them as ping targets.

The problem with this setup is that if those hosts go down, all interfaces fail again. Not good. So, how do we fix this?

Very few people know that this problem even exists. Let me show you how to fix it if you don't have a defaultrouter and you want to set up probe based IPMP.

First of all, you need to add some static host routes. Identify at least 10 hosts that you know will never go down, or at least enough of them that one will always stay up.

I usually use ILOM (Integrated Lights Out Management) IP addresses or a file server's IP address. How you select these hosts is up to you.

Then you add these hosts as static routes on the server, using the command route add host IP IP, where both the destination and the gateway are the target host's own address.

Remember, if you do this from the command line and you reboot, the config is gone. Put these static routes in a startup script so they are added every time you boot.

I will use an example file called S85addstatic. In this file you put the entries for your static routes, as shown below. The IP addresses I use are for explanation purposes only. USE YOUR OWN; my example addresses will not work in your configuration!

I suggest using a script that can both add and delete the static routes. Create a file called /etc/rc2.d/S85addstatic and put the following in it:

#!/bin/sh

case "$1" in
        'start')      
         /usr/sbin/route add host 192.168.102.10 192.168.102.10
         /usr/sbin/route add host 192.168.102.11 192.168.102.11
         /usr/sbin/route add host 192.168.102.12 192.168.102.12
         /usr/sbin/route add host 192.168.102.13 192.168.102.13
         /usr/sbin/route add host 192.168.102.14 192.168.102.14
         /usr/sbin/route add host 192.168.102.15 192.168.102.15
         /usr/sbin/route add host 192.168.102.16 192.168.102.16
         /usr/sbin/route add host 192.168.102.17 192.168.102.17
         /usr/sbin/route add host 192.168.102.18 192.168.102.18
         /usr/sbin/route add host 192.168.102.19 192.168.102.19
         ;;
        'stop')
        /usr/sbin/route delete host 192.168.102.10 192.168.102.10
        /usr/sbin/route delete host 192.168.102.11 192.168.102.11
        /usr/sbin/route delete host 192.168.102.12 192.168.102.12
        /usr/sbin/route delete host 192.168.102.13 192.168.102.13
        /usr/sbin/route delete host 192.168.102.14 192.168.102.14
        /usr/sbin/route delete host 192.168.102.15 192.168.102.15
        /usr/sbin/route delete host 192.168.102.16 192.168.102.16
        /usr/sbin/route delete host 192.168.102.17 192.168.102.17
        /usr/sbin/route delete host 192.168.102.18 192.168.102.18
        /usr/sbin/route delete host 192.168.102.19 192.168.102.19
                ;;
esac

Save the file, change the permissions and check if it works.

bash-3.00# chmod 744 /etc/rc2.d/S85addstatic

bash-3.00# /etc/rc2.d/S85addstatic start
add host 192.168.102.10: gateway 192.168.102.10
add host 192.168.102.11: gateway 192.168.102.11
add host 192.168.102.12: gateway 192.168.102.12
add host 192.168.102.13: gateway 192.168.102.13
add host 192.168.102.14: gateway 192.168.102.14
add host 192.168.102.15: gateway 192.168.102.15
add host 192.168.102.16: gateway 192.168.102.16
add host 192.168.102.17: gateway 192.168.102.17
add host 192.168.102.18: gateway 192.168.102.18
add host 192.168.102.19: gateway 192.168.102.19

bash-3.00# netstat -rn

Routing Table: IPv4
  Destination           Gateway           Flags  Ref     Use     Interface
-------------------- -------------------- ----- ----- ---------- ---------
192.168.102.0        192.168.102.40       U         1          5 e1000g0
192.168.102.0        192.168.102.40       U         1          4 e1000g1
192.168.102.0        192.168.102.40       U         1          0 e1000g0:1
192.168.102.10       192.168.102.10       UGH       1          0
192.168.102.11       192.168.102.11       UGH       1          0
192.168.102.12       192.168.102.12       UGH       1          0
192.168.102.13       192.168.102.13       UGH       1          0
192.168.102.14       192.168.102.14       UGH       1          0
192.168.102.15       192.168.102.15       UGH       1          0
192.168.102.16       192.168.102.16       UGH       1          0
192.168.102.17       192.168.102.17       UGH       1          0
192.168.102.18       192.168.102.18       UGH       1          0
192.168.102.19       192.168.102.19       UGH       1          0
192.168.102.59       192.168.102.59       UGH       1          0
224.0.0.0            192.168.102.40       U         1          0 e1000g0
127.0.0.1            127.0.0.1            UH        3         84 lo0

bash-3.00# /etc/rc2.d/S85addstatic stop
delete host 192.168.102.10: gateway 192.168.102.10
delete host 192.168.102.11: gateway 192.168.102.11
delete host 192.168.102.12: gateway 192.168.102.12
delete host 192.168.102.13: gateway 192.168.102.13
delete host 192.168.102.14: gateway 192.168.102.14
delete host 192.168.102.15: gateway 192.168.102.15
delete host 192.168.102.16: gateway 192.168.102.16
delete host 192.168.102.17: gateway 192.168.102.17
delete host 192.168.102.18: gateway 192.168.102.18
delete host 192.168.102.19: gateway 192.168.102.19

bash-3.00# netstat -rn

Routing Table: IPv4
  Destination           Gateway           Flags  Ref     Use     Interface
-------------------- -------------------- ----- ----- ---------- ---------
192.168.102.0        192.168.102.40       U         1          9 e1000g0
192.168.102.0        192.168.102.40       U         1          9 e1000g1
192.168.102.0        192.168.102.40       U         1          0 e1000g0:1
192.168.102.59       192.168.102.59       UGH       1          0
224.0.0.0            192.168.102.40       U         1          0 e1000g0
127.0.0.1            127.0.0.1            UH        3         84 lo0
bash-3.00#

There it is.

It's very RARE to get this type of setup, but if you have problems then have a look at this. I discovered that I needed static routes once when all my interfaces continually failed after I set them up. I could not understand what the problem was.

The physical cable that was connected had a link on the switch (the LED was on), and the interface worked fine when I unconfigured IP multi pathing. As soon as I enabled IPMP, it failed. I created the script, added the static routes and all was well.

Solaris 10 Link Aggregation


What is Solaris Link Aggregation? Let me give you the quick and easy explanation.

With link aggregation, you group two or more interfaces together in an aggregation set to give you speed and failover capabilities.

Hm, doesn't IPMP do that? Yes and no. Internet Protocol Multi Pathing is more about failing over IP addresses than about speed. Sure, you can do outbound load spreading, but it's only outbound.

Link aggregation does this both ways. The interfaces that you group together can be used inbound or outbound with failover or resilience built in.

This sounds too good to be true! And there is no such thing as a free lunch: if you want to use link aggregation, the switch you connect to must support LACP (Link Aggregation Control Protocol). You basically tell the switch that you are going to group two or more interfaces under one IP.

You cannot, at the time of writing, use link aggregation across two separate switches; all interfaces must be connected to the same switch. This might change in future releases of link aggregation. With IPMP, spanning switches is possible.
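By default dladm creates the aggregation with LACP turned off. As a hedged sketch (verify the options against the dladm man page on your release), you can enable LACP when creating or modifying an aggregation:

```shell
# Create aggregation 1 with LACP in active mode and the short timer
dladm create-aggr -l active -T short -d e1000g1 -d e1000g2 1

# Or enable LACP on an existing aggregation:
dladm modify-aggr -l active 1
```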

Let me just show you a quick example of how to configure link aggregation.

I will use two interfaces, e1000g1 and e1000g2, and create an aggregation with them.

bash-3.00# dladm create-aggr -d e1000g1 -d e1000g2 1

bash-3.00# dladm show-link
e1000g0         type: non-vlan  mtu: 1500       device: e1000g0
e1000g1         type: non-vlan  mtu: 1500       device: e1000g1
e1000g2         type: non-vlan  mtu: 1500       device: e1000g2
aggr1           type: non-vlan  mtu: 1500       aggregation: key 1
bash-3.00# dladm show-aggr
key: 1 (0x0001) policy: L4      address: 8:0:27:1e:1d:e6 (auto)
           device       address                 speed           duplex  link   state
           e1000g1      8:0:27:1e:1d:e6   1000  Mbps    full    unknown standby
           e1000g2      8:0:27:fb:66:79   1000  Mbps    full    unknown standby
bash-3.00#

Easy as pie. We used the dladm create-aggr command to create the aggregation, and then used the various show subcommands to look at the configuration.

To use it, we need to plumb the aggr1 interface. I specified key 1 at the end of my create-aggr command, so aggr1 is the name I have to use.

bash-3.00# ifconfig aggr1 plumb
bash-3.00# ifconfig aggr1 192.168.102.41 netmask + broadcast + up
Setting netmask of aggr1 to 255.255.255.0
bash-3.00# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
aggr1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
        inet 192.168.102.41 netmask ffffff00 broadcast 192.168.102.255
        ether 8:0:27:1e:1d:e6
bash-3.00#

There it is. Very simple.
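One more thing worth knowing: dladm stores the aggregation definition itself persistently, but the IP configuration on aggr1 is lost at reboot. To keep it, create a hostname file for the aggregation, just as we did for the IPMP interfaces (a sketch using the same address as above):

```shell
# The aggregation survives reboots on its own; the IP needs
# /etc/hostname.aggr1 so it gets plumbed at boot:
echo "192.168.102.41 netmask + broadcast + up" > /etc/hostname.aggr1
```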

To delete an aggregation set we just use the delete-aggr command.

bash-3.00# dladm delete-aggr 1
dladm: delete operation failed: Device busy
bash-3.00# ifconfig aggr1 unplumb
bash-3.00# dladm delete-aggr 1
bash-3.00# dladm show-aggr
bash-3.00# dladm show-link
e1000g0         type: non-vlan  mtu: 1500       device: e1000g0
e1000g1         type: non-vlan  mtu: 1500       device: e1000g1
e1000g2         type: non-vlan  mtu: 1500       device: e1000g2
bash-3.00#

Did you notice that I was unable to delete the aggregation because the interface was still plumbed? Just unplumb the interface and delete it again.

Well, I said at the top of this page that I would give a brief explanation of link aggregation, and that's what I did.

I hope this quick intro has given you a better understanding of link aggregation.




