RAID Performance
What to look out for



In this section we will look at RAID performance: what to watch out for when creating RAID volumes, and how to squeeze the best performance out of them.

There are some basic things to look out for.

First of all, if you use software RAID with JBODs (Just a Bunch Of Disks), try to use more than one controller and, if possible, more than one disk array when creating your RAID volumes.

It's very easy to saturate a controller when you connect a lot of disks to your server through a single array and controller. Spread the load across controllers and arrays.

When you do use more than one controller and array, remember to create your LUNs across them. Let's say you use RAID 0+1: you would then build one stripe on one array and the mirrored stripe on the other array.

With this configuration you get excellent RAID performance and redundancy. Below is a diagram that shows this.

[Diagram: RAID 0+1 using two arrays]

In the above example we used two disks from array 1 and two disks from array 2. We then created the RAID 0+1 group using our software and presented it to the host.

With this setup the load is spread across two controllers and two arrays. This gives us great RAID performance and redundancy: we are protected even if one of the arrays powers off.
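On Linux, for example, you could build this layout with mdadm. This is just a minimal sketch, and the device names are hypothetical: it assumes /dev/sda and /dev/sdb sit on array 1 and /dev/sdc and /dev/sdd on array 2.

    # Stripe the two disks within each array:
    mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sda /dev/sdb
    mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/sdc /dev/sdd

    # Mirror the two stripes across the arrays to form the RAID 0+1 volume:
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/md1 /dev/md2

Each half of the mirror now lives on its own array and controller, so the load is spread across both paths and the volume survives the loss of an entire array.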

Use dual controllers per disk array

We can improve performance even more by adding dual controllers to each array. This gives better performance and redundancy, as there are now two paths per array.

Below is a diagram to illustrate this.

[Diagram: RAID 0+1 using two arrays and two controllers per array]

With this setup you can tolerate a controller failure or path failure and the disk arrays will still be accessible. Not only that, but we now have double the throughput per disk array.

Let's say that one controller gives us 300 MB/s of throughput per channel. We have four controllers in the above example, which equates to 4 x 300 MB/s = 1.2 GB/s altogether. We have doubled the throughput of the previous setup (2 x 300 MB/s = 600 MB/s) just by adding two more controllers and paths.

Remember that you still need to be clever when creating RAID groups and volumes. Spread the load as best you can. The above configuration is perfect for RAID 1+0 or RAID 0+1.

You could also use this setup for RAID 5, but it is advisable to create your RAID 5 LUN on a single disk array rather than spreading the RAID group over the two arrays. The reason is that if you spread the disks across arrays for RAID 5 and an array fails, you lose the whole volume.

Remember, with RAID 5 you can only tolerate a single disk failure at a time. If you lose an array, you lose more than just one disk in the RAID group.

Keep in mind that the disk array you buy must be able to support dual controllers: the hardware itself must have at least two ports to connect cables to. Most of today's JBOD storage adheres to this.

Configure multipathing for better RAID performance and redundancy

When there is more than one path to a JBOD disk array, the operating system sees each disk twice, once per path. This is great, but you need a way to tell the operating system that if one path fails, it should use the other. The operating system cannot do this on its own, which is why you need multipathing software. This is the case for JBOD storage.

With hardware RAID it is the same story: when you create your LUNs, the operating system sees each LUN twice, and again you have to tell it that if one path fails, it should use the other.

This is where multipathing software comes in.

The operating system must also support multipathing. Windows Server 2003-2008 has a feature called MPIO (Multipath I/O). It comes with the OS and is easy to configure: in Server Manager, enable the MPIO feature.
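On Windows Server 2008 R2, for example, the same thing can be done from an elevated command prompt. A minimal sketch (exact steps vary by Windows version and storage vendor, so verify this against your documentation):

    rem Enable the MPIO feature:
    dism /online /enable-feature /featurename:MultipathIo

    rem Claim all MPIO-capable storage devices (this reboots the server):
    mpclaim -r -i -a ""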

Other vendors, such as Sun Microsystems, also bundle the software with their operating system. On Solaris it's called MPxIO (multiplexed I/O). On Solaris 10, for instance, it's very easy to activate: you just run the command stmsboot -e. This will reboot the server and enable multipathing.
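A short sketch of the Solaris 10 procedure, with an optional verification step after the reboot (LUN names will differ on your system):

    # Enable MPxIO on all supported controllers (prompts for a reboot):
    stmsboot -e

    # After the reboot, list the multipathed logical units and their paths:
    mpathadm list lu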

Multipathing can also be used in hardware RAID configurations. In fact, some vendors will not support your hardware if you have not used dual paths to the storage.

Remember the stripe unit

I already discussed this issue in the RAID 5 section, so I will just briefly mention it here. If you intend to use RAID 5, be careful with the stripe unit size: try to match it to the I/O size of the application.

If you don't, you might end up with a controller (or, with software RAID, a CPU) doing a lot of read-modify-write cycles. We don't want this.

The stripe unit is not just relevant for RAID 5; it's also used with RAID 0, RAID 1+0, and RAID 0+1. Whenever there's a stripe, there's a stripe unit: the little piece on each disk that makes up the total stripe size.

With RAID 0, RAID 1+0, and RAID 0+1 it's not that much of an issue, because there is no parity to calculate. With RAID 5 it's definitely an issue.
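As a worked example (the numbers are purely illustrative): a 5-disk RAID 5 group has 4 data disks per stripe, so a 64 KiB stripe unit gives a full stripe of 4 x 64 KiB = 256 KiB. An application that writes aligned 256 KiB blocks fills whole stripes and avoids the read-modify-write penalty. With Linux software RAID, the stripe unit is set with the chunk size option; the device names below are hypothetical:

    # 5-disk RAID 5 with a 64 KiB stripe unit (mdadm calls it the chunk size):
    mdadm --create /dev/md0 --level=5 --raid-devices=5 --chunk=64 \
          /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde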

I would rather use RAID 1+0 than RAID 5 on software RAID systems if performance is an issue. Rather spend the money and get the correct hardware than try to get RAID 5 to work properly. I have seen many clients burn their fingers with RAID 5.

If RAID performance is what you're after, then get a hardware RAID setup.

Use the same technology disks

SAS is faster than SATA, and Fibre Channel is faster than both. Keep this in mind when creating RAID groups, and try to use the same type of hardware within a RAID group.

What I mean by this is: if you create a RAID 0+1 group, use the same disk technology for the whole group. Don't mix SAS and SATA in the same RAID group. If you use SAS, then all disks in the RAID group must be SAS disks.

If you create a RAID group for performance, use the fastest disks available. Don't use SATA disks in I/O-intensive environments; use SATA for backups, storing large amounts of data, or archiving.
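If you are not sure what technology a disk uses, you can check it before adding the disk to a RAID group. A quick sketch using smartmontools, with a hypothetical device name:

    # Print the disk's identity; SATA disks typically report a SATA version
    # line, while SAS disks report "Transport protocol: SAS":
    smartctl -i /dev/sda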

Fibre Channel disks are the fastest at the moment, but they are also expensive and you rarely get them in JBOD format; they usually come in hardware RAID configurations.

Hardware RAID gives better RAID performance

Some people might disagree with me on this, but in my experience it is true.

People who use the open-source ZFS file system might disagree with me on this point. With the introduction of solid state disks, you can set up ZFS to use an SSD as a sort of cache disk, which greatly increases RAID performance.

You can get good RAID performance with ZFS, but if you are looking at enterprise environments, then hardware RAID is the way to go.
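For example, on a system running ZFS you can add an SSD to an existing pool as a read cache device. A minimal sketch; the pool name tank and the device name are hypothetical:

    # Add an SSD as a cache (L2ARC) device to the pool:
    zpool add tank cache c1t5d0

    # Check that the cache device shows up under the pool:
    zpool status tank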

I have listed some points below on why I think hardware RAID is better.

Write behind vs write through

One thing that hardware RAID gives you is cache. It's faster to write to memory than to a hard disk; that is a fact.

The cache on a hardware RAID controller will increase write performance, provided the controller has been set to write-behind mode (also known as write-back). This is usually the default when you set the controller up. But what is the difference?

In write-behind mode, the data is written to cache before it's committed to disk. The controller commits the data to disk at specified intervals or when the cache gets too full.

[Diagram: write-behind cache mode]

Writing the data to cache is obviously faster than writing it directly to disk, because writing to memory is faster than writing to disk. For this to happen, the controller must be in write-behind mode.

In write-through mode, the data is written directly to disk and the cache is skipped. This is slower than write-behind because we don't make use of the faster cache in the controller.

[Diagram: write-through cache mode]

One reason why a controller might be in write-through mode is that the battery that keeps the data alive in the cache is faulty. Most controllers will switch to write-through mode automatically if this happens.

Another reason might be that someone has set it manually via the software that comes with the controller.

If there are any errors on the controller, write-through mode will be activated. You can override this and force the controller to always use write-behind mode.

This is obviously not a good idea, because if you lose power to the controller, all the data in the cache will be lost. For databases this is a bad scenario.

If you get slow performance from your hardware RAID device, check this setting. Consult the documentation that came with the device if you are unsure how to check the cache mode.
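On LSI MegaRAID based controllers, for instance, you can inspect the cache policy from the command line with the MegaCli utility. This is a sketch; other vendors ship their own tools, and the output format varies:

    # Show the properties of every logical drive on every adapter; look for
    # "Current Cache Policy: WriteBack" (write behind) or "WriteThrough":
    MegaCli -LDInfo -Lall -aALL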

Dual controllers and dual FC switches

When connecting to hardware RAID controllers, there are a couple of basic things you can do to improve performance.

Most controllers have more than one host port that you can connect either directly to hosts or to switches. Use these ports and connect at least two of them to different switches. This improves performance and also enhances redundancy.

Below is a diagram that illustrates this.

[Diagram: dual controllers connected to dual FC switches]

This will also help you in the future if you need to connect more hosts to the storage device. If you have a single host connecting to the storage, you probably won't do this; in that case you might connect the host directly to the storage.

In environments where more than one host will connect to the storage, you should look at this configuration.

With this configuration you would also need multipathing software: the host will see each LUN twice, once per path, and the multipathing software combines the two into a single device that keeps working if a path fails.
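On a Linux host running device-mapper-multipath, for example, you can verify that both paths to each LUN are up (a sketch; output details vary by distribution and storage vendor):

    # List each multipathed LUN together with its active and standby paths:
    multipath -ll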

Conclusions

There are many different storage devices out there today, so in this section I have tried to give some general tips to enhance RAID performance. They are easy to implement and won't cost you an arm and a leg.

Read the specific vendor's documentation if you need to drill down into specific options and settings on your storage device.

Use the vendor's website for tips and tricks and to download manuals.

Try to keep the operating system patches and host bus adapter firmware up to date. Vendors sometimes include performance enhancements and bug fixes in their firmware updates, so staying current might get you better RAID performance.




