Solaris zones or containers example



So, you installed Solaris 10 on your system and you are ready to create some Solaris zones.

In this section I will show you some examples of how to create zones on the Solaris 10 operating system. It's really not that difficult, and I will guide you through the process with examples.

We will look at:

Sparse root zones

Whole root zones

Network resources and issues

Starting, stopping and deleting zones

Managing disk resources in zones

Managing CPU and memory in zones

If you have not already read my page on Virtualization software, then I suggest you go through that page first to get a basic understanding of virtualization and Solaris zones.

So, let's get started.

I use virtualization to run the examples themselves: I have a laptop with Windows 7 on which I installed Oracle VM VirtualBox, created a virtual machine, and installed Solaris 10 x86 u9 on it.

Sparse root zones


First of all, let's use the zoneadm command to actually see if there are any Solaris zones created. I will use screen shots from my actual setup to show you the commands and output.

bash-3.00# zoneadm list -icv
  ID NAME             STATUS     PATH    BRAND    IP
   0 global           running    /       native   shared
bash-3.00#

The zoneadm command is used for administration of Solaris zones. The list subcommand tells zoneadm to display zone information, and the -icv options tell it to list installed (-i) and configured (-c) zones with verbose (-v) output, meaning show me all info about the zones. Pretty straightforward.

From the above output we can see that there is only one zone and that's the global zone. The global zone will always be there.

What do we need to create Solaris zones? Like any other server, a zone needs CPU, memory, network and storage. When we create Solaris zones, we don't really need to worry about CPU and memory. We can control these resources at a later stage with resource controls, or we can restrict Solaris zones to use a certain amount of CPU and memory.

We have to decide whether we want to use whole root zones or sparse root zones. What's the difference? With whole root zones, most of the global zone's filesystems, such as /usr, /lib, /sbin and /platform, are copied to the zone. A whole root zone therefore requires more disk space for the OS than a sparse root zone.

Sparse root zones require less space because the /usr, /lib, /sbin and /platform filesystems are loopback mounted (lofs) from the global zone. Loopback means the filesystems are, in effect, linked from the global zone into the zone.

So, a sparse root zone does not copy these filesystems, it only links to them. Sparse root zones require less space than whole root zones and install quicker as well.

We need to decide whether we want to use dedicated network resources or shared network resources. You can either use an ip-type of shared or exclusive.

Shared means that we share an interface with the global zone and other Solaris zones. I will show you in the example what I mean.

Exclusive means, use a dedicated interface for that zone exclusively.

Another important factor to consider is where you want to create your zone. This is called the zonepath. This is where the actual zone and its files will reside after creation.

For testing and playing around, you may simply create the zone in the root filesystem; that's fine for a test zone. If it's a production zone, then you will probably create it on separate storage such as an external disk unit or LUNs.

If you plan to move Solaris zones across systems, then creating your zones on shared storage might be a good idea.
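
For example, if your system boots from a ZFS root pool (I'm assuming the default pool name rpool here), you could carve out a dedicated dataset for your zones instead of using a plain directory. A minimal sketch:

bash-3.00# zfs create -o mountpoint=/zones rpool/zones
bash-3.00# zfs create rpool/zones/myzone
bash-3.00# chmod 700 /zones/myzone

This keeps each zone on its own dataset so you can snapshot it or set quotas per zone. Just be aware that some older Solaris 10 releases had upgrade restrictions for zones with a ZFS zonepath, so check the release notes for your update.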

I will start with a simple sparse root zone and then also show you how to create a whole root zone.

I will use the root filesystem to store my zone info in. My zone will also use the shared ip type. So, here goes:

We will use the zonecfg command to create the actual zone and add resources to it.

bash-3.00# zonecfg -z myzone
myzone: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:myzone>

The above command starts the interactive zone creation session. The -z myzone option specifies the zone name; in this case I want to call the zone myzone.

The output says that there is no zone called myzone and that we have to create it. Great, that's exactly what we want.

Remember one thing here: if you want to create a sparse root zone, use just the create command. If you want to create a whole root zone, use create -b.

I will just use the create command.

bash-3.00# zonecfg -z myzone
myzone: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:myzone> create
zonecfg:myzone> info
zonename: myzone
zonepath:
brand: native
autoboot: false
bootargs:
pool:
limitpriv:
scheduling-class:
ip-type: shared
hostid:
inherit-pkg-dir:
        dir: /lib
inherit-pkg-dir:
        dir: /platform
inherit-pkg-dir:
        dir: /sbin
inherit-pkg-dir:
        dir: /usr
zonecfg:myzone>

I used the info command to get information about my current zone.

From the info command we can see that there are some things missing before we can install this zone: there is no zonepath, and we need to give it some IP information. But we can already get a lot of information from this output.

For instance, we see that the ip-type is shared, and we can tell that this is a sparse root zone because /usr, /sbin, /platform and /lib are inherited from the global zone.

Let's add some stuff.

First the zonepath, then the ip stuff.

zonecfg:myzone> set zonepath=/zones/myzone

Very simple. Just specify the zonepath where you want to create the zone. Create the directory, and change the permissions.

bash-3.00# mkdir -p /zones/myzone
bash-3.00# chmod 700 /zones/myzone

Let's add the network information.

zonecfg:myzone> add net
zonecfg:myzone:net> set address=192.168.102.120
zonecfg:myzone:net> set physical=e1000g0
zonecfg:myzone:net> set defrouter=192.168.102.1
zonecfg:myzone:net> end
zonecfg:myzone>

We used the add net command to add network information. The prompt changed to zonecfg:myzone:net>. This is just to show you that you are now going to configure the network.

The set address=192.168.102.120 sets the IP address for your zone. set physical=e1000g0 means use the physical interface, e1000g0. You get this interface name from the global zone with the ifconfig -a command.

bash-3.00# ifconfig -a
lo0: flags=2001000849 mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
e1000g0: flags=4001000842 mtu 1500 index 2
        inet 192.168.102.40 netmask ffffff00 broadcast 192.168.102.255
        ether 8:0:27:6f:c5:6e
bash-3.00#

So, e1000g0 is a physical interface that's used in the global zone.

I always set autoboot=true because I want the zone to boot automatically when the system starts up.

Let's do the info to check that everything is ok.

zonecfg:myzone> set autoboot=true
zonecfg:myzone> info
zonename: myzone
zonepath: /zones/myzone
brand: native
autoboot: true
bootargs:
pool:
limitpriv:
scheduling-class:
ip-type: shared
hostid:
inherit-pkg-dir:
        dir: /lib
inherit-pkg-dir:
        dir: /platform
inherit-pkg-dir:
        dir: /sbin
inherit-pkg-dir:
        dir: /usr
net:
        address: 192.168.102.120
        physical: e1000g0
        defrouter: 192.168.102.1
zonecfg:myzone>

I'm happy with the config. Next step is to verify and save the zone config.

zonecfg:myzone> verify
zonecfg:myzone> commit
zonecfg:myzone> exit
bash-3.00#

No errors, great. We use the exit command to exit the zonecfg command.

Everything looks fine. Let's run the zoneadm command to check the state of the zone.

bash-3.00# zoneadm list -icv
  ID NAME       STATUS     PATH          BRAND    IP
   0 global     running    /             native   shared
   - myzone     configured /zones/myzone native   shared
bash-3.00#

From the output we can see the zone is configured. This just means that we created the zone. We have not put any OS on it yet. Next step is to install the zone.

bash-3.00# zoneadm -z myzone install
Preparing to install zone <myzone>.
Creating list of files to copy from the global zone.
Copying <3404> files to the zone.
Initializing zone product registry.
Determining zone package initialization order.
Preparing to initialize <1132> packages on the zone.
Initialized <1132> packages on zone.
Zone <myzone> is initialized.
The file </zones/myzone/root/var/sadm/system/logs/install_log>
contains a log of the zone installation.

You will notice that the installation doesn't take too long. That's because it's a sparse root zone. There is also a log file that you can look at after the install.

Let's run the zoneadm command again.

bash-3.00# zoneadm list -icv
  ID NAME      STATUS     PATH          BRAND    IP
   0 global    running    /             native   shared
   - myzone    installed  /zones/myzone native   shared
bash-3.00#

From the zoneadm output we see that the zone is installed and we can boot it up.

bash-3.00# zoneadm -z myzone boot

bash-3.00# zoneadm list -icv
  ID NAME      STATUS     PATH          BRAND    IP
   0 global    running    /             native   shared
   2 myzone    running    /zones/myzone native   shared
bash-3.00#

Now zoneadm shows that the zone is installed and running. Almost finished. We now need to connect to the zone's console and configure Solaris 10 for the first time on the zone. We use the zlogin -C myzone command; the -C says connect to the console. If you omit the -C, then you log in as with a normal telnet or ssh session.

bash-3.00# zlogin -C myzone
[Connected to zone 'myzone' console]


You did not enter a selection.
What type of terminal are you using?
 1) ANSI Standard CRT
 2) DEC VT52
 3) DEC VT100
 4) Heathkit 19
 5) Lear Siegler ADM31
 6) PC Console
 7) Sun Command Tool
 8) Sun Workstation
 9) Televideo 910
 10) Televideo 925
 11) Wyse Model 50
 12) X Terminal Emulator (xterms)
 13) CDE Terminal Emulator (dtterm)
 14) Other
Type the number of your choice and press Return: 13

Here you just answer the normal questions that Solaris will ask you such as terminal type, hostname, kerberos, nameservice, NFS domain, timezone and root password.

SunOS Release 5.10 Version Generic_142910-17 64-bit
Copyright (c) 1983, 2010, Oracle and/or its affiliates. 
All rights reserved.
Hostname: myzone
Reading ZFS config: done.
myzone console login:

Cool stuff! We now have a zone running, and we can log into it and start installing our applications on it. How easy was that?
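
One tip on the console: to disconnect from a zlogin -C session without halting the zone, type the escape sequence ~. at the beginning of a line, much like with tip or ssh:

myzone console login: ~.
[Connection to zone 'myzone' console closed]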

How much space does the sparse root zone take? Here is the output of the df -k command before we created the zone:

bash-3.00# df -k /
Filesystem       kbytes    used   avail capacity  Mounted on
/dev/dsk/c0d0s0  14397417 4465887 9787556    32%    /

Root was 4465887 Kbytes.

Here is the output after the zone was created:

bash-3.00# df -k /
Filesystem      kbytes    used   avail capacity  Mounted on
/dev/dsk/c0d0s0 14397417 4594913 9658530    33%    /

The used column now says 4594913 Kbytes. If we subtract the two figures from each other we get 129026 Kbytes, which equates to about 129 Mbytes. So a sparse root zone takes roughly 129 Mbytes; treat this as an approximate figure.

You can now navigate the newly created zone's filesystems and list files from the global zone just like you would on a normal system. For instance, the passwd file for myzone is located at /zones/myzone/root/etc/passwd, and the same goes for the shadow and group files.
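
For example, from the global zone:

bash-3.00# grep root /zones/myzone/root/etc/passwd
root:x:0:0:Super-User:/:/sbin/sh

The exact passwd entry may differ on your system; the point is that the zone's files are ordinary files under the zonepath.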

Whole root Solaris zones


I just want to show you how to create a whole root zone as well, for completeness. I will not clutter the procedure with explanations; we already looked at the basics, so it's not necessary to explain everything again.

Here we go.

The biggest difference between creating a whole root zone and a sparse root zone is how you create it: with whole root zones, we use the create -b command.

bash-3.00# zonecfg -z wholeroot
wholeroot: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:wholeroot> create -b
zonecfg:wholeroot> set zonepath=/zones/wholeroot
zonecfg:wholeroot> set autoboot=true
zonecfg:wholeroot> add net
zonecfg:wholeroot:net> set address=192.168.102.125
zonecfg:wholeroot:net> set physical=e1000g0
zonecfg:wholeroot:net> set defrouter=192.168.102.1
zonecfg:wholeroot:net> end
zonecfg:wholeroot> info
zonename: wholeroot
zonepath: /zones/wholeroot
brand: native
autoboot: true
bootargs:
pool:
limitpriv:
scheduling-class:
ip-type: shared
hostid:
net:
        address: 192.168.102.125
        physical: e1000g0
        defrouter: 192.168.102.1
zonecfg:wholeroot> verify
zonecfg:wholeroot> commit
zonecfg:wholeroot> exit

bash-3.00# mkdir /zones/wholeroot

bash-3.00# chmod 700 /zones/wholeroot

bash-3.00# zoneadm list -icv
ID NAME      STATUS     PATH             BRAND    IP
0 global    running    /                native   shared
5 myzone    running    /zones/myzone    native   shared
- wholeroot configured /zones/wholeroot native   shared

bash-3.00# zoneadm -z wholeroot install
Preparing to install zone <wholeroot>.
Creating list of files to copy from the global zone.
Copying <145150> files to the zone.
Initializing zone product registry.
Determining zone package initialization order.
Preparing to initialize <1132> packages on the zone.
Initialized <1132> packages on zone.
Zone <wholeroot> is initialized.
The file </zones/wholeroot/root/var/sadm/system/logs/install_log>
contains a log of the zone installation.

bash-3.00# zoneadm list -icv
ID NAME     STATUS     PATH            BRAND    IP
0 global    running   /                native   shared
5 myzone    running   /zones/myzone    native   shared
- wholeroot installed /zones/wholeroot native   shared

bash-3.00# zoneadm -z wholeroot boot

bash-3.00# zlogin -C wholeroot

Let's see how big the whole root zone is. I did a df -k / before I created the zone. Here's the output:

bash-3.00# df -k /
Filesystem      kbytes    used    avail   capacity  Mounted on
/dev/dsk/c0d0s0 14397417  4594985 9658458 33%       /

And here's the df after creation:

Filesystem      kbytes   used    avail   capacity  Mounted on
/dev/dsk/c0d0s0 14397417 7836035 6417408 55%       /

Without even doing the subtraction I can see that the whole root zone is much bigger. Let's do the math: 7836035 - 4594985 = 3241050 Kbytes, which equates to about 3.2 Gbytes. So a whole root zone may require more than 3 Gbytes of space. Keep this in mind when you start creating production environments.

Network interfaces and issues


Another thing I want to show you is the network interfaces. Here is the ifconfig output:

Note: I had to trim the end of the output to fit in all the bits I need you to see.

bash-3.00# ifconfig -a
lo0: flags=2001000849
        inet 127.0.0.1 netmask ff000000
lo0:1: flags=2001000849 
        zone myzone
        inet 127.0.0.1 netmask ff000000
lo0:2: flags=2001000849 
        zone wholeroot
        inet 127.0.0.1 netmask ff000000
e1000g0: flags=4001000842 
        inet 192.168.102.40 netmask ffffff00 broadcast 
        ether 8:0:27:6f:c5:6e
e1000g0:1: flags=1000843 
        zone myzone
        inet 192.168.102.120 netmask ffffff00 broadcast
e1000g0:2: flags=1000843 mtu 1500 index 2
        zone wholeroot
        inet 192.168.102.125 netmask ffffff00 broadcast

All the zones run as logical interfaces on the physical e1000g0; that's what ip-type shared is all about. myzone runs on e1000g0:1 and wholeroot runs on e1000g0:2. Each zone also has its own loopback interface on lo0.

Very cool. There is a slight problem with this, though: you can telnet or ssh from one zone to another on the same system, because this direct interzone routing is enabled by default.

By default routing is direct and the actual physical wire is not used between Solaris zones.

Normally this would not be a problem, but if you have something like a DMZ setup, then it definitely could be. You don't want people in the corporate zones logging straight into the DMZ zones; that is a potential security risk.

The setup might require the users to first go through a firewall on the outside before access is granted to the DMZ zone.

So how do we tell the Solaris zones to use the physical wires?

There is an ndd setting that you can use to disable this default routing between the zones:

bash-3.00# ndd -set /dev/ip ip_restrict_interzone_loopback 1

Keep in mind that this setting forces the zones to use the wire instead of routing directly between one another.
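
You can check the current value with the -get option:

bash-3.00# ndd -get /dev/ip ip_restrict_interzone_loopback
1

Also keep in mind that ndd settings do not survive a reboot, so you would typically re-apply this from a boot script of your own.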

Another way to get around the direct routing problem is to use the ip-type exclusive option in the zone. With this option the zones will have their own physical network interface instead of sharing a single interface or interfaces.

This might not be practical for setups with lots of zones, because each zone then needs its own physical network interface.
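
For reference, here is a minimal sketch of an exclusive-IP zone configuration. I'm assuming a second physical interface called e1000g1 that the global zone is not using. Note that with ip-type exclusive you do not set an address in zonecfg; the zone plumbs and configures the interface itself, just like a standalone system:

bash-3.00# zonecfg -z exclzone
exclzone: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:exclzone> create
zonecfg:exclzone> set zonepath=/zones/exclzone
zonecfg:exclzone> set ip-type=exclusive
zonecfg:exclzone> add net
zonecfg:exclzone:net> set physical=e1000g1
zonecfg:exclzone:net> end
zonecfg:exclzone> verify
zonecfg:exclzone> commit
zonecfg:exclzone> exit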

Starting, stopping and deleting Solaris zones


Let's have a look at how to stop, start and delete zones.

To stop a zone use the halt command:

bash-3.00# zoneadm -z myzone halt

To boot or start a zone use the boot command:

bash-3.00# zoneadm -z myzone boot

Is this easy or what?
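
One caveat: halt stops the zone abruptly, without running the shutdown scripts inside it. If you want a clean shutdown, my preferred route is to run the shutdown command inside the zone through zlogin:

bash-3.00# zlogin myzone shutdown -y -g0 -i0

When the zone reaches init state 0 it ends up halted, and you can boot it again with zoneadm as usual.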

Let's delete the whole root zone. First of all, we have to halt the zone. Then we need to uninstall the Solaris OS from it, and then we can delete the zone.

There is actually a force option that lets you delete the zone without uninstalling it first, but I always err on the side of caution by uninstalling first. It's just one extra step and it might save your bacon one day. I will show the forced variants after the example below.

bash-3.00# zoneadm -z wholeroot halt

bash-3.00# zoneadm -z wholeroot uninstall
Are you sure you want to uninstall zone wholeroot (y/[n])? y

bash-3.00# zonecfg -z wholeroot delete
Are you sure you want to delete zone wholeroot (y/[n])? y

bash-3.00# zoneadm list -icv
  ID NAME       STATUS     PATH          BRAND    IP
   0 global     running    /             native   shared
   8 myzone     running    /zones/myzone native   shared
bash-3.00#
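
For reference, the forced variants look like this; the -F flag simply skips the confirmation prompt:

bash-3.00# zoneadm -z wholeroot uninstall -F
bash-3.00# zonecfg -z wholeroot delete -F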

Adding and changing resources in a zone


Next we will look at how to add resources to a zone.

Let's say you have a disk that you want to add to the zone. How will you go about doing it?

There are actually a lot of ways to add a file system to a zone. You can take an already mounted file system from the global zone, loopback mount it (lofs), and specify whether it is read write or read only.

You could also create a partition on a disk and specify that the partition be mounted in the zone, or use ZFS and mount filesystems in your zones.

You could also use the match option of the device resource to give the zone access to devices such as CD-ROM or tape drives; see the sketch below.
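
Here is a quick sketch of the device and ZFS dataset variants. The slice and pool names are just placeholders, not part of my setup:

bash-3.00# zonecfg -z myzone
zonecfg:myzone> add device
zonecfg:myzone:device> set match=/dev/dsk/c0d1s3
zonecfg:myzone:device> end
zonecfg:myzone> add dataset
zonecfg:myzone:dataset> set name=rpool/zonedata
zonecfg:myzone:dataset> end
zonecfg:myzone> commit
zonecfg:myzone> exit

The device resource exposes the matching device nodes inside the zone, and the dataset resource delegates a whole ZFS dataset that the zone can then manage itself.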

I will use two methods. The first is a filesystem that's mounted in the global zone; I will add this file system to the zone as a loopback (lofs) mount.

The second will be a normal disk partition, given to the zone as its block (special) and raw devices.

The disk is 1 Gbyte in size. I used slices 0 and 1 and made each of them 500 Mbyte. Slice 0 is mounted as /disk1 in the global zone; this will be my lofs file system, mounted read only as /zdisk1 in the zone.

Slice 1 will be used as a normal ufs file system and mounted as /zdisk2 in the zone. The preparation steps are sketched below.
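
For completeness, this is roughly how the two slices were prepared in the global zone before being handed to the zone. I'm assuming the disk is c0d1 and the two 500 Mbyte slices were already created with the format utility:

bash-3.00# newfs /dev/rdsk/c0d1s0
bash-3.00# newfs /dev/rdsk/c0d1s1
bash-3.00# mkdir /disk1
bash-3.00# mount /dev/dsk/c0d1s0 /disk1

Slice 0 now carries the /disk1 filesystem that we will lofs mount into the zone; slice 1 stays unmounted in the global zone because the zone will mount it itself.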

Let's go!

We will use the add fs command to add the file systems.

bash-3.00# zonecfg -z myzone
zonecfg:myzone> add fs
zonecfg:myzone:fs> set dir=/zdir3
zonecfg:myzone:fs> set special=/disk1
zonecfg:myzone:fs> set type=lofs
zonecfg:myzone:fs> add options [ro]
zonecfg:myzone:fs> end
zonecfg:myzone> add fs
zonecfg:myzone:fs> set dir=/zdisk2
zonecfg:myzone:fs> set special=/dev/dsk/c0d1s1
zonecfg:myzone:fs> set raw=/dev/rdsk/c0d1s1
zonecfg:myzone:fs> set type=ufs
zonecfg:myzone:fs> end
zonecfg:myzone>

If you look at the commands you will see that the first file system is the loopback (lofs) file system; I have also added the ro option.

The second file system is the normal ufs file system. And if you look closely, you will see that I made a mistake with the mount point of the first lofs file system: I called it /zdir3 instead of /zdisk1.

Let me show you how to change it. You select the resource and then set its attributes, like this:

zonecfg:myzone> select fs dir=/zdir3
zonecfg:myzone:fs> info
fs:
        dir: /zdir3
        special: /disk1
        raw not specified
        type: lofs
        options: [ro]
zonecfg:myzone:fs> set dir=/zdisk1
zonecfg:myzone:fs> info
fs:
        dir: /zdisk1
        special: /disk1
        raw not specified
        type: lofs
        options: [ro]
zonecfg:myzone:fs> end
zonecfg:myzone>

That's it! Easy as pie.

Now I'm ready to commit the config and reboot the zone to see what the filesystems look like.

zonecfg:myzone> verify
zonecfg:myzone> commit
zonecfg:myzone> exit

bash-3.00# zoneadm -z myzone reboot
bash-3.00# zlogin myzone
[Connected to zone 'myzone' pts/5]
Last login: Thu Jun  2 16:27:39 on pts/5
Oracle Corporation      SunOS 5.10 January 2005
# df -h
Filesystem   size   used  avail capacity  Mounted on
/            14G   4.4G   9.2G    33%    /
/dev         14G   4.4G   9.2G    33%    /dev
/lib         14G   4.4G   9.2G    33%    /lib
/platform    14G   4.4G   9.2G    33%    /platform
/sbin        14G   4.4G   9.2G    33%    /sbin
/usr         14G   4.4G   9.2G    33%    /usr
/zdisk1      470M   1.0M   422M     1%    /zdisk1
/zdisk2      500M   1.0M   449M     1%    /zdisk2
proc         0K     0K     0K      0%    /proc
.
.
.
swap         2.2G    16K   2.2G     1%    /var/run
#

As you can see, we now have two additional file systems, /zdisk1 and /zdisk2. I mounted /zdisk1 read only; let's see if that is true. I will now attempt to create a file in that filesystem.

# cd /zdisk1
# pwd
/zdisk1
# touch ./testfile
touch: cannot create ./testfile: Read-only file system

It gives me an error saying the file system is read only. Great, so this works.

Now I'm going to change the file system to rw so I can write to it. Below is the whole process I used to make it read write.

bash-3.00# zonecfg -z myzone
zonecfg:myzone> select fs dir=/zdisk1
zonecfg:myzone:fs> info
fs:
        dir: /zdisk1
        special: /disk1
        raw not specified
        type: lofs
        options: [ro]
zonecfg:myzone:fs> remove options [ro]
zonecfg:myzone:fs> add options [rw]
zonecfg:myzone:fs> info
fs:
        dir: /zdisk1
        special: /disk1
        raw not specified
        type: lofs
        options: [rw]
zonecfg:myzone:fs> end
zonecfg:myzone> verify
zonecfg:myzone> commit
zonecfg:myzone> exit

Now the file system should be read write. We will test this by logging into the zone and trying to create a file in the file system.

bash-3.00# zlogin myzone
[Connected to zone 'myzone' pts/4]
Last login: Fri Jun  3 09:03:42 on pts/4
Oracle Corporation      SunOS 5.10      Generic Patch 
# cd /zdisk1
# pwd
/zdisk1
# touch ./testfile
touch: cannot create ./testfile: Read-only file system
# exit
[Connection to zone 'myzone' pts/4 closed]

Hm, it did not work; I still get the read only file system error. To fix this, you need to reboot the zone. That's one thing about Solaris zones: when you change the configuration, it usually requires a reboot to become active. Luckily, the boot only takes a couple of seconds, but keep this limitation in mind.

bash-3.00# zoneadm -z myzone reboot
bash-3.00# zlogin myzone
[Connected to zone 'myzone' pts/4]
Last login: Fri Jun  3 09:09:09 on pts/4
Oracle Corporation      SunOS 5.10      Generic Patch   January 2005
# cd /zdisk1
# pwd
/zdisk1
# touch ./testfile
# ls -l ./testfile
-rw-r--r--   1 root  root  0 Jun  3 09:20 ./testfile
#

Ah, so this time it worked. The bad thing about loopback (lofs) filesystems that are mounted read write is that you can also access and change what's in the file system from the global zone. This might not be what you want.

CPU and Memory resource capping


We will now look at how to restrict a zone's use of memory and CPUs. Keep in mind there are a lot of ways to do this; you can make it as complicated or as easy as you want.

With earlier releases of Solaris zones, you had to create resource pools and then assign a pool to a zone; the resources for the zones were created in these pools. With later releases this complexity has been removed, and you can control CPU and memory resources directly from the zone configuration using the zonecfg command.

This makes it very easy to control these resources. You can still use the pools function if you want to; in fact, when you create these resources in the zone, a pool is created for the zone behind the scenes.

I always say, keep it simple. At some stage someone else has to look or fix some problems, and if you used complex pools and resource scheduling, then the poor guy who has to fix it will be completely lost.

I will use the tools provided with the zonecfg command to assign these resources.

First let's use the dedicated-cpu resource. This dedicates the specified number of CPUs to the zone at startup. The zone, however, will not start if this requirement cannot be met: if you specified 4 CPUs and only 3 are available, the zone will not boot.

The psrinfo command displays the number of CPUs on the system itself. These could be threads, cores or physical CPUs, depending on the system you are using.

bash-3.00# psrinfo
0       on-line   since 06/03/2011 11:48:37
1       on-line   since 06/03/2011 11:48:40
2       on-line   since 06/03/2011 11:48:40
3       on-line   since 06/03/2011 11:48:40

bash-3.00# zonecfg -z myzone
zonecfg:myzone> add dedicated-cpu
zonecfg:myzone:dedicated-cpu> set ncpus=2
zonecfg:myzone:dedicated-cpu> info
dedicated-cpu:
        ncpus: 2
zonecfg:myzone:dedicated-cpu> end
zonecfg:myzone> info
zonename: myzone
zonepath: /zones/myzone
brand: native
autoboot: true
bootargs:
pool:
limitpriv:
scheduling-class:
ip-type: shared
hostid:
inherit-pkg-dir:
        dir: /lib
inherit-pkg-dir:
        dir: /platform
inherit-pkg-dir:
        dir: /sbin
inherit-pkg-dir:
        dir: /usr
fs:
        dir: /zdisk2
        special: /dev/dsk/c0d1s1
        raw: /dev/rdsk/c0d1s1
        type: ufs
        options: []
net:
        address: 192.168.102.120
        physical: e1000g0
        defrouter: 192.168.102.1
dedicated-cpu:
        ncpus: 2
zonecfg:myzone>

Great stuff! Whenever the zone starts and the requested number of CPUs is available, the zone will have 2 working dedicated CPUs.

Let's cap the memory. Any zone that is not capped can potentially use all the resources it wants. This might not be a good idea, especially if an application in a zone misbehaves and starts to hog resources. By capping the memory, we tell the zone that it is only allowed to use so much memory, after which further allocation is denied.

We use the capped-memory resource to achieve this. You can cap physical memory, swap and locked memory.

zonecfg:myzone> add capped-memory
zonecfg:myzone:capped-memory> set physical=512M
zonecfg:myzone:capped-memory> set swap=1G
zonecfg:myzone:capped-memory> end
zonecfg:myzone> info
zonename: myzone
zonepath: /zones/myzone
brand: native
autoboot: true
bootargs:
pool:
limitpriv:
scheduling-class:
ip-type: shared
hostid:
inherit-pkg-dir:
        dir: /lib
inherit-pkg-dir:
        dir: /platform
inherit-pkg-dir:
        dir: /sbin
inherit-pkg-dir:
        dir: /usr
fs:
        dir: /zdisk2
        special: /dev/dsk/c0d1s1
        raw: /dev/rdsk/c0d1s1
        type: ufs
        options: []
net:
        address: 192.168.102.120
        physical: e1000g0
        defrouter: 192.168.102.1
dedicated-cpu:
        ncpus: 2
capped-memory:
        physical: 512M
        [swap: 1G]
rctl:
        name: zone.max-swap
        value: (priv=privileged,limit=1073741824,action=deny)
zonecfg:myzone>

So the memory has been capped to 512 Mbytes. M is Megabytes, G is Gigabytes.

If we now reboot the zone, these settings will take effect. There is another CPU setting called capped-cpu. Be careful with this one: the units are specified as ncpus, which is confusing because the value does not refer to whole physical CPUs but to a fraction of CPU time.

For instance, 1 means 100% of a CPU, 0.5 means 50% of a CPU, and 1.25 means 125%, or one and a quarter CPUs. You cannot use dedicated-cpu and capped-cpu at the same time; they are, as they say, not compatible.

We have to save the config and reboot the zone.

zonecfg:myzone> verify
zonecfg:myzone> commit
zonecfg:myzone> exit

bash-3.00# zoneadm -z myzone reboot

Let's log into the zone and see if these resources are set.

bash-3.00# zlogin myzone
[Connected to zone 'myzone' pts/5]
Last login: Fri Jun  3 13:19:46 on pts/5
Oracle Corporation      SunOS 5.10  January 2005
# psrinfo
0       on-line   since 06/03/2011 11:48:37
1       on-line   since 06/03/2011 11:48:40
# prtconf | grep -i mem
Memory size: 512 Megabytes

Looks good: 2 CPUs and 512 Mbytes of memory, just what the doctor ordered.
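
You can also check the resource controls from the global zone with the prctl command. The zone ID and exact formatting will differ on your system; here is a trimmed sketch of what to expect:

bash-3.00# prctl -n zone.max-swap -i zone myzone
zone: 9: myzone
NAME    PRIVILEGE       VALUE    FLAG   ACTION      RECIPIENT
zone.max-swap
        privileged      1.00GB      -   deny                -

The physical memory cap itself is enforced by the rcapd daemon in the global zone, and rcapstat -z shows how the zone is doing against its cap.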

Let's increase the number of CPUs to 5 just to see what happens.

bash-3.00# zonecfg -z myzone
zonecfg:myzone> select dedicated-cpu ncpus=2
zonecfg:myzone:dedicated-cpu> info
dedicated-cpu:
        ncpus: 2
zonecfg:myzone:dedicated-cpu> set ncpus=5
zonecfg:myzone:dedicated-cpu> end
zonecfg:myzone> verify
zonecfg:myzone> commit
zonecfg:myzone> exit

bash-3.00# zoneadm -z myzone reboot
zoneadm: zone 'myzone': libpool(3LIB) error: Invalid
configuration
zoneadm: zone 'myzone': dedicated-cpu setting cannot be 
instantiated
bash-3.00#

It did not work because the number of CPUs available was not enough to fulfill the 5 CPUs required by the zone. So we get an error and the zone does not start up.

I want to use capped-cpu now and cap the zone's CPU usage. First we need to remove the dedicated-cpu resource and then add the capped-cpu resource.

bash-3.00# zonecfg -z myzone
zonecfg:myzone> remove dedicated-cpu
zonecfg:myzone> add capped-cpu
zonecfg:myzone:capped-cpu> set ncpus=0.5
zonecfg:myzone:capped-cpu> end
zonecfg:myzone> info
zonename: myzone
zonepath: /zones/myzone
brand: native
autoboot: true
bootargs:
pool:
limitpriv:
scheduling-class:
ip-type: shared
hostid:
inherit-pkg-dir:
        dir: /lib
inherit-pkg-dir:
        dir: /platform
inherit-pkg-dir:
        dir: /sbin
inherit-pkg-dir:
        dir: /usr
fs:
        dir: /zdisk2
        special: /dev/dsk/c0d1s1
        raw: /dev/rdsk/c0d1s1
        type: ufs
        options: []
net:
        address: 192.168.102.120
        physical: e1000g0
        defrouter: 192.168.102.1
capped-cpu:
        [ncpus: 0.50]
capped-memory:
        physical: 512M
        [swap: 1G]
rctl:
        name: zone.max-swap
        value: (priv=privileged,limit=1073741824,action=deny)
rctl:
        name: zone.cpu-cap
        value: (priv=privileged,limit=50,action=deny)
zonecfg:myzone> verify
zonecfg:myzone> commit
zonecfg:myzone> exit

bash-3.00# zoneadm -z myzone boot

So now we have a cap of 50% of one CPU on the zone. There are lots of ways to set resources for a zone: FSS (the Fair Share Scheduler), pools, resource controls, and zonecfg's own resource settings. Have a look at all the options if you want, but I would recommend that you try to keep it simple.
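
If you do want to try FSS, here is a minimal sketch: make FSS the default scheduler and give each zone a number of shares with the cpu-shares property. The share value below is just an example; shares are relative weights, not percentages:

bash-3.00# dispadmin -d FSS
bash-3.00# zonecfg -z myzone
zonecfg:myzone> set cpu-shares=20
zonecfg:myzone> commit
zonecfg:myzone> exit

After a reboot of the system (so that FSS is the default scheduling class), a zone with 20 shares gets twice the CPU of a zone with 10 shares when the machine is busy; when it's idle, any zone can still use the spare capacity.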

I hope you now have a better understanding of how Solaris zones work. There is a lot of information out there on Solaris zones; my idea with this page is to give you the basics.

There is plenty of documentation on Oracle's website pertaining to Solaris zones, and Oracle offers formal training on Solaris and Solaris zones if you are interested.

I might add some more stuff on this page as time goes by, so check in regularly.




