
1.5. Upgrade or Reconfigure Your Existing Frontend

This procedure describes how to use a Restore Roll to upgrade or reconfigure your existing Rocks cluster.


If your Rocks frontend is running version 4.1 and you wish to upgrade it to version 4.2, you first need to install the rocks-devel-env package:

For i386, execute:

# ln -s /opt/rocks/usr/bin/python /opt/rocks/bin/python
# rpm -ivh --nodeps http://www.rocksclusters.org/ftp-site/pub/rocks/rocks-4.1/upgrade/rocks-devel-env-4.2-1.i386.rpm

For x86_64, execute:

# ln -s /opt/rocks/usr/bin/python /opt/rocks/bin/python
# rpm -ivh --nodeps http://www.rocksclusters.org/ftp-site/pub/rocks/rocks-4.1/upgrade/rocks-devel-env-4.2-1.x86_64.rpm
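The ln commands above fail if /opt/rocks/bin/python already exists. A defensive sketch of the symlink step, using a throwaway directory as a stand-in for /opt/rocks so it can be tried safely on any machine:

```shell
# Sketch: create the python symlink only if it is not already in place.
# $prefix is a temporary stand-in for /opt/rocks; the file layout below
# mimics the paths used in the commands above.
prefix=$(mktemp -d)
mkdir -p "$prefix/usr/bin" "$prefix/bin"
touch "$prefix/usr/bin/python"            # stand-in for the real interpreter

if [ ! -e "$prefix/bin/python" ]; then
    ln -s "$prefix/usr/bin/python" "$prefix/bin/python"
fi

ls -l "$prefix/bin/python"                # shows the symlink and its target
rm -rf "$prefix"
```

On a real frontend, substitute /opt/rocks for the temporary prefix.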

Now we'll create a Restore Roll for your frontend. This roll will contain site-specific info that will be used to quickly reconfigure your frontend (see the section below for details).

# cd /export/site-roll/rocks/src/roll/restore
# make roll

The above command will output a roll ISO image with a name of the form: hostname-restore-date-0.arch.disk1.iso. For example, on an i386-based frontend with the FQDN rocks-45.sdsc.edu, the roll will be named like:

rocks-45.sdsc.edu-restore-<date>-0.i386.disk1.iso

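The name is assembled from the frontend's fully-qualified hostname, the build date, and the architecture. A sketch of how the pieces fit together (the exact date format used by the roll makefiles is an assumption here, and `uname -m` may report i686 rather than i386 on 32-bit hosts):

```shell
# Sketch: assemble a Restore Roll ISO name from hostname, date, and arch.
# The real name is generated by the roll build; the date format below is
# an illustration only.
fqdn=$(hostname -f 2>/dev/null || hostname)
arch=$(uname -m)                  # e.g. x86_64 (or i686 on 32-bit hosts)
stamp=$(date +%Y.%m.%d)

iso="${fqdn}-restore-${stamp}-0.${arch}.disk1.iso"
echo "$iso"
```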
Burn your restore roll ISO image to a CD.

Reinstall the frontend by putting the Rocks Boot CD in the CD tray (generally, this is the Kernel/Boot Roll) and rebooting the frontend.

At the boot: prompt, type:

frontend

At this point, the installation follows the same steps as a normal frontend installation (See the section: Install Frontend) -- with two exceptions:

  1. On the first user-input screen (the screen that asks for 'local' and 'network' rolls), be sure to supply the Restore Roll that you just created.

  2. You will be forced to manually partition your frontend's root disk.


    You must reformat your / partition, your /var partition and your /boot partition (if it exists).

    Also, be sure to assign the mountpoint of /export to the partition that contains the users' home areas. Do NOT erase or format this partition, or you will lose the user home directories. Generally, this is the largest partition on the first disk.

After your frontend completes its installation, the last step is to force a reinstallation of all of your compute nodes. The following commands force a PXE (network install) reboot of all your compute nodes.

# ssh-agent $SHELL
# ssh-add
# tentakel -g compute '/boot/kickstart/cluster-kickstart-pxe'
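tentakel simply runs the given command on every member of the compute group. If tentakel is not installed, a plain ssh loop does the same job. A dry-run sketch (the node names below are hypothetical; on a real frontend you would substitute your actual compute-node list, and drop the leading echo to really trigger the reinstall):

```shell
# Dry-run sketch: print the reinstall command for each compute node.
# Node names are hypothetical placeholders; remove the "echo" to
# actually start a PXE reinstall on each node.
nodes="compute-0-0 compute-0-1"

for node in $nodes; do
    echo ssh "$node" /boot/kickstart/cluster-kickstart-pxe
done
```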

1.5.1. Restore Roll Internals

By default, the Restore Roll contains two sets of files, system files and user files, plus any user scripts you add. The system files are listed in the 'FILES' directive in the file /export/site-roll/rocks/src/roll/restore/src/system-files/version.mk.

FILES           = /etc/passwd /etc/shadow /etc/gshadow /etc/group \
                  /etc/exports /etc/auto.home /etc/motd
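Before building the roll, it can be worth confirming that each file in the FILES list actually exists on the frontend. A small sketch (note that /etc/auto.home and /etc/motd may legitimately be absent on some systems):

```shell
# Sketch: report which of the saved system files are present on this host.
for f in /etc/passwd /etc/shadow /etc/gshadow /etc/group \
         /etc/exports /etc/auto.home /etc/motd; do
    if [ -e "$f" ]; then
        echo "present: $f"
    else
        echo "MISSING: $f"
    fi
done
```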

The user files are listed in the 'FILES' directive in the file: /export/site-roll/rocks/src/roll/restore/version.mk.

FILES           += /etc/X11/xorg.conf

If you have other files you'd like saved and restored, then append them to the 'FILES' directive in the file /export/site-roll/rocks/src/roll/restore/version.mk, then rebuild the restore roll.
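Appending is a one-line edit to the makefile. A sketch that works on a throwaway copy of version.mk, so nothing real is modified (the file content below is a minimal stand-in for the actual version.mk, and /etc/ntp.conf is just an example path):

```shell
# Sketch: append an extra file to the FILES list in a scratch copy of
# version.mk. The content here is a minimal stand-in for the real
# /export/site-roll/rocks/src/roll/restore/version.mk.
mk=$(mktemp)
echo 'FILES           += /etc/X11/xorg.conf' > "$mk"

# Append another file to be saved and restored (example path).
echo 'FILES           += /etc/ntp.conf' >> "$mk"

grep '^FILES' "$mk"
rm -f "$mk"
```

After editing the real version.mk, rebuild the roll with `make roll` as shown above.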

If you'd like to add your own post sections, you can add the name of the script to the 'SCRIPTS' directive of the /export/site-roll/rocks/src/roll/restore/version.mk file.

SCRIPTS += /export/apps/myscript.sh /export/apps/myscript2.py

This will add the shell script /export/apps/myscript.sh and the Python script /export/apps/myscript2.py to the post section of the restore-user-files.xml file.


If you'd like to run the script in "nochroot" mode, add
# nochroot
as the first comment in your script file after the interpreter line, if one is present.

For example:

#!/bin/bash
# nochroot
echo "This is myscript.sh"

will run the above code in "nochroot" mode during installation, whereas:

#!/bin/bash
echo "This is myscript.sh"

will NOT run the script in "nochroot" mode.
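In other words, the mode is selected by a "# nochroot" comment near the top of the script. A sketch of such a check (this detection logic is an illustration, not the installer's actual code):

```shell
# Sketch: decide whether a script requests "nochroot" mode by looking
# for a "# nochroot" comment on the line after the interpreter line.
script=$(mktemp)
cat > "$script" <<'EOF'
#!/bin/bash
# nochroot
echo "This is myscript.sh"
EOF

if sed -n '2p' "$script" | grep -q '^# *nochroot'; then
    echo "nochroot mode"        # prints: nochroot mode
else
    echo "chroot mode"
fi
rm -f "$script"
```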

All the files under /export/home/install/site-profiles are saved and restored. So, any user modifications that are added via the XML node method will be preserved.

The networking info for all node interfaces (e.g., the frontend, compute nodes, and NAS appliances) is saved and restored. This is accomplished via the 'dump' function of insert-ethers and add-extra-nic.