I don't know why, but from time to time drives are assigned different "/dev" paths on my Ubuntu 6.10 GNU/Linux server. I think a removable USB drive might have something to do with it...
However, when that happens it is a complete pain in the a$$: if a drive is relocated, the system cannot find it, and if it is mentioned in /etc/fstab the system reasons (justly so, I might add) that it should pull the emergency brake and drop into a rescue prompt (where almost everything is disabled), letting the user (that's me) deal with the problem.
I usually press CTRL-D to exit that shell and get back to the boot process, praying that no vital drive was lost. (I'm not really a guru, just a poor guy trying to make life a little easier.)
For some reason (touch wood!) the drives holding the boot image or system-specific things have never been moved around this way. Usually it's the USB drive itself (when I still had it in the fstab) that has moved (I'll get back to how to make it auto-mount in a later post) or, in this latest case, one of the back-up drives.
However, there's a solution. If you run GNU/Linux you might have seen it in your fstab file: the use of a UUID to mount instead of the regular /dev/something. My desktop computer's fstab looks like this:
# /etc/fstab: static file system information.
#
# <file system> <mount point> <type> <options> <dump> <pass>
proc        /proc   proc      defaults         0 0
# /dev/sda1
UUID=e1f37856-6cfd-43f9-bea0-d4c2e43afe29  /      reiserfs  notail,relatime  0 1
# /dev/sda6
UUID=64549135-a478-4aef-bb2a-da37d245dd9c  /home  reiserfs  relatime         0 2
From this rather confusing array of characters you can determine that there are three devices mounted at start-up (proc, sda1 and sda6 -- I've got even more, but the exact number of devices is not interesting for this discussion).
The proc device always resides at the file system "proc", and it neither has nor needs a UUID. The sda1 and sda6 devices, however, are regular hard drives (formatted with reiserfs) and they can change designation, for instance if I start rearranging my SATA cables, or if I plug a USB drive into a USB slot with a lower ID than those of my sda drives (I'm guessing on that one, but I've seen it on my server, so...). These are therefore interesting to mount not by their dev names but by their UUIDs. The UUID is stored on the drive itself and it won't change unless the drive is reformatted. The drive can be moved, turned off, turned on -- it will still have the same UUID.
So, using UUIDs is a good idea when I want to create my new, drives-moving-around-proofed server configuration. The first step is to determine what UUID each drive has. This is done with the following command:
sudo vol_id -u /dev/something
I had problems finding "vol_id". It was not in the PATH, and could therefore not be run like above. I did a locate (locate vol_id) and found it in "/lib/udev", so I prepended that path to my command. I've also yet to figure out how to get the UUID from a swap partition, but for now I'm happy to have the offending drives on UUID and hope the swap won't move (perhaps with the extra 2GB of memory I also stashed in, it will need the swap even less, but anyway)...
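For reference, here's a sketch of the commands involved ("/dev/sda1" is just an example device here; blkid, which ships with util-linux on most distributions, is an alternative if vol_id is missing, and it reports swap partitions too):

```shell
# Run vol_id by its full path, since it was not in my PATH:
sudo /lib/udev/vol_id -u /dev/sda1

# Alternative: blkid prints the UUID of every partition it knows
# about, including swap partitions.
sudo blkid

# Just the UUID of one device:
sudo blkid -s UUID -o value /dev/sda1

# udev also keeps symlinks from each UUID to the current device name:
ls -l /dev/disk/by-uuid/
```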
You won't be able to determine the UUID of any drive that is part of a software RAID configuration (but then again, software RAID is able to do its own magic locating of drives regardless of their sd-number -- trust me, I've done that as well -- so they won't need a UUID anyway; it wouldn't surprise me if RAID uses the same scheme behind the scenes, though).
Let's look at the changes I made in my fstab file (always make backups before you start messing with this file! If you fail to set it up correctly your system will probably not start at all, so have a live CD handy before trying this!):
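Concretely, the backup and a cheap sanity check can look something like this (the ".bak" name is my own choice; `mount -a` tells mount to process the whole fstab, so typos show up here instead of at boot time -- though it won't exercise the root entry, which is already mounted):

```shell
# Keep a copy of the working fstab before editing anything:
sudo cp /etc/fstab /etc/fstab.bak

# After editing, ask mount to process the whole fstab; errors here
# are a lot cheaper than errors at boot time:
sudo mount -a
```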
/etc/fstab before I changed it (just a part of it)
# /etc/fstab: static file system information.
#
# <file system> <mount point> <type> <options> <dump> <pass>
proc        /proc   proc      defaults             0 0
/dev/sdc1   /       reiserfs  notail,user_xattr    0 1
/dev/sdc5   /home   reiserfs  defaults,user_xattr  0 2
As you can see, the situation is not as clear on this machine as it was on my desktop machine. Here sdc is the main system drive, and that alone is, well, not a worrisome problem, but a slight discomfort... sdc has never moved around, but given that I have a bunch (8 or 9) of SATA cables in a large but far-from-large-enough case, I'm bound to switch them around one day or another...
Anyway, using the above vol_id command to get the UUIDs of the drives, I've updated my fstab to look like this (still only a partial fstab, but you get the idea):
# /etc/fstab: static file system information.
#
# <file system> <mount point> <type> <options> <dump> <pass>
proc        /proc   proc      defaults             0 0
#/dev/sdc1
UUID=716cf691-dabd-4894-8e46-bc02b4c092b4  /      reiserfs  notail,user_xattr    0 1
#/dev/sdc5
UUID=9587a32e-ebb2-45ab-9e68-7a66cf43d6b4  /home  reiserfs  defaults,user_xattr  0 2
Every run of white space (spaces or tabs) in the file counts as a field separator. I've commented out the "/dev/sdc..." part, added a line feed, replaced it with the "UUID=..." part, and left the rest of each line intact.
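To illustrate the "any run of white space is a field separator" point, here's a small sketch (the sample line is one of my own entries; awk's default field splitting behaves the same way mount's parsing does):

```shell
# One of the new fstab entries, with varying spacing between fields:
line='UUID=9587a32e-ebb2-45ab-9e68-7a66cf43d6b4  /home  reiserfs  defaults,user_xattr  0 2'

# awk splits on any run of spaces/tabs by default, just like mount:
fs_spec=$(printf '%s\n' "$line" | awk '{print $1}')
mount_point=$(printf '%s\n' "$line" | awk '{print $2}')
fs_type=$(printf '%s\n' "$line" | awk '{print $3}')

echo "$fs_spec -> $mount_point ($fs_type)"
```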
This makes sense, since I've replaced one identifier ("/dev/sdc...") with another ("UUID=..."). So, after the original "dev" version of the file has been safely backed up, and the entries in the new "/etc/fstab" have been checked and double-checked, it's time to restart and pray this will actually work. :O
Here are a few links you might want to check out before you give it a try:
Update: If, however, you're using LVM, you'll get stable device names, and you should mount by these instead. If you use LVM snapshots you're going to get two or more volumes with the same UUID, and in that case you should absolutely not use UUID mounting.
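For comparison, an LVM entry can simply use the stable device-mapper name (the volume group and logical volume names below are made up for illustration):

```
/dev/mapper/myvg-root  /  reiserfs  notail,user_xattr  0 1
```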