  The Software-RAID HOWTO
  Jakob Østergaard (jakob@ostenfeld.dk)
  v. 0.90.2 - Alpha, 27th February 1999

  This HOWTO describes how to use Software RAID under Linux. You must
  be using the RAID patches available from
  ftp://ftp.fi.kernel.org/pub/linux/daemons/raid/alpha. The HOWTO can
  be found at http://ostenfeld.dk/~jakob/Software-RAID.HOWTO/.
  ______________________________________________________________________

  Table of Contents


  1. Introduction

     1.1 Disclaimer
     1.2 Requirements

  2. Why RAID ?

     2.1 Technicalities
     2.2 Terms
     2.3 The RAID levels
        2.3.1 Spare disks
     2.4 Swapping on RAID

  3. RAID setup

     3.1 General setup
     3.2 Linear mode
     3.3 RAID-0
     3.4 RAID-1
     3.5 RAID-4
     3.6 RAID-5
     3.7 The Persistent Superblock
     3.8 Chunk sizes
        3.8.1 RAID-0
        3.8.2 RAID-1
        3.8.3 RAID-4
        3.8.4 RAID-5
     3.9 Options for mke2fs
     3.10 Autodetection
     3.11 Booting on RAID
     3.12 Pitfalls

  4. Credits



  ______________________________________________________________________

  1.  Introduction

  This HOWTO is written by Jakob Østergaard based on a large number of
  emails between the author and Ingo Molnar (mingo@chiara.csoma.elte.hu)
  -- one of the RAID developers -- the linux-raid mailing list (linux-
  raid@vger.rutgers.edu), and various other people.

  The reason this HOWTO was written even though a Software-RAID HOWTO
  already exists is that the old HOWTO describes the old-style Software
  RAID found in the stock kernels. This HOWTO describes the use of the
  ``new-style'' RAID that has been developed more recently. The
  new-style RAID has a lot of features not present in old-style RAID.

  Some of the information in this HOWTO may seem trivial if you already
  know RAID. Just skip those parts.

  1.1.  Disclaimer

  The mandatory disclaimer:

  Although RAID seems stable for me, and stable for many other people,
  it may not work for you.  If you lose all your data, your job, get
  hit by a truck, whatever, it's not my fault, nor the developers'.  Be
  aware that you use the RAID software and this information at your own
  risk!  There is no guarantee whatsoever that any of the software, or
  this information, is in any way correct, nor suited for any use
  whatsoever. Back up all your data before experimenting with this.
  Better safe than sorry.



  1.2.  Requirements

  This HOWTO assumes you are using a late 2.2.x or 2.0.x kernel with a
  matching raid0145 patch, and the 0.90 version of the raidtools. Both
  can be found at ftp://ftp.fi.kernel.org/pub/linux/daemons/raid/alpha.
  The RAID patch, the raidtools package, and the kernel should all match
  as closely as possible. At times it can be necessary to use older
  kernels if RAID patches are not available for the latest kernel.



  2.  Why RAID ?

  There can be many good reasons for using RAID. A few are: the ability
  to combine several physical disks into one larger ``virtual'' device,
  performance improvements, and redundancy.



  2.1.  Technicalities

  Linux RAID can work on most block devices. It doesn't matter whether
  you use IDE or SCSI devices, or a mixture. Some people have also used
  the Network Block Device (NBD) with more or less success.

  Be sure that the bus(ses) to the drives are fast enough. You shouldn't
  have 14 UW-SCSI drives on one UW bus, if each drive can give 10 MB/s
  and the bus can only sustain 40 MB/s.  Also, you should only have one
  device per IDE bus. Running disks as master/slave is horrible for
  performance. IDE is really bad at accessing more than one drive per
  bus.  Of course, all newer motherboards have two IDE busses, so you
  can set up two disks in RAID without buying more controllers.

  The RAID layer has absolutely nothing to do with the filesystem layer.
  You can put any filesystem on a RAID device, just like any other block
  device.



  2.2.  Terms

  The word ``RAID'' means ``Linux Software RAID''. This HOWTO does not
  treat any aspects of Hardware RAID.

  When describing setups, it is useful to refer to the number of disks
  and their sizes. At all times the letter N is used to denote the
  number of active disks in the array (not counting spare-disks). The
  letter S is the size of the smallest drive in the array, unless
  otherwise mentioned. The letter P is used to denote the performance
  of one disk in the array, in MB/s. When used, we assume that the
  disks are equally fast, which may not always be true.
  Note that the words ``device'' and ``disk'' are supposed to mean about
  the same thing.  Usually the devices that are used to build a RAID
  device are partitions on disks, not necessarily entire disks.  But
  combining several partitions on one disk usually does not make sense,
  so the words devices and disks just mean ``partitions on different
  disks''.
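
  As a worked example of this notation (the numbers are made up for
  illustration): with four active 6 GB disks that each deliver 10 MB/s,
  we have N=4, S=6 GB and P=10 MB/s. The RAID-0 and RAID-5 sizes
  discussed below then work out to N*S = 24 GB and (N-1)*S = 18 GB,
  respectively.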



  2.3.  The RAID levels

  Here's a short description of what is supported in the Linux RAID
  patches. Some of this information is absolutely basic RAID info, but
  I've added a few notices about what's special in the Linux
  implementation of the levels.  Just skip this section if you know
  RAID. Then come back when you are having problems   :)

  The current RAID patches for Linux support the following levels:

  o  Linear mode

  o  Two or more disks are combined into one physical device. The disks
     are ``appended'' to each other, so writing to the RAID device will
     fill up disk 0 first, then disk 1 and so on. The disks do not have
     to be of the same size. In fact, size doesn't matter at all
     here   :)

  o  There is no redundancy in this level. If one disk crashes you will
     most probably lose all your data.  You may, however, be lucky and
     recover some data, since the filesystem will just be missing one
     large consecutive chunk of data.

  o  The read and write performance will not increase for single
     reads/writes. But if several users use the device, you may be
     lucky: one user may effectively be using the first disk, while
     another user is accessing files which happen to reside on the
     second disk.  If that happens, you will see a performance gain.

  o  RAID-0

  o  Also called ``stripe'' mode. Like linear mode, except that reads
     and writes are done in parallel to the devices. The devices should
     have approximately the same size. Since all access is done in
     parallel, the devices fill up equally. If one device is much
     larger than the others, that extra space is still utilized in the
     RAID device, but you will be accessing this larger disk alone
     during writes in the high end of your RAID device. This of course
     hurts performance.

  o  Like linear, there's no redundancy in this level either. Unlike
     linear mode, you will not be able to rescue any data if a drive
     fails. If you remove a drive from a RAID-0 set, the RAID device
     will not just miss one consecutive block of data, it will be filled
     with small holes all over the device. e2fsck will probably not be
     able to recover much from such a device.

  o  The read and write performance will increase, because reads and
     writes are done in parallel on the devices. This is usually the
     main reason for running RAID-0. If the busses to the disks are fast
     enough, you can get very close to N*P MB/sec.

  o  RAID-1

  o  This is the first mode which actually has redundancy. RAID-1 can be
     used on two or more disks with zero or more spare-disks. This mode
     maintains an exact mirror of the information on one disk on the
     other disk(s). Of course, the disks must be of equal size. If one
     disk is larger than another, your RAID device will be the size of
     the smallest disk.

  o  If up to N-1 disks are removed (or crash), all data are still
     intact. If there are spare disks available, and if the system (eg.
     SCSI drivers or IDE chipset etc.) survived the crash,
     reconstruction of the mirror will immediately begin on one of the
     spare disks, after detection of the drive fault.

  o  Read performance will usually scale close to N*P, while write
     performance is the same as on one device, or perhaps even less.
     Reads can be done in parallel, but when writing, the CPU must
     transfer N times as much data to the disks as it usually would
     (remember, N identical copies of all data must be sent to the
     disks).

  o  RAID-4

  o  This RAID level is not used very often. It can be used on three or
     more disks. Instead of completely mirroring the information, it
     keeps parity information on one drive, and writes data to the
     other disks in a RAID-0 like way.  Because one disk is reserved
     for parity information, the size of the array will be (N-1)*S,
     where S is the size of the smallest drive in the array. As in
     RAID-1, the disks should either be of equal size, or you will just
     have to accept that the S in the (N-1)*S formula above will be the
     size of the smallest drive in the array.

  o  If one drive fails, the parity information can be used to
     reconstruct all data.  If two drives fail, all data is lost.

  o  The reason this level is not more frequently used is that the
     parity information is kept on one drive. This information must be
     updated every time one of the other disks is written to. Thus, the
     parity disk will become a bottleneck, if it is not a lot faster
     than the other disks.  However, if you just happen to have a lot
     of slow disks and a very fast one, this RAID level can be very
     useful.

  o  RAID-5

  o  This is perhaps the most useful RAID mode when one wishes to
     combine a larger number of physical disks, and still maintain some
     redundancy. RAID-5 can be used on three or more disks, with zero
     or more spare-disks. The resulting RAID-5 device size will be
     (N-1)*S, just like RAID-4. The big difference between RAID-5 and
     -4 is that the parity information is distributed evenly among the
     participating drives, avoiding the bottleneck problem in RAID-4.

  o  If one of the disks fails, all data are still intact, thanks to
     the parity information. If spare disks are available,
     reconstruction will begin immediately after the device failure.
     If two disks fail simultaneously, all data are lost. RAID-5 can
     survive one disk failure, but not two or more.

  o  Both read and write performance usually increase, but it's hard to
     predict how much.


  2.3.1.  Spare disks

  Spare disks are disks that do not take part in the RAID set until one
  of the active disks fails.  When a device failure is detected, that
  device is marked as ``bad'' and reconstruction is immediately started
  on the first spare-disk available.

  Thus, spare disks add nice extra safety, especially to RAID-5 systems
  that are perhaps hard to get to (physically). One can allow the
  system to run for some time with a faulty device, since all
  redundancy is preserved by means of the spare disk.

  You cannot be sure that your system will survive a disk crash. The
  RAID layer should handle device failures just fine, but SCSI drivers
  could be broken on error handling, or the IDE chipset could lock up,
  or a lot of other things could happen.




  2.4.  Swapping on RAID

  There's no reason to use RAID for swap for performance reasons. The
  kernel itself can stripe swapping over several devices, if you just
  give them the same priority in the fstab file.

  A nice fstab looks like:

  /dev/sda2       swap           swap    defaults,pri=1   0 0
  /dev/sdb2       swap           swap    defaults,pri=1   0 0
  /dev/sdc2       swap           swap    defaults,pri=1   0 0
  /dev/sdd2       swap           swap    defaults,pri=1   0 0
  /dev/sde2       swap           swap    defaults,pri=1   0 0
  /dev/sdf2       swap           swap    defaults,pri=1   0 0
  /dev/sdg2       swap           swap    defaults,pri=1   0 0


  This setup lets the machine swap in parallel on seven SCSI devices. No
  need for RAID, since this has been a kernel feature for a long time.
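
  To bring these swap areas into use without rebooting, you could
  initialize each partition with mkswap and then enable them all in one
  go (a sketch, using the device names from the fstab above):

    mkswap /dev/sda2    # repeat for sdb2 through sdg2
    swapon -a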

  Another reason to use RAID for swap is high availability.  If you set
  up a system to boot on eg. a RAID-1 device, the system should be able
  to survive a disk crash. But if the system has been swapping on the
  now faulty device, you will surely be going down.  Swapping on the
  RAID-1 device would solve this problem.

  However, swap on RAID-{1,4,5} is NOT supported. You can set it up,
  but it will crash. The reason is that the RAID layer sometimes
  allocates memory before doing a write. This leads to a deadlock,
  since the kernel will have to allocate memory before it can swap, and
  swap before it can allocate memory.

  It's sad but true, at least for now.



  3.  RAID setup


  3.1.  General setup

  This is what you need for any of the RAID levels:

  o  A kernel.  Get 2.0.36 or a recent 2.2.x kernel.

  o  The RAID patches.  There usually is a patch available for the
     recent kernels.

  o  The RAID tools.

  o  Patience, Pizza, and your favourite caffeinated beverage.


  All this software can be found at ftp://ftp.fi.kernel.org/pub/linux
  The RAID tools and patches are in the daemons/raid/alpha subdirectory.
  The kernels are found in the kernel subdirectory.

  Patch the kernel, configure it to include RAID support for the level
  you want to use.  Compile it and install it.

  Then unpack, configure, compile and install the RAID tools.
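
  As a rough sketch, the whole procedure could look like this (the
  patch and tarball file names are examples only; use the versions you
  actually downloaded):

    # Apply the RAID patch to the kernel source
    cd /usr/src/linux
    patch -p1 < /path/to/raid0145-patch

    # Enable RAID support, then build and install the kernel
    make menuconfig
    make dep && make bzImage && make modules && make modules_install

    # Unpack, build and install the raidtools
    cd /usr/src
    tar xzf raidtools-0.90.tar.gz
    cd raidtools-0.90
    ./configure && make && make install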

  Ok, so far so good.  If you reboot now, you should have a file called
  /proc/mdstat.  Remember it; that file is your friend. See what it
  contains by doing a cat /proc/mdstat. It should tell you that you
  have the right RAID personality (ie. RAID mode) registered, and that
  no RAID devices are currently active.

  Create the partitions you want to include in your RAID set.

  Now, let's go mode-specific.



  3.2.  Linear mode

  Ok, so you have two or more partitions which are not necessarily the
  same size (but of course can be), and which you want to append to
  each other.

  Set up the /etc/raidtab file to describe your setup. I set up a
  raidtab for two disks in linear mode, and the file looked like this:


  raiddev /dev/md0
          raid-level      linear
          nr-raid-disks   2
          persistent-superblock 1
          device          /dev/sdb6
          raid-disk       0
          device          /dev/sdc5
          raid-disk       1


  Spare-disks are not supported here.  If a disk dies, the array dies
  with it. There's no information to put on a spare disk.

  Ok, let's create the array. Run the command

    mkraid /dev/md0



  This will initialize your array, write the persistent superblocks, and
  start the array.

  Have a look in /proc/mdstat. You should see that the array is running.

  Now, you can create a filesystem, just like you would on any other
  device, mount it, include it in your fstab and so on.
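
  For example (the mount point and the ext2 filesystem are just
  examples):

    mke2fs /dev/md0
    mkdir /mnt/md0
    mount /dev/md0 /mnt/md0

  and a corresponding line in /etc/fstab could be:

    /dev/md0   /mnt/md0   ext2   defaults   0 2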



  3.3.  RAID-0

  You have two or more devices, of approximately the same size, and you
  want to combine their storage capacity and also combine their
  performance by accessing them in parallel.

  Set up the /etc/raidtab file to describe your configuration. An
  example raidtab looks like:

  raiddev /dev/md0
          raid-level      0
          nr-raid-disks   2
          persistent-superblock 1
          chunk-size     4
          device          /dev/sdb6
          raid-disk       0
          device          /dev/sdc5
          raid-disk       1


  As in linear mode, spare disks are not supported here either. RAID-0
  has no redundancy, so when a disk dies, the array goes with it.

  Again, you just run

    mkraid /dev/md0


  to initialize the array. This should initialize the superblocks and
  start the RAID device.  Have a look in /proc/mdstat to see what's
  going on. You should see that your device is now running.

  /dev/md0 is now ready to be formatted, mounted, used and abused.



  3.4.  RAID-1

  You have two devices of approximately the same size, and you want the
  two to be mirrors of each other. Perhaps you have more devices, which
  you want to keep as stand-by spare-disks that will automatically
  become a part of the mirror if one of the active devices breaks.

  Set up the /etc/raidtab file like this:

  raiddev /dev/md0
          raid-level      1
          nr-raid-disks   2
          nr-spare-disks  0
          chunk-size     4
          persistent-superblock 1
          device          /dev/sdb6
          raid-disk       0
          device          /dev/sdc5
          raid-disk       1


  If you have spare disks, you can add them to the end of the device
  specification like

          device          /dev/sdd5
          spare-disk      0


  Remember to set the nr-spare-disks entry correspondingly.
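
  Putting it all together, a raidtab with one spare disk could look
  like this (device names are examples):

  raiddev /dev/md0
          raid-level      1
          nr-raid-disks   2
          nr-spare-disks  1
          chunk-size     4
          persistent-superblock 1
          device          /dev/sdb6
          raid-disk       0
          device          /dev/sdc5
          raid-disk       1
          device          /dev/sdd5
          spare-disk      0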

  Ok, now we're all set to start initializing the RAID. The mirror must
  be constructed, ie. the contents (however unimportant now, since the
  device is still not formatted) of the two devices must be
  synchronized.


  Issue the

    mkraid /dev/md0


  command to begin the mirror initialization.

  Check out the /proc/mdstat file. It should tell you that the /dev/md0
  device has been started, that the mirror is being reconstructed, and
  give you an ETA for the completion of the reconstruction.

  Reconstruction is done using idle I/O bandwidth. So, your system
  should still be fairly responsive, although your disk LEDs should be
  glowing nicely.

  The reconstruction process is transparent, so you can actually use the
  device even though the mirror is currently under reconstruction.

  Try formatting the device while the reconstruction is running. It
  will work.  Also you can mount it and use it while reconstruction is
  running. Of course, if the wrong disk breaks while the reconstruction
  is running, you're out of luck.



  3.5.  RAID-4

  Note! I haven't tested this setup myself. The setup below is my best
  guess, not something I have actually had up and running.

  You have three or more devices of roughly the same size, one device
  is significantly faster than the other devices, and you want to
  combine them all into one larger device, still maintaining some
  redundancy information.  Perhaps you have a number of devices you
  wish to use as spare-disks.

  Set up the /etc/raidtab file like this:

  raiddev /dev/md0
          raid-level      4
          nr-raid-disks   4
          nr-spare-disks  0
          persistent-superblock 1
          chunk-size      32
          device          /dev/sdb1
          raid-disk       0
          device          /dev/sdc1
          raid-disk       1
          device          /dev/sdd1
          raid-disk       2
          device          /dev/sde1
          raid-disk       3


  If we had any spare disks, they would be inserted in a similar way,
  following the raid-disk specifications;

          device         /dev/sdf1
          spare-disk     0


  as usual.

  Your array can be initialized with the


     mkraid /dev/md0


  command as usual.

  You should see the section on special options for mke2fs before
  formatting the device.
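
  With the 32 KB chunk-size in the raidtab above and a 4 KB ext2
  block-size, the advice in that section works out to stride = 32/4 =
  8:

    mke2fs -b 4096 -R stride=8 /dev/md0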




  3.6.  RAID-5

  You have three or more devices of roughly the same size, you want to
  combine them into a larger device, but you still want to maintain a
  degree of redundancy for data safety. Perhaps you have a number of
  devices to use as spare-disks, that will not take part in the array
  before another device fails.

  If you use N devices where the smallest has size S, the size of the
  entire array will be (N-1)*S. This ``missing'' space is used for
  parity (redundancy) information.  Thus, if any disk fails, all data
  stay intact. But if two disks fail, all data are lost.

  Set up the /etc/raidtab file like this:

  raiddev /dev/md0
          raid-level      5
          nr-raid-disks   7
          nr-spare-disks  0
          persistent-superblock 1
          parity-algorithm        left-symmetric
          chunk-size      32
          device          /dev/sda3
          raid-disk       0
          device          /dev/sdb1
          raid-disk       1
          device          /dev/sdc1
          raid-disk       2
          device          /dev/sdd1
          raid-disk       3
          device          /dev/sde1
          raid-disk       4
          device          /dev/sdf1
          raid-disk       5
          device          /dev/sdg1
          raid-disk       6


  If we had any spare disks, they would be inserted in a similar way,
  following the raid-disk specifications;

          device         /dev/sdh1
          spare-disk     0


  And so on.

  A chunk size of 32 KB is a good default for many general purpose
  filesystems of this size. The array the above raidtab is used on
  consists of seven 6 GB disks, giving a 36 GB device (remember,
  (N-1)*S = (7-1)*6 = 36 GB). It holds an ext2 filesystem with a 4 KB
  block size.  You could go higher with both array chunk-size and
  filesystem block-size if your filesystem is either much larger, or
  just holds very large files.


  Ok, enough talking. You set up the raidtab, so let's see if it works.
  Run the

    mkraid /dev/md0


  command, and see what happens.  Hopefully your disks start working
  like mad, as they begin the reconstruction of your array. Have a look
  in /proc/mdstat to see what's going on.

  If the device was successfully created, the reconstruction process has
  now begun.  Your array is not consistent until this reconstruction
  phase has completed. However, the array is fully functional (except
  for the handling of device failures of course), and you can format it
  and use it even while it is reconstructing.

  See the section on special options for mke2fs before formatting the
  array.

  Ok, now that you have your RAID device running, you can always stop
  it or re-start it using the

    raidstop /dev/md0


  or

    raidstart /dev/md0


  commands.

  Instead of putting these into init-files and rebooting a zillion times
  to make that work, read on, and get autodetection running.



  3.7.  The Persistent Superblock

  Back in ``The Good Old Days'' (TM), the raidtools would read your
  /etc/raidtab file, and then initialize the array.  However, this would
  require that the filesystem on which /etc/raidtab resided was mounted.
  This is unfortunate if you want to boot on a RAID.

  Also, the old approach led to complications when mounting filesystems
  on RAID devices. They could not be put in the /etc/fstab file as
  usual, but would have to be mounted from the init-scripts.

  The persistent superblocks solve these problems. When an array is
  initialized with the persistent-superblock option in the /etc/raidtab
  file, a special superblock is written at the beginning of all disks
  participating in the array. This allows the kernel to read the
  configuration of RAID devices directly from the disks involved,
  instead of reading from some configuration file that may not be
  available at all times.

  You should however still maintain a consistent /etc/raidtab file,
  since you may need this file for later reconstruction of the array.

  The persistent superblock is mandatory if you want auto-detection of
  your RAID devices upon system boot. This is described in the
  Autodetection section.




  3.8.  Chunk sizes

  The chunk-size deserves an explanation.  You can never write
  completely in parallel to a set of disks. If you had two disks and
  wanted to write a byte, you would have to write four bits on each
  disk; in fact, every second bit would go to disk 0 and the others to
  disk 1. Hardware just doesn't support that.  Instead, we choose some
  chunk-size, which we define as the smallest ``atomic'' mass of data
  that can be written to the devices.  A write of 16 KB with a chunk
  size of 4 KB will cause the first and the third 4 KB chunks to be
  written to the first disk, and the second and fourth chunks to be
  written to the second disk, in the RAID-0 case with two disks.  Thus,
  for large writes, you may see lower overhead by having fairly large
  chunks, whereas arrays that primarily hold small files may benefit
  more from a smaller chunk size.

  Chunk sizes can be specified for all RAID levels except the Linear
  mode.

  For optimal performance, you should experiment with the value, as well
  as with the block-size of the filesystem you put on the array.

  The argument to the chunk-size option in /etc/raidtab specifies the
  chunk-size in kilobytes. So ``4'' means ``4 KB''.


  3.8.1.  RAID-0

  Data is written ``almost'' in parallel to the disks in the array.
  Actually, chunk-size bytes are written to each disk, serially.

  If you specify a 4 KB chunk size, and write 16 KB to an array of three
  disks, the RAID system will write 4 KB to disks 0, 1 and 2, in
  parallel, then the remaining 4 KB to disk 0.

  A 32 KB chunk-size is a reasonable starting point for most arrays.
  But the optimal value depends very much on the number of drives
  involved, the content of the filesystem you put on it, and many other
  factors.  Experiment with it, to get the best performance.


  3.8.2.  RAID-1

  For writes, the chunk-size doesn't affect the array, since all data
  must be written to all disks no matter what.  For reads however, the
  chunk-size specifies how much data to read serially from the
  participating disks.  Since all active disks in the array contain the
  same information, reads can be done in a parallel RAID-0 like manner.


  3.8.3.  RAID-4

  When a write is done on a RAID-4 array, the parity information must be
  updated on the parity disk as well. The chunk-size is the size of the
  parity blocks. If one byte is written to a RAID-4 array, then chunk-
  size bytes will be read from the N-1 disks, the parity information
  will be calculated, and chunk-size bytes written to the parity disk.

  The chunk-size affects read performance in the same way as in RAID-0,
  since reads from RAID-4 are done in the same way.


  3.8.4.  RAID-5

  On RAID-5 the chunk-size has exactly the same meaning as in RAID-4.

  A reasonable chunk-size for RAID-5 is 128 KB, but as always, you may
  want to experiment with this.

  Also see the section on special options for mke2fs.  This affects
  RAID-5 performance.



  3.9.  Options for mke2fs

  There is a special option available when formatting RAID-4 or -5
  devices with mke2fs. The -R stride=nn option will allow mke2fs to
  place the different ext2-specific data-structures in an intelligent
  way on the RAID device.

  If the chunk-size is 32 KB, it means that 32 KB of consecutive data
  will reside on one disk. If we want to build an ext2 filesystem with
  a 4 KB block-size, we realize that there will be eight filesystem
  blocks in one array chunk. We can pass this information to the mke2fs
  utility when creating the filesystem:

    mke2fs -b 4096 -R stride=8 /dev/md0
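
  For example, with the 128 KB chunk-size suggested earlier for RAID-5,
  a filesystem with 4 KB blocks would use stride = 128/4 = 32:

    mke2fs -b 4096 -R stride=32 /dev/md0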



  RAID-{4,5} performance is severely influenced by this option. I am
  unsure how the stride option will affect other RAID levels. If anyone
  has information on this, please send it in my direction.



  3.10.  Autodetection

  Autodetection allows the RAID devices to be automatically recognized
  by the kernel at boot-time, right after the ordinary partition
  detection is done.

  This requires several things:

  1. You need autodetection support in the kernel. Check that it is
     enabled.

  2. You must have created the RAID devices using the
     persistent-superblock option.

  3. The partition types of the devices used in the RAID must be set to
     0xFD  (use fdisk and set the type to ``fd''), as shown below.

  NOTE: Be sure that your RAID is NOT RUNNING before changing the
  partition types.  Use raidstop /dev/md0 to stop the device.
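
  Changing the partition type with fdisk might look like this (the
  device and partition number are examples; fdisk's prompts vary
  slightly between versions):

    # fdisk /dev/sdb
    Command (m for help): t
    Partition number (1-4): 1
    Hex code (type L to list codes): fd
    Command (m for help): w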

  If you have done 1, 2 and 3 from above, autodetection should be set
  up. Try rebooting.  When the system comes up, cat'ing /proc/mdstat
  should tell you that your RAID is running.

  During boot, you could see messages similar to these:

   Oct 22 00:51:59 malthe kernel: SCSI device sdg: hdwr sector= 512
    bytes. Sectors= 12657717 [6180 MB] [6.2 GB]
   Oct 22 00:51:59 malthe kernel: Partition check:
   Oct 22 00:51:59 malthe kernel:  sda: sda1 sda2 sda3 sda4
   Oct 22 00:51:59 malthe kernel:  sdb: sdb1 sdb2
   Oct 22 00:51:59 malthe kernel:  sdc: sdc1 sdc2
   Oct 22 00:51:59 malthe kernel:  sdd: sdd1 sdd2
   Oct 22 00:51:59 malthe kernel:  sde: sde1 sde2
   Oct 22 00:51:59 malthe kernel:  sdf: sdf1 sdf2
   Oct 22 00:51:59 malthe kernel:  sdg: sdg1 sdg2
   Oct 22 00:51:59 malthe kernel: autodetecting RAID arrays
   Oct 22 00:51:59 malthe kernel: (read) sdb1's sb offset: 6199872
   Oct 22 00:51:59 malthe kernel: bind<sdb1,1>
   Oct 22 00:51:59 malthe kernel: (read) sdc1's sb offset: 6199872
   Oct 22 00:51:59 malthe kernel: bind<sdc1,2>
   Oct 22 00:51:59 malthe kernel: (read) sdd1's sb offset: 6199872
   Oct 22 00:51:59 malthe kernel: bind<sdd1,3>
   Oct 22 00:51:59 malthe kernel: (read) sde1's sb offset: 6199872
   Oct 22 00:51:59 malthe kernel: bind<sde1,4>
   Oct 22 00:51:59 malthe kernel: (read) sdf1's sb offset: 6205376
   Oct 22 00:51:59 malthe kernel: bind<sdf1,5>
   Oct 22 00:51:59 malthe kernel: (read) sdg1's sb offset: 6205376
   Oct 22 00:51:59 malthe kernel: bind<sdg1,6>
   Oct 22 00:51:59 malthe kernel: autorunning md0
   Oct 22 00:51:59 malthe kernel: running: <sdg1><sdf1><sde1><sdd1><sdc1><sdb1>
   Oct 22 00:51:59 malthe kernel: now!
   Oct 22 00:51:59 malthe kernel: md: md0: raid array is not clean --
    starting background reconstruction


  This is output from the autodetection of a RAID-5 array that was not
  cleanly shut down (eg. the machine crashed).  Reconstruction is
  automatically initiated.  Mounting this device is perfectly safe,
  since reconstruction is transparent and all data are consistent (it's
  only the parity information that is inconsistent - but that isn't
  needed until a device fails).

  Autostarted devices are also automatically stopped at shutdown.  Don't
  worry about init scripts.  Just use the /dev/md devices as any other
  /dev/sd or /dev/hd devices.

  Yes, it really is that easy.

  You may want to look in your init-scripts for any raidstart/raidstop
  commands. These are often found in the standard RedHat init scripts.
  They are used for old-style RAID, and have no use in new-style RAID
  with autodetection. Just remove the lines, and everything will be
  just fine.
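
  Something like this could help you locate such lines (the path is the
  usual RedHat location for init scripts):

    grep -n 'raidstart\|raidstop' /etc/rc.d/init.d/*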



  3.11.  Booting on RAID

  This will be added in the near future.

  The really really short nano-howto goes:

  o  Put two identical disks in a system.

  o  Put in a third disk, on which you install a complete Linux system.

  o  Now set up the two identical disks each with a /boot, swap and /
     partition.

  o  Configure RAID-1 on the two / partitions.

  o  Copy the entire installation from the third disk to the RAID.
     (just using tar, no raw copying!)

  o  Set up the /boot on the first disk.  Run lilo.  You probably want
     to set the root fs device to be 900, since LILO doesn't really
     handle the /dev/md devices.  /dev/md0 is major 9, minor 0, so
     root=900 (the hexadecimal device number 0x900) should work.  See
     the lilo.conf sketch after this list.

  o  Set up /boot on the second disk just like you did on the first.

  o  In the BIOS, in the case of IDE disks, set the disk types to
     autodetect.  In the fstab, make sure you are not mounting any of
     the /boot filesystems. You don't need them, and in case of device
     failure, you will just get stuck in the boot sequence when trying
     to mount a non-existing device.

  o  Try booting on just one of the disks. Try booting on the other disk
     only. If this works, you're up and running.

  o  Document what you did, mail it to me, and I'll put it in here.
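
  A minimal lilo.conf sketch along the lines of the list above might
  look like this (the boot device and kernel image path are examples;
  root=900 is passed on the kernel command line via append, since LILO
  itself doesn't handle /dev/md devices):

    boot=/dev/sda
    image=/boot/vmlinuz
            label=linux
            read-only
            append="root=900"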


  3.12.  Pitfalls

  Never NEVER never re-partition disks that are part of a running RAID.
  If you must alter the partition table on a disk which is a part of a
  RAID, stop the array first, then repartition.
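
  In other words, something like this (device names are examples):

    raidstop /dev/md0
    fdisk /dev/sdb          # alter the partition table
    raidstart /dev/md0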

  It is easy to put too many disks on a bus. A normal Fast-Wide SCSI
  bus can sustain 20 MB/s, which is less than many disks can do alone
  today.  Putting six such disks on the bus will of course not give you
  the expected performance boost.

  More SCSI controllers will only give you extra performance if the
  existing SCSI busses are nearly maxed out by the disks on them.  You
  will not see a performance improvement from using two 2940s with two
  old SCSI disks, instead of just running the two disks on one
  controller.

  If you forget the persistent-superblock option, your array may not
  start up willingly after it has been stopped.  Just re-create the
  array with the option set correctly in the raidtab.

  If a RAID-5 fails to reconstruct after a disk was removed and re-
  inserted, this may be because of the ordering of the devices in the
  raidtab. Try moving the first ``device ...'' and ``raid-disk ...''
  pair to the bottom of the array description in the raidtab file.



  4.  Credits

  The following people contributed to the creation of this
  documentation:

  o  Ingo Molnar

  o  Jim Warren

  o  Louis Mandelstam

  o  Allan Noah

  o  Yasunori Taniike

  o  The Linux-RAID mailing list

  o  The ones I forgot,  sorry   :)

  Please submit corrections, suggestions etc. to the author. It's the
  only way this HOWTO can improve.