RE: AW: [cobalt-users] raid creation after adding 2nd hdd
- Subject: RE: AW: [cobalt-users] raid creation after adding 2nd hdd
- From: "Bob Noordam" <mac@xxxxxxxx>
- Date: Fri Apr 2 03:39:01 2004
- List-id: Mailing list for users to share thoughts on Sun Cobalt products. <cobalt-users.list.cobalt.com>
> -----Original message-----
> From: cobalt-users-admin@xxxxxxxxxxxxxxx
> [mailto:cobalt-users-admin@xxxxxxxxxxxxxxx] On behalf of Florian Arzberger
> Sent: Thursday, March 25, 2004 18:46
> To: cobalt-users@xxxxxxxxxxxxxxx
> Subject: AW: AW: [cobalt-users] raid creation after adding 2nd hdd
>
> hey, thanks for being so responsive ;). another one that just came to
> my mind... i would probably have to change all the mount points.
> can you also post the output of "mount" on that machine?
>
> -----Original message-----
> From: cobalt-users-admin@xxxxxxxxxxxxxxx
> [mailto:cobalt-users-admin@xxxxxxxxxxxxxxx] On behalf of Gerald Waugh
> Sent: Thursday, March 25, 2004 18:37
> To: cobalt-users@xxxxxxxxxxxxxxx
> Subject: Re: AW: [cobalt-users] raid creation after adding 2nd hdd
>
> On Thu, 25 Mar 2004, Florian Arzberger wrote:
>
> > can anyone post the output of "fdisk -l /dev/hda" on a raid-enabled raq3/4?
>
>
>    Device Boot    Start      End    Blocks   Id  System
> /dev/hdc1             1     1524    768095+  fd  Linux raid autodetect
> /dev/hdc2          1525     1846    162288    5  Extended
> /dev/hdc3          1847     2253    205128   fd  Linux raid autodetect
> /dev/hdc4          2254   155061  77015232   fd  Linux raid autodetect
> /dev/hdc5          1525     1585     30743+  83  Linux
> /dev/hdc6          1586     1846    131543+  fd  Linux raid autodetect
>
> Gerald
> --
It all turns out to be a bit harder. If you add the second blank hard drive
and just drop in a new raidtab (or edit the existing one), all the partitions
are created and typed correctly. On boot, a RAID failure is (correctly)
reported and the blank disk is set up. However, the system appears unable to
mount the new drive, and dmesg reports messages like the following:
<major snip>
ide0 at 0xff58-0xff5f,0xff56 on irq 14
ide1 at 0xff48-0xff4f,0xff46 on irq 15
hda: 78165360 sectors (40021 MB) w/2048KiB Cache, CHS=77545/16/63, UDMA(33)
hdc: 78165360 sectors (40021 MB) w/2048KiB Cache, CHS=77545/16/63, UDMA(33)
<major snip>
md: trying to remove hdc1 from md1 ...
md1: personality does not support diskops!
md: trying to remove hdc2 from md2 ...
md2: personality does not support diskops!
md: trying to remove hdc3 from md3 ...
md3: personality does not support diskops!
md: trying to remove hdc4 from md4 ...
md4: personality does not support diskops!
hdc: hdc1 hdc2 hdc3 hdc4
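For the record, the raid1 entries dropped into /etc/raidtab look roughly like
the sketch below (only md1 shown; device names follow the fdisk output above,
and the chunk-size and persistent-superblock values are assumptions, so check
them against the raidtab that shipped with the original disk):

```
# /etc/raidtab sketch for the first mirror pair -- values are assumptions
raiddev /dev/md1
    raid-level              1
    nr-raid-disks           2
    persistent-superblock   1
    chunk-size              64
    device                  /dev/hda1
    raid-disk               0
    device                  /dev/hdc1
    raid-disk               1
```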
Looking at the Software-RAID FAQ and other resources
(http://www.tldp.org/HOWTO/Software-RAID-HOWTO.html), this seems to
indicate a problem with kernel RAID support. However, the support is all
there (see below):
[root /etc]# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid5]
read_ahead 1024 sectors
md1 : active raid0 hda1[0]
4095936 blocks 64k chunks
md2 : active raid0 hda2[0]
1536128 blocks 64k chunks
md3 : active raid0 hda3[0]
524544 blocks 64k chunks
md4 : active raid0 hda4[0]
32925696 blocks 64k chunks
unused devices: <none>
[root /etc]#
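The quick way to confirm the kernel really has the raid1 personality compiled
in is to grep the Personalities line of /proc/mdstat. The snippet below is a
minimal sketch that uses the line quoted above as sample input; on the RaQ
itself you would read /proc/mdstat directly instead:

```shell
# Check whether the raid1 personality is registered with the md driver.
# Sample input taken from the /proc/mdstat output above; on a live box use:
#   personalities=$(head -1 /proc/mdstat)
personalities='Personalities : [raid0] [raid1] [raid5]'

if printf '%s\n' "$personalities" | grep -q '\[raid1\]'; then
    raid1_ok=yes
else
    raid1_ok=no
fi
echo "raid1 personality available: $raid1_ok"
```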
So /proc/mdstat shows the kernel supports raid1 (personality 2), but the
arrays do not switch to the raid1 personality on boot; they stay raid0.
Stuck for the moment :)
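In case it helps anyone else hitting this: with the raidtools that these
2.2-era Cobalt kernels use, an array's personality is fixed in its md
superblock, so the usual way out is to recreate the array with mkraid once
raidtab describes the raid1 layout. That rewrites the md superblocks, so it
is strictly a with-backups operation. The sequence below is an untested
sketch for /dev/md1 only:

```
# DANGEROUS: rewrites the md superblock on /dev/md1 -- back up first.
raidstop /dev/md1
mkraid --really-force /dev/md1
cat /proc/mdstat        # watch the raid1 resync to the second disk
```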