Posts

Showing posts from August, 2014

ZFS pool faulted after a disk ID change

After applying the patch, the disk device IDs changed, so the ZFS pool "snapshot" went FAULTED:

[root@r710 by-id]# zpool list
NAME       SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH   ALTROOT
snapshot      -      -      -      -      -  FAULTED  -
storage   27.2T  10.1T  17.1T    37%  1.00x  ONLINE   -

The /dev names on an existing pool can be changed by simply exporting the pool and re-importing it with the -d option to specify which new names should be used. For example, to use the custom names in /dev/disk/by-vdev:

# zpool export snapshot
# zpool import -d /dev/disk/by-vdev snapshot
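The fix works because the names under /dev/disk/by-id and /dev/disk/by-vdev are udev-managed symlinks keyed to stable identifiers (serial numbers, vdev aliases), each pointing at whichever /dev/sdX node the kernel happened to assign. A minimal sketch in a temporary directory (all paths and the serial name are hypothetical stand-ins, not real device nodes) shows the shape of that indirection:

```shell
# Sketch: a persistent name is just a symlink that follows the kernel name.
# $tmp stands in for /dev; "ata-DISK_SERIAL" is a made-up identifier.
tmp=$(mktemp -d)
mkdir -p "$tmp/disk/by-id"
touch "$tmp/sda"                                   # kernel-assigned node
ln -s ../../sda "$tmp/disk/by-id/ata-DISK_SERIAL"  # persistent alias

# After a patch or reboot the kernel name may change, but udev regenerates
# the serial-based link so it points at the new node.
readlink "$tmp/disk/by-id/ata-DISK_SERIAL"   # prints ../../sda
rm -rf "$tmp"
```

Because the pool remembers devices by the path it was imported with, importing via one of these persistent directories keeps it immune to sdX renumbering.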

CentOS/RedHat UUID ZFS

Usually, /dev/disk/by-uuid does not show every disk's UUID. For example:

[root@r710 ~]# ls -l /dev/disk/by-uuid
total 0
lrwxrwxrwx 1 root root 10 Aug  3 15:45 0d97a78f-8a2c-4040-9ee4-6ed3764cd809 -> ../../sda3
lrwxrwxrwx 1 root root 10 Aug  3 15:45 54242d33-f140-4ffa-96a5-7cfac64bdb09 -> ../../sda1
lrwxrwxrwx 1 root root 10 Aug  3 15:45 bbdcaa28-a4de-47a4-804c-79e83fa31980 -> ../../sda2

blkid will show all of them, including ZFS member devices:

[root@r710 ~]# blkid
/dev/sda2: UUID="bbdcaa28-a4de-47a4-804c-79e83fa31980" TYPE="ext4"
/dev/sda1: UUID="54242d33-f140-4ffa-96a5-7cfac64bdb09" TYPE="ext4"
/dev/sda3: UUID="0d97a78f-8a2c-4040-9ee4-6ed3764cd809" TYPE="swap"
/dev/sda4: LABEL="snapshot" UUID="7237582886734482444" UUID_SUB="5054151785144376043" TYPE="zfs_member"
/dev/sdb: LABEL="storage" UUID="6125443995338521040" UUID_SUB="
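To pull just the ZFS members out of a listing like that, the blkid output can be filtered on the TYPE tag. A small sketch, using sample lines copied from the output above (the UUID_SUB fields are dropped here for brevity):

```shell
# Sample blkid lines taken from the listing above (UUID_SUB omitted).
blkid_output='/dev/sda2: UUID="bbdcaa28-a4de-47a4-804c-79e83fa31980" TYPE="ext4"
/dev/sda4: LABEL="snapshot" UUID="7237582886734482444" TYPE="zfs_member"
/dev/sdb: LABEL="storage" UUID="6125443995338521040" TYPE="zfs_member"'

# Keep only zfs_member entries and print the device path (field before ":").
printf '%s\n' "$blkid_output" | awk -F: '/TYPE="zfs_member"/ {print $1}'
# prints:
# /dev/sda4
# /dev/sdb
```

On a live system, blkid can do the filtering itself with a token match, e.g. blkid -t TYPE=zfs_member, without the awk step.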