Posts

Showing posts from 2014

Solaris Logical Volume MD

metadb metastat metainit -r metadetach dx dxx metattach dx dxx metasync
metainit -r  metadb -i  metastat  metattach d0 d20  metadetach d0 d20  metasync d0

http://troysunix.blogspot.com/2010/12/mounting-svm-root-disk-from-cdrom.html

Mounting an SVM Root Disk from CDROM

The following illustrates how to mount a root disk under Solaris Volume Manager (SVM) control when booted from a CDROM. Our host details:

        HOST:                   snorkle
        PROMPT:                 cdrom [0]
        OS:                     Solaris 10 u8 x86
        SVM ROOT DEVICE:        d2
        PHYSICAL ROOT SLICE:    c1t1d0s0
        NOTES:                  The following is applicable for Solaris 9 and 10,
                                x86 and SPARC.  Also, while a boot from CDROM is
                                used for the example, booting from jumpstart would
                                work as well.

Though the following details accessing the SVM-managed root disk, after step 3 any SVM volume could instead be mounted and managed. Regardless
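The excerpt is cut off before the actual steps, but the general recipe is short. A minimal sketch of the usual sequence, assuming the slice and metadevice names from the table above (c1t1d0s0 and d2); treat it as an outline rather than the post's exact procedure:

  (boot the CDROM into single-user mode first)
  # mount -o ro /dev/dsk/c1t1d0s0 /a               # mount the physical root slice read-only
  # cp /a/kernel/drv/md.conf /kernel/drv/md.conf   # copy the SVM configuration into the miniroot
  # umount /a
  # update_drv -f md                               # force the md driver to reread md.conf
  # metastat                                       # the metadevices should now be visible
  # mount /dev/md/dsk/d2 /a                        # mount the SVM root volume for maintenance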

Netapp/IBM NAS file system on Volume xxx is out of inodes

Use "df -i" on the client to check the inodes:

  xxxxx:/vol/home01  14199161888  3250047496  78%  31876688  100%  /home

On the Netapp/IBM NAS, double the size:

  xxxxx02> maxfiles home01
  Volume home01: maximum number of files is currently 31876689 (31876689 used).

  xxxxx02> maxfiles home01 63753378
  The new maximum number of files specified is more than twice as big as it needs to be,
  based on current usage patterns.  This invocation of the operation on the specified
  volume will allow disk space consumption for files to grow up to the new limit depending
  on your workload. The maxfiles setting cannot be lowered below the point of any such
  additional disk space consumption and any additional disk space consumed can never be
  reclaimed. Also, such consumption of additional disk space could result in less
  available memory after an upgrade.
  The new maximum number of files will be rounded to 63753378.
  Are you sure you want to change the maximum number of files? yes

  xxxxx02> maxfi
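Not shown in the excerpt: a quick way to confirm the change is to read the limit back on the filer and re-check inode usage from the client (same volume and mount point as above):

  xxxxx02> maxfiles home01          # should now report the new maximum (63753378)
  client$ df -i /home               # the inode usage should drop well below 100%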

Useful VIOS commands

As root, you can run any of padmin's commands:

  root@vios# /usr/ios/cli/ioscli lsmap -all

or

  root@vios# alias vios=/usr/ios/cli/ioscli
  root@vios# vios lsmap -all

To connect to the console by ssh:

  [root@ms01 ~]# ssh hscroot@dubhmc1
  Password:
  Last login: Thu Sep  4 04:57:59 2014 from ms01.dub.usoh.ibm.com
  hscroot@dubhmc1:~> vtmenu

Mirror the VIOS bootable disk:

  $ extendvg -f rootvg hdisk2
  $ mirrorios -f hdisk2
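A sanity check after mirroring is worth adding. This is not from the original post, just a sketch using standard AIX commands as root, with hdisk2 as above:

  root@vios# lsvg -p rootvg              # both hdisks should be active in the volume group
  root@vios# lsvg -l rootvg              # each LV should show two PPs per LP (two copies)
  root@vios# bootlist -m normal -o       # the boot list should contain both disks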

zfs pool faulted due to disk ID change

After applying the patch, the disk device IDs changed, so the zfs pool "snapshot" shows as faulted:

  [root@r710 by-id]# zpool list
  NAME       SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH   ALTROOT
  snapshot      -      -      -      -      -  FAULTED  -
  storage   27.2T  10.1T  17.1T    37%  1.00x  ONLINE   -

Changing the /dev/ names on an existing pool can be done by simply exporting the pool and re-importing it with the -d option to specify which new names should be used. For example, to use the custom names in /dev/disk/by-vdev:

  # zpool export snapshot
  # zpool import -d /dev/disk/by-vdev snapshot
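After the re-import, a quick check (same pool name as above) should show the pool healthy again with the new device names:

  # zpool status snapshot        # expect ONLINE, with vdevs listed by their by-vdev names
  # zpool list snapshot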

CentOS/RedHat UUID ZFS

Usually, /dev/disk/by-uuid does not show every disk's UUID. For example:

  [root@r710 ~]# ls -l /dev/disk/by-uuid
  total 0
  lrwxrwxrwx 1 root root 10 Aug  3 15:45 0d97a78f-8a2c-4040-9ee4-6ed3764cd809 -> ../../sda3
  lrwxrwxrwx 1 root root 10 Aug  3 15:45 54242d33-f140-4ffa-96a5-7cfac64bdb09 -> ../../sda1
  lrwxrwxrwx 1 root root 10 Aug  3 15:45 bbdcaa28-a4de-47a4-804c-79e83fa31980 -> ../../sda2

blkid will show all of them, including ZFS's datasets:

  [root@r710 ~]# blkid
  /dev/sda2: UUID="bbdcaa28-a4de-47a4-804c-79e83fa31980" TYPE="ext4"
  /dev/sda1: UUID="54242d33-f140-4ffa-96a5-7cfac64bdb09" TYPE="ext4"
  /dev/sda3: UUID="0d97a78f-8a2c-4040-9ee4-6ed3764cd809" TYPE="swap"
  /dev/sda4: LABEL="snapshot" UUID="7237582886734482444" UUID_SUB="5054151785144376043" TYPE="zfs_member"
  /dev/sdb: LABEL="storage" UUID="6125443995338521040" UUID_SUB="
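Related note, not from the original post: for ZFS pools on whole disks, /dev/disk/by-id usually gives stable names even when /dev/disk/by-uuid does not. A minimal sketch using the "storage" pool from the blkid output above (the grep pattern is only illustrative):

  [root@r710 ~]# ls -l /dev/disk/by-id | grep sdb    # find the persistent ID behind /dev/sdb
  [root@r710 ~]# zpool export storage
  [root@r710 ~]# zpool import -d /dev/disk/by-id storage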

VIOS SEA Network bridge without HMC

Two ways to make the VIOS network work:

  - SEA (Shared Ethernet Adapter)
  - HEA (Host Ethernet Adapter)

In this SEA example, we need to create a bridge to make it work: a shared adapter that passes network traffic between the internal (virtual) and external (physical) sides.

  $ lsdev -type adapter
  name             status      description
  ent0             Available   10 Gb Ethernet PCI Express Dual Port Adapter (7710008077108001)
  ent1             Available   10 Gb Ethernet PCI Express Dual Port Adapter (7710008077108001)
  ent2             Available   2-Port Integrated Gigabit Ethernet PCI-Express Adapter (e4143a161410ed03)
  ent3             Available   2-Port Integrated Gigabit Ethernet PCI-Express Adapter (e4143a161410ed03)
  ent4             Available   Virtual I/O Ethernet Adapter (l-lan)
  ent5             Available   Virtual I/O Ethernet Adapter (l-lan)
  ent6             Available   Virtual I/O Ethernet Adapter (l-lan)
  ent7             Available   Virtual I/O Ethernet Adapter (l-la
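The listing is cut off, but the next step is creating the bridge itself with mkvdev -sea. A minimal sketch as padmin, assuming ent0 is the physical port to bridge, ent4 is the virtual adapter, and PVID 1; the adapter choices, the en8 interface name, and the IP details are assumptions, not from the post:

  $ mkvdev -sea ent0 -vadapter ent4 -default ent4 -defaultid 1
  $ lsmap -net -all                       # confirm the new SEA and its virtual adapter mapping
  $ mktcpip -hostname vios1 -inetaddr 10.0.0.10 -interface en8 -netmask 255.255.255.0 -gateway 10.0.0.1
                                          # optional: put the VIOS IP on the new SEA interface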

zfs share multiple hosts

bash-3.2# zfs set sharenfs=on zpool/backup
bash-3.2# zfs set sharenfs=rw=host1:host2,root=host1:host2 zpool/backup
bash-3.2# share
-               /backup   sec=sys,rw=host1:host2,root=host1:host2   ""

root@minnie # showmount -e host1
export list for NFS_server:
/backup host1,host2
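To confirm the options took effect, the property can be read back on the server (dataset name zpool/backup as above):

bash-3.2# zfs get sharenfs zpool/backup      # VALUE should show rw=host1:host2,root=host1:host2
bash-3.2# share                              # /backup should be listed with the same options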