ZFS is flexible, scalable, and reliable; it was first released in the Solaris 10 6/06 release.
Keep in mind that all devices that belong to a pool must meet the requirements described in the zfs man page or the online documentation.

1.- Creating and Destroying a ZFS
# time zpool create oradba mirror c1t2d0s0 c1t3d0s0
real 32:20.2
user 0.9
sys 1.5

List the status of the newly created pool:

# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
oradba 19.5G 89K 19.5G 0% ONLINE -
# zpool status -v oradba
pool: oradba
state: ONLINE
scrub: scrub in progress, 54.94% done, 0h2m to go
config:
NAME STATE READ WRITE CKSUM
oradba ONLINE 0 0 0
mirror ONLINE 0 0 0
c1t2d0s0 ONLINE 0 0 0
c1t3d0s0 ONLINE 0 0 0
errors: No known data errors
#

Create a ZFS file system:

# zfs create oradba/home

Assign a mount point other than the default to oradba/home:

# time zfs set mountpoint=/Punto_Montaje oradba/home
real 0.6
user 0.0
sys 0.4

Destroy a ZFS file system:

# zfs destroy -r oradba/home/oradata10
# zfs destroy -r oradba/home/oradata09

More examples: create 10 ZFS file systems with mount points under /home_pool:

# for i in 01 02 03 04 05 06 07 08 09 10
> do
> zfs create oradba/home/oradata$i
> done

List the newly created file systems:

# zfs list
NAME USED AVAIL REFER MOUNTPOINT
oradba 426K 19.2G 24.5K /oradba
oradba/home 282K 19.2G 36.5K /home_pool
oradba/home/oradata01 24.5K 19.2G 24.5K /home_pool/oradata01
oradba/home/oradata02 24.5K 19.2G 24.5K /home_pool/oradata02
oradba/home/oradata03 24.5K 19.2G 24.5K /home_pool/oradata03
oradba/home/oradata04 24.5K 19.2G 24.5K /home_pool/oradata04
oradba/home/oradata05 24.5K 19.2G 24.5K /home_pool/oradata05
oradba/home/oradata06 24.5K 19.2G 24.5K /home_pool/oradata06
oradba/home/oradata07 24.5K 19.2G 24.5K /home_pool/oradata07
oradba/home/oradata08 24.5K 19.2G 24.5K /home_pool/oradata08
oradba/home/oradata09 24.5K 19.2G 24.5K /home_pool/oradata09
oradba/home/oradata10 24.5K 19.2G 24.5K /home_pool/oradata10
#

2.- Assigning Quotas / Space Reservations to a ZFS

# zfs set quota=2g oradba/home/oradata01
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
oradba 426K 19.2G 24.5K /oradba
oradba/home 282K 19.2G 36.5K /home_pool
oradba/home/oradata01 24.5K 2.00G 24.5K /home_pool/oradata01
oradba/home/oradata02 24.5K 3.00G 24.5K /home_pool/oradata02
oradba/home/oradata03 24.5K 2.00G 24.5K /home_pool/oradata03
oradba/home/oradata04 24.5K 2.00G 24.5K /home_pool/oradata04
oradba/home/oradata05 24.5K 2.00G 24.5K /home_pool/oradata05
oradba/home/oradata06 24.5K 2.00G 24.5K /home_pool/oradata06
oradba/home/oradata07 24.5K 2.00G 24.5K /home_pool/oradata07
oradba/home/oradata08 24.5K 2.00G 24.5K /home_pool/oradata08
oradba/home/oradata09 24.5K 2.00G 24.5K /home_pool/oradata09
oradba/home/oradata10 24.5K 19.2G 24.5K /home_pool/oradata10

Test the assigned quota space:

# for i in $(zfs list | awk '{ print $NF }' | grep /home_pool)
> do
> mkfile 2m $i/data_dump.test
> done
#
# df -h | grep home_pool
oradba 19G 24K 9.6G 1% /oradba
oradba/home 19G 2.0G 9.6G 18% /home_pool
oradba/home/oradata01 2.0G 2.0G 0K 100% /home_pool/oradata01
oradba/home/oradata02 3.0G 2.0G 1023M 67% /home_pool/oradata02
oradba/home/oradata03 2.0G 2.0G 0K 100% /home_pool/oradata03
oradba/home/oradata04 2.0G 1.6G 398M 81% /home_pool/oradata04
oradba/home/oradata05 2.0G 24K 2.0G 1% /home_pool/oradata05
oradba/home/oradata06 2.0G 24K 2.0G 1% /home_pool/oradata06
oradba/home/oradata07 2.0G 24K 2.0G 1% /home_pool/oradata07
oradba/home/oradata08 2.0G 24K 2.0G 1% /home_pool/oradata08
oradba/home/oradata09 2.0G 24K 2.0G 1% /home_pool/oradata09
oradba/home/oradata10 5.0G 24K 5.0G 1% /home_pool/oradata10

I/O statistics while writing to the 10 file systems. The zpool iostat command can monitor the performance of ZFS objects:
* USED CAPACITY: data currently stored
* AVAILABLE CAPACITY: space available
* READ OPERATIONS: number of read operations
* WRITE OPERATIONS: number of write operations
* READ BANDWIDTH: bandwidth of all read operations
* WRITE BANDWIDTH: bandwidth of all write operations

# zpool iostat -v
capacity operations bandwidth
pool used avail read write read write
------------ ----- ----- ----- ----- ----- -----
oradba 18.0G 1.50G 0 53 61 3.21M
mirror 18.0G 1.50G 0 53 61 3.21M
c1t2d0s0 - - 0 45 227 3.21M
c1t3d0s0 - - 0 45 124 3.21M
------------ ----- ----- ----- ----- ----- -----

List the file systems with their assigned quotas:

# zfs list
NAME USED AVAIL REFER MOUNTPOINT
oradba 426K 19.2G 24.5K /oradba
oradba/home 282K 19.2G 36.5K /home_pool
oradba/home/oradata01 24.5K 2.00G 24.5K /home_pool/oradata01
oradba/home/oradata02 24.5K 3.00G 24.5K /home_pool/oradata02
oradba/home/oradata03 24.5K 2.00G 24.5K /home_pool/oradata03
oradba/home/oradata04 24.5K 2.00G 24.5K /home_pool/oradata04
oradba/home/oradata05 24.5K 2.00G 24.5K /home_pool/oradata05
oradba/home/oradata06 24.5K 2.00G 24.5K /home_pool/oradata06
oradba/home/oradata07 24.5K 2.00G 24.5K /home_pool/oradata07
oradba/home/oradata08 24.5K 2.00G 24.5K /home_pool/oradata08
oradba/home/oradata09 24.5K 2.00G 24.5K /home_pool/oradata09
oradba/home/oradata10 24.5K 19.2G 24.5K /home_pool/oradata10

Show locally set ZFS file system properties (available only as of the Solaris 10 7/07 release):

# zfs get -s local all

3.- Resizing: Growing / Shrinking a ZFS

Shrink a ZFS file system:

# zfs set quota=550m oradba/home/oradata10
cannot set property for 'oradba/home/oradata10': size is less than current used or reserved space
# rm /home_pool/oradata10/*
# zfs set quota=10m oradba/home/oradata10

Grow a ZFS file system:

# zfs set quota=2.5g oradba/home/oradata08
oradba 19G 24K 322M 1% /oradba
oradba/home 19G 2.0G 322M 87% /home_pool
oradba/home/oradata01 2.0G 2.0G 0K 100% /home_pool/oradata01
oradba/home/oradata02 3.0G 2.0G 322M 87% /home_pool/oradata02
oradba/home/oradata03 2.0G 2.0G 0K 100% /home_pool/oradata03
oradba/home/oradata04 2.0G 2.0G 0K 100% /home_pool/oradata04
oradba/home/oradata05 2.0G 2.0G 0K 100% /home_pool/oradata05
oradba/home/oradata06 2.0G 2.0G 0K 100% /home_pool/oradata06
oradba/home/oradata07 2.0G 2.0G 0K 100% /home_pool/oradata07
oradba/home/oradata08 2.5G 2.0G 322M 87% /home_pool/oradata08
oradba/home/oradata09 1.0G 500M 524M 49% /home_pool/oradata09
oradba/home/oradata10 10M 25K 10.0M 1% /home_pool/oradata10

Reserve 1 GB of space for a ZFS file system:

# zfs set reservation=1g oradba/home/oradata09
# zfs get reservation oradba/home/oradata09
NAME PROPERTY VALUE SOURCE
oradba/home/oradata09 reservation 900M local
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
oradba 18.9G 322M 24.5K /oradba
oradba/home 18.9G 322M 2.00G /home_pool
oradba/home/oradata01 2.00G 0 2.00G /home_pool/oradata01
oradba/home/oradata02 2.00G 322M 2.00G /home_pool/oradata02
oradba/home/oradata03 2.00G 0 2.00G /home_pool/oradata03
oradba/home/oradata04 2.00G 0 2.00G /home_pool/oradata04
oradba/home/oradata05 2.00G 0 2.00G /home_pool/oradata05
oradba/home/oradata06 2.00G 0 2.00G /home_pool/oradata06
oradba/home/oradata07 2.00G 0 2.00G /home_pool/oradata07
oradba/home/oradata08 2.00G 322M 2.00G /home_pool/oradata08
oradba/home/oradata09 800M 224M 800M /home_pool/oradata09
oradba/home/oradata10 25.5K 9.98M 25.5K /home_pool/oradata10

Grow a ZFS file system from 2 GB to 3.19 GB:

# zfs set quota=3.19g oradba/home/oradata08
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
oradba 18.0G 1.19G 24.5K /oradba
oradba/home 18.0G 1.19G 2.00G /home_pool
oradba/home/oradata01 2.00G 0 2.00G /home_pool/oradata01
oradba/home/oradata02 2.00G 1023M 2.00G /home_pool/oradata02
oradba/home/oradata03 2.00G 0 2.00G /home_pool/oradata03
oradba/home/oradata04 2.00G 0 2.00G /home_pool/oradata04
oradba/home/oradata05 2.00G 0 2.00G /home_pool/oradata05
oradba/home/oradata06 2.00G 0 2.00G /home_pool/oradata06
oradba/home/oradata07 2.00G 0 2.00G /home_pool/oradata07
oradba/home/oradata08 2.00G 1.19G 2.00G /home_pool/oradata08
#

Test: create a 3 GB file:

# time mkfile 3g /home_pool/oradata08/data_dump01
real 1:12.1
user 0.3
sys 26.1
#

4.- Backing Up and Restoring a ZFS

Back up a ZFS file system using snapshots:

# cd /home_pool/oradata02
# ls -l
total 4195359
-rw------T 1 root root 2147483648 Oct 17 15:53 data_dump
-rw-r--r-- 1 root root 7 Oct 17 17:51 test.txt
#

Take the snapshot:

# time zfs snapshot oradba/home/oradata02@MiResplado
real 0.3
user 0.0
sys 0.0
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
oradba 17.0G 6.01G 24.5K /oradba
oradba/home 17.0G 6.01G 34.5K /home_pool
oradba/home/oradata01 2.00G 0 2.00G /home_pool/oradata01
oradba/home/oradata02 2.00G 1023M 2.00G /home_pool/oradata02
oradba/home/oradata02@MiResplado 24K - 2.00G -
oradba/home/oradata03 2.00G 0 2.00G /home_pool/oradata03
oradba/home/oradata04 2.00G 0 2.00G /home_pool/oradata04
oradba/home/oradata05 2.00G 0 2.00G /home_pool/oradata05
oradba/home/oradata06 2.00G 0 2.00G /home_pool/oradata06
oradba/home/oradata07 2.00G 0 2.00G /home_pool/oradata07
oradba/home/oradata08 3.00G 194M 3.00G /home_pool/oradata08
#
# cd ..

Restore the backup onto the same file system:

# zfs rollback oradba/home/oradata02@MiResplado
# cd oradata02
# ls
data_dump test.txt
# ls -lrt
total 4195359
-rw------T 1 root root 2147483648 Oct 17 15:53 data_dump
-rw-r--r-- 1 root root 7 Oct 17 17:51 test.txt
# more test.txt
123abc

Add more disk space to the oradba pool (currently 19 GB):

# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
oradba 19.5G 89K 19.5G 0% ONLINE -
# time zpool add oradba mirror c1t2d0s1 c1t3d0s1
real 32m5.22s
user 0m0.25s
sys 0m0.57s
# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
oradba 23.4G 17.0G 6.37G 72% ONLINE -
root@dns2.desc.com.mx # zpool status -v oradba
pool: oradba
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
oradba ONLINE 0 0 0
mirror ONLINE 0 0 0
c1t2d0s0 ONLINE 0 0 0
c1t3d0s0 ONLINE 0 0 0
mirror ONLINE 0 0 0
c1t2d0s1 ONLINE 0 0 0
c1t3d0s1 ONLINE 0 0 0
errors: No known data errors
root@dns2.desc.com.mx #

More examples:
Creating ZFS Backups (Snapshots)
Advantages:
* An unlimited number of snapshots can be created
* No additional space is required when a snapshot is taken
* Snapshots are accessible through the .zfs/snapshot directory at the root of each ZFS file system
* They let users recover files without help from the sysadmin
* ZFS clones can be created from a snapshot

Take a snapshot of the oradata02 ZFS file system:

root@ # zfs snapshot oradba/home/oradata02@MiResplado

List snapshots:

root@ # zfs list -t snapshot
NAME USED AVAIL REFER MOUNTPOINT
oradba/home/oradata01@161007 0 - 2.00G -
oradba/home/oradata01@MYRESPALDO 0 - 2.00G -
oradba/home/oradata02@MiResplado 23.5K - 2.00G -
root@ #

Verify that the information contained in the snapshot is there:

root@ # cd /home_pool/oradata02/.zfs
root@ # ls
snapshot
root@ # cd snapshot
root@ # ls -l
total 3
drwxr-xr-x 2 root sys 4 Oct 17 17:51 MiResplado
root@ # cd MiResplado
root@ # ls -l
total 4195359
-rw------T 1 root root 2147483648 Oct 17 15:53 data_dump
-rw-r--r-- 1 root root 7 Oct 17 17:51 test.txt
root@ #

The information in the snapshot can be used directly:

root@ # cp test.txt /home_pool/oradata02/test.txt.backup
root@ # cd /home_pool/oradata02/
root@ # pwd
/home_pool/oradata02
root@ # ls
data_dump test.txt test.txt.backup
root@ # ls -lrt
total 4195361
-rw------T 1 root root 2147483648 Oct 17 15:53 data_dump
-rw-r--r-- 1 root root 7 Oct 17 17:51 test.txt
-rw-r--r-- 1 root root 7 Oct 18 10:57 test.txt.backup

ROLLING BACK A SNAPSHOT
* This overwrites the data on the ZFS file system; any information updated since the snapshot will be lost.

zfs rollback oradba/home/oradata02@MiResplado
root@dns2.desc.com.mx # zfs list -t snapshot
NAME USED AVAIL REFER MOUNTPOINT
oradba/home/oradata01@161007 0 - 2.00G -
oradba/home/oradata01@MYRESPALDO 0 - 2.00G -
oradba/home/oradata02@MiResplado 23.5K - 2.00G -

Delete a snapshot:

root@dns2.desc.com.mx # zfs destroy oradba/home/oradata01@161007
root@dns2.desc.com.mx # zfs destroy oradba/home/oradata01@MYRESPALDO
root@dns2.desc.com.mx # zfs destroy oradba/home/oradata02@MiResplado
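Building on the shell loop used earlier to create the ten oradata file systems, a date-stamped snapshot of all of them could be taken in one pass. This is only a sketch, assuming the oradba/home/oradata01..10 datasets from the previous sections still exist:

root@ # for i in 01 02 03 04 05 06 07 08 09 10
> do
> zfs snapshot oradba/home/oradata$i@`date +%Y%m%d`
> done
root@ # zfs list -t snapshot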
5.- CLONING A ZFS
* Use this to recover information from a snapshot backup.
Create a new snapshot to clone from:

root@ # zfs snapshot oradba/home/oradata01@backup-171007
root@ # zfs list -t snapshot
NAME USED AVAIL REFER MOUNTPOINT
oradba/home/oradata01@backup-171007 0 - 2.00G -
root@ # zfs clone oradba/home/oradata01@backup-171007 oradba/home/oradata10
root@ # zfs list
NAME USED AVAIL REFER MOUNTPOINT
oradba 17.0G 6.01G 24.5K /oradba
oradba/home 17.0G 6.01G 37.5K /home_pool
oradba/home/oradata01 2.00G 0 2.00G /home_pool/oradata01
oradba/home/oradata01@backup-171007 0 - 2.00G -
oradba/home/oradata02 2.00G 1023M 2.00G /home_pool/oradata02
oradba/home/oradata03 2.00G 0 2.00G /home_pool/oradata03
oradba/home/oradata04 2.00G 0 2.00G /home_pool/oradata04
oradba/home/oradata05 2.00G 0 2.00G /home_pool/oradata05
oradba/home/oradata06 2.00G 0 2.00G /home_pool/oradata06
oradba/home/oradata07 2.00G 0 2.00G /home_pool/oradata07
oradba/home/oradata08 3.00G 1023M 3.00G /home_pool/oradata08
oradba/home/oradata09 24.5K 3.00G 24.5K /home_pool/oradata09
oradba/home/oradata10 0 6.01G 2.00G /home_pool/oradata10

Grow the oradata10 file system to hold the clone of oradata01:

root@ # zfs set quota=2060m oradba/home/oradata10
root@ # zfs clone oradba/home/oradata01@backup-171007 /home_pool/oradata10
root@ # zfs list
NAME USED AVAIL REFER MOUNTPOINT
oradba 17.0G 6.01G 24.5K /oradba
oradba/home 17.0G 6.01G 37.5K /home_pool
oradba/home/oradata01 2.00G 0 2.00G /home_pool/oradata01
oradba/home/oradata01@backup-171007 0 - 2.00G -
oradba/home/oradata02 2.00G 1023M 2.00G /home_pool/oradata02
oradba/home/oradata03 2.00G 0 2.00G /home_pool/oradata03
oradba/home/oradata04 2.00G 0 2.00G /home_pool/oradata04
oradba/home/oradata05 2.00G 0 2.00G /home_pool/oradata05
oradba/home/oradata06 2.00G 0 2.00G /home_pool/oradata06
oradba/home/oradata07 2.00G 0 2.00G /home_pool/oradata07
oradba/home/oradata08 3.00G 1023M 3.00G /home_pool/oradata08
oradba/home/oradata09 24.5K 3.00G 24.5K /home_pool/oradata09
oradba/home/oradata10 0 2.01G 2.00G /home_pool/oradata10
root@ # ls -l /home_pool/oradata10
total 4195359
-rw------T 1 root root 2147483648 Oct 17 15:53 data_dump
-rw-r--r-- 1 root root 7 Oct 17 17:51 test.txt

If the snapshot we cloned from is destroyed, the following message is shown; it warns that the cloned file system would be destroyed as well.

root@ # zfs destroy oradba/home/oradata01@backup-171007
cannot destroy 'oradba/home/oradata01@backup-171007': snapshot has dependent clones
use '-R' to destroy the following datasets:
oradba/home/oradata10

To keep the cloned file system in place of the original for day-to-day operation, promote the clone.

root@ # zfs promote oradba/home/oradata10
root@ # zfs list
NAME USED AVAIL REFER MOUNTPOINT
oradba 17.0G 6.01G 24.5K /oradba
oradba/home 17.0G 6.01G 37.5K /home_pool
oradba/home/oradata01 0 2G 2.00G /home_pool/oradata01
oradba/home/oradata02 2.00G 1023M 2.00G /home_pool/oradata02
oradba/home/oradata03 2.00G 0 2.00G /home_pool/oradata03
oradba/home/oradata04 2.00G 0 2.00G /home_pool/oradata04
oradba/home/oradata05 2.00G 0 2.00G /home_pool/oradata05
oradba/home/oradata06 2.00G 0 2.00G /home_pool/oradata06
oradba/home/oradata07 2.00G 0 2.00G /home_pool/oradata07
oradba/home/oradata08 3.00G 1023M 3.00G /home_pool/oradata08
oradba/home/oradata09 24.5K 3.00G 24.5K /home_pool/oradata09
oradba/home/oradata10 2.00G 12.0M 2.00G /home_pool/oradata10
oradba/home/oradata10@backup-171007 0 - 2.00G -

Warning: if a clone is promoted, the original (parent) file system will be removed when the snapshot is destroyed.
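After promoting, the clone can optionally take over the original dataset's name with zfs rename. A minimal sketch of that step, using the dataset names above; the .old suffix is purely illustrative:

root@ # zfs rename oradba/home/oradata01 oradba/home/oradata01.old
root@ # zfs rename oradba/home/oradata10 oradba/home/oradata01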
root@dns2.desc.com.mx # zfs destroy -R oradba/home/oradata01@MiResplado
root@dns2.desc.com.mx # zfs destroy -R oradba/home/oradata10@backup-171007
root@dns2.desc.com.mx # zfs list
NAME USED AVAIL REFER MOUNTPOINT
oradba 17.0G 6.01G 24.5K /oradba
oradba/home 17.0G 6.01G 36.5K /home_pool
oradba/home/oradata02 2.00G 1023M 2.00G /home_pool/oradata02
oradba/home/oradata03 2.00G 0 2.00G /home_pool/oradata03
oradba/home/oradata04 2.00G 0 2.00G /home_pool/oradata04
oradba/home/oradata05 2.00G 0 2.00G /home_pool/oradata05
oradba/home/oradata06 2.00G 0 2.00G /home_pool/oradata06
oradba/home/oradata07 2.00G 0 2.00G /home_pool/oradata07
oradba/home/oradata08 3.00G 1023M 3.00G /home_pool/oradata08
oradba/home/oradata09 24.5K 3.00G 24.5K /home_pool/oradata09
oradba/home/oradata10 2.00G 12.0M 2.00G /home_pool/oradata10
root@dns2.desc.com.mx #

Verify that the snapshots were indeed removed:

root@ # zfs list -t snapshot
no datasets available
root@ # df -h | grep home_pool
oradba/home 23G 36K 6.0G 1% /home_pool
oradba/home/oradata03 2.0G 2.0G 0K 100% /home_pool/oradata03
oradba/home/oradata04 2.0G 2.0G 0K 100% /home_pool/oradata04
oradba/home/oradata05 2.0G 2.0G 0K 100% /home_pool/oradata05
oradba/home/oradata06 2.0G 2.0G 0K 100% /home_pool/oradata06
oradba/home/oradata07 2.0G 2.0G 0K 100% /home_pool/oradata07
oradba/home/oradata02 3.0G 2.0G 1023M 67% /home_pool/oradata02
oradba/home/oradata08 4.0G 3.0G 1023M 76% /home_pool/oradata08
oradba/home/oradata09 3.0G 24K 3.0G 1% /home_pool/oradata09
oradba/home/oradata10 2.0G 2.0G 12M 100% /home_pool/oradata10
root@ #

ZFS Send / Receive (Backup / Restore)

root@ # zfs snapshot oradba/home/oradata05@Backup-remote
root@ # zfs list -t snapshot
NAME USED AVAIL REFER MOUNTPOINT
oradba/home/oradata05@Backup-remote 0 - 2.00G -
root@ #
root@ # zfs send oradba/home/oradata05@Backup-remote | ssh 10.98.201.145 zfs recv oradba/home/oradata01@today
root@dns2.desc.com.mx # zfs list
NAME USED AVAIL REFER MOUNTPOINT
oradba 19.0G 4.01G 24.5K /oradba
oradba/home 19.0G 4.01G 37.5K /home_pool
oradba/home/oradata01 2.00G 4.01G 2.00G /home_pool/oradata01
oradba/home/oradata01@today 0 - 2.00G -
oradba/home/oradata02 2.00G 1023M 2.00G /home_pool/oradata02
oradba/home/oradata03 2.00G 0 2.00G /home_pool/oradata03
oradba/home/oradata04 2.00G 0 2.00G /home_pool/oradata04
oradba/home/oradata05 2.00G 0 2.00G /home_pool/oradata05
oradba/home/oradata05@Backup-remote 0 - 2.00G -
oradba/home/oradata06 2.00G 0 2.00G /home_pool/oradata06
oradba/home/oradata07 2.00G 0 2.00G /home_pool/oradata07
oradba/home/oradata08 3.00G 1023M 3.00G /home_pool/oradata08
oradba/home/oradata09 24.5K 3.00G 24.5K /home_pool/oradata09
oradba/home/oradata10 2.00G 12.0M 2.00G /home_pool/oradata10
root@ #
root@ # ls -l /home_pool/oradata01
total 4194333
-rw------- 1 root root 2147483648 Oct 17 15:58 data_dump
root@ #

Another simple way to back up a snapshot:

# zfs send tank/dana@040706 > /dev/rmt/0

Incrementally:

# zfs send -i tank/dana@040706 tank/dana@040806 > /dev/rmt/0

With compression:

# zfs send pool/fs@snap | gzip > backupfile.gz

Restore a backup from tape:

root@ # zfs receive oradba/home/oradata05@Backup-remote < /dev/rmt/0

Remove the snapshots, leaving the ZFS file systems in operation:
root@ # zfs destroy oradba/home/oradata05@Backup-remote
root@ # zfs destroy oradba/home/oradata01@today
root@ # zfs list
NAME USED AVAIL REFER MOUNTPOINT
oradba 19.0G 4.01G 24.5K /oradba
oradba/home 19.0G 4.01G 37.5K /home_pool
oradba/home/oradata01 2.00G 4.01G 2.00G /home_pool/oradata01
oradba/home/oradata02 2.00G 1023M 2.00G /home_pool/oradata02
oradba/home/oradata03 2.00G 0 2.00G /home_pool/oradata03
oradba/home/oradata04 2.00G 0 2.00G /home_pool/oradata04
oradba/home/oradata05 2.00G 0 2.00G /home_pool/oradata05
oradba/home/oradata06 2.00G 0 2.00G /home_pool/oradata06
oradba/home/oradata07 2.00G 0 2.00G /home_pool/oradata07
oradba/home/oradata08 3.00G 1023M 3.00G /home_pool/oradata08
oradba/home/oradata09 24.5K 3.00G 24.5K /home_pool/oradata09
oradba/home/oradata10 2.00G 12.0M 2.00G /home_pool/oradata10
root@ #
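For the compressed send shown above, the matching restore decompresses the stream back into zfs receive. A minimal sketch, assuming the backupfile.gz created earlier; the target name pool/fs_restore is hypothetical and chosen only for illustration (gunzip -c works in place of gzcat):

# gzcat backupfile.gz | zfs receive pool/fs_restore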
6.- Analyzing the Integrity of a ZFS Storage Pool
Check the pool's status. Possible device/pool states:
* ONLINE: normal
* FAULTED: missing, damaged, or mis-seated device
* DEGRADED: device being resilvered
* UNAVAILABLE: device cannot be opened
* OFFLINE: taken offline by administrative action

root@ # zpool status -x
all pools are healthy
root@ # zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
oradba 23.4G 17.0G 6.37G 72% ONLINE -

ZFS Data Scrubbing

Run a scrub to detect and prevent hardware or software errors. If the scrub degrades performance, stop it with the -s option. Do not use fsck to check for errors.

root@ # zpool scrub oradba
root@ # zpool status -v oradba
pool: oradba
state: ONLINE
scrub: scrub completed with 0 errors on Thu Oct 18 13:12:43 2007
config:
NAME STATE READ WRITE CKSUM
oradba ONLINE 0 0 0
mirror ONLINE 0 0 0
c1t2d0s0 ONLINE 0 0 0
c1t3d0s0 ONLINE 0 0 0
mirror ONLINE 0 0 0
c1t2d0s1 ONLINE 0 0 0
c1t3d0s1 ONLINE 0 0 0
errors: No known data errors

Device-failure test: a disk is pulled from the system.

# zfs list -o name,zoned,mountpoint -r oradba/home
pzzy9l@dns2.desc.com.mx # zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
oradba 23.4G 14.0G 9.36G 59% DEGRADED -
pzzy9l@dns2.desc.com.mx #

Status of the pool with the disk missing:

pzzy9l@ # zpool status
pool: oradba
state: DEGRADED
status: One or more devices could not be opened. Sufficient replicas exist for the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
see: http://www.sun.com/msg/ZFS-8000-D3
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
oradba DEGRADED 0 0 0
mirror DEGRADED 0 0 0
c1t2d0s0 ONLINE 0 0 0
c1t3d0s0 UNAVAIL 0 0 0 cannot open
mirror DEGRADED 0 0 0
c1t2d0s1 ONLINE 0 0 0
c1t3d0s1 UNAVAIL 0 0 0 cannot open
errors: No known data errors

The disk is re-inserted and its slices are brought online so they rejoin the mirror.

root@ # zpool online oradba c1t3d0s0
Bringing device c1t3d0s0 online
root@ # time zpool online oradba c1t3d0s1
Bringing device c1t3d0s1 online
real 0.4
user 0.0
sys 0.0
root@ #
root@ # zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
oradba 23.4G 14.0G 9.36G 59% ONLINE -
root@ #

Run a scrub against the pool to verify there are no errors.

# zpool scrub oradba
pool: oradba
state: ONLINE
scrub: scrub completed with 0 errors on Fri Oct 19 12:23:18 2007
config:
NAME STATE READ WRITE CKSUM
oradba ONLINE 0 0 0
mirror ONLINE 0 0 0
c1t2d0s0 ONLINE 0 0 0
c1t3d0s0 ONLINE 0 0 0
mirror ONLINE 0 0 0
c1t2d0s1 ONLINE 0 0 0
c1t3d0s1 ONLINE 0 0 0
errors: No known data errors

Additional commands for correcting ZFS errors

Take a disk offline:
zpool offline pool_name c0t1d0

After an offlined disk has been replaced, it can be brought online again:
zpool online pool_name c0t1d0

Replace a damaged device with another device in the ZFS storage pool:
# zpool replace pool_name c1t0d0 c2t0d0

Recover a destroyed pool (only as of the Solaris 10 6/06 release):
The zpool import -D command enables recovery of a pool that was previously destroyed with the zpool destroy command.
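A minimal sketch of that recovery flow; the pool name testpool is hypothetical and used only for illustration:

# zpool destroy testpool
# zpool import -D
# zpool import -D testpool

The first import -D lists destroyed pools that are still recoverable; the second imports the named pool again.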
7.- Sharing ZFS with Zones
ZFS administration permissions can be granted to a zone via zonecfg -z zone-name; a delegation sketch appears at the end of this section.

The zone path lives on:
/dev/dsk/c1t2d0s3 19G 20M 19G 1% /test/home/zona01

Configure the zone named zona01:

root@ # zonecfg -z zona01
zona01: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:zona01> create
zonecfg:zona01> set zonepath=/test/home/zona01
zonecfg:zona01> set autoboot=true
zonecfg:zona01> verify
zonecfg:zona01> commit
zonecfg:zona01> exit

Install the zone:

root@ # zoneadm -z zona01 install
root@ # zoneadm list -cv
ID NAME STATUS PATH BRAND IP
0 global running / native shared
- zona01 installed /test/home/zona01 native shared
root@ #

Boot the zone:

root@dns2.desc.com.mx # zoneadm -z zona01 boot

Configure the server name and time zone of zona01:

root@dns2.desc.com.mx # zlogin -C zona01

Add the ZFS file systems to zona01:

zonecfg:zona01> add fs
zonecfg:zona01:fs> set type=zfs
zonecfg:zona01:fs> set special=oradba/home/oradata01
zonecfg:zona01:fs> set dir=/shared/ZFS/oradata01
zonecfg:zona01:fs> end
zonecfg:zona01> verify
zonecfg:zona01> exit

Sharing a ZFS file system

To share a ZFS file system over NFS it is not necessary to edit /etc/dfs/dfstab:

root@ # zfs set sharenfs=on oradba/home/oradata01

Mounting the ZFS file systems assigned to the zone

With the legacy option the file systems are no longer mounted automatically by ZFS; use mount -F zfs, or an /etc/vfstab entry so they mount automatically after a reboot.

root@ # zfs set mountpoint=legacy oradba/home/oradata01
root@ # zfs set mountpoint=legacy oradba/home/oradata02
root@ # mount -F zfs oradba/home/oradata01 /test/home/zona01/root/shared/ZFS/oradata01
root@ # mount -F zfs oradba/home/oradata02 /test/home/zona01/root/shared/ZFS/oradata02

Test that the file systems assigned to the zone come back after a reboot:

root@ # zoneadm list -iv
ID NAME STATUS PATH BRAND IP
0 global running / native shared
3 zona01 running /test/home/zona01 native shared

Reboot the zone:

root@ # zoneadm -z zona01 reboot
root@ # zoneadm list -iv
ID NAME STATUS PATH BRAND IP
0 global running / native shared
4 zona01 running /test/home/zona01 native shared
root@ #

List the file systems in the global zone:

root@ # zfs list
NAME USED AVAIL REFER MOUNTPOINT
oradba 14.0G 9.01G 24.5K /oradba
oradba/home 14.0G 9.01G 32.5K /home_pool
oradba/home/oradata01 2.00G 9.01G 2.00G legacy
oradba/home/oradata02 2.00G 1023M 2.00G /home_pool/oradata02
oradba/home/oradata03 2.00G 0 2.00G /home_pool/oradata03
oradba/home/oradata04 2.00G 0 2.00G /home_pool/oradata04
oradba/home/oradata05 2.00G 0 2.00G /home_pool/oradata05
oradba/home/oradata06 2.00G 0 2.00G /home_pool/oradata06
oradba/home/oradata07 2.00G 0 2.00G /home_pool/oradata07
root@ # df -h | grep home_pool
oradba/home 23G 32K 9.0G 1% /home_pool
oradba/home/oradata03 2.0G 2.0G 0K 100% /home_pool/oradata03
oradba/home/oradata04 2.0G 2.0G 0K 100% /home_pool/oradata04
oradba/home/oradata05 2.0G 2.0G 0K 100% /home_pool/oradata05
oradba/home/oradata06 2.0G 2.0G 0K 100% /home_pool/oradata06
oradba/home/oradata07 2.0G 2.0G 0K 100% /home_pool/oradata07
oradba/home/oradata02 3.0G 2.0G 1023M 67% /home_pool/oradata02
root@dns2.desc.com.mx #

Log in to the non-global zone zona01 and verify that the assigned file systems are mounted:

root@ # zlogin zona01
[Connected to zone 'zona01' pts/4]
Last login: Fri Oct 19 10:49:55 on pts/2
Sun Microsystems Inc. SunOS 5.10 Generic January 2005
# zonename
zona01
# df -h
Filesystem size used avail capacity Mounted on
/ 19G 125M 19G 1% /
/dev 19G 125M 19G 1% /dev
/lib 19G 4.0G 15G 21% /lib
/platform 19G 4.0G 15G 21% /platform
/sbin 19G 4.0G 15G 21% /sbin
oradba/home/oradata01 3.0G 2.0G 1013M 68% /shared/ZFS/oradata01
oradba/home/oradata02 3.0G 2.0G 1023M 67% /shared/ZFS/oradata02
/usr 19G 4.0G 15G 21% /usr
proc 0K 0K 0K 0% /proc
ctfs 0K 0K 0K 0% /system/contract
mnttab 0K 0K 0K 0% /etc/mnttab
objfs 0K 0K 0K 0% /system/object
swap 17G 320K 17G 1% /etc/svc/volatile
/platform/sun4u-us3/lib/libc_psr/libc_psr_hwcap1.so.1 19G 4.0G 15G 21% /platform/sun4u-us3/lib/libc_psr.so.1
/platform/sun4u-us3/lib/sparcv9/libc_psr/libc_psr_hwcap1.so.1 19G 4.0G 15G 21% /platform/sun4u-us3/lib/sparcv9/libc_psr.so.1
fd 0K 0K 0K 0% /dev/fd
swap 17G 64K 17G 1% /tmp
swap 17G 24K 17G 1% /var/run
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
oradba 14.0G 9.01G 24.5K /oradba
oradba/home 14.0G 9.01G 32.5K /home_pool
oradba/home/oradata01 2.00G 9.01G 2.00G legacy
oradba/home/oradata02 3.0G 2.0G 1023M legacy

Get the quota, if one has been assigned:

# zfs get quota oradba/home/oradata01
NAME PROPERTY VALUE SOURCE
oradba/home/oradata01 quota none default

Try to assign a quota to the file system from inside the non-global zone zona01:

# zfs set quota=3g oradba/home/oradata01
cannot set property for 'oradba/home/oradata01': permission denied
#

Since zona01 has no permission to administer the ZFS resource, we do it from the global zone:

root@ # zfs get quota oradba/home/oradata01
NAME PROPERTY VALUE SOURCE
oradba/home/oradata01 quota none default
root@dns2.desc.com.mx # zfs set quota=3g oradba/home/oradata01

Verify the space increase from inside zona01:

root@dns2.desc.com.mx # zlogin zona01 zfs get quota oradba/home/oradata01
NAME PROPERTY VALUE SOURCE
oradba/home/oradata01 quota 3G local
root@ #
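If the zone should be able to administer a dataset itself (set quotas, create child file systems), the dataset can be delegated to it instead of being mounted with add fs. A minimal sketch, assuming the zone and dataset names used above; the zone must be rebooted for the delegation to take effect:

root@ # zonecfg -z zona01
zonecfg:zona01> add dataset
zonecfg:zona01:dataset> set name=oradba/home/oradata01
zonecfg:zona01:dataset> end
zonecfg:zona01> commit
zonecfg:zona01> exit
root@ # zoneadm -z zona01 reboot

With the dataset delegated, running zfs set quota=3g oradba/home/oradata01 inside zona01 should succeed instead of returning "permission denied".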
8.- Java Web Console for Administering ZFS
Enable the console so that it starts automatically after a reboot:

# /usr/sbin/smcwebserver enable

Start the console:

pzzy9l@ # /usr/sbin/smcwebserver stop
pzzy9l@ # /usr/sbin/smcwebserver start

Allow remote connections to the console:

root@ # svccfg -s svc:/system/webconsole setprop options/tcp_listen = true
root@ # svcadm refresh svc:/system/webconsole
root@ # wcadmin list -a
Deployed web applications (application name, context name, status):
console ROOT [running]
console com_sun_web_ui [running]
console console [running]
console manager [running]
console zfs [running]
root@ #

Open the web address for ZFS administration:

https://host:6789
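As an optional sanity check (not part of the original procedure), the SMF service behind the console can be inspected, and restarted if the tcp_listen change does not take effect after the refresh:

root@ # svcs -l svc:/system/webconsole
root@ # svcadm restart svc:/system/webconsole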
9.- New Solaris ACL Model in ZFS
Setting ACLs on ZFS Files
Setting and Displaying ACLs on ZFS Files in Compact Format
Setting and Displaying ACLs on ZFS Files in Verbose Format

root@dns2.desc.com.mx # cd /home_pool/oradata10
root@dns2.desc.com.mx # ls
data_dump
root@dns2.desc.com.mx # ls -l
total 4194333
-rw------- 1 root root 2147483648 Oct 17 15:52 data_dump
root@dns2.desc.com.mx # setfacl -r -m u:pzzy9l:7 data_dump
File system doesn't support aclent_t style ACL's.
See acl(5) for more information on ACL styles support by Solaris.
root@dns2.desc.com.mx #
root@dns2.desc.com.mx # ls -v data_dump
-rw------- 1 root root 2147483648 Oct 17 15:52 data_dump
0:owner@:execute:deny
1:owner@:read_data/write_data/append_data/write_xattr/write_attributes
    /write_acl/write_owner:allow
2:group@:read_data/write_data/append_data/execute:deny
3:group@::allow
4:everyone@:read_data/write_data/append_data/write_xattr/execute
    /write_attributes/write_acl/write_owner:deny
5:everyone@:read_xattr/read_attributes/read_acl/synchronize:allow
root@dns2.desc.com.mx #

---- In Verbose Format ----

root@dns2.desc.com.mx # chmod A+user:pzzy9l:read_data/write_data/execute:allow data_dump
root@dns2.desc.com.mx #
root@dns2.desc.com.mx # ls -l
total 4194333
-rwx------+ 1 root root 2147483648 Oct 17 15:52 data_dump
root@dns2.desc.com.mx #
root@dns2.desc.com.mx # ls -v data_dump
-rwx------+ 1 root root 2147483648 Oct 17 15:52 data_dump
0:user:pzzy9l:read_data/write_data/execute:allow
pzzy9l@dns2.desc.com.mx # ls -l
total 2097166
-rwx------ 1 root 2147483648 Oct 17 15:52 data_dump
pzzy9l@dns2.desc.com.mx # cat pzzy9l >> data_dump
cat: cannot open pzzy9l
pzzy9l@dns2.desc.com.mx # echo "pzzy9l@dns2.desc.com.mx" > data_dump
pzzy9l@dns2.desc.com.mx # ls -l
total 132
-rwx------ 1 root 24 Oct 18 12:43 data_dump
pzzy9l@dns2.desc.com.mx #

---- Removing the user ACL entry ----

root@dns2.desc.com.mx # chmod A0- data_dump
root@dns2.desc.com.mx # ls -v data_dump
-rwx------ 1 root root 24 Oct 18 12:43 data_dump
0:owner@::deny
1:owner@:read_data/write_data/append_data/write_xattr/execute
    /write_attributes/write_acl/write_owner:allow
2:group@:read_data/write_data/append_data/execute:deny
3:group@::allow
4:everyone@:read_data/write_data/append_data/write_xattr/execute
    /write_attributes/write_acl/write_owner:deny
5:everyone@:read_xattr/read_attributes/read_acl/synchronize:allow
root@dns2.desc.com.mx #

Displaying ACLs in Compact Format

root@dns2.desc.com.mx # ls -V data_dump
-rwx------ 1 root root 24 Oct 18 12:43 data_dump
owner@:--------------:------:deny
owner@:rwxp---A-W-Co-:------:allow
group@:rwxp----------:------:deny
group@:--------------:------:allow
everyone@:rwxp---A-W-Co-:------:deny
everyone@:------a-R-c--s:------:allow
root@dns2.desc.com.mx #
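The same verbose chmod syntax also accepts inheritance flags when applied to a directory, so files created inside pick up the entry automatically. A small sketch, assuming a hypothetical directory /home_pool/oradata10/logs and the pzzy9l user from above:

root@dns2.desc.com.mx # mkdir /home_pool/oradata10/logs
root@dns2.desc.com.mx # chmod A+user:pzzy9l:read_data/write_data:file_inherit/dir_inherit:allow /home_pool/oradata10/logs
root@dns2.desc.com.mx # ls -dv /home_pool/oradata10/logs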
BACKING UP THE wtmpx FILE IN A READABLE FORMAT
root@ # cd /var/adm/
root@ # ls -l wtmpx
-rw-r--r-- 1 adm adm 1539641784 Apr 15 18:02 wtmpx
root@ # /usr/lib/acct/fwtmp < /var/adm/wtmpx > /var/adm/wtmpx.`date +%Y%m%d`
root@ # ls -l | grep wtmpx
-rw-r--r-- 1 adm adm 1539693492 Apr 15 18:09 wtmpx
-rw-r--r-- 1 root other 355949528 Apr 15 18:09 wtmpx.20150415
root@ # > /var/adm/wtmpx
root@ # ls -l /var/adm/wtmpx
-rw-r--r-- 1 adm adm 372 Apr 15 18:11 /var/adm/wtmpx
root@ # gzip wtmpx.20150415
root@ # ls -l | grep wtmpx
-rw-r--r-- 1 adm adm 10044 Apr 15 18:12 wtmpx
-rw-r--r-- 1 root other 42045437 Apr 15 18:09 wtmpx.20150415.gz
root@ #
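To consult the archived copy later without uncompressing it on disk, the gzip tools already used above can stream it; an optional step, not part of the original procedure:

root@ # gzcat /var/adm/wtmpx.20150415.gz | tail -20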