I came across this issue while installing the patch on Solaris 8, 9, and 10 in a customer's environment, and finally tracked down the root cause of the failure.
If you run into the same problem, please try the following.
It seems that at some point the patchadd process does an su to the install user if one exists, otherwise to user nobody. If the patch files and all parent directories are not readable by either install or nobody, you get the above-mentioned error message.
So there are two workarounds:
1. Set execute permission for all on /var/spool/patch so that the user nobody can read all patch files and execute a pwd in the patch directory hierarchy (a sketch follows below).
2. Add an account "install" to the system:
useradd -u 0 -o -g 1 -c "Install user" -d / -s /bin/true install
In my case, option 2 worked for the customer.
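For workaround 1, a minimal sketch (this assumes the patch files were unpacked under the default /var/spool/patch; adjust the path if yours differs):

# find /var/spool/patch -type d -exec chmod a+rx {} \;
# find /var/spool/patch -type f -exec chmod a+r {} \;

The first command makes every directory in the hierarchy readable and searchable by all, the second makes every patch file readable, so user nobody can read the files and run pwd anywhere in the hierarchy. The parent directories /var and /var/spool must also be searchable by nobody, which they are by default.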
1. C2 auditing must already be enabled in the global zone, in the usual way.
2. In the global zone, run the following command:
# /usr/sbin/auditconfig -setpolicy +perzone
3. Add the following line to /etc/security/audit_startup so that the per-zone audit policy is enabled at reboot:
echo "/usr/sbin/auditconfig -setpolicy +perzone" >> /etc/security/audit_startup
4. Connect to the non-global zone and enable the service:
# svcadm enable auditd
5. Verify that the C2 audit daemon comes up in the non-global zone:
# ps -fe | grep audit
NOTE: keep in mind that enabling C2 auditing will start generating audit files, so the non-global zone must have enough space to avoid filling up the zone's filesystem.
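To confirm that the policy took effect in the global zone, you can list the active audit policies (perzone should appear in the output):

# auditconfig -getpolicy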
A fix to the unzip utility is available in recent revisions of the patch utilities patches. This fix is required in order to successfully unzip very large files such as the Solaris 10 Recommended and Sun Alert Patch Clusters.
Please download and install the latest revision of the patch utilities patch before attempting to unzip the Solaris 10 Recommended or Sun Alert Patch Clusters.
The fix was incorporated in the putback to CRs 6344676 and 6464056.
The following are the earliest revisions of the patch utilities containing the fix:
* Solaris 10 SPARC: 119254-46 or above
* Solaris 10 x86: 119255-46 or above
* Solaris 9 SPARC: 112951-14 or above
* Solaris 9 x86: 114194-11 or above
* Solaris 8 SPARC: 108987-19 or above
* Solaris 8 x86: 108988-19 or above
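To check whether a sufficiently recent revision is already installed, you can query the patch database with showrev (this example assumes Solaris 10 SPARC, hence patch 119254; substitute the patch ID for your platform):

# showrev -p | grep 119254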
Without the fix to unzip provided by the above patches, the following error will be seen when attempting to unzip the Solaris 10 Patch Clusters:
# unzip -q 10_Recommended.zip
note: didn't find end-of-central-dir signature at end of central dir.
(please check that you have transferred or created the zipfile in the
appropriate BINARY mode and that you have compiled UnZip properly)
In addition, do not unzip Solaris patch clusters on Windows. Solaris patch clusters, and Solaris patches more generally, can contain case-sensitive file names. Consequently, clusters and patches must be unzipped on a case-sensitive filesystem; corruption can occur when unzipping on a filesystem that is not case-sensitive.
Source: http://blogs.sun.com/patch/entry/need_unzip_fix_available_in
To make the secondary disk bootable on Solaris x86:
Once all file systems are attached and all metadevices report a status of "Okay", update the secondary disk so that it is bootable.
The installgrub utility will update the master boot record of the secondary disk so that it can be booted.
Example:
root # installgrub -fm /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t3d0s0
stage1 written to partition 0 sector 0 (abs 16065)
stage1 written to master boot sector
root #
Now set the secondary disk as the alternate boot path in the /boot/solaris/bootenv.rc file; this can be done by executing the eeprom command:
root@rmanair01 # eeprom altbootpath="/pci@0,0/pci8086,2545@3/pci8086,1460@1d/pci8086,341a@7,1/sd@3,0:a"
root@rmanair01 #
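To confirm that the setting was recorded, query eeprom without assigning a value (a quick check; the output should echo the path set above):

root@rmanair01 # eeprom altbootpath
altbootpath=/pci@0,0/pci8086,2545@3/pci8086,1460@1d/pci8086,341a@7,1/sd@3,0:a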
Mount the image directly into the filesystem using the lofiadm and mount commands.
Given an ISO image in /export/temp/software.iso, a loopback file device (/dev/lofi/1) is created with the following command:
lofiadm -a /export/temp/software.iso /dev/lofi/1
The lofi driver presents the file as a block device. This block device can be mounted on /mnt with the following command:
mount -F hsfs -o ro /dev/lofi/1 /mnt
These commands can be combined into a single command:
mount -F hsfs -o ro `lofiadm -a /export/temp/software.iso` /mnt
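When you are finished with the image, unmount it and tear down the loopback device (assuming it was assigned /dev/lofi/1 as above):

umount /mnt
lofiadm -d /dev/lofi/1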
root # fcinfo hba-port
HBA Port WWN: 210000144f1e906f
OS Device Name: /dev/cfg/c1
Manufacturer: QLogic Corp.
Model: 2200
Type: L-port
State: online
Supported Speeds: 1Gb
Current Speed: 1Gb
Node WWN: 200000144f1e906f
HBA Port WWN: 10000000c952d05e
OS Device Name: /dev/cfg/c2
Manufacturer: Emulex
Model: LP10000DC-S
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb
Current Speed: not established
Node WWN: 20000000c952d05e
HBA Port WWN: 10000000c952d05d
OS Device Name: /dev/cfg/c3
Manufacturer: Emulex
Model: LP10000DC-S
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb
Current Speed: not established
Node WWN: 20000000c952d05d
root #
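fcinfo can also list the remote ports visible through a connected HBA port (a brief sketch, using the QLogic port WWN from the output above; your WWNs will differ):

root # fcinfo remote-port -p 210000144f1e906f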
root # prtpicl -v -c scsi-fcp | grep wwn
:node_wwn
:port_wwn
:node_wwn
:port_wwn
:node-wwn 20 00 00 14 4f 1e 90 6f
:port-wwn 21 00 00 14 4f 1e 90 6f
root #
root # prtconf -vp | grep -i wwn
port_wwn:
node_wwn:
port_wwn:
node_wwn:
port-wwn: 21000014.4f1e906f
node-wwn: 20000014.4f1e906f
root #
root # luxadm -e port
/devices/pci@9,600000/SUNW,qlc@2/fp@0,0:devctl CONNECTED
/devices/pci@8,600000/SUNW,emlxs@1,1/fp@0,0:devctl NOT CONNECTED
/devices/pci@8,600000/SUNW,emlxs@1/fp@0,0:devctl NOT CONNECTED
root #
With the physical paths obtained from the previous command, we can look up the WWNs. Use only the paths that are CONNECTED.
root # luxadm -e dump_map /devices/pci@9,600000/SUNW,qlc@2/fp@0,0:devctl
Pos AL_PA ID Hard_Addr Port WWN Node WWN Type
0 1 7d 0 210000144f1e906f 200000144f1e906f 0x1f (Unknown Type,Host Bus Adapter)
1 ef 0 ef 500000e011e833e1 500000e011e833e0 0x0 (Disk device)
2 e8 1 e8 500000e011eca191 500000e011eca190 0x0 (Disk device)
root #
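luxadm can also probe directly for FC-attached disks without specifying a path (output depends on your configuration):

root # luxadm probe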
Display PowerPath Registration Key / Status
root # powermt check_registration
Install the PowerPath license key
root # /etc/emcpreg -install
Set the load-balancing policy per array type:
root # powermt set policy=so dev=all # EMC-SYMMETRIX
root # powermt set policy=co dev=all # CLARiiON, VNX OE
root # powermt config
root # powermt save
Display High Level HBA I/O Paths
root # powermt display
Display All Attached LUNs
root # powermt display dev=all
Display a specific LUN
When there are multiple LUNs connected to a server, you might want to view information about a specific LUN by providing its logical name.
root # powermt display dev=emcpowera <== "Pseudo name"
Displays whether each HBA is enabled or not in the Mode column.
root # powermt display hba_mode
Displays all available paths for your SAN devices.
root # powermt display paths
Displays the status of the individual ports on the HBA, i.e. whether each port is enabled or not.
root # powermt display port_mode
Display the PowerPath version
root # powermt version
Check the I/O Paths
If you have manually removed an I/O path, the check command will detect the dead path and remove it from the EMC path list.
root # powermt check
You can change the mode of a specific HBA to either standby or active:
# powermt set mode=standby hba=1
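To put the HBA back in service afterwards, the symmetric call applies (hba=1 is just the example number used above):

# powermt set mode=active hba=1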
Use this command to remove a specific I/O path or a whole device.
# powermt remove dev=sdd <== "I/O Path"
This command checks for available EMC SAN logical devices and adds them to the PowerPath configuration list. The powermt config command also sets some options to their default values.
# powermt config
If you have dead I/O paths and have done something to fix the underlying issue, you can ask PowerPath to re-check the paths and mark them as active using the powermt restore command.
# powermt restore dev=all
Save the current PowerPath configuration
To back up the current PowerPath configuration:
# powermt save
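A configuration saved with powermt save can later be reloaded; by default powermt load reads the standard configuration file, and a file=pathname argument can point at an alternate copy:

# powermt load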