Thursday, February 22, 2007

Oracle 10g RAC installation: 11 Setup Oracle Clusterware

Start both your first and second nodes.
Then log in to the first node as the oracle user; then, with your browser, go to the Oracle website and download the Oracle Database 10g R2 and Oracle Clusterware software.

Extract the zip file (10201_clusterware_linux32.zip) into the oracle user's home directory and then type:
/home/oracle/clusterware/runInstaller
The Welcome screen will appear; click Next.
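Putting the extraction and launch together, the commands look like this (a sketch, assuming the zip was downloaded into /home/oracle):

```shell
# as the oracle user on the first node
cd /home/oracle
unzip 10201_clusterware_linux32.zip    # creates the clusterware/ directory
/home/oracle/clusterware/runInstaller  # launches the Oracle Universal Installer
```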


In the next screen you have to specify the inventory directory and credentials: the full path of the inventory directory should be /u01/app/oracle/oraInventory and the operating system group name should be oinstall.


Be careful in the next step! It seems the OUI doesn't read our ORA_CRS_HOME variable and suggests a wrong Clusterware home directory. Type OraCrs10g_home as the Name and /u01/app/oracle/product/10.2.0/crs_1 as the correct Path.
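One possible workaround (an assumption on my part, not something I verified against every OUI release) is to export the variable in the oracle user's shell before launching the installer, so the suggested path matches:

```shell
# hypothetical workaround: set before running runInstaller from the same shell
export ORA_CRS_HOME=/u01/app/oracle/product/10.2.0/crs_1
echo $ORA_CRS_HOME   # quick sanity check of the value
```

In any case, double-check the Name and Path fields in the OUI screen before clicking Next.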


While the installer checks the prerequisites, you can safely ignore the warning message about the RAM.


In the next step you have to add your second node (your first node should already be displayed), so click Add and specify your second node's configuration.


In the Network Interface Usage screen you have to change the eth0 interface type to Public.


For the OCR (Oracle Cluster Registry) location, first select External Redundancy and then enter the path on our VMware shared disk, that is /ocfs/clusterware/ocr


Do the same thing for the Voting Disk: your path should be /ocfs/clusterware/vdisk
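Before continuing, it can be worth checking that the shared locations are actually visible and writable from both nodes (a sketch; it assumes the OCFS volume is mounted at /ocfs on each node, as set up earlier in this series):

```shell
# run on each node: the clusterware directory must sit on the shared OCFS disk
ls -ld /ocfs/clusterware
# confirm the mount is writable by creating and removing a scratch file
touch /ocfs/clusterware/testfile && rm /ocfs/clusterware/testfile
```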


Now we can begin to install the Oracle Clusterware software.


During the installation process the following error occurred: it tells us to execute a command from the second node once the installation process is complete. Let's go on...


Now we have to execute the following scripts as the root user, in this order, and be sure to wait for each command to complete:
/u01/app/oracle/oraInventory/orainstRoot.sh on the first node
then
/u01/app/oracle/oraInventory/orainstRoot.sh on the second node
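In other words (a sketch, assuming the nodes are named rac1 and rac2 as in this series, and that root ssh access between the nodes is configured):

```shell
# on rac1, as root
/u01/app/oracle/oraInventory/orainstRoot.sh
# wait for it to finish, then run the same script on rac2 (here via ssh from rac1)
ssh rac2 /u01/app/oracle/oraInventory/orainstRoot.sh
```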


The following two commands take more time to complete... so be patient. Execute
/u01/app/oracle/product/10.2.0/crs_1/root.sh on the first node


and finally
/u01/app/oracle/product/10.2.0/crs_1/root.sh on the second node
After this fourth script you will receive an error: we have to run the Virtual IP Configuration Assistant (vipca) manually from the command line.


So, as the root user on the second node, go to the ORA_CRS_HOME (/u01/app/oracle/product/10.2.0/crs_1/) and type
./bin/vipca
Click next on the Welcome screen.


Verify your eth0 configuration and then click next.


vipca should now proceed with the VIP installation... you simply have to wait.


At the very end a summary should be shown. Click the Exit button.


We have not finished yet... we have to switch back to the first node and click the OK button.


The installation is complete. Click the Exit button.


Our final step is to execute the command suggested earlier when we got the SEVERE error.
I took the command from the installation log on the first node and saved it into another file (for example SEVERE_COMMAND.txt).
Then I issued the following command:
scp SEVERE_COMMAND.txt rac2:SEVERE_COMMAND.txt
to copy that file to the second node (I used the root user, so Linux asked me for the password of rac2's root user).
Finally, on the second node as the root user, open the file and execute that command from another root terminal. That's all.
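The copy-and-run step above can be sketched as follows (assuming the second node is reachable as rac2 and that SEVERE_COMMAND.txt contains a single runnable command; adjust to your setup):

```shell
# on the first node, as root
scp SEVERE_COMMAND.txt rac2:SEVERE_COMMAND.txt
# then on the second node, as root: inspect the command first, then run it
ssh rac2 'cat SEVERE_COMMAND.txt && sh SEVERE_COMMAND.txt'
```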



In the next step we will install Oracle Database 10g R2 on the Real Application Cluster formed by our two nodes.