Wednesday, February 28, 2007

Oracle 10g RAC installation: 12 Oracle Database 10g software installation

Click here to read the previous step

Now we are ready to install the Oracle Database 10g R2 software to complete our
Real Application Cluster installation, so start both nodes and log in as the oracle user. I found it difficult to run the installation process on both nodes from my desktop machine, so I did not start the second node and instead increased the first node's virtual memory to 748 MB (it will have to run an ASM instance taking about 100 MB of RAM and the Oracle cluster instance racdb1 taking about 300 MB of RAM). If your machine has enough resources you can follow the same installation process described below with both nodes selected and running; otherwise, as I will do, you can add your second node later using the addNode.sh script located in the Clusterware home.
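For reference, the later-node approach could be sketched like this (a hypothetical sketch: the Clusterware home path matches this series, and I assume addNode.sh sits under the home's oui/bin directory; adjust to your environment):

```shell
# Sketch: adding the second node later with addNode.sh (run as the oracle
# user from the node that is already part of the cluster).
CRS_HOME=/u01/app/oracle/product/10.2.0/crs_1

# Guarded so the snippet is a no-op on machines without Clusterware:
if [ -x "$CRS_HOME/oui/bin/addNode.sh" ]; then
    "$CRS_HOME/oui/bin/addNode.sh"
fi
```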
So I started my first node and logged in as oracle user.
We have already downloaded the Oracle Database 10g R2 software, and my unzipped copy
is in my home directory, so I type:
/home/oracle/database/runInstaller
to start the Oracle Universal Installer. On the first screen click the NEXT button.
Choose Enterprise Edition for Installation Type and click NEXT.
The OUI should read your Oracle environment and suggest a name and a path for the Home Details, as in the picture.


On the Hardware Cluster Installation Mode screen, select Cluster Installation (the default option) and then check the second node (rac2) if you have also started your second node; otherwise select only the first node, as in the picture, and go on.


OUI will perform the prerequisite checks: it will warn that we should have more RAM, but we can
ignore this warning. It also suggested that I check my network configuration: I ignored that warning too.


On the Configuration Option screen select Create Database (the default option) and then click NEXT.
On the Database Configuration screen select Advanced, as in the picture, then click NEXT and then the INSTALL button.


On the Database Templates screen select General Purpose and click NEXT.


On the Database Identification screen, enter racdb for both the Global Database Name and the SID Prefix.
On the Management Options screen select Configure the Database with Enterprise Manager (EM).


On the Database Credentials screen choose a password and use it for all accounts.
On the Storage Options screen select ASM (Automatic Storage Management).


Choose your SYS password for the ASM instance.


If you choose to use an spfile for the ASM instance, you will receive this error.


So choose a pfile for your ASM instance and go on.
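For the curious, an ASM pfile is just a small text parameter file. Here is a minimal hand-written sketch; the parameter values and the /tmp location are illustrative assumptions (a real pfile would normally live under $ORACLE_HOME/dbs, and the disk string must match how your disks were provisioned):

```shell
# Write a minimal, illustrative ASM pfile (values are assumptions, not
# the ones dbca generates).
PFILE=/tmp/init+ASM.ora
cat > "$PFILE" <<'EOF'
instance_type=asm
asm_diskstring='ORCL:ASMD*'
large_pool_size=12M
remote_login_passwordfile=exclusive
EOF
```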


Click OK and dbca will start your ASM instance.


Your ASM instance is starting...


Now we have to add our disk groups to the ASM instance, so click the Create New button.


The ASM instance will discover our three devices. Select ASMD1 and ASMD2, give the new
disk group a name (I chose OMF), check Normal as the Redundancy, then click the OK button.


The ASM instance will create the OMF disk group.


The ASM instance has mounted one disk group, named OMF.


Now click the Create New button again and select ASMD3, give the disk group a name (I chose FRA), check External as the Redundancy, then click OK.


As you will see, the ASM instance has mounted two disk groups, one named OMF and the other named FRA.
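The same two disk groups could also be created by hand from SQL*Plus on the ASM instance. A guarded sketch, assuming the ASMLib disk names ASMD1-ASMD3 from the earlier steps (if your disks were provisioned differently, the DISK clauses will differ):

```shell
# Sketch: creating the OMF and FRA disk groups manually on the +ASM instance.
# Runs only where sqlplus exists; disk names are assumptions from this series.
export ORACLE_SID=+ASM
if command -v sqlplus >/dev/null 2>&1; then
    sqlplus -S / as sysdba <<'EOF'
CREATE DISKGROUP OMF NORMAL REDUNDANCY DISK 'ORCL:ASMD1', 'ORCL:ASMD2';
CREATE DISKGROUP FRA EXTERNAL REDUNDANCY DISK 'ORCL:ASMD3';
EOF
fi
```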


Select OMF and click NEXT.


Be sure to use the Oracle Managed Files option and the OMF disk group.


For your Flash Recovery Area dbca will again suggest +OMF, so click the Browse button and choose your FRA disk group instead, as in the picture.


Check Enable Archiving and then click on the NEXT button.
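If you skip the checkbox, archiving can also be enabled later from SQL*Plus. A guarded sketch, assuming only the racdb1 instance is running; note that in 10g RAC the cluster_database parameter must be FALSE while switching the log mode:

```shell
# Sketch: enabling ARCHIVELOG mode manually after database creation.
# Assumes instance racdb1 and an spfile; archive logs default to the
# flash recovery area when one is configured.
export ORACLE_SID=racdb1
if command -v sqlplus >/dev/null 2>&1; then
    sqlplus -S / as sysdba <<'EOF'
alter system set cluster_database=false scope=spfile sid='*';
shutdown immediate
startup mount
alter database archivelog;
alter system set cluster_database=true scope=spfile sid='*';
alter database open;
EOF
fi
```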


On Database Content screen select the sample schemas and then click NEXT.


On Database Services simply click on NEXT button.


You will be prompted that about 41% of your memory will be used for the database. Click OK.


On this screen you can review the initialization parameters; I left all the suggested default values.


A summary of your configuration and storage options is displayed. Click NEXT.


Now select Create Database and click FINISH.


A summary screen will be displayed. Click OK and your database setup will start.


Dbca installation process at 2%.


Dbca installation process at 30%.


Dbca installation process at 79%.


Oracle Database 10g installation complete. Click on EXIT button.


Your cluster will start... You should see only your first node's instance, racdb1, of the database racdb running.
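You can confirm this from the command line with the Clusterware tools. A guarded sketch (database name from this series; the tools must be on your PATH, for example from the CRS home's bin directory):

```shell
# Sketch: checking which CRS resources and database instances are up.
DB_NAME=racdb
if command -v crs_stat >/dev/null 2>&1; then
    crs_stat -t                            # one-line status of all CRS resources
    srvctl status database -d "$DB_NAME"   # which racdb instances are running
fi
```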


Log in as the root user from a terminal window and execute the root.sh script on your first node.
Dbca will prompt you to execute the same script on your second node if you chose to use the second node during the installation process.


Finally you have completed the installation of Oracle Database 10g on your first node. Some URLs will be shown on the last summary window.

Thursday, February 22, 2007

Oracle 10g RAC installation: 11 Setup Oracle Clusterware

Start both your first and your second node.
Then log in to your first node as the oracle user; with your browser, go to the Oracle home page and download the Oracle Database 10g R2 and the Oracle Clusterware software.

Extract the zip file (10201_clusterware_linux32.zip) into the oracle user's home directory and then type:
/home/oracle/clusterware/runInstaller
The Welcome screen will appear, so click Next:
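Before clicking through, you can optionally pre-check both nodes with Oracle's Cluster Verification Utility, which ships inside the unzipped clusterware directory (a sketch; the staging path and node names rac1/rac2 are this series' assumptions):

```shell
# Sketch: pre-install verification with cluvfy from the staging area.
STAGE_DIR=/home/oracle/clusterware
if [ -x "$STAGE_DIR/cluvfy/runcluvfy.sh" ]; then
    # Checks OS, network, and user-equivalence prerequisites on both nodes
    "$STAGE_DIR/cluvfy/runcluvfy.sh" stage -pre crsinst -n rac1,rac2
fi
```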


On the next screen you have to specify the inventory directory and credentials:
the full path of the inventory directory should be /u01/app/oracle/oraInventory and
the operating system group name should be oinstall.


Be careful in the next step! OUI does not seem to read our ORA_CRS_HOME and suggests a wrong Clusterware home directory. You should type OraCrs10g_home as the Name and /u01/app/oracle/product/10.2.0/crs_1 as the correct Path.


While testing the prerequisites, you should ignore the warning message about the RAM.


In the next step you have to add your second node (your first node should already be displayed). So click Add and specify your second node configuration.


On the Network Interface Usage screen you have to change eth0 to the PUBLIC interface type.


For the OCR (Oracle Cluster Registry) location, first select External Redundancy and then enter our VMware shared disk location, /ocfs/clusterware/ocr


Do the same for the Voting Disk: your path should be /ocfs/clusterware/vdisk
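Once Clusterware is installed, both locations can be checked from either node with the standard tools. A guarded sketch (the CRS home path matches this series):

```shell
# Sketch: verifying the OCR and the voting disk after the install.
CRS_HOME=/u01/app/oracle/product/10.2.0/crs_1
if [ -x "$CRS_HOME/bin/ocrcheck" ]; then
    "$CRS_HOME/bin/ocrcheck"                    # OCR location and integrity
    "$CRS_HOME/bin/crsctl" query css votedisk   # configured voting disks
fi
```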


Now we can begin to install the Oracle Clusterware software.


During the installation process the following error occurred. It says to execute a command from the second node once the installation is complete. Let's go on...


Now we have to execute the following scripts as the root user, in this order, waiting for each command to complete:
/u01/app/oracle/oraInventory/orainstRoot.sh on the first node
then
/u01/app/oracle/oraInventory/orainstRoot.sh on the second node


The next two commands take longer to complete, so be patient. Execute
/u01/app/oracle/product/10.2.0/crs_1/root.sh on the first node


and finally
/u01/app/oracle/product/10.2.0/crs_1/root.sh on the second node
After this fourth command you will receive an error. We have to run the Virtual IP Configuration Assistant (vipca) from the command line.


So, as the root user on the second node, go to the ORA_CRS_HOME (/u01/app/oracle/product/10.2.0/crs_1/) and type
./bin/vipca
Click Next on the Welcome screen.


Verify your eth0 configuration and then click next.


The vipca should proceed with the VIP installation... You simply have to wait.


At the very end a summary should be shown. Click the Exit button.


We have not finished yet... we have to switch back to the first node and click the OK button.


The installation is complete. Click the Exit button.


Our final step is to execute the command suggested earlier by the SEVERE error.
I took the command from the installation log on the first node and saved it to another file (for example SEVERE_COMMAND.txt).
Then I issued the following command:
scp SEVERE_COMMAND.txt rac2:SEVERE_COMMAND.txt
to copy that file to the second node (I used the root user, so Linux asked me for the password of rac2's root user).
Finally, on the second node as the root user, open the file and execute that command from another root terminal. That's all.



In the next step we will install the Oracle Database 10g R2 on our Real Application Cluster formed by our two nodes.