
Monday, February 22, 2010

Oracle 11gR2 RAC installation: Grid Infrastructure installation

To install the Grid Infrastructure, log in as the oracle user and start the runInstaller.











I chose the "Advanced Installation" option.






The following screenshot, taken from the runInstaller help, describes a virtual scenario made up of two RAC nodes:


What is a SCAN name?

Oracle introduced this new concept in 11gR2: SCAN stands for Single Client Access Name. Its purpose is to eliminate the need to change the tnsnames entry on every client when nodes are added to or removed from the cluster. What happens when you have several clients and you decide to add or remove a node? Without a SCAN, each client configuration would have to be updated. So you have to configure your SCAN name on your DNS, otherwise...


you will get the following error: [INS-40718] Single Client Access Name (SCAN) name: RAC3-scan could not be resolved.
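As a quick check (just a sketch: RAC3-scan comes from the error above, while the domain and IP addresses below are only examples), the SCAN name should resolve from both nodes to the addresses registered in DNS, usually three of them:

nslookup RAC3-scan
# example output:
# Name:    RAC3-scan.mydomain.com
# Address: 10.10.10.201
# Name:    RAC3-scan.mydomain.com
# Address: 10.10.10.202
# Name:    RAC3-scan.mydomain.com
# Address: 10.10.10.203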




From the previous screenshot, click the Add button to configure the second node.






Specify the public and private (interconnect) interfaces.

A new feature: it's now possible to select ASM to place the OCR and voting disks.
Before proceeding with this option, just have a read of this interesting post:

Add a disk group, selecting from the candidate disks. It was not possible to create one disk group and associate it with only one candidate ... My intention was to create, as usual and as suggested by Oracle best practices, two disk groups: DATA_RAC3 and FRA_RAC3.
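If no candidate disks appear on this screen, it can help to check from the installing node that the ASMLib labels created earlier (see the ASM library post below) are visible. A minimal check, assuming the default ASMLib configuration:

/etc/init.d/oracleasm listdisks
ls -l /dev/oracleasm/disks/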


Specify the ASM passwords


I didn't use IPMI (the Intelligent Platform Management Interface)






At this point I hadn't yet created the "Oracle Base" directory


So I created it manually


On both nodes
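A minimal sketch of what I mean here, assuming the usual OFA path /u01/app/oracle (adjust it to the Oracle Base you typed in the installer), run as root on both nodes:

mkdir -p /u01/app/oracle
chown -R oracle:oinstall /u01/app/oracle
chmod -R 775 /u01/app/oracle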
















Execute the scripts as the root user
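For an 11gR2 Grid Infrastructure installation these are typically orainstRoot.sh and root.sh; the paths below assume a default inventory location and Grid home, so use the exact paths shown by the installer, running them on each node in the order requested:

/u01/app/oraInventory/orainstRoot.sh
/u01/app/11.2.0/grid/root.sh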


The output of the scripts...















Grid Infrastructure installation finished.

Wednesday, February 3, 2010

Oracle 11gR2 RAC installation: install and configure ASM library

To install the ASM drivers, download the rpm files from http://www.oracle.com/technology/tech/linux/asmlib/index.html: for my installation (Oracle Enterprise Linux 5) I downloaded them (oracleasm-2.6.18-164.el5-2.0.5-1.el5.x86_64.rpm, oracleasmlib-2.0.4-1.el5.x86_64.rpm, oracleasm-support-2.1.3-1.el5.x86_64.rpm) from http://www.oracle.com/technology/software/tech/linux/asmlib/rhel5.html

I executed the following command on both nodes as the root user:
rpm -Uvh oracleasm*
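To verify the three packages were installed (just a quick check, not part of the original steps):

rpm -qa | grep oracleasm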




Then I needed to configure the ASM library, using the following command:
/etc/init.d/oracleasm configure

Type oracle and oinstall as the user and group that own the driver interface; type y to start the ASM library driver on boot, and y again to scan for ASM disks on boot.
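The interactive session looks roughly like this (a sketch; the exact wording of the prompts can vary between oracleasm-support versions):

Default user to own the driver interface []: oracle
Default group to own the driver interface []: oinstall
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y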


Execute the same step on the second node.


From one node only, I used the following commands to create TWO ASM disks pointing to my "PowerPath" devices:
/etc/init.d/oracleasm createdisk DG_DATA_DWH /dev/emcpowere1
/etc/init.d/oracleasm createdisk DG_FRA_DWH /dev/emcpowerf1


From the other node, to scan for the new ASM disks I issued:
/etc/init.d/oracleasm scandisks
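To confirm both nodes see the same disks (a quick check, not in the original post), the labels created above should be listed on each node:

/etc/init.d/oracleasm listdisks
# expected output:
# DG_DATA_DWH
# DG_FRA_DWH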


Verify the cluster is configured correctly for the Grid installation by running the following command as the oracle user, from one node only:
./runcluvfy.sh stage -pre crsinst -n nov2210,nov2211 -verbose











Tuesday, February 2, 2010

Oracle 11gR2 RAC installation: install and configure OCFS2

This post will describe how to install and configure OCFS2, the Oracle Cluster File System, a shared disk file system developed by Oracle Corporation and released under the GNU General Public License.
First of all, download and install on both nodes the required rpm files from the Oracle website: http://oss.oracle.com/projects/ocfs2/files/





Then run the console as root, from one node only, using the following command:
ocfs2console &




Then select Cluster->Configure Nodes... from the menu


and the following window will appear


Press the ADD button


And insert all the details of your node (specifying the private IP address)


Press the ADD button again to add the details for the second node


Click the APPLY button


Then select Cluster->Propagate Configuration... from the menu


Type YES and the root password when asked. When FINISHED! is shown, click the CLOSE button
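What gets propagated is the /etc/ocfs2/cluster.conf file. On both nodes it should end up looking roughly like this (a sketch: the node names come from this installation, while the port and private IP addresses are only examples):

node:
        ip_port = 7777
        ip_address = 192.168.2.10
        number = 0
        name = nov2210
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.2.11
        number = 1
        name = nov2211
        cluster = ocfs2

cluster:
        node_count = 2
        name = ocfs2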


Now from the first node type:
/etc/init.d/o2cb configure

Answer Y to load the O2CB driver on boot, use the default value for the cluster name, enter 61 as the heartbeat dead threshold, and accept the remaining defaults.


Repeat the same steps on the second node
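To verify the cluster stack is loaded and the cluster is online on each node (a quick check, not in the original post):

/etc/init.d/o2cb status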


Now it's time to format, selecting Tasks->Format from the menu


A list of available devices will appear. Choose a volume label and then press OK. I will set up TWO devices as shared disks






The first device is formatted


Proceed with the second one








Quit the OCFS2 console
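The same formatting can also be done from the command line with mkfs.ocfs2; a minimal sketch (the devices match the ones mounted below, while the labels are just examples and the other options are left at their defaults):

mkfs.ocfs2 -L ocfs2 /dev/emcpowerb1
mkfs.ocfs2 -L ocfs2_mirror /dev/emcpowerd1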


Now on both nodes, run:
mkdir -p /u02/ocfs2/
mkdir -p /u02/ocfs2_mirror/
mount -t ocfs2 -o datavolume,nointr /dev/emcpowerb1 /u02/ocfs2/
mount -t ocfs2 -o datavolume,nointr /dev/emcpowerd1 /u02/ocfs2_mirror/
echo "/dev/emcpowerb1 /u02/ocfs2/ ocfs2 _netdev,datavolume,nointr 0 0" >> /etc/fstab
echo "/dev/emcpowerd1 /u02/ocfs2_mirror/ ocfs2 _netdev,datavolume,nointr 0 0" >> /etc/fstab
reboot
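After the reboot, a quick way to confirm that both volumes came back mounted (not part of the original post):

mount -t ocfs2
df -h /u02/ocfs2/ /u02/ocfs2_mirror/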