Wednesday, February 21, 2007

Oracle 10g RAC installation: 10 Oracle Cluster File System configuration

In the next step we will use Oracle Cluster File System just for the cluster manager files, that is, the OCR and the Voting Disk.
Start your first node and, as root user, issue:
ocfs2console
ocfs2console is a GUI front-end for managing OCFS2 volumes.

Select Cluster->Configure Nodes... from the menu, like in the picture


Select Close on the next popup window like in the picture

Select Add and type your first node IP address and name like in the picture

Select Add again and add your second node IP address and name. Your final configuration should be like in the picture.


Start your second node and wait for the welcome screen.
From your first node, select Cluster->Propagate Configuration... from the menu to copy the file /etc/ocfs2/cluster.conf located on your first node to the second node.


In fact, before you run the propagation command, you can see from your second node that the /etc/ocfs2/cluster.conf file doesn't exist. The propagation command uses ssh to log in to the other node with your root account, so you have to give the root password to establish the secure copy from rac1 to rac2, like in the picture. When you see Finished! you can close that terminal window.
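After the propagation, both nodes should have the same /etc/ocfs2/cluster.conf. As a reference, the file should look roughly like this (the IP addresses below are only placeholders; yours will be the ones you typed in the GUI):

node:
        ip_port = 7777
        ip_address = 192.168.1.101
        number = 0
        name = rac1
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.1.102
        number = 1
        name = rac2
        cluster = ocfs2

cluster:
        node_count = 2
        name = ocfs2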


The next step is to configure the o2cb driver (do you remember when we closed the popup window?). At that time we loaded that driver; now, to configure it (on both nodes), we first have to unload it, so as root user type:
/etc/init.d/o2cb unload


and then
/etc/init.d/o2cb configure
Type y, press Enter to accept the defaults, and type 61 for the heartbeat dead threshold (the fence time will be (heartbeat dead threshold - 1) * 2, so (61 - 1) * 2 = 120 seconds).
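For reference, the dialogue should look roughly like this (the exact prompt wording depends on your ocfs2-tools version, so treat it as a sketch):

/etc/init.d/o2cb configure
Load O2CB driver on boot (y/n) [n]: y
Specify heartbeat dead threshold (>=7) [7]: 61
Writing O2CB configuration: OK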


Perform the same actions on the second node!!!

Now, only from the first node, we have to format the file system, so run ocfs2console again from a root terminal and select Task->Format... from the menu, like in the picture.


Select Ok and then confirm the format process by clicking on the YES button. Perform these actions from one node only!!!
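If you prefer the command line to the GUI, the same format can be done with mkfs.ocfs2. This is just a sketch: the label, block size, cluster size and number of node slots are assumptions for a default two-node setup, so adapt them to what you selected in the Format window:

mkfs.ocfs2 -b 4K -C 32K -N 2 -L ocfs /dev/sdb1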


You should now see something like the following picture.


Now, from both nodes, we have to mount the file system and then configure it to mount at boot, so type:
mkdir /ocfs
mount -t ocfs2 -o datavolume,nointr /dev/sdb1 /ocfs

and then, again from both nodes, add the following line to the /etc/fstab file:
/dev/sdb1 /ocfs ocfs2 _netdev,datavolume,nointr 0 0
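
To verify that the volume is really mounted on both nodes, you can run a quick check:

mount -t ocfs2
df -h /ocfs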

From the first node only, I will create the directory for the OCR and Voting Disk files, so I issue the following commands:
mkdir /ocfs/clusterware
chown -R oracle:dba /ocfs

Now you should be able to read and write into that directory from both nodes.
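A quick way to test it is to create a file as the oracle user on the first node and read it from the second one (test_file is just a throwaway name for this check):

# on rac1, as oracle
touch /ocfs/clusterware/test_file
# on rac2, as oracle
ls -l /ocfs/clusterware/test_file
rm /ocfs/clusterware/test_file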




Oracle 10g RAC installation: 09 ASM configuration

Now we have to create our ASM disks and configure the Oracle Cluster File System where we'll install the Oracle Clusterware.
Start the first node and, first of all, simply type as root user
/etc/init.d/oracleasm configure
Type oracle when prompted for the default user to own the driver interface, type dba as the default group, type y to start the driver on boot and type y again to
fix permissions on boot, like in the picture.
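For reference, the dialogue should look roughly like this (the prompt wording may differ slightly between oracleasm versions):

/etc/init.d/oracleasm configure
Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Fix permissions of Oracle ASM disks on boot (y/n) [y]: y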

Repeat this operation on the second node too.


From only one node (I used rac1), type as root user
fdisk -l | grep 261
to see where to create the ASM disks (261 is the number of cylinders of the shared disks we dedicated to ASM, so the grep filters the fdisk output down to just those partitions).


We have /dev/sdc1, /dev/sdd1 and /dev/sde1, so we will create the ASM disks on those devices. The syntax is very easy.
/etc/init.d/oracleasm createdisk ASMD1 /dev/sdc1
/etc/init.d/oracleasm createdisk ASMD2 /dev/sdd1
/etc/init.d/oracleasm createdisk ASMD3 /dev/sde1
Once created, use these commands to scan and list your ASM disks:
/etc/init.d/oracleasm scandisks
/etc/init.d/oracleasm listdisks
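
If everything worked, listdisks should simply print the three labels we just created:

/etc/init.d/oracleasm listdisks
ASMD1
ASMD2
ASMD3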


Execute the same commands (/etc/init.d/oracleasm scandisks, /etc/init.d/oracleasm listdisks) from the second node too, to verify that you are able to see the ASM disks.
On the second node you do not have to execute the /etc/init.d/oracleasm createdisk commands: the disks were created once from the first node, and scandisks is enough to make them visible.