Wednesday, July 28, 2010

HOW TO SET UP 10g RAC USING OPENFILER FOR VIRTUAL STORAGE

2-NODE RAC SETUP WITH OPENFILER
PART - A Openfiler Setup

- Install Openfiler on a machine (leave 60 GB of free space on the hard disk).
- Log in as the root user.
- Start the iSCSI target service:
* # service iscsi-target start
* # chkconfig --level 345 iscsi-target on
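To confirm the target service is up and listening on the default iSCSI port (3260), you can run the following on the Openfiler box (a quick sanity check, not part of the original steps):
# service iscsi-target status
# netstat -tlnp | grep 3260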

PART - B Configuring Storage on Openfiler

- From any client machine, open a browser and access the Openfiler console (it listens on port 446, e.g. https://openfiler.oracle.com:446).
- Open the System tab and update the local network configuration for both nodes with netmask 255.255.255.0.
- Create a single primary partition with the partition type "Physical Volume".
- Create a volume group.
- Now create volumes on the volume group (possibly three volumes: one for OCR, one for the voting disk, and one for ASM).
Note: To present multiple volumes from Openfiler we would need multipathing, which is quite complex; that is why we go with a single volume here.
- Edit the properties of each volume and change access to "Allow".

PART - C Network Configuration on Linux Nodes

- Assign the IP addresses.
- Edit /etc/hosts on both nodes as given below.
Oracle RAC Node 1 - (node1)

Device  IP Address      Subnet          Gateway       Purpose
eth0    192.9.201.183   255.255.255.0   192.9.201.1   Connects the node to the public network
eth1    10.0.0.1        255.255.255.0                 Connects node1 (interconnect) to node2
/etc/hosts
127.0.0.1 localhost.localdomain localhost
#Public network – (eth0)
192.9.201.183 node1.oracle.com node1
192.9.201.187 node2.oracle.com node2
#Private Interconnect – (eth1)
10.0.0.1 node1-priv.oracle.com node1-priv
10.0.0.2 node2-priv.oracle.com node2-priv
#Public Virtual IP (VIP Addresses) – (eth0:1)
192.9.201.184 node1-vip.oracle.com node1-vip
192.9.201.188 node2-vip.oracle.com node2-vip
#Private Storage network for openfiler
192.9.201.182 openfiler.oracle.com openfiler
Oracle RAC Node 2 - (node2)

Device  IP Address      Subnet          Gateway       Purpose
eth0    192.9.201.187   255.255.255.0   192.9.201.1   Connects the node to the public network
eth1    10.0.0.2        255.255.255.0                 Connects node2 (interconnect) to node1
/etc/hosts
127.0.0.1 localhost.localdomain localhost
#Public network – (eth0)
192.9.201.183 node1.oracle.com node1
192.9.201.187 node2.oracle.com node2
#Private Interconnect – (eth1)
10.0.0.1 node1-priv.oracle.com node1-priv
10.0.0.2 node2-priv.oracle.com node2-priv
#Public Virtual IP (VIP Addresses) – (eth0:1)
192.9.201.184 node1-vip.oracle.com node1-vip
192.9.201.188 node2-vip.oracle.com node2-vip
#Private Storage network for openfiler
192.9.201.182 openfiler.oracle.com openfiler
Note: Verify connectivity by pinging from each node:
For node1:-
ping node2.oracle.com
ping node2
ping node2-priv.oracle.com
ping node2-priv
ping openfiler.oracle.com
ping openfiler
For node2:-
ping node1.oracle.com
ping node1
ping node1-priv.oracle.com
ping node1-priv
ping openfiler.oracle.com
ping openfiler

PART - D Node Configuration

- Set the kernel parameters in /etc/sysctl.conf (on both nodes):
kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 262144
net.core.rmem_max = 262144
net.core.wmem_default = 262144
net.core.wmem_max = 262144
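These settings are read at boot; to load them into the running kernel immediately, run:
# sysctl -p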
- Configure the /etc/security/limits.conf file:
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
- Configure the /etc/pam.d/login file:
session required /lib/security/pam_limits.so
- Create the user and groups on both nodes:
# groupadd oinstall
# groupadd dba
# groupadd oper
# useradd -g oinstall -G dba oracle
# passwd oracle
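You can confirm the user and its group memberships with:
# id oracle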
- Create the required directories and set their ownership and permissions:
# mkdir -p /u01/crs1020
# mkdir -p /u01/app/oracle
# chown -R oracle:oinstall /u01/
# chmod -R 755 /u01/
- Set the environment variables as the oracle user:
$ vi .bash_profile
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORA_CRS_HOME=/u01/crs1020; export ORA_CRS_HOME
LD_ASSUME_KERNEL=2.4.19; export LD_ASSUME_KERNEL
LANG="en_US"; export LANG
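Re-read the profile so the variables take effect in the current shell, and verify them:
$ . ~/.bash_profile
$ env | grep ORA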
- Copy (stage) all the required software onto the nodes.
- Configure SSH (see below).
- Run the cluster verifier (see below).
- Create the raw devices (covered in PART - E).
 
 
 
SSH Configuration
 
On node1:- $ ssh-keygen -t rsa
           $ ssh-keygen -t dsa

On node2:- $ ssh-keygen -t rsa
           $ ssh-keygen -t dsa

On node1:- $ cd .ssh
           $ cat *.pub >> node1

On node2:- $ cd .ssh
           $ cat *.pub >> node2

On node1:- $ scp node1 node2:/home/oracle/.ssh
On node2:- $ scp node2 node1:/home/oracle/.ssh

On node1:- $ cat node* >> authorized_keys
On node2:- $ cat node* >> authorized_keys
 
Now test the SSH configuration.
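A quick test from each node (the first run prompts you to accept each host key; accept so the installer is not interrupted later):
$ for host in node1 node1-priv node2 node2-priv; do ssh $host hostname; done
Each ssh should print the remote hostname without asking for a password.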
 
 
To run the cluster verifier

On node1:- $ cd /…/stage…/cluster…/cluvfy
           $ ./runcluvfy.sh stage -pre crsinst -n node1,node2

It should give a clean chit (report no verification failures).
 
PART - E iSCSI Volume Configuration on Nodes
 
Now on both nodes:
1) Open /etc/iscsi.conf and point discovery at the Openfiler host:
   ……………
   DiscoveryAddress=openfiler.oracle.com
2) Start the iSCSI service:
   # service iscsi start
   # chkconfig --level 345 iscsi on
3) Run iscsi-ls to find out which disk device the volume is mapped to.
4) Now prepare a plan for creating the partitions. An example is given below.
 
 
SL No  Partition Name  Raw Device Name  For What?    Approx Size
1      /dev/sdb5       raw5             OCR          1 GB
2      /dev/sdb6       raw6             Voting disk  1 GB
3      /dev/sdb7       raw7             ASM          8 GB
4      /dev/sdb8       raw8             ASM          8 GB
5      /dev/sdb9       raw9             ASM          8 GB
6      /dev/sdb10      raw10            ASM          8 GB
7      /dev/sdb11      raw11            ASM          8 GB
8      /dev/sdb12      raw12            ASM          8 GB
5. Now create the 8 partitions as per the table above:
# fdisk /dev/sdb
……
Command: n          (new partition)
Partition type: e   (extended)
Partition number: 1
First cylinder: <Enter>
Last cylinder: <Enter>   (let the extended partition take all the free space)
Command: p          (print the table to confirm)
Command: n          (new partition)
Partition type: l   (logical)
First cylinder: <Enter>
Last cylinder: +1024M
…………………
(repeat n / l for the remaining logical partitions, using +8192M for the ASM ones, then w to write the table)
6. Note the /dev/sdb* names.
7. # partprobe
8. Log in as root on node2 and run partprobe there as well.
On node1, log in as root and create the following raw device bindings:
# raw /dev/raw/raw5 /dev/sdb5
# raw /dev/raw/raw6 /dev/sdb6
……………………………….
……………………………….
# raw /dev/raw/raw12 /dev/sdb12
Run ls -l /dev/sdb* and ls -l /dev/raw/raw* to confirm the above.
- Repeat the same steps on node2.
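The eight bindings can also be created in one go with a small loop (a sketch; adjust the names if your partition layout differs from the table above):
# for i in $(seq 5 12); do raw /dev/raw/raw$i /dev/sdb$i; done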
On node1 as root, make the bindings persistent across reboots:
# vi /etc/sysconfig/rawdevices
/dev/raw/raw5 /dev/sdb5
/dev/raw/raw6 /dev/sdb6
…………………………
…………………………
/dev/raw/raw12 /dev/sdb12
- Restart the raw devices service (# service rawdevices restart).
- Repeat the same on node2.
# chown -R oracle:oinstall /dev/raw/raw*
# chmod -R 755 /dev/raw/raw*
On both nodes, add the same commands to /etc/rc.local so the ownership and permissions are re-applied at every boot:
# vi /etc/rc.local
chown -R oracle:oinstall /dev/raw/raw*
chmod -R 755 /dev/raw/raw*

PART - F Clusterware Installation on Nodes

$ cd …/stage/clusterware
$ ./runInstaller
Use /dev/raw/raw5 for the OCR disk.
Use /dev/raw/raw6 for the voting disk.
Finish the Clusterware installation.
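Once the installer (and the root.sh scripts it asks you to run on both nodes) finishes, you can check that the cluster stack is healthy with the standard 10g commands:
# crsctl check crs
# crs_stat -t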
PART - G RDBMS Installation for ASM
Install the 10g R2 database software for ASM into one Oracle home.
PART - H RDBMS Installation for the Database
Install the 10g R2 database software again into a different Oracle home and then create a RAC database.
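After the database is created, a quick status check with srvctl (assuming a hypothetical database name of racdb; substitute your own):
$ srvctl status database -d racdb   # racdb is a placeholder name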


If you have any queries regarding this setup, please send me a mail at the following email id:
rajeev108.cool@gmail.com
Wish you all the best...
