Wednesday, July 28, 2010

HOW TO SET UP RAC IN 10g USING OPENFILER FOR VIRTUAL STORAGE

2-NODE RAC SETUP WITH OPENFILER
PART A – Openfiler Setup

- Install Openfiler on a machine (leave 60 GB of free space on the hard disk)
- Log in as the root user
- Start the iSCSI target service
* # service iscsi-target start
* # chkconfig --level 345 iscsi-target on
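To confirm the target service is running and set to start at boot, a quick check:

# service iscsi-target status      # should report the target as running
# chkconfig --list iscsi-target    # runlevels 3, 4 and 5 should show "on"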

PART B – Configuring Storage on Openfiler

- From any client machine, open a browser and access the Openfiler console (on port 446).
- Open the System tab and update the local network configuration for both nodes with the netmask (255.255.255.0).
- Create a single primary partition with the partition type "Physical Volume".
- Create a volume group.
- Now create volumes on the volume group (possibly three volumes: one for OCR, one for the voting disk and one for ASM).
Note: to present multiple volumes with Openfiler we would need multipathing, which is quite complex; that is why we go with a single volume here and carve it into partitions on the nodes later.
- Edit the properties of each volume and change the access to 'allow'.

PART C – Network Configuration on Linux Nodes

- Assign the IPs
- Edit /etc/hosts on both nodes as given below
Oracle RAC Node 1 - (node1)

Device  IP Address      Subnet          Gateway       Purpose
eth0    192.9.201.183   255.255.255.0   192.9.201.1   Connects node1 to the public network
eth1    10.0.0.1        255.255.255.0                 Connects node1 (interconnect) to node2

/etc/hosts
127.0.0.1 localhost.localdomain localhost
# Public network - (eth0)
192.9.201.183 node1.oracle.com node1
192.9.201.187 node2.oracle.com node2
# Private interconnect - (eth1)
10.0.0.1 node1-priv.oracle.com node1-priv
10.0.0.2 node2-priv.oracle.com node2-priv
# Public virtual IPs (VIP addresses) - (eth0:1)
192.9.201.184 node1-vip.oracle.com node1-vip
192.9.201.188 node2-vip.oracle.com node2-vip
# Private storage network for Openfiler
192.9.201.182 openfiler.oracle.com openfiler
Oracle RAC Node 2 - (node2)

Device  IP Address      Subnet          Gateway       Purpose
eth0    192.9.201.187   255.255.255.0   192.9.201.1   Connects node2 to the public network
eth1    10.0.0.2        255.255.255.0                 Connects node2 (interconnect) to node1

/etc/hosts
127.0.0.1 localhost.localdomain localhost
# Public network - (eth0)
192.9.201.183 node1.oracle.com node1
192.9.201.187 node2.oracle.com node2
# Private interconnect - (eth1)
10.0.0.1 node1-priv.oracle.com node1-priv
10.0.0.2 node2-priv.oracle.com node2-priv
# Public virtual IPs (VIP addresses) - (eth0:1)
192.9.201.184 node1-vip.oracle.com node1-vip
192.9.201.188 node2-vip.oracle.com node2-vip
# Private storage network for Openfiler
192.9.201.182 openfiler.oracle.com openfiler
Note: verify name resolution and connectivity from each node using ping.
For node1:
ping node2.oracle.com
ping node2
ping node2-priv.oracle.com
ping node2-priv
ping openfiler.oracle.com
ping openfiler
For node2:
ping node1.oracle.com
ping node1
ping node1-priv.oracle.com
ping node1-priv
ping openfiler.oracle.com
ping openfiler
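To save typing, a small sketch can run all of these checks at once from either node (hostnames as defined in /etc/hosts above; adjust the list per node):

# Quick connectivity check; run from either node.
for h in node1 node1-priv node2 node2-priv openfiler; do
  ping -c 2 $h > /dev/null 2>&1 && echo "$h OK" || echo "$h UNREACHABLE"
done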

PART D – Node Configuration

- Set kernel parameters (/etc/sysctl.conf); a verification sketch follows this checklist
kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 262144
net.core.rmem_max = 262144
net.core.wmem_default = 262144
net.core.wmem_max = 262144
- Configure the /etc/security/limits.conf file
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
- Configure the /etc/pam.d/login file
session required /lib/security/pam_limits.so
- Create user and groups on both nodes
# groupadd oinstall
# groupadd dba
# groupadd oper
# useradd -g oinstall -G dba oracle
# passwd oracle
- Create the required directories and set their ownership and permissions
# mkdir -p /u01/crs1020
# mkdir -p /u01/app/oracle
# chown -R oracle:oinstall /u01/
# chmod -R 755 /u01/
- Set the environment variables
$ vi .bash_profile
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORA_CRS_HOME=/u01/crs1020; export ORA_CRS_HOME
LD_ASSUME_KERNEL=2.4.19; export LD_ASSUME_KERNEL
LANG="en_US"; export LANG
- Copy all the required software onto the nodes
- Configure ssh
- Run the cluster verifier
- Create raw devices
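After working through the checklist, a minimal verification sketch (run on both nodes) confirms the kernel parameters and shell limits took effect:

# sysctl -p        # reload /etc/sysctl.conf and print the applied values
# su - oracle
$ ulimit -Sn       # soft open-files limit: expect 1024
$ ulimit -Hn       # hard open-files limit: expect 65536
$ ulimit -Su       # soft process limit: expect 2047
$ ulimit -Hu       # hard process limit: expect 16384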
 
 
 
SSH Configuration

On node1:- $ ssh-keygen -t rsa
           $ ssh-keygen -t dsa

On node2:- $ ssh-keygen -t rsa
           $ ssh-keygen -t dsa

On node1:- $ cd .ssh
           $ cat *.pub >> node1

On node2:- $ cd .ssh
           $ cat *.pub >> node2

On node1:- $ scp node1 node2:/home/oracle/.ssh
On node2:- $ scp node2 node1:/home/oracle/.ssh

On node1:- $ cat node* >> authorized_keys
On node2:- $ cat node* >> authorized_keys

Now test the ssh configuration
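A minimal test, run from each node: every command below should print the remote date without prompting for a password (the very first run may ask you to accept the host key):

$ ssh node1 date
$ ssh node1-priv date
$ ssh node2 date
$ ssh node2-priv date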
 
 
To run the cluster verifier

On node1:- $ cd /…/stage…/cluster…/cluvfy
           $ ./runcluvfy.sh stage -pre crsinst -n node1,node2

It should report a clean pass on all checks.
 
PART E – iSCSI Volume Configuration on Nodes
 
Now on both nodes
1) Open /etc/iscsi.conf and point it at the Openfiler server
   ...............
   DiscoveryAddress=openfiler.oracle.com
2) Start the iSCSI service
   # service iscsi start
   # chkconfig --level 345 iscsi on
3) Run iscsi-ls to find out which disk device the iSCSI volume is mapped to
4) Now prepare a plan for creating partitions. An example is given below
 
 
SL No   Partition Name   Raw Device Name   For What?   Approx. Size
1       /dev/sdb5        raw5              OCR         1 GB
2       /dev/sdb6        raw6              Voting      1 GB
3       /dev/sdb7        raw7              ASM         8 GB
4       /dev/sdb8        raw8              ASM         8 GB
5       /dev/sdb9        raw9              ASM         8 GB
6       /dev/sdb10       raw10             ASM         8 GB
7       /dev/sdb11       raw11             ASM         8 GB
8       /dev/sdb12       raw12             ASM         8 GB
5. Now create the 8 partitions as per the above table
# fdisk /dev/sdb
Command (m for help): n
   e (extended)
   Partition number: 1
   First cylinder: <Enter>
   Last cylinder: <Enter>
Command (m for help): p
Command (m for help): n
   l (logical)
   First cylinder: <Enter>
   Last cylinder: +1024M
..............................
(repeat n/l for the remaining logical partitions with the sizes planned above,
then write the table with w)
6. Note down the /dev/sdb* names.
7. # partprobe
8. Log in as root on node2 and run partprobe there as well.
On node1, log in as root and create the following raw device bindings
# raw /dev/raw/raw5 /dev/sdb5
# raw /dev/raw/raw6 /dev/sdb6
……………………………….
……………………………….
# raw /dev/raw/raw12 /dev/sdb12
Run ls -l /dev/sdb* and ls -l /dev/raw/raw* to confirm the above
- Repeat the same on node2
On node1 as root
# vi /etc/sysconfig/rawdevices
/dev/raw/raw5 /dev/sdb5
/dev/raw/raw6 /dev/sdb6
…………………………
…………………………
/dev/raw/raw12 /dev/sdb12
- Restart the raw service (# service rawdevices restart)
Repeat the same on node2 also
# chown -R oracle:oinstall /dev/raw/raw*
# chmod -R 755 /dev/raw/raw*
On both nodes
# vi /etc/rc.local
chown -R oracle:oinstall /dev/raw/raw*
chmod -R 755 /dev/raw/raw*
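A quick check on both nodes that the bindings are active and carry the ownership Oracle expects:

# raw -qa                  # list all active raw device bindings
# ls -l /dev/raw/raw*      # owner should show oracle:oinstall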

PART F – Clusterware Installation on Nodes

$ cd …/stage/clusterware
$ ./runInstaller
Use /dev/raw/raw5 for the OCR disk
Use /dev/raw/raw6 for the voting disk
Finish the clusterware installation
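Once the installer finishes, the standard 10g clusterware sanity checks below should show both nodes and healthy daemons (paths assume ORA_CRS_HOME=/u01/crs1020 as set earlier):

$ /u01/crs1020/bin/olsnodes           # should list node1 and node2
$ /u01/crs1020/bin/crsctl check crs   # CSS, CRS and EVM daemons should report healthy
$ /u01/crs1020/bin/crs_stat -t        # tabular status of all registered resources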
PART G – RDBMS Installation for ASM
Install the 10g R2 database software for ASM in one Oracle home.
PART H – RDBMS Installation for the Database
Install the 10g R2 database software again in a different Oracle home and then create a RAC database.


If you have any queries regarding this setup, please send me a mail at the following email id:
rajeev108.cool@gmail.com
Wish you all the best...

DATA GUARD SETUP

Follow these steps to configure Data Guard with a broker configuration.

Assumption: suppose we have two machines, machine1 and machine2, with the hostnames machine1.oracle.com and machine2.oracle.com respectively. To begin with, machine1 will be the primary machine and machine2 the standby machine.
On machine1 we have a primary database 'dg01', and we will configure a physical standby database 'dg02' on machine2.

Configuration on the primary site:

Step 1) Put the database in archivelog mode and enable force logging.

SHUTDOWN IMMEDIATE;

STARTUP MOUNT;

ALTER DATABASE ARCHIVELOG;

ALTER DATABASE FORCE LOGGING;

ALTER DATABASE OPEN;
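A quick query right after opening the database confirms both settings took effect:

SELECT log_mode, force_logging FROM v$database;
-- expected: ARCHIVELOG   YES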

Step 2) Configure standby redo log files to enable real-time apply (and fast-start failover). (Assume we have three online redo log groups.)

ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 '/u01/app/oracle/oradata/dg01/sredo04.log' SIZE 50M;

ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 '/u01/app/oracle/oradata/dg01/sredo05.log' SIZE 50M;

ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 '/u01/app/oracle/oradata/dg01/sredo06.log' SIZE 50M;

ALTER DATABASE ADD STANDBY LOGFILE GROUP 7 '/u01/app/oracle/oradata/dg01/sredo07.log' SIZE 50M;

Note: the number of standby redo log groups must be at least one more than the number of online redo log groups, and each standby logfile must be the same size as the online redo log files.
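To confirm the standby redo logs were created with the expected sizes:

SELECT group#, bytes, status FROM v$standby_log;
-- expect four rows (groups 4-7) of 52428800 bytes each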

Step 3) CREATE PFILE FROM SPFILE;

Now configure the following parameters in the pfile for the primary database:
LOG_ARCHIVE_CONFIG='DG_CONFIG=(dg01,dg02)'

DB_UNIQUE_NAME=dg01

SERVICE_NAMES=dg01

DB_FILE_NAME_CONVERT='/u01/app/oracle/oradata/dg02','/u01/app/oracle/oradata/dg01'

LOG_FILE_NAME_CONVERT='/u01/app/oracle/oradata/dg02','/u01/app/oracle/oradata/dg01'

LOG_ARCHIVE_DEST_1='location=/u01/app/oracle/oradata/dg01/archive VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=dg01'

LOG_ARCHIVE_DEST_2='service=dg02 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=dg02'

LOG_ARCHIVE_DEST_STATE_1=enable

LOG_ARCHIVE_DEST_STATE_2=enable

STANDBY_FILE_MANAGEMENT=auto

FAL_CLIENT=dg01

FAL_SERVER=dg02

LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
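One way to bring the edited parameters into effect is to rebuild the spfile from the edited pfile and restart; a minimal sketch:

SHUTDOWN IMMEDIATE;
CREATE SPFILE FROM PFILE;
STARTUP;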

Step 4) Take a backup of the entire database into a pre-created staging directory.

mkdir /home/oracle/rman (staging directory for the RMAN backup)

RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;

RMAN> BACKUP FORMAT '/home/oracle/rman/%U' DATABASE;

RMAN> BACKUP FORMAT '/home/oracle/rman/%U' CURRENT CONTROLFILE FOR STANDBY;

RMAN> SQL "ALTER SYSTEM ARCHIVE LOG CURRENT";
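Before copying anything to machine2, a quick listing confirms the backup pieces and the standby controlfile landed in the staging directory:

RMAN> LIST BACKUP SUMMARY;

$ ls -l /home/oracle/rman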

Step 5) Copy the following items from machine1 to machine2.

PASSWORD FILE

PARAMETER FILE

RMAN BACKUP TAKEN ABOVE

Note: the RMAN backup must be copied to the same directory on machine2, i.e. /home/oracle/rman.
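A sketch of the copy, assuming the password file and pfile sit in the default $ORACLE_HOME/dbs location:

# assuming default $ORACLE_HOME/dbs locations on both machines
$ scp $ORACLE_HOME/dbs/orapwdg01 machine2:$ORACLE_HOME/dbs/
$ scp $ORACLE_HOME/dbs/initdg01.ora machine2:$ORACLE_HOME/dbs/
$ scp -r /home/oracle/rman machine2:/home/oracle/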

Step 6) Configure net services for dg01 (the primary database) and dg02 (the standby database).

DG01 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = machine1.oracle.com)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = dg01)
      (INSTANCE_NAME = dg01)
    )
  )

DG02 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = machine2.oracle.com)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = dg02)
      (INSTANCE_NAME = dg02)
    )
  )
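With the entries in place, both aliases should resolve and reach a listener:

$ tnsping dg01
$ tnsping dg02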


Configuration on the standby site:

Step 7) Rename the parameter file to initdg02.ora and the password file to orapwdg02. Now make the following parameter changes for the standby database.
LOG_ARCHIVE_CONFIG='DG_CONFIG=(dg01,dg02)'

DB_UNIQUE_NAME=dg02



SERVICE_NAMES=dg02

DB_FILE_NAME_CONVERT='/u01/app/oracle/oradata/dg01','/u01/app/oracle/oradata/dg02'

LOG_FILE_NAME_CONVERT='/u01/app/oracle/oradata/dg01','/u01/app/oracle/oradata/dg02'

LOG_ARCHIVE_DEST_1='location=/u01/app/oracle/oradata/dg02/archive VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=dg02'

LOG_ARCHIVE_DEST_2='service=dg01 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=dg01'

LOG_ARCHIVE_DEST_STATE_1=enable

LOG_ARCHIVE_DEST_STATE_2=enable

STANDBY_FILE_MANAGEMENT=auto

FAL_CLIENT=dg02

FAL_SERVER=dg01

LOG_ARCHIVE_FORMAT=%t_%s_%r.arc

Step 8) Create the required directories for the standby database

cd /u01/app/oracle/admin

mkdir dg02

cd dg02

mkdir bdump udump adump cdump create pfile scripts

mkdir -p /u01/app/oracle/oradata/dg02/archive

Step 9) Configure net services for dg01 (primary) and dg02 (standby) on the standby machine as well.

DG01 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = machine1.oracle.com)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = dg01)
      (INSTANCE_NAME = dg01)
    )
  )

DG02 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = machine2.oracle.com)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = dg02)
      (INSTANCE_NAME = dg02)
    )
  )

Step 10) Start dg02 in NOMOUNT stage.

$ export ORACLE_SID=dg02
$ sqlplus / as sysdba

CREATE SPFILE FROM PFILE;

STARTUP NOMOUNT

EXIT

Step 11) Now create the standby database using the RMAN DUPLICATE command.

$ rman TARGET SYS/ORACLE@DG01

RMAN> CONNECT AUXILIARY SYS/ORACLE@DG02

RMAN> DUPLICATE TARGET DATABASE FOR STANDBY DORECOVER;

Note: execute the above commands on machine1.
These are the simple steps to configure a physical standby database for your primary database.
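The duplicate leaves dg02 mounted but not yet applying redo. Until the broker takes over management (below), redo apply would typically be started by hand on the standby; a minimal sketch:

-- on machine2, connected to dg02 as SYSDBA
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;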
As you may know, the Data Guard broker makes Data Guard management easy and centralized. The following steps will tell you how to configure the broker using the command-line utility DGMGRL.
Step 1) Statically register a service named db_name_DGMGRL (db_name.db_domain_DGMGRL if a domain is set) with the listener for the primary as well as the standby database, on the primary and standby servers respectively.
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = dg01_DGMGRL)
      (ORACLE_HOME = /u01/app/oracle/oracle/product/10.2.0/db_1)
      (SID_NAME = dg01)
    )
    (SID_DESC =
      (GLOBAL_DBNAME = dg02_DGMGRL)
      (ORACLE_HOME = /u01/app/oracle/oracle/product/10.2.0/asm)
      (SID_NAME = dg02)
    )
  )
Note: reload the listener after modifying the settings (LSNRCTL RELOAD).

Step 2) Configure the parameter DG_BROKER_START=TRUE on both the primary and the standby side.
ALTER SYSTEM SET DG_BROKER_START=TRUE SCOPE=BOTH;

Step 3) Invoke DGMGRL and create the configuration
$ dgmgrl
DGMGRL> CONNECT sys/oracle@dg01
DGMGRL> CREATE CONFIGURATION 'DGCONFIG1' AS PRIMARY DATABASE IS 'DG01' CONNECT IDENTIFIER IS 'DG01';
DGMGRL> ADD DATABASE 'DG02' AS CONNECT IDENTIFIER IS 'DG02' MAINTAINED AS PHYSICAL;
Step 4) Enable the configuration and check its status
DGMGRL> SHOW CONFIGURATION;
DGMGRL> ENABLE CONFIGURATION;
DGMGRL> SHOW CONFIGURATION;
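Once enabled, SHOW CONFIGURATION should report SUCCESS; each database can also be inspected individually:

DGMGRL> SHOW DATABASE 'DG01';
DGMGRL> SHOW DATABASE 'DG02';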


I HOPE YOU WILL FIND SOME VALUABLE INFORMATION HERE THAT ENHANCES YOUR KNOWLEDGE.
WISH YOU ALL THE BEST...