Saturday, 1 December 2012
ASM Admin - 10g
ASM Administration and Management
ASM is a database file system that provides cluster file system and volume manager capabilities
for Oracle datafiles and is integrated into the Oracle database. The ASM environment
provides the performance of raw I/O with the easy manageability of a file system. It simplifies
database administration by eliminating the need to manage potentially thousands of Oracle
database files directly.
ASM simplifies storage management by enabling you to divide all the available storage into disk
groups. You manage a small set of disk groups, and ASM automates the placement of database files
within those disk groups.
ASM divides files into pieces and spreads them evenly across all the disks. This is the key
difference from traditional striping techniques, which use mathematical functions to stripe
complete logical volumes independent of files or directories. Striping requires careful capacity
planning up front, because adding a new volume requires rebalancing and downtime. With ASM,
whenever storage is added or removed, ASM does not restripe all the data; it moves only an
amount of data proportional to the storage added or removed, redistributing the files evenly
and maintaining a balanced I/O load across the disks. This occurs while the database is up
and running and is completely transparent to the database and end-user applications.
Disk groups:-
An ASM disk group is a group of disks that are managed together as a single unit of storage.
Disk groups provide three types of redundancy: external, normal and high. An ASM disk can be
a partition of a LUN or a NAS device.
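As a sketch, the three redundancy levels map onto CREATE DISKGROUP syntax like this (the disk names here are hypothetical placeholders):

```sql
-- External redundancy: mirroring is left to the storage array
CREATE DISKGROUP dg_ext EXTERNAL REDUNDANCY
  DISK 'ORCL:DISKA';

-- Normal redundancy: two-way mirroring across at least two failure groups
CREATE DISKGROUP dg_norm NORMAL REDUNDANCY
  FAILGROUP fg1 DISK 'ORCL:DISKB'
  FAILGROUP fg2 DISK 'ORCL:DISKC';

-- High redundancy: three-way mirroring across at least three failure groups
CREATE DISKGROUP dg_high HIGH REDUNDANCY
  FAILGROUP fg1 DISK 'ORCL:DISKD'
  FAILGROUP fg2 DISK 'ORCL:DISKE'
  FAILGROUP fg3 DISK 'ORCL:DISKF';
```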
Allocation unit:-
ASM disks are divided into allocation units: storage blocks small enough to avoid hot spots,
yet large enough for efficient sequential access. The allocation unit defaults to 1 MB, which
is sufficient for most configurations. ASM allows you to change the allocation unit size, but
that is not normally required unless the database is very large.
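In 10g the allocation unit size is fixed at instance level; from 11g onward it can be set per disk group at creation time. A sketch of the 11g syntax (disk name hypothetical):

```sql
-- 11g+ only: use a 4 MB allocation unit for this disk group
CREATE DISKGROUP dg_bigau EXTERNAL REDUNDANCY
  DISK 'ORCL:DISKA'
  ATTRIBUTE 'au_size' = '4M';
```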
Failure groups:-
Failure groups define ASM disks that share a common potential failure mechanism. A failure group
is a subset of disks in a disk group that depends on a common hardware resource whose failure must
be tolerated; it matters only for normal and high redundancy configurations. Redundant copies of
the same data are placed in different failure groups, so failure groups determine which
ASM disks are used for storing redundant copies of data. By default, each disk is in its own
failure group.
For example, with two-way mirroring, ASM automatically stores the redundant copies of a file's
extents in separate failure groups. Failure groups apply only to normal and
high redundancy disk groups and are not applicable to external redundancy disk groups.
ASM Files:-
Files written on ASM disks are called ASM files. ASM file names normally start with a plus (+)
symbol. Although the names are generated automatically by ASM, we can define a meaningful,
user-defined alias for each ASM file. Each ASM file is completely contained in a single
disk group and is evenly spread across all the disks in that disk group.
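For example, from the database instance you can simply name a disk group and let ASM generate the file name (ORADATA here refers to the disk group created later in this post):

```sql
-- ASM generates a fully qualified name of the form
-- +ORADATA/<db_name>/datafile/<tablespace>.<file#>.<incarnation#>
CREATE TABLESPACE users2 DATAFILE '+ORADATA' SIZE 100M;
```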
Storage area network:-
A storage area network is networked storage connected via uniquely identified host
bus adapters (HBAs). This storage is divided into LUNs, and each LUN is logically presented
as a single disk to the operating system.
ASM disks are either LUNs or disk partitions, presented to ASM as raw devices. The name and
path of a raw device depend on the operating system. For example, on Sun a raw device is
named cNtNdNsN:
cN - controller number
tN - target ID (SCSI ID)
dN - disk number
sN - slice number
So when you see a raw partition on Sun listed as c0t0d2s3, it is slice 3 of disk 2 at
target 0 on controller 0.
A typical Linux configuration uses straight disks; raw functionality was an afterthought. However,
Linux imposes a limit of 255 possible raw devices, and this limitation is one of the reasons for the
development of Oracle Cluster File System and the use of ASMLib. Raw devices are typically
found in /dev/raw and are named raw1 to raw255.
ASM Instance:-
An ASM instance has its own SGA and background processes, similar to the RDBMS architecture.
1) An ASM instance does not have a data dictionary, as metadata is not stored
in a dictionary. ASM metadata is small and stored in the disk headers.
2) You can connect to ASM only as SYSDBA, either using OS authentication
or remotely using the password file.
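A minimal sketch of both connection styles from SQL*Plus (the instance name +ASM1 and the password placeholder are examples):

```sql
-- With ORACLE_SID=+ASM1 set in the environment, OS authentication:
CONNECT / AS SYSDBA

-- Or remotely, authenticated against the password file:
CONNECT sys@+ASM1 AS SYSDBA
```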
Startup Modes:-
NOMOUNT -> Starts the ASM instance without mounting any disk groups.
MOUNT -> Starts the ASM instance and mounts the disk groups.
OPEN -> Starts the ASM instance, mounts the disk groups and allows connections from databases.
This is the default startup option.
FORCE -> Performs STARTUP MOUNT after a SHUTDOWN ABORT.
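The modes above correspond to the following SQL*Plus commands issued against the ASM instance (a sketch, following the descriptions above):

```sql
STARTUP NOMOUNT;   -- ASM instance started, no disk groups mounted
STARTUP MOUNT;     -- disk groups listed in ASM_DISKGROUPS are mounted
STARTUP;           -- default: mount disk groups and accept database connections
STARTUP FORCE;     -- SHUTDOWN ABORT followed by a normal startup
```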
Shutdown :-
You cannot shut down the ASM instance while databases are connected to it.
SQL> shutdown
ORA-15097: cannot SHUTDOWN ASM instance with connected RDBMS instance
You need to shut down all the databases that are using the ASM instance before proceeding
with shutting down the ASM instance.
Shutdown abort -> The ASM instance is terminated immediately and does not dismount the disk groups
in an orderly fashion. All database instances connected to this ASM instance will be terminated,
because they lose access to the storage managed by the ASM instance. The next startup requires
ASM recovery.
Normal, immediate, transactional -> The ASM instance shuts down once all ASM SQL statements
are completed (the databases should be down before shutting down the ASM instance).
ASM Background processes:-
An ASM instance shares a similar architecture with an RDBMS instance, so most of the
processes are the same. Below are the ASM-related background processes (some of which
run in the database instance).
ASMB -> The ASMB background process runs in the database instance and connects to a
foreground process in the ASM instance. Over this connection, periodic messages are
exchanged to update statistics and to verify that both instances are healthy.
All extent maps describing the open files are sent to the database instance via ASMB.
If the extents of an open file are relocated or the status of a disk is changed, messages
are received by the ASMB process in the affected database instance.
During operations that require ASM intervention, such as file creation by a database
foreground process, that foreground connects directly to the ASM instance to perform
the operation. Each database maintains a pool of connections to the ASM instance to
avoid the overhead of reconnecting for every file operation.
O001 -> A group of slave processes, O001 to O010, establish connections to the ASM instance,
and these slaves are used as a connection pool for database processes. Database
processes can send messages to the ASM instance using these slaves. For example,
opening a file sends the open request to the ASM instance via a slave. However, slaves
are not used for long-running operations such as creating a file. The slave connections
eliminate the overhead of logging into the ASM instance for short requests. These
slave processes are automatically shut down when not in use.
RBAL -> In the ASM instance, RBAL coordinates rebalance activity for disk groups; the
actual extent relocation is performed by the ARBn slave processes.
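One way to see which background processes are actually running in the ASM instance is to query V$BGPROCESS (a sketch; run while connected to the +ASM instance):

```sql
-- Background processes with a non-null process address are active
SELECT name, description
FROM   v$bgprocess
WHERE  paddr <> '00'
ORDER  BY name;
```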
Initialization parameters:-
instance_type - Instructs the Oracle executables about the instance type. By default, the
executables assume the instance type is a database instance. This is the only mandatory
parameter in an ASM instance; all the other parameters have suitable defaults
when not specified.
asm_power_limit - Sets the power limit for disk rebalancing. This parameter defaults to 1.
Valid values are 0 to 11.
asm_diskstring - A comma-separated list of strings that limits the set of disks ASM discovers.
This parameter accepts wildcard characters; only disks that match one of the strings are
discovered.
asm_diskgroups - A list of names of disk groups to be mounted by an ASM instance at startup, or
when the ALTER DISKGROUP ALL MOUNT statement is used. If this parameter is not specified, no
disk groups are mounted. The parameter is dynamic when using an spfile.
large_pool_size - The internal packages used by the ASM instance are executed from the large
pool, therefore you should set large_pool_size to a value greater than 8 MB.
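Putting these together, a minimal ASM pfile might look like this (all values and paths are example assumptions, not a prescription):

```ini
# init+ASM1.ora - example ASM instance parameter file
instance_type   = asm
asm_power_limit = 1
asm_diskstring  = 'ORCL:*'
asm_diskgroups  = 'DATA1','FLASH'
large_pool_size = 12M
```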
Creating ASM Diskgroups:-
SQL> col path form a40;
SQL> select path, state, header_status, total_mb, free_mb, create_Date, group_number from V$asm_disk;
PATH STATE HEADER_STATU TOTAL_MB FREE_MB CREATE_DA GROUP_NUMBER
---------------------------------------- -------- ------------ ---------- ---------- --------- ------------
ORCL:MANZY3 NORMAL PROVISIONED 1137 0 0
ORCL:MANZY1 NORMAL PROVISIONED 964 0 0
ORCL:MANZY2 NORMAL PROVISIONED 964 0 0
ORCL:DISK1 NORMAL MEMBER 1286 1249 18-DEC-11 1
ORCL:DISK10 NORMAL MEMBER 1286 891 18-DEC-11 4
ORCL:DISK11 NORMAL MEMBER 1286 891 18-DEC-11 4
ORCL:DISK12 NORMAL MEMBER 1255 1020 18-DEC-11 2
ORCL:DISK2 NORMAL MEMBER 1286 1256 18-DEC-11 1
ORCL:DISK3 NORMAL MEMBER 1286 1255 18-DEC-11 1
ORCL:DISK4 NORMAL MEMBER 1255 1016 18-DEC-11 2
ORCL:DISK5 NORMAL MEMBER 1286 1254 28-FEB-12 3
ORCL:DISK6 NORMAL MEMBER 1286 1254 28-FEB-12 3
ORCL:DISK7 NORMAL MEMBER 1286 1253 28-FEB-12 3
ORCL:DISK8 NORMAL MEMBER 1255 1018 18-DEC-11 2
ORCL:DISK9 NORMAL MEMBER 1286 892 18-DEC-11 4
ORCL:NEWDISK1 NORMAL MEMBER 1019 997 28-FEB-12 5
ORCL:NEWDISK2 NORMAL MEMBER 1019 1003 28-FEB-12 5
ORCL:NEWDISK3 NORMAL MEMBER 1019 1003 28-FEB-12 5
Here we can see that we have three provisioned disks (ORCL:MANZY1/MANZY2/MANZY3). We can create new
diskgroups using the provisioned/candidate disks. If the header status shows MEMBER, the disk is
already part of a disk group; at any point in time a disk can be part of only one diskgroup.
Creating diskgroup with external redundancy.
SQL> create diskgroup oradata external redundancy disk 'ORCL:MANZY3';
Diskgroup created.
SQL> select name, type, state from V$asm_diskgroup;
NAME TYPE STATE
------------------------------ ------ -----------
DATA1 EXTERN MOUNTED
DATA2 NORMAL MOUNTED
DATA3 EXTERN MOUNTED
FLASH EXTERN MOUNTED
NEWDISK EXTERN MOUNTED
ORADATA EXTERN MOUNTED
6 rows selected.
The diskgroup will be mounted automatically once it is created. If we use an spfile, the created
diskgroup will dynamically be added to the asm_diskgroups parameter, so that each time the asm
instance is restarted the diskgroup will be mounted automatically.
Creating diskgroup with normal redundancy.
SQL> create diskgroup oranormal normal redundancy
failgroup fg1 disk 'ORCL:MANZY1'
failgroup fg2 disk 'ORCL:MANZY2'
/
Diskgroup created.
SQL> select a.group_number, b.name, b.type, a.failgroup, a.path from V$asm_disk a, V$asm_diskgroup b where
a.group_number = b.group_number order by 1;
GROUP_NUMBER NAME TYPE FAILGROUP PATH
------------ ------------------------------ ------ ------------------------------ ----------------------------------------
1 DATA1 EXTERN DISK2 ORCL:DISK2
1 DATA1 EXTERN DISK3 ORCL:DISK3
1 DATA1 EXTERN DISK1 ORCL:DISK1
2 DATA2 NORMAL DISK12 ORCL:DISK12
2 DATA2 NORMAL DISK4 ORCL:DISK4
2 DATA2 NORMAL DISK8 ORCL:DISK8
3 DATA3 EXTERN DISK6 ORCL:DISK6
3 DATA3 EXTERN DISK5 ORCL:DISK5
3 DATA3 EXTERN DISK7 ORCL:DISK7
4 FLASH EXTERN DISK11 ORCL:DISK11
4 FLASH EXTERN DISK10 ORCL:DISK10
4 FLASH EXTERN DISK9 ORCL:DISK9
5 NEWDISK EXTERN NEWDISK1 ORCL:NEWDISK1
6 ORADATA EXTERN MANZY3 ORCL:MANZY3
7 ORANORMAL NORMAL FG1 ORCL:MANZY1
7 ORANORMAL NORMAL FG2 ORCL:MANZY2
Each disk is part of a failgroup irrespective of the redundancy level; if we have not defined a
failgroup name, the disk name becomes the failgroup name.
SQL> show parameter pfile;
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------------------------
spfile string
Here I am using a pfile for the asm instance; we will change it to use an spfile.
a) Shutdown the database
rhelrac1-> srvctl stop database -d testthis
b) shutdown the asm instance
rhelrac1-> srvctl stop asm -n rhelrac2
rhelrac1-> srvctl stop asm -n rhelrac1
c) Create an spfile from the current pfile.
rhelrac1-> sqlplus / as sysdba
SQL*Plus: Release 10.2.0.4.0 - Production on Sat Mar 17 07:08:39 2012
Copyright (c) 1982, 2007, Oracle. All Rights Reserved.
Connected to an idle instance.
SQL> create spfile='/u01/app/oracle/admin/+ASM/pfile/spfile+ASM1.ora' from pfile;
File created.
SQL> exit
Disconnected
d) Modify the pfile to point to the spfile: open the pfile in the vi editor,
remove all the parameters and add the line below.
spfile=/u01/app/oracle/admin/+ASM/pfile/spfile+ASM1.ora
e) Login to the other nodes and create the spfile from pfile as mentioned above.
f) Start the asm instance and then the db.
rhelrac1-> srvctl start asm -n rhelrac1
rhelrac1-> srvctl start asm -n rhelrac2
rhelrac1-> srvctl start database -d testthis
g) Now check; the asm instance will be using the spfile.
SQL> select inst_id, name, type, value from GV$parameter where name ='spfile';
INST_ID NAME TYPE VALUE
---------- ---------- ---------- ----------------------------------------------------------------------
1 spfile 2 /u01/app/oracle/admin/+ASM/pfile/spfile+ASM1.ora
2 spfile 2 /u01/app/oracle/admin/+ASM/pfile/spfile+ASM2.ora
Altering a Diskgroup:-
---------------------
SQL> select inst_id, name, state from GV$asm_diskgroup order by name;
INST_ID NAME STATE
---------- ---------- -----------
2 DATA1 MOUNTED
1 DATA1 MOUNTED
1 DATA2 MOUNTED
2 DATA2 MOUNTED
2 DATA3 MOUNTED
1 DATA3 MOUNTED
2 FLASH MOUNTED
1 FLASH MOUNTED
1 ORADATA DISMOUNTED
2 ORADATA DISMOUNTED
2 ORANORMAL DISMOUNTED
1 ORANORMAL DISMOUNTED
2 NEWDISK DISMOUNTED
1 NEWDISK MOUNTED
We can see that the diskgroups NEWDISK, ORADATA and ORANORMAL are not mounted yet. We will
mount them now.
SQL> alter diskgroup newdisk mount;
Diskgroup altered.
SQL> select inst_id, name, state from GV$asm_diskgroup where name = 'NEWDISK';
INST_ID NAME STATE
---------- ---------- -----------
2 NEWDISK DISMOUNTED
1 NEWDISK MOUNTED
Now NEWDISK is mounted on only one node. A diskgroup can also be mounted in RESTRICTED mode
on a single node in order to perform maintenance activities, but for normal operation the
diskgroup should be mounted on all nodes. Connect to the remaining nodes and mount the
diskgroup, and repeat the above procedure to mount the remaining diskgroups on all the nodes.
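Rather than mounting the diskgroups one by one, every diskgroup named in ASM_DISKGROUPS can be mounted in a single statement on each node:

```sql
-- Mount all diskgroups listed in the ASM_DISKGROUPS parameter
ALTER DISKGROUP ALL MOUNT;
```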
Set the asm_diskgroups parameters to mount the diskgroup at every reboot, execute the below in both nodes.
> alter system set asm_diskgroups='DATA1','DATA2','DATA3','FLASH','NEWDISK','ORADATA','ORANORMAL' SCOPE=BOTH;
SQL> select name, type, total_mb, free_mb from V$asm_diskgroup;
NAME TYPE TOTAL_MB FREE_MB
------------------------------ ------ ---------- ----------
DATA1 EXTERN 3858 3760
DATA2 NORMAL 3765 3054
DATA3 EXTERN 3858 3761
FLASH EXTERN 3858 2674
ORANORMAL NORMAL 1928 1742
ORADATA EXTERN 1137 1044
NEWDISK EXTERN 1019 926
Dropping a diskgroup:-
Now we will drop a diskgroup and give the disks to some other diskgroup.
Here we will drop the newdisk diskgroup.
SQL> select a.path, a.state, a.header_status, a.group_number, b.name from V$asm_disk a, V$asm_diskgroup b
where a.group_number = b.group_number and b.name = 'NEWDISK';
PATH STATE HEADER_STATU GROUP_NUMBER NAME
-------------------- -------- ------------ ------------ ------------------------------
ORCL:NEWDISK1 NORMAL MEMBER 5 NEWDISK
To drop a diskgroup it should be mounted on only one node; if the diskgroup is mounted on all
the nodes and you try to drop it, you will get the error below.
SQL> drop diskgroup newdisk;
drop diskgroup newdisk
*
ERROR at line 1:
ORA-15039: diskgroup not dropped
ORA-15073: diskgroup NEWDISK is mounted by another ASM instance
First dismount it on all the other nodes (here, the 2nd node) and then drop the diskgroup on the first node.
SQL> alter diskgroup newdisk dismount;
Diskgroup altered.
Then drop it on the first node.
SQL> drop diskgroup newdisk;
Diskgroup dropped.
SQL> select path,header_status from V$asm_disk where path = 'ORCL:NEWDISK1';
PATH HEADER_STATU
---------------------------------------- ------------
ORCL:NEWDISK1 FORMER
Now the header status shows FORMER, which means the disk has been removed from the diskgroup.
Do not add a disk taken from another diskgroup until its header status has changed
to FORMER.
Now we can add this disk to another diskgroup.
SQL> select name, type, total_mb from V$asm_diskgroup;
NAME TYPE TOTAL_MB
------------------------------ ------ ----------
DATA1 EXTERN 3858
DATA2 NORMAL 3765
DATA3 EXTERN 3858
FLASH EXTERN 3858
ORANORMAL NORMAL 1928
ORADATA EXTERN 1137
We will add the disk to the FLASH diskgroup, because currently all the datafiles reside
in this diskgroup.
SQL> alter diskgroup flash add disk 'ORCL:NEWDISK1';
Diskgroup altered.
Control returns immediately when we add/drop disks, but rebalancing happens in the
background. It is always good practice to have LUNs of the same size in a diskgroup so
that there is an equal load on all the disks.
This striping/rebalancing is transparent to the application and end users. No
outage is required for adding/dropping a disk in a diskgroup.
SQL> select * from V$asm_operation;
GROUP_NUMBER OPERA STAT POWER ACTUAL SOFAR EST_WORK EST_RATE EST_MINUTES
------------ ----- ---- ---------- ---------- ---------- ---------- ---------- -----------
4 REBAL RUN 1 1 21 252 524 0
Alternatively, we can include the rebalance power in the same statement with the WAIT clause,
so that control is not returned until the rebalancing is completed.
> alter diskgroup flash add disk 'ORCL:NEWDISK1' rebalance power 5 wait;
SQL> select name, state, total_mb from V$asm_diskgroup;
NAME STATE TOTAL_MB
------------------------------ ----------- ----------
DATA1 MOUNTED 3858
DATA2 MOUNTED 3765
DATA3 MOUNTED 3858
FLASH MOUNTED 4877
ORANORMAL MOUNTED 1928
ORADATA MOUNTED 1137
6 rows selected.
Now we can see that the size of the FLASH diskgroup has increased. Next we will remove
a disk from the diskgroup.
SQL> SELECT HEADER_STATUS , path,name , total_mb from V$asm_disk where group_number = (select distinct group_number from V$asm_diskgroup
where name ='FLASH');
HEADER_STATU PATH NAME TOTAL_MB
------------ ---------------------------------------- ------------------------------ ----------
MEMBER ORCL:DISK10 DISK10 1286
MEMBER ORCL:DISK11 DISK11 1286
MEMBER ORCL:DISK9 DISK9 1286
MEMBER ORCL:NEWDISK1 NEWDISK1 1019
Currently four disks are attached to this diskgroup; let us remove the last one (NEWDISK1).
SQL> alter diskgroup flash drop disk newdisk1 rebalance power 6 wait;
Diskgroup altered.
You can use the UNDROP clause to cancel a pending drop of a disk, as long as the drop operation
has not yet completed. If the drop has already completed, UNDROP cannot restore the disk; it
must be added back with ALTER DISKGROUP ... ADD DISK.
SQL> alter diskgroup flash undrop disks;
SQL> SELECT HEADER_STATUS , path,name , total_mb, free_mb from V$asm_disk where group_number = (select distinct group_number from V$asm_diskgroup
where name ='FLASH' );
HEADER_STATU PATH NAME TOTAL_MB FREE_MB
------------ ---------------------------------------- ------------------------------ ---------- ----------
MEMBER ORCL:DISK10 DISK10 1286 892
MEMBER ORCL:DISK11 DISK11 1286 891
MEMBER ORCL:DISK9 DISK9 1286 891
Rebalancing has completed, and now we can see that FREE_MB is almost equal on all the disks.
Resizing a disk in a diskgroup:-
If you do not specify a new size in the SIZE clause then ASM uses the size of the disk
as returned by the operating system. The new size is written to the ASM disk header
and if the size of the disk is increasing, then the new space is immediately available for
allocation. If the size is decreasing, rebalancing must relocate file extents beyond the
new size limit to available space below the limit. If the rebalance operation can
successfully relocate all extents, then the new size is made permanent, otherwise the
rebalance fails.
SQL> SELECT HEADER_STATUS , path,name , total_mb, free_mb from V$asm_disk where group_number = (select distinct group_number from V$asm_diskgroup
2 where name ='DATA3');
HEADER_STATU PATH NAME TOTAL_MB FREE_MB
------------ ---------------------------------------- ------------------------------ ---------- ----------
MEMBER ORCL:DISK5 DISK5 1286 1254
MEMBER ORCL:DISK6 DISK6 1286 1254
MEMBER ORCL:DISK7 DISK7 1286 1253
SQL> alter diskgroup data3
resize disk 'DISK5' size 1000m
disk 'DISK6' size 1000m
disk 'DISK7' size 1000m;
SQL> SELECT HEADER_STATUS , path,name , total_mb, free_mb, failgroup from V$asm_disk where group_number = (select distinct group_number from V$asm_diskgroup
2 where name ='DATA3');
HEADER_STATU PATH NAME TOTAL_MB FREE_MB FAILGROUP
------------ ---------------------------------------- ------------------------------ ---------- ---------- ------------------------------
MEMBER ORCL:DISK5 DISK5 1000 968 DISK5
MEMBER ORCL:DISK6 DISK6 1000 968 DISK6
MEMBER ORCL:DISK7 DISK7 1000 967 DISK7
Now we can see that the size of all the disks has been reduced to 1000 MB.
Let us look at the calculation of FREE_MB for a normal redundancy disk group.
SQL> select HEADER_STATUS , path,name , total_mb, free_mb, failgroup from V$asm_disk where group_number = (select distinct group_number from V$asm_diskgroup
where name ='DATA2');
HEADER_STATU PATH NAME TOTAL_MB FREE_MB FAILGROUP
------------ ---------------------------------------- ------------------------------ ---------- ---------- ------------------------------
MEMBER ORCL:DISK12 DISK12 1255 1020 DISK12
MEMBER ORCL:DISK4 DISK4 1255 1016 DISK4
MEMBER ORCL:DISK8 DISK8 1255 1018 DISK8
SQL> SELECT name, type, total_mb, free_mb, required_mirror_free_mb,
usable_file_mb FROM V$ASM_DISKGROUP where name = 'DATA2';
NAME TYPE TOTAL_MB FREE_MB REQUIRED_MIRROR_FREE_MB USABLE_FILE_MB
------------------------------ ------ ---------- ---------- ----------------------- --------------
DATA2 NORMAL 3765 3054 1255 899
REQUIRED_MIRROR_FREE_MB indicates the amount of space that must be
available in a disk group to restore full redundancy after the worst failure that can
be tolerated by the disk group. The amount of space displayed in this column
takes the effects of mirroring into account. The value is computed as follows:
For a normal redundancy disk group, the value is the total raw space for all of the
disks in the largest failure group. The largest failure group is the one with the
largest total raw capacity. For example, if each disk is in its own failure group, then
the value would be the size of the largest capacity disk.
For a high redundancy disk group, the value is the total raw space for all of the
disks in the two largest failure groups.
USABLE_FILE_MB indicates the amount of space that can be used to store new files; here we
have 899 MB of space available for new files.
Usable_file_mb = (Free_mb - Required_mirror_free_mb) / 2
(3054 - 1255) / 2 = 1799 / 2 ≈ 899
(the divisor 2 accounts for normal, i.e. two-way, redundancy)
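The same arithmetic can be checked directly against the view (a sketch, using the DATA2 disk group from above):

```sql
-- For normal redundancy: usable = (free - required_mirror_free) / 2
SELECT name,
       free_mb,
       required_mirror_free_mb,
       (free_mb - required_mirror_free_mb) / 2 AS computed_usable_mb,
       usable_file_mb
FROM   v$asm_diskgroup
WHERE  name = 'DATA2';
```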
How ASM Manages Disk Failures
=============================
Depending on the redundancy level of a disk group and how you define failure
groups, the failure of one or more disks could result in either of the following:
1) The disks are first taken offline and then automatically dropped. In this case, the
disk group remains mounted and serviceable. In addition, because of mirroring,
all of the disk group data remains accessible. After the disk drop operation, ASM
performs a rebalance to restore full redundancy for the data on the failed disks.
2) The entire disk group is automatically dismounted, which means loss of data
accessibility.
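After a suspected failure, one way to check whether ASM has taken any disks offline is a query like the following sketch:

```sql
-- Disks whose MODE_STATUS is not ONLINE are offline or being dropped
SELECT group_number, name, path, mode_status, state
FROM   v$asm_disk
WHERE  mode_status <> 'ONLINE';
```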