Check ASM Diskgroup Space and Directory Size

The script below checks the free space in each ASM disk group and, optionally, the directory sizes within a given disk group.

The script was originally written by someone at Pythian, but I can no longer recall the link to the original blog post.

Example:

./asmcmd_du.sh

DiskGroup        Total_MB     Free_MB    % Free
---------        --------     -------    ------
DATAC1           15962112    11215880        70
DBFS_DG            415296      403740        97
RECOC1            3996000     3460272        86

./asmcmd_du.sh DATAC1

DiskGroup        Total_MB     Free_MB    % Free
---------        --------     -------    ------
DATAC1           15962112    10155732        63

DATAC1 subdirectories size

Subdir          Used MB    Mirror MB
------          -------    ---------
CARATST/          55646       111356
ECCTST/          174912       349856
------          -------    ---------
Total           2799978      5600788

The script:

#!/bin/bash
# Shadab Mohammad -- 2016
#
# - If no parameter specified, show a du of each DiskGroup
# - If a parameter, print a du of each subdirectory
#

D=$1

#
# Colored thresholds (Red, Yellow, Green)
#
CRITICAL=90
WARNING=75

#
# Set the ASM env
#
ORACLE_SID=`ps -ef | grep pmon | grep asm | awk '{print $NF}' | sed s'/asm_pmon_//' | egrep "^[+]"`
export ORAENV_ASK=NO
. oraenv > /dev/null 2>&1

#
# A quick list of what is running on the server
#
ps -ef | grep pmon | grep -v grep | awk '{print $NF}' | sed s'/.*_pmon_//' | egrep "^([+]|[Aa-Zz])" | sort |\
        awk -v H="`hostname -s`" 'BEGIN {printf("%s : ", H)} {printf("%s, ", $0)} END {printf("\n")}' | sed s'/, $//'

#
# Manage parameters
#
if [[ -z $D ]]
then    # No directory provided, will check all the DG
        DG=`asmcmd lsdg | grep -v State | awk '{print $NF}' | sed s'/\///'`
        SUBDIR="No"     # Do not show the subdirectories details if no directory is specified
else
        DG=`echo $D | sed s'/\/.*$//g'`
fi

#
# A header
#
printf "\n%25s%16s%16s%14s" "DiskGroup" "Total_MB" "Free_MB" "% Free"
printf "\n%25s%16s%16s%14s\n" "---------" "--------" "-------" "------"

#
# Show DG info
#
for X in ${DG}
do
        asmcmd lsdg ${X} | tail -1 |\
        awk -v DG="$X" -v W="$WARNING" -v C="$CRITICAL"\
        'BEGIN {COLOR_BEGIN = "\033[1;" ;
                COLOR_END   = "\033[m"  ;
                RED         = COLOR_BEGIN"31m" ;
                GREEN       = COLOR_BEGIN"32m" ;
                YELLOW      = COLOR_BEGIN"33m" ;
                COLOR       = GREEN ;
               }
               {FREE = sprintf("%12d", $8/$7*100) ;
                if ((100-FREE) > W) {COLOR=YELLOW ;}
                if ((100-FREE) > C) {COLOR=RED ;}
                printf("%25s%16s%16s%s\n", DG, $7, $8, COLOR FREE COLOR_END) ;
               }'
done
printf "\n"

#
# Subdirs info
#
if [[ -z ${SUBDIR} ]]
then
        (for DIR in `asmcmd ls ${D}`
         do
                echo ${DIR} `asmcmd du ${D}/${DIR} | tail -1`
         done) | awk -v D="$D"\
        'BEGIN {printf("\n\t\t%40s\n\n", D " subdirectories size") ;
                printf("%25s%16s%16s\n", "Subdir", "Used MB", "Mirror MB") ;
                printf("%25s%16s%16s\n", "------", "-------", "---------") ;
               }
               {printf("%25s%16s%16s\n", $1, $2, $3) ;
                use += $2 ;
                mir += $3 ;
               }
         END   {printf("\n\n%25s%16s%16s\n", "------", "-------", "---------") ;
                printf("%25s%16s%16s\n\n", "Total", use, mir) ;
               }'
fi


#************************************************************************#
#* E N D O F S O U R C E *#
#************************************************************************#

Create ACFS Filesystem Oracle 12c Linux 12.1.0.2

-- Create ACFS filesystem on 12c Linux Exadata --
1. Create a volume in ASM

ASMCMD [+] > volcreate -G datac1 -s 500G ACFS_VOL

If you get an error like the one below:

ORA-15032: not all alterations performed
ORA-15479: ASM diskgroup does not support volumes
ORA-15221: ASM operation requires compatible.asm of 12.1.0.2.0 or higher (DBD ERROR: OCIStmtExecute

Check the current compatibility attributes for the disk groups:

SQL> select group_number, name, compatibility, database_compatibility from v$asm_diskgroup;

GROUP_NUMBER NAME       COMPATIBILITY   DATABASE_COMPATIBILITY
------------ ---------- --------------- ----------------------
           1 DATAC1     12.1.0.1.0      11.2.0.2.0
           2 DBFS_DG    12.1.0.0.0      11.2.0.2.0
           3 RECOC1     12.1.0.1.0      11.2.0.2.0

SQL> alter diskgroup DATAC1 set attribute 'compatible.asm'='12.1.0.2.0';

Diskgroup altered.

SQL> alter diskgroup RECOC1 set attribute 'compatible.asm'='12.1.0.2.0';

Diskgroup altered.

SQL> alter diskgroup DBFS_DG set attribute 'compatible.asm'='12.1.0.2.0';

Diskgroup altered.

SQL> select group_number, name,compatibility, database_compatibility from v$asm_diskgroup;

GROUP_NUMBER NAME       COMPATIBILITY   DATABASE_COMPATIBILITY
------------ ---------- --------------- ----------------------
           1 DATAC1     12.1.0.2.0      11.2.0.2.0
           2 DBFS_DG    12.1.0.2.0      11.2.0.2.0
           3 RECOC1     12.1.0.2.0      11.2.0.2.0

Run the volcreate command again:

ASMCMD [+] > volcreate -G datac1 -s 500G ACFS_VOL

2. Check the volume information

ASMCMD [+] > volinfo -G datac1 ACFS_VOL

Diskgroup Name: DATAC1

Volume Name: ACFS_VOL
Volume Device: /dev/asm/acfs_vol-45
State: ENABLED
Size (MB): 512000
Resize Unit (MB): 64
Redundancy: MIRROR
Stripe Columns: 8
Stripe Width (K): 1024
Usage:
Mountpath:

You can also verify the volume device from SQL*Plus:

sqlplus "/ as sysasm"

SELECT volume_name, volume_device FROM V$ASM_VOLUME
WHERE volume_name = 'ACFS_VOL';

VOLUME_NAME
------------------------------
VOLUME_DEVICE
--------------------------------------------------------------------------------
ACFS_VOL
/dev/asm/acfs_vol-45

3. Create a file system with the Oracle ACFS mkfs command, using the volume device from the output above

Run the command below as the root user:

/sbin/mkfs -t acfs /dev/asm/acfs_vol-45
mkfs.acfs: version = 12.1.0.2.0
mkfs.acfs: on-disk version = 39.0
mkfs.acfs: volume = /dev/asm/acfs_vol-45
mkfs.acfs: volume size = 536870912000 ( 500.00 GB )
mkfs.acfs: Format complete.
4. Register the file system with the acfsutil registry command.

Create a directory called ACFS

cd /
mkdir /ACFS

/sbin/acfsutil registry -a /dev/asm/acfs_vol-45 /ACFS

acfsutil registry: mount point /ACFS successfully added to Oracle Registry

Important note 1: Registering an Oracle ACFS file system also causes the file system to be mounted automatically whenever Oracle Clusterware or the system is restarted.

Important note 2: Oracle ACFS registration (acfsutil registry) is not supported in an Oracle Restart (standalone) configuration, which is a single-instance (non-clustered) environment.
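
A quick way to confirm the registration is to list the registry contents as the root user (a minimal check; the output layout varies by release):

/sbin/acfsutil registry
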
5. Verify that the ACFS file system mounted automatically

$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VGExaDb-LVDbSys1
30G 17G 12G 59% /
tmpfs 252G 22G 231G 9% /dev/shm
/dev/sda1 496M 54M 418M 12% /boot
/dev/mapper/VGExaDb-LVDbOra1
99G 57G 37G 61% /u01
/dev/mapper/VGExaDb-LVDbOra2
197G 68G 119G 37% /u02
/dev/mapper/VGExaDb-LVBkp1
985G 288G 648G 31% /u03
/dev/asm/acfs_vol-45 500G 1.1G 499G 1% /ACFS
As you can see from the output above, the ACFS file system mounted automatically after registration.
6. If you did not register the ACFS file system, it will not mount automatically. In that case, you can mount it manually using the command below.

As root user

/bin/mount -t acfs /dev/asm/acfs_vol-45 /ACFS

7. Give appropriate permissions to the filesystem required by Oracle users
chown -R oracle:dba /ACFS

su - oracle

cd /ACFS

touch abc.txt
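
Optionally, you can confirm the file system details (size, free space, backing volume) with acfsutil. A minimal check as the root user, assuming the /ACFS mount point created above:

/sbin/acfsutil info fs /ACFS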

ORA-29701: unable to connect to Cluster Synchronization Service

Problem
——
Error on starting ASM from SQLPLUS or ASMCMD

SQL> startup;
ORA-01078: failure in processing system parameters
ORA-29701: unable to connect to Cluster Synchronization Service
SQL> exit
Disconnected

Solution
——–

As Grid User

$ crsctl start res ora.cssd
CRS-2672: Attempting to start 'ora.cssd' on 'prmdb'
CRS-2672: Attempting to start 'ora.diskmon' on 'prmdb'
CRS-2676: Start of 'ora.diskmon' on 'prmdb' succeeded
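
Before retrying the ASM startup, you can optionally confirm that CSS came online; a minimal check as the Grid user:

$ crsctl check css
$ crsctl stat res ora.cssd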

$ asmcmd
Connected to an idle instance.
ASMCMD> startup
ORA-00099: warning: no parameter file specified for ASM instance

ASM instance started

Total System Global Area 317333504 bytes
Fixed Size 2221120 bytes
Variable Size 289946560 bytes
ASM Cache 25165824 bytes
ORA-15110: no diskgroups mounted

$ sqlplus "/as sysasm"

SQL*Plus: Release 11.2.0.3.0 Production on Thu Nov 15 12:53:09 2012

Copyright (c) 1982, 2011, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Automatic Storage Management option

SQL> select name,state from v$asm_diskgroup;

NAME                           STATE
------------------------------ -----------
DATA                           MOUNTED

Installing 11gR2 Real Application Clusters on Oracle Enterprise Linux 4 x86-64 (64 bit)

Installing Oracle Grid Infrastructure

This document provides a step-by-step guide to installing Oracle 11gR2 Real Application Clusters on Oracle Enterprise Linux 4, x86-64 (64-bit).
Keep in mind that this is a test installation and hence does not follow some of the best practices for installing Oracle Real Application Clusters (e.g. using a separate OS user for the Oracle Grid Infrastructure, having at least 3 voting disks, etc.).

Important Documents

Grid infrastructure installation guide for Linux
Real Application Clusters Installation guide for Linux and Unix
Clusterware Administration and Deployment Guide
Real Application Clusters Administration and Deployment Guide

Some new concepts in 11gR2 RAC


Oracle Clusterware and ASM are now installed into the same Oracle home, and this is now called the Grid Infrastructure install.

Raw devices are no longer supported for anything (Oracle Cluster Registry, voting disks, ASM disks) in new installs.

The OCR and voting disks can now be stored in ASM or on a certified cluster file system.

The redundancy level of the ASM diskgroup on which you place the voting disks determines the number of voting disks you get.
You can place

  • Only one voting disk on an ASM diskgroup configured with external redundancy
  • Only three voting disks on an ASM diskgroup configured with normal redundancy
  • Only five voting disks on an ASM diskgroup configured with high redundancy
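
Once the cluster is up, you can check how many voting disks were created and where they live; and, if you ever need to relocate them to a different diskgroup, crsctl can do that too (NEWDG is a placeholder name):

$ crsctl query css votedisk
$ crsctl replace votedisk +NEWDG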


The contents of the voting disks are automatically backed up into the OCR

ACFS (ASM Cluster File System) is only supported on Oracle Enterprise Linux 5 (and RHEL5), not on OEL4.
There is a new service called Cluster Time Synchronization Service that can keep the clocks on all the servers in the cluster synchronized (in case you don't have the Network Time Protocol (NTP) configured).

Single Client Access Name (SCAN) is a hostname in the DNS server that resolves to 3 (or at least one) IP addresses in your public network. This hostname is to be used by client applications to connect to the database (as opposed to the VIP hostnames you were using in 10g and 11gR1). SCAN provides location independence to client connections, and it makes node additions and removals transparent to the client application (meaning you don't have to edit your tnsnames.ora entries every time you add or remove a node from the cluster).
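
As an illustration, with a hypothetical SCAN name rac-scan.example.com, the DNS entry would resolve to three addresses in a round-robin fashion (the name and addresses below are made up):

$ nslookup rac-scan.example.com
Name:    rac-scan.example.com
Address: 192.168.1.121
Name:    rac-scan.example.com
Address: 192.168.1.122
Name:    rac-scan.example.com
Address: 192.168.1.123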

Oracle Grid Naming Service (GNS) provides a mechanism to make the allocation and removal of VIP addresses a dynamic process (using dynamically assigned IP addresses).

Intelligent Platform Management Interface (IPMI) integration provides a new mechanism to fence servers in the cluster when a server is not responding.

The installer can now check the OS requirements, report on the requirements that are not met, and give you fixup scripts to fix some of them (like setting kernel parameters).

The installer can also help you set up SSH between the cluster nodes.

There is a new deinstall utility that cleans up an existing or failed install.

There is a new Instantaneous Problem Detection OS tool; this tool is the next-generation OSWatcher.

And the list goes on and on.

Listed below are some of the top hardware and software requirements for the install. Please refer to the grid infrastructure install guide for all the pre-requisites.

Hardware requirements

Please refer to the install guide for all the pre-requisites.

One or more servers
Shared disk storage (SAN, NAS)
GigE or higher network switch (for the private interconnect)
At least one GigE or higher network interface card for the private interconnect
At least one network interface card for the public network
IP address requirements

  • One SCAN name (that resolves to 3 IP addresses) for the cluster
  • For each node:
    • 1 public IP address
    • 1 private IP address
    • 1 virtual IP (VIP) address
  • The SCAN, the public IP addresses and the VIPs should be in the same subnet.
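
For illustration only (the hostnames and addresses below are hypothetical, and the SCAN itself should be resolved by DNS rather than /etc/hosts), the per-node entries for a two-node cluster might look like this:

# /etc/hosts -- example only
# Public
192.168.1.101   rac1.example.com      rac1
192.168.1.102   rac2.example.com      rac2
# Virtual IPs (same subnet as the public addresses)
192.168.1.111   rac1-vip.example.com  rac1-vip
192.168.1.112   rac2-vip.example.com  rac2-vip
# Private interconnect
10.0.0.1        rac1-priv
10.0.0.2        rac2-priv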


Operating System requirements

Among other requirements (please see the install documents for the full list):

RHEL4 (or OEL4), update 7 or higher
RHEL5 (or OEL5), update 2 or higher

Software requirements list for x86-64 linux platforms
Kernel parameters

If you are part of the Red Hat Network or the Oracle Unbreakable Linux Network, you can get the oracle-validated rpm. This rpm installs all (or, in my case, most) of the required rpms and sets the kernel parameters for you. You can use up2date to install this package. This really simplifies the setup steps that you need to perform.
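
If you cannot use the oracle-validated rpm, the kernel parameters can also be set by hand in /etc/sysctl.conf. The values below are a sketch based on the typical 11gR2 documented minimums; verify them against the install guide for your exact release before using them:

# /etc/sysctl.conf -- typical 11gR2 minimum values (verify against the install guide)
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 536870912
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576

# Apply the settings without a reboot:
sysctl -p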

Set up SSH user equivalence between the cluster nodes (a manual sketch follows; the installer can also do this for you).
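
A rough manual setup, assuming the oracle user and hypothetical hostnames: generate a key pair on each node and append every node's public key to ~/.ssh/authorized_keys on all the nodes.

# On each node, as the oracle user
ssh-keygen -t rsa

# Push the public key to the other node(s); repeat for every node pair
ssh-copy-id oracle@rac2.example.com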

Set up the ntp daemon for clock synchronization on all the nodes. Remember to use the -x switch (slewing), or else cluvfy will always call this out and declare failure (a sample configuration follows).
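
On OEL/RHEL the -x option is normally added to the ntpd options file; a sketch of the relevant change:

# /etc/sysconfig/ntpd
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"

# Then restart the daemon:
service ntpd restart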

Configure ASMLIB (a combined example follows this list)

  • Install the ASMLIB packages: oracleasm-support, oracleasmlib, oracleasm
  • Run oracleasm configure on each node to configure ASMLIB
  • Run oracleasm createdisk on one node to create the needed ASM disks
  • Run oracleasm scandisks to make the disks visible on all the other nodes
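
Put together, the ASMLIB setup looks roughly like this (the device names are hypothetical; adjust them to your shared storage):

# On every node, as root: configure the ASMLIB driver (prompts for the disk owner/group, e.g. oracle/dba)
/etc/init.d/oracleasm configure

# On one node only: stamp the shared devices as ASM disks
/etc/init.d/oracleasm createdisk ASMDISK1 /dev/sdb1
/etc/init.d/oracleasm createdisk ASMDISK2 /dev/sdc1

# On all the other nodes: pick up the newly stamped disks
/etc/init.d/oracleasm scandisks
/etc/init.d/oracleasm listdisks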

Download Software for Installation

Download Oracle Grid Infrastructure Software
Download Oracle Database Software - Part 1
Download Oracle Database Software - Part 2

Grid Infrastructure Installation

Unzip the grid infrastructure software into a staging directory on one of the servers.
Log in as the oracle user.
cd <software stage home>/grid
./runInstaller


In the screen above, choose "Install and configure grid infrastructure for a cluster" and click Next.


Choose a "Typical" installation and click Next.

The above screen will be displayed with a default SCAN name and only the details of the node from which you are running the installer.
Click on the Add button and add the hostname and virtual IP name for each of the remaining nodes in the cluster.
The SCAN name has to be a valid hostname, set up in DNS, that resolves to 3 IP addresses (at least 1) in a round-robin fashion.
Replace the default shown in the screen above with the valid SCAN you will be using.

If you have not configured SSH connectivity, you can click on "SSH connectivity" and configure SSH.

You can click on "Identify network interfaces" to check and, if needed, change the interfaces being used for the public and private networks.


Click Ok to return to the Cluster Configuration screen.

Click Next

In the screen above, choose the location for the Oracle base, the home directory for the Grid Infrastructure software, the passwords for the ASM instance, and the group to be used as the OSASM group. Click Next to continue.


In the screen above, choose the name for the first ASM diskgroup to be created and the disks to use for it. The ASM instance needs to be created so that the voting disk and cluster registry can be placed on ASM. In my case I had configured the disks using ASMLIB before I started the grid infrastructure install, and the disks are built as RAID 10 on the array, so I chose external redundancy. This means that only one voting disk will be created. Click Next to continue.

In the "Prerequisite checks" screen above, the installer runs through and checks all the prerequisites (kernel parameters, SSH, NTP, packages). It displays the prerequisites that are not being met. Some of these it is able to help us fix (like kernel parameters); for the ones it can fix, it gives you a script to run from /tmp as root.

In my case, although I had used the oracle-validated package, there were some packages missing (i386 versions of libaio, unixODBC and unixODBC-devel) and some kernel parameters to be fixed.


I fixed them and ran the check again, which left the two items above. Since I know that I have enough swap space and I had not used the -x flag for the NTP setup, I chose Ignore All and continued.


In the above screen, click Finish to continue with the install.


Finally it will prompt you with the screen above to run the scripts as root.
Make sure that you run them one at a time, one node at a time.
Once the scripts have run to completion successfully, click the OK button.

In my case the cluster verification utility failed because I had not used the -x switch for NTP. I chose to skip it and fix it later.


In the screen above, click Close to exit the installer.

Your Oracle 11gR2 grid infrastructure installation is now complete.

Screen output from root.sh

First Node





Second Node