Create ACFS Filesystem Oracle 12c Linux 12.1.0.2

Create ACFS filesystem on 12c Linux Exadata
1. Create a volume in ASM

ASMCMD [+] > volcreate -G datac1 -s 500G ACFS_VOL

If you get an error like the one below:

ORA-15032: not all alterations performed
ORA-15479: ASM diskgroup does not support volumes
ORA-15221: ASM operation requires compatible.asm of 12.1.0.2.0 or higher (DBD ERROR: OCIStmtExecute

Check the Current Compatibility for the Diskgroup

SQL> select group_number, name, compatibility, database_compatibility from v$asm_diskgroup;

GROUP_NUMBER NAME    COMPATIBILITY DATABASE_COMPATIBILITY
------------ ------- ------------- ----------------------
           1 DATAC1  12.1.0.1.0    11.2.0.2.0
           2 DBFS_DG 12.1.0.0.0    11.2.0.2.0
           3 RECOC1  12.1.0.1.0    11.2.0.2.0
SQL> alter diskgroup DATAC1 set attribute 'compatible.asm'='12.1.0.2.0';

Diskgroup altered.

SQL> alter diskgroup RECOC1 set attribute 'compatible.asm'='12.1.0.2.0';

Diskgroup altered.

SQL> alter diskgroup DBFS_DG set attribute 'compatible.asm'='12.1.0.2.0';

Diskgroup altered.

SQL> select group_number, name,compatibility, database_compatibility from v$asm_diskgroup;

GROUP_NUMBER NAME    COMPATIBILITY DATABASE_COMPATIBILITY
------------ ------- ------------- ----------------------
           1 DATAC1  12.1.0.2.0    11.2.0.2.0
           2 DBFS_DG 12.1.0.2.0    11.2.0.2.0
           3 RECOC1  12.1.0.2.0    11.2.0.2.0

Run the volcreate command again:
ASMCMD [+] > volcreate -G datac1 -s 500G ACFS_VOL
2. Check the volume information
ASMCMD [+] > volinfo -G datac1 ACFS_VOL

Diskgroup Name: DATAC1

Volume Name: ACFS_VOL
Volume Device: /dev/asm/acfs_vol-45
State: ENABLED
Size (MB): 512000
Resize Unit (MB): 64
Redundancy: MIRROR
Stripe Columns: 8
Stripe Width (K): 1024
Usage:
Mountpath:
sqlplus "/ as sysasm"

SELECT volume_name, volume_device FROM V$ASM_VOLUME
WHERE volume_name = 'ACFS_VOL';

VOLUME_NAME
------------------------------
VOLUME_DEVICE
--------------------------------------------------------------------------------
ACFS_VOL
/dev/asm/acfs_vol-45

3. Create a file system with the Oracle ACFS mkfs command, using the volume device from the output above

As the root user, run the command below:

/sbin/mkfs -t acfs /dev/asm/acfs_vol-45
mkfs.acfs: version = 12.1.0.2.0
mkfs.acfs: on-disk version = 39.0
mkfs.acfs: volume = /dev/asm/acfs_vol-45
mkfs.acfs: volume size = 536870912000 ( 500.00 GB )
mkfs.acfs: Format complete.
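As a quick sanity check, the byte count mkfs.acfs reports is exactly 500 GB; a one-line shell sketch (assuming 1 GB = 1024^3 bytes) confirms it:

```shell
# 500 GB expressed in bytes; should equal the 536870912000 printed by mkfs.acfs
bytes=$((500 * 1024 * 1024 * 1024))
echo "$bytes"
```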
4. Register the file system with the acfsutil registry command.

Create a directory called ACFS

cd /
mkdir /ACFS

/sbin/acfsutil registry -a /dev/asm/acfs_vol-45 /ACFS

acfsutil registry: mount point /ACFS successfully added to Oracle Registry
Important Note 1: Registering an Oracle ACFS file system also causes the file system to be mounted automatically whenever Oracle Clusterware or the system is restarted.
Important Note 2: Oracle ACFS registration (acfsutil registry) is not supported in an Oracle Restart (standalone) configuration, which is a single-instance (non-clustered) environment.
5. Verify if ACFS filesystem mounted automatically

$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VGExaDb-LVDbSys1
30G 17G 12G 59% /
tmpfs 252G 22G 231G 9% /dev/shm
/dev/sda1 496M 54M 418M 12% /boot
/dev/mapper/VGExaDb-LVDbOra1
99G 57G 37G 61% /u01
/dev/mapper/VGExaDb-LVDbOra2
197G 68G 119G 37% /u02
/dev/mapper/VGExaDb-LVBkp1
985G 288G 648G 31% /u03
/dev/asm/acfs_vol-45 500G 1.1G 499G 1% /ACFS
As the output above shows, the ACFS filesystem mounted automatically after registration.
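If you want to verify the mount from a script rather than by eye, a small awk filter on the df output does it. The df line below is canned sample data taken from this walkthrough, so the sketch is self-contained:

```shell
# Match on the mount point (last field) of a df line; sample data, not a live query
df_line='/dev/asm/acfs_vol-45  500G  1.1G  499G  1% /ACFS'
mounted=$(echo "$df_line" | awk '$NF == "/ACFS" {print "yes"}')
echo "${mounted:-no}"
```

On a live system you would feed it `df -h` output instead of the canned line.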
6. If you did not register the ACFS filesystem, it will not mount automatically; in that case, mount it manually using the command below

As root user

/bin/mount -t acfs /dev/asm/acfs_vol-45 /ACFS

7. Give appropriate permissions to the filesystem required by Oracle users
chown -R oracle:dba /ACFS

su – oracle

cd /ACFS

touch abc.txt

Add a Disk and Extend Filesystem on RHEL7 on VMware Fusion 8 for Mac (El Capitan)

Below is the Filesystem on our VM

Filesystem           Size  Used Avail Use% Mounted on

/dev/mapper/ol-root   18G   11G  7.0G  61% /

devtmpfs             476M     0  476M   0% /dev

tmpfs                491M  144K  491M   1% /dev/shm

tmpfs                491M  7.2M  484M   2% /run

tmpfs                491M     0  491M   0% /sys/fs/cgroup

/dev/sda1            497M  177M  320M  36% /boot

tmpfs                 99M   12K   99M   1% /run/user/0

Our aim is to grow the Filesystem /dev/mapper/ol-root

Filesystems in Linux are extended using the following steps

  1. Add a Physical Disk -> needs a shutdown
  2. Create a Physical Volume, using pvcreate -> online
  3. Grow the Volume Group, using vgextend -> online
  4. Grow the Logical Volume, using lvextend -> online
  5. Extend the Filesystem (XFS), using xfs_growfs -> online
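Condensed into one sketch (the device, VG, and LV names are the ones used later in this walkthrough; the snippet only echoes each command so you can review the sequence before running it as root):

```shell
# Online-growth sequence for an XFS root on LVM; echoed only, not executed
steps="pvcreate /dev/sdc
vgextend ol /dev/sdc
lvextend --size 35G /dev/ol/root
xfs_growfs /dev/ol/root"
echo "$steps"
```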

Let us continue and see each step in detail now…

  1. Shutdown your VM and add the disk from the option Virtual Machine -> Hard Disk (SCSI)
  2. Check from Disk Utility that the new disk appears

$ pvdisplay

3. Create Physical Volume

[root@localhost ~]# pvcreate /dev/sdc

WARNING: xfs signature detected on /dev/sdc at offset 0. Wipe it? [y/n]: y

  Wiping xfs signature on /dev/sdc.

  Physical volume "/dev/sdc" successfully created

4. Extend the Volume Group

$ vgdisplay

[root@localhost ~]# vgextend ol /dev/sdc

  Volume group "ol" successfully extended

5. Extend the Logical Volume

[root@localhost ~]# lvdisplay

  --- Logical volume ---

  LV Path                /dev/ol/root

  LV Name                root

  VG Name                ol

  LV UUID                nfBnYo-iJhh-fC0y-dWqD-A5Nf-SVHX-lBwioW

  LV Write Access        read/write

  LV Creation host, time localhost.localdomain, 2016-05-13 14:49:36 +0300

  LV Status              available

  # open                 1

  LV Size                17.47 GiB

  Current LE             4472

  Segments               1

  Allocation             inherit

  Read ahead sectors     auto

  - currently set to     8192

  Block device           252:0

[root@localhost ~]# lvextend --size 35G /dev/ol/root

  Size of logical volume ol/root changed from 17.47 GiB (4472 extents) to 35.00 GiB (8960 extents).

  Logical volume root successfully resized.

[root@localhost ~]# lvdisplay

  --- Logical volume ---

  LV Path                /dev/ol/root

  LV Name                root

  VG Name                ol

  LV UUID                nfBnYo-iJhh-fC0y-dWqD-A5Nf-SVHX-lBwioW

  LV Write Access        read/write

  LV Creation host, time localhost.localdomain, 2016-05-13 14:49:36 +0300

  LV Status              available

  # open                 1

  LV Size                35.00 GiB

  Current LE             8960

  Segments               2

  Allocation             inherit

  Read ahead sectors     auto

  - currently set to     8192

  Block device           252:0

Now the logical volume is extended
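The extent counts lvdisplay reports are consistent, assuming the volume group's default 4 MiB physical-extent size (confirm the PE size on your VG with vgdisplay):

```shell
pe_mib=4                             # assumed default PE size; check vgdisplay on your VG
new_extents=$((35 * 1024 / pe_mib))  # extents needed for the 35 GiB target
old_mib=$((4472 * pe_mib))           # the original 4472 extents back in MiB (~17.47 GiB)
echo "$new_extents $old_mib"
```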

6. Resize the Filesystem

Since this is RHEL7 and the default filesystem for root is XFS, if we try to grow the FS using resize2fs like we did earlier on ext3 and ext4, it gives the error below

[root@localhost ~]# resize2fs /dev/ol/root 35G

resize2fs 1.42.9 (28-Dec-2013)

resize2fs: Bad magic number in super-block while trying to open /dev/ol/root

Couldn’t find valid filesystem superblock

If you were using RHEL6 and ext4, you could use resize2fs, but we will continue with xfs_growfs

To grow the XFS filesystem we have to use command xfs_growfs

[root@localhost ~]# xfs_growfs /dev/ol/root

meta-data=/dev/mapper/ol-root    isize=256    agcount=4, agsize=1144832 blks

         =                       sectsz=512   attr=2, projid32bit=1

         =                       crc=0        finobt=0

data     =                       bsize=4096   blocks=4579328, imaxpct=25

         =                       sunit=0      swidth=0 blks

naming   =version 2              bsize=4096   ascii-ci=0 ftype=0

log      =internal               bsize=4096   blocks=2560, version=2

         =                       sectsz=512   sunit=0 blks, lazy-count=1

realtime =none                   extsz=4096   blocks=0, rtextents=0

data blocks changed from 4579328 to 9175040
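The new data-block count matches the 35 GiB target exactly, given the 4096-byte block size shown in the output above:

```shell
bsize=4096                              # data block size reported by xfs_growfs
new_bytes=$((9175040 * bsize))          # new data-block count converted to bytes
target=$((35 * 1024 * 1024 * 1024))     # 35 GiB in bytes
echo "$new_bytes $target"
```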

Check with df -h to confirm

Filesystem           Size  Used Avail Use% Mounted on

/dev/mapper/ol-root   35G   11G   25G  31% /

devtmpfs             476M     0  476M   0% /dev

tmpfs                491M  144K  491M   1% /dev/shm

tmpfs                491M  7.2M  484M   2% /run

tmpfs                491M     0  491M   0% /sys/fs/cgroup

/dev/sda1            497M  177M  320M  36% /boot

tmpfs                 99M   16K   99M   1% /run/user/0

Resize Filesystem, Logical Volume in Linux for Virtual Machine using ext4

Often when you have a Linux VM up and running, you run out of space. I use VMware Fusion on my MacBook Pro 13-inch to do a lot of testing. To increase the disk space you can follow these simple steps

  1. Shutdown the VM and increase the disk space
  2. Go to Disk Utility; the device /dev/sda will show the additional space as empty
  3. Right-click and create a partition; once created, it will have a device name, e.g. /dev/sda3
  4. $ pvcreate /dev/sda3 (initialize the new partition as a physical volume)
  5. $ vgextend VolGroup /dev/sda3 (lvextend can only use a partition that is already in the volume group)
  6. $ vgdisplay
  7. $ lvdisplay
  8. $ lvextend /dev/VolGroup/lv_root /dev/sda3
  9. The logical volume is extended; now resize the filesystem to reflect the change
  10. $ resize2fs /dev/VolGroup/lv_root
  11. $ df -h

    A new partition /dev/sda3 is thus created and the logical volume is extended to include it. Finally, the filesystem is resized to reflect the new disk size.

Shell Script to Monitor AIX Filesystem and Send Email

The shell script below checks the filesystem mount points and, using awk, writes every filesystem exceeding 90% usage to a file called diskspacepoll. Once that is done, the sed command strips special characters like '%' from the output file, writing the cleaned result to output.log.

The next important logic is in the awk block. A variable called pattern is defined using the 90% threshold, and another variable called var holds the baseline metric. If the value of pattern exceeds var, a mail is dispatched; otherwise the script does nothing. You can put this in crontab as an every-5-minutes job to continuously poll the filesystems, and in case the threshold is exceeded it will dispatch an email immediately to the admin.


#!/bin/ksh
# Write every filesystem at or above 90% usage to diskspacepoll
# ($5+0 coerces the "91%" string to a number for the comparison)
df -P | grep -v Capacity | awk '{ if ($5+0 >= 90) print $5 }' > /home/root/diskspacepoll
# Strip special characters such as % so only the numbers remain
sed 's/[!@#\$%^&*()]//g' /home/root/diskspacepoll > /home/root/output.log
####### AWK LOGICAL BLOCK #########
# Take the worst offender; -n guards against an empty result
pattern=$(awk '$1 > 90 {print $1}' /home/root/output.log | sort -n | tail -1)
var=90
if [[ -n $pattern && $pattern -gt $var ]]
then
echo "Please Check with System Administrator" | mailx -s "90% Threshold of DiskSpace exceeded on Server 1 (ESB1)" sysadmin@company.com
fi
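To see the threshold filter in isolation, here it is run against canned `df -P` output (sample data only, so the snippet can be tried anywhere; `$5+0` coerces a string like `91%` to a number):

```shell
# Two sample data lines plus a header; only /dev/hd4 is over the 90% threshold
sample='Filesystem 512-blocks Used Available Capacity Mounted on
/dev/hd4 4194304 3800000 394304 91% /
/dev/hd2 6291456 2000000 4291456 32% /usr'
hits=$(echo "$sample" | grep -v Capacity | awk '{ if ($5+0 >= 90) print $5 }')
echo "$hits"
```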

Fix Corrupted Veritas Filesystem VXFS

A Veritas filesystem can sometimes become corrupted. This especially happens when you're replicating a filesystem. So how do you fix that? Let's begin.

Say you're importing a diskgroup called app-dg

vxdg import app-dg

#VxVM vxdg ERROR V-5-1-10978 Disk group app-dg: import failed:
Disk is in use by another host

Now let's list all the disk groups on the system

vxdisk -o alldgs list

So how do you fix it? You have to clear the host ID attached to that disk group by forcing the import

vxdg -f -C import app-dg

vxvol -g app-dg startall

Now let's mount it

mount /local/zones/app/appmount
UX:vxfs mount: ERROR: V-3-21268: /dev/vx/dsk/app-dg/appmount  is corrupted. needs checking.

Oops, so now we begin the real fixing: a full fsck.

fsck -F vxfs -o full /dev/vx/dsk/app-dg/appmount
log replay in progress
pass0 - checking structural files
pass1 - checking inode sanity and blocks
pass2 - checking directory linkage
pass3 - checking reference counts
pass4 - checking resource maps
OK to clear log? (ynq)y
flush fileset headers? (ynq)y
set state to CLEAN? (ynq)y

Now that you have repaired the file system, mount it again

mount /local/zones/app/appmount

And voila ! 😉

Create Logical Volume Group and JFS2 filesystem AIX

We had a new Oracle installation on an AIX 6.1 Power7 server. Our organization doesn't have an AIX admin, so to cover the lack of resources I decided to be the makeshift AIX admin and create the mountpoints for the Oracle installation. It was easier than I thought: the smitty tool is very easy and powerful for day-to-day tasks on AIX.

The local hard disks can be found at /dev/hdisk*. First we will create a volume group.

1. Create a Volume Group
mkvg -y oradata hdisk2

2. Create the JFS2 log Logical Volume
prmdb[/dev] # mklv -t jfs2log oradata 1
loglv01

3. List the volume group (our new volume group is called oradata)
prmdb[/dev] # lsvg
rootvg
oravg
oradata

4. Check the status of the volume group
prmdb[/dev] # lsvg -l oradata
oradata:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
loglv01 jfs2log 1 1 1 closed/syncd N/A

5. Go to smitty and create the journaled filesystem for the logical volume we just created.

prmdb[/dev] # smitty jfs2

- Add an Enhanced Journaled File System
- Select the Volume Group name "oradata" we previously created
- Put in the size in your choice of megabytes or gigabytes (to find the available size, the command is "lsvg oradata")
- Select mount at restart as yes
- Save and Exit

prmdb[/dev] # lsvg
rootvg
oravg
oradata

prmdb[/dev] # lsvg oradata
VOLUME GROUP: oradata VG IDENTIFIER: 00f7abf000004c000000013b046eb8c2
VG STATE: active PP SIZE: 128 megabyte(s)
VG PERMISSION: read/write TOTAL PPs: 799 (102272 megabytes)
MAX LVs: 256 FREE PPs: 798 (102144 megabytes)
LVs: 1 USED PPs: 1 (128 megabytes)
OPEN LVs: 0 QUORUM: 2 (Enabled)
TOTAL PVs: 1 VG DESCRIPTORS: 2
STALE PVs: 0 STALE PPs: 0
ACTIVE PVs: 1 AUTO ON: yes
MAX PPs per VG: 32512
MAX PPs per PV: 1016 MAX PVs: 32
LTG size (Dynamic): 256 kilobyte(s) AUTO SYNC: no
HOT SPARE: no BB POLICY: relocatable
PV RESTRICTION: none INFINITE RETRY: no
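The lsvg numbers tie out: 799 physical partitions at the 128 MB PP size give exactly the 102272 MB total shown above.

```shell
pp_mb=128        # PP SIZE from lsvg oradata
total_pps=799    # TOTAL PPs from lsvg oradata
total_mb=$((total_pps * pp_mb))
echo "$total_mb"
```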

prmdb[/dev] # smitty jfs2

Create the file system as "Add an Enhanced Journaled File System"
New File System size is 207618048
COMMAND STATUS

Command: OK stdout: yes stderr: no

Before command completion, additional instructions may appear below.

File system created successfully.
103805652 kilobytes total disk space.
New File System size is 207618048
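The size smitty reports is in 512-byte blocks; converting it shows it matches the 99 GB that df -g reports later for /oradata:

```shell
fs_blocks=207618048                      # New File System size from smitty (512-byte blocks)
gb=$((fs_blocks / 2 / 1024 / 1024))      # 512-byte blocks -> KB -> MB -> GB
echo "$gb"
```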

List all mounted file systems from smitty

Name Nodename Mount Pt VFS Size Options Auto Accounting
/dev/hd4 -- / jfs2 4194304 rw yes no
/dev/hd1 -- /home jfs2 1048576 rw yes no
/dev/hd2 -- /usr jfs2 6291456 rw yes no
/dev/hd9var -- /var jfs2 6291456 rw yes no
/dev/hd3 -- /tmp jfs2 6291456 rw yes no
/dev/hd11admin -- /admin jfs2 262144 rw yes no
/proc -- /proc procfs -- rw yes no
/dev/hd10opt -- /opt jfs2 2097152 rw yes no
/dev/livedump -- /var/adm/ras/livedump jfs2 524288 rw yes no
/dev/cd0 -- /cdrom cdrfs -- ro no no
/dev/locallv -- /usr/local jfs2 1048576 rw yes no
/dev/fslv00 -- /oracle jfs2 102760448 rw yes no
/dev/fslv01 -- /oradata jfs2 207618048 rw yes no

6. Create a mount point and mount the filesystem on it (Check the device name from the above entry in smitty)

prmdb[/dev] # mkdir /oradata
prmdb[/dev] # mount /dev/fslv01 /oradata
prmdb[/dev] # df -g
Filesystem GB blocks Free %Used Iused %Iused Mounted on
/dev/hd4 2.00 1.80 11% 10247 3% /
/dev/hd2 3.00 0.43 86% 56419 36% /usr
/dev/hd9var 3.00 2.63 13% 8462 2% /var
/dev/hd3 3.00 0.74 76% 6017 4% /tmp
/dev/hd1 0.50 0.39 23% 60 1% /home
/dev/hd11admin 0.12 0.12 1% 5 1% /admin
/proc – – – – – /proc
/dev/hd10opt 1.00 0.69 32% 9968 6% /opt
/dev/livedump 0.25 0.25 1% 4 1% /var/adm/ras/livedump
/dev/locallv 0.50 0.50 1% 128 1% /usr/local
/dev/fslv00 49.00 38.92 21% 5661 1% /oracle
/dev/fslv01 99.00 98.98 1% 4 1% /oradata

7. Test the mountpoint by creating a text file

prmdb[/dev] # cd /oradata
prmdb[/oradata] # touch abc.txt
prmdb[/oradata] # rm -rf abc.txt

8. Check the entries in the filesystems file; if the entry doesn't exist, create it

prmdb[/oradata] # vi /etc/filesystems
prmdb[/oradata] #

/oradata:
dev = /dev/fslv01
vfs = jfs2
log = /dev/loglv02
mount = true
options = rw
account = false

And voila! You're done configuring a persistent file system using JFS2 on AIX for your Oracle installation. From there on it is a standard Oracle installation: creating users, groups, etc.

—————————————————————————————————-