Create a pem File for SSH Access on Linux

It is always good practice to disable password-based logins and use SSH keys instead. We can use pem files to log in to a remote server from our local machines. In fact, if you use AWS, the only way to SSH into a server is with pem files.

This procedure can be done on any server, cloud-based or sitting on your LAN.

1. On your local machine (the one you will connect from), generate a key pair. I prefer to keep the keys in the user's home directory.

# cd $HOME

# ssh-keygen -t rsa -b 2048

Generating public/private rsa key pair.
Enter file in which to save the key (/Users/shadab/.ssh/id_rsa): wha
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in wha.
Your public key has been saved in wha.pub.
The key fingerprint is:
The key's randomart image is:
+---[RSA 2048]----+
|                 |
|        .        |
|=   o            |
|oB . .           |
| o+ .o  S        |
|.+.o=  ..        |
|+ o*.Xo.+        |
|o  =o&.BO o      |
| + E+X++=..      |
+-----------------+

The file name I chose is "wha". This creates two files: "wha", your private key, and "wha.pub", your public key. (The "wha.pem" file does not exist yet; we create it from the private key in the next step.)



2. Keep the private key (wha) as it is and create a pem file from it

# openssl rsa -in wha -outform pem > wha.pem

writing RSA key

Now the pem file is created. The next step is to copy the public key to the remote server.

Note: If you don't have the openssl utility on your local machine, you can also create the pem file with a simple copy command; the .pem extension is just a naming convention here, and ssh accepts the private key either way.

# cp -p wha wha.pem
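To sanity-check what the copy produced, look at the first line of the key file. Below is a self-contained sketch using a throwaway key in a temp directory; the `-m PEM` flag (available in recent OpenSSH releases) forces classic PEM output at generation time, which avoids the openssl conversion entirely. The paths are illustrative.

```shell
# Throwaway demonstration: generate a key in classic PEM format, "convert"
# it with a plain copy, then inspect the header.
tmpdir=$(mktemp -d)
ssh-keygen -q -t rsa -b 2048 -m PEM -N "" -f "$tmpdir/wha"
cp -p "$tmpdir/wha" "$tmpdir/wha.pem"
head -n 1 "$tmpdir/wha.pem"   # prints: -----BEGIN RSA PRIVATE KEY-----
rm -rf "$tmpdir"
```

If the header instead reads `-----BEGIN OPENSSH PRIVATE KEY-----`, the key is in the newer OpenSSH format; it still works with `ssh -i`, but it is not strictly PEM.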



3. Copy the public key to your remote server, which needs to be accessed

# ssh-copy-id -i wha.pub root@<remote-server>

/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "wha.pub"
The authenticity of host '<remote-server>' can't be established.
ECDSA key fingerprint is SHA256:*************************.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@<remote-server>'s password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'root@<remote-server>'"
and check to make sure that only the key(s) you wanted were added.



4. Change the permissions of your local machine pem file

# chmod 400 wha.pem



5. Log in to the remote server with the pem file to verify

# ssh -i /Users/shadab/wha.pem root@<remote-server>


6. Disable password-based SSH login on the server

On the remote server, as the root user:

# vim /etc/ssh/sshd_config

Change parameter PasswordAuthentication yes to PasswordAuthentication no
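If you prefer to script that edit, a hedged sketch with sed is shown below against a scratch file standing in for /etc/ssh/sshd_config (the sample file content is made up); the pattern also catches the commented-out `#PasswordAuthentication` default line.

```shell
# Demo on a scratch file standing in for /etc/ssh/sshd_config.
conf=$(mktemp)
printf '%s\n' 'Port 22' '#PasswordAuthentication yes' > "$conf"
# Replace the directive whether or not it is commented out:
sed -i 's/^#\?PasswordAuthentication .*/PasswordAuthentication no/' "$conf"
grep PasswordAuthentication "$conf"   # PasswordAuthentication no
rm -f "$conf"
```

Always keep a working session open while testing this; a typo here can lock you out.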

Restart the SSH daemon:

# systemctl restart sshd

or, on older init systems:

# service sshd restart


P.S.: If you need to do the same for any other user on the remote server, you just have to copy the public key to that user on the remote server:

ssh-copy-id -i wha.pub oracle@<remote-server>

ssh -i /Users/shadab/wha.pem oracle@<remote-server>



Drop those Indexes and see the Magical Unicorn of Exadata do its Magic!

Now that I have got your attention with my sales pitch, let me quote a pitch we have heard quite often from Exadata sales reps: "Drop the indexes on Exadata; it is a magical unicorn and it will guarantee performance even without them." Well, 50 years of computer science begs to differ. This urban legend has to die once and for all. Indexes are computer-science constructs and one of the basics of search optimization.

Hardware like Exadata cannot always be the solution to a software problem. Exadata, though a very well-engineered system designed to run the Oracle database, cannot always compensate for wrong SQL logic. Recently we encountered an issue where a query running in a constant loop was doing a lot of user I/O and hence consuming way too much CPU. We decided to dig a little further by getting the execution plan for the query.


[code language="sql"]
SELECT  t.*                      ,
        e.RETURNCODEMSG          ,
        r.RETURNCODE     AS R_RC ,
        r.RETURNCODEDESC AS R_RCD
FROM    (
        SELECT  *
        FROM    (
                SELECT  ROWNUM rown, t1.*
                FROM    (
                        SELECT  TRAN.*
                        FROM    transactions TRAN,
                                TRANSREVERSED RTRAN
                        WHERE   RTRAN.LOGREFID(+)    = TRAN.ID
                                AND TRAN.UTIMESTAMP >= to_date('26/10/2017', 'dd/MM/yyyy')
                                AND TRAN.UTIMESTAMP <= to_date('26/10/2017 23:59:59', 'dd/MM/yyyy HH24:MI:ss')
                                AND 1                = 1
                                AND TRAN.SERVICENAME = 'STRINGTOSEARCH'
                                AND TRAN.returncode != '000004'
                        ORDER BY TRAN.UTIMESTAMP DESC
                        ) t1
                ) q1
        WHERE   q1.rown >= 1
                AND q1.rown
        ) t             ,
        TRANSREVERSED r ,
        esbreturncode e
WHERE   r.LOGREFID(+)       = t.ID
        AND e.returncode(+) = t.returncode
[/code]

Now, if you analyze the query, it is a complex one with joins across multiple tables: it selects data from table TRANSACTIONS and then does an outer join on a column in table TRANSREVERSED, using the LOGREFID predicate. In the explain plan we saw a full table scan being done on the TRANSREVERSED table. Even though the lookup table is small, the result set against which it has to be compared is quite huge.

Execution Plan Pre-Index Ethos

[code language="sql"]
--------------------------------------------------------------------------------------------------------------------
| Id  | Operation                          | Name          | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
--------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                   |               |    39 | 84786 | 17007   (1)| 00:03:25 |       |       |
|   1 |  SORT ORDER BY                     |               |    39 | 84786 | 17007   (1)| 00:03:25 |       |       |
|*  2 |   HASH JOIN OUTER                  |               |    39 | 84786 | 17006   (1)| 00:03:25 |       |       |
|*  3 |    HASH JOIN RIGHT OUTER           |               |    39 | 80106 |  9053   (1)| 00:01:49 |       |       |
|   4 |     TABLE ACCESS STORAGE FULL      | ESBRETURNCODE |    76 |  2964 |     4   (0)| 00:00:01 |       |       |
|*  5 |     VIEW                           |               |    39 | 78585 |  9049   (1)| 00:01:49 |       |       |
|   6 |      COUNT                         |               |       |       |            |          |       |       |
|   7 |       VIEW                         |               |    39 | 78078 |  9049   (1)| 00:01:49 |       |       |
|   8 |        SORT ORDER BY               |               |    39 | 12558 |  9049   (1)| 00:01:49 |       |       |
|*  9 |         HASH JOIN OUTER            |               |    39 | 12558 |  9048   (1)| 00:01:49 |       |       |
|  10 |          PARTITION RANGE SINGLE    |               |    39 | 11505 |  1095   (1)| 00:00:14 |    86 |    86 |
|* 11 |           TABLE ACCESS STORAGE FULL| TRANSACTIONS  |    39 | 11505 |  1095   (1)| 00:00:14 |    86 |    86 |
|  12 |          PARTITION RANGE ALL       |               | 16278 |   429K|  7953   (1)| 00:01:36 |     1 |1048575|
|  13 |           TABLE ACCESS STORAGE FULL| TRANSREVERSED | 16278 |   429K|  7953   (1)| 00:01:36 |     1 |1048575|
|  14 |    PARTITION RANGE ALL             |               | 16278 |  1907K|  7953   (1)| 00:01:36 |     1 |1048575|
|  15 |     TABLE ACCESS STORAGE FULL      | TRANSREVERSED | 16278 |  1907K|  7953   (1)| 00:01:36 |     1 |1048575|
[/code]



We first tried a couple of silly tuning scenarios, like pinning the table in flash cache permanently with the FLASH_CACHE=KEEP option, and asking the developer to reduce the UTIMESTAMP time span the query searches (developers will never admit their code is crappy). Then we went back to the hammer-and-nails approach: why not create an index on the field instead of trying to drive a nail with a Jedi lightsaber?!

So after a few hours of exercising our grey cells, we created a non-unique B*Tree index on the field LOGREFID and rebuilt the table stats. The performance of the query immediately improved, from consuming a helluva lot of user I/O to almost nothing. In the case of heavy reads, a missing index is very often the silver bullet.
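A sketch of that fix, using the table and column names visible in the plans; LOGREFID_IDX is the index name that appears in the post-index plan, and the DBMS_STATS call is one common way to refresh the statistics afterwards:

[code language="sql"]
-- Non-unique B*Tree index on the outer-join predicate column
CREATE INDEX logrefid_idx ON transreversed (logrefid);

-- Regather table stats so the optimizer costs the new access path
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'TRANSREVERSED', cascade => TRUE);
[/code]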

Execution Plan Post-Index Ethos

[code language="sql"]
----------------------------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name          | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
----------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |               |    39 | 84786 |  1174   (1)| 00:00:15 |       |       |
|   1 |  SORT ORDER BY                       |               |    39 | 84786 |  1174   (1)| 00:00:15 |       |       |
|   2 |   NESTED LOOPS OUTER                 |               |    39 | 84786 |  1173   (1)| 00:00:15 |       |       |
|*  3 |    HASH JOIN RIGHT OUTER             |               |    39 | 80106 |  1127   (1)| 00:00:14 |       |       |
|   4 |     TABLE ACCESS STORAGE FULL        | ESBRETURNCODE |    76 |  2964 |     4   (0)| 00:00:01 |       |       |
|*  5 |     VIEW                             |               |    39 | 78585 |  1123   (1)| 00:00:14 |       |       |
|   6 |      COUNT                           |               |       |       |            |          |       |       |
|   7 |       VIEW                           |               |    39 | 78078 |  1123   (1)| 00:00:14 |       |       |
|   8 |        SORT ORDER BY                 |               |    39 | 12558 |  1123   (1)| 00:00:14 |       |       |
|*  9 |         HASH JOIN OUTER              |               |    39 | 12558 |  1122   (1)| 00:00:14 |       |       |
|  10 |          PARTITION RANGE SINGLE      |               |    39 | 11505 |  1095   (1)| 00:00:14 |    86 |    86 |
|* 11 |           TABLE ACCESS STORAGE FULL  | TRANSACTIONS  |    39 | 11505 |  1095   (1)| 00:00:14 |    86 |    86 |
|  12 |          INDEX STORAGE FAST FULL SCAN| LOGREFID_IDX  | 16278 |   429K|    27   (0)| 00:00:01 |       |       |
|  13 |    TABLE ACCESS BY GLOBAL INDEX ROWID| TRANSREVERSED |     1 |   120 |     2   (0)| 00:00:01 | ROWID | ROWID |
|* 14 |     INDEX RANGE SCAN                 | LOGREFID_IDX  |     1 |       |     1   (0)| 00:00:01 |       |       |
[/code]

Let's look at a more graphical representation from OEM; a picture speaks louder than a thousand words.


(The left-side mountain spikes are pre-index and the right-side hills are post-index. Self-explanatory.)

Heavy reads with high cost scream "missing index". This does not mean you should go and create indexes for every SQL query in which you see a bit of reads. The crux of the matter is this: before you go ahead and blame your storage admin or your disk subsystem, or buy new hardware, do a bit of the basics, like looking at AWR reports, generating execution plans, and applying good ol' DBA intuition. That can do more good than throwing hardware at the problem. Hardware will never guarantee a solution to a software problem.

P.S.: We had the above issue on a densely packed Exadata X4-2 quarter rack hosting 20+ instances. Disk I/O is the number one performance killer on a consolidated platform, and waiting on disk I/O will always lead to higher CPU consumption.



Send SMS Using a Shell Script by Checking the Transaction Count of a Table in Oracle

Being a DBA often requires us to step outside our role and solve a problem in a smart way. One of the issues we faced recently with one of our applications was the application server hanging and not functioning properly. The application team had no means of getting a quick alert and depended on the contact center to learn the application was down. The application server therefore needed round-the-clock monitoring with SMS alerts, and the application itself wasn't smart enough to do this. So I did a bit of analysis on the application to figure out a way. The Oracle database on Exadata rarely, if ever, goes down (a very good DBA team 🙂 ), so we figured the database would be the best reference point for availability.

The application writes to a table called authorizations. This being a payment-gateway application, the transaction count is very high: we receive about 10 TPS around the clock, so the per-minute count never drops below 500-600. The first step is therefore a query that checks the transaction count and spools it to a file called smscard.txt:

select count(*) from authorizations where ltimestamp > sysdate - 5/1440 and I039_RSP_CD='00';

This will give us the transaction count in the last 5 minutes
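The 5/1440 term works because Oracle date arithmetic is done in days, and a day has 1,440 minutes, so sysdate - 5/1440 is exactly five minutes ago:

```shell
# A day has 24 * 60 = 1440 minutes, so 5/1440 of a day is five minutes.
echo $((24 * 60))   # prints 1440
```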

Next, we create the logic to compare this count with a defined threshold. Using awk we can strip the garbage lines and extract the count(*) output:

####### AWK BLOCK #########
awk 'NF=5' /u03/scripts/smscard.txt > /u03/scripts/trimoutput
pattern="$(awk 'NR==5 {print;exit}' /u03/scripts/trimoutput)"
if [[ "$pattern" -lt 3 ]]; then
    sh /u03/scripts/
fi

The first line squeezes the extra whitespace out of smscard.txt and writes the result to the file "trimoutput".

The second line reads line 5 of "trimoutput" into the variable "pattern".

The third line checks whether "pattern" is below the threshold. If the condition is true, it sends an SMS file to the SMS gateway using FTP; our SMS gateway picks up a text file of mobile numbers from an FTP server.
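The check can be exercised end to end without a database by faking the spool file. This is a self-contained sketch; the sample spool content and the echoed alert are illustrative stand-ins (the real script sends a file over FTP instead):

```shell
# Fake sqlplus spool output: the COUNT(*) value lands on line 5.
spool=$(mktemp)
printf '%s\n' '' '  COUNT(*)' '----------' '' '         2' > "$spool"

# Line 5, first field, is the transaction count.
pattern="$(awk 'NR==5 {print $1; exit}' "$spool")"

if [ "$pattern" -lt 3 ]; then
    echo "ALERT: only $pattern transactions in the last 5 minutes"
fi
rm -f "$spool"
```

This prints "ALERT: only 2 transactions in the last 5 minutes" for the sample data above.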

So there are 3 scripts: an FTP script, a SQL script, and the master script. The master script calls the SQL script and then, based on the IF/THEN logic, the FTP script that sends the SMS file. Below are the 3 scripts.

FTP Script : /u03/scripts/

quote USER $USER
lcd /u03/scripts
put $FILE
exit 0

SQL Script : /u03/scripts/counttsys.sql

select count(*) from authorizations where ltimestamp > sysdate - 5/1440 and I039_RESP_CD='00';

Main/Master Script : /u03/scripts/

current_time=$(date "+%Y%m%d%H%M%S")
export ORACLE_HOME=/u03/app/oracle/product/
cd /u03/scripts
$ORACLE_HOME/bin/sqlplus xyz/******@onlnprd << EOF
whenever sqlerror exit sqlcode
set serveroutput on size 1000000
set feedback off
set line 200
spool /u03/scripts/smscard.txt
@/u03/scripts/counttsys.sql
spool off
exit
EOF
rm -f /u03/scripts/SMSTSYS.TXT
touch /u03/scripts/SMSTSYS.TXT
chmod 777 /u03/scripts/SMSTSYS.TXT
echo "$current_time,ACQUIRER,33578971,30116935" > /u03/scripts/output
cat /u03/scripts/output > /u03/scripts/SMSTSYS.TXT
####### AWK BLOCK #########
awk 'NF=5' /u03/scripts/smscard.txt > /u03/scripts/trimoutput
pattern="$(awk 'NR==5 {print;exit}' /u03/scripts/trimoutput)"
if [[ "$pattern" -lt 3 ]]; then
    sh /u03/scripts/
fi

The final script generates an SMS file called SMSTSYS.TXT, in the required format with the mobile numbers, and sends it to the SMS FTP folder whenever the count(*) output falls below the threshold.



Using a little bit of logic and the magic of shell scripts, we created a customized SMS probe for a very critical business application. An email alert can be configured using a similar mechanism. A bit of innovation goes a long way toward solving a business problem 🙂

Create ACFS Filesystem Oracle 12c Linux

— Create ACFS filesystem on 12c Linux Exadata —
1. Create a volume in ASM

ASMCMD [+] > volcreate -G datac1 -s 500G ACFS_VOL

If you get an error like the one below:

ORA-15032: not all alterations performed
ORA-15479: ASM diskgroup does not support volumes
ORA-15221: ASM operation requires compatible.asm of or higher (DBD ERROR: OCIStmtExecute

Check the current compatibility for the diskgroup:

SQL> select group_number, name, compatibility, database_compatibility from v$asm_diskgroup;

Then raise the compatible.asm attribute:

SQL> alter diskgroup DATAC1 set attribute 'compatible.asm'='';

Diskgroup altered.

SQL> alter diskgroup RECOC1 set attribute 'compatible.asm'='';

Diskgroup altered.

SQL> alter diskgroup DBFS_DG set attribute 'compatible.asm'='';

Diskgroup altered.

SQL> select group_number, name, compatibility, database_compatibility from v$asm_diskgroup;

Run the volcreate command again:
ASMCMD [+] > volcreate -G datac1 -s 500G ACFS_VOL
2. Check the volume information
ASMCMD [+] > volinfo -G datac1 ACFS_VOL

Diskgroup Name: DATAC1

Volume Name: ACFS_VOL
Volume Device: /dev/asm/acfs_vol-45
Size (MB): 512000
Resize Unit (MB): 64
Redundancy: MIRROR
Stripe Columns: 8
Stripe Width (K): 1024
You can also check the volume device from SQL*Plus:

sqlplus "/as sysasm"

SELECT volume_name, volume_device FROM V$ASM_VOLUME
WHERE volume_name = 'ACFS_VOL';

3. Create a file system with the Oracle ACFS mkfs command using output of above command

With root user run below command

/sbin/mkfs -t acfs /dev/asm/acfs_vol-45
mkfs.acfs: version =
mkfs.acfs: on-disk version = 39.0
mkfs.acfs: volume = /dev/asm/acfs_vol-45
mkfs.acfs: volume size = 536870912000 ( 500.00 GB )
mkfs.acfs: Format complete.
4. Register the file system with the acfsutil registry command.

Create a directory called ACFS

cd /
mkdir /ACFS

/sbin/acfsutil registry -a /dev/asm/acfs_vol-45 /ACFS

acfsutil registry: mount point /ACFS successfully added to Oracle Registry
Imp Note 1: Registering an Oracle ACFS file system also causes the file system to be mounted automatically whenever Oracle Clusterware or the system is restarted.
Imp Note 2: Oracle ACFS registration (acfsutil registry) is not supported in an Oracle Restart (standalone) configuration, which is a single-instance (non-clustered) environment.
5. Verify if ACFS filesystem mounted automatically

$ df -h
Filesystem Size Used Avail Use% Mounted on
30G 17G 12G 59% /
tmpfs 252G 22G 231G 9% /dev/shm
/dev/sda1 496M 54M 418M 12% /boot
99G 57G 37G 61% /u01
197G 68G 119G 37% /u02
985G 288G 648G 31% /u03
/dev/asm/acfs_vol-45 500G 1.1G 499G 1% /ACFS
As you can see from the above output, the ACFS filesystem mounted automatically after registration.
6. If you did not register the ACFS filesystem, it will not mount automatically; you can mount it manually with the command below

As root user

/bin/mount -t acfs /dev/asm/acfs_vol-45 /ACFS

7. Give appropriate permissions to the filesystem required by Oracle users
chown -R oracle:dba /ACFS

su – oracle

cd /ACFS

touch abc.txt
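The ownership-and-write test above can be sketched self-contained; here a temp directory stands in for /ACFS (the chown step is skipped because it needs root):

```shell
# Stand-in for the new mount point:
d=$(mktemp -d)
chmod 755 "$d"
# Equivalent of "touch abc.txt" inside the mount as the oracle user:
touch "$d/abc.txt" && echo "write OK"   # prints: write OK
rm -rf "$d"
```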

Create or Drop Public Synonyms in Oracle

A synonym is an alternative name for a table, view, sequence, procedure, stored function, package, materialized view, Java class schema object, user-defined object type, or another synonym.

— Create a Public Synonym for a Table (using SCOTT.EMP as an example) —

CREATE PUBLIC SYNONYM emp FOR scott.emp;

— Drop a Public Synonym for a Table —

DROP PUBLIC SYNONYM emp;

— View to Check Synonyms in Oracle —

SELECT owner, synonym_name, table_owner, table_name
FROM dba_synonyms
WHERE synonym_name = 'EMP';

Generate Excel | CSV Report from SQL*PLUS

A shell script to generate a column-separated report from SQL*Plus. The resulting file can be opened in Excel or in a text editor like Notepad++.


. $HOME/.bash_profile
NOW=$(date +"%a %b %e %T %Y")
MON2=$(date +"%b%Y")

cd $HOME/scripts
rm -f Report_$MON2.csv

sqlplus -s /NOLOG << EOF > /tmp/log.txt
CONNECT abc/*****

set colsep ;
set pagesize 1000
set trimspool on
set headsep off
set linesize 9999

alter session set nls_date_format = 'DD/MM/YYYY';
spool Report_$MON2.csv
select * from tablename where condition;
spool off
exit
EOF
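To see what `set colsep ;` buys you without touching a database, here is a stand-alone sketch (the file name and sample rows are made up) that squeezes a semicolon-separated spool into a plain comma CSV:

```shell
# Fake a sqlplus spool produced with "set colsep ;":
spool=$(mktemp)
printf '%s\n' 'ID ;NAME  ;HIRED' '1  ;Alice ;01/02/2017' > "$spool"
# Squeeze the padding around the separators into plain commas:
sed 's/ *; */,/g' "$spool"   # prints: ID,NAME,HIRED and 1,Alice,01/02/2017
rm -f "$spool"
```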