Search every field of every table for a particular value in Oracle

If you want to search for a string or a number across all tables in a schema, there is a free tool available for it. I have tried many scripts and PL/SQL procedures to do this, but by far the best is the freeware tool DBSEARCH, which can be downloaded from either of the links below. Since it connects to the database through ODBC drivers, it can even search inside SQL Server.

 

https://sites.google.com/site/freejansoft/dbsearch

or

https://docs.google.com/viewer?a=v&pid=sites&srcid=ZGVmYXVsdGRvbWFpbnxmcmVlamFuc29mdHxneDozYmQwNzNlMTk3NWUzMDhk
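
If you would rather stay inside the database, a minimal PL/SQL sketch along the lines of the scripts mentioned above could look like the block below. It only scans VARCHAR2 columns of tables in the current schema, and the search value 'SMITH' is just an example; adjust both to your needs.

SET SERVEROUTPUT ON
DECLARE
  v_search VARCHAR2(100) := 'SMITH';   -- example value to search for
  v_hits   NUMBER;
BEGIN
  -- loop over every VARCHAR2 column of every table in the current schema
  FOR c IN (SELECT utc.table_name, utc.column_name
              FROM user_tab_columns utc
              JOIN user_tables ut ON ut.table_name = utc.table_name
             WHERE utc.data_type = 'VARCHAR2') LOOP
    EXECUTE IMMEDIATE
      'SELECT COUNT(*) FROM "' || c.table_name ||
      '" WHERE "' || c.column_name || '" = :1'
      INTO v_hits USING v_search;
    IF v_hits > 0 THEN
      DBMS_OUTPUT.PUT_LINE(c.table_name || '.' || c.column_name ||
                           ' : ' || v_hits || ' row(s)');
    END IF;
  END LOOP;
END;
/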

 

Create a copy of an Oracle table without copying the data?

Do a "create table" as command without selecting any rows

create table abc_new as select * from abc where 1=0;
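
As a quick sanity check (a small sketch using the same names as above), the new table has the column definitions of ABC but no rows. Keep in mind that CREATE TABLE ... AS SELECT does not bring over indexes, triggers, or constraints other than NOT NULL, so create those separately if you need them.

desc abc_new                        -- same column definitions as abc
select count(*) from abc_new;       -- returns 0 because of WHERE 1=0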

Flashback Table to Timestamp 11gR2

Flashback technology is one of the most useful features Oracle provides in a production environment. It is a life saver: you can roll a table back to a point in time without a traditional database restore. I will demonstrate below how to flash back a table to a point in time, but first you have to enable flashback.

From 11gR2 onwards you don't have to shut down the database to enable flashback; you can do it online. To enable flashback in 11gR2, follow the steps below.

1. alter system set db_recovery_file_dest_size='400G' scope=both sid='*';

2. alter system set db_recovery_file_dest='+FRA' scope=both sid='*';

(DB_RECOVERY_FILE_DEST_SIZE has to be set before DB_RECOVERY_FILE_DEST can be set.)

3.  select flashback_on from v$database;

FLASHBACK_ON
------------
NO

4. alter database flashback on;

select flashback_on from v$database;

FLASHBACK_ON
------------
YES
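
Once flashback is on, it is worth keeping an eye on how much of that 400G recovery area is actually consumed. A small sketch (not part of the original steps) using the standard V$RECOVERY_FILE_DEST and V$FLASH_RECOVERY_AREA_USAGE views:

select name, space_limit/1024/1024/1024 as limit_gb,
       space_used/1024/1024/1024 as used_gb
  from v$recovery_file_dest;

select file_type, percent_space_used, percent_space_reclaimable
  from v$flash_recovery_area_usage;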

_______________________________________________________

-- For a table to be flashed back, you need to have row movement enabled on it. We have a table called EMPLOYEES in the HR schema.

ALTER TABLE HR.EMPLOYEES ENABLE ROW MOVEMENT;

 

-- Check current SCN and database time

SELECT current_scn, TO_CHAR(SYSTIMESTAMP, 'YYYY-MM-DD HH24:MI:SS') FROM v$database;

4784136875    2012-04-30 8:30:00

 

-- Check the current total no. of records in the table

SELECT COUNT(*) FROM HR.EMPLOYEES

Count(*)

126

 

-- Check the total no. of records in the table as it existed half an hour earlier, at 8:00 AM

SELECT COUNT(*) FROM HR.EMPLOYEES AS OF TIMESTAMP TO_TIMESTAMP('2012-04-30 8:00:00', 'YYYY-MM-DD HH24:MI:SS');

Count(*)

127

This means one record was deleted. We need to flash the table back to the state it was in at 8:00 AM.
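
Before flashing back, you can also see exactly which row went missing by reusing the AS OF TIMESTAMP syntax shown above as a flashback query against the undo data (a small sketch, not part of the original steps):

SELECT * FROM HR.EMPLOYEES
  AS OF TIMESTAMP TO_TIMESTAMP('2012-04-30 8:00:00', 'YYYY-MM-DD HH24:MI:SS')
MINUS
SELECT * FROM HR.EMPLOYEES;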

 

FLASHBACK TABLE HR.EMPLOYEES TO TIMESTAMP TO_TIMESTAMP('2012-04-30 8:00:00', 'YYYY-MM-DD HH24:MI:SS');

 

SELECT COUNT(*) FROM HR.EMPLOYEES

Count(*)

127

 

Now we can see that the count is back to 127, which means the table is back to the state it was in before the record was deleted.
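
The same flashback can also be expressed against an SCN instead of a timestamp, which is why the current SCN was noted earlier. A sketch: pick an SCN from before the delete, for example by converting the timestamp with TIMESTAMP_TO_SCN, and use it in the FLASHBACK TABLE command.

SELECT TIMESTAMP_TO_SCN(TO_TIMESTAMP('2012-04-30 8:00:00', 'YYYY-MM-DD HH24:MI:SS')) FROM dual;

FLASHBACK TABLE HR.EMPLOYEES TO SCN <scn-returned-above>;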

Proof that count(*) is faster than count(1)

The final answer to the age-old debate of count(*) vs count(1). Absolute proof
that count(*) is faster than count(1).

sql->alter session set events '10046 trace name context forever, level 12';

Session altered.

Elapsed: 00:00:00.10
sql->select count(1) from dual;

  COUNT(1)
----------
         1

1 row selected.

Elapsed: 00:00:00.40
sql->select count(*) from dual;

  COUNT(*)
----------
         1

1 row selected.

Elapsed: 00:00:00.40

-- TKPROF:
select count(1)
from
 dual

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          0          3          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.00       0.00          0          3          0           1

Misses in library cache during parse: 1
Optimizer goal: RULE
Parsing user id: 22

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  SORT AGGREGATE (cr=3 r=0 w=0 time=87 us)
      1   TABLE ACCESS FULL DUAL (cr=3 r=0 w=0 time=71 us)

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                       2        0.00          0.00
  SQL*Net message from client                     2        3.11          3.11
********************************************************************************

select count(*)
from
 dual

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          0          3          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.00       0.00          0          3          0           1

Misses in library cache during parse: 1
Optimizer goal: RULE
Parsing user id: 22

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  SORT AGGREGATE (cr=3 r=0 w=0 time=94 us)
      1   TABLE ACCESS FULL OBJ#(222) (cr=3 r=0 w=0 time=79 us)

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                       2        0.00          0.00
  SQL*Net message from client                     2        2.30          2.30

--------------------------------------------------------------------------------
count(1) shows "SQL*Net message from client" = 3.11
count(*) shows "SQL*Net message from client" = 2.30

Research into the time problem led first to a comparison of the statements.
Careful examination revealed a subtle difference: a one byte discrepancy at
offset 14 of the SQL statement ( "*" vs. "1" ).  Realizing this difference,
I was able to form two separate cases and document execution steps for each.

case 1: count(*)
----------------
  1) left pinky holds down left shift key
  2) right middle finger hits "("
  3) right middle finger hits "*"
  4) right ring finger hits ")"
  Note that steps 1 and 2 are very nearly simultaneous, as are steps 3 and 4,
  due to parallelism between the two hands.  The majority of the time spent
  in this case is the movement of right middle finger between steps 2 and 3

case 2: count(1)
----------------
  1) left pinky holds down left shift key
  2) right middle finger hits "("
  3) left pinky releases left shift
  4) left ring finger hits "1"
  5) left pinky holds down left shift key
  6) right ring finger hits ")"
  Note that steps 1 and 2 are very nearly simultaneous, as are steps 5 and 6.
  The majority of the time spent in this case is steps 3, 4, and 5.

Analysis: case 1 has 33% FEWER steps than case 2. Even with pipelining involved
(using both hands at once), case 2 must still perform more work to accomplish
what Oracle will ultimately consider the same thing.  This may be defeated,
however, by clever use of the copy/paste subsystem of the operating system.
Using high-speed overhead keyboard cameras may reveal further important
information.

To ensure this was a reproducible case, I performed it on multiple machines
with similar findings.  I furthermore discovered that both cases suffered
reduced performance after a 24 hr lack of sleep.

The point is, my argument is just as valid as some of the arguments I've
heard: utterly ridiculous and inconclusive. Unless you are an Oracle engineer
responsible for the code that actually interprets and
optimizes count(*), you really can't argue that count(1) is faster than
count(*). You must provide irrefutable, conclusive evidence that one is faster
than the other, and that evidence must withstand the test of this exercise
in futility.
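
For what it is worth, a less theatrical check (a sketch, not taken from the trace above) is to compare the execution plans directly; both statements should show the same plan, which is consistent with Oracle treating them as the same thing:

EXPLAIN PLAN FOR SELECT COUNT(*) FROM dual;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

EXPLAIN PLAN FOR SELECT COUNT(1) FROM dual;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);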

Method to purge a huge table with more than 100 million rows in Oracle 10g

Oracle Database's philosophy is data recoverability, which is why cleaning up a huge table with more than 100 million records and indexes on it can be a mess. TRUNCATE is the answer when you have to empty the entire table, and of course that is much quicker. But sometimes you need a conditional clean-up, where a small set of data must be kept and the rest of the records purged. A conditional DELETE on a table this size can take days, and increasing the size of the undo tablespace doesn't always help either. The best method is to create a new table from the old one without the indexes, run your conditional DELETE against the new table, then create the indexes and triggers on the new table and drop the old one.

I will illustrate this with a small example. Consider a table called HISTORY with around 100 million records. Only a few thousand records need to be kept and the rest need to be deleted. A normal conditional delete would kill your system, so this is how I do it:

1. Generate the DDL script of table HISTORY, including its indexes, triggers, and constraints (I do this using TOAD for Oracle).

2. Now we will create a new table TEST from the old table HISTORY:

SQL> Create table TEST as ( Select * from HISTORY);

3. Select count(*) from TEST;

-- to check that the number of records is the same in the old HISTORY table and the new TEST table

4. Now run your conditional delete on the table TEST. Since there are no indexes or constraints on this table, the delete is much quicker:

SQL> Delete from TEST where <column-name>  !=  all ( Value1, Value2, Value3);

SQL> commit;

5. Drop the old table (take an export dump of it first to be safe; a sketch of the expdp command is shown after step 8):

SQL> DROP TABLE HISTORY;

6. Rename TEST to be the new HISTORY:

SQL> Alter table TEST rename to HISTORY;

7. Re-create the indexes, triggers, and constraints (and any other objects you have) on the new HISTORY table from the script created in step 1, for example:

CREATE INDEX IX123 ON HISTORY (<column-name>);

-----

CREATE OR REPLACE TRIGGER  TRIG

BEFORE INSERT ON HISTORY

-----

8. Now your new table is ready, with all the unnecessary data cleaned out, and the task was executed much more efficiently.
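
As referenced in step 5, the safety export can be taken with Data Pump before dropping the old table. A sketch only: the schema, directory, dump file, and log file names below are placeholders, and this post does not prescribe an exact command.

expdp system/<password> tables=<schema>.HISTORY directory=DATA_PUMP_DIR dumpfile=history_before_drop.dmp logfile=history_before_drop.log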

P.S.: Do this task at a time when there are fewer transactions on your database; weekend midnight is the time I personally choose on a production system. And keep the application team handy to check things from their side of the application.