Use the menus on the left-hand side to select a script - a brief description of each is shown when you hover your mouse over the script name. You must check scripts before use - please read the disclaimer page.


NB An archive of the 2006 scripts can be found by clicking here


Script of the week - SQL*Plus (optional) command line parameters (week 2008/17)


DefaultParams.Sql - when you run a script in SQL*Plus you can pass in parameters on the command line just as you would in any other language. The command line parameters become the SQL*Plus variables 1, 2, 3 ... nn based on their position on the command line. So if the following script, DefaultParametersWhenNotSpecified.Sql, is run:

@DefaultParametersWhenNotSpecified.Sql P1 P2 P3
the SQL*Plus variables 1, 2 and 3 would contain the following data:
SQL> DEF
...
DEFINE 1               = "P1" (CHAR)
DEFINE 2               = "P2" (CHAR)
DEFINE 3               = "P3" (CHAR)
...


That is all well and good, but what if you want a generic script that can take command line arguments or be run with none at all? If you run a script that expects 3 arguments (as above) but do not supply them on the command line, then SQL*Plus will stop and prompt for each missing variable on the first occasion it is used within the script. For example:

@DefaultParametersWhenNotSpecified.Sql
Enter value for 1: 
...


In order to be able to run a SQL*Plus script with or without command line parameters - and to have suitable defaults applied when nothing is specified on the command line - the script has to assign a default to variables 1, 2, 3 ... nn before their first use, hence avoiding the prompt for the missing variables. This is very easy to do, although how it is done is something of a well-kept secret. Simply define variables for each of the command line options using the COLUMN ... NEW_VALUE feature and then run a dummy query that contains a column for each variable, i.e.:

SELECT NULL "1", NULL "2", NULL "3", NULL "4", NULL "5" 
  FROM dual
 WHERE rownum=0;
This will assign NULL to each variable if it does not already exist, but where a variable does exist the original (i.e. command line) value is preserved. Using this "trick" for the above example results in the following:
@DefaultParametersWhenNotSpecified.Sql P1 P2
SQL> DEF
...
DEFINE 1               = "P1" (CHAR)
DEFINE 2               = "P2" (CHAR)
DEFINE 3               = "" (CHAR)
...
Showing that the P1 and P2 parameters included on the command line have been preserved, while the missing third parameter has been defaulted to a value of "" - the user was not prompted to enter any data and the script did not halt awaiting input.
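Putting the trick together, the top of a script that accepts three optional parameters might look like the following sketch (the COLUMN commands and the PROMPT line are illustrative, not taken from the script itself):

```sql
-- Route each column of the dummy query into the positional
-- variable of the same name
COLUMN 1 NEW_VALUE 1
COLUMN 2 NEW_VALUE 2
COLUMN 3 NEW_VALUE 3

-- Returns no rows: a variable keeps its command line value if it
-- has one and is created as "" (empty) if it does not
SELECT NULL "1", NULL "2", NULL "3"
  FROM dual
 WHERE rownum=0;

-- From here on &1, &2 and &3 can be used without SQL*Plus prompting
PROMPT Parameters are: '&1' '&2' '&3'
```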


So the "trick" works and does its job, but it would be nicer to have a generic method usable in any SQL*Plus script, no matter how many command line variables were expected. You could of course create a script that handled 10, 100, 1000 ... command line variables, which would work, but a neater solution surely is a little script that handles nn command line parameters - where nn is a variable stating how many command line variables are expected.


This is what DefaultParams.Sql does - it uses the SQL*Plus variable NumberofParameters to define how many command line variables to cater for (sadly it cannot accept a command line parameter itself, as that would overwrite variable 1 and defeat the object of the exercise). It then uses simple PL/SQL loops to create two temporary files: defparams1.tmp, which contains the COLUMN n NEW_VALUE n and SELECT ... FROM dual... statements (this is automatically run and then deleted); and defparams2.tmp, which contains UNDEFINE commands should you wish to clean up at the end of your script (this one is neither automatically run nor deleted). The script is then used as follows (with additional SQL*Plus output added to show how things work):

SQL> DEF NumberofParameters=5
SQL> DEF
...
DEFINE NumberofParameters = "5"
DEFINE 1               = "P1" (CHAR)
DEFINE 2               = "P2" (CHAR)
...
SQL> @DefaultParams.Sql
SQL> DEF
...
DEFINE NumberofParameters = "5"
DEFINE 1               = "P1" (CHAR)
DEFINE 2               = "P2" (CHAR)
DEFINE 3               = "" (CHAR)
DEFINE 4               = "" (CHAR)
DEFINE 5               = "" (CHAR)
...
SQL> @defparams2.tmp
SQL> DEF
...
DEFINE NumberofParameters = "5"
...
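The generation step inside DefaultParams.Sql can be sketched roughly as follows - an illustrative reconstruction rather than the script verbatim, assuming SPOOL plus DBMS_OUTPUT is used to write defparams1.tmp:

```sql
SET ECHO OFF FEEDBACK OFF SERVEROUTPUT ON
SPOOL defparams1.tmp
DECLARE
  n PLS_INTEGER := &NumberofParameters;
BEGIN
  -- One COLUMN command per expected command line variable
  FOR i IN 1 .. n LOOP
    DBMS_OUTPUT.PUT_LINE('COLUMN '||i||' NEW_VALUE '||i);
  END LOOP;
  -- The zero-row dummy query that applies the defaults
  DBMS_OUTPUT.PUT('SELECT NULL "1"');
  FOR i IN 2 .. n LOOP
    DBMS_OUTPUT.PUT(', NULL "'||i||'"');
  END LOOP;
  DBMS_OUTPUT.PUT_LINE(' FROM dual WHERE rownum=0;');
END;
/
SPOOL OFF
@defparams1.tmp
```

defparams2.tmp would be built the same way, writing UNDEFINE n lines instead.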
Go back to the top of the page

Script of last week - rowsource generator (week 2008/16)


RowSourceGenerator.Sql - there are occasions in SQL where you wish to generate nn rows (for instance, to list the dates of the next 28 days). This is easy in PL/SQL as you can just set up a loop. In SQL, coders have traditionally used large tables, such as ALL_OBJECTS, and limited the amount of data by restricting on ROWNUM, e.g. SELECT rownum FROM all_objects WHERE rownum<=nn;


However, you can achieve the same result much more efficiently by using the smallest table there is, DUAL, even though it has just one row! The trick is to perform a CONNECT BY loop bounded only by LEVEL i.e.:


 SELECT level AS rown
   FROM dual
CONNECT BY level<=&NumberofRowsRequired;

Sometimes, however, the optimizer gets too clever: it works out that you are querying the DUAL table, which has only one row, and so ignores the CONNECT BY and assumes only one row will be returned. To overcome this it is safest to put the rowsource generator in a WITH clause, e.g.:

WITH rowsource AS
(SELECT level AS rown
   FROM dual
CONNECT BY level<=&NumberofRowsRequired
)
SELECT rown
  FROM rowsource;

Sample output for each type of query might look like:
SCOTT@RACDB> start rowsourcegenerator
Enter value for numberofrowsrequired: 5

     ROWN
---------
        1
        2
        3
        4
        5

Enter value for numberofrowsrequired: 6

     ROWN
---------
        1
        2
        3
        4
        5
        6

6 rows selected.

Previous script of the week - un-pivoting lists (week 2008/15)


PivotExamples.Sql - last week showed SQL to pivot columns from a set of rows into a single column list; here SQL is given to take a list and "un-pivot" it into one entry per row.


PROMPT Un-pivot employee name comma separated list into one row per item
PROMPT
DEF empList="'CLARK','KING','MILLER'"
WITH datatab AS
(SELECT TRIM(SUBSTR(data,
         INSTR(data,',',1,LEVEL)+1,
         INSTR(data,',',1,LEVEL+1)
            - INSTR(data,',',1,LEVEL)-1)) ename
   FROM (SELECT ','||q'[&&empList]'||',' data
           FROM dual)
 CONNECT BY LEVEL<=LENGTH(q'[&&empList]')-LENGTH(REPLACE(q'[&&empList]',',',''))+1
 )
SELECT *
  FROM datatab;

ENAME
-----------
'CLARK'
'KING'
'MILLER'

Previous script of the week - pivoting column lists (week 2008/14)


PivotExamples.Sql - there are a number of these clever usages of SQL to display the results of a query as a list of values rather than a set of rows, here are a couple of my favourites (inspired by askTom and LazyDBA).


PROMPT Example #1 - Pivot employee name into a list for each department (acknowledgements to askTom)
PROMPT
COLUMN empList FORMAT A40
SELECT deptno
      ,MAX(SYS_CONNECT_BY_PATH(ename,' ')) empList
  FROM (SELECT deptno
              ,ename
              ,ROW_NUMBER() OVER(PARTITION BY deptno ORDER BY ename) rn
          FROM emp)
START WITH rn=1
CONNECT BY PRIOR rn=rn-1
       AND PRIOR deptno=deptno
GROUP BY deptno
ORDER BY deptno;

   DEPTNO EMPLIST
--------- ----------------------------------------
       10  CLARK KING MILLER
       20  ADAMS FORD JONES SCOTT SMITH
       30  ALLEN BLAKE JAMES MARTIN TURNER WARD

PROMPT Example #2 - Pivot employee name into a list for department 10 (acknowledgements to LazyDBA)
PROMPT
WITH datatab AS
(SELECT data
       ,ROW_NUMBER() OVER (ORDER BY data) rown
       ,COUNT(*) OVER () cnt
   FROM (SELECT ename AS data
           FROM emp
          WHERE deptno=10
         )
)
SELECT TRIM(SYS_CONNECT_BY_PATH(data,' ')) empList
  FROM datatab
 WHERE rown=cnt
START WITH rown=1
CONNECT BY PRIOR rown=rown-1;

Previous script of the week - Display results of a query in one column (aka askTom but without creating a procedure) (week 2007/32)


print_table.Sql - askTom has an excellent utility tool that allows the results of a query to be displayed in "table format" as opposed to the standard rows of columns across one line that SQL*Plus offers. The results from this can be much easier to read, especially for tables/queries with lots of columns to be displayed, and it also ensures that the column names (field names) can be fully read and not truncated to the SQL*Plus COLUMN size.


There is no doubt that this is yet another most useful addition to the Oracle developer/DBA's toolbox and once more we bow down to the great stuff that askTom produces. However (there had to be a but coming), in order to use Tom's "print_table" procedure you need to be able to create a procedure - what if you do not have the privilege to do so, or there are role grant permissions making it tricky to run the procedure? It does happen - particularly if you are a consultant travelling from database to database with little or no privileges.


This script takes all the good things from Tom's print_table procedure (i.e. shamelessly copies it, while always recognising that it is his brilliant code, not ours) and does much the same thing from an anonymous PL/SQL block. That way nothing has to be created on the "client's" database - you just run a script. It also simplifies a few things. As it is purely an ad-hoc tool, no change is made to the cursor_sharing setting. Also, the date format is only changed if the current session setting is not the same as the requested format (and likewise there is no need to switch back afterwards either). The number of rows is reported (just like a standard SQL statement in SQL*Plus when FEEDBACK is SET to ON) and the query used is echoed back (useful if it has scrolled off the top of the screen!). NULL values are reported with a placeholder text string.

Because this is SQL*Plus and the SQL needs to be passed in as a command line variable, it suffers from a few limitations compared with using a procedure:


1-The maximum length of the SQL statement is 255 characters (if you really need more then split the SQL into 255 byte segments and hack the script to accept multiple parameters and then concatenate them back together in the DBMS_SQL statement).


2-Quoted literals are a pain to handle - basically the rule is to always enclose the SQL statement in double quotes and to double up all single quotes that you need within the statement (see sample usage below).


3-The version given here is written for Oracle 9i, where DBMS_OUTPUT is still limited to 255 characters per line (so some of the output can be truncated). Oracle 10g lifts this restriction, so if you are using this on 10g remove the SUBSTR(...,1,255) clauses wherever they occur.
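For reference, the heart of the anonymous-block approach can be sketched with DBMS_SQL as below. This is a simplified illustration of the technique, not the full print_table.Sql - error handling, date format handling and query echoing are omitted, and &1 stands for the quoted SQL statement passed on the command line:

```sql
SET SERVEROUTPUT ON
DECLARE
  c        INTEGER := DBMS_SQL.OPEN_CURSOR;
  col_cnt  INTEGER;
  desc_t   DBMS_SQL.DESC_TAB;
  val      VARCHAR2(4000);
  dummy    INTEGER;
  rows_out PLS_INTEGER := 0;
BEGIN
  DBMS_SQL.PARSE(c, '&1', DBMS_SQL.NATIVE);
  DBMS_SQL.DESCRIBE_COLUMNS(c, col_cnt, desc_t);
  -- Define every column as VARCHAR2 and let Oracle convert implicitly
  FOR i IN 1 .. col_cnt LOOP
    DBMS_SQL.DEFINE_COLUMN(c, i, val, 4000);
  END LOOP;
  dummy := DBMS_SQL.EXECUTE(c);
  WHILE DBMS_SQL.FETCH_ROWS(c) > 0 LOOP
    rows_out := rows_out + 1;
    DBMS_OUTPUT.PUT_LINE('Row #'||rows_out||':');
    FOR i IN 1 .. col_cnt LOOP
      DBMS_SQL.COLUMN_VALUE(c, i, val);
      -- SUBSTR keeps each line within the 9i DBMS_OUTPUT 255 char limit
      DBMS_OUTPUT.PUT_LINE(RPAD(desc_t(i).col_name,30)||': '
                           ||SUBSTR(val,1,220));
    END LOOP;
    DBMS_OUTPUT.PUT_LINE('-----------------');
  END LOOP;
  DBMS_OUTPUT.PUT_LINE(rows_out||' rows selected.');
  DBMS_SQL.CLOSE_CURSOR(c);
END;
/
```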


Sample usage of this script might result in the following output:

SQL> start print_table "select * from user_objects where rownum<3 AND status=''VALID''"
Results for query "select * from user_objects where rownum<3 AND status='VALID';" are...
Row #1:
OBJECT_NAME                   : /1000e8d1_LinkedHashMapValueIt
SUBOBJECT_NAME                : 
OBJECT_ID                     : 20518
DATA_OBJECT_ID                : 
OBJECT_TYPE                   : JAVA CLASS
CREATED                       : 24-June-2005 17:40:20
LAST_DDL_TIME                 : 24-June-2005 17:40:20
TIMESTAMP                     : 2005-06-24:17:40:20
STATUS                        : VALID
TEMPORARY                     : N
GENERATED                     : N
SECONDARY                     : N
-----------------
Row #2:
OBJECT_NAME                   : /1005bd30_LnkdConstant
SUBOBJECT_NAME                : 
OBJECT_ID                     : 13674
DATA_OBJECT_ID                : 
OBJECT_TYPE                   : JAVA CLASS
CREATED                       : 24-June-2005 17:40:20
LAST_DDL_TIME                 : 24-June-2005 17:40:20
TIMESTAMP                     : 2005-06-24:17:40:20
STATUS                        : VALID
TEMPORARY                     : N
GENERATED                     : N
SECONDARY                     : N
-----------------
2 rows selected.
Results for query "select * from user_objects where rownum<3 AND status='VALID';" are...

Previous script of the week - Create COLUMN statements for all columns in a table and produce first cut SELECT <column_list> FROM <table> for use with SQL*Plus. (week 2007/30)


SqlKickstart.Sql - When in SQL*Plus it is really easy to interrogate tables using SELECT * FROM mytable; etc. However, for anything but the simplest of tables the output gets messy and difficult to read. To format the output SQL*Plus allows you to create COLUMN commands, to define the field types, sizes and headings. Typing these in for each column can be, let us say, a bit boring as can writing nicely formatted SQL code.


This script uses the USER_TAB_COLUMNS data dictionary view to produce a first cut of COLUMN commands - one for each column in a given table with the FORMAT clause defined based on the data type and length. The column name is used as the HEADING in INITCAP with underscores replaced with spaces.


It also creates a neatly indented SELECT .. FROM .. statement with each column explicitly listed. The idea is that you can then cut-and-paste the sample code and either run it directly within SQL*Plus or place it in a script to be run later. It's not rocket science but it sure beats having to type line after line of code (and then go back and fix all the typos that always seem to get introduced).
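The core of the generation can be sketched as a single query over USER_TAB_COLUMNS - an illustrative simplification of the script (the real FORMAT logic covers more datatypes; &tablename is the prompted-for table):

```sql
SELECT 'COLUMN '||LOWER(column_name)
     ||' FORMAT '
     ||CASE
         WHEN data_type = 'NUMBER' THEN RPAD('9', NVL(data_precision,10), '9')
         WHEN data_type = 'DATE'   THEN 'A22'
         ELSE 'A'||data_length
       END
     ||' HEADING "'||INITCAP(REPLACE(column_name,'_',' '))||'"' column_command
  FROM user_tab_columns
 WHERE table_name = UPPER('&tablename')
 ORDER BY column_id;
```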


Sample usage of this script might result in the following output:

SQL> start sqlkickstart
Enter table name to create SQL*Plus SQL for: EMP
COLUMN empno                      PRINT FORMAT 9999 HEADING "Empno"
COLUMN ename                      PRINT FORMAT A10 HEADING "Ename"
COLUMN job                        PRINT FORMAT A9 HEADING "Job"
COLUMN mgr                        PRINT FORMAT 9999 HEADING "Mgr"
COLUMN hiredate                   PRINT FORMAT A22 HEADING "Hiredate"
COLUMN sal                        PRINT FORMAT 9999999.9 HEADING "Sal"
COLUMN comm                       PRINT FORMAT 9999999.9 HEADING "Comm"
COLUMN deptno                     PRINT FORMAT 99 HEADING "Deptno"
SELECT empno
      ,ename
      ,job
      ,mgr
      ,hiredate
      ,sal
      ,comm
      ,deptno
FROM emp

Previous tip of the week - Saving and restoring SQL*Plus settings. (week 2007/29)


Whenever you run a script in SQL*Plus you inevitably need to change some environment settings to suit the output being generated. Most commonly LINESIZE, PAGESIZE, ECHO, VERIFY or DEFINE would be changed, but there are dozens of others that you might need to tweak for your particular requirements. This is all well and good and will result in your script working and displaying exactly as needed. However, once it has finished (the script, that is) you really need the settings restored so that the next thing you do is not adversely affected.


Of course you can have, at the end of your scripts, a collection of SET commands to back out the changes made at the start of the script - but hang on a moment, you do not actually know what the original settings were!


Well, you don't have to, as SQL*Plus can be made to store the settings in a file of your choosing; the file can then be "run" to restore the settings to their original values.


The command to save SQL*Plus settings is simplicity itself:

SQL> store set mysqlplussettings.tmp replace

and to restore the settings when needed just run the file the settings were previously stored in (and, for housekeeping reasons, delete the temporary file too - the UNIX flavour is given here, but on Windows just replace rm with del) via:

SQL> @mysqlplussettings.tmp
SQL> !rm mysqlplussettings.tmp

Previous tip of the week - The quickest way to re-compile invalid objects. (week 2007/28)


There are very many examples and scripts showing how to re-compile PL/SQL objects that Oracle has marked with an INVALID status (there is even one on this page - see week 13!). However, since Oracle8 there has been a DBMS package procedure to do the re-compile via a single command (no scripting required). The procedure is COMPILE_SCHEMA and it is found in the DBMS_UTILITY package. As of 10g it takes 3 parameters:


        dbms_utility.compile_schema(schema         VARCHAR2
                                   ,compile_all    BOOLEAN DEFAULT TRUE
                                   ,reuse_settings BOOLEAN DEFAULT FALSE
                                   );

REUSE_SETTINGS is new in 10g and, according to the manuals, "indicates whether the session settings in the objects should be reused, or whether the current session settings should be adopted instead" - meaning the PL/SQL compiler options new in 10g, such as PLSQL_OPTIMIZE_LEVEL.


COMPILE_ALL was new in 9i and defines whether to compile everything or just objects marked as INVALID. Surprisingly the default is to compile everything - something that should rarely be done (in this author's opinion). The far more useful syntax is compile_all=>FALSE, where you get to make valid anything that has gone "bad" (assuming it can be compiled cleanly).


SCHEMA is the user account whose objects are to be re-compiled; most of the time you would use the USER function here so that only objects in the schema you are currently logged into are processed. If you are a DBA, however, you can use the package procedure to re-compile objects in any user account simply by supplying the correct schema name.


This would then suggest that the most common usage of the package procedure for a developer would be (10g):

SQL> EXECUTE DBMS_UTILITY.COMPILE_SCHEMA(user,FALSE,TRUE)
PL/SQL procedure successfully completed.

and for 9i would be:

SQL> EXECUTE DBMS_UTILITY.COMPILE_SCHEMA(user,FALSE)
PL/SQL procedure successfully completed.

Previous tip of the week - Using the SQL*Plus COPY command to quickly and easily move data between databases. (week 2007/23)


SQL*Plus has had a nugget of a command in it, the COPY command, for many years, yet it is surprising how many developers and DBAs do not know about it. This command uses two concurrent SQL*Net connections to copy data from a source to a destination database (hence no DB links are required). It does this by utilising array fetches and array inserts of data - the array size can be tuned and can produce some stunningly quick results. The data is fetched from the source into SQL*Plus and then inserted into the destination by SQL*Plus - which means the data passes through whatever machine the SQL*Plus session is running on. For this reason you should really only ever use COPY from within a SQL*Plus session running directly on the source or destination database server - if you run it from some other machine (your PC workstation, for example) all of the data must be shipped via that machine, and the additional network traffic will have a serious adverse effect on the rate at which data is loaded.


There are a number of options that can be used - for instance, like SQL*Loader you can tell COPY whether to create a new destination table, replace an existing table, insert into an existing table, append data to an existing table, etc. The command can also deal with the LONG datatype (although sadly not CLOBs or BLOBs).


The most interesting options, however, are ARRAYSIZE and COPYCOMMIT. ARRAYSIZE is how many rows of data to fetch at a time - the bigger you make this, in theory, the faster the command should run. COPYCOMMIT defines how often to commit the copied data - that is, how many array fetches are committed as a batch. Tinkering with these parameters can be a great help when large volumes of data need to be inserted yet UNDO space is limited.


There is plenty of great stuff on COPY already out there on the web (enter "sql*plus copy command" into your favourite search engine), but most forget to show how to use it in its simplest form - copying data from a remote source database to the currently connected database (the destination) where the source and destination table structures are identical. That is, you are replicating data between databases. This is great for migrating data between databases - even if they are of different versions or releases (remember that all COPY needs is a connection to a source and a destination database; it makes no demands on the version of either). Below is how this is done; a simple one-line command can copy millions of rows of data in a matter of minutes! Note that COPY is not SQL so it does not need a slash or semi-colon to run the command. NB The dash character tells SQL*Plus to continue reading the next line as part and parcel of the same command.


Sample usage of the COPY command might result in the following output:

SQL> set arraysize 1024
SQL> set copycommit 1024
SQL> COPY FROM scott/tiger@remotedb.world - 
INSERT BIG_table -
USING SELECT /*+ FULL(t) PARALLEL(t, 4) */ * FROM BIG_table t

Array fetch/bind size is 1024. (arraysize is 1024)
Will commit after every 1024 array binds. (copycommit is 1024)
Maximum long size is 5000. (long is 5000)
   14732163 rows selected from scott@remotedb.world.
   14732163 rows inserted into BIG_table.
   14732163 rows committed into BIG_table at DEFAULT HOST connection.

SQL>

Script of last week - Show activity (CPU, buffer gets, rows, disk reads) of a SQL statement. (week 2007/22)


SqlActivity.Sql - It can be useful to get a rough idea of what a given SQL statement does in terms of system activity: how often the statement is run; when it was last run; how much CPU on average it consumes; its IO consumption; how many rows are returned. This can give an indication of whether the statement requires tuning (or, if you have multiple children of the same statement, whether one is more expensive than the others). It can also be used to track the progress of very large SELECTs (by monitoring the rows values).


All the data is held in a single view - V$SQL - and this script, specifically written for Oracle 10g (but easily adapted for 9i by replacing SQL_ID with ADDRESS and HASH_VALUE: see SqlActivity91.Sql), can be used firstly to identify the statement and then, using the SQL_ID and CHILD_NUMBER, to display the statement's activity detail.


Once the SQL activity details are shown you can "monitor" the progress of the selected statement, if it is still running, simply by entering "/" to have SQL*Plus re-run the current statement. This can be useful to see how fast things are running or, if you know roughly how many rows are to be fetched, how close to completion the SELECT is (handy when the SQL operation is not one that Oracle automatically places in V$SESSION_LONGOPS).


Sample usage of these scripts might result in the following output:

SQL> start sqlactivity

Enter value for sqlstm: SELECT first_load_time flt

SQL ID          Statement                                             Child
--------------- ----------------------------------------------------- -----
dn75xaf7t442u   SELECT first_load_time flt,last_active_time lat           0
                ,executions e,cpu_time c,CASE  WHEN executions>0
                THEN    round(cpu_time/executions) ELSE 0 END pe
                ,buffer_gets bg,CASE WHEN executions>0
                THEN    round(buffer_gets/executions) ELSE 0 END peb
                ,rows_processed rp,CASE WHEN executions>0
                THEN    round(rows_processed/executions) ELSE 0 END
                per,DISK_READS dr,CASE WHEN executions>0
                THEN round(DISK_READS/executions) ELSE 0 END ped FROM
                v$sql WHERE sql_id='3afk4nb5d5np9' AND child_number=0
Enter value for sqlid: 3afk4nb5d5np9
Enter value for child: 0

First load     Last active    Execs CPU    CPU    BUFFER GETS  ROWS  ROWS  DISK  DISK
time           time                        /Exec  GETS   /Exec       /Exec READS /Exec
-------------- -------------- ----- ------ ------ ------ ----- ----- ----- ----- -----
20070601 17:16 20070601 17:16     2 20,308 10,154      0     0     2     1     0     0

Previous script of the week - List all connected sessions, then prompt for sid,serial to kill a session. (week 2007/21)


KillSession.Sql - Killing a rogue session is relatively straightforward as Oracle provides the ALTER SYSTEM DISCONNECT and KILL SESSION commands. However, both require the session to be killed to be identified by a combination of SID and SERIAL number. These are available from the V$SESSION view, but they are not something you want to get wrong! Killing the wrong application user is one thing, killing your own session is mightily embarrassing, but killing an essential background process will bring down the whole database.


This script lists user sessions, giving details of who they are, and allows the SID and SERIAL number to be cut-and-pasted; from this it generates the syntax required to kill the session. That in turn can be cut-and-pasted to ultimately get rid of the session - once you are sure you have the right one! Background processes and your own session are not included, to remove the risk of picking one of them by mistake.


The script KillSessionS.Sql takes this one stage further and summarises connected sessions by machine/program/user, prompts for a given machine/program/user and then generates a script to kill all matching sessions.
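The core listing behind these scripts can be sketched as follows - an illustrative query only, as the real scripts add formatting, run time calculations and the kill-syntax generation:

```sql
SELECT s.username||'/'||s.osuser  user_names
      ,s.sid||','||s.serial#      sid_serial
      ,s.program
      ,s.status
      ,TO_CHAR(s.logon_time,'DDMONYYYY HH24:MI:SS') logon_time
  FROM v$session s
 WHERE s.type = 'USER'                                       -- no background processes
   AND s.sid <> (SELECT sid FROM v$mystat WHERE rownum = 1)  -- not our own session
 ORDER BY s.username, s.sid;
```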


Sample usage of these scripts might result in the following output:

SQL> start killsession

Oracle User/ SID,
OS User      Serial#    Program                     Status   Logon Time         Run Time
------------ ---------- --------------------------- -------- ------------------ --------
HR/oracle    144,2672   sqlplus@todc.co.uk (TNS     INACTIVE 25MAY2007 17:24:33 00:15:06
                        V1-V3)-pts/6(todc.co.uk)
OWB/         120,2157   -(todc.co.uk)               ACTIVE   30DEC2006 15:47:34 00:00:03
OWB/oracle   124,30     oracle@todc.co.uk           ACTIVE   30DEC2006 15:39:56 00:59:40
                        (J003)-UNKNOWN (todc.co.uk)                               
SCOTT/oracle 122,6319   sqlplus@todc.co.uk (TNS     INACTIVE 25MAY2007 17:27:40 00:11:57
                        V1-V3)-pts/8(todc.co.uk)
SCOTT/oracle 152,1690   sqlplus@todc.co.uk (TNS     INACTIVE 25MAY2007 17:28:13 00:11:24
                        V1-V3)-pts/10(todc.co.uk)

Enter SID,SERIAL# (cut-and-paste from the above) of user to kill: 144,2672 

You can now issue (eg cut and paste it) either:
.  ALTER SYSTEM KILL SESSION '144,2672'
or (from 8i)
.  ALTER SYSTEM DISCONNECT SESSION '144,2672' immediate

The latter is in theory better as "kill session" can leave a dead session.

SQL> ALTER SYSTEM KILL SESSION '144,2672';

System altered.

SQL> start killsessions

Generate ALTER SYSTEM DISCONNECT... script for all connections from a given machine/program

The currently connected machines/users are:

0001 -(todc.co.uk) [OWB/, 1 session]
0002 oracle@todc.co.uk (J003)-UNKNOWN (todc.co.uk) [OWB/oracle, 1 session]
0003 sqlplus@todc.co.uk (TNS V1-V3)-pts/10(todc.co.uk) [SCOTT/oracle, 1 session]
0004 sqlplus@todc.co.uk (TNS V1-V3)-pts/8(todc.co.uk) [SCOTT/oracle, 1 session]

Enter specific program/terminal/machine name to kill connections from: scott

ALTER SYSTEM DISCONNECT SESSION '122,6334' IMMEDIATE;
/* ^sqlplus@todc.co.uk (TNS V1-V3)-pts/8(todc.co.uk) SCOTT/oracle ^*/
ALTER SYSTEM DISCONNECT SESSION '152,1816' IMMEDIATE;
/* ^sqlplus@todc.co.uk (TNS V1-V3)-pts/10(todc.co.uk) SCOTT/oracle ^*/

SQL> start KillSessionS.Tmp

System altered.

System altered.

Previous script of the week - Identify indexes marked as UNUSABLE and generate a script to rebuild them so as to fix the issue. (week 2007/20)


RebuildUnusableIndexes.Sql - Oracle indexes can become "unusable" for a number of reasons. For instance, if you re-organise a table using ALTER TABLE xxx MOVE, this involves row movement, and as a B-Tree index is a structure with pointers to table ROWIDs, the original B-Tree pointers will no longer be valid/accurate - hence Oracle marks the index as being UNUSABLE. Another example would be when certain bulk loading methods - such as SQL*Loader direct path load - are used, where data is pre-formatted into data blocks and simply appended to the table directly. No effort is made by these tools to format and append the index blocks.


This script very quickly identifies any index that is marked as UNUSABLE - irrespective of how it got into that state - and creates a text file of the ALTER INDEX ... REBUILD statements required to make the indexes usable. It gives a count of the total number of UNUSABLE indexes in the database and allows index owner and name to be entered as selection parameters (with wild-card matching allowed). SYS and SYSTEM indexes are ignored, however.


Note that if you do have unusable indexes in the database and are unable to rebuild them for whatever reason then it is possible to prevent any query from attempting to use these indexes by setting the SKIP_UNUSABLE_INDEXES parameter.
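The essence of the script can be sketched as below - an illustrative simplification (note that for partitioned indexes the UNUSABLE status is held in DBA_IND_PARTITIONS, which this sketch does not cover):

```sql
SET HEADING OFF FEEDBACK OFF
SPOOL RebuildUnusableIndexes.TMP
SELECT 'ALTER INDEX '||owner||'.'||index_name||' REBUILD;'
  FROM dba_indexes
 WHERE status = 'UNUSABLE'
   AND owner      LIKE UPPER('&indexowner')
   AND index_name LIKE UPPER('&indexname')
   AND owner NOT IN ('SYS','SYSTEM')
 ORDER BY owner, index_name;
SPOOL OFF
SET HEADING ON FEEDBACK ON
```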


Sample usage of this script might result in the following output:

SQL> alter table employees move;

Table altered.

SQL> start rebuildunusableindexes

Rebuild Unusable Indexes (there are currently 6 unusable indexes)

Enter owner of the unusable indexes to rebuild (% for all): HR
Enter the name of the unusable index to be rebuilt (% for all): %

You now have a file called RebuildUnusableIndexes.TMP to run...

SQL> start RebuildUnusableIndexes.TMP

Index altered.


Index altered.


Index altered.


Index altered.


Index altered.


Index altered.


Previous script(s) of the week - getting even more use out of V$SQL_PLAN/locating tables that are full table scanned and identifying index usage. (week 2007/19)


SqlPlanFTS.Sql & SqlPlanIndex.Sql - The V$SQL_PLAN view is very useful indeed. Not only can you see the actual access path/explain plan a statement used (as opposed to the possibly flawed access path when you use the EXPLAIN PLAN command) but you can make use of the information in other ways too. These scripts do just that, using the OBJECT_NAME column to identify SQL statements that use a particular table or index. By combining this with the OPERATION column you can then show any SQL that, for instance, used a full table scan.


There are many other ways that this information can be used to good effect - identifying SQL that uses a particular access method (HASH JOIN, SORT MERGE, SORTING...); finding out what SQL is hitting a given table (maybe to detect whether the table is actually used or to try and trap unauthorised access/DML); seeing if a given index is actually used by the application or is redundant and so could be removed to save space/DB work in maintaining it.


With the V$SQL view you must not forget, however, that it only lists SQL that is still in the library cache. So if you are trying to identify whether an object is used, be sure to check V$SQL regularly over the course of a working day/week/month. If the statement gets aged out before you look for it then clearly you will miss it and draw a false conclusion from the results you get. Tools such as STATSPACK and AWR capture V$SQL_PLAN information and so can also be used for this type of analysis - their data is retained for a (configurable) number of days and the frequency of capture can be set to suit your own particular needs.


Note that these scripts use Oracle 10g columns in the V$SQL and V$SQL_PLAN views - they can be adapted to work in Oracle 9i by replacing SQL_ID with ADDRESS/HASH_VALUE and SQL_FULLTEXT with SQL_TEXT.
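The heart of SqlPlanFTS.Sql can be sketched as a join of the two views - an illustrative query only, using the 10g columns mentioned above (&tablefilter is the prompted-for filter):

```sql
SELECT s.sql_id
      ,p.object_owner
      ,p.object_name
      ,s.sql_fulltext
  FROM v$sql_plan p
      ,v$sql      s
 WHERE p.sql_id       = s.sql_id
   AND p.child_number = s.child_number
   AND p.operation    = 'TABLE ACCESS'
   AND p.options      = 'FULL'
   AND p.object_name  LIKE NVL(UPPER('&tablefilter'),'%');
```

For SqlPlanIndex.Sql the filter becomes p.operation = 'INDEX' (with any OPTIONS value) matched against the index name instead.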


Sample usage of these scripts might result in the following output:

SQL>start sqlplanfts
Enter SQL statement filter if required (NULL=all FTS SQL): %LBAC$POL%

Table name
--------------------------------
LBAC$POL

1 row selected.

Display SQL? (Y/N):y

FTS Statement                                        Table name SQL ID [child]
---------------------------------------------------- ---------- ----------------
select pol#, db_labels, default_format from lbac$pol LBAC$POL   6r75f7k88qr4a[1]
SELECT POL#, PACKAGE FROM LBAC$POL                   LBAC$POL   74jp9b3h4krhd[1]
WHERE BITAND(FLAGS,1)=1 ORDER BY PACKAGE

3 rows selected.

SQL>start sqlplanindex
Enter value for index: SCHEDULER$_PROGRAM_ARG_PK

Optimizer Plan:
--------------------------------------------------------------------------------
| Operation                      | PHV/Object Name     |  Rows | Bytes|   Cost |
|SELECT STATEMENT                |----- 3945791050 [0] |       |      |      1 |
|SORT AGGREGATE                  |                     |     1 |   13 |        |
| INDEX RANGE SCAN               |SCHEDULER$_PROGRAM_A |     1 |   13 |      0 |

Statement
-----------------------------------------------------------------
select count(*) from sys.scheduler$_program_argument where oid=:1

Go back to the top of the page

Previous script of the week - Show long running SQL details, progress and estimated time to complete. (week 2007/18)


LongOpsSql.Sql - Oracle will record metrics for statements it deems to be "long-running" - i.e. ones it expects to take considerable time to execute. This typically means it has detected that the execution path will use full table scans and/or hash joins with a large number of blocks scanned. The data is exposed via the V$SESSION_LONGOPS view. Processes may also write directly to this view via the DBMS_APPLICATION_INFO.SET_SESSION_LONGOPS procedure - something that RMAN makes use of, as can any application code, if required. This makes DBMS_APPLICATION_INFO a very useful package indeed for instrumenting code - you can create a row in V$SESSION_LONGOPS when your process starts and update it at intervals so the progress can be monitored by some other external process.
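A minimal sketch of instrumenting your own code this way (the loop body, operation name and work counts are illustrative, not from any real application):

```sql
DECLARE
  -- set_session_longops_nohint asks for a fresh row in V$SESSION_LONGOPS
  l_rindex BINARY_INTEGER := dbms_application_info.set_session_longops_nohint;
  l_slno   BINARY_INTEGER;
BEGIN
  FOR i IN 1 .. 100 LOOP
    -- ... process one unit of work here ...
    dbms_application_info.set_session_longops(
      rindex    => l_rindex,    -- identifies our row; reused on each call
      slno      => l_slno,      -- internal bookkeeping; just pass it back in
      op_name   => 'My batch job',
      sofar     => i,
      totalwork => 100,
      units     => 'rows');
  END LOOP;
END;
/
```

Any session can then watch the job's progress by querying V$SESSION_LONGOPS.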


The Oracle documentation describes V$SESSION_LONGOPS thus: This view displays the status of various operations that run for longer than 6 seconds (in absolute time). These operations currently include many backup and recovery functions, statistics gathering, and query execution. In Oracle 10g rollback progress data was added to the information available.


Data is kept in the view for a certain length of time (basically it will be re-used as space is needed) so it is also a very useful place to look for data after the event - it can show you things like how many block visits were made, and how long the statement actually took.


The script given here uses V$SESSION_LONGOPS to locate statements that are still running and gives details of how much work has been done, how long it has so far taken, and when Oracle expects the operation to complete. It joins back to V$SESSION and V$SQL to give details of who and what is being run.
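The core of such a query might look like the following (a simplified sketch - the real LongOpsSql.Sql formats the output and shows far more detail):

```sql
SELECT s.username,
       lo.opname,
       lo.target,
       lo.sofar,
       lo.totalwork,
       ROUND(100 * lo.sofar / lo.totalwork, 1) pct_done,
       lo.elapsed_seconds,
       lo.time_remaining,
       sq.sql_text
FROM   v$session_longops lo,
       v$session         s,
       v$sql             sq
WHERE  lo.sid     = s.sid
AND    lo.serial# = s.serial#
AND    s.sql_id   = sq.sql_id (+)   -- outer join: the SQL may have aged out
AND    lo.totalwork > 0
AND    lo.time_remaining != 0;      -- remove to see completed operations
```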


To see statements that have already completed remove the line:

AND lo.time_remaining!=0
(always assuming the required data, SQL as well as LONG_OPS, is still available).


Sample usage of the script might result in the following output:

SQL>longopssql.sql
                                                                              Start time/
                                                                   Target     Last Update time
           Oracle User/                                            So far     remaining->elapsed   Program  SID/
Long Op    OS User      Status   Statement                         Total Work [%done end time]     Terminal Serial#
---------- ------------ -------- --------------------------------- ---------- -------------------- -------- -------
Long op message
------------------------------------------------------------------- 
Sort       TONIONS /    INACTIVE select j1.id from bobs j1,nobs j2  ________, 04-MAY-2007 16:53:15 SQLPLUS  288/557
Output     tonions                                                  _____455, 04-MAY-2007 16:53:33 MyPc     [733977]
                                                                    ____3794  132->18 [12% eta     
                                                                    (Blocks)  16:55pm]             
                                                                    [12%]
Sort Output:  : 455 out of 3794 Blocks done
Table Scan TONIONS /    unknown INSERT INTO mjh (SELECT id FROM bb) TONIONS., 02-MAY-2007 07:27:03          271/4382
           tonions                                                  __193000, 02-MAY-2007 07:30:15          []
                                                                    __193006  0->192 [99% eta
                                                                    (Blocks)  16:53pm]
                                                                    [99%]
Table Scan:  TONIONS.BOB: 193000 out of 193006 Blocks done

Go back to the top of the page

Previous script of the week - Use block SCN to show approximately when a table was last updated. (week 2007/17)


TableLastUpdate.Sql - in Oracle 10g the flashback recovery feature exposed the SCN (system change number) of a block of (table) data - via the ORA_ROWSCN pseudo-column. A SCN can be converted to a TIMESTAMP using the SCN_TO_TIMESTAMP function so it is possible to run a very simple query to get a pretty good idea of when a data change was last made to a table. By default the SCN is only stored per block so it cannot be used to accurately get the change date for an individual row of data - unless the table is created with the ROWDEPENDENCIES clause in which case the SCN is stored per row (at the cost of extra bytes for each row stored).
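The essence of the script is a one-liner (shown here against a hypothetical SCOTT.EMP table):

```sql
-- Approximate time of the last committed change to any block of the table.
SELECT SCN_TO_TIMESTAMP(MAX(ora_rowscn)) last_changed
FROM   scott.emp;
```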


The Oracle documentation describes ORA_ROWSCN thus: The approximate SCN of the most recent COMMIT operation for the block in which the data row resides. ORA_ROWSCN is in fact a conservative upper bound of the latest commit time: the actual commit SCN can be somewhat earlier. ORA_ROWSCN is more precise (closer to the actual commit SCN) for a row-dependent table (created using CREATE TABLE with the ROWDEPENDENCIES clause). The documentation goes on to suggest that ORA_ROWSCN can be used (on a ROWDEPENDENCIES table) for optimistic locking purposes - doing away with the need for an application-maintained timestamp or row version number column (whether popular frameworks such as J2EE/Hibernate can cope with this is another matter!).


There is also another caveat in that the link between SCN and TIMESTAMP is only held for a limited amount of time and so you cannot go back indefinitely (ultimately it depends on how active the database is as to how far you can go back in time; the more active the DB, the less far back you can go). In practice a period of just 5 days is suggested by the Oracle documentation for dependable results, although the tests performed by TOdC Limited have tracked SCNs back 2 years!


Sample usage of the script might result in the following output:

SQL>tablelastupdate.sql
Enter value for table_name: SCOTT.EMP
old   1: select scn_to_timestamp(max(ora_rowscn)) from &table_name
new   1: select scn_to_timestamp(max(ora_rowscn)) from SCOTT.EMP

SCN_TO_TIMESTAMP(MAX(ORA_ROWSCN))
--------------------------------------
26-APR-07 09.21.10.000000000

Enter value for table_name: SCOTT.DEPT
old   1: select scn_to_timestamp(max(ora_rowscn)) from &table_name
new   1: select scn_to_timestamp(max(ora_rowscn)) from SCOTT.DEPT

REM Now attempt to get timestamp from a table whose ORA_ROWSCN is no longer mapped by flashback...
select scn_to_timestamp(min(ora_rowscn)) from SCOTT.DEPT
*
ERROR at line 1:
ORA-08181: specified number is not a valid system change number
ORA-06512: at "SYS.SCN_TO_TIMESTAMP", line 1
Go back to the top of the page

Previous script of the week - Show the source code for compilation errors of a package or procedure, with a "window" of 5 lines before and after each error line and a "pointer" to each error location. (week 2007/16)


PLSQLErr.Sql - There are a lot of good GUI tools to help you develop PL/SQL code, which will highlight and assist in fixing compilation errors. However, they tend to be rather heavyweight and if you are not a developer (i.e. you are a DBA) then you may not have access to one, or not have one readily to hand. You can see the compilation errors via the SHOW ERRORS command but the format is rather lacking and it can be hard to spot where the error actually lies using that command from within SQL*Plus. This little script gets the error text out of the USER_ERRORS view and combines that with the USER_SOURCE view to not only show each line that has an error, but also "point" to the statement within the error line and, as an added bonus, give you contextual information by displaying the 5 lines before and after each error line.
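In outline the join looks something like this (a sketch only - the real script adds the "pointer" formatting; the substitution variable name is illustrative):

```sql
SELECT e.line,
       e.position,
       e.text  error_text,
       s.line  src_line,
       s.text  src_text
FROM   user_errors e,
       user_source s
WHERE  s.name = e.name
AND    s.type = e.type
AND    s.line BETWEEN e.line - 5 AND e.line + 5   -- the 5-line "window"
AND    e.name = UPPER('&1')
ORDER  BY e.sequence, s.line;
```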


Whilst possibly not as slick as a GUI product it is fast (very fast) and extremely easy to use. Simply run the script using the name of the package that failed to compile as the only parameter.


Sample usage of the script might result in the following output (with the basic SHOW ERRORS error details included for comparison):

SQL> ALTER PACKAGE mybadpackage COMPILE;

Package compiled with errors.

SQL> @plsqlerr mybadpackage

PL/SQL: Declaration ignored [at line 17, position 1]
     12  -- Sub-Program Unit Declarations
     13  /* call to ensure package is loaded in shared pool */
     14  PROCEDURE LOAD_INTO_POOL;
     15
     16  /* mybadprocedure */
-->  17  PROCEDURE mybadprocedure
         ^
     18  (id IN NUMBER
     19  ,DESCRIPTION IN mytable.description%TYPE
     20  ,COMPONENT IN VARCHAR2
     21  ,ERROR_LEVEL IN PLS_INTEGER
     22  ,CREATED_BY IN VARCHAR2

PLS-00201: identifier 'mytable.description' must be declared [at line 19, position 20]
     14  PROCEDURE LOAD_INTO_POOL;
     15
     16  /* mybadprocedure */
     17  PROCEDURE mybadprocedure
     18  (id IN NUMBER
-->  19  ,DESCRIPTION IN mytable.description%TYPE
                         ^
     20  ,COMPONENT IN VARCHAR2
     21  ,ERROR_LEVEL IN PLS_INTEGER
     22  ,CREATED_BY IN VARCHAR2
     23  );
     24

SQL> SHOW ERRORS
Errors for PACKAGE mybadpackage:
17/1     PL/SQL: Declaration ignored
19/20    PLS-00201: identifier 'mytable.description' must be declared
Go back to the top of the page

Previous script of the week - Locate tables that have had a large proportion of data deleted leaving a lot of unused space. (week 2007/15)


StretchedTables.Sql - A stretched table is one that has had, at one time, a lot of rows in it but no longer does. Oracle records, via the High Water Mark (HWM), the last block ever used to hold data. When a full table scan (FTS) is used to query data from the table Oracle scans all blocks up to the HWM - this can (and does) slow down query performance. This script uses the DBMS_SPACE built-in package to try and locate tables with HWM problems. HWM problems can be fixed (in 7.3.n) only by either dropping and recreating the table or via the TRUNCATE command. Remember, though, that if data remains in the table you must take steps to secure it before a drop or truncate. In Oracle 8i and above you can use ALTER TABLE ... MOVE to reclaim the space, in Oracle 10g there are SHRINK and COALESCE commands.
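The DBMS_SPACE call involved might be sketched as follows (the table name is illustrative and the real script wraps this in a loop over the selected tables):

```sql
DECLARE
  l_total_blocks  NUMBER;
  l_total_bytes   NUMBER;
  l_unused_blocks NUMBER;
  l_unused_bytes  NUMBER;
  l_lue_file_id   NUMBER;
  l_lue_block_id  NUMBER;
  l_last_block    NUMBER;
BEGIN
  dbms_space.unused_space(
    segment_owner             => USER,
    segment_name              => 'EMP',    -- illustrative table name
    segment_type              => 'TABLE',
    total_blocks              => l_total_blocks,
    total_bytes               => l_total_bytes,
    unused_blocks             => l_unused_blocks,  -- blocks above the HWM
    unused_bytes              => l_unused_bytes,
    last_used_extent_file_id  => l_lue_file_id,
    last_used_extent_block_id => l_lue_block_id,
    last_used_block           => l_last_block);
  dbms_output.put_line('Blocks below HWM: ' ||
                       (l_total_blocks - l_unused_blocks));
END;
/
```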


NB For the purposes of this script a table is stretched if more than nn% of blocks below the HWM are free (80-90% is a good starting point).


This script will request 3 parameters if not specified on the command line:

  • owner of tables to check (wild cards allowable)
  • name of table to check (wild cards allowable)
  • % of free blocks considered as being worth reporting on


The second part of the script (which uses DBMS_SPACE.FREE_BLOCKS) only works on tables that are not in ASSM tablespaces (Automatic Segment Space Managed) - as such tablespaces do away with freelists. The following error is returned if ASSM is used:

 ORA-10618: Operation not allowed on this segment
 ORA-06512: at "SYS.DBMS_SPACE", line 152
 ORA-06512: at line 54

That part must be removed if you want to use the script on ASSM controlled tables.


Sample usage of the script to locate those tables where 90% or more of blocks are "free" (ie on the free list - they may not be empty) might result in the following output:

SQL> start stretchedtables OWNER1 % 90

Table OWNER1.RM$DEFERRED_CHECKS is stretched!
Total blocks          = 117
Total bytes(k)        = 234
Unused bytes(k)       = 0
Blocks above HWM      = 0
Blocks below HWM      = 116
Free blocks           = 116
%free wrt used blocks = 100%
.
Table OWNER1.RM$DEFERRED_QUERIES is stretched!
Total blocks          = 247
Total bytes(k)        = 494
Unused bytes(k)       = 0
Blocks above HWM      = 0
Blocks below HWM      = 246
Free blocks           = 246
%free wrt used blocks = 100%
Go back to the top of the page

Previous script of the week - Show the space used for each table in a SQL Server database and calculate the total space used. (week 2007/14)


SqlServerTableSizes.Sql - The sp_spaceused system stored procedure will give space details, including a row count, for each table in the database. sp_MSforeachtable can be used as an "iterator" to run sp_spaceused for every table in a given database. However, the results appear as a single-row resultset for each and every table. The output is rather unwieldy and cannot be sorted in this format:


EXEC sp_MSforeachtable @command1="EXEC sp_spaceused '?'"

To get a sorted list the results of the above command must first be loaded into a temporary table. This also allows some manipulation of the data to make it more presentable. The end result is a resultset of useful information that can be sorted any which way you need and filtered just as any normal SQL can be. One word of caution: the @command1 parameter only allows up to 128 characters so you have to keep the command as short as possible!


This script uses sp_MSforeachtable to populate a temporary table and then queries this table to give a sorted list of tables and their sizes (largest first). It goes on to aggregate the space used by all tables to give the values for the entire database. Some CASTing is used to prevent data loss due to rounding and to show the end results to four decimal places (an arbitrary choice).
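The approach can be sketched in T-SQL as follows (column types simplified; sp_spaceused returns its sizes as strings like '123 KB', hence the REPLACE/CAST):

```sql
CREATE TABLE #spaceused
( [name]     sysname
, [rows]     bigint
, reserved   varchar(18)
, data       varchar(18)
, index_size varchar(18)
, unused     varchar(18));

-- Keep @command1 short - it is limited to 128 characters.
EXEC sp_MSforeachtable
     @command1 = "INSERT #spaceused EXEC sp_spaceused '?'";

SELECT [name]
     , [rows]
     , CAST(REPLACE(reserved, ' KB', '') AS int) AS reservedKB
FROM   #spaceused
ORDER  BY 3 DESC;

DROP TABLE #spaceused;
```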


When using temporary tables in T-SQL there is always the temptation to replace them with the more efficient (for small data volumes at least) table variables. You cannot do this here, however, as the EXEC sp_MSforeachtable @command1... construct does not support the use of table variables.


Sample usage of the script might result in the following output:

Table Name     Rows reservedMB  dataMB indexMB unusedMB    sumKB
----------   ------ ---------- ------- ------- -------- --------
State         99322   201.1094 86.2344 14.1875 100.6875 201.1094
ZipCode       70562   140.8672 50.1250 20.1797  70.5625 140.8672
County        33326    90.4766 34.2031 13.4766  42.7969  90.4766
Country       29922    21.3750 15.5859  5.7891    .0000  21.3750
MadeUpTable   13550     8.3750  8.3047   .0391    .0313   8.3750

Total reservedMB Total dataMB Total indexMB Total unusedMB Total sumKB
---------------- ------------ ------------- -------------- -----------
             516          237            66            213         516
Go back to the top of the page

Previous script of the week - Generate a file of commands to try and fix invalid objects by appropriate recompilation. (week 2007/13)


CompileBadObjects.Sql - following on from last week's script that gave a listing of objects marked as invalid this script will generate the commands that are most likely to correctly re-compile those objects.
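The generation boils down to something like this (a sketch only - the real script adds owner/name/type filters and spools the output to a file; the DECODE handles the PACKAGE BODY special case):

```sql
SELECT 'ALTER '
       || DECODE(object_type, 'PACKAGE BODY', 'PACKAGE', object_type)
       || ' ' || owner || '.' || object_name
       || DECODE(object_type, 'PACKAGE BODY', ' COMPILE BODY;', ' COMPILE;')
FROM   dba_objects
WHERE  status = 'INVALID'
ORDER  BY object_type, object_name;
```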


Note that the commands are not generated in dependency order and so there may be a need to run the script multiple times to get everything that will compile cleanly to compile. Also, note that compilation of certain objects can cause other, dependent objects to go invalid as a result (e.g. if you compile a package specification then any package bodies that reference it - or triggers, views etc. - will go invalid as a result).


A count of the number of invalid objects before and after the commands are run is given so that it is clear as to what the effect has been.


Although Oracle supply scripts to recompile invalid objects (e.g. utlrp.sql, utlprp.sql), this script shows you what is invalid and allows you to pick which to re-compile (via cut and paste etc) before actually going ahead. In that respect it gives a lot more control over what is going on/being done.


Note that in Oracle 10g SYNONYMS can go invalid, and this script struggles when the synonym points to a package, procedure, function or sequence.


Objects go invalid when things they depend on change. In most cases they automatically get re-compiled and become valid the next time they are accessed/used. Sometimes they will not - for instance when something they depend on has changed in a manner that means they are no longer valid (for instance if a table is dropped and the object - package, procedure, function, trigger etc - still uses the table). In either case, as a DBA or developer you will want to know what is invalid so that you can look into why it is invalid and perhaps try to re-validate it rather than risking your application failing in the middle of the night. Locking issues are also a risk if you have a heavily used package that goes invalid and multiple sessions all need to use it and try to validate it. The best strategy is to identify invalid objects as soon as possible and to fix them immediately.


Sample usage of the SQL*Plus script might result in the following output:


SQL> start compilebadobjects

Compile bad PL/SQL objects (there are currently 6 invalid objects)

Enter value for Object Owner: TESTUSER
Enter value for Object Name: %
Enter value for Type: %
ALTER TRIGGER TESTUSER.TR_PEST_ARIUD COMPILE;
ALTER TRIGGER TESTUSER.TR_ESP_ARUD COMPILE;
ALTER TRIGGER TESTUSER.TR_ETC_ARIUD COMPILE;
ALTER TRIGGER TESTUSER.TR_PET_ARIUD COMPILE;
ALTER TRIGGER TESTUSER.TR_ARS_ARUD COMPILE;
ALTER TRIGGER TESTUSER.TR_ART_ARIUD COMPILE;

You now have a file called CompileBadObjects.TMP to run...

SQL>
Go back to the top of the page

Previous script of the week - List out all invalid objects for a given owner (or all owners or owners matching a wild-card search). (week 2007/12)


BadObjects.Sql - this very simple single table SQL*Plus script produces a list of those objects in the database that are invalid for a given schema (or all schemas if % is used, or schemas matching any wild card search string).
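The heart of the script is no more than the following (the substitution variable name is illustrative):

```sql
SELECT owner,
       object_name,
       object_type
FROM   dba_objects
WHERE  status = 'INVALID'
AND    owner LIKE UPPER('&owner')
ORDER  BY owner, object_type, object_name;
```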


Objects go invalid when things they depend on change. In most cases they automatically get re-compiled and become valid the next time they are accessed/used. Sometimes they will not - for instance when something they depend on has changed in a manner that means they are no longer valid (for instance if a table is dropped and the object - package, procedure, function, trigger etc - still uses the table). In either case, as a DBA or developer you will want to know what is invalid so that you can look into why it is invalid and perhaps try to re-validate it rather than risking your application failing in the middle of the night. Locking issues are also a risk if you have a heavily used package that goes invalid and multiple sessions all need to use it and try to validate it. The best strategy is to identify invalid objects as soon as possible and to fix them immediately.


Although a very simple script indeed, it is possibly the most widely used of the scripts presented here. It is run on every system time and time again as a simple health check - particularly just before and just after making system changes. There should really never be invalid objects in any system other than a development system; if there are invalid objects then it is an indication that trouble is not far away.


Sample usage of the SQL*Plus script might result in the following output:


SQL> start badobjects
Enter object owner to list bad objects for: TESTUSER
List all objects that have a status of INVALID

Sat Mar 24                                                                  page    1
               Objects Owned by TESTUSER with Status of INVALID

Object Owner                   Object Name                      Object Type
------------------------------ -------------------------------- --------------------
TESTUSER                       TR_PEST_ARIUD                    TRIGGER
                               TR_ESP_ARUD                      TRIGGER
                               TR_ETC_ARIUD                     TRIGGER
                               TR_PET_ARIUD                     TRIGGER
                               TR_ARS_ARUD                      TRIGGER
                               TR_ART_ARIUD                     TRIGGER


SQL>
Go back to the top of the page

Previous script of the week - Automatically create an AWR report for the previous day (i.e. without the need to enter any database or snapshot details). (week 2007/11)


AWRReport.Sql - The AWR reports in Oracle 10g would seem to be aimed at replacing STATSPACK - just so long as you have paid for the necessary optional "pack". When using STATSPACK it was found to be useful to generate one each night for the previous 24 hour period. This required a degree of hacking with the supplied SQL report which was awkward with each new release as it meant having to go back and re-do the changes for the latest version of the STATSPACK report.


With AWR the reporting is done via a single package call - dbms_workload_repository.awr_report_text, if you want the main report in text (as opposed to HTML) format. Therefore, all that needs doing is working out the required input parameters: database ID; instance number; begin snapshot ID; end snapshot ID. All of these can be easily located from views such as V$DATABASE, V$INSTANCE, DBA_HIST_SNAPSHOT and DBA_HIST_DATABASE_INSTANCE. This script does just that; it requires a single input parameter of the directory in which to place the output. The output file name is formatted with the instance name and the date included - e.g. awr_racdb1_20070317.lst.
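Pulling those pieces together, the report call might be sketched like this (snapshot selection shown for "yesterday"; the real script's snapshot logic may well differ):

```sql
SELECT output
FROM   TABLE(dbms_workload_repository.awr_report_text(
         (SELECT dbid FROM v$database),
         (SELECT instance_number FROM v$instance),
         (SELECT MIN(snap_id) FROM dba_hist_snapshot
          WHERE  begin_interval_time >= TRUNC(SYSDATE) - 1),
         (SELECT MAX(snap_id) FROM dba_hist_snapshot
          WHERE  end_interval_time <= TRUNC(SYSDATE))));
```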


The report as supplied is for the instance that the session logs on to. In a RAC environment you can get the report for any instance (i.e. node) of the RAC cluster by adding a second input parameter for the instance number and altering the first SQL to:

SELECT TO_CHAR(d.dbid) dbid ,d.name db_name FROM v$database d;

and getting the instance name from dba_hist_database_instance.


Go back to the top of the page

Previous script of the week - Quickly generate a script that can disable, enable or drop foreign key constraints that reference a given table(s). (week 2007/10)


FkOn-OffDepend.Sql - having used last week's script successfully to copy data from one schema to another without having to worry about table order, it was not long before it became clear that the script was no good should you need to delete data rather than add it. To delete data you need to disable the dependent foreign key constraints - those that reference the table being deleted from - and not the constraints on the deleting table. So this script generates the necessary ALTER TABLE commands to turn off, turn on (with or without validation) or even drop foreign key constraints on all tables that make reference to a specified table, a wildcard table name match or all tables for the current schema. This script, therefore, allows you to turn off those constraints that would prevent you from deleting data from a "parent" table and then to generate a script to turn them on again (for whatever that is worth as, if you have deleted the parent data, the "child" data is not going to be very good). As with the previous script, when re-enabling, a choice of with or without validation is available.
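The lookup of dependent foreign keys is a self-join of USER_CONSTRAINTS, roughly as below (a same-schema sketch - cross-schema references would also need R_OWNER; the substitution variable name is illustrative):

```sql
-- Generate DISABLE commands for foreign keys that reference
-- (point at) the given table's primary/unique key constraints.
SELECT 'ALTER TABLE ' || fk.table_name
       || ' DISABLE CONSTRAINT ' || fk.constraint_name || ';'
FROM   user_constraints fk,
       user_constraints pk
WHERE  fk.constraint_type   = 'R'
AND    fk.r_constraint_name = pk.constraint_name
AND    pk.table_name LIKE UPPER('&table_name');
```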


As a security measure against running the resulting output against the wrong schema user or database, a simple check is made before anything is run to make sure the script is being run as the same user it was generated for, and against the same database it was connected to when the script was generated. Note that this latter part uses a SQL*Plus 9 (and above) feature so you may need to remove it if you are on Oracle8 or below.


A count of the constraints by type and status is also included before and after the resultant ALTER TABLE commands so the ultimate impact of the script can be seen.


Go back to the top of the page

Previous script of the week - Quickly generate a script to disable, enable or drop constraints of a given type, for a given table/all tables (week 2007/09)


FkOn-Off.Sql - this simple single table SQL*Plus script generates the necessary ALTER TABLE commands to turn off, turn on (with or without validation) or even drop constraints on a specified table, wildcard table name match or all tables for the current schema. When re-enabling, a choice of with or without validation is available. Any type of constraint can be specified (C=CK, R=FK, P=PK, U=UK, %=ALL) although no dependency checking or order is applied so the disabling or dropping of PKs may not be successful if dependent FKs still exist.


As a security measure against running the resulting output against the wrong schema user or database, a simple check is made before anything is run to make sure the script is being run as the same user it was generated for, and against the same database it was connected to when the script was generated. Note that this latter part uses a SQL*Plus 9 (and above) feature so you may need to remove it if you are on Oracle8 or below.


A count of the constraints by type and status is also included before and after the resultant ALTER TABLE commands so the ultimate impact of the script can be seen.


Go back to the top of the page

Previous script of the week - Transaction progress/status with detail (week 2007/07)


ShowTransactionDetail.Sql - this script uses the Oracle V$TRANSACTION view to show you how much work has currently been done in active transactions (as blocks and rows). It will show all users with uncommitted or non-rolled-back transactions, when the transaction started, what action the session is currently undertaking (NB this is not necessarily the action that caused the transaction) and the number of blocks/rows affected. It builds on the script presented last week, ShowTransactionStatus.Sql.


Having used the script heavily in the seven days since placing it on TOdC.co.uk, it became apparent that with a simple additional outer join to V$SQL the script could show the currently active SQL (if any). Having made this change then it was just as easy to get the previous SQL too (V$SESSION which the query already joined to has references to both the current and previous statements).


It is important to note that the statements shown are not necessarily those that are responsible for the active transaction. They could be just SELECTs issued since the last DML. If, however, they are DML (INSERT, UPDATE, DELETE, MERGE) then it's hard to see that they are not contributors to the transaction.


NB Unlike the majority of scripts on this site this one uses an Oracle 10g specific feature - the SQL_FULLTEXT column from V$SQL. If you are using Oracle9i or below replace this with V$SQL.SQL_TEXT (which does not show the full statement, only the first 1000 characters of it!).


Sample usage of the SQL*Plus script might result in the following output:


SQL> start D:\TimOnionsCom\showtransactiondetail

                                         TRANSACTION DETAIL

User      Program                                                   Rollback  Rollback
Details   Client Terminal Details      Start Time           Action    Blocks   Records
Current Statement                      Previous Statement
--------- ---------------------------- -------------------- ------ --------- ---------
-------------------------------------- ----------------------------------------------- 
TOS (137) sqlplus@myserver             09-FEB-2007 17:05:47 UNKNOWN   22,729 2,000,002
          (TNS V1-pts/4(myclnt) as tos
select * from myothertable             insert into mytable select 0,sysdate
                                       rownum from bigtable where id<1000000

SQL>
Go back to the top of the page

Previous script of the week - Transaction progress/status (week 2007/06)


ShowTransactionStatus.Sql - this script uses the Oracle V$TRANSACTION view to show you how much work has currently been done in active transactions (as blocks and rows). It will show all users with uncommitted or non-rolled-back transactions, when the transaction started, what action the session is currently undertaking (NB this is not necessarily the action that caused the transaction) and the number of blocks/rows affected.
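The core join is simply the following (columns renamed for readability; the real script adds the action decode and formatting):

```sql
SELECT s.username,
       s.sid,
       s.serial#,
       t.start_time,
       t.used_ublk rollback_blocks,
       t.used_urec rollback_records
FROM   v$transaction t,
       v$session     s
WHERE  t.ses_addr = s.saddr
ORDER  BY t.used_urec DESC;
```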


This is clearly useful to show who on the system is doing work but it has two other uses. Firstly, if you run the script a few times and the number of blocks/records is increasing then the session is continuing within the transaction as normal. If the number is going down then the transaction is being rolled back. If the number remains the same then either it is waiting for a commit or the session is waiting on some resource to continue (which may be waiting on a lock or simply waiting on more data to be made available to add to the transaction - you see this quite often when there is some other SQL driving the transaction and it has a lot of work to do to get data, perhaps from a full table scan).


Secondly, if you know what the transaction is and what indexes are involved you can get an idea of how much work has been done, and if you know how many rows should be affected this can tell you how far the transaction has progressed. The number given is for table records and index entries. The example below was for an update on a table changing an indexed column value. 1 million rows were affected and the output shows just over 2 million records - that's 1 million for the row updates and 1 million for the index entries (plus a couple of extras as overhead). So the number of rollback records shown equates, roughly, to 1 for each row inserted/updated/deleted and 1 for each index entry affected.


Please note, however, that a transaction does not imply a single DML operation - a transaction can be made up of any number of uncommitted operations. The number of rollback records applies to all DML operations and hence potentially multiple tables. In addition, DML from within triggers is also included.


Sample usage of the SQL*Plus script might result in the following output:


SQL> start D:\TimOnionsCom\showtransactionstatus

Identify current transactions and show their actions

                                         TRANSACTION STATUS

User      Program                                                   Rollback  Rollback
Details   Client Terminal Details      Start Time           Action    Blocks   Records
--------- ---------------------------- -------------------- ------ --------- ---------
TOS (137) sqlplus@myserver             09-FEB-2007 17:05:47 UPDATE    22,729 2,000,002
          (TNS V1-pts/4(myclnt) as tos

A session will only be listed if it is INSERTing, UPDATEing, DELETEing
or performing DDL (eg CREATE TABLE) or COMMITting, ROLLBACKing DML.
An action of UNKNOWN represents an uncommitted transaction.
The Rollback blocks/records indicate the progress of the transaction,
by monitoring these you can see how far a statement has got and how quickly
it is progressing (for DML they increase, for rollback they decrease down to 0).
NB Rollback records DOES NOT necessarily equate to rows processed (but it may do).

Logical and physical IO can also be shown as can SCN via other V$TRANSACTION columns.
SQL>
Go back to the top of the page

Previous script of the week - show ASM disk group used/free space (week 2007/05)


ASMFreeSpace.Sql - when your Oracle datafiles, redo and archived logs were on operating system file storage it was easy to monitor how much physical disk space was left to grow into. When using ASM you can no longer use OS commands as the OS has no visibility of what ASM has done with the space. Fortunately, ASM makes the details accessible in V$ views, which in 10.2 at least are visible from the database instance as well as the ASM instance itself. So this means standard DBA SQL can be used to monitor ASM disk groups in much the same way as tablespace free space is checked. The main importance of this is for your archive log disk group as it is the one that is most likely to suffer extremes in usage outside of the DBA's control.


The script simply calculates used and free space for each disk group and shows percentage free, taking into account if internal mirroring has been applied. It requires a value to indicate the percentage free below which you consider the disk group to warrant attention - and will print the word ERROR next to any disk group that falls below that threshold.
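The percentage calculation itself is straightforward; the sketch below uses the P_Arch figures from the sample run on this page. Note that with external redundancy the usable free space equals the raw free space (normal redundancy mirroring would halve it).

```shell
# Sketch of the %free figure, using the P_Arch sample numbers
# (21,711MB free of 71,547MB total, external redundancy assumed).
pct_free=$(awk 'BEGIN {total=71547; free=21711; printf "%.2f", free * 100 / total}')
echo "$pct_free"
```

With an alert threshold of 40 this disk group would be flagged ERROR, since 30.35 falls below it.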


By using the script and applying some of the Unix/Linux shell scripting hints and tips found elsewhere on TOdC.co.uk it is quite easy to have the script email out an error message if and when a disk group goes into an error state. The shell script asmfreespacealert.sh will do this. It passes the output from the SQL*Plus script through a simple awk command to find any lines with the ERROR state set, and uses those lines as input to mail, driven by the (wonderful) xargs command. The shell script takes two command line arguments:

  • the percentage value below which a disk group should send an error message
  • email address (or delimited list of email addresses) to send the error mail to

Sample usage of the SQL*Plus script might result in the following output:


SQL> start D:\TimOnionsCom\asmfreespace.sql 40

                                               Usable             Usable
ASM Disk Total Space Used Space Free Space Free Space Used% Free%  Free% State
Group           (MB)       (MB)       (MB)       (MB)
-------- ----------- ---------- ---------- ---------- ----- ----- ----- -----
P_Arch        71,547     49,836     21,711     21,711 69.65 30.35 30.35 ERROR
P_Data       204,671     35,587    169,084    169,084 17.39 82.61 82.61 OK
P_Indx       204,671     33,728    170,943    170,943 16.48 83.52 83.52 OK
SQL>

Previous script of the week - are RAC services on their preferred nodes? (week 2007/04)


SrvCheck.sh - this, for a change, is not a SQL script but a bash shell script. It will list RAC services and show whether or not they are on their preferred instance. Having been unable to find a built-in Oracle command to give a simple indication of whether a service had been migrated (for whatever reason), a few simple lines of shell scripting were necessary to take the outputs from two srvctl commands and compare the results. This was necessary because an Oracle service will be automatically migrated to an available instance on failure of the preferred instance, but there is no mechanism for the service to automatically fail back. Hence, the DBA needs a quick and easy means to know where the services are and, more importantly, which ones are no longer on their preferred node. No doubt the details are in CRS logs somewhere, but this script does not need access to them, just the ability to run a srvctl command.


The script is pretty straightforward, making use of standard Unix/Linux commands. The first steps are to create temporary files that contain the service configuration (for the preferred instance data) and the service status (for the currently used instance data).


srvctl config service -d $1 |awk '{print $1,$2,$3}' |sort >srvc.lst
srvctl status service -d $1 |awk '{print $2,$4,$7}' |sort >srvs.lst

The outputs from the respective srvctl commands are processed through awk to extract the service name and instance details before being sorted, so that the services are guaranteed to be in the same order in each resulting temporary file. The sort is most probably not necessary as the srvctl output should be consistent for each command, but it pays to be sure.

The next step is to place each line of the two temporary files side-by-side using the -m option of the pr command:

pr -mt srvs.lst srvc.lst|...

and pipe the results to another little bit of awk that checks each line to see if the preferred instance is being used or not. The awk also ensures the two files have not got into a tangle and become misaligned.

awk '($1!=$4) {print "SEQERR"} ($3!=$6) {print "SRVERR:",$1," is ",$2," on ",$3," when ",$5," is ",$6} ($3==$6) {print "OK:",$1," is ",$2," on ",$3," and ",$5," is ",$6}'

The output from this is one line per service showing if it is on the preferred node or not. As a final step you can pipe that output into a grep to just show the services that have been migrated off their preferred instances.

grep "SRVERR:"
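The whole pipeline can be sketched end-to-end with fabricated data. The service and instance names below are made up purely for illustration; in SrvCheck.sh the two temporary files come from the real srvctl commands shown above.

```shell
#!/bin/sh
# Hypothetical demo of the compare step with fabricated srvctl-style data:
# field 3 of each file is the instance (current vs preferred).
printf 'oltp running inst2\nbatch running inst1\n' | sort > srvs.lst      # current status
printf 'oltp preferred inst1\nbatch preferred inst1\n' | sort > srvc.lst  # configuration
pr -mt srvs.lst srvc.lst |
awk '($1!=$4) {print "SEQERR"}
     ($3!=$6) {print "SRVERR:",$1,"is",$2,"on",$3,"when",$5,"is",$6}
     ($3==$6) {print "OK:",$1,"is",$2,"on",$3,"and",$5,"is",$6}' > result.txt
migrated=$(grep "SRVERR:" result.txt)   # services off their preferred instance
echo "$migrated"
rm -f srvs.lst srvc.lst result.txt
```

Here the oltp service is flagged because it is running on inst2 when its preferred instance is inst1.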


Previous script of the week - show available space in contiguous chunks (week 2007/03)


ContiguousSpace.Sql - report on the free space in your tablespaces showing each chunk of contiguous space, how big it is (in blocks) and how many extents of the most commonly used size will fit into the space. The report works out which areas of free space are actually adjacent to each other to determine the total space that could be allocated as a contiguous chunk. This is where it scores over other free space reports, which show either the overall total amount of space available with no indication as to whether or not it is usable (it may be fragmented into sizes too small for allocation), or each piece of free space without working out whether one/two/many areas of free space can be allocated together as a single larger chunk.


The results give, for each tablespace and datafile, an entry for each contiguous chunk of space - showing the start and end block ids. The count of blocks is reported along with the total number of chunks that make up the space. Each chunk of space is summarised as the starting block id and the count of the number of blocks (in brackets). The final two lines for each contiguous chunk show the most common extent size in use for the tablespace and how many such extents would fit in to the contiguous space (giving an indication of how useful that area of space actually is).


Note that this script, although written for SQL*Plus, uses DBMS_OUTPUT to display the resulting data. If your tablespace is defined to have a fixed extent size then this script offers little benefit. It is of most use when a variety of extent sizes have been used as this can result in a variety of sizes of areas of free space which may, or may not, be useable.
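The adjacency test at the heart of the report can be sketched in awk over (start block, block count) pairs. The extent list below is fabricated; adjacency follows the dba_free_space convention that an extent of b blocks starting at block N is contiguous with one starting at block N+b.

```shell
#!/bin/sh
# Fabricated free-extent list, "start_block block_count", sorted by start.
# Adjacent extents are merged into a single contiguous chunk, printed as
# start(total_blocks), mimicking what ContiguousSpace.Sql computes.
chunks=$(printf '1545 1022\n2567 2\n18441 512\n' |
awk 'NR==1   {start=$1; end=$1+$2; next}
     $1==end {end=$1+$2; next}                # adjacent: extend current chunk
             {print start"("end-start")"; start=$1; end=$1+$2}
     END     {print start"("end-start")"}')
echo "$chunks"
```

The first two extents merge into one 1,024-block chunk while the third stands alone, which is exactly the distinction a simple per-extent free space report cannot make.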


Sample usage of this script might result in the following output:


SQL> start D:\TimOnionsCom\contiguousspace.sql
Enter the name of the tablespace to report on: TS_TODC_DATA
Enter the minimum number of contiguous chunks to report on: 1
Tablespace: TS_TODC_DATA
File name: +TODC_DATA/racdb/datafile/TS_TODC_DATA.268.601743525
Start->End blocks: 1545->2569
Number of blocks: 1024
Number of chunks: 2
Chunk details: 1545(1022) 2568(2)
Most common extent size: 512 blocks
# of 512 block extents for this space: 1.998
.
Tablespace: TS_TODC_DATA
File name: +TODC_DATA/racdb/datafile/TS_TODC_DATA.268.601743525
Start->End blocks: 18441->18953
Number of blocks: 512
Number of chunks: 1
Chunk details: 18441(512)
Most common extent size: 512 blocks
# of 512 block extents for this space: 1
.
Tablespace: TS_TODC_DATA
File name: +TODC_DATA/racdb/datafile/TS_TODC_DATA.268.601743525
Start->End blocks: 22025->51209
Number of blocks: 29184
Number of chunks: 1
Chunk details: 22025(29184)
Most common extent size: 512 blocks
# of 512 block extents for this space: 57
SQL>


Previous script of the week - tablespace free space summary (week 2007/02)


TablespaceFreeSpace.Sql - a quick and easy way to see the essential space details for any or all tablespaces. It shows what is used, what was allocated and how much is left (all in blocks as well as bytes). The percentage of space left free is calculated, and an indication of fragmentation is given by the number of chunks that make up the total free space and the largest single chunk of free space (maximum contiguous space). Having been first written for Oracle7 it still works all the way up to 10g (and RAC). Some of the derived data is available as data dictionary columns in the later versions of Oracle but the script retains its original Oracle7 format for backwards compatibility.
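Two of the derived columns are simple arithmetic over the dictionary figures; the sketch below uses the TS_1BA numbers from the sample run to show how they fall out (used space is allocated minus free, and %free is free over allocated).

```shell
# Derived columns for the TS_1BA sample row: 98,304 free blocks
# out of 262,272 allocated blocks.
summary=$(awk 'BEGIN {alloc=262272; free=98304
                      printf "%d %.2f", alloc - free, free * 100 / alloc}')
echo "$summary"
```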


Sample usage of this script might result in the following output:


SQL> start D:\TimOnionsCom\tablespacefreespace.sql

Enter the name of the tablespace to report space on: TS%BA%

       No.  <--Max Contig-->        Free     Free     Used      Used    Alloc     Alloc
Tbsp   of     Space   Space  #free  Space    Space    Space     Space    Space     Space        Tbsp
Name   files (blocks) (k)    chnks  (blocks) (k)      (blocks)  (k)      (blocks)  (k)    %free Name
------ ----- -------- ------- ----- -------- -------- -------- --------- -------- --------- ----- ------
TS_1BA     2   98,304 786,432     1   98,304  786,432  163,968 1,311,744  262,272 2,098,176 37.48 TS_1BA
TS_2BA     1   65,536 524,288     1   65,536  524,288   65,664   525,312  131,200 1,049,600 49.95 TS_2BA
SQL>


Previous script of the week - redo log switch frequency and archive log file write rate (week 2007/01)


RedoRate.Sql - this simple single-table SQL*Plus script against the v$archived_log fixed view gives a really useful look at how much redo the database generated in a given time-span. It relies on the database being in archive log mode (otherwise the script will find no data to report on). From the results you can see when a redo log switch occurred, how long each redo log file was in use (and hence the frequency of log switches), how much data was written to each redo log file, how long it took to create the archive log file and (derived from this) how fast the archive process was able to write the redo data to the archive log drive.


As well as showing you how much redo the database is writing, the final column of data - the rate at which redo data was written to the archive logs - is also a pretty good indication of how fast the drive that archive logs are written to is. With EMC Symmetrix class SAN storage the rate has been observed at up to 50MB/sec. Note, however, that the rate is affected by other factors, not just the raw power of the underlying storage. The rate has been seen to drop dramatically (~1MB/sec!) when the CPU utilisation of the server approaches 100% (to be expected, as the archive process competes with the rest of the database application for CPU cycles). Contention for the underlying components of the disk infrastructure will also have a major impact. In one case a SAN with an archive log throughput known to be capable of in excess of 30MB/sec was observed to be reduced to a peak rate of 9MB/sec as a result of badly laid out disks leading to competition for SAN resources from outside the database. Another factor in the write speed is the total amount of data: for small files (less than 100MB) the rate can be very low, simply because the archiver process never really gets going!


The archive log filename displayed has the directory removed - and, to cater for *nix or Windows directory structures, it requires a U or W parameter value to be entered (so that the script knows which kind of "slash" to look for).
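The stripping itself amounts to splitting the path on the platform's slash and keeping the last field; the path below is a made-up example (a Windows path would be split on backslash instead).

```shell
# Hypothetical illustration of why the script asks for U or W: the
# directory prefix is dropped by splitting on the platform's slash.
arc_file=$(printf '%s\n' '/u01/arch/RACDB1_113.arc' | awk -F'/' '{print $NF}')
echo "$arc_file"
```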


The time span is specified in hours from the current time - the "from" parameter is the number of hours from now to find the first redo log file whilst the "to" parameter is the number of hours from now to find the last redo log file to report on. So, to get the redo log files for the last day enter a "from" of 24 (24 hours back from now) and a "to" of 0 (until now).
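The derived write-rate column is simply archive log size divided by the seconds taken to archive it; the figures below (43MB written in 1.2 seconds) are an assumed example, not taken from any particular row.

```shell
# Sketch of the MB/sec column: archive size over archiving duration.
rate=$(awk 'BEGIN {size_mb = 43; arc_secs = 1.2; printf "%.0f", size_mb / arc_secs}')
echo "$rate MB/sec"
```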


Sample usage of this script might result in the following output:

SQL> start D:\TimOnionsCom\redorate.sql
*nix or Windows? (U/W):U
Enter value for hours from: 48
Enter value for hours to: 0
                                   Log End  ARC End  In use Arc   Size Mb/
ARC name       Log Start Time       Time     Time     (mins) (s)  (MB)  sec
-------------- -------------------- -------- -------- -----  --- ----- ---
RACDB1_113.arc 04-JAN-2007 03:50:56 08:00:08 08:00:09 1,689    1    43  39
RACDB2_896.arc 04-JAN-2007 22:01:39 01:54:07 01:54:09   232    2    43  36
RACDB2_895.arc 03-JAN-2007 19:00:15 22:01:39 22:01:41 1,621    2    43  36
RACDB2_560.arc 03-JAN-2007 09:34:17 08:55:06 08:55:08 1,400    2    43  36
RACDB1_118.arc 03-JAN-2007 08:00:07 03:50:56 03:50:57 1,190    1    43  39
RACDB2_894.arc 02-JAN-2007 09:05:38 19:00:15 19:00:16 2,034    1    43  39
RACDB2_559.arc 02-JAN-2007 22:01:35 09:34:17 09:34:19   692    2    43  36
RACDB1_181.arc 02-JAN-2007 09:05:36 08:00:07 08:00:08 1,374    1    43  39
RACDB2_558.arc 02-JAN-2007 09:05:31 22:01:35 22:01:36   776    1    43  39
RACDB1_180.arc 02-JAN-2007 09:05:30 09:05:36 09:05:37     0    1    43  39
RACDB1_179.arc 02-JAN-2007 09:05:23 09:05:30 09:05:34     0    4    43  30
RACDB1_178.arc 02-JAN-2007 09:05:16 09:05:23 09:05:25     0    2    43  36
RACDB2_892.arc 02-JAN-2007 05:16:09 09:05:16 09:05:20   229    4     9   6
RACDB1_177.arc 02-JAN-2007 09:05:07 09:05:16 09:05:17     0    1    43  39
RACDB2_556.arc 02-JAN-2007 05:16:08 09:05:10 09:05:11   229    1    34  30
RACDB1_176.arc 02-JAN-2007 05:48:24 09:05:07 09:05:09   196    2    43  36
RACDB1_175.arc 02-JAN-2007 05:16:09 05:48:24 05:49:12    32   48     0   0

SQL>