Submitted by dave on Wed, 07/21/2010 - 15:52
Did you know that OEM has a built-in interface for executing simple ad-hoc queries and recording the results?
Here is how I set up a recurring query to find the number of user logins every 15 minutes and record the data.
1) Navigate to a database instance in OEM
2) Select the Related Links -> User Defined Metrics link from the bottom of the Database Instance Home tab.
3) Select the Create button to create a new User Defined Metric
a. Enter the Metric Name
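The teaser cuts off here, but the SQL behind a login-count metric can be very simple. This is a sketch, not the query from the original post; it counts non-background sessions in v$session, and OEM records the single value it returns each time the 15-minute schedule fires:

```sql
-- Count current user (non-background) sessions.
-- A User Defined Metric expects a query returning a single value.
select count(*)
  from v$session
 where username is not null
   and type = 'USER';
```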
Submitted by dave on Tue, 07/20/2010 - 12:36
I recently had an issue where a query submitted to the database showed a large change in performance. I could attribute the change to a recent update of object statistics, but the query wasn't responding well to the new statistics because of a combination of a view, a subquery, a UNION, and LIKE operators. After spending too much time on this statement I decided to hint it so it would perform as it did before the most recent stats change.
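I won't reproduce the original statement, but the shape of the fix was along these lines. The table names, index name, and hints below are illustrative only; the hints you actually need come from the plan the query used before the stats change:

```sql
-- Illustrative only: pin the pre-stats-change access path with hints.
select /*+ index(e emp_department_ix) use_nl(e d) */
       e.employee_id, d.department_name
  from employees e
  join departments d
    on d.department_id = e.department_id
 where d.department_name like 'Sa%';
```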
Submitted by dave on Mon, 04/12/2010 - 12:42
If you want to record copies of changed records on a target system, you can do this with Oracle GoldenGate without incurring much in the way of source-side resource drain.
Submitted by dave on Fri, 03/26/2010 - 16:42
I recently fielded a question about getting data out of Oracle quickly. Without much detail on the systems involved, here is my answer:
I need to migrate data from Oracle to MySQL quickly (in less than 1 day). What are my options?
Oracle does not supply an out-of-the-box unload utility.
Keep in mind that without comprehensive information about your environment (Oracle version? server platform? how much data? which datatypes?) everything here is YMMV, and you would want to give it a go on your own system to check performance and timing.
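The lowest-tech option is spooling CSV out of SQL*Plus and loading it on the MySQL side with LOAD DATA INFILE. A rough sketch (table and column names are illustrative):

```sql
-- unload.sql: spool one table out of Oracle as CSV (illustrative names).
set head off pages 0 feedback off termout off
set lines 32767 trimspool on
spool employees.csv
select employee_id||','||last_name||','||to_char(hire_date,'YYYY-MM-DD')
  from employees;
spool off
```

For serious volume, run several of these in parallel, each selecting a different key range of the same table.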
Submitted by dave on Fri, 02/19/2010 - 15:02
I keep coming across a construct in some legacy SQL that has been causing all kinds of performance issues for us. I guess you could call it using the IN condition with multi-column subqueries. I located the syntax for the IN condition here, but it doesn't really go into much detail about this construct.
Here is an example of the subquery (it returns the lowest salary and department_id for each department):
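Using the standard HR sample schema as a stand-in for the legacy code, the subquery and the multi-column IN condition wrapped around it look like this:

```sql
-- The subquery: lowest salary and department_id for each department.
select department_id, min(salary)
  from employees
 group by department_id;

-- The construct in question: a multi-column IN against that subquery,
-- finding the employee(s) earning the lowest salary in each department.
select last_name, department_id, salary
  from employees
 where (department_id, salary) in
       (select department_id, min(salary)
          from employees
         group by department_id);
```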
Submitted by dave on Thu, 02/18/2010 - 16:40
Submitted by dave on Fri, 01/22/2010 - 11:49
This script (run-dbv.sql) will generate dbv commands into a shell script and then execute the shell script.
set head off
set pages 0
set lines 200
set trimspool on
set feedback off
set define off
spool dbv.sh
select 'dbv file='||name||' blocksize='||block_size||
' logfile=FILE-'||file#||'.LOG' from v$datafile;
spool off
host chmod 755 dbv.sh
host ./dbv.sh
Output will be created as separate log files. You can run it and review the results like this:
$ sqlplus "/ as sysdba" @run-dbv.sql
$ grep -i corrupt FILE-*.LOG
Submitted by dave on Tue, 01/12/2010 - 15:06
If you want to run AutoTrace in Toad as a non-SYS/SYSTEM user, give your user the following select privileges:
grant select on v_$session to &traceuser.;
grant select on v_$sesstat to &traceuser.;
grant select on v_$statname to &traceuser.;
Alternately you can run $ORACLE_HOME/sqlplus/admin/plustrce.sql and then grant the PLUSTRACE role to the user that needs to run AutoTrace.
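That second route, sketched out (run as SYS; 'scott' stands in for whatever user needs AutoTrace):

```sql
-- Run as SYS: create the PLUSTRACE role, then grant it to the user.
@?/sqlplus/admin/plustrce.sql
grant plustrace to scott;
```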
Submitted by dave on Mon, 01/04/2010 - 15:25
I came across a situation where the profiler.sql supplied with Metalink Doc 243755.1 was giving some incomplete results. Because some of my PL/SQL code was executing very quickly, its timings were rounding down to 0.00 and not even showing up in the 'Top 10' section.
Throughout profiler.sql there are timing numbers (unit = seconds) that are rounded to 2 decimal places. Looking in the comments for profiler.sql I noticed the last update was in early 2007. Maybe Moore's law and CPU power have caught up with profiler.sql, but I would recommend increasing the precision of the script by doing the following:
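The pattern is illustrated below; the actual column expressions in profiler.sql differ, so treat this as a sketch of the change, not a patch. DBMS_PROFILER stores total_time in nanoseconds, so a unit that ran for under 0.005 seconds rounds to 0.00 at 2 decimal places but survives at 5:

```sql
-- Pattern only: widen the rounding so fast units don't vanish from the report.
select round(d.total_time / 1000000000, 2) as secs_2dp,  -- rounds to 0.00
       round(d.total_time / 1000000000, 5) as secs_5dp   -- still visible
  from plsql_profiler_data d;
```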
Submitted by dave on Thu, 12/31/2009 - 14:27
If you are experimenting with gathering system statistics it might be helpful to archive your current settings and any intermediate settings you come up with along the way.
There is a way to save stats to a table: create it with DBMS_STATS.CREATE_STAT_TABLE and gather into it with DBMS_STATS, but the format of that table is cryptic, and it is nice to have the descriptive parameter names tagging along with the data. (In a future post I will cover the format of the CREATE_STAT_TABLE table.)
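The stats-table route can be sketched like this; the table name and STATID are my placeholders:

```sql
-- Create a stats table and archive the current system statistics into it,
-- tagged with a statid so several snapshots can coexist side by side.
exec dbms_stats.create_stat_table(ownname => user, stattab => 'MY_STAT_TAB')
exec dbms_stats.export_system_stats(stattab => 'MY_STAT_TAB', statid => 'BASELINE', statown => user)

-- Later, put that snapshot back:
exec dbms_stats.import_system_stats(stattab => 'MY_STAT_TAB', statid => 'BASELINE', statown => user)
```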