Finding file counts
For Linux:
cd $ORACLE_BASE/admin
find . -maxdepth 2 -type d | while read dir; do
    count=$(find "$dir" -maxdepth 2 -iname '*.aud' | wc -l)
    echo "$count ; $dir"
done | grep adump | sort -n
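If you want to see the pipeline in action without an Oracle home, here is a sketch against a throwaway directory tree (all paths below are made up for the demo):

```shell
# Build a tiny stand-in for $ORACLE_BASE/admin (demo paths, not real ones)
mkdir -p demo_admin/db1/adump demo_admin/db2/adump
touch demo_admin/db1/adump/a.aud demo_admin/db1/adump/b.aud
touch demo_admin/db2/adump/c.aud
cd demo_admin
# Same count-per-directory pipeline as above
find . -maxdepth 2 -type d | while read dir; do
    count=$(find "$dir" -maxdepth 2 -iname '*.aud' | wc -l)
    echo "$count ; $dir"
done | grep adump | sort -n
# prints: 1 ; ./db2/adump
#         2 ; ./db1/adump
cd ..
```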
Getting around too many arguments:
This method scales to millions of files without hitting argument list limits, because the glob is expanded inside the shell loop rather than passed to an external command on a single command line. This example removes files that match *.aud.
for f in *.aud; do rm "$f"; done
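A quick way to convince yourself the loop scales: create a few hundred throwaway files (a stand-in for millions) and watch the loop remove them all with no argument-list error:

```shell
mkdir -p globdemo && cd globdemo
for i in $(seq 1 500); do touch "f$i.aud"; done
# The glob is expanded by the shell itself, so no external command
# ever receives all 500 names at once
for f in *.aud; do rm "$f"; done
ls | wc -l    # 0 files remain
cd ..
```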
You can leverage the power of the find command with command substitution. This produces a list of files that is fed to the 'do' block one by one. Give find a quoted pattern rather than a bare glob, so the shell does not expand *.aud itself:
for f in $(find . -name '*.aud' -mtime +60); do rm "$f"; done
Finding Large Files Recursively
find / -xdev -type f -size +100M
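To try it without scanning a real filesystem, you can fake a large file with a sparse file (file names are made up for the demo; truncate assumes GNU coreutils):

```shell
mkdir -p bigdemo
truncate -s 200M bigdemo/big.dat    # 200 MB sparse file, allocates no real disk space
truncate -s 10M  bigdemo/small.dat
find bigdemo -xdev -type f -size +100M    # matches only bigdemo/big.dat
```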
Delete files older than 60 days
Note: This method can fail with "Argument list too long" errors. The shell expands *.aud before find ever runs, so with enough files the expanded command line exceeds the system limit. Quote the pattern and let find do the matching:
find . -name "*.aud" -mtime +60 -exec rm {} \;
You can combine it with xargs to get around the argument list length limit:
find . -name "*.aud" -mtime +60 -print0 | xargs -0 rm
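If your find supports it (POSIX-2008 and GNU find do), "-exec rm {} +" batches arguments the same way xargs does, with no pipe needed. A throwaway demo, using GNU touch to back-date files:

```shell
mkdir -p auddemo
touch auddemo/old1.aud auddemo/old2.aud auddemo/new.aud
touch -d "90 days ago" auddemo/old1.aud auddemo/old2.aud   # GNU touch date syntax
find auddemo -name "*.aud" -mtime +60 -exec rm {} +
ls auddemo    # only new.aud remains
```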
Grepping Files using Find
find . -type f -name "*.log" -exec grep -i 'error' {} \;
Start the search at . rather than a * glob so the shell does not expand the file list itself (and hidden directories are included).
Unlocking Accounts on OEL
pam_tally2 --user oracle --reset
Killing multiple processes
ps -ef | grep -i searchterm | grep -v grep | awk '{print "kill -9 "$2}'
This prints one kill command per matching process so you can review them first; pipe the output to sh to actually execute them.
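Here is a safe way to test the pipeline against a throwaway process before pointing it at anything real; piping the generated commands to sh is what executes them (the sleep target and the two-piece pattern are just demo scaffolding):

```shell
sleep 300 &
TARGET=$!
# Build the search term in two pieces so this script's own command line
# never matches it -- the same job "grep -v grep" does for the grep process
PATTERN='sleep 3'00
ps -ef | grep -i "$PATTERN" | grep -v grep | awk '{print "kill -9 "$2}' | sh
```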
Workaround for "Argument list is too long" shell errors
for file in *.aud
do
    find "$file" -mtime +7 -exec rm {} \;
done
A quickie script that can be run by Cron to take a StatsPack snapshot:
snap.sh:
#!/bin/ksh
export ORACLE_HOME=/u01/app/oracle/product/8.1.7
export ORACLE_SID=dbsid
export PATH=/u01/app/oracle/product/8.1.7/bin
sqlplus -s /nolog <<EOF
connect perfstat/perfstat
exec statspack.snap();
exit
EOF
The following set of scripts will run a simple query against a database and add the results to a file. To keep the file from getting too long, when a new day starts a new file will be created to store the results.
Query.sql:
SET FEEDBACK OFF
SET HEADING OFF
SET PAGESIZE 0
SET SERVEROUTPUT ON
SELECT COUNT(*) FROM v$SESSION;
EXIT
runQuery.sh:
#!/bin/sh
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
TODAY=$(date +%Y-%m-%d)
sqlplus -S perfstat/perfstat @query.sql >> data$TODAY.txt
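The date-stamped file name is what rolls the output to a new file each day. The rotation logic can be sketched on its own, with echo standing in for the sqlplus call:

```shell
TODAY=$(date +%Y-%m-%d)
echo "42" >> "data$TODAY.txt"   # appends all day, then starts a new file tomorrow
```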
Now run "crontab -e" and add the following line:
0,15,30,45 * * * * /directory/runQuery.sh
For more information on setting up Cron jobs see Scheduling a Cron Job.
GNU Screen is a helpful Linux utility that functions as a virtual terminal manager. With it you can manage multiple terminal sessions and detach from and reconnect to them as your workflow requires. The virtual terminals continue to run and insulate you from unexpected disconnections and logouts.
Install
# Debian/Ubuntu
sudo apt-get install screen

# RHEL/Fedora/OEL
sudo yum install screen
Starting
screen
If you are sharing a machine with multiple other users you can provide your own session name, which will help with tracking and sorting through multiple sessions. Or if you are handing off responsibility to someone else you can tag it with a task name or something meaningful.
screen -S mytask
Detaching
Screen is good insurance against a shaky connection. If your connection is interrupted, Screen detaches the session but keeps running. You can resume by following the commands in the Resuming Screen Sessions section below.
If you want to force a detach from your screen session hit "Ctrl-a" followed by hitting "d".
Using Virtual Terminals
After you start Screen you will be logged into a single virtual terminal. This single terminal is useful as-is to protect you from connection errors.
While connected to Screen, these special commands are initiated by pressing "Ctrl-a" (Control and a at the same time) followed by another letter or number. For example, "Ctrl-a" c creates a new virtual terminal, "Ctrl-a" n and "Ctrl-a" p cycle to the next and previous terminals, and "Ctrl-a" d detaches the session.
This is only a partial listing of commands. Screen is very powerful; check out the complete documentation here or access the online help via "Ctrl-a" ?.
Resuming Screen Sessions
The real power of Screen is the ability to resume sessions. If you only have one suspended session simply issue this command to resume it:
screen -r
If you have multiple sessions available they will be displayed
[oracle@mymachine1 ~]$ screen -r
There are several suitable screens on:
        6809.mytask     (Attached)
        26514.david     (Detached)
Type "screen [-d] -r [pid.]tty.host" to resume one of them.
[oracle@mymachine1 ~]$
If the screen session is listed as Detached then you can resume it directly
screen -r 26514
If the screen is listed as Attached then you must detach/reattach it to gain access.
screen -d -r 6809
Saving scrollback buffer to a file
"Ctrl-a" : - Then enter hardcopy -h <filename> so save a file to the same machine where the screen sessions are originating from.
Note that this file will be saved on the machine where the screen process is running, not on the local machine you may be logged into via screen!
Terminating your screen session
When you exit your final virtual terminal, the screen session closes and the following is displayed:
[screen is terminating]
[oracle@mymachine1 ~]$
I have a data collection system that I run against all DBs in my environment nightly. I inherited a script that would just start at the top of the list and SQLPlus into them one by one to collect the data. Well the scope of the data I needed to collect expanded and put me in the 0.75-1 minute per database range... and I now have 600+ databases I need to talk to nightly. It was taking over 5 hours a night to collect this info... I wanted it to take an hour or less.
Yes there are other methods to collect this data. My main intent was to have no configuration and zero footprint on target databases. Log in, collect and store data, log out. No fuss, no muss, no distributed code, just get the data and run.
I was about to roll my own script to run multiple SQLPlus sessions in parallel when I came across Parallel Processing Shell Script (PPSS) on Google Code. I hate reinventing the wheel, and after messing with PPSS for a few minutes I realized it covered all the territory I needed and did a better, more robust job than I could have in the time I had to deal with my issue.
What is PPSS?
From the project page:
PPSS is a Bash shell script that executes commands, scripts or programs in parallel. It is designed to make full use of current multi-core CPUs. It will detect the number of available CPUs and start a separate job for each CPU core. It will also use hyper threading by default.
PPSS can be run on multiple hosts, processing a single group of items, like a cluster.
Well I didn't need to go super massively parallel across multiple machines, but running 6 or 8 SQLPLUS sessions at a time would get me below my 60 minute goal so that's where I started.
1) Download and set up ppss
The Google Code project and download page is here.
2) Create or generate a list of database names you want to visit
I leveraged TNS via LDAP for connections to the database. So before I kick off ppss my shell script creates a text file with 1 TNS name per line. An example of how to do this is here.
dblist.txt:
MYDB1
MYDB2
MYDB3
MYDB4
... and so on.
If you have OEM you may be able to generate a list of DB names from the OEM repository. More info on these views is here. If you have the full connection string to each database available, you could use that as well. We are just going to tack the contents of each line onto a SQLPLUS command line, so use whatever is easiest for you.
3) Create a shell script that will handle 1 execution of the process you want to parallelize
Like when you are creating a script to run with cron, it is best to have the shell script set up all the environment you may possibly need. In this case it is PATH, ORACLE_SID, and we leverage oraenv to set the rest of the library paths, etc.
Keep in mind the script must not interfere with other instances of itself that are running. For example, if your script writes to LOG.TXT, what happens when 8 copies of the script are running and writing to LOG.TXT at once? Consider appending something unique (like the DB name in this case) to the output file name to make it unique for each run.
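A minimal sketch of that idea (the variable and file names here are illustrative, not part of PPSS): derive the log name from the argument so eight parallel copies never write to the same file.

```shell
#!/bin/sh
CONNDB=${1:-MYDB1}             # TNS name passed in by ppss (MYDB1 is a demo default)
LOG="getdata_${CONNDB}.log"    # one log per database instead of a shared LOG.TXT
echo "collected data for $CONNDB" >> "$LOG"
```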
getdata.sh
# Pass in the TNS name so we can use it to connect
CONNDB=$1; export CONNDB
PATH=$PATH:/usr/local/bin:/u01/app/oracle/product/11.1.0.7/bin; export PATH
ORACLE_SID=orcl; export ORACLE_SID
ORAENV_ASK=NO; export ORAENV_ASK
. oraenv
$ORACLE_HOME/bin/sqlplus -s /nolog << EOF
set echo on
connect util/util@$CONNDB
insert into db_link_@repolink select * from dba_db_links;
exit sql.sqlcode
EOF
exit
4) Prepare to run in parallel
This will fire up 1 instance of the ppss shell script on your local machine, using dblist.txt as input, getdata.sh as the script to run, and spinning up 6 processes.
ppss -f dblist.txt -c 'getdata.sh "$ITEM"' -p 6 -l ppss_log.txt
A demonstration from the PPSS project folks is available here.
Debugging
PPSS writes extensive info to its logs. It is sometimes hard to navigate all of the logs it generates, but if you have a failure you should be able to find out why by looking at the PPSS output directory.
Here is a script that will examine all of the files in the ppss_log directory for errors. It prints out the DB that was trying to be contacted and also the relevant ORA- errors that were thrown:
#!/bin/bash
LOGFILES=./ppss_dir/job_log/*
ERRNUM=1
for file in $LOGFILES
do
    # Determine if an ORA- error occurred in this log file
    NUMERRORS=`grep ORA- $file | wc -l`
    if [ $NUMERRORS -gt 0 ]
    then
        echo Error Number : $ERRNUM
        ERRNUM=$[ERRNUM+1]
        grep Item: $file
        grep ORA- $file | grep -v 06512
        echo ----------------------------------------------------------------------
    fi
done
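To see what the scan prints without a real PPSS run, you can seed the log directory with a fixture (the file names and the ORA- message below are made up for the demo):

```shell
mkdir -p ppss_dir/job_log
printf 'Item: MYDB1\nORA-00942: table or view does not exist\n' > ppss_dir/job_log/MYDB1.log
printf 'Item: MYDB2\nno errors here\n' > ppss_dir/job_log/MYDB2.log
# Running the scan script above now reports an error for MYDB1 only
```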
The following script runs a simple query and appends the result to a file.
Query.sql:
SET FEEDBACK OFF
SET HEADING OFF
SET PAGESIZE 0
SET SERVEROUTPUT ON
SELECT COUNT(*) FROM v$SESSION;
EXIT
runQuery.sh:
#!/bin/sh
if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi
sqlplus -S perfstat/perfstat @query.sql > output.txt
Note that if you want to schedule a script with Cron that executes SQLPLUS, you may have to make the script aware of the user's environment variables by sourcing the profile, as the ~/.bashrc lines above do. Cron runs jobs with a minimal environment, so settings like ORACLE_HOME and PATH are not picked up automatically.
To run a script with Cron automatically:
* * * * * /path/for/CommandToBeExecuted
- - - - -
| | | | |
| | | | +----- day of week (0=Sunday, 1=Monday, ... ,6=Saturday)
| | | +------- month (1=Jan, ..., 12=December)
| | +--------- day of month (1 - 31)
| +----------- hour (0 - 23)
+------------- min (0 - 59)
Example 1: Run example1.sh every day at 10:30am and 10:30pm:
30 10,22 * * * /home/example1.sh
Example 2: Run example2.sh Monday-Friday at 7:00pm:
0 19 * * 1-5 /home/example2.sh