/* In Oracle 10g we can easily see which indexes are used, when they are used, and the context in which they are used. Here is a simple AWR query to plot index usage: */
Connect as the SYS user and use this query to check how many times each index in a schema is used.
Method 1: (Oracle 10g)
select p.object_owner owners, p.object_name obj_name, p.operation operation, p.options options, count(1) idx_usg_cnt
  from dba_hist_sql_plan p, dba_hist_sqlstat s
 where p.object_owner = 'SCHEMA_NAME'
   and p.operation like '%INDEX%'
   and p.sql_id = s.sql_id
 group by p.object_owner, p.object_name, p.operation, p.options
 order by 1, 2, 3;
To list all the indexes in a schema, use the following query.
select t.owner, t.table_name, t.index_name, t.last_analyzed
  from dba_indexes t
 where t.owner = 'SCHEMA_NAME';
/* -- Here is another script for tracking unused indexes and showing the invocation count (how many times and when they are called/used) of all indexes.
-- Best of all, this script shows the columns referenced for multi-column indexes: */
select to_char(sn.begin_interval_time,'yy-mm-dd hh24') begin_int_time,
       p.search_columns search_col,
       count(*) invocation_cnt
  from dba_hist_snapshot sn, dba_hist_sql_plan p, dba_hist_sqlstat st
 where st.sql_id = p.sql_id
   and sn.snap_id = st.snap_id
   and lower(p.object_name) like lower('%Idx_Name%') /* or lower('%&idxname%') */
 group by begin_interval_time, search_columns;
With these queries we can identify unused indexes and drop them.
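As a rough sketch combining the two views above (the 'SCHEMA_NAME' literal is a placeholder, and AWR retention limits how far back this can see), the following lists indexes that never appear in any captured plan:

-- Candidate unused indexes: owned by the schema, never referenced in an AWR plan
select i.index_name
  from dba_indexes i
 where i.owner = 'SCHEMA_NAME'
   and not exists (select 1
                     from dba_hist_sql_plan p
                    where p.object_owner = i.owner
                      and p.object_name  = i.index_name);

Treat the result as candidates only; an index may still back a constraint or serve infrequent jobs that fall outside the AWR window.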
Method 2: (Oracle 9i)
Connect to the user and enable monitoring:
alter index index_name monitoring usage;
Then use the query below to check whether the index has been used:
select index_name, monitoring, used from v$object_usage;
Here is a simple SQL*Plus script to track all index usage in an Oracle schema:
set pages 999;
set heading off;
spool Idx_monitor.sql
select 'alter index '||owner||'.'||index_name||' monitoring usage;'
from dba_indexes
where owner = 'SCHEMA_NAME';
spool off;
@Idx_monitor.sql
Once idx_monitor.sql completes, you can check index usage with:
select index_name, monitoring, used from v$object_usage;
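To narrow the output to indexes that have not been used since monitoring began, a minimal sketch against the same view:

-- Indexes monitored but never used since monitoring started
select index_name, table_name, monitoring, used, start_monitoring
  from v$object_usage
 where used = 'NO';

Note that v$object_usage only shows indexes owned by the connected user, so run it from the schema being monitored.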
Sunday, November 22, 2009
Lock Identification
/* This script shows all users who are waiting for locked resources */
SELECT b.username, b.serial#, c.sid, c.owner object_owner, c.object, d.id1, a.sql_text SQL
  FROM v$sqltext a, v$session b, v$access c, v$lock d
 WHERE a.address = b.sql_address
   AND a.hash_value = b.sql_hash_value
   AND b.sid = c.sid
   AND b.lockwait = d.kaddr
   AND c.owner NOT IN ('SYS','SYSTEM');
/* This script shows the users who are holding the locks the sessions above are waiting on, along with the SQL being executed by each locking user. NOTE: If the locking user is no longer executing any SQL, 0 rows will be returned. The user, however, is still blocking the sessions above because he/she has not yet committed the transaction -- maybe they are away from their desk? You should call them and ask them to either commit or rollback. */
SELECT x.serial#, x.sid, x.username, y.id1, z.sql_text SQL
  FROM v$session x, v$lock y, v$sqltext z
 WHERE y.id1 IN (SELECT DISTINCT b.id1 FROM v$session a, v$lock b WHERE b.kaddr = a.lockwait)
   AND x.sid = y.sid
   AND z.hash_value = x.sql_hash_value
   AND y.request = 0;
/* This script is identical to the above script EXCEPT that it does not return the SQL text being executed by the locking user. You can use it to quickly identify the locking user's serial#, sid and username if you get to a point where you will need to kill the user's session. */
SELECT x.serial#, x.sid, x.username, y.id1
  FROM v$session x, v$lock y
 WHERE y.id1 IN (SELECT DISTINCT b.id1 FROM v$session a, v$lock b WHERE b.kaddr = a.lockwait)
   AND x.sid = y.sid
   AND y.request = 0;
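If you do reach the point of killing the blocking session, the serial# and sid returned above plug straight into ALTER SYSTEM; the values below are placeholders:

-- Kill the blocking session; the string is 'sid,serial#' from the query above
ALTER SYSTEM KILL SESSION '131,2382';

Killing the session rolls back its open transaction, so ask the user to commit or roll back first if possible.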
To Check the Database Version
Here is a query to check the database version.
Connect as sys/pwd@dbname as sysdba
select product, version, status from product_component_version;
Sample Output
PRODUCT VERSION STATUS
NLSRTL 10.2.0.1.0 Production
Oracle Database 10g Enterprise Edition 10.2.0.1.0 Prod
PL/SQL 10.2.0.1.0 Production
TNS for Linux: 10.2.0.1.0 Production
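A common alternative, if you just need the banner lines, is v$version:

-- One banner row per component, including the exact patch-set level
select banner from v$version;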
PL/SQL Last Compiled Date/Time
To check when a PL/SQL procedure was last compiled:
Connect to the schema that owns it and run a query such as:
Eg:
select to_char(LAST_DDL_TIME,'YYYY-MM-DD HH24:MI:SS') from user_objects where object_name = upper('dcs_dev');
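To survey all PL/SQL objects in the schema at once, a variation on the same query (note that LAST_DDL_TIME is also bumped by grants and other DDL, not only recompilation):

-- Most recently changed PL/SQL objects first, with validity status
select object_name, object_type, status,
       to_char(last_ddl_time,'YYYY-MM-DD HH24:MI:SS') last_ddl
  from user_objects
 where object_type in ('PROCEDURE','FUNCTION','PACKAGE','PACKAGE BODY','TRIGGER')
 order by last_ddl_time desc;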
Friday, November 20, 2009
High-availability cluster
The most common size for an HA cluster is a two-node cluster, since that's the minimum required to provide redundancy, but many clusters consist of many more, sometimes dozens of nodes. Such configurations can sometimes be categorized into one of the following models:
* Active/Active — Traffic intended for the failed node is either passed onto an existing node or load balanced across the remaining nodes. This is usually only possible when the nodes utilize a homogeneous software configuration.
* Active/Passive — Provides a fully redundant instance of each node, which is only brought online when its associated primary node fails. This configuration typically requires the most amount of extra hardware.
* N+1 — Provides a single extra node that is brought online to take over the role of the node that has failed. In the case of heterogeneous software configuration on each primary node, the extra node must be universally capable of assuming any of the roles of the primary nodes it is responsible for. This normally refers to clusters which have multiple services running simultaneously; in the single service case, this degenerates to Active/Passive.
* N+M — In cases where a single cluster is managing many services, having only one dedicated failover node may not offer sufficient redundancy. In such cases, more than one (M) standby servers are included and available. The number of standby servers is a tradeoff between cost and reliability requirements.
* N-to-1 — Allows the failover standby node to become the active one temporarily, until the original node can be restored or brought back online, at which point the services or instances must be failed-back to it in order to restore High Availability.
* N-to-N — A combination of Active/Active and N+M clusters, N to N clusters redistribute the services or instances from the failed node among the remaining active nodes, thus eliminating (as with Active/Active) the need for a 'standby' node, but introducing a need for extra capacity on all active nodes.
Thursday, November 19, 2009
Database Replay - Ora 11g
"Simulating production load is not possible": you may have heard these words.
In one project, management has wanted for the last two years to migrate from a UNIX system to a Linux (RAC) system, but they are still testing because they are not sure whether the Linux boxes will be able to handle the load. They have put a lot of effort and time into load testing, functional testing, and so on, but still have not gained enough confidence.
Using this 11g feature, they can gain that confidence, migrate to Linux, and know in advance how their system will behave after the migration/upgrade.
As per the datasheet on OTN:
Database Replay workload capture of external clients is performed at the database server level. Therefore, Database Replay can be used to assess the impact of any system change below the database tier, such as:
* Database upgrades, patches, parameter, schema changes, etc.
* Configuration changes such as conversion from a single instance to RAC etc.
* Storage, network, interconnect changes
* Operating system, hardware migrations, patches, upgrades, parameter changes
DB Replay does this by capturing a workload on the production system with negligible performance overhead (my observation is 2-5% more CPU usage) and replaying it on a test system with the exact timing, concurrency, and transaction characteristics of the original workload. This makes a complete assessment of the change's impact possible, including undesired results such as new contention points or performance regressions. Extensive analysis and reporting (AWR, ADDM, and DB Replay reports) is provided to help identify potential problems, such as new errors encountered and performance divergences. The ability to accurately capture the production workload yields significant cost and time savings, since it eliminates the need to develop simulation workloads or scripts. As a result, realistic testing of even complex applications, which previously took several months with load-simulation tools/scripts, can now be accomplished in at most a few days with Database Replay and with minimal effort. Businesses can thus incur much lower costs, have a high degree of confidence in the overall success of the system change, and significantly reduce production deployment risk.
Steps for Database Replay
1. Workload Capture
Database calls are tracked and stored in binary files, called capture files, on the file system. These files contain all the information about each call needed for replay, such as SQL text, bind values, wall clock time, SCN, etc.
1) Back up the production database #
2) Add/remove filters (optional)
By default, all user sessions are recorded during workload capture. You can use workload filters to specify which user sessions to include in or exclude from the workload. Inclusion filters enable you to specify user sessions that will be captured in the workload. This is useful if you want to capture only a subset of the database workload.
For example, suppose we don't want to capture the load for the SCOTT user:
BEGIN
  DBMS_WORKLOAD_CAPTURE.ADD_FILTER (
    fname      => 'user_scott',
    fattribute => 'USER',
    fvalue     => 'SCOTT');
END;
/
Here the filter name is "user_scott" (a user-defined name).
3) Create a directory, making sure enough space is available:
CREATE OR REPLACE DIRECTORY db_replay_dir
AS '/u04/oraout/test/db-replay-capture';
4) Capture the workload
BEGIN
  DBMS_WORKLOAD_CAPTURE.start_capture (
    name     => 'capture_testing',
    dir      => 'DB_REPLAY_DIR',
    duration => NULL);
END;
/
duration => NULL means the capture runs until we stop it with the manual command shown below. DURATION is an optional input specifying the capture duration in seconds; the default is NULL.
5) Finish the capture
BEGIN
  DBMS_WORKLOAD_CAPTURE.finish_capture;
END;
/
# Take a backup of production before the load capture, so we can restore the database on the test environment and run the replay from the same SCN, minimizing data divergence.
Note, as per the Oracle datasheet:
A workload captured on Oracle Database release 10.2.0.4 and higher can also be replayed on an Oracle Database 11g release. So I think this simply means the new 10.2.0.4 patch set will support the capture process. Does that mean the current patch set (10.2.0.3) does not support load capture?
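To confirm the capture completed cleanly before copying the files, a quick check against the capture history view (assuming 11g's dba_workload_captures):

-- Capture history: STATUS shows IN PROGRESS or COMPLETED
select id, name, status,
       to_char(start_time,'YYYY-MM-DD HH24:MI') start_time,
       to_char(end_time,'YYYY-MM-DD HH24:MI') end_time
  from dba_workload_captures;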
2. Workload Processing
Once the workload has been captured, the information in the capture files has to be processed, preferably on the test system, because processing is a very resource-intensive job. This step transforms the captured data and creates all the metadata needed to replay the workload.
exec DBMS_WORKLOAD_REPLAY.process_capture('DB_REPLAY_DIR');
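Optionally, the AWR snapshots covering the capture window can be exported alongside the capture files for later comparison on the test system; a sketch assuming the capture id reported in dba_workload_captures is 1:

-- Export AWR data for the capture period into the capture directory
exec DBMS_WORKLOAD_CAPTURE.export_awr(capture_id => 1);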
3. Workload Replay
1) Restore the database backup taken in step one to the test system and start the database.
2) Initialize
BEGIN
  DBMS_WORKLOAD_REPLAY.initialize_replay (
    replay_name => 'TEST_REPLAY',
    replay_dir  => 'DB_REPLAY_DIR');
END;
/
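If the test system is reached through a different connect string than production, captured connections can be remapped after initialize and before prepare; a sketch with placeholder values:

BEGIN
  -- connection_id comes from dba_workload_connection_map; the connect string is a placeholder
  DBMS_WORKLOAD_REPLAY.remap_connection (
    connection_id     => 1,
    replay_connection => 'test-server:1521/testdb');
END;
/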
3) Prepare
exec DBMS_WORKLOAD_REPLAY.prepare_replay(synchronization => TRUE)
4) Start the replay clients. Run wrc in calibrate mode first to see how many clients are recommended:
$ wrc mode=calibrate replaydir=/u03/oradata/test/db-replay-capture
Workload Replay Client: Release 11.1.0.6.0 - Production on Wed Dec 26 00:31:41 2007
Copyright (c) 1982, 2007, Oracle. All rights reserved.
Report for Workload in: /u03/oradata/test/db-replay-capture
-----------------------
Recommendation:
Consider using at least 1 clients divided among 1 CPU(s).
Workload Characteristics:
- max concurrency: 1 sessions
- total number of sessions: 7
Assumptions:
- 1 client process per 50 concurrent sessions
- 4 client process per CPU
- think time scale = 100
- connect time scale = 100
- synchronization = TRUE
$ wrc system/pass mode=replay replaydir=/u03/oradata/test/db-replay-capture
Workload Replay Client: Release 11.1.0.6.0 - Production on Wed Dec 26 00:31:52 2007
Copyright (c) 1982, 2007, Oracle. All rights reserved.
Wait for the replay to start (00:31:52)
5) Start Replay
BEGIN
DBMS_WORKLOAD_REPLAY.start_replay;
END;
/
$ wrc system/pass mode=replay replaydir=/u03/oradata/test/db-replay-capture
Workload Replay Client: Release 11.1.0.6.0 - Production on Wed Dec 26 00:31:52 2007
Copyright (c) 1982, 2007, Oracle. All rights reserved.
Wait for the replay to start (00:31:52)
Replay started (00:33:32)
Replay finished (00:42:52)
4. Analysis and Reporting
Generate AWR, ADDM, and DB Replay reports and compare them with the data gathered on production for the same period in which the load was captured. For the Database Replay report, run the following:
SQL> COLUMN name FORMAT A20
SQL> SELECT id, name FROM dba_workload_replays;
ID NAME
---------- --------------------
1 TEST_REPLAY
DECLARE
  v_report CLOB;
BEGIN
  v_report := DBMS_WORKLOAD_REPLAY.report(
    replay_id => 1,
    format    => DBMS_WORKLOAD_CAPTURE.TYPE_HTML);
  dbms_output.put_line(v_report);
END;
/
Friday, November 13, 2009
Listener
A problem arises when attempting to rename the listener.log file while the TNS listener process is running: the Oracle TNS listener holds an open handle to the file. Under UNIX, you CAN remove the file, but Oracle will not re-create it when it next attempts to write to it. The TNS listener would need to be stopped and restarted in order for it to create a new listener.log file.
Well, here is a solution for renaming (or removing) the listener.log file without having to stop and start the TNS listener process under either Windows or UNIX:
Windows
C:\> cd \oracle\ora92\network\log
C:\oracle\ora92\network\log> lsnrctl set log_status off
C:\oracle\ora92\network\log> rename listener.log listener.old
C:\oracle\ora92\network\log> lsnrctl set log_status on
UNIX
% cd $ORACLE_HOME/network/log
% lsnrctl set log_status off
% mv listener.log listener.old
% lsnrctl set log_status on
On the other hand, even if you rename the listener.log file directly, the listener will not create a new log file the way the database does for the alert log. Instead, it keeps writing to the renamed file.
For example, if the listener is running and we issue:
% cd $ORACLE_HOME/network/log
% mv listener.log listenerOld.log
After the move, the listener continues writing its log entries to listenerOld.log. To make it create a new log file, we must use the following commands:
% cd $ORACLE_HOME/network/log
% lsnrctl set log_status off
% mv listener.log listener.old
% lsnrctl set log_status on
It is advisable to move/rename the listener.log file at least once a month; we can create a script to do this.
Shell Script for moving the Listener Log file
Save the following contents in a file, e.g. lsnrlogmv.sh:
#!/bin/sh
export ORACLE_SID=opubp
export ORACLE_HOME=/opt/oracle/u01/oracle/product/10.2.0/db_1
$ORACLE_HOME/bin/lsnrctl set log_status off
mv $ORACLE_HOME/network/log/listener.log $ORACLE_HOME/network/log/listener_$(date +"%d-%m-%Y").log
$ORACLE_HOME/bin/lsnrctl set log_status on
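To keep the log directory from growing without bound, the same script can also compress renamed logs older than, say, 30 days (the retention period here is an assumption):

# Compress rotated listener logs older than 30 days
find $ORACLE_HOME/network/log -name 'listener_*.log' -mtime +30 -exec gzip {} \;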
Edit the crontab with crontab -e and add:
00 15 * * 2 /opt/oracle/lsnrlogmv.sh
# mi hh dd mm dow command
Hope this helps.