Contents
- Introduction
- Requirements
- Copying V$LOGMNR_CONTENTS
- Scenario 1: Extract LogMiner Dictionary to a Flat File
- Scenario 2: Extract LogMiner Dictionary to the Redo Log Files
- Scenario 3: Reading the LogMiner Dictionary From the Current Online Catalog
- Scenario 4: Source Database (10g) — Remote Mining Database (11g)
- Scenario 5: Using the CONTINUOUS_MINE Option
- About the Author
Introduction
LogMiner is a built-in database tool introduced in Oracle8i that provides a relational interface for users to query redo records found in online and archived redo log files. LogMiner is most often used for auditing purposes, data analysis, or recovering data from a user error. This tutorial provides an example-based approach to using LogMiner in a number of different scenarios. The examples are based on Oracle Database 11g Release 2 (11.2.0.3.0) running on the Linux platform.
Consult the following article for an overview of the LogMiner tool and an explanation of its components.
- Understanding LogMiner
LogMiner automatically builds its own internal dictionary from the LogMiner dictionary that you specify when you start LogMiner (either an online catalog, a dictionary in the redo log files, or a flat file). This internal dictionary provides a snapshot of the database objects and their definitions. If your LogMiner dictionary is in the redo log files or is a flat file, you can use the DDL_DICT_TRACKING option to the PL/SQL DBMS_LOGMNR.START_LOGMNR procedure to direct LogMiner to track data definition language (DDL) statements. DDL tracking enables LogMiner to successfully track structural changes made to a database object, such as adding or dropping columns from a table. It is important to understand that the LogMiner internal dictionary is not the same as the LogMiner dictionary contained in a flat file, in redo log files, or in the online catalog. LogMiner does update its internal dictionary, but it does not update the dictionary that is contained in a flat file, in redo log files, or in the online catalog.
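For example, a session that mines a dictionary stored in the redo log files can have DDL tracking enabled when it is started. This is a minimal sketch; the same option combination appears in the Scenario 2 session later in this guide:

BEGIN
  DBMS_LOGMNR.START_LOGMNR (
    options => DBMS_LOGMNR.DICT_FROM_REDO_LOGS +
               DBMS_LOGMNR.DDL_DICT_TRACKING
  );
END;
/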
LogMiner Interfaces
The LogMiner utility can be accessed through SQL statements (command-line) or through the Oracle LogMiner Viewer graphical user interface which is part of Oracle Enterprise Manager. The examples presented in this guide will all use the command-line interface.

Requirements
The following are requirements for the source and mining database, the data dictionary, and the redo log files that LogMiner will mine in the examples presented in this guide.

Source and Mining Database

- The mining database can be the same as, or completely separate from, the source database.
- Both the source database and the mining database must be running on the same hardware platform.
- The mining database must run the same version or a later version of the Oracle Database software as the source database.
- The mining database must use the same character set (or a superset of the character set) used by the source database.

LogMiner Dictionary

- The dictionary must be produced by the same source database that generates the redo log files that LogMiner will analyze.

All Redo Log Files

- Must be produced by the same source database.
- Must be associated with the same database RESETLOGS SCN.
- Must be from a release 8.0 or later Oracle Database. However, several of the LogMiner features introduced as of release 9.0.1 work only with redo log files produced on an Oracle9i or later database.
Archivelog Mode
Archivelog mode must be enabled in order to generate usable redo log files.

SQL> select log_mode from v$database;

LOG_MODE
------------
NOARCHIVELOG

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup mount
ORACLE instance started.

Total System Global Area 1653518336 bytes
Fixed Size                  2228904 bytes
Variable Size            1291849048 bytes
Database Buffers          352321536 bytes
Redo Buffers                7118848 bytes
Database mounted.

SQL> alter database archivelog;

Database altered.

SQL> alter database open;

Database altered.

SQL> select log_mode from v$database;

LOG_MODE
------------
ARCHIVELOG
Security
A new role named LOGMNR_ADMIN will be created and assigned the appropriate privileges for LogMiner analysis. This role will be assigned to a new user named MINER that will be used to perform the LogMiner examples presented in this guide.

SQL> create role logmnr_admin;

Role created.

SQL> grant create session to logmnr_admin;

Grant succeeded.

SQL> grant select on v_$logmnr_contents to logmnr_admin;

Grant succeeded.

SQL> grant select on v_$logmnr_dictionary to logmnr_admin;

Grant succeeded.

SQL> grant select on v_$logmnr_parameters to logmnr_admin;

Grant succeeded.

SQL> grant select on v_$logmnr_logs to logmnr_admin;

Grant succeeded.

SQL> grant select on v_$archived_log to logmnr_admin;

Grant succeeded.

SQL> grant execute_catalog_role, select any dictionary, select any transaction,
     select any table, create tablespace, drop tablespace to logmnr_admin;

Grant succeeded.

SQL> create user miner identified by miner;

User created.

SQL> grant logmnr_admin to miner;

Grant succeeded.

SQL> alter user miner quota unlimited on users;

User altered.
Supplemental Logging
LogMiner is a redo-based application and, as such, requires that at least minimal supplemental logging be enabled on the source database. Oracle does not enable any supplemental logging by default. After verifying supplemental logging is enabled, force a log switch so that the new supplemental log configuration begins archiving the additional column data to the redo logs.
SQL> alter database add supplemental log data;

Database altered.

SQL> select supplemental_log_data_min from v$database;

SUPPLEMENTAL_LOG_DATA_MIN
--------------------------
YES

SQL> alter system switch logfile;

System altered.
LogMiner Tablespace
This optional step allows you to re-create all LogMiner tables in an alternative tablespace other than the default SYSAUX tablespace.

CREATE TABLESPACE logmnrts DATAFILE SIZE 25M
  AUTOEXTEND ON NEXT 25M MAXSIZE UNLIMITED;

Tablespace created.

BEGIN
  DBMS_LOGMNR_D.SET_TABLESPACE (
    new_tablespace => 'logmnrts'
  );
END;
/

PL/SQL procedure successfully completed.
Copying V$LOGMNR_CONTENTS
After preparing and starting a LogMiner session, the redo entries mined from the redo log files by LogMiner are made available through the session-private view V$LOGMNR_CONTENTS. The SQL interface can be used to query the SQL statements (and their UNDO) executed on the database at any point in time for which you have redo log files. When a SELECT statement is executed against the V$LOGMNR_CONTENTS view, the redo log files are read sequentially. Translated records from the redo log files are returned as rows in the V$LOGMNR_CONTENTS view. This continues until either the filter criteria specified at startup (EndTime or endScn) are met or the end of the redo log file(s) is reached. The records available through the V$LOGMNR_CONTENTS view are accessible only to the session that started the LogMiner session, and only until DBMS_LOGMNR.END_LOGMNR is called.

The examples in this guide demonstrate a number of practical scenarios but are kept small in order to explain concepts. In many cases, however, the number of redo entries to analyze can be significant, and performing a LogMiner analysis of many redo entries through the V$LOGMNR_CONTENTS view can be expensive given the way the redo log files need to be read.
A good practice is to copy the LogMiner contents from V$LOGMNR_CONTENTS to a user table. Accessing a user table is faster, and the table can also be indexed for better query performance. In addition, the LogMiner contents will be persisted when the LogMiner session ends.
...

BEGIN
  DBMS_LOGMNR.START_LOGMNR (
    options => DBMS_LOGMNR.DICT_FROM_REDO_LOGS
  );
END;
/

PL/SQL procedure successfully completed.

CREATE TABLE miner.logmnr_contents_20121018 AS
  SELECT * FROM v$logmnr_contents;

Table created.

CREATE INDEX miner.logmnr_contents_20121018_idx1
  ON miner.logmnr_contents_20121018(username, operation, seg_owner);

Index created.

BEGIN
  DBMS_LOGMNR.END_LOGMNR();
END;
/

PL/SQL procedure successfully completed.
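Once persisted, the copy can be queried (and re-queried) long after the LogMiner session has ended. A minimal sketch against the table created above, using the same predicates as the scenarios that follow:

SELECT username
     , operation
     , sql_redo
     , sql_undo
     , scn
FROM   miner.logmnr_contents_20121018
WHERE  username  = 'SCOTT'
AND    operation = 'INSERT'
AND    seg_owner = 'HR';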
Scenario 1: Extract LogMiner Dictionary to a Flat File
Prior to Oracle9i, extracting the LogMiner dictionary to a flat file was the only method available to create the dictionary. This method is maintained for backward compatibility with previous releases. In addition, this method does not guarantee transactional consistency.

Requirements
Review the requirements section at the beginning of this guide to prepare the source and mining database, the data dictionary, and the redo log files that LogMiner will mine in this example. To extract the dictionary to a flat file, call DBMS_LOGMNR_D.BUILD with the dictionary_filename parameter set to the name of the file, the dictionary_location parameter set to a directory on the database machine specified in UTL_FILE_DIR, and (optionally) the options parameter set to the constant DBMS_LOGMNR_D.STORE_IN_FLAT_FILE. With these settings, the BUILD procedure queries the data dictionary tables of the current database and creates a text-based file containing their contents. The directory specified in dictionary_location must exist as a value in the UTL_FILE_DIR initialization parameter; after setting this parameter, the instance will need to be restarted in order for it to take effect.
Only extract the dictionary to a flat file after all DDL statements have been run and prior to the creation of any redo log files that will be analyzed. Do not run DBMS_LOGMNR_D.BUILD while DDL operations are occurring in the database; otherwise, the snapshot of the dictionary will not be consistent.
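UTL_FILE_DIR is a static parameter, so preparing it looks roughly like the following (a minimal sketch; the directory shown matches the one used in the example session below — adjust for your environment, and note SCOPE=SPFILE assumes the instance is running from an spfile):

SQL> ALTER SYSTEM SET utl_file_dir='/u04/app/oracle/logmnr' SCOPE=SPFILE;

System altered.

SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP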
Action Items to Perform
The scenario described in this section performs the following actions:

- The same database will be used for the source and mining database.
- The following SQL statement was run in the database by the user SCOTT and written to the redo log file at around 06-OCT-2012 00:21:00.

  insert into hr.departments(department_id, department_name, manager_id, location_id)
  values(hr.departments_seq.nextval, 'Engineering', null, 1700);

- Verify the utl_file_dir initialization parameter contains the destination directory where the LogMiner dictionary will be extracted to.
- Extract the LogMiner dictionary to a flat file.
- Determine the redo log files to be analyzed by LogMiner.
- Add two archived redo log files to the list of log files for LogMiner to analyze.
- Start a LogMiner session with no options.
- Query V$LOGMNR_CONTENTS for the SQL_REDO and SQL_UNDO for the above INSERT statement.
- Properly end the LogMiner session using the DBMS_LOGMNR.END_LOGMNR procedure, which performs all cleanup operations. This procedure is also called automatically when you log out of a database or when you call DBMS_LOGMNR.ADD_LOGFILE and specify the NEW option.
Example LogMiner Session
CONNECT miner/miner
Connected.

ALTER SESSION SET nls_date_format = 'DD-MON-YYYY HH24:MI:SS';

Session altered.

SELECT name, value FROM v$parameter WHERE name = 'utl_file_dir';

NAME            VALUE
--------------- -------------------------
utl_file_dir    /u04/app/oracle/logmnr

BEGIN
  DBMS_LOGMNR_D.BUILD (
    dictionary_filename => 'dictionary.ora',
    dictionary_location => '/u04/app/oracle/logmnr',
    options             => DBMS_LOGMNR_D.STORE_IN_FLAT_FILE
  );
END;
/

PL/SQL procedure successfully completed.

ls -l /u04/app/oracle/logmnr/dictionary.ora
-rw-r--r-- 1 oracle asmadmin 39125525 Oct  6 00:25 /u04/app/oracle/logmnr/dictionary.ora

COLUMN name       FORMAT A66 HEAD "Log File Name"
COLUMN first_time FORMAT A23 HEAD "First Time"

SELECT name
     , TO_CHAR(first_time, 'DD-MON-YYYY HH24:MI:SS') first_time
FROM   v$archived_log
WHERE  name IS NOT NULL
AND    first_time BETWEEN TO_DATE('06-OCT-2012 00:15:00', 'DD-MON-YYYY HH24:MI:SS')
                      AND TO_DATE('06-OCT-2012 00:45:00', 'DD-MON-YYYY HH24:MI:SS')
ORDER BY sequence#;

Log File Name                                                      First Time
------------------------------------------------------------------ -----------------------
+FRA/racdb/archivelog/2012_10_06/thread_1_seq_3138.1105.795917893  06-OCT-2012 00:16:38
+FRA/racdb/archivelog/2012_10_06/thread_1_seq_3139.1036.795918129  06-OCT-2012 00:26:41
+FRA/racdb/archivelog/2012_10_06/thread_1_seq_3140.1123.795919913  06-OCT-2012 00:36:39

BEGIN
  DBMS_LOGMNR.ADD_LOGFILE (
    LogFileName => '+FRA/racdb/archivelog/2012_10_06/thread_1_seq_3138.1105.795917893',
    options     => DBMS_LOGMNR.NEW
  );
END;
/

PL/SQL procedure successfully completed.

BEGIN
  DBMS_LOGMNR.ADD_LOGFILE (
    LogFileName => '+FRA/racdb/archivelog/2012_10_06/thread_1_seq_3139.1036.795918129',
    options     => DBMS_LOGMNR.ADDFILE
  );
END;
/

PL/SQL procedure successfully completed.

BEGIN
  DBMS_LOGMNR.START_LOGMNR (
    DictFileName => '/u04/app/oracle/logmnr/dictionary.ora'
  );
END;
/

PL/SQL procedure successfully completed.

COLUMN timestamp FORMAT A20
COLUMN filename  FORMAT A38

SELECT timestamp
     , filename
     , dictionary_scn
     , NVL(info, 'VALID') as DICTIONARY_STATUS
FROM   v$logmnr_dictionary;

TIMESTAMP            FILENAME                               DICTIONARY_SCN DICTIONARY_STATUS
-------------------- -------------------------------------- -------------- ------------------
06-OCT-2012 00:24:57 /u04/app/oracle/logmnr/dictionary.ora        96112409 VALID

COLUMN username  FORMAT A8
COLUMN operation FORMAT A9
COLUMN sql_redo  FORMAT A25 WORD_WRAPPED
COLUMN sql_undo  FORMAT A25 WORD_WRAPPED
COLUMN timestamp FORMAT A20

SELECT username
     , operation
     , sql_redo
     , sql_undo
     , TO_CHAR(timestamp, 'DD-MON-YYYY HH24:MI:SS') timestamp
     , scn
FROM   v$logmnr_contents
WHERE  username  = 'SCOTT'
AND    operation = 'INSERT'
AND    seg_owner = 'HR';

USERNAME OPERATION SQL_REDO                  SQL_UNDO                  TIMESTAMP            SCN
-------- --------- ------------------------- ------------------------- -------------------- ----------
SCOTT    INSERT    insert into               delete from               06-OCT-2012 00:21:38   96112299
                   "HR"."DEPARTMENTS"("DEPAR "HR"."DEPARTMENTS" where
                   TMENT_ID","DEPARTMENT_NAM "DEPARTMENT_ID" = '320'
                   E","MANAGER_ID","LOCATION and "DEPARTMENT_NAME" =
                   _ID") values              'Engineering' and
                   ('320','Engineering',NULL "MANAGER_ID" IS NULL and
                   ,'1700');                 "LOCATION_ID" = '1700'
                                             and ROWID =
                                             'AAAQ08AAEAAAACvAAb';
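The session above stops short of the cleanup step listed in the action items; ending it is the same call used throughout this guide:

BEGIN
  DBMS_LOGMNR.END_LOGMNR();
END;
/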
Scenario 2: Extract LogMiner Dictionary to the Redo Log Files
Oracle recommends extracting the LogMiner dictionary to the redo log files when you do not expect to have access to the source database from which the redo log files were created, or if you anticipate that changes will be made to the column definitions in the tables of interest.

Requirements
Review the requirements section at the beginning of this guide to prepare the source and mining database, the data dictionary, and the redo log files that LogMiner will mine in this example. To extract the dictionary to the redo log files, call the DBMS_LOGMNR_D.BUILD procedure with the options parameter set to the constant DBMS_LOGMNR_D.STORE_IN_REDO_LOGS. The redo log file(s) containing the LogMiner dictionary produced by the DBMS_LOGMNR_D.BUILD procedure must then be explicitly added to the current LogMiner session through the DBMS_LOGMNR.ADD_LOGFILE procedure, and the session must be started with the DBMS_LOGMNR.DICT_FROM_REDO_LOGS option so that LogMiner expects to find the dictionary in the redo log files specified for the session. Note that the size of the LogMiner dictionary may cause it to be contained in multiple redo log files.
Extracting the dictionary to the redo log files requires the following:
- Oracle9i database or later
- The COMPATIBLE initialization parameter must be set to 9.2.0 or higher
- Archivelog mode enabled in order to generate usable redo log files
- Supplemental logging (at least the minimum level) should be enabled in order to take advantage of all LogMiner features
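A minimal sketch of the extraction and of locating the dictionary-bearing log files (the same BUILD call and V$ARCHIVED_LOG columns appear in the full session below):

BEGIN
  DBMS_LOGMNR_D.BUILD (
    options => DBMS_LOGMNR_D.STORE_IN_REDO_LOGS
  );
END;
/

-- The dictionary may span several files; look for the begin/end markers.
SELECT name, sequence#, dictionary_begin, dictionary_end
FROM   v$archived_log
WHERE  dictionary_begin = 'YES'
OR     dictionary_end   = 'YES'
ORDER BY sequence#;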
Action Items to Perform
The scenario described in this section performs the following actions:

- The same database will be used for the source and mining database.
- The following SQL statement was run in the database by the user SCOTT and written to the redo log file at around 09-OCT-2012 10:05:00.

  insert into hr.departments(department_id, department_name, manager_id, location_id)
  values(hr.departments_seq.nextval, 'Engineering', null, 1700);

- Extract the LogMiner dictionary to the redo log files.
- Identify the archived redo log files containing the LogMiner dictionary and any other log files to be analyzed by LogMiner.
- Add archived redo log files to the list of log files for LogMiner to analyze. All log files from the time of the target SQL statement through those containing the LogMiner dictionary must be added.
- Query the V$LOGMNR_LOGS view to determine which redo log files have been manually or automatically added to the list of redo log files for LogMiner to analyze.
- Start a LogMiner session and enable the following options:
  - DICT_FROM_REDO_LOGS — directs LogMiner to find a LogMiner dictionary in the redo log files that were specified.
  - DDL_DICT_TRACKING — instructs LogMiner to update its internal dictionary if a DDL event occurs. (This option cannot be used in conjunction with the DICT_FROM_ONLINE_CATALOG option and cannot be used when the LogMiner dictionary being used is one that was extracted to a flat file prior to Oracle9i.)
  - PRINT_PRETTY_SQL — formats the reconstructed SQL statements to make them easier to read.
- Query V$LOGMNR_CONTENTS for the SQL_REDO and SQL_UNDO for the above INSERT statement.
- Properly end the LogMiner session using the DBMS_LOGMNR.END_LOGMNR procedure, which performs all cleanup operations. This procedure is also called automatically when you log out of a database or when you call DBMS_LOGMNR.ADD_LOGFILE and specify the NEW option.
Example LogMiner Session
CONNECT miner/miner
Connected.

ALTER SESSION SET nls_date_format = 'DD-MON-YYYY HH24:MI:SS';

Session altered.

BEGIN
  DBMS_LOGMNR_D.BUILD (
    options => DBMS_LOGMNR_D.STORE_IN_REDO_LOGS
  );
END;
/

PL/SQL procedure successfully completed.

COLUMN name             FORMAT A66 HEAD "Log File Name"
COLUMN first_time       FORMAT A23 HEAD "First Time"
COLUMN dictionary_begin FORMAT A5  HEAD "Dict|Begin"
COLUMN dictionary_end   FORMAT A5  HEAD "Dict|End"

SELECT name
     , TO_CHAR(first_time, 'DD-MON-YYYY HH24:MI:SS') first_time
     , dictionary_begin
     , dictionary_end
FROM   v$archived_log
WHERE  name IS NOT NULL
AND    first_time BETWEEN TO_DATE('09-OCT-2012 10:00:00', 'DD-MON-YYYY HH24:MI:SS')
                      AND TO_DATE('09-OCT-2012 11:00:00', 'DD-MON-YYYY HH24:MI:SS')
ORDER BY sequence#;

                                                                                            Dict  Dict
Log File Name                                                      First Time               Begin End
------------------------------------------------------------------ ----------------------- ----- -----
+FRA/racdb/archivelog/2012_10_09/thread_1_seq_3198.1127.796212243  09-OCT-2012 10:01:07    NO    NO
+FRA/racdb/archivelog/2012_10_09/thread_1_seq_3199.957.796212841   09-OCT-2012 10:04:03    NO    NO
+FRA/racdb/archivelog/2012_10_09/thread_1_seq_3200.1044.796213443  09-OCT-2012 10:14:00    NO    NO
+FRA/racdb/archivelog/2012_10_09/thread_1_seq_3201.1083.796214041  09-OCT-2012 10:24:02    NO    NO
+FRA/racdb/archivelog/2012_10_09/thread_1_seq_3202.1013.796214585  09-OCT-2012 10:34:00    NO    NO
+FRA/racdb/archivelog/2012_10_09/thread_1_seq_3203.1038.796214587  09-OCT-2012 10:43:05    NO    NO
+FRA/racdb/archivelog/2012_10_09/thread_1_seq_3204.982.796214591   09-OCT-2012 10:43:07    YES   NO
+FRA/racdb/archivelog/2012_10_09/thread_1_seq_3205.1103.796214597  09-OCT-2012 10:43:10    NO    YES
+FRA/racdb/archivelog/2012_10_09/thread_1_seq_3206.983.796215193   09-OCT-2012 10:43:17    NO    NO

BEGIN
  DBMS_LOGMNR.ADD_LOGFILE (
    LogFileName => '+FRA/racdb/archivelog/2012_10_09/thread_1_seq_3199.957.796212841',
    options     => DBMS_LOGMNR.NEW
  );
END;
/

PL/SQL procedure successfully completed.

BEGIN
  DBMS_LOGMNR.ADD_LOGFILE (
    LogFileName => '+FRA/racdb/archivelog/2012_10_09/thread_1_seq_3200.1044.796213443',
    options     => DBMS_LOGMNR.ADDFILE
  );
END;
/

PL/SQL procedure successfully completed.

BEGIN
  DBMS_LOGMNR.ADD_LOGFILE (
    LogFileName => '+FRA/racdb/archivelog/2012_10_09/thread_1_seq_3201.1083.796214041',
    options     => DBMS_LOGMNR.ADDFILE
  );
END;
/

PL/SQL procedure successfully completed.

BEGIN
  DBMS_LOGMNR.ADD_LOGFILE (
    LogFileName => '+FRA/racdb/archivelog/2012_10_09/thread_1_seq_3202.1013.796214585',
    options     => DBMS_LOGMNR.ADDFILE
  );
END;
/

PL/SQL procedure successfully completed.

BEGIN
  DBMS_LOGMNR.ADD_LOGFILE (
    LogFileName => '+FRA/racdb/archivelog/2012_10_09/thread_1_seq_3203.1038.796214587',
    options     => DBMS_LOGMNR.ADDFILE
  );
END;
/

PL/SQL procedure successfully completed.

BEGIN
  DBMS_LOGMNR.ADD_LOGFILE (
    LogFileName => '+FRA/racdb/archivelog/2012_10_09/thread_1_seq_3204.982.796214591',
    options     => DBMS_LOGMNR.ADDFILE
  );
END;
/

PL/SQL procedure successfully completed.

BEGIN
  DBMS_LOGMNR.ADD_LOGFILE (
    LogFileName => '+FRA/racdb/archivelog/2012_10_09/thread_1_seq_3205.1103.796214597',
    options     => DBMS_LOGMNR.ADDFILE
  );
END;
/

PL/SQL procedure successfully completed.

COLUMN filename         FORMAT A66 HEAD "Log File Name"
COLUMN type             FORMAT A8  HEAD "Type"
COLUMN status           FORMAT A33 HEAD "Status"
COLUMN dictionary_begin FORMAT A5  HEAD "Dict|Begin"
COLUMN dictionary_end   FORMAT A5  HEAD "Dict|End"

SELECT filename
     , type
     , ( CASE status
           WHEN 0 THEN 'Redo log file will be processed.'
           WHEN 1 THEN 'First log file to be processed.'
           WHEN 2 THEN 'Redo log file will not be processed (pruned).'
           WHEN 4 THEN 'Redo log file is missing from LogMiner list.'
         END) as status
     , dictionary_begin
     , dictionary_end
FROM   v$logmnr_logs
ORDER BY log_id;

                                                                                                              Dict  Dict
Log File Name                                                      Type     Status                            Begin End
------------------------------------------------------------------ -------- --------------------------------- ----- -----
+FRA/racdb/archivelog/2012_10_09/thread_1_seq_3199.957.796212841   ARCHIVE  Redo log file will be processed.  NO    NO
+FRA/racdb/archivelog/2012_10_09/thread_1_seq_3200.1044.796213443  ARCHIVE  Redo log file will be processed.  NO    NO
+FRA/racdb/archivelog/2012_10_09/thread_1_seq_3201.1083.796214041  ARCHIVE  Redo log file will be processed.  NO    NO
+FRA/racdb/archivelog/2012_10_09/thread_1_seq_3202.1013.796214585  ARCHIVE  Redo log file will be processed.  NO    NO
+FRA/racdb/archivelog/2012_10_09/thread_1_seq_3203.1038.796214587  ARCHIVE  Redo log file will be processed.  NO    NO
+FRA/racdb/archivelog/2012_10_09/thread_1_seq_3204.982.796214591   ARCHIVE  Redo log file will be processed.  YES   NO
+FRA/racdb/archivelog/2012_10_09/thread_1_seq_3205.1103.796214597  ARCHIVE  Redo log file will be processed.  NO    YES

BEGIN
  DBMS_LOGMNR.START_LOGMNR (
    options => DBMS_LOGMNR.DICT_FROM_REDO_LOGS +
               DBMS_LOGMNR.DDL_DICT_TRACKING +
               DBMS_LOGMNR.PRINT_PRETTY_SQL
  );
END;
/

PL/SQL procedure successfully completed.

COLUMN username  FORMAT A8
COLUMN operation FORMAT A9
COLUMN sql_redo  FORMAT A25 WORD_WRAPPED
COLUMN sql_undo  FORMAT A25 WORD_WRAPPED
COLUMN timestamp FORMAT A20

SELECT username
     , operation
     , sql_redo
     , sql_undo
     , TO_CHAR(timestamp, 'DD-MON-YYYY HH24:MI:SS') timestamp
     , scn
FROM   v$logmnr_contents
WHERE  username  = 'SCOTT'
AND    operation = 'INSERT'
AND    seg_owner = 'HR';

USERNAME OPERATION SQL_REDO                  SQL_UNDO                  TIMESTAMP            SCN
-------- --------- ------------------------- ------------------------- -------------------- ----------
SCOTT    INSERT    insert into               delete from               09-OCT-2012 10:05:12   97631060
                   "HR"."DEPARTMENTS"        "HR"."DEPARTMENTS"
                    values                   where
                   "DEPARTMENT_ID" = 350,    "DEPARTMENT_ID" = 350 and
                   "DEPARTMENT_NAME" =       "DEPARTMENT_NAME" =
                    'Engineering',           'Engineering' and
                   "MANAGER_ID" IS NULL,     "MANAGER_ID" IS NULL and
                   "LOCATION_ID" = 1700;     "LOCATION_ID" = 1700 and
                                             ROWID =
                                             'AAAQ08AAEAAAACuAAB';
Scenario 3: Reading the LogMiner Dictionary From the Current Online Catalog
Oracle recommends reading the LogMiner dictionary from the current online catalog when you will have access to the source database from which the redo log files were created and when no changes to the column definitions in the tables of interest are anticipated. The online catalog contains the latest information about the database and may be the fastest way to start your analysis. Because DDL operations that change important tables should be somewhat rare in a production system, the online catalog generally contains the information you need for your analysis.
Requirements
Review the requirements section at the beginning of this guide to prepare the source and mining database, the data dictionary, and the redo log files that LogMiner will mine in this example. To read the dictionary from the current online catalog, set the options parameter of the DBMS_LOGMNR.START_LOGMNR procedure to the constant DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG. In this case, it is not necessary to have previously extracted the LogMiner dictionary to a flat file or into the redo log files being analyzed through the DBMS_LOGMNR_D.BUILD procedure. In addition to using the online catalog to analyze online redo log files, you can use it to analyze archived redo log files if you are on the same system that generated them.
The online catalog can only reconstruct SQL statements that are executed on the latest version of a table. As soon as a table is altered, the online catalog is updated and no longer reflects the previous version of the table. This means that LogMiner will not be able to reconstruct any SQL statements that were executed on the previous version of the table. Instead, LogMiner generates non-executable SQL (including hexadecimal-to-raw formatting of binary values) in the SQL_REDO column of the V$LOGMNR_CONTENTS view similar to the following example:
insert into HR.EMPLOYEES("COL 1","COL 2","COL 3","COL 4") values
  (HEXTORAW('c2035b'),HEXTORAW('456e67696e656572696e67'),NULL,HEXTORAW('c212'));
The online catalog option requires that the database be open.
The online catalog option is not valid with the DDL_DICT_TRACKING option of DBMS_LOGMNR.START_LOGMNR.
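In its simplest form, starting LogMiner against the online catalog is a single call (a minimal sketch; the full session below combines it with additional options):

BEGIN
  DBMS_LOGMNR.START_LOGMNR (
    options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG
  );
END;
/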
Action Items to Perform
The scenario described in this section performs the following actions:

- The same database will be used for the source and mining database.
- The following SQL statement was run in the database by the user SCOTT and written to the redo log file at around 16-OCT-2012 10:22:13.

  insert into hr.departments(department_id, department_name, manager_id, location_id)
  values(hr.departments_seq.nextval, 'Engineering', null, 1700);

- Read the LogMiner dictionary from the current online catalog.
- Determine and add the redo log files to be analyzed by LogMiner.
- Start a LogMiner session and enable the following options:
  - DICT_FROM_ONLINE_CATALOG — directs LogMiner to use the dictionary currently in use for the database.
  - PRINT_PRETTY_SQL — formats the reconstructed SQL statements to make them easier to read.
  - NO_SQL_DELIMITER — suppresses the semicolon from the reconstructed statements. This is helpful for applications that open a cursor and then execute the reconstructed statements.
  - COMMITTED_DATA_ONLY — only DML statements corresponding to committed transactions are returned. DML statements corresponding to a committed transaction are grouped together, and transactions are returned in their commit order. Transactions that are rolled back or in progress are filtered out, as are internal redo records (those related to index operations, management, and so on). If this option is not set, all rows for all transactions (committed, rolled back, and in progress) are returned in the order in which they are found in the redo logs (in order of SCN values).
- Query the V$LOGMNR_LOGS view to determine which redo log files have been manually or automatically added to the list of redo log files for LogMiner to analyze.
- Query V$LOGMNR_CONTENTS for the SQL_REDO and SQL_UNDO for the above INSERT statement.
- Properly end the LogMiner session using the DBMS_LOGMNR.END_LOGMNR procedure, which performs all cleanup operations. This procedure is also called automatically when you log out of a database or when you call DBMS_LOGMNR.ADD_LOGFILE and specify the NEW option.
Example LogMiner Session
CONNECT miner/miner
Connected.

ALTER SESSION SET nls_date_format = 'DD-MON-YYYY HH24:MI:SS';

Session altered.

COLUMN name             FORMAT A66 HEAD "Log File Name"
COLUMN first_time       FORMAT A23 HEAD "First Time"
COLUMN dictionary_begin FORMAT A5  HEAD "Dict|Begin"
COLUMN dictionary_end   FORMAT A5  HEAD "Dict|End"

SELECT name
     , TO_CHAR(first_time, 'DD-MON-YYYY HH24:MI:SS') first_time
     , dictionary_begin
     , dictionary_end
FROM   v$archived_log
WHERE  name IS NOT NULL
AND    first_time BETWEEN TO_DATE('16-OCT-2012 10:00:00', 'DD-MON-YYYY HH24:MI:SS')
                      AND TO_DATE('16-OCT-2012 11:00:00', 'DD-MON-YYYY HH24:MI:SS')
ORDER BY sequence#;

                                                                                            Dict  Dict
Log File Name                                                      First Time               Begin End
------------------------------------------------------------------ ----------------------- ----- -----
+FRA/racdb/archivelog/2012_10_16/thread_1_seq_4261.985.796817011   16-OCT-2012 10:02:20    NO    NO
+FRA/racdb/archivelog/2012_10_16/thread_1_seq_4262.975.796817609   16-OCT-2012 10:03:30    NO    NO
+FRA/racdb/archivelog/2012_10_16/thread_1_seq_4263.1109.796818207  16-OCT-2012 10:13:28    NO    NO
+FRA/racdb/archivelog/2012_10_16/thread_1_seq_4264.969.796818655   16-OCT-2012 10:23:27    NO    NO
+FRA/racdb/archivelog/2012_10_16/thread_1_seq_4265.1126.796818969  16-OCT-2012 10:30:54    NO    NO
+FRA/racdb/archivelog/2012_10_16/thread_1_seq_4266.1082.796819407  16-OCT-2012 10:36:09    NO    NO
+FRA/racdb/archivelog/2012_10_16/thread_1_seq_4267.1081.796819855  16-OCT-2012 10:43:26    NO    NO
+FRA/racdb/archivelog/2012_10_16/thread_1_seq_4268.1076.796820451  16-OCT-2012 10:50:53    NO    NO

BEGIN
  DBMS_LOGMNR.ADD_LOGFILE (
    LogFileName => '+FRA/racdb/archivelog/2012_10_16/thread_1_seq_4263.1109.796818207',
    options     => DBMS_LOGMNR.NEW
  );
END;
/

PL/SQL procedure successfully completed.

BEGIN
  DBMS_LOGMNR.ADD_LOGFILE (
    LogFileName => '+FRA/racdb/archivelog/2012_10_16/thread_1_seq_4264.969.796818655',
    options     => DBMS_LOGMNR.ADDFILE
  );
END;
/

PL/SQL procedure successfully completed.

COLUMN filename         FORMAT A66 HEAD "Log File Name"
COLUMN type             FORMAT A8  HEAD "Type"
COLUMN status           FORMAT A33 HEAD "Status"
COLUMN dictionary_begin FORMAT A5  HEAD "Dict|Begin"
COLUMN dictionary_end   FORMAT A5  HEAD "Dict|End"

SELECT filename
     , type
     , ( CASE status
           WHEN 0 THEN 'Redo log file will be processed.'
           WHEN 1 THEN 'First log file to be processed.'
           WHEN 2 THEN 'Redo log file will not be processed (pruned).'
           WHEN 4 THEN 'Redo log file is missing from LogMiner list.'
         END) as status
     , dictionary_begin
     , dictionary_end
FROM   v$logmnr_logs
ORDER BY log_id;

                                                                                                              Dict  Dict
Log File Name                                                      Type     Status                            Begin End
------------------------------------------------------------------ -------- --------------------------------- ----- -----
+FRA/racdb/archivelog/2012_10_16/thread_1_seq_4263.1109.796818207  ARCHIVE  Redo log file will be processed.  NO    NO
+FRA/racdb/archivelog/2012_10_16/thread_1_seq_4264.969.796818655   ARCHIVE  Redo log file will be processed.  NO    NO

BEGIN
  DBMS_LOGMNR.START_LOGMNR (
    options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG +
               DBMS_LOGMNR.PRINT_PRETTY_SQL +
               DBMS_LOGMNR.NO_SQL_DELIMITER +
               DBMS_LOGMNR.COMMITTED_DATA_ONLY
  );
END;
/

PL/SQL procedure successfully completed.

COLUMN username  FORMAT A8
COLUMN operation FORMAT A9
COLUMN sql_redo  FORMAT A25 WORD_WRAPPED
COLUMN sql_undo  FORMAT A25 WORD_WRAPPED
COLUMN timestamp FORMAT A20

SELECT username
     , operation
     , sql_redo
     , sql_undo
     , TO_CHAR(timestamp, 'DD-MON-YYYY HH24:MI:SS') timestamp
     , scn
FROM   v$logmnr_contents
WHERE  username  = 'SCOTT'
AND    operation = 'INSERT'
AND    seg_owner = 'HR';

USERNAME OPERATION SQL_REDO                  SQL_UNDO                  TIMESTAMP            SCN
-------- --------- ------------------------- ------------------------- -------------------- ----------
SCOTT    INSERT    insert into               delete from               16-OCT-2012 10:22:19  102833460
                   "HR"."DEPARTMENTS"        "HR"."DEPARTMENTS"
                    values                   where
                   "DEPARTMENT_ID" = 360,    "DEPARTMENT_ID" = 360 and
                   "DEPARTMENT_NAME" =       "DEPARTMENT_NAME" =
                    'Engineering',           'Engineering' and
                   "MANAGER_ID" IS NULL,     "MANAGER_ID" IS NULL and
                   "LOCATION_ID" = 1700      "LOCATION_ID" = 1700 and
                                             ROWID =
                                             'AAAQ08AAEAAAACsAAB'
Scenario 4: Source Database (10g) — Remote Mining Database (11g)
This particular example demonstrates a scenario where the DBA may be managing a heavily utilized Oracle 10g OLTP database that is unable to mine its archived redo logs because of a lack of resources. Because the mining operation is not required to occur on the source database, the DBA decides to locate another database with available resources that will allow LogMiner to analyze the logs. In this example, the mining database will be an Oracle 11g database running on a different server than the source.

This example will use the following configuration:
- Source Database
SOURCE_DB=ORA10G
SOURCE_MACHINE=packmule.idevelopment.info
SOURCE_DB_CONNECT_STRING=sys/manager@//packmule:1521/ora10g.idevelopment.info as sysdba
SOURCE_DB_VERSION=10.2.0.4
- Mining Database
MINING_DB=ORA11G
MINING_MACHINE=testnode1.idevelopment.info
MINING_ARC_DIR=/u03/app/oracle/oraarch/ORA10G
MINING_DB_CONNECT_STRING=miner/miner@//testnode1:1521/ora11g.idevelopment.info
MINING_DB_VERSION=11.2.0.3
Action Items to Perform
The scenario described in this section performs the following actions:

- The following SQL statement was run in the database by the user SCOTT and written to the redo log file at around 18-OCT-2012 18:05:15.

  insert into hr.departments(department_id, department_name, manager_id, location_id)
  values(hr.departments_seq.nextval, 'Engineering', null, 1700);

- Connect to the source database.
- Extract the LogMiner dictionary to the redo log files.
- Identify the archived redo log files containing the LogMiner dictionary and any other log files to be analyzed by LogMiner. All log files from the time of the target SQL statement through those containing the LogMiner dictionary must be part of the LogMiner session.
- Send all identified archived redo log files from the source database server to the mining database server.
- Connect to the mining database.
- From the mining database, add the archived redo log files sent from the source database to the list of log files for LogMiner to analyze. All log files from the time of the target SQL statement through those containing the LogMiner dictionary must be added.
- Query the V$LOGMNR_LOGS view to determine which redo log files have been manually or automatically added to the list of redo log files for LogMiner to analyze.
- Start a LogMiner session on the mining database and enable the following options:
  - DICT_FROM_REDO_LOGS — directs LogMiner to find a LogMiner dictionary in the redo log files that were specified.
  - PRINT_PRETTY_SQL — formats the reconstructed SQL statements to make them easier to read.
- Query V$LOGMNR_CONTENTS for the SQL_REDO and SQL_UNDO for the above INSERT statement.
- Properly end the LogMiner session using the DBMS_LOGMNR.END_LOGMNR procedure, which performs all cleanup operations. This procedure is also called automatically when you log out of a database or when you call DBMS_LOGMNR.ADD_LOGFILE and specify the NEW option.
Example LogMiner Session
CONNECT sys/manager@//packmule:1521/ora10g.idevelopment.info as sysdba
Connected.

ALTER SESSION SET nls_date_format = 'DD-MON-YYYY HH24:MI:SS';

Session altered.

BEGIN
  DBMS_LOGMNR_D.BUILD (
    options => DBMS_LOGMNR_D.STORE_IN_REDO_LOGS
  );
END;
/

PL/SQL procedure successfully completed.

COLUMN name             FORMAT A81 HEAD "Log File Name"
COLUMN first_time       FORMAT A23 HEAD "First Time"
COLUMN dictionary_begin FORMAT A5  HEAD "Dict|Begin"
COLUMN dictionary_end   FORMAT A5  HEAD "Dict|End"

SELECT name
     , TO_CHAR(first_time, 'DD-MON-YYYY HH24:MI:SS') first_time
     , dictionary_begin
     , dictionary_end
FROM   v$archived_log
WHERE  name IS NOT NULL
AND    first_time BETWEEN TO_DATE('18-OCT-2012 17:45:00', 'DD-MON-YYYY HH24:MI:SS')
                      AND TO_DATE('18-OCT-2012 19:00:00', 'DD-MON-YYYY HH24:MI:SS')
ORDER BY sequence#;

                                                                                                           Dict  Dict
Log File Name                                                                     First Time               Begin End
--------------------------------------------------------------------------------- ----------------------- ----- -----
/u03/flash_recovery_area/ORA10G/archivelog/2012_10_18/o1_mf_1_3047_880yr881_.arc  18-OCT-2012 17:46:23    NO    NO
/u03/flash_recovery_area/ORA10G/archivelog/2012_10_18/o1_mf_1_3048_880yzv08_.arc  18-OCT-2012 17:56:24    NO    NO
/u03/flash_recovery_area/ORA10G/archivelog/2012_10_18/o1_mf_1_3049_880zlk2d_.arc  18-OCT-2012 18:00:26    NO    NO
/u03/flash_recovery_area/ORA10G/archivelog/2012_10_18/o1_mf_1_3050_881059l7_.arc  18-OCT-2012 18:10:25    NO    NO
/u03/flash_recovery_area/ORA10G/archivelog/2012_10_18/o1_mf_1_3051_8810qz36_.arc  18-OCT-2012 18:20:25    NO    NO
/u03/flash_recovery_area/ORA10G/archivelog/2012_10_18/o1_mf_1_3052_8810shg8_.arc  18-OCT-2012 18:30:23    NO    NO
/u03/flash_recovery_area/ORA10G/archivelog/2012_10_18/o1_mf_1_3053_8810sr4p_.arc  18-OCT-2012 18:31:11    YES   YES
/u03/flash_recovery_area/ORA10G/archivelog/2012_10_18/o1_mf_1_3054_8811dmfj_.arc  18-OCT-2012 18:31:20    NO    NO

SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining Scoring Engine
and Real Application Testing options

[oracle@packmule ~]$ cd /u03/flash_recovery_area/ORA10G/archivelog/2012_10_18
[oracle@packmule 2012_10_18]$ scp o1_mf_1_3049_880zlk2d_.arc o1_mf_1_305[0123]_*.arc testnode1:/u03/app/oracle/oraarch/ORA10G/
oracle@testnode1's password: xxxxxxxxxx
o1_mf_1_3049_880zlk2d_.arc    100%  691KB 691.0KB/s   00:00
o1_mf_1_3050_881059l7_.arc    100%  131KB 131.0KB/s   00:00
o1_mf_1_3051_8810qz36_.arc    100%   82KB  82.0KB/s   00:00
o1_mf_1_3052_8810shg8_.arc    100% 5120     5.0KB/s   00:00
o1_mf_1_3053_8810sr4p_.arc    100% 9223KB   9.0MB/s   00:00

[oracle@packmule 2012_10_18]$ sqlplus /nolog

SQL*Plus: Release 10.2.0.4.0 - Production on Thu Oct 18 18:53:37 2012
Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.

SQL> CONNECT miner/miner@//testnode1:1521/ora11g.idevelopment.info
Connected.

BEGIN
  DBMS_LOGMNR.ADD_LOGFILE (
    LogFileName => '/u03/app/oracle/oraarch/ORA10G/o1_mf_1_3049_880zlk2d_.arc',
    options     => DBMS_LOGMNR.NEW
  );
END;
/

PL/SQL procedure successfully completed.

BEGIN
  DBMS_LOGMNR.ADD_LOGFILE (
    LogFileName => '/u03/app/oracle/oraarch/ORA10G/o1_mf_1_3050_881059l7_.arc',
    options     => DBMS_LOGMNR.ADDFILE
  );
END;
/

PL/SQL procedure successfully completed.

BEGIN
  DBMS_LOGMNR.ADD_LOGFILE (
    LogFileName => '/u03/app/oracle/oraarch/ORA10G/o1_mf_1_3051_8810qz36_.arc',
    options     => DBMS_LOGMNR.ADDFILE
  );
END;
/

PL/SQL procedure successfully completed.

BEGIN
  DBMS_LOGMNR.ADD_LOGFILE (
    LogFileName => '/u03/app/oracle/oraarch/ORA10G/o1_mf_1_3052_8810shg8_.arc',
    options     => DBMS_LOGMNR.ADDFILE
  );
END;
/

PL/SQL procedure successfully completed.

BEGIN
  DBMS_LOGMNR.ADD_LOGFILE (
    LogFileName => '/u03/app/oracle/oraarch/ORA10G/o1_mf_1_3053_8810sr4p_.arc',
    options     => DBMS_LOGMNR.ADDFILE
  );
END;
/

PL/SQL procedure successfully completed.

COLUMN filename         FORMAT A66 HEAD "Log File Name"
COLUMN type             FORMAT A8  HEAD "Type"
COLUMN status           FORMAT A33 HEAD "Status"
COLUMN dictionary_begin FORMAT A5  HEAD "Dict|Begin"
COLUMN dictionary_end   FORMAT A5  HEAD "Dict|End"

SELECT filename
     , type
     , ( CASE status
           WHEN 0 THEN 'Redo log file will be processed.'
           WHEN 1 THEN 'First log file to be processed.'
           WHEN 2 THEN 'Redo log file will not be processed (pruned).'
           WHEN 4 THEN 'Redo log file is missing from LogMiner list.'
         END) as status
     , dictionary_begin
     , dictionary_end
FROM   v$logmnr_logs
ORDER BY log_id;

                                                                                                      Dict  Dict
Log File Name                                              Type     Status                            Begin End
---------------------------------------------------------- -------- --------------------------------- ----- -----
/u03/app/oracle/oraarch/ORA10G/o1_mf_1_3049_880zlk2d_.arc  ARCHIVE  Redo log file will be processed.  NO    NO
/u03/app/oracle/oraarch/ORA10G/o1_mf_1_3050_881059l7_.arc  ARCHIVE  Redo log file will be processed.  NO    NO
/u03/app/oracle/oraarch/ORA10G/o1_mf_1_3051_8810qz36_.arc  ARCHIVE  Redo log file will be processed.  NO    NO
/u03/app/oracle/oraarch/ORA10G/o1_mf_1_3052_8810shg8_.arc  ARCHIVE  Redo log file will be processed.  NO    NO
/u03/app/oracle/oraarch/ORA10G/o1_mf_1_3053_8810sr4p_.arc  ARCHIVE  Redo log file will be processed.  YES   YES

BEGIN
  DBMS_LOGMNR.START_LOGMNR (
    options => DBMS_LOGMNR.DICT_FROM_REDO_LOGS +
               DBMS_LOGMNR.PRINT_PRETTY_SQL
  );
END;
/

PL/SQL procedure successfully completed.

COLUMN username  FORMAT A8
COLUMN operation FORMAT A9
COLUMN sql_redo  FORMAT A25 WORD_WRAPPED
COLUMN sql_undo  FORMAT A25 WORD_WRAPPED
COLUMN timestamp FORMAT A20

SELECT username
     , operation
     , sql_redo
     , sql_undo
     , TO_CHAR(timestamp, 'DD-MON-YYYY HH24:MI:SS') timestamp
     , scn
FROM   v$logmnr_contents
WHERE  username  = 'SCOTT'
AND    operation = 'INSERT'
AND    seg_owner = 'HR';

USERNAME OPERATION SQL_REDO                  SQL_UNDO                  TIMESTAMP            SCN
-------- --------- ------------------------- ------------------------- -------------------- ----------
SCOTT    INSERT    insert into               delete from               18-OCT-2012 18:05:06   63352304
                   "HR"."DEPARTMENTS"        "HR"."DEPARTMENTS"
                    values                   where
                   "DEPARTMENT_ID" = 310,    "DEPARTMENT_ID" = 310 and
                   "DEPARTMENT_NAME" =       "DEPARTMENT_NAME" =
                    'Engineering',           'Engineering' and
                   "MANAGER_ID" IS NULL,     "MANAGER_ID" IS NULL and
                   "LOCATION_ID" = 1700;     "LOCATION_ID" = 1700 and
                                             ROWID =
                                             'AAApwWAANAAAAKcAAb';
Scenario 5: Using the CONTINUOUS_MINE Option
LogMiner needs to mine data in the redo log files in order to analyze changes made to the database. Up to this point, any redo log files that LogMiner was required to analyze needed to be explicitly added to the list using the DBMS_LOGMNR.ADD_LOGFILE procedure.

# First redo log file
BEGIN
DBMS_LOGMNR.ADD_LOGFILE (
LogFileName => '+FRA/racdb/archivelog/2012_10_22/thread_1_seq_5143.2392.797337181',
options => DBMS_LOGMNR.NEW
);
END;
/
# Subsequent redo log file(s)
BEGIN
DBMS_LOGMNR.ADD_LOGFILE (
LogFileName => '+FRA/racdb/archivelog/2012_10_22/thread_1_seq_5144.2394.797337779',
options => DBMS_LOGMNR.ADDFILE
);
END;
/
The CONTINUOUS_MINE Option
Oracle introduced an alternative approach in Oracle9i Release 2 (9.2) that directs LogMiner to automatically find and create a list of redo log files (archived and online) for LogMiner to analyze after the LogMiner session has started. This is done using the CONTINUOUS_MINE option of the DBMS_LOGMNR.START_LOGMNR procedure. You can specify a date or SCN range to the DBMS_LOGMNR.START_LOGMNR procedure to find the data of interest, or specify the first redo log file to start mining from. LogMiner will use the database control file to find and add redo log files that satisfy the specified time or SCN range to the LogMiner redo log file list. When using the CONTINUOUS_MINE option, it is no longer necessary to map the time frame to an explicit set of redo log files.

The LogMiner session must be connected to the same database (the source database) that generated the redo log files when using the CONTINUOUS_MINE option. It also requires that the database be mounted and that archiving be enabled.
ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MON-YYYY HH24:MI:SS';
BEGIN
DBMS_LOGMNR.START_LOGMNR (
STARTTIME => '22-OCT-2012 10:30:00',
ENDTIME => '22-OCT-2012 10:45:00',
OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG +
DBMS_LOGMNR.CONTINUOUS_MINE
);
END;
/
You can also direct LogMiner to automatically build a list of redo log files to analyze by specifying just one redo log file using DBMS_LOGMNR.ADD_LOGFILE, and then specifying the CONTINUOUS_MINE option when you start LogMiner. If only one redo log file is specified in the list for the LogMiner session, then LogMiner will assume the name of the next archived redo log file(s) based on the same directory location and subsequent log sequence numbers.
BEGIN
DBMS_LOGMNR.ADD_LOGFILE (
LogFileName => '+FRA/racdb/archivelog/2012_10_22/thread_1_seq_5159.2424.797346781',
options => DBMS_LOGMNR.NEW
);
END;
/
BEGIN
DBMS_LOGMNR.START_LOGMNR (
OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG +
DBMS_LOGMNR.CONTINUOUS_MINE
);
END;
/
Keep the following in mind when using the CONTINUOUS_MINE option:

- You need only specify the first log file to start mining, or simply provide a starting date or SCN to indicate to LogMiner where to begin mining logs.
- The database control file will hold information about a limited number of archived redo log files, although the number of entries can be quite large. Query the V$ARCHIVED_LOG view to determine which redo log file entries can be found by LogMiner. Even if an entry is listed in the database control file (and the V$ARCHIVED_LOG view), the archived redo log file may not be accessible by LogMiner for various reasons. For example, the archived redo log file may have been deleted or moved from its location (for example, honoring a retention policy where a backup operation was performed to tape), or the directory where it resides may not be available. (See the sketch after this list for one way to check.)
- If you specify the CONTINUOUS_MINE option and an ending time or SCN that will occur in the future (or you do not specify an end time or SCN), a query of the V$LOGMNR_CONTENTS view will not finish until the database has generated redo log files beyond the specified time or SCN. In this scenario, LogMiner will automatically add archived redo log files to the LogMiner redo log file list as they are generated. In addition, in this scenario only, LogMiner may automatically remove redo log files from the list to keep it at 50 processed redo log files. This is to save PGA memory as LogMiner automatically adds redo log files to the list. If LogMiner did not perform automated removal, memory could eventually be exhausted.
- LogMiner can mine online redo logs. However, if the CONTINUOUS_MINE option is not specified, it is possible that the database is writing to the online redo log file at the same time that LogMiner is reading it. If a log switch occurs while LogMiner is reading an online redo log file, the database will overwrite what LogMiner is attempting to read, and the data that LogMiner returns in that case is unpredictable.
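As a quick pre-flight check (a minimal sketch; STATUS values are the standard V$ARCHIVED_LOG codes — A available, D deleted, U unavailable, X expired), the control file's entries can be inspected before starting a continuous-mine session:

-- List the archived log entries LogMiner could find for the last day
SELECT name
     , sequence#
     , status
     , deleted
FROM   v$archived_log
WHERE  first_time > SYSDATE - 1
ORDER BY sequence#;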
Example 1 — Mining the Redo Log Files in a Given Date Range
This example demonstrates how to specify a date range of interest and mine the redo log files that satisfy that range. LogMiner will only report committed transactions (using the COMMITTED_DATA_ONLY option), whose effects may not yet have been made permanent in the datafiles.

- Connect as the MINER database user and set the date/time format for the SQL session.
CONNECT miner/miner
Connected.

ALTER SESSION SET nls_date_format = 'DD-MON-YYYY HH24:MI:SS';

Session altered.
- Start a new LogMiner session with a date (or SCN) range and specify the CONTINUOUS_MINE option.
BEGIN
  DBMS_LOGMNR.START_LOGMNR (
    STARTTIME => '25-OCT-2012 15:00:00',
    ENDTIME   => '25-OCT-2012 15:30:00',
    OPTIONS   => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG +
                 DBMS_LOGMNR.COMMITTED_DATA_ONLY +
                 DBMS_LOGMNR.PRINT_PRETTY_SQL +
                 DBMS_LOGMNR.CONTINUOUS_MINE
  );
END;
/

PL/SQL procedure successfully completed.
- Determine which archived redo log files have been automatically added to the list of log files LogMiner will analyze.
COLUMN filename         FORMAT A66 HEAD "Log File Name"
COLUMN low_time         FORMAT A21 HEAD "Low Time"
COLUMN high_time        FORMAT A21 HEAD "High Time"
COLUMN dictionary_begin FORMAT A5  HEAD "Dict|Begin"
COLUMN dictionary_end   FORMAT A5  HEAD "Dict|End"

SELECT filename
     , low_time
     , high_time
     , dictionary_begin
     , dictionary_end
FROM   v$logmnr_logs
ORDER BY log_id;

                                                                                                          Dict  Dict
Log File Name                                                      Low Time              High Time        Begin End
------------------------------------------------------------------ --------------------- --------------------- ----- -----
+FRA/racdb/archivelog/2012_10_25/thread_2_seq_4732.1517.797612407  25-OCT-2012 14:53:17  25-OCT-2012 15:00:06  NO    NO
+FRA/racdb/archivelog/2012_10_25/thread_2_seq_4733.1513.797613711  25-OCT-2012 15:00:06  25-OCT-2012 15:21:50  NO    NO
+FRA/racdb/archivelog/2012_10_25/thread_1_seq_5580.1518.797612405  25-OCT-2012 14:58:42  25-OCT-2012 15:00:04  NO    NO
+FRA/racdb/archivelog/2012_10_25/thread_1_seq_5581.1516.797612443  25-OCT-2012 15:00:04  25-OCT-2012 15:00:43  NO    NO
+FRA/racdb/archivelog/2012_10_25/thread_1_seq_5582.1515.797612721  25-OCT-2012 15:00:43  25-OCT-2012 15:05:21  NO    NO
+FRA/racdb/archivelog/2012_10_25/thread_1_seq_5583.1514.797613709  25-OCT-2012 15:05:21  25-OCT-2012 15:21:48  NO    NO
Use the following query to determine whether the last log file added by LogMiner is the latest archived redo log file produced from the database.
COLUMN name            FORMAT A66 HEAD "Log File Name"
COLUMN first_time      FORMAT A21 HEAD "First Time"
COLUMN completion_time FORMAT A21 HEAD "Completion Time"

SELECT name
     , first_time
     , completion_time
FROM   v$archived_log
WHERE  sequence# = (select max(sequence#) from v$archived_log);

Log File Name                                                      First Time            Completion Time
------------------------------------------------------------------ --------------------- ---------------------
+FRA/racdb/archivelog/2012_10_25/thread_1_seq_5583.1514.797613709  25-OCT-2012 15:05:21  25-OCT-2012 15:21:48
COLUMN low_time  FORMAT A21 HEAD "Low Time"
COLUMN high_time FORMAT A21 HEAD "High Time"

SELECT MIN(timestamp) as log_time
     , MAX(timestamp) as high_time
FROM   v$logmnr_contents;

LOG_TIME              High Time
--------------------- ---------------------
25-OCT-2012 15:00:04  25-OCT-2012 15:29:18
- Query V$LOGMNR_CONTENTS for any data of interest. Include all COMMIT statements but disregard any Oracle auditing records (INSERT INTO "SYSTEM"."AUD$").
COLUMN username  FORMAT A8
COLUMN operation FORMAT A9
COLUMN sql_redo  FORMAT A25 WORD_WRAPPED
COLUMN sql_undo  FORMAT A25 WORD_WRAPPED
COLUMN timestamp FORMAT A20

SELECT username
     , operation
     , sql_redo
     , sql_undo
     , TO_CHAR(timestamp, 'DD-MON-YYYY HH24:MI:SS') timestamp
     , scn
FROM   v$logmnr_contents
WHERE  username = 'SCOTT'
AND    operation IN ('INSERT', 'COMMIT')
AND    ( seg_owner NOT IN ('SYSTEM')   -- Disregard Oracle Auditing
      OR seg_owner IS NULL             -- Still include COMMITs
       )
ORDER BY timestamp;

USERNAME OPERATION SQL_REDO                  SQL_UNDO                  TIMESTAMP            SCN
-------- --------- ------------------------- ------------------------- -------------------- ----------
SCOTT    COMMIT    commit;                                             25-OCT-2012 15:00:04  109026619
SCOTT    COMMIT    commit;                                             25-OCT-2012 15:00:43  109028429
SCOTT    COMMIT    commit;                                             25-OCT-2012 15:02:12  109028564
SCOTT    INSERT    insert into               delete from               25-OCT-2012 15:02:12  109028564
                   "HR"."DEPARTMENTS"        "HR"."DEPARTMENTS"
                    values                   where
                   "DEPARTMENT_ID" = 430,    "DEPARTMENT_ID" = 430 and
                   "DEPARTMENT_NAME" =       "DEPARTMENT_NAME" =
                    'Computer Scientists',   'Computer Scientists' and
                   "MANAGER_ID" IS NULL,     "MANAGER_ID" IS NULL and
                   "LOCATION_ID" = 1700;     "LOCATION_ID" = 1700 and
                                             ROWID =
                                             'AAAQ08AAEAAAACtAAB';
SCOTT    COMMIT    commit;                                             25-OCT-2012 15:02:16  109028577
SCOTT    COMMIT    commit;                                             25-OCT-2012 15:05:21  109029883
SCOTT    COMMIT    commit;                                             25-OCT-2012 15:08:19  109030629
SCOTT    INSERT    insert into               delete from               25-OCT-2012 15:08:20  109030630
                   "HR"."DEPARTMENTS"        "HR"."DEPARTMENTS"
                    values                   where
                   "DEPARTMENT_ID" = 440,    "DEPARTMENT_ID" = 440 and
                   "DEPARTMENT_NAME" = 'IT', "DEPARTMENT_NAME" = 'IT'
                   "MANAGER_ID" IS NULL,     and "MANAGER_ID" IS NULL
                   "LOCATION_ID" = 1800;     and "LOCATION_ID" = 1800
                                             and ROWID =
                                             'AAAQ08AAEAAAACtAAC';
SCOTT    COMMIT    commit;                                             25-OCT-2012 15:08:22  109030644
SCOTT    COMMIT    commit;                                             25-OCT-2012 15:21:48  109032856
SCOTT    COMMIT    commit;                                             25-OCT-2012 15:22:11  109032894
SCOTT    INSERT    insert into               delete from               25-OCT-2012 15:22:11  109032894
                   "HR"."DEPARTMENTS"        "HR"."DEPARTMENTS"
                    values                   where
                   "DEPARTMENT_ID" = 450,    "DEPARTMENT_ID" = 450 and
                   "DEPARTMENT_NAME" =       "DEPARTMENT_NAME" =
                    'Mathematics',           'Mathematics' and
                   "MANAGER_ID" IS NULL,     "MANAGER_ID" IS NULL and
                   "LOCATION_ID" = 1700;     "LOCATION_ID" = 1700 and
                                             ROWID =
                                             'AAAQ08AAEAAAACtAAA';
SCOTT    COMMIT    commit;                                             25-OCT-2012 15:22:13  109032901
- End the LogMiner session.
BEGIN
  DBMS_LOGMNR.END_LOGMNR();
END;
/

PL/SQL procedure successfully completed.
Example 2 — Using Continuous Mining to Include Future Values in a Query
This example demonstrates how to direct a query on V$LOGMNR_CONTENTS to not finish until some future time occurs or SCN is reached, using the CONTINUOUS_MINE option. This is done by setting either the end time (ENDTIME) or end SCN (ENDSCN) parameter of the DBMS_LOGMNR.START_LOGMNR procedure to a time in the future or to an SCN value that has not yet been reached.

This example assumes you want to monitor all changes made to the table HR.DEPARTMENTS from now until 3 hours in the future, and that you are using the dictionary in the online catalog.
- Connect as the MINER database user and set the date/time format for the SQL session.
CONNECT miner/miner
Connected.

ALTER SESSION SET nls_date_format = 'DD-MON-YYYY HH24:MI:SS';

Session altered.
- Start a new LogMiner session with an end date 3 hours from now and specify the CONTINUOUS_MINE option.
BEGIN
  DBMS_LOGMNR.START_LOGMNR (
    STARTTIME => SYSDATE,
    ENDTIME   => SYSDATE + 3/24,
    OPTIONS   => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG +
                 DBMS_LOGMNR.PRINT_PRETTY_SQL +
                 DBMS_LOGMNR.CONTINUOUS_MINE
  );
END;
/

PL/SQL procedure successfully completed.
- Query V$LOGMNR_CONTENTS for any data of interest. The SELECT operation will not complete until it encounters the first redo log file record that is generated after the time range of interest (3 hours from now). You can end the SELECT operation prematurely by entering Ctrl-C.
Note that this example specifies the SET ARRAYSIZE statement so that rows are displayed as they are entered in the redo log file. If you do not specify the SET ARRAYSIZE statement, rows are not returned until the SQL internal buffer is full.
SET ARRAYSIZE 1

COLUMN username  FORMAT A8
COLUMN operation FORMAT A9
COLUMN sql_redo  FORMAT A25 WORD_WRAPPED
COLUMN timestamp FORMAT A20

SELECT username
     , operation
     , sql_redo
     , TO_CHAR(timestamp, 'DD-MON-YYYY HH24:MI:SS') timestamp
     , scn
FROM   v$logmnr_contents
WHERE  (seg_owner = 'HR' AND table_name = 'DEPARTMENTS')
OR     (operation = 'COMMIT' AND username NOT IN ('SYS', 'UNKNOWN'));

USERNAME OPERATION SQL_REDO                  TIMESTAMP            SCN
-------- --------- ------------------------- -------------------- ----------
SCOTT    COMMIT    commit;                   26-OCT-2012 12:22:58  109364105
SCOTT    INSERT    insert into               26-OCT-2012 12:22:58  109364105
                   "HR"."DEPARTMENTS"
                    values
                   "DEPARTMENT_ID" = 500,
                   "DEPARTMENT_NAME" =
                    'Computer Scientists',
                   "MANAGER_ID" IS NULL,
                   "LOCATION_ID" = 1700;
SCOTT    COMMIT    commit;                   26-OCT-2012 12:23:08  109364124
SCOTT    COMMIT    commit;                   26-OCT-2012 12:24:43  109364340
SCOTT    INSERT    insert into               26-OCT-2012 12:24:43  109364340
                   "HR"."DEPARTMENTS"
                    values
                   "DEPARTMENT_ID" = 510,
                   "DEPARTMENT_NAME" = 'IT',
                   "MANAGER_ID" IS NULL,
                   "LOCATION_ID" = 1800;
SCOTT    COMMIT    commit;                   26-OCT-2012 12:24:43  109364344
SCOTT    COMMIT    commit;                   26-OCT-2012 12:27:35  109364831
SCOTT    INSERT    insert into               26-OCT-2012 12:27:35  109364832
                   "HR"."DEPARTMENTS"
                    values
                   "DEPARTMENT_ID" = 520,
                   "DEPARTMENT_NAME" =
                    'Mathematics',
                   "MANAGER_ID" IS NULL,
                   "LOCATION_ID" = 1700;
SCOTT    COMMIT    commit;                   26-OCT-2012 12:27:35  109364835
SCOTT    UPDATE    update "HR"."DEPARTMENTS" 26-OCT-2012 12:30:13  109365178
                    set
                   "DEPARTMENT_NAME" =
                    'IT - Admin'
                    where
                   "DEPARTMENT_ID" = 510 and
                   "DEPARTMENT_NAME" = 'IT'
                   and ROWID =
                   'AAAQ08AAEAAAACrAAC';
SCOTT    COMMIT    commit;                   26-OCT-2012 12:30:20  109365348
SCOTT    DELETE    delete from               26-OCT-2012 12:31:21  109365525
                   "HR"."DEPARTMENTS"
                    where
                   "DEPARTMENT_ID" = 500 and
                   "DEPARTMENT_NAME" =
                    'Computer Scientists'
                   and "MANAGER_ID" IS NULL
                   and "LOCATION_ID" = 1700
                   and ROWID =
                   'AAAQ08AAEAAAACrAAB';
SCOTT    DELETE    delete from               26-OCT-2012 12:31:21  109365526
                   "HR"."DEPARTMENTS"
                    where
                   "DEPARTMENT_ID" = 510 and
                   "DEPARTMENT_NAME" =
                    'IT - Admin' and
                   "MANAGER_ID" IS NULL and
                   "LOCATION_ID" = 1800 and
                   ROWID =
                   'AAAQ08AAEAAAACrAAC';
SCOTT    DELETE    delete from               26-OCT-2012 12:31:21  109365526
                   "HR"."DEPARTMENTS"
                    where
                   "DEPARTMENT_ID" = 520 and
                   "DEPARTMENT_NAME" =
                    'Mathematics' and
                   "MANAGER_ID" IS NULL and
                   "LOCATION_ID" = 1700 and
                   ROWID =
                   'AAAQ08AAEAAAACrAAA';
SCOTT    COMMIT    commit;                   26-OCT-2012 12:31:30  109365570

<Ctrl-C>

ERROR:
ORA-01013: user requested cancel of current operation
- End the LogMiner session.
BEGIN
  DBMS_LOGMNR.END_LOGMNR();
END;
/

PL/SQL procedure successfully completed.
Ref: http://www.idevelopment.info/data/Oracle/DBA_tips/LogMiner/