
Understanding and Using the Oracle 8i Analysis Tool LogMiner

oracle · 2018-06-14

Oracle LogMiner is a very useful analysis tool that Oracle has shipped since release 8i. With it you can easily extract the concrete contents of Oracle redo log files (including archived log files); in particular, the tool can reconstruct every DML statement (INSERT, UPDATE, DELETE, etc.) executed against the database, and it can also produce the SQL statements needed to roll those changes back. It is especially useful for debugging, auditing, or backing out a specific transaction.

  The LogMiner tool itself consists of a set of PL/SQL packages and several dynamic views (part of the built-in packages of Oracle 8i). It ships as part of the Oracle database and is completely free in the 8i release. Compared with Oracle's other built-in tools it is somewhat awkward to use, mainly because it provides no graphical user interface (GUI). This article describes in detail how to install and use the tool.

  I. Uses of LogMiner

  The log files hold all the data needed for database recovery; they record every change made to the database, that is, all DML statements executed against it.

  Before Oracle 8i, Oracle provided no tool to help a database administrator read and interpret the contents of the redo log files. When a problem occurred, the only thing an ordinary DBA could do was package up all the log files, send them to Oracle Support, and wait quietly for Oracle's answer. Starting with 8i, however, Oracle supplies just such a powerful tool: LogMiner.

  LogMiner can analyze both online and archived (offline) log files, and it can analyze the redo logs of its own database as well as those of another database.


  In summary, the main uses of LogMiner are:

   1. Tracking database changes: changes can be tracked offline, without affecting the performance of the online system.

   2. Backing out database changes: specific changes can be rolled back, reducing the need for point-in-time recovery.

   3. Tuning and capacity planning: the data in the log files can be analyzed to determine data-growth patterns.

  II. Installing LogMiner

  To install the LogMiner tool, first run the following two scripts:

   1. $ORACLE_HOME/rdbms/admin/dbmslm.sql

   2. $ORACLE_HOME/rdbms/admin/dbmslmd.sql

  Both scripts must be run as the SYS user. The first creates the DBMS_LOGMNR package, which performs the log analysis; the second creates the DBMS_LOGMNR_D package, which is used to create the data dictionary file. (Before release 8.1.6 these scripts were named dbmslogmnr.sql and dbmslogmnrd.sql; see the original note reproduced at the end of this article.)
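  As a sketch, an installation session in SQL*Plus might look like this (the paths assume a standard installation; substitute your own ORACLE_HOME):

SQL> CONNECT SYS
SQL> @$ORACLE_HOME/rdbms/admin/dbmslm.sql
SQL> @$ORACLE_HOME/rdbms/admin/dbmslmd.sql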

  III. Using the LogMiner Tool

  The following sections describe in detail how to use LogMiner.

  1. Creating the data dictionary file

  As mentioned above, LogMiner consists of two new built-in PL/SQL packages (DBMS_LOGMNR and DBMS_LOGMNR_D) and four V$ dynamic performance views (created when LogMiner is started with the procedure DBMS_LOGMNR.START_LOGMNR). Before using LogMiner to analyze redo log files, you can use the DBMS_LOGMNR_D package to export the data dictionary to a text file. This dictionary file is optional, but without it the parts of the statements LogMiner extracts that refer to the data dictionary (table names, column names, etc.), as well as the column values, will appear in hexadecimal form and cannot be read directly. For example, the SQL statement:

INSERT INTO dm_dj_swry (rydm, rymc) VALUES (00005, 'John Doe');

  would be reported by LogMiner as:

insert into Object#308(col#1, col#2) values (hextoraw('c30rte567e436'), hextoraw('4a6f686e20446f65'));

  The purpose of the dictionary file is to let LogMiner translate references to internal data dictionary objects into their real names rather than internal hexadecimal form. The dictionary file is a text file, created with the DBMS_LOGMNR_D package. If the tables in the database being analyzed change so that the data dictionary changes, the dictionary file must be regenerated. Likewise, when analyzing the redo logs of a different database, you must generate the dictionary file from that database.
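  As an aside, a hex-encoded character value like the one above can be decoded by hand with Oracle's standard UTL_RAW package; here the literal is the second value from the example above:

SQL> SELECT UTL_RAW.CAST_TO_VARCHAR2(HEXTORAW('4a6f686e20446f65')) FROM dual;

  which returns the original string 'John Doe'.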

  First, in the init.ora initialization parameter file, specify the location of the dictionary file by adding the UTL_FILE_DIR parameter, whose value is the directory on the server where the dictionary file will be placed. For example:

UTL_FILE_DIR = (e:/Oracle/logs)

  Restart the database so the new parameter takes effect, then create the dictionary file:

SQL> CONNECT SYS
SQL> EXECUTE dbms_logmnr_d.build(
dictionary_filename => 'v816dict.ora',
dictionary_location => 'e:/oracle/logs');

  2. Building the list of log files to analyze

  Oracle redo logs come in two forms: online and archived (offline) log files. Building the file list for each kind is discussed separately below.

  (1) Analyzing online redo log files

  A. Create the list

SQL> EXECUTE dbms_logmnr.add_logfile(
LogFileName=>'e:/Oracle/oradata/sxf/redo01.log',
Options=>dbms_logmnr.new);

  B. Add further log files to the list

SQL> EXECUTE dbms_logmnr.add_logfile(
LogFileName=>'e:/Oracle/oradata/sxf/redo02.log',
Options=>dbms_logmnr.addfile);

  (2) Analyzing archived log files

  A. Create the list

SQL> EXECUTE dbms_logmnr.add_logfile(
LogFileName=>'E:/Oracle/oradata/sxf/archive/ARCARC09108.001',
Options=>dbms_logmnr.new);

  B. Add further log files to the list

SQL> EXECUTE dbms_logmnr.add_logfile(
LogFileName=>'E:/Oracle/oradata/sxf/archive/ARCARC09109.001',
Options=>dbms_logmnr.addfile);

  How many log files you put in the list is entirely up to you, but it is best to add only one file at a time: analyze it fully, then add the next.

  Correspondingly, a log file can also be removed from the list by calling add_logfile with the REMOVEFILE option. The following example removes the file e:/Oracle/oradata/sxf/redo02.log added above.

SQL> EXECUTE dbms_logmnr.add_logfile(
LogFileName=>'e:/Oracle/oradata/sxf/redo02.log',
Options=>dbms_logmnr.removefile);

  With the list of log files in place, the analysis can now be run.

  3. Running the log analysis with LogMiner

  (1) Without restrictions

SQL> EXECUTE dbms_logmnr.start_logmnr(
DictFileName=>'e:/oracle/logs/v816dict.ora');

  (2) With restrictions

  By setting the various parameters of the DBMS_LOGMNR.START_LOGMNR procedure (their meanings are given in Table 1), you can narrow the range of log data to analyze. Setting the start-time and end-time parameters restricts the analysis to a given time window. The following example analyzes only the logs of 18 September 2001:

SQL> EXECUTE dbms_logmnr.start_logmnr(
DictFileName => 'e:/oracle/logs/v816dict.ora',
StartTime => to_date('2001-9-18 00:00:00','YYYY-MM-DD HH24:MI:SS'),
EndTime => to_date('2001-9-18 23:59:59','YYYY-MM-DD HH24:MI:SS'));

  The range can also be restricted with a starting and an ending SCN:

SQL> EXECUTE dbms_logmnr.start_logmnr(
DictFileName => 'e:/oracle/logs/v816dict.ora',
StartScn => 20,
EndScn => 50);

  Table 1: Parameters of the DBMS_LOGMNR.START_LOGMNR procedure

Parameter     Type            Default     Meaning
StartScn      NUMBER          0           Analyze only the part of the redo log with SCN >= StartScn
EndScn        NUMBER          0           Analyze only the part of the redo log with SCN <= EndScn
StartTime     DATE            1988-01-01  Analyze only the part of the redo log with timestamp >= StartTime
EndTime       DATE            2988-01-01  Analyze only the part of the redo log with timestamp <= EndTime
DictFileName  VARCHAR2        ''          Dictionary file containing a snapshot of the database catalog; with it the analysis output is readable text rather than internal hexadecimal
Options       BINARY_INTEGER  0           Debug flag; rarely used in practice

  4. Viewing the results (v$logmnr_contents)

  At this point the contents of the redo log files have been extracted. The dynamic performance view v$logmnr_contents holds all the information obtained by the analysis.

SELECT sql_redo FROM v$logmnr_contents;

  If you only want to know what a particular user did to a particular table, the following query will show it; this example returns everything the user DB_ZGXT did to the table SB_DJJL.

SQL> SELECT sql_redo FROM v$logmnr_contents WHERE username='DB_ZGXT' AND seg_name='SB_DJJL';

  Note that the results in v$logmnr_contents exist only for the lifetime of the session that ran the procedure dbms_logmnr.start_logmnr. All LogMiner storage lives in PGA memory, so no other process can see it, and the results disappear when the session ends.

  Finally, call the procedure DBMS_LOGMNR.END_LOGMNR to end the analysis; the PGA memory area is then released and the results cease to exist.
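  The call itself is trivial; the procedure takes no parameters:

SQL> EXECUTE dbms_logmnr.end_logmnr;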

  IV. Other Notes

  LogMiner can be used to analyze redo log files produced by other database instances, not only the redo logs of the instance on which LogMiner itself is installed. When analyzing another instance, note the following:

  1. LogMiner must use the dictionary file generated by the database being analyzed, not one generated by the database where LogMiner runs; in addition, the character set of the database running LogMiner must match that of the database being analyzed.

  2. The platform of the database being analyzed must match the platform of the database running LogMiner: if the files to analyze were produced by Oracle 8i running on UNIX, LogMiner must also run on an Oracle instance on UNIX, not, say, on Microsoft NT. The hardware need not be identical, however.

  3. LogMiner can only analyze logs produced by Oracle 8 and later; for earlier releases the tool cannot help.

  V. Conclusion

  For a database administrator (DBA), LogMiner is a very powerful tool and one that comes up often in day-to-day work; with it, a great deal of information about database activity can be obtained. One of its most important uses is recovering a specific change to the database without a full restore. The tool can also be used to monitor or audit user activity: for example, you can see who modified which data and what that data looked like before the change. It can analyze any redo log produced by Oracle 8 or later, and, notably, it can also analyze the log files of other databases. In short, LogMiner is an effective tool for the DBA, and a deep understanding and solid command of it is of real help in everyday work.

The translation above may contain inaccuracies, so the original English note is reproduced below for reference:
PURPOSE
  This paper details the mechanics of what LogMiner does, as well as detailing
  the commands and environment it uses.

SCOPE & APPLICATION
  For DBAs requiring further information about LogMiner.

  The ability to provide a readable interface to the redo logs has been asked 
  for by customers for a long time. The ALTER SYSTEM DUMP LOGFILE interface 
  has been around for a long time, though its usefulness outside Support is 
  limited. There have been a number of third party products, e.g. BMC’s PATROL
  DB-Logmaster (SQL*Trax as was), which provide some functionality in this 
  area. With Oracle release 8.1 there is a facility in the Oracle kernel to do
  the same. LogMiner allows the DBA to audit changes to data and performs 
  analysis on the redo to determine trends, aid in capacity planning, 
  Point-in-time Recovery etc. 
  
RELATED DOCUMENTS
 [NOTE:117580.1]  ORA-356, ORA-353, & ORA-334 Errors When Mining Logs with
                  Different DB_BLOCK_SIZE
Oracle8i  – 8.1 LogMiner:
=========================
 
1. WHAT DOES LOGMINER DO?
=========================

  LogMiner can be used against online or archived logs from either the 
  ‘current’ database or a ‘foreign’ database. The reason for this is that it 
  uses an external dictionary file to access meta-data, rather than the 
  ‘current’ data dictionary.

  It is important that this dictionary file is kept in step with the database 
  which is being analyzed. If the dictionary used is out of step from the redo
  then analysis will be considerably more difficult. Building the external 
  dictionary will be discussed in detail in section 3.
 
  LogMiner scans the log/logs it is interested in, and generates, using the 
  dictionary file meta-data, a set of SQL statements which would have the same
  effect on the database as applying the corresponding redo record.

  LogMiner prints out the ‘Final’ SQL that would have gone against the 
  database. For example:

      Insert into Table x Values ( 5 );
      Update Table x set COLUMN=newvalue WHERE ROWID='<>'
      Delete from Table x WHERE ROWID='<>' AND COLUMN=value AND COLUMN=VALUE

  We do not actually see the SQL that was issued, rather an executable SQL 
  statement that would have the same EFFECT. Since it is also stored in the 
  same redo record, we also generate the undo column which would be necessary 
  to roll this change out.

  For SQL which rolls back, no undo SQL is generated, and the rollback flag is
  set. An insert followed by a rollback therefore looks like: 

      REDO                              UNDO              ROLLBACK 

      insert sql                        Delete sql        0
      delete sql                        <null>            1

  Because it operates against the physical redo records, multirow operations
  are not recorded in the same manner e.g. DELETE FROM EMP WHERE DEPTNO=30
  might delete 100 rows in the SALES department in a single statement, the 
  corresponding LogMiner output would show one row of output per row in the 
  database.
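  As an illustration, the issued statement and the per-row output LogMiner generates might look like this (a sketch; the ROWID values are placeholders):

      -- issued by the application:
      DELETE FROM EMP WHERE DEPTNO=30;

      -- reconstructed by LogMiner, one statement per affected row:
      delete from EMP where ROWID='<rowid of row 1>' AND DEPTNO=30 AND ...
      delete from EMP where ROWID='<rowid of row 2>' AND DEPTNO=30 AND ...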

2. WHAT IT DOES NOT DO
======================

  1. ‘Trace’ Application SQL – use SQL_Trace/10046

     Since LogMiner only generates low-level SQL, not what was issued, you 
     cannot use LogMiner to see exactly what was being done based on the SQL. 
     What you can see, is what user changed what data at what time.

  2. ‘Replicate’ an application  

     LogMiner does not cover everything. In particular, DDL is not
     supported (the resulting inserts into tab$ etc. are shown, but the
     CREATE TABLE statement itself is not).

  3. Access data dictionary SQL In a visible form

     Especially UPDATE USER$ SET PASSWORD=<newpassword>.

  Other Known Current Limitations
  ===============================

  LogMiner cannot cope with Objects.
  LogMiner cannot cope with Chained/Migrated Rows.
  LogMiner produces fairly unreadable output if there is no record of the 
  table in the dictionary file. See below for output.
  
  The database where the analysis is being performed must have a block size 
  of at least equal to that of the originating database. See [NOTE:117580.1].
  

3. FUNCTIONALITY
================

  The LogMiner feature is made up of three procedures in the LogMiner 
  (dbms_logmnr) package, and one in the Dictionary (dbms_logmnr_d). 

  These are built by the following scripts: (Run by catproc)
 
      $ORACLE_HOME/rdbms/admin/dbmslogmnrd.sql
      $ORACLE_HOME/rdbms/admin/dbmslogmnr.sql
      $ORACLE_HOME/rdbms/admin/prvtlogmnr.plb

  since 8.1.6:
 
      $ORACLE_HOME/rdbms/admin/dbmslmd.sql
      $ORACLE_HOME/rdbms/admin/dbmslm.sql
      $ORACLE_HOME/rdbms/admin/prvtlm.plb

  1. dbms_logmnr_d.build 

     This procedure builds the dictionary file used by the main LogMiner
     package to resolve object names, and column datatypes. It should be 
     generated relatively frequently, since otherwise newer objects will not 
     be recorded.

     It is possible to generate a Dictionary file from an 8.0 database and 
     use it to Analyze Oracle 8.0 redo logs. In order to do this run 
     “dbmslogmnrd.sql” against the 8.0 database, then follow the procedure as 
     below. All analysis of the logfiles will have to take place while 
     connected to an 8.1 database since dbms_logmnr cannot operate against 
     Oracle 8.0 because it uses trusted callouts.

     Any redo relating to tables which are not included in the dictionary 
     file are dumped RAW. Example: If LogMiner cannot resolve the Table and 
     column references, then the following is output: (insert statement)

         insert into UNKNOWN.objn:XXXX(Col[x],....) VALUES
            ( HEXTORAW('xxxxxx'), HEXTORAW('xxxxx')......)

     PARAMETERS
     ==========

     1. The name of the dictionary file you want to produce.
     2. The name of the directory where you want the file produced. 

     The Directory must be writeable by the server i.e. included in
     UTL_FILE_DIR path.  
  
     EXAMPLE
     =======

     BEGIN
        dbms_logmnr_d.build(
          dictionary_filename => 'miner_dictionary.dic',
          dictionary_location => '/export/home/sme81/aholland/testcases/logminer'
        );
     END;
     /

  The dbms_logmnr package actually performs the redo analysis.  

  2. dbms_logmnr.add_logfile 

     This procedure registers the logfiles to be analyzed in this session. It
     must be called once for each logfile. This populates the fixed table
     X$logmnr_logs (v$logmnr_logs) with a row corresponding to the logfile.

     Parameters 
     ===========

     1. The logfile to be analyzed.
     2. Option 
        DBMS_LOGMNR.NEW (SESSION) First file to be put into PGA memory.
           This initialises the V$logmnr_logs table.
        DBMS_LOGMNR.ADDFILE
           adds another logfile to the v$logmnr_logs PGA memory.
           Has the same effect as NEW if there are no rows there
           presently.
        DBMS_LOGMNR.REMOVEFILE
           removes a row from v$logmnr_logs.

     Example 
     =======

     Include all my online logs for analysis………

     BEGIN
        dbms_logmnr.add_logfile(
           '/export/home/sme81/aholland/database/files/redo03.log',
                              DBMS_LOGMNR.NEW );
        dbms_logmnr.add_logfile(
           '/export/home/sme81/aholland/database/files/redo02.log',
                              DBMS_LOGMNR.ADDFILE );
        dbms_logmnr.add_logfile(
           '/export/home/sme81/aholland/database/files/redo01.log',
                              DBMS_LOGMNR.ADDFILE );
     END;
     /

     A full path should be given, though an environment variable
     is accepted. The variable is NOT expanded in V$LOGMNR_LOGS.

  3. dbms_logmnr.start_logmnr;

     This package populates V$logmnr_dictionary, v$logmnr_parameters, 
     and v$logmnr_contents.

     Parameters
     ==========

     1.  StartScn      Default 0 
     2.  EndScn        Default 0,
     3.  StartTime     Default '01-jan-1988'
     4.  EndTime       Default '01-jan-2988'
     5.  DictFileName  Default '',
     6.  Options       Default 0  Debug flag – uninvestigated as yet

     A Point to note here is that there are comparisons made between the 
     SCNs, the times entered, and the range of values in the file. If the SCN 
     range OR the start/end range are not wholly contained in this log, then 
     the start_logmnr command will fail with the general error: 
         ORA-01280 Fatal LogMiner Error.

  4. dbms_logmnr.end_logmnr; 

     This is called with no parameters. 

     /* THIS IS VERY IMPORTANT FOR SUPPORT */

     This procedure MUST be called prior to exiting the session that was 
     performing the analysis. This is because of the way the PGA is used to 
     store the dictionary definitions from the dictionary file, and the 
     V$LOGMNR_CONTENTS output. 
     If you do not call end_logmnr, you will silently get ORA-00600 [723] …
     on logoff. This OERI is triggered because the PGA is bigger at logoff 
     than it was at logon, which is considered a space leak. The main problem 
     from a support perspective is that it is silent, i.e. not signalled back 
     to the user screen, because by then they have logged off. 

     The way to spot LogMiner leaks is that the trace file produced by the
     OERI 723 will have a PGA heap dump containing many chunks of type
     'Freeable' with a description of "KRVD:alh".

4. OUTPUT 
=========

  Effectively, the output from LogMiner is the contents of V$logmnr_contents.
  The output is only visible during the life of the session which runs 
  start_logmnr. This is because all the LogMiner memory is PGA memory, so it 
  is neither visible to other sessions, nor is it persistent. As the session 
  logs off, either dbms_logmnr.end_logmnr is run to clear out the PGA, or an 
  OERI 723 is signalled as described above. 

  Typically users are going to want to output sql_redo based on queries by 
  timestamp, segment_name or rowid. 

  v$logmnr_contents
  Name                            Null?    Type
  ------------------------------- -------- ----
  SCN                                      NUMBER
  TIMESTAMP                                DATE
  THREAD#                                  NUMBER
  LOG_ID                                   NUMBER
  XIDUSN                                   NUMBER
  XIDSLT                                   NUMBER
  XIDSQN                                   NUMBER
  RBASQN                                   NUMBER
  RBABLK                                   NUMBER
  RBABYTE                                  NUMBER
  UBAFIL                                   NUMBER
  UBABLK                                   NUMBER
  UBAREC                                   NUMBER
  UBASQN                                   NUMBER
  ABS_FILE#                                NUMBER
  REL_FILE#                                NUMBER
  DATA_BLK#                                NUMBER
  DATA_OBJ#                                NUMBER
  DATA_OBJD#                               NUMBER
  SEG_OWNER                                VARCHAR2(32)
  SEG_NAME                                 VARCHAR2(32)
  SEG_TYPE                                 VARCHAR2(32)
  TABLE_SPACE                              VARCHAR2(32)
  ROW_ID                                   VARCHAR2(19)
  SESSION#                                 NUMBER
  SERIAL#                                  NUMBER
  USERNAME                                 VARCHAR2(32)
  ROLLBACK                                 NUMBER
  OPERATION                                VARCHAR2(32)
  SQL_REDO                                 VARCHAR2(4000)
  SQL_UNDO                                 VARCHAR2(4000)
  RS_ID                                    VARCHAR2(32)
  SSN                                      NUMBER
  CSF                                      NUMBER
  INFO                                     VARCHAR2(32)
  STATUS                                   NUMBER
  PH1_NAME                                 VARCHAR2(32)
  PH1_REDO                                 VARCHAR2(4000)
  PH1_UNDO                                 VARCHAR2(4000)
  PH2_NAME                                 VARCHAR2(32)
  PH2_REDO                                 VARCHAR2(4000)
  PH2_UNDO                                 VARCHAR2(4000)
  PH3_NAME                                 VARCHAR2(32)
  PH3_REDO                                 VARCHAR2(4000)
  PH3_UNDO                                 VARCHAR2(4000)
  PH4_NAME                                 VARCHAR2(32)
  PH4_REDO                                 VARCHAR2(4000)
  PH4_UNDO                                 VARCHAR2(4000)
  PH5_NAME                                 VARCHAR2(32)
  PH5_REDO                                 VARCHAR2(4000)
  PH5_UNDO                                 VARCHAR2(4000)

  SQL> set heading off
  SQL> select scn, username, sql_undo from v$logmnr_contents
          where seg_name = 'emp';

  12134756        scott           insert (…) into emp;
  12156488        scott           delete from emp where empno = …
  12849455        scott           update emp set mgr =

  This will return the results of an SQL statement without the column
  headings.  The columns that you are really going to want to query are the
  “sql_undo” and “sql_redo” values because they give the transaction details 
  and syntax.

5. PLACEHOLDERS
===============

  In order to allow users to be able to query directly on specific data 
  values, there are up to five PLACEHOLDERs included at the end of 
  v$logmnr_contents. When enabled, a user can query on the specific BEFORE and
  AFTER values of a specific field, rather than a %LIKE% query against the 
  SQL_UNDO/REDO fields. This is implemented via an external file called 
  “logmnr.opt”. (See the Supplied Packages manual entry on dbms_logmnr for 
  further details.) The file must exist in the same directory as the 
  dictionary file used, and contains the prototype mappings of the PHx fields 
  to the fields in the table being analyzed.

     Example entry
     =============
     colmap =  SCOTT EMP ( EMPNO, 1, ENAME, 2, SAL, 3 ); 

  In the above example, when a redo record is encountered for the SCOTT.EMP
  table, the full statement redo and undo information populates the SQL_REDO
  and SQL_UNDO columns respectively; in addition, the PH3_NAME, PH3_REDO and
  PH3_UNDO columns will be populated with 'SAL', <NEWVALUE> and <OLDVALUE>
  respectively, which means that the analyst can query in the form:

      SELECT * FROM V$LOGMNR_CONTENTS
      WHERE SEG_NAME ='EMP'
      AND PH3_NAME='SAL'
      AND PH3_REDO=1000000;

  The returned PH3_UNDO column would return the value prior to the update. 
  This enables much more efficient queries to be run against V$LOGMNR_CONTENTS
  view, and if, for instance, a CTAS was issued to store a physical copy, the
  column can be indexed.
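  For example, a persistent copy could be made as follows, run inside the session that called start_logmnr (the view is session-private); the table and index names here are made up for illustration:

      CREATE TABLE logmnr_results AS SELECT * FROM V$LOGMNR_CONTENTS;
      CREATE INDEX logmnr_results_ix ON logmnr_results (SEG_NAME, TIMESTAMP);

  Once copied into an ordinary table, the results survive the mining session and can be indexed and queried efficiently.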
