Wednesday, 14 October 2015

ADO (Automatic Data Optimization & Heat Map) in 12c

ILM (Information Lifecycle Management)

ILM is the practice of applying policies for the effective management of information throughout its useful life.

ILM of data is classified as:
1) Data collected.
2) Data stored.
3) Data actively used.
4) Data inactively used.
5) Data purged.

ILM needs are satisfied using:
a) Compression.
b) Storage tiering.
Once data becomes inactive, that particular segment can be compressed and moved to less expensive storage.
In 11g, ILM had to be done manually.
In 12c, ILM can be achieved by creating policies that define rules on the data classification. These policy rules
enforce data flows automatically with minimal manual intervention; this is called ADO (Automatic Data Optimization).

In 12c, ADO can be implemented only in a non-CDB.

ADO (Automatic Data Optimization) - policies automatically compress data according to user-defined rules and move data in response to space pressure.

  
- Recently inserted / actively updated data
      -> Advanced row compression (2-4x)
      -> High performance storage

- Infrequently updated / frequently reported data
      -> High compression (10x)
      -> High performance storage

- Infrequently accessed
      -> High compression (10x)
      -> Low cost storage

- Retained for Long term for analytics
      -> Archive compressed (15-50x)
      -> Archival storage (database or tape)
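As a rough sketch, the tiers above map onto ILM policy clauses roughly as follows. The table name (orders), the tablespace (archive_data), and the day counts are illustrative assumptions, not values from any particular system:

```sql
-- Illustrative only: one ADO policy per lifecycle tier on a hypothetical
-- table "orders"; timings and tablespace names are assumptions.
ALTER TABLE orders ILM ADD POLICY
  ROW STORE COMPRESS ADVANCED ROW
  AFTER 30 DAYS OF NO MODIFICATION;

ALTER TABLE orders ILM ADD POLICY
  COLUMN STORE COMPRESS FOR QUERY HIGH SEGMENT
  AFTER 90 DAYS OF NO ACCESS;

ALTER TABLE orders ILM ADD POLICY
  COLUMN STORE COMPRESS FOR ARCHIVE HIGH SEGMENT
  AFTER 12 MONTHS OF NO ACCESS;

-- Storage tiering to a low-cost tablespace under space pressure.
ALTER TABLE orders ILM ADD POLICY
  TIER TO archive_data;
```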


What are Heat Map and ADO?

Oracle Database 12c includes activity tracking with Heat Map, providing the ability to track and mark data as it goes through data lifecycle changes.

- Data access at the segment level.
- Data modification at the block / segment level.

Block-level and segment-level statistics are collected in memory and then stored in tables in the SYSAUX tablespace.

ADO allows you to create policies that use Heat Map statistics to compress or move data only when necessary. ADO automatically evaluates and executes the policies, performing compression and storage tiering actions.

In-Database Archiving...

In-Database Archiving lets you keep both active data and archived data in the same table.
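A minimal sketch of In-Database Archiving in 12c (the table name and values here are hypothetical):

```sql
-- Enable row archival on a new table; Oracle adds the hidden column
-- ORA_ARCHIVE_STATE (default '0' = active row).
CREATE TABLE orders_hist (order_id NUMBER, note VARCHAR2(50)) ROW ARCHIVAL;

-- Archive a row: set ORA_ARCHIVE_STATE to a non-zero value.
-- Normal queries will no longer return it.
UPDATE orders_hist SET ora_archive_state = '1' WHERE order_id = 100;

-- Make archived rows visible again for this session when needed.
ALTER SESSION SET ROW ARCHIVAL VISIBILITY = ALL;
```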


Components of Heat map and ADO


Data classification
      -> Tablespace level
      -> Group level
      -> Segment level
      -> Row level      (In 11g, data classification is at segment / table / partition level; in 12c you can classify at row level as well)

Automatic Detection
      if and when
            -> Data access
            -> Data Modification
            -> Data creation 

      When
            -> After 1 year
            -> After 3 days
            -> Tablespace full

Automatic action

            -> Compression
            -> Move to storage tier
            -> Compression + move


ADO requires the Advanced Compression option.


ADO automatically compresses data when an ADO policy qualifies.
ADO automatically moves data when an ADO policy qualifies.
ADO policies depend on Heat Map statistics, hence Heat Map has to be enabled.


Row-level statistics -> managed and stored inline when a process modifies the data.
Segment-level statistics -> track access and modification.

Segment-level activity tracking information is automatically flushed for persistence by scheduler jobs every hour.


Process
-------


1) Enable Heat Map by setting the heat_map parameter to on.
   heat_map=on

2) The database starts gathering statistics in memory.
   V$HEAT_MAP_SEGMENT --> stored/flushed to the HEAT_MAP_STAT$ stats table in SYSAUX by scheduled dbms_scheduler jobs, and can be viewed through the DBA_HEAT_MAP_SEG_HISTOGRAM view.

3) Create ADO Policies.

4) ADO policies are evaluated. (You can schedule when the policies are to be evaluated. MMON evaluates row-level policies periodically and compresses the blocks that qualify. Segment-level evaluation happens only during the scheduled maintenance window.)

5) Actions are taken based on ADO defined policies.

6) View ADO results in the compression_stat$ table and the dba_ilmevaluationdetails / dba_ilmresults views.

Segment-level statistics can be queried using the DBA_HEAT_MAP_SEGMENT view.

Row-level statistics can be queried using the DBMS_HEAT_MAP package, as demonstrated below.

SQL> create table emp (empid number, ename varchar2(30)) tablespace active_data;
Table created.

SQL> begin
  2             for i in 1..1000 loop
  3             insert into emp values (i, dbms_random.string('U',30));
  4             end loop;
  5  commit;
  6  end;
  7  /

PL/SQL procedure successfully completed.

SQL> select count(*) from emp;

  COUNT(*)
----------
      1000

SQL> select * from V$heat_map_segment where object_name = 'EMP';

OBJECT_NAM SUBOBJECT_       OBJ#   DATAOBJ#        TS# TRACK_TIME        SEG SEG FUL LOO     CON_ID
---------- ---------- ---------- ---------- ---------- ----------------- --- --- --- --- ----------
EMP                       287328     287328          6 14-10-15 11:32:16 YES NO  YES NO           0


From the above statistics we can see that a segment write and a full scan have happened.


Block-level modifications can be viewed as below.


SQL> Select segment_name, tablespace_name, block_id, writetime from table
(
dbms_heat_map.block_heat_map('MANZOOR','EMP')
)
/
SEGMENT_NA TABLESPACE_NAME                  BLOCK_ID WRITETIME
---------- ------------------------------ ---------- -----------------
EMP        ACTIVE_DATA                           131 14-10-15 11:44:18
EMP        ACTIVE_DATA                           132 14-10-15 11:44:18
EMP        ACTIVE_DATA                           133 14-10-15 11:44:18
EMP        ACTIVE_DATA                           134 14-10-15 11:44:18
EMP        ACTIVE_DATA                           135 14-10-15 11:44:18
EMP        ACTIVE_DATA                           136 14-10-15 11:44:18
EMP        ACTIVE_DATA                           137 14-10-15 11:44:18
EMP        ACTIVE_DATA                           138 14-10-15 11:44:18
EMP        ACTIVE_DATA                           139 14-10-15 11:44:18
EMP        ACTIVE_DATA                           140 14-10-15 11:44:18
EMP        ACTIVE_DATA                           141 14-10-15 11:44:18
EMP        ACTIVE_DATA                           142 14-10-15 11:44:18
EMP        ACTIVE_DATA                           143 14-10-15 11:44:18

Extent level statistics.

SQL> select segment_name, block_id , max_writetime from table
     (
     dbms_heat_map.extent_heat_map ('MANZOOR','EMP'));

SEGMENT_NA   BLOCK_ID MAX_WRITETIME
---------- ---------- -----------------
EMP               131 14-10-15 11:44:18
EMP               136 14-10-15 11:44:18


Creating ADO Policies

For Compression

Policy Type              Policy can be defined based on
a) Row level             No modification
b) Segment level         No modification, no access, creation, and custom policy using functions
c) Group level           No modification, no access & creation
d) Tablespace level      No modification, no access & creation

Group level indicates that dependent SecureFiles LOBs are also to be compressed. Global indexes are maintained.

For Storage Tier

Policy Type             Policy can be defined based on
a)  Segment                  Tablespace full & Custom policy using functions.
b)  Tablespace               Tablespace full
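A hedged sketch of segment-level tiering policies; the table big_log, the tablespace archive_data, and the condition function are illustrative assumptions:

```sql
-- Default trigger: data is tiered when the source tablespace crosses the
-- TBS PERCENT USED threshold shown in dba_ilmparameters.
ALTER TABLE big_log ILM ADD POLICY TIER TO archive_data;

-- Custom-function variant: the named PL/SQL function must return BOOLEAN.
CREATE OR REPLACE FUNCTION log_is_cold (objn IN NUMBER) RETURN BOOLEAN IS
BEGIN
  RETURN TRUE;  -- placeholder condition for this sketch
END;
/
ALTER TABLE big_log ILM ADD POLICY TIER TO archive_data ON log_is_cold;
```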

Compression types
-----------------

ROW STORE COMPRESS BASIC (Basic Compression):-

Rows are compressed only when using direct path insert; updated rows and
rows inserted without direct path are left uncompressed.

ROW STORE COMPRESS ADVANCED (Advanced Row Compression, which replaces OLTP compression):-

Rows are compressed with or without direct path insert, and updated rows are
compressed.

COLUMN STORE COMPRESS FOR QUERY LOW / HIGH (Warehouse Compression - Hybrid Columnar Compression):-

Results in higher CPU overhead.

Updated rows and rows inserted without direct path insert are stored in row format instead of columnar format and thus get a lower compression level.

Rows are compressed in warehouse-compression columnar format when the data is inserted using direct path.

COLUMN STORE COMPRESS FOR ARCHIVE LOW / HIGH (Archival Compression - Hybrid Columnar Compression):-

Results in higher CPU overhead.

Updated rows and rows inserted without direct path insert are stored in row format instead of columnar format and thus get a lower compression level.

Rows are compressed in archival-compression columnar format when the data is inserted using direct path.


Note:- Column store compression can be used only on the below storage.
a) Exadata storage
b) Pillar Axiom
c) Sun ZFS Storage Appliance


Creating Policies:-

Scenario 1:-


There is a table called current_data. Data that comes into this table is active only for 2 minutes; after 2 minutes no more modifications (updates) are done to it. So we will keep only the latest 2 minutes of data uncompressed; after that it has to be compressed.

This can be achieved using row-level access tracking. We cannot go with segment level, as that would act on the entire segment, meaning even the active rows (the latest 2 minutes of data) would also be compressed; to achieve our goal we need row-level access tracking.

Note that row-level compression is applied only when all the rows in an entire block qualify during evaluation. It is not applied to rows in a block that still has free space.




1) Enable heat map.

      SQL> exec dbms_ilm_admin.CLEAR_HEAT_MAP_ALL;

      PL/SQL procedure successfully completed.


      SQL> alter system set heat_map=on scope=both;

      System altered.

2) Create the current_data table, also note that compression is disabled initially.


SQL> create table current_data (upd_date date, upd_by varchar2(30), upd_id number, message varchar2(50));

Table created.


SQL> select table_name, tablespace_name , COMPRESSION, COMPRESS_FOR from user_tables where
table_name = 'CURRENT_DATA';

TABLE_NAME      TABLESPACE_NAME                COMPRESS COMPRESS_FOR
--------------- ------------------------------ -------- ------------------------------
CURRENT_DATA    ACTIVE_DATA                    DISABLED

3) Insert some data to the table.
 SQL>  begin
       for i in 1..10000 loop
               insert into current_data values (sysdate, user, i, dbms_random.string('U',50));
       end loop;
       commit;
       end;
       /

       PL/SQL procedure successfully completed.



4) You can use the below query to check whether the data is compressed and, if so, the type of compression. Save the query as compression_check_table.sql.


vi compression_check_table.sql

COL compression_type for a40;
set lines 200 pages 200;
SELECT CASE compression_type
    WHEN 1       THEN 'No Compression'
    WHEN 2       THEN 'Advanced compression level'
    WHEN 4       THEN 'Hybrid Columnar Compression for Query High'
    WHEN 8       THEN 'Hybrid Columnar Compression for Query Low'
    WHEN 16      THEN 'Hybrid Columnar Compression for Archive High'
    WHEN 32      THEN 'Hybrid Columnar Compression for Archive Low'
    WHEN 64      THEN 'Compressed row'
    WHEN 128     THEN 'High compression level for LOB operations'
    WHEN 256     THEN 'Medium compression level for LOB operations'
    WHEN 512     THEN 'Low compression level for LOB operations'
    WHEN 1000    THEN 'Minimum required number of LOBs in the object for which LOB compression ratio is to be estimated'
    WHEN 4096    THEN 'Basic compression level'
    WHEN 5000    THEN 'Maximum number of LOBs used to compute the LOB compression ratio'
    WHEN 1000000 THEN 'Minimum required number of rows in the object for which HCC ratio is to be estimated'
    WHEN -1      THEN 'To indicate the use of all the rows in the object to estimate HCC ratio'
    WHEN 1       THEN 'Identifies the object whose compression ratio is estimated as of type table'
    ELSE              'Unknown Compression Type'
       END AS compression_type,  n as num_rows
FROM (SELECT compression_type, Count(*) n
      FROM (SELECT dbms_compression.Get_compression_type('MANZOOR', 'CURRENT_DATA', ROWID) AS COMPRESSION_TYPE
              FROM MANZOOR.CURRENT_DATA )
      GROUP  BY compression_type
      );


SQL> @compression_check_table.sql

COMPRESSION_TYPE                           NUM_ROWS
---------------------------------------- ----------
No Compression                                10000


We can see that none of the rows are compressed yet (as expected).


5) Check the dba_ilmparameters.

 SQL>  select * from dba_ilmparameters;

NAME                                          VALUE
---------------------------------------- ----------
ENABLED                                           1
JOB LIMIT                                        10
EXECUTION MODE                                    3
EXECUTION INTERVAL                               15
TBS PERCENT USED                                 85
TBS PERCENT FREE                                 25
RETENTION TIME                                   30
POLICY TIME                                       0

8 rows selected.


The above shows the current ILM settings. Here we are going to create an ADO policy that compresses rows in a table that are
older than 2 minutes, so we look at POLICY TIME. The value is showing as 0, which means the following:

Constant                Value    Description
ILM_POLICY_IN_DAYS        0      Indicates the policy is specified in days. This is the default.
ILM_POLICY_IN_SECONDS     1      Indicates the policy is specified in seconds.

The default value is 0, i.e. the policy is specified in days. For our scenario it is required to be in minutes, so we use the ILM_POLICY_IN_SECONDS option.


SQL> exec dbms_ilm_admin.customize_ilm(dbms_ilm_admin.POLICY_TIME,DBMS_ILM_ADMIN.ILM_POLICY_IN_SECONDS);

PL/SQL procedure successfully completed.

SQL> select * from dba_ilmparameters;

NAME                                          VALUE
---------------------------------------- ----------
ENABLED                                           1
JOB LIMIT                                        10
EXECUTION MODE                                    3
EXECUTION INTERVAL                               15
TBS PERCENT USED                                 85
TBS PERCENT FREE                                 25
RETENTION TIME                                   30
POLICY TIME                                       1

8 rows selected.

Now we can see that the POLICY TIME value has changed to 1, which in turn denotes that it is in seconds.


6) Define the ADO policies for the table.

Refer - http://docs.oracle.com/database/121/SQLRF/statements_3001.htm for Syntax and ilm clause details.

We need to define the ADO policy for row-level compression. The syntax below says "after 120 days", but since we have set the ADO policy time to seconds, 120 days will be interpreted as 120 seconds, i.e. 2 minutes. In the syntax below we have defined ROW compression at the ROW level, shown in caps.


SQL> alter table manzoor.current_data
          ilm add policy
          row store compress advanced
          ROW
          after 120 days of no modification;   


Table altered.

Verify the policy which is added using the below views.


SQL> select table_name from dict where table_name like'%ILM%POLICIES%';

TABLE_NAME
----------------------------
DBA_ILMDATAMOVEMENTPOLICIES
DBA_ILMPOLICIES
USER_ILMDATAMOVEMENTPOLICIES
USER_ILMPOLICIES



SQL> select POLICY_NAME, action_type, scope, COMPRESSION_LEVEL, CONDITION_TYPE, CONDITION_DAYS from dba_ilmdatamovementpolicies;

POLICY_NAME          ACTION_TYPE SCOPE   COMPRESSION_LEVEL              CONDITION_TYPE         CONDITION_DAYS
-------------------- ----------- ------- ------------------------------ ---------------------- --------------
P3                   COMPRESSION ROW     ADVANCED                       LAST MODIFICATION TIME            120


SQL> select * from dba_ilmpolicies;

POLICY_NAME          POLICY_TYPE   TABLESPACE                     ENABLE DELETED
-------------------- ------------- ------------------------------ ------ -------
P3                   DATA MOVEMENT                                YES    NO



Evaluation
            -> Policies are evaluated using the Heat Map statistics, which are collected by the MMON process; real-time statistics are updated
            in the V$HEAT_MAP_SEGMENT view and later flushed to the HEAT_MAP_STAT$ table. Evaluation is done based on the
            statistics that have been flushed to disk. The flush happens every hour via dbms_scheduler.


Flush the stats to disk manually so that ADO will evaluate the heat map statistics.


SQL> exec dbms_ilm.flush_all_segments;

PL/SQL procedure successfully completed.


Wait for two mins..

SQL> exec dbms_lock.sleep(120);

PL/SQL procedure successfully completed.


7) The next step is to create scheduler jobs for when ADO policy evaluation should happen, if the defaults do not match our requirements.
MMON evaluates row-level policies periodically and starts jobs to compress the blocks that qualify. Segment-level policies are evaluated
and executed only during the maintenance window.

So let's create scheduler jobs to flush and evaluate the statistics every 2 minutes.



  1  begin
  2     dbms_scheduler.create_job
  3     (
  4     job_name        =>      'flush_heat_map_stats',
  5     job_type        =>      'PLSQL_BLOCK',
  6     job_action      =>      'BEGIN  DBMS_ILM.FLUSH_ALL_SEGMENTS; END;',
  7     start_date      =>      '26-AUG-15 01.10.00 PM',
  8     repeat_interval =>      'FREQ=MINUTELY;INTERVAL=2',
  9     enabled         =>      TRUE,
 10     comments        =>      'Flush Heat Map stats to disk'
 11     );
 12* end;
SQL> /

PL/SQL procedure successfully completed.



begin
       dbms_scheduler.create_job
       (
       job_name      =>     'Evaluate_current_data_table',
       job_type      =>     'PLSQL_BLOCK',
       job_action    =>     'Declare v_executionid number; begin dbms_ilm.execute_ILM (owner => ''MANZOOR'', object_name => ''CURRENT_DATA'', task_id => v_executionid, policy_name => ''P3'', execution_mode => dbms_ilm.ilm_execution_offline); end;',
       start_date    =>     '26-AUG-15 01.12.00 PM',
        repeat_interval => 'FREQ=MINUTELY;INTERVAL=2',
       enabled              =>     TRUE,
       comments      =>     'Trigger evaluation on current_data object'
       );
end;
/

PL/SQL procedure successfully completed.


SQL> select job_name, JOB_ACTION, NEXT_RUN_DATE, REPEAT_INTERVAL from dba_scheduler_jobs where lower(job_name) in ('flush_heat_map_stats','evaluate_current_data_table');

JOB_NAME             JOB_ACTION                                                                       NEXT_RUN_DATE                  REPEAT_INTERVAL
-------------------- -------------------------------------------------------------------------------- ------------------------------ ------------------------------
EVALUATE_CURRENT_DAT Declare v_executionid number; begin dbms_ilm.execute_ILM (owner => 'MANZOOR', ob 26-AUG-15 01.12.00.000000 PM + FREQ=MINUTELY;INTERVAL=2
A_TABLE              ject_name => 'CURRENT_DATA', task_id => v_executionid, policy_name => 'P3', exec 08:00
                     ution_mode => dbms_ilm.ilm_execution_offline); end;

FLUSH_HEAT_MAP_STATS BEGIN      DBMS_ILM.FLUSH_ALL_SEGMENTS; END;                                          26-AUG-15 01.10.00.000000 PM + FREQ=MINUTELY;INTERVAL=2
                                                                                                      08:00





SQL> select job_name, JOB_ACTION, NEXT_RUN_DATE, REPEAT_INTERVAL from dba_scheduler_jobs where lower(job_name) in ('flush_heat_map_stats','evaluate_current_data_table')
  2  /

JOB_NAME             JOB_ACTION                                                                       NEXT_RUN_DATE                  REPEAT_INTERVAL
-------------------- -------------------------------------------------------------------------------- ------------------------------ ------------------------------
EVALUATE_CURRENT_DAT Declare v_executionid number; begin dbms_ilm.execute_ILM (owner => 'MANZOOR', ob 26-AUG-15 01.14.00.000000 PM + FREQ=MINUTELY;INTERVAL=2
A_TABLE              ject_name => 'CURRENT_DATA', task_id => v_executionid, policy_name => 'P3', exec 08:00
                     ution_mode => dbms_ilm.ilm_execution_offline); end;

FLUSH_HEAT_MAP_STATS BEGIN      DBMS_ILM.FLUSH_ALL_SEGMENTS; END;                                          26-AUG-15 01.14.00.000000 PM + FREQ=MINUTELY;INTERVAL=2
                                                                                                      08:00


The above jobs have been triggered, as we can see that the next run date is updated.



8) Check the Evaluation status / results


SQL> select table_name from dict where table_name like '%ILM%TASK%';

TABLE_NAME
----------------------------
DBA_ILMTASKS
USER_ILMTASKS


Tasks are being completed.


SQL> select task_id, start_time as start_time, state from user_ilmtasks order by 1;

   TASK_ID START_TIME                                                                  STATE
---------- --------------------------------------------------------------------------- ---------
         4 26-AUG-15 01.12.03.721844 PM                                                COMPLETED
         5 26-AUG-15 01.14.03.766599 PM                                                COMPLETED
         6 26-AUG-15 01.16.03.295030 PM                                                COMPLETED


SQL> select table_name from dict where table_name like '%ILM%EVALUATION%';

TABLE_NAME
--------------------------
DBA_ILMEVALUATIONDETAILS
USER_ILMEVALUATIONDETAILS

SQL> select table_name from dict where table_name like '%ILM%RESULT%';

TABLE_NAME
-----------------
DBA_ILMRESULTS
USER_ILMRESULTS



SQL> select task_id, policy_name, OBJECT_NAME, SELECTED_FOR_EXECUTION, job_name from DBA_ILMEVALUATIONDETAILS where object_name = 'CURRENT_DATA' order by 1;

   TASK_ID POLICY_NAME          OBJECT_NAME                              SELECTED_FOR_EXECUTION                     JOB_NAME
---------- -------------------- ---------------------------------------- ------------------------------------------ --------------------
         4 P3                   CURRENT_DATA                             SELECTED FOR EXECUTION                     ILMJOB150
         5 P3                   CURRENT_DATA                             SELECTED FOR EXECUTION                     ILMJOB152
         6 P3                   CURRENT_DATA                             SELECTED FOR EXECUTION                     ILMJOB154
         7 P3                   CURRENT_DATA                             SELECTED FOR EXECUTION                     ILMJOB156



-- From the above we can see that the task has been evaluated and selected for execution.
  
SQL> select task_id, job_name, start_time, completion_time from dba_ilmresults where task_id in (4,5,6,7) order by 1
  2  /

   TASK_ID JOB_NAME             START_TIME                     COMPLETION_TIME
---------- -------------------- ------------------------------ ------------------------------
         4 ILMJOB150            26-AUG-15 01.12.04.118381 PM   26-AUG-15 01.12.04.431165 PM
         5 ILMJOB152            26-AUG-15 01.14.04.142666 PM   26-AUG-15 01.14.04.286135 PM
         6 ILMJOB154            26-AUG-15 01.16.03.644307 PM   26-AUG-15 01.16.03.784602 PM
         7 ILMJOB156            26-AUG-15 01.18.03.706115 PM   26-AUG-15 01.18.03.858157 PM


We can see that the jobs have completed as well.


9) Now let's view the final output.

SQL> select table_name, tablespace_name , COMPRESSION, COMPRESS_FOR from user_tables where table_name = 'CURRENT_DATA';


TABLE_NAME           TABLESPACE_NAME                COMPRESS COMPRESS_FOR
-------------------- ------------------------------ -------- ------------------------------
CURRENT_DATA         ACTIVE_DATA                    DISABLED


COMPRESSION is showing as DISABLED. It's because we have not enabled compression at the segment level; we enabled it only at the row level.


 SQL> @compression_check_table.sql

COMPRESSION_TYPE                           NUM_ROWS
---------------------------------------- ----------
No Compression                                  214
Basic compression level                        9786

Now we can see that 9786 rows have been compressed and 214 rows are not compressed yet. This is because the
compression happens only when the entire block is full.


10) Let's insert some more data and check again after another 2 minutes.

begin
for i in 10001..15000 loop
insert into current_data values (sysdate, user, i, dbms_random.string('U',50));
       end loop;
       commit;
       end;
       /

PL/SQL procedure successfully completed.


SQL> select count(*) from current_data;

  COUNT(*)
----------
     15000

-- Lets wait for 2 mins !!

SQL> exec dbms_lock.sleep(120);

PL/SQL procedure successfully completed.


SQL> SELECT policy_name, object_name, enabled FROM dbA_ilmobjects where object_name = 'CURRENT_DATA';

POLICY_NAME          OBJECT_NAME                              ENABLED
-------------------- ---------------------------------------- -------
P3                   CURRENT_DATA                               YES


SQL> @compression_check_table.sql

COMPRESSION_TYPE                           NUM_ROWS
---------------------------------------- ----------
No Compression                                 1434
Basic compression level                       13566

Now we can see that whenever the evaluation happens, it compresses the qualifying data.



11) Let's remove the policy from the table.

SQL> alter table manzoor.current_data ilm delete_all;

Table altered.

SQL> select * from user_ilmobjects;

no rows selected

SQL> exec dbms_scheduler.drop_job('EVALUATE_CURRENT_DATA_TABLE');

PL/SQL procedure successfully completed.

SQL> exec dbms_scheduler.drop_job('FLUSH_HEAT_MAP_STATS');

PL/SQL procedure successfully completed.

Scenario 2:-


We have a partitioned table, partitioned by month, with partitions created manually. Only the last six months' partitions are accessed/modified;
any partition older than 6 months is rarely accessed or modified. We have two tablespaces, ACTIVE_DATA and ARCHIVE_DATA; the ACTIVE_DATA tablespace
has been created on a high-performance file system and ARCHIVE_DATA on a low-cost file system. We will move the partitions
that are older than 6 months to the ARCHIVE_DATA tablespace.

1) Creating a partitioned table.

     
create table current_data
     (
     upd_id   number,
     upd_by   varchar2(30),
     upd_tm   date,
     message  varchar2(30)
     )
     Partition by range (upd_tm)
     (
     Partition P_JAN_2015 values less than (to_date ('01-FEB-2015')) tablespace active_data,
     Partition P_FEB_2015 values less than (to_date ('01-MAR-2015')) tablespace active_data,
     Partition P_MAR_2015 values less than (to_date ('01-APR-2015')) tablespace active_data,
     Partition P_APR_2015 values less than (to_date ('01-MAY-2015')) tablespace active_data,
     Partition P_MAY_2015 values less than (to_date ('01-JUN-2015')) tablespace active_data,
     Partition P_JUN_2015 values less than (to_date ('01-JUL-2015')) tablespace active_data,
     Partition P_JUL_2015 values less than (to_date ('01-AUG-2015')) tablespace active_data,
     Partition P_AUG_2015 values less than (to_date ('01-SEP-2015')) tablespace active_data
     )
     /

Table created.


col tablespace_name for a15;
col table_name for a20;
col partition_name for a20;

SQL> select TABLE_NAME, parTition_name, HIGH_VALUE, partition_position, TABLESPACE_NAME , interval from dba_tab_partitions where table_name = 'CURRENT_DATA';

TABLE_NAME           PARTITION_NAME       HIGH_VALUE                                                                       PARTITION_POSITION TABLESPACE_NAME INT
-------------------- -------------------- -------------------------------------------------------------------------------- ------------------ --------------- ---
CURRENT_DATA         P_JAN_2015           TO_DATE(' 2015-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA                  1 ACTIVE_DATA     NO
CURRENT_DATA         P_FEB_2015           TO_DATE(' 2015-03-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA                  2 ACTIVE_DATA     NO
CURRENT_DATA         P_MAR_2015           TO_DATE(' 2015-04-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA                  3 ACTIVE_DATA     NO
CURRENT_DATA         P_APR_2015           TO_DATE(' 2015-05-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA                  4 ACTIVE_DATA     NO
CURRENT_DATA         P_MAY_2015           TO_DATE(' 2015-06-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA                  5 ACTIVE_DATA     NO
CURRENT_DATA         P_JUN_2015           TO_DATE(' 2015-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA                  6 ACTIVE_DATA     NO
CURRENT_DATA         P_JUL_2015           TO_DATE(' 2015-08-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA                  7 ACTIVE_DATA     NO
CURRENT_DATA         P_AUG_2015           TO_DATE(' 2015-09-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA                  8 ACTIVE_DATA     NO

  1  begin
  2             for i in 1..1000 loop
  3             insert into current_data values (i,dbms_random.string('U',30), to_date('01-JAN-2015','DD-MON-YYYY'), 'Jan Partition');
  4             end loop;
  5             commit;
  6             for i in 1001..1500 loop
  7             insert into current_data values (i,dbms_random.string('U',30), to_date('01-FEB-2015','DD-MON-YYYY'), 'Feb Partition');
  8             end loop;
  9             commit;
 10             for i in 1501..2000 loop
 11             insert into current_data values (i,dbms_random.string('U',30), to_date('01-MAR-2015','DD-MON-YYYY'), dbms_random.string('U',30));
 12             END LOOP;
 13             commit;
 14             for i in 2001..2500 loop
 15             insert into current_data values (i,dbms_random.string('U',30), to_date('01-APR-2015','DD-MON-YYYY'), dbms_random.string('U',30));
 16             END LOOP;
 17             commit;
 18             for i in 2501..3000 loop
 19             insert into current_data values (i,dbms_random.string('U',30), to_date('01-MAY-2015','DD-MON-YYYY'), dbms_random.string('U',30));
 20             end loop;
 21             commit;
 22             for i in 3001..4000 loop
 23             insert into current_data values (i,dbms_random.string('U',30), to_date('01-JUN-2015','DD-MON-YYYY'), dbms_random.string('U',30));
 24             END LOOP;
 25             commit;
 26             for i in 4001..5000 loop
 27             insert into current_data values (i,dbms_random.string('U',30), to_date('01-JUL-2015','DD-MON-YYYY'), dbms_random.string('U',30));
 28             end loop;
 29             commit;
 30             for i in 5001..6000 loop
 31             insert into current_data values (i,dbms_random.string('U',30), to_date('01-AUG-2015','DD-MON-YYYY'), dbms_random.string('U',30));
 32             end loop;
 33             commit;
 34* end;
SQL> /

PL/SQL procedure successfully completed.

SQL> create index idx_cur_Data on current_data (upd_tm) local tablespace active_Data;

Index created.


SQL> exec dbms_stats.gather_table_stats('MANZOOR','CURRENT_DATA');

PL/SQL procedure successfully completed.



SQL> select table_name, PARTITION_NAME, NUM_ROWS from dba_tab_statistics where table_name = 'CURRENT_DATA' ORDER BY PARTITION_POSITION;

TABLE_NAME           PARTITION_NAME         NUM_ROWS
-------------------- -------------------- ----------
CURRENT_DATA         P_JAN_2015                 1000
CURRENT_DATA         P_FEB_2015                  500
CURRENT_DATA         P_MAR_2015                  500
CURRENT_DATA         P_APR_2015                  500
CURRENT_DATA         P_MAY_2015                  500
CURRENT_DATA         P_JUN_2015                 1000
CURRENT_DATA         P_JUL_2015                 1000
CURRENT_DATA         P_AUG_2015                 1000


SQL> COL INDEX_NAME FOR A30;
SQL> select INDEX_NAME, PARTITION_NAME, PARTITION_POSITION from dba_ind_partitions where INDEX_NAME = 'IDX_CUR_DATA' ORDER BY 3;
INDEX_NAME                     PARTITION_NAME       PARTITION_POSITION
------------------------------ -------------------- ------------------
IDX_CUR_DATA                   P_JAN_2015                            1
IDX_CUR_DATA                   P_FEB_2015                            2
IDX_CUR_DATA                   P_MAR_2015                            3
IDX_CUR_DATA                   P_APR_2015                            4
IDX_CUR_DATA                   P_MAY_2015                            5
IDX_CUR_DATA                   P_JUN_2015                            6
IDX_CUR_DATA                   P_JUL_2015                            7
IDX_CUR_DATA                   P_AUG_2015                            8

8 rows selected.


SQL> select * from V$heat_map_segment where object_name  = 'CURRENT_DATA';

OBJECT_NAME                    SUBOBJECT_NAME                       OBJ#   DATAOBJ#        TS# TRACK_TIM SEG SEG FUL LOO     CON_ID
------------------------------ ------------------------------ ---------- ---------- ---------- --------- --- --- --- --- ----------
CURRENT_DATA                   P_APR_2015                          95522      95522          6 27-AUG-15 NO  NO  NO  NO           0
CURRENT_DATA                   P_FEB_2015                          95520      95520          6 27-AUG-15 NO  NO  NO  NO           0
CURRENT_DATA                   P_MAY_2015                          95523      95523          6 27-AUG-15 NO  NO  NO  NO           0
CURRENT_DATA                   P_AUG_2015                          95526      95526          6 27-AUG-15 NO  NO  NO  NO           0
CURRENT_DATA                   P_JUN_2015                          95524      95524          6 27-AUG-15 NO  NO  NO  NO           0
CURRENT_DATA                   P_JUL_2015                          95525      95525          6 27-AUG-15 NO  NO  NO  NO           0
CURRENT_DATA                   P_MAR_2015                          95521      95521          6 27-AUG-15 NO  NO  NO  NO           0
CURRENT_DATA                   P_JAN_2015                          95519      95519          6 27-AUG-15 NO  NO  YES YES          0



Now we will define a policy to move the partitions that are more than 6 months old to the ARCHIVE_DATA tablespace. Storage tiering
can be done using two kinds of policies: one based on tablespace fullness, the other a custom policy based on a function.

For our scenario it is not based on tablespace fullness, so we need to create our own function. Let's create a function that
returns either true or false. If it returns true, the policy condition is satisfied and the action will be taken; if the
function returns false, the condition has failed and the policy will not take effect.


set serveroutput on
create or replace function curr_arch_fun (obj_id number) return boolean as
seg_wr_t date;
seg_re_t date;
seg_fu_t date;
seg_lk_t date;
max_t    date;
begin
      -- Fetch the latest heat map access times for the segment.
      select SEGMENT_WRITE_TIME, SEGMENT_READ_TIME, FULL_SCAN, LOOKUP_SCAN into
              seg_wr_t, seg_re_t, seg_fu_t, seg_lk_t
                from user_heat_map_segment a, user_objects b
                where a.object_name = b.object_name and a.subobject_name = b.subobject_name and b.object_id = obj_id;
      -- Treat a missing timestamp as 01-JAN-2015 and take the most recent activity.
      max_t := greatest(nvl(seg_wr_t, to_date('01-JAN-2015','DD-MON-YYYY')),
                        nvl(seg_re_t, to_date('01-JAN-2015','DD-MON-YYYY')),
                        nvl(seg_fu_t, to_date('01-JAN-2015','DD-MON-YYYY')),
                        nvl(seg_lk_t, to_date('01-JAN-2015','DD-MON-YYYY')));
      -- True when the segment has had no activity for the last 6 months.
      if max_t < add_months(sysdate,-6) then
      return true;
      else
      return false;
      end if;
end;
/


We have created a function; it will return true if the segment has not been accessed or modified in the last 6 months, else it will return false.

For the purpose of this example we will change the condition to the last 10 minutes: we want to move the partitions that have not been
accessed or modified in the last 10 minutes. So we change the function as below.



set serveroutput on
create or replace function curr_arch_fun (obj_id number) return boolean as
seg_wr_t date;
seg_re_t date;
seg_fu_t date;
seg_lk_t date;
max_t    date;
begin
      select SEGMENT_WRITE_TIME,SEGMENT_READ_TIME,FULL_SCAN,LOOKUP_SCAN into
              seg_wr_t, seg_re_t, seg_fu_t, seg_lk_t
                from user_heat_map_segment a, user_objects b
                where a.object_name = b.object_name and a.SUBOBJECT_NAME=b.subobject_name and b.object_id = obj_id;
      -- Treat a missing timestamp as 01-JAN-2015 and take the most recent activity.
      max_t := greatest(nvl(seg_wr_t, to_date('01-JAN-2015','DD-MON-YYYY')),
                        nvl(seg_re_t, to_date('01-JAN-2015','DD-MON-YYYY')),
                        nvl(seg_fu_t, to_date('01-JAN-2015','DD-MON-YYYY')),
                        nvl(seg_lk_t, to_date('01-JAN-2015','DD-MON-YYYY')));
      -- 10 minutes = 10/1440 of a day.
      if max_t < sysdate - 10/1440 then
      return true;
      else
      return false;
      end if;
end;
/


Now let's add the policy to the partitions.



SQL> select partition_name from dba_tab_partitions where table_name = 'CURRENT_DATA' order by partition_position;

PARTITION_NAME
--------------------
P_JAN_2015
P_FEB_2015
P_MAR_2015
P_APR_2015
P_MAY_2015
P_JUN_2015
P_JUL_2015
P_AUG_2015
P_SEP_2015

9 rows selected.



SQL> alter table current_data modify partition P_JAN_2015 ilm add policy tier to ARCHIVE_DATA on curr_arch_fun;

Table altered.

SQL> alter table current_data modify partition P_FEB_2015 ilm add policy tier to ARCHIVE_DATA on curr_arch_fun;

Table altered.

SQL> alter table current_data modify partition P_MAR_2015 ilm add policy tier to ARCHIVE_DATA on curr_arch_fun;

Table altered.

SQL> alter table current_data modify partition P_APR_2015 ilm add policy tier to ARCHIVE_DATA on curr_arch_fun;

Table altered.

SQL> alter table current_data modify partition P_MAY_2015 ilm add policy tier to ARCHIVE_DATA on curr_arch_fun;

Table altered.

SQL> alter table current_data modify partition P_JUN_2015 ilm add policy tier to ARCHIVE_DATA on curr_arch_fun;

Table altered.

SQL> alter table current_data modify partition P_JUL_2015 ilm add policy tier to ARCHIVE_DATA on curr_arch_fun;

Table altered.

SQL> alter table current_data modify partition P_AUG_2015 ilm add policy tier to ARCHIVE_DATA on curr_arch_fun;

Table altered.

SQL> alter table current_data modify partition P_SEP_2015 ilm add policy tier to ARCHIVE_DATA on curr_arch_fun;

Table altered.

-- We have added the storage tiering policy to all the partitions. Whenever you create a new partition,
just add this policy to it as well.


SQL> select * from dba_ilmpolicies;

POLICY_NAM POLICY_TYPE   TABLESPACE                     ENABLE DELETED
---------- ------------- ------------------------------ ------ -------
P21        DATA MOVEMENT                                YES    NO
P22        DATA MOVEMENT                                YES    NO
P23        DATA MOVEMENT                                YES    NO
P24        DATA MOVEMENT                                YES    NO
P25        DATA MOVEMENT                                YES    NO
P26        DATA MOVEMENT                                YES    NO
P27        DATA MOVEMENT                                YES    NO
P28        DATA MOVEMENT                                YES    NO
P29        DATA MOVEMENT                                YES    NO

9 rows selected.

SQL> select policy_name, OBJECT_NAME, SUBOBJECT_NAME, OBJECT_TYPE, ENABLED from user_ilmobjects;

POLICY_NAM OBJECT_NAME     SUBOBJECT_NAME  OBJECT_TYPE        ENA
---------- --------------- --------------- ------------------ ---
P21        CURRENT_DATA    P_JAN_2015      TABLE PARTITION    YES
P22        CURRENT_DATA    P_FEB_2015      TABLE PARTITION    YES
P23        CURRENT_DATA    P_MAR_2015      TABLE PARTITION    YES
P24        CURRENT_DATA    P_APR_2015      TABLE PARTITION    YES
P25        CURRENT_DATA    P_MAY_2015      TABLE PARTITION    YES
P26        CURRENT_DATA    P_JUN_2015      TABLE PARTITION    YES
P27        CURRENT_DATA    P_JUL_2015      TABLE PARTITION    YES
P28        CURRENT_DATA    P_AUG_2015      TABLE PARTITION    YES
P29        CURRENT_DATA    P_SEP_2015      TABLE PARTITION    YES

9 rows selected.

SQL> select POLICY_NAME, ACTION_TYPE, TIER_TABLESPACE, TIER_STATUS, CONDITION_TYPE, CONDITION_DAYS, CUSTOM_FUNCTION from user_ilmdatamovementpolicies;

POLICY_NAM ACTION_TYPE TIER_TABLESPACE      TIER_STAT CONDITION_TYPE         CONDITION_DAYS CUSTOM_FUNCTION
---------- ----------- -------------------- --------- ---------------------- -------------- ------------------------------
P21        STORAGE     ARCHIVE_DATA                   USER DEFINED                        0 CURR_ARCH_FUN
P22        STORAGE     ARCHIVE_DATA                   USER DEFINED                        0 CURR_ARCH_FUN
P23        STORAGE     ARCHIVE_DATA                   USER DEFINED                        0 CURR_ARCH_FUN
P24        STORAGE     ARCHIVE_DATA                   USER DEFINED                        0 CURR_ARCH_FUN
P25        STORAGE     ARCHIVE_DATA                   USER DEFINED                        0 CURR_ARCH_FUN
P26        STORAGE     ARCHIVE_DATA                   USER DEFINED                        0 CURR_ARCH_FUN
P27        STORAGE     ARCHIVE_DATA                   USER DEFINED                        0 CURR_ARCH_FUN
P28        STORAGE     ARCHIVE_DATA                   USER DEFINED                        0 CURR_ARCH_FUN
P29        STORAGE     ARCHIVE_DATA                   USER DEFINED                        0 CURR_ARCH_FUN

9 rows selected.


-- The policy will be evaluated once daily during the maintenance window, and if the condition is satisfied the policy action will be
executed. Now let's evaluate the policy.



-- For testing, we have accessed all the data once.

> select * from manzoor.current_data;

SQL> select systimestamp from dual;

SYSTIMESTAMP
---------------------------------------------------------------------------
27-AUG-15 12.16.28.043563 PM +08:00


-- Flush the heat map statistics to disk.

SQL> exec dbms_ilm.flush_all_segments;

PL/SQL procedure successfully completed.


Evaluate the policy manually.

declare
v_executionid_1 number;
begin
dbms_ilm.execute_ILM
(
owner              => 'MANZOOR',
object_name    => 'CURRENT_DATA',
task_id        => v_executionid_1,
execution_mode => dbms_ilm.ilm_execution_offline
);
end;
/


PL/SQL procedure successfully completed.


-- Let us check the result of the policy evaluations.

SQL> select TASK_ID, POLICY_NAME, OBJECT_NAME, SUBOBJECT_NAME, SELECTED_FOR_EXECUTION from USER_ILMEVALUATIONDETAILS order by 1;

   TASK_ID POLICY_NAM OBJECT_NAME     SUBOBJECT_NAME  SELECTED_FOR_EXECUTION
---------- ---------- --------------- --------------- ------------------------------------------
        34 P21        CURRENT_DATA    P_JAN_2015      SELECTED FOR EXECUTION
        34 P22        CURRENT_DATA    P_FEB_2015      SELECTED FOR EXECUTION
        34 P23        CURRENT_DATA    P_MAR_2015      SELECTED FOR EXECUTION
        34 P24        CURRENT_DATA    P_APR_2015      SELECTED FOR EXECUTION
        34 P25        CURRENT_DATA    P_MAY_2015      SELECTED FOR EXECUTION
        34 P26        CURRENT_DATA    P_JUN_2015      SELECTED FOR EXECUTION
        34 P27        CURRENT_DATA    P_JUL_2015      SELECTED FOR EXECUTION
        34 P28        CURRENT_DATA    P_AUG_2015      SELECTED FOR EXECUTION
        34 P29        CURRENT_DATA    P_SEP_2015      SELECTED FOR EXECUTION


SQL> col creation_time for a40;
SQL> col start_time for a40;
SQL> col completion_time for a40;

SQL> select * from user_ilmtasks where task_id = 34;

   TASK_ID STATE     CREATION_TIME                            START_TIME                               COMPLETION_TIME
---------- --------- ---------------------------------------- ---------------------------------------- ----------------------------------------
        34 COMPLETED 27-AUG-15 01.29.26.519883 PM             27-AUG-15 01.29.26.519883 PM             27-AUG-15 01.29.45.882804 PM



-- From the above we can see that the policy has been evaluated and selected for execution, and the task shows as completed.



SQL> col job_name for a10;
SQL> col job_state for a35;
SQL>  select task_id, JOB_NAME, JOB_STATE, START_TIME, COMPLETION_TIME from user_ilmresults where task_id = 34;


   TASK_ID JOB_NAME   JOB_STATE                      START_TIME                               COMPLETION_TIME
---------- ---------- ------------------------------ ---------------------------------------- ----------------------------------------
        34 ILMJOB264  COMPLETED SUCCESSFULLY         27-AUG-15 01.29.29.588061 PM             27-AUG-15 01.29.45.882804 PM
        34 ILMJOB266  COMPLETED SUCCESSFULLY         27-AUG-15 01.29.29.717945 PM             27-AUG-15 01.29.45.729649 PM
        34 ILMJOB268  COMPLETED SUCCESSFULLY         27-AUG-15 01.29.30.443651 PM             27-AUG-15 01.29.45.001037 PM
        34 ILMJOB270  COMPLETED SUCCESSFULLY         27-AUG-15 01.29.30.675137 PM             27-AUG-15 01.29.32.159260 PM
        34 ILMJOB272  COMPLETED SUCCESSFULLY         27-AUG-15 01.29.29.489066 PM             27-AUG-15 01.29.45.690322 PM
        34 ILMJOB274  COMPLETED SUCCESSFULLY         27-AUG-15 01.29.30.670540 PM             27-AUG-15 01.29.45.515296 PM
        34 ILMJOB276  COMPLETED SUCCESSFULLY         27-AUG-15 01.29.29.498106 PM             27-AUG-15 01.29.45.840876 PM
        34 ILMJOB278  COMPLETED SUCCESSFULLY         27-AUG-15 01.29.29.493288 PM             27-AUG-15 01.29.45.852788 PM


-- Results show COMPLETED SUCCESSFULLY.
-- Now let us view the final result.


SQL> select table_name, partition_name, tablespace_name from dba_tab_partitions where table_name = 'CURRENT_DATA' ORDER BY 2;

TABLE_NAME           PARTITION_NAME       TABLESPACE_NAME
-------------------- -------------------- ---------------
CURRENT_DATA         P_APR_2015           ARCHIVE_DATA
CURRENT_DATA         P_AUG_2015           ARCHIVE_DATA
CURRENT_DATA         P_FEB_2015           ARCHIVE_DATA
CURRENT_DATA         P_JAN_2015           ARCHIVE_DATA
CURRENT_DATA         P_JUL_2015           ARCHIVE_DATA
CURRENT_DATA         P_JUN_2015           ARCHIVE_DATA
CURRENT_DATA         P_MAR_2015           ARCHIVE_DATA
CURRENT_DATA         P_MAY_2015           ARCHIVE_DATA
CURRENT_DATA         P_SEP_2015           ACTIVE_DATA


SQL> select index_name, PARTITION_NAME, STATUS, TABLESPACE_NAME FROM DBA_IND_PARTITIONS WHERE INDEX_NAME = 'IDX_CUR_DATA';

INDEX_NAME                     PARTITION_NAME       STATUS   TABLESPACE_NAME
------------------------------ -------------------- -------- ---------------
IDX_CUR_DATA                   P_APR_2015           USABLE   ACTIVE_DATA
IDX_CUR_DATA                   P_AUG_2015           USABLE   ACTIVE_DATA
IDX_CUR_DATA                   P_FEB_2015           USABLE   ACTIVE_DATA
IDX_CUR_DATA                   P_JAN_2015           USABLE   ACTIVE_DATA
IDX_CUR_DATA                   P_JUL_2015           USABLE   ACTIVE_DATA
IDX_CUR_DATA                   P_JUN_2015           USABLE   ACTIVE_DATA
IDX_CUR_DATA                   P_MAR_2015           USABLE   ACTIVE_DATA
IDX_CUR_DATA                   P_MAY_2015           USABLE   ACTIVE_DATA
IDX_CUR_DATA                   P_SEP_2015           USABLE   ACTIVE_DATA

9 rows selected.

-- The older partitions have been moved to the ARCHIVE_DATA tablespace (only P_SEP_2015 remains in ACTIVE_DATA),
-- and the local index partitions are still USABLE in ACTIVE_DATA.

   
Remove the policy if required.

SQL> alter table current_data modify partition P_JAN_2015 ilm delete_all;

Table altered.

SQL> alter table current_data modify partition P_FEB_2015 ilm delete_all;

Table altered.

SQL> select 'alter table current_data modify partition ' || SUBOBJECT_NAME || ' ilm delete_all;' from user_ilmobjects;

'ALTERTABLECURRENT_DATAMODIFYPARTITION'||SUBOBJECT_NAME||'ILMDELETE_ALL;'
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
alter table current_data modify partition P_APR_2015 ilm delete_all;
alter table current_data modify partition P_MAY_2015 ilm delete_all;
alter table current_data modify partition P_JUN_2015 ilm delete_all;
alter table current_data modify partition P_JUL_2015 ilm delete_all;
alter table current_data modify partition P_AUG_2015 ilm delete_all;
alter table current_data modify partition P_SEP_2015 ilm delete_all;

- Execute the statements generated above.

Table altered.

SQL>
Table altered.

SQL>
Table altered.

SQL>
Table altered.

SQL>
Table altered.

SQL>
Table altered.


SQL> select * from user_ilmobjects;

no rows selected

Creating different policies.


> alter tablespace tbs1 default
      ilm add policy   
      row store compress advanced
      segment
      after 30 days of no modification;

-- Row store compression policy at tablespace level (it becomes the default for all tables stored in this tablespace).


> alter table tab1
      ilm add policy
      row store compress advanced
      group
      after 90 days of no modification;

--> The GROUP keyword indicates that SecureFiles LOBs will also be compressed, and global indexes are maintained.
With row store compression the SecureFiles LOB compression level is LOW; with column store compression it is MEDIUM.

> alter table tab1 modify partition p1
      ilm add policy
      column store compress query high
      group
      after 6 month of creation;



> alter table tab1
      ilm add policy
      row store compress advanced
      segment
      after 90 days of no modification;

> alter table tab1
      ilm add policy
      row store compress advanced
      segment
      after 6 month of no access;

> alter table tab1
      ilm add policy
      row store compress advanced
      segment
      after 1 year of creation;

> alter table tab1
      ilm add policy
      row store compress advanced
      row
      after 30 days of no modification;

The only compression option allowed at row level is ROW STORE COMPRESS ADVANCED, and the only tracking condition is no modification.

-- Storage tiering.

> alter table tabl1 ilm add policy tier to tablespace_name on custom_function;

A segment-level policy is executed only once in its lifetime; after that it becomes disabled. This also applies to the tiering policy.

Priority ->

E.g., if three objects have storage tiering policies defined and the evaluation condition passes, the least recently accessed object is moved first.
  

Multiple segment policies on a segment.


New data load - Life cycle starts.

After 1 day   - Row compression
After 1 month - Segment compression
After 1 year  - Move to low-cost tablespace
After 2 years - Purge - cycle ends.


SQL> alter table current_data ilm add policy row store compress advanced row after 1 day of no modification;

Table altered.

SQL>  alter table current_data_1 ilm add policy column store compress for query high segment after 1 month of no modification;

Table altered.


SQL> alter table current_data_1 ilm add policy tier to archive_data on curr_arch_fun;

Table altered.

SQL> alter table current_data_1 ilm add policy column store compress for archive high segment after 2 years of no modification;

Table altered.

SQL> alter table current_data_1 ilm add policy column store compress for archive high segment after 2 years of no access;
alter table current_data_1 ilm add policy column store compress for archive high segment after 2 years of no access
*
ERROR at line 1:
ORA-38323: policy conflicts with policy 47


Several policies can be applied to the same segment as long as they do not have conflicting conditions.

A conflict arises when the policies on the same object / action are not based on the same statistic. For example,
all compression policies must be based on the same statistic: no access, no modification, or creation time.


Once a segment has moved to a particular compression level, all policies for lower compression levels are disabled.
This way a segment can only move to a higher compression level.


There can be only one row-level compression policy on a segment.


Policy inheritance.

-- The child level policy overrides the parent level policy with the same action.

The table-level policy
      * overrides the tablespace-level policy
      * is inherited at the partition level.
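As a sketch of how this inheritance plays out, reusing the hypothetical tbs1 / tab1 names from the examples above:

```sql
-- Tablespace-level default: applies to every table stored in tbs1.
alter tablespace tbs1 default
      ilm add policy
      row store compress advanced
      segment
      after 30 days of no modification;

-- A table-level policy with the same action overrides the tablespace
-- default for tab1, and is inherited by all partitions of tab1.
alter table tab1
      ilm add policy
      row store compress advanced
      segment
      after 90 days of no modification;
```

Here tab1's segments compress after 90 days of no modification, not the 30 days set as the tablespace default.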



Segment-level policies are evaluated during the maintenance window.
Row-level policies are evaluated by MMON every 15 minutes.


  
SQL> select * from dba_ilmparameters;

NAME                                                                                                                                  VALUE
-------------------------------------------------------------------------------------------------------------------------------- ----------
ENABLED                                                                                                                                   1
JOB LIMIT                                                                                                                                10
EXECUTION MODE                                                                                                                            3
EXECUTION INTERVAL                                                                                                                       15
TBS PERCENT USED                                                                                                                         85
TBS PERCENT FREE                                                                                                                         25
RETENTION TIME                                                                                                                           30
POLICY TIME                                                                                                                               1


The EXECUTION INTERVAL parameter is used for the row-level policy evaluation.


exec dbms_ilm_admin.customize_ilm (dbms_ilm_admin.execution_interval,1);



Enable all policies


> alter table t1 ilm enable_all;
> alter table t1 ilm disable_all;
> alter table t1 ilm delete_all;
> alter table t1 modify partition p1 ilm disable_all;
> alter table t1 modify partition p1 ilm delete_all;

Turn ADO off / on

> exec dbms_ilm_admin.disable_ilm
> exec dbms_ilm_admin.enable_ilm

You can view the ENABLED value in dba_ilmparameters: 1 means enabled and 2 means disabled.
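For a quick check, the same parameter can be queried directly:

```sql
-- 1 = ADO enabled, 2 = ADO disabled.
select name, value from dba_ilmparameters where name = 'ENABLED';
```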



dbms_ilm.preview_ilm - To view the results of ADO policy evaluation on a set of objects.

      ilm_scope => dbms_ilm.scope_schema   (evaluation selects all ADO policies in the current schema).
      ilm_scope => dbms_ilm.scope_database (evaluation selects all ADO policies in the database).

dbms_ilm.execute_ilm_task
      (execution_mode => dbms_ilm.ilm_execution_offline, execution_schedule => dbms_ilm.schedule_immediate)



select task_id, state from dba_ilmtasks;

Task_id     State
-------     ------
26482     ACTIVE


The task is created in an INACTIVE state. The user can still add or delete objects for this task and review the
ADO policy evaluation again.

Once done, execute the execute_ilm_task procedure to schedule ADO execution immediately on this set of objects. The task moves to the ACTIVE state when it starts executing.
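The preview-then-execute flow above can be sketched in one PL/SQL block (schema scope assumed, as in the earlier MANZOOR examples):

```sql
set serveroutput on
declare
  v_task_id number;
begin
  -- Evaluate all ADO policies in the current schema without executing them;
  -- the results can be reviewed in USER_ILMEVALUATIONDETAILS for this task.
  dbms_ilm.preview_ilm (task_id   => v_task_id,
                        ilm_scope => dbms_ilm.scope_schema);
  dbms_output.put_line('Task created in INACTIVE state: ' || v_task_id);

  -- Once reviewed, schedule the task for immediate offline execution.
  dbms_ilm.execute_ilm_task (task_id            => v_task_id,
                             execution_mode     => dbms_ilm.ilm_execution_offline,
                             execution_schedule => dbms_ilm.schedule_immediate);
end;
/
```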


ADO job failures
----------------
Row level :-
In row-level compression ADO policies, the blocks are compressed in transactions.
If the job is killed while a transaction is in progress, that transaction is rolled back.
Blocks that have already been compressed are not affected.

Segment level :-

If an ADO job involving ALTER TABLE ... MOVE completes the move but fails before rebuilding
the indexes, jobs will be spawned to rebuild the indexes that were in a usable state before
the ALTER TABLE MOVE operation started.
