DBMS_REDEFINITION is a well-known built-in Oracle package for reorganizing tables. It works well most of the time, but recently I found myself working out a custom solution to reorganize a large table for a customer.
For the impatient: if the table you are going to reorganize contains CLOB data, you might have to consider using an alternative method such as CTAS. (The problem appeared on 11.2.0.2; I did not test other Oracle versions.)
In this case our customer had a trace table which occupied approximately 27 GB, 20 GB of which belonged to LOB segments. The proposed solution was to re-create it as a partitioned table and to use Oracle's SecureFiles feature with compression to cut down on the required space.
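Just for illustration, the target table could have looked something like the following; the column list and the partitioning scheme are made up here, the point is the SecureFiles compression clause (which needs an ASSM tablespace and, for COMPRESS, the Advanced Compression option):
create table trace_table
(
  id         number not null,
  created_at date   not null,
  payload    clob
)
lob (payload) store as securefile (compress medium)
partition by range (created_at)
(
  partition p_2012 values less than (date '2013-01-01'),
  partition p_max  values less than (maxvalue)
);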
For some reason the execution of DBMS_REDEFINITION.SYNC_INTERIM_TABLE(...) never finished and the whole process got stuck at this step. As the name of the procedure suggests, it is supposed to synchronize inserted/updated/deleted records from the original table to the new one. Internally DBMS_REDEFINITION uses a materialized view and a materialized view log on the source table to do this job.
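Once START_REDEF_TABLE has been called, that log should be visible in the dictionary:
select log_owner, master, log_table
from   dba_mview_logs
where  master = 'OLD_TABLE';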
Most likely the CLOB column of the source table was responsible for the misbehaviour I experienced.
You can reproduce this issue easily by taking the following steps.
- Create a large enough source table and an empty destination table with the same definition.
- Perform on-line redefinition.
- While redefinition is in progress, insert some records into the source table.
- Your process will be bogged down in the sync phase.
-- Create source table
create table old_table
as select ao.owner, ao.object_name from all_objects ao where 1 = 2;
-- Add columns. (If you change the DATA column to VARCHAR2,
-- or simply remove it, everything will work fine.)
alter table old_table add data clob not null;
alter table old_table add id number not null;
create table new_table
(
owner varchar2(30 byte) not null,
object_name varchar2(30 byte) not null,
data clob,
id number
);
-- Populate original table (it inserts ~1M records on my database)
create sequence seq minvalue 1 start with 1 increment by 1 cache 10;
insert into old_table
select ao.owner, ao.object_name, ao.owner || '.' || ao.object_name as data, seq.nextval as id from all_objects ao, dba_users du
where du.username like '%SYS%';
commit;
-- We have to add a primary key, because redefinition
-- will not start without a primary key.
alter table old_table add constraint pk_old_table primary key (id);
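Not part of the repro script below, but before starting a redefinition it is worth verifying that the table is a valid candidate at all; CAN_REDEF_TABLE raises an exception if it is not:
begin
  dbms_redefinition.can_redef_table(uname => 'lcsontos',
                                    tname => 'old_table',
                                    options_flag => dbms_redefinition.cons_use_pk);
end;
/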
set serveroutput on
declare
  procedure cleanup is
  begin
    dbms_application_info.set_action(action_name => '');
    dbms_monitor.session_trace_disable();
  end;
begin
  dbms_monitor.session_trace_enable(waits => true, binds => true);

  dbms_application_info.set_action(action_name => 'start_redef_table');
  dbms_redefinition.start_redef_table(uname => 'lcsontos',
                                      orig_table => 'old_table',
                                      int_table => 'new_table');

  dbms_application_info.set_action(action_name => 'sync_interim_table');
  dbms_redefinition.sync_interim_table(uname => 'lcsontos',
                                       orig_table => 'old_table',
                                       int_table => 'new_table');

  dbms_application_info.set_action(action_name => 'finish_redef_table');
  dbms_redefinition.finish_redef_table(uname => 'lcsontos',
                                       orig_table => 'old_table',
                                       int_table => 'new_table');

  cleanup();
exception
  when others then
    cleanup();
    dbms_redefinition.abort_redef_table(uname => 'lcsontos',
                                        orig_table => 'old_table',
                                        int_table => 'new_table');
    dbms_output.put_line(dbms_utility.format_error_backtrace());
end;
/
-- Run this from a second session while the block above is still
-- between start_redef_table and sync_interim_table.
insert into old_table
select ao.owner, ao.object_name, ao.owner || '.' || ao.object_name, seq.nextval as id from all_objects ao
where rownum <= 1000;
commit;
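Since the script tags each phase with DBMS_APPLICATION_INFO, the session stuck in the sync phase is easy to spot; something along these lines should do:
select sid, serial#, sql_id, event, seconds_in_wait
from   v$session
where  action = 'sync_interim_table';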
If you then take a look at what that session is executing, you will find the query below running (and running ...) until you kill it. If you execute the same query from another session, it completes within a few seconds, so I suppose there is nothing wrong with the query itself.
SELECT CURRENT$."OWNER",
CURRENT$."OBJECT_NAME",
CURRENT$."SUBOBJECT_NAME",
CURRENT$."OBJECT_ID",
CURRENT$."DATA_OBJECT_ID",
CURRENT$."OBJECT_TYPE",
CURRENT$."CREATED",
CURRENT$."LAST_DDL_TIME",
CURRENT$."TIMESTAMP",
CURRENT$."STATUS",
CURRENT$."TEMPORARY",
CURRENT$."GENERATED",
CURRENT$."SECONDARY",
CURRENT$."NAMESPACE",
CURRENT$."EDITION_NAME",
CURRENT$."DATA",
CURRENT$."ID",
LOG$.CHANGE_VECTOR$$
FROM (SELECT "OLD_TABLE"."OWNER" "OWNER",
"OLD_TABLE"."OBJECT_NAME" "OBJECT_NAME",
"OLD_TABLE"."SUBOBJECT_NAME" "SUBOBJECT_NAME",
"OLD_TABLE"."OBJECT_ID" "OBJECT_ID",
"OLD_TABLE"."DATA_OBJECT_ID" "DATA_OBJECT_ID",
"OLD_TABLE"."OBJECT_TYPE" "OBJECT_TYPE",
"OLD_TABLE"."CREATED" "CREATED",
"OLD_TABLE"."LAST_DDL_TIME" "LAST_DDL_TIME",
"OLD_TABLE"."TIMESTAMP" "TIMESTAMP",
"OLD_TABLE"."STATUS" "STATUS",
"OLD_TABLE"."TEMPORARY" "TEMPORARY",
"OLD_TABLE"."GENERATED" "GENERATED",
"OLD_TABLE"."SECONDARY" "SECONDARY",
"OLD_TABLE"."NAMESPACE" "NAMESPACE",
"OLD_TABLE"."EDITION_NAME" "EDITION_NAME",
"OLD_TABLE"."DATA" "DATA",
"OLD_TABLE"."ID" "ID"
FROM "LCSONTOS"."OLD_TABLE" "OLD_TABLE") CURRENT$,
(SELECT MLOG$."ID",
SYS.MVAggRawBitOr(MLOG$.CHANGE_VECTOR$$) CHANGE_VECTOR$$
FROM "LCSONTOS"."MLOG$_OLD_TABLE" MLOG$
WHERE "SNAPTIME$$" > :1
AND ("DMLTYPE$$" != 'D')
GROUP BY MLOG$."ID") LOG$
WHERE CURRENT$."ID" = LOG$."ID"
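It is also easy to confirm that the materialized view log really contains the freshly inserted rows (the MLOG$_OLD_TABLE name is taken from the query above):
select dmltype$$, count(*)
from   mlog$_old_table
group  by dmltype$$;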
I have also traced it, but looking at the results did not give me any clue.
Elapsed times include waiting on following events:
Event waited on                            Times   Max. Wait  Total Waited
----------------------------------------  Waited  ----------  ------------
db file sequential read                         2        0.03          0.04
Yes, it is waiting on sequential reads, exactly the same way as it did in my case when I was on-site.
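In case you want to dig into the raw trace yourself, on 11g the trace file written by DBMS_MONITOR can be located with:
select value
from   v$diag_info
where  name = 'Default Trace File';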
Obviously we hit a bug, but unfortunately I did not find anything on Metalink that comes close to this issue. I was lucky, because the table I had to reorganize is insert-only, so I created a log table with an insert trigger to catch new records while CTAS was running (see the sketch below). After that my script appended those newly inserted records, created the necessary dependent objects (indexes, keys, etc.) and collected statistics.
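This is roughly how the workaround looked; all names here are illustrative, the partitioning clauses are omitted for brevity, and the approach is only safe because the source table is insert-only:
-- 1) Log table and trigger to catch rows inserted during the copy.
create table trace_table_log as
select * from trace_table where 1 = 2;

create or replace trigger trg_trace_table_log
after insert on trace_table
for each row
begin
  insert into trace_table_log
  values (:new.id, :new.created_at, :new.payload);
end;
/

-- 2) Bulk copy with the desired storage options.
create table trace_table_new
  lob (payload) store as securefile (compress medium)
as
select * from trace_table;

-- 3) Append the rows caught by the trigger; the anti-join guards
--    against rows committed after the trigger was created but
--    before the CTAS took its snapshot, which would otherwise
--    be inserted twice.
insert into trace_table_new
select l.*
from   trace_table_log l
where  not exists (select 1 from trace_table_new n where n.id = l.id);

-- 4) Drop the trigger, re-create indexes/keys, gather statistics,
--    then swap the tables with a rename.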