OLTP type 64 compression and ‘enq: TX – allocate ITL entry’ on Exadata


Recently we’ve seen a strange problem with deadlocks at a client database on Exadata, Oracle version 11.2.0.4. Wait event analysis showed that sessions were waiting on the “enq: TX – allocate ITL entry” event. It was strange because there were at most two sessions issuing DML, and at least two ITL slots are available in the affected tables’ blocks. I made some block dumps and found that the affected blocks contain OLTP-compressed data, Compression Type = 64 (DBMS_COMPRESSION Constants – Compression Types). The table does have the “compress for query high” attribute, but direct path inserts have never been used against it, so I wasn’t expecting any compressed data here. Compression Type 64 is a very specific type: Oracle migrates data out of HCC compression units into Type 64 compression blocks when HCC-compressed data is updated. We ran some tests and were able to reproduce Type 64 compression without direct path operations. Here is one of the test cases (an MSSM tablespace was used, but the problem is reproducible with ASSM too).
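For readers who want to reproduce the wait event analysis, a minimal check could look like the sketch below. It assumes access to v$active_session_history (Diagnostics Pack licence) and is not the exact query from the original investigation.

-- sketch: recent TX enqueue waits in the last hour of ASH samples
select event, count(*) samples
  from v$active_session_history
 where sample_time > sysdate - 1/24
   and event like 'enq: TX%'
 group by event
 order by samples desc;

Now, the test case itself: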

create table z_tst(num number, rn number, name varchar2(200)) compress for query high partition by list(num)
(
partition p1 values(1),
partition p2 values(2));

Table created.

insert into z_tst select mod(rownum , 2) + 1, rownum, lpad('1',20,'a') from dual connect by level <= 2000;

2000 rows created.

commit;

Commit complete.

select dbms_compression.get_compression_type(user, 'Z_TST', rowid) comp, count(*)  cnt from Z_tst
group by dbms_compression.get_compression_type(user, 'Z_TST', rowid);

      COMP        CNT
---------- ----------
        64       2000

select  dbms_rowid.rowid_block_number(rowid) blockno, count(*) cnt from z_tst a
group by dbms_rowid.rowid_block_number(rowid);

   BLOCKNO        CNT
---------- ----------
      3586        321
      2561        679
      3585        679
      2562        321

select name, value from v$mystat a, v$statname b where a.statistic# = b.statistic# and lower(name) like '%compress%' and value != 0;

NAME                                                    VALUE
-------------------------------------------------- ----------
heap block compress                                        14
HSC OLTP Compressed Blocks                                  4
HSC Compressed Segment Block Changes                     2014
HSC OLTP Non Compressible Blocks                            2
HSC OLTP positive compression                              14
HSC OLTP inline compression                                14
EHCC Block Compressions                                     4
EHCC Attempted Block Compressions                          14

alter system dump datafile 16 block min 2561 block max 2561;
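The dump goes to the session’s trace file. A quick way to locate it (a sketch, assuming 11g+ with the ADR in place; v$diag_info is the standard view):

select value from v$diag_info where name = 'Default Trace File';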

We can see that all rows are compressed with compression type 64. From the session statistics we can see that HCC was in place before the data was migrated into OLTP Compressed Blocks. I think this is not expected behavior, and there should not be any compression involved at all. Let’s take a look into the block dump:

Block header dump:  0x04000a01
 Object id on Block? Y
 seg/obj: 0x6bfdc  csc: 0x06.f5ff8a1  itc: 2  flg: -  typ: 1 - DATA
     fsl: 0  fnx: 0x0 ver: 0x01

 Itl           Xid                  Uba         Flag  Lck        Scn/Fsc
0x01   0x0055.018.0002cd54  0x00007641.5117.2f  --U-  679  fsc 0x0000.0f5ffb9a
0x02   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
bdba: 0x04000a01
data_block_dump,data header at 0x7fbb48919a5c
===============
tsiz: 0x1fa0
hsiz: 0x578
pbl: 0x7fbb48919a5c
     76543210
flag=-0----X-
ntab=2
nrow=680
frre=-1
fsbo=0x578
fseo=0x5b0
avsp=0x6
tosp=0x6
        r0_9ir2=0x1
        mec_kdbh9ir2=0x1
                      76543210
        shcf_kdbh9ir2=----------
                  76543210
        flag_9ir2=--R-LNOC      Archive compression: N
                fcls_9ir2[3]={ 0 32768 32768 }
                perm_9ir2[3]={ 0 2 1 }

It’s a bit odd that avsp (available space) and tosp (total space) = 6 bytes. So there is no free space in the block at all, whereas I was expecting to see the 10% pctfree default here, since this is OLTP compression.
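(The segment-level settings can be confirmed from the dictionary. The sketch below is just a sanity check against the Z_TST partitions created above; user_tab_partitions is the standard view.)

-- pctfree and compression attributes of the test partitions
select partition_name, pct_free, compression, compress_for
  from user_tab_partitions
 where table_name = 'Z_TST';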
Let’s try to update two different rows in the same type 64 compressed block:

select rn from z_tst where DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID) = 3586 and rownum <= 4;

        RN
----------
      1360
      1362
      1364
      1366

From the first session:
update z_tst set name = 'a' where rn = 1360;
From the second:
update z_tst set name = 'a' where rn = 1362;
-- waiting here

The second session waits on the “enq: TX – allocate ITL entry” event.
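From a third session, the wait and the blocking session can be confirmed with something along these lines (a sketch only; blocking_session in v$session is populated for enqueue waits):

-- sketch: who is waiting on the ITL enqueue and who is blocking them
select sid, event, blocking_session, row_wait_file#, row_wait_block#
  from v$session
 where event = 'enq: TX - allocate ITL entry';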

Summary

In some cases, HCC and subsequent OLTP type 64 compression can take place even without direct path operations (probably a bug).

A type 64 OLTP-compressed block, in contrast to a regular OLTP-compressed block, can be left with no free space at all after the data load.

During DML operations, the whole type 64 compressed block gets locked (probably a bug).

It is better not to set HCC attributes on segments until a real HCC compression operation takes place; a way to check and clear the attribute is sketched below.
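For example, the attribute on the test table above could be checked and cleared for future inserts along these lines (a sketch only; clearing the attribute does not decompress blocks that are already compressed, and the exact syntax should be verified on your version):

-- which partitions still carry the HCC attribute
select partition_name, compress_for
  from user_tab_partitions
 where table_name = 'Z_TST';

-- reset the attribute so that future loads are not compressed; existing data is unchanged
alter table z_tst modify partition p1 nocompress;
alter table z_tst modify partition p2 nocompress;
alter table z_tst modify default attributes nocompress;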

 


4 Comments

I don’t have hands on an Exadata platform, so I can’t test it, but I’d be interested to check the following:

a) is the ITL wait related to the block where the original rows are, or the block where the rows are going to be migrated to by the update – what do the two ITLs look like after the first update? If the former, do you take two ITL entries, one for the row and one for the token affected by the update?

b) is the “whole block locked” effect due to the fact that all the rows in the block are dependent on the same token (note that your “nrow” = 680, but your “Lck” = 679, so there’s just one row in the token table and you’re trying to change the number of rows dependent on that token from both sessions). If you set the data to have two tokens in the block, could you update rows from two separate sessions if the updates related to different tokens?

Vyacheslav Rasskazov
December 23, 2014 11:35 pm

Jonathan, I’ve made some tests with row updates related to different tokens. First of all, I used a more traditional way of producing type 64 compression: updating HCC blocks.
create table hc_tst compress for query high as select rownum rn, mod(rownum, 10) name from dual connect by level <= 1000;

Table created.

update HC_TST set name = name;

1000 rows updated.

commit;

Commit complete.

select a.*, dbms_rowid.rowid_block_number(rowid) blk from hc_tst a where dbms_compression.get_compression_type('RASSKAZOV', 'HC_TST', rowid) = 64 and rownum <= 10;

        RN       NAME        BLK
---------- ---------- ----------
         1          1        130
         2          2        130
         3          3        130
         4          4        130
         5          5        130
         6          6        130
         7          7        130
         8          8        130
         9          9        130
        10          0        130

10 rows selected.
Now let’s try to update the first and second rows, which have different tokens, from different sessions.
update hc_tst set name = name where rn = 1;

1 row updated.
Excerpts from the block 130 dump:
data_block_dump,data header at 0x7f9548de1a5c
===============
tsiz: 0x1fa0
hsiz: 0x5d8
pbl: 0x7f9548de1a5c
76543210
flag=-0------
ntab=2
nrow=733
frre=-1
fsbo=0x5d8
fseo=0x8fe
avsp=0x8
tosp=0x8
r0_9ir2=0x0
mec_kdbh9ir2=0x0
76543210
shcf_kdbh9ir2=----------
76543210
flag_9ir2=--R-LN-C Archive compression: N
fcls_9ir2[0]={ }
0x16:pti[0] nrow=10 offs=0
0x1a:pti[1] nrow=723 offs=10
...
Itl Xid Uba Flag Lck Scn/Fsc
0x01 0x01ad.007.0002363b 0x0000306d.1480.0f C--- 0 scn 0x0006.1a165006
0x02 0x0006.00d.00098daa 0x000042b2.f76f.37 ---- 1 fsc 0x0000.00000000
...
tab 1, row 0, @0x8fe
tl: 9 fb: --H----- lb: 0x2 cc: 0
nrid: 0x04000083.115
bindmp: 20 02 00 04 00 00 83 01 15

We have, as expected, 10 tokens in the block. ITL slot 0x02 is occupied by our update, and the row has migrated. Here are the ITL slots from the block the row migrated to:
Itl Xid Uba Flag Lck Scn/Fsc
0x01 0x01ad.007.0002363b 0x0000307b.1480.3c C--- 0 scn 0x0006.1a165006
0x02 0x0006.00d.00098daa 0x000042b2.f76f.36 ---- 1 fsc 0x0000.00000000
0x03 0x0000.000.00000000 0x00000000.0000.00 C--- 0 scn 0x0000.00000000
As far as I remember, every migrated row requires its own ITL slot, so everything looks as expected to me. Now let’s try to update the second row from another session:
update hc_tst set name = name where rn = 2;

Hanging here.
select event, row_wait_block# from v$session where event like 'enq%';

EVENT                          ROW_WAIT_BLOCK#
------------------------------ ---------------
enq: TX - allocate ITL entry               130

v$session points to the source block.

Yury Pudovchenko
December 26, 2014 5:01 am

Hi, Vyacheslav

What is the Exadata SW version? (imageinfo)
Versions 11.2.3.3.1+ use row-level locking.
Previous versions use Compression Unit-level locking.

Regards,
Yury

Vyacheslav Rasskazov
December 29, 2014 4:32 pm

Hi Yury,

Thank you for the information about row-level locking in 11.2.3.3.1. In my tests I used version 11.2.3.3.0.131014.1. It needs to be said, though, that I’m testing type 64 (OLTP) compression, where the data is not located in a CU. Still, it would be good to repeat the tests with 11.2.3.3.1+, since the behavior could well have changed.

