First, the concepts:
- Row migration is usually caused by UPDATE operations: the updated row no longer fits in its original block, so the whole row moves to another block, leaving a forwarding pointer behind.
- Row chaining is usually caused by INSERT operations: the row is too large for any single block and is stored across several blocks.
- SQL statements that create or query chained/migrated rows do extra I/O, so performance suffers.
- To diagnose chained/migrated rows, use the ANALYZE command and query the v$sysstat view and user_tables.
- To remove chained/migrated rows, use a higher PCTFREE, ALTER TABLE MOVE, or exp/imp.
- In most cases chaining is unavoidable, especially for tables with large columns (LONG, LOB, etc.). If many tables have lots of chained rows and their average row length is not that large, consider rebuilding the database with a larger block size.
- Tables with more than 255 columns may force (intra-block) chaining.
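Besides the system-wide v$sysstat counter mentioned above, the same statistic can be checked for just the current session. A minimal sketch (assumes SELECT privileges on the v$ views):

```sql
-- Per-session counterpart of v$sysstat: how many continued-row
-- fetches this session has performed so far. If the value climbs
-- while you query a single table, suspect chained/migrated rows.
select sn.name, ms.value
  from v$mystat ms, v$statname sn
 where ms.statistic# = sn.statistic#
   and sn.name = 'table fetch continued row';
```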
Now the remedies:
- ALTER TABLE MOVE (note: indexes and materialized views become invalid and must be rebuilt)
- Increase PCTFREE (only affects newly inserted rows; or see the fine-grained method below)
- Export/import (to keep data consistent, it is best to stop the application first)
- Avoid tables with more than 255 columns
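The first two remedies are usually combined. A sketch (the table and the index name `test_idx` are placeholders):

```sql
-- Raise PCTFREE so future updates have room to grow in-block,
-- then rebuild the table: MOVE relocates every row, which fixes
-- existing migrated rows but marks all indexes UNUSABLE.
alter table test pctfree 30;
alter table test move;

-- Find the now-unusable indexes and rebuild each one.
select index_name, status from user_indexes where table_name = 'TEST';
alter index test_idx rebuild;  -- repeat for each affected index
```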
--row migration simulation
create table test (c1 varchar2(20));
begin
for i in 1..1000 loop
insert into test values (null);
end loop;
end;
/
commit;
analyze table test compute statistics;
select pct_free, pct_used, avg_row_len, chain_cnt, blocks from user_tables where table_name = 'TEST';
--chain_cnt stays null until the table has been analyzed
select chain_cnt,
round(chain_cnt/num_rows*100,2) pct_chained,
avg_row_len, pct_free , pct_used, blocks
from user_tables
where table_name = 'TEST';
--trigger row migration
update test set c1='hello row move here';
analyze table test compute statistics;
select pct_free, pct_used, avg_row_len, chain_cnt, blocks from user_tables where table_name = 'TEST';
--confirm
@?/rdbms/admin/utlchain.sql
analyze table test list chained rows;
select count(*) from chained_rows;
--fix
alter table test move; --or export and re-import; CTAS may fail on LONG columns
analyze table test compute statistics;
select pct_free, pct_used, avg_row_len, chain_cnt, blocks from user_tables where table_name = 'TEST';
select name, value from v$sysstat where name = 'table fetch continued row';
In AWR, look at the "table fetch continued row" statistic.
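The history behind that AWR figure can also be pulled per snapshot from the AWR repository views. A sketch (the counter is cumulative since instance startup, so compare deltas between consecutive snapshots):

```sql
-- Snapshot-by-snapshot history of continued-row fetches.
select sn.snap_id,
       sn.end_interval_time,
       st.value  -- cumulative counter; diff consecutive rows
  from dba_hist_sysstat st, dba_hist_snapshot sn
 where st.snap_id = sn.snap_id
   and st.dbid = sn.dbid
   and st.instance_number = sn.instance_number
   and st.stat_name = 'table fetch continued row'
 order by sn.snap_id;
```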
A fine-grained method
If row migration is not present in every block, repair only the rows that need it:
-- get the name of the table with migrated rows:
accept table_name prompt 'enter the name of the table with migrated rows: '
-- clean up from last execution
set echo off
drop table migrated_rows;
drop table chained_rows;
-- create the chained_rows table
@?/rdbms/admin/utlchain.sql
set echo on
spool fix_mig
-- list the chained and migrated rows
analyze table &table_name list chained rows;
-- copy the chained/migrated rows to another table
create table migrated_rows as
select orig.*
from &table_name orig, chained_rows cr
where orig.rowid = cr.head_rowid
and cr.table_name = upper('&table_name');
-- delete the chained/migrated rows from the original table
delete from &table_name
where rowid in (select head_rowid from chained_rows
                where table_name = upper('&table_name'));
-- copy the chained/migrated rows back into the original table
insert into &table_name select * from migrated_rows;
commit;
spool off
Sometimes a query on a table with few rows feels slow, or creating an index runs into db file sequential read waits; this may well be row migration/chaining (judge from p1). The simplest remedy is the fix SQL shown above.
References:
Row Chaining and Row Migration (Doc ID 122020.1)
How to Identify, Avoid and Eliminate Chained and Migrated Rows? (Doc ID 746778.1)
Postscript:
When the event should clearly be db file scattered read, why do we see db file sequential read instead?
select p1, count(*) from dba_hist_active_sess_history
where to_char(sample_time,'yyyymmdd hh24:mi')
between '20220722 08:30' and '20220722 09:30' and event='db file sequential read'
-- and sql_id=&sqlid
-- and user_id=(select user_id from dba_users where username='&username')
group by p1;
p1: the absolute file number
p2: the first block# being read
p3: the number of blocks (should be 1)
So by checking which tablespace the file in p1 belongs to, you can make a first-pass judgment: is this row chaining, or is another session's modification forcing reads of pre-commit images from undo?
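Mapping p1 back to a file and tablespace is straightforward; a sketch (&p1 is the file number taken from the query above):

```sql
-- p1 of db file sequential read = absolute file number.
-- If the file belongs to the UNDO tablespace, the reads are
-- consistent-read lookups; if it belongs to the table's own
-- tablespace, suspect chained/migrated rows.
select file_id, tablespace_name, file_name
  from dba_data_files
 where file_id = &p1;
```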