Zipper Table Overview

Slowly Changing Dimensions
We usually maintain dimension information, such as users' phone numbers, in a dimension table. Over time, however, some of that information changes; this is the so-called slowly changing dimension (SCD). Note that "slowly" is relative to the fact table, which changes far more quickly.

There are several common ways to handle slowly changing dimensions:
1. Keep only the latest dimension record for each user.

This approach is simple and blunt: only the latest value matters, which guarantees that each dimension value is unique. The drawback is that history is lost, so it does not work in scenarios that need to look back at past data; you may have to fall back to querying the raw data, which is very inconvenient.
2. Keep only the earliest dimension record for each user.

This amounts to "fill it in once, never change it": when joining against fact data, you are likely to pick up stale dimension values. For example, if a user has already changed their phone number but the dimension table still holds the original one, the join result will be wrong. It is also a poor user experience, since a single slip of the finger at entry time can never be corrected.
3. Add a new row for every change, keeping all historical values in the dimension table.

This is already very close to a zipper table: each time a user changes a value, a new row is appended. The questions that need careful thought are how to distinguish historical rows from new ones, and how to delimit each row's validity period.
4. Add a new column for every change, keeping all historical values in the dimension table.

The advantage is that the row count stays fixed and only columns are added. The drawbacks are obvious, though: the table schema keeps changing, and there is no way to know in advance how many columns will eventually be needed.
Zipper Table Definition

A zipper table is a table that records how an entity changes from its initial state to its current state. It is used mainly when dimension values change over time, i.e. the slowly changing dimensions discussed above.

For example, suppose a dimension table records users' phone numbers. One day a user switches numbers, and the dimension table must be updated accordingly. A zipper table lets us record the change while preserving history, so we can query both past and current values. In that sense, the zipper table is arguably the best solution to the slowly-changing-dimension problem.

A simple zipper table looks like this:
userid  tel  start_dt  end_dt
01      111  20240101  20240601
01      222  20240602  99991231
02      333  20240101  99991231

Each row holds a user's attribute value together with its validity date range; for the latest record, the end date is 99991231. User 01's phone number changed once, so that user has two rows.
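The two lookups a zipper table supports — the value as of a given date, and the current value — can be sketched with a toy in-memory model in Python (illustrative only, not part of the original SQL; the helper names are made up):

```python
# A toy model of the zipper table above. Validity windows are 'yyyyMMdd'
# strings, which compare correctly as plain strings.
ZIPPER = [
    {"userid": "01", "tel": "111", "start_dt": "20240101", "end_dt": "20240601"},
    {"userid": "01", "tel": "222", "start_dt": "20240602", "end_dt": "99991231"},
    {"userid": "02", "tel": "333", "start_dt": "20240101", "end_dt": "99991231"},
]

def value_as_of(rows, userid, dt):
    """Return the attribute value that was valid for `userid` on date `dt`."""
    for row in rows:
        if row["userid"] == userid and row["start_dt"] <= dt <= row["end_dt"]:
            return row["tel"]
    return None

def current_value(rows, userid):
    """The open chain (end_dt = '99991231') always holds the latest value."""
    return value_as_of(rows, userid, "99991231")
```

Because 99991231 marks the open chain, the current value falls out of the same range predicate as any historical lookup.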
Implementing a Zipper Table

Regular Zipper Table

Historical Data

Suppose we have the following batch of data, each row carrying a user's attribute value together with the reporting date and a timestamp (in seconds):
with data1 as (
    select '01' as userid, 'ab' as addr, '20220101' as dt, 1641039513 as ts union all
    select '01' as userid, 'ab' as addr, '20220103' as dt, 1641211200 as ts union all
    select '01' as userid, 'cd' as addr, '20220108' as dt, 1641607200 as ts union all
    select '02' as userid, 'ab' as addr, '20220101' as dt, 1641039480 as ts union all
    select '02' as userid, 'bc' as addr, '20220104' as dt, 1641261600 as ts union all
    select '02' as userid, 'cd' as addr, '20220109' as dt, 1641639600 as ts union all
    select '03' as userid, 'ab' as addr, '20220101' as dt, 1641038400 as ts union all
    select '03' as userid, 'cd' as addr, '20220101' as dt, 1641002400 as ts union all
    select '03' as userid, 'ab' as addr, '20220107' as dt, 1641520800 as ts
)

The processing rules for the historical data are as follows.
1. Within the same day, keep only the latest record per user.

select userid, addr, dt, ts
from (
    select userid, addr, dt, ts,
        row_number() over (partition by userid, dt order by ts desc) rn
    from data1
) ta
where rn = 1;

2. Take the earliest record for each user and attribute value.
with data2 as (
    select userid, addr, dt, ts
    from (
        select userid, addr, dt, ts,
            row_number() over (partition by userid, dt order by ts desc) rn
        from data1
    ) ta
    where rn = 1
)
select userid, addr, dt, ts
from (
    select userid, addr, dt, ts,
        row_number() over (partition by userid, addr order by dt) rn
    from data2
) tb
where rn = 1;

After this step the data looks like this:

userid  addr  dt        ts
01      ab    20220101  1641039513
01      cd    20220108  1641607200
02      ab    20220101  1641039480
02      bc    20220104  1641261600
02      cd    20220109  1641639600
03      ab    20220101  1641038400
3. Fetch the next row's date for each row and derive the end date.

In this step, for every user and attribute value we look up the next row in order to determine when the current value stops being valid. The rule for the end date: if there is no next row, fill in 99991231; otherwise use next_dt minus one day.

We put the previous step's result into data3 (some of the code is abbreviated):

with data3 as (
    select userid, addr, dt, ts
    from (
        select userid, addr, dt, ts,
            row_number() over (partition by userid, addr order by dt) rn
        from data2
    ) tb
    where rn = 1
)
select
    userid, addr, dt start_dt,
    if(next_dt is null, '99991231',
       date_format(date_add(from_unixtime(unix_timestamp(next_dt, 'yyyyMMdd'), 'yyyy-MM-dd'), -1), 'yyyyMMdd')) end_dt
from (
    select userid, addr, dt, ts,
        lead(dt) over (partition by userid order by dt) next_dt
    from data3
) tc

The result is:

userid  addr  start_dt  end_dt
01      ab    20220101  20220107
01      cd    20220108  99991231
02      ab    20220101  20220103
02      bc    20220104  20220108
02      cd    20220109  99991231
03      ab    20220101  99991231
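The three steps above can be mirrored in plain Python to sanity-check the logic end to end (an illustrative sketch, not from the original; the function name and row layout are made up):

```python
from datetime import datetime, timedelta

# The same sample rows as data1: (userid, addr, dt, ts).
SAMPLE = [
    ("01", "ab", "20220101", 1641039513), ("01", "ab", "20220103", 1641211200),
    ("01", "cd", "20220108", 1641607200), ("02", "ab", "20220101", 1641039480),
    ("02", "bc", "20220104", 1641261600), ("02", "cd", "20220109", 1641639600),
    ("03", "ab", "20220101", 1641038400), ("03", "cd", "20220101", 1641002400),
    ("03", "ab", "20220107", 1641520800),
]

def build_history(rows):
    """(userid, addr, dt, ts) tuples -> (userid, addr, start_dt, end_dt) rows."""
    # Step 1: within one day, keep only the record with the largest ts.
    latest = {}
    for userid, addr, dt, ts in rows:
        key = (userid, dt)
        if key not in latest or ts > latest[key][3]:
            latest[key] = (userid, addr, dt, ts)
    # Step 2: for each (userid, addr), keep only the earliest date.
    earliest = {}
    for userid, addr, dt, ts in latest.values():
        key = (userid, addr)
        if key not in earliest or dt < earliest[key][2]:
            earliest[key] = (userid, addr, dt, ts)
    # Step 3: end_dt is the next row's start date minus one day, else 99991231.
    per_user = {}
    for userid, addr, dt, _ in sorted(earliest.values(), key=lambda r: (r[0], r[2])):
        per_user.setdefault(userid, []).append((addr, dt))
    out = []
    for userid, recs in sorted(per_user.items()):
        for i, (addr, dt) in enumerate(recs):
            if i + 1 < len(recs):
                nxt = datetime.strptime(recs[i + 1][1], "%Y%m%d") - timedelta(days=1)
                end_dt = nxt.strftime("%Y%m%d")
            else:
                end_dt = "99991231"
            out.append((userid, addr, dt, end_dt))
    return out
```

Run against the sample, this reproduces the six zipper rows shown above, including the open chains for users 01, 02 and 03.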
The complete code:

with data1 as (
    select '01' as userid, 'ab' as addr, '20220101' as dt, 1641039513 as ts union all
    select '01' as userid, 'ab' as addr, '20220103' as dt, 1641211200 as ts union all
    select '01' as userid, 'cd' as addr, '20220108' as dt, 1641607200 as ts union all
    select '02' as userid, 'ab' as addr, '20220101' as dt, 1641039480 as ts union all
    select '02' as userid, 'bc' as addr, '20220104' as dt, 1641261600 as ts union all
    select '02' as userid, 'cd' as addr, '20220109' as dt, 1641639600 as ts union all
    select '03' as userid, 'ab' as addr, '20220101' as dt, 1641038400 as ts union all
    select '03' as userid, 'cd' as addr, '20220101' as dt, 1641002400 as ts union all
    select '03' as userid, 'ab' as addr, '20220107' as dt, 1641520800 as ts
)
, data2 as (
    select userid, addr, dt, ts
    from (
        select userid, addr, dt, ts,
            row_number() over (partition by userid, dt order by ts desc) rn
        from data1
    ) ta
    where rn = 1
)
, data3 as (
    select userid, addr, dt, ts
    from (
        select userid, addr, dt, ts,
            row_number() over (partition by userid, addr order by dt) rn
        from data2
    ) tb
    where rn = 1
)
select
    userid, addr, dt start_dt,
    if(next_dt is null, '99991231',
       date_format(date_add(from_unixtime(unix_timestamp(next_dt, 'yyyyMMdd'), 'yyyy-MM-dd'), -1), 'yyyyMMdd')) end_dt
from (
    select userid, addr, dt, ts,
        lead(dt) over (partition by userid order by dt) next_dt
    from data3
) tc;

Daily Incremental Data
The incremental data is:

with new_data1 as (
    select '01' as userid, 'ab' as addr, '20220121' as dt, 1642723200 as ts union all
    select '02' as userid, 'cd' as addr, '20220121' as dt, 1642723200 as ts union all
    select '04' as userid, 'ef' as addr, '20220121' as dt, 1642723200 as ts union all
    select '04' as userid, 'xg' as addr, '20220121' as dt, 1642723300 as ts union all
    select '05' as userid, 'xy' as addr, '20220127' as dt, 1642723200 as ts
)

Processing the incremental data:
1. Keep only the latest record per user.

Processing the incremental data is straightforward: since we usually read a single day's data incrementally, we only need to make sure each user keeps one latest record.

select userid, addr, dt, ts
from (
    select userid, addr, dt, ts,
        row_number() over (partition by userid, dt order by ts desc) rn
    from new_data1
) ta
where rn = 1;

After this step, each user is left with only the latest record (user 04's earlier 'ef' row is dropped in favor of 'xg').
2. Set every end date to 99991231.

with new_data2 as (
    select userid, addr, dt, ts
    from (
        select userid, addr, dt, ts,
            row_number() over (partition by userid, dt order by ts desc) rn
        from new_data1
    ) ta
    where rn = 1
)
select userid, addr, dt start_dt, '99991231' end_dt
from new_data2;

Merging Historical and Incremental Data
1. Full join of historical and incremental data.

Take the open-chain rows of the historical data (end date 99991231) and full join them with the incremental data:

select
    t1.userid old_userid, t1.addr old_addr, t1.start_dt old_start_dt, t1.end_dt old_end_dt,
    t2.userid new_userid, t2.addr new_addr, t2.start_dt new_start_dt, t2.end_dt new_end_dt
from (
    select userid, addr, start_dt, end_dt
    from history_data
    where end_dt = '99991231'
) t1
full join new_data t2
on t1.userid = t2.userid;

The full join yields:

old_userid  old_addr  old_start_dt  old_end_dt  new_userid  new_addr  new_start_dt  new_end_dt
01          cd        20220108      99991231    01          ab        20220121      99991231
02          cd        20220109      99991231    02          cd        20220121      99991231
03          ab        20220101      99991231    NULL        NULL      NULL          NULL
NULL        NULL      NULL          NULL        04          xg        20220121      99991231
NULL        NULL      NULL          NULL        05          xy        20220121      99991231
2. Case handling after the full join.

a. The old and new values are equal, or they differ but the old start date is not earlier: keep only the old row.

This covers two situations:

First, when the old and new values are equal, keep only the old row, since in most cases the old row carries the earlier date. When data is backfilled, however, the new row's date may actually be earlier than the old row's; in that case we should still keep only the old row.

Second, when the values differ and the old start date is not earlier than the new start date, a backfill has again occurred, and again only the old row is kept.

select
    old_userid userid, old_addr addr, old_start_dt start_dt, old_end_dt end_dt
from data_join
where old_addr = new_addr or (old_addr != new_addr and old_start_dt >= new_start_dt);

In the example, this rule handles one row: user 02, whose old and new values are both 'cd'.
b. The values differ; keep the new row when it is not null, otherwise keep the old row.

This covers three situations: only the old row exists, so keep it; only the new row exists, so keep it; both exist but differ, so keep only the new row.

select
    coalesce(new_userid, old_userid) userid,
    coalesce(new_addr, old_addr) addr,
    coalesce(new_start_dt, old_start_dt) start_dt,
    coalesce(new_end_dt, old_end_dt) end_dt
from data_join
where old_addr is null or new_addr is null or (old_addr != new_addr and old_start_dt < new_start_dt);

In the example, this rule handles the rows for users 01 (new value kept), 03 (old side only), 04 and 05 (new side only).
c. Both sides are non-null and differ: keep the old row, but adjust its end date.

The new side of such a row has already been kept by case (b); the old side now needs to be closed, i.e. its end date is filled with the day before the new row's start date.

select
    old_userid userid,
    old_addr addr,
    old_start_dt start_dt,
    date_format(from_unixtime(unix_timestamp(new_start_dt, 'yyyyMMdd') - 24*3600, 'yyyy-MM-dd'), 'yyyyMMdd') end_dt
from data_join
where old_addr != new_addr and old_start_dt < new_start_dt;

In the example, this rule handles user 01's old row, which is closed with end date 20220120.
The complete code:

with history_data as (
    select '01' as userid, 'ab' as addr, '20220101' as start_dt, '20220107' as end_dt union all
    select '01' as userid, 'cd' as addr, '20220108' as start_dt, '99991231' as end_dt union all
    select '02' as userid, 'ab' as addr, '20220101' as start_dt, '20220103' as end_dt union all
    select '02' as userid, 'bc' as addr, '20220104' as start_dt, '20220108' as end_dt union all
    select '02' as userid, 'cd' as addr, '20220109' as start_dt, '99991231' as end_dt union all
    select '03' as userid, 'ab' as addr, '20220101' as start_dt, '99991231' as end_dt
)
, new_data as (
    select '01' as userid, 'ab' as addr, '20220121' as start_dt, '99991231' as end_dt union all
    select '02' as userid, 'cd' as addr, '20220121' as start_dt, '99991231' as end_dt union all
    select '04' as userid, 'xg' as addr, '20220121' as start_dt, '99991231' as end_dt union all
    select '05' as userid, 'xy' as addr, '20220121' as start_dt, '99991231' as end_dt
)
, data_join as (
    select
        t1.userid old_userid, t1.addr old_addr, t1.start_dt old_start_dt, t1.end_dt old_end_dt,
        t2.userid new_userid, t2.addr new_addr, t2.start_dt new_start_dt, t2.end_dt new_end_dt
    from (
        select userid, addr, start_dt, end_dt
        from history_data
        where end_dt = '99991231'
    ) t1
    full join new_data t2
    on t1.userid = t2.userid
)
select
    old_userid userid, old_addr addr, old_start_dt start_dt, old_end_dt end_dt
from data_join
where old_addr = new_addr or (old_addr != new_addr and old_start_dt >= new_start_dt)
union all
select
    coalesce(new_userid, old_userid) userid,
    coalesce(new_addr, old_addr) addr,
    coalesce(new_start_dt, old_start_dt) start_dt,
    coalesce(new_end_dt, old_end_dt) end_dt
from data_join
where old_addr is null or new_addr is null or (old_addr != new_addr and old_start_dt < new_start_dt)
union all
select
    old_userid userid,
    old_addr addr,
    old_start_dt start_dt,
    date_format(from_unixtime(unix_timestamp(new_start_dt, 'yyyyMMdd') - 24*3600, 'yyyy-MM-dd'), 'yyyyMMdd') end_dt
from data_join
where old_addr != new_addr and old_start_dt < new_start_dt;

The final result of the merge (previously closed history rows are untouched and not shown):

userid  addr  start_dt  end_dt
02      cd    20220109  99991231
01      ab    20220121  99991231
03      ab    20220101  99991231
04      xg    20220121  99991231
05      xy    20220121  99991231
01      cd    20220108  20220120
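The three merge rules can likewise be checked with a small Python model (an illustrative sketch, not from the original; the function name and row layout are made up, and the end date of every input row is implicitly 99991231 since only open chains are merged):

```python
from datetime import datetime, timedelta

# Open-chain history rows and the day's deduplicated incremental rows,
# each as (userid, addr, start_dt); end_dt is implicitly '99991231'.
HIST_OPEN = [("01", "cd", "20220108"), ("02", "cd", "20220109"), ("03", "ab", "20220101")]
NEW_DAY = [("01", "ab", "20220121"), ("02", "cd", "20220121"),
           ("04", "xg", "20220121"), ("05", "xy", "20220121")]

def merge_zipper(open_hist, new_rows):
    """Return (userid, addr, start_dt, end_dt) rows per merge rules a/b/c."""
    hist = {u: (addr, sd) for u, addr, sd in open_hist}
    new = {u: (addr, sd) for u, addr, sd in new_rows}
    out = []
    for u in sorted(set(hist) | set(new)):
        o, n = hist.get(u), new.get(u)
        if o and n and o[0] != n[0] and o[1] < n[1]:
            # Rule c: close the old chain the day before the new start date...
            close = (datetime.strptime(n[1], "%Y%m%d") - timedelta(days=1)).strftime("%Y%m%d")
            out.append((u, o[0], o[1], close))
            # ...and rule b: keep the new row as the open chain.
            out.append((u, n[0], n[1], "99991231"))
        elif o and n:
            # Rule a: equal values, or a backfilled (not-later) new date -> keep old.
            out.append((u, o[0], o[1], "99991231"))
        else:
            # Rule b: only one side exists -> keep whichever is present.
            kept = o or n
            out.append((u, kept[0], kept[1], "99991231"))
    return out
```

Run on the sample data, this reproduces the merge result above: user 01's old chain is closed at 20220120 and a new open chain starts at 20220121, while all other users contribute one open row each.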
Partitioned Zipper Table

For a partitioned zipper table, simply use end_dt as the partition key. Each run takes the open-chain rows of the historical data together with the day's incremental data; the merge output then consists partly of rows for the 99991231 partition and partly of rows for a newly closed date partition (usually the day before the incremental date). Writing the result with dynamic partitioning overwrites only those specific partitions.

The advantage of a partitioned zipper table:

Writes only touch the affected partitions instead of overwriting the whole table, which is a significant win once the table grows large.
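Why only a couple of partitions need rewriting can be seen by grouping the merge output by end_dt (a hypothetical helper over the merged rows from the running example, for illustration only):

```python
def partitions_touched(merged_rows):
    """Group merged zipper rows by their end_dt partition key."""
    parts = {}
    for userid, addr, start_dt, end_dt in merged_rows:
        parts.setdefault(end_dt, []).append((userid, addr, start_dt))
    return parts

# The merge result from the running example: one closed row, five open rows.
MERGED = [
    ("01", "cd", "20220108", "20220120"),
    ("01", "ab", "20220121", "99991231"),
    ("02", "cd", "20220109", "99991231"),
    ("03", "ab", "20220101", "99991231"),
    ("04", "xg", "20220121", "99991231"),
    ("05", "xy", "20220121", "99991231"),
]
```

Grouping MERGED this way yields exactly two partition keys — the open partition 99991231 and the freshly closed 20220120 — so a dynamic-partition insert overwrite leaves every other historical partition untouched.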