17. Slowly Changing Dimension(SCD) Type 2 Using Mapping Data Flow in Azure Data Factory


WafaStudies

3 years ago

53,108 views



Comments:

@rajeshmanepalli7367 - 20.11.2023 20:39

Good explanation.
@RobertLenior - 16.11.2023 13:59

In SSIS this is very easy to accomplish; why is it still so cumbersome in ADF?
@pankajmandania1785 - 12.10.2023 12:32

Good explanation. What if I have duplicate rows in the source file? How do I filter them?
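An editorial aside on the duplicate-rows question above: the usual fix is to collapse the source to one row per business key before the SCD logic (in ADF, an Aggregate grouped on the key). A minimal pandas sketch of that grouping idea; the column names `EmpId`, `EmpName`, `Salary` are illustrative, not from the video:

```python
import pandas as pd

# Hypothetical source rows with a duplicated business key (EmpId 1001).
source = pd.DataFrame({
    "EmpId":   [1001, 1001, 1002],
    "EmpName": ["Asha", "Asha", "Ravi"],
    "Salary":  [5000, 5000, 6000],
})

# Keep one row per business key, mirroring an Aggregate transformation
# that groups by EmpId and takes first() of every other column.
deduped = source.sort_values("EmpId").drop_duplicates(subset=["EmpId"], keep="first")
print(len(deduped))  # 2
```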
@likhybadri789 - 11.09.2023 07:13

I was trying SCD Type 2 using data flows to make it dynamic, but on the first run it fails because I haven't chosen the inspect-schema option to make it work for any Delta table. Any workaround for this? At the least it should be able to read the header even though the Delta table is empty, but I am getting an error on the source side when the table is empty.
@islammatkarimov2353 - 04.09.2023 23:14

Sir, it is not working: the values still remain 1 for all rows, plus it does not recognise the old data. It literally inserts all the data.
@VinodKumar-lg3bu - 24.03.2023 21:13

Good one, dude. Thanks for explaining.
@nagoorpashashaik8400 - 21.02.2023 09:32

Can we do SCD Type 2 on a Delta file using mapping data flow?
@vinnyakhil - 09.02.2023 17:21

I am getting this error: "Cannot insert explicit value for identity column in table when IDENTITY_INSERT is set to OFF." Can anyone help with this?
@vaishnosharma3248 - 02.02.2023 17:37

Can we use SCD 2 on real-time data?
@imkind1976 - 03.12.2022 23:30

I see a surrogate key is initially inserted for the target record, but the source record has no surrogate key. Can you explain how the surrogate key is mapped for the newly inserted records?
@reneepoore4435 - 10.11.2022 19:02

Good video, but all the noise from the kids in the background was very distracting and loud.
@ramubuddi8396 - 04.11.2022 15:40

I have implemented it as per your explanation, but I am facing an issue: the key column does not exist in the sink. Here is the screenshot.
@yaminikanumetta4397 - 06.09.2022 16:05

Hi, how do we implement an incremental load using a primary key? Can you please explain it?
@imkind1976 - 02.09.2022 09:45

May I know how the surrogate key is generated in the dim table?
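On the surrogate-key questions above: a common pattern (not necessarily the one in the video) is to look up the dimension's current maximum key, then add an incrementing sequence to it for the new rows, since a sequence alone would collide with existing keys on every run. A minimal Python sketch with illustrative names:

```python
import pandas as pd

dim_max_key = 3  # pretend this came from a lookup of max(SurrogateKey) in the dimension
new_rows = pd.DataFrame({"EmpId": [1004, 1005], "EmpName": ["Kiran", "Meena"]})

# Sequence 1..n (what a surrogate-key-style transformation emits),
# shifted past the existing maximum so keys never collide with history rows.
new_rows["SurrogateKey"] = range(dim_max_key + 1, dim_max_key + 1 + len(new_rows))
print(list(new_rows["SurrogateKey"]))  # [4, 5]
```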
@raghavendrareddy4765 - 22.07.2022 09:42

Great work, Maheer. How do we load a Parquet file from on-premises to an Azure SQL Database using Azure Data Factory?
@suprobhosantra - 08.05.2022 20:06

@WafaStudies I am facing a problem implementing SCD 2 using the exists transformation instead of the lookup you used here, but I guess the problem is the same for both implementations. We need to make sure the update inside the table finishes first. If the new records are accidentally inserted into the table first, the lookup will also fetch the newly inserted rows as matches, and therefore all the rows get marked as non-active. But the order of execution of the parallel streams is not in our hands. How do we solve this? Any ideas?
@curious_mind7575 - 26.02.2022 17:39

We did not check MD5 values for the attributes of employee IDs that are already present in both source and target…
@rajpes1833 - 18.02.2022 09:29

Can you make a video that includes a start date and an end date, with the dates updated dynamically for Type 2 SCD? I see that as a necessity, and many people face this issue.
@kenpoken1 - 11.12.2021 01:34

Nice job. Please keep them coming. How about a video on SCD Type 4 implementations?
@theroy068 - 11.11.2021 06:21

SCD Type 2 was explained properly, but one scenario was not covered: suppose we receive the same record from the source that is already present in the target. In that case too, this logic will create a new record and mark the old record as inactive.
@bhosdiwalechacha9702 - 18.10.2021 05:14

Good explanation, but I guess you forgot to add a check for whether any column coming from the source file has changed, because you should update the row in the target only if you find a difference between the source and the destination.
@MohammedKhan-np7dn - 04.10.2021 18:59

Nice and superb explanation. Thanks a lot, Maheer.
@mayank5644 - 30.08.2021 11:17

Create a branch from the source, use alter row to update the records in the sink that are present in the source, and in the other branch just use insert.
@shaksrini - 09.08.2021 18:57

Best way to implement SCD Type 2 😀👍 Very well explained.
@lingay3850 - 24.07.2021 17:33

Hello. How about doing it in SQL Server and not in the query editor? Like doing the mapping in Azure Data Factory, but with the result or output visible in SQL Server. 😊
@martand89 - 14.07.2021 23:31

Could you please tell me how your pipeline behaves if you do not change anything? In my case, it is inserting a new row with isrecent = 1 and changing the previous value to isrecent = 0, but since I am not changing anything, it should not be inserted again.
@amitpundir3983 - 12.07.2021 09:58

Great work Maheer, a couple of observations:
1. A Type 2 dimension needs EffectiveStartDate / EffectiveEndDate too. If we add these columns, updating all history rows will always reset these dates, which defeats the Type 2 idea. It is also bad for performance, as we are always updating all history rows, be it millions.
2. During the first execution, we should be able to verify that although the source has an entry for EmpId=1001, it has actually changed, because only in that case does it make sense to INSERT and UPDATE history rows; otherwise we are simply duplicating rows with no changes.
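Both observations above can be illustrated together: expire only the single current row for a key, and do nothing when the incoming value is unchanged, so history rows keep their effective dates. A hedged Python sketch of that logic (column names, keys, and dates are all hypothetical):

```python
from datetime import date

dim = [  # illustrative dimension rows: one current, one historical
    {"EmpId": 1001, "Salary": 5000, "Start": date(2020, 1, 1), "End": None, "IsCurrent": 1},
    {"EmpId": 1001, "Salary": 4000, "Start": date(2019, 1, 1), "End": date(2020, 1, 1), "IsCurrent": 0},
]

def apply_change(dim, emp_id, new_salary, as_of):
    for row in dim:
        if row["EmpId"] == emp_id and row["IsCurrent"] == 1:
            if row["Salary"] == new_salary:
                return dim  # no change: history rows stay untouched
            # Expire only the single current row; older rows keep their dates.
            row["End"], row["IsCurrent"] = as_of, 0
    dim.append({"EmpId": emp_id, "Salary": new_salary,
                "Start": as_of, "End": None, "IsCurrent": 1})
    return dim

apply_change(dim, 1001, 5500, date(2021, 6, 1))
print(len(dim))  # 3
```

Running it again with the same salary leaves the table at three rows, which is exactly the duplicate-version problem observation 2 describes being avoided.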
@zikoraswrld - 01.07.2021 07:13

Hi, I am archiving rows in a database. During the copy process from one database to another, I want to delete whatever I'm archiving from the source. Is there a place where I could write a query that does that instead of using alter row, etc.? The expression builder is just not what I need.
@thesujata_m - 15.05.2021 00:02

For SCD 1, instead of an update we just have to delete that row, and further activities are not required in sink2, right?
@dipanjanmukherjee4338 - 08.05.2021 18:41

In the update branch, instead of the lookup and filter, we could use an inner join.
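The suggestion above holds because a lookup followed by a "matched only" filter keeps exactly the rows an inner join would keep. A quick pandas sketch with hypothetical keys showing the two paths produce the same surviving rows:

```python
import pandas as pd

source = pd.DataFrame({"EmpId": [1001, 1002, 1003], "Salary": [5500, 6000, 7000]})
target = pd.DataFrame({"EmpId": [1001, 1002], "Salary": [5000, 6000]})

# Lookup + filter: left join against the target keys, then keep matched rows.
looked_up = source.merge(target[["EmpId"]], on="EmpId", how="left", indicator=True)
via_lookup = looked_up[looked_up["_merge"] == "both"].drop(columns="_merge")

# Inner join: one step, same surviving keys.
via_join = source.merge(target[["EmpId"]], on="EmpId", how="inner")

print(sorted(via_lookup["EmpId"]) == sorted(via_join["EmpId"]))  # True
```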
@vinayagamoorthyboobalan6268 - 30.04.2021 08:59

Nice explanation, WafaStudies. I have a doubt: how do we handle the rows which do not have any updates in the source? With this example, even the unaffected data will be updated in the destination unnecessarily. Looking forward to your reply, and thanks in advance.
@himanshutrivedi4956 - 17.04.2021 16:51

This is a really good and helpful video. Just one suggestion: could you add record_create_date and record_expire_date and then upload? It would be great.
@Global-Post - 05.04.2021 06:04

Thanks, Maheer.
@Ravi-gu5ww - 02.04.2021 09:58

Good one, Maheer. Along with this, add duplicate records from the source, make some columns SCD Type 1 and some SCD Type 2 for the same table, and also cover incremental load as a new session.
@kromerm - 31.03.2021 08:50

Nice technique, great job! One small nitpick: I'd prefer if you used true() instead of 1==1 for your Alter Row update policy :)
@dvsrikanth22 - 31.03.2021 06:11

Great explanation... explained in a very easy way to understand the concept.
@bhawnabedi9627 - 30.03.2021 19:21

Nice info