Comments:
Good explanation.
In SSIS this is very, very easy to accomplish; why is it still so cumbersome in ADF?
Good explanation. What if I have duplicate rows in the source file? How do I filter them?
I was trying SCD Type 2 using data flows to make it dynamic, but on the first run it fails because I didn't choose the "inspect schema" option so it can be used with any Delta table. Is there a workaround for this? Ideally it should at least be able to read the header even though the Delta table is empty, but I am getting an error on the source side when the table is empty.
Sir, it is not working: the value still remains 1 for all rows, and it does not recognise the old data; it literally inserts all the data.
Good one, dude. Thanks for explaining.
Can we do SCD Type 2 on a Delta file using mapping data flows?
I am getting this error: "Cannot insert explicit value for identity column in table when IDENTITY_INSERT is set to OFF." Can anyone help with this?
Can we use SCD Type 2 on real-time data?
I see the surrogate key is initially inserted for the target record, but the source record has no surrogate key. Can you explain how the surrogate key is mapped for the newly inserted records?
Good video, but all the noise from the kids in the background was very distracting and loud.
I have implemented this as per your explanation, but I am facing an issue: the key column does not exist in the sink. Here is the screenshot.
Hi,
How do I implement an incremental load using the primary key? Can you please explain it?
Also, may I know how the surrogate key is generated in the dim table?
Great work Maheer.
How do I load a Parquet file from on-premises into an Azure SQL database using Azure Data Factory?
@WafaStudies I am facing a problem implementing SCD2 using the Exists transformation instead of the Lookup you used here, but I guess the problem is the same for both implementations. We need to make sure the update inside the table finishes first. If the new records are accidentally inserted into the table first, the lookup will fetch the newly inserted rows as matching too, and therefore all of them get marked as inactive. But the order of execution of the parallel streams is not in our hands. How do we solve this? Any ideas?
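One way to sidestep the ordering problem described in the comment above is to derive both the "rows to expire" set and the "rows to insert" set from a single snapshot taken before any write, then apply updates before inserts. This is a minimal Python sketch of that idea against an in-memory table; the key/version shape and the `active` flag are hypothetical, not ADF data flow code:

```python
# Sketch: resolve the update-vs-insert race by deciding both sets
# from one snapshot taken BEFORE any write, then writing updates first.
# target maps (business_key, version) -> row; shape is hypothetical.
target = {("E1", 1): {"active": 1}}
source_keys = {"E1", "E2"}  # incoming business keys this run

# 1. Snapshot the currently active keys before any insert happens,
#    so freshly inserted rows can never be mistaken for old ones.
active_keys = {k for (k, _v), r in target.items() if r["active"] == 1}

to_expire = active_keys & source_keys  # existing versions to deactivate
to_insert = source_keys                # every incoming row gets a new version

# 2. Apply the updates first...
for (k, _v), r in target.items():
    if k in to_expire and r["active"] == 1:
        r["active"] = 0

# 3. ...then insert the new versions.
for k in to_insert:
    version = max((v for (kk, v) in target if kk == k), default=0) + 1
    target[(k, version)] = {"active": 1}
```

Because `active_keys` is frozen before step 3 runs, the new rows are never part of the "mark inactive" set, regardless of how the two write streams would otherwise interleave.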
We did not check MD5 values for attributes whose employee id is already present in both source and target…
Can you make a video that includes a start date and end date, with the dates updated dynamically for Type 2 SCD? I see that as a necessity, and many people face this issue.
Nice job. Please keep them coming. How about a video on SCD Type 4 implementations?
SCD Type 2 was explained properly, but one scenario was not covered: suppose we receive from the source the same record that is already present in the target. Under this logic it will still create a new record and mark the old record as inactive.
Good explanation, but I guess you forgot to add a check for whether any of the columns coming from the source file has changed. You should update the row in the target only if you find a difference between the source and the destination.
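The missing change check these comments point out is often done by hashing the tracked attributes (e.g. with MD5, as another comment mentions) and comparing hashes. A minimal Python sketch of that classification, with hypothetical column names and in-memory rows rather than ADF data flow expressions:

```python
import hashlib

def row_hash(row, columns):
    """Concatenate the tracked attribute values and hash them, so a
    change in any one tracked column changes the hash."""
    payload = "|".join(str(row.get(c, "")) for c in columns)
    return hashlib.md5(payload.encode("utf-8")).hexdigest()

def classify(src_row, tgt_row, columns):
    """Decide what the load should do with this source row."""
    if tgt_row is None:
        return "insert"                 # brand-new business key
    if row_hash(src_row, columns) != row_hash(tgt_row, columns):
        return "expire-and-insert"      # a tracked attribute changed
    return "skip"                       # identical: leave the target alone

tracked = ["name", "city"]  # hypothetical tracked columns
print(classify({"name": "Asha", "city": "Pune"},
               {"name": "Asha", "city": "Pune"}, tracked))    # skip
print(classify({"name": "Asha", "city": "Mumbai"},
               {"name": "Asha", "city": "Pune"}, tracked))    # expire-and-insert
```

The "skip" branch is what prevents unchanged source rows from generating spurious new versions, which is exactly the scenario several comments here report.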
Nice and superb explanation. Thanks a lot, Maheer.
Create a branch from the source, use Alter Row to update the records in the sink that are present in the source, and in the other branch just use insert.
Best way to implement SCD Type 2 😀👍 Very well explained.
Hello. How about doing it in SQL Server and not in the query editor? That is, doing the mapping in Azure Data Factory, but with the result or output visible in SQL Server. 😊
Could you please tell me how your pipeline behaves if you do not change anything? In my case it inserts a new row with isrecent = 1 and changes the previous row to isrecent = 0, but since I am not changing anything, it should not be inserted again.
Great work Maheer, a couple of observations:
1. A Type 2 dimension needs EffectiveStartDate / EffectiveEndDate columns too. If we add these columns, updating all history rows will always reset those dates, which defeats the Type 2 idea. It is also bad for performance, since we are always updating all history rows, be it millions.
2. During the first execution, we should be able to verify that although the source has an entry for EmpId=1001, it has actually changed, because only in that case does it make sense to INSERT and UPDATE history rows; otherwise we are simply duplicating rows with no changes.
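Both observations above can be sketched together: expire only the single current row for a changed key (so historical rows keep their effective dates), and do nothing at all when the incoming value is unchanged. A minimal Python illustration against an in-memory table; the table shape and column names (EmpId, City, IsRecent) are hypothetical, not the video's actual schema:

```python
from datetime import date

# Hypothetical in-memory Type 2 dimension table.
dim = [
    {"EmpId": 1001, "City": "Pune", "IsRecent": 1,
     "EffectiveStartDate": date(2020, 1, 1), "EffectiveEndDate": None},
]

def apply_scd2(dim, emp_id, new_city, load_date):
    """Expire only the current row for this key, and only if it changed."""
    current = next((r for r in dim
                    if r["EmpId"] == emp_id and r["IsRecent"] == 1), None)
    if current is not None and current["City"] == new_city:
        return  # no change: leave the table (and all history dates) untouched
    if current is not None:
        current["IsRecent"] = 0
        current["EffectiveEndDate"] = load_date  # close out the old version only
    dim.append({"EmpId": emp_id, "City": new_city, "IsRecent": 1,
                "EffectiveStartDate": load_date, "EffectiveEndDate": None})

apply_scd2(dim, 1001, "Pune", date(2024, 1, 1))    # unchanged: no-op
apply_scd2(dim, 1001, "Mumbai", date(2024, 6, 1))  # changed: table grows to 2 rows
```

Because only the one row with IsRecent = 1 is ever touched, older history rows never have their effective dates reset, addressing the performance and correctness concern in point 1.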
Hi, suppose I am archiving rows in a database. During the copy from one database to another, I want to delete from the source whatever I am archiving. Is there a place where I could write a query that does that, instead of using Alter Row, etc.? The expression builder is just not what I need.
For SCD 1, instead of an update we just have to delete that row, and the further activities are not required in sink2, right?
In the update branch, instead of the Lookup and Filter, we could use an inner Join.
Nice explanation, WafaStudios.
I have a doubt: how do we handle the rows that do not have any updates in the source? With this example, even the unaffected data will be updated in the destination unnecessarily. Looking forward to your reply, and thanks in advance.
This is a really good video and helpful too. Just one suggestion: can you add record_create_date and record_expire_date and then upload? That would be great.
Thanks Maheer.
Good one, Maheer. Along with that, please add duplicate records from the source, make some columns SCD Type 1 and some SCD Type 2 for the same table, and also cover incremental load in a new session.
Nice technique, great job! One small nitpick: I'd prefer if you used true() instead of 1==1 for your Alter Row update policy :)
Great explanation... explained in a very easy way to understand the concept.
Nice info.