Tags:
#Azure #Data_Factory #Data_Flow #Azure_4_Everyone #Adam_Marczak #Mapping_Data_Flow #Spark #ADF #big_data #SSIS

Comments:
Can you use PySpark or SQL in expression functions, or are they only Scala?
I love you, Adam!
I have been struggling with using expression builder in Data Flow. I can't seem to figure out how to write the code. This video just made it look less complex. I'll be devoting more time to it.
Great! You are the best Adam.
Perfect
Thanks!
I owe you my paycheck tbh 😅🤣
Best video on Azure I have ever seen ❤❤
Will it work with a pipe ("|") separated value file instead of CSV?
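(For anyone else wondering: a pipe-delimited file parses the same way once the column delimiter is changed, and ADF's delimited-text dataset exposes that as a column delimiter setting. A minimal Python sketch of the same idea, with file contents and column names invented for the example:)

```python
import csv
import io

# Hypothetical pipe-delimited movie data (columns invented for the example).
raw = "title|year\nJaws|1975\nAlien|1979\n"

# The only change from a regular CSV is the delimiter; ADF's delimited-text
# dataset has the equivalent "Column delimiter" option.
rows = list(csv.DictReader(io.StringIO(raw), delimiter="|"))
print(rows[0]["title"])  # Jaws
print(rows[1]["year"])   # 1979
```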
Hello Adam, I followed these steps but I have a problem: I can't find the source columns when I go to the derived column component to write an expression based on an existing column. In your video the source component shows total columns = 3; for me it shows 0. I changed the source from CSV to a SQL table and still haven't found a solution.
Great video! Most videos seem to focus mostly on the advertisement material straight from Azure. At best they show you the very dumb step of copying data from a file to a DB.
This is the first video I've seen where you actually show how to do something useful with the data, close to a real-life scenario.
Thank you.
Good explanation there.
HELLO I'M FROM RRRRRRUSSIA
Thank you, Adam.
very nice
Very good explanation of the Data Flow. Thanks, Mr. Adam.
👍 It's amazing, a practical implementation of Data Flow.
Thanks for such a good video
It must be very challenging to do all of this in English, I imagine, Adam! Congratulations on pushing through despite the difficulty. 🙂
Amazing video, we want more parts!
The video is excellent. I want to know: what problem is Data Flow solving?
Nice video.
Just curious: can you explain toInteger(trim(right(title,6),'()')) in detail, please? How does this expression execute?
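(For what it's worth, the expression evaluates inside-out: right(title, 6) takes the last six characters, e.g. "(1975)"; trim(..., '()') strips the parenthesis characters; toInteger parses what's left. A rough Python analog, with the sample title invented for illustration:)

```python
def year_from_title(title: str) -> int:
    """Rough Python analog of toInteger(trim(right(title, 6), '()'))."""
    last_six = title[-6:]           # right(title, 6)  e.g. "(1975)"
    trimmed = last_six.strip("()")  # trim(..., '()')  e.g. "1975"
    return int(trimmed)             # toInteger(...)   e.g. 1975

print(year_from_title("Jaws (1975)"))  # 1975
```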
Very well explained and demonstrated. Really helpful to get started with Data flows.
Thank you, Adam. As always, you rock.
Great tutorial
So helpful! Thank you very much Adam!
I really like your tutorials. I have been looking for a "table partition switching" tutorial but haven't found any good ones. Maybe you could do one for us? I am sure it'll be very popular, as there aren't any good ones out there and it is an important topic in certifications :-)
Great video.
Question: Under "New Datasets", is there a capability to drop data into Snowflake? I see S3, Redshift, etc.
I appreciate the video and feedback!
I get an error message, e.g. handshake_failure, when the data flow source retrieves data from an API. Can anyone help? Thanks.
Excellent tutorials
So nice of you to explain the data flow in a simple way. Thank you so much, Mr. Adam.
Adam, great video. I'm new to Data Flow and I have one doubt: I want to implement file-level checks in Data Flow but am not able to. All the transformations perform data-level checks, like Exists or Conditional Split. Is it possible to implement a file-level check, such as whether a file exists in the storage account?
Great
Your videos are really great and helped me understand a lot of Azure concepts. Can you please make one using an SSIS package and show how to use it within Azure Data Factory?
Hello Adam, thanks a bunch for this excellent video. The tutorial was very thorough, and anyone new can easily follow it. I do have a question, though. I am trying to replicate a SQL query in the Data Flow; however, I have had no luck so far.
The query is as follows:
Select ZipCode, State
From table
Where State in ('AZ', 'AL', 'AK', 'AR', 'CO', 'CA', 'CT'...... LIST OF 50 STATES);
I tried using the Filter, Conditional Split and Exists transforms, but could not achieve the desired result. Being new to the cloud platform, I am having a bit of trouble.
Might I request that you cover topics like data subsetting/filtering (WHERE and IN clauses, etc.) in your tutorials?
Appreciate your time and help in putting together these practical implementations.
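(One possible direction, hedged since I haven't checked it against every runtime version: the Data Flow expression language has an in() function for array membership, so a Filter transform with something like in(['AZ','AL','AK', ...], State) mirrors the WHERE ... IN clause. The same filter in plain Python, with the rows invented as stand-ins for the table in the query above:)

```python
# Hypothetical rows standing in for the table in the query above.
rows = [
    {"ZipCode": "85001", "State": "AZ"},
    {"ZipCode": "10001", "State": "NY"},
    {"ZipCode": "35004", "State": "AL"},
]

# Abbreviated state list; the real query would carry all 50 entries.
wanted = {"AZ", "AL", "AK", "AR", "CO", "CA", "CT"}

# Equivalent of: SELECT ZipCode, State FROM table WHERE State IN (...)
filtered = [r for r in rows if r["State"] in wanted]
print([r["ZipCode"] for r in filtered])  # ['85001', '35004']
```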
Nice one, Adam. Cool one. Keep making fabulous videos, fella.
Many thanks.
Your channel is totally underrated, man
How do you delete from the target based on data from the source? I'm really struggling to understand what to do if I have a column with a value that I want to delete in the target table. Everything seems to be geared toward altering the source data coming in.
best tutorial ever... 💪🏻💪🏻💪🏻
Do you plan to make a video introducing each of the transformation components? Thanks
Have any of these options changed now? I am not able to see any data debug option to enable, and I preview data directly in the dataset itself.
-1979 and ,12: this is why complex logic is needed. Nice tutorial :)
Hi Adam, thanks for making these videos; very clear and concise. I have a question (sorry, not related to this video) regarding Conditional Split: can the output stream activities run in parallel?
Great video! Thanks Adam!
I need to join a header with data, and the header is dynamic. How can I retain the order of the merge?
Wouldn't it be simpler to do all of this using code?
Very good explanation, Adam. Keep it up.
Can you show how to add the aggregation column to the same output?
Adam, excellent presentation of the ADF concept. I find all your videos really helpful in understanding ADF. One question regarding the sink dataset in the data flow: how can I create a dynamic folder in my blob storage based on the year, month and day when the data flow was triggered?
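(One common approach, though I can't confirm it's the one Adam would recommend: parameterize the sink dataset's folder path and pass a date-based pipeline expression such as formatDateTime(utcNow(), 'yyyy/MM/dd'). The string being built looks like this in Python, with the trigger time invented for the example:)

```python
from datetime import datetime, timezone

# Stand-in for the data flow's trigger time (invented for the example).
trigger_time = datetime(2020, 5, 17, tzinfo=timezone.utc)

# Mirrors formatDateTime(utcNow(), 'yyyy/MM/dd') used as a dynamic folder path.
folder = trigger_time.strftime("%Y/%m/%d")
print(folder)  # 2020/05/17
```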
Excellent explanation with a simple scenario. Thank you.