Comments:
The videos on connecting to Data Lake Storage are confusing. Why should we use a service principal when we can access it through Azure Key Vault directly?
Thank you so much for your video. It was much-needed help.
♥♥
I want to know what the benefit is of using this service principal ID, name, and value in the OAuth configuration, when we can access files from Blob Storage directly using just a secret scope. Is there any advantage to this?
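For context, the OAuth-based direct access the comment refers to typically looks like the sketch below. All account, tenant, and scope names are placeholders, not values from the video; in Databricks the client secret would normally be fetched from a secret scope rather than hard-coded.

```python
# Sketch of direct (non-mounted) OAuth access to ADLS Gen2 with a service
# principal. Every name below is a placeholder, not from the video.
storage_account = "mystorageacct"      # hypothetical storage account name
tenant_id = "<tenant-id>"              # Azure AD tenant of the service principal
client_id = "<application-client-id>"  # service principal (app registration) ID

# In a Databricks notebook the secret would come from a secret scope, e.g.:
# client_secret = dbutils.secrets.get(scope="my-scope", key="sp-secret")
client_secret = "<client-secret>"

host = f"{storage_account}.dfs.core.windows.net"
oauth_configs = {
    f"fs.azure.account.auth.type.{host}": "OAuth",
    f"fs.azure.account.oauth.provider.type.{host}":
        "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
    f"fs.azure.account.oauth2.client.id.{host}": client_id,
    f"fs.azure.account.oauth2.client.secret.{host}": client_secret,
    f"fs.azure.account.oauth2.client.endpoint.{host}":
        f"https://login.microsoftonline.com/{tenant_id}/oauth2/token",
}

# On a cluster, each pair would be applied to the current Spark session:
# for key, value in oauth_configs.items():
#     spark.conf.set(key, value)
```

On the question of the advantage: a storage-account access key (even one pulled from a secret scope) grants full access to the whole account, while a service principal is its own Azure AD identity that can be scoped with RBAC roles and ACLs to specific containers or paths, and can be rotated or revoked independently of the storage keys.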
Love it, very well explained.
The content explanation is very nice, but one suggestion: use proper naming conventions. That will make it easier for new users to follow.
Very crisp and clean, thank you for this video.
Hi, one question: don't you have to mount the file system again using these Azure service principal configurations? I think you are able to read the data because your storage is already mounted via direct access keys?
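For reference, mounting with a service principal usually looks like the sketch below (names are placeholders, not from the video). A mount is a one-time, workspace-wide operation, so if a path was already mounted with an access key, reads through the mount point keep working under the old credentials until it is unmounted and remounted.

```python
# Sketch: mounting ADLS Gen2 with a service principal.
# All identifiers are hypothetical placeholders.
tenant_id = "<tenant-id>"

configs = {
    "fs.azure.account.auth.type": "OAuth",
    "fs.azure.account.oauth.provider.type":
        "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
    "fs.azure.account.oauth2.client.id": "<application-client-id>",
    # In Databricks: dbutils.secrets.get(scope="my-scope", key="sp-secret")
    "fs.azure.account.oauth2.client.secret": "<client-secret>",
    "fs.azure.account.oauth2.client.endpoint":
        f"https://login.microsoftonline.com/{tenant_id}/oauth2/token",
}

# In a Databricks notebook (dbutils is only available there):
# dbutils.fs.unmount("/mnt/mydata")  # drop the old access-key mount first
# dbutils.fs.mount(
#     source="abfss://<container>@<storage-account>.dfs.core.windows.net/",
#     mount_point="/mnt/mydata",
#     extra_configs=configs,
# )
```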
Thanks for the video; it is very informative. Using this method, do you need to execute the spark.conf.set() commands every time you restart the cluster? My guess is that you would, since you are only setting configs on this specific Spark session.
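For what it's worth, that guess matches how spark.conf.set() behaves: the settings live only in the current Spark session and are gone after a cluster restart. One common way to avoid re-running them is to put the same keys into the cluster's Spark config, where Databricks can resolve secret-scope references. A hypothetical fragment (account and scope names are placeholders):

```
fs.azure.account.auth.type.mystorageacct.dfs.core.windows.net OAuth
fs.azure.account.oauth2.client.secret.mystorageacct.dfs.core.windows.net {{secrets/my-scope/sp-secret}}
```

The `{{secrets/<scope>/<key>}}` syntax keeps the secret out of the notebook and out of the cluster config UI in plain text.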
Hi sir, I have been following every video in this DB playlist. Could you tell me how many more videos there will be to complete this DB playlist?
Hi, can you cover how to set up a shared external Hive metastore to be used across multiple Databricks workspaces? The purpose is to be able to reference dev workspace data in the prod instance.
I have 10 API calls going through ADF in Azure; 6 succeeded and 4 failed. How do I get only the failed APIs?
Please make a video on the PolyBase and JDBC approaches.
Thanks a lot, sir.
Sir, how can we connect the Azure Databricks Hive metastore to an external ETL tool like Informatica? The purpose is to fetch data from Hive tables and use the Databricks engine for pushdown optimization to improve the performance of the data fetch.