Comments:
My God. So much knowledge. So beautiful. How can I work in this field, using all these fantastic tools for production in business?! Where are the companies using this stuff? I want to work with them. 😔
Thank you so much, I appreciate it
Thank you Laurence 🙏🏻
Hello Laurence, is there a page listing all the software deployable with TensorFlow, as you described in the Google open source ecosystem in this video? I would really appreciate it if you could share their links or a page in the description, so we can follow up. ;)
Instructive video! Now I can get an overview of the many tools offered by Google to reach my goals.
Thanks
Thanks for sharing. Really inspiring and useful, as usual.
Every time I watch a video by Laurence, I feel the passion, decisiveness, confidence, persistence, things like that. Thank you Laurence!
Would like to know about Google's advocacy in engineering. Is it like a developer relations engineer role?
Thank you
Thanks
aight imma head out
You don't have to use TFX just to reduce APK size and keep the model untethered from your app, or even to ensure the latest version is fetched every time inference is performed. Firebase ML offers model hosting that easily covers every single one of the above use cases/requirements.
The real reason for using TFX above everything else is accuracy. Models running on local devices won't be as accurate as ones running on high-end machines with gigabytes of RAM and a ton of raw processing power. As always, this accuracy comes at the cost of latency, which is further increased by the round trip the request has to make to and from the servers. It also tightly couples your app to an internet connectivity requirement. On the other hand, a Firebase-hosted model needs to be downloaded only once and is kept in the cache until a new model is uploaded to the console. There are also options to configure when the model should be updated within the app. That said, I'm not saying TFX is "worthless"; it isn't in any way.
Platforms like TFX are crucial for use cases where accuracy takes precedence and internet connectivity is not scarce. For example, The Trevor Project, an initiative providing suicide-prevention support to the LGBTQ+ community via an AI counsellor, cannot risk forgoing the slightest bit of accuracy the model can provide. In such cases, TFX proves a blessing, since it can host and scale the model as well as provide maximum accuracy with on-demand, on-cloud inference.
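To make the on-cloud inference round trip concrete, here's a minimal Python sketch of the request body a TensorFlow Serving REST "predict" endpoint expects (the `{"instances": [...]}` payload shape is TensorFlow Serving's documented REST format; the model name, port, and input values below are made up for illustration):

```python
import json

# Hypothetical endpoint: model name and host are placeholders,
# 8501 is TensorFlow Serving's default REST port.
MODEL_NAME = "example_model"
SERVER_URL = f"http://localhost:8501/v1/models/{MODEL_NAME}:predict"

def build_predict_request(instances):
    """Build the JSON body for TensorFlow Serving's REST predict API."""
    return json.dumps({"instances": instances})

payload = build_predict_request([[1.0, 2.0, 3.0]])
print(payload)  # {"instances": [[1.0, 2.0, 3.0]]}

# A real client would POST `payload` to SERVER_URL (e.g. with the
# `requests` library) and pay the network round-trip latency
# discussed above on every single inference.
```

Every call to that endpoint is a full server round trip, which is exactly the latency and connectivity cost being weighed against on-device inference here.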
In the end, this boils down to your use case. The accuracy/latency trade-off is one that can't be avoided completely in this field, and you'll have to make a choice if you want to build a successful product. What might be a great platform for something as critical as suicide prevention might not be your first choice for, say, a live video-analysis app that runs inference on a live camera input stream, since the server round trips would introduce too much latency to be useful. The potential here is infinite, but to land in the right place, you need to make the right calls.