The Road Ahead: Google's Comprehensive Platform for Open Source Machine Learning

TensorFlow

1 year ago

15,495 views

Google offers the only comprehensive free and open source ecosystem for AI and Machine Learning that covers everything from data to model definition and training, deployment, ops, accelerated hardware, and tools for responsible use of AI. In this talk, Laurence takes you through the big thinking that Google is putting into the future of AI and ML for hobbyists, developers, and researchers.

Subscribe to the TensorFlow channel → https://goo.gle/TensorFlow


Comments:

Alessandro Bruni - 31.10.2022 22:04

My God. Too much knowledge. So beautiful. How can I work in this field, using all these fantastic tools for production in business? Where are the companies using this stuff? I want to work with them. 😔

Elena Cairo - 28.10.2022 03:25

Thank you so much, I appreciate it

Python Arabic Community - 12.10.2022 13:00

Thank you Laurence 🙏🏻

LegHurts - 08.10.2022 05:05

Hello Laurence, is there a page listing all the software deployable with TensorFlow, as part of the Google open source ecosystem you described in this video? I would really appreciate it if you could share the links, or a page in the description, so we can follow up. ;)

Damien - 07.10.2022 20:22

Instructive video! It gives me an overview of the many tools offered by Google to reach my goals.
Thanks

Marco Sanguineti - 01.10.2022 14:53

Thanks for sharing. Really inspiring and useful, as usual

CHEN QU - 23.09.2022 14:17

Every time I watch a video by Laurence, I feel the passion, determination, confidence, persistence, things like that. Thank you Laurence!

Mohamed Rasvi - 21.09.2022 14:56

I would like to know about Google's advocacy in engineering. Is it like a developer relations engineer role?

Abdallah Elmisallati - 16.09.2022 16:44

Thank you

Ram Gopal Nalluri vijayawada - 15.09.2022 14:19

Thanks

EVER GREEN - 14.09.2022 17:17

aight imma head out

Rohan Manchanda - 13.09.2022 14:11

You don't have to use TFX just to reduce APK size and keep the model untethered from your app, or even to ensure the latest version is fetched every time inference is performed. Firebase ML offers model hosting that easily covers every single one of those use cases/requirements.

The reason for using TFX above everything is accuracy. Models running on local devices won't be as accurate as the ones running on high-end computers with gigabytes of RAM and a ton of raw processing power. This accuracy, as always, is achieved at the cost of latency, which is further increased by the round trip the request has to make to and from the servers. It also tightly couples your app to an internet connectivity requirement. On the other hand, the Firebase-hosted model only needs to be downloaded once and gets stored in the cache until a new model is uploaded to the console. There are options to configure when the model should be updated within the app as well. However, I'm not saying that TFX is worthless; it isn't, by any means. A minimal Kotlin sketch of the hosted-model flow follows below.
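For illustration, here is a minimal Kotlin sketch of that download-once-and-cache flow, using the standard Firebase ML model downloader API; the model name "my_model" and the callback wiring are assumptions for illustration, not something from the talk:

import com.google.firebase.ml.modeldownloader.CustomModelDownloadConditions
import com.google.firebase.ml.modeldownloader.DownloadType
import com.google.firebase.ml.modeldownloader.FirebaseModelDownloader
import org.tensorflow.lite.Interpreter

// Download a Firebase-hosted TFLite model once, then serve the cached copy
// while newer versions are fetched in the background.
// "my_model" is a hypothetical model name.
fun loadHostedModel(onReady: (Interpreter) -> Unit) {
    val conditions = CustomModelDownloadConditions.Builder()
        .requireWifi()  // one way to control when the cached model gets refreshed
        .build()
    FirebaseModelDownloader.getInstance()
        .getModel("my_model", DownloadType.LOCAL_MODEL_UPDATE_IN_BACKGROUND, conditions)
        .addOnSuccessListener { model ->
            // model.file is the cached .tflite file on disk after the first download
            model.file?.let { onReady(Interpreter(it)) }
        }
}

With LOCAL_MODEL_UPDATE_IN_BACKGROUND, inference keeps using the cached model immediately while any update downloads quietly, which is exactly the "downloaded only once" behaviour described above.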

Platforms like TFX are crucial for use cases where accuracy takes precedence and internet connectivity is not scarce. For example, the Trevor Project, an initiative providing support to the LGBTQ+ community for suicide prevention via an AI counsellor, cannot risk forgoing the slightest bit of accuracy that the model can provide. In these cases, TFX proves a blessing, since it can host and scale the model as well as provide maximum accuracy with on-demand, on-cloud inference (see the sketch below).
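As a sketch of that on-cloud inference path, here is what a client call might look like against TensorFlow Serving's standard REST predict endpoint; the host, port, and model name below are hypothetical placeholders:

import java.net.HttpURLConnection
import java.net.URL

// POST a JSON payload to TF Serving's /v1/models/{name}:predict endpoint
// and return the raw JSON predictions. Host and model name are placeholders.
fun predict(instancesJson: String): String {
    val url = URL("http://serving.example.com:8501/v1/models/my_model:predict")
    val conn = url.openConnection() as HttpURLConnection
    conn.requestMethod = "POST"
    conn.doOutput = true
    conn.setRequestProperty("Content-Type", "application/json")
    // TF Serving expects a body like {"instances": [[...feature values...]]}
    conn.outputStream.use { it.write("""{"instances": $instancesJson}""".toByteArray()) }
    return conn.inputStream.bufferedReader().use { it.readText() }
}

// e.g. predict("[[1.0, 2.0, 5.0]]")

Every such call pays the server round trip, which is the latency cost weighed against accuracy in the paragraphs above.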

In the end, this boils down to your use case. The accuracy/latency tradeoff is one that can't be avoided completely in this field, and you'll have to make a choice if you want to build a successful product. What might be a great platform for something as critical as suicide prevention might not be your first choice for, say, a live video-analysis app that runs inference on a live camera input stream, since the server round trips would introduce too much latency to be useful. The potential is infinite here, but to land in the right place, you need to make the right calls.
