Back to Basics: Concurrency - Arthur O'Dwyer - CppCon 2020

CppCon

3 years ago

100,181 views

Comments:

@on2k23nm - 15.12.2023 18:44

Back to Basics is excellent !

@mworld - 04.12.2023 06:14

It's very handy when the data you want to work on is already broken up, e.g. multiple files. Just process each file into its own location in memory, then aggregate the separate data structures back together. Sometimes it makes sense not to have all your threads write to the same data structure (array, vector, etc.). Mutexes/locks are slow; my suggestion is to avoid them if possible.
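
Concretely, something like this minimal sketch, where `process_file` and `process_all` are names of my own choosing and each thread writes only into its own slot:

```cpp
#include <cstddef>
#include <fstream>
#include <string>
#include <thread>
#include <vector>

// Hypothetical per-file work: read every line of one file.
std::vector<std::string> process_file(const std::string& path)
{
    std::vector<std::string> lines;
    std::ifstream in(path);
    for (std::string line; std::getline(in, line); ) lines.push_back(line);
    return lines;
}

std::vector<std::string> process_all(const std::vector<std::string>& paths)
{
    // One slot per file; each thread touches only its own slot, so no mutex is needed.
    std::vector<std::vector<std::string>> partial(paths.size());
    std::vector<std::thread> workers;
    for (std::size_t i = 0; i < paths.size(); ++i) {
        workers.emplace_back([&, i] { partial[i] = process_file(paths[i]); });
    }
    for (auto& t : workers) t.join();  // join() makes each worker's writes visible here

    // Aggregate single-threaded, only after all workers are done.
    std::vector<std::string> all;
    for (auto& p : partial) all.insert(all.end(), p.begin(), p.end());
    return all;
}
```

The only synchronization point is the `join()`; everything before it is embarrassingly parallel.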

@AnotherIndian-kj1ro - 05.11.2023 13:56

Great video - learnt busy-wait, latch, condition variable etc from it.

@azoller - 21.08.2023 00:23

Thanks, Arthur!

@intvnut - 24.04.2023 22:03

The blue/green pattern at the end of the talk sounds a lot like the Read-Copy-Update pattern used in the Linux kernel. RCU does a bit more, by tracking readers and serializing writers with a mutex.

The blue/green pattern is more like a CAS-based optimistic copy/update pattern which won't tell you how many readers are outstanding unless you hold onto the old `blue`. (That could matter if you want to determine when "everybody" sees the new setting.) As used here, the `shared_ptr` avoids the ABA problem we normally have to worry about with CAS-based optimistic updates, so that much is nice.
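
For reference, roughly the shape of update I mean -- a sketch only, written with the C++11 atomic `shared_ptr` free functions rather than whatever is literally on the slide, and with `set_config_entry` as my own name for the writer:

```cpp
#include <map>
#include <memory>
#include <string>

using ConfigMap = std::map<std::string, std::string>;
std::shared_ptr<ConfigMap> g_config = std::make_shared<ConfigMap>();

// Optimistic copy/update: retry if another writer swapped in a new map
// between our load and our compare-exchange.
void set_config_entry(const std::string& key, const std::string& value)
{
    std::shared_ptr<ConfigMap> blue = std::atomic_load(&g_config);
    std::shared_ptr<ConfigMap> green;
    do {
        green = std::make_shared<ConfigMap>(*blue);  // copy the current map...
        (*green)[key] = value;                       // ...and modify the copy
    } while (!std::atomic_compare_exchange_weak(&g_config, &blue, green));
    // Because `blue` is still alive across the compare-exchange, the old map's
    // memory can't be recycled under us -- that's the shared_ptr/ABA point above.
}
```

(In C++20 the same thing can be spelled with `std::atomic<std::shared_ptr<ConfigMap>>`.)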

@PeteBrubaker - 07.03.2023 14:06

Also why do you say "puttr" when "pointer" is the same number of syllables?

@PeteBrubaker - 07.03.2023 07:13

"If you have to ask the question you probably shouldn't be doing it yourself." How else are you supposed to learn. Not a great response there...

@tourdesource - 02.03.2023 03:43

Can I quote you on the volatile thing? The way you said that cracked me up.

@stemei86 - 09.02.2023 15:14

I love the bathroom analogy. Two persons trying to use the toilet without synchronization can lead to very undefined behavior.

@kamilziemian995 - 18.10.2022 20:22

I love "Back to Basics" talks. You can learn so much from them.

@kajonkeatirattanasinchai1368 - 29.07.2022 07:00

Not just informative and educational, but also entertaining!
Thanks for the talk. :-)

@Zettymaster - 20.07.2022 12:51

Is it really UB if I spin on an atomic? The comparison operators do an implicit load with std::memory_order_seq_cst, so the compiler is not allowed to optimize the load away.
If it were a non-atomic bool I would 100% agree, but not with atomics.
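
For concreteness, the kind of spin I mean (the flag and the names are mine, not the slide's):

```cpp
#include <atomic>
#include <thread>

std::atomic<bool> ready{false};
int result = 0;

void worker() { result = 42; ready = true; }  // seq_cst store to the flag

int main()
{
    std::thread t(worker);
    while (!ready) { }  // each check is a seq_cst load of `ready`; it can't be hoisted out
    // The load that observed `true` synchronizes-with the store in worker(),
    // so reading `result` here is not a data race.
    int r = result;
    t.join();
    return r;
}
```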

@videofountain - 03.07.2022 18:20

On Slide 6 ... there is a cacheLine (without a number), cacheLine1, and cacheLine2. Is the cacheLine without a number intentional?

@konstantinburlachenko2843 - 29.05.2022 17:37

I do not agree with slide 14. Busy-waiting is a solution if you busy-wait with thread yielding. If acquiring a std::mutex requires a kernel-space object, then a system call typically costs on the order of 1000 clock cycles. The lock is not free. So the question is how std::mutex is implemented.
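
What I mean by a yielding busy-wait -- a minimal sketch, with a flag name of my own choosing:

```cpp
#include <atomic>
#include <thread>

std::atomic<bool> ready{false};

// Spin on the flag, but hand the core back to the scheduler on every
// failed check instead of burning it at 100%.
void wait_until_ready()
{
    while (!ready.load(std::memory_order_acquire)) {
        std::this_thread::yield();
    }
}
```

Whether that beats a std::mutex depends on how short the wait is and how the mutex is implemented; a futex-based std::mutex typically only makes a system call when the lock is actually contended.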

@konstantinburlachenko2843 - 29.05.2022 17:32

Do std::atomic writes guarantee a memory fence?

@konstantinburlachenko2843 - 29.05.2022 17:31

Does thread::join guarantee that a memory fence is inserted for the thread on which join is called?

@konstantinburlachenko2843 - 29.05.2022 17:29

It's most likely that the code on slide 10 is incorrect: there is no volatile type specifier on the result variable. Please correct me if I am wrong.

@darkopz - 18.03.2022 00:15

Am I the only one who questions why people pronounce "ptr" as "putter" when I'm pretty sure it stands for "pointer"?

@daidai4615 - 20.02.2022 19:31

Good material, learnt a lot. Thanks!

@UNagano589 - 07.02.2022 20:27

What is the maximum number of threads when using std::thread? How do I know?

@strakhov - 26.01.2022 23:01

Thank you very much Arthur, a very well prepared talk and slides!

@antonfernando8409 - 25.11.2021 23:32

Pretty cool talk.

@MagnificentImbecil - 14.09.2021 09:33

This is a great presentation. Thank you.

On Slide 49 (the "blue/green" pattern, write side), I wonder whether an ABA bug exists, with the possible result that:

- thread B loads the `g_config` global shared_ptr to the `blue` local `shared_ptr`, then makes a local copy of `*blue`, then modifies the local copy, then compares `g_config` against `blue` and finds them equal and modifies `g_config`;

- but, in the meanwhile, after thread B has loaded `g_config` into its `blue`, other threads have made several modifications, with the caveat that the latest `ConfigMap` happens to reside in memory at the same address used for an older `ConfigMap` -- thus thread B "finds [them] equal".

Perhaps a solution to this is introducing an immovable version number. Comparing ever-incremented version numbers is safer than comparing addresses of objects: the former cannot be reused, the latter can be.
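
One way the version-number idea could look -- a sketch under my own naming, serializing writers with a mutex so that the version check and the publish happen together, while readers stay lock-free:

```cpp
#include <cstdint>
#include <map>
#include <memory>
#include <mutex>
#include <string>

using ConfigMap = std::map<std::string, std::string>;

struct VersionedConfig {
    std::uint64_t version = 0;   // ever-incremented, never reused
    ConfigMap map;
};

std::shared_ptr<const VersionedConfig> g_config = std::make_shared<VersionedConfig>();
std::mutex g_write_mutex;        // serializes writers only

// The caller read the config earlier (at `expected_version`), computed a change,
// and now tries to commit it. Fails if any newer version has been published.
bool try_publish(std::uint64_t expected_version,
                 const std::string& key, const std::string& value)
{
    std::lock_guard<std::mutex> lock(g_write_mutex);
    auto blue = std::atomic_load(&g_config);
    if (blue->version != expected_version) {
        return false;                         // compare versions, not addresses
    }
    auto green = std::make_shared<VersionedConfig>(*blue);
    green->map[key] = value;
    green->version = blue->version + 1;
    std::atomic_store(&g_config,
                      std::shared_ptr<const VersionedConfig>(std::move(green)));
    return true;
}
```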

@steveneumeyer681 - 11.08.2021 01:28

"Don't use volatile, ever" -- can you explain why not?

@jjp8710 - 09.07.2021 01:49

This is an amazing talk, well explained and very useful. Thanks!

@NKernytskyy - 15.06.2021 22:18

Excellent lecture for a noob like myself.
Concurrency 101 Primer done right!

@greatbullet7372 - 30.04.2021 19:12

Time-sharing is basically another term for scheduling.

@think2086 - 03.04.2021 08:56

As for why we call it "blocking," I finally understood the other day: think of it as traffic. There are only so many hardware threads in the CPU. If a software thread occupies one of them, like a car occupies a lane of a highway, then instead of getting out of the way while it waits, so that code on another software thread could use that hardware thread in the meantime, it just sits there and blocks all the work piling up behind it. It'd be like stopping your car in the middle of the freeway to text back your Tinder crush right away, instead of pulling over to the side of the road first. In either case you're not making forward progress until the texting session is finished, but in the former case you also block everyone else who could have used that lane.

@MarcoBergamin - 11.03.2021 01:04

Nice talk!

@myown236 - 08.03.2021 18:49

How do you notify worker threads to stop working and terminate?

@guanwang - 06.03.2021 05:01

Awesome talk!! Learnt a lot. Thank you! 👍

@WyMustIGo - 05.03.2021 22:42

I agree that busy waiting is foolish, but... this would prevent the compiler optimization: volatile std::atomic<bool> ready;

@mckdoeful - 31.10.2020 19:46

Very good talk.

@SamWhitlock - 23.10.2020 20:36

Does std::latch imply a memory fence for the threads arriving at it? (not that that's necessarily a great pattern)

@mauricio-poppe - 07.10.2020 18:03

Just what I needed, thank you!

@thorsten9211 - 07.10.2020 10:28

Thank You!

@YingDai - 07.10.2020 04:56

I have been using some of this basic concurrency stuff for years --- copied and pasted from Stack Overflow. The talk is still very helpful for me.

@DamianReloaded - 07.10.2020 03:06

Good talk. Thanks!
