Comments:
Back to Basics is excellent!
It's very handy when the data you want to work on is already broken up, e.g. multiple files. Just process each file into its own location in memory, then aggregate the multiple data structures back together. Sometimes it makes sense not to write everything to the same data structure (array, vector, etc.) for all your threads. Mutexes/locks are slow; my suggestion is to avoid them if possible.
Great video - learnt about busy-wait, latches, condition variables, etc. from it.
Thanks, Arthur!
The blue/green pattern at the end of the talk sounds a lot like the Read-Copy-Update pattern used in the Linux kernel. RCU does a bit more, by tracking readers and serializing writers with a mutex.
The blue/green pattern is more like a CAS-based optimistic copy/update pattern which won't tell you how many readers are outstanding unless you hold onto the old `blue`. (That could matter if you want to determine when "everybody" sees the new setting.) As used here, the `shared_ptr` avoids the ABA problem we normally have to worry about with CAS based optimistic updates, so that much is nice.
Also, why do you say "puttr" when "pointer" is the same number of syllables?
"If you have to ask the question, you probably shouldn't be doing it yourself." How else are you supposed to learn? Not a great response there...
Can I quote you on the volatile thing? The way you said that cracked me up.
I love the bathroom analogy. Two people trying to use the toilet without synchronization can lead to very undefined behavior.
I love "Back to Basics" talks. You can learn so much from them.
Not just informative and educative, but also entertaining!
Thanks for the talk. :-)
Is it really UB if I spin on an atomic? The comparison operators have an implicit load with a memory order of std::memory_order_seq_cst, so the compiler is not allowed to optimize the load away.
If it were a non-atomic bool I would 100% agree, but not with atomics.
On Slide 6 ... there is a cacheLine (without a number), cacheLine1, and cacheLine2. Is cacheLine(without a number) intentional?
I do not agree with slide 14. Busy waiting is a viable solution if you busy-wait with thread yielding. If std::mutex acquisition requires a kernel-space object, then a system call typically costs around 1000 clocks. The lock is not free. So the question is how std::mutex is implemented.
Do std::atomic writes guarantee a memory fence?
Does thread::join guarantee that a memory fence will be inserted into the thread for which join is called?
It's most likely that the code on slide 10 is incorrect - there is no volatile type specifier for the result variable. Please correct me if I am wrong.
Am I the only one who questions why people pronounce ptr as "putter" when I'm pretty sure it stands for pointer?
Good material, learnt a lot. Thanks!
What is the maximum number of threads when using std::thread? How do I know?
Thank you very much Arthur, a very well prepared talk and slides!
Pretty cool talk.
This is a great presentation. Thank you.
On Slide 49 (the "blue/green" pattern, write-side), I wonder whether an ABBA bug exists, with the possible result that:
- thread B loads the `g_config` global shared_ptr to the `blue` local `shared_ptr`, then makes a local copy of `*blue`, then modifies the local copy, then compares `g_config` against `blue` and finds them equal and modifies `g_config`;
- but, in the meanwhile, after thread B has loaded `g_config` into its `blue`, other threads have made several modifications, with the caveat that the latest `ConfigMap` happens to reside in memory at the same address used for an older `ConfigMap` -- thus thread B "finds [them] equal".
Perhaps a solution to this is introducing a monotonically increasing version number. Comparing ever-incremented version numbers is safer than comparing addresses of objects: the former cannot be reused, the latter can be.
"Don't use volatile, ever" - can you explain why not?
This is an amazing talk, well explained and very useful. Thanks!
Excellent lecture for a noob like myself.
Concurrency 101 Primer done right!
Time-sharing is basically another term for scheduling.
As for why we call it "blocking," I finally understood the other day: think of it as traffic. There are only so many pipelines in the CPU. If a software thread occupies one of them, like a car occupies a lane of a highway, then instead of going away while waiting so that other code on another software thread can use that hardware thread in the meantime, it just sits there and blocks all the work piling up behind it. It'd be like stopping your car in the middle of the freeway to text back your Tinder crush right away, instead of pulling over to the side of the road first. In either case you're not making forward progress while you finish the texting session, but in the former case you also block everyone else who could have used that lane.
Nice talk!
How do you notify worker threads to stop working and terminate?
Awesome talk!! Learnt a lot. Thank you! 👍
I agree that busy waiting is foolish, but... this would prevent the compiler optimization: volatile std::atomic<bool> ready;
Very good talk.
Does std::latch imply a memory fence for the threads arriving at it? (not that that's necessarily a great pattern)
Just what I needed, thank you!
Thank You!
I have been using some of this basic concurrency stuff for years --- copy and paste from Stack Overflow. The talk is still very helpful for me.
Good talk. Thanks!