Comments:
Now the question: would running threads within each multiprocess process be even faster?
It's good news that they're getting rid of the GIL.
As the founder of a company that adds random noise to audio files, I'm very sad to think that you don't find our products very useful. 😂😂😂
The function inside Pool() does not read global variables. Can you please show a way to fix that? It has something to do with this Queue() class, doesn't it? The docs are a bit confusing
More videos on threading/asyncio please 😊
Any chance for a follow-up using this inside a class? And comparing it with pathos.multiprocessing?
I paid for the whole CPU, I will use the whole CPU
If speed is important, why bother with Python at all? Why not just stick with C and Fortran?
Liked the way you spoke about all three modules (asyncio, threading, multiprocessing) in one video
Me: Audio Engineer
You: 'Let's add random noise to a bunch of audio files'
Me: AOWERIGLHAOLEIRGHAOPEIRGHSOLIERHGNBapoiewhftgliakrhegaopilerhgsireghsSGrfegplihjsoigjhsdfgoiHJPOIGHSOIDFHGSAPODRJfg
Edit: Thanks for the intro to multiprocessing in Python
Thanks bro.
I tried to do the same with Anaconda Spyder on Windows 11. Unfortunately I could not reproduce any of the examples in this video. I have used both the multiprocessing and the multiprocess modules. The problem is that:
When I am using the "multiprocessing" module, the program waits indefinitely at the for-statement after the map and at the console I get the message "Warning. multiprocessing may need the main file to exist"
When I am using the "multiprocess" module, the program waits indefinitely at the same point, however I get no message on the console.
What I am trying to do is to break up a large process into a number of subprocesses and after they are concluded, to gather their results, sum them up and continue from that point on.
After watching a great number of videos on the issue, I have not found any solution that actually works on Anaconda Spyder on Windows 11.
Is there anyone with some suggestion?
Thank you
Brilliant video. Absolutely flipping gold
Cool
What if `__name__` is not `__main__`?
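When a module is imported rather than run directly (which is exactly what spawned workers do with your script), `__name__` is the module's name, not `"__main__"`, so the guarded block is skipped. That skipping is the point: it stops each worker from recursively creating its own pool. A minimal sketch:

```python
import multiprocessing

def square(x):
    return x * x

# Under the spawn start method (Windows/macOS default), each worker
# re-imports this file. During that import __name__ is the module's
# name (e.g. "mymodule"), so the block below does NOT run again and
# workers don't recursively spawn more pools.
if __name__ == "__main__":
    with multiprocessing.Pool(2) as pool:
        print(pool.map(square, [1, 2, 3]))  # [1, 4, 9]
```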
Thank you for the great explanation
I didn't quite understand pitfall number 3, when you showed:
`items = [np.random.normal(size=10000) for _ in range(1000)] `
Why is this a pitfall?
Also, for the fib demonstration...
For some reason, fib took 1.35s vs nfib took 35.05s
Even the normal implementation took less time than multiprocessing at 12.93s
I even copied the fib and n_fib from your github to ensure that I wasn't doing something wrong
But I can't seem to replicate your results
This video was so helpful! I recently converted my mass encryption script to use multiprocessing. To encrypt my dataset of 450 MB of images, it went from an estimated 11 hours to just 10 minutes, doing the work at around 750 KB per second.
Great teaching, simple and effective. I've been using multiprocessing with my coroutines, and my program is flying, lol
OMG you used start and end time as part of the code?!?!?!
In the meantime, I got rejected at an interview because I didn't know how to write a decorator to do that same task.
I have 2 cores.
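For anyone hitting that same interview question: the inline start/end timing can be factored into a small decorator. A minimal sketch (`timed` and `slow_add` are made-up names):

```python
import functools
import time

def timed(func):
    # Wraps func so every call reports its wall-clock duration.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        elapsed = time.perf_counter() - start
        print(f"{func.__name__} took {elapsed:.3f}s")
        return result
    return wrapper

@timed
def slow_add(a, b):
    time.sleep(0.1)
    return a + b

print(slow_add(2, 3))  # prints the timing line, then 5
```

`functools.wraps` keeps the decorated function's name and docstring intact, which matters when the timing lines are your only profiling output.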
So well explained. One nice-to-have: a quick tip on how to profile the most CPU-intensive tasks (functions, like the wav transformation in this case) and see a summary of processing time per task.
This was just absolutely fantastic. I'm using this for processing radar data (numpy is involved), and the speedup is great!
But because "experience is a dear school, and a fool will have no other" I did spend several hours banging my head against the following:
> with Pool(8) as p:
>     print(p.map(my_etl_func, a_list_of_filenames))
This works fine, but if you replace p.map() with p.imap(), then the print() statement prints out an address of some kind of iterator. The same thing happens with p.imap_unordered(), of course.
The issue is that p.map() returns a conventional list, but p.imap() and p.imap_unordered() return an iterator.
You can print(list_thing) and something useful happens, but when you print(an_iterator_thing) you get gobbledegook that isn't useful.
It took me hours to figure out what was going on; hopefully that is hours that no one else has to spend. But, I have to admit that I probably learned more than those who will benefit from my folly.
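The fix the comment above is describing is simply to drain the iterator, e.g. with `list()`. A sketch (`my_etl_func` here is a stand-in for the real per-file work):

```python
from multiprocessing import Pool

def my_etl_func(filename):
    # Stand-in for the real per-file work: just tag the name.
    return f"processed:{filename}"

if __name__ == "__main__":
    files = ["a.wav", "b.wav", "c.wav"]
    with Pool(2) as p:
        print(p.map(my_etl_func, files))         # already a list
        print(list(p.imap(my_etl_func, files)))  # imap returns an
                                                 # iterator; list() drains it
```

The lazy iterator isn't a bug: `imap` yields results as they arrive instead of building the whole list in memory first, which is exactly what you want when streaming over very large inputs.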
----
For those who care about large binary datasets, I recommend HDF5 / h5py.
Awesome voice and helpful video 😍
This is useful.
More HORSEPOWER...
If I have a list of x,y coordinates and I need to calculate the distance between each pair of them, then to make it faster I cut the 1000-element array into 5 smaller 200-element arrays. How do I make the first core process array 1, the second core process array 2, and so on?
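You don't normally pin a chunk to a specific core by hand: you pass the chunks to `Pool.map` and the OS spreads the worker processes across the cores for you. A sketch under those assumptions (`pairwise_distances` is a made-up helper, and note it only computes distances within each chunk; pairs that span two chunks would need separate handling):

```python
import math
from multiprocessing import Pool

def pairwise_distances(points):
    # Distance between every pair of points within one chunk.
    return [math.dist(p, q)
            for i, p in enumerate(points)
            for q in points[i + 1:]]

if __name__ == "__main__":
    coords = [(float(i), float(i)) for i in range(1000)]
    # Split the 1000 points into 5 chunks of 200. Pool.map hands one
    # chunk to each worker; the OS schedules workers over the cores.
    chunks = [coords[i:i + 200] for i in range(0, 1000, 200)]
    with Pool(5) as pool:
        results = pool.map(pairwise_distances, chunks)
```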
THANK YOU MY BROTHER FROM ANOTHER COUNTRY AND ANOTHER FAMILY!!!
How about "Unlocking your CPU cores in C (feat. multiprocessing)"...???
Thank you! <3
I was looking for a normal crack for a long time and stumbled upon yoWay...
This helped a lot, thank you
Great video, the program works great
I was looking for this kind of lesson for years. Please do more.
Took me a while due to a mistake, but it works, thanks
All is good
There is so much Python I don't know yet.
Is it advisable to use a queue instead of a list to store the items? Every thread or process gets an item off the queue?
Love your videos. I usually watch all of them just for fun, but this one has enabled me to speed up a very heavy optimization for my science stuff. Ty for your dedication. I can assure you it has real-world implications :)
So, what do you do if the object isn't picklable?
Great video. I was not aware of that module! As it happens, I've spent the last six weeks writing something that can run thousands of processes and aggregate the results. I'm not going to throw it away after watching this video, but I will ponder how I might've designed it differently had I known.
Feat.
Force idling all of your CPU cores, while doing minimal useful work.
I'm sure you have something in the works, but do you have any plans to cover the concurrent.futures thread pool and process pool classes? I was looking into their underlying code, and I can't really tell why they perform so much better than regular thread/process pools from the multiprocessing and multiprocessing.dummy libraries
While I'm not using multi-threading in my current work, I'll definitely save this video so I can one day return to it!
Can async be used in conjunction with multiprocessing?
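Yes. A common pattern is to hand CPU-bound work to a `ProcessPoolExecutor` from inside the event loop via `run_in_executor`, so coroutines stay responsive while the processes crunch. A sketch (`cpu_bound` is a made-up stand-in for real heavy work):

```python
import asyncio
from concurrent.futures import ProcessPoolExecutor

def cpu_bound(n):
    # CPU-heavy work that would block the event loop if run inline.
    return sum(i * i for i in range(n))

async def main():
    loop = asyncio.get_running_loop()
    with ProcessPoolExecutor() as pool:
        # Each call runs in a separate process; awaiting keeps the
        # event loop free for other coroutines in the meantime.
        results = await asyncio.gather(
            *(loop.run_in_executor(pool, cpu_bound, n)
              for n in (10_000, 20_000, 30_000)))
    print(results)

if __name__ == "__main__":
    asyncio.run(main())
```

The function and its arguments must be picklable, since they cross a process boundary just as with `Pool.map`.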
HELP