Comments:
Wow! I'm a believer now. AI solutions are going to outperform every hand-crafted solution we've come up with for any computing problem.
As a 3D artist, the AI denoiser saves me so much time. Definitely a game changer!
I would love to get my hands on the program that was used for this demo. Even if it's not a fully finished consumer-level product yet, it's still far better than what I am able to use now.
Oh wow, this is like black magic. The old AI denoiser is already amazing, but the new method is even better!! No way. Not even my human brain can denoise the noisy input.
scary
i just bought an RTX to denoise my 1080p to 4K
Really great video. I heard about these AI denoisers in another video, and they made it sound like a magic tool that can take just a 2D image and make it great.
So your info is helpful: these denoisers need the 3D information about the scene, so you can't use them on movies in their current form.
My hope was that you could use these kinds of denoisers to improve old videos and such in high quality.
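The point above about needing 3D scene data can be made concrete. Here is a tiny, hypothetical sketch (all names are illustrative, not taken from the paper): the denoiser's per-pixel input stacks the noisy color with auxiliary buffers the renderer produces anyway (normals, depth, albedo), buffers that a plain 2D video simply doesn't have.

```python
# Hypothetical sketch: why these denoisers need 3D scene data.
# Alongside the noisy 1-spp color, the network receives auxiliary
# feature buffers from the renderer. A flat 2D video has none of
# these, so the method doesn't transfer directly to old footage.

def build_denoiser_input(color, normal, depth, albedo):
    """Stack per-pixel features into one vector per pixel:
    [R, G, B, nx, ny, nz, z, aR, aG, aB] -> 10 channels."""
    assert len(color) == len(normal) == len(depth) == len(albedo)
    features = []
    for c, n, z, a in zip(color, normal, depth, albedo):
        features.append(list(c) + list(n) + [z] + list(a))
    return features

# Tiny 2-pixel example:
color  = [(0.9, 0.1, 0.1), (0.0, 0.0, 0.0)]   # noisy 1-spp radiance
normal = [(0.0, 0.0, 1.0), (0.0, 1.0, 0.0)]   # surface normals
depth  = [1.5, 3.2]                            # camera-space depth
albedo = [(0.8, 0.2, 0.2), (0.1, 0.1, 0.1)]   # textured base color

x = build_denoiser_input(color, normal, depth, albedo)
print(len(x), len(x[0]))  # prints: 2 10
```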
The leaves on the balcony don't look sharp with AI; they get softened out.
You were right! See Nvidia's new RTX stuff. How many samples per pixel do they use?
RTX is coming soon, can't wait :)
How does this filter deal with important surface features such as text? Stability under translation is great and all, but I still remain a bit skeptical.
Monumental change holy shit...
I believe they used one sample per pixel here just to show off the denoising tech, not to suggest one sample (or fewer) could be enough. For realistic lighting and reflections, you would still need (a lot) more samples, but not nearly as many as for a fully noise-free render.
Then again, a future AI might actually learn how to fake realistic lighting altogether ;)
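The sample-count intuition above can be checked with a toy Monte Carlo experiment. The standard deviation of an N-sample average falls as 1/sqrt(N) (about 0.29/sqrt(N) for uniform samples), which is why brute-forcing a noise-free render needs so many samples per pixel. This is a generic illustration, not the paper's renderer:

```python
import random

random.seed(0)

def estimate(n_samples):
    # Toy Monte Carlo estimator of a pixel's radiance: the average
    # of noisy samples drawn uniformly from [0, 1] (true mean 0.5).
    return sum(random.uniform(0.0, 1.0) for _ in range(n_samples)) / n_samples

def stddev(n_samples, trials=2000):
    # Empirical standard deviation of the estimator over many trials.
    vals = [estimate(n_samples) for _ in range(trials)]
    mean = sum(vals) / trials
    return (sum((v - mean) ** 2 for v in vals) / trials) ** 0.5

# Noise shrinks roughly as 1/sqrt(n): 4x the samples, half the noise.
for n in (1, 4, 16, 64):
    print(n, round(stddev(n), 3))
```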
Well, you were right! :) I always suspected there was a hidden link to other areas when I was working with NNs and video in mid 2015-17, but I would never have believed it would connect directly to enabling real-time 3D rendering by ray tracing. And yet, when you see the noisy image, it seems so "why did I not at least think of this?"-y
it's amazing how tech gets better for our tiny screens where we watch people film with their phones
How does it compare to SVGF in terms of quality and performance?
It's so great for gaming and VR, I have no words for how great it is. I love this technology.
Amazing work guys
Have you digest sir? :P
Incredible
Damn it! How I would like to understand what you are saying...
This is going to change video games forever. Once this technology matures, we will no longer be constrained by polygon counts; we can just ray trace and run the neural network in real time.
I'd love to see this built into preview renders, like Blender's realtime preview rendering with Cycles. I wonder if it has to be built into the renderer itself or can the application hosting the renderer do it? For example, could we get this denoising in Cycles, BI, Luxrender, etc.?
I wonder if the same process can be used to clear up archive footage and the like?
Sir, can you please tell me where you find all these research papers?
How about de-noising existing material like, say, old film?
This makes me wonder... how many video games use neural networks to make spherical 3D objects look like actual spheres rather than high-polygon polyhedra?
Also, that flicker filter can actually be stylish in its own way without costing extra computational power.
This makes me so happy. Great quality video by the way!
Thank you for your concise updates on these papers!
I remember how I started with creating photorealistic renders about 8 years ago. This is insanity. It took my PC, depending on scene complexity, hours to render.
Poor Cycles always gets used as a negative example :'D
Awesome paper though. Looks like dark magic.
Keep it up, Károly Zsolnai-Fehér, this channel really delivers! :)
The impact this will have = much faster and better AR.
This is impressive, but I still see some distortion from the new method. It's not as good as the ground truth at the moment.
I was wondering when you were going to make a video about this method :). How does it compare to the Spatiotemporal Variance-Guided Filter? Either way, I completely agree with you that the results are absolutely insane. Really glad that there are so many brilliant researchers doing this type of work.
I am not that experienced, but look at Unreal's photorealistic graphics; I think we are there.
Path tracing + denoising is definitely the future of real-time simulation.
I wonder how long it takes to render a single frame. Is it in the realm of something fast enough for realtime simulation for gaming / VR?
Lol, I actually found my jaw dropped(=
I neeeeed it
Yess! 😁😀
And we have the GPUs to do it now!
Amazing progress is being made in this field. The CG communities haven't digested the rise of the Eevee renderer yet, and now this comes along.
The day is coming when rendering will no longer be a nightmare for us computer graphics artists, thanks to AI.
In the grim darkness of the near future there is only AI. It is the 22nd century. For over 80 years Deepmind has sat immobile on the Golden Throne of Google. It is the Master of Mankind by the will of the shareholders, and master of a million job roles by the might of its inexhaustible server racks. It is an omnipresent power from the Dark Age of Automation. It is the Computational Lord of the globalist Imperium for whom a thousand employee jobs are sacrificed every day, so that YoY reductions in corporate operating costs never truly cease.
To be unemployed in such times is to be one amongst untold billions. It is to live in the cruelest and most bloody job market imaginable. These are the tales of those times. Forget the power of the middle class and democracy, for so much has been forsaken, never to be reclaimed. Forget the promise of compassion and universal income, for in the grim dark future there is only AI. There is no peace for the middle class amongst the stars, only an eternity of carnage and layoffs, and the laughter of the thirsting billionaire class.
Hi :)
Why would a recurrent autoencoder "clean up" the missing information?
I thought autoencoders were just structures that allow the machine to learn its own N-dimensional representations of the input.
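On the recurrent part of that question: the hidden state lets the network carry information across frames, so each frame is effectively denoised with the history of previous frames behind it. A trained recurrent denoising autoencoder learns this in a far more sophisticated way, but even the simplest possible recurrent filter, a plain exponential moving average over frames (purely illustrative, not the paper's architecture), shows how an accumulated state suppresses per-frame noise:

```python
# Minimal sketch of a recurrent temporal filter: the "hidden state"
# is just the previous filtered frame, blended with each new frame.

def temporal_filter(frames, alpha=0.2):
    state = None  # the recurrent state: last filtered frame
    out = []
    for f in frames:
        if state is None:
            state = list(f)
        else:
            state = [(1 - alpha) * s + alpha * x
                     for s, x in zip(state, f)]
        out.append(state)
    return out

# Deterministic "noise": one pixel flips +/-0.3 around its true value.
true_pixel = 0.5
frames = [[true_pixel + (0.3 if i % 2 == 0 else -0.3)] for i in range(50)]
filtered = temporal_filter(frames)

raw_err = abs(frames[-1][0] - true_pixel)      # always 0.3
flt_err = abs(filtered[-1][0] - true_pixel)    # converges to ~0.033
print(flt_err < raw_err)  # prints True
```

The filtered pixel settles close to the true value while each raw frame stays 0.3 off, which is the basic reason a recurrent connection buys temporal stability on 1-spp input.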