Alcedo atthis, 7 Nov 2015
Sony A77II + Sigma 150-500
EXIF: 500mm; 1/100"; f/9.0; ISO 800
Tags: Alcedo atthis, kingfisher, birds, animals, nature
Views: 2985; Loves: 0; Tag Count: 6
Additional Info: File Size: 1.75 MB; Dimensions: 2000x1400; Created: 7 Nov 2015; Updated: 12 Nov 2015

DPReview news

All articles from Digital Photography Review
  • NVIDIA researchers can now turn 30fps video into 240fps slo-mo footage using AI

    NVIDIA researchers have developed a new method that uses artificial intelligence to generate 240fps slow-motion video from 30fps content by synthesizing the intermediate frames.

    Detailed in a paper submitted to the Cornell University Library, the researchers trained the system by processing more than 11,000 videos through NVIDIA Tesla V100 GPUs and a cuDNN-accelerated PyTorch deep learning framework. This archive of videos, shot at 240fps, taught the system to predict how scene content moves between the frames of footage shot at only 30fps, so that plausible intermediate frames can be synthesized.
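    To see why learned motion prediction matters, consider the naive alternative: simply cross-fading between neighboring frames. The sketch below (pure Python, hypothetical helper name, frames reduced to flat lists of pixel intensities for illustration) implements that baseline; NVIDIA's network instead models motion between frames, which is what avoids the ghosting a plain blend produces on moving subjects.

```python
def interpolate_frames(frame_a, frame_b, n_intermediate):
    """Naive baseline: linearly blend two frames to synthesize
    n_intermediate evenly spaced frames between them.

    Frames are represented as flat lists of pixel intensities.
    This cross-fade ghosts moving subjects; a learned method
    predicts motion instead of averaging pixel values in place.
    """
    out = []
    for i in range(1, n_intermediate + 1):
        t = i / (n_intermediate + 1)  # fractional position between the two frames
        out.append([round((1 - t) * a + t * b)
                    for a, b in zip(frame_a, frame_b)])
    return out

# Going from 30fps to 240fps means synthesizing 7 new frames
# between every pair of captured frames (8x the frame count).
mids = interpolate_frames([0, 0], [240, 240], n_intermediate=7)
print(len(mids))   # 7
print(mids[3])     # [120, 120] -- the halfway blend
```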

    This isn't the first time something like this has been done. A post-production plug-in called Twixtor has offered frame interpolation for almost a decade now, but it doesn't come anywhere close to NVIDIA's results in terms of quality and accuracy. Even in scenes with a great amount of detail, there appear to be minimal artifacts in the synthesized frames.

    The researchers also note that while some smartphones can shoot 240fps video, recording everything natively at that rate is rarely worth the processing power and storage when a system such as theirs can get you most of the way there from 30fps footage. 'While it is possible to take 240-frame-per-second videos with a cell phone, recording everything at high frame rates is impractical, as it requires large memories and is power-intensive for mobile devices,' the researchers wrote in the paper.
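    The memory argument is easy to check with back-of-envelope arithmetic: at a fixed resolution, uncompressed 240fps capture generates eight times the data of 30fps capture. A quick sketch, assuming 1080p 8-bit RGB frames purely for illustration:

```python
def raw_rate_mb_per_s(width, height, fps, bytes_per_pixel=3):
    # Uncompressed data rate for 8-bit RGB frames, in MB/s.
    return width * height * bytes_per_pixel * fps / 1e6

rate_30 = raw_rate_mb_per_s(1920, 1080, 30)    # ~187 MB/s
rate_240 = raw_rate_mb_per_s(1920, 1080, 240)  # ~1493 MB/s
print(rate_240 / rate_30)  # 8.0 -- 240fps capture writes 8x the data
```

    Real devices compress heavily, of course, but the 8x ratio in sensor readout and encoding work is what drives the power and storage cost the researchers cite.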

    The research and findings detailed in the paper will be presented at the annual Computer Vision and Pattern Recognition (CVPR) conference in Salt Lake City, Utah, this week.

(C) 2017 Giuseppe Gessa