This restoration and upscale technique is based on PaNup; quite a lot of time has passed since I published that, and it’s time to revise it using better sources, even if the idea is the same – and the new name better reflects the sources used.
In a few words, with MergeHdUp™ (read: merged up) it’s possible to restore details that were lost during the compression/conversion from the master to the home media, using two (or, better, more than two) video tracks from different sources, ideally encoded with different codecs.

The theory behind the idea: the master used for any movie released for home viewing – physical media or streaming – is high quality and uncompressed (or lossless); whether shot on film or digitally, the horizontal resolution is normally 2K or 4K. To port it to HD (1920px wide) or UHD (3840px wide), they could crop the sides or downscale. I don’t know how many were actually cropped, but I guess most of them were downscaled.
Let’s set aside the UHD home media and focus on HD: usually we have full HD (1080p), but sometimes we get 720p from some streaming services. For example, when the 4K 1.85:1 uncompressed (lossless) 16bit 4:4:4 4096×2214 master is downscaled to lossy compressed 8bit 4:2:0 1920×1038 (or 1280×692), a lot is lost of course, but WHAT is lost depends on the downscale method, codec and bitrate used.
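As a sanity check on those numbers: the HD and 720p frame heights quoted above follow directly from an aspect-preserving downscale of the 4096×2214 master, rounded to the even dimensions that 4:2:0 chroma subsampling requires. A minimal sketch (the helper name is mine, just for illustration):

```python
# Deriving the HD/720p frame sizes quoted above from the 4096x2214
# master, assuming a plain aspect-preserving downscale rounded up to
# even dimensions (needed for 4:2:0 chroma subsampling).

def downscale_height(src_w, src_h, dst_w):
    """Height after scaling src_w x src_h to dst_w wide, forced even."""
    h = round(src_h * dst_w / src_w)
    return h + (h % 2)  # bump odd values up to the next even number

print(downscale_height(4096, 2214, 1920))  # 1038
print(downscale_height(4096, 2214, 1280))  # 692
```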
Then, if we take two sources that used the same master, let’s say 1080p MPEG-2 and 1080p AVC, it’s usually possible to notice that some details differ between them. Now, by cleaning each one – denoising, sharpening etc. carefully, mainly to get rid of compression artifacts while enhancing minute details, without taking away too many details or too much grain – then upscaling them using various methods, and finally merging them (equally, or weighted by relative quality), it’s usually possible to “squeeze” out more detail than doing the same processing on a single source.
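The merge step itself is just an average. A conceptual numpy sketch, assuming each source has already been cleaned and upscaled to the same target resolution (the frames and weights here are made up for illustration, not taken from any real encode):

```python
import numpy as np

# Conceptual sketch of the merge step: average N aligned frames,
# either equally or weighted by your judgement of relative quality.
# Frames are float arrays in [0, 1].

def merge_sources(frames, weights=None):
    """Average N same-sized frames, equally or quality-weighted."""
    frames = np.stack(frames).astype(np.float64)
    if weights is None:                      # equal merge
        return frames.mean(axis=0)
    w = np.asarray(weights, dtype=np.float64)
    w = w / w.sum()                          # normalise the weights
    return np.tensordot(w, frames, axes=1)   # weighted average

# Example: give the higher-bitrate source twice the weight of the other.
a = np.full((4, 4), 0.2)   # stand-in for upscaled source 1
b = np.full((4, 4), 0.8)   # stand-in for upscaled source 2
merged = merge_sources([a, b], weights=[1, 2])
```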
Let’s see an example of a random movie using two sources, both streaming: one 720p AVC at 2.4 Mbps and another 1080p AVC at 13.6 Mbps:
https://i.postimg.cc/qqQ96g9s/randommovie-720-dn.png
https://i.postimg.cc/51cxjhJS/randommovie-1080-dn.png
I denoised, sharpened, and denoised each one again, then upscaled them to UHD, and finally merged them equally, all in Avisynth.
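The chain above can be sketched in numpy terms. This is only an illustration of the order of operations: a 3×3 box blur and nearest-neighbour repeat stand in for the real Avisynth denoisers, sharpeners and resizers, and the sizes and strengths are placeholders, not my actual settings:

```python
import numpy as np

# Stand-in for the per-source chain: denoise -> sharpen -> denoise
# -> upscale, then an equal merge of the two results. Tiny frames
# are used so the example runs instantly.

def box_blur(x):
    """3x3 box blur with edge padding (crude denoise stand-in)."""
    p = np.pad(x, 1, mode="edge")
    return sum(p[i:i + x.shape[0], j:j + x.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def clean_and_upscale(frame, scale):
    x = box_blur(frame)                              # first denoise pass
    x = np.clip(x + 0.5 * (x - box_blur(x)), 0, 1)   # unsharp-mask sharpen
    x = box_blur(x)                                  # second denoise pass
    return x.repeat(scale, axis=0).repeat(scale, axis=1)  # upscale

rng = np.random.default_rng(0)
up_720  = clean_and_upscale(rng.random((36, 64)), 6)   # -> 216x384
up_1080 = clean_and_upscale(rng.random((54, 96)), 4)   # -> 216x384
merged  = (up_720 + up_1080) / 2.0                     # equal merge
```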
Then I fed the 1080p into GigaPixel AI 5.4.5 and upscaled it to UHD using 100% denoise, nothing else.
Here you can see them compared:
https://screenshotcomparison.com/comparison/27779
single images:
https://i.postimg.cc/BsxJFmQ9/randommovi…-Hd-Up.png
https://i.postimg.cc/npppGqCs/randommovi…-5-4-5.png
Now let’s see some comparisons using three sources – one AVC from WEB, two from HDTV (one AVC and one MPEG-2); each one was sharpened then upscaled, the three were averaged, and the result was finally denoised:
AMZN Vs MergeHdUp: https://screenshotcomparison.com/comparison/27864
AMZN Vs GigaPixel: https://screenshotcomparison.com/comparison/27865
GigaPixel Vs MergeHdUp: https://screenshotcomparison.com/comparison/27866
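Note that the three-source order of operations differs from the two-source test: each source is only sharpened and upscaled, and the single denoise pass runs on the averaged result. A stand-in sketch, with the same caveat as before that the box blur and nearest-neighbour repeat are placeholders for the real Avisynth filters:

```python
import numpy as np

# Three-source variant: sharpen and upscale each source, average the
# three equally, then denoise the averaged result once. Sizes and
# filter strengths are arbitrary placeholders for illustration.

def box_blur(x):
    """3x3 box blur with edge padding (crude denoise stand-in)."""
    p = np.pad(x, 1, mode="edge")
    return sum(p[i:i + x.shape[0], j:j + x.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def sharpen_and_upscale(frame, scale):
    x = np.clip(frame + 0.5 * (frame - box_blur(frame)), 0, 1)
    return x.repeat(scale, axis=0).repeat(scale, axis=1)

rng = np.random.default_rng(42)
web_avc   = sharpen_and_upscale(rng.random((27, 48)), 8)  # WEB AVC
hdtv_avc  = sharpen_and_upscale(rng.random((54, 96)), 4)  # HDTV AVC
hdtv_mp2  = sharpen_and_upscale(rng.random((54, 96)), 4)  # HDTV MPEG-2
final = box_blur((web_avc + hdtv_avc + hdtv_mp2) / 3.0)   # final denoise
```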
Some thoughts:
- I prefer MergeHdUp™ at normal viewing distance with two sources, and always with three sources
- processing time for two sources in Avisynth is comparable to one image in GigaPixel
- processing time for three sources in Avisynth is higher than GigaPixel, but more rewarding
- using a grain plate should help in any case
- with newer versions of GigaPixel AI and/or different settings, results may improve a bit
- using GigaPixel AI to upscale both the 720p and the 1080p and then merging them leads to a very waxy picture
- the quality of the sources used here is quite low; with higher quality ones – for example HD-DVD, BD, D-Theater, HDTV – and different codecs – VC1, MPEG-2, AVC, HEVC – results are even better
- using three or more sources, albeit more difficult, leads to better results, as you can see