But MPC-HC + madVR - 100% oversaturated.
Yes, but 100% green in Rec.709 space should not vary outside the generator, and the patterns are from AVS HD 709.

So you had to use different test patterns for MPC-HC+madVR, right? So you get the correct/expected results when using madTPG, and incorrect results with MPC-HC+madVR? Is there any difference between enabling/disabling the 3dlut in MPC-HC+madVR? Maybe the 3dlut is simply not applied at all when using MPC-HC+madVR? Or is it applied, but somehow incorrectly? That is important to find out. How did you test this with MPC-HC+madVR? HCFR can only use madTPG to show test patterns, not MPC-HC+madVR.

I have all sorts of ideas and plans for madVR, but I'm not ready to talk about them yet.

Dynamically switching between different profiles depending on how high the rendering times are is a "dangerous" thing to do. Basically, it could happen that madVR would then switch back and forth between two different profiles all the time. And the switching itself costs a bit of performance as well.

Why do the video profiles have to be configured manually? Thinking about it, this could be auto-configured pretty easily. Let's say I'm running with SVP and madVR has 16.6ms to render each frame. What I would manually do is open each resolution of video (288p, 360p, 480p, 720p, 1080p) in widescreen, press CTRL+J and create a profile that brings the rendering time as close to 16.6ms as possible. madVR could do these tests and build these profiles itself. Perhaps even better, it could try various settings until it gets close to its maximum allowed rendering time, and cache these results. If there are dropped frames, it would automatically lower the settings; if the rendering time gets lower, it would automatically raise the settings.
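The auto-switching idea, and the "back and forth all the time" risk raised against it, can be sketched as a small control loop with hysteresis: only step quality back up once the rendering time is comfortably under the frame budget, so a rendering time hovering right at the threshold doesn't cause constant profile flips. This is purely illustrative; the profile names, numbers, and function are invented, and madVR exposes no such API.

```python
# Hypothetical sketch of the auto-profile idea discussed above. Pick a
# settings profile based on measured rendering time, with a hysteresis
# margin so the loop doesn't flip between two profiles whenever the
# rendering time hovers around the budget. All names are invented.

FRAME_BUDGET_MS = 16.6   # e.g. 60fps output via SVP
HYSTERESIS_MS = 1.5      # margin required before switching back up

# Profiles ordered from cheapest to most expensive (illustrative only).
PROFILES = ["fast", "balanced", "quality"]

def adjust_profile(current_index, measured_ms, dropped_frames):
    """Return the new profile index after one measurement window."""
    if dropped_frames > 0 or measured_ms > FRAME_BUDGET_MS:
        # Over budget or dropping frames: step down immediately.
        return max(current_index - 1, 0)
    if measured_ms < FRAME_BUDGET_MS - HYSTERESIS_MS:
        # Comfortably under budget: allow stepping back up.
        return min(current_index + 1, len(PROFILES) - 1)
    # Inside the hysteresis band: stay put to avoid constant switching.
    return current_index
```

With these numbers, a rendering time between 15.1ms and 16.6ms keeps the current profile, which is exactly what suppresses the back-and-forth switching.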
DXVA2 decoding and deinterlacing output NV12 surfaces. Unfortunately, pixel shaders can't use them directly, so madVR has to convert the NV12 surfaces somehow to make them pixel shader compatible. There are 3 different ways madVR can do that. Solution (1) is copyback: downloading the NV12 data to CPU RAM, then re-uploading it to the GPU in a different format. Solution (2) is conversion/processing via OpenCL, introduced in v0.87.0. Solution (3) is a copy operation on the GPU, which can be lossless with AMD GPUs, but produces a slightly blurred chroma channel with NVidia and Intel GPUs. There's a trade-quality-for-performance option to switch between (1) and (3), and there's a new option in "rendering -> general settings" to enable/disable (2).

Does this mean that AMD users should enable the trade-quality-for-performance options "don't use 'copyback' for DXVA deinterlacing (Intel, NVidia)" and "don't use 'copyback' for DXVA decoding (Intel, NVidia)"? And is the reason they don't mention AMD that no quality is lost?
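To make the copyback path concrete: an NV12 buffer is a full-resolution Y (luma) plane followed by a half-height plane of interleaved U/V samples (2x2 chroma subsampling). Once that buffer is in CPU RAM, splitting it into separate planes for re-upload might look like the sketch below. The plane sizes follow the NV12 definition; the function itself is illustrative, not madVR's actual code.

```python
# Rough sketch of what the "copyback" path (solution 1) has to do once
# the NV12 data has been downloaded to CPU RAM: split the semi-planar
# layout (full-size Y plane, then interleaved UVUV... at half height)
# into three separate planes for re-upload in a different format.

def nv12_to_planes(data, width, height):
    """Split an NV12 buffer into (y, u, v) plane byte strings."""
    y_size = width * height      # full-resolution luma plane
    uv_size = y_size // 2        # interleaved UV plane, 2x2 subsampled
    assert len(data) == y_size + uv_size, "buffer must match NV12 layout"
    y = data[:y_size]
    uv = data[y_size:]
    u = uv[0::2]                 # even bytes are U samples
    v = uv[1::2]                 # odd bytes are V samples
    return y, u, v
```

The per-frame download to CPU RAM and re-upload is the cost of this path; solutions (2) and (3) avoid it by staying on the GPU.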