
Understanding MFG's Impact: Why Baseline Input Latency Matters in Framegen-Enhanced Gaming Performance


In the rapidly evolving landscape of PC graphics, artificial intelligence and neural rendering techniques are increasingly used to enhance gaming performance. Technologies like Nvidia's DLSS (Deep Learning Super Sampling), AMD's FSR (FidelityFX Super Resolution), and Intel's XeSS (Xe Super Sampling) let gamers boost frame rates at the cost of minor image-quality degradation. Multi-frame generation (MFG) takes this a step further, generating additional frames to reach even higher frame rates, typically via an in-game toggle. But while MFG and similar technologies can make visuals smoother, they do not necessarily make a game more responsive, and responsiveness is crucial to a seamless experience. The relevant metric, input latency, is the delay between a user input (such as a mouse click) and the corresponding visual change on screen. High input latency makes for an unpleasant gaming experience, marked by "rubber banding" or "swimminess," where player actions feel disconnected from the game's response.

The Importance of Measuring Input Latency

Nvidia and Intel already offer tools to measure this latency: Nvidia's FrameView application captures a "PC Latency" metric, while Intel's PresentMon measures "All Input to Photon Latency." These tools provide valuable insight, though they are not perfect and can exhibit compatibility issues. Despite those limitations, higher latency readings generally correlate with a worse gaming experience. Nvidia has highlighted the problem in specific cases: in Microsoft Flight Simulator, for example, a native frame rate of 50 FPS produced an input latency of 172 ms, far higher than many players consider acceptable. To address this, Nvidia introduced the concept of a Maximum Acceptable Latency Threshold (MALT) as part of its marketing for the RTX 5090. The idea is to quote boosted frame rates only alongside a sufficiently low input latency, giving a more holistic view of performance.

Normalizing Performance to MALT

To explore the effectiveness of normalizing performance to MALT, we ran tests in Alan Wake II, setting a 60 ms input-latency threshold and adjusting settings to see how different GPUs performed. The results show that more powerful GPUs generally deliver both lower input latency and higher output frame rates. The RTX 5080, paired with DLSS Ultra Performance, could reach 190 FPS at 1440p without exceeding the 60 ms MALT. The RTX 4090, by contrast, needed DLSS Performance to stay under the same threshold, capping its output at 97 FPS. Running DLSS Ultra Performance on both cards told a different story: the RTX 5080 output 190 FPS with MFG 4X, but the RTX 4090 offered the more responsive experience, averaging 46 ms of input latency at 133 FPS.
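To illustrate how this kind of normalization works, here is a minimal sketch of the selection logic: discard every configuration whose measured latency exceeds the budget, then report the highest surviving output frame rate per GPU. The record structure is hypothetical; the FPS figures come from the results above, but the latency values other than the quoted 46 ms are placeholders near the budget, and the 4090's framegen mode is an assumption.

```python
from dataclasses import dataclass

@dataclass
class Result:
    gpu: str
    upscaler: str      # DLSS preset used
    framegen: str      # frame generation mode
    output_fps: float  # displayed frame rate
    latency_ms: float  # average input latency

MALT_MS = 60.0  # the 60 ms threshold used in the Alan Wake II tests

def best_under_malt(results, malt_ms=MALT_MS):
    """Highest-FPS configuration per GPU that stays within the latency budget."""
    best = {}
    for r in results:
        if r.latency_ms > malt_ms:
            continue  # fails the latency budget, so its FPS doesn't count
        if r.gpu not in best or r.output_fps > best[r.gpu].output_fps:
            best[r.gpu] = r
    return best

# FPS figures from the tests above; latency values other than 46 ms are
# placeholders near the budget, and the 4090's framegen mode is assumed.
results = [
    Result("RTX 5080", "Ultra Performance", "MFG 4X", 190, 58),
    Result("RTX 4090", "Performance", "FG 2X", 97, 59),
    Result("RTX 4090", "Ultra Performance", "FG 2X", 133, 46),
]

for gpu, r in best_under_malt(results).items():
    print(f"{gpu}: {r.output_fps:.0f} FPS @ {r.latency_ms:.0f} ms "
          f"({r.upscaler}, {r.framegen})")
```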
Evaluating MFG Claims

The ability of framegen technologies, particularly MFG, to raise performance while keeping input latency acceptable varies across GPUs and vendors. Intel recommends a minimum of 60 FPS for optimal Xe Frame Generation (XeFG) results, and AMD advises against using FSR 3 Frame Generation below 30 FPS pre-interpolation. Nvidia does not specify a recommended baseline frame rate for MFG, but DLSS 4 includes Frame Warp, a reprojection technique that can reduce perceived latency by reorienting output frames based on the user's most recent inputs. That flexibility may let MFG tolerate higher baseline latencies than other framegen methods.

Marketing Transparency

Transparent communication about input latency is essential for honest performance claims. Nvidia's marketing for the RTX 5090 emphasizes low input latency, whereas RTX 5050 marketing often highlights output frame rates without mentioning the underlying latency. The discrepancy can mislead consumers, because a game's perceived smoothness depends on responsiveness as well as output frame rate. In Nvidia's RTX 5050 materials, for example, games like Cyberpunk 2077 and Avowed top out at around 160 FPS with MFG 4X, which implies a rendered baseline of roughly 40 FPS (one rendered frame for every four displayed). That may be acceptable for some players, but it underscores the need to disclose the input-latency assumptions behind such figures.

Practical Implications

Framegen and MFG can be transformative for high-refresh-rate monitors, putting high frame rates within reach of mid-range GPUs. The RTX 5060 Ti 16GB, for instance, can reach 200 FPS at 1440p using DLSS Ultra Performance and MFG 4X, an output previously attainable only on a high-end card like the RTX 5090. That is especially valuable for budget-conscious gamers and for slower-paced games where the lowest possible input latency is not essential. For competitive or fast-paced games, where low input latency is crucial, higher-end GPUs remain indispensable: they deliver better image quality at the same latency, or push higher frame rates at higher resolutions without compromising responsiveness.

Conclusion

Framegen technologies like MFG can dramatically boost output frame rates, but they must be evaluated alongside input latency to give a complete picture of gaming performance. Vendors and media should adopt a more transparent approach, clearly stating the MALT assumptions behind any framegen performance claim, so that consumers can make informed decisions and understand the true value of their hardware.
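As a closing illustration, the sketch below works through the frame-generation arithmetic used above: with MFG 4X, one frame is rendered for every four displayed, so a 160 FPS output implies a roughly 40 FPS rendered baseline, and that baseline is what paces input sampling. The function names are my own; the vendor minimums are the Intel and AMD guidance cited earlier, applied here purely for comparison.

```python
def baseline_fps(output_fps: float, mfg_factor: int) -> float:
    """Rendered (pre-generation) frame rate implied by a frame-generated
    output: MFG 4X displays four frames for every one actually rendered."""
    return output_fps / mfg_factor

def baseline_frametime_ms(output_fps: float, mfg_factor: int) -> float:
    """Time between rendered frames; a floor on how often input is sampled."""
    return 1000.0 / baseline_fps(output_fps, mfg_factor)

# Vendor guidance cited above: Intel recommends >= 60 FPS for XeFG, and
# AMD advises against FSR 3 Frame Generation below 30 FPS pre-interpolation.
VENDOR_MINIMUMS = {"Intel XeFG": 60.0, "AMD FSR 3 FG": 30.0}

# The RTX 5050 example from the marketing discussion: ~160 FPS with MFG 4X.
out_fps, factor = 160.0, 4
base = baseline_fps(out_fps, factor)           # 40.0 FPS rendered
pace = baseline_frametime_ms(out_fps, factor)  # 25.0 ms between rendered frames

print(f"{out_fps:.0f} FPS with MFG {factor}X -> {base:.0f} FPS baseline "
      f"({pace:.1f} ms between rendered frames)")
for vendor, minimum in VENDOR_MINIMUMS.items():
    verdict = "meets" if base >= minimum else "falls below"
    print(f"  baseline {verdict} {vendor}'s {minimum:.0f} FPS guidance")
```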
