Official update: 01/20/2026
Preface: Data engineers perform preprocessing, the foundational stage in which they gather messy, raw data from diverse sources; clean it by handling missing values, outliers, and inconsistencies; integrate disparate datasets; and transform the result into a unified, structured format that is ready and reliable for data scientists to perform feature engineering (creating new, meaningful features) and ultimately build better machine learning models. This high-quality, consistent input prevents “garbage in, garbage out” at the modeling phase, as sketched below.
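The following minimal sketch (pandas, with entirely hypothetical data and column names) illustrates the core steps named above: filling missing values, clipping an outlier, and integrating two sources into one structured frame.

    import pandas as pd

    # Hypothetical raw sources: a click log and a user table.
    clicks = pd.DataFrame({"user_id": [1, 2, 2], "item_id": [10, 11, None]})
    users = pd.DataFrame({"user_id": [1, 2], "age": [25, 250]})  # 250 is an outlier

    clicks["item_id"] = clicks["item_id"].fillna(-1).astype(int)  # missing values
    users["age"] = users["age"].clip(upper=100)                   # outlier handling
    dataset = clicks.merge(users, on="user_id", how="left")       # integration
    print(dataset)  # unified, structured frame ready for feature engineering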
Background: Transformers4Rec is pre-installed in the merlin-pytorch container available from the NVIDIA GPU Cloud (NGC) catalog. This container is part of the NVIDIA Merlin ecosystem and is specifically designed to support sequential and session-based recommendation tasks using PyTorch.
The Merlin workflow shows where we suspect the design weakness behind CVE-2025-33233 lies. The pipeline, NVTabular for preprocessing → PyTorch for training → Triton for serving, makes PyTorch a critical component: if its model-loading function is insecure, Merlin’s container is exposed regardless of the security of NVIDIA’s own code.
If Transformers4Rec internally uses torch.load (a common way to load PyTorch models) and relies on weights_only=True for safety, then CVE-2025-32434, the PyTorch vulnerability in which torch.load could execute arbitrary code even with weights_only=True (fixed in PyTorch 2.6.0), could be the root cause or at least a contributing factor.
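As a hedged illustration of the suspected pattern, and not Transformers4Rec’s actual code, a checkpoint loader of the following shape would have been considered safe before CVE-2025-32434:

    import torch

    def load_checkpoint(path: str) -> dict:
        # weights_only=True restricts unpickling to tensors and primitive
        # containers and was widely treated as a safe default.
        # CVE-2025-32434 showed the restriction could be bypassed in
        # PyTorch <= 2.5.1, so loading an attacker-supplied file could
        # still execute arbitrary code.
        return torch.load(path, map_location="cpu", weights_only=True)

    # Hypothetical usage: restoring a trained session-based recommender.
    # state = load_checkpoint("t4rec_checkpoint.pt")
    # model.load_state_dict(state["model_state_dict"])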
NVIDIA might have classified it as a separate CVE because the exploit path involves their product’s integration with PyTorch, making it a product-level exposure rather than just a dependency issue.
Vulnerability details (CVE-2025-33233): NVIDIA Merlin Transformers4Rec for all platforms contains a vulnerability where an attacker could cause a code injection issue. A successful exploit of this vulnerability might lead to code execution, escalation of privileges, information disclosure, and data tampering.
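To make “code injection” concrete, the following generic sketch shows standard Python pickle behavior, not a proof of concept for CVE-2025-33233: deserializing an untrusted byte stream can execute attacker-controlled code, and torch.load uses pickle under the hood, so model files carry the same class of risk.

    import pickle

    class Malicious:
        def __reduce__(self):
            # The pickle loader calls this during deserialization; a real
            # attacker would return os.system or similar instead of print.
            return (print, ("code ran during deserialization",))

    payload = pickle.dumps(Malicious())
    pickle.loads(payload)  # merely loading the bytes runs code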
Official announcement: Please refer to the following link for details: