Category Archives: Potential Risk of CVE

Please do not underestimate CVE-2023-31364. The official remedial measures were announced on February 24, 2026. (27-02-2026)

Preface: AMD EPYC processors (including the latest 9005 Series) fully incorporate an I/O Memory Management Unit (IOMMU). In AMD’s architecture, this technology is known as AMD-Vi (AMD I/O Virtualization). It serves as a foundational component for hardware-level security and isolation.

Background: In a virtualized environment, the IOMMU (AMD-Vi) acts as the essential bridge between “physical hardware” and the Guest VM. When you enable hardware passthrough, the IOMMU functions as both a hardware-level “translator” and a “security guard.” The following details how IOMMU participates in the operation of guest virtual machines:

About Memory Address Mapping (DMA Remapping)

This is the most critical function of the IOMMU.

  • The Problem: A Guest VM operates using Guest Physical Addresses (GPA), which are virtualized. However, a physical device (like a NIC or GPU) requires Host Physical Addresses (HPA) to function.
  • The Solution: When a driver inside the Guest VM commands a device to perform a Direct Memory Access (DMA), the IOMMU intercepts the request. It uses a translation table (provided by the hypervisor) to instantly map the GPA to the HPA. This allows the Guest VM to interact with hardware at full speed without knowing the host’s actual memory layout.
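The GPA-to-HPA translation step described above can be modeled in a few lines of Python. This is a toy model of the translation-table lookup, not real driver or firmware code; the page size, table layout, and class names are illustrative assumptions.

```python
# Illustrative model of IOMMU DMA remapping (not real hardware code).
# The hypervisor programs a translation table; every DMA request from the
# guest carries a Guest Physical Address (GPA) that the IOMMU translates
# to a Host Physical Address (HPA) before it touches host memory.

PAGE_SIZE = 4096

class IommuTranslationError(Exception):
    """Raised when a DMA targets a GPA with no mapping (blocked access)."""

class Iommu:
    def __init__(self):
        # GPA page frame -> HPA page frame, populated by the hypervisor
        self.table = {}

    def map_page(self, gpa_page, hpa_page):
        self.table[gpa_page] = hpa_page

    def translate(self, gpa):
        page, offset = divmod(gpa, PAGE_SIZE)
        if page not in self.table:
            # The "security guard" role: unmapped DMA is rejected
            raise IommuTranslationError(f"unmapped GPA 0x{gpa:x}")
        return self.table[page] * PAGE_SIZE + offset

iommu = Iommu()
iommu.map_page(0x10, 0x8F2)                    # hypervisor-provided mapping
hpa = iommu.translate(0x10 * PAGE_SIZE + 0x40)
print(hex(hpa))                                # prints 0x8f2040
```

The lookup happens in hardware at DMA speed on a real system; the point of the sketch is only that the guest never sees, and cannot choose, the HPA side of the table.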

Vulnerability details: CVE-2023-31364  Improper handling of direct memory writes in the input-output memory management unit could allow a malicious guest virtual machine (VM) to flood a host with writes, potentially causing a fatal machine check error resulting in denial of service.

The above details and VFIO code demonstrate (refer to attached diagram) the mechanism allowing a virtual machine to directly access hardware via IOMMU mapping, which is essential for launching the CVE-2023-31364 attack. The vulnerability occurs when a guest utilizes this direct path to send malicious, high-volume write requests, causing a flawed IOMMU to trigger a fatal Machine Check Error (MCE) and crash the host.

Official announcement: Please refer to the link for details – https://www.amd.com/en/resources/product-security/bulletin/amd-sb-7059.html

CVE-2026-3061: Out of bounds read in Media in Google Chrome (26-02-2026)

Preface: In the computer industry, the term “sustainability” encompasses design flaws and their remedies. If someone tells you that their hardware and software products have never been found to have any vulnerabilities to date, their design may sound perfect. However, they still need to maintain its sustainability.

Background: In Google Chrome, “Media” refers to the suite of features and APIs used to handle, control, and debug audio, video, and images. Here is a breakdown of what it encompasses.

What is “Media” in Chrome?

  1. Global Media Control (Media Hub):
    Located in the top-right corner (a music note icon), this hub allows you to play, pause, or skip tracks across all open tabs without switching to the specific page.
  2. DevTools Media Panel:
    A hidden tool for developers (found via F12 > Three Dots > More tools > Media) used to inspect video resolution, codecs (like AV1), and playback errors in real-time.
  3. Built-in Media Player:
    Chrome acts as a standalone player. You can drag and drop MP4, MP3, JPG, or PDF files directly into a tab to view them.
  4. Casting:
    Integrated support for Google Cast, allowing you to send audio or video from a tab to a TV or Nest speaker. 

In summary, Media in Google Chrome comprises the Global Media Control, the DevTools Media Panel, the Built-in Media Player, and Casting.

Vulnerability details: Out of bounds read in Media in Google Chrome prior to 145.0.7632.116 allowed a remote attacker to perform an out of bounds memory read via a crafted HTML page. (Chromium security severity: High)

Official announcement: Please refer to the link for details – https://nvd.nist.gov/vuln/detail/CVE-2026-3061

CVE-2025-33179 and CVE-2025-33180: About NVIDIA Cumulus Linux and NVOS products (25-02-2026)

Preface: NVIDIA InfiniBand and Spectrum switches are both based on technology from Mellanox Technologies. The Spectrum switch ASIC portfolio, originally developed by Mellanox for high-performance Ethernet networking, was rebranded under NVIDIA and is now a core component of NVIDIA’s networking division. NVIDIA completed the acquisition of Mellanox Technologies, a major supplier of high-performance interconnect technology (switches, NICs), in April 2020 for approximately $7 billion. This strategic move enhanced NVIDIA’s data center networking capabilities, specifically in InfiniBand and Ethernet, to support AI and high-performance computing.

Background: Cumulus Linux is optimized for Ethernet fabrics, while NVOS/Onyx is largely utilized in high-performance InfiniBand environments.

-Key switches supporting NVIDIA Cumulus Linux include:

  • Spectrum-4: SN5600, SN5600D, SN5400
  • Spectrum-2/3: SN3700, SN3700C, SN4600, SN4700
  • Spectrum-1: SN2700, SN2100, SN2745 

Example: The Spectrum-4 series (including the SN5600, SN5600D, and SN5400) is a line of physical Ethernet switches (hardware).

Use Cases: Ideal for hyperscale cloud data centers and enterprise AI networks, emphasizing scalability and full customizability.

-NVOS (NVIDIA Onyx) or similar OS typically supports:

Quantum/Quantum-2 InfiniBand: Switches designed for high-performance AI, such as the Quantum-2 series.

Use Cases: Focused on High-Performance Computing (HPC) and large-scale AI training clusters (AI Factories), particularly environments utilizing NVLink for GPU interconnects.

Note: As of early 2026, NVIDIA is focusing on standardizing the management commands (NVUE) across both systems to reduce the complexity of automation workflows when transitioning between different operating systems.

Cumulus Linux (Native Linux): When you SSH in, you land in a standard Debian Linux bash shell. You configure the switch using the NVUE (NVIDIA User Experience) object model via the nv command (e.g., nv set interface swp1…).
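Because that shell is ordinary Debian bash, NVUE commands can be driven from scripts. The following is a minimal sketch, assuming you are on (or SSH’d into) a Cumulus Linux switch where the `nv` binary exists; the helper names are mine, and `dry_run` keeps the example runnable off-switch.

```python
# Hedged sketch: composing NVUE CLI commands from Python for automation.
# `nv set ...` stages configuration; `nv config apply` commits it.
import shlex
import subprocess

def nv_set(path: str, value: str):
    """Compose an `nv set <path> <value>` argument vector."""
    return ["nv", "set", *path.split(), str(value)]

def run_nv(cmd, dry_run=True):
    # Off-switch (or in CI), just render the command text; on a real
    # switch, pass dry_run=False to execute it via the NVUE CLI.
    if dry_run:
        return " ".join(shlex.quote(part) for part in cmd)
    return subprocess.run(cmd, capture_output=True, text=True).stdout

print(run_nv(nv_set("interface swp1 link state", "up")))
print(run_nv(nv_set("interface swp1 description", "uplink-to-spine")))
print(run_nv(["nv", "config", "apply"]))   # NVUE stages changes until applied
```

Building the command as an argument list (rather than one interpolated string) is also the habit that avoids exactly the command-injection class described in CVE-2025-33180.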

Vulnerability Note: The CVEs (CVE-2025-33179/33180) specifically target the NVUE API and CLI engine found in Cumulus Linux 5.x and later.

Vulnerability details:

CVE-2025-33179 NVIDIA Cumulus Linux and NVOS products contain a vulnerability in the NVUE interface, where a low-privileged user could run an unauthorized command. A successful exploit of this vulnerability might lead to escalation of privileges.

CVE-2025-33180 NVIDIA Cumulus Linux and NVOS products contain a vulnerability in the NVUE interface, where a low-privileged user could inject a command. A successful exploit of this vulnerability might lead to escalation of privileges.

Official announcement: Please refer to the link for details – https://nvidia.custhelp.com/app/answers/detail/a_id/5722

AMD-SB-3043 all aspects, side-channel analysis for privacy applications on confidential VMs (24th Feb 2026)

Preface: AMD-SB-3042 is a formal advisory for a specific vulnerability, while AMD-SB-3043 is an advisory regarding an analytical tool (SNPeek) used to detect such vulnerabilities.

Background: When a Zen 5 system runs AMD SEV-SNP, guest memory encryption is handled by dedicated hardware rather than by a traditional hypervisor such as VMware or Hyper-V, so the traffic a tool like SNPeek can collect from the host side is limited to unencrypted data. Details are shown below:

Limitations of SNPeek: when using a tool like SNPeek to intercept traffic on the host side, it can only see data marked as “Shared” (usually used by the hypervisor to assist with network or disk I/O). Data in Private Memory always appears encrypted to SNPeek; the hypervisor cannot read its plaintext content at all.

Potential risk warning: despite strong hardware encryption, the recently discovered StackWarp (CVE-2025-29943) vulnerability shows that a malicious hypervisor could still influence the execution path of Zen 5 virtual machines by manipulating the CPU’s internal “Stack Engine.” While this does not mean it can directly “read” encrypted memory, it can achieve indirect attacks.

AMD-SB-3043: Analytical Framework (SNPeek)

  • Nature: A bulletin regarding a research framework and toolkit for evaluating side-channel risks in Confidential VMs (CVM).
  • Core Content: Describes the SNPeek open-source toolkit.
  • Function & Purpose:
    • SNPeek is not a single vulnerability but an automated analysis pipeline that uses machine learning to assess how sensitive a CVM application is to side-channel attacks.
    • It helps developers quantify how much information an application might leak when running in encrypted environments like SEV-SNP.
    • It provides configurable attack primitives to help developers locate “weak points” in their code and guides the implementation of mitigations (e.g., oblivious memory access).

Official details and announcement: AMD’s assessment is that all side-channel techniques demonstrated in the paper fall within the category of already known, documented, and out-of-scope behaviors according to the published SEV/SNP threat model. AMD has introduced features on Zen 5 processors—specifically Ciphertext Hiding and PMC Virtualization—that address the ciphertext-visibility and PMC-based leakage paths highlighted by the researchers.

AMD recommends software developers employ existing best practices, including constant-time algorithms, and avoiding secret-dependent data accesses where appropriate. Please refer to the link for details – https://www.amd.com/en/resources/product-security/bulletin/amd-sb-3043.html
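To make “constant-time algorithms” concrete: a naive comparison returns early on the first mismatched byte, so its running time leaks how many leading bytes of a secret an attacker has guessed correctly, while Python’s `hmac.compare_digest` inspects every byte regardless. A minimal sketch:

```python
# Naive vs constant-time comparison of secret material.
import hmac

def naive_equals(a: bytes, b: bytes) -> bool:
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:          # early exit: timing depends on the data
            return False
    return True

def constant_time_equals(a: bytes, b: bytes) -> bool:
    # hmac.compare_digest examines every byte before answering
    return hmac.compare_digest(a, b)

secret = b"s3cret-token"
assert naive_equals(secret, b"s3cret-token")
assert constant_time_equals(secret, b"s3cret-token")
assert not constant_time_equals(secret, b"guess-token!")
```

The same principle applies to secret-dependent data accesses: memory addresses, like running time, are observable to a side-channel adversary, which is why AMD also recommends avoiding them where appropriate.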

CVE-2025-33239 and CVE-2025-33240: Regarding NVIDIA Megatron Bridge (20th Feb 2026)

Preface: Artificial intelligence is both harmful and beneficial. Why is it harmful? Fundamentally, it reduces opportunities for low-skilled jobs. Speakers chant slogans like “Smart living, increased productivity.” However, its underlying problems seem difficult to conceal, so you can learn about the latest developments in artificial intelligence from online newspapers and articles. Today, when you seek answers from artificial intelligence, the answers it provides may not be the truth! Why have humans been able to survive and thrive on Earth for thousands of years? The answer is: survival of the fittest.

Background: Megatron-Core and Megatron-LM are open-source tools that are typically used together to train LLMs at scale across GPUs. Megatron-Core expands the capability of Megatron-LM.

NeMo Megatron Bridge is utilized by AI researchers, infrastructure engineers, and developers focused on high-performance training and fine-tuning of large language models (LLMs) and foundation models, particularly those bridging the Hugging Face ecosystem with NVIDIA’s Megatron-Core. NVIDIA H100 GPU introduced support for a new datatype, FP8 (8-bit floating point), enabling higher throughput of matrix multiplies and convolutions. Megatron Bridge uses the NVIDIA TransformerEngine (TE) to leverage speedups from FP8.

While NVIDIA developed Megatron Bridge to facilitate checkpoint conversion between NVIDIA NeMo and other deep learning frameworks, OpenAI utilizes its own internal infrastructure. As of 2026, NVIDIA Megatron Bridge is primarily used by large enterprises, Cloud Service Providers (CSPs), and Sovereign AI initiatives that need to train or deploy open-source models (such as Llama 3, Mistral, or Qwen) at massive scale on NVIDIA hardware.

Vulnerability details:

CVE-2025-33239 NVIDIA Megatron Bridge contains a vulnerability in a data merging tutorial, where malicious input could cause a code injection. A successful exploit of this vulnerability might lead to code execution, escalation of privileges, information disclosure, and data tampering.

CVE-2025-33240 NVIDIA Megatron Bridge contains a vulnerability in a data shuffling tutorial, where malicious input could cause a code injection. A successful exploit of this vulnerability might lead to code execution, escalation of privileges, information disclosure, and data tampering.

Official announcement: Please refer to the link for details – https://nvidia.custhelp.com/app/answers/detail/a_id/5781

CVE-2025-33245: NVIDIA NeMo 2.0+ shifts away from pickle (19th Feb 2026)

Preface: NeMo 2.0 is NVIDIA’s major modernization of the NeMo ecosystem.

Two things to remember about NeMo 2.0:

1. NeMo 2.0 is the training & model building framework.

It focuses on:

  • Model architectures (LLMs, ASR, TTS, multimodal)
  • Training pipelines
  • NeMo Run + NeMo-based microservices
  • Distributed GPU/accelerated workflows

2. NeMo Guardrails and NeMo Curator are NOT part of the NeMo 2.0 training stack.

They live adjacent to NeMo 2.0, serving two different lifecycle phases.

Background: NeMo 1.x modules (ASR collections, VAD, etc.) used pickle because they relied heavily on Python multiprocessing and Python objects.

NeMo 2.0 is moving toward language- and framework-agnostic formats.

Instead of pickle, NeMo 2.0 favors:

  • Safetensors (for weights)
  • JSON / YAML (for metadata)
  • Parquet (for curated datasets)
  • NumPy / torch tensors loaded explicitly
  • Hugging Face-compatible formats

These formats are:

  • Safe
  • Portable across hardware and OS
  • Usable by non-Python systems
  • Compatible with cloud trust boundaries

NeMo Curator and NeMo Guardrails are designed to avoid pickle entirely.

Even though older NeMo components still used pickle internally:

  • NeMo Curator does not ingest pickle data
  • NeMo Guardrails never used pickle at all
  • NeMo 2.0 framework minimizes it or removes it

This aligns with modern security guidance for LLM infrastructure.
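The distinction that motivates this guidance can be demonstrated directly: loading a pickle executes code, while a format like JSON (or safetensors) only describes data. The sketch below is mine, not NeMo code, and deliberately uses only the standard library so the contrast is self-contained; the marker string stands in for what would be attacker-controlled code.

```python
# Why checkpoint loaders move away from pickle: unpickling runs arbitrary
# code, while JSON/safetensors-style formats are inert data.
import json
import pickle

class Exploit:
    # __reduce__ tells pickle how to "reconstruct" the object --
    # an attacker can make it call any function; here, a benign print.
    def __reduce__(self):
        return (print, ("code ran during pickle.loads!",))

payload = pickle.dumps(Exploit())
pickle.loads(payload)        # the side effect fires: loading == executing

# A JSON "checkpoint manifest", by contrast, is inert:
manifest = json.dumps({"format": "safetensors", "weights": "model.safetensors"})
meta = json.loads(manifest)  # only dict/list/str/number can come out
print(meta["format"])        # prints safetensors
```

Safetensors applies the same idea to weight tensors themselves: the file carries shapes, dtypes, and raw bytes, with no reconstruction hook for an attacker to hijack.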

Vulnerability details: CVE-2025-33245 NVIDIA NeMo Framework contains a vulnerability where malicious data could cause remote code execution. A successful exploit of this vulnerability might lead to code execution, escalation of privileges, information disclosure, and data tampering.

Official announcement: Please refer to the link for details –

https://nvidia.custhelp.com/app/answers/detail/a_id/5762

CVE-2026-20700: Improved state management in Apple products resolves a memory corruption issue. (17th Feb 2026)

Preface: Swift is memory-safe by default. Unlike C, your enum and String cannot “overflow” a buffer and crash the app unless you use the Unsafe prefix.

Background: Swift is memory-safe by default. Use enums to represent mutually exclusive states (e.g., loading, success, error) to eliminate “impossible” states. However, [.]onAppear { manager.fetchData() } runs every time the view appears; that is, every time SwiftUI reconstructs or re-displays this DataView, it triggers fetchData() again. This can lead to multiple overlapping async calls unless explicitly prevented. The enum-based state machine helps protect against impossible logical states, but it does not prevent multiple requests from firing; Swift’s memory safety does not stop logical repetition or resource exhaustion.

Ref: In Swift, “checking the bounds of a memory buffer” typically refers to ensuring you don’t access memory outside of an allocated range (like an Array or UnsafeBufferPointer).
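The enum-plus-guard pattern described above can be sketched in Python (used for the examples in this archive) rather than Swift; `DataManager`, `on_appear`, and `finish` are hypothetical names modeling the SwiftUI flow, not Apple API.

```python
# Mutually exclusive states make "loading AND error" unrepresentable,
# and a guard prevents onAppear-style re-entry from firing overlapping
# fetches -- the logical-repetition risk memory safety does not cover.
from enum import Enum, auto

class LoadState(Enum):
    IDLE = auto()
    LOADING = auto()
    SUCCESS = auto()
    ERROR = auto()

class DataManager:
    def __init__(self):
        self.state = LoadState.IDLE
        self.fetch_count = 0

    def on_appear(self):
        if self.state is LoadState.LOADING:
            return False            # re-entry guard: skip the duplicate call
        self.state = LoadState.LOADING
        self.fetch_count += 1       # ...async work would start here...
        return True

    def finish(self, ok=True):
        self.state = LoadState.SUCCESS if ok else LoadState.ERROR

m = DataManager()
assert m.on_appear() is True        # first appearance starts a fetch
assert m.on_appear() is False       # re-display while loading: guarded
m.finish()
assert m.fetch_count == 1
```

In SwiftUI the equivalent guard would live inside the manager the view observes, so reconstruction of the view cannot multiply in-flight requests.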

Vulnerability details: CVE-2026-20700 A memory corruption issue was addressed with improved state management. This issue is fixed in watchOS 26.3, tvOS 26.3, macOS Tahoe 26.3, visionOS 26.3, iOS 26.3 and iPadOS 26.3. An attacker with memory write capability may be able to execute arbitrary code. Apple is aware of a report that this issue may have been exploited in an extremely sophisticated attack against specific targeted individuals on versions of iOS before iOS 26. CVE-2025-14174 and CVE-2025-43529 were also issued in response to this report.

Official announcement: Please refer to the link for details – https://nvd.nist.gov/vuln/detail/CVE-2026-20700

https://support.apple.com/en-us/126353

CVE-2025-61969 Prequel: AMD uProf allows arbitrary file read/write operations (16 Feb 2026)

Preface: In short, the ioctl concept exists in both, but the implementation is different.

While Linux uses a standard ioctl system call, Windows provides a similar interface through its own set of functions. They are not directly compatible. 

  • Linux (ioctl): A universal Unix-like system call used to perform hardware-specific operations that fall outside standard read/write.
  • Windows (DeviceIoControl): Part of the Win32 API, this function sends control codes directly to a device driver. It is the architectural equivalent of ioctl on Windows.
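On the Linux side, Python exposes the same system call through `fcntl.ioctl`, which makes the concept easy to see in a few lines. This Linux-oriented sketch uses the real `FIONREAD` request code to ask a pipe’s driver how many bytes are queued, an operation that falls outside plain read/write.

```python
# Linux ioctl from Python: fcntl.ioctl() wraps the same system call a C
# program would use. FIONREAD asks the driver how many bytes are waiting.
import fcntl
import os
import struct
import termios

r, w = os.pipe()
os.write(w, b"hello")

buf = struct.pack("i", 0)                   # an int for the driver to fill in
res = fcntl.ioctl(r, termios.FIONREAD, buf)
pending = struct.unpack("i", res)[0]
print(pending)                              # 5 bytes queued in the pipe
os.close(r)
os.close(w)
```

A Windows driver reaches the analogous code path through `DeviceIoControl` with a vendor-defined control code, which is exactly the interface the AMDPowerProfiler.sys issue below concerns: the driver must validate who is sending each control code, not merely dispatch it.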

Background: AMD uProf (AMD MICRO-prof) is a software profiling analysis tool for x86 applications running on Windows, Linux® and FreeBSD operating systems and provides event information unique to the AMD “Zen”-based processors and AMD Instinct™ MI Series accelerators. AMD uProf enables the developer to better understand the limiters of application performance and evaluate improvements.

According to the latest AMD uProf official documentation, supported versions include:

Windows 10 (up to 22H2), Windows 11 (up to 25H2) and Windows Server 2019, 2022, and 2025

Key Components on Windows

After installation on Windows, you can use the following tools:

  • AMDuProf (GUI): A visual interface for performing CPU and power consumption analysis.
  • AMDuProfCLI: A command-line tool for automated or remote analysis.
  • AMDuProfPcm: A tool specifically designed for system-level analysis (such as IPC and memory bandwidth).
  • System Analysis: Monitors system-level performance metrics such as IPC (Instructions Per Clock), memory bandwidth, and cache usage.
  • Power Profiling: Tracks system thermal and power consumption characteristics in real time, displaying the frequency, temperature, and energy consumption of each component.
  • Microarchitecture Analysis: Detects microarchitectural issues in the source code and provides specific hardware event information for AMD “Zen” series processors.
  • GPU and Heterogeneous Analysis: Supports analysis of GPU activity, kernels, and scheduling for AMD Instinct MI series accelerators.

Vulnerability details: CVE-2025-61969 Incorrect permission assignment in AMD µProf performance analysis tool-suite may allow a local user-privileged attacker to achieve privilege escalation, potentially resulting in arbitrary code execution.

An external researcher reported a vulnerability in the AMD uProf performance analysis tool-suite, specifically within the AMDPowerProfiler.sys driver, that could allow arbitrary file read/write operations due to insufficient access control checks.

AMD determined that this issue occurs because the driver fails to properly validate user access when handling IOCTL requests, potentially allowing unprivileged users to escalate privileges and resulting in arbitrary code execution.

Official announcement: Please refer to the link for details –

https://www.amd.com/en/resources/product-security/bulletin/amd-sb-9022.html

CVE-2026-2242: Vulnerabilities in janet-lang may also affect partner devices. (11th Feb 2026)

Preface: Janet has a small footprint: it fits in environments where 2.5 MB of RAM is considered “plenty”. Because Janet is written in C and compiles to a small binary (roughly 200–300 KB), it is frequently used on ARM Cortex-based systems. While Janet does not run on the GPU (it is a CPU-bound language), it is often used as the control/orchestration layer on heterogeneous AI platforms.

Background: Janet can be used to manage the data pipeline, calling into C/C++ libraries that handle heavy GPU lifting via CUDA. If an application allows external scripts or users to submit code dynamically, it may use Janet’s built-in eval-string function. Is it then vulnerable to CVE-2026-2242?

My speculation: Using eval-string does expose your Jetson pipeline to CVE‑2026‑2242, because:

CVE‑2026‑2242 is triggered during compilation of Janet code, and eval-string compiles code dynamically. If a malicious user submits a specially-crafted Janet expression that enters the vulnerable path inside:

janetc_if  →  specials[.]c

then Janet may perform an out‑of‑bounds read, which can cause:

  • interpreter crash
  • denial of service
  • undefined behavior inside the Janet process
Even though the CVE requires “local execution,” allowing remote users to submit code and then calling eval-string makes that local execution possible.

Therefore, the Jetson pipeline becomes exploitable.
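If eval-string must stay in such a pipeline, the obvious mitigation is to gate submissions before they ever reach the interpreter. Below is a Python-flavored analogue of that gate; the allowlist regex is a deliberately crude illustration, and `run_janet` (left commented out) is a hypothetical stand-in for however a given pipeline hands code to the Janet runtime.

```python
# Sketch: reject untrusted snippets before any eval-string-style entry
# point. A real deployment would use a proper parser/sandbox, not a regex.
import re

ALLOWED = re.compile(r"^[\w\s\(\)\+\-\*/\.]+$")   # crude demo allowlist
MAX_LEN = 256

def is_safe_snippet(code: str) -> bool:
    return len(code) <= MAX_LEN and bool(ALLOWED.match(code))

def submit(code: str):
    if not is_safe_snippet(code):
        raise ValueError("snippet rejected before eval-string")
    # run_janet(code)   # hypothetical hand-off to the Janet interpreter
    return "queued"

assert submit("(+ 1 2)") == "queued"
try:
    submit('(os/shell "rm -rf /")')   # quotes fall outside the allowlist
except ValueError as err:
    print(err)                        # snippet rejected before eval-string
```

Input filtering narrows the attack surface but does not remove the underlying bug; applying the upstream patch (commit c43e0667) remains the actual fix.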

Vulnerability details: A vulnerability was determined in janet-lang janet up to 1.40.1. This impacts the function janetc_if of the file src/core/specials.c. Executing a manipulation can lead to out-of-bounds read. The attack needs to be launched locally. The exploit has been publicly disclosed and may be utilized. This patch is called c43e06672cd9dacf2122c99f362120a17c34b391. It is advisable to implement a patch to correct this issue.

Official announcement: Please refer to the link for more details –

https://www.tenable.com/cve/CVE-2026-2242

CVE-2026-25592: An Arbitrary File Write vulnerability has been identified in Microsoft’s Semantic Kernel .NET SDK (10th Feb 2026)

Preface: Microsoft has launched several products (Semantic Kernel, Microsoft.Extensions.AI, and Azure.OpenAI), which initially caused confusion for developers. Furthermore, the Semantic Kernel is currently being “upgraded” to the new Microsoft Agent Framework, leading some developers to question future support for the Semantic Kernel.

Python, with its rich libraries and large open-source community, remains the “universal language” in artificial intelligence research and data science. LangChain, as the primary alternative, is also based on Python. Python remains dominant in the field of artificial intelligence/machine learning.

Background: Microsoft developed Semantic Kernel as an open-source SDK to bridge conventional programming languages (C#, Python, Java) with advanced LLMs, enabling developers to build enterprise-grade, agentic AI applications. It simplifies orchestrating complex AI workflows, allows swapping models without rewriting code, and ensures secure, compliant integration of AI with existing systems.

Semantic Kernel uses plugins (formerly skills or functions) to extend its capabilities beyond its core prompt engineering functionality and integrate with external services, data sources, and APIs.

  • Semantic Kernel acts as an orchestrator, using the LLM to decide which plugins to use and when, effectively automating complex tasks that involve multiple steps and tools. The LLM determines the necessary sequence of actions to fulfill a user’s request.
  • Plugins allow the LLM to interact with real-world applications and data. For example, a plugin could retrieve real-time weather information, search a database, book a flight, or send an email.

The airline and travel industry is beginning to use Microsoft Semantic Kernel to build intelligent, AI-powered applications, particularly for automating customer service and booking processes. Developers are using Semantic Kernel to build dialogue agents that can understand complex booking instructions, such as “book the cheapest flight from Hong Kong to Tokyo”, and handle the booking process independently.

Vulnerability details: CVE-2026-25592 Semantic Kernel is an SDK used to build, orchestrate, and deploy AI agents and multi-agent systems. Prior to 1.70.0, an Arbitrary File Write vulnerability has been identified in Microsoft’s Semantic Kernel .NET SDK, specifically within the SessionsPythonPlugin. The problem has been fixed in Microsoft.SemanticKernel.Core version 1.70.0. As a mitigation, users can create a Function Invocation Filter which checks the arguments being passed to any calls to DownloadFileAsync or UploadFileAsync and ensures the provided localFilePath is allow listed.
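The advisory’s suggested mitigation is a .NET Function Invocation Filter that allowlists the localFilePath argument, but the core check translates to any language. A hedged Python sketch of that check follows; `ALLOWED_ROOT` and `check_local_file_path` are assumed example names, not Semantic Kernel API.

```python
# Path-allowlist idea from the advisory: resolve the requested path and
# confirm it stays inside an approved root, so "../" traversal and
# absolute paths cannot escape into arbitrary file writes.
from pathlib import Path

ALLOWED_ROOT = Path("/srv/sk-files").resolve()   # assumed example root

def check_local_file_path(local_file_path: str) -> Path:
    resolved = (ALLOWED_ROOT / local_file_path).resolve()
    if ALLOWED_ROOT not in resolved.parents and resolved != ALLOWED_ROOT:
        # traversal or an absolute path escaped the root -> reject
        raise PermissionError(f"{local_file_path!r} is outside the allowlist")
    return resolved

print(check_local_file_path("reports/out.csv"))   # resolves inside the root
try:
    check_local_file_path("../../etc/passwd")     # traversal attempt
except PermissionError as err:
    print(err)
```

In the .NET filter, the same comparison would run against the arguments of each DownloadFileAsync or UploadFileAsync invocation before the call proceeds; upgrading to Microsoft.SemanticKernel.Core 1.70.0 remains the primary fix.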

Official announcement: Please refer to the link for details – https://nvd.nist.gov/vuln/detail/CVE-2026-25592