Category Archives: Potential Risk of CVE

CVE-2025-0932: Arm fixes userspace vulnerability in Mali GPU driver (8th Aug 2025)

Preface: The Valhall family of Mali GPUs uses the same top-level architecture as the previous generation Bifrost GPUs. The Valhall family uses a unified shader core architecture.

The Arm 5th generation GPU architecture, including the Immortalis and Mali GPUs, represents a modern design for mobile and other client devices.

Background: ioctl (Input/Output Control) is the primary syscall used by userspace GPU drivers to communicate with the kernel-space driver. It allows sending custom commands and structured data to the driver.

Typical ioctl operations in Mali-style drivers include (names shown are representative, not the driver's exact identifiers):

  • MALI_IOCTL_ALLOC_MEM: Allocate GPU-accessible memory
  • MALI_IOCTL_FREE_MEM: Free previously allocated memory
  • MALI_IOCTL_SUBMIT_JOB: Submit a GPU job (e.g., shader execution)
  • MALI_IOCTL_WAIT_JOB: Wait for job completion
  • MALI_IOCTL_MAP_MEM: Map memory to userspace
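
The call pattern above can be sketched from the userspace side. This is a minimal illustration only: the struct layout, the `/dev/mali0` device node, and the `MALI_IOCTL_ALLOC_MEM` request code are assumptions for the sketch, not the real Mali (kbase) interface.

```python
import struct

# Hypothetical request layout for a memory-allocation ioctl: field names,
# sizes, and the command code are illustrative, not the actual kbase ABI.
ALLOC_REQUEST = struct.Struct("<QQI4x")  # size, flags, page_count, padding

def build_alloc_request(size: int, flags: int, pages: int) -> bytes:
    """Pack the userspace side of a hypothetical MALI_IOCTL_ALLOC_MEM call."""
    return ALLOC_REQUEST.pack(size, flags, pages)

# In a real client, the packed buffer would be handed to the kernel driver:
#
#   import fcntl, os
#   fd = os.open("/dev/mali0", os.O_RDWR)   # device node is illustrative
#   fcntl.ioctl(fd, MALI_IOCTL_ALLOC_MEM, build_alloc_request(4096, 0, 1))
```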

The path bifrost-drivers/driver/product/kernel/drivers/gpu/arm indicates that the code within this directory is part of the kernel-space drivers for Arm Mali Bifrost GPUs.

Vulnerability details: Use After Free vulnerability in Arm Ltd Bifrost GPU Userspace Driver, Arm Ltd Valhall GPU Userspace Driver, Arm Ltd Arm 5th Gen GPU Architecture Userspace Driver allows a non-privileged user process to perform valid GPU processing operations, including via WebGL or WebGPU, to gain access to already freed memory.

Scope of impact: This issue affects Bifrost GPU Userspace Driver: from r48p0 through r49p3, from r50p0 through r51p0; Valhall GPU Userspace Driver: from r48p0 through r49p3, from r50p0 through r54p0; Arm 5th Gen GPU Architecture Userspace Driver: from r48p0 through r49p3, from r50p0 through r54p0.
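
To show the bug class (not the actual driver code), here is a toy handle-based allocator in Python that recycles freed slots. The hazard described in the advisory arises when a stale handle is still honoured after its object has been freed and the underlying memory recycled:

```python
class SlotAllocator:
    """Toy slot allocator that recycles freed slots; a sketch of the
    use-after-free bug class only, not the Mali driver's logic."""
    def __init__(self):
        self.slots = []       # backing "memory"
        self.free_list = []   # indices of freed slots, reused first

    def alloc(self, data):
        if self.free_list:                 # recycle a freed slot
            idx = self.free_list.pop()
            self.slots[idx] = data
        else:
            idx = len(self.slots)
            self.slots.append(data)
        return idx

    def free(self, idx):
        self.free_list.append(idx)         # slot contents left in place

    def read(self, idx):
        # Missing liveness check: a freed (or recycled) slot is still readable.
        return self.slots[idx]

heap = SlotAllocator()
victim = heap.alloc("victim secret")
heap.free(victim)                          # victim frees its buffer...
attacker = heap.alloc("attacker data")     # ...the slot is recycled
# The stale 'victim' handle now observes the attacker's object:
assert heap.read(victim) == "attacker data"
```

The fix for this class of bug is to invalidate handles at free time and reject any later dereference of a dead handle.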

Official announcement: Please see the link for details –

https://nvd.nist.gov/vuln/detail/CVE-2025-0932

https://developer.arm.com/documentation/110626/latest

Ref: The attached code shows memory being correctly freed after use and is part of the remedy; the use-after-free code path itself is not shown.

AMD response to EDK2 SMM MCE Enablement Issue (7th Aug 2025)

Preface: While it’s technically possible to update UEFI firmware from within a Linux user space environment, it’s not a common or recommended practice. Most UEFI updates are designed to be installed through specific utilities provided by the motherboard manufacturer, often requiring a bootable medium or a dedicated Windows application.

Background: EDK II, also known as EDK2, is an open-source firmware development environment for the Unified Extensible Firmware Interface (UEFI) and Platform Initialization (PI) specifications. It’s a modern, feature-rich, and cross-platform environment developed by the TianoCore project. Think of it as the official development environment for UEFI applications and a core component of many platforms’ firmware.

TianoCore is an open-source community focused on developing and promoting the Unified Extensible Firmware Interface (UEFI). It provides a firmware development environment, primarily known as EDK II, which is used for building UEFI firmware, drivers, and applications. TianoCore is a reference implementation of UEFI and is widely adopted by the industry.

Technical details: A researcher reported a bug in the open-source EDK2 system management interrupt (SMI) entry code when an MCE occurs near the start of the SMI handler. An attacker who can inject a machine check exception (MCE) could cause execution to jump to an attacker-controlled interrupt handler, leading to arbitrary code execution.

Ref: On AMD EPYC processors, System Management Mode (SMM) works closely with the System Management Unit (SMU), a distinct block of logic on the processor die that handles much of the platform's system-management functionality.

The System Management Unit (SMU) contains a mailbox function to facilitate communication between the SMU and other system components, including the CPU and operating system. This mailbox acts as a communication channel for sending commands and data, and receiving responses, enabling the SMU to perform its tasks related to system management, power management, and hardware control.
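
The mailbox exchange described above can be modelled in a few lines. This is a simulation only: the command IDs, register sequence, and error code are assumptions for illustration, not AMD's actual SMU interface.

```python
class MailboxSim:
    """Minimal simulation of a command/argument/response mailbox.
    Command IDs and the error value are illustrative assumptions."""
    def __init__(self, handlers):
        self.handlers = handlers
        self.response = None

    def send(self, command_id, argument):
        # Sequence modelled: (1) write argument register, (2) write command
        # register (the "doorbell"), (3) firmware processes the command,
        # (4) the caller polls the response register.
        handler = self.handlers.get(command_id)
        self.response = handler(argument) if handler else 0xFF  # unknown cmd
        return self.response

smu = MailboxSim({0x01: lambda arg: arg + 1})   # 0x01: hypothetical "ping"
assert smu.send(0x01, 41) == 42
assert smu.send(0x7E, 0) == 0xFF                # unsupported command rejected
```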

Official announcement: Please refer to the following link for details – https://www.amd.com/en/resources/product-security/bulletin/amd-sb-7043.html

CVE-2025-23318 and CVE-2025-23319: About NVIDIA Triton Inference Server (6th Aug 2025)

Preface: Nvidia’s security advisories released on August 4, 2025 (e.g., CVE-2025-23318, CVE-2025-23319) specifically concern the Python backend, the Triton backend for Python. The goal of the Python backend is to let you serve models written in Python with Triton Inference Server without having to write any C++ code.

Background: NVIDIA Triton Inference Server is an open-source inference serving software that streamlines the deployment and execution of AI models from various deep learning and machine learning frameworks. It achieves this flexibility through a modular system of backends. 

Each backend within Triton is responsible for executing models from a specific framework. When an inference request arrives for a particular model, Triton automatically routes the request to the necessary backend for execution. 

Key backend frameworks supported by Triton include:

  • TensorRT: NVIDIA’s high-performance deep learning inference optimizer and runtime.
  • TensorFlow: A popular open-source machine learning framework.
  • PyTorch: Another widely used open-source machine learning library.
  • ONNX: An open standard for representing machine learning models.
  • OpenVINO: Intel’s toolkit for optimizing and deploying AI inference.
  • Python: A versatile backend that can execute models written directly in Python and also serves as a dependency for other backends.
  • RAPIDS FIL (Forest Inference Library): For efficient inference of tree models (e.g., XGBoost, LightGBM, Scikit-Learn).

This modular backend architecture allows Triton to provide a unified serving solution for a wide range of AI models, regardless of the framework they were trained in.
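
The routing described above can be sketched as a simple registry dispatch. The registry keys echo Triton backend names, but the dispatch logic itself is an illustrative simplification, not Triton's internals:

```python
# Minimal sketch of framework-based request routing. Backend names mirror
# Triton's, but the lambdas are placeholders for real execution engines.
backends = {
    "tensorrt_plan": lambda req: f"TensorRT ran {req}",
    "onnxruntime_onnx": lambda req: f"ONNX Runtime ran {req}",
    "python": lambda req: f"Python backend ran {req}",
}

def route(model_platform: str, request: str) -> str:
    """Look up the backend registered for a model's platform and run it."""
    backend = backends.get(model_platform)
    if backend is None:
        raise ValueError(f"no backend registered for {model_platform}")
    return backend(request)

assert route("python", "req-1") == "Python backend ran req-1"
```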

Vulnerability details:

CVE-2025-23318: NVIDIA Triton Inference Server for Windows and Linux contains a vulnerability in the Python backend, where an attacker could cause an out-of-bounds write. A successful exploit of this vulnerability might lead to code execution, denial of service, data tampering, and information disclosure.

CVE-2025-23319: NVIDIA Triton Inference Server for Windows and Linux contains a vulnerability in the Python backend, where an attacker could cause an out-of-bounds write by sending a request. A successful exploit of this vulnerability might lead to remote code execution, denial of service, data tampering, or information disclosure.
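
The defence against this bug class is an explicit bounds check before copying attacker-influenced data into a fixed-size buffer. The sketch below illustrates the check in Python terms; it is not the patched Triton code, and the function name is hypothetical:

```python
def write_tensor(buf: bytearray, offset: int, payload: bytes) -> None:
    """Copy payload into a fixed-size output buffer with an explicit bounds
    check -- the kind of check whose absence enables an out-of-bounds write."""
    if offset < 0 or offset + len(payload) > len(buf):
        raise ValueError("write would exceed buffer bounds")
    buf[offset:offset + len(payload)] = payload

out = bytearray(8)
write_tensor(out, 0, b"ok")          # in-bounds write succeeds
try:
    write_tensor(out, 6, b"overflow")  # 6 + 8 > 8: rejected, not written
except ValueError:
    pass
```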

Official announcement: Please see the link for details –

https://nvidia.custhelp.com/app/answers/detail/a_id/5687

CVE-2025-23310: The NVIDIA Triton Inference Server for Windows and Linux suffers from a stack buffer overflow due to specially crafted input. (5th Aug 2025)

Preface: The NVIDIA Triton Inference Server API supports both HTTP/REST and GRPC protocols. These protocols allow clients to communicate with the Triton server for various tasks such as model inferencing, checking server and model health, and managing model metadata and statistics.

Background: NVIDIA Triton™ Inference Server, part of the NVIDIA AI platform and available with NVIDIA AI Enterprise, is open-source software that standardizes AI model deployment and execution across every workload.

The Asynchronous Server Gateway Interface (ASGI) is a calling convention for web servers to forward requests to asynchronous-capable Python frameworks and applications. It is built as a successor to the Web Server Gateway Interface (WSGI).

NVIDIA Triton Inference Server integrates a built-in web server to expose its functionality and allow clients to interact with it. This web server is fundamental to how Triton operates and provides access to its inference capabilities on both Windows and Linux environments.
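
A client request against that built-in web server follows the KServe/Triton v2 REST protocol. The sketch below only builds the request path and JSON body; the model name and tensor name are assumptions for illustration:

```python
import json

def build_infer_request(model: str, values):
    """Build the path and JSON body for a v2 REST inference request.
    Tensor name INPUT0 and datatype FP32 are illustrative choices."""
    body = {
        "inputs": [{
            "name": "INPUT0",
            "shape": [1, len(values)],
            "datatype": "FP32",
            "data": values,
        }]
    }
    return f"/v2/models/{model}/infer", json.dumps(body)

path, payload = build_infer_request("simple", [1.0, 2.0, 3.0])
# The payload would be POSTed to http://<host>:8000<path> over HTTP, or sent
# via the equivalent protobuf message over gRPC.
```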

Vulnerability details: CVE-2025-23310 – NVIDIA Triton Inference Server for Windows and Linux contains a vulnerability where an attacker could cause stack buffer overflow by specially crafted inputs. A successful exploit of this vulnerability might lead to remote code execution, denial of service, information disclosure, and data tampering.

Official announcement: Please refer to the link for details –

https://nvidia.custhelp.com/app/answers/detail/a_id/5687

CVE-2025-54574: About Squid. Stay alert! (4 Aug 2025)

NVD Published Date: 08/01/2025

NVD Last Modified: 08/01/2025

Preface: While HTTP/1.0 is largely obsolete, HTTP/1.1 remains in widespread use, despite the newer HTTP/2 and HTTP/3 protocols. Though HTTP/1.1 has been updated in recent years, its core functionality is still foundational for much of the web.

Does processing Uniform Resource Names consume memory?

Yes, processing Uniform Resource Names (URNs) can consume memory. While URNs themselves are symbolic names and don’t directly represent the resource’s location or data, they need to be processed to resolve them, which often involves memory allocation for parsing, data storage, and potential redirection handling.

Background: Squid Proxy is a caching proxy, and that’s a key aspect of how it functions. It’s not just a proxy that forwards requests; it also stores copies of frequently accessed web content locally. This caching behavior significantly speeds up subsequent requests for the same content, making it faster and more efficient than a simple forwarding proxy.

A “Trivial-HTTP response,” often abbreviated as THTTP, refers to a convention for encoding resolution service requests and responses using the HTTP/1.0 or HTTP/1.1 protocols, as defined in RFC 2169.

Squid Proxy is primarily developed using C++. While it utilizes some C language components and libraries, the dominant language in its codebase is C++.

Ref: STCB, in the context of Squid cache, refers to the StoreEntry data structure, which is a key component of how Squid caches web content in memory. It’s a relatively small amount of metadata associated with each cached object, stored in memory to speed up access and retrieval.

Vulnerability details: Squid is a caching proxy for the Web. In versions 6.3 and below, Squid is vulnerable to a heap buffer overflow and possible remote code execution attack when processing URN due to incorrect buffer management. This has been fixed in version 6.4. To work around this issue, disable URN access permissions.
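
For deployments that cannot upgrade immediately, the workaround (disabling URN access) can be expressed in squid.conf. This is a sketch based on Squid's standard `acl`/`http_access` syntax; verify it against the documentation for your Squid version before deploying:

```text
# Deny all urn: requests until the upgrade to 6.4 is possible
acl URN proto URN
http_access deny URN
```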

Official announcement: Please see the link for details –

https://nvd.nist.gov/vuln/detail/CVE-2025-54574

CVE-2025-54576: Design weakness in OAuth2-Proxy 7.10.0 and below (1 Aug 2025)

Preface: Regular expressions are efficient in the sense that one line of pattern can replace hundreds of lines of hand-written code. But they are normally slower (even pre-compiled) than careful hand-written code, simply due to the overhead. Generally, the simpler the task, the less regular expressions pay off; they are better suited to complex matching operations.

Background: OAuth2 Proxy is used to add authentication to applications that don’t natively support it, acting as a reverse proxy that handles authentication using OAuth2 providers like Google, GitHub, or Okta. It simplifies the process of adding authentication to existing applications by separating the authentication logic from the application code. This allows developers to focus on building their core application logic without needing to implement complex authentication workflows.

Vulnerability details: In versions 7.10.0 and below, oauth2-proxy deployments are vulnerable when using the skip_auth_routes configuration option with regex patterns. Attackers can bypass authentication by crafting URLs with query parameters that satisfy configured regex patterns, allowing unauthorized access to protected resources. The issue stems from skip_auth_routes matching against the full request URI. Deployments using skip_auth_routes with regex patterns containing wildcards or broad matching patterns are most at risk.

Resolution: This issue is fixed in version 7.11.0.

Workarounds include: auditing all skip_auth_routes configurations for overly permissive patterns, replacing wildcard patterns with exact path matches where possible, ensuring regex patterns are properly anchored (starting with ^ and ending with $), or implementing custom validation that strips query parameters before regex matching.
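
The bypass and the recommended hardening can both be shown in a short Python sketch. oauth2-proxy itself is written in Go, so this only models the matching logic; the route `/healthz` is a hypothetical skip_auth_routes entry:

```python
import re

# Hypothetical skip_auth_routes pattern, deliberately unanchored to show the risk.
SKIP_PATTERN = re.compile(r"/healthz")

def skipped_vulnerable(request_uri: str) -> bool:
    # Pre-7.11.0 behavior (sketch): match against the full URI, query included.
    return SKIP_PATTERN.search(request_uri) is not None

def skipped_fixed(request_uri: str) -> bool:
    # Hardened behavior: strip the query string, then require a full match.
    path = request_uri.split("?", 1)[0]
    return re.fullmatch(r"/healthz", path) is not None

assert skipped_vulnerable("/healthz")
assert skipped_vulnerable("/admin/secrets?probe=/healthz")   # auth bypassed!
assert skipped_fixed("/healthz")
assert not skipped_fixed("/admin/secrets?probe=/healthz")    # bypass blocked
```

Anchoring the pattern (`^...$`) and matching against the path alone removes the attacker's ability to satisfy the regex from the query string.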

Official announcement: Please see the link for details – https://nvd.nist.gov/vuln/detail/CVE-2025-54576

CVE-2025-43209: Processing maliciously crafted web content may lead to an unexpected Safari crash (31-07-2025)

Preface: In essence, built-in browsers are not just about browsing; they are about maintaining control over the core functionality and user experience of the operating system.

Background: Safari and Edge, while built-in, utilize rendering engines derived from the KHTML project, specifically WebKit and Blink, respectively. WebKit is used in Safari, and Blink, a fork of WebKit, powers the Chromium-based Edge. These engines are not just for browsing; they handle the visual rendering of web content within the browser.

In Safari and Edge, the rendering engines (WebKit for Safari and Blink for the Chromium-based Edge) initially interact with the networking component to fetch the necessary resources for a webpage. This workflow prioritizes efficient data retrieval, enabling the browser to display content to the user as quickly as possible.

Safari’s rendering engine, WebKit, is developed and maintained by Apple. WebKit is an open-source project that was originally forked from KDE’s KHTML and KJS engines. Safari is a web browser developed by Apple and is the default browser on macOS, iOS, iPadOS, and visionOS.

Vulnerability details: An out-of-bounds access issue was addressed with improved bounds checking. This issue is fixed in macOS Sequoia 15.6, iPadOS 17.7.9, iOS 18.6 and iPadOS 18.6, tvOS 18.6, macOS Sonoma 14.7.7, watchOS 11.6, visionOS 2.6, macOS Ventura 13.7.7. Processing maliciously crafted web content may lead to an unexpected Safari crash.

Ref: Out-of-Bounds Read (e.g., CVE-2025-43209)

  • Reads memory outside the allocated buffer.
  • Can leak pointers (used to bypass ASLR) or object metadata (used for type confusion).
  • Often used as the first stage in a multi-step exploit.

Official announcement: Please refer to the link for details – https://nvd.nist.gov/vuln/detail/CVE-2025-43209

CVE-2025-54419: Design weakness in Node-SAML version 5[.]0[.]1 (30th July 2025)

Preface: SSO isn’t completely secure; in practice, its security depends on the design of the entire system. This month, a YouTuber known for his camera skills posted a video about his experience of losing all the miles he had redeemed in February 2025. He contacted airline customer service but received no reasonable response. The airline strictly adhered to its SSO certification regulations. The truth only came to light this month (July 2025).

Background: node-saml is a specific library for implementing SAML 2.0 authentication in Node.js applications. It is designed for Node.js, meaning its API and integration patterns are tailored to the JavaScript ecosystem. Other SAML libraries exist for different programming languages (e.g., Java, Python, .NET), each with its own conventions and dependencies.

A SAML response or assertion signed with the Identity Provider’s (IdP) private key is considered a validly signed document. This digital signature ensures the integrity and authenticity of the SAML message, confirming it hasn’t been tampered with and originates from a trusted IdP.

SAML relies on digital signatures to ensure the integrity and authenticity of messages exchanged between the Identity Provider (IdP) and the Service Provider (SP). The IdP digitally signs SAML responses and assertions using its private key. The SP then uses the corresponding public key (obtained from the IdP’s signing certificate) to verify the signature, ensuring the message hasn’t been tampered with and originates from a trusted IdP.

Vulnerability details: A SAML library not dependent on any frameworks that runs in Node. In version 5.0.1, Node-SAML loads the assertion from the (unsigned) original response document. This is different than the parts that are verified when checking signature. This allows an attacker to modify authentication details within a valid SAML assertion. For example, in one attack it is possible to remove any character from the SAML assertion username. To conduct the attack an attacker would need a validly signed document from the identity provider (IdP). This is fixed in version 5.1.0.
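
The core flaw is a scope mismatch: the signature is checked over one part of the document while the assertion is loaded from the unsigned wrapper. The Python sketch below models this with HMAC standing in for the XML digital signature; the document structure and field names are assumptions for illustration, not node-saml's actual code:

```python
import hashlib
import hmac

IDP_KEY = b"idp-private-key"   # stand-in for the IdP's signing key

def sign(assertion: bytes) -> bytes:
    """HMAC stands in for the IdP's XML-DSig signature in this sketch."""
    return hmac.new(IDP_KEY, assertion, hashlib.sha256).digest()

# The IdP signs the canonical assertion...
signed_assertion = b"<Assertion>user=alice</Assertion>"
signature = sign(signed_assertion)

def verify_vulnerable(document: dict) -> bytes:
    # ...but the vulnerable verifier checks the signature over the signed
    # part while *loading* the assertion from the unsigned wrapper.
    if not hmac.compare_digest(sign(document["signed_part"]),
                               document["signature"]):
        raise ValueError("bad signature")
    return document["raw_assertion"]      # taken from unsigned content!

tampered = {
    "signed_part": signed_assertion,      # untouched, so the check passes
    "signature": signature,
    "raw_assertion": b"<Assertion>user=alic</Assertion>",  # character removed
}
assert verify_vulnerable(tampered) == b"<Assertion>user=alic</Assertion>"
```

The fix in 5.1.0 corresponds to always extracting the assertion from the same signed subtree whose signature was verified.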

Official announcement: Please refer to the link for details – https://www.tenable.com/cve/CVE-2025-54419

CVE-2025-8183: About µD3TN, a Bundle Protocol implementation.

A spaceman came travelling: cyber security in space (29-07-2025)

NVD Published Date: 07/25/2025

NVD Last Modified: 07/25/2025

Preface: Essentially, any industry or application that requires communication in environments with unreliable or intermittent network conditions can benefit from BPv7’s capabilities. µD3TN has been successfully tested in Low Earth Orbit (LEO) on the OPS-SAT satellite, demonstrating its ability to handle the unique challenges of space communication, such as high latency and intermittent connectivity.

Background: The uD3TN project, a software implementation of the Delay-/Disruption-Tolerant Networking (DTN) Bundle Protocol, incorporates an allocator that functions similarly to the C standard library’s malloc dynamic memory allocator.

This allocator within uD3TN is responsible for managing memory allocation and deallocation for various components and data structures used within the DTN protocol stack. This includes, for example, the allocation of memory for bundles, which are the fundamental data units in DTN, as well as for internal structures and buffers required for bundle processing, forwarding, and storage.

The design of this allocator aims to provide efficient memory management within the constraints and requirements of a DTN implementation, potentially considering factors such as resource limitations in embedded systems or the need for robust handling of intermittent connectivity.

Vulnerability details: NULL Pointer Dereference in µD3TN via non-singleton destination Endpoint Identifier allows remote attacker to reliably cause DoS.
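
A NULL pointer dereference of this kind typically means a parse step can return "no result" for an unexpected input and a later step dereferences it without checking. The Python sketch below models that pattern; the EID format handling is a deliberate simplification of what the Bundle Protocol specifies, and the function names are hypothetical:

```python
def parse_singleton_destination(eid: str):
    """Return the node part of a 'dtn://node/' style singleton EID,
    or None for anything else (e.g., a non-singleton endpoint)."""
    if eid.startswith("dtn://") and eid.endswith("/"):
        return eid[len("dtn://"):-1]
    return None

def deliver_vulnerable(eid: str) -> str:
    node = parse_singleton_destination(eid)
    return node.upper()             # crashes (like a NULL deref) when None

def deliver_fixed(eid: str) -> str:
    node = parse_singleton_destination(eid)
    if node is None:                # the missing check: reject, don't deref
        return "rejected"
    return node.upper()
```

In C, the unchecked path dereferences a NULL pointer and crashes the daemon, which is exactly the remotely triggerable DoS the advisory describes.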

Official announcement: Please refer to the link for details – https://nvd.nist.gov/vuln/detail/CVE-2025-8183

Security Focus: CVE-2025-23284 NVIDIA vGPU software contains a vulnerability (25-07-2025)

Preface: Memory Allocation Flow:

  1. User-space request (e.g., CUDA malloc or OpenGL buffer allocation).
  2. Driver calls memmgrCreateHeap_IMPL() to create a memory heap.
  3. Heap uses pmaAllocatePages() to get physical memory.
  4. Virtual address space is mapped using UVM or MMU walker.
  5. Memory is returned to user-space or GPU context.
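
The five steps above can be walked through in a toy sketch. The function names echo the list (memmgrCreateHeap_IMPL, pmaAllocatePages), but the logic, page size, and base address are illustrative assumptions, not NVIDIA driver code:

```python
def memmgr_create_heap(size_pages: int) -> dict:
    """Step 2: create a heap with a fixed page capacity (toy model)."""
    return {"capacity": size_pages, "pages": []}

def pma_allocate_pages(heap: dict, count: int) -> list:
    """Step 3: hand out physical page indices, refusing over-allocation."""
    if len(heap["pages"]) + count > heap["capacity"]:
        raise MemoryError("heap exhausted")
    new = list(range(len(heap["pages"]), len(heap["pages"]) + count))
    heap["pages"].extend(new)
    return new

def gpu_malloc(heap: dict, count: int) -> dict:
    pages = pma_allocate_pages(heap, count)
    # Step 4: map physical pages into a virtual range (4 KiB pages assumed);
    # step 5: return the mapping to the caller.
    return {"va": [0x7000_0000 + p * 4096 for p in pages]}

heap = memmgr_create_heap(16)      # step 1: a user-space request arrives
alloc = gpu_malloc(heap, 2)
```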

Background:

An OS-agnostic binary is a compiled program designed to run on multiple operating systems without requiring separate builds for each. This means the binary file can be executed on different OS platforms without modification, achieving a level of portability that’s not common with traditional compiled software.

The core loadable module within the NVIDIA vGPU software package is the NVIDIA kernel driver, specifically named nvidia[.]ko. This module facilitates communication between the guest virtual machine (VM) and the physical NVIDIA GPU. It’s split into two main components: an OS-agnostic binary and a kernel interface layer. The OS-agnostic component, for example, nv-kernel[.]o_binary for the nvidia[.]ko module, is provided as a pre-built binary to save time during installation. The kernel interface layer is specific to the Linux kernel version and configuration.

Vulnerability details:

CVE-2025-23285: NVIDIA vGPU software contains a vulnerability in the Virtual GPU Manager, where a malicious guest could cause a stack buffer overflow. A successful exploit of this vulnerability might lead to code execution, denial of service, information disclosure, or data tampering.

CVE-2025-23283: NVIDIA vGPU software for Linux-style hypervisors contains a vulnerability in the Virtual GPU Manager, where a malicious guest could cause a stack buffer overflow. A successful exploit of this vulnerability might lead to code execution, denial of service, escalation of privileges, information disclosure, or data tampering.

Official announcement: Please see the URL for details –

https://nvidia.custhelp.com/app/answers/detail/a_id/5670