Category Archives: Potential Risk of CVE

CVE-2025-22413: ANDROID (KVM, arm64) – Don’t run a protected vCPU if it isn’t runnable! (5 March 2025)

Preface: The protected Kernel-based Virtual Machine (pKVM) is an advanced virtualization technology built on top of the Linux Kernel-based Virtual Machine (KVM). It is designed to enhance security and isolation for virtual machines (VMs) running on Android devices.

Key points about pKVM:

Enhanced Security: pKVM restricts access to the payloads running in guest VMs marked as ‘protected’ at the time of creation. This ensures that even if the host Android system is compromised, the guest VMs remain secure.

Isolation: It provides strong confidentiality and integrity guarantees by isolating memory and devices into individual protected VMs (pVMs).

Compatibility: pKVM is compatible with existing operating systems and workloads that rely on KVM-based virtual machines.

Background: In the context of pKVM, a vCPU (virtual Central Processing Unit) represents a virtualized CPU core assigned to a virtual machine (VM). Each vCPU presented to a VM’s operating system is scheduled by the host onto the physical CPU cores.

In pKVM, vCPUs are used to manage and allocate processing power to protected virtual machines (pVMs), ensuring that each VM has the necessary resources to operate securely and efficiently.

Vulnerability details: Don’t run a protected VCPU in pKVM if it isn’t in a runnable PSCI state. For protected VMs, the PSCI state is the reference state for whether they are runnable or not.
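The fix can be pictured with a minimal sketch. This is illustrative C only, not the actual pKVM code; the enum values and function names are hypothetical stand-ins for the idea that a protected vCPU’s PSCI state is checked before the guest is entered.

```c
/* Hedged sketch, not real pKVM code: a protected vCPU must be in a
 * runnable PSCI state before it is allowed to run. */
#include <assert.h>
#include <stdbool.h>

/* Hypothetical PSCI power states, for illustration only. */
enum psci_state { PSCI_ON, PSCI_OFF, PSCI_ON_PENDING };

struct pvm_vcpu {
    enum psci_state psci_state;
};

/* The PSCI state is the reference for whether the vCPU is runnable. */
static bool pvm_vcpu_is_runnable(const struct pvm_vcpu *vcpu)
{
    return vcpu->psci_state == PSCI_ON;
}

static int pvm_run_vcpu(struct pvm_vcpu *vcpu)
{
    if (!pvm_vcpu_is_runnable(vcpu))
        return -1;  /* refuse to enter the guest */
    return 0;       /* guest entry would happen here */
}
```

The point of the guard is that the hypervisor, not the (possibly compromised) host, decides runnability from the protected VM’s own PSCI state.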

Official announcement: Please refer to the link for details – https://android.googlesource.com/kernel/common/+/1a3366f0d3d9b94a8c025d9863edc3b427435c4c

CVE-2025-0078: Ensuring that the identity of the requesting service is included and verified during inter-process communication (4th Mar 2025)

Preface: The Gospel of Matthew 24:37

As it was in the days of Noah, so it will be at the coming of the Son of Man. For in the days before the flood, people were eating and drinking, etc.

Background: In Android, the ServiceManager is a key component in the Binder IPC (Inter-Process Communication) mechanism. It manages system services and provides a way for clients to obtain references to these services.

Here’s a brief overview of how the ServiceManager operates:

  1. Initialization: The ServiceManager is started by the init process during the system boot. It is defined in the init.rc script, which specifies the service and its executable path.
  2. Service Registration: When a service wants to register with the ServiceManager, it calls the addService method. This method takes the service name and a reference to the service’s Binder interface.
  3. Service Lookup: Clients can query the ServiceManager to get a reference to a registered service using the getService method. This method returns the Binder interface of the requested service.
  4. Security and Permissions: Starting from Android 8.1, SELinux policies have become stricter. Services must be defined in the plat_service_contexts file to be allowed to register with the ServiceManager. This ensures that only authorized services can be registered and accessed.
  5. Communication: Once a service is registered, clients can communicate with it through Binder IPC. The ServiceManager acts as a mediator, ensuring that the communication is secure and efficient.

Vulnerability details: A local privilege escalation caused by inter-process communication that did not include and verify the identity (SID) of the requesting service.

Bug fixes: The setRequestingSid(true) method in the ServiceManager is used to enable the inclusion of the Security Identifier (SID) in service requests. This is part of the security framework in Android, ensuring that the identity of the requesting service is included and verified during inter-process communication (IPC).
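The idea behind the fix can be sketched in plain C. This is not the real ServiceManager code (which lives in frameworks/native); the struct and function names are hypothetical, and it only models the principle that the IPC layer attaches the caller’s SID so the receiving service can verify who is asking.

```c
/* Hedged sketch of SID-based caller verification during IPC.
 * Not Android code; illustrative only. */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

struct ipc_request {
    const char *service_name;
    const char *caller_sid;  /* filled in by the IPC layer, not the caller */
};

/* Honour a request only if the verified caller SID matches the policy. */
static bool handle_request(const struct ipc_request *req,
                           const char *allowed_sid)
{
    if (req->caller_sid == NULL)  /* SID missing: reject the request */
        return false;
    return strcmp(req->caller_sid, allowed_sid) == 0;
}
```

The key design point mirrors setRequestingSid(true): the identity travels with the request and is populated by the trusted transport, so a client cannot simply claim an identity it does not hold.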

Official announcement: Please refer to the vendor announcement for details – https://android.googlesource.com/platform/frameworks/native/+/c32d4defe0f4e5cad86437d6672de7a76caf1a79

CVE-2020-24658: A years-old vulnerability is still hiding in embedded systems (3rd Mar 2025)

Preface: Many programmers continue to use Arm Compiler 5 for several reasons:

Developers who have been using Arm Compiler 5 for years are familiar with its quirks and features, making it easier for them to continue using it rather than learning a new toolchain.

Furthermore, Arm Compiler 5 supports older ARM architectures that may not be fully supported by newer compilers.

Background: When compiling Arm code with stack protection, the --protect_stack option is used to safeguard against stack buffer overflows and potential malicious tampering. Here are the conditions under which a function is considered vulnerable and thus protected:

  1. Arm Compiler 5:
    A function is considered vulnerable if it contains a char or wchar_t array of any size.
  2. Arm Compiler 6:
    With -fstack-protector, a function is considered vulnerable if it contains:
    - A character array larger than 8 bytes.
    - An 8-bit integer array larger than 8 bytes.
    - A call to alloca() with either a variable size or a constant size bigger than 8 bytes.
    With -fstack-protector-strong, a function is considered vulnerable if it contains:
    - An array of any size and type.
    - A call to alloca().
    - A local variable that has its address taken.

Using these options helps improve the overall security and integrity of your code by preventing stack buffer overflows.
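For illustration, here is the kind of function those conditions describe. It is a hedged example, not code from the advisory: because it declares a local character array, Arm Compiler 5 with --protect_stack (and Arm Compiler 6, GCC, or Clang with -fstack-protector-strong) would place a canary between the buffer and the saved return address.

```c
/* Hedged example of a "vulnerable" (and therefore protected) function:
 * the local char array triggers stack-protector instrumentation. */
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Bounded copy: safe by construction, yet still canary-protected
 * because of the local character array. */
static size_t copy_name(char *dst, size_t dst_len, const char *src)
{
    char buf[16];                           /* triggers stack protection */
    snprintf(buf, sizeof(buf), "%s", src);  /* truncating copy, no overflow */
    snprintf(dst, dst_len, "%s", buf);
    return strlen(dst);
}
```

If the canary is overwritten before the function returns, the instrumented epilogue aborts the program instead of jumping to an attacker-controlled address; the CVE describes circumstances in which this check could be rendered ineffective.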

Vulnerability details: In certain circumstances the stack protection feature can be rendered ineffective, leaving the protected function vulnerable to stack-based buffer overflows.

An undetected stack overflow can lead to a function return address being overwritten, potentially causing a crash or hang or allowing an attacker to gain control over program execution.

Official announcement: Please refer to the vendor announcement for detail – https://developer.arm.com/documentation/110262/1-1/?lang=en

CVE-2024-36353: CROSS-PROCESS GPU MEMORY DISCLOSURE (27-02-2025)

Preface: Regarding its use in HPC clusters, the Radeon PRO V710 is indeed suitable. It is supported by AMD’s ROCm platform, which is optimized for HPC and AI workloads. Additionally, it is used in Azure’s NVads V710 v5-series virtual machines, which are designed for GPU-accelerated applications, including HPC.

Background: The global memory of the AMD Radeon™ PRO V710 is the 28 GB of GDDR6 memory. This memory is connected via a 224-bit memory interface and operates at an effective speed of 18 Gbps. The memory is used for storing data that the GPU processes, such as textures, frame buffers, and other computational data.

The NVIDIA Container Toolkit is specifically designed to work with NVIDIA GPUs and their CUDA framework. It is not compatible with AMD GPUs. For AMD GPUs, you should use the ROCm (Radeon Open Compute) platform, which provides similar functionality for containerized environments.

OpenCL (Open Computing Language) in AMD ROCm (Radeon Open Compute) serves as a framework for writing programs that execute across heterogeneous platforms, including CPUs, GPUs, and other processors. Specifically, in the context of AMD ROCm, OpenCL allows developers to harness the computational power of AMD GPUs for high-performance, data-parallel computing tasks.

Vulnerability details: Insufficient clearing of GPU global memory could allow a malicious process running on the same GPU to read left over memory values potentially leading to loss of confidentiality.
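The general mitigation class here is scrubbing shared memory before it changes hands. The sketch below is generic C, not AMD driver code; the function name and pool model are hypothetical, and it only illustrates the pattern of zeroing a buffer before another process can see it.

```c
/* Hedged sketch of the mitigation pattern: clear residual data from a
 * shared buffer before releasing it for reuse by another process. */
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Zero a buffer before returning it to a shared allocation pool. */
static void scrub_and_release(unsigned char *buf, size_t len)
{
    memset(buf, 0, len);  /* no left-over values survive for the next user */
    /* ...buffer would be handed back to the pool here... */
}
```

In a real driver the clearing would be done on the GPU side (global memory), but the invariant is the same: memory must not carry a previous client’s data when it is reallocated.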

Official announcement: Please refer to the link for details https://www.amd.com/en/resources/product-security/bulletin/amd-sb-6019.html

CVE-2024-0148: NVIDIA Jetson Linux and IGX OS image contains a vulnerability in the UEFI firmware RCM boot mode (25-02-2025)

Preface: NVIDIA IGX Orin software is used by a variety of organizations, particularly those in industrial and medical environments. This platform is designed to support AI applications at the edge, providing high performance, advanced functional safety, and security.

Some specific use cases include:

  • Industrial Automation: Companies use IGX Orin to enhance manufacturing processes with AI-driven automation and predictive maintenance.
  • Healthcare: Medical institutions leverage IGX Orin for AI-powered diagnostics, medical imaging, and patient monitoring.
  • Robotics: Robotics companies utilize IGX Orin for developing intelligent robots that can operate safely alongside humans.

The platform’s versatility and robust support make it suitable for any organization looking to deploy AI solutions in demanding environments.

Background: The NVIDIA IGX Orin Developer Kit runs the Holopack 2.0 Developer Preview software. Holopack is a comprehensive solution for end-to-end GPU accelerated AI application development and testing. Holopack supports two GPU modes:

iGPU – Holopack deploys drivers and libraries to support the NVIDIA Ampere integrated GPU on NVIDIA IGX Orin modules.

dGPU – Holopack deploys drivers and libraries to support an optional NVIDIA RTX A6000 discrete GPU connected to the PCIe slot.

Its high-performance, low-power computing for deep learning and computer vision makes Jetson an ideal platform for compute-intensive projects. The Jetson platform includes a variety of Jetson modules with NVIDIA JetPack™ SDK.

Vulnerability details: NVIDIA Jetson Linux and IGX OS image contains a vulnerability in the UEFI firmware RCM boot mode, where an unprivileged attacker with physical access to the device could load untrusted code. A successful exploit might lead to code execution, escalation of privileges, data tampering, denial of service, and information disclosure. The scope of the impacts can extend to other components.

Remark: The UEFI supply chain allows many shared libraries to be integrated in various ways, including compiled from source, licensed for modification and reuse, and finally as a dynamically or statically linked executable.

Official announcement: Please refer to the vendor announcement for details – https://nvidia.custhelp.com/app/answers/detail/a_id/5617

CVE-2024-46975: GPU DDK – rgxfw_write_robustness_buffer allows arbitrary catreg set mapping (23rd Feb 2025)

Preface: A Memory Management Unit (MMU) is the hardware system which performs both virtual memory mapping and checks the current privilege to keep user processes separated from the operating system — and each other. In addition it helps to prevent caching of ‘volatile’ memory regions (such as areas containing I/O peripherals).

Background: Generally speaking, GPU firmware and driver functionality do utilize the L2 cache. The L2 cache in a GPU is a larger, shared cache that helps improve memory access speeds and reduce latency for various operations. It plays a crucial role in optimizing the performance of GPU-accelerated tasks by storing frequently accessed data closer to the GPU cores.

The L2 cache is particularly important for managing memory access across different Streaming Multiprocessors (SMs) within the GPU. By efficiently handling memory requests and reducing the need for crossbar communication, the L2 cache helps minimize latency and improve overall task performance.

Vulnerability details: Kernel software installed and running inside a Guest VM may exploit memory shared with the GPU Firmware to write data into another Guest’s virtualised GPU memory.
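The defensive pattern relevant here can be shown with a small sketch. This is illustrative C, not the DDK firmware code; the table name and size are hypothetical. It shows the two habits that close this class of bug: copy guest-shared values once (so the guest cannot change them between check and use) and bounds-check them before using them as an index.

```c
/* Hedged sketch: validate an index read from guest-shared memory
 * before using it to select a register-set mapping. */
#include <assert.h>
#include <stdint.h>

#define NUM_CATREG_SETS 8u  /* hypothetical table size */

static uint32_t catreg_table[NUM_CATREG_SETS];

/* Returns 0 on success, -1 if the guest-supplied index is out of range. */
static int set_catreg(volatile const uint32_t *shared_idx, uint32_t value)
{
    uint32_t idx = *shared_idx;   /* read once: no check-then-use window */
    if (idx >= NUM_CATREG_SETS)   /* reject arbitrary mappings */
        return -1;
    catreg_table[idx] = value;
    return 0;
}
```

Without the copy-once and range check, a guest could steer firmware writes into regions backing another guest’s virtualised GPU memory, which is the failure mode the CVE describes.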

Official announcement: Please refer to the vendor announcement for details – https://nvd.nist.gov/vuln/detail/CVE-2024-46975

CVE‑2024‑53870, CVE‑2024‑53871, CVE‑2024‑53872, CVE‑2024‑53873, CVE‑2024‑53874, CVE‑2024‑53875, CVE‑2024‑53876, CVE‑2024‑53877, CVE‑2024‑53878 and CVE‑2024‑53879 (21-02-2025)

Released on February 18, 2025

Preface: In NVIDIA CUDA, cuobjdump and nvdisasm are two binary utilities used for examining and disassembling CUDA binaries (cubin files).

cuobjdump

  • Purpose: It can disassemble CUDA binaries and extract PTX (Parallel Thread Execution) code from host binaries, executables, object files, static libraries, and external fatbinary files.
  • Usage: cuobjdump is versatile as it accepts both cubin files and host binaries.
  • Features: It provides basic disassembly and extraction capabilities but lacks advanced display options and control flow analysis.

nvdisasm

  • Purpose: It is specifically designed to disassemble cubin files.
  • Usage: Unlike cuobjdump, nvdisasm only accepts cubin files.
  • Features: It offers richer output options, including advanced display options and control flow analysis.

These tools are essential for developers who need to inspect and debug the compiled CUDA code.

Background: Parallel processing is a method in computing of running two or more processors (CPUs) to handle separate parts of an overall task. Breaking up different parts of a task among multiple processors helps reduce the amount of time it takes to run a program. GPUs render images more quickly than CPUs because of their parallel processing architecture, which allows them to perform multiple calculations across streams of data simultaneously. The CPU is the brain of the operation, responsible for giving instructions to the rest of the system, including the GPU(s).

NVIDIA CUDA provides a simple C/C++ based interface. The CUDA compiler leverages parallelism built into the CUDA programming model as it compiles your program into code.
CUDA is a parallel computing platform and application programming interface model created by NVIDIA for the development of software that runs on parallel processors. It serves as an alternative to running simulations on traditional CPUs.

Vulnerability details:

The following two design flaws are associated with these CVEs:

CVE‑2024‑53870, CVE‑2024‑53871, CVE‑2024‑53872, CVE‑2024‑53873, CVE‑2024‑53874, CVE‑2024‑53875, CVE‑2024‑53876, CVE‑2024‑53877, CVE‑2024‑53878 and CVE‑2024‑53879

NVIDIA CUDA toolkit for Linux and Windows contains a vulnerability in the cuobjdump binary, where a user could cause a crash by passing a malformed ELF file to cuobjdump. A successful exploit of this vulnerability might lead to a partial denial of service.

NVIDIA CUDA toolkit for all platforms contains a vulnerability in the nvdisasm binary, where a user could cause an out-of-bounds read by passing a malformed ELF file to nvdisasm. A successful exploit of this vulnerability might lead to a partial denial of service.
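The hardening these fixes imply can be sketched generically. This is not the cuobjdump or nvdisasm source; the struct is a hypothetical stand-in for the relevant ELF64 header fields. The idea is that every offset and length taken from an untrusted file must be validated against the file size, with overflow-safe arithmetic, before any section header is read.

```c
/* Hedged sketch: bounds-check the ELF section header table before
 * walking it, so a malformed file cannot cause an out-of-bounds read. */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Minimal stand-ins for the relevant ELF64 header fields. */
struct elf_view {
    size_t   file_size;
    uint64_t e_shoff;      /* section header table offset */
    uint16_t e_shnum;      /* number of section headers   */
    uint16_t e_shentsize;  /* size of one section header  */
};

static bool section_table_in_bounds(const struct elf_view *v)
{
    /* 16-bit * 16-bit cannot overflow a 64-bit product. */
    uint64_t table_len = (uint64_t)v->e_shnum * v->e_shentsize;
    if (v->e_shoff > v->file_size)  /* table starts past end of file */
        return false;
    return table_len <= v->file_size - v->e_shoff;  /* table fits in file */
}
```

Checking `table_len <= file_size - e_shoff` (after confirming `e_shoff <= file_size`) avoids the classic mistake of computing `e_shoff + table_len`, which can itself wrap around.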

Official announcement: Please refer to the vendor announcement for details – https://nvidia.custhelp.com/app/answers/detail/a_id/5594

CVE-2024-57258 – Integer overflows in memory allocation in Das U-Boot (19-02-2025)

Preface: U-Boot is both a first-stage and second-stage bootloader. It is loaded by the system’s ROM (e.g. on-chip ROM of an ARM CPU) from a supported boot device, such as an SD card, SATA drive, NOR flash (e.g. using SPI or I²C), or NAND flash.

Background: Das U-Boot is an open source, primary boot loader used in embedded devices to package the instructions to boot the device’s operating system kernel. U-Boot uses commands similar to the BASH shell to manipulate environment variables. U-Boot supports TFTP (Trivial FTP), a stripped-down FTP, so user authentication is not required for downloading images into the board’s RAM.

LK is the abbreviation of Little Kernel. LK is commonly used as the bootloader in the Android systems of the Qualcomm platform. It is an open source project. LK is the boot stage of the whole system, so it does not run independently. However, LK currently only supports the Arm and x86 architectures. A notable feature of LK is that it implements a simple thread mechanism; it is deeply customized for use with Qualcomm’s processors.

Vulnerability details: Integer overflows in memory allocation in Das U-Boot before 2025.01-rc1 occur for a crafted squashfs filesystem via sbrk, via request2size, or because ptrdiff_t is mishandled on x86_64.

Remark: An integer overflow is a type of software vulnerability that occurs when a variable, such as an integer, exceeds its assigned memory space. This can result in unexpected behavior or security issues, such as allowing an attacker to execute arbitrary code.
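The fix class for this kind of bug is to check the size arithmetic before allocating, instead of letting a product wrap around. The sketch below is generic C, not the actual U-Boot dlmalloc code, and it assumes a GCC/Clang toolchain for the `__builtin_mul_overflow` intrinsic.

```c
/* Hedged sketch: overflow-checked allocation. If count * size would
 * wrap, refuse to allocate rather than allocating a tiny buffer. */
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

static void *checked_alloc(size_t count, size_t size)
{
    size_t total;
    /* GCC/Clang builtin: returns true (nonzero) if the product wrapped. */
    if (__builtin_mul_overflow(count, size, &total))
        return NULL;
    return malloc(total);
}
```

Without the check, a crafted filesystem can make `count * size` wrap to a small value, so the allocation succeeds but the later writes run far past the end of the buffer.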

Official announcement: Please refer to the link for details – https://nvd.nist.gov/vuln/detail/CVE-2024-57258

nodejs: GOAWAY HTTP/2 frames cause memory leak outside heap (CVE-2025-23085) 17-02-2025

Preface: If artificial intelligence could create the world, do you know how its creation would differ from Genesis? Artificial intelligence focuses on efficiency, and everything needs to be fast.

But God is concerned with the balance of nature. Therefore, the development of everything is not rapid.

Background: HTTP/2 enables full request and response multiplexing. In practice, this means a connection made to a web server from your browser can be used to send multiple requests and receive multiple responses. This eliminates some of the time it takes to establish a new connection for each request.

The GOAWAY frame in HTTP/2 (type=0x7) is used to initiate the shutdown of a connection or to signal serious error conditions. When a server sends a GOAWAY frame, it tells the client to stop creating new streams on the connection. However, it allows the server to finish processing any streams that were already in progress. This mechanism is useful for administrative actions, such as server maintenance, as it allows for a graceful shutdown without abruptly terminating ongoing requests.

Vulnerability details: A memory leak could occur when a remote peer abruptly closes the socket without sending a GOAWAY notification. Additionally, if an invalid header was detected by nghttp2, causing the connection to be terminated by the peer, the same leak was triggered. This flaw could lead to increased memory consumption and potential denial of service under certain conditions. This vulnerability affects HTTP/2 Server users on Node.js v18.x, v20.x, v22.x and v23.x.
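The leak pattern can be sketched abstractly. This is generic C, not Node.js or nghttp2 internals; the struct and function names are hypothetical. The fix class is to route every close path (graceful GOAWAY, abrupt socket close, protocol error) through one idempotent teardown, so memory allocated outside the managed heap is always released.

```c
/* Hedged sketch: a single idempotent teardown for HTTP/2 session state,
 * so an abrupt close without GOAWAY cannot skip the cleanup. */
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

struct h2_session {
    void *native_buf;  /* memory allocated outside the managed heap */
    bool  closed;
};

static struct h2_session *h2_session_new(void)
{
    struct h2_session *s = calloc(1, sizeof(*s));
    if (s == NULL)
        return NULL;
    s->native_buf = malloc(4096);
    return s;
}

/* Called from every close path: GOAWAY, abrupt close, protocol error. */
static void h2_session_close(struct h2_session *s)
{
    if (s->closed)
        return;  /* idempotent: safe to call from any path, any order */
    free(s->native_buf);
    s->native_buf = NULL;
    s->closed = true;
}
```

The CVE arose because cleanup was tied to the orderly shutdown path; when the peer vanished without a GOAWAY (or nghttp2 rejected an invalid header), that path never ran and the native memory was never freed.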

Official announcement: Please refer to the link for details – https://access.redhat.com/errata/RHSA-2025:1613

Cache-based Side-Channel Attack Against SEV (18th Feb 2025)

Originally posted by AMD 3rd Feb 2025

Updated Acknowledgement – 2025-02-17

Preface: FIPS 186-5 removes DSA as an approved digital signature algorithm “due to a lack of use by industry and based on academic analyses that observed that implementations of DSA may be vulnerable to attacks if domain parameters are not properly generated.”

February 3, 2023 – NIST published Federal Information Processing Standard (FIPS) 186-5, Digital Signature Standard (DSS), along with NIST Special Publication (SP) 800-186, Recommendations for Discrete Logarithm-based Cryptography: Elliptic Curve Domain Parameters.  

Background: The SEV feature relies on elliptic-curve cryptography for its secure key generation, which runs when a VM is launched. The VM initiates the elliptic-curve algorithm by providing points along its NIST (National Institute of Standards and Technology) curve and relaying the data based on the private key of the machine.

Vulnerability details: AMD has received a report from researchers at National Taiwan University detailing cache-based side-channel attacks against Secure Encrypted Virtualization (SEV).

Remedy: AMD recommends software developers employ existing best practices for prime and probe attacks (including constant-time algorithms) and avoid secret-dependent data accesses where appropriate.  AMD also recommends following previously published guidance regarding Spectre type attacks (refer to the link in the reference section below), as it believes the previous guidance remains applicable to mitigate these vulnerabilities.
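One concrete example of the constant-time practice AMD’s remedy refers to is comparing secrets without data-dependent branches, so cache and timing behaviour do not reveal where two buffers first differ. This is a hedged, generic sketch, not AMD-provided code.

```c
/* Hedged sketch: constant-time comparison. Runtime depends only on len,
 * never on the contents, so a prime-and-probe observer learns nothing
 * about where a mismatch occurs. */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Returns 0 if the buffers are equal, nonzero otherwise. */
static int ct_compare(const uint8_t *a, const uint8_t *b, size_t len)
{
    uint8_t diff = 0;
    for (size_t i = 0; i < len; i++)
        diff |= a[i] ^ b[i];  /* accumulate differences, never early-exit */
    return diff;              /* 0 iff every byte matched */
}
```

A naive `memcmp`-style loop that returns at the first mismatch leaks the mismatch position through timing; the accumulate-then-return form touches every byte on every call.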

Supplement: The lack of authentication in the memory encryption is one major drawback of the Secure Memory Encryption (SME) design, which has been demonstrated in fault injection attacks. SEV inherits this security issue. Therefore, a malicious hypervisor may alter the ciphertext of the encrypted memory without triggering faults in the guest VM.

Official announcement: Please refer to the link for details – https://www.amd.com/en/resources/product-security/bulletin/amd-sb-3010.html