CVE-2024-53022: Memory corruption may occur during communication between primary and guest VM (6th Mar 2025)

Preface: QNX hypervisors are available in two variants: QNX Hypervisor and QNX Hypervisor for Safety.

The QNX Hypervisor variant (QH), which includes QNX Hypervisor 8.0, is not a safety-certified product. It must not be used in a safety-related production system.

If you are building a safety-related system, you must use the QNX Hypervisor for Safety (QHS) variant that has been built and approved for use in the type of system you are building, and you must use it only as specified in its Safety Manual. The latest QHS release is QNX Hypervisor for Safety 2.2, which is based on QNX SDP 7.1.

Background: Functions like mprotect() are not commonly used in QNX hypervisor memory resource management; instead, the hypervisor relies on techniques such as the following:

  1. Memory Isolation: The hypervisor ensures that each VM (both primary and guest) has its own isolated memory space. This prevents one VM from accessing the memory of another, enhancing security and stability.
  2. Dynamic Memory Allocation: The hypervisor can dynamically allocate memory to VMs based on their needs. This means that if a guest VM requires more memory, the hypervisor can allocate additional memory from the available pool.
  3. Memory Ballooning: This technique allows the hypervisor to reclaim unused memory from VMs and reallocate it where needed. The balloon driver within the VM inflates to consume memory, which is then returned to the hypervisor.
  4. Memory Hotplug: The hypervisor can add or remove memory from a VM while it is running. This allows for flexible memory management without needing to restart the VM.

Vulnerability details: Memory corruption may occur during communication between primary and guest VM.

Official announcement: Please refer to the link for details – https://nvd.nist.gov/vuln/detail/CVE-2024-53022

CVE-2025-22413: ANDROID (KVM (arm64)) – Don’t run a protected VCPU if it isn’t runnable! (5 March 2025)

Preface: The protected Kernel-based Virtual Machine (pKVM) is an advanced virtualization technology built on top of the Linux Kernel-based Virtual Machine (KVM). It is designed to enhance security and isolation for virtual machines (VMs) running on Android devices.

Key points about pKVM:

Enhanced Security: pKVM restricts access to the payloads running in guest VMs marked as ‘protected’ at the time of creation. This ensures that even if the host Android system is compromised, the guest VMs remain secure.

Isolation: It provides strong confidentiality and integrity guarantees by isolating memory and devices into individual protected VMs (pVMs).

Compatibility: pKVM is compatible with existing operating systems and workloads that rely on KVM-based virtual machines.

Background: In the context of pKVM, a vCPU (virtual Central Processing Unit) represents a virtualized CPU core assigned to a virtual machine (VM). Each vCPU appears to the VM’s operating system as a CPU core; the hypervisor schedules vCPUs onto the physical cores.

In pKVM, vCPUs are used to manage and allocate processing power to protected virtual machines (pVMs), ensuring that each VM has the necessary resources to operate securely and efficiently.

Vulnerability details: Don’t run a protected VCPU in pKVM if it isn’t in a runnable PSCI state. For protected VMs, the PSCI state is the reference state for whether they are runnable or not.
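The logic of the fix can be sketched as a small state check: only a vCPU whose PSCI state says it is runnable may be entered. The C model below is hypothetical (the enum and function names are invented for illustration; it is not the actual kernel patch):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative PSCI power states a vCPU can be in. These names are
 * invented for this sketch, not the kernel's actual definitions. */
enum psci_state { PSCI_ON, PSCI_OFF, PSCI_ON_PENDING, PSCI_SUSPENDED };

struct vcpu {
    enum psci_state psci;
    int runs;            /* counts how many times the vCPU was entered */
};

/* For protected VMs the PSCI state is the reference state: only a
 * vCPU in PSCI_ON may be entered. */
static bool vcpu_is_runnable(const struct vcpu *v)
{
    return v->psci == PSCI_ON;
}

int try_run_vcpu(struct vcpu *v)
{
    if (!vcpu_is_runnable(v))
        return -1;       /* refuse to enter the guest */
    v->runs++;
    return 0;
}
```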

Official announcement: Please refer to the link for details – https://android.googlesource.com/kernel/common/+/1a3366f0d3d9b94a8c025d9863edc3b427435c4c

CVE-2025-0078: Ensuring that the identity of the requesting service is included and verified during inter-process communication (4th Mar 2025)

Preface: The Gospel of Matthew 24:37

As it was in the days of Noah, so it will be at the coming of the Son of Man. For in the days before the flood, people were eating and drinking, etc.

Background: In Android, the ServiceManager is a key component in the Binder IPC (Inter-Process Communication) mechanism. It manages system services and provides a way for clients to obtain references to these services.

Here’s a brief overview of how the ServiceManager operates:

  1. Initialization: The ServiceManager is started by the init process during the system boot. It is defined in the init.rc script, which specifies the service and its executable path.
  2. Service Registration: When a service wants to register with the ServiceManager, it calls the addService method. This method takes the service name and a reference to the service’s Binder interface.
  3. Service Lookup: Clients can query the ServiceManager to get a reference to a registered service using the getService method. This method returns the Binder interface of the requested service.
  4. Security and Permissions: Starting from Android 8.1, SELinux policies have become stricter. Services must be defined in the plat_service_contexts file to be allowed to register with the ServiceManager. This ensures that only authorized services can be registered and accessed.
  5. Communication: Once a service is registered, clients can communicate with it through Binder IPC. The ServiceManager acts as a mediator, ensuring that the communication is secure and efficient.

Vulnerability details: A weakness in verifying the identity of the requesting service during Binder IPC could lead to local escalation of privilege.

Bug fixes: The setRequestingSid(true) method in the ServiceManager is used to enable the inclusion of the Security Identifier (SID) in service requests. This is part of the security framework in Android, ensuring that the identity of the requesting service is included and verified during inter-process communication (IPC).
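The effect of that verification can be modelled in a short C sketch. The names below are invented for illustration (the real mechanism is Binder IPC plus SELinux policy, and the real API is setRequestingSid on the service): a lookup only succeeds if the caller supplies an identity whenever the service demands one.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical model of SID-aware service lookup. */
#define MAX_SERVICES 8

struct service {
    const char *name;
    int requesting_sid;        /* analogue of setRequestingSid(true) */
};

static struct service registry[MAX_SERVICES];
static int nservices;

/* Register a service, optionally demanding a caller SID on lookup. */
int add_service(const char *name, int request_sid)
{
    if (nservices >= MAX_SERVICES)
        return -1;
    registry[nservices].name = name;
    registry[nservices].requesting_sid = request_sid;
    nservices++;
    return 0;
}

/* A lookup succeeds only if the caller supplies a SID whenever the
 * service demands one: the essence of the fix. */
const struct service *get_service(const char *name, const char *caller_sid)
{
    for (int i = 0; i < nservices; i++) {
        if (strcmp(registry[i].name, name) != 0)
            continue;
        if (registry[i].requesting_sid && caller_sid == NULL)
            return NULL;   /* identity missing: reject the request */
        return &registry[i];
    }
    return NULL;
}
```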

Official announcement: Please refer to the vendor announcement for details – https://android.googlesource.com/platform/frameworks/native/+/c32d4defe0f4e5cad86437d6672de7a76caf1a79

CVE-2020-24658: A years-old vulnerability is still hiding in embedded systems (3rd Mar 2025)

Preface: Many programmers continue to use Arm Compiler 5 for several reasons:

Developers who have been using Arm Compiler 5 for years are familiar with its quirks and features, making it easier for them to continue using it rather than learning a new toolchain.

Furthermore, Arm Compiler 5 supports older ARM architectures that may not be fully supported by newer compilers.

Background: When compiling ARM code with stack protection, the --protect_stack option is used to safeguard against stack buffer overflows and potential malicious tampering. Here are the conditions under which a function is considered vulnerable and thus protected:

  1. Arm Compiler 5:
    A function is considered vulnerable if it contains a char or wchar_t array of any size.
  2. Arm Compiler 6:
    With -fstack-protector, a function is considered vulnerable if it contains:
    - A character array larger than 8 bytes.
    - An 8-bit integer array larger than 8 bytes.
    - A call to alloca() with either a variable size or a constant size bigger than 8 bytes.
    With -fstack-protector-strong, a function is considered vulnerable if it contains:
    - An array of any size and type.
    - A call to alloca().
    - A local variable that has its address taken.
Using these options helps improve the overall security and integrity of your code by preventing stack buffer overflows.

Vulnerability details: In certain circumstances the stack protection feature can be rendered ineffective, leaving the protected function vulnerable to stack-based buffer overflows.

An undetected stack overflow can lead to a function return address being overwritten, potentially causing a crash or hang or allowing an attacker to gain control over program execution.

Official announcement: Please refer to the vendor announcement for details – https://developer.arm.com/documentation/110262/1-1/?lang=en

PAGE PREFETCHER ATTACK – AMD ID: AMD-SB-7040 (28-2-2025)

Preface: Page prefetching is a technique used to improve performance by preloading data into the cache before it’s actually needed. However, the implementation and presence of a page prefetcher can vary depending on the CPU architecture and design.

Background: A page prefetcher attack is a type of side-channel attack that exploits the page prefetching mechanism in modern CPUs. Page prefetching is a performance optimization technique where the CPU predicts and loads pages of memory into the cache before they are actually needed. This can inadvertently create security vulnerabilities.

In a page prefetcher attack, an attacker can infer sensitive information by observing the patterns and timing of page prefetching operations. For example, the attacker might be able to determine which memory pages are being accessed by the victim, thereby gaining insights into the victim’s activities or extracting sensitive data.
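The inference step can be modelled abstractly. In the toy C sketch below (all names and cycle counts are invented), pages start uncached, the victim's activity leaves one page cached, and the attacker recovers which page that was purely by timing probes:

```c
#include <assert.h>

/* Toy model of the side channel: the attacker cannot read the
 * victim's data, but can time accesses. Cached pages probe fast,
 * uncached pages probe slow. Numbers are illustrative only. */
#define NPAGES      8
#define FAST_CYCLES 40
#define SLOW_CYCLES 300

static int cached[NPAGES];   /* all pages start uncached (flushed) */

/* Victim touches a secret page; the prefetcher pulls it into cache. */
void victim_access(int secret_page) { cached[secret_page] = 1; }

/* Attacker-visible signal: simulated probe latency for one page. */
int probe_latency(int page) { return cached[page] ? FAST_CYCLES : SLOW_CYCLES; }

/* Attacker: time every page; the one that probes fast reveals which
 * page the victim touched. */
int infer_secret_page(void)
{
    int found = -1;
    for (int p = 0; p < NPAGES; p++)
        if (probe_latency(p) < SLOW_CYCLES)
            found = p;
    return found;
}
```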

About the topic: Researchers have disclosed to AMD a potential exploit, the page prefetcher attack (PPA), a prefetcher-based side-channel attack.

Manufacturer response: AMD has evaluated the paper and does not believe there are any new security implications. Please refer to the link – https://www.amd.com/en/resources/product-security/bulletin/amd-sb-7040.html

CVE-2024-36353: CROSS-PROCESS GPU MEMORY DISCLOSURE (27-02-2025)

Preface: Regarding its use in HPC clusters, the Radeon PRO V710 is indeed suitable. It is supported by AMD’s ROCm platform, which is optimized for HPC and AI workloads. Additionally, it is used in Azure’s NVads V710 v5-series virtual machines, which are designed for GPU-accelerated applications, including HPC.

Background: The global memory of the AMD Radeon™ PRO V710 consists of 28 GB of GDDR6 memory, connected via a 224-bit memory interface and operating at an effective speed of 18 Gbps. The memory is used for storing data that the GPU processes, such as textures, frame buffers, and other computational data.

The NVIDIA Container Toolkit is specifically designed to work with NVIDIA GPUs and their CUDA framework. It is not compatible with AMD GPUs. For AMD GPUs, you should use the ROCm (Radeon Open Compute) platform, which provides similar functionality for containerized environments.

OpenCL (Open Computing Language) in AMD ROCm (Radeon Open Compute) serves as a framework for writing programs that execute across heterogeneous platforms, including CPUs, GPUs, and other processors. Specifically, in the context of AMD ROCm, OpenCL allows developers to harness the computational power of AMD GPUs for high-performance, data-parallel computing tasks.

Vulnerability details: Insufficient clearing of GPU global memory could allow a malicious process running on the same GPU to read left over memory values potentially leading to loss of confidentiality.
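A toy model of the flaw and its fix in plain C (the pool and function names are invented; real GPU global memory is managed by the driver): a buffer handed from one "process" to the next without scrubbing still carries the previous owner's data, while clearing on allocation removes the leak.

```c
#include <assert.h>
#include <string.h>

/* Simulated shared memory pool reused across process boundaries. */
#define POOL_SIZE 64

static unsigned char pool[POOL_SIZE];

/* The flaw: hand out the buffer without clearing it, so whatever the
 * previous owner wrote is still readable by the new owner. */
unsigned char *alloc_no_clear(void)
{
    return pool;
}

/* The fix: scrub the memory before reuse. */
unsigned char *alloc_cleared(void)
{
    memset(pool, 0, POOL_SIZE);
    return pool;
}
```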

Official announcement: Please refer to the link for details https://www.amd.com/en/resources/product-security/bulletin/amd-sb-6019.html

CVE-2024-0148: NVIDIA Jetson Linux and IGX OS image contains a vulnerability in the UEFI firmware RCM boot mode (25-02-2025)

Preface: NVIDIA IGX Orin software is used by a variety of organizations, particularly those in industrial and medical environments. This platform is designed to support AI applications at the edge, providing high performance, advanced functional safety, and security.

Some specific use cases include:

  • Industrial Automation: Companies use IGX Orin to enhance manufacturing processes with AI-driven automation and predictive maintenance.
  • Healthcare: Medical institutions leverage IGX Orin for AI-powered diagnostics, medical imaging, and patient monitoring.
  • Robotics: Robotics companies utilize IGX Orin for developing intelligent robots that can operate safely alongside humans.

The platform’s versatility and robust support make it suitable for any organization looking to deploy AI solutions in demanding environments.

Background: The NVIDIA IGX Orin Developer Kit runs the Holopack 2.0 Developer Preview software. Holopack is a comprehensive solution for end-to-end GPU accelerated AI application development and testing. Holopack supports two GPU modes:

iGPU – Holopack deploys drivers and libraries to support the integrated NVIDIA Ampere architecture GPU on NVIDIA IGX Orin modules.

dGPU – Holopack deploys drivers and libraries to support an optional NVIDIA RTX A6000 discrete GPU connected to the PCIe slot.

Its high-performance, low-power computing for deep learning and computer vision makes Jetson the ideal platform for compute-intensive projects. The Jetson platform includes a variety of Jetson modules with NVIDIA JetPack™ SDK.

Vulnerability details: NVIDIA Jetson Linux and IGX OS image contains a vulnerability in the UEFI firmware RCM boot mode, where an unprivileged attacker with physical access to the device could load untrusted code. A successful exploit might lead to code execution, escalation of privileges, data tampering, denial of service, and information disclosure. The scope of the impacts can extend to other components.

Remark: The UEFI supply chain allows many of these shared libraries to be integrated in various ways, including compiled from source, licensed for modification and reuse, and finally as a dynamically or statically linked executable.

Official announcement: Please refer to the vendor announcement for details – https://nvidia.custhelp.com/app/answers/detail/a_id/5617

Python, have you ever thought about it? (25-02-2025)

Preface: Maintaining a satellite’s orbit involves a combination of precise calculations and regular adjustments. Here are the key factors:

  1. Velocity and Gravity: A satellite stays in orbit by balancing its velocity (speed in a straight line) with the gravitational pull of the Earth. The satellite must travel fast enough to counteract the pull of gravity, which keeps it in a stable orbit.
  2. Orbital Station-Keeping: This involves small adjustments using thrusters to correct any deviations in the satellite’s path. These maneuvers ensure the satellite remains in its designated orbit.
  3. Fuel Management: Satellites carry a limited amount of fuel for these adjustments. Efficient fuel management is crucial for prolonging the satellite’s operational life.
  4. Monitoring and Control: Ground stations continuously monitor satellites and send commands to perform necessary adjustments. This helps in maintaining the satellite’s orbit and addressing any potential issues.
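Point 1 can be made concrete with the circular-orbit condition v = sqrt(GM/r), where GM is Earth's standard gravitational parameter and r the distance from Earth's centre. A small C sketch (a Newton-iteration square root is used so the example needs no math library):

```c
#include <assert.h>

/* Earth's standard gravitational parameter GM, in m^3/s^2. */
static const double GM_EARTH = 3.986004418e14;

/* Newton's method square root, so the sketch has no libm dependency. */
static double sqrt_newton(double x)
{
    double g = x > 1.0 ? x : 1.0;
    for (int i = 0; i < 100; i++)
        g = 0.5 * (g + x / g);
    return g;
}

/* Circular orbital velocity v = sqrt(GM / r): the speed at which the
 * pull of gravity exactly supplies the centripetal acceleration. */
double circular_orbit_velocity(double radius_m)
{
    return sqrt_newton(GM_EARTH / radius_m);
}
```

For a satellite 400 km up (r ≈ 6,771 km from Earth's centre) this gives roughly 7.7 km/s, the familiar low-Earth-orbit speed.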

Background: The PyEphem module provides highly precise data on the planets and our solar system. This module leverages an extremely robust C library that allows you to pinpoint planets, perform interplanetary calculations and discover more data than you’ll ever know what to do with.

Best practice: If you’re using PyEphem, it’s a good idea to keep your Python environment and libraries up to date and to check the module’s GitHub repository for any reported issues or updates.

When the Sumerian advanced civilization met Cray HPC (24-02-2025)

Preface: Is linear algebra used in real life? An example of where there is a lot of research on these things is in sparse matrix analysis, which comes up a lot in real world applications of linear algebra. For some buzzwords, popular topics like machine learning, neural networks, and computer graphics all use huge amounts of linear algebra.

Since a box’s length is independent of its width and height, space has three dimensions. Since any point in space may be described by a linear combination of three independent vectors, space is considered to be three-dimensional in the technical language of linear algebra.

In Einstein’s special relativity theory we live in 4-dimensional spacetime, although in the way we normally “imagine” the world, we tend to believe that we live in a 3-dimensional Newtonian space with a separate, absolute time dimension.

Introduction: AI calculations often rely on various mathematical techniques, including linear algebra, Fourier transforms, and sparse matrix operations.

Some of the key math libraries in ROCm include:

  • rocBLAS: A library for basic linear algebra subprograms.
  • rocFFT: A library for fast Fourier transforms.
  • rocRAND: A library for random number generation.
  • rocSOLVER: A library for solving linear algebra problems.
  • rocSPARSE: A library for sparse matrix operations.

These libraries are optimized for AMD hardware and provide similar functionality to NVIDIA’s cuBLAS, cuFFT, cuRAND, etc., making it easier for developers to port their applications between different hardware platforms.
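To make "sparse matrix operations" concrete, below is a minimal CSR (compressed sparse row) matrix-vector multiply in plain C, the kind of kernel a library like rocSPARSE offloads to the GPU. This is a CPU sketch for illustration, not the rocSPARSE API:

```c
#include <assert.h>

/* y = A*x for a matrix in CSR form: vals holds the nonzeros, cols
 * their column indices, and rowptr[i]..rowptr[i+1] spans the nonzeros
 * of row i. Only nonzero entries are stored and multiplied, which is
 * the whole point of the sparse format. */
void csr_spmv(int nrows, const int *rowptr, const int *cols,
              const double *vals, const double *x, double *y)
{
    for (int i = 0; i < nrows; i++) {
        double sum = 0.0;
        for (int k = rowptr[i]; k < rowptr[i + 1]; k++)
            sum += vals[k] * x[cols[k]];
        y[i] = sum;
    }
}
```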

What does ROCm stand for? ROCm initially stood for Radeon Open Compute platform; however, due to Open Compute being a registered trademark, ROCm is no longer an acronym — it is simply AMD’s open-source stack designed for GPU compute.

Official reference: If you are interested in ROCm, please refer to the following link – https://rocm.docs.amd.com/en/docs-5.7.1/reference/gpu_libraries/math.html

CVE-2024-46975: GPU DDK – rgxfw_write_robustness_buffer allows arbitrary catreg set mapping (23rd Feb 2025)

Preface: A Memory Management Unit (MMU) is the hardware that performs virtual memory mapping and checks the current privilege level, keeping user processes separated from the operating system and from each other. In addition, it helps to prevent caching of ‘volatile’ memory regions (such as areas containing I/O peripherals).

Background: Generally speaking, GPU firmware and driver functionality do utilize the L2 cache. The L2 cache in a GPU is a larger, shared cache that helps improve memory access speeds and reduce latency for various operations. It plays a crucial role in optimizing the performance of GPU-accelerated tasks by storing frequently accessed data closer to the GPU cores.

The L2 cache is particularly important for managing memory access across different Streaming Multiprocessors (SMs) within the GPU. By efficiently handling memory requests and reducing the need for crossbar communication, the L2 cache helps minimize latency and improve overall task performance.

Vulnerability details: Kernel software installed and running inside a Guest VM may exploit memory shared with the GPU Firmware to write data into another Guest’s virtualised GPU memory.

Official announcement: Please refer to the vendor announcement for details – https://nvd.nist.gov/vuln/detail/CVE-2024-46975

antihackingonline.com