Part 1: Spectre-v2 Domain Isolation, does it defend against variants? (19-05-2025)

Preface: In computer architecture, a branch predictor is a digital circuit that tries to guess which way a branch will go before this is known definitively.

Background: The Spectre v2 vulnerability has been mitigated by domain isolation technologies such as IBPB, eIBRS and BHI_NO, which prevent attackers from training the indirect branch predictor with their own code. However, the VUSec security research team found that as long as code trains itself (that is, training and speculative execution occur in the same privilege domain), a Spectre v2 attack can be re-implemented.

Modern CPUs use branch predictors to guess the target of indirect branches (like function pointers or virtual calls) to keep the pipeline full and improve performance. These predictors are part of the speculative execution engine.
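
As a quick illustration (not taken from the VUSec paper), the C snippet below shows the kind of indirect branch, a call through a function pointer, whose target the branch predictor must guess before it is resolved:

#include <stdio.h>

static int add(int a, int b) { return a + b; }
static int sub(int a, int b) { return a - b; }

int main(void) {
    int (*op)(int, int);            /* target of the call is decided at run time */
    for (int i = 0; i < 4; i++) {
        op = (i % 2) ? add : sub;   /* alternating targets exercise the predictor */
        printf("%d\n", op(10, 3));  /* indirect branch: call through *op */
    }
    return 0;
}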

How Do Attackers Use Their Own Code?

1. Training with Their Own Code:

– The attacker repeatedly executes indirect branches in code they control, steering the indirect branch predictor toward a target of their choosing.

2. Triggering Misprediction:

– When the victim code runs, the CPU may speculatively execute based on the attacker’s training.

– If the attacker has set things up correctly, the CPU speculatively executes code paths that leak sensitive data (e.g., via cache timing).

3. The attacker then uses cache timing attacks (like Flush+Reload or Prime+Probe) to infer what was speculatively executed, revealing secrets (see the measurement sketch after this list).
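
As a rough, hedged sketch of the measurement step only, the C code below times a single load before and after flushing a cache line with CLFLUSH; a short reload time indicates the line was cached. It relies on x86 intrinsics (GCC/Clang), performs no speculative-execution attack, and the buffer name and sizes are arbitrary.

#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>   /* _mm_clflush, _mm_mfence, __rdtscp */

static uint64_t time_load(volatile uint8_t *addr) {
    unsigned aux;
    _mm_mfence();
    uint64_t start = __rdtscp(&aux);
    (void)*addr;                       /* the probed load */
    uint64_t end = __rdtscp(&aux);
    _mm_mfence();
    return end - start;
}

int main(void) {
    static uint8_t probe[64];

    (void)probe[0];                    /* warm the line: reload should be fast */
    printf("cached reload:  %llu cycles\n", (unsigned long long)time_load(probe));

    _mm_clflush((void *)probe);        /* FLUSH step: evict the line from the cache */
    _mm_mfence();
    printf("flushed reload: %llu cycles\n", (unsigned long long)time_load(probe));
    return 0;
}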

White paper: Researchers from VU Amsterdam have shared with AMD a paper exploring the effectiveness of domain isolation against Spectre-v2 type attacks.

AMD believes the techniques described by the researchers are not applicable to AMD products.

For more details, please refer to link – https://www.amd.com/en/resources/product-security/bulletin/amd-sb-7034.html

CVE-2025-47436: Heap-based Buffer Overflow vulnerability in Apache ORC. (15-5-2025)

Preface: Traditional row-based, normalized data formats have several limitations:

Complex Queries: Normalization often requires joining multiple tables to retrieve data, which can make queries more complex and slower.

Maintenance Challenges: Maintaining a highly normalized database can be more difficult, as changes to the schema may require updates to multiple tables.

Background: Apache ORC (Optimized Row Columnar) is a free and open-source, column-oriented data storage format designed for use in Hadoop and other big data processing systems. It was created to address the limitations of traditional row-based formats, providing a more efficient way to store and process large datasets. ORC is widely used by data processing frameworks like Apache Spark, Apache Hive, Apache Flink, and Apache Hadoop.

Vulnerability details: Heap-based Buffer Overflow vulnerability in Apache ORC. A vulnerability has been identified in the ORC C++ LZO decompression logic, where a specially crafted, malformed ORC file can cause the decompressor to allocate a 250-byte buffer and then attempt to copy 295 bytes into it, corrupting memory.
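
The following C sketch is illustrative only and is not the Apache ORC code; it shows the general allocate-small/copy-large pattern described in the advisory (250-byte buffer, 295-byte copy) and the length check that prevents it:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Returns 0 on success, -1 if the declared output size is inconsistent. */
static int copy_decompressed(uint8_t *dst, size_t dst_size,
                             const uint8_t *src, size_t src_len) {
    if (src_len > dst_size)            /* the missing check: reject oversized input */
        return -1;
    memcpy(dst, src, src_len);         /* without the check, copying 295 bytes into a
                                          250-byte heap buffer corrupts the heap */
    return 0;
}

int main(void) {
    uint8_t *out = malloc(250);        /* buffer sized from one field of the file */
    uint8_t crafted[295] = {0};        /* length taken from another, attacker-chosen field */
    if (!out) return 1;

    if (copy_decompressed(out, 250, crafted, sizeof crafted) != 0)
        puts("rejected: declared length exceeds allocated buffer");

    free(out);
    return 0;
}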

Remedy: This issue affects the Apache ORC C++ library: through 1.8.8, from 1.9.0 through 1.9.5, from 2.0.0 through 2.0.4, and from 2.1.0 through 2.1.1. Users are recommended to upgrade to version 1.8.9, 1.9.6, 2.0.5, or 2.1.2, which fix the issue.

Official announcement: Please see the link for details –

https://nvd.nist.gov/vuln/detail/CVE-2025-47436

Privilege Desynchronization: Cross-Privilege Spectre Attacks with Branch Privilege Injection – Part 2 (14-05-2025)

Preface: Before reading the detailed information, it is recommended to read Part 1 first.

Privilege Desynchronization: Cross-Privilege Spectre Attacks with Branch Privilege Injection (Part 1)  –

http://www.antihackingonline.com/under-our-observation/privilege-desynchronization-cross-privilege-spectre-attacks-with-branch-privilege-injection-14-05-2025/

Technical details: LFENCE ensures the serialization of memory accesses. Internally it adds a delay within a series of memory accesses so that an access issued after the instruction begins only after every access issued before it has completed (no overlap occurs).

Performs a serializing operation on all load-from-memory instructions that were issued prior to the LFENCE instruction. Specifically, LFENCE does not execute until all prior instructions have completed locally, and no later instruction begins execution until LFENCE completes.
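
As a hedged sketch of how such a barrier is typically placed from C, the snippet below uses the _mm_lfence() intrinsic after a bounds check so that the dependent load is not started speculatively; the table name, sizes and check are illustrative and not taken from the advisory:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <emmintrin.h>   /* _mm_lfence */

static uint8_t secret_table[16];

static uint8_t read_checked(size_t idx, size_t limit) {
    if (idx >= limit)
        return 0;
    /* Without a barrier, the CPU may start the load below speculatively before
     * the bounds check retires; LFENCE keeps younger instructions from starting
     * until all prior instructions have completed locally. */
    _mm_lfence();
    return secret_table[idx];
}

int main(void) {
    printf("%d\n", read_checked(3, sizeof secret_table));
    return 0;
}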

AMD’s AutoIBRS (Automatic Indirect Branch Restricted Speculation) is designed to mitigate timing-based attacks, such as Spectre. AutoIBRS helps avoid the performance overhead associated with LFENCE by automatically restricting speculative execution of indirect branches. This mechanism reduces the need for frequent LFENCE instructions, thereby minimizing delays while still protecting against timing vulnerabilities.

Cyber security focus provided by ETH Zurich: Researchers from ETH Zurich have provided AMD with a paper titled “Privilege Desynchronization: Cross-Privilege Spectre Attacks with Branch Privilege Injection.”
AMD reviewed the paper and believes that this vulnerability does not impact AMD CPUs. 

If supported by the processor, operating systems enable eIBRS or AutoIBRS to mitigate cross-privilege BTI attacks. These mitigations need to keep track of the privilege domain of branch instructions to work correctly, which is non-trivial due to the highly complex and asynchronous nature of branch prediction. For example, previous work has shown that branch predictions are updated before branches retire, and in certain cases even before they are decoded. Our first challenge revolves around analyzing the behavior of restricted branch prediction under race conditions.

Official announcement:

Please see the link for details – https://www.amd.com/en/resources/product-security/bulletin/amd-sb-7030.html

Privilege Desynchronization: Cross-Privilege Spectre Attacks with Branch Privilege Injection (14-05-2025)

Preface: Enhanced IBRS (eIBRS) and Automatic IBRS (AutoIBRS) are features designed to mitigate the Spectre V2 vulnerability, which affects speculative execution in CPUs.

Background: AutoIBRS is a similar feature introduced by AMD in their Zen 4 processors. It automatically manages IBRS mitigation resources across privilege level transitions, offering better performance compared to Retpoline. This feature is particularly beneficial for AMD’s Ryzen 7000 and EPYC 9004 series processors.

AMD EPYC 9004 series processors are designed for data centers and high-performance computing (HPC) environments. They offer features like up to 96 “Zen 4” cores, 12 channels of DDR5 memory, and PCIe Gen5 support.

Cyber security focus provided by ETH Zurich: Researchers from ETH Zurich have provided AMD with a paper titled “Privilege Desynchronization: Cross-Privilege Spectre Attacks with Branch Privilege Injection.”
AMD reviewed the paper and believes that this vulnerability does not impact AMD CPUs. 

If supported by the processor, operating systems enable eIBRS or AutoIBRS to mitigate cross-privilege BTI attacks. These mitigations need to keep track of the privilege domain of branch instructions to work correctly, which is non-trivial due to the highly complex and asynchronous nature of branch prediction. For example, previous work has shown that branch predictions are updated before branches retire, and in certain cases even before they are decoded. Our first challenge revolves around analyzing the behavior of restricted branch prediction under race conditions.

Official announcement:

Please see the link for details – https://www.amd.com/en/resources/product-security/bulletin/amd-sb-7030.html

CVE-2025-21460: Improper Input Validation in Automotive Software platform based on QNX. (13th May 2025)

Preface: As of June 26, 2023, QNX software is now embedded in over 255 million vehicles worldwide, including most leading OEMs and Tier 1s, such as BMW, Bosch, Continental, Dongfeng Motor, Geely, Ford, Honda, Mercedes-Benz, Subaru, Toyota, Volkswagen, Volvo, and more.

Background: In Automotive Ethernet Audio Video Bridging (eAVB), reliable communication is not limited to audio alone. eAVB ensures efficient and reliable communication for both audio and video data, as well as other types of data that require low latency and high synchronization. This includes applications such as infotainment systems, advanced driver-assistance systems (ADAS), and vehicle-to-vehicle communication.

The standards for eAVB, including Time-Sensitive Networking (TSN), provide guaranteed latencies and the ability to build redundant network paths for safety-critical communications. This makes eAVB a versatile solution for various types of data within the automotive network.

Vulnerability details:

Improper Input Validation in Automotive Software platform based on QNX

Description: Memory corruption while processing a message: because the buffer is controlled by a Guest VM, its value can be changed continuously while it is being processed.
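
To make the wording concrete, the hedged C sketch below models (with hypothetical names, not QNX or Qualcomm code) why a guest-controlled buffer should be snapshotted once and validated on the copy, instead of being read repeatedly from shared memory while the guest can keep changing it:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct msg_hdr { uint32_t len; uint8_t payload[64]; };

/* Unsafe pattern (not shown): fetch 'len' from the guest-shared header, check it,
 * then fetch it again for the copy; the guest can enlarge it in between.
 * Safer pattern: snapshot the header once, then validate and use the snapshot. */
static int handle_message(const struct msg_hdr *shared /* guest-shared memory */,
                          uint8_t *out, size_t out_size) {
    struct msg_hdr local;
    memcpy(&local, shared, sizeof local);        /* single fetch into private memory */
    if (local.len > sizeof local.payload || local.len > out_size)
        return -1;                               /* validate the snapshot, not the shared copy */
    memcpy(out, local.payload, local.len);       /* use only snapshotted data */
    return 0;
}

int main(void) {
    struct msg_hdr m = { .len = 8, .payload = "guestmsg" };
    uint8_t out[64];
    printf("handled: %d\n", handle_message(&m, out, sizeof out));
    return 0;
}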

Official announcement: Please see the link for details – https://nvd.nist.gov/vuln/detail/CVE-2025-21460

About CVE-2025-37889 – ASoC framework: Consistently treat platform_max as control value (12th May 2025)

Preface: The overall project goal of the ALSA System on Chip (ASoC) layer is to provide better ALSA support for embedded system-on-chip processors.

Advanced Linux Sound Architecture (ALSA) is a software framework and part of the Linux kernel that provides an application programming interface (API) for sound card device drivers.

Background: The snd_soc_put_volsw() function is part of the ALSA System on Chip (ASoC) layer in the Linux kernel. It is included in the kernel source, but whether it is available by default depends on the specific kernel configuration and the presence of ASoC support. Here’s a brief overview of its features:

Purpose: It sets the volume control values for the sound subsystem.

Arguments: It takes two arguments: kcontrol, which represents the mixer control, and ucontrol, which contains the control element information.

Return Value: On success it returns 0 or 1, indicating whether the control value changed; on failure it returns a negative error code.

Vulnerability details: ASoC: ops: Consistently treat platform_max as control value. This reverts commit 9bdd10d57a88 (“ASoC: ops: Shift tested values in snd_soc_put_volsw() by +min”) and makes some additional related updates.

It is speculated that this is an enhancement and remediation related to CVE-2022-48917:

In the Linux kernel, the following vulnerability has been resolved: ASoC: ops: Shift tested values in snd_soc_put_volsw() by +min. While the $val/$val2 values passed in from userspace are always >= 0 integers, the limits of the control can be signed integers and the $min can be non-zero and less than zero. To correctly validate $val/$val2 against platform_max, add the $min offset to val first. (CVE-2022-48917)
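
The following is a minimal, hedged sketch of the bounds check described in the CVE-2022-48917 text above, not the kernel's snd_soc_put_volsw() itself: the userspace value is shifted by +min before it is compared against platform_max; the structure and field names are illustrative.

#include <stdbool.h>
#include <stdio.h>

struct mixer_ctl { int min; int max; int platform_max; };

/* Userspace passes val >= 0, but the control's min may be negative,
 * so the tested value is val + min before comparing against the limit. */
static bool value_in_range(const struct mixer_ctl *c, unsigned int user_val) {
    int val = (int)user_val + c->min;      /* shift tested value by +min */
    return val >= c->min && val <= c->platform_max;
}

int main(void) {
    struct mixer_ctl ctl = { .min = -60, .max = 0, .platform_max = 0 };
    printf("%d\n", value_in_range(&ctl, 30));   /* within range: prints 1 */
    printf("%d\n", value_in_range(&ctl, 100));  /* above platform_max: prints 0 */
    return 0;
}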

Official announcement: Please see the link for details –

https://nvd.nist.gov/vuln/detail/CVE-2025-37889

CVE-2025-37834: About Linux vmscan[.]c (8th May 2025)

Preface: All systems based on the Linux kernel utilize the vmscan[.]c file for memory management. This file is integral to the kernel’s memory reclamation process, ensuring efficient use of system memory across various Linux distributions.

Background: The vmscan[.]c file in the Linux kernel is responsible for managing memory reclamation. It contains functions that help the system reclaim memory by scanning and freeing up pages that are no longer in use. This process is crucial for maintaining system performance and preventing memory shortages.

Some key functions within vmscan.c include:

kswapd: A kernel thread that periodically scans and frees up memory pages.

shrink_node: This function attempts to reclaim memory from a specific node.

shrink_zone: It works on reclaiming memory from a specific zone within a node.

These functions work together to ensure that the system has enough free memory to operate efficiently.

Vulnerability details: mm/vmscan: don’t try to reclaim hwpoison folio. The vulnerability has been resolved.

The enhancement in the vmscan[.]c file, specifically the handling of hardware-poisoned pages, is indeed part of the broader memory management improvements. This enhancement is not limited to the shrink_node function alone. It applies to various parts of the memory reclamation process, including functions like shrink_zone and shrink_folio_list.
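
As a userspace model only (not kernel code), the sketch below captures the idea behind the fix: folios flagged as hardware-poisoned are skipped by the reclaim scan instead of being processed; the struct and flag names are illustrative.

#include <stdbool.h>
#include <stdio.h>

struct folio { int id; bool hwpoison; bool reclaimed; };

static int scan_folio_list(struct folio *list, int n) {
    int freed = 0;
    for (int i = 0; i < n; i++) {
        if (list[i].hwpoison)          /* don't try to reclaim a hwpoison folio */
            continue;
        list[i].reclaimed = true;      /* normal reclaim path */
        freed++;
    }
    return freed;
}

int main(void) {
    struct folio lru[3] = { {1, false, false}, {2, true, false}, {3, false, false} };
    printf("reclaimed %d of 3 folios\n", scan_folio_list(lru, 3));
    return 0;
}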

Official announcement: Please see the link for details – https://nvd.nist.gov/vuln/detail/CVE-2025-37834

CVE-2024-49835 – Out-of-bounds Write in SPS Applications (8th May 2025)

Preface: Semi-Persistent Scheduling (SPS) is used in LTE and 5G networks to reduce control channel overhead for applications requiring persistent radio resource allocations, such as VoIP and VoLTE. The memory usage for SPS on Android devices can vary based on several factors, including the specific implementation and the network conditions.

A method and apparatus for determining validity of a semi-persistent scheduling (SPS) resource across multiple cells in a wireless communication system is provided. A user equipment (UE) receives a SPS resource configuration including time information related to validity of the SPS resource configuration from a network, and determines whether the SPS resource configuration is valid or not according to the time information.

Background: Semi-Persistent Scheduling (SPS) Workflow

  1. The RF module in the Snapdragon chip receives the SPS resource configuration from the network. This configuration includes time information related to the validity of the SPS resource.
  2. The Physical Layer (PHY) processes the received configuration to determine its validity based on the time information provided.
  3. If the configuration is valid, the Medium Access Control (MAC) layer handles the allocation of radio resources for multiple consecutive Transmission Time Intervals (TTIs). This reduces the need for frequent scheduling decisions and signaling overhead.
  4. The MAC layer coordinates with the Radio Link Control (RLC) layer to manage data transmission using the allocated resources. The RLC layer ensures data integrity and proper sequencing.
  5. The Digital Signal Processor (DSP) and Application Processor within the Snapdragon chip are responsible for executing the scheduling algorithms and managing the data flow. The configuration and scheduling information are stored in the shared memory accessible by both the DSP and the application processor.

Vulnerability details: Out-of-bounds Write in SPS Applications. Memory corruption while reading a secure file. This is a type of memory access error that occurs when a program writes data to a memory address outside the bounds of a buffer. This can result in the program overwriting data that does not belong to it, which can cause crashes, incorrect behavior, or even security vulnerabilities.

Official announcement: For details, please refer to the link – https://nvd.nist.gov/vuln/detail/cve-2024-49835

Mali GPU Driver Security Bulletin: CVE-2025-0427 (7th May 2025)

Last updated: 2 May 2025 (official)

Preface: An ioctl interface is a single system call by which userspace may communicate with device drivers. Requests on a device driver are vectored with respect to this ioctl system call, typically by a handle to the device and a request number.
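
A minimal userspace example of that pattern is sketched below; the device path and request number are placeholders and are not related to the Mali driver.

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>

int main(void) {
    int fd = open("/dev/example_gpu", O_RDWR);      /* handle to the device (placeholder path) */
    if (fd < 0) { perror("open"); return 1; }

    unsigned long request = 0x40046101UL;           /* hypothetical request number */
    int arg = 0;
    if (ioctl(fd, request, &arg) < 0)               /* request vectored into the driver */
        perror("ioctl");

    close(fd);
    return 0;
}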

Background: The Arm Mali GPU, when installed on an Android phone, works alongside the CPU rather than replacing it. The Mali GPU is specifically designed for handling graphics processing tasks, such as rendering images, animations, and videos, which helps to offload these tasks from the CPU. This allows the CPU to focus on other computational tasks, improving overall device performance and efficiency.

The Mali GPU itself does not have an embedded CPU; it is a separate component that works in conjunction with the device’s main CPU. This collaboration between the GPU and CPU ensures that graphics-intensive applications, like games and videos, run smoothly while maintaining efficient power usage.

Vulnerability details: Use After Free vulnerability in Arm Ltd Bifrost GPU Kernel Driver, Arm Ltd Valhall GPU Kernel Driver, Arm Ltd Arm 5th Gen GPU Architecture Kernel Driver allows a local non-privileged user process to perform valid GPU processing operations to gain access to already freed memory.

Impact: This issue affects Bifrost GPU Kernel Driver: from r8p0 through r49p3, from r50p0 through r51p0; Valhall GPU Kernel Driver: from r19p0 through r49p3, from r50p0 through r53p0; Arm 5th Gen GPU Architecture Kernel Driver: from r41p0 through r49p3, from r50p0 through r53p0.

Official announcement: Please see the link for details –

https://nvd.nist.gov/vuln/detail/CVE-2025-0427

https://developer.arm.com/documentation/110465/latest

CVE-2024-49739 – GPU DDK: misuse of the ptrace system call (6th May 2025)

Official release posted: 2nd May 2025

Since the manufacturer did not provide a detailed description, could the situation discovered by the manufacturer be similar to the details in this article?

Preface:

Nvidia is a major player in the GPU market, known for its high-performance graphics cards used in gaming, professional visualization, data centers, and AI applications.

Imagination Technologies specializes in providing GPU processor solutions for graphics and AI vision applications. They focus on mobile devices, automotive, and embedded systems.

Background: All PowerVR GPUs are based on the unique Tile-Based Deferred Rendering (TBDR) architecture, which Imagination describes as the only true deferred rendering GPU architecture in the world.

Tile-Based Deferred Rendering (TBDR)

– Tile-Based Rendering: The screen is divided into small tiles, and each tile is processed individually. This allows the GPU to store data like color and depth buffers in internal memory, reducing the need for frequent access to system memory. This results in lower energy consumption and higher performance.

– Deferred Rendering: This technique defers texturing and shading operations until the visibility of each pixel in the tile is determined. Only the pixels that will be visible to the user consume processing resources, which enhances efficiency.

Vulnerability details: Software installed and run as a non-privileged user may conduct ptrace system calls to issue writes to GPU origin read only memory.
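
As a generic illustration of the primitive named in the advisory (and not the actual exploit), the C sketch below attaches to a child process with ptrace() and patches one word of its memory with PTRACE_POKEDATA; it does not touch any GPU driver mapping, and error handling is omitted.

#include <signal.h>
#include <stdio.h>
#include <sys/ptrace.h>
#include <sys/wait.h>
#include <unistd.h>

static volatile long word = 0x1111;

int main(void) {
    pid_t child = fork();
    if (child == 0) {
        ptrace(PTRACE_TRACEME, 0, NULL, NULL);   /* let the parent trace this process */
        raise(SIGSTOP);                          /* pause until the parent resumes us */
        printf("child sees: %#lx\n", word);      /* prints the patched value, 0x2222 */
        _exit(0);
    }
    waitpid(child, NULL, 0);                     /* child is now stopped */
    /* PTRACE_POKEDATA writes one word into the tracee's address space. */
    ptrace(PTRACE_POKEDATA, child, (void *)&word, (void *)0x2222L);
    ptrace(PTRACE_CONT, child, NULL, 0);
    waitpid(child, NULL, 0);
    return 0;
}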

Resolution: The DDK Kernel module has been updated to address this improper use of the ptrace system call and to prevent write requests to read-only memory.

Official announcement: Please see the link for details –

https://www.imaginationtech.com/gpu-driver-vulnerabilities