AMD responds to an attack method available to privileged attackers with physical access to the motherboard (3rd Oct 2025)

Preface: AMD does not plan to release any mitigations in response to this report because the reported exploit is outside the scope of the published threat model for SEV-SNP.

Remark: A physical attack is not a cyber attack because “cyber” refers to actions within computer networks and digital systems, whereas a physical attack directly involves the physical world, such as breaking into a building or destroying hardware. While a physical attack can lead to cyber vulnerabilities or data breaches, the act itself is not inherently digital.

Background: SEV-SNP is a TEE that protects the confidentiality and integrity of whole VMs against an attacker with root privileges and physical access to the machine, enabling users to run SEV-protected VMs without trusting the infrastructure provider or virtualization layers such as the hypervisor.

A Trusted Execution Environment (TEE) is a secure, isolated area within a device’s main processor, protected from the main operating system and other untrusted software. It uses special hardware to create a trusted space (a “secure world”) to run sensitive code and protect data’s confidentiality and integrity. TEEs are used for security-sensitive operations like biometric authentication, secure payments, and protecting private keys in crypto wallets.

The “probe” for Serial Presence Detect (SPD) data on DDR4 and DDR5 modules is an I2C bus and associated protocols that allow the motherboard’s firmware (BIOS) to read an EEPROM chip on the memory module.

How does the attack work?

1. The attacker gains physical access to the system and modifies the SPD data.

2. The tampered SPD falsely reports a larger memory size than actually exists.

3. This causes the memory controller to use ghost address bits, creating aliasing: multiple physical addresses pointing to the same memory location.

4. The attacker can then:

   - Overwrite encrypted guest memory.

   - Inject malicious data into memory regions.

   - Bypass SEV-SNP’s memory integrity protections, which assume correct physical mappings.
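The aliasing effect in step 3 can be sketched in a few lines. This is only a toy model (not AMD's memory-controller logic): the DRAM chip wires fewer address bits than the tampered SPD claims, so the extra "ghost" bits are silently discarded and two distinct addresses land on the same cell.

```python
# Toy model of SPD-induced aliasing: the controller believes CLAIMED_SIZE
# cells exist, but the DRAM only decodes enough bits for REAL_SIZE cells.

REAL_SIZE = 16        # cells the DRAM really has (power of two)
CLAIMED_SIZE = 32     # size the tampered SPD reports

memory = [0] * REAL_SIZE

def write(addr, value):
    # Only log2(REAL_SIZE) address bits are wired; higher bits are ghosts.
    memory[addr % REAL_SIZE] = value

def read(addr):
    return memory[addr % REAL_SIZE]

write(3, "victim data")
write(3 + REAL_SIZE, "attacker data")   # alias of address 3
print(read(3))                          # the victim's cell was overwritten
```

Running this prints "attacker data" for address 3, even though the attacker only ever wrote to address 19: the integrity assumption that distinct physical addresses are distinct cells no longer holds.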

Official announcement: For more details, please refer to the link –

https://www.amd.com/en/resources/product-security/bulletin/amd-sb-3024.html

CVE-2025-10657: About Enhanced Container Isolation (2nd Oct 2025)

Preface: Standardized AI/ML model packaging: with OCI artifacts, models can be versioned, distributed, and tracked like container images, promoting consistency and traceability across environments. Docker Desktop, specifically through its Docker Model Runner feature, can be used to run various AI models, particularly Large Language Models (LLMs) and other AI models that can be packaged as OCI Artifacts.

OCI Artifacts are any arbitrary files associated with software applications, extending the standardized OCI (Open Container Initiative) image format to include content beyond container images, such as Helm charts, Software Bill of Materials (SBOMs), digital signatures, and provenance data. These artifacts leverage the same fundamental OCI structure of manifest, config, and layers and are stored and distributed using OCI-compliant registries and tools like the ORAS CLI.

Background: A container desktop, such as Docker Desktop, acts as a local development environment and a management host for CI/CD pipelines by providing consistent, isolated environments for building, testing, and deploying containerized applications. It enables developers to package applications with their dependencies into portable containers, eliminating “works on my machine” issues and ensuring application uniformity across development, testing, and production. This simplifies the entire software delivery process, accelerating the development lifecycle by integrating container management directly into the developer’s workflow.

Vulnerability details: In a hardened Docker environment with Enhanced Container Isolation (ECI, https://docs.docker.com/enterprise/security/hardened-desktop/enhanced-container-isolation/) enabled, an administrator can use the command restrictions feature (https://docs.docker.com/enterprise/security/hardened-desktop/enhanced-container-isolation/config/#command-restrictions) to restrict the commands that a container with a Docker socket mount may issue on that socket. Due to a software bug, the configuration to restrict commands was ignored when passed to ECI, allowing any command to be executed on the socket. This grants excessive privileges by permitting unrestricted access to powerful Docker commands. The vulnerability affects only Docker Desktop 4.46.0 users that have ECI enabled and are using the Docker socket command restrictions feature. In addition, since ECI restricts mounting the Docker socket into containers by default, it only affects containers which are explicitly allowed by the administrator to mount the Docker socket.

Official announcement: For more details, please see the link –

https://nvd.nist.gov/vuln/detail/CVE-2025-10657

CVE-2025-59936: About get-jwks, OAuth 2.0, and OpenID Connect (OIDC). Be vigilant! (30th Sep, 2025)

Preface: JSON Web Key Sets (JWKS) are a popular and essential component for secure, decentralized authentication systems, particularly in OAuth 2.0 and OpenID Connect (OIDC) flows, where they provide a standardized, interoperable, and scalable method for clients to obtain the public keys needed to verify the digital signatures of JSON Web Tokens (JWTs) without requiring synchronous communication with the identity provider.

Background: Using a JSON Web Key Set (JWKS) eliminates the need to manually redistribute keys to resource servers: after key rotation, they automatically retrieve the new keys from the JWKS endpoint to verify tokens, reducing manual effort and downtime. The resource server caches the JWKS document and uses the kid (Key ID) from the token to find the correct public key to validate the signature.

Benefits of using JWKS:

Automated Key Rotation: No manual updates are needed for clients or resource servers when keys are rotated.

Reduced Downtime: Applications can dynamically fetch new keys, minimizing the need for restarts or manual configuration during key rotation.

Simplified Management: A centralized JWKS endpoint simplifies the process of managing public keys across multiple clients and systems.

Enhanced Security: By rotating keys regularly, the window of vulnerability for a compromised key is limited to the time-to-live of the token, minimizing the impact of a potential breach.

Vulnerability details: get-jwks contains fetch utils for JWKS keys. In versions prior to 11.0.2, a vulnerability in get-jwks can lead to cache poisoning in the JWKS key-fetching mechanism. When the iss (issuer) claim is validated only after keys are retrieved from the cache, it is possible for cached keys from an unexpected issuer to be reused, resulting in a bypass of issuer validation. This design flaw enables a potential attack where a malicious actor crafts a pair of JWTs, the first one ensuring that a chosen public key is fetched and stored in the shared JWKS cache, and the second one leveraging that cached key to pass signature validation for a targeted iss value. The vulnerability will work only if the iss validation is done after the use of get-jwks for keys retrieval. This issue has been patched in version 11.0.2.
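The poisoning pattern described above can be illustrated with a minimal sketch. The helper names below are hypothetical (this is not the actual get-jwks code, which is a Node.js library): the point is only that a cache keyed by kid alone lets a key cached for one issuer be reused to "verify" a token claiming a different issuer.

```python
bad_cache = {}   # kid -> key          (vulnerable: issuer ignored in the key)
good_cache = {}  # (iss, kid) -> key   (issuer-scoped, the patched pattern)

def fetch_key(iss, kid):
    # stand-in for fetching https://<iss>/.well-known/jwks.json over the network
    return f"key-of-{iss}-{kid}"

def get_key_bad(iss, kid):
    if kid not in bad_cache:
        bad_cache[kid] = fetch_key(iss, kid)
    return bad_cache[kid]

def get_key_good(iss, kid):
    if (iss, kid) not in good_cache:
        good_cache[(iss, kid)] = fetch_key(iss, kid)
    return good_cache[(iss, kid)]

# Step 1: the attacker's first JWT primes the cache with the attacker's key.
get_key_bad("https://evil.example", "kid1")
# Step 2: a second JWT targeting the trusted issuer reuses that cached key.
poisoned = get_key_bad("https://trusted.example", "kid1")
print(poisoned)  # the attacker's key, not the trusted issuer's

# With issuer-scoped caching, the trusted issuer's own key is fetched instead.
print(get_key_good("https://trusted.example", "kid1"))
```

Validating the iss claim before any cache lookup (or scoping the cache by issuer, as in the patched 11.0.2 behaviour) closes the gap.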

Official announcement: Please refer to the website for details –

https://nvd.nist.gov/vuln/detail/CVE-2025-59936

CVE-2025-55780: AI/LLM developers should not underestimate the MuPDF design flaw! (29-09-2025)

Preface: LLMs are built on machine learning, specifically a type of neural network called a transformer model. How do LLMs read PDFs? A typical pipeline first extracts the text blocks from the PDF using a tool such as pdfplumber. Each text block comes with its coordinates, which allows its spatial relationships to be analyzed. A “window” around each text block then captures its surrounding context.

Background: MuPDF is not widely known by consumers as a popular standalone application, but it is popular and growing in popularity among developers, particularly those working with Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG) systems, due to its powerful and lightweight nature.

Large Language Models (LLMs) do not directly “read” PDF files in their native binary format. Instead, they interact with the extracted content of the PDF. MuPDF, through its Python binding PyMuPDF (or its specialized variant PyMuPDF4LLM), plays a crucial role in this process by enabling efficient and accurate extraction of information from PDFs.

Vulnerability details: A null pointer dereference occurs in the function break_word_for_overflow_wrap() in MuPDF 1.26.4 when rendering a malformed EPUB document. Specifically, the function calls fz_html_split_flow() to split a FLOW_WORD node, but does not check if node->next is valid before accessing node->next->overflow_wrap, resulting in a crash if the split fails or returns a partial node chain.
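The missing check can be sketched conceptually in Python (MuPDF itself is C; the class and function names below are illustrative stand-ins, not MuPDF's API): after the split, the code must confirm the node actually has a successor before touching its fields, since the unchecked access in the real code is node->next->overflow_wrap.

```python
class FlowNode:
    """Illustrative stand-in for MuPDF's flow node structure."""
    def __init__(self, kind, next_node=None):
        self.kind = kind
        self.next = next_node
        self.overflow_wrap = False

def split_flow(node):
    # Stand-in for fz_html_split_flow(): on malformed input the split can
    # fail and leave the node without a successor.
    return node.next

def break_word_unsafe(node):
    nxt = split_flow(node)
    return nxt.overflow_wrap          # crashes when nxt is None (the bug)

def break_word_safe(node):
    nxt = split_flow(node)
    if nxt is None:                   # the guard the fix requires
        return False
    return nxt.overflow_wrap

word = FlowNode("FLOW_WORD")          # malformed: no successor node
print(break_word_safe(word))          # handled gracefully instead of crashing
```

In C, the equivalent of the unsafe path dereferences a null pointer, which is exactly the crash reported for malformed EPUB input.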

Official announcement: For more details, see the link

https://nvd.nist.gov/vuln/detail/CVE-2025-55780

CVE-2025-23348 and CVE-2025-23349: About NVIDIA Megatron-LM (26-09-2025)

Preface: For years, OpenAI’s GPT series has been a dominant force, while NVIDIA’s Megatron-LM has provided a powerful framework for training these massive models.

NVIDIA Megatron-LM faces competition from several other frameworks especially Microsoft DeepSpeed, Hugging Face Accelerate, JAX/Flax and PyTorch Lightning.

Both PyTorch Lightning and NVIDIA Megatron-LM are built on top of the PyTorch library. PyTorch provides the fundamental tensor operations and deep learning primitives, while these frameworks add abstractions and tools for more efficient and scalable model development and training.

Background: The full GPT pre-training process:

A script such as pretrain_gpt[.]py orchestrates the major steps to train the model from scratch on billions of parameters and terabytes of data. The process involves four steps:

  1. Data preparation
  2. Distributed setup
  3. Core training loop
  4. Model saving and evaluation

The design objective of a script like orqa/unsupervised/nq.py is to prepare the GPT model for open-domain question answering (QA), a task that is not typically a part of standard, large-scale unsupervised pre-training. The script specifically uses the Natural Questions (NQ) dataset to enhance the model’s ability to retrieve information from a large corpus of documents and generate answers, all without the direct use of a labeled QA dataset for this step.
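The advisories below do not publish the exact vulnerable code path, but "code injection via malicious data" typically means attacker-controlled fields from a data file reaching an evaluation sink. This hedged sketch only illustrates that bug class with stdlib tools: eval() on a dataset field executes arbitrary expressions, while ast.literal_eval() accepts only literals.

```python
import ast

record = "{'tokens': [1, 2, 3]}"              # well-formed training sample
malicious = "__import__('os').getcwd()"       # attacker-controlled field

def parse_unsafe(field):
    # Executes whatever expression the data file contains: code injection.
    return eval(field)

def parse_safe(field):
    # Accepts only Python literals; raises on anything executable.
    return ast.literal_eval(field)

print(parse_safe(record)['tokens'])           # [1, 2, 3]
try:
    parse_safe(malicious)
except ValueError:
    print("rejected non-literal input")
```

The general mitigation is the same whatever the actual sink in Megatron-LM: treat every field of a training corpus as untrusted input and parse it with literal-only or schema-validated deserializers.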

Vulnerability details:

CVE-2025-23348: NVIDIA Megatron-LM for all platforms contains a vulnerability in the pretrain_gpt script, where malicious data created by an attacker may cause a code injection issue. A successful exploit of this vulnerability may lead to code execution, escalation of privileges, information disclosure, and data tampering.

CVE-2025-23349: NVIDIA Megatron-LM for all platforms contains a vulnerability in the tasks/orqa/unsupervised/nq.py component, where an attacker may cause a code injection. A successful exploit of this vulnerability may lead to code execution, escalation of privileges, information disclosure, and data tampering.

Official announcement: Please refer to the link for more details –

https://nvidia.custhelp.com/app/answers/detail/a_id/5698

Is the impact of the CVE-2025-10184 vulnerability not limited to PoC test devices? (25-09-2025)

Preface: The com.android[.]providers[.]telephony and com[.]android[.]phone packages are not similar in function; they serve distinct purposes in the Android telephony system.

This package (com[.]android[.]providers[.]telephony) is a content provider that manages and provides access to telephony-related data. 

  • Database manager: It contains data related to phone operations, including the history and content of SMS and MMS messages, call logs, and the list of Access Point Names (APNs) used for mobile data connections.
  • Data access: Other apps must request permission to access this package’s database to read or write call logs, SMS, and other telephony data.

Background: The Telephony provider and its associated classes like com[.]android[.]providers[.]telephony[.]PushMessageProvider are common in Android smartphones as they are core components of the operating system responsible for managing SMS and MMS messages. com[.]android[.]providers[.]telephony[.]PushShopProvider and com[.]android[.]providers[.]telephony[.]ServiceNumberProvider are also standard components for managing push messages and service numbers, respectively.

Vulnerability details:

The vulnerability allows any application installed on the device to read SMS/MMS data and metadata from the system-provided Telephony provider without permission, user interaction, or consent. The user is also not notified that SMS data is being accessed. This could lead to sensitive information disclosure and could effectively break the security provided by SMS-based Multi-Factor Authentication (MFA) checks. The root cause is a combination of missing permissions for write operations in several content providers (com[.]android[.]providers[.]telephony[.]PushMessageProvider, com[.]android[.]providers[.]telephony[.]PushShopProvider, com[.]android[.]providers[.]telephony[.]ServiceNumberProvider), and a blind SQL injection in the update method of those providers.

Ref: The issue stems from two main problems in the content providers:

Missing write permissions in several exported content providers:

  • com[.]android[.]providers[.]telephony[.]PushMessageProvider
  • com[.]android[.]providers[.]telephony[.]PushShopProvider
  • com[.]android[.]providers[.]telephony[.]ServiceNumberProvider

A blind SQL injection vulnerability in the update() method of these providers: the where clause in SQL queries is passed unsanitized, allowing attackers to inject arbitrary SQL commands.
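The blind-injection mechanics can be demonstrated with sqlite3 from the standard library. This is a sketch of the bug class, not the actual provider code: an update() that splices a caller-supplied where clause into the SQL text lets the caller smuggle in boolean probes, and the row count alone leaks data one true/false question at a time.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE push (id INTEGER, body TEXT, secret TEXT)")
db.execute("INSERT INTO push VALUES (1, 'msg', 'otp-123456')")

def update_unsafe(where):
    # Caller-controlled 'where' is spliced in unsanitized (the bug class).
    return db.execute(f"UPDATE push SET body = 'x' WHERE {where}").rowcount

def update_safe(body, row_id):
    # Parameters are bound, never interpolated into the SQL text.
    return db.execute(
        "UPDATE push SET body = ? WHERE id = ?", (body, row_id)
    ).rowcount

# Blind injection: the attacker never sees the secret column directly, but
# learns it one boolean test at a time from whether any row was touched.
print(update_unsafe("substr(secret, 1, 4) = 'otp-'"))  # rows touched: guess true
print(update_unsafe("substr(secret, 5, 1) = '9'"))     # no rows: guess false
```

This is why an unsanitized where clause is enough to read SMS contents even when no query interface exposes them: each update becomes a yes/no oracle over the protected data.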

Official announcement: Please see the link for details –

https://www.tenable.com/cve/CVE-2025-10184

AMD responds to DRAM-related side-channel attacks (24th Sep 2025)

Preface: DDR5 memory has two independent 32-bit sub-channels per DIMM, while DDR4 uses a single 64-bit channel. There are many types of DDR5 DIMMs.

  • UDIMM (Unbuffered DIMM): Commonly used in consumer-grade desktops and laptops, UDIMMs provide a balance of performance and cost-efficiency.
  • RDIMM (Registered DIMM): Utilized in servers and workstations, RDIMMs include a register that buffers data, enhancing stability and allowing for larger memory capacities.
  • SODIMM (Small Outline DIMM): Designed for laptops and compact devices, SODIMMs offer a smaller form factor without sacrificing performance.

Background: DRAM side-channel attacks exploit timing differences and row buffer behavior in the memory subsystem, particularly row conflicts and row hits, to infer sensitive information. These behaviors are fundamental to how DRAM works, regardless of whether it’s UDIMM, RDIMM, or SODIMM.

What does vary between DIMM types is:

  • Signal integrity and buffering (RDIMMs have registers that buffer commands)
  • Capacity and scalability
  • Latency and performance characteristics

However, the core vulnerability — the ability to observe timing differences due to row buffer behavior — exists across all types of DRAM. The attack feasibility may differ slightly due to architectural differences, but no DIMM type is inherently immune.
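The row-buffer behavior behind the side channel can be modeled in a few lines. The latencies below are toy numbers, not real DRAM timings: the point is only that a second access to the currently open row is fast (a row hit), while an access to a different row in the same bank is slow (a row conflict), and that difference is observable from software.

```python
HIT_LATENCY, CONFLICT_LATENCY = 1, 3   # toy units, not real nanoseconds

class Bank:
    """Toy model of one DRAM bank with a single open-row buffer."""
    def __init__(self):
        self.open_row = None

    def access(self, row):
        latency = HIT_LATENCY if row == self.open_row else CONFLICT_LATENCY
        self.open_row = row            # the accessed row becomes the open row
        return latency

bank = Bank()
bank.access(7)                  # victim activity touches row 7
probe_same = bank.access(7)     # attacker probe: fast  -> victim used row 7
probe_other = bank.access(9)    # attacker probe: slow  -> row 9 was closed
print(probe_same, probe_other)  # 1 3
```

An attacker who knows (or reverse-engineers) the address-to-row mapping can turn these timing observations into a covert channel or infer which memory a victim touched, which is exactly what the paper demonstrates at scale.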

Researchers have provided AMD with a paper titled “Quo VADIS DDR5? Verifying Addressing of DRAM In Software.”

In this paper, the authors present an approach to verifying DRAM addressing functions from software using the DRAM row conflict side channel. The authors claim that the presented verification methodology provides a cheap and reliable alternative to verification using physical access and expensive measurement equipment such as oscilloscopes. They also demonstrate that they exploited the row conflict side channel as a covert channel and a website fingerprinting attack with a high success rate.

Security Focus: University researchers discovered the previously unknown rank selection side channel and reverse engineered its function on two DDR4 and two DDR5 systems. These results enable novel DDR5 row-conflict side-channel attacks, which they demonstrated in two scenarios: a covert channel with 1.39 Mbit/s, and a website fingerprinting attack with an F1 score of 84% on DDR4 and 74% on DDR5. They conclude that, as reverse-engineering of DRAM address functions remains relevant, their new verification methodology provides a cheap and reliable alternative to verification using expensive physical measurements.

Official announcement: Please see the link for details –

https://www.amd.com/en/resources/product-security/bulletin/amd-sb-7036.html

Chypnosis on FPGAs – AMD is investigating whether specific devices and components are affected and plans to provide updates as new findings emerge (22nd Sep 2025)

Preface: AMD uses FPGAs (Field-Programmable Gate Arrays) in High-Performance Computing (HPC) by offering accelerator cards and adaptive SoCs that allow users to program custom hardware for HPC workloads in fields like machine learning, data analytics, and scientific simulations.

AMD manufactures FPGA-based accelerator cards that enable users to program applications directly onto the FPGA, eliminating the lengthy card design process. These cards install as-is in servers, accelerating workloads in financial computing, machine learning, computational storage, and data analytics.

Background: The XADC is an integrated, on-chip block within certain AMD (formerly Xilinx) FPGAs that performs analog-to-digital conversion (ADC) and also includes on-chip sensors for voltage and temperature monitoring. The FPGA provides the programmable logic to process the digitized data from the XADC, use it for control, or access it through the FPGA’s interconnects like the Dynamic Reconfiguration Port (DRP) or JTAG interface.

Xilinx ADCs (XADCs), particularly flash ADCs, have disadvantages related to high power consumption, large physical size, and limited resolution due to the large number of comparators required for higher bit depth. Non-linearity can also introduce signal distortion and measurement errors, while the integration of ADCs directly into FPGAs may not be feasible for all applications due to the required external components.

Security Focus of an Academic Research Paper: Attacks on the Programmable Logic (PL) in AMD Artix™ 7 Series FPGA Devices.

Artix 7 FPGAs and Artix™ UltraScale+ difference – Key Differences at a Glance:

The main difference is that Artix™ UltraScale+ FPGAs are a newer, higher-performance family built on a 16nm FinFET process, offering improved power efficiency, higher transceiver speeds, and more advanced features like enhanced DSP blocks and hardened memory, while the Artix 7 FPGAs are older devices built on a 28nm process. UltraScale+ also features ASIC-class clocking, supports faster memory interfaces like LPDDR4x and DDR4, and includes advanced security features.

Vulnerability details: The academic research paper introducing the new approach demonstrates the attack on the programmable logic (PL) in AMD Artix™ 7-Series FPGA devices. It shows that the on-chip XADC-based voltage monitor is too slow to detect and/or execute a tamper response to clear memory contents. Furthermore, the authors show that detection circuits that have been developed to detect clock freezing are ineffective as well. In general, the attack can be applied to all ICs that do not have effective tamper responses to clear sensitive data in case of an undervoltage event.
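Why "too slow" matters can be shown with a back-of-the-envelope model. The numbers here are illustrative, not XADC specifications: a monitor that samples the supply voltage periodically simply cannot observe an undervoltage glitch that starts and ends between two consecutive samples.

```python
def glitch_detected(t_sample_us, glitch_start_us, glitch_len_us):
    """Toy model: periodic sampling at 0, t_sample, 2*t_sample, ... detects
    a glitch only if some sample instant falls inside the glitch window."""
    first_sample_after = ((glitch_start_us // t_sample_us) + 1) * t_sample_us
    return first_sample_after < glitch_start_us + glitch_len_us

# Sampling every 10 us (illustrative):
print(glitch_detected(10, 12, 20))  # long glitch spans the sample at t=20
print(glitch_detected(10, 12, 5))   # short glitch ends before t=20: missed
```

An attacker who can shape the undervoltage event to fit inside the sampling gap thus freezes the chip's state without ever triggering the tamper response, which is the core of the Chypnosis result.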

Official announcement: Please see the link for details –

https://www.amd.com/en/resources/product-security/bulletin/amd-sb-8018.html

CVE-2025-10585: Type Confusion in V8 (22nd Sep 2025)

Preface: Type confusion is a vulnerability where a program accesses a resource using an incompatible type, leading to unexpected behavior or memory corruption. This often occurs when a program misinterprets the type of data being used, potentially leading to the execution of the wrong code or the disclosure of sensitive information. This can happen due to issues with type casting, memory layout mismatches, or speculative execution, and it’s a common foundation for various software attacks.
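The "memory layout mismatch" idea can be shown with the standard library. This sketch has nothing to do with V8's restricted bug details; it only demonstrates the underlying concept: the very same bytes mean something completely different when read back as the wrong type.

```python
import struct

# Pack the float 1.5 into its 8-byte IEEE-754 representation, then
# reinterpret those identical bytes as a 64-bit signed integer.
raw = struct.pack("<d", 1.5)
as_int = struct.unpack("<q", raw)[0]

print(as_int)  # a huge integer: same bits, wrong type, wrong meaning
```

In a memory-unsafe engine, an object misinterpreted this way can turn a harmless field into a pointer the attacker controls, which is why type confusion bugs in V8 are routinely exploitable.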

Background: V8 is Google’s open-source, high-performance JavaScript and WebAssembly engine, written in C++. It is used in Chrome and in Node.js, among others. V8 provides the core JavaScript execution environment that Node.js is built upon, allowing Node.js to execute JavaScript code outside the browser.

V8 is Google’s high-performance JavaScript engine used in Chrome and Node.js. It compiles JavaScript directly into machine code, optimizing execution through techniques like just-in-time (JIT) compilation. V8 uses multiple tiers of compilers (Ignition, Sparkplug, Maglev, Turbofan) and an efficient garbage collector to manage memory. Its design prioritizes speed and efficiency, making it a key component in modern web development.

Vulnerability details: CVE-2025-10585: Type Confusion in V8. Reported by Google Threat Analysis Group on 2025-09-16. Google has patched the issue, but details are restricted to prevent further exploitation until most users have updated.

Official announcement: Please refer to the link for details

https://chromereleases.googleblog.com/2025/09/stable-channel-update-for-desktop_17.html

CVE-2025-3231: About ARM Mali. Learn more about the details (19th Sep 2025)

NVD Published Date: 09/08/2025
NVD Last Modified: 09/08/2025

Preface: The Mali kernel driver and userspace libraries are found in different locations depending on whether the system is Android or a general Linux distribution, and also based on the specific Mali GPU generation and the SoC vendor’s implementation.

Background: Mali GPU is a hardware accelerator.

  • It does not run an OS itself.
  • It relies on kernel-space and user-space drivers (like the Mali kernel driver and userspace libraries) to interface with the operating system (Linux, Android, etc.).

ioctl (Input/Output Control) is the primary syscall used by userspace GPU drivers to communicate with the kernel-space driver. It allows sending custom commands and structured data to the driver.

Typical ioctl operations in Mali drivers include:

  • MALI_IOCTL_ALLOC_MEM: Allocate GPU-accessible memory
  • MALI_IOCTL_FREE_MEM: Free previously allocated memory
  • MALI_IOCTL_SUBMIT_JOB: Submit a GPU job (e.g., shader execution)
  • MALI_IOCTL_WAIT_JOB: Wait for job completion
  • MALI_IOCTL_MAP_MEM: Map memory to userspace

Vulnerability details: CVE-2025-3231 is a vulnerability in the kernel driver that interfaces with the Mali GPU. Here’s what that means:

  • The vulnerability is in software, not the hardware.
  • It allows a local non-privileged user to exploit the driver to access freed memory, which could contain sensitive data or allow privilege escalation.
  • The Mali GPU hardware itself is not “vulnerable” in the sense of having a flaw — but it becomes a vector for exploitation because of the flawed driver.

Official announcement: Please refer to the link for details – https://developer.arm.com/documentation/110627/1-0/

antihackingonline.com