CVE-2025-7427: Affected Products (Arm Development Studio before 2025.0) [23-07-2025]

Preface: Arm Development Studio (Arm DS) is a suite of software tools designed for developing and debugging software for Arm-based SoCs (Systems on Chip). Arm DS is thus aimed at optimizing software for Arm-based hardware, primarily in embedded systems and IoT devices. These tools help developers write, compile, and debug code for various Arm processors, including those found in SoCs. Applications built with Arm DS can be deployed to various targets, including microcontrollers, embedded Linux systems, and even Android devices.

Background: Arm Development Studio is available for both Windows and Linux operating systems. Specifically, it supports 64-bit x86 host platforms for both Windows 10 and Linux distributions like Red Hat Enterprise Linux 7 and Ubuntu Desktop Editions 18.04 LTS and 20.04 LTS.

Redistributable Packages: For distributing applications built with Arm Development Studio, developers might need to include redistributable packages like those related to the Visual C++ runtime libraries, which are also provided as [.]dll files.

Vulnerability details: CVE-2025-32702: Improper neutralization of special elements used in a command (‘command injection’) in Visual Studio allows an unauthorized attacker to execute code locally.

Official announcement: Please refer to the URL for details https://nvd.nist.gov/vuln/detail/CVE-2025-32702

Why CVE-2025-53770 and CVE-2025-53771 do not affect SharePoint Online, even though they exploit how SharePoint processes serialized input in on-prem environments. (22-07-2025)

Preface: About a decade ago, a technical white paper evaluated how many cybersecurity experts a bank should hire to manage cybersecurity.

Although Tier 1 financial institutions have in-house information security controls, their effectiveness cannot compare to that of managed security services. The examples cited at the time were AWS cloud and its cybersecurity controls. Now, the details of these two CVE records extend that story again.

Background: This section looks at the architectural differences between SharePoint Online (Office 365) and SharePoint Server (on-premises) in the context of vulnerabilities like CVE-2025-53770 and CVE-2025-53771.

According to Microsoft’s official guidance, these vulnerabilities only affect on-premises SharePoint Server installations (2016, 2019, and Subscription Edition). They do not impact SharePoint Online in Microsoft 365.

The reason SharePoint Online is not vulnerable involves multiple layers of architectural and operational differences, including:

Microsoft controls the infrastructure runtime environment.

In SharePoint Online, developers cannot deploy full-trust code or access server-side object models like SPSite or SPWeb. This eliminates many attack vectors that exist in on-prem environments. SharePoint Online also applies intelligent proxying and request filtering; these systems can detect and block unsafe deserialization attempts before they reach backend services.

Reference:

The SPWeb parameter in SharePoint refers to an object that represents a SharePoint website (or subsite) within a site collection. It’s used in PowerShell cmdlets like Get-SPWeb, New-SPWeb, Set-SPWeb, and Remove-SPWeb to interact with and manage these websites.

The term “SPSite parameter” generally refers to parameters used with the Get-SPSite, New-SPSite, and Set-SPSite cmdlets in SharePoint PowerShell. These parameters are used to specify or configure site collection properties, such as the URL, owner, template, quota, or lock state.

Vulnerability details:

CVE-2025-53770 is a “deserialisation of untrusted data” vulnerability. Successful exploitation could allow an unauthenticated remote attacker to execute arbitrary code on the SharePoint Server. CVE-2025-53770 bypasses the partial fix for CVE-2025-49704 released in Microsoft’s July 2025 scheduled security updates.

CVE-2025-53771 is a “path traversal”, “improper neutralisation”, and “improper input validation” vulnerability. CVE-2025-53771 bypasses the partial fix for CVE-2025-49706 released in Microsoft’s July 2025 scheduled security updates.

Ref: Avoiding exposure of vulnerable endpoints like /_layouts/15/ToolPane.aspx to the internet is directly related to CVE-2025-53771, as well as CVE-2025-53770. These two vulnerabilities are chained together in an exploit known as ToolShell.

Official announcement: Please refer to the link for details –

https://nvd.nist.gov/vuln/detail/CVE-2025-53770

https://nvd.nist.gov/vuln/detail/CVE-2025-53771

CVE-2023-4969 – Researchers from Trail of Bits reported a potential vulnerability titled “LeftoverLocals” – this GPU design weakness is fickle! (21-07-2025)

Preface: “LeftoverLocals” allows recovery of data from GPU local memory created by other processes on Apple, Qualcomm, AMD, and Imagination GPUs. LeftoverLocals affects the security posture of GPU applications as a whole, especially LLMs and machine learning models running on affected GPU platforms. NVD published the record on January 16, 2024. So far, AMD appears to be the only company actively taking remediation measures.

Background: Researchers from Trail of Bits published their “LeftoverLocals” article to the public on 16th January 2024. AMD took corrective action on the following schedule.

2025-07-18: Updated the Mitigation section for AMD Radeon Graphics

2025-06-23: Updated the Mitigation section for Data Center Graphics, AMD Radeon Graphics, and revised Client Processors table

2025-04-07: Updated the Mitigation section for Data Center Graphics, AMD Radeon Graphics, and Client Processors

2025-02-11: Updated the Mitigation section – Data Center Graphics

2025-01-15: Mitigation section has been updated, and AMD Ryzen™ AI 300 Series Processor (formerly codenamed “Strix Point”) FP8 has been added to the Client Processors list

2024-11-07: Mitigation has been updated for MI300 and MI300A

Updated driver version from 24.x.y to 25.x.y

2024-10-30: Updated mitigation targets

2024-08-02: Updated AMD Software: Adrenalin Edition and PRO Edition versions.

Removed: AMD Ryzen™ 3000 Series Processors with Radeon™ Graphics (Not affected)

Added: AMD Ryzen™ 8000 Series Processors with Radeon™ Graphics and AMD Ryzen™ 7030 Series Processors with Radeon™ Graphics

2024-07-30: Updated the Mitigation section of AMD Radeon™ Graphics and Client Processors product tables

Updated Data Center Graphics Inter-VM and Bare Metal/Intra-VM Mitigation product tables

Updated mitigation section month for driver update rollout

2024-05-07: Added Vega products and Mitigation section with Product tables

2024-01-26: Updated Graphics and Data Center Graphics products

2024-01-16: Initial publication

Vulnerability details: CVE-2023-4969: A GPU kernel can read sensitive data from another GPU kernel (even from another user or app) through an optimized GPU memory region called _local memory_ on various architectures.

Official announcement: Please refer to the official link for details – https://www.amd.com/en/resources/product-security/bulletin/amd-sb-6010.html

Remark: In step 5, CU2 is written incorrectly. The correct word should be CU.

CVE-2025-23270: NVIDIA Jetson Linux contains a vulnerability in UEFI Management mode (20th July 2025)

Preface: To enter UEFI Management mode on a Jetson device, you’ll typically need to access it during the boot process by pressing a specific key (like F2, F10, or Del) before the OS starts loading. Once in UEFI, you can configure settings related to booting, such as boot order and device selection.

Background: CUDA is a parallel computing platform and programming model developed by NVIDIA, designed to leverage the power of GPUs for general-purpose computing. Linux for Tegra (L4T) is NVIDIA’s customized Linux distribution based on Ubuntu, optimized for their Tegra family of system-on-chips (SoCs), including those used in Jetson development kits. Essentially, L4T provides the operating system and necessary drivers for running CUDA-enabled applications on NVIDIA’s embedded platforms.

NVIDIA Jetson Linux is a customized version of the Linux operating system specifically designed for NVIDIA Jetson embedded computing modules. It provides a complete software stack, including the Linux kernel, bootloader, drivers, and libraries, tailored for the Jetson platform’s hardware and intended for edge AI and robotics applications.

Vulnerability details:

CVE-2025-23270 NVIDIA Jetson Linux contains a vulnerability in UEFI Management mode, where an unprivileged local attacker may cause exposure of sensitive information via a side channel vulnerability. A successful exploit of this vulnerability might lead to code execution, data tampering, denial of service, and information disclosure.

CVE-2025-23269 NVIDIA Jetson Linux contains a vulnerability in the kernel where an attacker may cause an exposure of sensitive information due to a shared microarchitectural predictor state that influences transient execution. A successful exploit of this vulnerability may lead to information disclosure.

Official announcement: Please see the link for details

https://nvidia.custhelp.com/app/answers/detail/a_id/5662

“When an error occurs, data remains in cache memory. When the OS starts, a malicious program stored on the device can then read the shared memory.”

CVE-2025-23263: About NVIDIA DOCA-Host and Mellanox OFED (17th July 2025)

Preface: Mellanox OpenFabrics Enterprise Distribution for Linux (MLNX_OFED) is a software stack developed by NVIDIA (formerly Mellanox) that provides a tested and packaged version of the OpenFabrics Enterprise Distribution (OFED) for Mellanox network adapters. It enables high-performance networking capabilities, including RDMA and kernel bypass, for InfiniBand and Ethernet (RoCE) technologies.

Background: NVIDIA introduced DOCA-OFED in the DOCA-Host package. DOCA-Host is a unified package for host servers that includes all the basic components of DOCA and MLNX_OFED. MLNX_OFED is a single Virtual Protocol Interconnect (VPI) software stack that operates across all NVIDIA network adapter solutions.

Nvidia has also developed the Compute Unified Device Architecture (CUDA) and the Data Center Infrastructure-on-a-Chip Architecture (DOCA) for CPU–GPU and CPU–DPU computing, respectively.

At the GTC conference in the fall of 2020, NVIDIA officially unveiled, under the name DPU, the SmartNIC technology it had gained by acquiring network equipment manufacturer Mellanox. Mellanox’s BlueField product line is considered a DPU (Data Processing Unit) because it’s designed to offload and accelerate data-centric tasks, such as networking, storage, and security, from the CPU. Essentially, DPUs like BlueField act as specialized co-processors, handling tasks that would otherwise consume valuable CPU resources and improving overall system performance and efficiency. NVIDIA BlueField DPUs are designed as System on a Chip (SoC) devices.

Vulnerability details: NVIDIA DOCA-Host and Mellanox OFED contain a vulnerability in the VGT+ feature, where an attacker on a VM might cause escalation of privileges and denial of service on the VLAN.

Official announcement: Please refer to the URL for details – https://nvidia.custhelp.com/app/answers/detail/a_id/5654

CVE-2025-23266 and CVE-2025-23267: NVIDIA Container Toolkit design weakness (16-07-2025)

Preface: Docker Compose is a tool that makes it easier to define and manage multi-container Docker applications. It simplifies running interconnected services, such as a frontend, backend API, and database, by allowing them to be launched and controlled together.

Docker Compose also manages the container lifecycle. Container lifecycle management is the critical process of overseeing the creation, deployment, and operation of a container until its eventual decommissioning.

Background: Docker Compose v2.30.0 has introduced lifecycle hooks, making it easier to manage actions tied to container start and stop events. This feature lets developers handle key tasks more flexibly while keeping applications clean and secure.

Vulnerability details:

CVE-2025-23266: NVIDIA Container Toolkit for all platforms contains a vulnerability in some hooks used to initialize the container, where an attacker could execute arbitrary code with elevated permissions. A successful exploit of this vulnerability might lead to escalation of privileges, data tampering, information disclosure, and denial of service.

CVE-2025-23267: NVIDIA Container Toolkit for all platforms contains a vulnerability in the update-ldcache hook, where an attacker could cause a link following by using a specially crafted container image. A successful exploit of this vulnerability might lead to data tampering and denial of service.

Official announcement: Please refer to the URL for details

https://nvidia.custhelp.com/app/answers/detail/a_id/5659

Ref: Does Disabling Hooks Disable Container Lifecycle Management?

Hooks – In this context, hooks are scripts or binaries that run during container lifecycle events (e.g., prestart, poststart). The CUDA compatibility hook injects libraries or environment variables needed for CUDA apps.

Disabling the Hook – Prevents the automatic injection of CUDA compatibility libraries into containers. This does not disable the entire container lifecycle, but it removes one automation step in the lifecycle.
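To make the hook mechanism concrete, the fragment below sketches roughly where such hooks live in an OCI runtime `config.json`. The hook binary path and arguments follow the commonly documented NVIDIA Container Toolkit setup, but treat the exact values as illustrative rather than authoritative:

```json
{
  "hooks": {
    "prestart": [
      {
        "path": "/usr/bin/nvidia-container-runtime-hook",
        "args": ["nvidia-container-runtime-hook", "prestart"]
      }
    ]
  }
}
```

Removing or disabling an entry like this stops that one injection step; the runtime still creates, starts, and tears down the container as usual.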

CVE-2025-53818: Command Injection in MCP Server github-kanban-mcp-server (15th July 2025)

Preface: Is it good when artificial intelligence uses open-source software? Yes, using open-source software is generally considered a positive for artificial intelligence development. It fosters collaboration, transparency, and faster innovation, while also potentially reducing costs and biases. However, it’s crucial to acknowledge potential risks like misuse and the need for responsible development practices.

Background: The Model Context Protocol (MCP) is an open standard, open-source framework designed to standardize how AI models, particularly large language models (LLMs), interact with external tools, systems, and data sources. Think of it as a universal adapter, similar to USB-C, for AI applications, allowing them to easily connect to and utilize various data and tools.

A Kanban MCP Server is a server component that manages Kanban boards using the Model Context Protocol (MCP). It allows AI assistants and other systems to interact with and manipulate Kanban boards programmatically, enabling automation and integration of workflows.

Vulnerability details: GitHub Kanban MCP Server is a Model Context Protocol (MCP) server for managing GitHub issues in Kanban board format and streamlining LLM task management. Versions 0.3.0 and 0.4.0 of the MCP Server are written in a way that is vulnerable to command injection attacks in some of its MCP tool definitions and implementations. The MCP Server exposes the tool `add_comment`, which relies on the Node.js child process API `exec` to run the GitHub CLI (`gh`) command; `exec` is an unsafe, vulnerable API when concatenated with untrusted user input.

Workaround: As of time of publication, no known patches are available.

But you can securely rewrite the vulnerable handleAddComment function using execFile or the GitHub REST API to avoid command injection risks.

Workaround 1: Using execFile (Safer Shell Execution)

execFile does not invoke a shell, so special characters in inputs (like ;, &&, etc.) are treated as literal arguments, not commands.
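As a minimal sketch of Workaround 1 (assuming Node.js; the function name and the `run` seam are illustrative, not the project’s actual code), each value is passed as a discrete argument so no shell ever interprets the comment body:

```typescript
import { execFileSync } from "node:child_process";

// Illustrative rewrite of a handleAddComment-style function: the issue id
// and comment body travel as separate argv entries, so shell metacharacters
// in `body` are never interpreted. `run` is a seam so the sketch can be
// exercised without the real `gh` CLI installed.
function addComment(
  issue: string,
  body: string,
  run: (cmd: string, args: string[]) => string = (cmd, args) =>
    execFileSync(cmd, args, { encoding: "utf8" })
): string {
  return run("gh", ["issue", "comment", issue, "--body", body]);
}

// Demonstration that execFile-style invocation neutralises metacharacters:
// echo receives the whole string as one literal argument, so nothing after
// the ";" is executed as a command.
const probe = execFileSync("echo", ["hello; touch /tmp/pwned && id"], {
  encoding: "utf8",
});
```

With `exec`, the same string would have been handed to a shell and the `touch`/`id` commands would run; with `execFile` it is inert data.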

Workaround 2: Using GitHub REST API via @octokit/rest

– No shell involved.

– Fully typed and authenticated.

– GitHub officially supports and maintains this SDK.

Official announcement: Please refer to the URL for details –

https://nvd.nist.gov/vuln/detail/CVE-2025-53818

AMD-based AI systems combining AMD rocBLAS and Intel MKL can rank among the fastest supercomputers in the world (14-07-2025)

Preface: Supercomputers rely on math libraries to efficiently handle the complex numerical computations required for scientific simulations and modeling. These libraries provide optimized routines for linear algebra, numerical analysis, and other mathematical operations, enabling supercomputers to perform these calculations much faster than with general-purpose code.

While math libraries are a crucial component, they are not the sole key to boosting overall AI performance on supercomputers. Supercomputers excel at AI due to their parallel processing capabilities, specialized hardware like GPUs and TPUs, and efficient memory management, not just the math libraries they use. Math libraries are essential for performing the calculations required by AI algorithms, but they rely on the underlying hardware architecture and software infrastructure of the supercomputer to deliver that performance.

Background: AMD rocBLAS 6.0.2 is a version of AMD’s library for Basic Linear Algebra Subprograms (BLAS) optimized for AMD GPUs within the ROCm platform. It provides high-performance, robust implementations of BLAS operations, similar to legacy BLAS but adapted for GPU execution using the HIP programming language. Specifically, version 6.0.2 is a point release that includes minor bug fixes to improve the stability of applications using AMD’s MI300 GPUs. It also introduces new driver features for system qualification on partner server offerings.

Using AMD rocBLAS and Intel MKL (2016 or later) together can be beneficial because MKL, while optimized for Intel CPUs, can sometimes perform suboptimally on AMD CPUs. rocBLAS, on the other hand, is specifically optimized for AMD GPUs within the ROCm stack, providing a performance boost on AMD hardware.

Why Mix rocBLAS and MKL?

  • rocBLAS: Optimized for AMD GPUs (via the ROCm stack).
  • MKL: Optimized for Intel CPUs, but still useful for certain CPU-bound tasks.
  • Mixing: You can selectively use each library for the operations where it performs best.


CVE-2025-30403: A heap-buffer-overflow vulnerability is possible in mvfst via a specially crafted message during a QUIC session. (13th Jul 2025)

Preface: mvfst (pronounced “move fast”) is a client and server implementation of the IETF QUIC protocol in C++ by Facebook. QUIC is a UDP-based reliable, multiplexed transport protocol that has become an internet standard.

Background: QUIC (Quick UDP Internet Connections), was designed with the primary goal of enhancing the speed and reliability of internet connections, particularly for latency-sensitive and bandwidth-intensive applications. It aims to reduce connection setup time, improve data transfer speeds, and enhance security compared to traditional TCP and TLS protocols.

The QUIC protocol is a key component in modern CDN (Content Delivery Network) strategies, particularly with the rise of HTTP/3. QUIC, developed by Google and standardized by the IETF, is a transport layer protocol that offers significant performance and security improvements over traditional TCP, especially in the context of CDNs.

Vulnerability details: A heap-buffer-overflow vulnerability is possible in mvfst via a specially crafted message during a QUIC session. This issue affects mvfst versions prior to v2025.07.07.00.

Does removing maxBatchSize affect performance?

Yes, potentially.

To offset any performance degradation from removing maxBatchSize, CDNs may:

-Optimize packet scheduling and batching elsewhere in the QUIC stack to maintain throughput.

-Use adaptive batching: Dynamically adjust how many packets are processed based on system load and traffic patterns.

-Deploy hardware acceleration: Offload QUIC processing to specialized hardware (e.g., SmartNICs or FPGAs).

-Leverage edge caching: Reduce the need for frequent QUIC connections by serving more content directly from edge nodes.
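As a toy sketch of the adaptive-batching idea above (all names are illustrative; this is not mvfst’s API), the per-cycle batch size can track recent queue depth while staying within a bound the buffer allocator can always satisfy:

```typescript
// Illustrative adaptive batch sizing: grow the per-cycle packet batch with
// backlog, but clamp it to a safe upper bound so a traffic burst (or a
// crafted peer) can never force an oversized buffer allocation.
function adaptiveBatchSize(
  queueDepth: number,
  minBatch: number = 4,
  maxSafeBatch: number = 64
): number {
  return Math.max(minBatch, Math.min(queueDepth, maxSafeBatch));
}
```

The clamp is what distinguishes this from a fixed `maxBatchSize`: throughput scales with load, but the worst-case allocation is bounded.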

Official announcement: Please refer to the URL for details – https://nvd.nist.gov/vuln/detail/CVE-2025-30403

Nvidia security focus – Rowhammer attack potential risk – July 2025 (11th July 2025)

Preface: The Rowhammer effect, a hardware vulnerability in DRAM chips, was first publicly presented and analyzed in June 2014 at the International Symposium on Computer Architecture (ISCA). This research, conducted by Yoongu Kim et al., demonstrated that repeatedly accessing a specific row in a DRAM chip can cause bit flips in nearby rows, potentially leading to security breaches.

Background: Nvidia has shifted from “copy on flip” to asynchronous copy mechanisms in their GPU architecture, particularly with the Ampere architecture and later. This change allows for more efficient handling of data transfers between memory and the GPU, reducing latency and improving overall performance, especially in scenarios with high frame rates or complex computations.

When System-Level ECC is enabled, it prevents attackers from successfully executing Rowhammer attacks by ensuring memory integrity. The memory controller detects and corrects bit flips, making it nearly impossible for an attacker to exploit them for privilege escalation or data corruption.
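As a toy illustration of that detect-and-correct idea, the sketch below uses a Hamming(7,4) single-error-correcting code (far simpler than the wide SECDED codes real memory controllers use) to encode data, simulate a Rowhammer-style bit flip, and recover the original value:

```typescript
// Toy Hamming(7,4) encoder/decoder: 4 data bits protected by 3 parity bits,
// enough to correct any single bit flip. The correction principle is the
// same one System-Level ECC relies on, at a much smaller scale.
function encode(d: number[]): number[] {
  const c = new Array(8).fill(0); // index 0 unused; code bits at 1..7
  c[3] = d[0]; c[5] = d[1]; c[6] = d[2]; c[7] = d[3];
  c[1] = c[3] ^ c[5] ^ c[7]; // parity over positions with bit 0 set
  c[2] = c[3] ^ c[6] ^ c[7]; // parity over positions with bit 1 set
  c[4] = c[5] ^ c[6] ^ c[7]; // parity over positions with bit 2 set
  return c.slice(1); // 7-bit codeword
}

function decode(word: number[]): number[] {
  const c = [0, ...word];
  // Recompute parities; the syndrome is the 1-indexed position of the
  // flipped bit (0 means no error detected).
  const syndrome =
    (c[1] ^ c[3] ^ c[5] ^ c[7]) +
    2 * (c[2] ^ c[3] ^ c[6] ^ c[7]) +
    4 * (c[4] ^ c[5] ^ c[6] ^ c[7]);
  if (syndrome !== 0) c[syndrome] ^= 1; // correct the single flip
  return [c[3], c[5], c[6], c[7]]; // extract the data bits
}

// Simulate a Rowhammer-style disturbance on the stored codeword.
const data = [1, 0, 1, 1];
const stored = encode(data);
stored[4] ^= 1; // flip one bit (position 5, 1-indexed)
const recovered = decode(stored); // equals the original data
```

A second simultaneous flip would defeat this toy code; production SECDED schemes add an extra parity bit so double flips are at least detected, which is why the researchers found ECC an effective Rowhammer mitigation.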

Technical details: Modern DRAMs, including the ones used by NVIDIA, are potentially susceptible to Rowhammer. The now decade-old Rowhammer problem has been well known for CPU memories (e.g., DDR, LPDDR). Recently, researchers at the University of Toronto demonstrated a successful Rowhammer exploitation on a NVIDIA A6000 GPU with GDDR6 memory where System-Level ECC was not enabled. In the same paper, the researchers showed that enabling System-Level ECC mitigates the Rowhammer problem. 

Official announcement: Technical details: see link – https://nvidia.custhelp.com/app/answers/detail/a_id/5671

antihackingonline.com