
CVE-2025-48507: About the security state of the calling processor in Arm Trusted Firmware (26th Nov 2025)

Preface: AMD’s Zynq™ UltraScale+™ RFSoCs are a family of highly integrated adaptive Systems-on-Chip (SoCs) that combine a multi-core Arm® processing system, programmable logic (FPGA fabric), and direct RF-sampling data converters (ADCs and DACs) on a single chip. CVE-2025-48507 Affected Devices: Kria™ SOM, Zynq™ UltraScale+™ MPSoCs and Zynq™ UltraScale+™ RFSoCs.

Background: The crypto operations in Arm® Trusted Firmware (TF-A) are part of a subsystem, which can be implemented through various components like the Runtime Security Engine (RSE) or a dedicated secure enclave. This subsystem provides hardware-assisted security services, such as cryptographic acceleration and secure storage, which are distinct from the main processor and are protected by the system’s security architecture.

From a cyber security perspective, a processor calling into TF-A is different from making a standard OS call: TF-A is established by a specialized secure boot process, and runtime calls into it require the processor to switch to a secure state via the Secure Monitor Call (SMC) instruction, as outlined in the Arm Developer and Trusted Firmware-A documentation. Standard OS calls, by contrast, use different mechanisms for switching between user and kernel modes.

*Secure Monitor Call (SMC): TF-A calls are initiated using the SMC instruction, which is specifically designed for secure operations and causes the processor to switch to a privileged secure state (like EL3).

Vulnerability details: The security state of the calling processor into Arm® Trusted Firmware (TF-A) is not used and could potentially allow non-secure processors access to secure memories, access to crypto operations, and the ability to turn on and off subsystems within the SOC.
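
The sketch below shows, in plain C, what using the caller's security state in an SMC dispatcher looks like. It is illustrative only: the handler signature, flag bit, and function ID are hypothetical stand-ins, not the actual TF-A or Xilinx platform code. The point is the step the CVE describes as missing: a service that touches secure memory or crypto hardware should be refused when the flags passed to the handler mark the caller as non-secure.

    /* Illustrative sketch only -- not the actual TF-A or Xilinx platform code.
     * The flag bit and function ID below are hypothetical; they only show the
     * idea of gating a secure service on the caller's security state. */
    #include <stdint.h>
    #include <stdio.h>

    #define SMC_FLAG_CALLER_NON_SECURE  (1u << 0)   /* hypothetical flag bit      */
    #define SMC_UNKNOWN_FUNCTION        (-1)
    #define SIP_SVC_CRYPTO_OP           0x8200f001u /* hypothetical function ID   */

    static long handle_sip_smc(uint32_t fid, uint64_t arg0, uint32_t flags)
    {
        (void)arg0;   /* unused in this sketch */

        switch (fid) {
        case SIP_SVC_CRYPTO_OP:
            /* The fix: consult the caller's security state before serving a
             * request that touches secure memory or crypto hardware. */
            if (flags & SMC_FLAG_CALLER_NON_SECURE) {
                return SMC_UNKNOWN_FUNCTION;        /* reject non-secure callers */
            }
            /* ... perform the privileged operation for secure callers ... */
            return 0;
        default:
            return SMC_UNKNOWN_FUNCTION;
        }
    }

    int main(void)
    {
        printf("secure caller: %ld\n", handle_sip_smc(SIP_SVC_CRYPTO_OP, 0, 0));
        printf("non-secure caller: %ld\n",
               handle_sip_smc(SIP_SVC_CRYPTO_OP, 0, SMC_FLAG_CALLER_NON_SECURE));
        return 0;
    }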

Official announcement: Please refer to the link for details – https://www.tenable.com/cve/CVE-2025-48507

CVE-2025-65947: Resource leaks in thread_amount function calls – the risk level changes according to how you define it. (25th Nov 2025)

Published: 2025-11-21

Preface: The “mach kernel” in iOS refers to the Mach kernel component of the XNU hybrid kernel, which is the core of Apple’s iOS operating system. XNU is a hybrid kernel that merges the Mach microkernel with components from the BSD Unix system to create a single, cohesive kernel that runs iOS and other Apple operating systems like macOS. This kernel architecture applies to all current iOS versions and will continue to be used in future versions running on Apple Silicon.

Background: On Apple platforms, the thread_amount function calls task_threads (via Mach kernel APIs), which allocates memory for the thread list. iOS uses a hybrid kernel called XNU, which is an acronym for “X is Not Unix”. This kernel combines components of the Mach microkernel and the FreeBSD Unix kernel to form the core of Apple’s Darwin operating system, which underpins iOS, macOS, watchOS, and other Apple platforms.

Vulnerability details: thread-amount is a library that reports the number of threads in the current process. Prior to version 0.2.2, there are resource leaks when querying thread counts on Windows and Apple platforms. On Windows, the thread_amount function calls CreateToolhelp32Snapshot but fails to close the returned HANDLE using CloseHandle. Repeated calls to this function cause the handle count of the process to grow indefinitely, eventually leading to system instability or process termination when the handle limit is reached. On Apple platforms, the thread_amount function calls task_threads (via Mach kernel APIs), which allocates memory for the thread list. The function fails to deallocate this memory using vm_deallocate. Repeated calls result in a steady memory leak, eventually causing the process to be killed by the OOM (Out of Memory) killer. This issue has been patched in version 0.2.2.
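
On the Apple side, the leak-free pattern in the underlying Mach C APIs looks roughly like the sketch below. thread-amount itself is a Rust crate, so this is only an illustration of the calls its fix corresponds to; on Windows the analogous fix is a single CloseHandle() on the snapshot handle.

    /* Minimal sketch: count the threads of the current task and release
     * everything the kernel allocated for the answer (macOS only). */
    #include <mach/mach.h>
    #include <stdio.h>

    int main(void)
    {
        thread_act_array_t threads = NULL;
        mach_msg_type_number_t count = 0;

        kern_return_t kr = task_threads(mach_task_self(), &threads, &count);
        if (kr != KERN_SUCCESS) {
            fprintf(stderr, "task_threads failed: %d\n", kr);
            return 1;
        }

        printf("thread count: %u\n", count);

        /* Without this cleanup every call leaks the thread port rights and the
         * out-of-line buffer the kernel allocated for the thread list. */
        for (mach_msg_type_number_t i = 0; i < count; i++) {
            mach_port_deallocate(mach_task_self(), threads[i]);
        }
        vm_deallocate(mach_task_self(),
                      (vm_address_t)threads,
                      count * sizeof(thread_act_t));
        return 0;
    }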

Official announcement: Please refer to the link for details –

https://www.tenable.com/cve/CVE-2025-65947

CVE-2025-29934 – A vulnerability exists in the TLB handling of certain AMD CPUs. (20th Nov 2025)

Official announcement date: November 11, 2025

Preface: AMD EPYC processors support an Input-Output Memory Management Unit (IOMMU), a hardware component essential for managing memory access for peripheral devices. IOMMU technology on EPYC platforms is a critical component for virtualization, device passthrough (PCIe passthrough), and system security.

Background: SEV-SNP mitigates many TLB (Translation Lookaside Buffer) attacks by having the CPU hardware perform automatic TLB flushes to maintain consistency between the TLB and the guest’s memory mappings, preventing a malicious hypervisor from poisoning a VM’s TLB. In contrast, earlier versions of SEV (without SNP) relied on the hypervisor for TLB consistency, which created a vulnerability to TLB poisoning attacks where the hypervisor could manipulate TLB entries between processes in a VM.

AMD Reverse Page Table, also known as the Reverse Map Table (RMP), is a hardware-managed data structure used in AMD processors, particularly with SEV-SNP (Secure Encrypted Virtualization – Secure Nested Paging), to track memory page ownership and security states. It works by having a single, system-wide table where the index is the physical memory frame number, and the entry contains the virtual page number and the process ID (or other identifier) that owns it. This is different from traditional page tables, which are process-specific and indexed by virtual page numbers. 

Ref: Improper handling of invalid nested page table entries in the IOMMU may allow a privileged attacker to induce page table entry (PTE) faults to bypass RMP checks in SEV-SNP, potentially leading to a loss of guest memory integrity.

Vulnerability details: CVE-2025-29934 – A bug within some AMD CPUs could allow a local admin-privileged attacker to run a SEV-SNP guest using stale TLB entries, potentially resulting in loss of data integrity.

Official announcement: Please refer to the link for details – https://www.amd.com/en/resources/product-security/bulletin/amd-sb-3029.html#details

Cyber security focus for NVIDIA Isaac-GR00T – About CVE-2025-33183 and CVE-2025-33184 (24th Nov 2025)

Updated 11/19/2025 09:03 AM

Preface: NVIDIA Isaac is an AI robot development platform consisting of NVIDIA-accelerated libraries, application frameworks, and AI models that accelerate the development of AI robots such as autonomous mobile robots (AMRs), arms and manipulators, and humanoids.

NVIDIA Isaac GR00T N1 is the world’s first open foundation model for generalized humanoid robot reasoning and skills. This cross-embodiment model takes multimodal input, including language and images, to perform manipulation tasks in diverse environments.

Background:

Isaac GR00T N1.5 uses vision and text transformers to encode the robot’s image observations and text instructions. The architecture handles a varying number of views per embodiment by concatenating image token embeddings from all frames into a sequence, followed by language token embeddings.

A token is the most fundamental data unit in a text document, essential for enabling AI to understand and process information. In Natural Language Processing (NLP), tokenization refers to breaking down larger texts into smaller, manageable pieces called tokens.
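
As a toy illustration only, the snippet below splits an instruction string into word-level tokens. Real VLM/LLM tokenizers, including the one behind GR00T's language backbone, use learned sub-word vocabularies rather than whitespace, so treat this purely as a conceptual example of what "breaking text into tokens" means.

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* A robot instruction, split into word-level tokens on whitespace. */
        char text[] = "pick up the red cube and place it in the bin";
        int index = 0;

        for (char *tok = strtok(text, " "); tok != NULL; tok = strtok(NULL, " ")) {
            printf("token %d: %s\n", index++, tok);
        }
        return 0;
    }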

Please refer to the attached diagram:

The System 2 reasoning module is a pre-trained Vision-Language Model (VLM) that runs on an NVIDIA GPU. It processes the robot’s visual perception and language instruction to interpret the environment and understand the task goal. A Diffusion Transformer, trained with action flow-matching, then serves as the System 1 action module: it cross-attends to the VLM output tokens and employs embodiment-specific encoders and decoders to handle variable state and action dimensions for motion generation.

Vulnerability details: See below –

CVE-2025-33183 – NVIDIA Isaac-GR00T for all platforms contains a vulnerability in a Python component, where an attacker could cause a code injection issue. A successful exploit of this vulnerability might lead to code execution, escalation of privileges, information disclosure, and data tampering.

CVE-2025-33184 – NVIDIA Isaac-GR00T for all platforms contains a vulnerability in a Python component, where an attacker could cause a code injection issue. A successful exploit of this vulnerability might lead to code execution, escalation of privileges, information disclosure, and data tampering.

Official announcement: Please refer to the link for details –

https://nvidia.custhelp.com/app/answers/detail/a_id/5725

CVE-2025-13223: About Chrome – Official recommendation: patch immediately (20th Nov 2025)

Hot topics: Here’s what the official details say:

  • CVE-2025-13223 is a type confusion vulnerability in V8, the JavaScript and WebAssembly engine used by Chrome.
  • It affects Google Chrome prior to version 142.0.7444.175.
  • The flaw occurs because V8 incorrectly assumes the type of an object at runtime, which can lead to heap corruption when those assumptions are violated (a conceptual sketch of type confusion follows this list).
  • Attackers can exploit this by crafting a malicious HTML page that triggers the type confusion, allowing remote code execution or browser crashes.
  • The vulnerability is classified under CWE-843: Access of Resource Using Incompatible Type (‘Type Confusion’).
  • Severity: High, CVSS score 8.8.
  • It has been actively exploited in the wild, making it a zero-day prior to patch release. [nvd.nist.gov], [cvedetails.com], [thehackernews.com], [intruceptlabs.com], [securitybo…levard.com]
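
To make CWE-843 concrete, here is a small C program (not V8 code, purely conceptual) in which a value stored as an integer is later used as if it were a string object. The attacker-controlled integer bits end up being treated as a pointer, which is the same class of memory-safety failure the V8 bug triggers on the heap.

    #include <stdio.h>
    #include <stdint.h>
    #include <stddef.h>

    typedef union {
        int64_t small_int;                              /* "I am an integer" */
        struct { char *data; size_t length; } str;      /* "I am a string"   */
    } Value;

    int main(void)
    {
        Value v;
        v.small_int = 0x4141414141414141;   /* script-controlled integer value */

        /* BUG: this code path wrongly assumes v holds a string.  The integer
         * bits are reinterpreted as a data pointer; dereferencing it would
         * corrupt memory or crash, which is what type-confusion exploits
         * rely on. */
        printf("bogus data pointer: %p\n", (void *)v.str.data);
        return 0;
    }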

In-depth analysis of the Android 0-Click vulnerability – CVE-2025-48593. The issue has been resolved.  (20th Nov 2025)

Published: 2025-11-17

Preface: An HFP (Hands-Free Profile) device is a Bluetooth device that supports the Hands-Free Profile, which allows for hands-free calling and control of a mobile phone, such as a car’s infotainment system or a wireless headset. It enables features like answering, making, and ending calls, as well as voice dialing and call waiting, using the paired phone’s microphone and speaker.

Background: Unlike many security threats that require users to click on malicious links or download files, this vulnerability operates silently without any user intervention.

Android’s Bluetooth module is responsible for managing the device’s Bluetooth protocol stack and settings. This module is part of the AOSP open-source code library and is used by the Android system and manufacturer firmware. Android’s Bluetooth functionality fundamentally relies on a modified Linux kernel, which is the core of the Android operating system, and the device drivers for hardware components like Bluetooth reside within the kernel. In the affected version, the HF client module lacks necessary state and boundary checks when handling the Bluetooth device discovery database. The suspicious part is that bta_hf_client_scb_init() is called during registration and also after disable, and if the timer callback is still active during this transition, it could access freed or partially reinitialized memory.
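
The following sketch illustrates that pattern in generic C. None of these names are the real bta_hf_client code; it only shows why a timer that outlives its control block is dangerous, and why the defensive fix is to disarm the timer before the block is freed or re-initialized.

    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct {
        bool timer_armed;
        int  state;
    } hf_client_cb_t;                 /* hypothetical control block */

    static hf_client_cb_t *g_cb;      /* pointer the timer callback captures */

    static void timer_callback(void)
    {
        /* If the control block were freed while the timer is still armed,
         * this access would be a use-after-free. */
        if (g_cb != NULL && g_cb->timer_armed) {
            g_cb->state++;
        }
    }

    static void cb_init(void)
    {
        g_cb = calloc(1, sizeof(*g_cb));
        g_cb->timer_armed = true;     /* pretend a periodic timer was started */
    }

    static void cb_deinit(void)
    {
        if (g_cb == NULL) {
            return;
        }
        g_cb->timer_armed = false;    /* defensive pattern: disarm the timer first */
        free(g_cb);                   /* ... only then release the block           */
        g_cb = NULL;                  /* ... and clear the pointer the callback uses */
    }

    int main(void)
    {
        cb_init();
        timer_callback();             /* valid access while the block is alive */
        cb_deinit();
        timer_callback();             /* safe only because deinit cleared g_cb */
        printf("done\n");
        return 0;
    }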

Vulnerability details: CVE-2025-48593 – In bta_hf_client_cb_init of bta_hf_client_main[.]cc, there is a possible remote code execution due to a use after free. This could lead to remote code execution with no additional execution privileges needed. User interaction is not needed for exploitation.

Official announcement: Please refer to the link for details – https://www.tenable.com/cve/CVE-2025-48593

CVE-2025-52538: About AMD Xilinx Run Time (XRT) (19th Nov 2025)

Last updated: November 11, 2025

Preface: AMD Xilinx refers to the former independent company Xilinx, which was acquired by Advanced Micro Devices (AMD) in February 2022.

Xilinx’s current and past product lines include: Field-Programmable Gate Arrays (FPGAs), System-on-Chip (SoC) Devices, Adaptive Compute Acceleration Platforms (ACAPs), Data Center Accelerator Cards & System-on-Modules (SoMs).

Xilinx provides numerous meta layers that enable developers to build all the necessary components for running Linux on Xilinx SoCs.

Background: In XRT, the xocl driver manages device memory through the abstraction of buffer objects (BOs), which are allocated using specific I/O control (ioctl) commands from user space via the XRT core library APIs. User-facing applications do not directly interact with kernel functions, but use the XRT API to manage memory.

Device memory allocation is modeled as buffer objects (BOs). For each BO, the driver tracks the host pointer, backed by a scatter-gather list that provides the backing storage on the host, and the corresponding device-side allocation of a contiguous buffer in one of the memory-mapped DDRs, BRAMs, etc.

Remark: The xocl driver is a key Linux kernel component of XRT specifically designed for PCIe-based platforms, managing user-facing functions and communication with the FPGA.

Vulnerability details: CVE-2025-52538 – Improper input validation within the XOCL driver may allow a local attacker to generate an integer overflow condition, potentially resulting in loss of confidentiality or availability.

From a cybersecurity perspective:

• The XOCL driver manages device memory via buffer objects (BOs) and uses ioctl commands for allocation.

• The vulnerability occurs because size calculations for BOs were not properly validated, leading to potential overflow when adding offsets or sizes (a checked-arithmetic sketch follows this list).

• AMD’s patch reportedly adds stricter input validation and bounds checking before performing arithmetic operations.
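
As an illustration of the kind of check implied above (not the actual xocl patch), the sketch below derives a buffer-object size from user-supplied fields with GCC/Clang's overflow-checked addition, rejecting requests whose size would wrap around instead of silently allocating a too-small buffer. The structure and field names are hypothetical.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical stand-in for the fields a BO-allocation ioctl might carry. */
    struct bo_request {
        uint64_t size;
        uint64_t offset;
    };

    static bool bo_request_valid(const struct bo_request *req, uint64_t *total)
    {
        /* Reject wrap-around: offset + size must not overflow 64 bits. */
        if (__builtin_add_overflow(req->offset, req->size, total)) {
            return false;
        }
        return true;
    }

    int main(void)
    {
        struct bo_request bad  = { .size = UINT64_MAX, .offset = 0x1000 };
        struct bo_request good = { .size = 0x100000,   .offset = 0x1000 };
        uint64_t total = 0;

        printf("bad request accepted?  %d\n", bo_request_valid(&bad, &total));
        printf("good request accepted? %d (total=0x%llx)\n",
               bo_request_valid(&good, &total), (unsigned long long)total);
        return 0;
    }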

Official announcement: Please refer to the link for details –

https://www.amd.com/en/resources/product-security/bulletin/amd-sb-8014.html

CVE-2025-42890 – Design flaws of SQL Anywhere Monitor (18th Nov 2025)

NVD Last Modified: 11/12/2025

Preface: The original graphical interface for the SQL Anywhere Monitor relied on Adobe Flash, which reached its end-of-life in December 2020. Consequently, the GUI interface is no longer provided in modern versions, and users must use non-GUI methods. SAP recommends using SAP SQL Anywhere Cockpit or SAP Solution Manager as modern, supported tools to replace the legacy SQL Anywhere Monitor for comprehensive monitoring requirements.

Background: Sybase SQL Anywhere (now renamed SAP SQL Anywhere) allows hardcoding credentials in various profiles, connection strings, and scripts, but this poses serious security risks and is strongly discouraged. Passwords stored in this way are typically in plaintext or only slightly obfuscated, making them easily accessible to unauthorized users.

ODBC Data Sources (DSNs): The Windows ODBC Data Source Administrator allows saving the username and password in the DSN configuration.
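
As a minimal illustration of the alternative to hardcoding, the sketch below assembles a SQL Anywhere-style connection string from environment variables at run time. The environment variable names are assumptions for this sketch, not an official SAP convention; the point is simply that UID/PWD literals never appear in the source or the DSN.

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* Hypothetical variable names -- the credentials live outside the code. */
        const char *user = getenv("SA_MONITOR_USER");
        const char *pwd  = getenv("SA_MONITOR_PASSWORD");

        if (user == NULL || pwd == NULL) {
            fprintf(stderr, "credentials not configured in the environment\n");
            return 1;
        }

        char conn[256];
        /* Writing "UID=dba;PWD=sql" literally here (or saving it in a DSN) is
         * exactly the anti-pattern described above. */
        snprintf(conn, sizeof(conn), "UID=%s;PWD=%s;ServerName=MonitorDB", user, pwd);

        printf("connection string assembled at run time (not printed, not stored)\n");
        return 0;
    }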

Ref: SAP SQL Anywhere has historically been a popular choice for specific use cases due to its strengths in embedded and mobile applications with intermittent connectivity. However, its overall market popularity has declined, and it is generally considered a legacy system, with mainstream maintenance ending in the near future (around 2028).

Vulnerability details: SQL Anywhere Monitor (Non-GUI) bakes credentials into the code, exposing resources or functionality to unintended users and giving attackers the possibility of arbitrary code execution. This could have a high impact on the confidentiality, integrity, and availability of the system.

Official announcement: Please refer to the link for details –

https://nvd.nist.gov/vuln/detail/CVE-2025-42890

CVE-2025-33185: About NVIDIA AIStore  (17th Nov 2025)

Official update: 10th Nov 2025 05:39 AM

Preface: The core design objective of NVIDIA AIStore (AIS) is to provide a high-performance, linearly scalable, and flexible storage solution specifically optimized for large-scale AI/ML and data analytics workloads. NVIDIA AIStore (AIS) provides secure access via a standalone Authentication Server (AuthN) that uses OAuth 2.0 compliant JSON Web Tokens (JWT) for token-based authentication.

The AuthN server is part of the broader NVIDIA AIStore project, which is publicly available on GitHub. It provides token-based secure access using the JSON Web Tokens (JWT) framework.

Background: The security of a signed JWT relies on a secret key (for HMAC algorithms like HS256) or a public/private key pair (for RSA or ECDSA). This key is used to create a digital signature that ensures the token’s integrity and authenticity, proving it has not been tampered with. If the application’s source code, configuration files, or version control system contains this secret key in plain text, it violates the principle of confidentiality for credentials. An attacker who discovers this hard-coded secret can forge valid tokens and impersonate any user or service the AuthN server trusts.
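
A minimal sketch of the safer pattern, assuming an HS256-style token: the HMAC key is read from the environment (or a secrets manager) instead of living as a literal in the source tree. This is not AIStore’s AuthN code; it only demonstrates why a hard-coded signing secret defeats the whole scheme, since anyone who can read the source can recompute valid signatures. The environment variable name is hypothetical. Build with: cc jwt_sign.c -lcrypto

    #include <openssl/evp.h>
    #include <openssl/hmac.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        const char *secret = getenv("AUTHN_JWT_SECRET");   /* hypothetical variable name */
        if (secret == NULL || strlen(secret) < 32) {
            fprintf(stderr, "refusing to sign: no (or too short) JWT secret configured\n");
            return 1;
        }

        /* header.payload portion of a JWT (already base64url-encoded upstream). */
        const char *signing_input = "eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJkZW1vIn0";

        unsigned char mac[EVP_MAX_MD_SIZE];
        unsigned int mac_len = 0;

        /* HMAC-SHA256 over the signing input, keyed with the external secret. */
        HMAC(EVP_sha256(), secret, (int)strlen(secret),
             (const unsigned char *)signing_input, strlen(signing_input),
             mac, &mac_len);

        printf("HS256 signature is %u bytes; base64url-encode it to finish the JWT\n",
               mac_len);
        return 0;
    }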

Vulnerability details: NVIDIA AIStore contains a vulnerability in AuthN. A successful exploit of this vulnerability might lead to escalation of privileges, information disclosure, and data tampering.

Impacts: Escalation of privileges, information disclosure, data tampering

Remediation: Update to v3.31

Official announcement: Please refer to the link for details –

https://nvidia.custhelp.com/app/answers/detail/a_id/5724

CVE-2025-33202: About NVIDIA Triton Inference Server (14th Nov 2025)

Official update: 11/10/2025 05:39 AM

Preface: Clients can communicate with Triton using an HTTP/REST protocol, a gRPC protocol, or an in-process C API (or its C++ wrapper). Triton supports HTTP/REST and gRPC, both of which involve complex header parsing.

Triton also implements the Open Inference Protocol (OIP), also known as the KServe V2 Protocol. The protocol defines a standardized interface for model inference, which implies that compliant inference servers must be capable of parsing incoming requests and serializing outgoing responses according to the protocol’s defined message formats.

Background: To define a parser that filters the payload for Triton using the KServe V2 (Open Inference Protocol), you need to handle the following:

Key Considerations

1. Protocol Compliance – The parser must understand the OIP message format:

- Inference Request: Includes inputs, outputs, parameters.

- Inference Response: Includes model_name, outputs, parameters.

- Data can be in JSON (for REST) or Protobuf (for gRPC).

2. Filtering Logic – Decide what you want to filter:

- Specific tensor names?

- Certain data types (e.g., FP32, INT64)?

- Large payloads (e.g., skip tensors above a size threshold; see the sketch after this list)?

- Security checks (e.g., reject malformed headers)?

3. Shared Memory Handling – If shared memory is used, the parser should:

- Validate shared_memory_region references.

- Ensure the payload does not redundantly include tensor data when shared memory is specified.
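
As a minimal sketch of the size filtering mentioned in item 2 (not Triton source code), the snippet below computes a tensor’s byte size from its shape with overflow-checked multiplication and rejects it before any buffer is reserved when it exceeds a configurable cap. The limit and the helper name are hypothetical.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    #define MAX_TENSOR_BYTES (64ull * 1024 * 1024)   /* hypothetical 64 MiB cap */

    static bool tensor_size_ok(const uint64_t *shape, size_t rank,
                               uint64_t element_size, uint64_t *out_bytes)
    {
        uint64_t elements = 1;
        for (size_t i = 0; i < rank; i++) {
            if (__builtin_mul_overflow(elements, shape[i], &elements)) {
                return false;                        /* shape overflows 64 bits */
            }
        }
        if (__builtin_mul_overflow(elements, element_size, out_bytes)) {
            return false;
        }
        return *out_bytes <= MAX_TENSOR_BYTES;       /* enforce the payload cap */
    }

    int main(void)
    {
        uint64_t image_shape[] = { 1, 3, 224, 224 }; /* typical image tensor */
        uint64_t huge_shape[]  = { 1u << 20, 1u << 20 };
        uint64_t bytes = 0;

        bool ok = tensor_size_ok(image_shape, 4, sizeof(float), &bytes);
        printf("image tensor accepted: %d (%llu bytes)\n", ok, (unsigned long long)bytes);

        ok = tensor_size_ok(huge_shape, 2, sizeof(float), &bytes);
        printf("huge tensor accepted:  %d\n", ok);
        return 0;
    }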

Vulnerability details: NVIDIA Triton Inference Server for Linux and Windows contains a vulnerability where an attacker could cause a stack overflow by sending extra-large payloads. A successful exploit of this vulnerability might lead to denial of service.

Official announcement: Please see the official link for details –

https://nvidia.custhelp.com/app/answers/detail/a_id/5723