All posts by admin

Where did 3I/ATLAS come from? (7th Oct 2025)

Preface: The Wow! signal was detected at a frequency near the hydrogen line, around 1420 MHz. SETI researchers favored this frequency because it sits in a naturally quiet region of the radio spectrum near the so-called “water hole” (the band between the hydrogen and hydroxyl emission lines), because it is protected from terrestrial transmissions by international agreement, and because it is naturally emitted by hydrogen, the most common element in the universe.

Background: Harvard astrophysicist Avi Loeb notes that the interstellar object 3I/ATLAS lies in the same general direction as the 1977 Wow! signal, with the two aligning within about 9 degrees on the sky. This proximity has led to speculation, including Loeb’s call for radio astronomers to check whether 3I/ATLAS is emitting any signals, though there is currently no direct scientific evidence linking the two events.

Wow! signal – The signal arrived from the direction of the constellation Sagittarius, close on the sky to the globular cluster M55.

M55 – M55 is not a constellation, but a globular cluster located in the constellation Sagittarius. Also known as the Specter Cluster or NGC 6809, it is an ancient ball of over 100,000 stars, around 12.5 billion years old.

From a scientific point of view, what open questions remain?

Spectroscopic analysis of interstellar comet 3I/ATLAS has revealed the presence of nickel atoms, but notably without iron, which is a puzzling and anomalous observation compared with typical comets from our solar system and even the previous interstellar visitor 2I/Borisov. The nickel-to-iron ratio is significantly higher than in those comets, and it suggests the nickel is released via photochemical processes, possibly from nickel carbonyls, rather than by the high-temperature sublimation of dust grains, which would release both metals together.

Details provided by Professor Loeb. Please see the link for details:

https://avi-loeb.medium.com/why-is-the-orbital-plane-of-3i-atlas-inclined-by-5-degrees-relative-to-the-ecliptic-plane-3b07e5222bff

CVE-2025-23272: About NVIDIA nvJPEG library (6th Oct 2025)

Preface: The nvJPEG library provides low-latency decoding, encoding, and transcoding for common JPEG formats used in computer vision applications such as image classification, object detection and image segmentation.

Background: A typical nvJPEG workflow takes a JPEG image data stream as input, retrieves the width and height of the image from the data stream, and uses that information to manage GPU memory allocation and the decoding.

To use the nvJPEG library, start by calling the helper functions for initialization: create an nvJPEG library handle with one of the helper functions nvjpegCreateSimple() or nvjpegCreateEx(), then create a JPEG state with nvjpegJpegStateCreate(). See the nvJPEG type declarations and the nvjpegJpegStateCreate() documentation.
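For illustration, a minimal, hedged C sketch of that sequence for a single-image decode is shown below. The file name, buffer sizing and error handling are illustrative simplifications, not taken from NVIDIA’s documentation.

#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>
#include <nvjpeg.h>

int main(void)
{
    nvjpegHandle_t handle;
    nvjpegJpegState_t state;

    /* 1. Create the library handle and the JPEG state (the helper
          functions named above). */
    if (nvjpegCreateSimple(&handle) != NVJPEG_STATUS_SUCCESS) return 1;
    if (nvjpegJpegStateCreate(handle, &state) != NVJPEG_STATUS_SUCCESS) return 1;

    /* 2. Read a JPEG file into host memory (placeholder path). */
    FILE *f = fopen("input.jpg", "rb");
    if (!f) return 1;
    fseek(f, 0, SEEK_END);
    size_t len = (size_t)ftell(f);
    fseek(f, 0, SEEK_SET);
    unsigned char *jpeg = malloc(len);
    if (!jpeg || fread(jpeg, 1, len, f) != len) return 1;
    fclose(f);

    /* 3. Retrieve width, height and component count from the data stream
          and use that information to size the GPU output buffer. */
    int ncomp, widths[NVJPEG_MAX_COMPONENT], heights[NVJPEG_MAX_COMPONENT];
    nvjpegChromaSubsampling_t subsampling;
    if (nvjpegGetImageInfo(handle, jpeg, len, &ncomp, &subsampling,
                           widths, heights) != NVJPEG_STATUS_SUCCESS) return 1;

    nvjpegImage_t out = {0};
    out.pitch[0] = (size_t)widths[0] * 3;              /* interleaved RGB */
    cudaMalloc((void **)&out.channel[0], out.pitch[0] * heights[0]);

    /* 4. Decode to interleaved RGB on the default CUDA stream. */
    nvjpegDecode(handle, state, jpeg, len, NVJPEG_OUTPUT_RGBI, &out, NULL);
    cudaDeviceSynchronize();

    /* 5. Clean up. */
    cudaFree(out.channel[0]);
    free(jpeg);
    nvjpegJpegStateDestroy(state);
    nvjpegDestroy(handle);
    return 0;
}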

The nvJPEG library provides high-performance, GPU accelerated JPEG decoding functionality for image formats commonly used in deep learning and hyperscale multimedia applications.

Ref: Arrays in C/C++ are zero-indexed, meaning that if an array has `n` elements, valid indices range from `0` to `n-1`. Accessing an index outside this range leads to out-of-bounds access. Pointers in C/C++ provide direct memory manipulation capabilities, but this power comes with the risk of “out-of-bounds” access.
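A short, self-contained C example of that bug class (not NVIDIA’s code) follows: an index derived from untrusted input is used without a bounds check, so the read lands past the end of the array.

#include <stdio.h>

/* Hypothetical helper: 'idx' is assumed to come from attacker-controlled
   file data, e.g. a field parsed out of a crafted JPEG. */
static int read_entry(const int *table, size_t n, size_t idx)
{
    /* BUG: no check that idx < n. Valid indices are 0 .. n-1, so idx == n
       (or anything larger) reads past the end of the array. The fix is to
       reject idx >= n before dereferencing. */
    return table[idx];
}

int main(void)
{
    int table[4] = {16, 11, 10, 16};
    printf("%d\n", read_entry(table, 4, 3)); /* in bounds */
    printf("%d\n", read_entry(table, 4, 4)); /* out-of-bounds read: undefined behavior */
    return 0;
}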

Vulnerability details: NVIDIA nvJPEG library contains a vulnerability where an attacker can cause an out-of-bounds read by means of a specially crafted JPEG file. A successful exploit of this vulnerability might lead to information disclosure or denial of service.

Official announcement: For more details, please click the link.

https://nvd.nist.gov/vuln/detail/CVE-2025-23272

AMD responds to a method requiring privileged attackers with physical access to the motherboard (3rd Oct 2025)

Preface: AMD does not plan to release any mitigations in response to this report because the reported exploit is outside the scope of the published threat model for SEV-SNP.

Remark: A physical attack is not a cyber attack because “cyber” refers to actions within computer networks and digital systems, whereas a physical attack directly involves the physical world, such as breaking into a building or destroying hardware. While a physical attack can lead to cyber vulnerabilities or data breaches, the act itself is not inherently digital.

Background: SEV-SNP is a TEE that protects the confidentiality and integrity of whole VMs against an attacker with root privileges and physical access to the machine, making it possible to run SEV-protected VMs without trusting the infrastructure provider or virtualization layers such as the hypervisor.

A Trusted Execution Environment (TEE) is a secure, isolated area within a device’s main processor, protected from the main operating system and other untrusted software. It uses special hardware to create a trusted space (a “secure world”) to run sensitive code and protect data’s confidentiality and integrity. TEEs are used for security-sensitive operations like biometric authentication, secure payments, and protecting private keys in crypto wallets.

The “probe” for Serial Presence Detect (SPD) data on DDR4 and DDR5 modules is an I2C bus and associated protocols that allow the motherboard’s firmware (BIOS) to read an EEPROM chip on the memory module.
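For illustration only, the hedged C sketch below shows how software on Linux can probe SPD bytes through that I2C interface using the kernel’s i2c-dev API. The bus number and the 0x50 slave address are assumptions (SPD EEPROMs conventionally sit at addresses 0x50 to 0x57, one per slot), DDR5 modules place an SPD5 hub device in front of the data rather than a plain EEPROM, and root privileges are required, so treat this purely as a sketch of the DDR4-style case.

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>

int main(void)
{
    /* Assumed bus and address for the first DIMM slot. */
    int fd = open("/dev/i2c-0", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }
    if (ioctl(fd, I2C_SLAVE, 0x50) < 0) { perror("ioctl"); close(fd); return 1; }

    /* Read the first 16 SPD bytes, which include fields the BIOS uses to
       size and configure the memory controller. Tampering with these bytes
       is the starting point of the attack described below. */
    unsigned char reg = 0x00, spd[16];
    if (write(fd, &reg, 1) != 1 || read(fd, spd, sizeof spd) != (ssize_t)sizeof spd) {
        perror("spd read");
        close(fd);
        return 1;
    }
    for (int i = 0; i < (int)sizeof spd; i++)
        printf("%02x ", spd[i]);
    printf("\n");
    close(fd);
    return 0;
}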

How the attack works:

1. Attacker gains physical access to the system and modifies the SPD data.

2. They falsely report a larger memory size than actually exists.

3. This causes the memory controller to use ghost address bits, creating aliasing — multiple physical addresses pointing to the same memory location.

4. The attacker can then:

- Overwrite encrypted guest memory.

- Inject malicious data into memory regions.

- Bypass SEV-SNP’s memory integrity protections, which assume correct physical mappings.

Official announcement: For more details, please refer to the link –

https://www.amd.com/en/resources/product-security/bulletin/amd-sb-3024.html

CVE-2025-10657: About Enhanced Container Isolation (2nd Oct 2025)

Preface: Standardized AI/ML model packaging: with OCI artifacts, models can be versioned, distributed, and tracked like container images, which promotes consistency and traceability across environments. Docker Desktop, specifically through its Docker Model Runner feature, can be used to run various AI models, particularly Large Language Models (LLMs) and other models that can be packaged as OCI artifacts.

OCI Artifacts are any arbitrary files associated with software applications, extending the standardized OCI (Open Container Initiative) image format to include content beyond container images, such as Helm charts, Software Bill of Materials (SBOMs), digital signatures, and provenance data. These artifacts leverage the same fundamental OCI structure of manifest, config, and layers and are stored and distributed using OCI-compliant registries and tools like the ORAS CLI.

Background: A container desktop, such as Docker Desktop, acts as a local development environment and a management host for CI/CD pipelines by providing consistent, isolated environments for building, testing, and deploying containerized applications. It enables developers to package applications with their dependencies into portable containers, eliminating “works on my machine” issues and ensuring application uniformity across development, testing, and production. This simplifies the entire software delivery process, accelerating the development lifecycle by integrating container management directly into the developer’s workflow.

Vulnerability details: In a hardened Docker environment with Enhanced Container Isolation (ECI, https://docs.docker.com/enterprise/security/hardened-desktop/enhanced-container-isolation/) enabled, an administrator can use the command restrictions feature (https://docs.docker.com/enterprise/security/hardened-desktop/enhanced-container-isolation/config/#command-restrictions) to restrict the commands that a container with a Docker socket mount may issue on that socket. Due to a software bug, the configuration to restrict commands was ignored when passed to ECI, allowing any command to be executed on the socket. This grants excessive privileges by permitting unrestricted access to powerful Docker commands. The vulnerability affects only Docker Desktop 4.46.0 users that have ECI enabled and are using the Docker socket command restrictions feature. In addition, since ECI restricts mounting the Docker socket into containers by default, it only affects containers which are explicitly allowed by the administrator to mount the Docker socket.

Official announcement: For more details, please see the link –

https://nvd.nist.gov/vuln/detail/CVE-2025-10657

CVE-2025-59936: About get-jwks, OAuth 2.0, and OpenID Connect (OIDC). Be vigilant! (30th Sep, 2025)

Preface: JSON Web Key Sets (JWKS) are a popular and essential component for secure, decentralized authentication systems, particularly in OAuth 2.0 and OpenID Connect (OIDC) flows, where they provide a standardized, interoperable, and scalable method for clients to obtain the public keys needed to verify the digital signatures of JSON Web Tokens (JWTs) without requiring synchronous communication with the identity provider.

Background: Using a JSON Web Key Set (JWKS) eliminates the need for resource servers to resend keys, as they can automatically retrieve new keys from the JWKS endpoint to verify tokens after key rotation, reducing manual effort and downtime. The resource server caches the JWKS document and uses the kid (Key ID) from the token to find the correct public key to validate the signature.

Benefits of using JWKS:

Automated Key Rotation: No manual updates are needed for clients or resource servers when keys are rotated.

Reduced Downtime: Applications can dynamically fetch new keys, minimizing the need for restarts or manual configuration during key rotation.

Simplified Management: A centralized JWKS endpoint simplifies the process of managing public keys across multiple clients and systems.

Enhanced Security: By rotating keys regularly, the window of vulnerability for a compromised key is limited to the time-to-live of the token, minimizing the impact of a potential breach.

Vulnerability details: get-jwks contains fetch utils for JWKS keys. In versions prior to 11.0.2, a vulnerability in get-jwks can lead to cache poisoning in the JWKS key-fetching mechanism. When the iss (issuer) claim is validated only after keys are retrieved from the cache, it is possible for cached keys from an unexpected issuer to be reused, resulting in a bypass of issuer validation. This design flaw enables a potential attack where a malicious actor crafts a pair of JWTs, the first one ensuring that a chosen public key is fetched and stored in the shared JWKS cache, and the second one leveraging that cached key to pass signature validation for a targeted iss value. The vulnerability will work only if the iss validation is done after the use of get-jwks for keys retrieval. This issue has been patched in version 11.0.2.
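get-jwks itself is a JavaScript library, so the C sketch below is only a language-agnostic illustration of the underlying principle rather than the library’s code: cached keys are stored and looked up by both the expected issuer and the kid, and the issuer check happens before any cached key is trusted, so a key cached for one issuer cannot be reused to validate a token that claims another. The hostnames and the key material are placeholders.

#include <stdio.h>
#include <string.h>

/* Conceptual cache entry: the key is stored together with the issuer
   whose JWKS endpoint it came from. */
struct jwks_entry {
    const char *iss;  /* issuer the key was fetched from */
    const char *kid;  /* key id advertised in the JWKS document */
    const char *pem;  /* cached public key (placeholder) */
};

/* Return a cached key only when BOTH the expected issuer and the kid
   match; a kid-only lookup is the confusion described above. */
static const char *lookup_key(const struct jwks_entry *cache, int n,
                              const char *expected_iss, const char *kid)
{
    for (int i = 0; i < n; i++)
        if (strcmp(cache[i].iss, expected_iss) == 0 &&
            strcmp(cache[i].kid, kid) == 0)
            return cache[i].pem;
    return NULL; /* miss: fetch from the expected issuer's JWKS endpoint */
}

int main(void)
{
    struct jwks_entry cache[] = {
        { "https://attacker.example", "kid-1", "-----BEGIN PUBLIC KEY-----..." },
    };
    /* A token claiming a different issuer must not match the cached key,
       even though the kid collides. */
    const char *key = lookup_key(cache, 1, "https://login.victim.example", "kid-1");
    printf("%s\n", key ? "cached key reused (BAD)"
                       : "no cached key: fetch from the claimed issuer");
    return 0;
}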

Official announcement: Please refer to the website for details –

https://nvd.nist.gov/vuln/detail/CVE-2025-59936

CVE-2025-55780: AI/LLM developers should not underestimate this MuPDF design flaw! (29-09-2025)

Preface: LLMs are built on machine learning, specifically a type of neural network called a transformer model. How do LLMs read PDFs? A common first step is to extract the text blocks from the PDF with a tool such as pdfplumber. Each text block comes with its coordinates, which makes it possible to analyze the blocks’ spatial relationships. A “window” can then be created around each text block to capture its surrounding context.

Background: MuPDF is not widely known by consumers as a popular standalone application, but it is popular and growing in popularity among developers, particularly those working with Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG) systems, due to its powerful and lightweight nature.

Large Language Models (LLMs) do not directly “read” PDF files in their native binary format. Instead, they interact with the extracted content of the PDF. MuPDF, through its Python binding PyMuPDF (or its specialized variant PyMuPDF4LLM), plays a crucial role in this process by enabling efficient and accurate extraction of information from PDFs.
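To make that extraction step concrete, here is a minimal, hedged C sketch against MuPDF’s C API that walks the structured-text blocks of the first page and prints each block’s bounding box, the kind of coordinate-tagged text an LLM pipeline consumes. The file name is a placeholder, and production code should wrap the MuPDF calls in fz_try/fz_catch, which is omitted here for brevity.

#include <stdio.h>
#include <mupdf/fitz.h>

int main(void)
{
    fz_context *ctx = fz_new_context(NULL, NULL, FZ_STORE_UNLIMITED);
    fz_register_document_handlers(ctx);

    /* Placeholder path; error handling via fz_try/fz_catch is omitted. */
    fz_document *doc = fz_open_document(ctx, "input.pdf");
    fz_page *page = fz_load_page(ctx, doc, 0);
    fz_stext_page *text = fz_new_stext_page_from_page(ctx, page, NULL);

    /* Each text block carries a bounding box, i.e. the coordinates the
       preface refers to for spatial analysis. */
    for (fz_stext_block *b = text->first_block; b; b = b->next) {
        if (b->type != FZ_STEXT_BLOCK_TEXT)
            continue;
        printf("block at (%.1f, %.1f) - (%.1f, %.1f)\n",
               b->bbox.x0, b->bbox.y0, b->bbox.x1, b->bbox.y1);
        for (fz_stext_line *l = b->u.t.first_line; l; l = l->next) {
            for (fz_stext_char *c = l->first_char; c; c = c->next)
                putchar(c->c < 128 ? c->c : '?');
            putchar('\n');
        }
    }

    fz_drop_stext_page(ctx, text);
    fz_drop_page(ctx, page);
    fz_drop_document(ctx, doc);
    fz_drop_context(ctx);
    return 0;
}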

Vulnerability details: A null pointer dereference occurs in the function break_word_for_overflow_wrap() in MuPDF 1.26.4 when rendering a malformed EPUB document. Specifically, the function calls fz_html_split_flow() to split a FLOW_WORD node, but does not check if node->next is valid before accessing node->next->overflow_wrap, resulting in a crash if the split fails or returns a partial node chain.

Official announcement: For more details, see the link

https://nvd.nist.gov/vuln/detail/CVE-2025-55780

CVE-2025-23348 and CVE-2025-23349: About NVIDIA Megatron-LM (26-09-2025)

Preface: For years, OpenAI’s GPT series has been a dominant force, while NVIDIA’s Megatron-LM has provided a powerful framework for training these massive models.

NVIDIA Megatron-LM faces competition from several other frameworks especially Microsoft DeepSpeed, Hugging Face Accelerate, JAX/Flax and PyTorch Lightning.

Both PyTorch Lightning and NVIDIA Megatron-LM are built on top of the PyTorch library. PyTorch provides the fundamental tensor operations and deep learning primitives, while these frameworks add abstractions and tools for more efficient and scalable model development and training.

Background: The full GPT pre-training process:

A script such as pretrain_gpt[.]py orchestrates the major steps needed to train a model with billions of parameters from scratch on terabytes of data. The process has four steps:

  1. Data preparation
  2. Distributed setup
  3. Core training loop
  4. Model saving and evaluation

The design objective of a script like orqa/unsupervised/nq.py is to prepare the GPT model for open-domain question answering (QA), a task that is not typically a part of standard, large-scale unsupervised pre-training. The script specifically uses the Natural Questions (NQ) dataset to enhance the model’s ability to retrieve information from a large corpus of documents and generate answers, all without the direct use of a labeled QA dataset for this step.

Vulnerability details:

CVE-2025-23348: NVIDIA Megatron-LM for all platforms contains a vulnerability in the pretrain_gpt script, where malicious data created by an attacker may cause a code injection issue. A successful exploit of this vulnerability may lead to code execution, escalation of privileges, information disclosure, and data tampering.

CVE-2025-23349: NVIDIA Megatron-LM for all platforms contains a vulnerability in the tasks/orqa/unsupervised/nq.py component, where an attacker may cause a code injection. A successful exploit of this vulnerability may lead to code execution, escalation of privileges, information disclosure, and data tampering.

Official announcement: Please refer to the link for more details –

https://nvidia.custhelp.com/app/answers/detail/a_id/5698

Is the impact of the CVE-2025-10184 vulnerability not limited to PoC test devices? (25-09-2025)

Preface: The com[.]android[.]providers[.]telephony and com[.]android[.]phone packages are not similar in function; they serve distinct purposes in the Android telephony system.

This package (com[.]android[.]providers[.]telephony) is a content provider that manages and provides access to telephony-related data. 

  • Database manager: It contains data related to phone operations, including the history and content of SMS and MMS messages, call logs, and the list of Access Point Names (APNs) used for mobile data connections.
  • Data access: Other apps must request permission to access this package’s database to read or write call logs, SMS, and other telephony data.

Background: The Telephony provider and its associated classes are core components of Android responsible for managing SMS and MMS messages. The content providers named in this advisory, com[.]android[.]providers[.]telephony[.]PushMessageProvider, com[.]android[.]providers[.]telephony[.]PushShopProvider, and com[.]android[.]providers[.]telephony[.]ServiceNumberProvider, are part of the Telephony package on affected builds and handle push messages and service numbers, respectively.

Vulnerability details:

The vulnerability allows any application installed on the device to read SMS/MMS data and metadata from the system-provided Telephony provider without permission, user interaction, or consent. The user is also not notified that SMS data is being accessed. This could lead to sensitive information disclosure and could effectively break the security provided by SMS-based Multi-Factor Authentication (MFA) checks. The root cause is a combination of missing permissions for write operations in several content providers (com[.]android[.]providers[.]telephony[.]PushMessageProvider, com[.]android[.]providers[.]telephony[.]PushShopProvider, com[.]android[.]providers[.]telephony[.]ServiceNumberProvider), and a blind SQL injection in the update method of those providers.

Ref: The issue stems from two main problems in the content providers:

Missing write permissions in several exported content providers:

com[.]android[.]providers[.]telephony[.]PushMessageProvider

com[.]android[.]providers[.]telephony[.]PushShopProvider

com[.]android[.]providers[.]telephony[.]ServiceNumberProvider

A blind SQL injection vulnerability in the update() method of these providers:

The where clause in SQL queries is passed unsanitized, allowing attackers to inject arbitrary SQL commands.

Official announcement: Please see the link for details –

https://www.tenable.com/cve/CVE-2025-10184

AMD responds to DRAM-related side-channel attacks (24th Sep 2025)

Preface: DDR5 memory has two independent 32-bit sub-channels per DIMM, while DDR4 uses a single 64-bit channel. There are many types of DDR5 DIMMs.

  • UDIMM (Unbuffered DIMM): Commonly used in consumer-grade desktops and laptops, UDIMMs provide a balance of performance and cost-efficiency.
  • RDIMM (Registered DIMM): Utilized in servers and workstations, RDIMMs include a register that buffers data, enhancing stability and allowing for larger memory capacities.
  • SODIMM (Small Outline DIMM): Designed for laptops and compact devices, SODIMMs offer a smaller form factor without sacrificing performance.

Background: DRAM side-channel attacks exploit timing differences and row buffer behavior in the memory subsystem — particularly row conflicts and row hits — to infer sensitive information. These behaviors are fundamental to how DRAM works, regardless of whether the module is a UDIMM, RDIMM, or SODIMM.

What does vary between DIMM types is:

  • Signal integrity and buffering (RDIMMs have registers that buffer commands)
  • Capacity and scalability
  • Latency and performance characteristics

However, the core vulnerability — the ability to observe timing differences due to row buffer behavior — exists across all types of DRAM. The attack feasibility may differ slightly due to architectural differences, but no DIMM type is inherently immune.
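To make “timing differences due to row buffer behavior” concrete, the hedged x86-64 C sketch below times pairs of uncached DRAM accesses. Whether the chosen pair of addresses actually causes a row conflict depends on physical addresses and the platform’s undocumented DRAM addressing functions, which is precisely what the paper’s methodology sets out to verify, and the latency threshold separating conflicts from hits must be calibrated per machine.

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <x86intrin.h>

/* Time one access to each address after flushing both from the caches, so
   the loads are served by DRAM. Same-bank, different-row pairs (row
   conflicts) take measurably longer than row hits or different-bank pairs. */
static uint64_t time_pair(volatile char *a, volatile char *b)
{
    unsigned aux;
    _mm_clflush((const void *)a);
    _mm_clflush((const void *)b);
    _mm_mfence();
    uint64_t t0 = __rdtscp(&aux);
    (void)*a;
    (void)*b;
    uint64_t t1 = __rdtscp(&aux);
    return t1 - t0;
}

int main(void)
{
    /* Two addresses far apart in a large buffer; whether they conflict
       depends on the machine's DRAM mapping. */
    size_t len = 64 * 1024 * 1024;
    volatile char *buf = malloc(len);
    if (!buf) return 1;
    buf[0] = 1;
    buf[len / 2] = 1;

    uint64_t sum = 0;
    for (int i = 0; i < 1000; i++)
        sum += time_pair(&buf[0], &buf[len / 2]);
    printf("average paired DRAM access latency: %llu cycles\n",
           (unsigned long long)(sum / 1000));

    free((void *)buf);
    return 0;
}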

Researchers have provided AMD with a paper titled “Quo VADIS DDR5? Verifying Addressing of DRAM In Software.”

In this paper, the authors present an approach to verifying DRAM addressing functions from software using the DRAM row-conflict side channel. They claim that the presented verification methodology provides a cheap and reliable alternative to verification requiring physical access and expensive measurement equipment such as oscilloscopes. They also demonstrate exploitation of the row-conflict side channel as a covert channel and in a website fingerprinting attack, both with a high success rate.

Security Focus: The university researchers discovered a previously unknown rank-selection side channel and reverse-engineered its function on two DDR4 and two DDR5 systems. These results enable novel DDR5 row-conflict side-channel attacks, which they demonstrated in two scenarios: a covert channel with 1.39 Mbit/s, and a website fingerprinting attack with an F1 score of 84% on DDR4 and 74% on DDR5. They conclude that, because reverse engineering of DRAM address functions remains relevant, their new verification methodology provides a cheap and reliable alternative to verification using expensive physical measurements.

Official announcement: Please see the link for details –

https://www.amd.com/en/resources/product-security/bulletin/amd-sb-7036.html

Chypnosis on FPGAs – AMD is investigating whether specific devices and components are affected and plans to provide updates as new findings emerge (22nd Sep 2025)

Preface: AMD uses FPGAs (Field-Programmable Gate Arrays) in High-Performance Computing (HPC) by offering accelerator cards and adaptive SoCs that allow users to program custom hardware for HPC workloads in fields like machine learning, data analytics, and scientific simulations.

AMD manufactures FPGA-based accelerator cards that enable users to program applications directly onto the FPGA, eliminating the lengthy card design process. These cards install as-is in servers, accelerating workloads in financial computing, machine learning, computational storage, and data analytics.

Background: The XADC is an integrated, on-chip block within certain AMD (formerly Xilinx) FPGAs that performs analog-to-digital conversion (ADC) and also includes on-chip sensors for voltage and temperature monitoring. The FPGA provides the programmable logic to process the digitized data from the XADC, use it for control, or access it through the FPGA’s interconnects like the Dynamic Reconfiguration Port (DRP) or JTAG interface.

Flash ADCs in general have disadvantages related to high power consumption, large physical size, and limited resolution, because of the large number of comparators required for higher bit depths. For integrated blocks such as the XADC, non-linearity can also introduce signal distortion and measurement errors, and an ADC built into the FPGA may not be suitable for every application because external components are still required.

Security Focus of an Academic Research Paper: Attacks on the Programmable Logic (PL) in AMD Artix™ 7 Series FPGA Devices.

Artix 7 vs. Artix™ UltraScale+ – key differences at a glance:

The main difference is that Artix™ UltraScale+ FPGAs are a newer, higher-performance family built on a 16nm FinFET process, offering improved power efficiency, higher transceiver speeds, and more advanced features like enhanced DSP blocks and hardened memory, while the Artix 7 FPGAs are older devices built on a 28nm process. UltraScale+ also features ASIC-class clocking, supports faster memory interfaces like LPDDR4x and DDR4, and includes advanced security features.

Vulnerability details: The academic research paper introducing the new approach demonstrates the attack on the programmable logic (PL) in AMD Artix™ 7-series FPGA devices. It shows that the on-chip XADC-based voltage monitor is too slow to detect the attack and/or execute a tamper response to clear memory contents. Furthermore, the authors show that detection circuits developed to detect clock freezing are ineffective as well. In general, the attack can be applied to all ICs that do not have an effective tamper response to clear sensitive data in the case of an undervoltage event.

Official announcement: Please see the link for details –

https://www.amd.com/en/resources/product-security/bulletin/amd-sb-8018.html