All posts by admin

Please do not underestimate CVE-2023-31364. AMD announced the details of the official remedial measures on February 24, 2026. (27-02-2026)

Preface: AMD EPYC processors (including the latest 9005 Series) fully incorporate an I/O Memory Management Unit (IOMMU). In AMD’s architecture, this technology is known as AMD-Vi (AMD I/O Virtualization). It serves as a foundational component for hardware-level security and isolation.

Background: In a virtualized environment, the IOMMU (AMD-Vi) acts as the essential bridge between “physical hardware” and the Guest VM. When you enable hardware passthrough, the IOMMU functions as both a hardware-level “translator” and a “security guard.” The following details how IOMMU participates in the operation of guest virtual machines:

About Memory Address Mapping (DMA Remapping)

This is the most critical function of the IOMMU.

  • The Problem: A Guest VM operates using Guest Physical Addresses (GPA), which are virtualized. However, a physical device (like a NIC or GPU) requires Host Physical Addresses (HPA) to function.
  • The Solution: When a driver inside the Guest VM commands a device to perform a Direct Memory Access (DMA), the IOMMU intercepts the request. It uses a translation table (provided by the hypervisor) to instantly map the GPA to the HPA. This allows the Guest VM to interact with hardware at full speed without knowing the host’s actual memory layout.
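
The GPA-to-HPA lookup described above can be sketched as a toy model. This is an illustrative Python sketch, not actual IOMMU or hypervisor code: the translation table is modeled as a plain dictionary programmed by the hypervisor, and the page size and addresses are made up for the example.

```python
# Toy model of IOMMU DMA remapping: the hypervisor programs a translation
# table, and every DMA request from the guest is looked up in it before the
# access can touch host memory.
PAGE_SIZE = 4096

class IommuTranslationError(Exception):
    """Raised when a DMA target has no mapping (a real IOMMU raises a fault)."""

def translate(table: dict[int, int], gpa: int) -> int:
    """Map a Guest Physical Address to a Host Physical Address."""
    page, offset = divmod(gpa, PAGE_SIZE)
    try:
        host_page = table[page]
    except KeyError:
        raise IommuTranslationError(f"unmapped GPA page {page:#x}") from None
    return host_page * PAGE_SIZE + offset

# Hypervisor-provided mapping: guest page 0x10 -> host page 0x9a, and so on.
table = {0x10: 0x9A, 0x11: 0x9B}

hpa = translate(table, 0x10 * PAGE_SIZE + 0x123)   # DMA into guest page 0x10
```

Any DMA to an unmapped guest page fails the lookup, which is exactly the "security guard" role the text describes.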

Vulnerability details: CVE-2023-31364: Improper handling of direct memory writes in the input-output memory management unit could allow a malicious guest virtual machine (VM) to flood a host with writes, potentially causing a fatal machine check error resulting in denial of service.

The above details and VFIO code demonstrate (refer to attached diagram) the mechanism allowing a virtual machine to directly access hardware via IOMMU mapping, which is essential for launching the CVE-2023-31364 attack. The vulnerability occurs when a guest utilizes this direct path to send malicious, high-volume write requests, causing a flawed IOMMU to trigger a fatal Machine Check Error (MCE) and crash the host.

Official announcement: Please refer to the link for details – https://www.amd.com/en/resources/product-security/bulletin/amd-sb-7059.html

CVE-2026-3061: Out of bounds read in Media in Google Chrome (26-02-2026)

Preface: In the computer industry, the term “sustainability” encompasses design flaws and their remedies. If someone tells you that their hardware and software products have never been found to have any vulnerabilities to date, it does not mean their design is perfect; they still need to maintain its sustainability.

Background: In Google Chrome, “Media” refers to the suite of features and APIs used to handle, control, and debug audio, video, and images. Here is a breakdown of what it encompasses and how to customize it.

What is “Media” in Chrome?

  1. Global Media Control (Media Hub):
    Located in the top-right corner (a music note icon), this hub allows you to play, pause, or skip tracks across all open tabs without switching to the specific page.
  2. DevTools Media Panel:
    A hidden tool for developers (found via F12 > Three Dots > More tools > Media) used to inspect video resolution, codecs (like AV1), and playback errors in real-time.
  3. Built-in Media Player:
    Chrome acts as a standalone player. You can drag and drop MP4, MP3, JPG, or PDF files directly into a tab to view them.
  4. Casting:
    Integrated support for Google Cast, allowing you to send audio or video from a tab to a TV or Nest speaker. 


Vulnerability details: Out of bounds read in Media in Google Chrome prior to 145.0.7632.116 allowed a remote attacker to perform an out of bounds memory read via a crafted HTML page. (Chromium security severity: High)
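
To illustrate the vulnerability class (this is a generic sketch, not Chrome's actual media code): a media demuxer reads length fields from untrusted input, and a crafted file can declare a payload length larger than the buffer. The `read_chunk` helper below is hypothetical; the point is the bounds check that a flawed parser omits.

```python
# Illustrative only: the kind of bounds check a media parser must perform.
# A length field taken from untrusted input has to be validated against the
# real buffer size before slicing, or the parser reads out of bounds.
import struct

def read_chunk(buf: bytes, offset: int) -> bytes:
    """Read a length-prefixed chunk: 4-byte big-endian length, then payload."""
    if offset + 4 > len(buf):
        raise ValueError("truncated header")
    (length,) = struct.unpack_from(">I", buf, offset)
    start = offset + 4
    if length > len(buf) - start:      # the check a flawed parser omits
        raise ValueError("declared length exceeds buffer")
    return buf[start:start + length]
```

In memory-unsafe C++ (as in Chromium's media stack), skipping the equivalent check turns a crafted header into an out-of-bounds read.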

Official announcement: Please refer to the link for details – https://nvd.nist.gov/vuln/detail/CVE-2026-3061

CVE-2025-33179 and CVE-2025-33180: About NVIDIA Cumulus Linux and NVOS products (25-02-2026)

Preface: Both NVIDIA InfiniBand switches and NVIDIA Spectrum switches are based on technology from Mellanox Technologies. The Spectrum switch ASIC portfolio, originally developed by Mellanox for high-performance Ethernet networking, was rebranded under NVIDIA and is now a core component of NVIDIA’s networking division. NVIDIA completed the acquisition of Mellanox Technologies, a major supplier of high-performance interconnect technology (switches, NICs), in April 2020 for approximately $7 billion. This strategic move enhanced NVIDIA’s data center networking capabilities, specifically in InfiniBand and Ethernet, to support AI and high-performance computing.

Background: Cumulus Linux is optimized for Ethernet fabrics, while NVOS/Onyx is largely utilized in high-performance InfiniBand environments.

-Key switches supporting NVIDIA Cumulus Linux include:

  • Spectrum-4: SN5600, SN5600D, SN5400
  • Spectrum-2/3: SN3700, SN3700C, SN4600, SN4700
  • Spectrum-1: SN2700, SN2100, SN2745 

Example: The Spectrum-4 series (including the SN5600, SN5600D, and SN5400) is a line of physical Ethernet switches (hardware).

Use Cases: Ideal for hyperscale cloud data centers and enterprise AI networks, emphasizing scalability and full customizability.

-NVOS (NVIDIA Onyx) or similar OS typically supports:

Quantum/Quantum-2 InfiniBand: Switches designed for high-performance AI, such as the Quantum-2 series.

Use Cases: Focused on High-Performance Computing (HPC) and large-scale AI training clusters (AI Factories), particularly environments utilizing NVLink for GPU interconnects.

Note: As of early 2026, NVIDIA is focusing on standardizing the management commands (NVUE) across both systems to reduce the complexity of automation workflows when transitioning between different operating systems.

Cumulus Linux (Native Linux): When you SSH in, you land in a standard Debian Linux bash shell. You configure the switch using the NVUE (NVIDIA User Experience) object model via the nv command (e.g., nv set interface swp1…).

Vulnerability Note: The CVEs (CVE-2025-33179/33180) specifically target the NVUE API and CLI engine found in Cumulus Linux 5.x and later.

Vulnerability details:

CVE-2025-33179 NVIDIA Cumulus Linux and NVOS products contain a vulnerability in the NVUE interface, where a low-privileged user could run an unauthorized command. A successful exploit of this vulnerability might lead to escalation of privileges.

CVE-2025-33180 NVIDIA Cumulus Linux and NVOS products contain a vulnerability in the NVUE interface, where a low-privileged user could inject a command. A successful exploit of this vulnerability might lead to escalation of privileges.
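
As a generic illustration of the command-injection class behind CVE-2025-33180 (an assumed pattern, not NVIDIA's actual NVUE code): splicing untrusted input into a shell string lets a value like "swp1; reboot" run as two commands, while quoting it, or better, passing an argument vector, keeps it inert. The helper names below are hypothetical.

```python
# Two defensive ways to build a CLI invocation from untrusted input.
import shlex

def build_shell_command(interface: str) -> str:
    """Quote untrusted input before it ever reaches a shell string."""
    return "nv show interface " + shlex.quote(interface)

def build_argv(interface: str) -> list[str]:
    """Better still: an argument vector that no shell ever parses."""
    return ["nv", "show", "interface", interface]

payload = "swp1; reboot"   # a hostile value a low-privileged user might supply
```

With `build_argv`, the payload arrives at the command as a single opaque argument; the injected `; reboot` never reaches a shell.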

Official announcement: Please refer to the link for details – https://nvidia.custhelp.com/app/answers/detail/a_id/5722

AMD-SB-3043: side-channel analysis for privacy applications on confidential VMs (24th Feb 2026)

Preface: AMD-SB-3042 is a formal advisory for a specific vulnerability, while AMD-SB-3043 is an advisory regarding an analytical tool (SNPeek) used to detect such vulnerabilities.

Background: When an AMD Zen 5 system runs SEV-SNP, guest memory encryption is handled by the processor hardware rather than by a traditional hypervisor such as VMware or Hyper-V, so the traffic a host-side tool like SNPeek can collect is limited to unencrypted (shared) data. Details are shown below:

Limitations of SNPeek: If a tool like SNPeek is used to intercept traffic on the host side, it can only see data marked as “Shared” (memory usually exposed so the hypervisor can assist with network or disk I/O). Data in Private Memory always appears encrypted to SNPeek; the hypervisor cannot read its plaintext content at all.

Potential Risk Warning: Despite strong hardware encryption, the recently discovered StackWarp (CVE-2025-29943) vulnerability shows that a malicious hypervisor could still influence the execution path of Zen 5 virtual machines by manipulating the CPU’s internal “Stack Engine.” While this does not mean it can directly “read” encrypted memory, it can achieve indirect attacks.

AMD-SB-3043: Analytical Framework (SNPeek)

  • Nature: A bulletin regarding a research framework and toolkit for evaluating side-channel risks in Confidential VMs (CVM).
  • Core Content: Describes the SNPeek open-source toolkit.
  • Function & Purpose:
    • SNPeek is not a single vulnerability but an automated analysis pipeline that uses machine learning to assess how sensitive a CVM application is to side-channel attacks.
    • It helps developers quantify how much information an application might leak when running in encrypted environments like SEV-SNP.
    • It provides configurable attack primitives to help developers locate “weak points” in their code and guides the implementation of mitigations (e.g., oblivious memory access).
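
The "oblivious memory access" mitigation mentioned above can be sketched in miniature. This is an assumed example, not from the bulletin or the SNPeek toolkit, and the function names are hypothetical: the idea is to replace a secret-dependent branch with a constant-time select so that control flow (and thus counters or access patterns) does not depend on the secret.

```python
# A secret-dependent branch versus a constant-time select.
def select_branchy(secret_bit: int, a: int, b: int) -> int:
    # Leaky style: which path executes depends on the secret, which is the
    # kind of signal side-channel tooling tries to observe.
    return a if secret_bit else b

def select_ct(secret_bit: int, a: int, b: int) -> int:
    # Constant-time style: the same instructions run for either secret value.
    mask = -(secret_bit & 1)          # 0 -> all-zero mask, 1 -> all-one mask
    return (a & mask) | (b & ~mask)
```

Both functions compute the same result; only the second keeps the execution path independent of `secret_bit`. (In Python this is purely didactic since the interpreter itself is not constant-time; the pattern matters in compiled code.)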

Official details and announcement: AMD’s assessment is that all side-channel techniques demonstrated in the paper fall within the category of already known, documented, and out-of-scope behaviors according to the published SEV/SNP threat model. AMD has introduced features on Zen 5 processors, specifically Ciphertext Hiding and PMC Virtualization, that address the ciphertext visibility and HPC-based leakage paths highlighted by the researchers.

AMD recommends software developers employ existing best practices, including constant-time algorithms, and avoiding secret-dependent data accesses where appropriate. Please refer to the link for details – https://www.amd.com/en/resources/product-security/bulletin/amd-sb-3043.html

Edge TPU (an ASIC accelerator developed by Google) – Episode 1 (23rd Feb 2026)

Preface: PyCoral is a library specific to the Edge TPU. While TensorFlow Lite (TFLite) can run on a standard CPU, PyCoral is the dedicated library used to delegate those operations to the Edge TPU hardware.

PyCoral API: This is a Python library specifically designed by Google to run inference on Coral Edge TPU hardware, such as the Coral USB Accelerator or M.2 modules. It is built on top of TensorFlow Lite.

Nvidia H100: This is a high-end data center GPU based on the Hopper architecture. It uses Nvidia’s proprietary software stack, including the CUDA toolkit, TensorRT, and the Transformer Engine to accelerate AI workloads.

Background: It is accurate to say that foundational memory management principles—specifically allocation and copying (malloc/new, memcpy)—are the basis for both CUDA/TensorRT and Coral API inference, though they operate on different memory spaces.

  • CUDA/TensorRT (GPU-centric): Uses cudaMalloc and cudaMemcpy to manage dedicated GPU device memory.
  • PyCoral API/TFLite (CPU-centric/Edge): Primarily uses malloc or new for CPU-based input/output buffers and memcpy to manage memory within host memory, even when interacting with the Edge TPU.

In both cases, efficient management of data movement between host (CPU) and device (GPU/TPU) is key, making memory allocation and copying the common denominator.
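
The "common denominator" above can be shown with plain Python buffers as a stand-in for malloc/memcpy (cudaMalloc/cudaMemcpy follow the same allocate-then-copy shape on the GPU side). The sizes and the `push_frame` helper are illustrative assumptions, not part of either API.

```python
# Allocate once, copy per frame: the pattern shared by host and device code.
FRAME_BYTES = 224 * 224 * 3            # e.g. one RGB frame for a 224x224 model

# The allocation step (the malloc/new or cudaMalloc analogue).
input_buffer = bytearray(FRAME_BYTES)

def push_frame(dst: bytearray, frame: bytes) -> None:
    """Copy a frame into the preallocated buffer (the memcpy analogue)."""
    if len(frame) != len(dst):
        raise ValueError("frame size mismatch")
    dst[:] = frame                     # byte-for-byte copy, no reallocation

push_frame(input_buffer, bytes(FRAME_BYTES))   # copy a zero-filled test frame
```

Reusing one allocation per stream, rather than allocating per frame, is the efficiency point the paragraph makes for both CUDA and Edge TPU pipelines.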

PyCoral API (pycoral module): This is a Python library built on top of the TensorFlow Lite Python API (tflite_runtime). It provides convenience functions and additional features (like model pipelining and on-device transfer learning) to simplify development with Python.

Coral C++ API (libcoral): This is a C++ library built on top of the TensorFlow Lite C++ API. It offers the same functionality as the PyCoral API but for C++ applications.

Cyber security focus: The most common vulnerability occurs when developers call .get() on a std::unique_ptr to obtain the raw pointer, and then continue to use that raw pointer after the std::unique_ptr has gone out of scope or been destroyed. Is C++ TPU programming related to this issue? Please refer to the recommendations in the diagram for details.

On February 22, 2026, the primary coronal hole aligning with Earth is located in the Sun’s northern hemisphere.

Preface: Earlier in February 2026, a separate large coronal hole was observed in the Sun’s southern hemisphere. However, the specific feature aligning with Earth today, February 22, is the one in the northern hemisphere.

Current Space Weather Forecast (Feb 22, 2026)

Geomagnetic Activity: Forecasters expect “unsettled to active” levels. The arrival of fast-moving solar particles is predicted to elevate geomagnetic activity to levels between Kp 2 and Kp 4.

Solar Activity: Overall activity is currently rated as Low to Very Low, with the solar disk notably appearing “spotless” for the first time in several years.

Impact: These conditions are unlikely to cause major technological disruptions but may trigger minor disturbances in the magnetic field and potential auroral activity at high latitudes.

Reference: On February 22, Earth’s magnetic field may grow disturbed as fast solar wind arrives from a coronal hole now at a geoeffective position. Please refer to the link for details – https://www.solarmonitor.org/full_disk.php?date=20260222&type=saia_00193&indexnum=1

CVE-2025-33239 and CVE-2025-33240: Regarding NVIDIA Megatron Bridge (20th Feb 2026)

Preface: Artificial intelligence is both harmful and beneficial. Why is it harmful? Fundamentally, it reduces opportunities for low-skilled jobs. Speakers chant slogans like “Smart living, increased productivity.” However, its underlying problems seem difficult to conceal, so you can learn about the latest developments in artificial intelligence from online newspapers and articles. Today, when you seek answers from artificial intelligence, the answers it provides may not be the truth! Why have humans been able to survive and thrive on Earth for thousands of years? The answer is: survival of the fittest.

Background: Megatron-Core and Megatron-LM are open-source tools that are typically used together to train LLMs at scale across GPUs. Megatron-Core expands the capability of Megatron-LM.

NeMo Megatron Bridge is utilized by AI researchers, infrastructure engineers, and developers focused on high-performance training and fine-tuning of large language models (LLMs) and foundation models, particularly those bridging the Hugging Face ecosystem with NVIDIA’s Megatron-Core. NVIDIA H100 GPU introduced support for a new datatype, FP8 (8-bit floating point), enabling higher throughput of matrix multiplies and convolutions. Megatron Bridge uses the NVIDIA TransformerEngine (TE) to leverage speedups from FP8.

While NVIDIA developed Megatron Bridge to facilitate checkpoint conversion between NVIDIA NeMo and other deep learning frameworks, OpenAI utilizes its own internal infrastructure. As of 2026, NVIDIA Megatron Bridge is primarily used by large enterprises, Cloud Service Providers (CSPs), and Sovereign AI initiatives that need to train or deploy open-source models (such as Llama 3, Mistral, or Qwen) at massive scale on NVIDIA hardware.

Vulnerability details:

CVE-2025-33239 NVIDIA Megatron Bridge contains a vulnerability in a data merging tutorial, where malicious input could cause a code injection. A successful exploit of this vulnerability might lead to code execution, escalation of privileges, information disclosure, and data tampering.

CVE-2025-33240 NVIDIA Megatron Bridge contains a vulnerability in a data shuffling tutorial, where malicious input could cause a code injection. A successful exploit of this vulnerability might lead to code execution, escalation of privileges, information disclosure, and data tampering.
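
As a generic illustration of the code-injection class in these two tutorial CVEs (an assumed pattern, not NVIDIA's actual tutorial code): evaluating a field from an untrusted dataset with `eval()` executes whatever expression the data's author wrote, while `ast.literal_eval()` accepts only Python literals and rejects everything else.

```python
# Parsing untrusted "config-like" strings from a dataset.
import ast

record = "__import__('os').getcwd()"   # hostile value embedded in a dataset

def parse_unsafe(value: str):
    # BAD: runs arbitrary attacker-chosen code during data preparation.
    return eval(value)

def parse_safe(value: str):
    # GOOD: only literals (numbers, strings, lists, dicts...) are accepted;
    # anything executable raises ValueError instead of running.
    return ast.literal_eval(value)
```

A data-merging or data-shuffling script that uses the safe variant turns malicious input into a parse error rather than code execution.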

Official announcement: Please refer to the link for details – https://nvidia.custhelp.com/app/answers/detail/a_id/5781

CVE-2025-33245: NVIDIA NeMo 2.0+ shifts away from pickle (19th Feb 2026)

Preface: NeMo 2.0 is NVIDIA’s major modernization of the NeMo ecosystem.

Two things to remember about NeMo 2.0:

1. NeMo 2.0 is the training & model building framework.

It focuses on:

  • Model architectures (LLMs, ASR, TTS, multimodal)
  • Training pipelines
  • NeMo Run + NeMo-based microservices
  • Distributed GPU/accelerated workflows

2. NeMo Guardrails and NeMo Curator are NOT part of the NeMo 2.0 training stack.

They live adjacent to NeMo 2.0, serving two different lifecycle phases.

Background: NeMo 1.x modules (ASR collections, VAD, etc.) used pickle because they relied heavily on Python multiprocessing and Python objects.

NeMo 2.0 is moving toward language- and framework-agnostic formats.

Instead of pickle, NeMo 2.0 favors:

  • Safetensors (for weights)
  • JSON / YAML (for metadata)
  • Parquet (for curated datasets)
  • NumPy / torch tensors loaded explicitly
  • Hugging Face-compatible formats

These formats are:

  • Safe
  • Portable across hardware and OS
  • Usable by non-Python systems
  • Compatible with cloud trust boundaries

NeMo Curator and NeMo Guardrails are designed to avoid pickle entirely.

Even though older NeMo components still used pickle internally:

  • NeMo Curator does not ingest pickle data
  • NeMo Guardrails never used pickle at all
  • NeMo 2.0 framework minimizes it or removes it

This aligns with modern security guidance for LLM infrastructure.
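
The reason pickle is singled out can be shown with a minimal, harmless demonstration (the payload here only records that it ran, but a real attacker gets the same hook): unpickling invokes a callable chosen by whoever produced the data, whereas JSON parsing has no such hook.

```python
# Why "loading a pickle file" is code execution, not just data loading.
import json
import pickle

executed = []

def record_execution(msg: str) -> str:
    """Harmless stand-in for attacker code; just records that it ran."""
    executed.append(msg)
    return msg

class Payload:
    # __reduce__ tells pickle which callable the *loader* should invoke.
    def __reduce__(self):
        return (record_execution, ("code ran during load",))

blob = pickle.dumps(Payload())
obj = pickle.loads(blob)    # deserializing silently calls record_execution

# A safe format by contrast: data in, data out, no callable involved.
safe = json.loads('{"weights": "model.safetensors"}')
```

This is why the formats listed above (Safetensors, JSON/YAML, Parquet) are considered compatible with cloud trust boundaries: none of them can smuggle a callable into the load path.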

Vulnerability details: CVE-2025-33245 NVIDIA NeMo Framework contains a vulnerability where malicious data could cause remote code execution. A successful exploit of this vulnerability might lead to code execution, escalation of privileges, information disclosure, and data tampering.

Official announcement: Please refer to the link for details – https://nvidia.custhelp.com/app/answers/detail/a_id/5762

Learn more about AMD ID: AMD-SB-3042 (Control Flow Reconstruction using HPCs) [18 Feb 2026]

Preface: AMD EPYC processors are extensively used for High-Performance Computing (HPC) clusters, powering some of the world’s most advanced supercomputers. They are specifically engineered to handle compute-intensive tasks such as scientific simulations, weather forecasting, and complex molecular modelling.

Background: AMD Infinity Guard is a suite of security features built directly into the silicon of the AMD EPYC processor. While it interacts with and protects firmware, its foundation is hardware-based. When AMD Infinity Guard forms Secure Encrypted Virtualization (SEV), the encryption keys are not stored on an external hard disk or in standard bare-metal memory. Instead, they are kept entirely within the processor’s hardware. The actual data belonging to your virtual machine is stored in the system’s “bare-metal” RAM (DRAM), but it is fully encrypted.

In a traditional setup, the hypervisor has “God mode”—it can see everything. With AMD SEV-SNP, the hardware creates a Trusted Execution Environment (TEE) where the hypervisor is demoted to a simple “data mover” that is cryptographically blocked from the VM’s secrets.

Ref: CounterSEVeillance is a novel side-channel attack that targets AMD SEV-SNP (Secure Encrypted Virtualization-Secure Nested Paging), a technology designed to protect confidential virtual machines (VMs) from a malicious hypervisor. Unlike previous attacks that might manipulate the VM’s state, CounterSEVeillance is primarily a passive side-channel attack, making it difficult to detect. 

Official article details

Summary: Researchers from the Universities of Durham and Luebeck have reported a way for a malicious hypervisor to monitor performance counters and potentially recover data from a guest VM.

Affected Products and Mitigation: Performance counters are not protected by Secure Encrypted Virtualization (SEV, SEV-ES, or SEV-SNP).  AMD has defined support for performance counter virtualization in APM Vol 2, section 15.39. Performance Monitoring Counters (PMC) virtualization, available on AMD products starting with AMD EPYC™ 9005 Series Processors, is designed to protect performance counters from the type of monitoring described by the researchers.

For processors released prior to AMD EPYC™ 9005 Series Processors, AMD recommends software developers employ existing best practices, including avoiding secret-dependent data access or control flows where appropriate to help mitigate this potential vulnerability.

Official announcement: Please refer to the link for details – https://www.amd.com/en/resources/product-security/bulletin/amd-sb-3042.html

CVE-2026-20700: Improved state management in Apple products resolves a memory corruption issue. (17th Feb 2026)

Preface: Swift is memory-safe by default. Unlike C, your enum and String cannot “overflow” a buffer and crash the app unless you use the Unsafe prefix.

Background: Use enums to represent mutually exclusive states (e.g., loading, success, error) to eliminate “impossible” states. However, .onAppear { manager.fetchData() } runs every time the view appears, meaning every time SwiftUI reconstructs or re-displays this DataView it triggers fetchData() again. This can lead to multiple overlapping async calls unless explicitly prevented. The enum-based state machine helps protect against impossible logical states, but it does not prevent multiple requests from firing; Swift’s memory safety doesn’t stop logical repetition or resource exhaustion.
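
Since the pattern is not Swift-specific, the enum-based state machine plus the duplicate-fetch guard can be sketched language-agnostically (Python here; the class and method names are illustrative, not Apple's API):

```python
# An enum makes impossible states unrepresentable, and an explicit guard
# stops .onAppear-style re-entry from firing overlapping fetches.
from enum import Enum, auto

class LoadState(Enum):
    IDLE = auto()
    LOADING = auto()
    SUCCESS = auto()
    ERROR = auto()

class DataManager:
    def __init__(self) -> None:
        self.state = LoadState.IDLE
        self.fetch_count = 0

    def fetch_data(self) -> None:
        if self.state is LoadState.LOADING:
            return                      # the guard the enum alone does not give you
        self.state = LoadState.LOADING
        self.fetch_count += 1           # stand-in for starting the real request

manager = DataManager()
manager.fetch_data()    # first appearance: starts a load
manager.fetch_data()    # view re-appears mid-load: ignored, no duplicate request
```

The enum rules out contradictory states (e.g., "success and error at once"), while the early-return guard addresses the re-entry problem the paragraph describes.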

Ref: In Swift, “checking the bounds of a memory buffer” typically refers to ensuring you don’t access memory outside of an allocated range (like an Array or UnsafeBufferPointer).

Vulnerability details: CVE-2026-20700 A memory corruption issue was addressed with improved state management. This issue is fixed in watchOS 26.3, tvOS 26.3, macOS Tahoe 26.3, visionOS 26.3, iOS 26.3 and iPadOS 26.3. An attacker with memory write capability may be able to execute arbitrary code. Apple is aware of a report that this issue may have been exploited in an extremely sophisticated attack against specific targeted individuals on versions of iOS before iOS 26. CVE-2025-14174 and CVE-2025-43529 were also issued in response to this report.

Official announcement: Please refer to the link for details – https://nvd.nist.gov/vuln/detail/CVE-2026-20700

https://support.apple.com/en-us/126353