
On February 22, 2026, the primary coronal hole aligning with Earth is located in the Sun’s northern hemisphere.

Preface: Earlier in February 2026, a separate large coronal hole was observed in the Sun’s southern hemisphere. However, the specific feature aligning with Earth today, February 22, is the one in the northern hemisphere.

Current Space Weather Forecast (Feb 22, 2026)

Geomagnetic Activity: Forecasters expect “unsettled to active” levels. The arrival of fast-moving solar particles is predicted to elevate geomagnetic activity to levels between Kp 2 and Kp 4.

Solar Activity: Overall activity is currently rated as Low to Very Low, with the solar disk notably appearing “spotless” for the first time in several years.

Impact: These conditions are unlikely to cause major technological disruptions but may trigger minor disturbances in the magnetic field and potential auroral activity at high latitudes.

Reference: On February 22, Earth’s magnetic field may grow disturbed as fast solar wind arrives from a coronal hole now in a geoeffective position. Please refer to the link for details – https://www.solarmonitor.org/full_disk.php?date=20260222&type=saia_00193&indexnum=1

CVE-2025-33239 and CVE-2025-33240: Regarding NVIDIA Megatron Bridge (20th Feb 2026)

Preface: Artificial intelligence is both harmful and beneficial. Why is it harmful? Fundamentally, it reduces opportunities for low-skilled jobs. Speakers chant slogans like “Smart living, increased productivity.” However, its underlying problems seem difficult to conceal, so you can learn about the latest developments in artificial intelligence from online newspapers and articles. Today, when you seek answers from artificial intelligence, the answers it provides may not be the truth! Why have humans been able to survive and thrive on Earth for thousands of years? The answer is: survival of the fittest.

Background: Megatron-Core and Megatron-LM are open-source tools that are typically used together to train LLMs at scale across GPUs. Megatron-Core expands the capability of Megatron-LM.

NeMo Megatron Bridge is utilized by AI researchers, infrastructure engineers, and developers focused on high-performance training and fine-tuning of large language models (LLMs) and foundation models, particularly those bridging the Hugging Face ecosystem with NVIDIA’s Megatron-Core. The NVIDIA H100 GPU introduced support for a new datatype, FP8 (8-bit floating point), enabling higher throughput of matrix multiplies and convolutions. Megatron Bridge uses the NVIDIA TransformerEngine (TE) to leverage speedups from FP8.

While NVIDIA developed Megatron Bridge to facilitate checkpoint conversion between NVIDIA NeMo and other deep learning frameworks, OpenAI utilizes its own internal infrastructure. As of 2026, NVIDIA Megatron Bridge is primarily used by large enterprises, Cloud Service Providers (CSPs), and Sovereign AI initiatives that need to train or deploy open-source models (such as Llama 3, Mistral, or Qwen) at massive scale on NVIDIA hardware.

Vulnerability details:

CVE-2025-33239 NVIDIA Megatron Bridge contains a vulnerability in a data merging tutorial, where malicious input could cause a code injection. A successful exploit of this vulnerability might lead to code execution, escalation of privileges, information disclosure, and data tampering.

CVE-2025-33240 NVIDIA Megatron Bridge contains a vulnerability in a data shuffling tutorial, where malicious input could cause a code injection. A successful exploit of this vulnerability might lead to code execution, escalation of privileges, information disclosure, and data tampering.

Official announcement: Please refer to the link for details – https://nvidia.custhelp.com/app/answers/detail/a_id/5781

CVE-2025-33245: NVIDIA NeMo 2.0+ shifts away from pickle (19th Feb 2026)

Preface: NeMo 2.0 is NVIDIA’s major modernization of the NeMo ecosystem.

Two things to remember about NeMo 2.0:

1. NeMo 2.0 is the training & model building framework.

It focuses on:

  • Model architectures (LLMs, ASR, TTS, multimodal)
  • Training pipelines
  • NeMo Run + NeMo-based microservices
  • Distributed GPU/accelerated workflows

2. NeMo Guardrails and NeMo Curator are NOT part of the NeMo 2.0 training stack.

They live adjacent to NeMo 2.0, serving two different lifecycle phases.

Background: NeMo 1.x modules (ASR collections, VAD, etc.) used pickle because they relied heavily on Python multiprocessing and Python objects.

NeMo 2.0 is moving toward language- and framework-agnostic formats.

Instead of pickle, NeMo 2.0 favors:

  • Safetensors (for weights)
  • JSON / YAML (for metadata)
  • Parquet (for curated datasets)
  • NumPy / torch tensors loaded explicitly
  • Hugging Face-compatible formats

These formats are:

  • Safe
  • Portable across hardware and OS
  • Usable by non-Python systems
  • Compatible with cloud trust boundaries
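As a minimal illustration of the "explicit, object-free" loading style above, here is a sketch using NumPy's .npy format (one of the listed alternatives). By default, np.load refuses embedded Python objects, so loading untrusted weight data cannot trigger code execution the way unpickling can. This assumes only NumPy is installed:

```python
import io

import numpy as np

# A small "weights" tensor standing in for model parameters.
weights = np.arange(6, dtype=np.float32).reshape(2, 3)

buf = io.BytesIO()
np.save(buf, weights)  # .npy: a simple, documented binary format
buf.seek(0)

# allow_pickle=False (the default) refuses any embedded Python
# objects, so a hostile .npy file cannot smuggle in a payload.
restored = np.load(buf, allow_pickle=False)
assert (restored == weights).all()
```

The same "explicit load, no object deserialization" principle is what Safetensors applies to framework checkpoints.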

NeMo Curator and NeMo Guardrails are designed to avoid pickle entirely.

Even though older NeMo components still used pickle internally:

  • NeMo Curator does not ingest pickle data
  • NeMo Guardrails never used pickle at all
  • NeMo 2.0 framework minimizes it or removes it

This aligns with modern security guidance for LLM infrastructure.
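For contrast, a minimal Python sketch of why the guidance above steers away from pickle: pickle's __reduce__ hook lets a crafted payload invoke an arbitrary callable during deserialization. The Malicious class and record function here are illustrative only, not taken from NeMo:

```python
import pickle

log = []

def record(msg):
    log.append(msg)

class Malicious:
    # __reduce__ tells pickle how to reconstruct the object; an
    # attacker can make it call any callable with chosen arguments.
    def __reduce__(self):
        return (record, ("code ran during unpickling",))

payload = pickle.dumps(Malicious())
pickle.loads(payload)  # deserialization alone triggers the callable
print(log[0])          # → code ran during unpickling
```

In a real attack the callable would be something like os.system, which is why pickle data from untrusted sources is treated as remote code execution.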

Vulnerability details: CVE-2025-33245 NVIDIA NeMo Framework contains a vulnerability where malicious data could cause remote code execution. A successful exploit of this vulnerability might lead to code execution, escalation of privileges, information disclosure, and data tampering.

Official announcement: Please refer to the link for details –

https://nvidia.custhelp.com/app/answers/detail/a_id/5762

Learn more about AMD ID: AMD-SB-3042 (Control Flow Reconstruction using HPCs) [18 Feb 2026]

Preface: AMD EPYC processors are extensively used for High-Performance Computing (HPC) clusters, powering some of the world’s most advanced supercomputers. They are specifically engineered to handle compute-intensive tasks such as scientific simulations, weather forecasting, and complex molecular modelling.

Background: AMD Infinity Guard is a suite of security features built directly into the silicon of the AMD EPYC processor. While it interacts with and protects firmware, its foundation is hardware-based. When AMD Infinity Guard forms Secure Encrypted Virtualization (SEV), the encryption keys are not stored on an external hard disk or in standard bare-metal memory. Instead, they are kept entirely within the processor’s hardware. The actual data belonging to your virtual machine is stored in the system’s “bare-metal” RAM (DRAM), but it is fully encrypted.

In a traditional setup, the hypervisor has “God mode”—it can see everything. With AMD SEV-SNP, the hardware creates a Trusted Execution Environment (TEE) where the hypervisor is demoted to a simple “data mover” that is cryptographically blocked from the VM’s secrets.

Ref: CounterSEVeillance is a novel side-channel attack that targets AMD SEV-SNP (Secure Encrypted Virtualization-Secure Nested Paging), a technology designed to protect confidential virtual machines (VMs) from a malicious hypervisor. Unlike previous attacks that might manipulate the VM’s state, CounterSEVeillance is primarily a passive side-channel attack, making it difficult to detect. 

Official article details

Summary: Researchers from Universities of Durham and of Luebeck have reported a way for a malicious hypervisor to monitor performance counters and potentially recover data from a guest VM. 

Affected Products and Mitigation: Performance counters are not protected by Secure Encrypted Virtualization (SEV, SEV-ES, or SEV-SNP).  AMD has defined support for performance counter virtualization in APM Vol 2, section 15.39. Performance Monitoring Counters (PMC) virtualization, available on AMD products starting with AMD EPYC™ 9005 Series Processors, is designed to protect performance counters from the type of monitoring described by the researchers.

For processors released prior to AMD EPYC™ 9005 Series Processors, AMD recommends software developers employ existing best practices, including avoiding secret-dependent data access or control flows where appropriate to help mitigate this potential vulnerability.

Official announcement: Please refer to the link for details –

https://www.amd.com/en/resources/product-security/bulletin/amd-sb-3042.html

CVE-2026-20700: Improved state management in Apple products resolves a memory corruption issue (17th Feb 2026)

Preface: Swift is memory-safe by default. Unlike C, your enum and String cannot “overflow” a buffer and crash the app unless you use the Unsafe prefix.

Background: Use enums to represent mutually exclusive states (e.g., loading, success, error) to eliminate “impossible” states. .onAppear { manager.fetchData() } runs every time the view appears: every time SwiftUI reconstructs or re-displays this DataView, it triggers fetchData() again. This can lead to multiple overlapping async calls unless explicitly prevented. The enum-based state machine helps protect against impossible logical states, but it does not prevent multiple requests from firing; Swift’s memory safety does not stop logical repetition or resource exhaustion.
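The overlapping-call problem is not Swift-specific. A minimal Python asyncio analog (DataManager and its in-flight flag are hypothetical names, sketching one common guard) shows how repeated "appear" events can be stopped from firing duplicate fetches:

```python
import asyncio

class DataManager:
    # Analog of guarding an onAppear-triggered fetch: an in-flight
    # flag ensures repeated triggers don't start overlapping requests.
    def __init__(self):
        self._in_flight = False
        self.fetch_count = 0

    async def fetch_data(self):
        if self._in_flight:        # drop duplicate triggers
            return
        self._in_flight = True
        try:
            self.fetch_count += 1
            await asyncio.sleep(0.01)  # simulated network call
        finally:
            self._in_flight = False

manager = DataManager()

async def appear_three_times():
    # The "view" appears three times in quick succession.
    await asyncio.gather(*(manager.fetch_data() for _ in range(3)))

asyncio.run(appear_three_times())
print(manager.fetch_count)  # → 1
```

In SwiftUI the equivalent guard is typically a check on the enum state (e.g., only fetch when the state is not already loading) or using .task, which is cancelled and restarted rather than stacked.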

Ref: In Swift, “checking the bounds of a memory buffer” typically refers to ensuring you don’t access memory outside of an allocated range (like an Array or UnsafeBufferPointer).

Vulnerability details: CVE-2026-20700 A memory corruption issue was addressed with improved state management. This issue is fixed in watchOS 26.3, tvOS 26.3, macOS Tahoe 26.3, visionOS 26.3, iOS 26.3 and iPadOS 26.3. An attacker with memory write capability may be able to execute arbitrary code. Apple is aware of a report that this issue may have been exploited in an extremely sophisticated attack against specific targeted individuals on versions of iOS before iOS 26. CVE-2025-14174 and CVE-2025-43529 were also issued in response to this report.

Official announcement: Please refer to the link for details – https://nvd.nist.gov/vuln/detail/CVE-2026-20700

https://support.apple.com/en-us/126353

CVE-2025-61969 Prequel: AMD uProf allows arbitrary file read/write operations (16 Feb 2026)

Preface: In short, the ioctl concept exists in both, but the implementation is different.

While Linux uses a standard ioctl system call, Windows provides a similar interface through its own set of functions. They are not directly compatible. 

  • Linux (ioctl): A universal Unix-like system call used to perform hardware-specific operations that fall outside standard read/write.
  • Windows (DeviceIoControl): Part of the Win32 API, this function sends control codes directly to a device driver. It is the architectural equivalent of ioctl on Windows.
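On Linux, the ioctl pattern can be sketched from Python's fcntl module using the classic terminal window-size control codes. This is a Linux-only illustration; a pseudo-terminal stands in for the device so no real console is needed:

```python
import fcntl
import pty
import struct
import termios

def get_winsize(fd):
    # Pass a zeroed struct winsize down; the driver fills it in.
    buf = struct.pack("HHHH", 0, 0, 0, 0)
    raw = fcntl.ioctl(fd, termios.TIOCGWINSZ, buf)
    rows, cols, _, _ = struct.unpack("HHHH", raw)
    return rows, cols

def set_winsize(fd, rows, cols):
    # The same syscall, with a different request code, writes state.
    fcntl.ioctl(fd, termios.TIOCSWINSZ, struct.pack("HHHH", rows, cols, 0, 0))

# Demonstrate on a pseudo-terminal pair.
master, slave = pty.openpty()
set_winsize(slave, 24, 80)
print(get_winsize(slave))  # → (24, 80)
```

On Windows, the equivalent pattern is DeviceIoControl with an IOCTL_* control code and input/output buffers; the request-code-plus-buffer shape is the same.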

Background: AMD uProf (AMD MICRO-prof) is a software profiling analysis tool for x86 applications running on Windows, Linux® and FreeBSD operating systems and provides event information unique to the AMD “Zen”-based processors and AMD Instinct™ MI Series accelerators. AMD uProf enables the developer to better understand the limiters of application performance and evaluate improvements.

According to the latest AMD uProf official documentation, supported versions include:

Windows 10 (up to 22H2), Windows 11 (up to 25H2) and Windows Server 2019, 2022, and 2025

Key Components on Windows

After installation on Windows, you can use the following tools:

  • AMDuProf (GUI): A visual interface for performing CPU and power consumption analysis.
  • AMDuProfCLI: A command-line tool for scripted automation or remote analysis.
  • AMDuProfPcm: A tool specifically designed for system-level analysis (such as IPC and memory bandwidth).
  • System Analysis: Monitors system-level performance metrics such as IPC (Instructions Per Clock), memory bandwidth, and cache usage.
  • Power Profiling: Tracks system thermal and power consumption characteristics in real time, displaying the frequency, temperature, and energy consumption of each component.
  • Microarchitecture Analysis: Detects microarchitectural issues in the source code and provides specific hardware event information for AMD “Zen” series processors.
  • GPU and Heterogeneous Analysis: Supports analysis of GPU activity, kernels, and scheduling for AMD Instinct MI series accelerators.

Vulnerability details: CVE-2025-61969 Incorrect permission assignment in AMD µProf performance analysis tool-suite may allow a local user-privileged attacker to achieve privilege escalation, potentially resulting in arbitrary code execution.

An external researcher reported a vulnerability in the AMD uProf performance analysis tool-suite, specifically within the AMDPowerProfiler.sys driver, that could allow arbitrary file read/write operations due to insufficient access control checks.

AMD determined that this issue occurs because the driver fails to properly validate user access when handling IOCTL requests, potentially allowing unprivileged users to escalate privileges and resulting in arbitrary code execution.

Official announcement: Please refer to the link for details –

https://www.amd.com/en/resources/product-security/bulletin/amd-sb-9022.html

AMD ID: AMD-SB-8022 – A closer look at Optical Probing of the Readback CRC Bus (13th Feb 2026)

Preface: AMD 7000 series (7-series) processors are extensively used to build High-Performance Computing (HPC) clusters. AMD provides 7-series solutions at both enterprise-grade and consumer/prosumer levels:

  • AMD EPYC™ 7002 and 7003 Series: These server-grade processors (codenamed “Rome” and “Milan”) are specifically designed for commercial and scientific HPC. They offer up to 64 cores per socket, high memory bandwidth (8 channels), and extensive PCIe Gen4 lanes to reduce data bottlenecks.
  • AMD Ryzen™ 7000 Series: While typically consumer CPUs, they are often used for “personal HPC” or small-scale clusters due to their high clock speeds and performance-per-dollar for specialized parallel computing tasks.

Background: The “Readback CRC Bus” refers to the internal logic path or mechanism in FPGAs (especially AMD/Xilinx devices) used to perform readback cyclic redundancy checks.

This is not a physical external “bus,” but a key component of the configuration logic, primarily used to ensure the data integrity of the FPGA’s internal configuration memory.

Academic studies and AMD’s bulletin describe attacks where researchers collect near‑infrared photon emissions that escape from transistor switching events on the FPGA die.

This depends on silicon’s bandgap (~1.1 eV ⇒ transparent above ~1100 nm). Because of this:

  • Visible light cannot pass through silicon.
  • Near‑IR and SWIR (1.1–2.3 µm) pass through with relatively low attenuation.
  • The plastic/epoxy package is often more opaque, so attacking from the backside of a thinned die is normal.

The reason backside emissions are detectable:

  • switching transistors emit very weak photons,
  • silicon becomes transparent above ~1100 nm,
  • the backside offers a direct path to the active circuitry after thinning,
  • the metallization layers on the front side block light.

This is the same principle used in IRIS inspection methods, which also rely on silicon’s IR transparency for imaging.
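The ~1100 nm figure follows directly from the photon-energy relation E = hc/λ: photons with energy below silicon's ~1.12 eV bandgap cannot excite carriers across the gap, so silicon transmits them. A back-of-envelope check:

```python
# Photon energy E = h*c / lambda, so the transparency cutoff is
# lambda = h*c / E_gap.
h = 4.135667696e-15   # Planck constant, eV*s
c = 2.99792458e8      # speed of light, m/s
E_gap = 1.12          # silicon bandgap at room temperature, eV

cutoff_nm = h * c / E_gap * 1e9
print(round(cutoff_nm))  # → 1107
```

Anything longer than roughly 1107 nm passes through the substrate, which is why backside emission microscopy works with near-IR detectors.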

Technical Summary: By leveraging a physical optical side channel, an attacker could recover plaintext configuration data from encrypted bitstreams. AMD recommends maintaining good physical security practices and keeping systems closed unless needed for maintenance and repairs by authorized personnel.

Affected Products and Mitigation: The testing by the academics was done on AMD Xilinx 7 series FPGAs. This is a physical backside attack and is outside of the threat model for AMD 7-series FPGAs.

Official announcement: Please refer to the link for more details – https://www.amd.com/en/resources/product-security/bulletin/amd-sb-8022.html

AMD ID: AMD-SB-6026 – AMD does not believe that the reported vulnerability exists within the MI3XX GPU designs. (12th Feb 2026)

Preface: The MI3xx series (specifically the AMD Instinct MI300 and MI350 series) is designed and manufactured by AMD. These chips are not traditional graphics cards for gaming; they are high-performance GPU accelerators specifically designed for Generative AI, large-scale AI training, and High-Performance Computing (HPC).

Background: In the AMD Instinct MI300A architecture, the shared last-level cache is technically known as the MALL (Memory Attached Last Level) cache and is officially branded as the AMD Infinity Cache. (“MIG,” by contrast, is NVIDIA’s Multi-Instance GPU technology, the subject of the research report discussed below.)

Is the L3/Last-Level Cache (LLC) shared across all cores?

  • GPU L3/Infinity Cache (MALL)
  • Shared across all clients (CPU & GPU).
  • The MI300A features a massive 256 MB shared Last-Level Cache (LLC), often called the AMD Infinity Cache or MALL (Memory Attached Last Level).

This specific cache is located on the I/O Die (IOD) and sits beyond the coherence point, meaning it is accessible by both the 24 CPU cores and the 228 GPU Compute Units.

  • The MI300A uses a truly shared last‑level cache (MALL).
  • Shared caches always raise the theoretical possibility of side channels.
  • But only if an attacker can cause and observe measurable eviction‑based interference.
  • AMD claims their virtualization model prevents this for GPU workloads.

Ref: NVIDIA H100 GPUs with Multi-Instance GPU (MIG) enabled provide full hardware-level isolation, ensuring that each partitioned “GPU Instance” (GI) has its own dedicated high-bandwidth memory (HBM3), compute cores, and L2 cache. Each MIG instance has its own independent path through the memory system, including dedicated cross-switch ports, L2 cache groups, memory controllers, and DRAM address buses. Many cache-based side-channel attacks rely heavily on the time delays (latency differences) associated with accessing memory in the L2 (or L3/LLC) cache.

Security Focus: The researchers shared with AMD a report titled “Behind Bars: A Side-Channel Attack on NVIDIA H100 MIG Cache Partitioning Using Memory Barriers”.

Based on MI3XX GPU architectural analysis, AMD has determined that the Guest VM-initiated operations of kernel launch related memory operations only impact the local XCD partition spatially allocated to the Guest VM and do not result in any observable interference on any other Guest VM load operations. Therefore, AMD does not believe that the reported vulnerability exists within the MI3XX GPU designs.

Official announcement: Please refer to the link for more details –

https://www.amd.com/en/resources/product-security/bulletin/amd-sb-6026.html

CVE-2026-2242: Vulnerabilities in janet-lang may also affect partner devices. (11th Feb 2026)

Preface: Janet has a small footprint: it fits in environments where 2.5 MB of RAM is considered “plenty”. Because Janet is written in C and compiles to a small binary (roughly 200–300 KB), it is frequently used on ARM Cortex-based systems. While Janet does not run on the GPU (it is a CPU-bound language), it is often used as the control/orchestration layer on heterogeneous AI platforms.

Background: Janet can be used to manage the data pipeline, calling into C/C++ libraries that handle the heavy GPU lifting via CUDA. If code allows external scripts or users to submit code dynamically, it may use Janet’s built-in eval-string function. Is that vulnerable to CVE-2026-2242?

My speculation: Using eval-string does expose your Jetson pipeline to CVE‑2026‑2242, because:

CVE‑2026‑2242 is triggered during compilation of Janet code, and eval-string compiles code dynamically. If a malicious user submits a specially-crafted Janet expression that enters the vulnerable path inside:

janetc_if → src/core/specials.c

then Janet may perform an out‑of‑bounds read, which can cause:

  • interpreter crash
  • denial of service
  • undefined behavior inside the Janet process
Even though the CVE requires “local execution,” allowing remote users to submit code and then calling eval-string makes that local execution possible.

Therefore, the Jetson pipeline becomes exploitable.
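A Python analog of the speculation above (Python's eval standing in for Janet's eval-string; the helper name is hypothetical): any entry point that compiles and runs user-submitted text effectively turns "local execution" into something remote users can reach.

```python
def run_user_expression(expr: str):
    # UNSAFE: attacker-controlled text reaches the compiler/interpreter,
    # just as attacker text reaches janetc_if via eval-string.
    return eval(expr)

# A benign arithmetic request behaves as intended...
assert run_user_expression("2 + 3") == 5

# ...but the same entry point accepts arbitrary code. This benign
# "payload" merely reads an attribute, standing in for anything an
# attacker could express in the language -- including input crafted
# to hit a vulnerable compiler path.
leaked = run_user_expression("__import__('sys').version_info.major")
print(leaked)  # → 3
```

The mitigation mirrors the Janet case: never feed untrusted text to eval-string (or eval), or at minimum patch the interpreter and sandbox the process that runs it.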

Vulnerability details: A vulnerability was identified in janet-lang janet up to 1.40.1. It affects the function janetc_if of the file src/core/specials.c. A crafted manipulation can lead to an out-of-bounds read. The attack must be launched locally. The exploit has been publicly disclosed and may be utilized. The fix is commit c43e06672cd9dacf2122c99f362120a17c34b391; applying this patch is advisable.

Official announcement: Please refer to the link for more details –

https://www.tenable.com/cve/CVE-2026-2242

CVE-2026-25592: An Arbitrary File Write vulnerability has been identified in Microsoft’s Semantic Kernel .NET SDK (10th Feb 2026)

Preface: Microsoft has launched several products (Semantic Kernel, Microsoft.Extensions.AI, and Azure.OpenAI), which initially caused confusion for developers. Furthermore, the Semantic Kernel is currently being “upgraded” to the new Microsoft Agent Framework, leading some developers to question future support for the Semantic Kernel.

Python, with its rich libraries and large open-source community, remains the “universal language” in artificial intelligence research and data science. LangChain, as the primary alternative, is also based on Python. Python remains dominant in the field of artificial intelligence/machine learning.

Background: Microsoft developed Semantic Kernel as an open-source SDK to bridge conventional programming languages (C#, Python, Java) with advanced LLMs, enabling developers to build enterprise-grade, agentic AI applications. It simplifies orchestrating complex AI workflows, allows swapping models without rewriting code, and ensures secure, compliant integration of AI with existing systems.

Semantic Kernel uses plugins (formerly skills or functions) to extend its capabilities beyond its core prompt-engineering functionality and integrate with external services, data sources, and APIs.

  • Semantic Kernel acts as an orchestrator, using the LLM to decide which plugins to use and when, effectively automating complex tasks that involve multiple steps and tools. The LLM determines the necessary sequence of actions to fulfill a user’s request.
  • Plugins allow the LLM to interact with real-world applications and data. For example, a plugin could retrieve real-time weather information, search a database, book a flight, or send an email.

The airline and travel industry is beginning to use Microsoft Semantic Kernel to build intelligent, AI-powered applications, particularly for automating customer service and booking processes. Developers are using Semantic Kernel to build dialogue agents that can understand complex booking instructions, such as “book the cheapest flight from Hong Kong to Tokyo”, and handle the booking process independently.

Vulnerability details: CVE-2026-25592 Semantic Kernel is an SDK used to build, orchestrate, and deploy AI agents and multi-agent systems. Prior to 1.70.0, an Arbitrary File Write vulnerability has been identified in Microsoft’s Semantic Kernel .NET SDK, specifically within the SessionsPythonPlugin. The problem has been fixed in Microsoft.SemanticKernel.Core version 1.70.0. As a mitigation, users can create a Function Invocation Filter which checks the arguments being passed to any calls to DownloadFileAsync or UploadFileAsync and ensures the provided localFilePath is allow listed.
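The allow-list mitigation can be sketched language-neutrally. The real filter would be written in C# against Semantic Kernel's filter API, but the core check is the same, and the path must be canonicalized before comparison, otherwise a "../" sequence defeats a naive prefix test. The directory name here is hypothetical:

```python
import os

# Hypothetical allow list; a real deployment would configure this.
ALLOWED_DIRS = ["/tmp/sk-downloads"]

def is_allowed(local_file_path: str) -> bool:
    # Canonicalize first: resolve symlinks and ".." segments, otherwise
    # "/tmp/sk-downloads/../../etc/passwd" would pass a prefix check.
    real = os.path.realpath(local_file_path)
    roots = (os.path.realpath(d) for d in ALLOWED_DIRS)
    return any(real == r or real.startswith(r + os.sep) for r in roots)

print(is_allowed("/tmp/sk-downloads/report.txt"))        # → True
print(is_allowed("/tmp/sk-downloads/../../etc/passwd"))  # → False
```

In Semantic Kernel terms, this check would run inside a Function Invocation Filter against the localFilePath argument of DownloadFileAsync and UploadFileAsync before the call is allowed to proceed.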

Official announcement: Please refer to the link for details – https://nvd.nist.gov/vuln/detail/CVE-2026-25592