All posts by admin

AMD ID: AMD-SB-6026 – AMD does not believe that the reported vulnerability exists within the MI3XX GPU designs. (12th Feb 2026)

Preface: The MI3xx series (specifically the AMD Instinct MI300 and MI350 series) is designed and manufactured by AMD. These chips are not traditional graphics cards for gaming; they are high-performance GPU accelerators specifically designed for Generative AI, large-scale AI training, and High-Performance Computing (HPC).

Background: In the AMD Instinct MI300A architecture, the last-level cache is technically known as the MALL (Memory Attached Last Level) cache and is officially branded as the AMD Infinity Cache. (“MIG”, by contrast, is a term associated with NVIDIA’s Multi-Instance GPU technology.)

Is the L3/Last-Level Cache (LLC) shared across all cores?

  • GPU L3/Infinity Cache (MALL)
  • Shared across all clients (CPU & GPU).
  • The MI300A features a massive 256 MB shared Last-Level Cache (LLC), often called the AMD Infinity Cache or MALL (Memory Attached Last Level).

This specific cache is located on the I/O Die (IOD) and sits beyond the coherence point, meaning it is accessible by both the 24 CPU cores and the 228 GPU Compute Units.

  • The MI300A uses a truly shared last‑level cache (MALL).
  • Shared caches always raise the theoretical possibility of side channels.
  • But only if an attacker can cause and observe measurable eviction‑based interference.
  • AMD claims their virtualization model prevents this for GPU workloads.

Ref: NVIDIA H100 GPUs with Multi-Instance GPU (MIG) enabled provide full hardware-level isolation, ensuring that each partitioned “GPU Instance” (GI) has its own dedicated high-bandwidth memory (HBM3), compute cores, and L2 cache. Each MIG instance has its own independent path through the memory system, including dedicated cross-switch ports, L2 cache groups, memory controllers, and DRAM address buses. Many cache-based side-channel attacks rely heavily on the time delays (latency differences) associated with accessing memory in the L2 (or L3/LLC) cache.
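
Those latency differences are the whole signal: a cached access completes faster than one served from DRAM. The sketch below shows only the measurement idea in Python (illustrative only; Python cannot flush CPU caches or pin cores, and real attacks such as prime+probe are written in C with cycle-accurate timers; the array sizes here are assumptions, not measured cache dimensions):

```python
import time

def best_time_ns(data, indices, reps=5):
    """Return the fastest of several timed passes over an access pattern.
    Taking the minimum filters out scheduler noise."""
    best = float("inf")
    for _ in range(reps):
        start = time.perf_counter_ns()
        total = 0
        for i in indices:
            total += data[i]
        best = min(best, time.perf_counter_ns() - start)
    return best

small = bytearray(4 * 1024)              # easily fits in L1/L2
large = bytearray(64 * 1024 * 1024)      # far exceeds a typical LLC
hot = list(range(0, len(small), 4))[:1024]       # repeatedly hits cached lines
cold = list(range(0, len(large), 65536))[:1024]  # strides past cache lines

warm_ns = best_time_ns(small, hot)
miss_ns = best_time_ns(large, cold)
# An attacker on a shared LLC compares timings like these to infer
# which cache sets a victim has recently touched.
```

The two patterns perform the same number of reads, so any stable timing gap between them reflects memory-hierarchy behavior rather than loop overhead.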

Security Focus: The researchers shared with AMD a report titled “Behind Bars: A Side-Channel Attack on NVIDIA H100 MIG Cache Partitioning Using Memory Barriers”.

Based on MI3XX GPU architectural analysis, AMD has determined that Guest VM-initiated, kernel-launch-related memory operations only impact the local XCD partition spatially allocated to that Guest VM and do not result in any observable interference with any other Guest VM's load operations. Therefore, AMD does not believe that the reported vulnerability exists within the MI3XX GPU designs.

Official announcement: Please refer to the link for more details –

https://www.amd.com/en/resources/product-security/bulletin/amd-sb-6026.html

CVE-2026-2242: Vulnerabilities in janet-lang may also affect partner devices. (11th Feb 2026)

Preface: Janet has a small footprint: it fits in environments where 2.5 MB of RAM is considered “plenty”. Because Janet is written in C and compiles to a small binary (roughly 200–300 KB), it is frequently used on ARM Cortex-based systems. While Janet does not run on the GPU (it is a CPU-bound language), it is often used as the control/orchestration layer on heterogeneous AI platforms.

Background: Janet can be used to manage the data pipeline, calling into C/C++ libraries that handle heavy GPU lifting via CUDA. If an application allows external scripts or users to submit code dynamically, it may use Janet’s built-in eval-string function. Is that vulnerable to CVE-2026-2242?

My speculation: Using eval-string does expose your Jetson pipeline to CVE‑2026‑2242, because:

CVE‑2026‑2242 is triggered during compilation of Janet code, and eval-string compiles code dynamically. If a malicious user submits a specially-crafted Janet expression that enters the vulnerable path inside:

janetc_if  →  src/core/specials.c

then Janet may perform an out‑of‑bounds read, which can cause:

  • interpreter crash
  • denial of service
  • undefined behavior inside the Janet process
Even though the CVE requires “local execution,” allowing remote users to submit code and then calling eval-string makes that local execution possible.

Therefore, the Jetson pipeline becomes exploitable.
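
This risk pattern is not Janet-specific: eval-string plays the same role as eval in other dynamic languages, in that whatever string reaches it is compiled and executed with the host process's privileges. A minimal Python analogy (Python stands in for Janet here; the function names are hypothetical):

```python
import ast

def run_submission(untrusted: str):
    # DANGEROUS: eval() compiles and executes arbitrary expressions,
    # just as Janet's eval-string compiles submitted Janet code.
    return eval(untrusted)

def run_submission_safely(untrusted: str):
    # Safer when only data is expected: literal_eval accepts literals
    # (numbers, strings, lists, dicts) but never calls or attribute access.
    return ast.literal_eval(untrusted)

print(run_submission_safely("[1, 2, 3]"))      # plain data: accepted
try:
    run_submission_safely("__import__('os').system('id')")
except ValueError:
    print("rejected")                          # code, not data: refused
```

The same principle applies to a Janet pipeline: if submissions are supposed to be data, parse them as data; only reach for eval-string when executing untrusted code is genuinely the requirement, and then sandbox the process itself.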

Vulnerability details: A vulnerability has been identified in janet-lang janet up to version 1.40.1. It affects the function janetc_if of the file src/core/specials.c. Manipulation can lead to an out-of-bounds read. The attack must be launched locally. The exploit has been publicly disclosed and may be utilized. The fix is commit c43e06672cd9dacf2122c99f362120a17c34b391; it is advisable to apply this patch to correct the issue.

Official announcement: Please refer to the link for more details –

https://www.tenable.com/cve/CVE-2026-2242

CVE-2026-25592: An Arbitrary File Write vulnerability has been identified in Microsoft’s Semantic Kernel .NET SDK (10th Feb 2026)

Preface: Microsoft has launched several products (Semantic Kernel, Microsoft.Extensions.AI, and Azure.OpenAI), which initially caused confusion for developers. Furthermore, the Semantic Kernel is currently being “upgraded” to the new Microsoft Agent Framework, leading some developers to question future support for the Semantic Kernel.

Python, with its rich libraries and large open-source community, remains the “universal language” in artificial intelligence research and data science. LangChain, as the primary alternative, is also based on Python. Python remains dominant in the field of artificial intelligence/machine learning.

Background: Microsoft developed Semantic Kernel as an open-source SDK to bridge conventional programming languages (C#, Python, Java) with advanced LLMs, enabling developers to build enterprise-grade, agentic AI applications. It simplifies orchestrating complex AI workflows, allows swapping models without rewriting code, and ensures secure, compliant integration of AI with existing systems.

Semantic Kernel uses plugins (formerly skills or functions) to extend its capabilities beyond its core prompt engineering functionality and integrate with external services, data sources, and APIs.

  • Semantic Kernel acts as an orchestrator, using the LLM to decide which plugins to use and when, effectively automating complex tasks that involve multiple steps and tools. The LLM determines the necessary sequence of actions to fulfill a user’s request.
  • Plugins allow the LLM to interact with real-world applications and data. For example, a plugin could retrieve real-time weather information, search a database, book a flight, or send an email.

The airline and travel industry is beginning to use Microsoft Semantic Kernel to build intelligent, AI-powered applications, particularly for automating customer service and booking processes. Developers are using Semantic Kernel to build dialogue agents that can understand complex booking instructions, such as “book the cheapest flight from Hong Kong to Tokyo”, and handle the booking process independently.

Vulnerability details: CVE-2026-25592 – Semantic Kernel is an SDK used to build, orchestrate, and deploy AI agents and multi-agent systems. Prior to version 1.70.0, an Arbitrary File Write vulnerability exists in Microsoft’s Semantic Kernel .NET SDK, specifically within the SessionsPythonPlugin. The problem has been fixed in Microsoft.SemanticKernel.Core version 1.70.0. As a mitigation, users can create a Function Invocation Filter which checks the arguments passed to any calls to DownloadFileAsync or UploadFileAsync and ensures the provided localFilePath is allow-listed.
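
The essence of the suggested filter is a path allow-list check on localFilePath before DownloadFileAsync or UploadFileAsync runs. The real filter would be written in .NET against Semantic Kernel's filter API; the check itself can be sketched language-neutrally, here in Python, with a hypothetical allow-listed root (/srv/sk-files is an assumption for illustration):

```python
from pathlib import Path

# Hypothetical allow-listed root; a real filter would use the app's own.
ALLOWED_ROOT = Path("/srv/sk-files").resolve()

def local_file_path_allowed(local_file_path: str) -> bool:
    """Resolve the path first so '..' traversal segments cannot step
    outside the allow-listed directory, then test containment."""
    candidate = Path(local_file_path).resolve()
    return candidate.is_relative_to(ALLOWED_ROOT)  # Python 3.9+
```

The key detail is resolving before comparing: a naive string prefix check would accept "/srv/sk-files/../../etc/passwd", while the resolved path clearly falls outside the root.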

Official announcement: Please refer to the link for details – https://nvd.nist.gov/vuln/detail/CVE-2026-25592

The main updates in iOS 26.2.1 include support for the second-generation AirTag and general bug fixes. (9 Feb 2026)

Preface: According to Apple’s security update documentation, two critical zero-day vulnerabilities, CVE-2025-43529 and CVE-2025-14174, were officially patched in iOS 26.2, released on December 15, 2025. CVE-2025-46285 was likewise patched in iOS 26.2.

Upgrading to 26.2.1 will indeed ensure your iPhone is protected against the aforementioned CVE vulnerabilities, but these protections were already in place in version 26.2. If you are already on 26.2.1, the current system is the most secure official release as of today (February 6, 2026).

Focus: There are many services available that can easily provide facial recognition capabilities, including Amazon Rekognition, OpenCV, and Microsoft Azure.

Apple uses its own internal frameworks like Core ML and Vision to perform on-device machine learning and image analysis. With the help of Core ML, facial recognition is definitely possible using the Vision framework, although that requires integrating a previously trained model into your app.

Although iOS 26.2.1 was intended to fix bugs, some users encountered the following problems after updating:

  • Issues with Face ID in iOS 26.2.1: Face ID is unresponsive when unlocking or entering third-party apps (such as banking apps).
  • Some users have reported that after the update, apps that previously supported Face ID login no longer display the facial recognition window.
  • Reports indicate that iOS 26.2.1 may have a storage space reporting error, indirectly causing instability in system components such as Face ID.

Summary: The Face ID fixes in iOS 26.2.1 are more focused on underlying system stability (such as updates to the Secure Enclave feature and adjustments to the audio/camera pipeline) than on errors in the Core ML integration logic. If you encounter a situation where the Vision framework cannot recognize faces while developing your application, you usually need to check the model’s VNRequest configuration.

Although not weekly or mandated, Apple performs:

  • Static code scanning
  • Behavioral analysis on device
  • Privacy API usage scanning
  • Analysis of network calls

Make sure:

✔ No private API usage
✔ No runtime permission circumvention
✔ No silent data upload

Comet 3I/ATLAS is visiting our solar system as an interstellar traveller. What can we learn from it? (6-Feb-2026)

Preface: Philosophy is the “love of wisdom,” defined as the systematic, rational study of fundamental questions regarding existence, knowledge, truth, ethics, and the meaning of life. Civilization develops along with the progress of science, technology and culture.

Earth’s civilization dates back thousands of years. During this period, many inexplicable events occurred, leading to suspicions that they might be related to extraterrestrial civilizations. Despite the discovery of structures similar to the ruins of Puma Punku, the mainstream view avoids the topic. NASA has confirmed that 3I/ATLAS is an interstellar comet. Therefore, speculation about its potential connection to extraterrestrial life is no longer considered valid. Perhaps you’ve already lost interest?

Background: From a scientific perspective, exploring unknown objects in space requires making many technological assumptions. For example, assume this unknown object did not form naturally: could it pose a threat to humanity? Astronomers and professors focus on its scientific parameters and spectral analysis, but conversely: if 3I/ATLAS is a monitoring device, it might also conclude that our Moon and Earth did not form by chance!

Scientific details: The Moon’s existence is not just a coincidence; it is Earth’s gravitational stabilizer.

The Gyroscopic Effect (Axial Stability)

Think of Earth as a spinning top. Without an outside force to steady it, a spinning top eventually wobbles and tips over.

The Moon as an Anchor: The Moon’s gravity exerts a constant pull on Earth’s equatorial bulge (the slight “spare tire” shape Earth has because of its rotation).

Stabilizing the Tilt: This “gravitational grip” keeps Earth’s axial tilt steady at about 23.5 degrees.

Preventing Chaos: Without the Moon, the gravity of other planets (like Jupiter) would pull Earth’s tilt into chaotic shifts, swinging anywhere from 0 to 85 degrees over millions of years.

Reference: Puma Punku is an ancient 6th-century archaeological site in Bolivia, situated at an elevation of 3,850 meters, featuring massive, intricately cut, interlocking red sandstone and andesite blocks. Located near the larger Tiwanaku complex, these ruins are famed for their precise, H-shaped, near-microscopic stonework, which some researchers suggest may indicate advanced engineering techniques.

End of reading.

CVE-2025-47366: Qualcomm remediation – focuses on Memory Corruption during deinitialization. (5th Feb 2026)

Preface: The iframe (Inline Frame) is an HTML element used to embed another document or website within the current web page (e.g., embedding a YouTube video or a Google Map).

Background: High-bandwidth Digital Content Protection (HDCP) in a Trusted Execution Environment (TEE) refers to securing the handshake, authentication, and encryption keys of audio/video content within a secure, isolated area of a device’s processor.

  • When a HDCP session is deinitialized, the non-secure buffer allocated for communication with the TEE is freed.
  • However, if the cleanup sequence does not enforce strict ordering, “lingering references” (such as asynchronous callbacks or TEE drivers) might still attempt to access that memory.
  • This results in a memory corruption (Use-After-Free), allowing a local attacker with low privileges to potentially escalate their rights or cause a system crash. 

This is a memory-integrity issue, not a cryptographic one: memory corruption during deinitialization. Because the vulnerability resides in the way the HLOS (Android kernel/drivers) and TrustZone interact, the fix must be applied at the firmware/kernel level via a system update from the manufacturer (OEM).
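
The ordering contract described above can be modeled in a few lines. This is a Python analogy of the cleanup sequence, not Qualcomm's code: every lingering reference (the asynchronous callbacks) must be quiesced before the shared buffer is released, and any stale access must be rejected cleanly instead of dereferencing freed memory.

```python
class SessionAnalogy:
    """Models the HDCP deinit ordering contract, not the real driver."""

    def __init__(self):
        self.buffer = bytearray(64)   # stands in for the non-secure TEE buffer
        self.callbacks = []           # stands in for lingering async references
        self.closed = False

    def register(self, cb):
        self.callbacks.append(cb)

    def deinit(self):
        # Strict ordering: detach every lingering reference FIRST...
        self.callbacks.clear()
        self.closed = True
        # ...and only then release the buffer.
        self.buffer = None

    def access(self):
        # A stale caller gets a clean error instead of a use-after-free.
        if self.closed or self.buffer is None:
            raise RuntimeError("session deinitialized")
        return bytes(self.buffer)
```

In C, the equivalent discipline is to unregister or synchronize against all callbacks before free(), then null the pointer, which is exactly the "strict ordering" the bulletin says was missing.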

Vulnerability details:

Title: Exposed Dangerous Method or Function in HLOS

Description: Cryptographic issue when a Trusted Zone with outdated code is triggered by a HLOS providing incorrect input.

Technology Area: HLOS

Vulnerability Type: CWE-749 Exposed Dangerous Method or Function.

Risk Level: High (CVSS Score: 7.8)

Affected Platforms: Multiple Qualcomm Chipsets (including Snapdragon series)

Official announcement: Please refer to the link for more details –

https://docs.qualcomm.com/securitybulletin/february-2026-bulletin.html

CVE-2026-25142: If you are using SandboxJS [@nyariv/sandboxjs] for IoT (ESP32) development, please be cautious! (5 Feb 2026)

Preface: The ESP32 is a low-cost, low-power System on a Chip (SoC) microcontroller with integrated Wi-Fi and dual-mode Bluetooth, making it a cornerstone for modern Internet of Things (IoT) applications. It offers direct, high-level control over hardware peripherals, including GPIOs, built-in Flash memory, and network interfaces, with extensive support for low-power operation.

Background: When using SandboxJS (@nyariv/sandboxjs) for ESP32 or any Internet of Things (IoT) development, caution is essential. While the tool is designed to provide a “secure eval runtime environment,” a major vulnerability recently discovered could put your embedded devices at risk.

Core Security Risks

  • Prototype Pollution: A critical vulnerability (CVE-2025-34146) exists in versions 0.8.23 and earlier. An attacker could inject malicious JavaScript code into `Object.prototype`, potentially leading to a denial-of-service (DoS) attack or escape from the sandbox environment to execute arbitrary code.
  • Sandbox Escape: In early 2026, another critical escape vulnerability (GHSA-wxhw-j4hc-fmq6) was disclosed. The cause was that AsyncFunction was not properly isolated, which allowed attackers to access the entire scope and execute native commands.
  • Specific threats to IoT devices: Because ESP32 typically has direct control over hardware (GPIO, Flash memory, network), once the sandbox is breached, attackers may directly manipulate the physical device, steal keys stored in Flash memory, or even perform malicious firmware updates.

Vulnerability details: SandboxJS is a JavaScript sandboxing library. Prior to 0.8.27, SandboxJS does not properly restrict __lookupGetter__, which can be used to obtain prototypes and thereby escape the sandbox, leading to remote code execution. This vulnerability is fixed in 0.8.27.
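
The __lookupGetter__ escape belongs to a broad failure class: the sandbox removes dangerous names but leaves an introspection path from a harmless value back to powerful objects. The same class of failure exists in naive Python sandboxes, which makes for a compact illustration (a Python analogy, not SandboxJS code):

```python
# A naive "sandbox": evaluate user code with all builtins stripped.
payload = "().__class__.__base__.__subclasses__()"
reachable = eval(payload, {"__builtins__": {}})

# Starting from an empty tuple, the payload walked up to `object` and
# enumerated every loaded class, a stepping stone toward os/subprocess.
print(len(reachable) > 0)
```

Blocklisting names is therefore insufficient; robust sandboxes must cut the introspection chains themselves, which is exactly what SandboxJS failed to do for __lookupGetter__.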

Official announcement: Please refer to the link for details –

https://nvd.nist.gov/vuln/detail/CVE-2026-25142

https://github.com/nyariv/SandboxJS/security/advisories/GHSA-9p4w-fq8m-2hp7

Recommendation:

  • Implement hardware isolation – Utilize ESP32’s hardware security features (such as Secure Boot, Flash encryption, and digital signature peripherals) to protect core keys, making it difficult for attackers to obtain sensitive credentials even if application-layer software is cracked.
  • Consider alternatives – For embedded scenarios with extremely high security requirements, consider well-maintained JavaScript engines designed specifically for microcontrollers, such as Espruino or Moddable SDK.

CVE-2025-47363: In Qualcomm-specified products, memory corruption when calculating oversized partition sizes without proper checks. (4th Feb 2026)

Preface: ADAS data streams refer to the constant flow of real-time information collected from the vehicle’s environment by sensors like cameras, radar, lidar, and ultrasonic sensors. This data, along with processed information, is sent to the vehicle’s central computer (ADAS ECU) which uses it to perform functions such as object detection, lane keeping, and adaptive cruise control, ultimately improving safety and driving comfort. The Qualcomm Snapdragon SA9000P is a highly capable, leading-edge AI accelerator designed for Advanced Driver Assistance Systems (ADAS) and autonomous driving, frequently used in combination with the SA8540P SoC as part of the Snapdragon Ride platform.

Background: Qualcomm defines memory-conservative configurations in device trees primarily to optimize boot speed, ensure system stability, and manage the complex, carved-out memory architecture typical of modern mobile SoCs. By limiting available RAM during the initial boot, Qualcomm can skip initializing vast amounts of memory, resulting in significant boot time savings (e.g., 20-30ms per GB of RAM).

DTS is capable of providing attacker‑controlled (or misconfigured) large memory partitions, which is necessary for exploitability. But the DTS alone is not the vulnerability — the bug is in Qualcomm’s handling of these sizes in downstream drivers or frameworks.

Remark: Secure engineering limit for HLOS‑visible reserved regions: do NOT exceed 1/16th of total DDR per region unless Qualcomm documentation explicitly permits it. So the “secure maximum” becomes 2 GB per reserved-memory region. The recommended limit in safety‑critical domains is 1 GB.

Vulnerability details: CVE-2025-47363 – Integer Overflow or Wraparound in Automotive (memory corruption when calculating oversized partition sizes without proper checks).

This means the vulnerable path occurs when a Qualcomm driver or subsystem performs arithmetic on a partition size, and the size is large enough to overflow internal calculations, resulting in corrupted pointers, truncated lengths, or allocated regions smaller or larger than expected.

Even if the original driver itself is not buggy, it can exercise the buggy Qualcomm code by providing a large memory region, which may cause an overflow inside Qualcomm subsystems.
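
The arithmetic failure is easy to reproduce in miniature. Below is a hedged Python model of a 32-bit size calculation; the alignment rounding is a typical driver pattern, not Qualcomm's actual code:

```python
MASK32 = 0xFFFFFFFF  # emulate 32-bit unsigned arithmetic

def alloc_size_32bit(partition_bytes: int, align: int = 4096) -> int:
    """Round a partition size up to the alignment, as a driver might
    before allocating a buffer, but with 32-bit wraparound."""
    return ((partition_bytes + align - 1) & ~(align - 1)) & MASK32

# A partition just under 4 GiB wraps around to a tiny allocation:
oversized = (1 << 32) - 100          # ~4 GiB partition size from a DTS node
print(alloc_size_32bit(oversized))   # wraps to 0 instead of ~4 GiB
```

A subsequent memcpy or DMA sized by the original (unwrapped) length into a buffer allocated from the wrapped value is the memory corruption: the checks must validate the size before, not after, the arithmetic.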

Official announcement: Please refer to the link for details – https://docs.qualcomm.com/securitybulletin/february-2026-bulletin.html

Regarding Apple’s CVE-2025-46285: The handling of 32-bit timestamps in Swift and their security importance. (2 Feb 2026)

Preface: As of February 2026, Apple has issued urgent security updates—specifically iOS 26.2.1 and iOS 26.2—to patch critical vulnerabilities (CVE-2025-43529, CVE-2025-14174, and CVE-2025-46285) that were exploited in targeted attacks. These bugs, affecting the WebKit browser engine and Kernel, allow arbitrary code execution and unauthorized root privileges. Users must immediately update to protect their devices.

Background: Adopting 64-bit timestamps means changing how computers store time, replacing 32-bit integers with 64-bit integers to record seconds since the Unix Epoch (Jan 1, 1970). This shift eliminates the “Year 2038 problem,” extending the maximum representable date from January 2038 to over 292 billion years in the future, ensuring long-term system stability and precision.

Apple’s security hardening efforts followed vulnerability CVE‑2025‑46285, a system‑level integer‑overflow vulnerability in Apple platforms. It occurred because 32‑bit timestamps could overflow, and in certain OS internals this overflow allowed a malicious app to gain root privileges. Apple’s official fix was to “adopt 64‑bit timestamps”, which eliminates the overflow condition entirely on the affected systems.
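
The overflow boundary is concrete: a signed 32-bit time_t runs out 2^31 - 1 seconds after the Unix epoch. A short Python check of the rollover point:

```python
import struct
from datetime import datetime, timedelta, timezone

LIMIT = 2**31 - 1  # largest value a signed 32-bit time_t can hold

epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
print(epoch + timedelta(seconds=LIMIT))   # 2038-01-19 03:14:07+00:00

# One second later no longer fits in a signed 32-bit field...
try:
    struct.pack("<i", LIMIT + 1)
except struct.error:
    print("32-bit overflow")

# ...while a 64-bit timestamp stores it without issue.
struct.pack("<q", LIMIT + 1)
```

Widening the field is the whole fix: once timestamps are 64-bit, the wraparound that the kernel bug depended on can no longer occur.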

Vulnerability details: CVE-2025-46285: A Kernel vulnerability allowing apps to gain root privileges, bypassing app sandboxes.

Official announcement: Please refer to the link for more details

https://support.apple.com/en-us/100100

Reference:

https://nvd.nist.gov/vuln/detail/CVE-2025-46285#vulnCurrentDescriptionTitle

http://www.antihackingonline.com/cell-phone-iphone-android-windows-mobile/cve-2025-43529-apple-multiple-products-use-after-free-webkit-vulnerability-31-12-2025/

http://www.antihackingonline.com/cell-phone-iphone-android-windows-mobile/the-media-reports-in-january-2026-were-triggered-by-a-security-warning-issued-by-apple-on-december-16-2025-20th-jan-2026/

CVE-2025-33220 only applies to NVIDIA vGPU deployments running on hypervisors, such as TKGI clusters on vSphere. (2 Feb 2026)

Preface: When comparing VMware TKGI, Docker, and Kubernetes (K8s) for CUDA (NVIDIA’s parallel computing platform) workflows, the “best” choice depends on your scale and infrastructure.

Choose Docker – if you are a data scientist doing local model development.

Choose Native Kubernetes – if you are building a large-scale AI platform on physical hardware (Bare-metal) for maximum performance.

Choose VMware TKGI – if you need high availability, vGPU flexibility, and are already heavily invested in the VMware ecosystem.

Background: CVE‑2025‑33220 lives in the hypervisor’s vGPU Manager, not in:

  • Docker
  • Containerd
  • Kubernetes
  • NVIDIA Container Runtime
  • NVIDIA Docker runtime
  • PyTorch/TensorFlow workloads
  • CUDA libraries inside containers

CVE‑2025‑33220 requires:

  1. Freeing an object inside the hypervisor
  2. A later operation accessing that SAME freed internal heap structure
  3. The hypervisor NOT realizing the handle is stale
  4. A malformed RM object relationship or command sequence
  5. Conditions normal CUDA applications never generate

If there is no hypervisor-based vGPU, there is no attack surface, because:

  • The ioctl path stops at the bare‑metal NVIDIA GPU driver
  • There is no vGPU Manager backend
  • No vGPU protocol messages are generated
  • No hypervisor memory structures exist to exploit

The CVE is triggered only under very specific hypervisor‑internal states that normal or even “weird order” RMAPI usage will never produce.

Vulnerability details: CVE-2025-33220 – NVIDIA vGPU software contains a vulnerability in the Virtual GPU Manager, where a malicious guest could cause heap memory access after the memory is freed. A successful exploit of this vulnerability might lead to code execution, escalation of privileges, data tampering, denial of service, or information disclosure.

Official announcement: Please refer to the link for details – https://nvidia.custhelp.com/app/answers/detail/a_id/5747