All posts by admin

CVE-2026-33579: OpenClaw 2026.3.28 (or later) also addresses a CVSS 9.9 token-rotation race-condition flaw allowing full admin access and remote code execution (9th April 2026)

Preface: Unlike ChatGPT, which is a conversational chatbot, OpenClaw is designed to act. It receives a high-level goal, breaks it down into structured tasks, calls APIs, executes shell commands, and iterates until the objective is complete.

Installing OpenClaw (formerly ClawdBot) alongside OpenAI models on a smartphone that already runs WhatsApp is designed to deliver autonomous, proactive, and secure personal AI assistance directly within a messaging interface.

Background: OpenClaw’s primary design objective is to transition AI from a passive, conversational interface into a proactive, action-oriented autonomous agent that can independently execute multi-step workflows across a user’s local operating system and external cloud services.

It is architected as an “AI Gateway” or agent runtime rather than a standalone model, serving as the “hands” for an artificial brain by connecting large language models (LLMs) to real-world tools, files, and messaging platforms.

Older versions of OpenClaw might have stored permissions in “sticky” historical fields (the legacy role fields). Without this check:

• You might revoke an agent’s access in the new dashboard.

• The system might see “no active tokens” and accidentally “fall back” to old settings.

• The agent would regain access you intended to take away.
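The fail-open fallback described above can be sketched in a few lines. This is a hypothetical illustration of the pattern, not OpenClaw's actual code; the function and field names (`effective_access`, `active_tokens`, `legacy_roles`) are invented for the example.

```python
# Hypothetical sketch of the "sticky legacy role" pitfall described above.
# Names (active_tokens, legacy_roles) are illustrative, not OpenClaw's API.

def effective_access(active_tokens, legacy_roles):
    """Fail closed: an empty token set means no access, never a legacy fallback."""
    if not active_tokens:
        return set()                 # revoked agents stay revoked
    return set(active_tokens)

def effective_access_buggy(active_tokens, legacy_roles):
    """The flawed pattern: falling back to historical roles when tokens are empty."""
    return set(active_tokens) or set(legacy_roles)
```

With the buggy variant, revoking every token (`active_tokens = []`) silently restores whatever the legacy role field still holds; the fail-closed variant returns no access at all.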

Vulnerability details: OpenClaw before 2026.3.28 contains a privilege escalation vulnerability in the /pair approve command path that fails to forward caller scopes into the core approval check. A caller with pairing privileges but without admin privileges can approve pending device requests that ask for broader scopes, including admin access, by exploiting the missing scope validation in extensions/device-pair/index[.]ts and src/infra/device-pairing[.]ts.
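The missing check amounts to a scope-subset rule: an approval routine must receive the caller's scopes and refuse to grant anything the caller does not itself hold. A minimal sketch, with invented names (the real fix lives in the TypeScript files cited above):

```python
# Illustrative sketch of the missing validation: the core approval routine
# must be given the caller's scopes, not just the pending request.

def approve_pairing(caller_scopes: set, requested_scopes: set) -> bool:
    # A caller may only grant scopes it already holds.
    if not requested_scopes <= caller_scopes:
        raise PermissionError("requested scopes exceed caller's scopes")
    return True
```

A caller holding only a pairing scope can no longer approve a request that asks for admin.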

Official announcement: Please refer to the link for details –

https://nvd.nist.gov/vuln/detail/CVE-2026-33579

CVE-2026-35616 affecting FortiClient EMS 7.4.5 (9th Apr 2026)

Preface: Trusting HTTP headers—such as X-SSL-CLIENT-VERIFY, X-SSL-Client-S-DN, or X-Forwarded-User—as primary proof of authentication is highly dangerous unless specifically designed to be passed from a trusted proxy.

The core risk is header spoofing, where an attacker directly manipulates these headers to impersonate any user, bypassing authentication completely.
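The trusted-proxy rule can be made concrete with a small sketch. The header names mirror those in the text; the framework-agnostic request dict and the proxy address are illustrative assumptions, not FortiClient EMS code.

```python
# Minimal sketch: honour authentication headers only when the TCP peer is a
# known proxy. The proxy address and request shape are assumptions.

TRUSTED_PROXIES = {"10.0.0.5"}   # assumption: the only host allowed to set these headers

def authenticated_user(request: dict):
    # Headers arriving from any other peer may be attacker-controlled.
    if request["remote_addr"] not in TRUSTED_PROXIES:
        return None
    if request["headers"].get("X-SSL-CLIENT-VERIFY") != "SUCCESS":
        return None
    return request["headers"].get("X-Forwarded-User")
```

Without the peer check, any internet client could send `X-Forwarded-User: admin` and be believed.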

Background: Does FortiClient EMS use Django?

Yes, recent versions of FortiClient Endpoint Management Server (EMS), specifically in the 7.x branch, utilize the Django web framework for their web GUI and API backend.

Based on security research conducted in early 2026:

Web Application Structure: The FortiClient EMS web GUI is built on Python 3.10 bytecode and uses the Django framework, with core files located in /opt/forticlientems/fcm/fcm/.

Django Components: The application uses Django authentication middleware to handle certificate-based device authentication and API request processing.

API Security: The web interface relies on Django view decorators for API endpoint security.

Infrastructure: The application runs on an Apache web server with mod_wsgi to communicate with the Django application.
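The view-decorator pattern mentioned above can be sketched without Django itself. This framework-free version uses a plain dict in place of Django's HttpRequest; the decorator and view names are invented for illustration.

```python
# Schematic version of the decorator-based endpoint security pattern
# described above; the request object is a plain dict, not Django's.

import functools

def require_authentication(view):
    @functools.wraps(view)
    def wrapper(request, *args, **kwargs):
        if not request.get("user"):          # no authenticated identity attached
            return {"status": 401, "body": "Unauthorized"}
        return view(request, *args, **kwargs)
    return wrapper

@require_authentication
def device_list(request):
    return {"status": 200, "body": f"devices for {request['user']}"}
```

An endpoint missing such a decorator is exactly the kind of gap an unauthenticated attacker can reach directly.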

Vulnerability details: A improper access control vulnerability in Fortinet FortiClientEMS 7.4.5 through 7.4.6 may allow an unauthenticated attacker to execute unauthorized code or commands via crafted requests.

Official announcement: Please refer to link for details – https://nvd.nist.gov/vuln/detail/CVE-2026-35616

CVE-2026-24164 and CVE-2026-24165: About BioNeMo Framework (6th April 2026)

Preface: DNA models like DNABERT and Evo2 are Genomic Foundation Models (gLMs), which treat the four-letter DNA alphabet of A (Adenine), C (Cytosine), T (Thymine), and G (Guanine) as a “language” to learn the fundamental rules, patterns, and “syntax” governing life.

Similar to how Large Language Models (LLMs) like GPT are pre-trained on vast amounts of text to understand English, these DNA models are pre-trained on billions to trillions of base pairs (nucleotides) from diverse species to understand the “grammar” of genomes, including the 98% that is non-coding.

Background: For a DNA repository, NVIDIA BioNeMo (the life sciences extension of NeMo) handles the heavy lifting of transforming raw genetic sequences into “usable intelligence”. It is used for more than just simple normalization; it provides a specialized pipeline for pre-training, fine-tuning, and analyzing genomic data.

Here is how the workflow typically functions for DNA data:

1. Data Preparation & Preprocessing

Instead of generic text normalization, BioNeMo uses specialized scripts to prepare genomic data (like the GRCh38 human genome) for AI.

•Chunking: Breaking long chromosomal sequences into manageable segments (e.g., 512 nucleotides).

•Tokenization: Converting DNA “letters” (A, C, G, T) into numerical tokens. Advanced models like DNABERT-2 use Byte Pair Encoding (BPE) to process sequences up to 5x more efficiently than older methods.

•Standardization: Organizing raw genomic data into structured formats like FASTA or CSV that the training framework can ingest.
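A toy sketch of the chunking and tokenization steps above. Real BioNeMo pipelines use dedicated tokenizers (e.g. BPE for DNABERT-2); this shows only the character-level idea, and the segment size and vocabulary mapping are illustrative.

```python
# Character-level chunking and tokenization of a DNA sequence, as a toy
# version of the preprocessing steps described above.

VOCAB = {"A": 0, "C": 1, "G": 2, "T": 3}   # illustrative token IDs

def chunk(sequence: str, size: int = 512) -> list:
    """Break a long chromosomal sequence into fixed-size segments."""
    return [sequence[i:i + size] for i in range(0, len(sequence), size)]

def tokenize(segment: str) -> list:
    """Map DNA letters to numerical token IDs."""
    return [VOCAB[base] for base in segment]
```

A BPE tokenizer would instead merge frequent multi-nucleotide motifs into single tokens, which is where the claimed efficiency gain comes from.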

2. Categorization & Functional Prediction – details not described here

3. Downstream Analysis – details not described here

Vulnerability details:

CVE-2026-24164 – NVIDIA BioNeMo contains a vulnerability where a user could cause a deserialization of untrusted data. A successful exploit of this vulnerability might lead to code execution, denial of service, information disclosure, and data tampering.

CVE-2026-24165 – NVIDIA BioNeMo contains a vulnerability where a user could cause a deserialization of untrusted data. A successful exploit of this vulnerability might lead to code execution, denial of service, information disclosure, and data tampering.

Official announcement: Please refer to link for details – https://nvidia.custhelp.com/app/answers/detail/a_id/5808

The far side of the moon in April 2026 (8th Apr 2026)

Preface: The far side of the moon – During the Apollo missions, the far side of the moon was often in complete darkness or dim light due to mission scheduling.

Background: The far side of the moon—a sight never seen by astronauts in the 1960s. Is that so?

In fact, this is a common misconception! The 24 astronauts who flew on Apollo 8 through Apollo 17 between 1968 and 1972 were the first humans to see the far side of the Moon with their own eyes.

What will be different in 2026?

Although the Apollo crews saw the far side too, the current Artemis II mission (April 2026) has seen things that the “astronauts of the 1960s” could not, for several key reasons:

Better Lighting: During the Apollo missions, the far side was often in total darkness or poor lighting because of the mission timing. Artemis II is flying over the far side at a time when much more of it is sunlit, revealing details that were previously hidden in shadow.

Wider View: The Apollo capsules orbited very close to the surface (about 60-70 miles up), giving them a narrow, “slice-by-slice” view. Artemis II is flying about 4,000 miles above the surface, allowing the crew to see massive geological features—like the entire Orientale Basin—all at once.

Newer Discoveries: The Artemis crew is specifically looking for small, recently formed craters that didn’t exist 50 years ago.

Ref: The Artemis astronauts are currently on their way back to Earth after breaking the Apollo 13 distance record.

Official announcement: Track NASA’s Artemis II Mission in Real Time – https://www.nasa.gov/missions/artemis/artemis-2/track-nasas-artemis-ii-mission-in-real-time/

CVE-2026-24148, CVE-2026-24154 and CVE-2026-24153: About NVIDIA Jetson (2nd-April-2026)

Preface: NVIDIA JetPack and Jetson Linux (formerly L4T – Linux for Tegra) are the foundational software stacks for NVIDIA Jetson AI modules. Jetson Linux provides the essential BSP (bootloader, Linux kernel, Ubuntu rootfs, drivers), while JetPack SDK bundles this with developer tools, libraries (CUDA, TensorRT), and APIs for AI, computer vision, and robotics.

Background: Leaving the initrd and root file system (rootfs) unencrypted creates a significant security gap against local physical attacks. In a standard industrial or autonomous deployment, physical access is often the most direct threat to a machine’s integrity.

The Security Gap: Local Physical Access

When a Jetson device is left with its default, unencrypted configuration, an attacker with physical access can easily bypass system protections:

Because the bootloader cannot read encrypted files directly, it must first mount an unencrypted partition containing the kernel and initrd images. Without signing or encryption, these critical files can be replaced via a malicious USB or NVMe drive.
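Why signing closes this gap can be shown with a simplified keyed-digest check: before using a kernel or initrd image, a verification stage compares it against a digest computed with a key the attacker cannot read. Real Jetson secure boot uses hardware-backed keys and signed binaries; the HMAC key and image bytes below are stand-ins.

```python
# Simplified illustration of image verification before boot. The key and
# image contents are placeholders, not the Jetson secure-boot mechanism.

import hmac
import hashlib

SIGNING_KEY = b"example-key"   # assumption: held in hardware, unreadable to an attacker

def sign_image(image: bytes) -> str:
    return hmac.new(SIGNING_KEY, image, hashlib.sha256).hexdigest()

def verify_image(image: bytes, expected: str) -> bool:
    # Constant-time comparison avoids leaking digest prefixes.
    return hmac.compare_digest(sign_image(image), expected)
```

A swapped-in initrd from a malicious USB or NVMe drive fails this check and is never executed.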

Ref: nvluks-srv-app is an NVIDIA Jetson Linux user-space application used to retrieve a unique, secure passphrase from the Trusted Execution Environment (TEE) to unlock encrypted partitions at boot time. It enables disk encryption on Jetson devices by facilitating secure communication between the normal operating system and the hardware-backed security services (OP-TEE).

Vulnerability details:

CVE-2026-24154 NVIDIA Jetson Linux has a vulnerability in initrd, where an unprivileged attacker with physical access could inject incorrect command line arguments. A successful exploit of this vulnerability might lead to code execution, escalation of privileges, denial of service, data tampering, and information disclosure.

CVE-2026-24148 NVIDIA Jetson for JetPack contains a vulnerability in the system initialization logic, where an unprivileged attacker could cause the initialization of a resource with an insecure default. A successful exploit of this vulnerability might lead to information disclosure of encrypted data, data tampering, and partial denial of service across devices sharing the same machine ID.

CVE-2026-24153 NVIDIA Jetson Linux has a vulnerability in initrd, where the nvluks trusted application is not disabled. A successful exploit of this vulnerability might lead to information disclosure.

Official announcement: Please refer to the link for details – https://nvidia.custhelp.com/app/answers/detail/a_id/5797

CVE-2026-5164: A flaw has been found in virtio-win. Don’t underestimate this; the field of artificial intelligence also needs virtio-win! (1st April 2026)

Preface: While NVIDIA CUDA provides powerful parallel processing capabilities on both Linux and Windows, developers still need to run Windows on top of Linux using virtio-win in several specific environments:

For example: Windows-Exclusive HPC Applications

Many specialized scientific and engineering applications are only developed for Windows and cannot be easily recompiled for Linux.

Background: To programmatically use RhelDoUnMap() while ensuring user requests are correctly validated, you must specifically address the descriptor count validation to prevent buffer overflows. This function is part of the virtio-win drivers used in Red Hat Enterprise Linux environments.

Key Components of virtio-win include Network (NetKVM), Storage (viostor / virtio-scsi), Memory Balloon (balloon), Serial (virtio-serial), Graphics (virtio-gpu), Input (virtio-input) and Guest Agent (qemu-ga). The RhelDoUnMap() function is part of the virtio-win driver suite, specifically within the VioStor (Virtio Storage) driver.
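The descriptor-count validation mentioned above is, at heart, a bounds check performed before the buffer is touched. A language-neutral sketch (in Python, since the actual driver is Windows kernel C); the capacity constant and request shape are illustrative, not the VioStor code.

```python
# Sketch of the bounds check the text describes: validate the user-supplied
# descriptor count against real capacity before iterating. MAX_DESCRIPTORS
# is an assumed table size, not a virtio-win constant.

MAX_DESCRIPTORS = 64

def do_unmap(descriptor_count: int, descriptors: list) -> int:
    # Reject counts the caller cannot legitimately supply.
    if descriptor_count < 0 or descriptor_count > MAX_DESCRIPTORS:
        raise ValueError("descriptor count out of range")
    if descriptor_count > len(descriptors):
        raise ValueError("count exceeds descriptors actually provided")
    # ...now it is safe to process exactly descriptor_count entries...
    return descriptor_count
```

Omitting either check lets a local user drive the loop past the end of the descriptor buffer, which is the overrun behind the reported DoS.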

Vulnerability details: A flaw was found in virtio-win. The `RhelDoUnMap()` function does not properly validate the number of descriptors provided by a user during an unmap request. A local user could exploit this input validation vulnerability by supplying an excessive number of descriptors, leading to a buffer overrun. This can cause a system crash, resulting in a Denial of Service (DoS).

Official announcement: Please refer to the link for details –

https://nvd.nist.gov/vuln/detail/cve-2026-5164

About Trivy: A closer look at what is happening across CI/CD ecosystems. Stay alert! (31st Mar 2026)

Preface: According to Mandiant, over a thousand SaaS environments have been impacted by ongoing supply chain compromises of Aqua Security’s open-source scanner Trivy, and researchers predict that the impact may grow by an order of magnitude.

Researchers have since reported multiple downstream attacks enabled by the compromise, possibly via implementations of Trivy. Sysdig observed the TeamPCP infostealer deployed in a GitHub action belonging to another software supply chain security developer, Checkmarx. Aikido Security reported attacks targeting the npm ecosystem and Kubernetes, spreading a persistent Python backdoor through “CanisterWorm,” which steals npm tokens to propagate itself through developers’ packages.

Background: Trivy is a popular open-source vulnerability and security scanner maintained by Aqua Security. Trivy is a “universal” scanner that consolidates multiple security checks into a single tool:

  • Vulnerability Scanning: Detects known vulnerabilities (CVEs) in operating system packages (e.g., Alpine, RHEL, Ubuntu) and language-specific dependencies (e.g., npm, pip, Go modules).
  • Misconfiguration Detection: Scans Infrastructure as Code (IaC) files like Terraform, Dockerfiles, Kubernetes manifests, and CloudFormation to find security flaws.
  • Secret Scanning: Identifies hardcoded sensitive information such as passwords, API keys, and tokens within code or container images.

A closer look at the Trivy design weakness:

The Mechanism: In Git, version tags (like @v1 or @v0.24.0) are mutable, meaning they can be reassigned to a different commit. The attackers used compromised credentials to “poison” 76 out of 77 existing version tags for trivy-action, pointing them to a new, malicious commit that contained a credential stealer.

The “Design Weakness”: Because pipelines are usually configured to pull the latest version of a tag automatically, thousands of organizations executed the malicious code without any changes to their own workflow files.
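The standard mitigation for mutable tags is to pin third-party actions to a full commit SHA. A small checker along those lines can flag `uses:` references that still point at tags; the workflow text and the checker itself are illustrative, not an official tool.

```python
# Sketch of a workflow linter: flag GitHub Actions `uses:` references that
# are pinned to a mutable tag rather than a 40-hex-character commit SHA.

import re

USES = re.compile(r"uses:\s*([\w.-]+/[\w.-]+)@(\S+)")

def unpinned_actions(workflow_text: str) -> list:
    flagged = []
    for repo, ref in USES.findall(workflow_text):
        if not re.fullmatch(r"[0-9a-f]{40}", ref):   # not an immutable commit SHA
            flagged.append(f"{repo}@{ref}")
    return flagged
```

A SHA-pinned reference cannot be silently retargeted the way the 76 poisoned trivy-action tags were.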

Official details: Please refer to the link for details – https://nvd.nist.gov/vuln/detail/CVE-2026-33634

CVE-2026-24157 and CVE-2026-24159: To protect your system, NVIDIA recommends updating to NeMo Framework version 2.6.2 or later (30-03-2026)

Preface: NVIDIA NeMo is a widely adopted, end-to-end framework for building, customizing, and deploying generative AI models (LLMs) and conversational AI agents. It is primarily used to tailor open-source models—such as Llama, Mistral, and Google Gemma—using proprietary enterprise data.

Ollama, Mistral, and Google Gemma represent a powerful ecosystem for running local, open-weight Large Language Models (LLMs). Ollama acts as the engine to run models, while Mistral and Gemma are two of the most popular, high-performing model families designed to be efficient enough to run on personal computers.

Background: Regarding the restore_from() method, it is a core functionality used to load local checkpoint files with the .nemo extension.

Key Details of restore_from() –

  • Purpose: Fully restores a model instance, including its weights and configuration, from a local [.]nemo file for evaluation, inference, or fine-tuning.
  • File Structure: A [.]nemo file is an archive (specifically a tar.gz file) containing the model’s weights and a model_config[.]yaml file that defines its architecture.
  • Usage: It is called directly from the model’s base class (e.g., ASRModel[.]restore_from(restore_path="path/to/file[.]nemo")).
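Since a [.]nemo file is just a tar.gz archive, it can be inspected with the standard library before anything is deserialized. The sketch below builds a tiny stand-in archive in memory (file names follow the structure described above) and checks for the expected model_config.yaml; it does not use NeMo itself.

```python
# Inspect a .nemo-style archive (a tar.gz, per the text) before trusting it:
# list members and confirm model_config.yaml is present. The in-memory
# archive is a stand-in for a real checkpoint file.

import io
import tarfile

def build_demo_archive() -> bytes:
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz") as tar:
        for name, data in [("model_config.yaml", b"arch: demo"),
                           ("model_weights.ckpt", b"\x00\x01")]:
            info = tarfile.TarInfo(name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()

def inspect_nemo(raw: bytes) -> list:
    with tarfile.open(fileobj=io.BytesIO(raw), mode="r:gz") as tar:
        names = tar.getnames()
    if "model_config.yaml" not in names:
        raise ValueError("not a recognisable .nemo archive")
    return names
```

Listing members first is cheap; the dangerous step is restoring the checkpoint, which is exactly where the two CVEs below live.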

Vulnerability details: (see below)

CVE-2026-24157 NVIDIA NeMo Framework contains a vulnerability in checkpoint loading where an attacker could cause remote code execution. A successful exploit of this vulnerability might lead to code execution, escalation of privileges, information disclosure and data tampering.

CVE-2026-24159 NVIDIA NeMo Framework contains a vulnerability where an attacker may cause remote code execution. A successful exploit of this vulnerability might lead to code execution, escalation of privileges, information disclosure and data tampering.

Official announcement: Please refer to the link for details –

https://nvidia.custhelp.com/app/answers/detail/a_id/5800

CVE-2026-24141: NVIDIA Model Optimizer for Windows and Linux contains a vulnerability in the ONNX quantization feature. (27th Mar 2026)

Preface: A design limitation has been discovered in the ONNX quantization function of the NVIDIA model optimizer for Windows and Linux. However, confusingly, the ONNX function appears to only work on Windows/RTX (not Linux). What is the actual design limitation?

A sophisticated technical question. The confusion often stems from the fact that while ONNX is the primary deployment format for Windows/RTX, the quantization process (where the vulnerability often lies) frequently occurs on Linux development servers.

Background: Why does the vulnerability affect both Linux and Windows?

Although ONNX is the target format for Windows AI PC applications, the NVIDIA Model Optimizer (ModelOpt) library is cross-platform.

*Linux as the “Factory”: Most developers use powerful Linux servers (with A100/H100 GPUs) to run the ModelOpt quantization scripts. They generate the optimized ONNX model on Linux and then “ship” it to Windows clients. Therefore, the vulnerability exists in the Linux-based conversion tools.

Vulnerability details: NVIDIA Model Optimizer for Windows and Linux contains a vulnerability in the ONNX quantization feature, where a user could cause unsafe deserialization by providing a specially crafted input file. A successful exploit of this vulnerability might lead to code execution, escalation of privileges, data tampering, and information disclosure. (Initial release – March 24, 2026)

Official announcement: Please refer to the link for details – https://nvidia.custhelp.com/app/answers/detail/a_id/5798

Remedy for CVE-2025-33244: securely deserialize data for use with APEX or PyTorch. (26th Mar 2026)

Preface: The core design goal of NVIDIA Apex is to achieve mixed precision training, which mainly involves a combination of 16-bit (FP16) and 32-bit (FP32).

Background: In NVIDIA APEX, handling FP16 and FP32 data is primarily managed through the Automatic Mixed Precision (AMP) module. You don’t need to manually cast your data.

  • Use FP32 if you are doing scientific simulations that require extreme precision or if your model fails to converge using lower bit-depths.
  • Use FP16 for Inference (running a finished model on a phone or server) or when training Large Language Models (LLMs) to save massive amounts of time and electricity.

The current gold standard is FP16, which balances speed and memory.

Scientific simulations (such as those simulating black holes or aircraft airflow) require extremely high numerical stability to prevent errors from accumulating over time; while LLM training is more like finding a probability distribution of the “general direction,” where speed and model size are more important than accuracy to the 15th decimal place.
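The precision trade-off above can be demonstrated with the standard library alone: round-tripping a value through IEEE half precision (struct format 'e') loses digits that single precision ('f') keeps. This is a generic illustration, not an APEX/AMP code path.

```python
# Round-trip a value through FP16 and FP32 storage to show the precision gap.

import struct

def roundtrip(value: float, fmt: str) -> float:
    """Pack to the given IEEE format and unpack again."""
    return struct.unpack(fmt, struct.pack(fmt, value))[0]

pi = 3.14159265358979
fp16 = roundtrip(pi, "e")   # half precision: roughly 3 decimal digits
fp32 = roundtrip(pi, "f")   # single precision: roughly 7 decimal digits
```

FP16's 11-bit significand is why AMP keeps a master copy of weights in FP32 while doing the fast arithmetic in FP16.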

Vulnerability details: NVIDIA APEX for Linux contains a vulnerability where an unauthorized attacker could cause a deserialization of untrusted data. This vulnerability affects environments that use PyTorch versions earlier than 2.6. A successful exploit of this vulnerability might lead to code execution, denial of service, escalation of privileges, data tampering, and information disclosure.
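One stdlib way to harden against deserialization of untrusted data is a restricted Unpickler that only resolves an explicit allowlist of globals, so a crafted pickle cannot reach arbitrary callables. This illustrates the hardening idea generically; it is not the APEX fix itself (for model weights, formats like safetensors avoid pickle entirely).

```python
# A restricted pickle loader: any global outside the allowlist is refused,
# blocking the usual pickle code-execution gadgets.

import io
import pickle

ALLOWED = {("builtins", "dict"), ("builtins", "list")}   # illustrative allowlist

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        if (module, name) not in ALLOWED:
            raise pickle.UnpicklingError(f"blocked global: {module}.{name}")
        return super().find_class(module, name)

def safe_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()
```

Plain containers of primitives load fine; a payload that references any other callable is rejected before it can run.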

Official announcement: Please refer to the link for details –

https://nvidia.custhelp.com/app/answers/detail/a_id/5782

Best Practices:

• Weight Files: Always convert and store your .pth or .bin files as .safetensors.

• API Inputs: Prefer Protocol Buffers (Protobuf) or JSON for real-time requests.

• Integrity Checks: Before deserializing, verify the file’s SHA-256 hash to ensure it hasn’t been tampered with during transit.
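The integrity-check step above is a one-liner with the standard library: hash the received bytes and compare against an expected digest obtained from a trusted channel (e.g. a signed release manifest, which this sketch assumes rather than implements).

```python
# Verify a payload's SHA-256 digest before any deserialization happens.

import hashlib
import hmac

def verify_sha256(payload: bytes, expected_hex: str) -> bool:
    digest = hashlib.sha256(payload).hexdigest()
    # compare_digest avoids timing side channels on the comparison.
    return hmac.compare_digest(digest, expected_hex)
```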