The upcoming Mars solar conjunction (around late December 2025/early January 2026) will further delay MAVEN recovery efforts. (26-12-2025)

Preface: On Dec. 16 and 20, 2025, NASA’s Curiosity team used the rover’s Mastcam instrument in an attempt to image MAVEN’s reference orbit, but MAVEN was not detected.

The initial loss of contact on December 6, 2025, was due to a routine orbital blackout as MAVEN passed behind Mars; an unspecified anomaly then prevented signal reacquisition when the spacecraft reemerged. The upcoming Mars solar conjunction (around late December 2025/early January 2026) will further delay recovery efforts, because the Sun blocks the communication path, but the conjunction was not the cause of the initial December 6 silence.

Background: Solar-conjunction timing is not perfectly annual because Earth’s orbit is an ellipse, not a circle, so Earth’s distance from the Sun and its orbital speed vary; long-term orbital changes (Milankovitch cycles) also shift the dates slightly over millennia. As a result, solar conjunctions (when the Sun lies between Earth and another planet) and even Earth’s closest approach to the Sun (perihelion) fall at slightly different points of Earth’s orbit from year to year.

Planetary orbits are elliptical and orbital speeds vary (Kepler’s laws), but a solar conjunction (when a planet passes behind the Sun from Earth’s perspective) happens roughly once per synodic period for each planet: about every 1–2 years for the inner planets, and about every 26 months for Mars, set by how quickly Earth laps (or is lapped by) the other planet. A planet’s own varying speed does not produce multiple conjunctions per year for that planet; the elliptical orbits and speed changes only shift exactly when and where each conjunction occurs. Conjunctions are therefore predictable cycles, not random occurrences within a single year.

Remark: Mercury and Venus are inner planets
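The conjunction cadence described above follows directly from the synodic period, 1/(1/T_Earth − 1/T_planet). A small sketch (orbital periods are approximate textbook values in days):

```python
def synodic_period_days(t_earth: float, t_planet: float) -> float:
    """Average time between successive Earth-planet alignments, in days."""
    return abs(1.0 / (1.0 / t_earth - 1.0 / t_planet))

# Approximate sidereal orbital periods in days
EARTH = 365.25
MARS = 686.98
VENUS = 224.70

mars_synodic = synodic_period_days(EARTH, MARS)    # roughly 780 days (~26 months)
venus_synodic = synodic_period_days(EARTH, VENUS)  # roughly 584 days
```

This is why Mars solar conjunctions recur about every 26 months, while Venus aligns with Earth more often.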

Official announcement: Please refer to the link for details – https://science.nasa.gov/blogs/maven/2025/12/23/nasa-works-maven-spacecraft-issue-ahead-of-solar-conjunction/

Additional: http://www.antihackingonline.com/science/if-3i-atlas-did-not-arise-naturally-then-advanced-civilizations-could-send-carrier-signals-to-interfere-with-mavens-signals-20th-dec-2025/

When 3I/ATLAS Meets Santa Claus (24th Dec 2025)

Preface: O come, all ye faithful, joyful and triumphant,

O come ye, O come ye, to Bethlehem.

But this time, 3I/ATLAS’s destination is Jupiter!

No worries! Santa Claus is here to see you!

Background: Halley’s Comet, the famous periodic comet, last appeared in 1986 and is expected to return to Earth’s vicinity in mid-2061, reaching its closest point to the Sun (perihelion) around July 28, 2061, and closest to Earth on July 29, 2061.

Remark: Halley’s Comet was at perihelion around February 9, 1986.

Harris Comet, like Halley’s, is a native solar-system comet from our Oort Cloud that loops back regularly, while 3I/ATLAS is a rare visitor from another star system: an interstellar object with unusual chemistry (more CO2 and nickel) and such extreme speed that it is not gravitationally bound to our Sun, highlighting differences among stellar nurseries across the galaxy.

Despite centuries of scientific exploration and research, our planet Earth still harbors an astonishing number of unknowns and some unexplained phenomena. From the depths of the mantle to the edge of the atmosphere, mysterious events and structures abound.

From a technical perspective, we’ve primarily focused on 3I/ATLAS over the past few months. In fact, many unexplained phenomena have occurred on Earth in the past, and we have no conclusions about them. But we cannot forget the structure of the Earth itself: the Moon governs the ocean tides, and the tidal friction generated by the Moon’s gravitational pull actually slows the Earth’s rotation rather than helping it spin faster. Furthermore, the Earth’s atmosphere’s primary role in mitigating solar storms is absorption and shielding; working in conjunction with the planet’s magnetic field, it protects life on the surface from harmful radiation and charged particles. It serves as a planetary “kevlar vest” against the energetic onslaught from the Sun.

The details above demonstrate that Earth possesses the ability to protect itself. Perhaps we should thank God, or perhaps we should thank the advanced civilization that created this environment.

So we are not alone!

Best wishes: Wishing you all the best in 2026 and a Merry Christmas!

CVE-2025-33226: NVIDIA NeMo Framework for all platforms contains a vulnerability (24th Dec 2025)

Official Updated 12/12/2025 02:17 PM

Preface: NVIDIA NeMo is a versatile, end-to-end platform used across a vast spectrum of industries, primarily to build, customize, and deploy generative AI agents and applications, such as custom chatbots and specialized assistants. Its primary users are companies in the computer software industry, but its use cases span many different sectors, including energy, telecommunications, financial services, healthcare, and retail.

NeMo is the framework for building models, while NIM (NVIDIA Inference Microservices) provides pre-packaged tools to easily deploy and manage those (and other) AI models as APIs, with NIM often using NeMo’s customized models for inference, creating a unified ecosystem.

Background: Uploading model checkpoints from the NVIDIA NeMo framework to NVIDIA NIM is essential for streamlining the transition from model development and training to optimized, scalable, production-ready inference. This integration merges the development power of NeMo with the robust, deployment-focused capabilities of NIM.

The process is more than a simple file transfer; it is a critical step in a comprehensive, end-to-end AI lifecycle management strategy. NVIDIA NeMo is a powerful framework for building, training, and fine-tuning large language models (LLMs) and other generative AI models, producing specialized .nemo checkpoints that contain model configurations and weights. NVIDIA NIM, or NVIDIA Inference Microservices, then takes these trained, domain-specific models and packages them into prebuilt, optimized, and secure microservices for deployment across various environments, whether in the cloud, on-premises data centers, or at the edge.

The restore_from() method is a function used for model loading and restoration. It allows you to: 

  • Load a previously trained model instance, including its weights and configuration, from a saved .nemo file.
  • Resume training from a specific point, or perform fine-tuning/inference with a pre-trained model.
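Pickle-style deserialization is the usual root of this vulnerability class: the serialized stream can name an arbitrary callable to invoke at load time. The following is a generic Python illustration of that mechanism (not NeMo’s actual restore_from() code; the Payload class and side_effect function are hypothetical):

```python
import pickle

executed = []  # records any call triggered purely by deserialization

def side_effect(msg):
    # Stand-in for attacker-chosen code (could be os.system, etc.)
    executed.append(msg)
    return msg

class Payload:
    def __reduce__(self):
        # __reduce__ tells pickle how to "rebuild" the object on load;
        # an attacker controls both the callable and its arguments.
        return (side_effect, ("pwned",))

blob = pickle.dumps(Payload())   # what a malicious checkpoint would contain
result = pickle.loads(blob)      # merely loading the data runs side_effect()
```

This is why checkpoints should only ever be restored from trusted, integrity-verified sources.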

Vulnerability details: CVE-2025-33226: NVIDIA NeMo Framework for all platforms contains a vulnerability where malicious data created by an attacker may cause a code injection. A successful exploit of this vulnerability may lead to code execution, escalation of privileges, information disclosure, and data tampering.

Official announcement: Please refer to the link for details – https://nvidia.custhelp.com/app/answers/detail/a_id/5736

CVE-2025-33235 is specifically about a vulnerability in NVIDIA’s implementation, not PyTorch or NVRx directly (23rd Dec 2025)

Official Updated 12/12/2025 02:20 PM

Preface: Given Google’s efforts to make its hardware compatible with PyTorch, why is the NVIDIA Resiliency Extension (NVRx) still needed? The answer lies in the complementarity of these technologies, the realities of large-scale distributed systems, and the practical necessity to ensure training efficiency and fault tolerance, especially when using NVIDIA GPUs, which remain the mainstream platform for AI development. While Google is working to make its Tensor Processing Units (TPUs) more compatible with PyTorch to provide an alternative, the NVIDIA Resiliency Extension aims to address the specific and pressing challenges encountered when performing large-scale distributed PyTorch training on NVIDIA’s widely used hardware, such as hardware on cloud platforms like Google Cloud.

Background: The open-source machine learning framework PyTorch was originally developed by researchers at Facebook AI Research (now Meta AI). It was first publicly released in September 2016. PyTorch-based workloads are artificial intelligence (AI) and machine learning (ML) tasks and applications that are developed, trained, and run using the PyTorch open-source deep learning framework.

The NVIDIA Resiliency Extension (NVRx) is a Python package that integrates multiple resiliency-focused solutions to minimize downtime and maximize effective training time for PyTorch-based workloads running on NVIDIA infrastructure. Framework developers and users can modularly integrate NVRx capabilities into their own infrastructure to maximize AI training productivity at scale. It is used in major NVIDIA frameworks such as the NVIDIA NeMo Framework for building large language models, providing the underlying machinery for their resilient training capabilities.

NVRx provides utilities that run the actual checkpoint saving routines in the background. It employs mechanisms, often leveraging torch.multiprocessing, to fork a separate, temporary process dedicated to handling the I/O operations after the data has been quickly staged from GPU memory to CPU buffers.
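The stage-then-write pattern described above can be sketched in a few lines. This is a simplified model, not NVRx code: it uses a thread and JSON for portability, where NVRx reportedly forks a dedicated process (via torch.multiprocessing) and stages tensors from GPU to CPU buffers. The key point is that the state is copied before the background I/O starts, which is exactly the step a race condition in the checkpointing core would undermine:

```python
import json
import os
import tempfile
import threading

def _writer(snapshot, path):
    # Background worker: performs the slow I/O off the training path.
    with open(path, "w") as f:
        json.dump(snapshot, f)

def async_checkpoint(state, path):
    # Stage first: copy the live state, so later training-step mutations
    # cannot race with the in-flight write.
    snapshot = dict(state)
    worker = threading.Thread(target=_writer, args=(snapshot, path))
    worker.start()
    return worker

state = {"step": 100, "loss": 0.25}
path = os.path.join(tempfile.mkdtemp(), "ckpt.json")
worker = async_checkpoint(state, path)
state["step"] = 101   # training continues; the staged snapshot is unaffected
worker.join()
with open(path) as f:
    saved = json.load(f)
```

If the copy were skipped and the writer read the live state directly, the checkpoint could capture a torn, inconsistent snapshot.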

Vulnerability details: CVE-2025-33235 – NVIDIA Resiliency Extension for Linux contains a vulnerability in the checkpointing core, where an attacker may cause a race condition. A successful exploit of this vulnerability may lead to information disclosure, data tampering, denial of service, or escalation of privileges.

Official announcement: Please refer to the link for details – https://nvidia.custhelp.com/app/answers/detail/a_id/5746

About CVE-2025-32210: NVIDIA Isaac Lab, a story can tell! (22-12-2025)

Official Updated 12/12/2025 02:19 PM

Preface: The primary difficulties in robotics learning involve bridging the gap between simulation and reality (the “reality gap”), enabling robust perception and decision-making in unpredictable environments, handling complex physical interactions like contact dynamics, ensuring safety and security, addressing high costs and lack of standardization, and overcoming workforce skill gaps, all while managing computational and power limitations.

Background: Isaac Lab uses NVIDIA Isaac Sim’s capabilities for realistic virtual environments, allowing researchers to train complex robot behaviors efficiently through parallel simulation and data generation, then transfer these policies to physical robots. The primary difficulties Isaac Lab tackles are:

While the physical world remains the definitive testbed, the acquisition of physical interaction data with robots is expensive, time-consuming, and often necessitates specialized instrumentation. These limitations are especially acute in rare but safety-critical situations. Events such as high-speed collisions, hardware malfunctions, or navigation in unpredictable human environments are difficult to reproduce and pose significant risks to equipment and human safety.

NVIDIA Isaac Sim manages computational and power limitations through GPU-accelerated design, customizable performance settings, and scalable deployment options, allowing users to balance performance, fidelity, and resource consumption. This approach empowers developers to tailor the simulation environment to their specific hardware capabilities and project needs, from local workstations to cloud-based multi-GPU setups.

Key contributions of Isaac Lab

• Modular and scalable framework: Built on NVIDIA Omniverse, enabling high-fidelity, GPU-accelerated simulation for complex robots and tasks.

• Advanced sensor simulation: Supports tiled RTX rendering, Warp-based custom sensors, and physics-based data for rich observation spaces.

• Seamless teleoperation and data collection: Integrates spacemouse, VR headsets, and other devices for large-scale demonstration capture.

• Extensive environment suite: Provides diverse, ready-to-use environments for reinforcement learning, imitation learning, and sim-to-real research.

Vulnerability details: CVE-2025-32210 – NVIDIA Isaac Lab contains a deserialization vulnerability. A successful exploit of this vulnerability might lead to code execution.
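A common defense against deserialization flaws of this kind is to verify an artifact’s digest against a trusted manifest before loading it at all. The sketch below illustrates that pattern only; it is not Isaac Lab’s actual mechanism:

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Digest to record in a trusted manifest when the artifact is published."""
    return hashlib.sha256(data).hexdigest()

def verify_before_load(blob: bytes, trusted_digest: str) -> bool:
    # Deserialize only artifacts whose digest matches the manifest entry.
    return hashlib.sha256(blob).hexdigest() == trusted_digest

blob = b"example-checkpoint-bytes"     # stand-in for a serialized policy/checkpoint
digest = sha256_of(blob)               # recorded out-of-band at publish time

ok = verify_before_load(blob, digest)            # untampered artifact passes
tampered = verify_before_load(blob + b"x", digest)  # any modification fails
```

Digest checks do not make the deserializer itself safe, but they stop opportunistic substitution of a crafted file for a known-good one.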

Official announcement: Please refer to the link for details – https://nvidia.custhelp.com/app/answers/detail/a_id/5733

If 3I/ATLAS did not arise naturally, then advanced civilizations could send carrier signals to interfere with MAVEN’s signals.(20th Dec 2025)

Preface: NASA’s MAVEN spacecraft lost communication with Earth in early December 2025, shortly after observing the interstellar comet 3I/ATLAS as it passed Mars in October. The outage occurred after MAVEN went behind Mars. NASA has not officially linked the two events, noting only that the spacecraft was operating normally before the blackout and that teams are investigating.

Background: MAVEN (Mars Atmosphere and Volatile EvolutioN) uses an Electra UHF transceiver to provide crucial communication-relay services, acting as a “bent pipe” that sends data from rovers like Perseverance and Curiosity back to Earth and significantly boosting data-return rates for Mars surface missions. Despite its highly elliptical orbit, it is an important, flexible part of the Mars Relay Network alongside other orbiters such as MRO and TGO.

Hypothesis: The Wow! signal was a strong narrowband radio signal. Every UHF narrowband radio transmission is built around a carrier signal. A carrier at 1420.4556 MHz (the hydrogen-line frequency) falls within the general UHF band (300 MHz – 3 GHz) and could in principle interfere with UHF communications. If 3I/ATLAS did not arise naturally, then an advanced civilization could send carrier signals to interfere with MAVEN’s signals.

End of Content.

CVE-2025-48507: Incorrect Secure Flag Usage in Versal™ Adaptive SoC Arm® Trusted Firmware (19th Dec 2025)

Revision Date : 2025-Dec-17

Preface: AMD removed “Arm” from “Arm Trusted Firmware” (TF-A) references in its documentation (for example, for its EPYC/Ryzen CPUs) to signify that, while the firmware is based on Arm’s open standard, AMD’s implementation is tailored for AMD hardware. It is AMD’s own secure-boot/firmware solution, not just Arm’s code, which promotes clearer branding and ownership for AMD silicon even though it adheres to Arm’s secure-architecture principles.

Background: General-purpose Versal™ adaptive SoCs combine programmable logic with embedded Arm® application and real-time CPU cores, a programmable network on chip, high-speed serial transceivers, programmable I/O, and a broad offering of hard IP.

Because operating systems from different vendors can be present in an Arm system, power control requires a method of collaboration. Considering operation in the Non-secure state: if a supervisory system that manages power, whether executing at the OS level (EL1) or at the hypervisor level (EL2), wants to enter an idle state, power a core up or down, or reset or shut down the system, supervisory systems at other Exception levels need to react to the power-state change request.

“Secure” (as a flag/designation): This refers to the privilege level the request claims to have been initiated from. When a request, like a Power State Coordination Interface (PSCI) command, is marked as “secure,” it is asking to be treated as though it originated from the trusted Secure World, with full access rights to all system resources.

“Processor’s actual security state”: This refers to the physical, hardware-enforced execution state the CPU is currently operating in (either Non-Secure World or Secure World). The physical state determines which memory regions and peripherals the code can genuinely access.

Vulnerability details: The Secure Flag passed to Versal™ Adaptive SoC’s Trusted Firmware for Cortex®-A processors (TF-A) for Arm’s Power State Coordination Interface (PSCI) commands was incorrectly set to secure instead of reflecting the processor’s actual security state. This allowed PSCI requests to appear as though they came from processors in the secure state rather than the non-secure state.

The vulnerability affects Versal™ Adaptive SoC’s Arm® Trusted Firmware (TF-A) for Cortex-A processors.

Root cause: The Secure Flag passed to PSCI (Power State Coordination Interface) commands was incorrectly set to secure instead of reflecting the processor’s actual security state.

Impact: This misconfiguration allows PSCI requests from a non-secure processor to appear as if they originate from a secure state, potentially enabling unauthorized access or manipulation of secure resources.
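The flawed and corrected behaviors can be modeled in a few lines. This is a toy model of the bug pattern, not TF-A source code; the constants and function names are illustrative:

```python
NON_SECURE, SECURE = 0, 1

def secure_flag_buggy(cpu_security_state):
    # Flawed pattern: every PSCI request is tagged as secure,
    # regardless of which world the caller actually runs in.
    return SECURE

def secure_flag_fixed(cpu_security_state):
    # Corrected pattern: derive the flag from the processor's actual
    # security state (in hardware, e.g. from the NS bit in SCR_EL3).
    return cpu_security_state

# A PSCI command issued by a non-secure caller:
buggy_flag = secure_flag_buggy(NON_SECURE)  # misreported as secure
fixed_flag = secure_flag_fixed(NON_SECURE)  # correctly non-secure
```

With the buggy pattern, authorization logic keyed on the flag treats a non-secure request as fully trusted, which is exactly the integrity problem described above.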

Severity: The CVSS v4.0 base score is 1.0 (Low), but the flaw compromises the integrity of the security model.

Official announcement: Please refer to the link for details.

https://www.amd.com/en/resources/product-security/bulletin/amd-sb-8020.html

Android Security Bulletin—December 2025

CVE-2025-48633 – Improper use of Binder identity manipulation in system code (18th Dec 2025)

Official announcement:  December 1, 2025 | Updated December 4, 2025

Preface: The improper use of Binder identity manipulation in Android system code refers to a class of security vulnerabilities where a malicious application or process can spoof its identity (specifically its User ID and Process ID) when communicating with a trusted system service via the Binder Inter-Process Communication (IPC) mechanism. This deception can allow a lower-privileged application to bypass security checks and execute sensitive operations with the privileges of a legitimate, trusted system process.

Before the remedy – The code iterated through each user profile and created a context for that user. It then called AccountManager.getAccounts() for that specific user context. This means the check was per-user, and the caller only saw accounts for the user context it created. However, because the code temporarily cleared the Binder identity (Binder.clearCallingIdentity()), it was operating with system-level privileges during that loop. If misused, this could allow a component to access accounts across users, which is a privilege escalation risk.
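The clear/restore pattern behind Binder.clearCallingIdentity() can be sketched with a toy model. This is a simplified Python simulation for illustration, not Android’s actual Java API; the class and uid values are hypothetical:

```python
class BinderSim:
    """Toy model of Binder calling-identity handling."""

    def __init__(self, caller_uid, system_uid=1000):
        self.system_uid = system_uid
        self.calling_uid = caller_uid  # identity security checks inspect

    def clear_calling_identity(self):
        # Temporarily act with the system service's own identity;
        # returns a token so the caller identity can be restored.
        token = self.calling_uid
        self.calling_uid = self.system_uid
        return token

    def restore_calling_identity(self, token):
        self.calling_uid = token

binder = BinderSim(caller_uid=10042)   # ordinary app uid
token = binder.clear_calling_identity()
try:
    # Any account lookup done here runs with system privileges --
    # the window the vulnerability describes.
    privileged_uid = binder.calling_uid
finally:
    binder.restore_calling_identity(token)
```

The risk arises inside that window: if per-user access checks are skipped while the identity is cleared, a caller can read data across users with system-level privileges.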

What was the vulnerability?

The issue was in the Android system code that checked whether any accounts existed on the device. Please refer to the attached diagram for details.

Official announcement: For more details, please refer to the following link – https://source.android.com/docs/security/bulletin/2025-12-01

Do PCIe IOCTL and SERDES pose a risk to Google Ironwood TPU? (15th Dec 2025)

Originally scheduled for release on December 17, 2025, this article was released ahead of schedule!

Preface: PCIe is one protocol used to transfer data across the SerDes. Different protocols can be used to transfer data across the SerDes.

SerDes (serializer/deserializer) blocks in PCIe Gen 5 and Gen 2 are critical physical-layer components that handle high-speed data conversion. Gen 5 runs at 32 GT/s per lane versus Gen 2’s 5 GT/s, so it requires advanced signal-integrity techniques (DSP-based equalization, error correction) and sophisticated retimers to manage the much higher signaling frequencies, while maintaining backward compatibility for seamless integration. This is crucial for modern AI/ML and data-intensive applications.

Background: Recent AMD advisories (CVE-2025-9612/9613/9614) highlight vulnerabilities in PCIe Integrity and Data Encryption (IDE) and IOCTL-based SERDES control. These flaws allow attackers with low-level or physical access to manipulate PCIe traffic or exploit driver logic, potentially leading to:

  • Data Integrity Violations
  • Privilege Escalation
  • Denial of Service (DoS)

Google’s Ironwood TPU architecture uses PCIe Gen5 (host) and Gen2 (gBMC) links, both relying on SERDES for high-speed signaling. If TPU drivers expose IOCTL interfaces for SERDES configuration without strict validation, similar risks apply:

  • Direct Hardware Access: IOCTLs can alter PLL settings, lane equalization, and data rates.
  • Privilege Escalation: Weak access control could let non-root processes manipulate SERDES.
  • Kernel Attack Surface: Complex IOCTL handlers may enable buffer overflow or arbitrary code execution.

Impact on Cloud TPU Deployments: Ironwood’s scale—9,216 chips per pod, 1.77 PB shared memory, and 9.6 Tb/s interconnect bandwidth—amplifies the consequences of link instability or compromised isolation. Multi-tenant environments increase exposure to insider or advanced persistent threats.

Recommended Mitigations

  • Restrict IOCTL Access: Enforce root-only privileges for SERDES configuration commands, and implement capability checks for sensitive operations (e.g., SERDES_SET_DATA_RATE).
  • Validate Input Parameters: Ensure strict bounds on PLL, lane equalization, and data rate settings.
  • Apply PCIe IDE Erratum #1 Guidance: Update firmware to address IDE vulnerabilities per PCI-SIG recommendations.
  • Enable Hardware Security Features: Use IOMMU and Zero-Trust principles for PCIe endpoints.
  • Continuous Monitoring: Deploy anomaly detection for PCIe link training and SERDES state changes.
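The access-control and input-validation mitigations can be sketched as an IOCTL-handler skeleton. This is a hedged illustration only: the command ID, allowed-rate table, and messages are hypothetical, not taken from any actual Google or AMD driver:

```python
SERDES_SET_DATA_RATE = 0x40  # hypothetical ioctl command ID

# PCIe Gen1-Gen5 line rates in GT/s -- the only values the link supports
ALLOWED_RATES_GTS = {2.5, 5.0, 8.0, 16.0, 32.0}

def handle_ioctl(cmd, arg, caller_is_root):
    if cmd == SERDES_SET_DATA_RATE:
        # Capability check: SERDES configuration is privileged.
        if not caller_is_root:
            raise PermissionError("SERDES configuration requires root")
        # Strict input validation: reject out-of-range or bogus rates.
        if arg not in ALLOWED_RATES_GTS:
            raise ValueError(f"unsupported data rate: {arg}")
        return f"data rate set to {arg} GT/s"
    raise ValueError(f"unknown ioctl: {cmd}")

ok = handle_ioctl(SERDES_SET_DATA_RATE, 32.0, caller_is_root=True)
try:
    handle_ioctl(SERDES_SET_DATA_RATE, 32.0, caller_is_root=False)
    denied = False
except PermissionError:
    denied = True
```

Rejecting unprivileged callers and unlisted parameter values up front closes off the privilege-escalation and buffer-abuse paths a complex IOCTL handler can otherwise expose.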

– End of article –

About: CVE-2025-33214 – NVIDIA NVTabular for Linux and CVE-2025-33213 – NVIDIA Merlin Transformers4Rec for Linux (15th Dec 2025)

Preface: Suppose you’re using cuML’s model persistence feature to load a serialized model from disk or a remote source. If the source is not trusted or validated, and the deserialization uses pickle or similar unsafe methods, it could execute arbitrary code.

The attached diagram demonstrates arbitrary code execution via pickle, which aligns with CVE-2025-33214 and likely CVE-2025-33213 if input validation is missing.

Background: NVTabular is a component of NVIDIA Merlin, an open-source framework for building and deploying recommender systems. It works with the other Merlin components, including Merlin Models, HugeCTR, and Merlin Systems, to provide end-to-end acceleration of recommender systems on the GPU.

NVTabular requires Python version 3.7+. Additionally, GPU support requires:

  • CUDA version 11.0+
  • NVIDIA Pascal GPU or later (Compute Capability >=6.0)
  • NVIDIA driver 450.80.02+
  • Linux or WSL

When running NVTabular on the Criteo 1TB Click Logs dataset with a single V100 32 GB GPU, feature engineering and preprocessing completed in 13 minutes. On a DGX-1 cluster with eight V100 GPUs, the same steps completed within three minutes. Combined with HugeCTR, the dataset can be processed and a full model trained in only six minutes.

NVIDIA Merlin™ accelerates the entire pipeline, from ingesting and training to deploying GPU-accelerated recommender systems. Merlin NVTabular is a feature engineering and preprocessing library designed to effectively manipulate terabytes of recommender system datasets and significantly reduce data preparation time. It provides efficient feature transformations, preprocessing, and high-level abstraction that accelerates computation on GPUs using the RAPIDS™ cuDF library.

Vulnerability details:

CVE-2025-33214 – NVIDIA NVTabular for Linux contains a vulnerability in the Workflow component, where a user could cause a deserialization issue. A successful exploit of this vulnerability might lead to code execution, denial of service, information disclosure, and data tampering.

CVE-2025-33213 – NVIDIA Merlin Transformers4Rec for Linux contains a vulnerability in the Trainer component where a user may cause a deserialization issue. A successful exploit of this vulnerability may lead to code execution, denial of service, information disclosure, and data tampering.
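A common hardening pattern for pickle-based loaders (a generic sketch, not NVIDIA’s actual fix) is to restrict which globals a pickle stream may resolve, so a crafted payload cannot name an arbitrary callable:

```python
import io
import pickle

class RestrictedUnpickler(pickle.Unpickler):
    # Allow only a small set of harmless built-in types; block everything else.
    SAFE = {("builtins", "dict"), ("builtins", "list"),
            ("builtins", "str"), ("builtins", "int")}

    def find_class(self, module, name):
        if (module, name) in self.SAFE:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")

def safe_loads(blob: bytes):
    return RestrictedUnpickler(io.BytesIO(blob)).load()

# Plain containers deserialize normally:
benign = pickle.dumps({"feature": [1, 2, 3]})
data = safe_loads(benign)

# A stream referencing a callable (stand-in for an attack payload) is refused:
malicious = pickle.dumps(print)
try:
    safe_loads(malicious)
    blocked = False
except pickle.UnpicklingError:
    blocked = True
```

An allowlist like this shrinks the attack surface, but treating checkpoints and workflow artifacts as untrusted input, and loading them only from verified sources, remains the primary defense.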

Official announcement: Please refer to the following link for details-

https://nvidia.custhelp.com/app/answers/detail/a_id/5739

antihackingonline.com