AMD response to EDK2 SMM MCE Enablement Issue (7th Aug 2025)

Preface: While it’s technically possible to update UEFI firmware from within a Linux user space environment, it’s not a common or recommended practice. Most UEFI updates are designed to be installed through specific utilities provided by the motherboard manufacturer, often requiring a bootable medium or a dedicated Windows application.

Background: EDK II, also known as EDK2, is an open-source firmware development environment for the Unified Extensible Firmware Interface (UEFI) and Platform Initialization (PI) specifications. It’s a modern, feature-rich, and cross-platform environment developed by the TianoCore project. Think of it as the official development environment for UEFI applications and a core component of many platforms’ firmware.

TianoCore is an open-source community focused on developing and promoting the Unified Extensible Firmware Interface (UEFI). It provides a firmware development environment, primarily known as EDK II, which is used for building UEFI firmware, drivers, and applications. TianoCore is a reference implementation of UEFI and is widely adopted by the industry.

Technical details: A researcher reported a bug in the open-source EDK2 system management interrupt (SMI) entry code that is triggered when a machine check exception (MCE) occurs near the start of the SMI handler. An attacker who can inject an MCE could cause execution to jump to an attacker-controlled interrupt handler, leading to arbitrary code execution.

Ref: On AMD EPYC processors, the System Management Mode (SMM) functionality is indeed implemented within the System Management Unit (SMU), which is a distinct block of logic on the processor die.

The System Management Unit (SMU) contains a mailbox function to facilitate communication between the SMU and other system components, including the CPU and operating system. This mailbox acts as a communication channel for sending commands and data, and receiving responses, enabling the SMU to perform its tasks related to system management, power management, and hardware control.

Official announcement: Please refer to the following link for details – https://www.amd.com/en/resources/product-security/bulletin/amd-sb-7043.html

CVE-2025-23318 and CVE-2025-23319: About NVIDIA Triton Inference Server (6th Aug 2025)

Preface: Nvidia’s security advisories released on August 4, 2025 (e.g., CVE-2025-23318, CVE-2025-23319) are specifically related to the Python backend, i.e., the Triton backend for Python. The goal of the Python backend is to let you serve models written in Python through Triton Inference Server without having to write any C++ code.

Background: NVIDIA Triton Inference Server is an open-source inference serving software that streamlines the deployment and execution of AI models from various deep learning and machine learning frameworks. It achieves this flexibility through a modular system of backends. 

Each backend within Triton is responsible for executing models from a specific framework. When an inference request arrives for a particular model, Triton automatically routes the request to the necessary backend for execution. 

Key backend frameworks supported by Triton include:

  • TensorRT: NVIDIA’s high-performance deep learning inference optimizer and runtime.
  • TensorFlow: A popular open-source machine learning framework.
  • PyTorch: Another widely used open-source machine learning library.
  • ONNX: An open standard for representing machine learning models.
  • OpenVINO: Intel’s toolkit for optimizing and deploying AI inference.
  • Python: A versatile backend that can execute models written directly in Python and also serves as a dependency for other backends.
  • RAPIDS FIL (Forest Inference Library): For efficient inference of tree models (e.g., XGBoost, LightGBM, Scikit-Learn).

This modular backend architecture allows Triton to provide a unified serving solution for a wide range of AI models, regardless of the framework they were trained in.
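
To make the Python backend concrete, below is a minimal sketch of a backend model script, based on my understanding of the TritonPythonModel interface. The model name, directory layout, and the INPUT0/OUTPUT0 tensor names are placeholders chosen for this example, not values from any official sample.

  # model.py – minimal sketch of a Triton Python backend model (assumed layout:
  # models/echo/1/model.py plus a config.pbtxt declaring INPUT0/OUTPUT0).
  import numpy as np
  import triton_python_backend_utils as pb_utils  # supplied by the Python backend at runtime

  class TritonPythonModel:
      """Hypothetical model that simply echoes its input tensor."""

      def initialize(self, args):
          # args carries model_name, model_config, etc. as strings.
          self.model_name = args["model_name"]

      def execute(self, requests):
          responses = []
          for request in requests:
              # INPUT0/OUTPUT0 are placeholder tensor names from the assumed config.pbtxt.
              in0 = pb_utils.get_input_tensor_by_name(request, "INPUT0")
              out0 = pb_utils.Tensor("OUTPUT0", in0.as_numpy().astype(np.float32))
              responses.append(pb_utils.InferenceResponse(output_tensors=[out0]))
          return responses

      def finalize(self):
          pass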

Vulnerability details:

CVE-2025-23318: NVIDIA Triton Inference Server for Windows and Linux contains a vulnerability in the Python backend, where an attacker could cause an out-of-bounds write. A successful exploit of this vulnerability might lead to code execution, denial of service, data tampering, and information disclosure.

CVE-2025-23319: NVIDIA Triton Inference Server for Windows and Linux contains a vulnerability in the Python backend, where an attacker could cause an out-of-bounds write by sending a request. A successful exploit of this vulnerability might lead to remote code execution, denial of service, data tampering, or information disclosure.

Official announcement: Please see the link for details –

https://nvidia.custhelp.com/app/answers/detail/a_id/5687

CVE-2025-23310: The NVIDIA Triton Inference Server for Windows and Linux suffers from a stack buffer overflow due to specially crafted input. (5th Aug 2025)

Preface: The NVIDIA Triton Inference Server API supports both HTTP/REST and GRPC protocols. These protocols allow clients to communicate with the Triton server for various tasks such as model inferencing, checking server and model health, and managing model metadata and statistics.

Background: NVIDIA Triton™ Inference Server, part of the NVIDIA AI platform and available with NVIDIA AI Enterprise, is open-source software that standardizes AI model deployment and execution across every workload.

The Asynchronous Server Gateway Interface (ASGI) is a calling convention for web servers to forward requests to asynchronous-capable Python frameworks and applications. It is built as a successor to the Web Server Gateway Interface (WSGI).
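
For readers who have not met ASGI before, the following is a minimal sketch of the calling convention itself: an async callable that receives a connection scope plus receive/send channels. It is a generic illustration of ASGI, not of Triton’s internal HTTP server.

  # Minimal ASGI application: an async callable taking (scope, receive, send).
  # It can be served by any ASGI server (for example: uvicorn app:app).

  async def app(scope, receive, send):
      assert scope["type"] == "http"      # handle plain HTTP requests only
      await receive()                     # consume the request-body event
      await send({
          "type": "http.response.start",
          "status": 200,
          "headers": [(b"content-type", b"text/plain")],
      })
      await send({
          "type": "http.response.body",
          "body": b"hello from ASGI\n",
      })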

NVIDIA Triton Inference Server integrates a built-in web server to expose its functionality and allow clients to interact with it. This web server is fundamental to how Triton operates and provides access to its inference capabilities on both Windows and Linux environments.
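
As a rough illustration of that HTTP interface, the sketch below exercises the v2 (KServe-style) endpoints Triton exposes: a liveness check and an inference request. The host, port, model name, and tensor layout are assumptions made for this example.

  # Minimal sketch of talking to Triton's built-in HTTP server via the v2 REST API.
  # Assumes a server on localhost:8000 and a hypothetical model named "echo".
  import json
  import urllib.request

  BASE = "http://localhost:8000"  # default Triton HTTP port (assumed deployment)

  # 1. Liveness check: HTTP 200 means the server is up.
  with urllib.request.urlopen(f"{BASE}/v2/health/live") as resp:
      print("server live:", resp.status == 200)

  # 2. Inference request against the hypothetical "echo" model.
  payload = {"inputs": [{"name": "INPUT0", "shape": [1, 4],
                         "datatype": "FP32", "data": [1.0, 2.0, 3.0, 4.0]}]}
  req = urllib.request.Request(
      f"{BASE}/v2/models/echo/infer",
      data=json.dumps(payload).encode(),
      headers={"Content-Type": "application/json"},
  )
  with urllib.request.urlopen(req) as resp:
      print(json.loads(resp.read()))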

Vulnerability details: CVE-2025-23310 – NVIDIA Triton Inference Server for Windows and Linux contains a vulnerability where an attacker could cause stack buffer overflow by specially crafted inputs. A successful exploit of this vulnerability might lead to remote code execution, denial of service, information disclosure, and data tampering.

Official announcement: Please refer to the link for details –

https://nvidia.custhelp.com/app/answers/detail/a_id/5687

3I/ATLAS, who are you? (4th Aug 2025)

Quote: Hawking advised against active attempts to contact alien civilizations, which could be dangerous, arguing that an advanced alien race might regard humans much as humans regard bacteria, which could lead to catastrophic consequences if they discovered Earth.

Ref: https://pmc.ncbi.nlm.nih.gov/articles/PMC11462274/

Background: 3I/ATLAS, also known as C/2025 N1 and previously as A11pl3Z, is an interstellar comet discovered by the Asteroid Terrestrial-impact Last Alert System station at Río Hurtado, Chile on 1 July 2025, when it was entering the inner Solar System at a distance of 4.5 astronomical units from the Sun.

Technical details: A team of researchers has presented the wild theory that an interstellar object might be hostile “alien technology” that could reach Earth in fall 2025. Below is the speculation, based on the evidence cited.

– 3I/ATLAS’s orbital plane lies virtually in the ecliptic, though retrograde (i = 175.11°)

– 3I/ATLAS is too large to be an asteroid

– 3I/ATLAS shows no evidence of cometary outgassing.

Ref: cometary outgassing provides the energy to push a comet away from the Sun. As a comet approaches the Sun, its icy nucleus warms, causing ices to sublimate and release gas and dust, forming a coma. This outgassing exerts a force on the comet, pushing it in the opposite direction of the escaping gas, which is a major factor in the comet’s trajectory.

– 3I/ATLAS approaches unusually close to Venus, Mars and Jupiter

– 3I/ATLAS achieves perihelion on the opposite side of the Sun from Earth

Remark: The Earth revolves around the Sun in an elliptical orbit, and its closest point to the Sun is called perihelion.

– The optimal point to do a reverse Solar Oberth and stay bound to the Sun is at perihelion.

– 3I/ATLAS’s incoming radiant made it hard to detect sooner

The incoming radiant of comet 3I/ATLAS aligning with the Galactic Center, a bright and crowded region of the sky, made it difficult to detect, according to an Instagram post. This unusual entry path, coupled with the comet’s potential “silent propulsion” (lacking typical outgassing) and close encounters with planets, contributed to its delayed detection.

Technical papers announcement: The technical paper was published on the preprint server arXiv on July 16, 2025. For more information, please refer to the link – https://arxiv.org/abs/2507.12213

Yahoo headline from July 27, 2025 – https://www.yahoo.com/news/articles/possibly-hostile-alien-object-could-023132776.html

CVE-2025-54574: About Squid. Stay alert! (4 Aug 2025)

NVD Published Date: 08/01/2025

NVD Last Modified: 08/01/2025

Preface: While HTTP/1.0 is largely obsolete, HTTP/1.1 remains in widespread use, despite the newer HTTP/2 and HTTP/3 protocols. Though HTTP/1.1 has been updated in recent years, its core functionality is still foundational for much of the web.

Does processing Uniform Resource Names consume memory?

Yes, processing Uniform Resource Names (URNs) can consume memory. While URNs themselves are symbolic names and don’t directly represent the resource’s location or data, they need to be processed to resolve them, which often involves memory allocation for parsing, data storage, and potential redirection handling.
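
To make the memory point concrete, here is a small Python sketch that splits a URN into its components (urn:<NID>:<NSS>); every component becomes a freshly allocated object, which is the kind of per-request cost a resolver or proxy pays. The example URN is arbitrary.

  # Tiny sketch: even a symbolic name like a URN (urn:<NID>:<NSS>, RFC 8141)
  # costs memory to process – each component becomes a new Python object.
  import sys

  def parse_urn(urn: str):
      scheme, nid, nss = urn.split(":", 2)   # three newly allocated strings
      if scheme.lower() != "urn":
          raise ValueError("not a URN")
      return {"nid": nid, "nss": nss}        # plus a dict to hold them

  parts = parse_urn("urn:ietf:rfc:2169")
  print(parts)                               # {'nid': 'ietf', 'nss': 'rfc:2169'}
  print("approx. bytes held:", sum(sys.getsizeof(v) for v in parts.values()))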

Background: Squid Proxy is a caching proxy, and that’s a key aspect of how it functions. It’s not just a proxy that forwards requests; it also stores copies of frequently accessed web content locally. This caching behavior significantly speeds up subsequent requests for the same content, making it faster and more efficient than a simple forwarding proxy.

A “Trivial-HTTP response,” often abbreviated as THTTP, refers to a convention for encoding resolution service requests and responses using the HTTP/1.0 or HTTP/1.1 protocols, as defined in RFC 2169.

Squid Proxy is primarily developed using C++. While it utilizes some C language components and libraries, the dominant language in its codebase is C++.

Ref: STCB, in the context of the Squid cache, refers to the StoreEntry data structure, which is a key component of how Squid caches web content in memory. It’s a relatively small amount of metadata associated with each cached object, stored in memory to speed up access and retrieval.

Vulnerability details: Squid is a caching proxy for the Web. In versions 6.3 and below, Squid is vulnerable to a heap buffer overflow and possible remote code execution attack when processing URN due to incorrect buffer management. This has been fixed in version 6.4. To work around this issue, disable URN access permissions.

Official announcement: Please see the link for details –

https://nvd.nist.gov/vuln/detail/CVE-2025-54574

CVE-2025-54576: Design weakness in OAuth2-Proxy 7.10.0 and below (1 Aug 2025)

Preface: Regular Expressions are efficient in that one line of code can save you writing hundreds of lines. But they’re normally slower (even pre-compiled) than thoughtful hand-written code, simply due to the overhead. Generally, the simpler the objective, the worse Regular Expressions are; they’re better suited to complex operations.

Background: OAuth2 Proxy is used to add authentication to applications that don’t natively support it, acting as a reverse proxy that handles authentication using OAuth2 providers like Google, GitHub, or Okta. It simplifies the process of adding authentication to existing applications by separating the authentication logic from the application code. This allows developers to focus on building their core application logic without needing to implement complex authentication workflows.

Vulnerability details: In versions 7.10.0 and below, oauth2-proxy deployments are vulnerable when using the skip_auth_routes configuration option with regex patterns. Attackers can bypass authentication by crafting URLs with query parameters that satisfy configured regex patterns, allowing unauthorized access to protected resources. The issue stems from skip_auth_routes matching against the full request URI. Deployments using skip_auth_routes with regex patterns containing wildcards or broad matching patterns are most at risk.

Resolution: This issue is fixed in version 7.11.0

Workarounds include: auditing all skip_auth_routes configurations for overly permissive patterns, replacing wildcard patterns with exact path matches where possible, ensuring regex patterns are properly anchored (starting with ^ and ending with $), or implementing custom validation that strips query parameters before regex matching.
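
The pitfall described above can be reproduced with ordinary regular expressions. The sketch below uses made-up patterns and paths (not taken from any real oauth2-proxy deployment) to show why an unanchored pattern matched against the full request URI, query string included, can be satisfied by an attacker, and why anchoring plus stripping the query string closes the gap.

  # Why unanchored skip-list regexes are dangerous: the patterns and URLs here
  # are made up for illustration, not taken from any real deployment.
  import re
  from urllib.parse import urlsplit

  skip_pattern = re.compile(r"/public/healthz")          # unanchored, overly broad

  request_uri = "/admin/secrets?probe=/public/healthz"   # attacker-crafted URL

  # Naive check: search the full request URI (path + query) – bypass succeeds.
  print(bool(skip_pattern.search(request_uri)))          # True  (auth would be skipped)

  # Safer check: strip the query string and require an anchored match.
  safe_pattern = re.compile(r"^/public/healthz$")
  path_only = urlsplit(request_uri).path                 # "/admin/secrets"
  print(bool(safe_pattern.match(path_only)))             # False (auth enforced)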

Official announcement: Please see the link for details – https://nvd.nist.gov/vuln/detail/CVE-2025-54576

CVE-2025-43209: Processing maliciously crafted web content may lead to an unexpected Safari crash (31-07-2025)

Preface: In essence, built-in browsers are not just about browsing; they are about maintaining control over the core functionality and user experience of the operating system.

Background: Safari and Edge, while built-in, utilize rendering engines derived from the KHTML project, specifically WebKit and Blink, respectively. WebKit is used in Safari, and Blink, a fork of WebKit, powers the Chromium-based Edge. These engines are not just for browsing; they handle the visual rendering of web content within the browser.

In Safari and Edge, the rendering engines (WebKit for Safari and Blink for Edge) initially interact with the networking component to fetch the necessary resources for a webpage. This workflow prioritizes efficient data retrieval, enabling the browser to display content to the user as quickly as possible.

Safari’s rendering engine, WebKit, is developed and maintained by Apple. WebKit is an open-source project that was originally forked from KDE’s KHTML and KJS engines. Safari is a web browser developed by Apple and is the default browser on macOS, iOS, iPadOS, and visionOS.

Vulnerability details: An out-of-bounds access issue was addressed with improved bounds checking. This issue is fixed in macOS Sequoia 15.6, iPadOS 17.7.9, iOS 18.6 and iPadOS 18.6, tvOS 18.6, macOS Sonoma 14.7.7, watchOS 11.6, visionOS 2.6, macOS Ventura 13.7.7. Processing maliciously crafted web content may lead to an unexpected Safari crash.

Ref: Out-of-Bounds Read (e.g., CVE-2025-43209)

-Reads memory outside the allocated buffer.

-Can leak: Pointers (used to bypass ASLR) or Object metadata (used for type confusion).

-Often used as a first stage in a multi-step exploit.

Official announcement: Please refer to the link for details – https://nvd.nist.gov/vuln/detail/CVE-2025-43209

CVE-2025-54419: Design weakness in Node-SAML version 5.0.1 (30th July 2025)

Preface: SSO isn’t completely secure; in fact, its security depends on the design of the entire system. This month, a YouTuber known for his camera skills posted a video about his experience, which resulted in him losing all the miles he had redeemed in February 2025. He contacted airline customer service but received no reasonable response. The airline strictly adhered to its SSO certification regulations. The truth only came to light this month (July 2025).

Background: node-saml is a specific library for implementing SAML 2.0 authentication in Node.js applications. node-saml is designed for Node.js, meaning its API and integration patterns are tailored for the JavaScript ecosystem. Other SAML libraries exist for different programming languages (e.g., Java, Python, .NET), each with its own conventions and dependencies.

A SAML response or assertion signed with the Identity Provider’s (IdP) private key is considered a validly signed document. This digital signature ensures the integrity and authenticity of the SAML message, confirming it hasn’t been tampered with and originates from a trusted IdP.

SAML relies on digital signatures to ensure the integrity and authenticity of messages exchanged between the Identity Provider (IdP) and the Service Provider (SP). The IdP digitally signs SAML responses and assertions using its private key. The SP then uses the corresponding public key (obtained from the IdP’s signing certificate) to verify the signature, ensuring the message hasn’t been tampered with and originates from a trusted IdP.

Vulnerability details: Node-SAML is a SAML library for Node that does not depend on any frameworks. In version 5.0.1, Node-SAML loads the assertion from the (unsigned) original response document, which differs from the parts that are verified when checking the signature. This allows an attacker to modify authentication details within a valid SAML assertion. For example, in one attack it is possible to remove any character from the SAML assertion username. To conduct the attack, an attacker would need a validly signed document from the identity provider (IdP). This is fixed in version 5.1.0.
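
The sketch below illustrates the general class of mistake being described: verifying the signature over one node but reading data from the raw response document. It is a conceptual Python example, not node-saml’s actual code; verify_signature is a hypothetical stand-in for a real XML-DSig check and the element names are simplified.

  # Conceptual sketch of the "verify one node, read from another" pitfall.
  # verify_signature() is a hypothetical stand-in for a real XML-DSig check;
  # element names are simplified and this is NOT node-saml's actual code.
  from lxml import etree

  def verify_signature(node) -> bool:
      # Placeholder: pretend only elements marked signed="yes" are covered
      # by a valid IdP signature.
      return node is not None and node.get("signed") == "yes"

  def unsafe_extract(response_xml: bytes) -> str:
      doc = etree.fromstring(response_xml)
      assertion = doc.find(".//Assertion[@signed='yes']")
      if not verify_signature(assertion):
          raise ValueError("bad signature")
      # Pitfall: the username is read from the *whole* document again, so an
      # unsigned, attacker-added element can win the lookup.
      return doc.findtext(".//NameID")

  def safe_extract(response_xml: bytes) -> str:
      doc = etree.fromstring(response_xml)
      assertion = doc.find(".//Assertion[@signed='yes']")
      if not verify_signature(assertion):
          raise ValueError("bad signature")
      # Fix: only read fields from the subtree whose signature was verified.
      return assertion.findtext(".//NameID")

  forged = b"""<Response>
    <Assertion><NameID>attacker</NameID></Assertion>
    <Assertion signed="yes"><NameID>alice</NameID></Assertion>
  </Response>"""

  print(unsafe_extract(forged))   # "attacker" – value from an unsigned element
  print(safe_extract(forged))     # "alice"    – value from the verified assertion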

Official announcement: Please refer to the link for details – https://www.tenable.com/cve/CVE-2025-54419

CVE-2025-8183: About the µD3TN protocol implementation.

A spaceman came travelling – cyber security in space (29-07-2025)

NVD Published Date: 07/25/2025

NVD Last Modified: 07/25/2025

Preface: Essentially, any industry or application that requires communication in environments with unreliable or intermittent network conditions can benefit from BPv7’s capabilities. µD3TN has been successfully tested in Low Earth Orbit (LEO) on the OPS-SAT satellite, demonstrating its ability to handle the unique challenges of space communication, such as high latency and intermittent connectivity.

Background: The µD3TN project, a software implementation of the Delay-/Disruption-Tolerant Networking (DTN) Bundle Protocol, incorporates an allocator that functions similarly to the C standard library’s malloc dynamic memory allocator.

This allocator within µD3TN is responsible for managing memory allocation and deallocation for various components and data structures used within the DTN protocol stack. This includes, for example, the allocation of memory for bundles, which are the fundamental data units in DTN, as well as for internal structures and buffers required for bundle processing, forwarding, and storage.

The design of this allocator aims to provide efficient memory management within the constraints and requirements of a DTN implementation, potentially considering factors such as resource limitations in embedded systems or the need for robust handling of intermittent connectivity.

Vulnerability details: NULL Pointer Dereference in µD3TN via non-singleton destination Endpoint Identifier allows remote attacker to reliably cause DoS.
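
As a defensive illustration of this class of bug (using state that simply does not exist for a non-singleton destination), here is a small Python sketch that validates a destination EID before any per-destination lookup. The “~” naming convention for non-singleton dtn endpoints follows my reading of RFC 9171, and the routing table is invented for the example; this is not µD3TN’s code.

  # Defensive sketch (not µD3TN code): validate a bundle's destination EID
  # before dereferencing per-destination state, so a non-singleton (group)
  # destination cannot lead to a NULL/None dereference. The "~" convention for
  # non-singleton dtn node names follows my reading of RFC 9171.
  from urllib.parse import urlparse

  ROUTING_TABLE = {"node-a": "link-1"}   # invented next-hop table

  def is_singleton_dtn_eid(eid: str) -> bool:
      parsed = urlparse(eid)
      if parsed.scheme != "dtn" or not parsed.netloc:
          return False
      return not parsed.netloc.startswith("~")   # "~group" names: non-singleton

  def dispatch(destination_eid: str) -> str:
      if not is_singleton_dtn_eid(destination_eid):
          raise ValueError(f"refusing non-singleton destination: {destination_eid}")
      next_hop = ROUTING_TABLE.get(urlparse(destination_eid).netloc)
      if next_hop is None:                       # the check a NULL-deref bug forgets
          raise LookupError("no route for destination")
      return next_hop

  print(dispatch("dtn://node-a/service"))        # link-1
  try:
      dispatch("dtn://~sensors/all")             # rejected instead of crashing
  except ValueError as err:
      print(err)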

Official announcement: Please refer to the link for details – https://nvd.nist.gov/vuln/detail/CVE-2025-8183

The whole world is paying attention to Nvidia, but supercomputers using AMD are the super ones! (July 28, 2025)

Preface: The El Capitan system at the Lawrence Livermore National Laboratory, California, USA remains the No. 1 system on the TOP500 (June 2025 list). The HPE Cray EX255a system was measured at 1.742 Exaflop/s on the HPL benchmark. El Capitan has 11,039,616 cores and is based on AMD 4th generation EPYC™ processors with 24 cores at 1.8 GHz and AMD Instinct™ MI300A accelerators. It uses the HPE Slingshot interconnect for data transfer and achieves an energy efficiency of 58.9 Gigaflops/watt. The system also achieved 17.41 Petaflop/s on the HPCG benchmark, which makes it the new leader in that ranking as well.
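
As a quick sanity check on those published figures, dividing the HPL performance by the reported energy efficiency gives the approximate power draw of the HPL run. The short calculation below only restates the numbers quoted above.

  # Back-of-the-envelope check using only the figures quoted above.
  hpl_flops = 1.742e18            # 1.742 Exaflop/s on HPL
  efficiency = 58.9e9             # 58.9 Gigaflops per watt

  power_watts = hpl_flops / efficiency
  print(f"implied HPL power draw: {power_watts / 1e6:.1f} MW")       # ~29.6 MW

  cores = 11_039_616
  print(f"average per core: {hpl_flops / cores / 1e9:.0f} GFLOP/s")  # ~158 GFLOP/s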

Background: Does El Capitan Use Docker or Kubernetes? El Capitan does not use Docker directly, but it does use Kubernetes—specifically:

Kubernetes is deployed on Rabbit and worker nodes. It is part of a stateless orchestration layer integrated with the Tri-Lab Operating System Stack (TOSS).

Kubernetes is used alongside Flux (the resource manager) and Rabbit (the near-node storage system) to manage complex workflows.

Why Kubernetes Instead of Docker Alone?

While Docker is lightweight and flexible, Kubernetes offers orchestration, which is critical for:

  • Managing thousands of concurrent jobs.
  • Coordinating data movement and storage across Rabbit nodes.
  • Supporting AI/ML workflows and in-situ analysis.

But Kubernetes has a larger memory and CPU footprint than Docker alone.

Technical details: HPE Cray Operating System (COS) is a specialized version of SUSE Linux Enterprise Server designed for high-performance computing, rather than being a variant of Red Hat Enterprise Linux. It’s built to run large, complex applications at scale and enhance application efficiency, reliability, management, and data access. While COS leverages SUSE Linux, it incorporates features tailored for supercomputing environments, such as enhanced memory sharing, power monitoring, and advanced kernel debugging.

What Does Cray Modify?
Cray (now part of HPE) primarily modifies:
– The Linux kernel, for performance tuning, scalability, and hardware support
– Adds HPC-specific enhancements, such as:
  • Optimized scheduling
  • NUMA-aware memory management
  • High-speed interconnect support (e.g., Slingshot)
  • Enhanced I/O and storage stack
– Integrates with the Cray Shasta architecture and Slingshot interconnect

These modifications are layered on top of SUSE Linux, meaning the base OS remains familiar and enterprise-grade, but is tailored for supercomputing.

End.

Our world is full of challenges and hardships. But you must be happy every day!

antihackingonline.com