Category Archives: AI and ML

In the Linux kernel, the CVE-2024-26921 vulnerability has been resolved. Open vSwitch is safe again. (19th Apr 2024)

Preface: Open vSwitch is a production quality, multilayer virtual switch licensed under the open source Apache 2.0 license.  It is designed to enable massive network automation through programmatic extension, while still supporting standard management interfaces and protocols (e.g. NetFlow, sFlow, IPFIX, RSPAN, CLI, LACP, 802.1ag).  In addition, it is designed to support distribution across multiple physical servers similar to VMware’s vNetwork distributed vswitch or Cisco’s Nexus 1000V.

Background: The buffers used by the kernel to manage network packets are referred to as sk_buffs in Linux. The buffers are always allocated as at least two separate components: a fixed size header of type struct sk_buff; and a variable length area large enough to hold all or part of the data of a single packet.

Vulnerability details: The fix involves four key files. For the full explanation, please refer to the CVE record at the link below –

https://nvd.nist.gov/vuln/detail/CVE-2024-26921

Security Focus: A relevant earlier patch for this issue was 8282f27449bf (“inet: frag: Always orphan skbs inside ip_defrag()”) [..] net/ipv4/ip_output[.]c depends on skb->sk being set, and probably to an inet socket, not an arbitrary one. If we orphan the packet in ipvlan, then downstream things like the FQ packet scheduler will not work properly. We need to change ip_defrag() to only use skb_orphan() when really needed, i.e. whenever frag_list is going to be used.

TX: skb->sk might have been passed as argument to dst->output and must remain valid until tx completes. Move sk to reassembled skb and fix up wmem accounting.

CVE-2024-31580 – PyTorch before v2.2.0 contains a heap buffer overflow vulnerability (18th Apr 2024)

Preface: Using the C++ new operator, we can allocate memory at runtime. The new operator is used for dynamic memory allocation: it reserves space on the heap rather than the stack.

Background: PyTorch is a deep learning framework open-sourced by Facebook in early 2017. It is built on Torch, advertised as “Python first”, and tailor-made for the Python language. PyTorch fully supports GPUs and uses reverse-mode automatic differentiation, so the computational graph can be modified dynamically. This makes it a popular choice for rapid experimentation and prototyping.
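As a quick illustration of the reverse-mode automatic differentiation and dynamic graph mentioned above, here is a minimal PyTorch sketch (standard public API; the tensor values are arbitrary):

import torch

# Build a tiny computation with gradient tracking enabled.
x = torch.tensor([2.0, 3.0], requires_grad=True)

# The graph is defined on the fly by ordinary Python control flow.
y = (x ** 2).sum() if x.sum() > 0 else (x ** 3).sum()

# Reverse-mode automatic differentiation walks the recorded graph backwards.
y.backward()

print(y.item())   # 13.0 for the values above
print(x.grad)     # tensor([4., 6.]), i.e. dy/dx = 2*x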

Vulnerability details: PyTorch before v2.2.0 was discovered to contain a heap buffer overflow vulnerability in the component /runtime/vararg_functions.cpp. This vulnerability allows attackers to cause a Denial of Service (DoS) via a crafted input.

Official announcement: Please refer to the link for details –

https://nvd.nist.gov/vuln/detail/CVE-2024-31580

CVE-2024-31861: Improper Control of Generation of Code (Code Injection) vulnerability in Apache Zeppelin. (12-April-2024)

Preface: Training is the most important step in machine learning. In training, you pass the prepared data to your machine learning model so it can find patterns and make predictions. The model learns from the data so that it can accomplish the task it was given.

Background: What is Apache Zeppelin? Apache Zeppelin is an open-source, web-based notebook that enables data visualization, data exploration, and collaborative data analytics. Apache Zeppelin interpreter supports several language backends, including Apache Spark, Python, R, JDBC, Apache Flink, Markdown, and Shell.

By integrating Apache Submarine into Zeppelin, we can use Zeppelin’s data discovery, data analysis, data visualization and collaboration capabilities to visualize the results of algorithm development and parameter tuning during machine learning model training.

Vulnerability details: Improper Control of Generation of Code (‘Code Injection’) vulnerability in Apache Zeppelin. Attackers can use the Shell interpreter as a code-generation gateway and execute the generated code as if it were ordinary notebook code. This issue affects Apache Zeppelin from 0.10.1 before 0.11.1. Users are recommended to upgrade to version 0.11.1, which does not ship the Shell interpreter by default.
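The advisory above concerns Zeppelin’s Shell interpreter, but the underlying bug class is generic. The hypothetical Python sketch below (not Zeppelin code) shows why handing user-controlled text to a shell turns an interpreter into a code-generation gateway, and how passing arguments without a shell avoids it:

import subprocess

user_input = "data.csv; rm -rf /tmp/workdir"   # attacker-controlled text

# Dangerous: with shell=True the string is re-parsed by the shell, so the ';'
# starts a second, attacker-chosen command.
# subprocess.run(f"wc -l {user_input}", shell=True)

# Safer: arguments are passed as a list and never re-parsed by a shell,
# so the payload is treated as a literal (odd) filename.
subprocess.run(["wc", "-l", user_input], check=False)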

Official announcement: Please refer to the link for details – https://nvd.nist.gov/vuln/detail/CVE-2024-31861

CVE‑2024-0072 and CVE-2024-0076: Supercomputer and AI development Interlude (4th Apr 2024)

Preface: A CUDA binary (also referred to as cubin) file is an ELF-formatted file which consists of CUDA executable code sections as well as other sections containing symbols, relocators, debug info, etc. By default, the CUDA compiler driver nvcc embeds cubin files into the host executable file.

Background: To dump CUDA ELF sections from a cubin file in human-readable format, use the following command: cuobjdump -elf <cubin file>

nvdisasm extracts information from standalone cubin files and presents them in human readable format. The output of nvdisasm includes CUDA assembly code for each kernel, listing of ELF data sections and other CUDA specific sections.

--base-address <value> (short form: -base)

Desc: Specify the logical base address of the image to disassemble. This option is only valid when disassembling a raw instruction binary (see option --binary), and is ignored when disassembling an ELF file. Default value: 0.

Vulnerability details: CVE‑2024‑0072 and CVE-2024-0076: NVIDIA CUDA toolkit for all platforms contains a vulnerability in cuobjdump and nvdisasm where an attacker may cause a crash by tricking a user into reading a malformed ELF file. A successful exploit of this vulnerability may lead to a partial denial of service.
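Since the flaw is triggered by a malformed ELF file, a simple defensive habit (my own suggestion, not part of NVIDIA’s guidance) is to sanity-check a file before handing it to cuobjdump or nvdisasm. A minimal Python sketch, using a hypothetical file name sample.cubin:

ELF_MAGIC = b"\x7fELF"

def looks_like_elf(path: str) -> bool:
    """Tiny sanity check: ELF magic plus plausible class/endianness bytes in e_ident."""
    with open(path, "rb") as f:
        ident = f.read(16)                      # e_ident: first 16 bytes of an ELF header
    if len(ident) < 16 or ident[:4] != ELF_MAGIC:
        return False
    ei_class, ei_data = ident[4], ident[5]
    return ei_class in (1, 2) and ei_data in (1, 2)   # 32/64-bit, little/big endian

if __name__ == "__main__":
    print(looks_like_elf("sample.cubin"))       # hypothetical file name

This does not make the tools safe against a deliberately crafted ELF, but it filters out obviously truncated or non-ELF inputs before they ever reach the parser.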

Official announcement: Please refer to the link for details – https://nvidia.custhelp.com/app/answers/detail/a_id/5517

CVE-2024-3135: Missing CSRF token vulnerability in LocalAI (April 1, 2024)

Preface: Imagine that you are training your application to differentiate between two types of cars (Ferrari and Porsche). You show the app numerous images of both cars, covering appearance, features and engine design. Over time, the app begins to recognize the unique features that distinguish one from the other. At that point, the application can tell the difference between the two without help, which is essentially what your machine learning model does. We call this phase training.
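To make the training phase above concrete, here is a minimal sketch of a binary classifier training loop in PyTorch (synthetic data stands in for the two car classes; this is an illustration, not any particular product’s code):

import torch
from torch import nn

# Synthetic stand-in for "Ferrari vs Porsche" feature vectors (e.g. extracted image features).
X = torch.randn(200, 8)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).float().unsqueeze(1)   # toy labelling rule

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Training: repeatedly show the data, measure the error, and adjust the weights.
for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

print(f"final training loss: {loss.item():.4f}")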

Background: LocalAI is a drop-in replacement REST API compatible with the OpenAI API specifications for local inferencing. It allows you to run models locally or on-prem with consumer-grade hardware (no need for expensive cloud services or GPUs), supporting multiple model families compatible with the ggml format.
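Because LocalAI mirrors the OpenAI API, a client can talk to it with a plain HTTP request. A minimal sketch, assuming LocalAI is listening on localhost:8080 and that a model named "ggml-gpt4all-j" has been configured (both are assumptions; adjust them to your deployment):

import json
import urllib.request

# Assumed local endpoint and model name; change these to match your LocalAI setup.
url = "http://localhost:8080/v1/chat/completions"
payload = {
    "model": "ggml-gpt4all-j",
    "messages": [{"role": "user", "content": "Say hello in one short sentence."}],
    "temperature": 0.7,
}

req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["choices"][0]["message"]["content"])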

Vulnerability details: The web server lacked CSRF tokens, allowing an attacker to host malicious JavaScript on a host that, when visited by a LocalAI user, could allow the attacker to fill disk space to deny service or abuse credits.

Ref: Why do “missing CSRF token” errors look so common? On the client side they are often caused by ad- or script-blocking plugins or extensions, or by the browser itself when it is not allowed to set cookies. This vulnerability is different: the LocalAI web server simply did not implement CSRF tokens at all.
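For readers unfamiliar with the mechanism, a CSRF token is simply an unguessable per-session value that the server embeds in its own forms and verifies on every state-changing request. A minimal, framework-agnostic Python sketch of the idea (illustrative only, not LocalAI’s actual fix):

import hmac
import secrets

# Issued when the session is created and embedded in the server's own forms/pages.
def issue_csrf_token(session: dict) -> str:
    token = secrets.token_urlsafe(32)
    session["csrf_token"] = token
    return token

# Checked on every state-changing request (POST/PUT/DELETE) before doing any work.
def verify_csrf_token(session: dict, submitted: str) -> bool:
    expected = session.get("csrf_token", "")
    return bool(expected) and hmac.compare_digest(expected, submitted)

session = {}
form_token = issue_csrf_token(session)
print(verify_csrf_token(session, form_token))        # True: request came from our own form
print(verify_csrf_token(session, "attacker-guess"))  # False: cross-site request is rejected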

Official announcement: Please refer to the link for details – https://nvd.nist.gov/vuln/detail/CVE-2024-3135

CVE‑2024‑0082 – Design weakness of NVIDIA ChatRTX for Windows (26-03-2024)

Preface: Unlike OpenAI’s ChatGPT, Chat with RTX doesn’t remember the context of prompts. Asking Chat with RTX to give examples of fishes in one prompt and then asking for a description of “the fishes” in the next prompt will result in a blank – users will need to spell out everything explicitly.

Background: Chat with RTX defaults to AI startup Mistral’s open-source model but supports other text-based models, including Meta’s Llama 2, which is also open-source.

Chat with RTX is a demo app that lets you personalize a GPT large language model (LLM) connected to your own content—docs, notes, videos, or other data. Leveraging retrieval-augmented generation (RAG), TensorRT-LLM, and RTX acceleration, you can query a custom chatbot to quickly get contextually relevant answers. And because it all runs locally on your Windows RTX PC or workstation, you get fast and secure results.
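Retrieval-augmented generation itself is simple to sketch: index your own documents, retrieve the most relevant ones for a question, and prepend them to the prompt sent to the LLM. The toy Python example below uses bag-of-words cosine similarity purely for illustration; real systems such as Chat with RTX use learned embeddings and a vector index:

import math
from collections import Counter

docs = [
    "The quarterly report shows revenue grew last year.",
    "Update the GPU driver before installing the local chat demo.",
    "Team offsite is scheduled for the first week of June.",
]

def vectorize(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

question = "What should I do before installing the chat demo?"
q_vec = vectorize(question)

# Retrieve the most relevant document and build an augmented prompt for the LLM.
best_doc = max(docs, key=lambda d: cosine(q_vec, vectorize(d)))
prompt = f"Context:\n{best_doc}\n\nQuestion: {question}\nAnswer using only the context."
print(prompt)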

Vulnerability details: NVIDIA ChatRTX for Windows contains a vulnerability in the UI, where an attacker can cause improper privilege management by sending open file requests to the application. A successful exploit of this vulnerability might lead to local escalation of privileges, information disclosure, and data tampering.

Official announcement: Please refer to the link for details – https://nvidia.custhelp.com/app/answers/detail/a_id/5532

CVE-2024-21661: Argo CD suffers denial of service (DoS) vulnerability (18-03-2024)

Preface: What does a multi-threaded environment mean? Multithreading is the ability of a program or an operating system to run multiple threads of execution concurrently within a single process, so it can serve more than one user or task at a time without requiring multiple copies of the program running on the computer.

Background: Argo CD is implemented as a Kubernetes controller which continuously monitors running applications and compares the current, live state against the desired target state (as specified in the Git repo). Synchronization can be configured using resource hooks. Hooks are simply Kubernetes manifests tracked in the source repository of your Argo CD Application; they are ways to run scripts before, during, and after a Sync operation, and they can also be run if a Sync operation fails at any point. For example:

Using a Sync hook to orchestrate a complex deployment requiring more sophistication than the Kubernetes rolling update strategy.

Vulnerability details: An attacker can exploit a critical flaw in the application to initiate a Denial of Service (DoS) attack, rendering the application inoperable and affecting all users. The issue arises from unsafe manipulation of an array in a multi-threaded environment.
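The Argo CD bug itself is in Go, but the failure mode, unsynchronized access to a shared array from several threads, is easy to reproduce in any language. A hedged Python sketch of the pattern and the usual fix (serialize access with a lock):

import threading

shared = []                 # shared "array" touched by several worker threads
lock = threading.Lock()

def record(value: int) -> None:
    # Check-then-act on shared state: without the lock, two threads can interleave
    # here and break the "sorted, no duplicates" invariant (or, in languages without
    # a GIL, corrupt the structure itself).
    with lock:
        if value not in shared:
            shared.append(value)
            shared.sort()

threads = [threading.Thread(target=record, args=(n % 10,)) for n in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(set(shared)) == shared)   # True: the invariant holds because access is serialized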

Official announcement: Please see the link below for details – https://nvd.nist.gov/vuln/detail/CVE-2024-21661

CVE-2024-2193: A Spectre v1 variant inheriting the Spectre v1 vulnerability, so-called GhostRace. AMD believes the previous guidance remains applicable to mitigate this vulnerability (15-03-2024)

AMD made this announcement on March 12, 2024.

Preface: Spectre variant 1 attacks take advantage of speculative execution of conditional branches, while Spectre variant 2 attacks use speculative execution of indirect branches to leak privileged memory.

Background: Speculative execution improves speed by operating on multiple instructions at once—possibly in a different order than when they entered the CPU. Speculative execution includes instruction or data pre-fetch, branch prediction, or any operation performed speculatively based on the prediction of program/system behavior.

Vulnerability details: A Speculative Race Condition (SRC) vulnerability that impacts modern CPU architectures supporting speculative execution has been discovered. CPU hardware utilizing speculative execution that are vulnerable to Spectre v1 are likely affected. An unauthenticated attacker can exploit this vulnerability to disclose arbitrary data from the CPU using race conditions to access the speculative executable code paths. Security researchers have labeled this variant of the Spectre v1 vulnerability “GhostRace”, for ease of communication.

Official announcement: Please refer to the following link for details –

CPU hardware utilizing speculative execution may be vulnerable to speculative race conditions – https://www.kb.cert.org/vuls/id/488902

AMD official article – https://www.amd.com/en/resources/product-security/bulletin/amd-sb-7016.html

About CVE-2023-39368: The machine learning process requires CPUs and GPUs. Does the bus lock regulator mechanism impact this area? Glad to say, the problem has been fixed. (14-03-2024)

CVE-2023-39368 was published on 13th March 2024. In fact, Intel had solved this problem by the end of 2020. Perhaps it was hesitant about this design weakness, so it was not announced until this month.

Preface: What is Intel E core? While P cores are focused on delivering peak performance for intensive workloads, E cores ensure that the system runs efficiently during regular use.

Background: What is the lock prefix in Intel? The LOCK prefix is typically used with the BTS instruction to perform a read-modify-write operation on a memory location in shared memory environment. The integrity of the LOCK prefix is not affected by the alignment of the memory field. Memory locking is observed for arbitrarily misaligned fields.

Vulnerability details: CVE-2023-39368 – A potential security vulnerability in the bus lock regulator mechanism for some Intel Processors may allow denial of service. Intel is releasing firmware updates to mitigate this potential vulnerability.

Official announcement: Please refer to the link for details – https://www.intel.com/content/www/us/en/security-center/advisory/intel-sa-00972.html

CVE-2024-27307: Not only machine learning: other systems should stay alert, because z/OS Connect Designer uses JSONata, an open-source expression language used for querying and transforming JSON data. (7th Mar 2024)

Preface: What is declarative machine learning? Declarative machine learning enables users to specify what they want, and let the software figure out how to do it. Declarative ML is similar to AutoML tools that also make default selections and automate part or all of the ML lifecycle.

Background: JSONata is a JSON query and transformation language that is inspired by the location path semantics of XPath 3.1. XPath 3.1 is an expression language that allows the processing of values conforming to the data model defined in [XQuery and XPath Data Model (XDM) 3.1].

The JSONata reference is implemented in JavaScript and ships via NPM. There are also implementations available in Rust, Go, Java, Python, and .NET, some of which use JavaScript interpreters to ensure compatibility.
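For readers who have not seen JSONata before, the snippet below simply holds an example document and expression as Python data (no particular JSONata binding is assumed); evaluating the expression against the document yields 40:

# Example JSON document, written as a Python dict.
data = {"Order": [{"Price": 10, "Quantity": 2},
                  {"Price": 5,  "Quantity": 4}]}

# A JSONata expression: map over Order, multiply Price by Quantity, sum the results.
expression = "$sum(Order.(Price * Quantity))"

# Evaluating `expression` against `data` with any JSONata implementation returns 40,
# which is why letting untrusted users supply such expressions needs care.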

Vulnerability details: JSONata is a JSON query and transformation language. Starting in version 1.4.0 and prior to version 1.8.7 and 2.0.4, a malicious expression can use the transform operator to override properties on the `Object` constructor and prototype. This may lead to denial of service, remote code execution or other unexpected behavior in applications that evaluate user-provided JSONata expressions.

Remedy: This issue has been fixed in JSONata versions 1.8.7 and 2.0.4. Applications that evaluate user-provided expressions should update ASAP to prevent exploitation. As a workaround, one may apply the patch manually.

Official announcement: Please refer to the link for details – https://nvd.nist.gov/vuln/detail/CVE-2024-27307