Category Archives: Potential Risk of CVE

Regarding CVE-2024-0108: The manufacturer did not describe much. Is the situation below exactly what the CVE mentioned? (25/07/2024)

Preface: What is an example of autonomous AI?

Autonomous intelligence is artificial intelligence (AI) that can act without human intervention, input, or direct supervision. It’s considered the most advanced type of artificial intelligence. Examples may include smart manufacturing robots, self-driving cars, or care robots for the elderly.

Background: What is Jetson AGX Xavier used for?

As the world’s first computer designed specifically for autonomous machines, Jetson AGX Xavier has the performance to handle the visual odometry, sensor fusion, localization and mapping, obstacle detection, and path-planning algorithms that are critical to next-generation robots.

Vulnerability details: NVIDIA Jetson Linux contains a vulnerability in NvGPU where error handling paths in GPU MMU mapping code fail to clean up a failed mapping attempt. A successful exploit of this vulnerability may lead to denial of service, code execution, and escalation of privileges.
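
The advisory gives no code-level detail, but the description matches a well-known C error-handling pattern: a mapping routine that bails out on a failed step without undoing the steps that already succeeded. A minimal hypothetical sketch of the bug class and its usual fix (the struct and function names below are invented for illustration, not taken from NvGPU):

    /* Hypothetical sketch of the bug class only -- not NVIDIA code. */
    struct mmu_ctx;
    struct buffer;

    int alloc_page_tables(struct mmu_ctx *ctx, struct buffer *buf);
    int insert_mappings(struct mmu_ctx *ctx, struct buffer *buf);
    void free_page_tables(struct mmu_ctx *ctx, struct buffer *buf);

    int map_gpu_buffer(struct mmu_ctx *ctx, struct buffer *buf)
    {
        int err = alloc_page_tables(ctx, buf);   /* step 1 */
        if (err)
            return err;                          /* nothing to undo yet */

        err = insert_mappings(ctx, buf);         /* step 2 */
        if (err)
            goto free_tables;  /* the bug class: returning here without
                                * unwinding step 1 leaves stale, possibly
                                * attacker-reachable mapping state */
        return 0;

    free_tables:
        free_page_tables(ctx, buf);              /* undo step 1 on failure */
        return err;
    }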

Official announcement: Please refer to the official announcement for details – https://nvidia.custhelp.com/app/answers/detail/a_id/5555

CVE-2024-41012: filelock: Remove locks reliably when fcntl/close race is detected (24/07/2024)

In the Linux kernel, a design weakness (CVE-2024-41012) has been resolved.

Preface: The GFP acronym stands for “get free pages”, the underlying memory allocation function. The diversity of the allocation APIs combined with the numerous GFP flags makes the question “How should I allocate memory?” not that easy to answer, although very likely you should use kzalloc(<size>, GFP_KERNEL);
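
For readers unfamiliar with the kernel allocation API, a minimal example of the allocate-check-free pattern the preface recommends:

    #include <linux/slab.h>

    struct foo {
        int a;
        char name[16];
    };

    static struct foo *make_foo(void)
    {
        /* kzalloc() returns zeroed memory, or NULL on failure;
         * GFP_KERNEL may sleep, so use it only in process context. */
        struct foo *f = kzalloc(sizeof(*f), GFP_KERNEL);

        if (!f)
            return NULL;
        f->a = 1;
        return f;
    }

    static void destroy_foo(struct foo *f)
    {
        kfree(f); /* kfree(NULL) is a safe no-op */
    }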

Background: You can build a 64-bit POSIX-compliant tick-less kernel with a Linux-compatible syscall implementation using Go.

Vulnerability details: When fcntl_setlk() races with close(), it removes the created lock with do_lock_file_wait(). However, LSMs can allow the first do_lock_file_wait() that created the lock while denying the second do_lock_file_wait() that tries to remove the lock. In theory (but AFAIK not in practice), posix_lock_file() could also fail to remove a lock due to GFP_KERNEL allocation failure (when splitting a range in the middle).

After the bug has been triggered, use-after-free reads will occur in lock_get_status() when userspace reads /proc/locks. This can likely be used to read arbitrary kernel memory, but can’t corrupt kernel memory. This only affects systems with SELinux / Smack / AppArmor / BPF-LSM in enforcing mode and only works from some security contexts.
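
To make the race concrete, here is a schematic user-space C program showing the two colliding operations. Note this is only a sketch of the timing window: actually triggering the bug additionally requires an LSM (SELinux/Smack/AppArmor/BPF-LSM) policy that permits the first do_lock_file_wait() and denies the second, which this program does not set up.

    #include <fcntl.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static int fd;

    static void *locker(void *arg)
    {
        struct flock fl = {
            .l_type = F_WRLCK,     /* exclusive POSIX (fcntl) lock */
            .l_whence = SEEK_SET,
            .l_start = 0,
            .l_len = 0,            /* 0 = lock the whole file */
        };

        /* Kernel-side, fcntl_setlk() creates the lock; if the fd is
         * closed concurrently, the kernel must reliably remove the
         * lock again -- that removal is what CVE-2024-41012 is about. */
        if (fcntl(fd, F_SETLK, &fl) == -1)
            perror("fcntl");
        return NULL;
    }

    int main(void)
    {
        pthread_t t;

        fd = open("/tmp/lockfile", O_CREAT | O_RDWR, 0600);
        if (fd == -1) {
            perror("open");
            return 1;
        }
        pthread_create(&t, NULL, locker, NULL);
        close(fd);                 /* races with fcntl(F_SETLK) above */
        pthread_join(t, NULL);
        return 0;
    }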

Official announcement: Please refer to the official announcement for details – https://nvd.nist.gov/vuln/detail/CVE-2024-41012

CVE-2024-6960: H2O Model Deserialization RCE (21st July 2024)

Preface: TensorFlow provides a flexible framework for deep learning tasks, but may not be as optimized as H2O for handling large datasets.

Background: H2O uses Iced classes as the primary means of moving Java Objects around the cluster.

Iced is an auto-serializer base class that uses a delegator pattern (the faster option would be to generate byte code directly in all Iced classes, but that requires every Iced class to go through a ClassLoader).

Iced is a marker class, and Freezable is the companion marker interface. Marked classes have a 2-byte integer type associated with them, and an auto-generated delegate class is created to actually do byte-stream and JSON serialization and deserialization. Byte-stream serialization is extremely dense (it includes various compressions) and is typically memory-bandwidth bound to generate.

Vulnerability details: The H2O machine learning platform uses “Iced” classes as the primary means of moving Java Objects around the cluster. The Iced format supports inclusion of serialized Java objects. When a model is deserialized, any class is allowed to be deserialized (no class whitelist). An attacker can construct a crafted Iced model that uses Java gadgets and leads to arbitrary code execution when imported to the H2O platform.

Official announcement: Please refer to the official announcement for details – https://nvd.nist.gov/vuln/detail/CVE-2024-6960

A critical step in exploiting a buffer overflow is determining the offset where important program control information is overwritten. In the Linux kernel, the CVE-2024-41011 vulnerability has been resolved. (18-07-2024)

Preface: The PAGE_SIZE macro defined in the Linux kernel source determines the page size. Its definition is in the kernel header file /usr/src/kernels/5.14.0-22.el9.x86_64/include/asm-generic/page.h.
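
User space does not need to parse kernel headers to learn the page size; it can query it at run time:

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* sysconf() reports the kernel's page size: typically 4096
         * bytes on x86_64, while 16K/64K configurations exist on
         * other architectures such as arm64. */
        printf("page size: %ld bytes\n", sysconf(_SC_PAGESIZE));
        return 0;
    }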

Background: MMIO stands for Memory-Mapped Input/Output. In Linux, MMIO is a mechanism used by devices to interface with the CPU that involves mapping their control registers and buffers directly into the processor’s memory address space.

This enables the CPU to access device registers and exchange data with devices using load and store instructions, just as if they were conventional memory locations. Graphics cards, network interfaces, and storage controllers all employ MMIO to effectively conduct input and output tasks.
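
In kernel code the pattern looks roughly like the sketch below; the register offsets and function names are made up for illustration:

    #include <linux/errno.h>
    #include <linux/io.h>

    #define REG_STATUS   0x00   /* hypothetical device register offsets */
    #define REG_DOORBELL 0x04

    static void __iomem *regs;

    static int demo_map_mmio(unsigned long phys_base, unsigned long len)
    {
        /* Map the device's physical register window into the kernel's
         * virtual address space. */
        regs = ioremap(phys_base, len);
        if (!regs)
            return -ENOMEM;

        /* Use the MMIO accessors instead of plain loads/stores: they
         * apply the ordering and endianness rules device memory needs. */
        if (readl(regs + REG_STATUS) & 0x1)
            writel(1, regs + REG_DOORBELL);
        return 0;
    }

    static void demo_unmap_mmio(void)
    {
        iounmap(regs);
    }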

Vulnerability details: drm/amdkfd: don’t allow mapping the MMIO HDP page with large pages. We don’t get the right offset in that case. The GPU has an unused 4K area of the register BAR space into which you can remap registers. We remap the HDP flush registers into this space to allow userspace (CPU or GPU) to flush the HDP when it updates VRAM. However, on systems with >4K pages, we end up exposing PAGE_SIZE of MMIO space.

Official announcement: Please refer to the official announcement for details – https://nvd.nist.gov/vuln/detail/CVE-2024-41011

CVE-2024-41009: bpf – Fix overrunning reservations in ringbuf (17th July 2024)

Preface: Starting from Linux 5.8, BPF provides a new BPF data structure (BPF map): the BPF ring buffer (ringbuf). It is a multi-producer, single-consumer (MPSC) queue and can be safely shared across multiple CPUs simultaneously. The consumer and producer counters are put into separate pages to allow each position to be mapped with different permissions. This prevents a user-space application from modifying the position and ruining in-kernel tracking. The permissions of the pages depend on who is producing samples: user space or the kernel.

Background: The first core skill point is “BPF Hooks”, that is, where in the kernel BPF programs can be attached. There are nearly 10 types of hooks in the current Linux kernel, as shown below:

kernel functions (kprobes)

userspace functions (uprobes)

system calls

fentry/fexit

Tracepoints

network devices (tc/xdp)

network routes

TCP congestion algorithms

sockets (data level)

Vulnerability details: For example, consider the creation of a BPF_MAP_TYPE_RINGBUF map with a size of 0x4000. Next, the consumer_pos is modified to 0x3000 /before/ a call to bpf_ringbuf_reserve() is made. This will allocate a chunk A, which is in [0x0,0x3008], and the BPF program is able to edit [0x8,0x3008]. Now, let’s allocate a chunk B with size 0x3000. This will succeed because consumer_pos was edited ahead of time to pass the `new_prod_pos - cons_pos > rb->mask` check. Chunk B will be in range [0x3008,0x6010], and the BPF program is able to edit [0x3010,0x6010].

Due to the ring buffer memory layout mentioned earlier, the ranges [0x0,0x4000] and [0x4000,0x8000] point to the same data pages. This means that chunk B at [0x4000,0x4008] is chunk A’s header. bpf_ringbuf_submit() / bpf_ringbuf_discard() use the header’s pg_off to then locate the bpf_ringbuf itself via bpf_ringbuf_restore_from_rec(). Once chunk B has modified chunk A’s header, bpf_ringbuf_commit() refers to the wrong page and could cause a crash.
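
For context, this is the normal reserve/submit flow in a BPF program (libbpf-style C). The overrun above lives in the kernel bookkeeping behind these helpers; this ordinary usage is shown only to make the reserve/header/commit sequence concrete:

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    struct {
        __uint(type, BPF_MAP_TYPE_RINGBUF);
        __uint(max_entries, 0x4000);   /* the size used in the example above */
    } rb SEC(".maps");

    struct event {
        __u32 pid;
    };

    SEC("tracepoint/syscalls/sys_enter_execve")
    int handle_execve(void *ctx)
    {
        /* Reserve a chunk; the kernel places an 8-byte header just in
         * front of the returned pointer -- the header that chunk B
         * overwrote in the scenario above. */
        struct event *e = bpf_ringbuf_reserve(&rb, sizeof(*e), 0);

        if (!e)
            return 0;
        e->pid = bpf_get_current_pid_tgid() >> 32;
        bpf_ringbuf_submit(e, 0);      /* commit the chunk to the consumer */
        return 0;
    }

    char LICENSE[] SEC("license") = "GPL";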

Official announcement: Please refer to the official announcement for details – https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=47416c852f2a04d348ea66ee451cbdcf8119f225

About CVE-2024-41008: When a design weakness is discovered in a GPU, it is no longer limited to affecting graphics cards! Machine learning should be on alert! (16th July 2024)

Preface: In computer science, reference counting is a programming technique of storing the number of references, pointers, or handles to a resource, such as an object, a block of memory, disk space, and others. In garbage collection algorithms, reference counts may be used to deallocate objects that are no longer needed.

Background: The drm/amdgpu driver supports all AMD Radeon GPUs based on the Graphics Core Next (GCN) architecture.

AI and Machine Learning Development on a Local Desktop with AMD Radeon™ Graphics Cards

AMD now supports RDNA™ 3 architecture-based GPUs for desktop-based AI and ML workflows using AMD ROCm™ software. Developers can work with ROCm 6.1 software for Radeon on Linux® systems using PyTorch®, TensorFlow and ONNX Runtime. Added support for WSL 2 (Windows® Subsystem for Linux) now also enables users to develop with AMD ROCm™ software on a Windows® system, eliminating the need for dual-boot setups.

Vulnerability details: The CVE entry does not include a weakness enumeration. Additionally, AMD only provides patch change details. Perhaps the design weakness in CVE-2024-41008 is related to garbage collection.

This patch changes the handling and lifecycle of the vm->task_info object.

The major changes are:

  • vm->task_info is now a dynamically allocated pointer, and its usage is reference counted.
  • The patch introduces two new helper functions for task_info lifecycle management (a generic reference-counting sketch follows this list):
    • amdgpu_vm_get_task_info: counts the task_info reference up before returning this info
    • amdgpu_vm_put_task_info: counts the task_info reference down
  • The last put frees task_info from the vm.
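
The patch text above is AMD’s own summary. As a rough illustration of the get/put pattern it describes, here is a generic kref-based sketch; the type and function names are ours, not the driver’s:

    #include <linux/kernel.h>
    #include <linux/kref.h>
    #include <linux/slab.h>

    struct task_info_demo {
        struct kref ref;
        char comm[16];
    };

    static struct task_info_demo *task_info_create(void)
    {
        struct task_info_demo *ti = kzalloc(sizeof(*ti), GFP_KERNEL);

        if (ti)
            kref_init(&ti->ref);   /* reference count starts at 1 */
        return ti;
    }

    static void task_info_release(struct kref *ref)
    {
        /* Runs automatically on the last put. */
        kfree(container_of(ref, struct task_info_demo, ref));
    }

    /* "get": take a reference before handing the object out. */
    static struct task_info_demo *task_info_get(struct task_info_demo *ti)
    {
        if (ti)
            kref_get(&ti->ref);
        return ti;
    }

    /* "put": drop a reference; the last put frees the object. */
    static void task_info_put(struct task_info_demo *ti)
    {
        if (ti)
            kref_put(&ti->ref, task_info_release);
    }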

Official announcement: Please refer to the vendor announcement for details – https://nvd.nist.gov/vuln/detail/CVE-2024-41008

CVE-2024-41007: When using TCP_USER_TIMEOUT in Linux, you may hit a kernel design weakness! (16th July 2024)

Preface: What is jiffies in the Linux kernel? A jiffy is a kernel unit of time declared in <linux/jiffies.h>. To understand jiffies, we need to introduce a new constant, HZ, which is the number of times jiffies is incremented in one second. Each increment is called a tick.
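
A small in-kernel example of the jiffies/HZ relationship described above:

    #include <linux/jiffies.h>
    #include <linux/printk.h>

    static void jiffies_demo(void)
    {
        unsigned long start = jiffies;
        unsigned long timeout = start + 2 * HZ;   /* 2 seconds from now */

        /* jiffies_to_msecs()/msecs_to_jiffies() convert between ticks
         * and milliseconds regardless of the configured HZ. */
        pr_info("HZ=%d, 2s = %lu jiffies, elapsed so far = %u ms\n",
                HZ, timeout - start, jiffies_to_msecs(jiffies - start));
    }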

Background: tcp_user_timeout – Controls the number of milliseconds that transmitted data may remain unacknowledged before a connection is forcibly closed. Default is 0 which means it is disabled.
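
Applications opt in per socket via setsockopt(); a minimal user-space example:

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <stdio.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        /* Abort the connection if transmitted data stays
         * unacknowledged for more than 30000 ms. */
        unsigned int timeout_ms = 30000;

        if (fd == -1) {
            perror("socket");
            return 1;
        }
        if (setsockopt(fd, IPPROTO_TCP, TCP_USER_TIMEOUT,
                       &timeout_ms, sizeof(timeout_ms)) == -1)
            perror("setsockopt(TCP_USER_TIMEOUT)");
        return 0;
    }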

Vulnerability details: Avoid too many retransmit packets. If a TCP socket is using TCP_USER_TIMEOUT, and the other peer retracted its window to zero, tcp_retransmit_timer() can retransmit a packet every two jiffies (2 ms for HZ=1000), for about 4 minutes after TCP_USER_TIMEOUT has ‘expired’.

Solution: The fix is to make sure tcp_rtx_probe0_timed_out() takes icsk->icsk_user_timeout into account. Before blamed commit, the socket would not timeout after icsk->icsk_user_timeout, but would use standard exponential backoff for the retransmits. Also worth noting that before commit e89688e3e978 (“net: tcp: fix unexcepted socket die when snd_wnd is 0”), the issue would last 2 minutes instead of 4.

Speculation: The CVE entry does not list a Common Weakness Enumeration (CWE), but we believe the minimal impact would be denial of service. It may be more serious!

Official announcement: Please refer to the vendor announcement for details – https://nvd.nist.gov/vuln/detail/CVE-2024-41007

AMD released the CVE-2023-20587 security update on July 13, 2024. Don’t underestimate this related SPI flash design weakness! (15th Jul 2024)

Preface: SMM is the privileged mode of the processor. Like BIOS and UEFI, SMM code operates underneath the operating system. SMM has full access to physical memory, SMM-specific memory called SMRAM, MSR-specific scratchpad, the SPI flash region to read and write BIOS variables, and I/O operations. Additionally, SMM is designed to be invisible to lower privileged layers such as the operating system kernel or hypervisor.

Background: Attackers typically escalate privileges to the SMM by exploiting vulnerabilities in the SMM code. The OS calls SMM code through system management interrupts, or SMI, and passes parameters to SMI handlers using a shared memory area called the SMM Communication Buffer.

Vulnerability details: CVE-2023-20587: Improper Access Control in System Management Mode (SMM) may allow an attacker access to the SPI flash potentially leading to arbitrary code execution.

The relevant vulnerabilities are as follows:

CVE-2023-20579: Improper Access Control in the AMD SPI protection feature may allow a user with Ring0 (kernel mode) privileged access to bypass protections potentially resulting in loss of integrity and availability.

CVE-2023-20576: Insufficient Verification of Data Authenticity in AGESA™ may allow an attacker to update SPI ROM data potentially resulting in denial of service or privilege escalation.

CVE-2023-20577: A heap overflow in SMM module may allow an attacker with access to a second vulnerability that enables writing to SPI flash, potentially resulting in arbitrary code execution.

Official announcement: Please refer to the vendor announcement for details – https://www.amd.com/en/resources/product-security/bulletin/amd-sb-7009.html

CVE-2024-0102: About the NVIDIA® CUDA® Toolkit. If you remember, a similar incident happened in April of this year. We believe this is a similar design weakness. (11 July 2024)

Preface: OpenAI revealed that the project cost $100 million, took 100 days, and used 25,000 NVIDIA A100 GPUs. Each server equipped with these GPUs uses approximately 6.5 kW, so an estimated 50 GWh of energy is consumed during training.

Background: Parallel processing is a method in computing of running two or more processors (CPUs) to handle separate parts of an overall task. Breaking up different parts of a task among multiple processors helps reduce the amount of time needed to run a program. GPUs render images more quickly than a CPU because of their parallel processing architecture, which allows them to perform multiple calculations across streams of data simultaneously. The CPU is the brain of the operation, responsible for giving instructions to the rest of the system, including the GPU(s).

CUDA is a parallel computing platform and programming model created by Nvidia for the development of software that runs on parallel processors. It serves as an alternative to running simulations on traditional CPUs.

NVIDIA CUDA provides a simple C/C++-based interface, and the CUDA compiler leverages the parallelism built into the CUDA programming model as it compiles your program into code.

Vulnerability details: NVIDIA CUDA Toolkit for all platforms contains a vulnerability in nvdisasm, where an attacker can cause an out-of-bounds read issue by deceiving a user into reading a malformed ELF file. A successful exploit of this vulnerability might lead to denial of service.

Official announcement: Please refer to the vendor announcement for details – https://nvidia.custhelp.com/app/answers/detail/a_id/5548

CVE-2024-39489: Linux kernel enhances memory management in the IPv6 feature (11 July 2024)

Preface: The Linux kernel implements most of its IPv6 parts from USAGI. USAGI project was founded to improve and develop Linux IPv6 stack. The integrated USAGI version/release is unknown. Implemented into the kernel are the core functions of USAGI; the “standard” user-level programs provide basic IPv6 functionality.

Background: Converting IPv6 to use crypto_pool has the following advantages.

– now SR uses an asynchronous API which may potentially free CPU cycles and improve performance of CPU crypto algorithm providers;

– hash descriptors now don’t have to be allocated on boot, but only at the moment SR starts using HMAC and until the last HMAC secret is deleted;

– potentially reuse ahash_request(s) for different users;

– allocate only one per-CPU scratch buffer rather than a new one for each user;

– have a common API for net/ users that need ahash on RX/TX fast path.

Vulnerability details: In the Linux kernel, the following vulnerability has been resolved: ipv6: sr: fix memleak in seg6_hmac_init_algo. seg6_hmac_init_algo returns without cleaning up the previous allocations if one fails, so it’s going to leak all that memory and the crypto tfms. Update seg6_hmac_exit to only free the memory when allocated, so we can reuse the code directly.
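
The commit text is quoted verbatim above. The underlying bug class is a multi-step initialisation that bails out early without freeing what earlier iterations allocated; a generic user-space sketch (hypothetical names, not the actual seg6 code) of the leak and the unwind fix:

    #include <stdlib.h>

    /* Initialise n slots; on failure, free only what was actually
     * allocated -- mirroring how the fixed seg6_hmac_exit() checks
     * before freeing. */
    int init_algos(void **slots, int n)
    {
        int i;

        for (i = 0; i < n; i++) {
            slots[i] = malloc(256);   /* stands in for the tfm allocations */
            if (!slots[i])
                goto unwind;          /* the fix: clean up 0..i-1 instead of
                                       * returning and leaking them */
        }
        return 0;

    unwind:
        while (--i >= 0) {
            free(slots[i]);
            slots[i] = NULL;
        }
        return -1;
    }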

Official announcement: For details, please refer to the link – https://nvd.nist.gov/vuln/detail/CVE-2024-39489