Category Archives: System

About CVE-2025-37889 – ASoC framework Consistently treat platform_max as control value (12th May 2025)

Preface: The overall project goal of the ALSA System on Chip (ASoC) layer is to provide better ALSA support for embedded system-on-chip processors.

Advanced Linux Sound Architecture (ALSA) is a software framework and part of the Linux kernel that provides an application programming interface (API) for sound card device drivers.

Background: The snd_soc_put_volsw() function is part of the ALSA System on Chip (ASoC) layer in the Linux kernel. It is included in the kernel source, but whether it is available by default depends on the specific kernel configuration and the presence of ASoC support. Here’s a brief overview of its features:

Purpose: It sets the volume control values for the sound subsystem.

Arguments: It takes two arguments: kcontrol, which represents the mixer control, and ucontrol, which contains the control element information.

Return Value: It returns a non-negative value on success (1 if the control value was changed, 0 if it was unchanged) and a negative error code on failure.
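
As a rough mental model, the put callback behaves like the self-contained user-space sketch below. The struct, its fields and the -22 error code are simplified stand-ins for the kernel's soc_mixer_control and -EINVAL, not the actual implementation in sound/soc/soc-ops.c:

/* Self-contained, user-space model of an ALSA "put" volume callback.
 * The real snd_soc_put_volsw() operates on struct snd_kcontrol and
 * struct snd_ctl_elem_value in the kernel; the types below are
 * illustrative stand-ins only. */
#include <stdio.h>

struct mixer_control {          /* simplified stand-in for soc_mixer_control */
    int min;                    /* minimum register value (may be negative)  */
    int max;                    /* maximum register value                    */
    int platform_max;           /* platform-imposed ceiling on the control   */
    int current_val;            /* pretend hardware register                 */
};

/* Returns 1 if the value changed, 0 if unchanged, negative on error. */
static int put_volsw(struct mixer_control *mc, int val)
{
    if (val < 0 || val > mc->max - mc->min)
        return -22;                         /* outside the control range     */
    if (val > mc->platform_max)
        return -22;                         /* above the platform ceiling    */
    if (val == mc->current_val)
        return 0;                           /* nothing to do                 */
    mc->current_val = val;                  /* "write" the register          */
    return 1;                               /* value changed                 */
}

int main(void)
{
    struct mixer_control mc = { .min = 0, .max = 100,
                                .platform_max = 80, .current_val = 0 };

    printf("%d\n", put_volsw(&mc, 50));     /* 1: accepted and changed  */
    printf("%d\n", put_volsw(&mc, 50));     /* 0: unchanged             */
    printf("%d\n", put_volsw(&mc, 90));     /* negative: above ceiling  */
    return 0;
}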

Vulnerability details: ASoC: ops: Consistently treat platform_max as control value. This reverts commit 9bdd10d57a88 (“ASoC: ops: Shift tested values in snd_soc_put_volsw() by +min”) and makes some additional related updates.

It is speculated that this is an enhancement to, and further remediation of, the fix for CVE-2022-48917.

In the Linux kernel, the following vulnerability has been resolved: ASoC: ops: Shift tested values in snd_soc_put_volsw() by +min While the $val/$val2 values passed in from userspace are always >= 0 integers, the limits of the control can be signed integers and the $min can be non-zero and less than zero. To correctly validate $val/$val2 against platform_max, add the $min offset to val first. (CVE-2022-48917)
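
The practical difference is easiest to see with a small numeric example. The snippet below is purely illustrative (the min and platform_max values are assumed for the example, and this is not kernel code); it contrasts validating the raw userspace value against platform_max with validating the min-shifted value, for a control whose minimum is negative:

/* Illustrative only: a control with min = -50 and platform_max = 50 (values
 * assumed for the example). Userspace always submits val >= 0, i.e. 0..100
 * here. The two checks below are the two validation strategies described by
 * the commits above. */
#include <stdio.h>

int main(void)
{
    const int min = -50, platform_max = 50;

    for (int val = 0; val <= 100; val += 25) {
        int raw_ok     = (val <= platform_max);        /* platform_max treated as a control value */
        int shifted_ok = (val + min <= platform_max);  /* platform_max compared against val + min */
        printf("val=%3d  raw-check=%s  min-shifted-check=%s\n",
               val,
               raw_ok ? "pass" : "fail",
               shifted_ok ? "pass" : "fail");
    }
    return 0;
}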

Official announcement: Please see the link for details –

https://nvd.nist.gov/vuln/detail/CVE-2025-37889

CVE-2025-21983: While kvfree_rcu() itself is not fundamentally flawed, the issue has now been resolved in Linux by using a more appropriate workqueue. (2nd Apr 2025)

Preface: The buddy allocator is a well-known memory management algorithm used in the Linux kernel. It is designed to efficiently allocate and deallocate memory in contiguous blocks.

Background: What is RCU usage in the Linux kernel?

Read-copy update (RCU) is a scalable high-performance synchronization mechanism implemented in the Linux kernel. RCU’s novel properties include support for concurrent reading and writing, and highly optimized inter-CPU synchronization.

Currently the kvfree_rcu() APIs use a system workqueue, namely “system_unbound_wq”, to drive the RCU machinery that reclaims memory.

In the Linux kernel, the kvfree_rcu() API uses a system workqueue, specifically the system_unbound_wq, to drive RCU (Read-Copy-Update) machinery for memory reclamation. This setup is used to handle deferred memory freeing in a non-blocking manner. However, a recent change switched this work to a workqueue created with the WQ_MEM_RECLAIM flag, to ensure that memory-reclamation tasks are handled reliably and to avoid potential kernel warnings.

Not every Linux API uses the system_unbound_wq to request memory. The system_unbound_wq is a specific type of workqueue used for tasks that are not bound to any particular CPU, allowing them to run on any available CPU. This is useful for tasks that require high concurrency or have wide fluctuations in concurrency levels.

Vulnerability details: The issue with kvfree_rcu() is primarily related to how it uses the system workqueue (system_unbound_wq) for memory reclamation. This can lead to kernel warnings and potential system instability. The warnings indicate that the workqueue framework rules are being violated, which can affect the reliability of the memory reclamation process.

Remedy: So, while kvfree_rcu() itself is not fundamentally flawed, the way it was implemented led to issues that have now been resolved by using a more appropriate workqueue.
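
As a rough sketch of the direction of the fix (this is a minimal kernel-module-style example, not the actual upstream kvfree_rcu() patch; the queue name demo_reclaim_wq is invented for illustration), a dedicated reclaim-safe workqueue can be created like this:

/* Minimal kernel-module-style sketch (not the actual kvfree_rcu() change).
 * A queue created with WQ_MEM_RECLAIM gets a rescuer thread, so its work
 * items can still make progress under memory pressure, unlike the shared
 * system_unbound_wq. */
#include <linux/module.h>
#include <linux/workqueue.h>

static struct workqueue_struct *reclaim_wq;
static struct work_struct demo_work;

static void demo_work_fn(struct work_struct *work)
{
    pr_info("deferred reclaim-style work executed\n");
}

static int __init demo_init(void)
{
    INIT_WORK(&demo_work, demo_work_fn);

    /* Unbound (any CPU) and allowed to run while the system is reclaiming memory. */
    reclaim_wq = alloc_workqueue("demo_reclaim_wq",
                                 WQ_UNBOUND | WQ_MEM_RECLAIM, 0);
    if (!reclaim_wq)
        return -ENOMEM;

    queue_work(reclaim_wq, &demo_work);
    return 0;
}

static void __exit demo_exit(void)
{
    if (reclaim_wq)
        destroy_workqueue(reclaim_wq);    /* flushes pending work first */
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");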

Official announcement: Please refer to the link for details – https://nvd.nist.gov/vuln/detail/CVE-2025-21983

Similar to previously disclosed side-channel attacks: manufacturer (AMD) response to researchers (30-03-2025)

Preface: On 24th Oct, 2024, Researchers from Azure® Research, Microsoft® have provided to AMD a paper titled “Principled Microarchitectural Isolation on Cloud CPUs.” In their paper, the researchers describe a potential side-channel vulnerability on AMD CPUs. AMD believes that existing mitigation recommendations for prime and probe side-channel attacks remain applicable to the presented vulnerability.

Background: A two-bit saturating up-down counter is a type of counter used in computer architecture, particularly in branch prediction mechanisms. Here’s a brief overview:

  • Two-bit: The counter uses two bits, allowing it to represent four states (00, 01, 10, 11).
  • Up-down: The counter can increment (count up) or decrement (count down) based on the input signal.
  • Saturating: The counter does not wrap around when it reaches its maximum (11) or minimum (00) value. Instead, it stays at these values if further increments or decrements are attempted.
How It Works:
  1. States: The counter has four states: 00, 01, 10, and 11.
  2. Saturation: If the counter is at 11 and receives an increment signal, it remains at 11. Similarly, if it is at 00 and receives a decrement signal, it stays at 00.
  3. Usage: These counters are often used in branch prediction to keep track of the history of branch outcomes and make predictions based on this history.

Ref: The pattern history table (PHT) branch architecture is an example of an architecture using two-bit saturating up-down counters. It contains a table of two-bit counters used to predict the direction for conditional branches.
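
To make the saturation behaviour concrete, here is a small self-contained C sketch of a two-bit saturating up-down counter as used conceptually in a PHT entry (purely illustrative, not any vendor's predictor logic):

/* Two-bit saturating up-down counter: states 00/01 predict "not taken",
 * 10/11 predict "taken". Purely illustrative. */
#include <stdio.h>

static unsigned int update(unsigned int state, int taken)
{
    if (taken)
        return state < 3 ? state + 1 : 3;   /* saturate at 11 */
    return state > 0 ? state - 1 : 0;       /* saturate at 00 */
}

int main(void)
{
    unsigned int state = 1;                     /* start at 01 (weakly not taken) */
    const int outcomes[] = { 1, 1, 1, 0, 1 };   /* observed branch outcomes       */

    for (size_t i = 0; i < sizeof(outcomes) / sizeof(outcomes[0]); i++) {
        printf("state=%u%u  predict %-9s  actual %s\n",
               (state >> 1) & 1, state & 1,
               state >= 2 ? "taken" : "not taken",
               outcomes[i] ? "taken" : "not taken");
        state = update(state, outcomes[i]);
    }
    return 0;
}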

About Branch History Leak:

Researchers from The Harbin Institute of Technology have shared with AMD a paper titled “Branch History LeakeR: Leveraging Branch History to Construct a New Side Channel-Theory and Practice” that demonstrates a side-channel attack using the Global History Register (GHR). The GHR is used to assist in conditional branch prediction. The researchers note that the GHR is shared between different security domains and may retain data after a security-domain switch. After returning to user space, the researchers were able to infer the direction of recently executed conditional branches.

Official announcement: Please refer to the link for details – https://www.amd.com/en/resources/product-security/bulletin/amd-sb-7026.html

A bug was found in containerd prior to versions 1.6.38, 1.7.27, and 2.0.4 (18-03-2025)

Preface: Stateless applications perform tasks based on the input provided in the current transaction. These applications make use of Content Delivery Networks (CDNs) and the web to process short-term requests. Unlike stateful applications, stateless applications do not save user data; there is no stored knowledge or information for reference to past records.

Containers are widely used for deploying microservices, running stateful applications, and achieving high-performance, scalable solutions.

Background: A 32-bit signed integer can represent values from -2,147,483,648 to 2,147,483,647. When applied to UID (User Identifier) and GID (Group Identifier), it means that the maximum value for these identifiers is 2,147,483,647.

Setting a user with a specific UID:GID serves several important purposes in Unix-like operating systems:

  1. Identification: The UID uniquely identifies a user, while the GID identifies the group to which the user belongs. This helps the system manage user permissions and access control.
  2. Permissions Management: UIDs and GIDs are used to determine the access rights of users and groups to files and directories. For example, a file might be readable and writable by its owner (identified by UID), but only readable by others in the same group (identified by GID).
  3. Security: By assigning different UIDs and GIDs, the system can enforce security policies, ensuring that users can only access the resources they are permitted to. This is crucial for maintaining the integrity and confidentiality of data.
  4. Resource Allocation: UIDs and GIDs help in allocating system resources, such as CPU time and memory, to users and groups. This ensures fair usage and prevents any single user or group from monopolizing system resources.

Vulnerability details: containerd is an open-source container runtime. A bug was found in containerd prior to versions 1.6.38, 1.7.27, and 2.0.4 where containers launched with a User set as a `UID:GID` larger than the maximum 32-bit signed integer can cause an overflow condition where the container ultimately runs as root (UID 0). This could cause unexpected behavior for environments that require containers to run as a non-root user.
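
The failure mode is easiest to see with plain integer arithmetic. The snippet below is illustrative C, not containerd's Go code, and shows what happens when an oversized identifier is truncated to a 32-bit value:

/* Illustrative only: containerd is written in Go, but the arithmetic is the
 * same in any language. Truncating an oversized identifier to 32 bits can
 * silently produce 0 (root) or a negative value. Converting an out-of-range
 * value to a signed type is implementation-defined in C; on common
 * two's-complement platforms it wraps as shown. */
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    const uint64_t requested[] = {
        1000ULL,          /* ordinary non-root UID                    */
        2147483648ULL,    /* INT32_MAX + 1: wraps to a negative int32 */
        4294967296ULL     /* 2^32: truncates to 0, i.e. root          */
    };

    for (size_t i = 0; i < sizeof(requested) / sizeof(requested[0]); i++) {
        int32_t truncated = (int32_t)requested[i];   /* lossy 64 -> 32 bit */
        printf("requested UID %" PRIu64 " -> effective UID %d\n",
               requested[i], truncated);
    }
    return 0;
}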

Official announcement: Please refer to the link for details – https://nvd.nist.gov/vuln/detail/CVE-2024-40635

CVE-2024-0114: NVIDIA Hopper HGX for 8-GPU contains a vulnerability in the HGX Management Controller (HMC) (7th March 2025)

Preface: NVIDIA collaborates with Supermicro for their server solutions, including the use of Supermicro’s BMC (Baseboard Management Controller) in certain systems. Supermicro provides a range of server solutions optimized for NVIDIA’s platforms.

Background: The NVIDIA Hopper HGX for 8 GPUs has several standout features:

High Performance: It hosts eight H100 Tensor Core GPUs, which are designed for AI and high-performance computing (HPC) workloads.

Advanced Connectivity: Each H100 GPU connects to four third-generation NVSwitches, enabling a fully connected topology. This setup allows any H100 GPU to communicate with any other H100 GPU concurrently at a bidirectional speed of 900 GB/s.

Enhanced Bandwidth: The NVLink ports provide more than 14 times the bandwidth of the current PCIe Gen4 x16 bus.

Vulnerability details: NVIDIA Hopper HGX for 8-GPU contains a vulnerability in the HGX Management Controller (HMC) that may allow a malicious actor with administrative access on the BMC to access the HMC as an administrator. A successful exploit of this vulnerability may lead to code execution, denial of service, escalation of privileges, information disclosure, and data tampering.

Official announcement: Please refer to the link for details – https://nvidia.custhelp.com/app/answers/detail/a_id/5561

PAGE PREFETCHER ATTACK – AMD ID: AMD-SB-7040 (28-2-2025)

Preface: Page prefetching is a technique used to improve performance by preloading data into the cache before it’s actually needed. However, the implementation and presence of a page prefetcher can vary depending on the CPU architecture and design.

Background: A page prefetcher attack is a type of side-channel attack that exploits the page prefetching mechanism in modern CPUs. Page prefetching is a performance optimization technique where the CPU predicts and loads pages of memory into the cache before they are actually needed. This can inadvertently create security vulnerabilities.

In a page prefetcher attack, an attacker can infer sensitive information by observing the patterns and timing of page prefetching operations. For example, the attacker might be able to determine which memory pages are being accessed by the victim, thereby gaining insights into the victim’s activities or extracting sensitive data.

About the topic: Researchers have disclosed to AMD a potential exploit, the page prefetcher attack (PPA), a prefetcher-based side-channel attack.

Manufacturer response: AMD has evaluated the paper and does not believe there are any new security implications. Please refer to the link – https://www.amd.com/en/resources/product-security/bulletin/amd-sb-7040.html

Python, have you ever thought about this? (25-02-2025)

Preface: Maintaining a satellite’s orbit involves a combination of precise calculations and regular adjustments. Here are the key factors:

  1. Velocity and Gravity: A satellite stays in orbit by balancing its velocity (speed in a straight line) with the gravitational pull of the Earth. The satellite must travel fast enough to counteract the pull of gravity, which keeps it in a stable orbit.
  2. Orbital Station-Keeping: This involves small adjustments using thrusters to correct any deviations in the satellite’s path. These maneuvers ensure the satellite remains in its designated orbit.
  3. Fuel Management: Satellites carry a limited amount of fuel for these adjustments. Efficient fuel management is crucial for prolonging the satellite’s operational life.
  4. Monitoring and Control: Ground stations continuously monitor satellites and send commands to perform necessary adjustments. This helps in maintaining the satellite’s orbit and addressing any potential issues.

Background: The PyEphem module provides highly precise data on the planets and our solar system. This module leverages an extremely robust C library that allows you to pinpoint planets, perform interplanetary calculations and discover more data than you’ll ever know what to do with.

Best practice: If you’re using PyEphem, it’s a good idea to keep your Python environment and libraries up to date and to check the module’s GitHub repository for any reported issues or updates.

CVE-2023-31315: AMD SMM Lock Bypass (21-Aug-2024)

Preface: AMD EPYC™ Processors power the highest-performing x86 servers for the modern data center, on prem and in cloud environments, across industries.

Background: Model-specific registers (MSR) are control registers provided by the processor implementation so that system software can interact with a variety of features, including performance monitoring, checking processor status, debugging, program tracing or toggling specific CPU features.

Intel and AMD may use the same MSR for the same feature, such as the IA32_LSTAR MSR.

With the Intel Pentium processor, Intel officially introduced two instructions, RDMSR and WRMSR, for reading and writing model-specific registers; this is when MSRs were formally introduced. The CPUID instruction was introduced at the same time. CPUID indicates which features are available on a specific CPU chip, and therefore whether the MSRs corresponding to those features exist, so software can use CPUID to query whether a given feature is supported on the current CPU.
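
A small user-space sketch of the CPUID side of this (checking whether RDMSR/WRMSR are reported as supported) might look like the following; actually reading or writing an MSR requires ring-0 code or an interface such as the Linux msr module, so it is not shown here:

/* Check CPUID leaf 1, EDX bit 5 ("MSR"), which reports whether the RDMSR and
 * WRMSR instructions are supported. Requires an x86 compiler that provides
 * <cpuid.h> (GCC/Clang). */
#include <stdio.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
        fprintf(stderr, "CPUID leaf 1 not supported\n");
        return 1;
    }

    if (edx & (1u << 5))                     /* CPUID.01H:EDX.MSR[bit 5] */
        printf("RDMSR/WRMSR supported on this CPU\n");
    else
        printf("RDMSR/WRMSR not reported by CPUID\n");
    return 0;
}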

Vulnerability details: Improper validation in a model specific register (MSR) could allow a malicious program with ring0 access to modify SMM configuration while SMI lock is enabled, potentially leading to arbitrary code execution.

Ref: Researchers from IOActive have reported that it may be possible for an attacker with ring 0 access to modify the configuration of System Management Mode (SMM) even when SMM Lock is enabled.

Official announcement: Please refer to the link for details – https://www.amd.com/en/resources/product-security/bulletin/amd-sb-7014.html

CVE-2023-52910 – iommu/iova: Fix alloc iova overflows issue (21-08-2024)

Preface: Modern hardware provides an I/O memory management unit (IOMMU) that mediates direct memory accesses (DMAs) by I/O devices in the same way that a processor’s MMU mediates memory accesses by instructions.

Background: With an IOMMU, when a device performs DMA to memory, the address given to the device driver is no longer a physical address but a virtual address, generally called an IOVA. When the device accesses memory, the IOMMU translates this virtual address into a physical address. When IOMMU bypass is used, however, the device can still use physical addresses directly for DMA.

Vulnerability details: This issue occurs in the following two situations:

- The first iova size exceeds the domain size. When initializing the iova domain, iovad->cached_node is assigned as iovad->anchor. For example, the iova domain size is 10M, start_pfn is 0x1_F000_0000, and the iova size allocated for the first time is 11M.

- The node with the largest iova->pfn_lo value in the iova domain is deleted, iovad->cached_node will be updated to iovad->anchor, and then the alloc iova size exceeds the maximum iova size that can be allocated in the domain.
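
The underlying arithmetic hazard is ordinary unsigned wrap-around. The snippet below is a purely illustrative user-space sketch (not the kernel's iova allocator; the domain numbers are assumed for the example) of how PFN arithmetic can silently wrap when a request is larger than the domain:

/* Purely illustrative: unsigned PFN arithmetic wraps silently, which is the
 * class of problem the iova fix guards against. */
#include <stdio.h>
#include <limits.h>

int main(void)
{
    unsigned long anchor_pfn = ULONG_MAX;   /* sentinel-style "anchor" PFN           */
    unsigned long limit_pfn  = 2560;        /* e.g. a 10 MiB domain in 4 KiB pages   */
    unsigned long size       = 2816;        /* an 11 MiB request, bigger than domain */

    /* Stepping past a sentinel PFN wraps to 0 instead of failing. */
    printf("anchor_pfn + 1   = %lu\n", anchor_pfn + 1);

    /* Subtracting an oversized request from the limit wraps to a huge value. */
    printf("limit_pfn - size = 0x%lx\n", limit_pfn - size);

    /* The safe pattern: check the size against the limit before computing. */
    if (size > limit_pfn)
        printf("request larger than domain: reject instead of wrapping\n");
    return 0;
}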

Official announcement: Please refer to the url for details – https://nvd.nist.gov/vuln/detail/CVE-2023-52910

CVE-2024-44070: FRRouting (FRR) – bgpd – ensure the hash works (18th Aug 2024)

Preface: As time goes by, OSS (Open Source Software) has come into use by cost-conscious commercial companies, and it is quite popular in the cloud.

Background: FRRouting (FRR) is a free and open source Internet routing protocol suite for Linux and Unix platforms. It implements BGP, OSPF, RIP, IS-IS, PIM, LDP, BFD, Babel, PBR, OpenFabric and VRRP, with alpha support for EIGRP and NHRP.

FRR’s seamless integration with native Linux/Unix IP networking stacks makes it a general purpose routing stack applicable to a wide variety of use cases including connecting hosts/VMs/containers to the network, advertising network services, LAN switching and routing, Internet access routers, and Internet peering.

Vulnerability details: An issue was discovered in FRRouting (FRR) through 10.1. bgp_attr_encap in bgpd/bgp_attr.c does not check the actual remaining stream length before taking the TLV value.
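
The general defect class is easy to illustrate: a TLV parser must verify that the declared length fits within the bytes actually remaining in the stream before reading the value. The sketch below is a generic illustration in C, not FRR's bgp_attr_encap() source:

/* Generic TLV-parsing illustration: the declared TLV length is checked
 * against the bytes actually remaining before the value is copied. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Returns the number of bytes consumed, or 0 on malformed input. */
static size_t parse_tlv(const uint8_t *buf, size_t remaining,
                        uint8_t *type, uint8_t *value, size_t value_cap)
{
    if (remaining < 2)                    /* need at least type + length        */
        return 0;

    uint8_t len = buf[1];
    if ((size_t)len > remaining - 2)      /* the crucial remaining-length check */
        return 0;
    if ((size_t)len > value_cap)          /* do not overflow the caller's buffer */
        return 0;

    *type = buf[0];
    memcpy(value, buf + 2, len);
    return 2 + (size_t)len;
}

int main(void)
{
    /* Claims 16 value bytes, but only 3 are actually present in the stream. */
    const uint8_t malformed[] = { 0x01, 0x10, 0xaa, 0xbb, 0xcc };
    uint8_t type, value[64];

    if (!parse_tlv(malformed, sizeof(malformed), &type, value, sizeof(value)))
        printf("rejected: TLV length exceeds remaining stream\n");
    return 0;
}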

Official announcement: For details, please refer to link – https://www.tenable.com/cve/CVE-2024-44070