CVE-2023-6531: A use-after-free flaw was found in the Linux kernel; the design weakness is fixed, and an attacker can no longer rely on io_uring_get_socket() (21st Jan 2024)

Preface: In my opinion, this design flaw is dangerous. But no worries: the fix landed about a month ago, and vendors have issued remediations based on their priorities. The CVE technical details were released today (21st Jan 2024).

Background: io_uring is applicable to most businesses and applications with a demand for asynchronous I/O. As of now, io_uring has been integrated into multiple mainstream open-source applications, such as RocksDB, Netty, QEMU, SPDK, PostgreSQL, MariaDB, etc.

What is io_uring? io_uring is an asynchronous I/O interface for the Linux kernel. An io_uring is a pair of ring buffers in shared memory that are used as queues between user space and the kernel:

Submission queue (SQ): A user space process uses the submission queue to send asynchronous I/O requests to the kernel.

Completion queue (CQ): The kernel uses the completion queue to send the results of asynchronous I/O operations back to user space.
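Below is my own minimal sketch (using the liburing helper library, which is an assumption on my part and not something taken from the description above) showing how the two queues cooperate: one request is placed on the submission queue, the kernel executes it asynchronously, and the result is read back from the completion queue. Error handling is reduced to the bare minimum for illustration.

/* Minimal liburing sketch: submit one no-op request and reap its completion.
 * Build (assuming liburing is installed): gcc demo.c -luring -o demo
 */
#include <liburing.h>
#include <stdio.h>

int main(void)
{
    struct io_uring ring;

    /* Create the SQ/CQ ring pair shared with the kernel (8 entries). */
    if (io_uring_queue_init(8, &ring, 0) < 0) {
        perror("io_uring_queue_init");
        return 1;
    }

    /* Put one request on the submission queue (a NOP, for illustration). */
    struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
    io_uring_prep_nop(sqe);
    io_uring_submit(&ring);

    /* Wait for the kernel to post the result on the completion queue. */
    struct io_uring_cqe *cqe;
    if (io_uring_wait_cqe(&ring, &cqe) == 0) {
        printf("completion result: %d\n", cqe->res);
        io_uring_cqe_seen(&ring, cqe);
    }

    io_uring_queue_exit(&ring);
    return 0;
}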

Many io_uring features will soon be available in Red Hat Enterprise Linux 9.3, which ships with kernel version 5.14. Fedora 37 currently provides the latest io_uring functionality.

Vulnerability details: A use-after-free flaw was found in the Linux kernel due to a race problem: the unix garbage collector’s deletion of an SKB races with unix_stream_read_generic() on the socket that the SKB is queued on.

Official announcement: Please refer to the link for details – https://nvd.nist.gov/vuln/detail/CVE-2023-6531

CVE-2024-21735: SAP LT Replication Server design weakness (affected versions: S4CORE 103, 104, 105, 106, 107, 108) (18th Jan 2024)

Preface: The latest version of SAP Landscape Transformation Replication Server combines the latest SAP Landscape Transformation Replication Server functionality with the latest SAP Basis version for the best support of all use cases involving SAP systems and databases. There are two options for using this version of SAP Landscape Transformation Replication Server (see also the chapter Installation Options):

i. As a standalone system based on SAP S/4HANA Foundation 2020 (or higher) together with the DMIS 2020 add-on.

ii. Embedded in SAP S/4HANA 2020 (or higher).

Background: (S_DMIS – Authority object for SAP SLO Data migration server)

The user role SAP_IUUC_REPL_ADMIN is required to use SAP Landscape Transformation Replication Server. By default, this role does not allow users to view the data that is replicated from the source system to the target system. However, the authorization object S_DMIS (with activity 29) allows users to view the data that is being replicated (by means of the replication logging function).

SAP strongly recommends that you use the Read Access Logging (RAL) component to monitor and log read access to the relevant data.

Vulnerability Details: SAP LT Replication Server – version S4CORE 103, S4CORE 104, S4CORE 105, S4CORE 106, S4CORE 107, S4CORE 108, does not perform necessary authorization checks. This could allow an attacker with high privileges to perform unintended actions, resulting in escalation of privileges, which has High impact on confidentiality, integrity and availability of the system.

Official announcement: Please refer to the link for details – https://www.sap.com/documents/2022/02/fa865ea4-167e-0010-bca6-c68f7e60039b.html

Oracle’s January 2024 Critical Patch Update Bulletin remediates the related CVE-2023-44487 vulnerability across its product family (17th Jan 2024)

Preface: HTTP/2 Rapid Reset abuses stream multiplexing. Rapid Reset attacks mostly affect large infrastructure providers, as well as the software smaller providers use, such as NGINX, Apache HTTP Server, etc.

Background: Oracle GraalVM is a high-performance JDK that can speed up the performance of Java and JVM-based applications using an alternative just-in-time (JIT) compiler.

Vulnerability details: The HTTP/2 protocol allows a denial of service (server resource consumption) because request cancellation can reset many streams quickly, as exploited in the wild in August through October 2023.

If you are managing or operating your own HTTP/2-capable server (open source or commercial) you should immediately apply a patch from the relevant vendor when available.

Official announcement: Please refer to the link for details – https://www.oracle.com/security-alerts/cpujan2024.html

The patch for CVE-2023-44487 also addresses CVE-2023-36478, CVE-2023-40167, CVE-2023-42794, CVE-2023-42795, and CVE-2023-45648.

The patch for CVE-2023-44487 also addresses CVE-2023-45143.

The patch for CVE-2023-45648 also addresses CVE-2023-42794, CVE-2023-42795, and CVE-2023-44487.

The patch for CVE-2023-44487 also addresses CVE-2023-36478.

Netscaler zero-days – CVE-2023-6548 and CVE-2023-6549 (16th Jan 2024)

Preface: NetScaler was initially developed in 1997 by Michel K Susai and acquired by Citrix Systems in 2005. What is the difference between NetScaler ADC and NetScaler gateway? NetScaler ADC is an application delivery controller. NetScaler Gateway (formerly Citrix Gateway) is an access gateway with SSL VPN solution, providing single sign-on (SSO) and authentication for remote end users of network assets.

Background: A cross-site scripting (XSS) attack is a type of injection attack in which malicious script is injected into an otherwise benign and trusted website. The risk of XSS comes from the ability to execute arbitrary JS within the current user context.

UDP is common, but it has inherent vulnerabilities that make it prone to attacks, such as limited packet verification, IP spoofing and DoS attacks.

Ref:

#NSIP address – The management IP address for NetScaler Gateway that is used for all management‑related access to the appliance. NetScaler Gateway also uses the NSIP address for authentication

#Subnet IP (SNIP) address – The IP address that represents the user device by communicating with a server on a secondary network

#Cluster management IP (CLIP) address – The management IP address of a NetScaler cluster

Vulnerability details:

CVE-2023-6548 is a remote code execution (RCE) vulnerability in the NetScaler ADC and Gateway appliances. An authenticated attacker with low-level privileges could exploit this vulnerability if they are able to reach the NetScaler IP (NSIP), a Subnet IP (SNIP), or the cluster management IP (CLIP) with access to the appliance’s management interface.

CVE-2023-6549 is a denial of service (DoS) vulnerability in the NetScaler ADC and Gateway appliances. An attacker could exploit this vulnerability when a vulnerable appliance has been configured as a Gateway (e.g. VPN, ICA Proxy, CVPN, RDP Proxy) or as an AAA virtual server.

Official announcement: Please refer to the link for details – https://support.citrix.com/article/CTX584986/netscaler-adc-and-netscaler-gateway-security-bulletin-for-cve20236548-and-cve20236549

LiDAR assists archaeologists in discovering ruins in the upper Amazon rainforest (15th Jan 2024)

Preface: In ancient South America, tribal leaders would cover their bodies with gold powder and wash themselves in a holy lake in the mountains. For example, a famous place where ancient civilizations performed this ceremony is Lake Titicaca. Priests and nobles would throw precious gold and emeralds into the lake as a dedication to their gods.

El Dorado, the so-called Golden Kingdom, is an ancient legend that first began with a South American ritual. Spanish conquistadors, upon hearing these tales from the natives, believed there was a place abundant in gold and precious stones and began referring to it as El Dorado. Many explorers believe that Ciudad Blanca is the legendary El Dorado. Legend has it that somewhere beneath the forest canopy lies the ancient city of Ciudad Blanca, and now archaeologists think they may have found it.

A group of scientists from fields including archaeology, anthropology and geology used a technology known as airborne light detection and ranging (LiDAR). They found what appears to be a network of plazas and pyramids, hidden for hundreds of years beneath the forest canopy.

Background: What is LiDAR? LiDAR (light detection and ranging) is a remote sensing method that uses a laser to measure distances. Pulses of light are emitted from a laser scanner, and when the pulse hits a target, a portion of its photons are reflected back to the scanner. Because the location of the scanner, the directionality of the pulse, and the time between pulse emission and return are known, the 3D location (XYZ coordinates) from which the pulse reflected is calculable.
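As a toy example of that calculation (my own sketch, not taken from the article), the code below converts one pulse’s round-trip time, the scanner position and the beam angles into an XYZ point. The sample values and the simple spherical-to-Cartesian model are assumptions for demonstration only; real LiDAR processing applies many further corrections (scan-mirror geometry, GPS/IMU drift, atmosphere).

/* Toy LiDAR point computation: round-trip time + beam direction -> XYZ.
 * Illustrative only; real systems apply many additional corrections.
 */
#include <math.h>
#include <stdio.h>

#define SPEED_OF_LIGHT 299792458.0      /* metres per second */
#define PI             3.14159265358979323846

int main(void)
{
    /* Hypothetical inputs for one returned pulse. */
    double t_round_trip  = 6.67e-6;     /* seconds (pulse out and back)      */
    double azimuth_deg   = 30.0;        /* beam heading, degrees             */
    double elevation_deg = -45.0;       /* beam tilt below horizon, degrees  */
    double scanner_x = 0.0, scanner_y = 0.0, scanner_z = 1000.0; /* metres   */

    /* One-way distance: the light travels to the target and back. */
    double range = SPEED_OF_LIGHT * t_round_trip / 2.0;

    /* Simple spherical-to-Cartesian conversion for the reflection point. */
    double az = azimuth_deg   * PI / 180.0;
    double el = elevation_deg * PI / 180.0;
    double x = scanner_x + range * cos(el) * cos(az);
    double y = scanner_y + range * cos(el) * sin(az);
    double z = scanner_z + range * sin(el);

    printf("range = %.2f m, point = (%.2f, %.2f, %.2f)\n", range, x, y, z);
    return 0;
}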

Which software is used for LiDAR data processing?

While LiDAR is a technology for making point clouds, not all point clouds are created using LiDAR. For example, point clouds can be made from images obtained from digital cameras, a technique known as photogrammetry. One key difference that distinguishes photogrammetry from LiDAR is RGB: unlike an RGB image, a LiDAR projection image has no obvious texture, and it is difficult to find patterns in the projected image.

The programs to process LiDAR are numerous and increasing rapidly in accordance with the evolving field and user needs. ArcGIS has LiDAR processing functionality. ArcGIS accepts LAS or ASCII file types and has both 2D and 3D visualization options. Additionally, there are other options on the market. For example: NVIDIA DeepStream Software Development Kit (SDK). This SDK is an accelerated AI framework to build pipelines. DeepStream pipelines enable real-time analytics on video, image, and sensor data.

The architecture diagram on the right is for reference.

Headline News: https://www.sciencenews.org/article/ancient-urban-complex-ecuador-amazon-laser

About NVIDIA Security Bulletin – CVE-2023-31029 and CVE-2023-31030 (14th Jan 2024)

Preface: Artificial intelligence performs better when humans are involved in data collection, annotation, and validation. But why is artificial intelligence ubiquitous in the human world? Can we limit the use of AI?

Background: The NVIDIA DGX A100 system comes with a baseboard management controller (BMC) for monitoring and controlling various hardware devices on the system. It monitors system sensors and other parameters. Kernel-based Virtual Machine (KVM) is an open source virtualization technology built into Linux. KVM lets you turn Linux into a hypervisor that allows a host machine to run multiple, isolated virtual environments called guests or virtual machines (VMs).

What is Virtio-net device? Virtio-net device emulation enables users to create VirtIO-net emulated PCIe devices in the system where the NVIDIA® BlueField® DPU is connected.

Vulnerability details:

CVE-2023-31029 – NVIDIA DGX A100 baseboard management controller (BMC) contains a vulnerability in the host KVM daemon, where an unauthenticated attacker may cause a stack overflow by sending a specially crafted network packet. A successful exploit of this vulnerability may lead to arbitrary code execution, denial of service, information disclosure, and data tampering.

CVE-2023-31030 – NVIDIA DGX A100 BMC contains a vulnerability in the host KVM daemon, where an unauthenticated attacker may cause a stack overflow by sending a specially crafted network packet. A successful exploit of this vulnerability may lead to arbitrary code execution, denial of service, information disclosure, and data tampering.
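NVIDIA has not published the affected code, so the fragment below is only my generic illustration of the bug class described (a stack buffer overflow driven by an untrusted length field in a network packet). Every name in it is hypothetical and it is deliberately simplified; it is not the BMC’s actual code.

/* Generic illustration of the bug class only (all names hypothetical):
 * a fixed-size stack buffer is filled using a length taken from the
 * packet itself, so a crafted packet can overflow the stack.
 */
#include <stdint.h>
#include <string.h>

struct pkt_header {
    uint16_t payload_len;   /* attacker-controlled length field */
};

void handle_packet(const uint8_t *data, size_t data_len)
{
    uint8_t payload[256];                 /* fixed-size stack buffer */
    const struct pkt_header *hdr = (const struct pkt_header *)data;

    if (data_len < sizeof(*hdr))
        return;

    /* BUG: the copy length comes from the packet and is never checked
     * against sizeof(payload), so payload_len > 256 smashes the stack. */
    memcpy(payload, data + sizeof(*hdr), hdr->payload_len);

    /* Fix sketch: reject oversized lengths, e.g.
     * if (hdr->payload_len > sizeof(payload) ||
     *     hdr->payload_len > data_len - sizeof(*hdr)) return;          */
}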

Official announcement: Please refer to the link for details – https://nvidia.custhelp.com/app/answers/detail/a_id/5510

My comment: The vendor published this vulnerability but did not provide full details. Do you think the details in the attached diagram are the actual root cause?

CVE-2023-5091: Mali GPU Kernel Driver allows improper GPU processing operations (8th Jan 2024)

Preface: According to news from October 2023, experts speculated that commercial spyware exploited a security vulnerability in the Arm Mali GPU driver to compromise some people’s devices. The vulnerability was described as a local attack. But how does an attacker plant malware on a smartphone without remote access? Hard to say! Phishing and social engineering techniques may be involved.

Background: About four years ago, the mainstream mobile GPUs were PowerVR, Mali, and Adreno (Qualcomm). Apple used a customized version of PowerVR in the early days; however, as Apple developed its own GPU, the PowerVR design passed to Canyon Bridge Capital Partners (which acquired Imagination Technologies). Mali is ARM’s graphics acceleration IP, i.e. ARM’s own Mali series of IP cores.

The first version of the Mali microarchitecture is called Utgard. Later there were versions called Midgard (second generation), Bifrost (third generation), and Valhall (fourth generation). Valhall was launched in the second quarter of 2019. The main series are Mali-G57 and Mali-G77.

However, commercial spyware has exploited a security hole in Arm’s Mali GPU drivers to compromise some people’s devices, according to news from Oct 2023.

ARM decided last September (2023) not to disclose any details of CVE-2023-5091 to the public. The official announcement was finally published on January 8, 2024.

Vulnerability details: Use After Free vulnerability in Arm Ltd Valhall GPU Kernel Driver allows a local non-privileged user to make improper GPU processing operations to gain access to already freed memory. This issue affects Valhall GPU Kernel Driver: from r37p0 through r40p0.
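Arm has not released the affected driver source, so the fragment below is only a generic use-after-free pattern for illustration (all names are hypothetical; it is not Mali driver code): one path frees an object while another path still holds and dereferences the stale pointer, which is the bug class the CVE text describes.

/* Generic use-after-free pattern (illustrative only, names hypothetical). */
#include <stdlib.h>
#include <string.h>

struct gpu_job {
    int   id;
    char *payload;
};

static struct gpu_job *current_job;   /* shared pointer, no ownership rules */

void finish_job(void)
{
    free(current_job->payload);
    free(current_job);
    /* BUG: current_job is not set to NULL, so other code paths still
     * see a pointer to freed memory.                                   */
}

void touch_job(void)
{
    /* BUG: dereferences memory that finish_job() may already have freed.
     * If the allocator has reused the chunk, whoever controls the new
     * contents controls what is read and written here.                 */
    if (current_job)
        memset(current_job->payload, 0, 16);
}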

Official announcement: Please refer to the link for details – https://nvd.nist.gov/vuln/detail/CVE-2023-5091

CVE-2024-21318 – SharePoint Enterprise Server 2016, 2019 and Subscription Edition design limitation (10th Jan 2024)

Preface: Under normal circumstances, CVE numbers are assigned sequentially each year. Microsoft announced CVE-2024-21318 on January 9, 2024. At the very start of a new year, a number this high makes me speculate that plenty of design weaknesses found in 2023 are still waiting to be verified; because of the huge volume of data, they must wait for official CVE reference numbers and so are carried forward into 2024. This brings the total to five figures.

Background: Microsoft did not disclose details, so the technical specifics are not yet clear. Do you think the SharePoint Add-in model is one of the possible factors in this matter?

A SharePoint Add-in is a self-contained piece of functionality that extends the capabilities of SharePoint websites to solve a well-defined business problem. Add-ins don’t have custom code that runs on SharePoint servers. Instead, all custom logic moves “up” to the cloud, or “down” to client computers, or “over” to an on-premises server that is outside the SharePoint farm or SharePoint Online subscription. Keeping custom code off SharePoint servers provides reassurance to SharePoint administrators that the add-in can’t harm their servers or reduce the performance of their SharePoint Online websites.

Business logic in a SharePoint Add-in can access SharePoint data through one of the several client APIs included in SharePoint. Which API you use for your add-in depends on certain other design decisions you make.
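As a hedged illustration only (not Microsoft’s sample code, and unrelated to the vulnerability itself), the sketch below calls one of those client APIs, the SharePoint REST endpoint, from C using libcurl. The site URL, list name and bearer token are placeholders; a real add-in would obtain the token through its configured OAuth flow.

/* Hedged sketch: calling SharePoint's REST client API with libcurl.
 * Site URL, list name, and the bearer token are placeholders.
 * Build: gcc sp_rest.c -lcurl -o sp_rest
 */
#include <curl/curl.h>
#include <stdio.h>

int main(void)
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (!curl)
        return 1;

    /* REST endpoint: items of a list on a (placeholder) site collection. */
    curl_easy_setopt(curl, CURLOPT_URL,
        "https://contoso.sharepoint.com/sites/demo/_api/web/"
        "lists/getbytitle('Documents')/items");

    struct curl_slist *hdrs = NULL;
    hdrs = curl_slist_append(hdrs, "Accept: application/json;odata=verbose");
    hdrs = curl_slist_append(hdrs, "Authorization: Bearer <access-token>");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);

    CURLcode rc = curl_easy_perform(curl);   /* response body goes to stdout */
    if (rc != CURLE_OK)
        fprintf(stderr, "request failed: %s\n", curl_easy_strerror(rc));

    curl_slist_free_all(hdrs);
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return 0;
}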

Vulnerability details: Microsoft SharePoint Server Remote Code Execution Vulnerability. Technical details unknown.

Remedy: Applying the patch can eliminate this problem. Possible mitigations were released immediately after the vulnerability was disclosed.

Official announcement: Please refer to the link for details –

https://msrc.microsoft.com/update-guide/vulnerability/CVE-2024-21318

Supply constraints and product attribute design: two camps are expected to operate in the future (9th Jan 2024)

Preface: When high performance computing (HPC) clusters were born, they were destined to compete with traditional mainframe technology. A major component of HPC is the many-core processor, i.e. the GPU. For example, the NVIDIA GA100 GPU is composed of multiple GPU Processing Clusters (GPCs), Texture Processing Clusters (TPCs), Streaming Multiprocessors (SMs), and HBM2 memory controllers. As for the best-of-the-best setup, the world’s fastest public supercomputer, Frontier, has 37,000 AMD Instinct MI250X GPUs.

How to break through traditional computer technology and move from serial to parallel processing: CPUs are fast, but they work by quickly executing a series of tasks one after another, which requires a lot of interactivity; this is known as serial processing. GPU parallel computing is a type of computation in which many calculations or processes are carried out simultaneously. Over time, the revolution in GPU technology and high-performance clusters arrived: Red Hat created a high performance cluster system configuration whose overall performance is close to that of a supercomputer built on crossbar switches. But the bottleneck lies in how to transform traditional software applications from serial processing to parallel processing.

Reflection of reality in the technological world: It is common consensus that GPU manufacturer Nvidia holds a strong market share worldwide. The Nvidia A100 delivers strong performance on intensive AI and deep-learning tasks and is the more budget-friendly option, while the newer H100, with optimizations such as TensorRT-LLM support and NVLink, surpasses the A100, especially in the LLM area. Large Language Models (LLMs) have revolutionised the field of natural language processing. As these models grow in size and complexity, the computational demands for inference also increase significantly. To tackle this challenge, leveraging multiple GPUs becomes essential.

Supply constraints and product attribute design create headaches for web hosting providers: CUDA is a parallel computing platform and programming model developed by NVIDIA for general computing on graphical processing units (GPUs). But converting serial C code to data-parallel code is a difficult problem. Because of this limitation, Nvidia developed the NVIDIA CUDA Compiler (NVCC), a proprietary compiler intended for use with CUDA.

Using the CUDA Toolkit you can accelerate your C or C++ applications by updating the computationally intensive portions of your code to run on GPUs. To accelerate your applications, you can call functions from drop-in libraries as well as develop custom applications using languages including C, C++, Fortran and Python.
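As a concrete illustration of “updating the computationally intensive portions of your code” (my own minimal sketch, not taken from any vendor sample), the code below shows a serial C loop and the equivalent CUDA kernel launched across many GPU threads. It must be built with NVCC on a machine with an NVIDIA GPU.

// Minimal serial-to-parallel sketch (SAXPY: y = a*x + y), illustrative only.
// Build with NVCC: nvcc saxpy.cu -o saxpy
#include <cuda_runtime.h>
#include <cstdio>

// Serial CPU version: one loop iteration after another.
void saxpy_serial(int n, float a, const float *x, float *y) {
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

// CUDA version: each GPU thread handles one element of the loop.
__global__ void saxpy_kernel(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    // Unified (managed) memory keeps the example short.
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    int blocks = (n + 255) / 256;
    saxpy_kernel<<<blocks, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f (expected 4.0)\n", y[0]);
    cudaFree(x);
    cudaFree(y);
    return 0;
}

The kernel body is identical to the loop body; the difference is that each GPU thread computes a single index i instead of one CPU core iterating over all of them.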

But you cannot use CUDA without an Nvidia graphics card. CUDA is a framework developed by Nvidia that lets owners of Nvidia graphics cards use GPU acceleration for deep learning, and not having an Nvidia graphics card defeats that purpose. (Refer to attached Diagram Part 1.)

If a web hosting service provider does not use NVIDIA products, is it possible to use another brand of GPU processor for AI machine learning? Yes, one option is OpenCilk.

OpenCilk (http://opencilk.org) is a new open-source platform to support task-parallel programming in C/C++. (Refer to attached Diagram Part 2)
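For comparison, here is my minimal (assumed) OpenCilk version of the same element-wise loop; it parallelises across ordinary CPU cores rather than a GPU and is built with the OpenCilk clang using the -fopencilk flag.

/* Minimal OpenCilk sketch: the same y = a*x + y loop, parallel on CPU cores.
 * Build with the OpenCilk compiler: clang -fopencilk -O2 saxpy_cilk.c
 */
#include <cilk/cilk.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const int n = 1 << 20;
    float *x = malloc(n * sizeof(float));
    float *y = malloc(n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    /* cilk_for lets the runtime distribute loop iterations across cores. */
    cilk_for (int i = 0; i < n; ++i)
        y[i] = 2.0f * x[i] + y[i];

    printf("y[0] = %f (expected 4.0)\n", y[0]);
    free(x);
    free(y);
    return 0;
}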

Referring to the above details, the direction of technological development suggests that two camps will operate in the future: the Nvidia camp and the non-Nvidia camp. This is why I have observed web hosting service providers giving themselves headaches over this new trend in the technology game.

CVE-2023-34326: Potential risk allowing access to unintended memory regions (8th Jan 2024)

Preface: In fact, by the time a vulnerability is released to the public, the design limitations and/or flaws have usually already been fixed. You may ask: what is left to discuss about a disclosed vulnerability? As you know, an increasing number of vendors remain compliant with CVE policies but do not disclose the technical details. If your focus is understanding, you can still study the underlying techniques even when the vendor releases no details, and the techniques you learn can expand your horizons.

Background: AMD-Vi is the I/O memory management unit (IOMMU) embedded in the chipset of the AMD Opteron 6000 Series platform. The IOMMU is a key technology for extending the CPU’s virtual memory to GPUs to enable heterogeneous computing. AMD-Vi (also known as AMD IOMMU) also allows for PCI passthrough.

DMA mapping is the conversion of virtually addressed memory into memory that is DMA-able in terms of physical addresses (actually bus addresses).

DMA remapping maps virtual addresses in DMA operations to physical addresses in the processor’s memory address space. Similar to MMU, IOMMU uses a multi-level page table to keep track of the IOVA-to-PA mappings at different page-size granularity (e.g., 4-KiB, 2-MiB, and 1-GiB pages). The hardware also implements a cache (aka IOTLB) of page table entries to speed up translations.
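To make the DMA-mapping idea concrete, here is a minimal illustrative fragment (my own sketch; the function and buffer names are hypothetical) of how a Linux driver typically obtains a bus/IOVA address for a buffer with the streaming DMA API before programming a device:

/* Illustrative streaming-DMA mapping in a Linux driver (names hypothetical). */
#include <linux/dma-mapping.h>
#include <linux/errno.h>

static int start_device_dma(struct device *dev, void *buf, size_t len)
{
    dma_addr_t bus_addr;

    /* Translate the kernel-virtual buffer into a bus/IOVA address the
     * device (via the IOMMU) is allowed to use for DMA writes.         */
    bus_addr = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);
    if (dma_mapping_error(dev, bus_addr))
        return -ENOMEM;

    /* ... program the device with bus_addr and start the transfer ... */

    /* When the transfer is done, tear the mapping down again. */
    dma_unmap_single(dev, bus_addr, len, DMA_FROM_DEVICE);
    return 0;
}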

AMD processors use two distinct IOTLBs for caching Page Directory Entry (PDE) and Page Table Entry (PTE) (AMD, 2021; Kegel et al., 2016).

Ref: If your application scenario does not require virtualization, disable AMD Virtualization Technology. With virtualization disabled, also disable AMD IOMMU, since it can cause differences in memory-access latency. Finally, disable SR-IOV as well.

Vulnerability details: The caching invalidation guidelines from the AMD-Vi specification (48882—Rev 3.07-PUB—Oct 2022) are incorrect on some hardware, as devices will malfunction (see stale DMA mappings) if some fields of the DTE are updated but the IOMMU TLB is not flushed. Such stale DMA mappings can point to memory ranges not owned by the guest, thus allowing access to unintended memory regions.

Official announcement: Please refer to the link for details – https://nvd.nist.gov/vuln/detail/CVE-2023-34326

antihackingonline.com