CVE-2024-39486: A race in the video card's Direct Rendering Manager (DRM) leads to a use-after-free of a "struct pid" (8 Jul 2024)

Preface: The display pipeline driver responsible for interfacing with the display uses the kernel mode setting (KMS) API, while the GPU responsible for drawing objects into memory uses the direct rendering manager (DRM) API.

Background: The Direct Rendering Manager (DRM) is the subsystem of the Linux kernel responsible for interfacing with the GPUs of modern video cards. For plain GEM-based drivers there is the DEFINE_DRM_GEM_FOPS() macro, and for DMA-based drivers there is the DEFINE_DRM_GEM_DMA_FOPS() macro to make this simpler. A refcount records the number of references (i.e., pointers in the C language) to a given memory object; a positive refcount means the object could still be accessed in the future, hence it must not be freed.

Vulnerability details: filp->pid is supposed to be a refcounted pointer; however, before the patch, drm_file_update_pid() only incremented the refcount of a struct pid after storing a pointer to it in filp->pid and dropping the dev->filelist_mutex. In that window, a concurrent task can replace the pointer again and drop what was the last reference, so the delayed get_pid() operates on a freed struct pid, i.e., a use-after-free.

Remark: The official explanation says this design weakness may be difficult to encounter in reality, because process A has to pass through a synchronize_rcu() operation while process B is between mutex_unlock() and get_pid(). The vulnerability (CVE-2024-39486) has been resolved.

Official announcement: For details, please refer to the link – https://lnkd.in/gxWjvw8c
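To make the race window concrete, here is a minimal C sketch of the two orderings. It is illustrative only: drm_device_min and drm_file_min are trimmed stand-ins for the real DRM structures, and the function bodies are simplified from the upstream drm_file_update_pid(), so treat this as a sketch of the fix's idea (take the reference before the lock is dropped) rather than the literal kernel patch.

```c
/*
 * Illustrative sketch of CVE-2024-39486, simplified from
 * drm_file_update_pid() in drivers/gpu/drm/drm_file.c.
 * drm_device_min and drm_file_min are trimmed stand-ins for
 * struct drm_device and struct drm_file; this is NOT the
 * literal upstream diff.
 */
#include <linux/lockdep.h>
#include <linux/mutex.h>
#include <linux/pid.h>
#include <linux/rcupdate.h>
#include <linux/sched.h>

struct drm_device_min {
	struct mutex filelist_mutex;
};

struct drm_file_min {
	struct pid __rcu *pid;
};

/*
 * BUGGY ordering (before the fix): the new pid is published in
 * filp->pid and the mutex is dropped BEFORE get_pid() bumps the
 * refcount. Another task can run the same update in that window,
 * replace the pointer, and put_pid() our pid down to zero, so the
 * get_pid() below touches freed memory.
 */
static void update_pid_buggy(struct drm_device_min *dev,
			     struct drm_file_min *filp)
{
	struct pid *pid = task_tgid(current);
	struct pid *old;

	mutex_lock(&dev->filelist_mutex);
	old = rcu_replace_pointer(filp->pid, pid,
				  lockdep_is_held(&dev->filelist_mutex));
	mutex_unlock(&dev->filelist_mutex);

	if (pid != old) {
		get_pid(pid);       /* too late: *pid may already be freed */
		synchronize_rcu();  /* wait out RCU readers of "old" */
		put_pid(old);
	}
}

/*
 * FIXED ordering (the idea behind the patch): take the reference
 * while filelist_mutex is still held, so no other updater can see
 * the pointer before its refcount reflects our use.
 */
static void update_pid_fixed(struct drm_device_min *dev,
			     struct drm_file_min *filp)
{
	struct pid *pid = task_tgid(current);
	struct pid *old;

	mutex_lock(&dev->filelist_mutex);
	old = rcu_replace_pointer(filp->pid, pid,
				  lockdep_is_held(&dev->filelist_mutex));
	if (pid != old)
		get_pid(pid);       /* refcount bumped before unlock */
	mutex_unlock(&dev->filelist_mutex);

	if (pid != old) {
		synchronize_rcu();
		put_pid(old);
	}
}
```

In the buggy version, the exploitable window is exactly the gap between mutex_unlock() and get_pid() that the remark above describes; the fixed version closes it by bumping the refcount while the mutex is still held.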
More Relevant Posts
We’re excited to expand the functionality of Snowflake ML with the new Container Runtime for Snowflake Notebooks, available in public preview! Container Runtime provides flexible infrastructure for building and running resource-intensive ML workflows within Snowflake. Using Snowflake Notebooks in Container Runtime gives you access to distributed processing on both CPUs and GPUs, optimized data loading, automatic lineage capture, and Model Registry integration. It also gives you the flexibility to use a set of preinstalled packages or to pip install any open-source package of your choice. Our Quickstart will guide you through installing packages, training a model, and viewing logs. Try it out: https://lnkd.in/gdwSmXV8
Top 10 Tricks for Google Colab Users:
10. Specify the TensorFlow version
9. Use TensorBoard for visualization
8. Use TPUs when you need more processing power
7. Use Local Runtimes if you have local hardware accelerators
6. Use a Colab Scratchpad for quick tests
5. Copy data to Colab's VMs for fast data loading
4. Check your RAM and resource limits to make sure you don't run out of resources
3. Close tabs when done to end the session and save resources
2. Use GPUs only when needed, to ensure you have access when you really need them
1. What's your number 1 tip for using Google Colab?
++ Build an Auto-scaling Inference Service ++ This video is for those of you who want to set up APIs for custom models (text or multi-modal).
- I walk through the steps involved in setting up inference endpoints.
- I weigh the options of a) renting GPUs, b) using a serverless service, or c) building an auto-scaling service yourself.
- Then, I build out an auto-scaling service that can be served through a single OpenAI-style endpoint.
I show how to set up a scaling service for SmolLM and also for Qwen multi-modal text-plus-image models. Find the video over on Trelis Research on YouTube.
🎉 Thrilled to share that Container Runtime for Snowflake Notebooks is now in public preview! GPUs? No problem! Snowflake is making it easier than ever to build and deploy models using distributed GPUs, all from a single platform. Check out the blog to learn more:
To support various AI software (LangChain and others), manage 1000+ GPUs, and serve many customers running critical business tasks, the amount of work grows exponentially. With very limited resources, how can I handle it all? My eyes turn to this tiny AI cluster I built with just under $3000 of GPUs. Can the AI cluster help develop itself? With a 1.5X, 2X, 3X, 5X, or even 10X productivity improvement? It will be good to find out.
A small cluster with 2 Linux nodes and 7 GPU graphics cards (4 RTX 3060, 2 RTX 4060 Ti Super and 1 RTX 4070 Ti Super). In the bottom node, I had to lift all 4 GPU cards and connect them to the motherboard via PCIe cables due to space limitations.
Total GPU VRAM: 96GB
Total cost of GPU graphics cards: under $3000
Able to run Llama2-7B, Llama2-13B, Llama3-8B and CodeLlama-34B-Python models in float16 concurrently. Software development work was needed to make this happen, since conventional approaches require more than (7 + 13 + 8 + 34) * 2 GB = 124GB of GPU VRAM (float16 weights take 2 bytes per parameter). With Device Management and AI Service Management software in place, this small cluster can be turned into an Enterprise GenAI System.
It's been a crazy year! For our last release of 2024, we shipped:
⚒️ 𝐌𝐮𝐥𝐭𝐢-𝐆𝐏𝐔 𝐖𝐨𝐫𝐤𝐞𝐫𝐬 You can now run workloads across multiple GPUs! This lets you run workloads that might not fit on a single GPU. For example, you could run a 13B parameter LLM on 2x A10Gs, which would normally only fit on a single A100-40.
⚡️ 𝐈𝐧𝐬𝐭𝐚𝐧𝐭𝐥𝐲 𝐖𝐚𝐫𝐦 𝐔𝐩 𝐂𝐨𝐧𝐭𝐚𝐢𝐧𝐞𝐫𝐬 We added a "Run Now" button to the dashboard to instantly invoke an app and warm up the container.
🚢 𝐈𝐦𝐩𝐨𝐫𝐭 𝐋𝐨𝐜𝐚𝐥 𝐃𝐨𝐜𝐤𝐞𝐫𝐟𝐢𝐥𝐞𝐬 We wanted to make it easier to use existing Docker images on Beam. You can now use a Dockerfile that you have locally to create your Beam image.
🔑 𝐏𝐚𝐬𝐬 𝐒𝐞𝐜𝐫𝐞𝐭𝐬 𝐭𝐨 𝐈𝐦𝐚𝐠𝐞 𝐁𝐮𝐢𝐥𝐝𝐬 You can now pass secrets into your image builds, useful for accessing private repos or running build steps that require credentials of some kind.
𝐀𝐧𝐝 𝐰𝐞'𝐯𝐞 𝐠𝐨𝐭 𝐬𝐨𝐦𝐞 𝐚𝐦𝐚𝐳𝐢𝐧𝐠 𝐧𝐞𝐰 𝐟𝐞𝐚𝐭𝐮𝐫𝐞𝐬 𝐜𝐨𝐦𝐢𝐧𝐠 𝐢𝐧 𝐉𝐚𝐧𝐮𝐚𝐫𝐲. It's been an exciting year, and we can't wait to ship more stuff for you in 2025. Happy New Year!
This post examines how different software components came together to enable LLM-as-judge evaluation without the need for expensive GPUs. All the components were chosen for their user control, open-source nature, and interoperability. https://lnkd.in/ePhve9n3
Passing GPUs into a Docker container can be challenging. We must install the NVIDIA Container Toolkit on the host system and ensure that our GPU and OS versions are compatible. After passing the GPUs into the container, the CUDA software should be installed inside it. Read this story from Sivanarayana Mamidi on Medium: https://lnkd.in/gwHJ3FxN
🎉 Thrilled to share that Container Runtime for Snowflake Notebooks is now in public preview! Snowflake is making it easier than ever before to build and deploy models while using distributed GPUs, all from a single platform. ❄️ Check out the blog to learn more - as always, link in the first comment 🔗