Join the Helmholtz GPU Hackathon 2025! Together with NVIDIA and OpenACC, the Helmholtz-Gemeinschaft (supported by the Helmholtz Information & Data Science Academy) will host a GPU Hackathon from 1st-11th April 2025 at Forschungszentrum Jülich. This hybrid event is designed for researchers aiming to port or optimize their HPC or HPC+AI applications for GPUs. We will utilize our cutting-edge JUPITER hardware!

Highlights:
🤝 Expert mentoring for participants
⚡️ Exascale hardware
🚀 Performance gains and speedups

More info & registration: https://lnkd.in/eD-Ud3gF
Jülich Supercomputing Centre (JSC)’s Post
Look at the speed of a GPU. This is the kind of chip used for training ML models, which can have millions of parameters.
The Mythbusters, Adam Savage and Jamie Hyneman, demonstrate the power of GPU computing at an NVIDIA conference, giving an intuitive picture of how massively parallel processing works.
🚀 Exciting news! Check out the paper "A Preliminary Study on Accelerating Simulation Optimization with GPU Implementation," recently posted on arXiv. It presents a preliminary study on using GPUs to accelerate computation for simulation optimization tasks, highlighting the computational advantages of parallel processing for large-scale matrix and vector operations. Don't miss the opportunity to learn about the potential of GPU implementation for simulation optimization problems. Read the full paper here: https://bit.ly/4d0GrRT
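The speedups described above come from batching a simulation's linear-algebra work so that thousands of replications run in parallel. Here is a minimal sketch of that idea with a hypothetical toy objective (not from the paper), using NumPy on the CPU; on a GPU, the same array code could run via an array library such as CuPy or JAX:

```python
import numpy as np

def simulate_batch(theta, n_reps, rng):
    """Evaluate a toy stochastic objective f(theta) = ||A @ theta||^2
    for many replications at once, one random matrix A per replication.
    The batched matmul is exactly the kind of large matrix/vector work
    that a GPU parallelizes well."""
    d = theta.shape[0]
    A = rng.standard_normal((n_reps, d, d))  # (n_reps, d, d) batch of matrices
    y = A @ theta                            # batched matrix-vector products -> (n_reps, d)
    return (y ** 2).sum(axis=1)              # per-replication objective values

rng = np.random.default_rng(0)
theta = np.ones(8)
vals = simulate_batch(theta, n_reps=10_000, rng=rng)
estimate = vals.mean()                       # sample-average approximation of E[f(theta)]
```

One vectorized call replaces a Python loop over 10,000 replications; the same pattern is what a GPU implementation would distribute across thousands of cores.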
Interested in running on systems like Aurora with Intel GPUs? Read about our efforts to port HARVEY efficiently. Aristotle Martin is leading our work to increase HARVEY's portability while ensuring efficient performance. Intel Corporation's porting tools made the process fairly straightforward. Check it out in the recent case study.
As part of the ALCF's Aurora Early Science Program, Amanda Randles and Aristotle Martin of Duke University Pratt School of Engineering and collaborators are working to ready HARVEY—a massively parallel CFD code for blood flow simulations—for Argonne National Laboratory's Aurora #exascale supercomputer. In their initial efforts, the team has ported the original HARVEY code from CUDA to Data Parallel C++, achieving a more than 10x performance improvement on the Intel GPUs that power Aurora. To learn about the team's code development efforts, see this case study from Intel Corporation: https://lnkd.in/gM9zbD6T
Harness the branching behaviour of mid-circuit measurements (MCMs) to create dynamic circuits 🌱 From teleportation to measurement-based quantum computing, MCMs plant the seed for a forest of possibilities. Learn more in part 1 of our demo series 👇 https://lnkd.in/ghAaGhuC
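Teleportation is the canonical example of this branching: two mid-circuit measurements decide which correction gates are applied afterwards. Below is a minimal statevector sketch in plain NumPy (not the demo's framework, and independent of it) that teleports an arbitrary single-qubit state using MCM outcomes to control later gates:

```python
import numpy as np

# Single-qubit gates
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def apply(gate, qubit, state, n=3):
    """Apply a 1-qubit gate to `qubit` (0 = leftmost) of an n-qubit state."""
    ops = [I2] * n
    ops[qubit] = gate
    full = ops[0]
    for op in ops[1:]:
        full = np.kron(full, op)
    return full @ state

def cnot(control, target, state, n=3):
    """CNOT: swap amplitudes of basis states that differ in the target bit,
    restricted to states where the control bit is 1."""
    new = state.copy()
    for i in range(2 ** n):
        if (i >> (n - 1 - control)) & 1:
            new[i] = state[i ^ (1 << (n - 1 - target))]
    return new

def measure(qubit, state, rng, n=3):
    """Mid-circuit measurement: sample an outcome, collapse, renormalize."""
    mask = 1 << (n - 1 - qubit)
    p1 = sum(abs(state[i]) ** 2 for i in range(2 ** n) if i & mask)
    outcome = 1 if rng.random() < p1 else 0
    for i in range(2 ** n):
        if ((i & mask) != 0) != (outcome == 1):
            state[i] = 0.0
    state /= np.linalg.norm(state)
    return outcome, state

rng = np.random.default_rng(1)
alpha, beta = 0.6, 0.8j                 # arbitrary state to teleport on qubit 0
psi = np.zeros(8, dtype=complex)
psi[0b000], psi[0b100] = alpha, beta

psi = apply(H, 1, psi); psi = cnot(1, 2, psi)   # Bell pair on qubits 1, 2
psi = cnot(0, 1, psi); psi = apply(H, 0, psi)   # Bell-basis rotation
m0, psi = measure(0, psi, rng)                  # mid-circuit measurements...
m1, psi = measure(1, psi, rng)
if m1: psi = apply(X, 2, psi)                   # ...drive classically
if m0: psi = apply(Z, 2, psi)                   # controlled corrections
# Qubit 2 now holds alpha|0> + beta|1>, whatever (m0, m1) came out.
```

The two `if` statements are the "dynamic circuit": which gates run depends on measurement outcomes obtained mid-circuit.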
Do you know how to use photonics for neuromorphic computing? Check out the work of Amir Handelman, presented at #NOD2024, EIC Bayflex
We analyze in unprecedented detail the dynamics of a laser diode subject to both optical injection and optical feedback. This configuration is commonly used in computing and sensing, yet little was known so far about the underlying nonlinear dynamics. With Lucas O. https://lnkd.in/evUC7tfz
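The standard starting point for such systems is a Lang-Kobayashi-type rate-equation model, where delayed feedback and injection enter as extra driving terms on the field. Below is a minimal Euler-integration sketch with generic textbook-scale parameter values; these are illustrative assumptions, not the model or parameters of the paper above:

```python
import numpy as np

# Lang-Kobayashi-style rate equations: complex field E, carrier number N,
# with a delayed-feedback term kappa*E(t - tau) and an injection term.
alpha = 3.0              # linewidth enhancement factor
kappa = 2e9              # feedback rate (1/s)
tau = 1e-9               # external-cavity round-trip delay (s)
eta = 2e9                # injection rate (1/s), unit injected field assumed
delta = 2 * np.pi * 2e9  # injection detuning (rad/s)
gn, N0 = 1e4, 1.5e8      # differential gain (1/s per carrier), transparency
tau_p, tau_n = 2e-12, 2e-9   # photon and carrier lifetimes (s)
s = 1e-6                 # gain compression factor
J = 1.2 * (N0 / tau_n + 1 / (gn * tau_p * tau_n))  # pump, 20% above threshold

dt = 1e-13
n_delay = int(round(tau / dt))
steps = 40_000
E_hist = np.full(n_delay, 1e-3 + 0j)   # ring buffer holding the delayed field
E, N = 1e-3 + 0j, N0
power = np.empty(steps)
for k in range(steps):
    G = gn * (N - N0) / (1 + s * abs(E) ** 2)   # saturable modal gain
    E_del = E_hist[k % n_delay]                  # E(t - tau) from the buffer
    dE = (0.5 * (1 + 1j * alpha) * (G - 1 / tau_p) * E
          + kappa * E_del
          + eta * np.exp(1j * delta * k * dt))
    dN = J - N / tau_n - G * abs(E) ** 2
    E_hist[k % n_delay] = E                      # store current field, then step
    E += dt * dE
    N += dt * dN
    power[k] = abs(E) ** 2                       # optical intensity trace
```

Scanning `kappa`, `eta`, and `delta` in a model of this family is how one maps out the injection-plus-feedback dynamics the post refers to.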
Excited to share our newly accepted paper in Computers and Geotechnics on the Material Point Method (#MPM) by Hao Chen and Shiwei Zhao for simulating complex #granular flows! Our study introduces a sparse-memory-encoding framework that overcomes efficiency limitations in large-scale simulations caused by GPU computing's reliance on contiguous memory layouts. We present a novel atomic-free dual mapping algorithm and an efficient memory shift algorithm that optimize memory usage for material properties. The framework seamlessly integrates various material models and accommodates diverse boundary conditions, enabling effective and efficient modeling of large-scale real-world problems such as landslides.

Here is a GPU-MPM simulation of the Baige Landslide, which occurred in Tibet, China, in 2018: a debris flow of 27.5 million cubic meters entered the Jinsha River, cutting it off and forming a gigantic barrier lake that later breached. The simulation used 4.1 million MPM points and 100,000 time steps and ran in 46 minutes on a desktop server with an RTX 4070 Ti.

Check out the preprint: https://lnkd.in/gGbhb9aY
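To give a flavor of the idea (not the paper's actual algorithm): mapping a sparse set of active grid blocks into a compact memory pool without atomic counters is classically done with a prefix sum, where each block computes its own compact slot from the scan result. A hypothetical NumPy sketch of such a forward/inverse ("dual") index mapping:

```python
import numpy as np

def dual_mapping(active):
    """active: boolean array over the full (sparse) grid-block domain.
    Returns (to_compact, to_sparse): forward and inverse index maps.
    The exclusive prefix sum assigns each active block a unique compact
    slot deterministically, so no atomic increments are needed."""
    flags = active.astype(np.int64)
    slots = np.cumsum(flags) - flags              # exclusive prefix sum
    to_compact = np.where(active, slots, -1)      # sparse id -> compact id (-1 = inactive)
    to_sparse = np.flatnonzero(active)            # compact id -> sparse id
    return to_compact, to_sparse

# Toy grid: only blocks near material points are active
active = np.array([0, 1, 1, 0, 0, 1, 0, 1], dtype=bool)
to_compact, to_sparse = dual_mapping(active)
# Gather into the dense pool via the forward map; the inverse map
# scatters results back to the sparse domain.
pool = np.zeros(to_sparse.size)
pool[to_compact[to_sparse]] = 10.0 * to_sparse
```

On a GPU the prefix sum itself parallelizes (e.g. a scan kernel), which is what makes the pattern attractive for large sparse MPM grids.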
Today is day 11 of 12 Days of Simulation! 🎁 Are you struggling to run simulations efficiently and finding yourself compromising on model complexity due to hardware limitations? High-performance computing (HPC) could be your solution. In a world where time is invaluable, Ansys is your ally in revolutionizing simulation efficiency. Make capacity limitations a thing of the past and embrace a future where simulations run faster, capture finer details, and explore more complex physics effortlessly. Learn more! https://bit.ly/4fwdaia
Are hardware limitations forcing you to compromise on the complexity of your simulations? High-performance computing (HPC) might be the answer you're looking for. In today's fast-paced world, SimuTech Consulting is here to transform your simulation efficiency. Say goodbye to capacity constraints and welcome a future where simulations run faster, capture finer details, and effortlessly explore more complex physics. Discover more and take your simulations to the next level! Learn more about SimuTech's in-house capability to help YOU.
Baige (Tibet) Landslide simulated through MPM with GPU technology! GPU processing for the Material Point Method (MPM) can revolutionize landslide studies. Today, the commercial software we use relies on less suitable methods, such as the Finite Volume Method, for lack of better tools.