Following our Burn 0.14 release, we have updated our tutorial to help you transition from PyTorch to Burn. Once implemented with Burn, your model is primed for fine-tuning and can be deployed anywhere—from the web using our Wgpu backend to embedded devices with NdArray. Enjoy the power, flexibility and speed of Rust, and deploy your models with confidence!
Tracel Technologies
Technology, Information and Internet
Québec, Quebec 246 followers
High-performance computing to bring intelligence everywhere
About us
Tracel is a tech company that develops an open-source deep learning framework, optimized for both training and inference, serving as the foundation for any intelligent system. To complement it, we are building a comprehensive software platform for automating machine learning operations. We help researchers and engineers bring their vision to life by building AI tools that enable faster iteration, better reliability, and improved performance. Our products are meant to bridge the gaps in the current Python-based ecosystem, whose well-known limitations hinder the progress of AI. It's time to democratize the development of AI, from low-level GPU programming to cloud model deployment, with a modular ecosystem.
- Website: https://tracel.ai/
- Industry: Technology, Information and Internet
- Company size: 2-10 employees
- Headquarters: Québec, Quebec
- Type: Privately Held
- Founded: 2023
- Specialties: Artificial Intelligence, Machine Learning, ML Ops, Deep Learning, GPGPU, and Rust
Locations
- Primary: 365 Rue Abraham-Martin, 600, Québec, Quebec G1K 8N1, CA
Updates
Introducing Building Blocks: A Deep Dive into the Burn Deep Learning Framework 🧱

In this new series of posts, we'll provide a detailed overview of the core components of our deep learning framework, Burn, to help you build your models. Today, we start at the source of any machine learning project: data.

Learn how to easily integrate your own dataset with Burn using our `Dataset` and `Batcher` traits, designed to make data loading straightforward and efficient. By implementing the trait for your dataset type, you can easily handle various modalities such as images, text, audio, or even video. Burn also provides lazy dataset transformations, streamlining the pre-processing workflow. Your dataset is used to access individual samples from the source, while our data loader efficiently gathers samples across multiple threads to form a batch.

Check out the full article: https://lnkd.in/egQdkunt
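As a sketch of the pattern, here is a plain-Rust mirror of the two traits. The trait shapes follow Burn's `Dataset` and `Batcher`, but the signatures are simplified, and `InMemoryDataset`, `PairBatcher`, and `make_batch` are hypothetical names made up for this example:

```rust
/// Simplified stand-in for Burn's `Dataset` trait: random access to samples.
trait Dataset<I>: Send + Sync {
    fn get(&self, index: usize) -> Option<I>;
    fn len(&self) -> usize;
}

/// Simplified stand-in for Burn's `Batcher` trait: turns samples into a batch.
trait Batcher<I, O>: Send + Sync {
    fn batch(&self, items: Vec<I>) -> O;
}

/// A toy in-memory dataset of (feature, label) pairs.
struct InMemoryDataset {
    items: Vec<(f32, u8)>,
}

impl Dataset<(f32, u8)> for InMemoryDataset {
    fn get(&self, index: usize) -> Option<(f32, u8)> {
        self.items.get(index).copied()
    }
    fn len(&self) -> usize {
        self.items.len()
    }
}

/// The batch type: parallel feature/label vectors, standing in for tensors.
struct PairBatch {
    features: Vec<f32>,
    labels: Vec<u8>,
}

struct PairBatcher;

impl Batcher<(f32, u8), PairBatch> for PairBatcher {
    fn batch(&self, items: Vec<(f32, u8)>) -> PairBatch {
        let (features, labels) = items.into_iter().unzip();
        PairBatch { features, labels }
    }
}

/// What a data loader does per batch: gather samples by index, then batch them.
fn make_batch(dataset: &impl Dataset<(f32, u8)>, indices: &[usize]) -> PairBatch {
    let items = indices.iter().filter_map(|&i| dataset.get(i)).collect();
    PairBatcher.batch(items)
}
```

The split matters because `get` stays cheap and per-sample (so it can run on many threads), while all batch-level work, such as tensor construction or padding, lives in the batcher.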
Burn 0.14.0 Released: First Fully Rust-Native Deep Learning Framework

We're pleased to announce the release of Burn 0.14.0, a significant update to our deep learning framework. This version establishes Burn as the first fully Rust-native framework in its field, allowing developers to work entirely in Rust, from GPU kernels to model definition.

Key highlights:
- Full Rust integration: no need for CUDA, C++, or WGSL
- New tensor data format: up to 4x faster serialization/deserialization, with quantization support (beta)
- Performance improvements and bug fixes
- New tensor operations and enhanced documentation

We truly appreciate the help from our growing community: over 50 contributors for this release alone. For more details, please see our release notes: https://lnkd.in/eVSFf8EF
Announcing CubeCL: Multi-Platform GPU Computing

Introducing CubeCL, a new project that modernizes GPU computing, making it easier to write optimal and portable kernels. CubeCL allows you to write GPU kernels using a subset of Rust syntax, with ongoing work to support more language features.

Why it matters: CubeCL tackles three major challenges in GPU computing:
- Portability: the same codebase can be used to program any GPU without a loss in performance.
- Usability: no need for a new shader language; simply add an attribute on top of your Rust code and voilà, it can now run on any GPU.
- Performance: we generate fine-grained kernel specializations via an innovative compile-time system to use the most efficient instructions available.

Our goal extends beyond providing an optimized compute language; we aim to develop an ecosystem for high-performance and scientific computing in Rust.
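The compile-time specialization idea can be illustrated on the CPU with plain Rust const generics; this is only an analogy for the concept, not CubeCL's API, and `scale_tiled` is a made-up example:

```rust
/// A CPU analogy for compile-time kernel specialization: the tile size is a
/// const generic, so the compiler emits one specialized version per `TILE`.
fn scale_tiled<const TILE: usize>(data: &mut [f32], factor: f32) {
    let mut chunks = data.chunks_exact_mut(TILE);
    for chunk in &mut chunks {
        // Here the chunk length equals TILE, a compile-time constant, so the
        // compiler can fully unroll this loop in each specialization.
        for i in 0..TILE {
            chunk[i] *= factor;
        }
    }
    // Handle the tail elements that don't fill a whole tile.
    for x in chunks.into_remainder() {
        *x *= factor;
    }
}
```

Calling `scale_tiled::<4>` and `scale_tiled::<8>` produces two distinct monomorphized functions, which is the same mechanism, at a much smaller scale, that lets a compile-time system pick the most efficient instructions per target.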
Burn 0.13.0 Released!

This is a huge release with tons of improvements and new features! 🔥

Lots of work has been done in the autodiff system, where gradient checkpointing is now supported. It allows recomputing the forward pass of some operations instead of saving their results. Not only can this save a lot of memory during training, it also composes gracefully with kernel fusion during the backward pass.

This release also introduces the new burn-jit project, which makes it possible to create new backends that can be compiled to any GPU shader language while automatically supporting all our optimizations. We ported the WGPU backend to this new representation, and new targets should be coming soon. Stay tuned for the next releases.

We also put a lot of care into improving the user APIs. You no longer need to implement both init and init_with methods for optimized parameter initialization, since initialization is now lazy. In addition, it's now easier to switch between backends and precision types at runtime using the new backend bridge. These improvements were based on community feedback, and we are committed to continuously improving the APIs.

Our 0.13.0 release aggregates contributions from 29 community members, setting a new record. We're very happy to witness the growing liveliness of the community!

Release Notes: https://lnkd.in/eqaTMqUM
Release v0.13.0 · tracel-ai/burn
github.com
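The gradient-checkpointing trade-off described above can be sketched in a few lines of plain Rust. This only illustrates the idea, not Burn's autodiff implementation; `Saved` and `forward_checkpointed` are hypothetical names for this example:

```rust
/// The checkpointing trade-off: instead of keeping an intermediate activation
/// in memory for the backward pass, keep a closure that recomputes it.
enum Saved<T> {
    /// Classic autodiff: store the forward result.
    Stored(T),
    /// Checkpointed: store only what is needed to recompute it on demand.
    Recompute(Box<dyn Fn() -> T>),
}

impl<T: Clone> Saved<T> {
    /// The backward pass asks for the activation; recompute it if needed.
    fn value(&self) -> T {
        match self {
            Saved::Stored(v) => v.clone(),
            Saved::Recompute(f) => f(),
        }
    }
}

/// A forward step that checkpoints: the (large) activation `input.exp()` is
/// not kept, only the (small) input captured by the closure.
fn forward_checkpointed(input: f64) -> Saved<f64> {
    Saved::Recompute(Box::new(move || input.exp()))
}
```

Memory is traded for a second forward computation, and because the recomputation happens during the backward pass, it is exactly the kind of work that can be fused with backward kernels.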
Tracel Technologies reposted this
Two of the core pillars of Burn are its performance and portability. Acknowledging that optimization and hardware support are an ongoing process, we established a feedback loop to track enhancements across all backends. Hence, we're excited to announce the release of the community backend comparison benchmarks today. This platform enables you to explore aggregated results and encourages anyone to anonymously share benchmarks conducted on their hardware. This not only enables us to track performance improvements with each new release but also empowers the community to do the same.

Speaking of releases, we've recently completed a significant WGPU refactoring, shifting the backend to a just-in-time, shader-agnostic GPU runtime. Consequently, we'll be rolling out new GPU backends in the coming months, with support for specific hardware instructions like Tensor Cores. Since all kernels are written in a new intermediate representation, enhancements will carry over to all JIT backends.

While our focus has been on the architectural structure of Burn rather than individual kernels, we anticipate significant performance enhancements in the future. Equipped with tools like loop unrolling, shape specialization, autotuning, gradient checkpointing and operation fusion, we're better positioned than ever to make GPUs go brrrrrrr.

You can explore the community benchmarks on the website: https://lnkd.in/eYmbzQBa
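The autotuning idea mentioned above can be sketched in plain Rust: time each candidate implementation on a sample input and keep the fastest. This is a toy CPU illustration, not Burn's autotune machinery; `autotune`, `double_loop`, and `double_iter` are made-up names:

```rust
use std::time::Instant;

/// Pick the fastest candidate "kernel" by timing each one on a sample input.
/// Real autotuners cache this choice per input shape and device.
fn autotune<T: Clone>(
    candidates: Vec<(&'static str, fn(&mut Vec<T>))>,
    sample: &[T],
) -> (&'static str, fn(&mut Vec<T>)) {
    candidates
        .into_iter()
        .min_by_key(|&(_, kernel)| {
            let mut input = sample.to_vec();
            let start = Instant::now();
            kernel(&mut input);
            start.elapsed()
        })
        .expect("at least one candidate")
}

/// Two interchangeable implementations of the same operation.
fn double_loop(v: &mut Vec<f32>) {
    for x in v.iter_mut() {
        *x *= 2.0;
    }
}

fn double_iter(v: &mut Vec<f32>) {
    v.iter_mut().for_each(|x| *x *= 2.0);
}
```

Whichever variant wins, the selected function computes the same result; the benchmark only decides which equivalent kernel to dispatch.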
Happy to share what we have been working on lately. Our new blog post explores Burn's tensor operation stream strategy, which optimizes models expressed through an eager API by creating custom kernels with fused operations. Our custom GELU experiment reveals a remarkable improvement of up to 78 times on our WGPU backend.

Post: https://lnkd.in/ehdEjF8y
Optimal Performance without Static Graphs by Fusing Tensor Operation Streams
burn.dev
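To see why fusion helps, compare an op-by-op GELU (tanh approximation) with a fused single-pass version. This plain-Rust sketch only illustrates the memory-traffic argument; the 78x figure above comes from fused GPU kernels, not from this toy:

```rust
/// Unfused: each elementwise op is a separate pass over memory, the way an
/// eager API would naively execute 0.5 * x * (1 + tanh(sqrt(2/pi) * (x + 0.044715 * x^3))).
fn gelu_unfused(x: &[f32]) -> Vec<f32> {
    let inner: Vec<f32> = x
        .iter()
        .map(|&v| 0.797_884_6 * (v + 0.044715 * v * v * v))
        .collect();
    let tanh: Vec<f32> = inner.iter().map(|&v| v.tanh()).collect();
    x.iter()
        .zip(&tanh)
        .map(|(&v, &t)| 0.5 * v * (1.0 + t))
        .collect()
}

/// Fused: the whole expression in one pass, as a single generated kernel
/// would compute it, with no intermediate buffers.
fn gelu_fused(x: &[f32]) -> Vec<f32> {
    x.iter()
        .map(|&v| {
            let inner = 0.797_884_6 * (v + 0.044715 * v * v * v);
            0.5 * v * (1.0 + inner.tanh())
        })
        .collect()
}
```

Both compute the same values; the fused version simply replaces three memory-bound passes (and two temporary buffers) with one, which is where fused kernels recover their speedup on GPUs.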
Burn has a brand new website! 🔥 We believed the previous one could be enhanced to better communicate Burn's specific features and showcase what the framework has to offer. With the new website, you can discover why we chose Rust to build Burn and explore its capabilities. We're excited about what will be built with it! Website: https://burn.dev
Community Driven
burn.dev
Tracel Technologies reposted this
A big thank you to our Gold sponsor: ✨ Tracel Technologies ✨

Tracel is a tech company that develops deep learning infrastructures, optimized for both training and inference. We help researchers and engineers bring their vision to life by building open-source tools that enable them to create reliable and efficient models with reduced development time. We want our work to play a crucial role in democratizing the future of artificial intelligence and its applications across various industries, while contributing to the advancement of the field.

#csgames #csgames2024 #montreal #quebec #canada #computerscience #computersciencecompetition #competition #cs