Our Chief Business Officer Roman Chernin is on stage at TechCrunch Disrupt right now, sharing in detail how Nebius is navigating the AI cloud provider race. If you’re nearby, join us at the Industry Stage! #Disrupt #Disrupt2024 #conferences #keynote
About us
Cloud platform specifically designed to train AI models
- Website
- https://nebius.ai
- Industry
- IT services and consulting
- Company size
- 201-500 employees
- Headquarters
- Amsterdam
- Type
- Public company
- Specialties
- IT
Producten
Nebius
CIEM (Cloud Infrastructure Entitlements Management) software
AI-centric cloud platform ready for intensive workloads
Locations
- Amsterdam, NL (Primary)
- Tel Aviv, IL
- Belgrade, RS
Nebius employees
Updates
-
🟢 TechCrunch Disrupt. We’re here. Our booth L21 stands out, much like our ambitions in the US and global AI markets, which are the focus of this conference. We brought a big team along:
- This Wednesday, our CBO Roman Chernin will give a talk covering the AI cloud race from behind the scenes. Come to the Industry Stage at 10:50 AM — you won’t want to miss it.
- Tom Blackwell, our Chief Communications Officer, is also here, overseeing how the Nebius brand performs in action.
- Anastasia Zemskova, Nebius’ CMO, is ready to talk about partnerships and marketing opportunities at scale.
- Shane Zide, our VP of Sales, can discuss infrastructure needs if you’re with a major US-based company.
- If, on the other hand, you’re not an AI company but still want to enhance your processes with AI, talk to GTM Lead Aleksey Golubitsky.
- Speaking of which, our Head of GTM Andrei Meganov is a seasoned strategist — ask him how to radically save on GPU solutions.
- Dylan Bristot is advancing Nebius AI Studio, which currently provides endpoints for the most popular models. If you’re an app builder and/or inferencing like crazy, he can lend a hand.
- Michael Talan, Enterprise Leader, is here for enterprises, period.
- Justin M. and Matthew Murphy, both Sales Executives, will welcome you with an introductory offer and a fine starter set of GPUs.
- Andrey Gorbunov, Head of Growth, has onboarded numerous companies to Nebius, both large and small. Feel free to connect with him regardless of where you stand.
- Leandro Salvador, Sales Manager, has also worked with some of our biggest clients. As for Victor Zhukov, he’s your go-to if you’re doing generative AI.
- Cloud Solutions Architect Team Leader Levon Sarkisian is less involved in the business side of things, which leaves him time to dive incredibly deep into our AI cloud and AI Studio.
- Anna Peshekhonova is the Head of Growth Marketing, so if our collaboration with you can bring growth to both parties, you should talk to Anna.
- And Alina Vasilchenkova is ML Community & Events Manager — if you’d like to connect with us on behalf of your community, you know what to do.

There are even more Nebius employees on-site — you can connect with the right people here on LinkedIn, at our booth or in the corridors of Disrupt. Something tells us this week is going to be huge. #Disrupt #Disrupt2024 #networking
-
🔥 H100 for just $1.5/h. Introducing Explorer Tier — special pricing for the first 1,000 hours.

To support your first steps in new AI projects, we’re launching the Explorer Tier: enjoy NVIDIA® H100 Tensor Core SXM GPUs at just $1.5 per hour for your first 1,000 GPU hours each month. Learn more and sign up: https://lnkd.in/dP_5aWiR #H100 #training #inference #specialoffers #discounts
-
There’s a clear trend: larger AI models, built with more data and parameters, deliver superior power, efficiency and accuracy compared to their smaller counterparts. But does that really mean every interested team should adopt them? Or can you achieve your goals with smaller alternatives? Today’s blog post will guide you through the differences, use cases and cost-benefit analysis of models, especially in the context of inference. You’ll also discover how to use Nebius AI Studio to select the ideal open-source model: https://lnkd.in/dCJrA_XG #LLMs #largemodels #smallmodels #comparison #inference
-
Our own data center in Finland is the asset we poured our hearts into, overseeing everything from how the building itself is structured, all the way to in-house server design. We were eager to invite Alex and show him behind the scenes. Check out the video to get an inside view of how we set things up.
Ever wondered where the future of AI is being built? I just visited the data centre in Finland that’s making it happen.

Nebius’ data centre is the powerhouse where AI models are trained. Thousands of GPUs working in unison. It’s expanding to host up to 60,000 GPUs dedicated to intensive AI workloads. They’re building a full-stack AI cloud platform. Here’s what I learned:

1. There is a scarcity of GPUs in the US
• Clusters are being sold in massive packages
• People with smaller requirements can’t find them

2. Nebius are building a self-serve platform
• It covers infrastructure requirements from a single GPU to big GPU clusters
• They’re not a GPU reseller — they’re designing the servers and the racks from the ground up

3. Applications
• Helped Mistral train their multimodal models
• Provide full-stack infrastructure for AI model development

Something else was unique about the visit. Nebius cools the servers in Finland using the outside air. The heat generated by the servers is then fed back into the grid. This means Nebius not only heats the onsite building, but also heats homes nearby, benefitting the local community. They’re able to recover 70% of the heat generated. And it’s the first facility in the world to have this heat reuse application connected to the local municipal grid.

They’re now investing over $1B in AI data centres in Europe. I feel the future of AI depends on infrastructure like this that balances performance with sustainability.

Follow me Alex Banks for daily AI highlights & insights.
-
Here’s how scalability works with our open-source K8s operator for Slurm. ↔️

ML development involves several stages, each needing a different level of computing power. Sometimes you need heavy-duty training; other times, just small experiments. Keeping (and paying for) the maximum hardware you might ever need is expensive and wasteful. If you’re feeling altruistic, here’s another reason: when you hog hardware resources you’re not using, other companies can’t access them, because the global supply is limited.

That’s why it’s very important to give users an easy way to scale their clusters based on current needs. That’s exactly what we did in Soperator, a Kubernetes operator that runs and manages Slurm clusters as K8s resources. This isn’t something we had to build specifically — we got it for free just by hosting Slurm in K8s. You can simply change a single value in the YAML manifest and watch your cluster grow or shrink. #opensource #Slurm #K8s #clusters #GitHub
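As a minimal sketch of what that single-value change could look like: the API group, resource kind and field path below are illustrative assumptions, not the operator’s actual schema — check the Soperator repository for the real manifest.

```yaml
# Hypothetical SlurmCluster manifest excerpt — field names are illustrative.
apiVersion: slurm.nebius.ai/v1
kind: SlurmCluster
metadata:
  name: my-slurm-cluster
spec:
  slurmNodes:
    worker:
      size: 16  # edit this one value to grow or shrink the cluster
```

Re-applying the updated manifest with `kubectl apply -f` would then let the operator reconcile the running cluster to the new node count — the standard Kubernetes pattern for declarative scaling.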
-
Quick reminder: you still have time to join today’s webinar with our own Nikita Vdovushkin. He’ll address the current challenges GenAI app builders face and unveil Nebius’ in-house inference infrastructure. Register here: https://lnkd.in/dspfBwhG #webinar #inference #GenAI #opensource #models
🌄 Beyond ChatGPT: unlocking the power of open-source LLMs. Register for the webinar: https://lnkd.in/dspfBwhG

As GenAI application builders face escalating costs and privacy concerns with proprietary models like ChatGPT, open-source alternatives such as Llama and Mistral are emerging as game-changers. These models offer comparable quality at a fraction of the cost, with enhanced privacy controls. In this webinar, Nebius’ team of experts will unveil our in-house inference infrastructure and share critical insights on navigating the token-as-a-service market.

You will learn:
- How to evaluate build vs. buy decisions for your GenAI infrastructure.
- Selecting the optimal model: balancing performance, cost and privacy.
- A step-by-step guide to migrating from third-party providers to open-source solutions.
- Key performance indicators for benchmarking inference providers.
- Overcoming technical challenges in implementing cost-effective inference infrastructure.

Who it’s for: GenAI builders, CTOs, product managers, technical co-founders, data scientists and related roles in AI development.

When: October 24, Thursday, 17:00 UTC+2 / 8:00 PST. We’ll finish around 18:00, after the Q&A.

Where: Zoom. You will receive the link after registration.

Register today: https://lnkd.in/dspfBwhG #webinars #LLMs #opensource #inference
-
We made it to Mumbai! 🇮🇳 Here at NVIDIA AI Summit India, our booth is surrounded by the company's partners from various sub-domains. So many people have come to the event, we’ve heard that attendance is in the tens of thousands. It’s at events like these that we truly see how wide-ranging our field is — and how much untapped AI potential remains in markets like India. We’re committed to expanding our presence in the country — right now, we’re just laying the groundwork. There is still time to say hi: reach out directly to Levon Sarkisian, Ranvish Vir or Nadya Saul-Kopievskaya. #conferences #summits #networking
-
Nebius reposted this
AI has been a prominent topic for several years, but what is its role in healthcare and in enhancing the quality of care and patient outcomes? AI has numerous opportunities to improve diagnoses, create personalised care pathways, and strengthen risk management strategies. We’re excited to welcome all innovators to our stand at the ICHOM conference in Amsterdam! Join us to explore the latest advancements in cloud technology that are transforming life sciences and healthcare. Stop by to connect, learn, and discover how we can help drive your organization forward. See you at ICHOM! #ICHOM #ICHOM2024 #ICHOM24 #HealthcareInnovation #Biotech #DigitalHealth #HealthTech #nebius #cloud #cloudcomputing #cloudai
-
Nebius reposted this
After opening our hub a little over a month ago, we finally had the chance to showcase the essence of the Eurasian Startup Hub: engaging with talented individuals at the earliest stages of their startup journey. This weekend we hosted our first hackathon in collaboration with London OpenDataScience [ods.ai]. We gathered a group of over 50 engineers, researchers and aspiring founders to sit down and spend 24 hours building innovative AI apps.

We saw some really impressive ideas brought to life, with 14 teams progressing to present in the finals. Congrats to the 3 winning teams (Deputy, fAIrytale, NotALawyer.ai) and everyone who took part in this amazing event! Special shoutout to our sponsors Nebius and Recraft, along with the mentors and judges who evaluated and supported participants along the way. It was amazing to witness how, in such a short timeframe, people managed to form teams, build new connections and produce such incredible products. If you’re planning to continue developing your hackathon ideas into full-scale startups, or are starting a new venture in general, our doors are always open!
-