Lightning AI

Software Development

New York, NY 91,394 followers

The AI development platform - From idea to AI, Lightning fast ⚡️. Creators of AI Studio, PyTorch Lightning, and more.

About us

The AI development platform - From idea to AI, Lightning fast ⚡️. Code together, prototype, train on GPUs, scale, and serve - all from your browser, with zero setup. AI Studio is your laptop in the cloud: zero setup, always ready, with persistent storage and environments. Code on CPU, debug on GPU, and scale to multi-node. Run sweeps, jobs, and more. Scale models with PyTorch Lightning, Fabric, LitGPT, TorchMetrics, and more.

Website
http://lightning.ai
Industry
Software Development
Company size
11-50 employees
Headquarters
New York, NY
Type
Privately Held
Founded
2019
Specialties
Artificial Intelligence, Machine Learning, Infrastructure, Deep Learning, Data Science, and Open Source

Updates

  • 🚀 Don't miss Luca Antiga, CTO of Lightning AI, in his #GTC25 speaking session: Train and Serve AI Systems Fast With the Lightning AI Open-Source Stack. Learn about Lightning AI’s high-performance set of libraries for training, fine-tuning, and deploying AI systems, which builds upon and extends the PyTorch ecosystem, and discover new features like multi-dimensional parallelism with DTensors and quantization with torchao. ⚡️ 11:20 AM - 11:35 AM PDT ⚡️ San Jose Convention Center Grand Ballroom Theater (L2) https://lnkd.in/ekeKi4gi
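    The session highlights quantization with torchao. As a quick, hedged illustration of that feature, here is a minimal sketch of post-training int8 weight-only quantization; it assumes `pip install torch torchao`, and the toy model is ours, not from the session:

    ```python
    # Hedged sketch: int8 weight-only quantization with torchao.
    # The toy model below is illustrative, not from the GTC session.
    import torch
    from torchao.quantization import quantize_, int8_weight_only

    model = torch.nn.Sequential(
        torch.nn.Linear(512, 512),
        torch.nn.ReLU(),
        torch.nn.Linear(512, 512),
    ).eval()

    # Swap the Linear weights for int8-quantized versions, in place.
    quantize_(model, int8_weight_only())

    with torch.inference_mode():
        out = model(torch.randn(1, 512))
    print(out.shape)  # torch.Size([1, 512])
    ```

    Weight-only int8 keeps activations in full precision, so it mainly cuts weight memory and bandwidth rather than changing the compute path.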

  • Lightning AI reposted this

    📣 Just announced at #GTC25: new and exclusive cloud partner benefits are now available to #NVIDIAInception and #NVIDIAConnect members: https://nvda.ws/4kBRYLi ☁️ Enhanced cloud benefit offers ✅ Simplified process. Explore these new offers from our partners: Nebius, Lambda, Scaleway, Yotta, Lintasarta, Weights & Biases, Lightning AI, BRIA AI, Beamr, and more.

  • Lightning AI reposted this

    We are super excited to work with Lightning AI as we further advance Sentinal4D's AI-powered 3D imaging platform! ⚡ With Lightning AI, we're accelerating our deep learning workflows on the cloud, handling large-scale 3D and 4D microscopy data efficiently, and deploying cutting-edge 3D foundation models for drug discovery and precision medicine. At Sentinal4D, our mission is to learn how cells respond to treatments in true 3D by leveraging AI to decode drug mechanisms, identify off-target effects, and enhance preclinical decision-making. Grateful to work alongside innovators like Lightning AI to push the boundaries of precision medicine! #AI #3DImaging #DrugDiscovery #PrecisionMedicine #Oncology

  • 🚀 Join Luca Antiga & Thomas Viehmann for their talk, Make My PyTorch Model Fast, and Show Me How You Did It, TODAY at #GTC25! Learn how Thunder, a PyTorch-to-Python compiler, optimizes PyTorch models without changing your code! ⚡️ 11:00 AM - 11:40 AM PDT ⚡️ San Jose Convention Center 210F (L2)
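    As a hedged illustration of the talk's premise, here is a minimal Thunder sketch; it assumes `pip install lightning-thunder`, and the toy model is ours, not from the talk:

    ```python
    # Hedged sketch: compiling a PyTorch module with Thunder.
    # Assumes `pip install lightning-thunder`; the toy model is illustrative.
    import torch
    import thunder

    model = torch.nn.Sequential(
        torch.nn.Linear(256, 256),
        torch.nn.GELU(),
        torch.nn.Linear(256, 256),
    )

    # Compile the module without changing its source code.
    compiled = thunder.jit(model)

    out = compiled(torch.randn(4, 256))

    # "Show me how you did it": print the final execution trace.
    print(thunder.last_traces(compiled)[-1])
    ```

    Printing the last trace is the "show me how you did it" part: Thunder exposes the transformed Python program it actually executes.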

  • Lightning AI reposted this

    Alex Razvant

    Senior AI/ML Engineer | Author @NeuralBits | Sharing expert insights on E2E ML Systems.

    If you're still using FastAPI to deploy Hugging Face LLMs/VLMs - switch to LitAPI! FastAPI is a great framework for implementing RESTful APIs, but it wasn't specifically designed to handle the complex requirements of serving ML models at scale. The team at Lightning AI built LitServe and LitAPI to fill that gap.
    🔹 LitAPI builds on top of FastAPI, adapts it for ML workloads, and standardizes the core steps of serving a model.
    🔹 LitServer handles the infrastructure side of serving models.
    🔸 Here's what you must know:
    1. One-time model setup: in the setup() method, load the model only once.
    2. Customize predict: in the predict() method, implement the inference logic on the inputs.
    3. Customize batching logic: specify a max_batch_size and a batch_timeout, and LitServe automatically handles dynamic batching of requests as they come in concurrently. You can use a ThreadPoolExecutor to parallelize the preprocessing steps in the batch() method.
    4. Customize unbatching logic: after running inference on a batch, detach() the GPU tensors and post-process the raw logits in the unbatch() method.
    5. Decode requests and encode responses: in decode_request(), specify how the API should access the input value from the request; in encode_response(), specify how the API should return responses to the client.
    Simple as that! To scale this up for a production workload, use LitServer's scale configuration parameters (a worked example follows below):
    ```
    LitServer(
        lit_api: LitAPI,
        accelerator: str = "auto",
        devices: Union[str, int] = "auto",
        workers_per_device: int = 1,
        timeout: Union[float, bool] = 30,
        max_batch_size: int = 1,
        batch_timeout: float = 0.0,
        stream: bool = False,
    )
    ```
    📙 For a full tutorial, see this article: https://lnkd.in/dGUrVX7s
    #machinelearning #deeplearning #artificialintelligence
    💡 Follow me for more expert insights on AI/ML Engineering
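    To make the five steps concrete, here is a minimal, hedged sketch of the pattern; the sentiment-analysis model and the class name SimpleLitAPI are illustrative, not from the post, and it assumes `pip install litserve transformers`:

    ```python
    # Hedged sketch: a minimal LitAPI/LitServer setup following steps 1-5.
    # The model and class name are illustrative, not from the post.
    import litserve as ls
    from transformers import pipeline

    class SimpleLitAPI(ls.LitAPI):
        def setup(self, device):
            # 1. One-time model setup: load the model once per worker.
            self.model = pipeline("sentiment-analysis", device=device)

        def decode_request(self, request):
            # 5a. How the API reads the input from the request payload.
            return request["text"]

        def predict(self, texts):
            # 2. Inference logic; with batching enabled, `texts` is a list
            #    assembled by LitServe's default batch()/unbatch() handling.
            return self.model(texts)

        def encode_response(self, output):
            # 5b. How the API returns each response to the client.
            return {"label": output["label"], "score": output["score"]}

    if __name__ == "__main__":
        # 3. max_batch_size/batch_timeout enable dynamic request batching.
        server = ls.LitServer(SimpleLitAPI(), accelerator="auto",
                              max_batch_size=4, batch_timeout=0.05)
        server.run(port=8000)
    ```

    Once running, a client can POST JSON like {"text": "Lightning fast!"} to http://localhost:8000/predict.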

  • Unlock the power of Fully Sharded Data Parallel (FSDP) & Thunder! ⚡ FSDP is already powerful, but tweaking its bucketing logic to better align with hardware speeds can unlock even more efficiency. Faster materialization of tensors on high-performance machines means reduced overhead and improved scalability. Watch the full Thunder session with Luca Antiga and Thomas Viehmann ➡️ https://lnkd.in/eeJRXuYD
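    For readers new to FSDP, here is a minimal, hedged sketch of wrapping a toy model; it assumes a `torchrun --nproc_per_node=<gpus>` launch with one CUDA device per process, and the model is illustrative:

    ```python
    # Hedged sketch: wrapping a toy model in PyTorch FSDP.
    # Assumes launch via `torchrun --nproc_per_node=<gpus> train.py`
    # with one CUDA device per process; the model is illustrative.
    import torch
    import torch.distributed as dist
    from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

    dist.init_process_group("nccl")
    torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

    model = torch.nn.Sequential(
        torch.nn.Linear(1024, 4096),
        torch.nn.ReLU(),
        torch.nn.Linear(4096, 1024),
    ).cuda()

    # FSDP shards parameters, gradients, and optimizer state across ranks,
    # gathering full parameters only around each layer's compute.
    model = FSDP(model)

    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    loss = model(torch.randn(8, 1024, device="cuda")).sum()
    loss.backward()
    optimizer.step()

    dist.destroy_process_group()
    ```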
