✨ New: Embed Datawrapper charts in Observable Framework data apps! Quickly embed Datawrapper charts as web components in Framework data apps using Datawrapper's script embed API. Or, write a 🐍 Python data loader right in your Framework project to generate a Datawrapper chart dynamically. Thanks to Ben Welsh for showing how it's done, and for writing up our newest technique example! Check it out:
Observable’s Post
-
🚨Blog post 🚨 FastAPI's asynchronous capabilities make it one of the best-performing Python web frameworks available, something that has contributed to its meteoric rise in popularity over the past few years. However, because of its use of asynchronous code, testing a FastAPI application can be more complex than testing a standard synchronous API. In this post, we'll explore how to use pytest and pytest-asyncio to write effective async tests for your application. https://lnkd.in/gMebT-5X
Fast and furious: async testing with FastAPI and pytest
weirdsheeplabs.com
-
We've been increasingly leaning on FastAPI when building APIs for clients. This post gives a high-level overview of how to set up async tests in a FastAPI application. Feel free to check out the demo repository (link in post) if it is of interest!
-
Keywords are the reserved words that make Python so powerful.
1. continue - Skips the rest of the current loop iteration and continues with the next iteration.
2. from - Used in import statements to specify the module a name should be imported from.
3. else - Executes a block of code if the preceding condition(s) in an if statement are false.
4. and - A logical operator that returns True if both operands are true.
5. break - Terminates the current loop (for loop or while loop).
6. elif - Stands for "else if"; used to check additional conditions after an initial if statement.
7. True - Boolean value representing truth, used in conditional expressions.
8. not - Logical operator that negates the truth value of an expression.
9. if - Used to execute a block of code if a condition is true.
10. while - Executes a block of code as long as a condition is true.
11. lambda - Creates an anonymous function (a function without a name).
12. except - Used in exception handling to catch and handle specific exceptions.
13. import - Imports a module into the current namespace.
14. def - Defines a function.
#PythonProgramming #Keywords #DataScience #DeveloperLife
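A small, self-contained sketch that exercises most of the keywords listed above in one place:

```python
# Illustrative tour of the keywords from the list above.
from math import sqrt              # from / import

def classify(n):                   # def
    if n < 0:                      # if
        return "negative"
    elif n == 0:                   # elif
        return "zero"
    else:                          # else
        return "positive"

double = lambda x: x * 2           # lambda: an anonymous function

results = []
i = 0
while i < 10:                      # while runs as long as the condition is True
    i += 1
    if i == 3:
        continue                   # continue: skip the rest of this iteration
    if i > 5:
        break                      # break: terminate the loop entirely
    if i % 2 == 1 and not i == 1:  # and / not combine and negate conditions
        results.append(double(i))

try:
    sqrt(-1)
except ValueError:                 # except: catch and handle the error
    results.append("caught")

# results is now [10, "caught"]: only i == 5 passed every check above.
```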
-
Just had a great conversation about how to build workflows with “callback URLs.” These are workflows that pause until someone clicks a URL (or submits a form, etc.), then resume with the new input. The distinguished engineer I was talking to was building them on AWS with Step Functions. To make callback URLs work, he needed to:
- Create a Step Functions callback task that generates a task token and publishes it to SQS.
- Create a Lambda that reads task tokens from SQS, generates the callback URLs, then sends them in an email.
- Create an API Gateway for the callback URLs.
- Create another Lambda, called from the API Gateway, that parses the task token out of the URL and submits it to the Step Function.
That’s four different services, and hundreds of lines of code, just to make callback URLs work! What we realized is that with DBOS, you can build the same workflow in <20 lines of code, all in one Python process. A DBOS workflow can construct a callback URL, send it out in an email, then durably wait until a notification arrives, and it only takes a three-line HTTP endpoint to serve the callback URLs and send notifications from them.
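The callback-URL pattern itself can be sketched in a few lines. This is a framework-agnostic toy version using plain asyncio, not the DBOS API: the workflow generates a token, "emails" a URL containing it, then pauses until the callback handler fires. (DBOS adds durability on top of this idea; the URL and function names here are hypothetical.)

```python
# Toy sketch of a pause-until-callback workflow, single process, stdlib only.
import asyncio
import secrets

pending: dict[str, asyncio.Future] = {}  # token -> paused workflow

async def approval_workflow() -> str:
    token = secrets.token_urlsafe(8)
    future = asyncio.get_running_loop().create_future()
    pending[token] = future
    # In a real app you'd email this URL (hypothetical example domain):
    print(f"Email sent: https://example.com/callback/{token}")
    decision = await future            # pause here until the callback arrives
    return f"resumed with {decision}"

def handle_callback(token: str, decision: str) -> None:
    # In a real app, this is the HTTP endpoint serving the callback URL.
    pending.pop(token).set_result(decision)

async def main() -> str:
    task = asyncio.create_task(approval_workflow())
    await asyncio.sleep(0)             # let the workflow start and register
    token = next(iter(pending))        # simulate the user clicking the link
    handle_callback(token, "approved")
    return await task
```

Running `asyncio.run(main())` prints the "email" and returns "resumed with approved". The durability DBOS provides (surviving process restarts mid-wait) is exactly what this in-memory version lacks.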
-
If you're an engineer building serverless backends, take 60 seconds to read this from Peter Kraft. Examples like this are why we do what we do. 👇
-
🚀 Pip vs Conda: What’s the Difference? 🤔
If you’ve worked with Python, you’ve probably encountered pip and conda, but when should you use one over the other? Let’s break it down!
1. Purpose
• Pip: Python’s package manager for installing libraries from PyPI. It handles Python packages only.
• Conda: A package, environment, and dependency manager, not limited to Python. It can install packages for other languages like R, as well as system-level dependencies.
2. Scope
• Pip: Great for Python-only projects. You’ll need external tools (like virtualenv) to manage environments.
• Conda: Perfect for data science, machine learning, or any project requiring non-Python libraries (e.g., NumPy or pandas with system dependencies).
3. Package Source
• Pip: Pulls packages exclusively from PyPI.
• Conda: Downloads packages from Conda channels (e.g., the Anaconda defaults or conda-forge), which often include precompiled binaries for easier installation.
4. Environment Management
• Pip: Requires additional tools like venv or virtualenv to manage isolated environments.
• Conda: Built-in environment management makes it easy to create isolated, reproducible environments.
5. Use Cases
• Pip: When you’re working on a lightweight project with pure-Python dependencies.
• Conda: When you need to manage diverse libraries or are working in the data science ecosystem.
Can I Use Both Together? Yes, but with caution! Conda manages its own environments, so mixing in pip-installed packages can sometimes lead to dependency conflicts. If you do use pip within a conda environment, run conda install pip first so the environment gets its own pip.
👉 Key Takeaway:
• Use pip for Python-specific projects.
• Use conda for complex, multi-language, or data-heavy projects.
Which one do you prefer using? Let me know in the comments!
#Python #DataScience #MachineLearning #Programming #Coding #PythonPackages #Conda #Pip #SoftwareDevelopment #DataEngineering #PythonTips #OpenSource #Developers #PythonCommunity #Tech
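The combined workflow might look like this (a sketch assuming conda is installed; the environment and package names are illustrative):

```shell
# Create an isolated conda environment with Python and pip available
conda create -n demo-env python=3.12 pip
conda activate demo-env

# Prefer conda packages first (precompiled binaries, system deps handled)
conda install numpy pandas

# ...then fall back to pip for PyPI-only packages
pip install some-pypi-only-package
```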
-
🚀 Generative AI for Automated ETL Code Generation 🚀 I’m excited to share a project I’ve been working on: a Flask web application that uses GPT-2 to generate Python ETL scripts from user descriptions! 🎉 Here’s how it works: you provide a simple description, and the app generates the corresponding ETL Python code for you. It’s easy to set up and run locally. Feel free to fork the repo and share feedback! 👉 GitHub Repository: https://lnkd.in/euGRyZxD #AI #ETL #Python #DataEngineering #MachineLearning #OpenSource #SaintPetersUniversity
GitHub - raavibharath/Generative-AI-for-Automated-ETL-Code-Generation: This project provides a Flask web application that uses a pre-trained GPT-2 model to generate ETL (Extract, Transform, Load) code based on user descriptions.
github.com
-
🌟 New Blog Post! 🌟 Discover how to implement table historization using Django DRF for efficient data tracking and compliance. This guide covers setup, serialization, and testing of historized models. #Django #DRF #TableHistorization #Python 🔗 Read here:
Django DRF and Table Historization - Syntera
https://www.syntera.ch/blog
-
🧞‍♂️ Ever wished you could share an interactive view of your work with a collaborator, instead of static screenshots and long descriptions? 🤓 Wouldn't it be nice to quickly review differences in binary files in the browser, like you do with code? ✨ Now you can, with XetHub custom views. Custom views let you create interactive views that live alongside the files in your repository, bringing instant context to opaque binary files whether you're browsing or comparing. Quickly create your first view using our built-in Python editor, or leverage templates from our view gallery for easy onboarding. Try it for yourself! https://lnkd.in/gViqEG4h
XetHub | Introducing Custom Views: From Binary Blobs to Shared Insights
about.xethub.com
-
Released version 1.0.0 of md5sift! 🚀 md5sift is a CLI tool written in Python designed to generate checksum reports for files across local directories or network shares. It offers filtering by file extensions or predefined file lists and produces reports in CSV format. https://lnkd.in/gGA-a9bU
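The core idea behind a checksum report like this boils down to a few lines of standard-library Python: walk a directory, hash each matching file, and emit CSV rows. This is an illustration of the concept, not md5sift's actual implementation:

```python
# Sketch of a checksum-report generator: walk a tree, MD5 each file that
# matches the extension filter, return a CSV report. Names are illustrative.
import csv
import hashlib
import io
import os

def md5_of(path: str, chunk_size: int = 65536) -> str:
    """Hash a file in chunks so large files don't need to fit in memory."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def checksum_report(root: str, extensions: tuple[str, ...] = ()) -> str:
    """Return a CSV report (path, md5) for files under root, optionally
    filtered by extension, mirroring md5sift's filtering option."""
    buffer = io.StringIO()
    writer = csv.writer(buffer)
    writer.writerow(["path", "md5"])
    for dirpath, _, filenames in os.walk(root):
        for name in sorted(filenames):
            if extensions and not name.endswith(extensions):
                continue
            path = os.path.join(dirpath, name)
            writer.writerow([path, md5_of(path)])
    return buffer.getvalue()
```

The chunked read in `md5_of` is the standard trick for hashing files of arbitrary size with constant memory.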
GitHub - madebyjake/md5sift: Generate checksum reports (CSV) for files across local directories or network shares with filtering by file extensions or predefined file lists.
github.com