✨ New: Embed Datawrapper charts in Observable Framework data apps! Quickly embed Datawrapper charts as web components in Framework data apps using Datawrapper's script embed API. Or, write a 🐍 Python data loader right in your Framework project to generate a Datawrapper chart dynamically. Thanks to Ben Welsh for showing how it's done, and for writing up our newest technique example! Check it out:
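A rough idea of what such a Python data loader could look like (the file name, chart type, and response field are assumptions, and the Datawrapper v3 endpoints are from memory of the public API docs; Ben Welsh's actual example may differ):

    # chart.json.py  -- a Framework data loader: whatever it prints to stdout becomes chart.json
    import json, os, sys
    import requests  # assumed available in the loader's environment

    API = "https://api.datawrapper.de/v3"  # Datawrapper v3 API base (check the docs)
    HEADERS = {"Authorization": f"Bearer {os.environ['DATAWRAPPER_API_TOKEN']}"}

    # 1. create an empty chart (chart type id is illustrative)
    chart = requests.post(f"{API}/charts", headers=HEADERS,
                          json={"title": "Demo chart", "type": "d3-bars"}).json()
    chart_id = chart["id"]  # field name assumed

    # 2. upload CSV data generated by this script
    csv_body = "label,value\nA,3\nB,7\n"
    requests.put(f"{API}/charts/{chart_id}/data",
                 headers={**HEADERS, "Content-Type": "text/csv"}, data=csv_body)

    # 3. publish, then hand the chart id to the Framework page via stdout
    requests.post(f"{API}/charts/{chart_id}/publish", headers=HEADERS)
    json.dump({"chartId": chart_id}, sys.stdout)

The page can then read chart.json with FileAttachment and drop the chart id into the script embed.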
Observable’s Post
More Relevant Posts
-
Keywords are the reserved words that make Python so powerful.
1. continue - Skips the rest of the current loop iteration and moves on to the next iteration.
2. from - Used in import statements to specify the module a name should be imported from.
3. else - Executes a block of code when none of the preceding if/elif conditions are true.
4. and - A logical operator that returns True only if both operands are true.
5. break - Terminates the current loop (for or while).
6. elif - Short for "else if"; checks additional conditions after an initial if statement.
7. True - The Boolean value representing truth, used in conditional expressions.
8. not - A logical operator that negates the truth value of an expression.
9. if - Executes a block of code if a condition is true.
10. while - Executes a block of code as long as a condition remains true.
11. lambda - Creates an anonymous function (a function without a name).
12. except - Used in exception handling to catch and handle specific exceptions.
13. import - Imports a module into the current namespace.
14. def - Defines a function.
#PythonProgramming #Keywords #DataScience #DeveloperLife
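A tiny script that exercises most of these keywords in one place (purely illustrative):

    from math import sqrt          # 'from' and 'import'

    def classify(n):               # 'def'
        if n < 0 and n != -1:      # 'if', 'and'
            return "negative"
        elif not n:                # 'elif', 'not'
            return "zero"
        else:                      # 'else'
            return "positive"

    square = lambda x: x * x       # 'lambda'

    i = 0
    while True:                    # 'while', 'True'
        i += 1
        if i == 3:
            continue               # skip this iteration
        if i > 5:
            break                  # stop the loop
        try:
            print(i, classify(i), square(i), sqrt(i))
        except ValueError:         # 'except'
            pass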
-
I'm excited to share with you guys another FastAPI-related project 🚨 FastCRUD is a Python package for FastAPI, offering robust async CRUD operations with SQLAlchemy and flexible endpoint creation utilities 🚀
Github: https://lnkd.in/dqVSAfWm
Docs: https://lnkd.in/d6iHfXgy
To use it, just pip install fastcrud and follow the docs; you'll get plenty of easy-to-use, powerful database interaction methods and automatic endpoint creation utilities.
Features:
⚡️ Fully Async: Leverages Python's async capabilities for non-blocking database operations.
📚 SQLAlchemy 2.0: Works with the latest SQLAlchemy version for robust database interactions.
🦾 Powerful CRUD Functionality: Full suite of efficient CRUD operations with support for joins.
⚙️ Dynamic Query Building: Supports building complex queries dynamically, including filtering, sorting, and pagination.
🤝 Advanced Join Operations: Performs SQL joins with other models, with automatic join-condition detection.
📖 Built-in Offset Pagination: Comes with ready-to-use offset pagination.
➤ Cursor-based Pagination: Implements efficient pagination for large datasets, ideal for infinite-scrolling interfaces.
🤸♂️ Modular and Extensible: Designed for easy extension and customization to fit your requirements.
🛣️ Auto-generated Endpoints: Streamlines adding CRUD endpoints with custom dependencies and configurations.
Improvements are coming; issues and pull requests are more than welcome 🚧 https://lnkd.in/dqVSAfWm
Thanks Sebastián Ramírez Montaño for FastAPI! Also thanks again to Marcelo Trylesinski; this is another project based on his fastapi-microservices repo, this time building on the CRUDBase class 🙏
A minimal usage sketch follows the repo link below.
GitHub - igorbenav/fastcrud: FastCRUD is a Python package for FastAPI, offering robust async CRUD operations and flexible endpoint creation utilities.
github.com
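As promised, a minimal sketch of wiring up auto-generated endpoints. The model and schema names are made up, and the crud_router parameter names are recalled from the FastCRUD docs, so check them against the project README:

    from fastapi import FastAPI
    from fastcrud import crud_router  # auto-generates CRUD routes
    from sqlalchemy.ext.asyncio import create_async_engine, async_sessionmaker

    from myapp.models import Item                      # hypothetical SQLAlchemy model
    from myapp.schemas import ItemCreate, ItemUpdate   # hypothetical Pydantic schemas

    engine = create_async_engine("sqlite+aiosqlite:///./test.db")
    async_session = async_sessionmaker(engine, expire_on_commit=False)

    async def get_session():
        async with async_session() as session:
            yield session

    app = FastAPI()
    # one call wires up create/read/update/delete routes under /items
    app.include_router(
        crud_router(
            session=get_session,
            model=Item,
            create_schema=ItemCreate,
            update_schema=ItemUpdate,
            path="/items",
            tags=["Items"],
        )
    )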
-
MS Computer Science student @ George Washington University | Machine intelligence | Secure building, deployment, and monitoring of ML models | Information assurance and trustworthiness
The rate at which FastAPI is growing is awesome 😎. If you want to learn FastAPI, check out "High-Performance Web Apps with FastAPI", a book by Malhar Lathkar; it's very comprehensive, with code samples you can try as you go.
-
Hello Connections, I'm glad to share that I recently dived into web scraping with Python, using the BeautifulSoup and Requests libraries.
Web scraping is an automated process of extracting data from websites. It involves fetching a webpage's content with a tool like Python's requests library and parsing the HTML with a library like BeautifulSoup. The extracted data can be used for many purposes, including data analysis, research, and machine learning. Web scraping makes it possible to gather large amounts of data quickly and efficiently, which makes it a valuable skill in data-driven fields.
This Jupyter Notebook demonstrates basic web scraping using requests and BeautifulSoup. The process involves fetching a webpage's content via an HTTP GET request with requests.get(url), then parsing the HTML with BeautifulSoup. Key learnings include understanding web scraping basics, using requests to retrieve web content, and using BeautifulSoup to parse and inspect HTML. The project emphasizes ethical scraping: respecting websites' terms of service and avoiding server overload. It also highlights how this setup can be extended to extract specific data elements from web pages for applications such as data analysis and machine learning. This foundational knowledge provides a basis for more advanced web scraping and data extraction tasks. A minimal fetch-and-parse sketch follows the repo link below.
Website Link: https://lnkd.in/gHvPHgdA (this website is made specifically to help programmers practice web scraping)
https://lnkd.in/gnk8N9_V
GitHub - japendra88/BeautifulSoup-and-Requests-Python
github.com
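The basic fetch-and-parse pattern looks like this (the URL is a placeholder, not the practice site linked above; the notebook's actual selectors will differ):

    import requests
    from bs4 import BeautifulSoup

    url = "https://example.com"                        # hypothetical target
    response = requests.get(url, timeout=10)
    response.raise_for_status()

    soup = BeautifulSoup(response.text, "html.parser")
    print(soup.title.get_text(strip=True))             # page <title>
    for link in soup.find_all("a", href=True)[:10]:    # first few links on the page
        print(link["href"], "-", link.get_text(strip=True))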
-
Transform Your Data Workflow with XL2PY: Excel to Python Made Easy
Ever find yourself watching the clock as your Excel files slowly process? We've got just the solution for you. Introducing XL2PY, a revolutionary engine that converts Excel files into Python code in a flash.
Now supporting key Excel functions: AND, AVERAGE, AVERAGEIFS, IF, IFERROR, MAX, MAXIFS, MIN, MINIFS, OR, SUM, SUMIFS, VLOOKUP.
Send your files to theactuarialclub[@]gmail.com and get your hands on the Python code within 24 hours. We're keeping it simple: no multi-row formulas, please. Use 'XL2PY Demo' as the email subject; keeping the formulas in the first row is sufficient!
With XL2PY, you're not just speeding up data processing; you're setting a new standard for efficiency and precision in your projects. Whether you're deep into data analysis or crafting complex reports, XL2PY is your ally in navigating the challenges of data processing. Let's make those long waits a thing of the past and usher in a new era of productivity. Dive into the future of data processing with XL2PY. Your data, supercharged!
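The post doesn't show XL2PY's actual output, but as a rough illustration of what Excel-to-Python translation looks like, here is a pandas equivalent of a SUMIFS and a VLOOKUP (column names and data are made up):

    import pandas as pd

    df = pd.DataFrame({
        "region": ["East", "West", "East", "West"],
        "product": ["A", "A", "B", "B"],
        "sales": [100, 150, 200, 250],
    })

    # SUMIFS(sales, region, "East", product, "A") equivalent
    total = df.loc[(df["region"] == "East") & (df["product"] == "A"), "sales"].sum()

    # VLOOKUP(product, price_table, 2, FALSE) equivalent via a merge
    prices = pd.DataFrame({"product": ["A", "B"], "price": [9.5, 12.0]})
    df = df.merge(prices, on="product", how="left")

    print(total)
    print(df)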
-
Python Developer | DSA with Python | Full Stack Developer { Django | Django Channels | Django REST Framework | HTML | CSS | JavaScript | Bootstrap } + Data Visualization with Chart.js & Django
Hi Everyone, Today, in my #DRF journey, I learned about "ModelViewSet" and was amazed by its capabilities. With just two lines of code, it can perform complete CRUD operations. Additionally, I learned how to create API routers using "rest_framework.routers," which can handle all URL requirements on their own. Django Rest Framework is truly remarkable 💖. A small sketch of both is below the list.
#Question What is your favorite way to create APIs using #DRF?
Here are some of my repositories:
- Full Django Notes: [ https://lnkd.in/gmVSSK2Y ]
- Full Django Channels Notes: [ https://lnkd.in/g3abDvcH ]
- Complete DRF Notes: [ https://lnkd.in/epn6uPeV ]
- Data Visualization Notes: [ https://lnkd.in/g6p49SFT ]
- DSA Using Python: [ https://lnkd.in/gFY38p6A ]
- Login via Email: [ https://lnkd.in/gAiXVBNw ]
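Here is that sketch: a ModelViewSet plus a router, condensed into one snippet ("Student" and "myapp" are placeholder names):

    from rest_framework import serializers, viewsets
    from rest_framework.routers import DefaultRouter
    from myapp.models import Student   # hypothetical model

    class StudentSerializer(serializers.ModelSerializer):
        class Meta:
            model = Student
            fields = "__all__"

    # the "two lines" that give full CRUD
    class StudentViewSet(viewsets.ModelViewSet):
        queryset = Student.objects.all()
        serializer_class = StudentSerializer

    # the router builds list/detail URLs automatically
    router = DefaultRouter()
    router.register("students", StudentViewSet, basename="student")
    urlpatterns = router.urls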
-
Certified Scrum Product Owner® | Scrum Master® | Google Certified Project Manager | IBM® Data Science Professional | 12 Years delivering success across industries | Agile Project Leadership | Data-Driven Problem Solver
Tech and coding are my jam! 🎸 How Python generators saved my analytics pipeline 🆒
Ever struggled to analyze a massive dataset from an API? Traditional methods can get clunky and memory-intensive, especially with large responses. That's where Python generators came to my rescue! Generators allowed me to process the data in chunks, saving precious memory and streamlining my workflow. Here's a taste:

    import requests

    def fetch_api_data(url):
        response = requests.get(url)
        for item in response.json():
            yield item  # Generator magic: items are produced one at a time

    def analyze_data(data):
        # Your data analysis logic here; return the processed item
        return data

    for item in fetch_api_data("https://lnkd.in/ejcFqy-G"):
        analysis_result = analyze_data(item)
        # push_data is a placeholder for writing the result to your database/storage
        push_data(analysis_result)

Benefits of using generators:
- Memory efficiency: Generators process data one item at a time, significantly reducing memory usage compared to loading the entire dataset upfront.
- Scalability: Easily handle massive datasets without worrying about memory limitations.
- Simplicity: Clean and concise code compared to complex loops and data manipulation techniques.
-
📺 MEANWHILE: The devs at Dataiku ship a ton of features every release! Sometimes a few slip by the "news cycle" ... So here's a new segment to keep track of some that get overlooked by the "mainstream media" 🙃 https://lnkd.in/gVZhzBwB
For example, a release earlier this year added APIs to interact with the project's Git repository. We recently showed how to use them in a tutorial in the dev guide. Here's a quick demo; additional commentary is in the Closed Captions [CC]!
tl;dw: we use a simple Python script to create a working branch, make changes to the project, and push the changes to the remote. You can find the full tutorial here: https://lnkd.in/gRdVh3RZ
-
Data Analyst-Operations || Python, SQL, ETL Pipelines, Prefect || Transforming complex data into actionable insights through advanced analysis || Data Integration with Python and Smartsheet ||
Presenting my very first website crawler: the ShopClues crawler. It's built with Python, Selenium, and Beautiful Soup, and it crawls the site using the URL list defined in the code. You can try different URLs, but make sure they are from the ShopClues website only. Each URL has more than 5,000 items, whose details get scraped and dumped into a Google Sheet through the Google Sheets API. The data is then presented visually in Looker Studio. Additionally, it can be stored in a MySQL database so you can use it for your own research purposes. Comments are welcome. For any modifications or changes, raise issues and PRs on the repo and I will look into them.
Link for the Project: https://lnkd.in/g9d2EZM6
Link for the Dashboard: https://lnkd.in/g4P697kS
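The overall crawl, parse, and write-to-sheet flow looks roughly like this (the category URL, CSS selectors, credentials file, and sheet name are placeholders; the repo's actual code may differ):

    from selenium import webdriver
    from bs4 import BeautifulSoup
    import gspread

    driver = webdriver.Chrome()
    sheet = gspread.service_account(filename="creds.json").open("ShopClues data").sheet1

    urls = ["https://www.shopclues.com/mobiles-smartphones.html"]  # example category URL
    for url in urls:
        driver.get(url)
        soup = BeautifulSoup(driver.page_source, "html.parser")
        for block in soup.select("div.column"):        # placeholder CSS selector
            name = block.select_one("h2")
            price = block.select_one(".p_price")       # placeholder CSS selector
            if name and price:
                sheet.append_row([name.get_text(strip=True), price.get_text(strip=True)])
    driver.quit()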
-
Helping Small Businesses & Startups Grow with Custom Websites & Marketing Solutions | Delivering Results with Passion & Innovation | Founder of GARI TECH
Day 2/30 💻
For Day 2, I'm showcasing one of the projects I've been working on: an API that converts PDFs into images. It's simple but incredibly useful for anyone who needs to extract images from PDFs quickly, whether for presentations, reports, or design work. The best part is that it's built as a fully functional API. Here's a detailed look at how it works:
- Framework: I developed it using Flask, a lightweight Python web framework 🐍.
- Endpoint: The main API endpoint is `/convert` 🔄. You send a PDF file through the API, and it returns each page as a separate image file 🖼️.
- Bulk Processing: Instead of manually converting PDFs one by one, the API can handle large batches of PDFs at once, saving you a ton of time.
- Testing: I've tested everything using Postman to ensure smooth operation and fast response times 🛠️.
A rough sketch of what such an endpoint can look like is below. This project shows how a simple idea can make work more efficient and scalable. I've also included a screen recording showing the API in action! 🎥 👇
Feel free to comment below if you want to know more about the process or have any feedback. Let's connect and build cool things together! 💬
#Flask #Python #API #PostmanAPI
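Here is that sketch of a Flask /convert endpoint. It assumes the pdf2image package (which needs poppler installed); the original project's internals may differ:

    import io, zipfile
    from flask import Flask, request, send_file
    from pdf2image import convert_from_bytes

    app = Flask(__name__)

    @app.route("/convert", methods=["POST"])
    def convert():
        pdf = request.files["file"].read()          # PDF uploaded as multipart form data
        pages = convert_from_bytes(pdf, dpi=150)    # one PIL image per page

        buf = io.BytesIO()
        with zipfile.ZipFile(buf, "w") as zf:       # bundle all pages into one zip
            for i, page in enumerate(pages, start=1):
                img = io.BytesIO()
                page.save(img, format="PNG")
                zf.writestr(f"page_{i}.png", img.getvalue())
        buf.seek(0)
        return send_file(buf, mimetype="application/zip", download_name="pages.zip")

    if __name__ == "__main__":
        app.run(debug=True)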