Leveraging SWR for Efficient Data Fetching: A Deep Dive into Use Cases and Integration with ISR Architecture

In this article, we’ll delve into how to use SWR and explore how it can be integrated with Incremental Static Regeneration (ISR), a powerful feature provided by Next.js for static site generation. https://lnkd.in/dt5uSsc7

#publicissapient #publicisgroupe #nextjs #swr #frontenddevelopment #serversiderendering #clientsiderendering #serverside #clientside
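SWR takes its name from the stale-while-revalidate caching strategy: serve whatever is already cached immediately, refetch in the background, and update the cache for the next read. A minimal sketch of that idea in plain TypeScript (the cache shape and `fetcher` signature are illustrative, not SWR's actual internals):

```typescript
// Minimal stale-while-revalidate sketch (illustrative, not SWR's real internals).
type Fetcher<T> = (key: string) => Promise<T>;

class SWRCache<T> {
  private cache = new Map<string, T>();

  // Return the stale value immediately (if any) and kick off revalidation.
  get(key: string, fetcher: Fetcher<T>): { data: T | undefined; revalidating: Promise<T> } {
    const stale = this.cache.get(key);
    const revalidating = fetcher(key).then((fresh) => {
      this.cache.set(key, fresh); // update cache for the next reader
      return fresh;
    });
    return { data: stale, revalidating };
  }
}
```

The first read gets `undefined` (or a loading state in the real hook); every later read gets the cached value instantly while a fresh copy is fetched behind the scenes.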
Rakesh Kumar’s Post
More Relevant Posts
-
GET or POST for #machinelearning model endpoints?

➡ GET requests retrieve information from a server without modifying any resources on the server.
➡ POST requests are used for submitting HTML forms on web pages, and for complex API requests where the data being sent is not easily represented in a URL.

For #machinelearning endpoints, I see two scenarios:
➡ The user sends the model features as JSON.
➡ The user sends some identifier (like a customer ID), and the features are retrieved from the feature store.

I often see the first case defined as a POST request with the features in the request body, and the second as a GET request with the identifier in the URL, even though the same thing happens in the background. This causes a lot of confusion. Semantics-wise, both are retrievals, so both should be GET.

❗Even though both POST and GET technically allow a request body, a body on a GET request has no defined semantics and is discouraged, since intermediaries may ignore or reject it.

#restapi #softwareengineering #mlops
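The two scenarios above differ only in where the payload lives, not in what the call means. A hedged sketch (the endpoint paths and field names are made up for illustration): both styles ask the same model for a prediction, but JSON features don't fit comfortably in a URL, while an identifier does.

```typescript
// Two request shapes for the same read-only prediction call
// (paths and fields are illustrative, not a real API).
interface RequestSpec {
  method: "GET" | "POST";
  url: string;
  body?: string;
}

// Case 1: client sends the full feature payload. JSON doesn't fit nicely
// in a URL, so in practice this is usually modeled as POST with a body.
function predictFromFeatures(features: Record<string, number>): RequestSpec {
  return { method: "POST", url: "/predict", body: JSON.stringify(features) };
}

// Case 2: client sends an identifier; features come from the feature store.
// The identifier fits in the URL, so this is usually modeled as GET.
function predictFromId(customerId: string): RequestSpec {
  return { method: "GET", url: `/predict/${encodeURIComponent(customerId)}` };
}
```

Either way the server performs the same read-only lookup, which is the author's point: the HTTP semantics are identical even though the conventional verbs differ.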
-
Why is JSON-LD important for the development of Web3? ⬇️ Amongst other reasons, Fluree CEO Brian Platz emphasizes JSON-LD's ability to wrap data in RDF, a universal format that enables data to be interpreted and used outside of the database. Read more on the role of JSON-LD in the development of Web3 here: https://lnkd.in/ggJZavQ2
Why JSON-LD matters for Web3
cointelegraph.com
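For readers unfamiliar with JSON-LD, the core idea is that an `@context` maps plain JSON keys onto globally unique vocabulary terms, so the same document is both ordinary JSON and machine-interpretable RDF. A minimal illustrative document (using the schema.org vocabulary; the values are drawn from the post above):

```typescript
// A JSON-LD document: ordinary JSON plus an @context that maps keys
// to schema.org terms, making the data portable linked data (RDF).
const person = {
  "@context": "https://schema.org",
  "@type": "Person",
  name: "Brian Platz",
  jobTitle: "CEO",
  worksFor: { "@type": "Organization", name: "Fluree" },
};
```

Any consumer that understands the schema.org context can interpret `name` and `worksFor` identically, regardless of which database the document came from.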
-
"RTK Query", an add-on built on top of Redux Toolkit, takes away the complicated work around remote (server) state and is absolutely a game changer. We can keep doing everything else with Redux while gaining a lot more predictability in our applications 🔥

In simpler terms, RTK Query simplifies data fetching, caching, storing, updating, and retrieval through a few easy-to-follow APIs. It takes that work off our hands so we can focus on defining data-source endpoints and revalidation moments with far less code, without writing reducers, while also making the UX more elegant. We don't have to deal with the headache of managing cached data ourselves; those details are hidden behind abstractions (the exposed APIs).

Some of the extremely useful features:
→ Automatic loading and error states
→ Much easier server updates: mutations re-run the queries they affect via tag types, keeping cached data in sync with the server
→ Deduplication of identical in-flight requests

Yes, there are more specialized tools on the market, e.g. React Query, SWR, etc., which are arguably better than RTK Query at managing async state alone. But Redux solves several different use cases at once:
1) Complex client state management
2) Server state caching

I do love React Query, but when a situation demands managing all those use cases at the same time, RTK Query really shines.
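The cache syncing described above works through tags: queries declare which tags they provide, mutations declare which tags they invalidate, and any query whose tags were invalidated gets refetched. A tiny sketch of just that bookkeeping in plain TypeScript (the class and method names are invented; this is not RTK Query's real API):

```typescript
// Sketch of tag-based cache invalidation (illustrative, not RTK Query internals).
class TagCache {
  private data = new Map<string, unknown>();
  private tags = new Map<string, Set<string>>(); // tag -> query keys providing it

  setQueryResult(key: string, result: unknown, providesTags: string[]): void {
    this.data.set(key, result);
    for (const tag of providesTags) {
      if (!this.tags.has(tag)) this.tags.set(tag, new Set());
      this.tags.get(tag)!.add(key);
    }
  }

  getQueryResult(key: string): unknown {
    return this.data.get(key);
  }

  // A mutation invalidates tags: every query that provided them is evicted,
  // which in RTK Query triggers an automatic refetch of those queries.
  invalidate(invalidatesTags: string[]): void {
    for (const tag of invalidatesTags) {
      for (const key of this.tags.get(tag) ?? []) this.data.delete(key);
    }
  }
}
```

This is the mechanism that lets you describe cache relationships declaratively ("this mutation affects Posts") instead of manually tracking which cached queries went stale.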
-
In the realm of data, speed and efficiency are not just advantages; they're necessities. Integrating WebAssembly with Conduit brings unmatched speed to your data workflows, ensuring your operations run smoother and faster than ever. Explore the benefits today. https://lnkd.in/gU3yJbDq #dataspeed #operationalefficiency #wasm #developer
Introducing Conduit 0.9: Revolutionizing Data Processing
meroxa.com
-
JSON-LD addresses one of Web3's most significant challenges by providing a common language for data, thus facilitating secure and efficient data sharing.
Why JSON-LD matters for Web3
cointelegraph.com
-
WebAssembly (WASM) is revolutionizing how we think about web performance, and when combined with Conduit, it unlocks unparalleled efficiency in data processing. Discover how WASM with Conduit can transform your data operations with speed and flexibility. https://lnkd.in/gU3yJbDq #WebAssembly #DataProcessing #conduit #meroxa #developer
Introducing Conduit 0.9: Revolutionizing Data Processing
meroxa.com
-
𝗥𝗘𝗦𝗧 𝗔𝗣𝗜 𝘃𝘀. 𝗚𝗿𝗮𝗽𝗵𝗤𝗟 - 𝗪𝗵𝗮𝘁’𝘀 𝗕𝗲𝘀𝘁 𝗳𝗼𝗿 𝗬𝗼𝘂𝗿 𝗣𝗿𝗼𝗷𝗲𝗰𝘁?

When choosing an API architecture, REST and GraphQL are two popular options. Here's a quick comparison:

𝗥𝗘𝗦𝗧 𝗔𝗣𝗜
✅ Uses HTTP methods (GET, POST, PUT, DELETE) and URLs.
✅ Easy to implement caching mechanisms.
✅ Well-established and easy to understand.
⚠️ Fixed data structure, which might lead to over-fetching or under-fetching of data.

𝗚𝗿𝗮𝗽𝗵𝗤𝗟
✅ Allows clients to request exactly the data they need, reducing over-fetching and under-fetching.
✅ All data is accessed through a single endpoint.
✅ More dynamic queries and schema flexibility.
⚠️ Can be more complex to set up and requires additional tooling.

Which one do you prefer and why? Share your experiences!

Follow Sanajit Jana for more such content 🔥

#REST #GraphQL #API #SoftwareDevelopment #TechTalk
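The over-fetching point is the crux of the comparison. A sketch in plain TypeScript (the record and field names are made up): a fixed REST endpoint returns the whole resource, while a GraphQL-style selection returns only the fields the client asked for.

```typescript
// Illustrative record that a fixed REST endpoint would return in full,
// even when the client only needs id and name (over-fetching).
const user = {
  id: 1,
  name: "Ada",
  email: "ada@example.com",
  bio: "Pioneer of computing",
  avatarUrl: "/img/ada.png",
};

// GraphQL-style field selection: return only what the client requested.
function select<T extends object>(resource: T, fields: (keyof T)[]): Partial<T> {
  const out: Partial<T> = {};
  for (const f of fields) out[f] = resource[f];
  return out;
}

// select(user, ["id", "name"]) -> only two fields cross the wire.
```

Real GraphQL servers do this per field via resolvers against a schema, but the payoff is the same: the response shape follows the query, not the endpoint.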
-
The first and second are my preferred ones. GraphQL is complex enough that it can very easily lead to significant performance degradation.
Nobody likes making 20 calls to render a page. Here are 3 aggregation patterns your consumers will love.

1. Central Aggregating Gateway
Middleware between user interfaces and microservices that filters and aggregates calls, consolidating many calls from the UI into a single call to the gateway.
𝗣𝗿𝗼𝘀
- Simplifies client-side logic by reducing the number of API calls.
- A centralized place for common concerns like authentication and rate limiting.
𝗖𝗼𝗻𝘀
- Requires significant coordination between teams, slowing down development.
- Risks becoming a single point of failure, or a bottleneck when many teams need to make changes.

2. Backend for Frontend (BFF)
A dedicated backend for each frontend, tailored to that frontend's specific needs. It separates the concerns of different UIs (e.g., mobile and desktop), each with its own BFF, and solves the bottleneck of the previous pattern by decentralizing the aggregation logic.
𝗣𝗿𝗼𝘀
- Reduces unnecessary data fetching and optimizes performance.
- Easier to manage and evolve; the same team can maintain the UI and the BFF.
- Reduces the need for cross-team coordination.
𝗖𝗼𝗻𝘀
- Increases duplication across BFFs.
- Can lead to maintenance overhead as the number of BFFs grows.

3. GraphQL
A query language for APIs that lets clients request only the data they need, reducing over-fetching and under-fetching through precise queries. Clients can adjust their queries without needing backend changes.
𝗣𝗿𝗼𝘀
- Reduces the number of API calls and the volume of data transferred, since clients fetch only what they need.
- Flexible query construction without requiring backend changes.
- Can serve as an effective aggregating layer, like a BFF but with more flexibility.
𝗖𝗼𝗻𝘀
- Complex and requires a serious investment.
- Complex query structures can lead to performance issues.
- If you don't design it right, it might become a bottleneck like a central aggregating gateway.

Have you used one of these? Share your thoughts!
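All three patterns share the same core move: one downstream call fans out to several services and returns a single combined payload. A sketch with stubbed services (the service names and shapes are invented for illustration):

```typescript
// Stub microservices (invented names/shapes) standing in for real network calls.
async function fetchUser(id: number) {
  return { id, name: "Ada" };
}
async function fetchOrders(userId: number) {
  return [{ orderId: 101, userId }];
}
async function fetchRecommendations(userId: number) {
  return ["book-42"];
}

// One gateway/BFF endpoint replaces three separate round trips from the UI.
// The fan-out happens server-side, in parallel, close to the services.
async function profilePage(userId: number) {
  const [user, orders, recommendations] = await Promise.all([
    fetchUser(userId),
    fetchOrders(userId),
    fetchRecommendations(userId),
  ]);
  return { user, orders, recommendations };
}
```

Whether this function lives in a shared gateway, a per-frontend BFF, or a set of GraphQL resolvers is exactly what distinguishes the three patterns; the aggregation itself looks the same.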
-
I like the BFF concept the most out of these. The team that works on the front-end code should own the code for the BFF. The backend-focused engineers who maintain the microservices should still have a presence in the team, though (even just as reviewers on PRs), so they can keep an eye on how things are being implemented and advise on anything the full-stack team is not privy to. A DDD approach would obviously also help significantly here. As far as duplication goes, I don't think it's something to worry about at this level, but again, the backend presence in these teams could reduce it somewhat.
-
Software Engineer | JavaScript Developer | Web Developer | ReactJs | NextJs | RedwoodJs | React Native | NodeJs | TypeScript | REST API | GraphQL | MongoDB | ES6 | Redux | Redux Toolkit | HTML5 | CSS3 | TailwindCSS | Git
The Great Debate: REST API vs GraphQL

As developers, we've all been there: stuck in a heated debate about the best API architecture. But what are the real differences between REST API and GraphQL?

REST API:
- Treats data as a resource, with a fixed set of endpoints.
- Uses HTTP methods (GET, POST, PUT, DELETE) to interact with resources.
- Typically returns a fixed set of data, with limited filtering or sorting options.

GraphQL:
- Treats data as a graph, with flexible querying capabilities.
- Uses a single endpoint, with queries and mutations to interact with data.
- Returns only the data requested, with support for filtering, sorting, and pagination.

So, which one is better? There's no simple answer. REST APIs are easy to use and can be integrated into various systems without extensive additional work, while GraphQL shines in complex, data-driven applications.

What's your take on the REST API vs GraphQL debate? Share your thoughts and experiences in the comments!

#RESTAPI #GraphQL #APIArchitecture #WebDevelopment #BackendDevelopment #APIDesign