𝐄𝐯𝐞𝐫 𝐖𝐨𝐧𝐝𝐞𝐫𝐞𝐝 𝐰𝐡𝐲 𝐭𝐡𝐞 𝐬𝐚𝐦𝐞 𝐫𝐞𝐪𝐮𝐞𝐬𝐭 𝐭𝐚𝐤𝐞𝐬 𝐚 𝐝𝐢𝐟𝐟𝐞𝐫𝐞𝐧𝐭 𝐫𝐞𝐬𝐩𝐨𝐧𝐬𝐞 𝐭𝐢𝐦𝐞 🤔?

➡️ 𝑪𝒐𝒏𝒕𝒆𝒙𝒕 𝑺𝒘𝒊𝒕𝒄𝒉 𝒐𝒓 𝑩𝒂𝒄𝒌𝒈𝒓𝒐𝒖𝒏𝒅 𝑷𝒓𝒐𝒄𝒆𝒔𝒔 🔄:
=> The request may be processing more data than usual, or it may pick up random additional latency from a context switch to a background process.

➡️ 𝑵𝒆𝒕𝒘𝒐𝒓𝒌 𝑷𝒂𝒄𝒌𝒆𝒕 𝑳𝒐𝒔𝒔 𝒂𝒏𝒅 𝑹𝒆𝒕𝒓𝒂𝒏𝒔𝒎𝒊𝒔𝒔𝒊𝒐𝒏 📦:
=> The loss of a network packet and the resulting TCP retransmission add latency.

➡️ 𝑮𝒂𝒓𝒃𝒂𝒈𝒆 𝑪𝒐𝒍𝒍𝒆𝒄𝒕𝒊𝒐𝒏 𝑷𝒂𝒖𝒔𝒆 🗑️:
=> A garbage collection pause in the server's runtime can stall the request.

➡️ 𝑷𝒂𝒈𝒆 𝑭𝒂𝒖𝒍𝒕 𝒂𝒏𝒅 𝑫𝒊𝒔𝒌 𝑹𝒆𝒂𝒅 📀:
=> A page fault that forces a read from disk adds latency.

➡️ 𝑴𝒆𝒄𝒉𝒂𝒏𝒊𝒄𝒂𝒍 𝑽𝒊𝒃𝒓𝒂𝒕𝒊𝒐𝒏𝒔 𝒊𝒏 𝒕𝒉𝒆 𝑺𝒆𝒓𝒗𝒆𝒓 𝑹𝒂𝒄𝒌 🔄:
=> External factors such as mechanical vibrations in the server rack can degrade performance.

➡️ 𝑽𝒂𝒓𝒊𝒐𝒖𝒔 𝑶𝒕𝒉𝒆𝒓 𝑪𝒂𝒖𝒔𝒆𝒔 💼:
=> Many other factors can introduce random additional latency.

𝐖𝐡𝐚𝐭 𝐡𝐚𝐩𝐩𝐞𝐧𝐬 𝐰𝐡𝐞𝐧 𝐬𝐞𝐯𝐞𝐫𝐚𝐥 𝐛𝐚𝐜𝐤𝐞𝐧𝐝 𝐜𝐚𝐥𝐥𝐬 𝐚𝐫𝐞 𝐧𝐞𝐞𝐝𝐞𝐝 𝐭𝐨 𝐬𝐞𝐫𝐯𝐞 𝐚 𝐫𝐞𝐪𝐮𝐞𝐬𝐭? 🌟

➡️ 𝑩𝒂𝒄𝒌𝒆𝒏𝒅 𝑪𝒂𝒍𝒍 𝑫𝒖𝒓𝒂𝒕𝒊𝒐𝒏𝒔 ⏱️:
=> Suppose the backend calls take 92 ms, 100 ms, and 150 ms.

➡️ 𝑺𝒆𝒓𝒗𝒊𝒄𝒆 𝑪𝒐𝒎𝒑𝒍𝒆𝒕𝒊𝒐𝒏 ✅:
=> The response can only be served after all backend calls complete, so it is delayed until the slowest call finishes (150 ms). 🚀

➡️ Thanks for reading 😀. Don't Like this if you didn't find it useful 🤣.
➡️ I am open to work; DM me for any referral or openings.
➡️ Credit: "Designing Data-Intensive Applications" by Martin Kleppmann.

#tech #technotes #interview #systemdesign #backend #servers #apis #technology #creativity #Future #bettersoftwareengineer #softwareengineer #bettereveryday #networking
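The fan-out effect described above, where the slowest backend call determines the overall response time, can be sketched with Python's asyncio. The durations match the post's example; `call_backend` is a hypothetical stand-in for a real RPC:

```python
import asyncio
import time

async def call_backend(name: str, duration_ms: int) -> str:
    # Simulate one backend call with a fixed (hypothetical) latency.
    await asyncio.sleep(duration_ms / 1000)
    return f"{name} done in {duration_ms} ms"

async def serve_request() -> float:
    start = time.perf_counter()
    # Fan out to all backends in parallel; the response can only be
    # assembled once every call has returned.
    await asyncio.gather(
        call_backend("A", 92),
        call_backend("B", 100),
        call_backend("C", 150),
    )
    return (time.perf_counter() - start) * 1000

elapsed = asyncio.run(serve_request())
# Even though the calls run concurrently, total time is dominated by
# the slowest backend (~150 ms), not the sum of all three.
print(f"request served in {elapsed:.0f} ms")
```

If the calls were made sequentially instead, the total would be roughly 92 + 100 + 150 = 342 ms, which is why fan-out parallelism matters once a request touches several services.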
Achyuth Chowdary D A’s Post
More Relevant Posts
-
Career Branding Coach | Guiding Tech Professionals to Build Influential Personal Brands for Job Opportunities & Career Growth | Principal Software Engineer at Oracle
𝐓𝐨𝐩 6 𝐋𝐨𝐚𝐝 𝐁𝐚𝐥𝐚𝐧𝐜𝐞𝐫 𝐔𝐬𝐞 𝐂𝐚𝐬𝐞𝐬

Load balancers are essential tools for managing web traffic efficiently. These capabilities collectively enhance server performance, reduce downtime, and ensure a seamless user experience.

𝐓𝐫𝐚𝐟𝐟𝐢𝐜 𝐃𝐢𝐬𝐭𝐫𝐢𝐛𝐮𝐭𝐢𝐨𝐧
- Efficient Traffic Management: Distributes client requests across multiple servers.
- Reduced Server Load: Balances load to prevent server overload.

𝐇𝐢𝐠𝐡 𝐀𝐯𝐚𝐢𝐥𝐚𝐛𝐢𝐥𝐢𝐭𝐲
- Minimized Downtime: Ensures continuous availability even if one server fails.
- Failover Support: Automatically redirects traffic to healthy servers.

𝐒𝐒𝐋 𝐓𝐞𝐫𝐦𝐢𝐧𝐚𝐭𝐢𝐨𝐧
- Simplified SSL Management: Offloads SSL decryption tasks from backend servers.
- Enhanced Performance: Improves server efficiency by handling encryption at the load balancer.

𝐒𝐞𝐬𝐬𝐢𝐨𝐧 𝐏𝐞𝐫𝐬𝐢𝐬𝐭𝐞𝐧𝐜𝐞
- Consistent User Experience: Ensures users are always directed to the same server.
- Stateful Applications Support: Essential for applications requiring session data.

𝐒𝐜𝐚𝐥𝐚𝐛𝐢𝐥𝐢𝐭𝐲
- Dynamic Scaling: Adds or removes servers based on traffic demand.
- Improved Resource Utilization: Maximizes server efficiency and cost-effectiveness.

𝐇𝐞𝐚𝐥𝐭𝐡 𝐌𝐨𝐧𝐢𝐭𝐨𝐫𝐢𝐧𝐠
- Continuous Health Checks: Regularly checks server health and performance.
- Proactive Issue Resolution: Detects and isolates faulty servers to maintain service quality.

Follow Ashish Sahu for more content. Join my friend's vibrant DevOps community ⏬ https://lnkd.in/g7fEqzj3

#python #softwareengineering #systemdesign #engineering #coder #coding #programming #sql #dsa
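The first three use cases above (distribution, failover, health monitoring) can be sketched in a few lines. This is a toy round-robin balancer, not a production implementation, and the server names are made up:

```python
from itertools import cycle

class LoadBalancer:
    """Toy round-robin load balancer with failover (illustrative only)."""

    def __init__(self, servers):
        self.servers = list(servers)
        self.healthy = set(self.servers)   # health monitoring updates this set
        self._rr = cycle(self.servers)

    def mark_down(self, server):
        # In a real balancer, periodic health checks would call this
        # after a server fails several probes in a row.
        self.healthy.discard(server)

    def route(self):
        # Traffic distribution: hand out servers in round-robin order,
        # skipping unhealthy ones (failover support).
        for _ in range(len(self.servers)):
            server = next(self._rr)
            if server in self.healthy:
                return server
        raise RuntimeError("no healthy servers available")

lb = LoadBalancer(["srv-a", "srv-b", "srv-c"])
print([lb.route() for _ in range(4)])   # cycles a, b, c, then a again
lb.mark_down("srv-b")
print([lb.route() for _ in range(3)])   # srv-b is now skipped
```

Real load balancers layer the remaining use cases (SSL termination, session persistence via sticky hashing, autoscaling hooks) on top of essentially this routing loop.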
-
Day 09 of Becoming a Better Engineer!

One thing I've realized so far is that being a good problem solver is more about using common sense than relying on fancy technical terms. Let me clarify.

Fancy terms like stateful backend, caching with Redis, low latency, network optimization, saving network bandwidth, minimizing server costs, preventing server and database crashes, TCP, UDP, and so on can help in planning a better architecture. However, for a beginner, these terms can be overwhelming. Here's a simple story to explain these concepts that even a 5-year-old can understand.

Prerequisites: Common sense

Story: There is a boy, X (the frontend), who has a medical condition that causes him to forget everything once he completes a task. When someone asks him for information, his job is to fetch it from Y (the database). Y is very busy and can't handle too much pressure; if asked repeatedly, Y might become overwhelmed. The distance between X and Y is 10 km.

X has two vehicles: a TCP truck and a UDP bike. If X chooses the truck, he must pack survival essentials, ensuring a safe journey, but it takes time to prepare. In contrast, the UDP bike lets him travel quickly but with no guarantee of reaching the destination.

Suppose X reaches Y, gets the information, and returns home. Since he forgets everything after completing the task, he would need to repeat the whole process if asked for the same information again. This increases time (latency) and fuel costs (cost), and disturbs Y unnecessarily.

Instead, if X shares the information with his neighbors (browser caching), he can ask them next time instead of repeating the trip. This saves time, cost, and energy. Additionally, he could share the information with his relatives (server caching) for the times the neighbors are unavailable. Either way, he travels less and is more efficient than starting from scratch each time. Note: neighbors are less trustworthy than relatives.
Concepts we learned from this story:
- TCP
- UDP
- Saving network bandwidth
- Saving database calls
- Client-side caching
- Server-side caching
- Low latency
- Saving costs

If you find this helpful, please share your thoughts.

#Engineering #SoftwareDevelopment #ProblemSolving #CommonSense #TCP #UDP #Caching #NetworkOptimization #CostSaving #LowLatency #BeginnerFriendly #TechStory #SoftwareArchitecture

Real engineering is interesting!
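The story's two cache layers can be sketched directly. In this toy mapping, the neighbors are the browser cache, the relatives are the server cache, and Y is the database; all names and data are illustrative:

```python
# Toy two-level cache: browser cache -> server cache -> database.
browser_cache = {}   # closest, but least trustworthy (evicted easily)
server_cache = {}    # shared across clients, one hop farther away
database = {"weather": "sunny"}
db_calls = 0         # counts the expensive "10 km trips to Y"

def get(key):
    global db_calls
    if key in browser_cache:      # neighbors have it: no trip at all
        return browser_cache[key]
    if key in server_cache:       # relatives have it: still avoids Y
        value = server_cache[key]
    else:
        db_calls += 1             # the full trip to the database
        value = database[key]
        server_cache[key] = value
    browser_cache[key] = value    # remember locally for next time
    return value

print(get("weather"), get("weather"))  # second call never reaches the DB
print("database calls:", db_calls)
```

Running this shows exactly the story's payoff: repeated requests for the same data cost one database call instead of one per request, which is the bandwidth, latency, and cost saving in a nutshell.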
-
🚀 Staggering Insights on the Value of Open Source Software! 🚀 An eye-opening study unveils the immense economic impact of Open Source Software: 1. OSS's Supply vs. Demand Side Value: While the cost to replicate the most popular OSS stands at $4.15 billion, its demand-side value is a staggering $8.8 trillion! This underscores OSS's irreplaceable role in our ecosystem. 2. Language Concentration of Value: The demand-side value is highly concentrated in a few pivotal programming languages (Java, JavaScript, C (including C# and C++), Python, and Typescript). 3. Power of the Few: Just a small fraction of OSS developers (3,000 developers, representing 5%!) are responsible for a lion's share of its supply-side value (93%!). 4. Companies would need to spend 3.5x more on software than they currently do if OSS did not exist. #OpenSourceSoftware #OSS #DevSec #AppSec #OpenSource #DevSecOps #ProductSecurity
-
🚀 New Article Alert! 📚 Choosing Between JAR and REST APIs for Offering Services: A Comprehensive Guide Are you grappling with the decision of whether to deploy your services via JAR files or REST APIs? This article dives deep into the need for each method, weighs their pros and cons, and provides insightful business use cases along with real-world examples. 🛠️ Highlights include: 1) Understanding the fundamental differences between JAR and REST APIs. 2) Pros and cons of each method, helping you make an informed decision. 3) Real-world business use cases and examples. 4) Detailed discussion on memory instantiation and thread management in containerized pods. 5) Impact on latency and maintenance for both approaches. Whether you are developing high-performance computing applications or scalable web services, this guide offers valuable insights to help you choose the right approach for your needs. 🔗 Read the full article here: https://lnkd.in/gzccrwvB #Tech #SoftwareEngineering #APIs #Java #REST #Microservices #CloudComputing #Scalability #PerformanceOptimization #MediumArticle #LinkedIn
-
Software Engineer from 1337 - 42 Network - Mohammed VI Polytechnic University | WebDev & DevOps Enthusiast
In my latest article, I dive into the essential principles that every backend engineer needs to know. Instead of focusing on specific tools or frameworks, I explore the core fundamentals like communication protocols, database engineering, web servers, proxies, security, and messaging systems. Understanding these key concepts will not only help you build more resilient systems but also provide a strong foundation as you grow in your backend engineering career. Whether you’re a beginner or looking to refine your skills, this article covers what truly matters. I will dive deeper into every concept in my next articles. Don't miss out. #BackendEngineering #TechEssentials #SoftwareDevelopment #EngineeringFundamentals #CareerGrowth
-
Serialization is the process of converting an object into a format that can be transmitted over a network and later reconstructed into the original object. This technique is crucial for data exchange in distributed systems and APIs. - **JSON** is a widely used serialization format. It is human-readable and easy to use, making it a popular choice for web applications. - **gRPC** is an RPC framework that uses Protocol Buffers (protobuf) for serialization, which results in smaller payload sizes and reduced latency compared to JSON. - An **Interface Definition Language (IDL)**, such as the .proto files used by Protocol Buffers, defines the structure of the data being serialized, ensuring consistency and compatibility across different systems. In summary, while JSON is great for simple and human-readable data exchange, gRPC with Protocol Buffers provides a more efficient and powerful alternative for high-performance applications. #Serialization #JSON #gRPC #ProtocolBuffers #SoftwareEngineering Note: I am currently looking for job opportunities in software engineering. Feel free to reach out if you have any leads!
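The size difference the post mentions is easy to see with the standard library alone. This sketch round-trips a record through JSON, then packs the same fields into a fixed binary layout with `struct` as a rough stand-in for protobuf's compactness (real protobuf needs a `.proto` schema and the protobuf library; the record itself is made up):

```python
import json
import struct

# Round-trip serialization with JSON: human-readable, self-describing.
order = {"id": 42, "price": 19.99, "qty": 3}
payload = json.dumps(order).encode("utf-8")
assert json.loads(payload) == order   # lossless round trip

# The same three fields as raw binary: int32, float64, int32,
# little-endian with no padding -> exactly 16 bytes, but the schema
# (field names, types, order) now lives outside the payload, which is
# the role an IDL plays.
binary = struct.pack("<idi", order["id"], order["price"], order["qty"])
print(len(payload), "bytes as JSON vs", len(binary), "bytes as packed binary")
```

The JSON payload carries its field names in every message; the binary one does not, which is where much of protobuf's payload saving comes from.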
-
Technical Lead | Architecting Scalable Microservices & Cloud-Native Solutions | .NET, Angular, & Azure | Driving Innovative Solutions & Leading High-Performance Teams
I've recently published a new Medium post about handling faults in .NET applications using Polly. Failures in distributed systems are inevitable, whether it's network issues, temporary service outages, or other transient faults. But with the right approach, we can make our applications more resilient and reliable. 💡 In this post, I walk through how to integrate Polly into your projects, with straightforward explanations and simple examples. If you're looking to improve the reliability of your applications, I think you'll find this post helpful. 🚀 Read the full post here. 📖 #dotnet #Polly #WebAPI #SoftwareDevelopment #Resilience #Coding #APIDevelopment #softwareengineering
Fault Handling in .NET with Polly
-
A week ago, an exception was raised 💣 by one of our microservices. I found it in the logs and reported it to the responsible team, who resolved it. However, the same exception occurred again, and this time it required a different solution. 𝐇𝐚𝐧𝐝𝐥𝐢𝐧𝐠 𝐀𝐏𝐈 𝐂𝐚𝐥𝐥𝐬: 🛡 When calling an API, whether external or an internal microservice, you cannot always expect a response. Therefore, it's essential to use try-catch blocks and set sensible default values. Now comes the main point, 𝐔𝐧𝐝𝐞𝐫𝐬𝐭𝐚𝐧𝐝𝐢𝐧𝐠 𝐇𝐓𝐓𝐏 𝐑𝐞𝐬𝐩𝐨𝐧𝐬𝐞 𝐂𝐨𝐝𝐞𝐬: 🔍 🛑 503 𝐒𝐞𝐫𝐯𝐢𝐜𝐞 𝐔𝐧𝐚𝐯𝐚𝐢𝐥𝐚𝐛𝐥𝐞 𝐌𝐞𝐚𝐧𝐢𝐧𝐠: The server is currently unable to handle the request due to temporary overloading or maintenance. 𝐂𝐚𝐮𝐬𝐞𝐬: - Server overload - Server maintenance - Temporary server downtime - Too many concurrent requests 𝐒𝐨𝐥𝐮𝐭𝐢𝐨𝐧: Leave it and retry after some time, ideally with a backoff. 🛑 403 𝐅𝐨𝐫𝐛𝐢𝐝𝐝𝐞𝐧 𝐌𝐞𝐚𝐧𝐢𝐧𝐠: The server understands the request but refuses to authorize it. 𝐂𝐚𝐮𝐬𝐞𝐬: - Insufficient permissions - Authentication issues - IP restrictions - Misconfiguration of ports and protocols 𝐒𝐨𝐥𝐮𝐭𝐢𝐨𝐧: Report it to the team that owns the API. ☑️ CASE 1 (a week ago): A web server was configured to allow only secure (HTTPS) connections on port 443. A client attempting to reach it over HTTP on port 80 received a 403 Forbidden error. Once the client switched to HTTPS, the issue was resolved. ☑️ CASE 2: This time it was a plain 503 caused by overload. One more retry and everything was fine. 📝 𝐌𝐨𝐫𝐚𝐥 𝐨𝐟 𝐭𝐡𝐞 𝐬𝐭𝐨𝐫𝐲: Don't just glance at errors or exceptions. Read and understand them thoroughly, like a watchdog. Error messages are your friend, not your enemy, helping you identify and resolve issues effectively. #techclearance #10xengineers #debugging #softwareengineering #learnalways #followAndLikeForMore
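The two solutions above differ in kind: 503 is transient (retry), 403 is structural (report, don't retry). A minimal sketch of that decision using only the standard library; the URL is hypothetical and stands in for any internal or external API:

```python
import time
import urllib.error
import urllib.request

# Hypothetical endpoint; the retry policy is the point, not the URL.
URL = "https://api.example.com/health"

def call_api(url, retries=3, backoff_s=1.0):
    """Return the response body, retrying on 503 and failing fast on 403."""
    for attempt in range(retries):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                return resp.read()
        except urllib.error.HTTPError as err:
            if err.code == 503 and attempt < retries - 1:
                # Temporary overload: wait and try again, backing off
                # exponentially so we don't pile onto a struggling server.
                time.sleep(backoff_s * 2 ** attempt)
                continue
            if err.code == 403:
                # Permissions/configuration problem: retrying won't help.
                raise PermissionError(
                    "403 Forbidden: check auth, IP rules, ports/protocol"
                ) from err
            raise  # anything else (or 503 on the last attempt): surface it
```

Libraries like .NET's Polly or Python's tenacity package exactly this pattern as reusable policies, but the core logic is no more than this loop.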