On This Month in Datadog, we’re recapping DASH 2024. Tune in for keynote highlights, including LLM Observability, Kubernetes Autoscaling, and Log Workspaces, which enables you to parse, enrich, and analyze log data from multiple sources. Watch now:
Hello and welcome to This Month in Datadog. This episode we're recapping our flagship conference, DASH, which we recently held at the North Javits Center right here in New York. DASH 2024 was our largest to date and brought together thousands of attendees from around the globe for two days of networking, workshops, breakout sessions, an expo hall, and so much more. On stage during the keynote, we shared the latest products and features to help you better observe your environment, secure your infrastructure and workloads, and take action.

But before we get to those, we want to let you know that Datadog Workshop Week is returning in September. Between September 9th and 13th, we're offering virtual training sessions covering a wide variety of topics, from performance to security, distributed tracing, Kubernetes, and more. Each workshop is $75, and all profits will be donated to Doctors Without Borders, the Cancer Research Institute, and Girls Who Code. You can register by visiting the link shown.

And now let's get to our recap of DASH 2024. We've seen the impressive potential of large language models, and this has led to incredible innovation across many industries. Datadog LLM Observability groups semantically similar prompts and responses into clusters and auto-labels them for easy analysis. I'm happy to announce that Datadog LLM Observability is now available.

Our new solution will allow you to prioritize the workloads and clusters with the most savings potential, to take direct action from the Datadog platform to apply and automate right-sizing recommendations, and to observe and measure the impact of your complete autoscaling program on your key cost and efficiency metrics. I can now immediately see the total idle cost for my entire Kubernetes footprint across clouds. In this case, I see I have over $85,000 in idle spend last month, and I'm motivated to start optimizing. We're very excited to announce Datadog Kubernetes Autoscaling.
Introducing Log Workspaces. Log Workspaces allows me to freely join and transform data across multiple sources and then chain together simple queries to perform complex analysis in a single collaborative space.

We believe that Datadog is better with OpenTelemetry, and OpenTelemetry is better with Datadog. For the first time ever, I can debug my application with live production data at every step of the process. I'm excited to introduce Datadog's Live Debugger.

It's become critical for you to understand the performance of your browser or mobile apps. You want to answer questions like: as I release a new feature, how does that actually affect my conversion rate? You'll notice right away this is a brand-new product. It brings your business teams and technical teams into one UI, leading to better collaboration. Introducing Product Analytics.

For many of you, chatting directly with Bits is a great way to get the information you're looking for during an incident. I'm on call for a food delivery service and have just been paged for our most critical service, the Restaurants API, which is responsible for processing all of our user orders. By the time I scramble over to my laptop, Bits has already Slacked me to let me know that it's begun its investigation. I'm thrilled to unveil the latest evolution of Bits AI.

Built by on-call engineers for on-call engineers, Datadog On-Call supports everything you need from a paging solution and combines it with everything you already love about the Datadog platform.

The breakout sessions have been really good, just hearing from leadership from other companies. Walking through the expo really made me feel like I was in the show. I just attended a workshop on SRE in practice, and it has actually been very helpful for me. I had a lot of fun; there are so many people that you can collaborate with. It's my first time at DASH, and my first time in New York City.
And the energy is just high vibes. Good times, good energy, good people. Honestly, amazing conference. I'll be back next year.

Each year, we're proud to share our latest innovations for building the next generation of applications, infrastructure, security, and teams. I'd like to thank everyone who attended, as well as our sponsors, partners, speakers, and staff for making DASH 2024 the best one yet. Check out our YouTube channel for videos of breakout sessions, fireside chats, and a special edition of Datadog On about LLMs, autonomous agents, and chatbots. And check out our blog for a guide to every announcement made at DASH. You can find links to all of these resources in the video's description.

Before we go, I'd like to highlight some super interesting research recently released by the data science teams at Datadog. We're excited to announce Toto, the Time Series Optimized Transformer for Observability. It's a state-of-the-art foundation model for time series forecasting. The blog post has a great overview, but if you're interested in the topic, I recommend reading the full technical report.

And that wraps up today's episode. Next month, we're back to our regular coverage of products, features, and announcements. We'll see you then.
Looks good. I've been looking into Datadog recently. Datadog is the glue that connects dev and ops teams. What an amazing field! Btw, what languages/dev tools does Datadog expect for building custom integrations, custom metrics, and/or custom tracing/correlation? Go/C, CPython with C extensions, or Node with C++ addons? It totally depends on interaction efficiency and dev friendliness. I might start learning Go now, haha.
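On the question of languages: Datadog publishes official client libraries in many languages, and custom metrics usually travel over DogStatsD, which is just a plain-text UDP protocol, so almost any language works. A minimal sketch in Python using only the standard library (the function name is my own; real projects would normally use the official datadogpy client instead of hand-rolling datagrams):

```python
import socket

def send_dogstatsd_metric(name, value, metric_type="c", tags=None,
                          host="127.0.0.1", port=8125):
    """Format and fire one DogStatsD datagram: metric.name:value|type|#tags."""
    datagram = f"{name}:{value}|{metric_type}"
    if tags:
        datagram += "|#" + ",".join(tags)
    # UDP is fire-and-forget: no connection, no ack, negligible app overhead,
    # which is why a local Agent listening on 8125 is the usual setup.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(datagram.encode("utf-8"), (host, port))
    sock.close()
    return datagram  # returned only so the format is easy to inspect

# e.g. count an order event, tagged by environment and service
send_dogstatsd_metric("orders.created", 1, "c", ["env:dev", "service:checkout"])
```

Because the wire format is this simple, the "which language" choice really does come down to team preference; the Agent does the heavy lifting regardless.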
Great recap of DASH 2024! The focus on LLM Observability is essential as AI systems become more complex. Kubernetes Autoscaling offers much-needed efficiency for resource management. Log Workspaces streamline multi-source data analysis, enhancing operational clarity.
Awesome DASH 2024 recap! I'm excited about LLM Observability and the future of AI overall. I seriously love the monthly recaps. They're a great vehicle for the community.
A lot of goodness in here and a good (perhaps best?) use of 5min if LLM, Kubernetes, and/or Logs are part of your daily routine.
#AI #Datadog #DASH #DevSecOps #Logs #Kubernetes
With Datadog Kubernetes Monitoring, you can visualize the health and performance of all your clusters, regardless of what platform they’re running on. Download our brief to learn more: https://lnkd.in/e8rxDzZn
That first mile of getting data in can often be the hardest. That's why Dynatrace continues to invest in log ingest, offering a range of out-of-the-box solutions.
With these latest innovations, you can harness even more data for comprehensive AI-driven insights, faster troubleshooting, and improved operational efficiency. Get the scoop in Troy Mangum's blog.
No visibility into your 𝐚𝐩𝐩𝐥𝐢𝐜𝐚𝐭𝐢𝐨𝐧'𝐬 𝐩𝐞𝐫𝐟𝐨𝐫𝐦𝐚𝐧𝐜𝐞, or into which service or dependency is the bottleneck? Then you should consider 𝐭𝐫𝐚𝐜𝐢𝐧𝐠.
I would say that 𝐭𝐫𝐚𝐜𝐞𝐬 are a more "advanced" facet of observability, than logs and common metrics, for example. They show us the execution path of a request and its dependencies or microservices connected to it, making it easier to identify “problematic parts” of your application.
Tracing applies no matter what service architecture you are working with, whether your application is a monolith with a single database or a sophisticated mesh of services.
Here are some nice tools to help you get started:
⧁ Datadog trial https://lnkd.in/dUeT9gdJ
⧁ Jaeger https://lnkd.in/dJxqSN5w
⧁ OpenTelemetry https://lnkd.in/d29jvHX6
...
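To make the "execution path" idea concrete, here is a toy sketch of what a trace captures: spans sharing a trace ID and linked by parent IDs. This is an illustration only, not a real tracer; in practice you would use the OpenTelemetry SDK or a vendor library, and all names below are made up:

```python
import time
import uuid
from dataclasses import dataclass, field

@dataclass
class Span:
    """Toy span: one timed unit of work inside a trace."""
    name: str
    trace_id: str
    parent_id: str = ""           # empty string marks the root span
    span_id: str = field(default_factory=lambda: uuid.uuid4().hex[:16])
    start: float = field(default_factory=time.monotonic)
    duration_ms: float = 0.0

    def finish(self):
        self.duration_ms = (time.monotonic() - self.start) * 1000

# One request produces several spans that all share a trace_id:
trace_id = uuid.uuid4().hex
root = Span("GET /checkout", trace_id)
db = Span("db.query", trace_id, parent_id=root.span_id)
db.finish()
pay = Span("payment.charge", trace_id, parent_id=root.span_id)
pay.finish()
root.finish()

# The parent/child links are what let a UI reconstruct the execution path
for s in (root, db, pay):
    print(s.name, s.parent_id or "root", f"{s.duration_ms:.2f}ms")
```

The "problematic part" of the app then shows up as the child span with the outsized duration, which is exactly the view tools like Jaeger and Datadog APM render for you.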
In this article, you will learn strategies for reducing inter-AZ data transfer costs in Kubernetes, including approaches for applications running in multiple zones and those limited to fewer zones.
More: https://lnkd.in/g9BHknf5
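One common strategy in this space is keeping Service traffic inside the client's own zone with Kubernetes Topology Aware Routing. A hedged config sketch, assuming Kubernetes v1.27+ (on older clusters the annotation was `service.kubernetes.io/topology-aware-hints`); the service itself is hypothetical:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: orders-api            # hypothetical service
  annotations:
    # Ask kube-proxy to prefer endpoints in the caller's zone,
    # avoiding inter-AZ hops (and their per-GB transfer charges).
    service.kubernetes.io/topology-mode: Auto
spec:
  selector:
    app: orders-api
  ports:
    - port: 80
      targetPort: 8080
```

Note the trade-off the linked article gets into: zone-local routing only helps if each zone has enough replicas to absorb its own traffic.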
On-call stress is real—switching between tools, managing alerts, and racing against time can cause burnout and inefficiency.
That’s why we’ve launched Datadog On-Call! It integrates monitoring, paging, and incident response into one platform, so your team can respond faster with less stress. #Datadog #OnCall #IncidentResponse #Observability
Datadog On-Call unifies observability, paging, and monitoring into one seamless platform, which eliminates the inefficiencies of multiple disjointed tools and allows engineers to focus on resolving incidents quickly. Learn more and get started today: https://lnkd.in/eHjujywb
Datadog helps you understand your entire Kubernetes environment—from your hosts, containers, and applications down to Kubernetes itself—so you can deliver the best customer experience possible. Check out our free brief to learn more: https://lnkd.in/e8rxDzZn #Datadog #monitoring #observability #containers
How I cut 61% off my client's Datadog logging bill in 3 days.
Simple playbook:
1. Understand how the logs are used
2. View most logged patterns
3. Highlight useless logging patterns
4. Figure out where these logs are coming from
5. Delete or mute the offending logs
6. Watch your bill plummet
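Steps 2 and 3 of the playbook can be approximated even outside Datadog's pattern view. A rough sketch, assuming you can export a sample of raw log lines; the normalization rules are illustrative, not exhaustive:

```python
import re
from collections import Counter

def log_pattern(line: str) -> str:
    """Collapse variable parts (hex ids, numbers) so similar lines group together."""
    line = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", line)
    line = re.sub(r"\d+", "<NUM>", line)
    return line

logs = [
    "order 123 created in 45ms",
    "order 456 created in 52ms",
    "heartbeat ok",
    "order 789 created in 38ms",
]

# The most frequent patterns are your first candidates to mute or drop
top = Counter(log_pattern(l) for l in logs).most_common(2)
print(top)
# [('order <NUM> created in <NUM>ms', 3), ('heartbeat ok', 1)]
```

High-volume patterns that nobody queries (the "heartbeat ok" of your stack) are usually where the bill hides.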
I saved a 5 figure sum for my client every year and you can do the same!
DM me SAVE to get a preview of my free booklet on reducing logging costs.
Or comment and tell me how much your logging is costing you.