Some thoughts about the #kafkasummit last week:
"The focus this year was obviously Flink, an already well established framework for stream processing. Often the question comes down to choosing between kstream or Flink, but with the managed solution of Flink by Confluent the integration with Kafka becomes a no brainer: both. For now the support is limited to Flink SQL, but the Table API and custom procedures are on their way.
Another surprising difference from last year was the growing number of talks about event-driven architecture from a business point of view. The adoption of business events (under their many names: domain events, public events, ...) is on the rise as companies start to understand the true benefit of event-driven architecture as an integration strategy across bounded contexts.
The talk I found most interesting was Cédric Schaller's on error handling. It was very well structured and clear: the different options, and the reasons to pick one over another, were clearly laid out. I even picked up some tricks for better implementing certain error-handling scenarios.
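One classic option in that space is routing records that fail processing to a dead-letter topic. Here is a minimal Kafka Streams sketch of that pattern; it is my own illustration, not from the talk, and the topic names and validation logic are assumptions.

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Branched;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Named;

public class DeadLetterRouting {
    static boolean processable(String value) {
        // Stand-in for real validation/deserialization logic.
        return value != null && !value.isBlank();
    }

    public static StreamsBuilder topology() {
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> input = builder.stream("orders");

        // Split the stream: records we can handle go on, everything else
        // lands on a dead-letter topic for inspection and replay.
        input.split(Named.as("route-"))
             .branch((key, value) -> processable(value),
                     Branched.withConsumer(ok -> ok.to("orders-processed")))
             .defaultBranch(Branched.withConsumer(bad -> bad.to("orders-dlt")));

        return builder;
    }
}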
Next to the actual talks, I spoke with many people who are beyond the mere technical adoption of Kafka and event streaming. Many companies are now at a stage where they have started, or are thinking about starting, to use their event platform as a data platform. This obviously raises many questions, one of which is certainly backup and restore... (Kannika 😉)
And last but not least, I had a great time with the colleagues from Cymo. #kip"
Come and visit us at booth 304 and we'll explain how you can back up your events in the easiest, most user-friendly way!
And make sure to ask for a pair of socks!
#kafkasummit #kafka