Building a Microservices Architecture with Apache Kafka at Nationwide Building Society ft. Rob Jackson
Nationwide Building Society, a financial institution in the United Kingdom with 137 years of history and over 18,000 employees, relies on Apache Kafka® for their event streaming needs. But how did this come to be? In this episode, Tim Berglund talks with Rob Jackson (Principal Architect, Nationwide) about their Kafka adoption journey as they celebrate two years in production.
Nationwide chose to adopt Kafka as a central part of their information architecture in order to integrate microservices. Having services share a database creates design-time coupling, and having them call each other synchronously creates too much runtime coupling, which is what led to event-driven, reactive microservices emerging as a stable and extensible architecture for the next generation.
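To make that coupling trade-off concrete, here is a minimal sketch (not from the episode) of a microservice publishing a domain event to Kafka instead of calling a downstream service synchronously. The topic name, key, and payload are illustrative assumptions.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class MortgageApplicationService {
    public static void main(String[] args) {
        // Standard Kafka producer configuration; the broker address is illustrative.
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Instead of a synchronous call to a downstream service (runtime coupling),
            // the service records the fact that something happened as an event.
            // Topic name and payload are hypothetical.
            ProducerRecord<String, String> event = new ProducerRecord<>(
                "mortgage-application-submitted",                  // topic
                "application-42",                                  // key: application id
                "{\"applicantId\":\"A-123\",\"amount\":250000}");  // value: event payload
            producer.send(event);
            producer.flush();
        }
    }
}
```

Downstream services that care about this event subscribe to the topic on their own terms; the publisher never needs to know who they are.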
Nationwide also chose to use Kafka for the following reasons:
- To move their mortgage sales systems from a traditional orchestration style to event-driven, choreography-based designs built with microservices and Kafka (see the sketch after this list)
- To scale their mainframe systems cost-effectively using change data capture (CDC)
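As a rough illustration of the choreography style described above (my sketch, not Nationwide's code), each participating service simply subscribes to the topics it cares about and reacts, with no central orchestrator telling it what to do. The topic, group id, and service name are assumptions.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class CreditCheckService {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "credit-check-service"); // each reacting service has its own consumer group
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // The service decides for itself which events to react to.
            consumer.subscribe(List.of("mortgage-application-submitted"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // React to the event, e.g. run a credit check and emit a follow-up event.
                    System.out.printf("Running credit check for %s%n", record.key());
                }
            }
        }
    }
}
```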
Rob explains to Tim that, with Kafka now adopted across other use cases at Nationwide, he no longer needs to ask his team to query their APIs. Kafka has also enabled more choreography-based use cases and lets new applications be designed to create events that are pushed into a common enterprise event hub. Kafka has helped Nationwide eliminate bottlenecks in the process and speed up production.
Furthermore, Rob delves into why his team migrated from orchestration to choreography, explaining their differences in depth. When you start building your applications in a choreography-based way, you will find as a byproduct that interesting events are going into Kafka that you didn’t foresee leveraging but that may be useful for the analytics community. In this way, you can truly get the most out of your data.
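The point about analytics reusing events that already flow through Kafka could look something like the Kafka Streams sketch below (again an illustrative assumption, not code from the episode): a standalone analytics job keeps a running count of submitted applications without the originating services knowing or caring. Topic names are hypothetical.

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Produced;

public class ApplicationAnalytics {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "application-analytics");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // Reuse events the business services already publish as a byproduct of choreography.
        KStream<String, String> events = builder.stream("mortgage-application-submitted");
        events
            .groupBy((key, value) -> "submitted")          // re-key everything to one group for a running total
            .count()
            .toStream()
            .to("application-counts", Produced.with(Serdes.String(), Serdes.Long()));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```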
EPISODE LINKS
- Case Study: Event Streaming & Real-Time Data in Banking
- Introducing Events and Stream Processing into Nationwide Building Society (Kafka Summit talk)
- Learn more about Nationwide
- Join the Confluent Community
- Check out Kafka tutorials, resources, and guides at Confluent Developer
- Live demo: Kafka streaming in 10 minutes on Confluent Cloud
- Use 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)