
Optimizing Apache Kafka's Internals with Its Co-Creator Jun Rao

48:54
 
Content provided by Confluent, founded by the original creators of Apache Kafka®. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Confluent or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://he.player.fm/legal.

You already know Apache Kafka® is a distributed event streaming system for setting your data in motion, but how does its internal architecture work? No one can explain Kafka’s internal architecture better than Jun Rao, one of its original creators and Co-Founder of Confluent. Jun has an in-depth understanding of Kafka that few others can claim—and he shares that with us in this episode, and in his new Kafka Internals course on Confluent Developer.

One of Jun's goals in publishing the Kafka Internals course was to cover the evolution of Kafka since its initial launch. In line with that goal, he discusses the history of Kafka development, including the original thinking behind some of its design decisions, as well as how its features have been improved to better meet its key goals of durability, scalability, and real-time data.

With respect to its initial design, Jun relates how Kafka was conceived from the ground up as a distributed system, with compute and storage always maintained as separate entities, so that they could scale independently. Additionally, he shares that Kafka was deliberately made for high throughput since many of the popular messaging systems at the time of its invention were single node, but his team needed to process large volumes of non-transactional data, such as application metrics, various logs, click streams, and IoT information.
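As a rough illustration of what tuning for throughput looks like in practice, here is a minimal sketch of a Kafka producer configured for batching and compression. The specific values (batch size, linger time, codec), the broker address, and the topic name are illustrative assumptions, not settings from the episode.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class ThroughputProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringSerializer");
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringSerializer");
            // Batch many small records into fewer, larger requests.
            props.put(ProducerConfig.BATCH_SIZE_CONFIG, 64 * 1024);   // 64 KB batches (illustrative)
            props.put(ProducerConfig.LINGER_MS_CONFIG, 20);           // wait up to 20 ms to fill a batch
            props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4"); // compress batches on the wire

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                for (int i = 0; i < 1_000; i++) {
                    // "metrics" is a placeholder topic name
                    producer.send(new ProducerRecord<>("metrics", "key-" + i, "value-" + i));
                }
                producer.flush();
            }
        }
    }

Batching and compression trade a little latency for much higher throughput, which suits the non-transactional workloads (metrics, logs, click streams) described above.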

As for the evolution of Kafka's features, Jun explains two topics in particular at length:

  • Consumer rebalancing protocol: The original "stop the world" approach to Kafka's consumer rebalancing, although revolutionary at the time of its launch, was eventually improved to take a more incremental approach (see the sketch after this list).
  • Cluster metadata: Moving cluster metadata from external ZooKeeper to the built-in KRaft protocol lets a cluster scale by roughly a factor of ten, according to Jun, and it also means you only need to worry about running a single binary.
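To make the incremental approach concrete, below is a minimal sketch, assuming a locally reachable broker, of a Java consumer that opts into Kafka's cooperative (incremental) rebalancing by selecting the CooperativeStickyAssignor; the group ID and topic name are placeholders.

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.CooperativeStickyAssignor;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class CooperativeConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group");              // placeholder group ID
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");
            // Opt into incremental cooperative rebalancing instead of the
            // original eager ("stop the world") protocol.
            props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
                    CooperativeStickyAssignor.class.getName());

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("demo-topic")); // placeholder topic
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<String, String> record : records) {
                        System.out.printf("%s-%d %s%n", record.topic(), record.partition(), record.value());
                    }
                }
            }
        }
    }

With the cooperative assignor, a rebalance revokes only the partitions that actually move, so the rest of the group keeps consuming instead of stopping entirely.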

The Kafka Internals course consists of eleven concise modules, each dense with detail, covering Kafka fundamentals in technical depth. The course also pairs with four hands-on exercise modules led by Senior Developer Advocate Danica Fine.

EPISODE LINKS


Chapters

1. Intro (00:00:00)

2. Kafka Internals course (00:01:50)

3. What is a Kafka broker? (00:02:26)

4. Achieving high throughput (00:07:07)

5. High availability guarantee (00:15:03)

6. Consumer Group Protocol (00:17:04)

7. "Stop the world" rebalance (00:22:03)

8. Control Plane and Data Plane (00:25:35)

9. Continue innovation to serve stronger and better user needs (00:35:06)

10. Cluster metadata (00:37:16)

11. Jun's favorite module(s) in the course (00:42:34)

12. It's a wrap (00:46:31)

265 episodes

