
Tim Berglund

VP Developer Relations

Kafka Producers

The API surface of the producer library is fairly lightweight: In Java, there is a class called KafkaProducer that you use to connect to the cluster. You give this class a map of configuration parameters, including the address of some brokers in the cluster, any appropriate security configuration, and other settings that determine the network behavior of the producer. There is another class called ProducerRecord that you use to hold the key-value pair you want to send to the cluster.
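To make that concrete, here is a minimal sketch of what constructing a producer might look like. The broker addresses, topic name, key, and value are placeholders invented for illustration, not anything prescribed by the library; a real application would substitute its own cluster addresses, serializers, and data types.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class HelloProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // A couple of brokers are enough to bootstrap; the client discovers the rest of the cluster.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092,broker2:9092");
        // Serializers turn keys and values into bytes on the wire.
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        // try-with-resources closes the producer, flushing any buffered records on the way out.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            ProducerRecord<String, String> record =
                new ProducerRecord<>("greetings", "key-1", "hello, world");
            producer.send(record);
        }
    }
}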

To a first-order approximation, this is all the API surface area there is to producing messages. Under the covers, the library is managing connection pools, network buffering, waiting for brokers to acknowledge messages, retransmitting messages when necessary, and a host of other details no application programmer need concern herself with.
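If you do want to observe or tune some of that machinery, the producer exposes it through configuration and through an optional callback on send(). The fragment below continues the sketch above (reusing its props, producer, and record); the particular values are illustrative, not recommendations.

// Illustrative settings only; the defaults are sensible and these knobs rarely need turning.
props.put(ProducerConfig.ACKS_CONFIG, "all");        // wait for full acknowledgment from the brokers
props.put(ProducerConfig.LINGER_MS_CONFIG, 10);      // allow up to 10 ms for a batch to fill
props.put(ProducerConfig.BATCH_SIZE_CONFIG, 32768);  // per-partition batch buffer, in bytes

// send() is asynchronous; the callback fires once the broker acknowledges the record
// (or the send ultimately fails after the library's internal retries).
producer.send(record, (metadata, exception) -> {
    if (exception != null) {
        exception.printStackTrace();
    } else {
        System.out.printf("Acknowledged at %s-%d, offset %d%n",
            metadata.topic(), metadata.partition(), metadata.offset());
    }
});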

Producer Example

Whether you realize it yet or not, you should be extremely happy someone wrote this library for you. Here is a simple example that produces ten Payment records to a topic called transactions:

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.KafkaException;

// Assumes "props" holds the producer configuration and "Payment" is the value type used in this example.
try (KafkaProducer<String, Payment> producer = new KafkaProducer<>(props)) {

    for (long i = 0; i < 10; i++) {
        final String orderId = "id" + Long.toString(i);
        final Payment payment = new Payment(orderId, 1000.00d);
        final ProducerRecord<String, Payment> record =
            new ProducerRecord<>("transactions",
                                 payment.getId().toString(),
                                 payment);
        producer.send(record);  // asynchronous; the producer batches and transmits in the background
    }
} catch (final KafkaException e) {
    // Surfaces configuration, serialization, or send failures raised by the producer
    e.printStackTrace();
}

One note: Remember the discussion of partitions up above? Partitions are what take a single topic and break it up into many individual logs that can be hosted on different brokers. Well, it is the producer that makes the decision about which partition to send each message—whether to round-robin keyless messages, compute the destination partition by hashing the key, or apply a custom-configured scheme (although this isn’t very commonly used). In a real sense, partitioning lives in the producer.
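To make that last, custom option concrete, here is a sketch of the plug-in point the producer exposes: the org.apache.kafka.clients.producer.Partitioner interface. The class name and routing rule below are invented for illustration and are not the default partitioning algorithm.

import java.util.Arrays;
import java.util.Map;
import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;

// Hypothetical example: keyless records all land on partition 0, and keyed records are
// hashed across the topic's partitions. Most applications never need this; the default
// behavior (hash the key, spread keyless records) is almost always what you want.
public class ExamplePartitioner implements Partitioner {

    @Override
    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {
        int numPartitions = cluster.partitionsForTopic(topic).size();
        if (keyBytes == null) {
            return 0;                                              // route keyless records to partition 0
        }
        return (Arrays.hashCode(keyBytes) & 0x7fffffff) % numPartitions;  // hash keyed records
    }

    @Override
    public void configure(Map<String, ?> configs) { }

    @Override
    public void close() { }
}

You would enable a class like this with props.put(ProducerConfig.PARTITIONER_CLASS_CONFIG, ExamplePartitioner.class.getName()); in the common case, you write no partitioning code at all and let the producer hash the key.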



Producers (Video Transcript)

Hey, Tim Berglund with Confluent here to talk to you about Kafka producers. All right, now let's get outside of the Kafka cluster proper, that group of brokers doing all that replication, partition management, pub-subbing, and everything else they do. Let's get outside of there and think about the applications that use Kafka: producers and consumers. This is where we spend most of our time as developers, because these are client applications. This is code that you write. Producers put messages into topics, and consumers read messages out of topics. Every component of the Kafka platform that is not a Kafka broker is, at bottom, either a producer, a consumer, or both. Producing and consuming is how you interface with the cluster.

Let's zero in on producers first. Now, the API surface of the producer library is fairly lightweight. In Java, which is the native language of Apache Kafka, there's a class called KafkaProducer that you use to connect to the cluster. You give this class a map of configuration parameters, including the address of some brokers in the cluster (it doesn't have to be all of them, just two or three to get it started), any appropriate security configuration, and any other settings that determine the network behavior of the producer. There's quite a bit to tune in there, and ideally you don't need to turn many of those knobs, but you can if you need to, and they're all fairly well documented. There's another class called ProducerRecord that you use to hold the key-value pair you want to send to the cluster. Remember, events are modeled as key-value pairs, and ProducerRecord is the Java class that wraps that event.

To a first-order approximation, this is the entire API surface area that you need to think about to produce messages. Under the covers, of course, there's a lot more going on: the library is managing connection pools, doing network buffering, waiting for brokers to acknowledge messages so it can free up buffer space, retransmitting messages when necessary, and handling a host of other details that no self-respecting application programmer needs to concern herself with most of the time. Whether you know it or not, you are really glad somebody wrote this library for you.

Now, one note: elsewhere, I talked about partitions. Partitions are what take a single topic and break it up into many individual logs that can be hosted on different brokers. Well, it is the producer that makes the decision about which partition to send each message to: whether to round-robin messages that have no keys, to compute the destination partition by hashing the key, or even to apply a custom-configured scheme. That last option isn't very commonly used, but it's a thing you can do. In a real sense, partitioning lives in the producer.

So that is the Kafka producer. I strongly recommend at this point that you get your hands dirty with some code. This is really where you need to see the API, type things out for yourself, and watch them run.