Kafka 0.8 Producer Performance

At LiveRamp, we constantly face scaling challenges as the volume of data that our infrastructure must deal with continues to grow. One such challenge involves the logging system. At present we use Scribe as the transport mechanism to get logs from our webapp servers into our HDFS cluster. Scribe has served us well, but we are looking for alternatives because it has the following shortcomings:

  • It provides no support for compression
  • Consumers run in batches (map-reduce jobs) so real-time stats are not possible
  • It is no longer in active development

One of the most promising alternatives to Scribe that addresses all of the above is Kafka. We used Kafka to build a real-time stats system prototype during our last Hackweek, and saw enough promise to do some more in-depth testing. In this post we will focus on producer performance and scaling. Since we intend to put producers in our webapp servers, we are interested in both high overall throughput and low latency when sending individual messages.

Why Kafka 0.8

At the time of this writing, Kafka 0.8 has not been released, and documentation for it is scarce. However, since it is a backwards incompatible release that introduces a number of important features, it would make little sense for anyone just getting started with Kafka to invest development effort in the previous version.

All tests in this post were run on this revision of the 0.8 branch.

Setup

Brokers

We are starting with a modestly sized cluster of three machines.

Each machine has two pairs of disks in a mirroring configuration (RAID-1), which allow us to take advantage of the new multiple data directories feature introduced in Kafka 0.8. This makes it possible for a topic to have separate partitions on different disks, which should significantly increase the throughput per broker. This behavior is configured in the log.dirs setting as shown in the broker configuration below. We used default values for most other settings.
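The broker configuration listing did not survive formatting; the sketch below reconstructs the relevant part described above (broker id, hosts, and paths are illustrative, not our actual values), with one data directory per RAID-1 pair:

```properties
broker.id=1
port=9092
zookeeper.connect=zk1:2181,zk2:2181,zk3:2181

# One data directory per RAID-1 disk pair, so a topic's partitions
# can be spread across both disk pairs of a broker (new in 0.8)
log.dirs=/mnt/disk1/kafka-logs,/mnt/disk2/kafka-logs
```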

As recommended by the Kafka documentation, we use a separate cluster of three dedicated machines for ZooKeeper. All machines are connected with gigabit links.

Producers

Our real use case involves a number of webapp servers each producing a relatively modest volume of logs. For this test, however, we used only a few dedicated producer machines running a custom-made tool that simulates the real load. Each producer was configured as follows:
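The original configuration listing was lost in formatting; the sketch below reconstructs the settings discussed in the text (broker hosts and the exact batch/queue values are illustrative):

```properties
metadata.broker.list=broker1:9092,broker2:9092,broker3:9092
serializer.class=kafka.serializer.StringEncoder

# Asynchronous mode: queue messages in memory and send them in batches
producer.type=async
batch.num.messages=200
queue.buffering.max.ms=5000

# Batching makes compression of repetitive JSON log lines much more effective
compression.codec=gzip
```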

The most important setting here is producer.type, which we set to async. Asynchronous mode is essential to get the most out of Kafka in terms of throughput. In this mode, each producer keeps an in-memory queue of messages that are sent in batch to the broker when a pre-configured batch size or time interval has been reached. This makes compression much more efficient, especially in a use case like ours in which log lines have string representations of JSON objects, and the same keys are repeated over and over across lines. Having fewer, larger messages also helps to achieve better network utilization.

Performance Tools

The Kafka distribution provides a producer performance tool that can be invoked with the script bin/kafka-producer-perf-test.sh. While this tool is very useful and flexible, we used it only to corroborate that the results obtained with our own custom tool made sense, for the following reasons:

  • Our tool is written in Java and uses the producer from the Java API.
  • While the message size is adjustable in the Kafka tool, we wanted to use messages with the same content structure as our real production logs.
  • Not all configuration parameters are exposed by the Kafka tool.
  • Our tool makes it possible to set a target throughput, which limits the rate at which threads push messages to the brokers. This is necessary to evaluate latency under realistic load conditions.
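Our tool's source is not public, but the target-throughput feature amounts to pacing each sender thread against a schedule. A minimal sketch of such a limiter (class and method names are our own illustration, not the actual tool's code):

```java
/**
 * Paces a sender so that long-run throughput stays at a target rate.
 * Before each send, the caller sleeps for nanosToWait(System.nanoTime()).
 */
public class ThroughputThrottle {
    private final double targetBytesPerSec;
    private final long startNanos;
    private long bytesSent = 0;

    public ThroughputThrottle(double targetBytesPerSec, long startNanos) {
        this.targetBytesPerSec = targetBytesPerSec;
        this.startNanos = startNanos;
    }

    /** Nanoseconds to sleep so the next send does not exceed the target rate. */
    public long nanosToWait(long nowNanos) {
        // The moment at which everything sent so far is "due" at the target rate
        long dueNanos = startNanos + (long) (bytesSent / targetBytesPerSec * 1e9);
        return Math.max(0L, dueNanos - nowNanos);
    }

    /** Record a completed send of the given payload size. */
    public void onSend(int payloadBytes) {
        bytesSent += payloadBytes;
    }
}
```

Each producer thread would then loop: sleep for nanosToWait, call the Kafka producer's send, and report the message size via onSend.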

Throughput Results

Baseline Performance

The Kafka documentation claims that producers can push about 50MB/sec through a system with a single broker as long as the batch size is not too small (the default value of 200 should be large enough). We were able to verify this claim very quickly for Kafka 0.7.2 by running the bundled producer performance script against a fresh installation and confirming roughly that figure.
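The original command listing was lost in formatting; per the author's reply in the comments below, the baseline runs used the bundled script with the following settings (shown here as invoked on the 0.8 branch; broker address illustrative):

```
bin/kafka-producer-perf-test.sh --broker-list=localhost:9092 \
  --messages 10000000 --topic test --threads 10 \
  --message-size 1000 --batch-size 200 --compression-codec 1
```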

Running an equivalent command on a fresh installation of Kafka 0.8, however, gave us markedly worse results.

This is because in an effort to increase availability and durability, version 0.8 introduced intra-cluster replication support, and by default a producer waits for an acknowledgement response from the broker on every message (or batch of messages if async mode is used). It is possible to mimic the old behavior, but we were not very interested in that given that we intend to use replication in production.
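The acknowledgement behavior is controlled by the 0.8 producer's request.required.acks setting; a fragment for reference:

```properties
#  0 -> never wait for an acknowledgement (closest to the 0.7 behavior)
#  1 -> wait until the partition leader has written the message
# -1 -> wait until all in-sync replicas have the message
request.required.acks=1
```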

Performance degraded further once we started using a sample of real ~1KB sized log messages rather than the synthetic messages produced by the Kafka tool, resulting in a throughput of about 10 MB/sec.

All throughput numbers refer to uncompressed data.

Number of Producers

Our first test consisted of evaluating the impact of adding producer machines.

By adding identically configured producer machines, each pushing as many messages as it can, the overall throughput increases slightly. We also observed that throughput was distributed very evenly across the machines.

Number of Partitions

Next, using all ten producer machines at our disposal, we tested the effect of different numbers of partitions.

Throughput increases very markedly at first, as more brokers and the disks on them start hosting different partitions. Once all brokers and disks are in use, though, adding more partitions does not seem to have any further effect.
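For reference, we created topics with different partition and replica counts using the admin script bundled with the 0.8 branch; the invocation looks roughly like this (host illustrative):

```
bin/kafka-create-topic.sh --zookeeper localhost:2181 \
  --topic test --partition 10 --replica 3
```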

Number of Replicas

As we saw in the baseline performance tests, even using a single replica represents a big performance hit when compared to the old system which had no support for replication at all. We were interested in knowing how much of an additional hit we would get when using two and three replicas.

Fortunately, the extra performance hit turned out to be quite small.

Number of Topics

Finally, we tested the effect of increasing the number of topics. Our use case requires only a handful of topics, so we only experimented with small numbers.

Update: Michael G. Noll (see comment below) kindly pointed out that throughput could be improved by disabling ack messages, and provided this post as a reference for what could be expected. I reran some of the tests; here are some preliminary results:

  • Using the most realistic scenario (10 partitions, 10 producer machines, 3 replicas, and 1-10 topics, same as the last chart above), I only obtained a very modest 12% increase in average throughput.
  • Since this is very different from the ~2x mentioned in the post, I did some more digging and found the following:
    • Using one producer machine and a topic with 10 partitions and 3 replicas, I was able to reproduce the 2x improvement (21 to 44 MB/sec) with both Kafka's tool and our own (set to use synthetic messages).
    • When switching our tool back to real messages (a sample of production logs), that 2x became ~12%.
    • Therefore, it appears that the ack message is no longer a big bottleneck once real messages are used.

Latency Results

Having an idea of the maximum throughput that can be achieved, we investigated the average and maximum latency of sending an individual message, which directly impacts the loading time for a browser hitting our webapp servers (this is the time for a thread using the Kafka producer to return from a call to send, NOT the full producer-broker-consumer cycle). To do this, we configured our tool to limit the rate at which it pushes messages according to a target throughput, and monitored latency at different throughput values.
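Concretely, the measurement is just wall-clock time around the send call, aggregated per run. A sketch of the kind of aggregator involved (our own illustration, not the actual tool's code):

```java
/**
 * Aggregates per-call latencies, recorded around the producer call, e.g.:
 *   long t0 = System.nanoTime();
 *   producer.send(message);              // the Kafka producer call being timed
 *   stats.record(System.nanoTime() - t0);
 */
public class LatencyStats {
    private long count = 0;
    private long totalNanos = 0;
    private long maxNanos = 0;

    public void record(long nanos) {
        count++;
        totalNanos += nanos;
        if (nanos > maxNanos) maxNanos = nanos;
    }

    /** Average latency in milliseconds over all recorded calls. */
    public double averageMillis() {
        return count == 0 ? 0.0 : (totalNanos / (double) count) / 1_000_000.0;
    }

    /** Worst single-call latency in milliseconds. */
    public double maxMillis() {
        return maxNanos / 1_000_000.0;
    }
}
```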




The average latency stays consistently below 0.02 ms as long as the target throughput does not reach the maximum throughput. Unfortunately, the maximum latency hovers around 120 ms even at very low throughput. Once the producers start trying to push more messages than the brokers can handle, both average and maximum latency increase dramatically.

Finally, we set queue.enqueue.timeout.ms to 0 in an attempt to prevent the Kafka producer from ever blocking on a call to send, hoping that this would decrease the maximum latency. Unfortunately, this had no effect whatsoever. We got identical results to the graphs above. The only difference was that, as expected, producers started throwing exceptions (kafka.common.QueueFullException) when the target throughput reached the maximum throughput. Also, we observed that once exceptions were thrown, the producers would hang indefinitely despite invoking the close method, and a call to System.exit was required to force the application to quit.

Conclusions

Based on the numbers obtained above, we can draw the following preliminary conclusions:

  • Kafka 0.8 improves availability and durability at the expense of some performance.
  • Throughput seems to scale very well as the number of brokers and/or disks per broker increases.
  • Moderate numbers of producer machines and topics have no negative effect on throughput compared to a single producer and topic.
  • When configured in async mode, producers have very low average latency for each message sent, but there are outliers that take over 100 ms, even when operating at low overall throughput. This poses a problem for our use case.
  • Trying to push more data than the brokers can handle for any sustained period of time has catastrophic consequences, regardless of what timeout settings are used. In our use case this means that we need to either ensure we have spare capacity for spikes, or use something on top of Kafka to absorb spikes.

Next Steps

We have just scratched the surface and there is still a lot of work to be done. Following is a list of some of the things we will probably look into:

  • Perform a similar analysis on consumers to make sure high throughput can be sustained regardless of how many consumers are active.
  • Experiment with custom partitioners so that each producer needs to communicate with only a subset of the brokers (if/when we add more broker nodes to the cluster).
  • Set up a mirroring configuration in which separate Kafka clusters from multiple cloud regions send their traffic to a master cluster.

Feedback Welcome

It is our hope that the information we provided will be useful for people considering using Kafka for the first time or switching from 0.7 to 0.8. If you have any questions, comments or suggestions please leave them below.


11 Responses to “Kafka 0.8 Producer Performance”

  1. Michael G. Noll Apr 10, 2013 at 2:00 am #

    Thanks for sharing your benchmarking results, Piotr!

    I'd have one question regarding your benchmark setup:

    > This is because in an effort to increase availability and durability,
    > version 0.8 introduced intra-cluster replication support, and by
    > default a producer waits for an acknowledgement response from
    > the broker on every message (or batch of messages if async mode
    > is used). It is possible to mimic the old behavior, but we were not
    > very interested in that given that we intend to use replication in production.

    Did you try disabling message acking by setting request.required.acks to zero?

    All things being equal this will cause Kafka to continue using replicas (which, as you said, is what you want to use in production) but it will prevent the producer from waiting to receive acknowledgements from the broker for every message/batch of messages being enqueued by the producer.

    I'd be curious to learn how this would change -- for better or worse -- the reported numbers.

    See also here:
    https://cwiki.apache.org/confluence/display/KAFKA/Performance+testing

    • Piotr Kozikowski Apr 11, 2013 at 12:41 pm #

      Hi Michael, I believe I tested that at some point when comparing the baseline performance of 0.7 vs 0.8. Disabling message acking resulted in a significant improvement, but it was still not quite as good as the 0.7 version. I will rerun some of the tests with that setting and post an update with the results when I get a chance.

  2. Ben Jul 29, 2013 at 9:26 am #

    Hey, just wondering: is the code that you used to do the above tests available somewhere?

    • Piotr Kozikowski Oct 17, 2013 at 10:03 am #

      Sorry for the late response. Unfortunately, the code is ingrained in one of our internal projects, and it would take some effort to extract it and make it available in a public repo. We may do that at some point in the future, but there are no specific plans at this point.

  3. medha Oct 17, 2013 at 2:30 am #

    I wanted to know: can we develop Kafka content-based? I am doing research in high-performance pub/sub using manycore or multicore architectures.

    • Piotr Kozikowski Oct 17, 2013 at 10:00 am #

      I'm not sure I understand the question. Could you be more specific?

  4. Henri Jan 2, 2014 at 5:24 am #

    Hi, did you ever investigate to find the root cause for high maximum latency? Could it be garbage collection that causes the outliers?

  5. ZOE Jan 9, 2014 at 3:02 am #

    Hi Piotr, I'm doing a performance test with 3 brokers and 1 producer.
    I tested the effect of using different numbers of partitions.
    Unfortunately, my test throughput is not as good as your result.
    (Your throughput roughly doubles as the number of partitions increases, but mine only grows by about 1.2-1.4x.)
    Could you let me know the detailed settings of the perf-test options, or any other clue?

    Best,

    • Piotr Kozikowski Mar 27, 2014 at 1:11 pm #

      We only used the perf-test with the baseline performance settings (1 machine):

      bin/kafka-producer-perf-test.sh --broker-list=localhost:9092 --messages 10000000 --topic test --threads 10 --message-size 1000 --batch-size 200 --compression-codec 1

      The rest of the results were obtained using our own tool. The main reason was that the perf-tool uses synthetic messages that compress unrealistically well, so the performance reported is not reproducible once real messages are used. Our tool used a sample of real production messages. If you see a big performance difference with your results this is something to take into consideration.

  6. aan Mar 24, 2014 at 3:56 pm #

    Hello Piotr,
    We are also planning to write our own Java tool to use along with JMeter. Have you used JMeter at all? Thanks!

    • Piotr Kozikowski Mar 27, 2014 at 1:03 pm #

      Hi Aan,

      We did not use JMeter or anything similar for this. Our custom tool was very basic. It essentially replicates what the Scala tool bundled with Kafka does, and adds a few extra features. The idea was to make it as comparable to the Kafka benchmarks already available as possible, while simulating our use case in a reasonably realistic way.
