BENCHMARKING LIBEVENT AGAINST LIBEV


2011-01-11, Version 6

This document briefly describes the results of running the libevent benchmark program against both libevent and libev.

Libevent Overview

Libevent (first released on 2000-11-14) is a high-performance event loop that supports a simple API, two event types (which can be either I/O+timeout or signal+timeout) and a number of "backends" (select, poll, epoll, kqueue and /dev/poll at the time of this writing).
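
As a minimal illustration of that API, a watcher combining I/O and a timeout might be set up like this (a sketch against the libevent 1.x interface; the callback body and the choice of stdin as the descriptor are illustrative):

    #include <sys/types.h>
    #include <sys/time.h>
    #include <event.h>

    /* Called when the descriptor becomes readable or the timeout expires. */
    static void read_cb(int fd, short events, void *arg)
    {
        if (events & EV_TIMEOUT)
            return;                 /* the timeout fired, no data arrived */
        /* ... read from fd ... */
    }

    int main(void)
    {
        struct event ev;
        struct timeval tv = { 10, 0 };  /* 10 second timeout */

        event_init();                   /* initialise the default event base */
        event_set(&ev, 0 /* stdin */, EV_READ, read_cb, NULL);
        event_add(&ev, &tv);            /* one watcher: I/O + timeout */
        return event_dispatch();        /* run the event loop */
    }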

Algorithmically, it uses red-black trees to organise timers and doubly linked lists for other event types. It can have at most one read and one write watcher per file descriptor or signal. It also offers a simple DNS resolver and server, HTTP server and client code, as well as a socket buffering abstraction.

Libev Overview

Libev (first released on 2007-11-12) is also a high-performance event loop, supporting eight event types (I/O, real time timers, wall clock timers, signals, child status changes, idle, check and prepare handlers).

It uses a priority queue to manage timers and uses arrays as its fundamental data structure. It has no artificial limitations on the number of watchers waiting for the same event. It offers an emulation layer for libevent and optionally the same DNS, HTTP and buffer management (by reusing the corresponding libevent code through its emulation layer).
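
For comparison, roughly the same watcher written against the native libev API could look like this (a sketch; the watcher names are illustrative, and ev_run/ev_break are the libev 4 spellings of what libev 3.x called ev_loop/ev_unloop):

    #include <ev.h>

    /* I/O watcher: invoked when the descriptor becomes readable. */
    static void read_cb(struct ev_loop *loop, ev_io *w, int revents)
    {
        /* ... read from w->fd ... */
    }

    /* Timers are separate watcher types rather than part of the I/O event. */
    static void timeout_cb(struct ev_loop *loop, ev_timer *w, int revents)
    {
        ev_break(loop, EVBREAK_ALL);    /* stop the loop on timeout */
    }

    int main(void)
    {
        struct ev_loop *loop = EV_DEFAULT;
        ev_io io_watcher;
        ev_timer timeout_watcher;

        ev_io_init(&io_watcher, read_cb, 0 /* stdin */, EV_READ);
        ev_io_start(loop, &io_watcher);

        ev_timer_init(&timeout_watcher, timeout_cb, 10., 0.);
        ev_timer_start(loop, &timeout_watcher);

        ev_run(loop, 0);
        return 0;
    }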

Benchmark Setup

The benchmark is very simple: first a number of socket pairs is created, then event watchers for those pairs are installed, and then a (smaller) number of "active clients" send and receive data on a subset of those sockets.
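
In outline, the structure is roughly the following (a simplified sketch using the libevent API, not the actual bench.c; NUM_PAIRS and NUM_ACTIVE are illustrative parameters):

    #include <sys/types.h>
    #include <sys/time.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <event.h>

    #define NUM_PAIRS  1000   /* total socket pairs (illustrative) */
    #define NUM_ACTIVE  100   /* "active clients" writing data (illustrative) */

    static struct event events[NUM_PAIRS];
    static int pairs[NUM_PAIRS][2];

    static void read_cb(int fd, short which, void *arg)
    {
        char buf[1];
        read(fd, buf, sizeof buf);   /* consume the byte written below */
    }

    int main(void)
    {
        int i;

        event_init();

        /* Set-up phase: create the socket pairs and install one read
           watcher per pair; this is part of the "total time" measured. */
        for (i = 0; i < NUM_PAIRS; i++) {
            socketpair(AF_UNIX, SOCK_STREAM, 0, pairs[i]);
            event_set(&events[i], pairs[i][0], EV_READ | EV_PERSIST,
                      read_cb, NULL);
            event_add(&events[i], NULL);
        }

        /* Active phase: a smaller number of clients each write one byte,
           then the loop is polled once to dispatch the resulting reads. */
        for (i = 0; i < NUM_ACTIVE; i++)
            write(pairs[i][1], "e", 1);

        event_loop(EVLOOP_ONCE);     /* process the pending events */
        return 0;
    }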

The benchmark program used was bench.c, taken from the libevent distribution, modified to collect the total time per test iteration, to optionally enable timeouts on the event watchers, to optionally use the native libev API, and to output the times differently.

For libevent, version 1.4.3 was used, while for libev, version 3.31 was used. Both libraries and the test program were compiled by gcc version 4.2.3 with the optimisation options -O3 -fno-guess-branch-probability -g and run on a Core2 Quad at 3.6GHz with Debian GNU/Linux (Linux version 2.6.24-1). Both libraries were configured to use the epoll interface (the most performant interface available in either library on the test machine).

The same benchmark program was used both to run the libevent vs. libevent emulation benchmarks (the same code paths/source lines were executed in this case) and to run the native libev API benchmark (with different code paths, but functionally equivalent).

The difference between the libevent and libev+libevent emulation versions is strictly limited to the use of different header files (event.h from libevent, or the event.h emulation from libev).

Each run of the benchmark program consists of two iterations, outputting the total time per iteration as well as the time used in handling the requests only. The program was run for various total numbers of file descriptors. Each measurement is composed of six individual runs, and the result used is the minimum time from these runs, for the second iteration of the test.

The test program was run on its own CPU, with realtime priority, to achieve stable timings.

First Benchmark: no timeouts, 100 and 1000 active clients

Without further ado, here are the results:

The left two graphs show the overall time spent for setting up watchers, preparing the sockets and polling for events, while the right two graphs only include the actual poll processing. The top row represents 100 active clients (clients doing I/O), the bottom row uses 1000. All graphs have a logarithmic fd-axis to sensibly display the large range of file descriptor numbers of 100 to 100000 (in reality, these are socket pairs, so there are actually twice as many file descriptors in the process).

Discussion

The total time per iteration increases much faster for libevent than for libev, taking almost twice as much time as libev regardless of the number of clients. Both exhibit similar growth characteristics, though.

The polling time is also very similar, with libevent being consistently slower in the 1000-fd case, and virtually identical timings in the 100-fd case. The absolute difference, however, is small (less than 5%).

The native API timings are consistently better than the emulation API, but the absolute difference again is small.

Interpretation

The cost for setting up or changing event watchers is clearly much higher for libevent than for libev, and API differences cannot account for this (the difference between native API and emulation API in libev is very small). This is important in practice, as the libevent API has no good interface to change event watchers or timeouts on the fly.
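
Concretely, the only way to reset a pending timeout through the libevent 1.x API is to remove and re-insert the watcher (a sketch; reset_timeout is a hypothetical helper, not a libevent function):

    #include <sys/types.h>
    #include <sys/time.h>
    #include <event.h>

    /* Changing a pending timeout means deleting and re-adding the
       watcher, i.e. two updates of the internal timer structure. */
    static void reset_timeout(struct event *ev, struct timeval *tv)
    {
        event_del(ev);        /* drop the watcher and its pending timeout */
        event_add(ev, tv);    /* re-insert it with the fresh timeout */
    }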

At higher numbers of file descriptors, libev is consistently faster than libevent.

Also, the libevent API emulation itself only results in a small overhead in this case.

Second Benchmark: idle timeouts, 100 and 1000 active clients

Again, the results first:

The graph layout is identical to the first benchmark. The difference is that this time, there is a random timeout attached to each socket. This was done to mirror real-world conditions where network servers usually need to maintain idle timeouts per connection, and those idle timeouts have to be reset on activity. This is implemented by setting a random 10 to 11 second timeout during set-up, and deleting/re-adding the event watcher each time a client receives data.

Discussion

The graphs have both changed dramatically. The total time per iteration has increased dramatically for libevent, but only slightly for the libev curves. The difference between native and emulated API has become more apparent.

The event processing graphs look very different now, with libev being consistently faster (by a factor of two to almost three) over the whole range of file descriptor numbers. The growth behaviour exhibited is roughly similar, but much lower for libev than for libevent.

As for libev alone, the native API is consistently faster than the emulated API; regarding poll times, the difference is noticeable compared to the first benchmark, but still relatively small when compared to the difference between libevent and libev. The overall time, however, has been almost halved.

Interpretation

Both libev and libevent use a binary heap for timer management (earlier versions of libevent used a red-black tree), which explains the similar growth characteristics. Apparently, libev makes better use of the binary heap than libevent, even with identical API calls (note that the upcoming 3.33 release of libev, not used in this benchmark, uses a cache-aligned 4-heap and benchmarks consistently faster than 3.31).
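
As a side note on the 4-heap: in a 4-ary heap all four children of a node sit at consecutive array indices, so they typically share a cache line, and the tree is only half as deep as a binary heap (a sketch of the index arithmetic; the macro names are made up):

    /* Index arithmetic for a 4-heap stored in a flat array, root at 0. */
    #define HEAP4_CHILD(i, k)  (4 * (i) + (k) + 1)   /* k-th child, k = 0..3 */
    #define HEAP4_PARENT(i)    (((i) - 1) / 4)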

Another reason for the higher performance, especially in the set-up phase, might be that libevent calls the epoll_ctl syscall on each change (twice per fd for del/add), while libev only sends changes to the kernel before the next poll (as a single EPOLL_CTL_MOD).
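
At the syscall level, the difference would look roughly like this (illustrative only, not the libraries' actual code; the two function names are made up):

    #include <sys/epoll.h>

    /* libevent 1.4 pushes every watcher change to the kernel immediately:
       a del/add cycle costs two epoll_ctl syscalls per fd. */
    static void change_eagerly(int epfd, int fd, struct epoll_event *ev)
    {
        epoll_ctl(epfd, EPOLL_CTL_DEL, fd, ev);
        epoll_ctl(epfd, EPOLL_CTL_ADD, fd, ev);
    }

    /* libev queues changes and flushes them just before the next poll,
       so a del followed by an add collapses into a single modification. */
    static void change_batched(int epfd, int fd, struct epoll_event *ev)
    {
        epoll_ctl(epfd, EPOLL_CTL_MOD, fd, ev);
    }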

The native API is noticeably faster in this case (almost twice as fast overall). The most likely reason is again timer management, as libevent uses two O(log n) operations, while libev needs a single and simpler O(log n) operation.
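
The native libev facility suited to this pattern is ev_timer_again, which re-arms a repeating timer in place instead of stopping and restarting it (a sketch; on_activity is a hypothetical callback):

    #include <ev.h>

    /* Re-arming an idle timeout with the native libev API: adjust the
       repeat interval and update the timer in place, a single heap
       operation instead of a delete followed by an insert. */
    static void on_activity(struct ev_loop *loop, ev_timer *w)
    {
        w->repeat = 10.;            /* fresh 10 second idle timeout */
        ev_timer_again(loop, w);    /* re-arm without stop/start */
    }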

Summary

The benchmark clearly shows that libev has much lower costs and is consequently faster than libevent. API design issues also play a role in the results, as the native API can do much better than the emulated API when timers are being used. Even though this puts libev at a disadvantage (the emulation layer has to emulate some aspects of libevent that its native API does not have; it also has to map each libevent watcher to three of its own watchers, and has to run thunking code to map from those three watchers to the libevent user code, due to the different structure of their callbacks), it is still much faster than libevent even when using the libevent emulation API.

Appendix: Benchmark graphs for libev 4.03, libevent 1.4.13, libevent 2.0.10

Here are the benchmark graphs, redone with more current versions. Nothing drastic has changed: libevent2 seems to be a tiny bit slower (probably due to the extra thread locking), libev a tiny bit faster.

Author/Contact

Marc Alexander Lehmann