IEEE/ACM TRANSACTIONS ON NETWORKING, VOL. 16, NO. 1, FEBRUARY 2008
Abstract—The Lightweight Directory Access Protocol (LDAP) is being used for an increasing number of distributed directory applications. We describe a tool to analyze the performance of LDAP directories, and study the performance of a LDAP directory under a variety of access patterns. In the experiments, we use a LDAP schema proposed for the administration of Service Level Specifications (SLSs) in a differentiated services network. Individual modules in the server and client code are instrumented to obtain a detailed profile of their contributions to the overall system latency and throughput. We first study the performance under our default experiment setup. We then study the importance of the factors in determining scalability, namely front-end versus back-end processes, CPU capability, and available memory. At high loads, the connection management latency increases sharply to dominate the response in most cases. The TCP Nagle algorithm is found to introduce a very large additional latency, and it appears beneficial to disable it in the LDAP server. The CPU capability is found to be significant in limiting the performance of the LDAP server, and for larger directories, which cannot be kept in memory, data transfer from the disk also plays a major role. The scaling of server performance with the number of directory entries is determined by the increase in back-end search latency, and scaling with directory entry size is limited by the front-end encoding of search results and, for out-of-memory directories, by the disk access latency. We investigate different mechanisms to improve the server performance.

Index Terms—Benchmark, diffServ, directory service, LDAP, measurement, performance, policy.

I. INTRODUCTION

THE Lightweight Directory Access Protocol (LDAP) is being used for an increasing number of directory applications. Applications include personnel databases for administration, tracking schedules [1], address translation databases for IP telephony, network databases for storing network configuration information and service policy rules [2]–[4], and storage of authentication rules [5], [6]. The H.350 series of ITU recommendations describes a Directory Services architecture for multimedia conferencing using LDAP, and was finalized at the last meeting of ITU-T Study Group 16 in May 2003. H.350-based directory services can provide automated configuration of endpoints, association of persons with endpoints, searchable white pages, clickable dialling, and user authentication based on authoritative data sources.

In many of these cases, such as the use of LDAP directories for storage of personnel information and authentication rules, the data is relatively static, so that caching can be used to improve performance. In some situations, the database information needs to be updated frequently. For example, in IP telephony, every time a subscriber uses a different terminal or is at a different location, his account information may need to be updated.

Despite the growing importance of LDAP services, there has been little work on how LDAP servers behave under different workloads and in different operating environments. In particular, the performance of LDAP in a dynamic environment with frequent searches has not been closely examined.

In this paper, we report on the development of a tool to benchmark LDAP server performance, and analyze results derived using this tool. In addition, we have instrumented the server and LDAP client API code to allow detailed profiling. These results include the contribution of various system components to the overall performance in terms of latency and throughput, the scaling of performance with directory size, entry size, and session reuse, and the importance of various factors in determining scalability. We also investigate modifications and usage patterns that improve server performance.

Given the growing use of LDAP in applications, it is useful and convenient to carry out the performance experiments using data based on an existing LDAP schema proposed for a real directory application. In this work, we use a schema proposed in [3] and [4] for the administration of Service Level Specifications (SLSs), which are used to configure networks for supporting different levels of services. In this application, it is envisioned that the administrative policies embodied in the LDAP schema will be stored on directories and downloaded to devices such as hosts, routers, policy servers, and proxies. If the SLS is allowed to be dynamically negotiated [7], the LDAP service must deal with frequent directory queries. In these respects, this application is representative of many current or proposed LDAP applications [2], [5], [8]. The results reported in this work should be generally applicable to many of the applications cited earlier; aspects of the work that are specific to SLS administration will be pointed out where appropriate.

The rest of this paper is organized as follows. In Section II, we first provide general background on the LDAP directory service, and then provide a very brief introduction to differentiated service networks and service level specifications, as well as the LDAP schema proposed for this application. The experimental set-up is discussed in Section III, followed by a discussion of the test methodology in Section IV. Experiments are described, and the results are presented and analyzed, in Section V, and related work is presented in Section VI. Finally, we summarize our results and present some conclusions in Section VII.

Manuscript received April 6, 2004; revised January 8, 2007; approved by IEEE/ACM TRANSACTIONS ON NETWORKING Editor A. Greenberg.

X. Wang is with the Department of Electrical and Computer Engineering, State University of New York at Stony Brook, Stony Brook, NY 11794 USA (e-mail: xwang@ece.sunysb.edu).

H. Schulzrinne is with the Department of Computer Science, Columbia University, New York, NY 10027 USA (e-mail: hgs@cs.columbia.edu).

D. Kandlur is with the IBM Almaden Research Center, San Jose, CA 95120 USA (e-mail: kandlur@us.ibm.com).

D. C. Verma is with the IBM T. J. Watson Research Center, Hawthorne, NY 10532 USA (e-mail: dverma@us.ibm.com).

Digital Object Identifier 10.1109/TNET.2007.911335
WANG et al.: MEASUREMENT AND ANALYSIS OF LDAP PERFORMANCE 233
V. ANALYSIS OF RESULTS

Our experiments have three main purposes: to identify the contributions of various system components towards the overall LDAP performance; to study the limits of LDAP performance, and what determines these limits; and finally, to suggest measures to improve performance.

We organize the experimental results as follows. The overall performance with respect to throughput and latency is introduced in Section V-A. The various components of the total search latency are studied in Section V-B, followed by measures to improve LDAP performance. Some important limitations on LDAP performance are studied in Section V-C. We then discuss the effect of session reuse on server performance in Section V-D. Finally, the performance of update operations is compared to the performance of search operations in Section V-E. All the results shown in the performance studies are average values obtained throughout the measurement period.

A. Overall LDAP Performance

The general variation in the LDAP latency and throughput as a function of server load is shown in Fig. 6. The load was generated by 1000 clients querying periodically, with each client request retrieving a 488-byte SLAPolicyRules entry from a database with 10 000 entries. Fig. 6(a) shows the variation in connect, processing, and response latencies as a function of the average request rate at the LDAP server, and Fig. 6(b) shows the variation in throughput, or number of queries served per second, as a function of the average request rate. Below a load threshold corresponding to a request rate of 105 per second, the response latency remains fairly constant at approximately 64 ms, and is dominated by the processing time of 60 ms. Above this threshold, the response time increases rapidly with increasing load. In this region, the connect time is seen to be the main bottleneck; the processing time also increases with load, but its effect is less significant. Corresponding to the latency characteristics, Fig. 6(b) shows that the server throughput saturates at a rate of approximately 105 requests per second. We now consider various aspects of this performance in more detail.

Fig. 6. (a) Average connection time, processing time, and response time, and (b) average server throughput, shown as a function of the average request rate at the server. The directory had 10 000 entries and the entry size was 488 bytes.

B. Improving the LDAP Search Performance

In this section, we first investigate the various components of the search and connect latency. We then study measures to improve effective search latency and throughput.

1) Components of LDAP Search Latency: We now consider the components of the search latency in more detail. These results were obtained by adding monitoring code to the various process modules in the slapd daemon. Fig. 7 shows the major components that contribute to the server and client search latency, under a load of 105 search requests/second, at which the server is not yet saturated. Surprisingly, while the processing time as measured at the client is approximately 60 ms, results obtained from tcpdump and from the monitoring code in slapd show that out of the 60 ms, approximately 50 ms is waiting time arising from the Nagle algorithm implemented in TCP. We discuss this in greater detail in Section V-B3, and consider the components of the actual search latency in this section.

At the server, the LDBM back-end uses an index mechanism to store and retrieve information. Each entry is assigned a unique ID, used to refer to the entry in the indexes. A search for entries first returns a list of IDs of entries that have the value being searched; the IDs are then used to retrieve the corresponding entries. The candidate ID lookup and the data entry retrieval are seen to take up around 5 ms, 60% of the total processing time of 8.2 ms.

The main front-end operations at the server are building the search filter, testing a search filter against an entry, ASN.1 encoding of the result entry, sending the search result, and sending the search status. The front-end operations at the client side include building the search request, ASN.1 decoding, and construction of the final search result. In all, the front-end operations take around 3.16 ms, 40% of the total response latency.
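The two-phase retrieval described above (candidate ID lookup in an attribute index, followed by fetching each entry by its unique ID) can be sketched as follows. This is an illustrative model of the LDBM mechanism, not OpenLDAP's actual data structures; all names and sample data here are hypothetical.

```python
# Illustrative sketch of LDBM-style two-phase retrieval: an equality
# index maps (attribute, value) pairs to candidate entry IDs, and an
# id2entry table maps each ID to the full entry.

eq_index = {
    ("cn", "policy-42"): [7, 19],   # attribute value -> candidate entry IDs
}

id2entry = {
    7:  {"cn": ["policy-42"], "objectClass": ["SLAPolicyRules"]},
    19: {"cn": ["policy-42"], "objectClass": ["SLAPolicyRules"]},
}

def search(attr, value):
    # Phase 1: candidate ID lookup in the attribute index.
    candidate_ids = eq_index.get((attr, value), [])
    # Phase 2: retrieve each candidate entry by its unique ID.
    return [id2entry[eid] for eid in candidate_ids]
```

In the measurements above, these two phases together account for about 5 ms of the 8.2 ms total processing time.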
Fig. 7. Latency associated with the various server and client process modules in a LDAP search operation.
Fig. 9. Comparison of the server performance with and without the Nagle algorithm: (a) average processing time and response time; (b) average server throughput. The directory had 10 000 entries and the entry size was 488 bytes.

Fig. 10. Comparison of the performance when the server is configured with no entry cache and with a 10 000 entry cache: (a) the processing and response latencies; (b) the throughput.

(14 bytes) was delayed about 50 ms until an acknowledgment message was received from the client side.

Researchers have presented evidence [17], [18] that the Nagle algorithm should be disabled in order to reduce latency and to protect against unforeseen interactions between TCP and HTTP with persistent connections. We believe the Nagle algorithm should also be turned off in the LDAP application, since the delay of the last segment of the response was shown to be unnecessary from the tcpdump results. To avoid consequent extra packets on the network, functions such as writev() can be used, as has been done in WWW servers such as Flash and JAWS.

4) Effect of Caching LDAP Entries: The LDBM backend can be configured to keep a cache for each index file (dbcachesize) and a cache of entries (cachesize) in memory. We compared the server performance with two different sizes of the index cache, 100 KB and 10 MB. The backend search time was almost identical for either case, although at the smaller cache size there was a large increase in the time required to populate the directory. This indicated that the index cache was large enough to hold the index files in either case. We studied the impact of the entry cache size on the server performance by varying the cache size from 0 entries (disk directory without any cache) to 10 000 entries (in-memory directory with full cache) while keeping the index cache size at 10 MB. For a directory with 10 000 entries, a cache with 10 000 entries results in a reduction in the back-end entry retrieval latency (shown in Fig. 7) from 5 ms to 1.8 ms, with little change in the other latency components. Consequently, the total processing time reduces from 8.2 ms to 5 ms, and the contribution of the back-end processing time to the total latency reduces from 60% to 36%. When the latency is plotted as a function of load, we see a 40% reduction in search processing time due to caching over nearly the entire range of loads, as shown in Fig. 10(a). Correspondingly, the throughput increases 25% and reaches 140 requests/second, as shown in Fig. 10(b). The relatively smaller improvement in throughput compared with the improvement in processing time is because the throughput is impacted by the overall response time rather than the search processing time, while at high loads, the connect latency is the dominant component in the total response latency. Memory usage was observed to increase 9% with the above increase in cache size.

Since we are primarily interested in the performance of the LDAP server, the Nagle algorithm was disabled for the experiments in the remainder of this paper, to eliminate the unnecessary wait before returning search results to the client. Unless otherwise specified, an LDBM index cache of size 10 MB and an entry cache equal to the size of the directory were used. Before an experiment, the cache was first filled to avoid the extra overhead due to cache misses at the beginning of the experiment.

C. Performance Limitations

In this section, we study the limitations of the LDAP server performance in three important areas: server CPU capability, the scaling of the LDAP directory size, and the scaling of the LDAP entry size. A useful point to consider is that in some cases network connectivity may significantly influence the perceived server response. In our case, since clients were connected to the server over a high-speed LAN, network congestion did not arise. With the default entry size of 488 bytes and 10 Mb/s Ethernet, the transmission time is about 0.4 ms, and the delay due to the network connection can be neglected.

1) Server Processor Capability: Single vs. Dual Processors: In this section, we consider the importance of processing capability in determining the server performance. As mentioned earlier, all our experiments were carried out on a dual processor server, but by binding the slapd process to one of the two CPUs, it was used in single-processor mode for the experiments in
Fig. 12. Throughput of search and add operations for single and dual processor
servers, with a varying number of LDAP clients generating queries continu-
ously.
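The roughly 50 ms Nagle delay discussed in Section V-B3 is eliminated by setting the TCP_NODELAY socket option on each connection. The fix in slapd itself would be a C setsockopt() call; the sketch below shows the equivalent operation in Python and is illustrative only.

```python
import socket

def disable_nagle(sock: socket.socket) -> None:
    """Turn off the Nagle algorithm, so that small segments (such as
    the short final message of a search response) are sent immediately
    rather than being held back until an ACK arrives."""
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

server_side = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
disable_nagle(server_side)
server_side.close()
```

With Nagle disabled, a gathering write such as writev() (os.writev() in Python) keeps a response in as few segments as possible and avoids the extra small packets mentioned in Section V-B3.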
Fig. 13. Effect of directory size on (a) the server connect and processing times and (b) the server throughput, with different directory sizes and cache sizes.

Fig. 14. Effect of directory entry size on (a) the server connect and processing times and (b) the server throughput, for a 5000 entry directory with 5000 entry cache.
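The single-processor mode used for the processor-capability comparison (Fig. 12) was obtained by binding the slapd process to one CPU. The paper does not name the mechanism used; on Linux, the same effect can be achieved through the scheduler affinity interface. The sketch below is Linux-specific and illustrative only.

```python
import os

def bind_to_cpu(cpu: int) -> None:
    """Restrict the calling process to a single CPU (Linux-specific),
    emulating single-processor mode on a multiprocessor machine.
    The first argument 0 means the current process."""
    os.sched_setaffinity(0, {cpu})

bind_to_cpu(0)
```

The command-line tool taskset wraps the same system call, so a server process can also be pinned at launch without modifying it.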
the cache size was increased beyond 50 000, performance degraded progressively due to lack of memory. When the directory is out-of-memory, performance scaling is limited both by the database search time and by the disk access time. Fig. 13 shows that further increasing the number of directory entries from 50 000 to 100 000 while keeping the cache size at 50 000 entries (i.e., the directory is half cached) causes the average response time to increase another 7 ms (87.5%), and the total processing time reaches 15 ms. Correspondingly, the throughput reduces from 110 requests/second to 85 requests/second (23%). Since the connect latency dominates the response at high loads, the reduction in throughput is relatively small compared to the increase in processing time.

To summarize, there was a moderate increase in latency and decrease in throughput with directory scaling up to 50 000 entries, due to an increase in database search time. Further scaling was constrained by system memory, leading to a disk directory, and the deterioration in processing latency was significantly sharper, due to increasing disk access time and database search time.

3) Scaling the Directory Entry Size: In this section we study the scaling of performance with the size of the LDAP entry. In our experiments, we compared the performance of LDAP directories with 488-byte entries and with 4880-byte entries. For the larger entry size, the availability of system RAM limited the maximum database cache size to 5000 entries, beyond which the performance degraded rapidly. We first performed the comparison for an in-memory directory, using a 5000 entry directory and 5000 entry cache for both entry sizes. Fig. 14(a) shows the processing latency, and Fig. 14(b) the throughput, as a function of load. At light and moderate loads, the total response latency increases by about 8 ms with the increase in entry size, while the throughput remains the same. Profiling at the server shows that the increase in response time comes mainly from the increase in the ASN.1 encoding time, from 1.2 ms to 7 ms. In addition, the filter matching step took around 0.8 ms at the larger entry size, up from 0.2 ms at the smaller entry size. These results are understandable, since both the ASN.1 encoding and the filter matching process depend on the number of attributes in the LDAP entry, which increases as the entry size increases. The latencies of the other front-end processing steps and the back-end processing steps increase by much smaller amounts. Under heavy loads, when the server saturates, Fig. 14(b) shows the maximum throughput at 4880 bytes is 30% smaller.

The comparison between the two entry sizes was also performed with a directory of 10 000 entries, the cache size remaining at 5000 entries. Fig. 15(a) shows the processing latency, and Fig. 15(b) the throughput, as a function of load. The increase in total response time with the increase in entry size is now 40 ms at low and moderate loads. The much larger deterioration in response time with entry size is due to the increased data transfer time from the disk, in addition to the increased front-end processing time. The increase in front-end processing time remains at about 8 ms, confirming that the front-end processing is not influenced greatly by the increase in directory size, and the increase in data handling time at the back-end is 32 ms. Also, for the larger entry size, the processing time is comparable with the connect latency even at high loads (unlike previous experiments, in which the connect latency dominated the total response at high loads). Consequently, the increase in processing time with entry size strongly influences the maximum throughput, which decreases sharply from 110 requests/second to 30 requests/second.

To summarize, for an in-memory directory the scaling of the total processing latency with entry size is determined by the front-end processing, specifically by the ASN.1 encoding and filter matching. For out-of-memory operation, the increase in processing latency is dominated by the increased back-end data retrieval time. The maximum throughput also decreases sharply in this case.
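A simple back-of-envelope model ties the cached and uncached retrieval costs together. The model and its defaults are our own simplification, not a fit performed in the paper: 1.8 ms for a cache hit and 5 ms for an uncached retrieval are the values measured earlier (Fig. 7 and the entry-cache experiment), and a linear mix is assumed for a partially cached directory.

```python
def avg_retrieval_ms(hit_fraction: float,
                     hit_ms: float = 1.8,
                     miss_ms: float = 5.0) -> float:
    """Expected back-end entry retrieval time when a fraction
    hit_fraction of requests is served from the in-memory entry cache
    and the remainder must be fetched from the on-disk database.
    Default costs are illustrative, based on the measured cached
    (1.8 ms) and uncached (5 ms) retrieval times."""
    return hit_fraction * hit_ms + (1.0 - hit_fraction) * miss_ms

# e.g., a half-cached directory (50 000 of 100 000 entries cached)
halfway = avg_retrieval_ms(0.5)
```

The model deliberately omits the growth of the index search time with directory size, so it understates the deterioration seen for the 100 000 entry directory.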
Fig. 15. Effect of directory entry size on (a) the server connect and processing times and (b) the server throughput, for a 10 000 entry directory with 5000 entry cache.

Fig. 16. Effect of session reuse rate on (a) the server processing times and (b) the server throughput.
D. Session Reuse
In the experiments so far, the client requested a single
search or add operation, and then closed its connection. We
now study the change in performance when a client performs
multiple searches in a LDAP session. We varied the degree of
session reuse from 0% to 100%, with 0% corresponding to a
new connection being established for every search, and 100%
corresponding to the connection being left open during the
duration of the experiment.
Fig. 16(a) shows that the search time increases at heavy loads
as the session reuse rate increases from 0% to 100%. This is be-
cause the actual server load for searching operations increases
with the increase in session reuse, while the total connect latency
decreases because fewer connect requests are received. At the
same time, the total response time decreases as can be deduced
from the increase in the throughput shown in Fig. 16(b) from
105 requests/second to 155 requests/second at the onset of con-
gestion, and from 140 requests/second to 223 requests/second
under the heaviest load, an improvement of 60%.
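The throughput gains above are consistent with a simple amortization argument: with session reuse, the connect cost is paid once per session rather than once per search. The function and constants below are our own illustrative sketch, not measurements from the paper.

```python
def time_per_search_ms(connect_ms: float,
                       search_ms: float,
                       searches_per_session: int) -> float:
    """Average cost per search when one connection carries
    searches_per_session searches: the connect latency is amortized
    over the session, so the cost tends to search_ms as reuse grows."""
    return search_ms + connect_ms / searches_per_session

one_shot = time_per_search_ms(10.0, 5.0, 1)    # new connection per search
reused = time_per_search_ms(10.0, 5.0, 100)    # long-lived session
```

As searches_per_session grows, the per-search cost approaches the pure search time, which matches the qualitative picture above: connect load falls away while search load, and hence search time under congestion, rises.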
E. Performance Under Update Load

The experiments described so far measured the performance under loads generated by search requests. In order to compare these results with the performance under update operation, a set of experiments was performed with clients each periodically querying the server with add requests. Fig. 17 shows that the processing time for add operations dominates the total response time not only at low loads, but also when the server saturates. This is unlike the search scenario, in which the connect time dominates the total response time when the server is under heavy load. This is probably because the latency due to locks generated by competing write() requests becomes significant under heavy load. At low loads, the add latency is about four times the search latency, and the difference between the two decreases at high loads, as the connect latency becomes more important. Fig. 17(b) shows that the add throughput begins to saturate beyond a threshold of 60 requests per second and finally saturates at 65 requests per second, about 55% less than the search throughput.

Fig. 17. Comparison of latency and throughput characteristics for search and add operations: (a) variation of connect, processing, and response latency with server load, and (b) variation of server throughput with server load. (Directory had 10 000 entries, entry size 488 bytes.)

The LDAP modify operation was not directly investigated. A modify operation involves searching and retrieving an existing entry, modifying it, and then writing it back to the LDAP directory. Consequently, one would expect the processing latency in
this case to be roughly the sum of the search and add processing latencies.

VI. RELATED WORK

Benchmarking of LDAP server performance has been reported recently by Mindcraft [19]. The certified results of Mindcraft's work were from the performance comparison of three servers: Netscape Directory Server 3.0 (NSDS3), Netscape Directory Server 1.0 (NSDS1), and Novell LDAP Services (NDS). In their work, the performance for a 10 000 entry personnel directory was measured on a 200 MHz Intel Pentium Pro with 512 MB RAM. The throughputs for NSDS3, NSDS1, and NDS were found to be 183 requests/second, 38.4 requests/second, and 0.8 requests/second, respectively. The size of the LDAP entry was not specified. The directory was in-memory in all cases, and the performance with larger, out-of-memory directories (or large entry sizes) was not considered. Since the directory was in-memory, CPU capability was found to be the bottleneck.

In the above work, the LDAP server was generally treated as a black box. Our work differs significantly in our objectives and approach. We have determined the scalability of the performance, particularly with respect to directory size and entry size, determined the contribution of different system components and parameters to the server performance and scalability, and provided suggestions for improving system performance. The detailed nature of this work also dictated the choice of OpenLDAP instead of a commercial server, as explained earlier.

VII. CONCLUSION

In this paper, we have discussed the performance of the LDAP directory service in a dynamic, distributed environment, with frequent directory accesses. The LDAP directory structure used in the experiments is based on a proposed LDAP schema for administration of Service Level Specifications in differentiated service networks, and a brief explanation of the use of LDAP in this context has been provided. However, the experimental results are applicable to LDAP directory applications in general.

We have shown that under our default operating conditions, a directory with 10 000 488-byte entries and a cache size of 10 000 entries, the LDAP server has an average response latency of 8 ms at loads up to 105 search requests per second, and a maximum throughput of 140 search requests per second. Out of the total response latency of 8 ms, 5 ms comes from the processing latency, 36% of which is contributed by back-end processing (entry retrieval from the database), and 64% by front-end processing. In general, at high loads, the connect latency increases sharply to dominate the overall response, and eventually limits the server throughput. Consequently, a change in the processing time due to changes in system parameters has a relatively smaller effect on the maximum throughput.

In addition to this basic performance specification, we have obtained a detailed profile of the contributions of various system components to the overall performance; studied the scaling of performance with directory size, entry size, and session reuse; and determined the relative importance of various factors in determining scalability, namely front-end versus back-end processing, CPU capability, and available memory. We have also identified an important required modification to the basic OpenLDAP implementation in order to obtain the above performance. We now briefly summarize our important findings.

• Disabling of the Nagle algorithm. The Nagle algorithm was observed to contribute an additional wait time of roughly 50 ms to a search operation. Disabling the Nagle algorithm results in a reduction in response time by a factor of 7 under normal operation.

• Entry caching. For a directory with 10 000 entries, an in-memory directory (cache size 10 000) has a 40% improvement in processing time and a 25% improvement in throughput over a directory without a cache.

• Scaling with directory size. The scaling of performance with the number of entries in the directory is determined by the back-end processing. Up to 50 000 directory entries can be kept in-memory in our system, and the server processing time and throughput deteriorate by about 60% and 21%, respectively, when the directory size increases from 10 000 to 50 000. Beyond this limit, the directory is out-of-memory due to system RAM constraints, and increasing the directory size from 50 000 to 100 000 entries results in a sharper increase in processing time of another 87.5%, and a decrease in throughput by 23%.

• Scaling with entry size. When the directory is in-memory, the scaling of performance with the entry size in the directory is determined by the front-end processing, mainly an increase in the time for ASN.1 encoding of the retrieved entry. An increase in entry size from 488 bytes to 4880 bytes for a 5000 entry directory results in an increase in processing time of 8 ms at moderate load, 88% of which is due to the increased ASN.1 encoding time, and a throughput deterioration of about 30%. However, for a directory with 10 000 entries, the cache size is still limited to 5000 by the system RAM, and a similar increase in entry size results in a much larger throughput deterioration of about 70%, mainly due to the increase in data transfer latency from the disk.

• CPU capability. For an in-memory directory, the CPU is a significant bottleneck. Using dual processors improves performance by 40%.

• Session reuse. In general, higher session reuse leads to improved performance. A 60% gain in performance is obtained when the session is left open during the duration of the experiment, relative to when a connection is opened and closed for each search request.

In conclusion, we believe that the results show that OpenLDAP slapd is a potential candidate for supporting policy administration in the differentiated service environment, as well as in other network applications that need dynamic directory access support. In the future, we plan to evaluate other LDAP servers based on the criteria developed in this paper, which may run on different platforms and with different implementations.

REFERENCES

[1] T. Small, D. Hennessy, and F. Dawson, "Calendar attributes for vCard and LDAP," RFC 2739, Internet Engineering Task Force, Jan. 2000.
[2] B. Aboba, "Lightweight directory access protocol (v3): Schema for the Routing Policy Specification Language (RPSL)," Internet draft, Internet Engineering Task Force, Nov. 1997.
[3] E. Ellesson, D. Verma, R. Rajan, and S. Kamat, "Schema for service level administration of differentiated services and integrated services in networks," Internet Draft, Internet Engineering Task Force, Feb. 1998.
[4] R. Rajan, J. C. Martin, S. Kamat, M. See, R. Chaudhury, D. Verma, G. Powers, and R. Yavatkar, "Schema for differentiated services and integrated services in networks," Internet Draft, Internet Engineering Task Force, Oct. 1998.
[5] B. Aboba, "Extension for PPP authentication," Internet draft, Internet Engineering Task Force, Nov. 1997.
[6] L. Bartz, "LDAP schema for role based access control," Internet draft, Internet Engineering Task Force, Oct. 1997.
[7] S. Blake, D. L. Black, M. Carlson, E. Davies, Z. Wang, and W. Weiss, "An architecture for differentiated service," RFC 2475, Internet Engineering Task Force, Dec. 1998.
[8] L. Bartz, "LDAP schema for role based access control," Internet Draft (Work in Progress), Internet Engineering Task Force, Oct. 1995.
[9] W. Yeong, T. Howes, and S. E. Kille, "Lightweight directory access protocol," RFC 1777, Internet Engineering Task Force, Mar. 1995.
[10] M. Wahl, T. Howes, and S. E. Kille, "Lightweight directory access protocol (v3)," RFC 2251, Internet Engineering Task Force, Dec. 1997.
[11] "Resource reservation protocol (RSVP)—Version 1 functional specification," RFC 2205, Internet Engineering Task Force, Sep. 1997.
[12] K. Nichols, S. Blake, F. Baker, and D. L. Black, "Definition of the differentiated services field (DS field) in the IPv4 and IPv6 headers," RFC 2474, Internet Engineering Task Force, Dec. 1998.
[13] M. Beigi, R. Jennings, S. Rao, and D. Verma, "Supporting service level agreements using differentiated services," Internet Draft, Internet Engineering Task Force, Nov. 1998.
[14] The Open Source LDAP Suite, OpenLDAP [Online]. Available: http://www.OpenLDAP.org
[15] The Berkeley Database, Sleepycat Software [Online]. Available: http://www.sleepycat.com
[16] J. Nagle, "Congestion control in IP/TCP internetworks," Comput. Commun. Rev., vol. 14, pp. 11–17, Oct. 1984.
[17] J. Heidemann, "Performance interactions between P-HTTP and TCP implementations," Comput. Commun. Rev., vol. 27, pp. 65–73, Apr. 1997.
[18] H. F. Nielsen, J. Gettys, A. Baird-Smith, E. Prud'hommeaux, H. W. Lie, and C. Lilley, "Network performance effects of HTTP/1.1, CSS1,

Henning Schulzrinne (M'86–SM'04–F'06) received the undergraduate degree in economics and electrical engineering from the Darmstadt University of Technology, Darmstadt, Germany, the M.S.E.E. degree as a Fulbright scholar from the University of Cincinnati, Cincinnati, OH, and the Ph.D. degree from the University of Massachusetts, Amherst, MA.

He was a Member of Technical Staff at AT&T Bell Laboratories, Murray Hill, NJ, and an associate department head at GMD-Fokus, Berlin, Germany, before joining the Computer Science and Electrical Engineering Departments at Columbia University, New York, NY. He is currently chair of the Department of Computer Science. He serves as Chief Scientist for SIPquest Inc. and Chief Scientific Advisor for Ubiquity Software Corporation. His research interests encompass real-time, multimedia network services in the Internet and modeling and performance evaluation. Protocols which he co-developed are now Internet standards, used by almost all Internet telephony and multimedia applications.

Dr. Schulzrinne is an associate editor of IEEE Internet Computing and the ACM Transactions on Multimedia Computing, Communications, and Applications (TOMCCAP). He serves as vice chair of ACM SIGCOMM and is a member of the steering committee for the IEEE/ACM TRANSACTIONS ON NETWORKING. He also was a member of the IAB (Internet Architecture Board). He has received the New York City Mayor's Award for Excellence in Science and Technology and the VON Pioneer Award.

Dilip D. Kandlur (M'91–SM'03–F'04) received the B.Tech. degree in computer science and engineering from the Indian Institute of Technology, Bombay, the M.S. degree from the University of Michigan, Ann Arbor, and the Ph.D. degree in computer science from the University of Michigan in 1991.

He joined IBM in 1991. He is currently the Director of the Storage Systems group at IBM's Almaden Research Center. He is responsible for developing new technologies in storage systems, storage management, file systems and server software, and also facilitating IBM's use of these technologies in its products and services. The Storage Systems Research group is well-known for its pioneering research
and PNG,” Boston, MA, Note 970214, W3C, Feb. 1997. in the areas of RAID, parallel file systems, storage virtualization, and enter-
[19] Directory Server Certified Performance Reports. Mindcraft [Online]. prise backup and restore. It has contributed to major storage industry prod-
Available: http://www.mindcraft.com/perfreports/ldap/ ucts and innovations, such as IBM® RAMAC®, IBM 9337 RAID subsystem,
Tivoli® Storage Manager (previously known as ADSTAR Distributed Storage
Manager), General Parallel File System (GPFS) and, most recently, to IBM To-
talStorage products, SAN Volume Controller and Productivity Center. Previ-
ously, he was the Senior Manager of Networking Software and Services at the
IBM T. J. Watson Research Center, Hawthorne, NY, where he led IBM’s direc-
tion in edge-of-network computing. His team made numerous contributions in
networking performance and quality of service, scalable web infrastructure and
policy-based computing and service-level agreements.
An internationally recognized expert in quality of service and performance for
networks and network servers, Dr. Kandlur shared the IEEE Communications
Society’s 2003 William R. Bennett Award for the best original paper published
in the IEEE/ACM TRANSACTIONS ON NETWORKING. He is an IEEE Fellow and
has served as vice-chair of the IEEE Technical Committee on Computer Com-
Xin Wang (M’01) received the B.S. and M.S. munications. In addition, he has received several IBM awards for his technical
degrees in telecommunications engineering and achievements, inventions and innovations.
wireless communications engineering from Beijing
University of Posts and Telecommunications, Bei-
jing, China, and the Ph.D. degree in electrical and Dinesh C. Verma (M’93–SM’03–F’06) received the
computer engineering from Columbia University, B.Tech. degree in computer science from the Indian
New York, NY. Institute of Technology, Kanpur, India, in 1987, and
She is currently an Assistant Professor in the De- the Ph.D. degree in computer science from the Uni-
partment of Electrical and Computer Engineering of versity of California at Berkeley in 1992.
the State University of New York at Stony Brook, Since 1992, he has worked at the IBM T. J. Watson
Stony Brook, NY. Before joining Stony Brook, she Research Center and Philips Research Laboratories.
was a Member of Technical Staff in the area of mobile and wireless networking He is currently a Research Manager at the IBM T.
at Bell Labs Research, Lucent Technologies, New Jersey, and an Assistant Pro- J. Watson Research Center, Hawthorne, NY, and
fessor in the Department of Computer Science and Engineering of the State manages the Autonomic Systems and Networking
University of New York at Buffalo, Buffalo, NY. Her research interests include Department at the center. He is the Program Manager
analysis and architecture design in wireless networks and communications, mo- for the International Technology Alliance. He also oversees research in the
bile and distributed computing, infrastructure design and performance enhance- area of edge networking. His current work interests include topics related to
ment across network layers, applications and heterogeneous networks, network wireless and sensor networks, policy-based management, and next-generation
and mobility management, QoS, signaling and control, as well as support for networks. Other topics he has worked on include policy-enabled networks,
advanced network services and applications. caching and content distribution networks, service differentiation and service
Dr. Wang has been a member of the Association for Computing Machinery level management, adaptive control of web servers, and the exploitation of
(ACM) since 2004. Java-based technologies for distributed web-based computing.