Name lookup is an essential function but a performance bottleneck in both today's and future network architectures. The trie is an excellent candidate data structure and has been widely used for looking up and updating names. However, the granularity of a trie, at the bit, byte (character), or component level, can dramatically affect network performance in terms of memory usage and packet-processing speed, an aspect that has not yet been studied adequately. To fill this gap, we first show that choosing a trie's granularity for name lookups and updates (i.e., insertions and removals) is not a trivial problem because of the complex performance tradeoffs involved. We also introduce a new tool, called NameGen, which uses a Markov-based name-learning model to generate pseudo-real datasets with tunable name characteristics. We compare different trie granularities across a collection of datasets and performance metrics, highlight the strengths and weaknesses of each granularity, and draw a conclusion on the choice of granularity. Surprisingly, our experimental evaluation finds that only two rules are needed to choose the proper trie granularity for any kind of dataset: 1) the bit-level trie is the choice when the memory requirement is a real concern, and 2) character- and component-level tries are preferred for faster lookups and updates when dealing with names composed of short and long components, respectively.
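To make the granularity distinction concrete, the following is a minimal sketch (not the paper's implementation) of a component-level trie for hierarchical names such as "/ndn/edu/ucla/video". Each edge consumes one full name component; a character-level trie would branch on individual characters and a bit-level trie on individual bits, yielding deeper but narrower trees. All names and identifiers here are illustrative assumptions.

```python
class ComponentTrie:
    """Component-granularity trie: one edge per name component."""

    def __init__(self):
        self.children = {}    # component string -> child ComponentTrie
        self.is_name = False  # True if a complete name ends at this node

    @staticmethod
    def _split(name):
        # "/ndn/edu/ucla" -> ["ndn", "edu", "ucla"]
        return [c for c in name.split("/") if c]

    def insert(self, name):
        node = self
        for comp in self._split(name):
            node = node.children.setdefault(comp, ComponentTrie())
        node.is_name = True

    def lookup(self, name):
        node = self
        for comp in self._split(name):
            node = node.children.get(comp)
            if node is None:
                return False
        return node.is_name

    def remove(self, name):
        # Simplified removal: unmark the terminal node; pruning of
        # now-empty intermediate nodes is omitted for brevity.
        node = self
        for comp in self._split(name):
            node = node.children.get(comp)
            if node is None:
                return False
        found = node.is_name
        node.is_name = False
        return found


trie = ComponentTrie()
trie.insert("/ndn/edu/ucla/video")
trie.insert("/ndn/edu/ucla/news")
print(trie.lookup("/ndn/edu/ucla/video"))  # True
print(trie.lookup("/ndn/edu/ucla"))        # False: stored prefix, not a full name
trie.remove("/ndn/edu/ucla/video")
print(trie.lookup("/ndn/edu/ucla/video"))  # False
```

A lookup here costs one hash-table probe per component, which is why component-level tries favor names with long components, whereas character-level tries amortize better over many short components.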
Keywords
- Implementation granularity
- Name lookup
- Named data networking