Abstract:
A pipelined search engine device, such as a longest prefix match (LPM) search engine device, includes a hierarchical memory and a pipelined tree maintenance engine therein. The hierarchical memory is configured to store a b-tree of search prefixes (and possibly span prefix masks) at multiple levels therein. The pipelined tree maintenance engine, which is embedded within the search engine device, includes a plurality of node maintenance sub-engines that are distributed with the multiple levels of the hierarchical memory. The search engine device may also include pipeline control and search logic that is distributed with the multiple levels of the hierarchical memory.
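The matching rule the device implements in pipelined hardware can be illustrated in software. The following sketch is a hypothetical, unpipelined model of longest prefix match over a flat prefix set; the b-tree organization, span prefix masks, and per-level sub-engines of the abstract are not modeled.

```python
# Hypothetical software model of longest prefix match (LPM).
# The patented device performs this search in pipelined hardware over a
# b-tree; this linear scan only illustrates the matching rule itself.

def lpm_search(prefixes, key, key_bits=32):
    """Return the longest prefix in `prefixes` matching `key`.

    `prefixes` is an iterable of (value, length) pairs, where `value`
    carries the prefix in the top `length` bits of a `key_bits` field.
    """
    best, best_len = None, -1
    for value, length in prefixes:
        # mask covering the top `length` bits of the key field
        mask = ((1 << length) - 1) << (key_bits - length) if length else 0
        if (key & mask) == (value & mask) and length > best_len:
            best, best_len = (value, length), length
    return best

# Example: routing-style prefixes on 8-bit keys
prefixes = [(0b10000000, 1), (0b10100000, 3), (0b10110000, 4)]
print(lpm_search(prefixes, 0b10110101, key_bits=8))  # longest match wins
```

All three prefixes match the example key, and the 4-bit prefix is returned because it is the longest.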
Abstract:
Methods and apparatus are described for translating identifiers that computers use to reference entities such as data structures, external objects, or connections in a telecommunications network, from a bulkier, less manageable format into a smaller, more manageable one. Such translations reduce the needless processing and memory demands placed on a localized set of components when the large identifiers the set receives from other components include fields that none of its members need to access. The invention is centered on a two-stage look-up method in which an inputted external identifier is divided into two parts. The first part of the inputted external identifier is used as an address into a first look-up table that contains base addresses of a second look-up table. The second part of the inputted external identifier is used as an offset address into the second look-up table. The offset address and the base address are combined to access the second look-up table, which contains all the internal identifiers. The invention can easily be scaled to handle a larger range of external identifiers and a larger number of internal identifiers, operates at a fast and predictable speed, and also yields significant savings in memory costs.
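The two-stage look-up described above can be sketched in software. This is a minimal model with assumed field widths (the abstract does not fix them) and an assumed policy of handing out small consecutive internal identifiers; table construction is included so the translation step is self-contained.

```python
# Minimal sketch of the described two-stage look-up. The split point
# OFFSET_BITS and the sequential internal-id assignment are assumptions,
# not specified by the source.

OFFSET_BITS = 8  # assumed width of the second (offset) part


def make_tables(externals):
    """Build the two look-up tables from the known external identifiers."""
    first_table = {}   # high part -> base address into second_table
    second_table = []  # holds the internal identifiers
    next_internal = 0
    for ext in sorted(externals):
        high = ext >> OFFSET_BITS
        low = ext & ((1 << OFFSET_BITS) - 1)
        if high not in first_table:
            # reserve a block of 2**OFFSET_BITS slots for this high part
            first_table[high] = len(second_table)
            second_table.extend([None] * (1 << OFFSET_BITS))
        second_table[first_table[high] + low] = next_internal
        next_internal += 1
    return first_table, second_table


def translate(first_table, second_table, ext):
    """Translate an external identifier to its internal identifier."""
    base = first_table[ext >> OFFSET_BITS]       # stage 1: base address
    offset = ext & ((1 << OFFSET_BITS) - 1)      # stage 2: offset address
    return second_table[base + offset]           # combined access


externals = [0x1234, 0x1299, 0x5600]
first, second = make_tables(externals)
print(translate(first, second, 0x1299))
```

Each translation costs exactly two memory reads regardless of the identifier value, which models the fast and predictable speed claimed for the method; memory grows with the number of distinct high parts rather than with the full external identifier range.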