Abstract:
A data warehouse includes a memory and a controller disposed on a substrate that is associated with a System on Chip (SoC). The controller is operatively coupled to the memory. The controller is configured to receive data from a first intellectual property (IP) block executing on the SoC; store the data in the memory on the substrate; and in response to a trigger condition, output at least a portion of the stored data to the SoC for use by a second IP block. An organization scheme for the stored data in the memory is abstracted with respect to the first and second IP blocks.
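The abstracted organization can be pictured as a controller that hides its internal layout behind a narrow receive/trigger interface. The following Python sketch is a behavioral illustration only; the class and method names (DataWarehouse, receive, on_trigger) are assumptions, not taken from the disclosure.

```python
# Behavioral sketch of the data-warehouse controller described above (names are
# illustrative assumptions, not from the patent).
from collections import defaultdict

class DataWarehouse:
    """Stores data produced by one IP block; the internal organization scheme is
    hidden (abstracted) from both the producer and the consumer IP blocks."""

    def __init__(self):
        self._store = defaultdict(list)   # private organization scheme

    def receive(self, producer_ip, data):
        # Controller path: accept data from the first IP block and keep it on-substrate.
        self._store[producer_ip].append(data)

    def on_trigger(self, producer_ip, limit=None):
        # Trigger condition: return at least a portion of the stored data
        # for use by a second IP block.
        items = self._store[producer_ip]
        return items if limit is None else items[:limit]

warehouse = DataWarehouse()
warehouse.receive("ip_block_1", {"sensor": 42})
print(warehouse.on_trigger("ip_block_1", limit=1))  # -> [{'sensor': 42}]
```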
Abstract:
Provided are a TCAM-based table query processing method and apparatus. The method includes: executing a first query process for querying a TCAM entry; while the first query process is executed, executing a second query process for querying one or more entries other than the TCAM entry, wherein the first query process and the second query process run independently of each other; and respectively acquiring a first query result and a second query result through the first query process and the second query process. The technical solution addresses the problem in the related art that TCAM-based table queries prolong packet processing time and thereby degrade packet forwarding performance; it shortens the packet processing time and improves the packet forwarding performance.
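A software analogue of the two independent query processes, using Python threads as a stand-in for hardware lookup engines; the table contents and keys are illustrative assumptions.

```python
# Two lookups run independently of each other, one against the TCAM entries and
# one against the remaining entries; both results are then collected.
import threading

TCAM_TABLE = {"10.0.0.0/8": "port1"}          # entries held in the TCAM
OTHER_TABLES = {"mac:aa:bb": "vlan10"}        # entries held outside the TCAM

results = {}

def query_tcam(key):
    results["tcam"] = TCAM_TABLE.get(key)     # first query process

def query_other(key):
    results["other"] = OTHER_TABLES.get(key)  # second, independent query process

t1 = threading.Thread(target=query_tcam, args=("10.0.0.0/8",))
t2 = threading.Thread(target=query_other, args=("mac:aa:bb",))
t1.start(); t2.start()          # each lookup proceeds while the other is in flight
t1.join(); t2.join()
print(results)                  # {'tcam': 'port1', 'other': 'vlan10'}
```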
Abstract:
A system for real-time search, including: a set of partitions, each including a set of segments, each segment corresponding to a time slice of messages posted to a messaging platform, and a real-time search engine configured to receive a search term in parallel with the other partitions in the set of partitions and to search at least one of the set of segments, in reverse chronological order of the corresponding time slices, to identify document identifiers of messages containing the search term; and a search fanout module configured to: receive a search query including the search term; send the search term to each of the set of partitions for parallel searching; and return, in response to the search query, at least one of the identified document identifiers of messages containing the search term.
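A minimal sketch of the partition/segment layout and the fanout path, assuming invented class names (Segment, Partition, fanout_search) and a simple substring match in place of the real index.

```python
# Each partition holds segments (time slices of messages) and scans them newest first.
class Segment:
    def __init__(self, time_slice, docs):
        self.time_slice = time_slice          # e.g. epoch of the slice
        self.docs = docs                      # doc_id -> message text

    def search(self, term):
        return [doc_id for doc_id, text in self.docs.items() if term in text]

class Partition:
    def __init__(self, segments):
        # newest time slice first: segments are scanned in reverse chronological order
        self.segments = sorted(segments, key=lambda s: s.time_slice, reverse=True)

    def search(self, term):
        hits = []
        for seg in self.segments:
            hits.extend(seg.search(term))
        return hits

def fanout_search(partitions, term):
    # Fanout module: send the term to every partition and merge the doc IDs.
    hits = []
    for p in partitions:            # a real system would query partitions in parallel
        hits.extend(p.search(term))
    return hits

parts = [Partition([Segment(2, {11: "hello world"}), Segment(1, {10: "old post"})])]
print(fanout_search(parts, "hello"))   # -> [11]
```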
Abstract:
An object of the present invention is to efficiently perform a data load process or a data store process between a memory and a storage unit in a processor. The processor includes: a plurality of storage units associated with a plurality of data elements included in a data set; and a control unit that reads, collectively for each data set, the plurality of data elements stored in adjacent storage areas of a memory in which a plurality of the data sets is stored, sorts each of the read data elements to the storage unit corresponding to that data element among the plurality of storage units, and writes the data elements to the respective data sets.
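The load path amounts to reading contiguously stored data sets and scattering each element to the storage unit that holds that element position for every data set (an array-of-structures to structure-of-arrays transposition). A small illustrative model, with made-up data, follows.

```python
# Data sets are stored contiguously in memory; each element is sorted into the
# storage unit that collects that element position across all data sets.
memory = [  # each data set has three adjacent elements: (x, y, z)
    (1, 2, 3),
    (4, 5, 6),
    (7, 8, 9),
]

num_units = 3
storage_units = [[] for _ in range(num_units)]   # one unit per data-element position

for data_set in memory:                          # read data sets collectively
    for idx, element in enumerate(data_set):
        storage_units[idx].append(element)       # sort element to its storage unit

print(storage_units)   # [[1, 4, 7], [2, 5, 8], [3, 6, 9]]
```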
Abstract:
The disclosure discloses an intelligent cache and an intelligent terminal. The intelligent cache includes: a general interface, configured to receive configuration information, control information and/or data information from a core or a bus, and to return target data; a software definition and reconfiguration unit, configured to define a memory as the required cache memory according to the configuration information; a control unit, configured to control writing and reading of the cache memory and to monitor instructions and data streams in real time; a memory unit, composed of a number of memory modules and configured to cache data, where the required cache memory is formed from the memory modules according to the definition of the software definition and reconfiguration unit; and an intelligent processing unit, configured to process input and output data and to transfer, convert and operate on data among multiple structures defined in the control unit. The disclosure can realize an efficient memory system, according to the operating status of the software, the features of the tasks to be executed and the features of the data structures, through flexible organization and management by the control unit and close cooperation of the intelligent processing unit.
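A rough software model of the reconfigurable cache, assuming invented class and method names (IntelligentCache, configure, write, read); it only illustrates how memory modules could be grouped into a software-defined cache view, not the disclosed hardware.

```python
# Memory modules are grouped into the required cache according to configuration
# information; reads and writes then go through the control path.
class IntelligentCache:
    def __init__(self, num_modules, module_size):
        # memory unit: a pool of memory modules
        self.modules = [bytearray(module_size) for _ in range(num_modules)]
        self.cache = {}          # the currently defined cache view

    def configure(self, config):
        # software-definition/reconfiguration unit: build the required cache
        # (here: a simple region-name -> module mapping) from the configuration
        self.cache = {name: self.modules[i] for i, name in enumerate(config)}

    def write(self, region, offset, data):
        # control unit: control writing of the cache memory
        self.cache[region][offset:offset + len(data)] = data

    def read(self, region, offset, length):
        # control unit: control reading; an intelligent processing unit could
        # transform or convert the data here before returning it
        return bytes(self.cache[region][offset:offset + length])

c = IntelligentCache(num_modules=2, module_size=16)
c.configure(["scratchpad", "lookup"])      # define modules as the required cache
c.write("scratchpad", 0, b"abc")
print(c.read("scratchpad", 0, 3))          # b'abc'
```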
Abstract:
A system (100) for analyzing unstructured data. The system includes an associative memory (102) including a plurality of data (104) in associated units having a plurality of associations. The associative memory (102) is configured to be queried based on at least one relationship selected from the group that includes direct relationships (114) and indirect relationships (116) among the plurality of data (104). The associative memory (102) further includes a content-addressable structure (118). The system also includes an analyzer (122) in communication with the associative memory (102), wherein the analyzer (122) is configured to parse and arrange the plurality of data (104) into comparable units (124, 126, 128) in response to a query (120). The analyzer (122) is configured to establish an ordered list (130) ranking the comparable units (124, 126, 128) in an order of precedence based on the query (120). (Fig. 1)
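As an illustration of the analyzer's role, the sketch below parses records into comparable units and ranks them against a query; the attribute-overlap scoring rule and all data are assumptions, not the patented ranking.

```python
# Parse records into comparable units (attribute sets) and rank them by how many
# attributes they share with the query (order of precedence).
records = [
    "bolt steel 10mm",
    "bolt titanium 10mm",
    "washer steel 10mm",
]

def parse(record):
    return set(record.split())          # comparable unit: a set of attributes

def rank(query, records):
    q = parse(query)
    # more attributes shared with the query ranks higher
    return sorted(records, key=lambda r: len(parse(r) & q), reverse=True)

print(rank("bolt steel", records))
# ['bolt steel 10mm', 'bolt titanium 10mm', 'washer steel 10mm']
```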
Abstract:
A system (100) includes an associative memory (102), a first table (134), a second table (136), a comparator (164), and an updater (166). The associative memory (102) may include data and associations among data and may be built from the first table (134). The first table (134) may include a record (144) with first and second fields (150, 152). The associative memory may be configured to ingest the first field (150) and avoid ingesting the second field (152). The second table (136) may include a record (160) with a third field (162) storing information indicating whether the first field (150) has been ingested by the associative memory (102) or has been forgotten by the associative memory (102). The comparator (164) may be configured to compare the first and second tables (134, 136) to identify whether the first field (150) should be forgotten or ingested by the associative memory (102). The updater (166) may be configured to update the associative memory (102) by performing one of ingesting or forgetting the first field (150).
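A minimal sketch of one compare-and-update cycle, with invented table layouts and names (first_table, second_table, compare_and_update); only the ingest branch is shown, and the forget branch would remove the field symmetrically.

```python
# The second table tracks whether each first field has been ingested; the
# comparator checks the two tables and the updater applies the change.
first_table = [
    {"record_id": 144, "first_field": "part-A", "second_field": "do-not-ingest"},
]
second_table = {144: {"ingested": False}}   # ingest/forget status per record

associative_memory = set()     # stands in for the ingested associations

def compare_and_update():
    for rec in first_table:
        status = second_table[rec["record_id"]]
        if not status["ingested"]:
            # comparator decided the field should be ingested; updater ingests it
            associative_memory.add(rec["first_field"])   # second_field is skipped
            status["ingested"] = True
        # the symmetric branch would "forget" (remove) a field no longer wanted

compare_and_update()
print(associative_memory)   # {'part-A'}
```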
Abstract:
A method and apparatus are described for filtering a common input string (405) to generate various filtered comparand strings. Filtering a common input string enables concurrent lookups in different tables to be performed on multiple filtered comparands by different CAM devices (or different blocks (410-414) within a CAM device (400)), each comparing the data in its filtered comparand string with the data stored in its associative memory. By performing multiple lookups in parallel, rather than sequentially, packet throughput in a CAM may be significantly increased.
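A behavioral sketch of the filtering step: one input string yields several comparands, each looked up in its own table standing in for a CAM block; the field offsets and table contents are made up for illustration.

```python
# One common input string is filtered into per-table comparands; each "block"
# then looks up only its own comparand.
packet = "AABBCCDDEEFF" + "0A000001"     # pretend header: MAC | IPv4 (hex)

def filter_mac(s):  return s[:12]        # filter produces the MAC comparand
def filter_ip(s):   return s[12:20]      # filter produces the IP comparand

mac_table = {"AABBCCDDEEFF": "port 3"}   # "CAM block" for MAC lookups
ip_table  = {"0A000001": "next hop R1"}  # "CAM block" for IP lookups

# each block receives its own filtered comparand; in hardware these lookups
# proceed concurrently rather than one after the other
lookups = [(filter_mac(packet), mac_table), (filter_ip(packet), ip_table)]
print([table.get(comparand) for comparand, table in lookups])
# ['port 3', 'next hop R1']
```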
Abstract:
A content addressable memory comprises a CAM control logic unit (11) and a plurality of cells (10) connected in a chain. Each cell comprises a memory block (12) coupled to a common address bus (ADD), a comparator (14) coupled to a common data bus (DATA) and to the data interface of the memory block (12), switching means (15) coupling the data interface of the memory block with the data bus, and a logic block (13) including a Match flip-flop (16). The memory is operable in two phases, a Search phase and an Access phase. In the Search phase, a sequence of words on the common data bus (DATA) is serially matched with the contents of a sequence of addresses in the memory blocks (12) of the cells (10). In the Access phase, the cells matched in the Search phase are made serially available for access via the common address and data buses (ADD and DATA).
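A rough functional model of the two phases, with an invented Cell class: in the Search phase each cell serially matches a sequence of words against the contents at a sequence of addresses and latches a match flag, and in the Access phase the matched cells are then visited in turn.

```python
# Chain of cells, each with a memory block and a Match flip-flop.
class Cell:
    def __init__(self, contents):
        self.contents = contents   # memory block: address -> word
        self.match = False         # Match flip-flop

    def search(self, addresses, words):
        # Search phase: serially match a sequence of words on the data bus
        # against the contents at a sequence of addresses
        self.match = all(self.contents.get(a) == w for a, w in zip(addresses, words))

cells = [Cell({0: "foo", 1: "bar"}), Cell({0: "foo", 1: "baz"})]

for cell in cells:                        # Search phase over the chain of cells
    cell.search([0, 1], ["foo", "bar"])

# Access phase: matched cells are made available one after another
for i, cell in enumerate(cells):
    if cell.match:
        print("cell", i, "matched; contents:", cell.contents)
```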