Abstract:
Word phrases are stored in a phrase structure. Each word is stored as a keyword in a keyword structure. Each keyword is associated with usage attributes identifying the use of a word in a word phrase. Any preceding words associated with a keyword, and a mapping from any preceding words to a word phrase, are stored for each word. A word string is input. Match attributes are updated in a match structure if a word in the word string matches any keyword and if any preceding words associated with the matching keyword include a preceding word that precedes the word in the word string. The match attributes indicate use of the matching word in the word string and in a word phrase. Whether a word phrase is present in the word string is determined based on the usage attributes and the match attributes associated with multiple matching words.
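The following is a minimal sketch of how the keyword, preceding-word, and match structures described above could fit together; the class and function names (Keyword, build_keywords, find_phrases) and the use of plain dictionaries are illustrative assumptions, not details taken from the abstract.

```python
from dataclasses import dataclass, field

@dataclass
class Keyword:
    word: str
    # usage attributes: phrase -> position of this word within that phrase
    usage: dict = field(default_factory=dict)
    # preceding words mapped to the phrases in which they precede this word
    preceding: dict = field(default_factory=dict)

def build_keywords(phrases):
    """Store each word of each phrase as a keyword with usage and preceding-word info."""
    keywords = {}
    for phrase in phrases:
        words = phrase.split()
        for i, word in enumerate(words):
            kw = keywords.setdefault(word, Keyword(word))
            kw.usage[phrase] = i
            if i > 0:
                kw.preceding.setdefault(words[i - 1], set()).add(phrase)
    return keywords

def find_phrases(keywords, text):
    """Update match attributes for each matching word, then decide which phrases are present."""
    words = text.split()
    matches = {}  # match structure: phrase -> positions of that phrase seen in the input
    for i, word in enumerate(words):
        kw = keywords.get(word)
        if kw is None:
            continue
        for phrase, pos in kw.usage.items():
            if pos == 0:
                matches.setdefault(phrase, set()).add(pos)
            elif i > 0 and phrase in kw.preceding.get(words[i - 1], set()):
                matches.setdefault(phrase, set()).add(pos)
    # a phrase is present if every one of its word positions was matched
    return [p for p, pos in matches.items() if len(pos) == len(p.split())]

print(find_phrases(build_keywords(["data base", "base station"]), "the data base station"))
```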
Abstract:
In one embodiment, there is a multi-cluster synchronization system between two or more clusters. The multi-cluster synchronization system uses variable compression to optimize the transfer of information between the clusters. Compression is used not only to minimize the total number of bytes sent between the two clusters, but also to dynamically vary the size of the objects sent across the wire to optimize for higher throughput after considering packet loss, TCP windows, and block sizes. This includes both the packaging of multiple small files into one larger compressed file, saving on TCP and header overhead, and the chunking of large files into multiple smaller files that are less likely to encounter difficulties due to intermittent network congestion or errors. A further embodiment uses forward error correction to maximize the chances that the remote end will be able to correctly reconstitute the transmission.
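A hedged sketch of the variable-size packaging idea above: small files are bundled into one compressed archive, while a large payload is split into independently compressed chunks. The thresholds and names (CHUNK_SIZE, SMALL_FILE_LIMIT, package_small, chunk_large) are assumptions for illustration, not values from the abstract.

```python
import gzip
import io
import tarfile

CHUNK_SIZE = 4 * 1024 * 1024   # assumed chunk size tuned for the link
SMALL_FILE_LIMIT = 64 * 1024   # assumed threshold for "small" files

def package_small(files):
    """Bundle many small files into one compressed tarball to save per-transfer overhead."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz") as tar:
        for name, data in files.items():
            info = tarfile.TarInfo(name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()

def chunk_large(data):
    """Split one large payload into independently compressed chunks, which tolerate
    intermittent loss better than a single long transfer."""
    return [gzip.compress(data[i:i + CHUNK_SIZE])
            for i in range(0, len(data), CHUNK_SIZE)]

# Usage: route each object by size before sending it to the remote cluster.
payloads = {"a.cfg": b"alpha" * 10, "b.cfg": b"beta" * 10}
bundle = package_small(payloads)
chunks = chunk_large(b"x" * (10 * 1024 * 1024))
print(len(bundle), len(chunks))
```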
Abstract:
In accordance with an embodiment, described herein are systems and methods for providing an enterprise crawl and search framework, including features such as use with middleware and enterprise application environments, pluggable security, search development tools, user interfaces, and governance. The system includes an enterprise crawl and search framework that abstracts an underlying search engine, provides a common set of application programming interfaces for developing search functionality, and allows the framework to serve as an integration layer between one or more enterprise search engines and one or more enterprise applications. An application development framework allows an application developer to make searchable view objects that are associated with a plurality of enterprise applications. At runtime, a searchable object manager loads searchable objects from persistent storage, validates the searchable object definitions, and provides the searchable objects to the framework for use in searching across the plurality of enterprise applications.
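A minimal sketch of the searchable-object flow described above, with simple in-memory stand-ins for the persistent store, the validation step, and the underlying engine. All class and method names here are illustrative, not the framework's actual API.

```python
from dataclasses import dataclass

@dataclass
class SearchableObject:
    name: str
    application: str
    fields: list

class SearchableObjectManager:
    def __init__(self, store):
        self.store = store          # persistent storage stand-in: name -> definition dict
        self.objects = {}

    def load(self):
        """Load definitions, validate them, and keep only the valid searchable objects."""
        for name, definition in self.store.items():
            if self._validate(definition):
                self.objects[name] = SearchableObject(
                    name, definition["application"], definition["fields"])
        return self.objects

    @staticmethod
    def _validate(definition):
        # validation here is just a presence check on required keys
        return {"application", "fields"} <= definition.keys()

class SearchFramework:
    """Abstracts the underlying engine behind one common search call."""
    def __init__(self, engine, manager):
        self.engine = engine
        self.manager = manager

    def search(self, query):
        # search across every application that exposes a searchable object
        targets = self.manager.load().values()
        return self.engine.query(query, [t.name for t in targets])

class FakeEngine:
    def query(self, q, targets):
        return [(t, q) for t in targets]

store = {"PurchaseOrder": {"application": "ERP", "fields": ["id", "supplier"]}}
fw = SearchFramework(FakeEngine(), SearchableObjectManager(store))
print(fw.search("laptops"))
```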
Abstract:
In one embodiment, a method comprises creating and storing an ontology for a data store in response to receiving first user input defining the ontology, wherein the ontology comprises a plurality of data object types and a plurality of object property types; creating one or more parser definitions in response to receiving second user input defining the parser definitions, wherein each of the parser definitions specifies one or more sub-definitions of how to transform first input data into modified input data that is compatible with one of the object property types; and storing each of the one or more parser definitions in association with one of the plurality of object property types.
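A hedged sketch of the ontology and parser-definition idea above: each parser definition holds sub-definitions (here, a regular expression plus a converter) that transform raw input into a value compatible with one object property type. The names and the choice of regexes are assumptions for illustration.

```python
import re
from dataclasses import dataclass
from typing import Callable

@dataclass
class PropertyType:
    name: str
    base_type: type

@dataclass
class ParserDefinition:
    property_type: PropertyType
    pattern: str                      # sub-definition: how to recognize the raw value
    convert: Callable[[str], object]  # sub-definition: how to transform it

    def parse(self, raw):
        m = re.match(self.pattern, raw)
        if not m:
            raise ValueError(f"{raw!r} does not match {self.property_type.name}")
        return self.convert(m.group(1))

# ontology: object types with their property types
ontology = {
    "Person": [PropertyType("age", int), PropertyType("phone", str)],
}

# parser definitions stored in association with a property type
age_parser = ParserDefinition(ontology["Person"][0], r"(\d+)\s*years?", int)
phone_parser = ParserDefinition(ontology["Person"][1], r"\+?([\d\-]+)", str)

print(age_parser.parse("42 years"))       # -> 42
print(phone_parser.parse("+1-555-0100"))  # -> '1-555-0100'
```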
Abstract:
An information processor includes an information processing sub-system having information processing circuits and a memory sub-system performing data communication with the information processing sub-system. The memory sub-system has a first memory, a second memory, a third memory whose read and write latencies are longer than those of the first memory and the second memory, and a memory controller for controlling data transfer among the first memory, the second memory, and the third memory. Graph data is stored in the third memory. The memory controller analyzes data blocks that form part of the graph data and repeatedly performs a preloading operation, transferring the data blocks that will be required next for execution of the processing from the third memory to the first memory or the second memory on the basis of the result of the analysis.
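A minimal sketch, under assumed names, of the preloading loop described above: the controller looks at the block it is serving, predicts the blocks needed next (here, simply the successor block IDs recorded in the block), and copies them from the slow third memory into a small fast memory ahead of use.

```python
from collections import OrderedDict

class MemoryController:
    def __init__(self, third_memory, fast_capacity=2):
        self.third_memory = third_memory   # block_id -> graph block (slow memory)
        self.fast_memory = OrderedDict()   # small, fast cache of preloaded blocks
        self.fast_capacity = fast_capacity

    def _preload(self, block):
        """Analyze the current block and move the blocks it references into fast memory."""
        for next_id in block.get("next_blocks", []):
            if next_id not in self.fast_memory and next_id in self.third_memory:
                if len(self.fast_memory) >= self.fast_capacity:
                    self.fast_memory.popitem(last=False)   # evict the oldest block
                self.fast_memory[next_id] = self.third_memory[next_id]

    def fetch(self, block_id):
        """Serve a block from fast memory when possible, then preload its successors."""
        block = self.fast_memory.get(block_id) or self.third_memory[block_id]
        self._preload(block)
        return block

# Usage: graph blocks that name the blocks likely needed next.
graph = {
    0: {"vertices": [0, 1], "next_blocks": [1]},
    1: {"vertices": [2, 3], "next_blocks": [2]},
    2: {"vertices": [4], "next_blocks": []},
}
ctrl = MemoryController(graph)
for bid in (0, 1, 2):
    ctrl.fetch(bid)
print(list(ctrl.fast_memory))   # blocks preloaded ahead of their use
```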
Abstract:
A computer system accesses rows of feed data and converts the received feed data into portions of binary blob data. The computer system also sends the binary blob data to a database server that is configured to access metadata associated with a feed, including a dynamic server statement, to determine how to convert the binary blob data into a server table with a blob column configured to store the rows of feed data. The database server accesses feed data belonging to a particular feed and executes a dynamic server statement to create a relational dataset in an in-memory table of the server. A second dynamic statement applies the data processing conditions indicated in the metadata. When feed data rows match the conditions, the computer system places feed data row information into an alert table that includes references to the blob table with the blob data, thereby triggering an alert.
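A hedged sketch of the feed-to-blob flow above, using SQLite in place of the unnamed database server. The dynamic statements, the table names, and the JSON encoding of the rows are assumptions for illustration only.

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE feed_blobs (feed_id TEXT, blob BLOB)")
conn.execute("CREATE TABLE alerts (feed_id TEXT, row_ref INTEGER)")

# metadata for the feed, including the dynamic statements and an alert condition
metadata = {
    "unpack_stmt": "CREATE TEMP TABLE feed_rows (rownum INTEGER, amount REAL)",
    "condition": "amount > 100",
}

# 1. convert received feed rows into a binary blob and store it in the blob column
rows = [(1, 50.0), (2, 250.0), (3, 300.0)]
conn.execute("INSERT INTO feed_blobs VALUES (?, ?)",
             ("daily", json.dumps(rows).encode()))

# 2. execute the first dynamic statement to create an in-memory relational dataset
conn.execute(metadata["unpack_stmt"])
blob, = conn.execute("SELECT blob FROM feed_blobs WHERE feed_id = 'daily'").fetchone()
conn.executemany("INSERT INTO feed_rows VALUES (?, ?)", json.loads(blob))

# 3. apply the condition from the metadata with a second dynamic statement and
#    record matching rows in the alert table, which references back to the blob data
matching = conn.execute(
    f"SELECT rownum FROM feed_rows WHERE {metadata['condition']}").fetchall()
conn.executemany("INSERT INTO alerts VALUES ('daily', ?)", matching)
print(conn.execute("SELECT * FROM alerts").fetchall())
```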
Abstract:
Aspects of the invention relate generally to identifying and providing 3D models in response to a search request. More specifically, a server may access a database of 3D models, at least some of which include geolocation information such as an address, intersection, or geolocation coordinates. The server may select a particular model and identify points of interest. For example, the server may use detailed map information to identify points of interest located at or near the geolocation information associated with the particular model. Once a point of interest has been identified, a corresponding system tag may be generated and associated with the 3D model. Tags may be used to index, search, and retrieve 3D models in response to a search request. For example, when a request for a 3D model is received, the server identifies the search terms and searches the tags to identify relevant 3D models.
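A minimal sketch of the tagging-and-retrieval idea above. The distance threshold, the crude flat-earth distance calculation, and all names are assumptions made only to keep the example short.

```python
from dataclasses import dataclass, field
import math

@dataclass
class Model3D:
    model_id: str
    lat: float
    lng: float
    tags: set = field(default_factory=set)

POIS = [("Ferry Building", 37.7955, -122.3937),
        ("Union Square", 37.7880, -122.4075)]

def tag_models(models, pois, max_km=0.5):
    """Attach a system tag for every point of interest near a model's geolocation."""
    for m in models:
        for name, lat, lng in pois:
            # crude flat-earth distance, adequate at city scale
            d_km = math.hypot(lat - m.lat,
                              (lng - m.lng) * math.cos(math.radians(lat))) * 111
            if d_km <= max_km:
                m.tags.add(name.lower())

def search(models, query):
    """Return models whose tags cover every search term."""
    terms = set(query.lower().split())
    return [m.model_id for m in models
            if all(any(t in tag for tag in m.tags) for t in terms)]

models = [Model3D("pier", 37.7956, -122.3936), Model3D("store", 37.7881, -122.4074)]
tag_models(models, POIS)
print(search(models, "ferry building"))   # -> ['pier']
```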
Abstract:
Database processing uses columns to present decompressed column data to a processing unit without changing the underlying row-based database architecture. For some embodiments, a database accelerator is used to efficiently process the columns of a database and output tuples to a processing unit's memory, so that the columns can be processed quickly (with the advantages of a column-based architecture) to create tuples of the requested data, without having to depart from a row-based architecture at the processing-unit level or having decompressed data scattered throughout the processing unit's memory.
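A hedged sketch of the accelerator idea above: compressed column data is decompressed per column and delivered to the consumer as contiguous row tuples rather than scattered column arrays. Run-length encoding stands in for whatever compression the real accelerator would use.

```python
def rle_decode(pairs):
    """Decompress one run-length-encoded column into a flat list of values."""
    out = []
    for value, count in pairs:
        out.extend([value] * count)
    return out

def accelerate(requested_columns, compressed_columns):
    """Decompress only the requested columns and emit row tuples for the processing unit."""
    decoded = {name: rle_decode(compressed_columns[name]) for name in requested_columns}
    # zip the decoded columns back into tuples so the consumer sees row-shaped data
    return list(zip(*(decoded[name] for name in requested_columns)))

# Usage: two RLE-compressed columns turned into tuples of the requested columns.
compressed = {
    "region": [("east", 2), ("west", 1)],
    "sales":  [(100, 1), (200, 2)],
}
print(accelerate(["region", "sales"], compressed))
# -> [('east', 100), ('east', 200), ('west', 200)]
```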
Abstract:
Several different embodiments of a segmented object storage system are described. The object storage system divides files into a number of object segments, each segment corresponding to a portion of the object, and stores each segment individually in the cloud storage system. The system also generates and stores a manifest file describing the relationship of the various segments to the original data file. Requests to retrieve the segmented file are fulfilled by consulting the manifest file and using the information from the manifest to reconstitute the original data file from the constituent segments. Modifying, appending to, or truncating the object is accomplished by manipulating individual segments and the manifest file. In further embodiments, manipulation of the individual object segments and/or the manifest is used to implement copy-on-write, snapshotting, software transactional memory, and peer-to-peer transmission of the large file.
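A minimal sketch of segment-plus-manifest storage, with an in-memory dictionary standing in for the cloud object store. The segment size, the naming scheme, and the hash check are assumptions for illustration.

```python
import hashlib

SEGMENT_SIZE = 4  # tiny for the example; real segments would be megabytes

def store_segmented(store, name, data):
    """Split the object into segments, store each one, and write a manifest."""
    manifest = {"object": name, "segments": []}
    for i in range(0, len(data), SEGMENT_SIZE):
        seg = data[i:i + SEGMENT_SIZE]
        seg_name = f"{name}/segment-{i // SEGMENT_SIZE:06d}"
        store[seg_name] = seg
        manifest["segments"].append({"name": seg_name,
                                     "size": len(seg),
                                     "etag": hashlib.md5(seg).hexdigest()})
    store[f"{name}/manifest"] = manifest
    return manifest

def retrieve_segmented(store, name):
    """Consult the manifest and reconstitute the original object from its segments."""
    manifest = store[f"{name}/manifest"]
    parts = []
    for entry in manifest["segments"]:
        seg = store[entry["name"]]
        assert hashlib.md5(seg).hexdigest() == entry["etag"], "corrupt segment"
        parts.append(seg)
    return b"".join(parts)

store = {}
store_segmented(store, "video.bin", b"abcdefghij")
print(retrieve_segmented(store, "video.bin"))   # -> b'abcdefghij'
```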