Abstract:
In a system in which transactions are accelerated with asynchronous writes that require acknowledgements, and in which writes are pre-acknowledged at their source, a destination-side transaction accelerator includes a queue for queuing writes to a destination, at least some of the writes being pre-acknowledged by a source-side transaction accelerator before the write completes at the destination; a memory for storing a status of the destination-side queue and possibly other determinants; and logic for signaling the source-side transaction accelerator with instructions to alter its pre-acknowledgement rules, holding off on or resuming pre-acknowledgements based on the destination-side queue status. The rules can also adjust the flow of pre-acknowledged requests or pre-acknowledgements at the source-side transaction accelerator based at least on a computed logical length of the destination-side queue.
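A minimal Python sketch of the destination-side signaling described above. The hold/resume thresholds, the logical-length metric (total bytes of pending writes), and all class and message names are assumptions for illustration, not details taken from the abstract.

from collections import deque

HOLD_THRESHOLD = 64 * 1024      # assumed: stop pre-acknowledging above this backlog
RESUME_THRESHOLD = 16 * 1024    # assumed: resume pre-acknowledging below this backlog

class DestinationAccelerator:
    def __init__(self, signal_source):
        self.pending = deque()               # queued writes not yet completed at the destination
        self.signal_source = signal_source   # callback to the source-side accelerator
        self.holding = False

    def logical_length(self):
        # Logical length here is the total bytes awaiting completion.
        return sum(len(data) for _, data in self.pending)

    def enqueue_write(self, write_id, data):
        self.pending.append((write_id, data))
        self._update_rules()

    def complete_write(self, write_id):
        self.pending = deque(w for w in self.pending if w[0] != write_id)
        self._update_rules()

    def _update_rules(self):
        backlog = self.logical_length()
        if not self.holding and backlog > HOLD_THRESHOLD:
            self.holding = True
            self.signal_source("HOLD_PREACKS")      # tell the source to stop pre-acknowledging
        elif self.holding and backlog < RESUME_THRESHOLD:
            self.holding = False
            self.signal_source("RESUME_PREACKS")    # tell the source it may pre-acknowledge again

if __name__ == "__main__":
    dest = DestinationAccelerator(signal_source=print)
    for i in range(80):
        dest.enqueue_write(i, b"x" * 1024)   # backlog grows past the hold threshold
    for i in range(80):
        dest.complete_write(i)               # backlog drains below the resume threshold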
Abstract:
The present invention organizes unstructured information into structured information and presents the structured information so that users are able to perform research efficiently and effectively. The present invention includes developing a parameterized template that is used to organize the unstructured data. Editors, with the help of a data analysis application, search through the unstructured information and organize it using the parameterized template. After the information is properly organized, it is presented to users in a user-friendly format that enables them to quickly and easily search for specific elements in the information. Furthermore, the information is presented in a manner that allows other tasks, such as comparisons, to be performed on the organized data.
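A minimal sketch of the parameterized-template idea, assuming a simple field/pattern template applied with regular expressions; the field names and patterns are purely illustrative stand-ins for what an editor might define in the data analysis application.

import re

# Hypothetical template: each parameter maps to a pattern an editor might tune.
TEMPLATE = {
    "model":    r"Model:\s*(?P<value>\S+)",
    "price":    r"Price:\s*\$(?P<value>[\d.]+)",
    "capacity": r"Capacity:\s*(?P<value>\d+\s*GB)",
}

def apply_template(unstructured_text, template):
    """Return a structured record extracted from free-form text."""
    record = {}
    for field, pattern in template.items():
        match = re.search(pattern, unstructured_text)
        record[field] = match.group("value") if match else None
    return record

raw = "New listing. Model: X200 Price: $199.99 Capacity: 64 GB, ships soon."
print(apply_template(raw, TEMPLATE))
# {'model': 'X200', 'price': '199.99', 'capacity': '64 GB'}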
Abstract:
A network stack includes a packet loss analyzer that distinguishes between packet losses due to congestion and those due to the lossiness of network connections. The loss analyzer observes the packet loss patterns for comparison with a packet loss model. The packet loss model may be based on a Forward Error Correction (FEC) system. The loss analyzer determines whether lost packets could have been recovered by a receiving network device if FEC had been used. If the lost packets could have been corrected by FEC, the loss analyzer assumes that no network congestion exists and that the packet loss comes from the lossy aspects of the network, such as radio interference for wireless networks. If the loss analyzer determines that some of the lost packets could not have been recovered by the receiving network device, the loss analyzer assumes that network congestion caused these packet losses and reduces the data rate.
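A minimal sketch of this loss classification, assuming a block FEC model in which each block of data packets carries a fixed number of parity packets and any loss pattern within the parity budget is recoverable; the block size, parity count, and rate-reduction rule are illustrative assumptions.

def classify_losses(lost_seq_numbers, block_size=10, parity_per_block=2):
    """Return 'congestion' if any FEC block lost more packets than FEC could
    have repaired, otherwise 'lossy_link' (e.g., wireless interference)."""
    losses_per_block = {}
    for seq in lost_seq_numbers:
        block = seq // block_size
        losses_per_block[block] = losses_per_block.get(block, 0) + 1
    if any(count > parity_per_block for count in losses_per_block.values()):
        return "congestion"
    return "lossy_link"

def adjust_rate(current_rate_bps, lost_seq_numbers):
    # Only back off when the loss pattern looks like congestion.
    if classify_losses(lost_seq_numbers) == "congestion":
        return current_rate_bps // 2      # assumed multiplicative decrease
    return current_rate_bps

print(classify_losses([3, 17, 42]))        # scattered losses -> 'lossy_link'
print(classify_losses([20, 21, 22, 23]))   # burst within one block -> 'congestion'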
Abstract:
Serial clustering uses two or more network devices connected in series via a local and/or wide-area network to provide additional capacity when network traffic exceeds the processing capabilities of a single network device. When a first network device reaches its capacity limit, any excess network traffic beyond that limit is passed through the first network device unchanged. A network device connected in series with the first network device intercepts and processes the excess network traffic, provided that it has sufficient processing capacity. Additional network devices can process remaining network traffic in a similar manner until all of the excess network traffic has been processed or until there are no more additional network devices. Network devices may use rules to determine how to handle network traffic. Rules may be based on the attributes of received network packets, attributes of the network device, or attributes of the network.
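A minimal sketch of the pass-through behavior described above, assuming a fixed flow-count capacity as the overload criterion and a simple packet-attribute rule; the device names, capacities, and rule are illustrative.

class SerialDevice:
    def __init__(self, name, capacity, downstream=None):
        self.name = name
        self.capacity = capacity          # max flows this device will process
        self.active = 0
        self.downstream = downstream      # next device in the series, if any

    def matches_rules(self, packet):
        # Example rule: only handle TCP traffic to port 80 or 443.
        return packet.get("proto") == "tcp" and packet.get("dport") in (80, 443)

    def handle(self, packet):
        if self.matches_rules(packet) and self.active < self.capacity:
            self.active += 1
            return f"processed by {self.name}"
        # Over capacity or rule mismatch: pass the packet through unchanged.
        if self.downstream is not None:
            return self.downstream.handle(packet)
        return "passed through unprocessed"

# Two devices in series: the second absorbs traffic the first cannot take.
second = SerialDevice("device-2", capacity=2)
first = SerialDevice("device-1", capacity=1, downstream=second)
pkt = {"proto": "tcp", "dport": 80}
print([first.handle(pkt) for _ in range(4)])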
Abstract:
An accessory stationery product is provided that includes at least one dry erase board surface that is adapted to be detachably mounted within or with respect to a spiral bound notebook. The accessory product includes structural feature(s) that allow a user to insert the dry erase surface into engagement with the wire binder of a spiral bound notebook. The inserts are compatible with a range of standard/conventional spiral bound notebooks, i.e., spiral bound notebooks manufactured by various manufacturers and offered under various brand names. The accessory device addresses fundamental shortcomings associated with conventional spiral bound notebook design/use by giving users a cleanly erasable and reusable work space for notes and scratch work within the spiral bound notebook, thereby permitting the user to save paper and stay organized. Thus, the accessory device allows the user to have two spaces within a notebook: a versatile, dry erase space for temporary notes and scratch work, and the rest of the notebook for more permanent work.
Abstract:
Virtual storage arrays consolidate branch data storage at data centers connected via wide area networks. Virtual storage arrays appear to storage clients as local data storage; however, virtual storage arrays actually store data at the data center. Virtual storage arrays overcome bandwidth and latency limitations of the wide area network by predicting and prefetching storage blocks, which are then cached at the branch location. Virtual storage arrays leverage an understanding of the semantics and structure of high-level data structures associated with storage blocks to predict which storage blocks are likely to be requested by a storage client in the near future. Virtual storage arrays determine the association between requested storage blocks and corresponding high-level data structure entities to predict additional high-level data structure entities that are likely to be accessed. From this, the virtual storage array identifies the additional storage blocks for prefetching.
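A minimal sketch of semantics-aware prefetching, assuming a hypothetical block-to-file mapping and a simple "next related file" heuristic; the mapping, heuristic, and cache are all illustrative, not the claimed implementation.

BLOCK_TO_FILE = {101: "a.mp3", 102: "a.mp3", 201: "b.mp3", 202: "b.mp3"}
FILE_TO_BLOCKS = {"a.mp3": [101, 102], "b.mp3": [201, 202]}
NEXT_FILE = {"a.mp3": "b.mp3"}          # e.g., the next file in the same directory

branch_cache = {}                        # blocks cached at the branch location

def fetch_from_data_center(block):
    return f"<data for block {block}>"   # stands in for a WAN read

def read_block(block):
    if block in branch_cache:
        return branch_cache[block]       # local hit: WAN latency avoided
    data = fetch_from_data_center(block)
    branch_cache[block] = data
    prefetch_related(block)
    return data

def prefetch_related(block):
    # Map the block to its high-level entity, predict the next entity,
    # and prefetch that entity's blocks before the client asks for them.
    current = BLOCK_TO_FILE.get(block)
    predicted = NEXT_FILE.get(current)
    for b in FILE_TO_BLOCKS.get(predicted, []):
        branch_cache.setdefault(b, fetch_from_data_center(b))

read_block(101)
print(sorted(branch_cache))   # the blocks of b.mp3 were prefetched: [101, 201, 202]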
Abstract:
Methods, systems and apparatus, including computer programs encoded on a computer storage medium, for disambiguating names in a document corpus. In an aspect, a method includes generating context term lists for a person name, each context term list being a list of context terms from a resource for the person name; clustering the context term lists into a plurality of clusters, each of the clusters of context term lists including context term lists that are most similar to the cluster relative to other clusters; for each of the clusters, selecting a representative term for the cluster; receiving the person name as a search query; and generating a plurality of query suggestions from the search query and the representative terms for the clusters, each query suggestion being a combination of the person name and one representative term.
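A minimal sketch of this clustering-and-suggestion flow, using a greedy Jaccard-similarity clustering and a most-frequent-term representative; the similarity threshold, the sample term lists, and the clustering method are illustrative assumptions rather than the claimed algorithm.

from collections import Counter

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_term_lists(term_lists, threshold=0.2):
    clusters = []
    for terms in term_lists:
        for cluster in clusters:
            if jaccard(terms, cluster[0]) >= threshold:
                cluster.append(terms)
                break
        else:
            clusters.append([terms])
    return clusters

def representative_term(cluster, name_tokens):
    counts = Counter(t for terms in cluster for t in terms if t not in name_tokens)
    return counts.most_common(1)[0][0]

def suggestions_for(person_name, term_lists):
    clusters = cluster_term_lists(term_lists)
    reps = [representative_term(c, set(person_name.lower().split())) for c in clusters]
    return [f"{person_name} {rep}" for rep in reps]

# Context term lists drawn from resources mentioning the same name.
term_lists = [
    ["guitarist", "album", "tour"],
    ["album", "guitarist", "band"],
    ["surgeon", "hospital", "clinic"],
]
print(suggestions_for("John Smith", term_lists))
# ['John Smith guitarist', 'John Smith surgeon']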
Abstract:
Methods, systems, and apparatus, including computer program products, for generating synthetic queries using seed queries and structural similarity between documents are described. In one aspect, a method includes identifying embedded coding fragments (e.g., HTML tags) from a structured document and a seed query; generating one or more query templates, each query template corresponding to at least one coding fragment, the query template including a generative rule to be used in generating candidate synthetic queries; generating the candidate synthetic queries by applying the query templates to other documents that are hosted on the same web site as the document; identifying terms that match the structure of the query templates as candidate synthetic queries; measuring a performance for each of the candidate synthetic queries; and designating as synthetic queries the candidate synthetic queries that have performance measurements exceeding a performance threshold.
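A minimal sketch of template-based synthetic query generation, assuming the seed query appears inside a specific HTML tag and that query performance is scored by a stand-in function; the tags, documents, scoring, and threshold are all illustrative assumptions.

import re

def build_template(seed_document, seed_query):
    # Find a tag whose contents contain the seed query and keep its generative
    # rule: "take the text of this tag as a candidate query."
    for tag, text in re.findall(r"<(h1|title|b)>(.*?)</\1>", seed_document):
        if seed_query.lower() in text.lower():
            return tag
    return None

def generate_candidates(tag, other_documents):
    candidates = []
    for doc in other_documents:
        candidates += [t for _, t in re.findall(rf"<({tag})>(.*?)</\1>", doc)]
    return candidates

def performance(query):
    # Stand-in for measuring retrieval quality of the candidate query.
    return len(query.split()) / 10.0

seed_doc = "<html><h1>acme widget 3000</h1><p>specs...</p></html>"
site_docs = ["<html><h1>acme gadget pro</h1></html>",
             "<html><h1>acme sprocket mini deluxe edition</h1></html>"]

tag = build_template(seed_doc, "acme widget 3000")
candidates = generate_candidates(tag, site_docs)
synthetic = [q for q in candidates if performance(q) > 0.35]
print(tag, synthetic)   # 'h1' and the candidates that pass the threshold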
Abstract:
In one implementation, processor testing may include the ability to randomly generate a first plurality of branch instructions for a first portion of an instruction set, each branch instruction in the first portion branching to a respective instruction in a second portion of the instruction set, the branching of the branch instructions to the respective instructions being arranged in a sequential manner. Processor testing may also include the ability to randomly generate a second plurality of branch instructions for the second portion of the instruction set, each branch instruction in the second portion branching to a respective instruction in the first portion of the instruction set, the branching of the branch instructions to the respective instructions being arranged in a sequential manner. Processor testing may additionally include the ability to generate a plurality of instructions to increment a counter when each branch instruction is encountered during execution.
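A minimal sketch of the branch-pattern generation described above, emitted as pseudo-assembly text; the labels, mnemonics, register names, and random condition choice are illustrative rather than any particular instruction set.

import random

def generate_test(n):
    conditions = ["beq", "bne", "blt", "bge"]     # assumed conditional branches
    first, second = [], []
    for i in range(n):
        # Each branch in the first portion targets the i-th slot of the second
        # portion, and vice versa, so execution ping-pongs sequentially, with a
        # counter increment placed at each branch target.
        first.append(f"A{i}: addi r1, r1, 1")                 # increment counter
        first.append(f"     {random.choice(conditions)} r2, r3, B{i}")
        second.append(f"B{i}: addi r1, r1, 1")                # increment counter
        target = f"A{i + 1}" if i + 1 < n else "done"
        second.append(f"     {random.choice(conditions)} r2, r3, {target}")
    return "\n".join(first + second + ["done: halt"])

print(generate_test(3))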
Abstract:
A network analysis system invokes an application specific, or source-destination specific, path discovery process. The application specific path discovery process determines the path(s) used by the application, collects performance data from the nodes along the path, and communicates this performance data to the network analysis system for subsequent performance analysis. The system may also maintain a database of prior network configurations to facilitate the identification of off-path nodes that may affect the current performance of the application. The system may also be specifically controlled so as to identify the path between any pair of specified nodes, and to optionally collect performance data associated with the path.
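A minimal sketch of source-destination path discovery over a known topology, assuming a hypothetical adjacency map and per-node metric store; a real system would query routing tables or device management interfaces rather than these dictionaries.

from collections import deque

TOPOLOGY = {"client": ["rtr1"], "rtr1": ["rtr2", "rtr3"],
            "rtr2": ["server"], "rtr3": ["server"], "server": []}
NODE_METRICS = {"rtr1": {"cpu": 31, "drops": 0},
                "rtr2": {"cpu": 87, "drops": 120},
                "rtr3": {"cpu": 12, "drops": 0}}

def discover_path(src, dst):
    # Breadth-first search stands in for tracing the route the application uses.
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in TOPOLOGY.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

def collect_performance(path):
    # Gather metrics from the nodes along the discovered path.
    return {node: NODE_METRICS[node] for node in path if node in NODE_METRICS}

path = discover_path("client", "server")
print(path)                      # e.g. ['client', 'rtr1', 'rtr2', 'server']
print(collect_performance(path))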