Abstract:
A method and an apparatus for characterizing performance of a device based on user-perceivable latency. To characterize device performance, a value of a metric may be computed from latencies of operations performed by the device. In computing a value of the metric, latencies may be treated differently, such that some latencies perceivable by a user of the device may have a greater impact on the value of the metric than other latencies that either are not perceivable or are perceived by the user to a lesser degree. Such a performance metric based on user-perceivable latency facilitates identification of computing devices that provide a desirable user experience.
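A minimal sketch of the idea of weighting latencies by how perceivable they are, assuming an illustrative perception threshold and weighting scheme that are not taken from the abstract; the function name and thresholds are hypothetical.

```python
# Sketch of a user-perceivable-latency metric (illustrative thresholds and weights).
from typing import Iterable

PERCEPTION_THRESHOLD_MS = 100.0   # assumed: latencies below this are treated as imperceptible
ANNOYANCE_THRESHOLD_MS = 500.0    # assumed: latencies above this are weighted more heavily

def perceivable_latency_score(latencies_ms: Iterable[float]) -> float:
    """Weight each operation latency by how perceivable it likely is,
    then return the average weighted latency as the metric value."""
    total = 0.0
    count = 0
    for latency in latencies_ms:
        if latency < PERCEPTION_THRESHOLD_MS:
            weight = 0.0          # not perceivable: no impact on the metric
        elif latency < ANNOYANCE_THRESHOLD_MS:
            weight = 1.0          # perceivable: normal impact
        else:
            weight = 2.0          # clearly perceivable: greater impact
        total += weight * latency
        count += 1
    return total / count if count else 0.0

# Example: the sub-threshold latencies are ignored; the 800 ms operation dominates the score.
print(perceivable_latency_score([20, 50, 120, 800]))
```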
Abstract:
A proactive, resilient and self-tuning memory management system and method that result in actual and perceived performance improvements in memory management, by loading and maintaining data that is likely to be needed into memory, before the data is actually needed. The system includes mechanisms directed towards historical memory usage monitoring, memory usage analysis, refreshing memory with highly-valued (e.g., highly utilized) pages, I/O pre-fetching efficiency, and aggressive disk management. Based on the memory usage information, pages are prioritized with relative values, and mechanisms work to pre-fetch and/or maintain the more valuable pages in memory. Pages are pre-fetched and maintained in a prioritized standby page set that includes a number of subsets, by which more valuable pages remain in memory over less valuable pages. Valuable data that is paged out may be automatically brought back, in a resilient manner. Benefits include significantly reducing or even eliminating disk I/O due to memory page faults.
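A minimal sketch of one way a prioritized standby page set organized into subsets could be represented, assuming integer priority buckets and a simple rule that reclaims from the least valuable subset first; the class name, bucket count, and eviction rule are illustrative, not the patented mechanism.

```python
# Sketch: standby page set split into priority subsets, with more valuable
# pages surviving longer because eviction starts at the lowest-value subset.
from collections import deque

NUM_PRIORITY_SUBSETS = 8  # assumed bucket count

class StandbyPageSet:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.size = 0
        # One FIFO per priority; higher index = more valuable pages.
        self.subsets = [deque() for _ in range(NUM_PRIORITY_SUBSETS)]

    def add(self, page_id: int, priority: int) -> None:
        """Place a page into the subset for its priority, evicting a
        lower-valued page first if the set is full."""
        if self.size >= self.capacity:
            if not self._evict_below(priority):
                return  # no less-valuable page to evict; drop the incoming page
        self.subsets[priority].append(page_id)
        self.size += 1

    def _evict_below(self, priority: int) -> bool:
        for p in range(priority):          # scan least valuable subsets first
            if self.subsets[p]:
                self.subsets[p].popleft()
                self.size -= 1
                return True
        return False
```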
Abstract:
Various embodiments provide a search tool that utilizes multiple different search engines. The individual search engines are configured to conduct searches in different ways across a search space that includes different types of data sets. In at least some embodiments, the type of search engine that is utilized is a function of characteristics of the data set(s) that is (are) to be searched. In search spaces that include different types of data sets, combining and mixing different search engines to collectively search the search space can provide a desirably fast and robust user experience.
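A minimal sketch of selecting a search engine per data set as a function of that data set's characteristics and combining the results; the engine implementations and the selection rule (indexed and large versus small or unindexed) are assumptions for illustration only.

```python
# Sketch: pick a search engine per data set based on its characteristics,
# then merge results across engines (illustrative selection rule).
from typing import Callable, Dict, List

def index_backed_search(query: str, items: List[str]) -> List[str]:
    # Placeholder for an engine that would consult a prebuilt index.
    return [item for item in items if query.lower() in item.lower()]

def linear_scan_search(query: str, items: List[str]) -> List[str]:
    # Placeholder for an engine that scans small or unindexed data directly.
    return [item for item in items if query.lower() in item.lower()]

def choose_engine(dataset: Dict) -> Callable[[str, List[str]], List[str]]:
    """Assumed rule: large, indexed data sets use the index-backed engine;
    everything else falls back to a direct scan."""
    if dataset.get("indexed") and dataset.get("size", 0) > 10_000:
        return index_backed_search
    return linear_scan_search

def combined_search(query: str, datasets: List[Dict]) -> List[str]:
    results: List[str] = []
    for ds in datasets:
        engine = choose_engine(ds)
        results.extend(engine(query, ds["items"]))
    return results
```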
Abstract:
Two different process management views can be displayed, and a user can request to switch between the two views. The user can select a process in either view and have the selected process terminated. One view is a simplified view that identifies processes and whether they are non-responsive. The other view is an expanded view that identifies processes and the amount of various system resources used by each of those processes. Various additional information can be displayed in the expanded view, such as identifiers of various windows, tabs, and/or services associated with each of the processes.
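A minimal sketch of a single process record backing the two views described, assuming hypothetical field names and text-only rendering; real implementations would draw UI rather than return strings.

```python
# Sketch: one process record rendered as either a simplified or an expanded view.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ProcessInfo:
    name: str
    responsive: bool
    cpu_percent: float
    memory_mb: float
    windows: List[str] = field(default_factory=list)  # window/tab/service identifiers

def render_simplified(processes: List[ProcessInfo]) -> List[str]:
    """Simplified view: process name plus whether it is non-responsive."""
    return [f"{p.name}{'  (not responding)' if not p.responsive else ''}" for p in processes]

def render_expanded(processes: List[ProcessInfo]) -> List[str]:
    """Expanded view: per-process resource usage and associated windows/tabs/services."""
    return [
        f"{p.name}  CPU {p.cpu_percent:.0f}%  Mem {p.memory_mb:.0f} MB  [{', '.join(p.windows)}]"
        for p in processes
    ]
```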
Abstract:
Indexing documents is performed using low priority I/O requests. This aspect can be implemented in systems having an operating system that supports at least two priority levels for I/O requests to its file system. Low priority I/O requests can be used for accessing documents to be indexed. Low priority I/O requests can also be used for writing information into the index. Higher priority requests can be used for I/O requests that access the index in response to queries from a user. I/O request priority can be set on a per-thread basis as opposed to being set on a per-process basis (a single process may generate two or more threads for which it may be desirable to assign different priorities).
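A minimal sketch of per-thread low-priority I/O on Windows, using the documented background-mode thread priority constants via ctypes; this approximates the behavior the abstract describes rather than reproducing the patented mechanism, and `index_one_document` is a hypothetical callback supplied by the caller.

```python
# Sketch: lower the calling thread's I/O priority around indexing work
# using Windows background mode (Windows-only; an approximation of the
# low-priority I/O described, not necessarily the patented mechanism).
import ctypes

THREAD_MODE_BACKGROUND_BEGIN = 0x00010000  # documented Windows constants
THREAD_MODE_BACKGROUND_END = 0x00020000

def index_documents_at_low_priority(index_one_document, documents):
    kernel32 = ctypes.windll.kernel32
    thread = kernel32.GetCurrentThread()          # pseudo-handle for the current thread
    kernel32.SetThreadPriority(thread, THREAD_MODE_BACKGROUND_BEGIN)
    try:
        for doc in documents:                     # document reads and index writes now issue low-priority I/O
            index_one_document(doc)               # hypothetical callback: reads a document and updates the index
    finally:
        kernel32.SetThreadPriority(thread, THREAD_MODE_BACKGROUND_END)
```

Query threads would simply not enter background mode, so their index reads keep normal priority, matching the per-thread split the abstract describes.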