Abstract:
A method of assigning virtual memory to physical memory in a data processing system allocates a set of contiguous physical memory pages for a new page mapping, instructs the memory controller to move the virtual memory pages according to the new page mapping, and then allows access to the virtual memory pages using the new page mapping while the memory controller is still copying the virtual memory pages to the set of physical memory pages. The memory controller can use a mapping table which temporarily stores entries of the old and new page addresses, and releases the entries as copying for each entry is completed. The translation lookaside buffer (TLB) entries in the processor cores are updated for the new page addresses prior to completion of copying of the memory pages by the memory controller. The invention can be extended to non-uniform memory access (NUMA) systems. For systems with cache memory, any cache entry which is affected by the page move can be updated by modifying its address tag according to the new page mapping. This tag modification may be limited to cache entries in a dirty coherency state. The cache can further relocate a cache entry based on a changed congruence class for any modified address tag.
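The overlap between copying and continued access can be pictured with a small mapping-table simulation. The sketch below is illustrative only; the names (migrate_begin, migrate_complete, resolve_pfn) and the single-flag table layout are assumptions, not the patent's actual hardware. It shows how an access issued under the new mapping could be redirected to the old physical page until the copy engine clears the pending flag and releases the entry.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define TABLE_SLOTS 8

/* One in-flight migration entry: old and new physical page frame
 * numbers, plus a flag that is cleared when the copy completes. */
typedef struct {
    uint64_t old_pfn;
    uint64_t new_pfn;
    bool     copy_pending;
    bool     valid;
} migrate_entry_t;

static migrate_entry_t table[TABLE_SLOTS];

/* Record a page move before the data has actually been copied. */
static int migrate_begin(uint64_t old_pfn, uint64_t new_pfn)
{
    for (int i = 0; i < TABLE_SLOTS; i++) {
        if (!table[i].valid) {
            table[i] = (migrate_entry_t){ old_pfn, new_pfn, true, true };
            return i;
        }
    }
    return -1;               /* table full: caller must wait */
}

/* Called by the copy engine when a page has been fully copied;
 * the entry is released at that point. */
static void migrate_complete(int slot)
{
    table[slot].copy_pending = false;
    table[slot].valid = false;
}

/* Resolve an access issued with the *new* mapping: while the copy is
 * still pending, the data must be served from the old physical page. */
static uint64_t resolve_pfn(uint64_t pfn)
{
    for (int i = 0; i < TABLE_SLOTS; i++) {
        if (table[i].valid && table[i].copy_pending &&
            table[i].new_pfn == pfn)
            return table[i].old_pfn;
    }
    return pfn;
}

int main(void)
{
    int slot = migrate_begin(0x100, 0x2a0);
    printf("access 0x2a0 -> 0x%llx (copy pending)\n",
           (unsigned long long)resolve_pfn(0x2a0));
    migrate_complete(slot);
    printf("access 0x2a0 -> 0x%llx (copy done)\n",
           (unsigned long long)resolve_pfn(0x2a0));
    return 0;
}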
Abstract:
A multiprocessor system includes a plurality of data processing nodes. Each node has a processor coupled to a system memory, a cache memory, and a cache directory. The cache directory contains cache coherency information for a predetermined range of system memory addresses. An interconnection enables the nodes to exchange messages. A node initiating a function shipping request identifies an intermediate destination directory based on a list of the function's operands and sends a message indicating the function and its corresponding operands to the identified destination directory. The destination cache directory determines a target node based, at least in part, on its cache coherency status information to reduce memory access latency by selecting a target node where all or some of the operands are valid in the local cache memory. The destination directory then ships the function to the target node over the interconnection.
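A minimal sketch of the target-selection step, under assumed types (directory_t, coh_state_t) and a hypothetical choose_target_node helper: the destination directory counts, per node, how many of the function's operands are valid in that node's local cache and ships the function to the node with the most hits.

#include <stdio.h>

#define NUM_NODES    4
#define NUM_OPERANDS 3

/* Coherency states tracked per (operand line, node) in the directory. */
typedef enum { INVALID = 0, SHARED, MODIFIED } coh_state_t;

/* Directory slice: for each operand's cache line, which nodes hold it. */
typedef struct {
    coh_state_t state[NUM_OPERANDS][NUM_NODES];
} directory_t;

/* Pick the node with the most operands valid in its local cache, so the
 * shipped function incurs as little remote memory access as possible. */
static int choose_target_node(const directory_t *dir)
{
    int best_node = 0, best_hits = -1;
    for (int n = 0; n < NUM_NODES; n++) {
        int hits = 0;
        for (int op = 0; op < NUM_OPERANDS; op++)
            if (dir->state[op][n] != INVALID)
                hits++;
        if (hits > best_hits) {
            best_hits = hits;
            best_node = n;
        }
    }
    return best_node;
}

int main(void)
{
    directory_t dir = {0};
    dir.state[0][2] = MODIFIED;   /* operand 0 dirty in node 2  */
    dir.state[1][2] = SHARED;     /* operand 1 shared in node 2 */
    dir.state[2][1] = SHARED;     /* operand 2 shared in node 1 */
    printf("ship function to node %d\n", choose_target_node(&dir));
    return 0;
}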
Abstract:
A method for a framework for scheduling tasks in a multi-core processor or multiprocessor system is provided in the illustrative embodiments. A thread is selected according to an order in a scheduling discipline, the thread being a thread of an application executing in the data processing system, the thread forming the leader thread in a bundle of threads. A value of a core attribute in a set of core attributes is determined according to a corresponding thread attribute in a set of thread attributes associated with the leader thread. A determination is made whether a second thread can be added to the bundle such that the bundle including the second thread will satisfy a policy. If the determining is affirmative, the second thread is added to the bundle. The bundle is scheduled for execution using a core of the multi-core processor.
Abstract:
A system and computer usable program product for a framework for scheduling tasks in a multi-core processor or multiprocessor system are provided in the illustrative embodiments. A thread is selected according to an order in a scheduling discipline, the thread being a thread of an application executing in the data processing system, the thread forming the leader thread in a bundle of threads. A value of a core attribute in a set of core attributes is determined according to a corresponding thread attribute in a set of thread attributes associated with the leader thread. A determination is made whether a second thread can be added to the bundle such that the bundle including the second thread will satisfy a policy. If the determining is affirmative, the second thread is added to the bundle. The bundle is scheduled for execution using a core of the multi-core processor.
Abstract:
A method for a framework for scheduling tasks in a multi-core processor or multiprocessor system is provided in the illustrative embodiments. A thread is selected according to an order in a scheduling discipline, the thread being a thread of an application executing in the data processing system, the thread forming the leader thread in a bundle of threads. A value of a core attribute in a set of core attributes is determined according to a corresponding thread attribute in a set of thread attributes associated with the leader thread. A determination is made whether a second thread can be added to the bundle such that the bundle including the second thread will satisfy a policy. If the determining is affirmative, the second thread is added to the bundle. The bundle is scheduled for execution using a core of the multi-core processor.
Abstract:
A method, system, and computer usable program product for a framework for scheduling tasks in a multi-core processor or multiprocessor system are provided in the illustrative embodiments. A thread is selected according to an order in a scheduling discipline, the thread being a thread of an application executing in the data processing system, the thread forming the leader thread in a bundle of threads. A value of a core attribute in a set of core attributes is determined according to a corresponding thread attribute in a set of thread attributes associated with the leader thread. A determination is made whether a second thread can be added to the bundle such that the bundle including the second thread will satisfy a policy. If the determining is affirmative, the second thread is added to the bundle. The bundle is scheduled for execution using a core of the multi-core processor.
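One way to picture the bundling step described in the four preceding abstracts is the sketch below. The attribute names (desired_smt, desired_freq) and the policy_allows check are assumptions standing in for whatever thread attributes, core attributes, and policy an actual implementation uses: the leader fixes the core attribute values, and candidates join only while the policy remains satisfied.

#include <stdbool.h>
#include <stdio.h>

#define MAX_BUNDLE 4

/* Per-thread attributes that map onto core attributes; the exact
 * attribute set shown here is an assumption. */
typedef struct {
    int id;
    int desired_smt;     /* thread attribute: wanted SMT mode   */
    int desired_freq;    /* thread attribute: wanted clock, MHz */
} thread_t;

typedef struct {
    int smt_mode;        /* core attribute, set from the leader */
    int freq;            /* core attribute, set from the leader */
    thread_t *members[MAX_BUNDLE];
    int count;
} bundle_t;

/* Policy: a candidate joins only if its attributes are compatible with
 * the core attributes already chosen for the bundle's leader. */
static bool policy_allows(const bundle_t *b, const thread_t *t)
{
    return b->count < MAX_BUNDLE &&
           t->desired_smt == b->smt_mode &&
           t->desired_freq <= b->freq;
}

static void build_bundle(bundle_t *b, thread_t *runq, int n)
{
    /* The first thread in scheduling order becomes the leader and
     * determines the core attribute values. */
    b->smt_mode = runq[0].desired_smt;
    b->freq     = runq[0].desired_freq;
    b->members[b->count++] = &runq[0];

    for (int i = 1; i < n; i++)
        if (policy_allows(b, &runq[i]))
            b->members[b->count++] = &runq[i];
}

int main(void)
{
    thread_t runq[] = { {1, 4, 3800}, {2, 4, 3600}, {3, 2, 3800}, {4, 4, 3700} };
    bundle_t b = {0};
    build_bundle(&b, runq, 4);
    for (int i = 0; i < b.count; i++)
        printf("bundle member: thread %d\n", b.members[i]->id);
    return 0;
}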
Abstract:
A data processing system includes a plurality of processing nodes that each contain at least one processor and data storage. The plurality of processing nodes are coupled together by a system interconnect. The data processing system further includes a configuration utility residing in data storage within at least one of the plurality of processing nodes. The configuration utility selectively configures the plurality of processing nodes into either a single non-uniform memory access (NUMA) system or into multiple independent data processing systems through communication via the system interconnect.
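A toy illustration of the selective configuration, under the assumption that the utility's choice amounts to assigning each node a partition identifier; the configure_nodes helper and MODE_* names are hypothetical, not the patent's interface.

#include <stdio.h>

#define NUM_NODES 4

typedef enum { MODE_SINGLE_NUMA, MODE_INDEPENDENT } config_mode_t;

/* Each node records which logical system (partition) it belongs to. */
typedef struct {
    int node_id;
    int partition_id;
} node_cfg_t;

/* Assign partition ids: one shared partition for a single NUMA system,
 * or one partition per node for independent data processing systems. */
static void configure_nodes(node_cfg_t nodes[], int n, config_mode_t mode)
{
    for (int i = 0; i < n; i++) {
        nodes[i].node_id = i;
        nodes[i].partition_id = (mode == MODE_SINGLE_NUMA) ? 0 : i;
    }
}

int main(void)
{
    node_cfg_t nodes[NUM_NODES];
    configure_nodes(nodes, NUM_NODES, MODE_SINGLE_NUMA);
    for (int i = 0; i < NUM_NODES; i++)
        printf("node %d -> partition %d\n",
               nodes[i].node_id, nodes[i].partition_id);
    return 0;
}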
Abstract:
A method for managing data access in a mobile device is provided in the illustrative embodiments. Using a data manager executing in the mobile device, a data item is configured in a data model. A value parameter of the data item is populated with data and a status parameter of the data item is populated with a status indication. A subscription to the data item is received from a mobile application executing in the mobile device. In response to the subscription, the data and the status of the data item are sent to the mobile application.
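The subscribe-then-deliver behaviour can be sketched as below; the data_item_t layout, the status values, and the item_subscribe callback scheme are assumptions standing in for the patent's data model, not its actual API.

#include <stdio.h>
#include <string.h>

#define MAX_SUBSCRIBERS 4

typedef enum { STATUS_FRESH, STATUS_STALE, STATUS_UNAVAILABLE } status_t;

/* A data item in the data model: a value plus a status indication,
 * and the mobile applications subscribed to it. */
typedef struct data_item {
    char     name[32];
    char     value[64];
    status_t status;
    void   (*subscribers[MAX_SUBSCRIBERS])(const struct data_item *);
    int      nsubs;
} data_item_t;

/* Populate the value and status parameters of a data item. */
static void item_set(data_item_t *it, const char *value, status_t st)
{
    strncpy(it->value, value, sizeof it->value - 1);
    it->status = st;
}

/* Register a subscriber and immediately deliver the current value and
 * status, mirroring the send-on-subscription behaviour. */
static void item_subscribe(data_item_t *it,
                           void (*cb)(const data_item_t *))
{
    if (it->nsubs < MAX_SUBSCRIBERS) {
        it->subscribers[it->nsubs++] = cb;
        cb(it);
    }
}

static void app_callback(const data_item_t *it)
{
    printf("app got %s = %s (status %d)\n", it->name, it->value, it->status);
}

int main(void)
{
    data_item_t battery = { .name = "battery_level" };
    item_set(&battery, "87%", STATUS_FRESH);
    item_subscribe(&battery, app_callback);
    return 0;
}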
Abstract:
An Internet client is provided with a SOCKS proxy server. The client comprises a processor having an operating system and a suite of one or more Internet tools. The SOCKS proxy server includes means for intercepting and servicing connection requests from the Internet tools. The proxy server preferably has a predetermined Internet Protocol address, most preferably the loopback address. If the loopback address is not available on the protocol stack, a redirecting mechanism redirects connection requests associated with stale IP addresses to a current IP address. The SOCKS proxy server also includes a filtering mechanism for filtering connection requests to particular servers and a monitoring mechanism for monitoring network IP activity.
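The filtering and stale-address redirection can be pictured with the small sketch below; the blocked-host list and the filter_allows and redirect_if_stale helpers are hypothetical, and a real proxy would sit on the SOCKS protocol handling rather than merely logging decisions.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define LOOPBACK "127.0.0.1"

/* Destinations the filtering mechanism refuses to connect to. */
static const char *blocked_hosts[] = { "198.51.100.7", "203.0.113.9" };

static bool filter_allows(const char *dest_ip)
{
    for (size_t i = 0; i < sizeof blocked_hosts / sizeof *blocked_hosts; i++)
        if (strcmp(dest_ip, blocked_hosts[i]) == 0)
            return false;
    return true;
}

/* Redirect a connection request aimed at a stale local IP address to
 * the client's current address (e.g. after a reconnect). */
static const char *redirect_if_stale(const char *requested_ip,
                                     const char *stale_ip,
                                     const char *current_ip)
{
    return strcmp(requested_ip, stale_ip) == 0 ? current_ip : requested_ip;
}

/* Handle one intercepted connection request (decision logging only). */
static void handle_request(const char *dest_ip, int dest_port,
                           const char *stale_ip, const char *current_ip)
{
    if (!filter_allows(dest_ip)) {
        printf("denied: %s:%d\n", dest_ip, dest_port);
        return;
    }
    const char *ip = redirect_if_stale(dest_ip, stale_ip, current_ip);
    printf("proxy on %s connects to %s:%d\n", LOOPBACK, ip, dest_port);
}

int main(void)
{
    handle_request("198.51.100.7", 80, "10.0.0.5", "10.0.0.9");
    handle_request("10.0.0.5", 443, "10.0.0.5", "10.0.0.9");
    return 0;
}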
Abstract:
An extended register processor includes a register file having a legacy register set and an extended register set. The extended register set includes a plurality of extended registers accessible only to extended register instructions. The processor maps extended register references to physical extended registers at run time. The processor includes a configurable extended register mapping unit to support this functionality. The mapping unit is accessible to an instruction decoder, which detects extended register references and forwards them to the mapping unit. The mapping unit returns a physical extended register corresponding to the extended register reference in the instruction. The mapping unit is configurable so that, for example, the mapping is specific to a code block. An extended register allocation instruction causes the processor to allocate a portion of the extended register set to the code block in which the allocation instruction is located and to configure the mapping unit to reflect the allocation.
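A schematic view of the mapping unit, assuming a simple base-plus-offset scheme per code block; alloc_for_block and map_ext_reg are hypothetical names for the allocation-instruction and decode-time lookup steps, not the processor's actual microarchitecture.

#include <stdio.h>

#define REGS_PER_BLOCK 8

/* Mapping unit state: the base physical extended register allocated to
 * each code block, plus the next free physical register. */
typedef struct {
    int block_base[16];
    int next_free;
} ext_map_unit_t;

/* Executed for an extended register allocation instruction: reserve a
 * window of physical extended registers for this code block. */
static void alloc_for_block(ext_map_unit_t *mu, int block_id)
{
    mu->block_base[block_id] = mu->next_free;
    mu->next_free += REGS_PER_BLOCK;
}

/* Decode-time lookup: translate an extended register reference in an
 * instruction into a physical extended register number. */
static int map_ext_reg(const ext_map_unit_t *mu, int block_id, int ref)
{
    return mu->block_base[block_id] + ref;
}

int main(void)
{
    ext_map_unit_t mu = { .next_free = 0 };
    alloc_for_block(&mu, 0);
    alloc_for_block(&mu, 1);
    printf("block 0, eR3 -> phys %d\n", map_ext_reg(&mu, 0, 3));
    printf("block 1, eR3 -> phys %d\n", map_ext_reg(&mu, 1, 3));
    return 0;
}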