Abstract:
Various techniques for manipulating data using access states of memory, access control fields of pointers and operations, and exception raising and exception trapping in a multithreaded computer system. In particular, the techniques include synchronization support for a thread blocked in a word, demand evaluation of values, parallel access of multiple threads to a list, synchronized and unsynchronized access to a data buffer, use of forwarding to avoid checking for an end of a buffer, use of a sentinel word to detect access past a data structure, concurrent access to a word of memory using different synchronization access modes, and use of trapping to detect access to restricted memory.
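As a rough illustration of the word-level synchronization mentioned above (a thread blocking in a word until its access state permits the operation), the following C++ sketch emulates a full/empty access state with a mutex and condition variable; the class name SyncWord and its interface are assumptions for illustration, not the patented hardware mechanism.

    // sync_word.cpp -- software emulation of a synchronized memory word.
    // A synchronized read blocks until the word is full and leaves it empty;
    // a synchronized write blocks until the word is empty and leaves it full.
    #include <condition_variable>
    #include <cstdint>
    #include <iostream>
    #include <mutex>
    #include <thread>

    class SyncWord {                                   // hypothetical name
    public:
        void sync_write(std::uint64_t v) {
            std::unique_lock<std::mutex> lk(m_);
            cv_.wait(lk, [this] { return !full_; });   // block while the word is full
            value_ = v;
            full_ = true;                              // access state: full
            cv_.notify_all();                          // wake any blocked readers
        }
        std::uint64_t sync_read() {
            std::unique_lock<std::mutex> lk(m_);
            cv_.wait(lk, [this] { return full_; });    // block while the word is empty
            full_ = false;                             // access state: empty
            cv_.notify_all();                          // wake any blocked writers
            return value_;
        }
    private:
        std::mutex m_;
        std::condition_variable cv_;
        std::uint64_t value_ = 0;
        bool full_ = false;                            // the emulated access state
    };

    int main() {
        SyncWord w;
        std::thread consumer([&] { std::cout << "read " << w.sync_read() << "\n"; });
        std::thread producer([&] { w.sync_write(42); });
        producer.join();
        consumer.join();
    }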
Abstract:
A method and system in a multithreaded processor for processing events without interrupt notifications. In one aspect of the present invention, an operating system creates a thread to execute on a stream of the processor. During its execution, the thread executes a loop that determines whether an event has occurred and, in response to determining that an event has occurred, assigns a different thread to process the event, so that multiple events can be processed in parallel and interrupts are not needed to signal that an event has occurred. Another aspect of the present invention provides a method and system for processing asynchronously occurring events without interrupt notifications. To achieve this processing, a first thread is executed that, upon receipt of an asynchronously occurring event, generates a notification that the event has occurred. A second thread is also executed that loops, determining whether a notification has been generated and, in response to determining that a notification has been generated, performing the processing necessary for the event.
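A minimal sketch of the polling-and-dispatch pattern described above, written with ordinary C++ threads rather than processor streams; the names event_loop, post_event, and the queue layout are assumptions made for illustration.

    // event_poll.cpp -- a loop that polls for events and hands each one to a
    // different thread, so no interrupt is needed to signal that an event occurred.
    #include <atomic>
    #include <chrono>
    #include <iostream>
    #include <mutex>
    #include <queue>
    #include <thread>
    #include <vector>

    std::mutex q_mutex;
    std::queue<int> event_queue;            // events posted by producing threads
    std::atomic<bool> shutting_down{false};

    void post_event(int e) {                // called by any producing thread
        std::lock_guard<std::mutex> lk(q_mutex);
        event_queue.push(e);
    }

    void event_loop() {                     // the polling thread: check, then delegate
        std::vector<std::thread> workers;
        while (!shutting_down.load()) {
            int e = 0;
            bool found = false;
            {
                std::lock_guard<std::mutex> lk(q_mutex);
                if (!event_queue.empty()) {
                    e = event_queue.front();
                    event_queue.pop();
                    found = true;
                }
            }
            if (!found) {                   // nothing yet; keep polling, no interrupt needed
                std::this_thread::yield();
                continue;
            }
            // Assign a different thread so multiple events are processed in parallel.
            workers.emplace_back([e] { std::cout << "handled event " << e << "\n"; });
        }
        for (auto& w : workers) w.join();
    }

    int main() {
        std::thread loop(event_loop);
        for (int i = 0; i < 3; ++i) post_event(i);
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
        shutting_down = true;
        loop.join();
    }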
Abstract:
A replay method and system for monitoring the generation of a data set from input data sets and, when the data set is subsequently accessed, automatically regenerating the data set if it is out-of-date. The replay system regenerates only those input data sets that are determined to be out-of-date and regenerates the output data set only if it is determined to be out-of-date. A data set is determined to be out-of-date only when an input data set has actually changed since the data set was last generated.
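One way to picture the regeneration rule is a recursive check that refreshes the inputs first and rebuilds the output only if some input's version is newer than the version the output was last built from. The sketch below is an assumption-laden illustration, not the patented interface; DataSet, refresh, and the version fields are invented names.

    // replay.cpp -- regenerate a data set only when an input has actually changed.
    #include <algorithm>
    #include <functional>
    #include <iostream>
    #include <string>
    #include <vector>

    struct DataSet {
        std::string name;
        long version = 0;                       // stands in for a modification timestamp
        long built_from = -1;                   // newest input version used at the last build
        std::vector<DataSet*> inputs;           // recorded when the set was first generated
        std::function<void(DataSet&)> generate; // the recorded generation step, replayed on demand
    };

    void refresh(DataSet& d) {
        if (d.inputs.empty()) return;           // source data is never regenerated here
        long newest_input = 0;
        for (DataSet* in : d.inputs) {
            refresh(*in);                       // inputs are themselves refreshed lazily
            newest_input = std::max(newest_input, in->version);
        }
        if (newest_input > d.built_from) {      // out-of-date: an input changed since last build
            d.generate(d);
            d.built_from = newest_input;
            ++d.version;
            std::cout << "regenerated " << d.name << "\n";
        }
    }

    int main() {
        DataSet src{"src", 1};
        DataSet out{"out", 0, 0, {&src}, [](DataSet&) { /* rebuild out from src */ }};
        refresh(out);                           // src is newer than out: out is regenerated
        refresh(out);                           // nothing changed: no work is done
    }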
Abstract:
Methods and systems for statistically eliding fences in a work stealing algorithm are disclosed. A data structure comprising a head pointer, a tail pointer, a barrier pointer, and an advertising flag allows for dynamic load-balancing across processing resources in computer applications.
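The abstract names the four fields but not the protocol, so the sketch below is only one plausible arrangement: the owner pushes and pops its private end of the deque with no lock and no fence, thieves always take a lock, the barrier pointer separates the stealable region from the owner's private region, and the advertising flag is how a starved thief asks the owner to publish more work. All identifiers are assumptions.

    // work_deque.cpp -- simplified fence-avoiding work-stealing deque (illustration only).
    #include <atomic>
    #include <cstddef>
    #include <mutex>
    #include <vector>

    class WorkDeque {
    public:
        explicit WorkDeque(std::size_t cap) : buf_(cap) {}

        // Owner side: no lock and no fence on the common path, only a relaxed flag check.
        void push(int task) {                   // capacity growth omitted for brevity
            buf_[tail_++] = task;
            if (advertising_.load(std::memory_order_relaxed))
                publish();                      // a thief asked for more stealable work
        }
        bool pop(int& task) {
            if (tail_ == barrier_ && !reclaim())
                return false;                   // private region empty, nothing to take back
            task = buf_[--tail_];
            return true;
        }

        // Thief side: always fully synchronized; the rare path pays for the fences.
        bool steal(int& task) {
            std::lock_guard<std::mutex> lk(m_);
            if (head_ == barrier_) {            // nothing stealable: advertise the need
                advertising_.store(true, std::memory_order_relaxed);
                return false;
            }
            task = buf_[head_++];
            return true;
        }

    private:
        void publish() {                        // owner hands [barrier_, tail_) to thieves
            std::lock_guard<std::mutex> lk(m_);
            barrier_ = tail_;
            advertising_.store(false, std::memory_order_relaxed);
        }
        bool reclaim() {                        // owner takes back any unstolen work
            std::lock_guard<std::mutex> lk(m_);
            if (head_ == barrier_) return false;
            barrier_ = head_;                   // thieves may no longer steal these entries
            return true;
        }

        std::vector<int> buf_;
        std::size_t head_ = 0;                  // thieves remove here (under the lock)
        std::size_t tail_ = 0;                  // owner pushes/pops here (no lock)
        std::size_t barrier_ = 0;               // thieves steal only below this index
        std::atomic<bool> advertising_{false};  // thief-to-owner "please publish" signal
        std::mutex m_;
    };

In this split the owner's hot path touches only owner-private state, which is the sense in which fences are avoided; the patent's actual rules for moving the barrier and checking the advertising flag will differ.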
Abstract:
A method to exchange data in a shared memory system includes the use of a buffer in communication with a producer processor and a consumer processor. Cache data is temporarily stored in the buffer. The method includes having the consumer and the producer indicate intent to acquire ownership of the buffer. In response to the indication of intent, the producer, the consumer, and the buffer are prepared for the access. If the consumer intends to acquire the buffer, the producer places the cache data into the buffer. If the producer intends to acquire the buffer, the consumer removes the cache data from the buffer. The access to the buffer, however, is delayed until the producer, the consumer, and the buffer are prepared.
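The exchange can be pictured with ordinary condition variables (an illustrative sketch, not the claimed cache mechanism): the consumer declares its intent to acquire the buffer, the producer reacts by placing the data and marking the buffer prepared, and only then does the consumer's access proceed. Names such as SharedBuffer and consumer_intent are assumptions.

    // buffer_handoff.cpp -- consumer declares intent, producer prepares, access proceeds.
    #include <condition_variable>
    #include <iostream>
    #include <mutex>
    #include <thread>
    #include <vector>

    struct SharedBuffer {
        std::mutex m;
        std::condition_variable cv;
        bool consumer_intent = false;   // consumer has asked to acquire the buffer
        bool prepared = false;          // producer has placed the cache data
        std::vector<int> data;
    };

    void producer(SharedBuffer& b) {
        std::unique_lock<std::mutex> lk(b.m);
        b.cv.wait(lk, [&] { return b.consumer_intent; });  // react to the declared intent
        b.data = {1, 2, 3};                                // place the cache data
        b.prepared = true;                                 // buffer is ready to hand over
        b.cv.notify_all();
    }

    void consumer(SharedBuffer& b) {
        std::unique_lock<std::mutex> lk(b.m);
        b.consumer_intent = true;                          // indicate intent to acquire
        b.cv.notify_all();
        b.cv.wait(lk, [&] { return b.prepared; });         // access is delayed until prepared
        std::cout << "consumer drained " << b.data.size() << " words\n";
        b.data.clear();
    }

    int main() {
        SharedBuffer buf;
        std::thread p(producer, std::ref(buf));
        std::thread c(consumer, std::ref(buf));
        p.join();
        c.join();
    }

The opposite hand-off, in which the producer declares intent and the consumer drains the buffer before releasing it, mirrors this exchange.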
Abstract:
Embodiments of the present invention provide a class of computer architectures generally referred to as lightweight multi-threaded architectures (LIMA). Other embodiments may be described and claimed.
Abstract:
A method and system that prepares a task, executing on a computer with multiple processors that each support multiple streams, to be swapped out from processor utilization. The task has one or more teams of threads, where each team represents the threads executing on a single processor. From among the streams that are executing its threads, the task designates one stream of each team as a team master stream and one stream as a task master stream. For each team master stream, the task notifies the operating system that the team is ready to be swapped out when every other thread of the team has saved its state and quit its stream. Finally, for the task master stream, the task notifies the operating system that the task is ready to be swapped out when it has saved its state and every other team has notified that it is ready to be swapped out.
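A coarse C++ emulation of this swap-out handshake follows; notify_os is a placeholder, and for simplicity the last thread to quit a team (or the last team to report) plays the master role here rather than a pre-designated stream. Every worker saves its state and quits, each team reports when its last thread is gone, and the task reports once every team has reported.

    // task_swap.cpp -- emulation of the team/task ready-to-swap notifications.
    #include <atomic>
    #include <iostream>
    #include <string>
    #include <thread>
    #include <vector>

    void notify_os(const std::string& msg) { std::cout << msg << "\n"; }  // placeholder

    int main() {
        const int teams = 2, threads_per_team = 4;
        std::vector<std::atomic<int>> live(teams);     // threads still holding a stream, per team
        std::atomic<int> teams_ready{0};
        for (auto& c : live) c = threads_per_team;

        std::vector<std::thread> workers;
        for (int t = 0; t < teams; ++t)
            for (int s = 0; s < threads_per_team; ++s)
                workers.emplace_back([&, t] {
                    /* save this thread's state, then quit its stream */
                    if (live[t].fetch_sub(1) == 1) {   // last thread acts as team master
                        notify_os("team " + std::to_string(t) + " is ready to be swapped out");
                        if (teams_ready.fetch_add(1) + 1 == teams)   // and, if last, as task master
                            notify_os("task is ready to be swapped out");
                    }
                });
        for (auto& w : workers) w.join();
    }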