Abstract:
A method for relief of pain comprises applying skin stimulation along one or two treatment lines traversing the scalp of the subject vertically, above the eyes. Skin stimulation may be accomplished by one or more techniques including acupuncture, corona discharge, application of an electromagnetic field, massage, vibration, subcutaneous injection of stimulating fluids, light, or heating. Suitable head-mounted devices supporting skin stimulation units are described that allow self-use by the subject or administration of the method by medical personnel.
Abstract:
A method of parallel processing an ordered input data stream that includes a plurality of input data elements and a corresponding plurality of order keys for indicating an ordering of the input data elements, with each order key associated with one of the input data elements, includes processing the input data stream in a parallel manner with a plurality of worker units, thereby generating a plurality of sets of output data elements. The plurality of sets of output data elements is stored in a plurality of buffers, with each buffer associated with one of the worker units. An ordered output data stream is output while the input data stream is being processed by outputting selected output data elements from the buffers in an order that is based on the order keys.
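As an illustration only, the following minimal Python sketch shows one way such a scheme can work under simplifying assumptions: order keys are the elements' original indices, each worker takes a strided share of the input, and all names (run_ordered, process) are invented for the example.

    import threading
    from collections import deque

    def process(x):                      # placeholder per-element work
        return x * x

    def run_ordered(items, n_workers=4):
        # One buffer per worker; each worker buffers (order_key, result)
        # pairs for its strided share of the input.
        buffers = [deque() for _ in range(n_workers)]
        cond = threading.Condition()

        def worker(w):
            for key in range(w, len(items), n_workers):
                result = process(items[key])
                with cond:
                    buffers[w].append((key, result))
                    cond.notify()

        threads = [threading.Thread(target=worker, args=(w,))
                   for w in range(n_workers)]
        for t in threads:
            t.start()

        # Emit the output stream in key order while the input is still
        # being processed: the next expected key is always at the head
        # of one known buffer.
        for key in range(len(items)):
            w = key % n_workers
            with cond:
                while not buffers[w]:
                    cond.wait()
                _, result = buffers[w].popleft()
            yield result
        for t in threads:
            t.join()

    print(list(run_ordered(range(10))))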
Abstract:
A data partitioning interface provides procedure headings to create data partitions for processing data elements in parallel, and for obtaining data elements to process, without specifying the organizational structure of a data partitioning. A data partitioning implementation associated with the data partitioning interface provides operations to implement the interface procedures, and may also provide dynamic partitioning to facilitate load balancing.
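A minimal sketch of such an interface, assuming a single-method contract and invented names (Partitioner, ChunkPartitioner):

    import threading
    from abc import ABC, abstractmethod

    class Partitioner(ABC):
        """Procedure headings only: callers never see how the data is
        organized, just how to obtain partitions and their elements."""
        @abstractmethod
        def get_partitions(self, count):
            """Return 'count' element iterators, one per worker."""

    class ChunkPartitioner(Partitioner):
        """Dynamic implementation: each partition repeatedly grabs the
        next small chunk from a shared cursor, so faster workers take
        more elements (load balancing)."""
        def __init__(self, data, chunk=2):
            self._data, self._chunk = list(data), chunk
            self._next, self._lock = 0, threading.Lock()

        def _partition(self):
            while True:
                with self._lock:
                    start = self._next
                    self._next += self._chunk
                piece = self._data[start:start + self._chunk]
                if not piece:
                    return
                yield from piece

        def get_partitions(self, count):
            return [self._partition() for _ in range(count)]

    p = ChunkPartitioner(range(10), chunk=2)
    a, b = p.get_partitions(2)
    print(list(a))   # a lone consumer takes everything,
    print(list(b))   # because partitioning is demand-driven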
Abstract:
A parallelism policy object provides a control parallelism interface whose implementation evaluates parallelism conditions that are left unspecified in the interface. User-defined and other parallelism policy procedures can make recommendations to a worker program for transitioning between sequential program execution and parallel execution. Parallelizing assistance values obtained at runtime can be used in the parallelism conditions on which the recommendations are based. A consistent parallelization policy can be employed across a range of parallel constructs, and inside recursive procedures.
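A minimal sketch of the idea, with the interface name and the depth-and-size rule assumed for illustration (the abstract leaves the actual conditions unspecified); the shallow depth cap also keeps the shared pool from exhausting its workers:

    import os
    from abc import ABC, abstractmethod
    from concurrent.futures import ThreadPoolExecutor

    class ParallelismPolicy(ABC):
        @abstractmethod
        def recommend_parallel(self, size, depth):
            """The condition is supplied by the implementation."""

    class DepthAndSizePolicy(ParallelismPolicy):
        def __init__(self, min_size=8, max_depth=2):
            self.min_size, self.max_depth = min_size, max_depth

        def recommend_parallel(self, size, depth):
            # Parallelizing-assistance values obtained at runtime:
            return size >= self.min_size and depth < self.max_depth

    def total(xs, policy, depth=0, pool=ThreadPoolExecutor()):
        # The same policy is consulted inside the recursive procedure,
        # giving one consistent decision at every level and construct.
        if len(xs) <= 1:
            return sum(xs)
        mid = len(xs) // 2
        if policy.recommend_parallel(len(xs), depth):
            left = pool.submit(total, xs[:mid], policy, depth + 1)
            return left.result() + total(xs[mid:], policy, depth + 1)
        return (total(xs[:mid], policy, depth + 1)
                + total(xs[mid:], policy, depth + 1))

    print(total(list(range(100)), DepthAndSizePolicy()))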
Abstract:
The present invention extends to methods, systems, and computer program products for automatically optimizing memory accesses by kernel functions executing on parallel accelerator processors. A function is accessed. The function is configured to operate over a multi-dimensional matrix of memory cells through invocation as a plurality of threads on a parallel accelerator processor. A layout of the memory cells of the multi-dimensional matrix and a mapping of memory cells to global memory at the parallel accelerator processor are identified. The function is analyzed to identify how each of the threads accesses the global memory to operate on corresponding memory cells when invoked from the kernel function. Based on the analysis, the function is altered to utilize a more efficient memory access scheme when performing accesses to the global memory. The more efficient memory access scheme increases coalesced memory access by the threads when invoked over the multi-dimensional matrix.
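The following Python sketch illustrates only the access-pattern property at issue, not the compiler transformation itself; a row-major layout and consecutive thread ids are assumed:

    ROWS, COLS = 4, 8

    def addr_row_major(r, c):          # cell -> global-memory offset
        return r * COLS + c

    # Scheme A: thread t reads column t of one row, so adjacent threads
    # touch consecutive addresses (coalesced into one transaction).
    coalesced = [addr_row_major(0, t) for t in range(COLS)]

    # Scheme B: thread t reads row t of one column, so adjacent threads
    # are COLS apart (uncoalesced; one transaction per thread).
    strided = [addr_row_major(t, 0) for t in range(ROWS)]

    def max_stride(addrs):
        return max(b - a for a, b in zip(addrs, addrs[1:]))

    # The alteration described above would rewrite scheme-B accesses so
    # that adjacent threads land on adjacent cells (max stride of 1).
    print(coalesced, max_stride(coalesced))   # stride 1
    print(strided, max_stride(strided))       # stride COLS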
Abstract:
A method includes producing values with a producer thread, and providing a queue data structure including a first array of storage locations for storing the values. The first array has a first tail pointer and a first linking pointer. If a number of values stored in the first array is less than a capacity of the first array, an enqueue operation writes a new value at a storage location pointed to by the first tail pointer and advances the first tail pointer. If the number of values stored in the first array is equal to the capacity of the first array, a second array of storage locations is allocated in the queue. The second array has a second tail pointer. The first array is linked to the second array with the first linking pointer. An enqueue operation writes the new value at a storage location pointed to by the second tail pointer and advances the second tail pointer.
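A minimal single-producer sketch of this linked-array queue; the class names are invented for the example:

    class _Segment:
        def __init__(self, capacity):
            self.slots = [None] * capacity  # fixed array of storage locations
            self.tail = 0                   # this array's tail pointer
            self.head = 0
            self.next = None                # this array's linking pointer

    class LinkedArrayQueue:
        def __init__(self, capacity=4):
            self.capacity = capacity
            self._head_seg = self._tail_seg = _Segment(capacity)

        def enqueue(self, value):
            seg = self._tail_seg
            if seg.tail == self.capacity:        # first array is full:
                new = _Segment(self.capacity)    #   allocate a second array,
                seg.next = new                   #   link it to the first,
                self._tail_seg = seg = new       #   and append there instead
            seg.slots[seg.tail] = value          # write at the tail pointer
            seg.tail += 1                        # then advance it

        def dequeue(self):
            seg = self._head_seg
            if seg.head == seg.tail:
                if seg.next is None:
                    raise IndexError("empty")
                self._head_seg = seg = seg.next  # follow the linking pointer
            value = seg.slots[seg.head]
            seg.head += 1
            return value

    q = LinkedArrayQueue()
    for v in range(6):
        q.enqueue(v)          # the fifth value triggers the second array
    print([q.dequeue() for _ in range(6)])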
Abstract:
A concurrent grouping operation for execution on a multiple core processor is provided. The grouping operation is provided with a sequence or set of elements. In one phase, each worker receives a partition of a sequence of elements to be grouped. The elements of each partition are arranged into a data structure, which includes one or more keys where each key corresponds to a value list of one or more of the received elements associated with that key. In another phase, the data structures created by each worker are merged so that the keys and corresponding elements for the entire sequence of elements exist in one data structure. Recursive merging can be completed in constant time, which is not proportional to the length of the sequence.
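A minimal two-phase sketch, assuming the per-worker structure is a hash map of key to value list; worker count and key function are illustrative:

    from collections import defaultdict
    from concurrent.futures import ThreadPoolExecutor

    def group_partition(part, key_fn):
        # Phase 1: each worker groups only its own partition.
        groups = defaultdict(list)
        for x in part:
            groups[key_fn(x)].append(x)
        return groups

    def merge(a, b):
        # Phase 2: merging touches one entry per distinct key. A true
        # constant-time merge would splice the value lists in O(1);
        # list.extend is used here only for brevity.
        for k, vals in b.items():
            a[k].extend(vals)
        return a

    def group_by(seq, key_fn, n_workers=4):
        parts = [seq[i::n_workers] for i in range(n_workers)]
        with ThreadPoolExecutor(n_workers) as pool:
            per_worker = list(pool.map(group_partition, parts,
                                       [key_fn] * n_workers))
        result = per_worker[0]
        for g in per_worker[1:]:
            merge(result, g)
        return dict(result)

    print(group_by(list(range(12)), lambda x: x % 3))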
Abstract:
A membership interface provides procedure headings to add and remove elements of a data collection, without specifying the organizational structure of the data collection. A membership implementation associated with the membership interface provides thread-safe operations to implement the interface procedures. A blocking-bounding wrapper on the membership implementation provides blocking and bounding support separately from the thread-safety mechanism.
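A minimal sketch with invented names (Membership, LockedBag, BlockingBounded), showing blocking and bounding layered separately from the inner collection's own locking:

    import threading
    from abc import ABC, abstractmethod

    class Membership(ABC):
        """Headings only: add and remove, with no commitment to how
        the collection is organized (stack, bag, queue, ...)."""
        @abstractmethod
        def try_add(self, item): ...
        @abstractmethod
        def try_take(self): ...      # returns (ok, item)

    class LockedBag(Membership):
        """Thread-safe implementation of the interface procedures."""
        def __init__(self):
            self._items, self._lock = [], threading.Lock()
        def try_add(self, item):
            with self._lock:
                self._items.append(item)
            return True
        def try_take(self):
            with self._lock:
                if self._items:
                    return True, self._items.pop()
                return False, None

    class BlockingBounded:
        """Blocking and bounding live in the wrapper, separate from
        the inner collection's thread-safety mechanism."""
        def __init__(self, inner, bound):
            self._inner = inner
            self._slots = threading.Semaphore(bound)  # free capacity
            self._avail = threading.Semaphore(0)      # available items
        def add(self, item):
            self._slots.acquire()          # block while full
            self._inner.try_add(item)
            self._avail.release()
        def take(self):
            self._avail.acquire()          # block while empty
            _, item = self._inner.try_take()
            self._slots.release()
            return item

    bag = BlockingBounded(LockedBag(), bound=2)
    bag.add(1); bag.add(2)
    print(bag.take(), bag.take())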
Abstract:
A method of resizing a concurrently accessed hash table is disclosed. The method includes acquiring the locks in the hash table. The hash table, in a first state, is dynamically reconfigured in size into a second state. Additionally, the number of locks is dynamically adjusted based on comparing the size of the hash table in the second state to its size in the first state.
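A minimal sketch of resizing a lock-striped table; the growth trigger and the locks-per-bucket ratio are assumptions for the example:

    import threading

    class StripedHashTable:
        def __init__(self, buckets=4, stripes=2):
            self._buckets = [[] for _ in range(buckets)]
            self._locks = [threading.Lock() for _ in range(stripes)]

        def _acquire_stripe(self, h):
            # Acquire the stripe guarding hash h, retrying if a resize
            # swapped in a new lock array while we were waiting.
            while True:
                locks = self._locks
                lock = locks[h % len(locks)]
                lock.acquire()
                if locks is self._locks:
                    return lock
                lock.release()

        def put(self, key, value):
            h = hash(key)
            lock = self._acquire_stripe(h)
            try:
                self._buckets[h % len(self._buckets)].append((key, value))
                grow = sum(map(len, self._buckets)) > 2 * len(self._buckets)
            finally:
                lock.release()
            if grow:
                self._resize()

        def _resize(self):
            old_locks = self._locks
            for lock in old_locks:               # acquire all the locks
                lock.acquire()
            if old_locks is not self._locks:     # lost a resize race
                for lock in old_locks:
                    lock.release()
                return
            old = self._buckets                  # first state
            new_size = 2 * len(old)
            self._buckets = [[] for _ in range(new_size)]  # second state
            for bucket in old:
                for k, v in bucket:
                    self._buckets[hash(k) % new_size].append((k, v))
            # Adjust the number of locks by comparing the new size to
            # the old (here: one stripe per two buckets, an assumed ratio).
            self._locks = [threading.Lock() for _ in range(new_size // 2)]
            for lock in old_locks:
                lock.release()

    t = StripedHashTable()
    for i in range(20):
        t.put(i, str(i))
    print(len(t._buckets), len(t._locks))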
Abstract:
A scheduler receives a job graph which includes a graph of computational vertices that are designed to be executed on multiple distributed computer systems. The scheduler queries a graph manager to determine which computational vertices of the job graph are ready for execution in a local execution environment. The scheduler queries a cluster manager to determine the organizational topology of the distributed computer systems to simulate the determined topology in the local execution environment. The scheduler queries a data manager to determine data storage locations for each of the computational vertices indicated as being ready for execution in the local execution environment. The scheduler also indicates to a vertex spawner that an instance of each computational vertex is to be spawned in the local execution environment based on the organizational topology and indicated data storage locations, and indicates to the local execution environment that the spawned vertices are to be executed.
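A minimal sketch of the scheduling flow only; every class, method, and path below is illustrative, standing in for the managers the abstract names:

    class GraphManager:
        def ready_vertices(self, job_graph):
            # Vertices whose dependencies are all satisfied.
            return [v for v in job_graph["vertices"]
                    if not job_graph["deps"].get(v)]

    class ClusterManager:
        def topology(self):
            return {"machines": ["m0", "m1"]}     # simulated locally

    class DataManager:
        def locations(self, vertex):
            return {"m0": ["/data/" + vertex]}    # where its inputs live

    class VertexSpawner:
        def spawn(self, vertex, machine, paths):
            print(f"spawning {vertex} on simulated {machine} with {paths}")
            return lambda: print(f"executing {vertex}")

    def schedule(job_graph, gm, cm, dm, spawner):
        ready = gm.ready_vertices(job_graph)          # 1. what can run
        machines = cm.topology()["machines"]          # 2. simulate topology
        runnables = []
        for i, v in enumerate(ready):
            paths = dm.locations(v)                   # 3. data placement
            machine = machines[i % len(machines)]
            runnables.append(spawner.spawn(v, machine, paths))  # 4. spawn
        for run in runnables:                         # 5. execute
            run()

    graph = {"vertices": ["A", "B", "C"], "deps": {"C": ["A", "B"]}}
    schedule(graph, GraphManager(), ClusterManager(),
             DataManager(), VertexSpawner())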