Abstract:
Business activity management for monitoring multiple instances of a business activity and navigating business activity data in a distributed system. A system includes a plurality of databases storing data relating to multiple instances of a monitored business activity. A user interface receives a request from a user for business activity data, independent of which database stores the requested data. An activity monitoring component navigates to one of the databases as a function of defined relationships to retrieve the requested business activity data. The user interface displays the retrieved data to the user based on the user's permissions.
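A minimal sketch of the routing idea described above, not taken from the patent itself: a relationship map resolves which store holds data for a given activity, and the result is filtered by the requesting user's role. All names (`activityToDatabase`, `requiredRole`, the sample records) are illustrative assumptions.

```cpp
#include <iostream>
#include <map>
#include <string>
#include <vector>

struct Record { std::string activity; std::string data; std::string requiredRole; };

// "Databases" stand in for the distributed stores; the relationship map records
// which store holds data for which monitored activity.
std::map<std::string, std::vector<Record>> databases = {
    {"db_orders",   {{"order",   "order #1 shipped", "sales"}}},
    {"db_invoices", {{"invoice", "invoice #7 paid",  "finance"}}},
};
std::map<std::string, std::string> activityToDatabase = {
    {"order", "db_orders"}, {"invoice", "db_invoices"},
};

// The caller asks for data by activity, without naming a database; the monitoring
// component resolves the store and enforces the user's role.
std::vector<std::string> query(const std::string& activity, const std::string& userRole) {
    std::vector<std::string> results;
    auto it = activityToDatabase.find(activity);
    if (it == activityToDatabase.end()) return results;
    for (const auto& rec : databases[it->second])
        if (rec.activity == activity && rec.requiredRole == userRole)
            results.push_back(rec.data);
    return results;
}

int main() {
    for (const auto& r : query("order", "sales")) std::cout << r << "\n";
}
```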
Abstract:
Multiple aggregation groups, which can be multiple partitions in an aggregated data table, are formed. Each group includes multiple aggregation records; each aggregation record includes an aggregation of values contained by a different subset of multiple database records. While an aggregation group is accessed by a single program thread during an aggregation group update transaction, no other threads are allowed to access that group. The aggregation groups are combined into a single table of aggregation records. Each of the multiple database records may correspond to an instance of an organizational activity and include a field having a value indicating that the corresponding instance is in one of several process states. Each aggregation group may further include time-sorted aggregation records, each time-sorted aggregation record containing an aggregation value for instances in one of the several process states during a time period associated with the time-sorted aggregation record. Aggregation records corresponding to instances completed outside of a preselected time window are deleted.
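A minimal sketch of the grouping, per-group locking, merging, and pruning steps, under assumed data structures (a mutex per group, records keyed by process state and time bucket); this is not the patented implementation.

```cpp
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <mutex>
#include <string>
#include <vector>

using Clock = std::chrono::system_clock;

struct AggregationRecord {
    std::string processState;   // e.g. "started", "completed"
    Clock::time_point bucket;   // time period this record aggregates
    long long value = 0;        // aggregated value for instances in this state
};

struct AggregationGroup {
    std::mutex lock;                        // only one thread may update the group
    std::vector<AggregationRecord> records;
};

// Update one group under its own lock; other groups stay available to other threads.
void addToGroup(AggregationGroup& g, const std::string& state,
                Clock::time_point bucket, long long delta) {
    std::lock_guard<std::mutex> guard(g.lock);
    for (auto& r : g.records)
        if (r.processState == state && r.bucket == bucket) { r.value += delta; return; }
    g.records.push_back({state, bucket, delta});
}

// Combine all groups into a single table of aggregation records.
std::vector<AggregationRecord> combine(std::vector<AggregationGroup>& groups) {
    std::vector<AggregationRecord> table;
    for (auto& g : groups) {
        std::lock_guard<std::mutex> guard(g.lock);
        table.insert(table.end(), g.records.begin(), g.records.end());
    }
    return table;
}

// Drop records whose time bucket falls outside the preselected window.
void prune(std::vector<AggregationRecord>& table, Clock::time_point oldestAllowed) {
    table.erase(std::remove_if(table.begin(), table.end(),
                [&](const AggregationRecord& r) { return r.bucket < oldestAllowed; }),
                table.end());
}

int main() {
    std::vector<AggregationGroup> groups(2);
    auto now = Clock::now();
    addToGroup(groups[0], "completed", now, 5);
    addToGroup(groups[1], "started", now, 3);
    auto table = combine(groups);
    prune(table, now - std::chrono::hours(24));  // keep only the last day
    std::printf("%zu aggregation records\n", table.size());
}
```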
Abstract:
A multi-threaded processor is provided that internally reorders output threads thereby avoiding the need for an external output reorder buffer. The multi-threaded processor writes its thread results back to an internal memory buffer to guarantee that thread results are outputted in the same order in which the threads are received. A thread scheduler within the multi-threaded processor manages thread ordering control to avoid the need for an external reorder buffer. A compiler for the multi-threaded processor converts instructions that would normally send processed results directly to an external reorder buffer so that the processed thread results are instead sent to the internal memory buffer of the multi-threaded processor.
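A behavioral sketch of in-order result emission from out-of-order thread completion, assuming one internal buffer slot reserved per accepted thread; the structure and names are assumptions, not the patented hardware design.

```cpp
#include <cstdio>
#include <optional>
#include <vector>

struct InternalReorderBuffer {
    std::vector<std::optional<int>> slots; // one slot per accepted thread, in arrival order
    size_t head = 0;                       // next slot to emit

    explicit InternalReorderBuffer(size_t n) : slots(n) {}

    // A thread writes its result back to the slot reserved when it was accepted.
    void writeBack(size_t arrivalIndex, int result) { slots[arrivalIndex] = result; }

    // Emit results strictly in arrival order; stop at the first unfinished thread.
    void drain() {
        while (head < slots.size() && slots[head]) {
            std::printf("thread %zu -> %d\n", head, *slots[head]);
            ++head;
        }
    }
};

int main() {
    InternalReorderBuffer buf(3);
    buf.writeBack(2, 30);  // thread 2 finishes first
    buf.drain();           // nothing emitted yet; thread 0 not done
    buf.writeBack(0, 10);
    buf.writeBack(1, 20);
    buf.drain();           // emits threads 0, 1, 2 in arrival order
}
```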
Abstract:
A multi-threaded processor is provided, such as a shader processor, having an internal unified memory space that is shared by a plurality of threads and is dynamically assigned to threads as needed. A mapping table maps virtual registers to available internal addresses in the unified memory space so that thread registers can be stored in contiguous or non-contiguous memory addresses. Dynamic sizing of the virtual registers allows flexible allocation of the unified memory space depending on the type and size of data in a thread register. Another feature provides an efficient method for storing graphics data in the unified memory space to improve fetch and store operations from the memory space. In particular, pixel data for four pixels in a thread are stored across four memory devices having independent input/output ports that permit the four pixels to be read in a single clock cycle for processing.
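A minimal sketch of the mapping-table idea under assumed structures (a free list of memory words, a map keyed by thread and virtual register): virtual registers of varying size are allocated from the shared pool and may land in non-contiguous locations.

```cpp
#include <cstdio>
#include <map>
#include <utility>
#include <vector>

struct UnifiedMemory {
    std::vector<int> words;          // shared storage pool
    std::vector<size_t> freeList;    // indices of unallocated words
    explicit UnifiedMemory(size_t n) : words(n) {
        for (size_t i = 0; i < n; ++i) freeList.push_back(i);
    }
};

// Maps (threadId, virtualRegister) -> the physical word indices backing it.
using MappingTable = std::map<std::pair<int, int>, std::vector<size_t>>;

// Allocate 'size' words for a virtual register; the words need not be contiguous.
bool allocate(UnifiedMemory& mem, MappingTable& table, int thread, int vreg, size_t size) {
    if (mem.freeList.size() < size) return false;
    std::vector<size_t> addrs(mem.freeList.end() - size, mem.freeList.end());
    mem.freeList.resize(mem.freeList.size() - size);
    table[{thread, vreg}] = addrs;
    return true;
}

int main() {
    UnifiedMemory mem(16);
    MappingTable table;
    allocate(mem, table, /*thread=*/0, /*vreg=*/0, 4);  // e.g. a vec4-sized register
    allocate(mem, table, /*thread=*/1, /*vreg=*/0, 2);  // a smaller register
    for (size_t a : table[{0, 0}]) std::printf("%zu ", a);
    std::printf("\n");
}
```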
Abstract:
A graphics processor capable of parallel scheduling and execution of multiple threads, and techniques for achieving parallel scheduling and execution, are described. The graphics processor may include multiple hardware units and a scheduler. The hardware units are operable in parallel, with each hardware unit supporting a respective set of operations. The hardware units may include an ALU core, an elementary function core, a logic core, a texture sampler, a load control unit, some other hardware unit, or a combination thereof. The scheduler dispatches instructions for multiple threads to the hardware units concurrently. The graphics processor may further include an instruction cache to store instructions for threads and register banks to store data. The instruction cache and register banks may be shared by the hardware units.
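A minimal scheduling sketch, assuming one instruction per hardware unit per cycle: different threads can occupy the ALU core, elementary function core, texture sampler, and so on in the same cycle. Unit and operation names are illustrative.

```cpp
#include <cstdio>
#include <deque>
#include <map>
#include <string>

struct Instruction { int thread; std::string unit; std::string op; };

int main() {
    // Pending instructions from several threads.
    std::deque<Instruction> pending = {
        {0, "ALU",     "mad"},
        {1, "EFU",     "rsq"},   // elementary function core
        {2, "SAMPLER", "tex2d"},
        {0, "ALU",     "add"},   // must wait: the ALU is already busy this cycle
    };

    int cycle = 0;
    while (!pending.empty()) {
        std::map<std::string, bool> busy;   // unit -> occupied this cycle
        std::deque<Instruction> deferred;
        std::printf("cycle %d:\n", cycle++);
        for (const auto& ins : pending) {
            if (!busy[ins.unit]) {          // unit free: dispatch now
                busy[ins.unit] = true;
                std::printf("  thread %d -> %s (%s)\n",
                            ins.thread, ins.unit.c_str(), ins.op.c_str());
            } else {
                deferred.push_back(ins);    // unit taken: retry next cycle
            }
        }
        pending.swap(deferred);
    }
}
```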
Abstract:
A memory includes multiple interface ports. The memory also includes at least two sub-arrays, each having an instance of all of the bit lines of the memory and a portion of the word lines of the memory. The memory has a common decoder coupled to the sub-arrays and configured to control each of the word lines. The memory also includes multiplexers coupled to each of the interface ports. The multiplexers are configured to select one of the sub-arrays based on the address of a memory cell received at one or more of the interface ports.
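A behavioral sketch only (no timing or circuit detail, and the array dimensions are assumptions): the word lines are split between two sub-arrays, a common decoder picks the word line, and a per-port multiplexer selects which sub-array drives the port based on the requested address.

```cpp
#include <array>
#include <cstdio>

constexpr int kRowsPerSubArray = 4;
constexpr int kCols = 8;  // every sub-array has a full copy of the bit lines

using SubArray = std::array<std::array<int, kCols>, kRowsPerSubArray>;

struct Memory {
    std::array<SubArray, 2> subArrays{};   // word lines are divided between them

    // Common decoder: address -> (sub-array index, word line within it).
    static void decode(int address, int& sub, int& row) {
        int wordLine = address / kCols;
        sub = wordLine / kRowsPerSubArray;
        row = wordLine % kRowsPerSubArray;
    }

    // Port read: the multiplexer passes through the sub-array chosen by the decoder.
    int readPort(int address) const {
        int sub, row;
        decode(address, sub, row);
        return subArrays[sub][row][address % kCols];
    }
};

int main() {
    Memory m;
    m.subArrays[1][0][3] = 42;                       // a cell in the second sub-array
    std::printf("%d\n", m.readPort(4 * kCols + 3));  // this address decodes to that cell
}
```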
Abstract:
A 3D graphics pipeline includes a prefetch mechanism that feeds a cache of depth tiles. The prefetch mechanism may be predictive, using triangle geometry information from previous pipeline stages to pre-charge the cache, thereby allowing for an increase in memory bandwidth efficiency. A z-value compression technique may be optionally utilized to allow for a further reduction in power consumption and memory bandwidth.
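A minimal predictive-prefetch sketch under assumed conventions (8x8-pixel depth tiles, screen-space vertex positions): the triangle's bounding box from an earlier pipeline stage identifies which depth tiles the rasterizer will touch, so those tiles can be loaded into the cache ahead of time.

```cpp
#include <algorithm>
#include <cstdio>
#include <set>
#include <utility>

constexpr int kTileSize = 8;  // depth tiles of 8x8 pixels (assumption)

struct Vertex { float x, y; };

// Return the set of (tileX, tileY) pairs covered by the triangle's bounding box.
std::set<std::pair<int, int>> tilesForTriangle(const Vertex v[3]) {
    float minX = v[0].x, maxX = v[0].x, minY = v[0].y, maxY = v[0].y;
    for (int i = 1; i < 3; ++i) {
        minX = std::min(minX, v[i].x); maxX = std::max(maxX, v[i].x);
        minY = std::min(minY, v[i].y); maxY = std::max(maxY, v[i].y);
    }
    std::set<std::pair<int, int>> tiles;
    for (int ty = int(minY) / kTileSize; ty <= int(maxY) / kTileSize; ++ty)
        for (int tx = int(minX) / kTileSize; tx <= int(maxX) / kTileSize; ++tx)
            tiles.insert({tx, ty});
    return tiles;
}

int main() {
    Vertex tri[3] = {{3.f, 3.f}, {20.f, 5.f}, {10.f, 18.f}};
    for (auto [tx, ty] : tilesForTriangle(tri))
        std::printf("prefetch depth tile (%d, %d)\n", tx, ty);  // pre-charge the cache
}
```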
Abstract:
A device includes a multimedia processor that can concurrently support multiple applications for various types of multimedia such as graphics, audio, video, camera, games, etc. The multimedia processor includes configurable storage resources to store instructions, data, and state information for the applications and assignable processing units to perform various types of processing for the applications. The configurable storage resources may include an instruction cache to store instructions for the applications, register banks to store data for the applications, context registers to store state information for threads of the applications, etc. The processing units may include an arithmetic logic unit (ALU) core, an elementary function core, a logic core, a texture sampler, a load control unit, a flow controller, etc. The multimedia processor allocates a configurable portion of the storage resources to each application and dynamically assigns the processing units to the applications as requested by these applications.
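A minimal resource-manager sketch; the resource counts, unit names, and application names are assumptions, not the patented design. Each application reserves a configurable slice of storage, and processing units are handed out from a shared pool when an application requests them.

```cpp
#include <cstdio>
#include <map>
#include <string>
#include <vector>

struct Allocation { int cacheLines; int registers; std::vector<std::string> units; };

struct MultimediaResources {
    int freeCacheLines = 256;
    int freeRegisters = 1024;
    std::vector<std::string> freeUnits = {"ALU", "EFU", "LOGIC", "SAMPLER", "LOAD"};
    std::map<std::string, Allocation> apps;

    // Reserve a configurable slice of storage for an application.
    bool configureStorage(const std::string& app, int cacheLines, int registers) {
        if (cacheLines > freeCacheLines || registers > freeRegisters) return false;
        freeCacheLines -= cacheLines;
        freeRegisters -= registers;
        apps[app] = {cacheLines, registers, {}};
        return true;
    }

    // Dynamically assign a processing unit when the application asks for it.
    bool requestUnit(const std::string& app, const std::string& unit) {
        for (auto it = freeUnits.begin(); it != freeUnits.end(); ++it)
            if (*it == unit) {
                apps[app].units.push_back(unit);
                freeUnits.erase(it);
                return true;
            }
        return false;  // unit currently assigned elsewhere
    }
};

int main() {
    MultimediaResources r;
    r.configureStorage("video_decoder", 64, 256);
    r.configureStorage("3d_game", 128, 512);
    std::printf("%s\n", r.requestUnit("3d_game", "SAMPLER") ? "sampler assigned" : "busy");
}
```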
Abstract:
Configuration information is used to make a determination to bypass fragment shading by a shader unit of a graphics processing unit, the shader unit being capable of performing both vertex shading and fragment shading. Based on the determination, the shader unit performs vertex shading and bypasses fragment shading. A processing element other than the shader unit, such as a pixel blender, can be used to perform some fragment shading. Power is managed to “turn off” power to unused components in a case in which fragment shading is bypassed. For example, power can be turned off to a number of arithmetic logic units, the shader unit using the reduced number of arithmetic logic units to perform vertex shading. At least one register bank of the shader unit can be used as a FIFO buffer storing pixel attribute data for use, together with texture data, in fragment shading operations by another processing element.
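A minimal control-flow sketch of the bypass decision; the flag and field names and the ALU counts are assumptions. When configuration indicates fragment shading is not needed in the shader unit, the shader does vertex work only, spare ALUs are powered down, and pixel attributes are queued through a FIFO for another unit such as the pixel blender.

```cpp
#include <cstdio>
#include <queue>

struct Config { bool needsFragmentShading; };

struct ShaderUnit {
    int activeAlus = 8;
    std::queue<float> attributeFifo;   // register bank repurposed as a FIFO

    void configure(const Config& cfg) {
        if (!cfg.needsFragmentShading) {
            activeAlus = 2;            // power off ALUs not needed for vertex-only work
            std::printf("fragment shading bypassed; %d ALUs powered\n", activeAlus);
        } else {
            activeAlus = 8;
        }
    }

    // With the bypass active, attributes flow through the FIFO to another processing
    // element (e.g. the pixel blender) rather than through the shader's fragment path.
    void emitAttribute(float value) { attributeFifo.push(value); }
};

int main() {
    ShaderUnit shader;
    shader.configure({/*needsFragmentShading=*/false});
    shader.emitAttribute(0.5f);
    std::printf("queued attributes: %zu\n", shader.attributeFifo.size());
}
```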
Abstract:
A graphics processing unit (GPU) efficiently performs 3-dimensional (3-D) clipping using processing units used for other graphics functions. The GPU includes first and second hardware units and at least one buffer. The first hardware unit performs 3-D clipping of primitives using a first processing unit used for a first graphics function, e.g., an ALU used for triangle setup, depth gradient setup, etc. The first hardware unit may perform 3-D clipping by (a) computing clip codes for each vertex of each primitive, (b) determining whether to pass, discard or clip each primitive based on the clip codes for all vertices of the primitive, and (c) clipping each primitive to be clipped against clipping planes. The second hardware unit computes attribute component values for new vertices resulting from the 3-D clipping, e.g., using an ALU used for attribute gradient setup, attribute interpolation, etc. The buffer(s) store intermediate results of the 3-D clipping.
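A minimal sketch of the clip-code stage, assuming standard homogeneous clip-space conventions (one outcode bit per frustum plane): each triangle is trivially passed, trivially discarded, or marked for plane-by-plane clipping. Attribute interpolation for new vertices is noted in a comment rather than implemented.

```cpp
#include <cstdio>

struct Vec4 { float x, y, z, w; };

// One bit per frustum plane in homogeneous clip space.
unsigned clipCode(const Vec4& v) {
    unsigned code = 0;
    if (v.x < -v.w) code |= 1u;  if (v.x > v.w) code |= 2u;
    if (v.y < -v.w) code |= 4u;  if (v.y > v.w) code |= 8u;
    if (v.z < -v.w) code |= 16u; if (v.z > v.w) code |= 32u;
    return code;
}

enum class Action { Pass, Discard, Clip };

Action classify(const Vec4 tri[3]) {
    unsigned c0 = clipCode(tri[0]), c1 = clipCode(tri[1]), c2 = clipCode(tri[2]);
    if ((c0 | c1 | c2) == 0) return Action::Pass;     // all vertices inside
    if ((c0 & c1 & c2) != 0) return Action::Discard;  // all outside the same plane
    return Action::Clip;  // partially visible: clip against the offending planes,
                          // then compute attribute values for the new vertices
}

int main() {
    Vec4 tri[3] = {{0.f, 0.f, 0.f, 1.f}, {2.f, 0.f, 0.f, 1.f}, {0.f, 0.5f, 0.f, 1.f}};
    const char* names[] = {"pass", "discard", "clip"};
    std::printf("%s\n", names[static_cast<int>(classify(tri))]);
}
```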