Abstract:
A data processing system includes a processor core and a hardware module that operates as an ordering scope manager. The processor core performs tasks on data packets. The ordering scope manager stores a first value in a first storage location. The first value indicates that exclusive execution of a first task in a first ordering scope is enabled. In response to a relinquish indicator being received, the ordering scope manager stores a second value in the first storage location. The second value indicates that exclusive execution of the first task in the first ordering scope is disabled.
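The flag transition described above can be pictured with the minimal C sketch below; the names (scope_entry, osm_enter_exclusive, osm_relinquish) and the use of a boolean field as the first storage location are illustrative assumptions, not details taken from the abstract.

/* Model of the first storage location: one flag per ordering scope entry. */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    bool exclusive;   /* first value (true): exclusive execution enabled */
} scope_entry;

/* Store the first value: exclusive execution of the task in this ordering
 * scope is enabled. */
static void osm_enter_exclusive(scope_entry *e) {
    e->exclusive = true;
}

/* On receipt of a relinquish indicator, store the second value: exclusive
 * execution is disabled. */
static void osm_relinquish(scope_entry *e) {
    e->exclusive = false;
}

int main(void) {
    scope_entry scope0 = { false };
    osm_enter_exclusive(&scope0);
    printf("exclusive=%d\n", scope0.exclusive);   /* prints 1 */
    osm_relinquish(&scope0);
    printf("exclusive=%d\n", scope0.exclusive);   /* prints 0 */
    return 0;
}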
Abstract:
A method of multicast data transfer includes accessing a source address that points to a source location of mapped memory storing source data, accessing multiple destination addresses that point to corresponding destination locations of the mapped memory, and, for each of at least one section of the source data, reading the section using the source address, storing the section in a local memory of a data transfer device, and writing the section from the local memory to each destination location in the mapped memory using the destination addresses. Separate source and destination attributes may be provided, so that the source and each destination may have different attributes for reading and storing data. The source and each destination may have any number of data buffers accessible through corresponding links provided in data structures supporting the transfer. The source data may be divided into sections and handled section by section.
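A hedged C sketch of the section-by-section multicast copy is shown below; the SECTION_SIZE constant and the multicast_copy function are assumptions chosen for illustration, and ordinary memcpy stands in for the device's read and write operations.

#include <stddef.h>
#include <string.h>

#define SECTION_SIZE 64  /* assumed size of one section of source data */

/* Copy 'len' bytes from 'src' to every destination in 'dsts', one section at
 * a time, staging each section in the device's local memory first. */
static void multicast_copy(const char *src, char **dsts, size_t ndst, size_t len) {
    char local[SECTION_SIZE];                 /* local memory of the transfer device */
    for (size_t off = 0; off < len; off += SECTION_SIZE) {
        size_t n = (len - off < SECTION_SIZE) ? (len - off) : SECTION_SIZE;
        memcpy(local, src + off, n);          /* read the section via the source address */
        for (size_t d = 0; d < ndst; d++)     /* write it to each destination address */
            memcpy(dsts[d] + off, local, n);
    }
}

int main(void) {
    char src[128] = "example payload";
    char d0[128], d1[128];
    char *dsts[] = { d0, d1 };
    multicast_copy(src, dsts, 2, sizeof src);
    return 0;
}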
Abstract:
A communication channel controller includes a queue, a memory map, and a scheduler. The queue stores a first memory transfer request received at the communication channel controller. The memory map stores information that identifies a memory address range associated with a memory. The scheduler compares a source address of the first memory transfer request in the queue to the memory address range in the memory map to determine whether the source address targets the memory, and allocates the first memory transfer request to a first communication channel of a plurality of communication channels in response to all of the first communication channel's outstanding memory transactions sharing the same source address bank and source address page as the first memory transfer request.
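The bank/page-affinity check can be pictured with the C sketch below; the structure layouts and the bit positions used by addr_bank and addr_page are assumptions, since the abstract does not specify how the source address decomposes into bank and page.

#include <stdbool.h>
#include <stdint.h>

typedef struct { uint64_t base, limit; } mem_range;           /* memory map entry */
typedef struct { uint64_t src; } xfer_req;                    /* queued transfer request */
typedef struct { uint64_t out_src[8]; int n_out; } channel;   /* outstanding sources */

static uint32_t addr_bank(uint64_t a) { return (uint32_t)(a >> 13) & 0x7; } /* assumed bank bits */
static uint32_t addr_page(uint64_t a) { return (uint32_t)(a >> 16); }       /* assumed page bits */

/* Does the request's source address fall in the mapped memory range? */
static bool targets_memory(const mem_range *m, uint64_t src) {
    return src >= m->base && src <= m->limit;
}

/* True when every outstanding transaction on the channel shares the request's
 * source bank and page, so the request may be allocated to that channel. */
static bool channel_matches(const channel *c, const xfer_req *r) {
    for (int i = 0; i < c->n_out; i++)
        if (addr_bank(c->out_src[i]) != addr_bank(r->src) ||
            addr_page(c->out_src[i]) != addr_page(r->src))
            return false;
    return true;
}

int main(void) {
    mem_range ddr = { 0x80000000u, 0xFFFFFFFFu };
    channel ch = { .out_src = { 0x80012000u }, .n_out = 1 };
    xfer_req req = { .src = 0x80012040u };
    return (targets_memory(&ddr, req.src) && channel_matches(&ch, &req)) ? 0 : 1;
}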
Abstract:
An integrated circuit device includes a processor core and a controller. The processor core issues a first command intended for a first thread of a plurality of threads. During a thread reset process for the first thread, the controller de-allocates hardware resources of the controller that are allocated to the first thread, returns a specified value to the processor core in response to the first command intended for the first thread, drops responses intended for the first thread from other devices, completes the thread reset process in response to a determination that all expected responses intended for the first thread have been either received or dropped, and continues to issue requests to other devices in response to commands from other threads of the plurality of threads and to process the corresponding responses.
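The per-thread bookkeeping can be sketched in C as below; the thread_state layout and the DEFAULT_RESPONSE value are assumptions standing in for the controller's internal state and the specified value mentioned above.

#include <stdbool.h>
#include <stdint.h>

#define DEFAULT_RESPONSE 0u       /* assumed "specified value" returned during reset */

typedef struct {
    bool in_reset;                /* thread reset process active */
    int  expected_responses;      /* responses still owed to this thread */
} thread_state;

/* Begin the reset; the controller's hardware resources for the thread are
 * de-allocated as part of this step. */
static void thread_reset_start(thread_state *t) { t->in_reset = true; }

/* A command from the core aimed at a thread in reset receives the specified
 * value instead of a normal result. */
static uint32_t handle_command(const thread_state *t, uint32_t normal_result) {
    return t->in_reset ? DEFAULT_RESPONSE : normal_result;
}

/* A response from another device aimed at the thread is dropped during reset;
 * the reset completes once no more responses are expected. */
static void handle_response(thread_state *t) {
    if (t->expected_responses > 0)
        t->expected_responses--;
    if (t->in_reset && t->expected_responses == 0)
        t->in_reset = false;      /* thread reset process complete */
}

int main(void) {
    thread_state t0 = { .in_reset = false, .expected_responses = 2 };
    thread_reset_start(&t0);
    (void)handle_command(&t0, 0x1234u);   /* core sees DEFAULT_RESPONSE */
    handle_response(&t0);                 /* response dropped */
    handle_response(&t0);                 /* last expected response; reset completes */
    return t0.in_reset ? 1 : 0;
}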
Abstract:
A data processing system employs a scheduler to schedule pending memory access requests and a memory controller to service the scheduled memory access requests. The memory access requests are scheduled asynchronously with respect to the clocking of the memory. The scheduler is operated using a clock signal whose frequency differs from that of the clock signal used to operate the memory controller. The clock signal used to clock the scheduler can have a lower frequency than the clock signal used by the memory controller. As a result, the scheduler is able to consider a greater number of pending memory access requests when selecting the next pending memory access request to be submitted to the memory for servicing, and thus the resulting sequence of selected memory access requests is more likely to be optimized for memory access throughput.
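One way to picture the benefit is the toy C sketch below: a scheduler on a slower clock can scan a wider window of pending requests per decision. The WINDOW size, the row-hit heuristic, and the bit split in row_of are assumptions for illustration only.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define WINDOW 16   /* requests examined per (slower) scheduler cycle */

typedef struct { uint64_t addr; } mem_req;

static uint64_t row_of(uint64_t addr) { return addr >> 13; }   /* assumed row bits */

/* Pick the next request to hand to the memory controller: prefer one that
 * hits the currently open row, otherwise fall back to the oldest pending. */
static size_t schedule_next(const mem_req *pending, size_t n, uint64_t open_row) {
    size_t limit = n < WINDOW ? n : WINDOW;
    for (size_t i = 0; i < limit; i++)
        if (row_of(pending[i].addr) == open_row)
            return i;            /* row hit: better throughput */
    return 0;                    /* no hit in the window: take the oldest */
}

int main(void) {
    mem_req q[] = { { 0x1000 }, { 0x4000 }, { 0x4100 } };
    printf("next=%zu\n", schedule_next(q, 3, row_of(0x4000)));   /* prints next=1 */
    return 0;
}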
Abstract:
A source processor can divide each packet of a data stream into multiple segments prior to communication of the packet, allowing a packet to be transmitted in smaller chunks. The source processor can process the segments of two or more packets for a given data stream concurrently, and provide appropriate context information in each segment's header to facilitate in-order transmission and reception of the packets represented by the individual segments. Similarly, a destination processor can receive the packet segments for an ordered data stream from a source processor, and can assign different contexts based upon the context information in each segment's header. When the last segment of a particular packet is received, the context for that packet is closed, and a descriptor for the packet is sent to a queue. The order in which the last segments of the packets are transmitted maintains order amongst the packets.
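The context handling on the receive side can be sketched in C as follows; the seg_header fields and the printf standing in for queuing a descriptor are illustrative assumptions.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint16_t context_id;   /* identifies which packet the segment belongs to */
    uint16_t seq;          /* position of the segment within the packet */
    bool     last;         /* set on the final segment of the packet */
} seg_header;

/* Destination side: when the last segment of a packet arrives, close its
 * context and emit a descriptor for the completed packet. */
static void on_segment(const seg_header *h) {
    if (h->last)
        printf("context %u closed, descriptor queued\n", (unsigned)h->context_id);
}

int main(void) {
    /* Segments of two packets interleaved on one ordered stream; the last
     * segments arrive in packet order, so descriptors are queued in order. */
    seg_header segs[] = {
        { .context_id = 1, .seq = 0, .last = false },
        { .context_id = 2, .seq = 0, .last = false },
        { .context_id = 1, .seq = 1, .last = true },
        { .context_id = 2, .seq = 1, .last = true },
    };
    for (unsigned i = 0; i < sizeof segs / sizeof segs[0]; i++)
        on_segment(&segs[i]);
    return 0;
}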
Abstract:
A DMA controller allocates space at a buffer to different DMA engines based on the length of time data segments have been stored at the buffer. This allocation ensures that DMA engines associated with a destination that is experiencing higher congestion will be assigned less buffer space than DMA engines associated with a destination that is experiencing lower congestion. Further, the DMA controller is able to adapt to changing congestion conditions at the transfer destinations.
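One possible realization of the policy is the C sketch below, which weights each engine's share inversely to the average age of its data in the buffer; the TOTAL_BUF constant and the inverse-age weighting are assumptions, not details from the abstract.

#include <stdint.h>

#define TOTAL_BUF 4096u   /* assumed total buffer space shared by the engines */

/* Give each DMA engine a share of the buffer inversely proportional to the
 * average age of its data already sitting in the buffer: older data implies a
 * more congested destination and therefore a smaller allocation. */
static void allocate_buffer(const uint32_t *avg_age, uint32_t *alloc, int n) {
    uint64_t sum = 0;
    for (int i = 0; i < n; i++)
        sum += 1000u / (avg_age[i] + 1u);                   /* inverse-age weight */
    if (sum == 0)
        sum = 1;                                            /* avoid divide-by-zero */
    for (int i = 0; i < n; i++)
        alloc[i] = (uint32_t)((uint64_t)TOTAL_BUF * (1000u / (avg_age[i] + 1u)) / sum);
}

int main(void) {
    uint32_t age[2]   = { 1, 9 };   /* engine 1's data has waited far longer */
    uint32_t alloc[2] = { 0, 0 };
    allocate_buffer(age, alloc, 2); /* alloc[0] > alloc[1]: less space for the
                                       more congested destination */
    return 0;
}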