-
Publication No.: US20190205139A1
Publication Date: 2019-07-04
Application No.: US15858899
Filing Date: 2017-12-29
Applicant: Intel Corporation
Inventor: Christopher J. Hughes , Joseph Nuzman , Jonas Svennebring , Doddaballapur N. Jayasimha , Samantika S. Sury , David A. Koufaty , Niall D. McDonnell , Yen-Cheng Liu , Stephen R. Van Doren , Stephen J. Robinson
IPC: G06F9/30
Abstract: Disclosed embodiments relate to spatial and temporal merging of remote atomic operations. In one example, a system includes an RAO instruction queue stored in a memory and having entries grouped by destination cache line, each entry to enqueue an RAO instruction including an opcode, a destination identifier, and source data, optimization circuitry to receive an incoming RAO instruction, scan the RAO instruction queue to detect a matching enqueued RAO instruction identifying a same destination cache line as the incoming RAO instruction, the optimization circuitry further to, responsive to no matching enqueued RAO instruction being detected, enqueue the incoming RAO instruction; and, responsive to a matching enqueued RAO instruction being detected, determine whether the incoming and matching RAO instructions have a same opcode and are directed to non-overlapping cache line elements, and, if so, spatially combine the incoming and matching RAO instructions by enqueuing both RAO instructions in a same group of cache line queue entries at different offsets.
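A minimal Python sketch of the queuing and spatial-combining idea described in this abstract, not the patented circuit: entries are grouped by destination cache line, and an incoming RAO instruction is merged into an existing group only when the opcode matches and the touched byte ranges do not overlap. The class names, 64-byte line size, and the "rejected" fallback are illustrative assumptions.

```python
# Conceptual sketch (not the patented optimization circuitry): an RAO
# instruction queue grouped by destination cache line, with spatial combining
# of instructions that share an opcode but touch non-overlapping offsets.
from dataclasses import dataclass

CACHE_LINE = 64  # assumed cache-line size in bytes


@dataclass
class RAOInstruction:
    opcode: str      # e.g. "ADD", "AND"
    address: int     # destination byte address
    width: int       # operand width in bytes
    source: int      # source data

    @property
    def line(self) -> int:
        return self.address // CACHE_LINE

    @property
    def offset(self) -> int:
        return self.address % CACHE_LINE


class RAOQueue:
    """Queue whose entries are grouped by destination cache line."""

    def __init__(self):
        self.groups: dict[int, list[RAOInstruction]] = {}

    def enqueue(self, incoming: RAOInstruction) -> str:
        group = self.groups.get(incoming.line)
        if group is None:
            # No matching enqueued RAO to the same line: simply enqueue.
            self.groups[incoming.line] = [incoming]
            return "enqueued"
        # Matching line found: spatially combine only if the opcode matches
        # and the touched byte ranges of the line do not overlap.
        same_op = all(e.opcode == incoming.opcode for e in group)
        overlaps = any(
            incoming.offset < e.offset + e.width
            and e.offset < incoming.offset + incoming.width
            for e in group
        )
        if same_op and not overlaps:
            group.append(incoming)   # same group of entries, different offset
            return "spatially combined"
        return "rejected"            # caller would issue it separately


# Usage example
q = RAOQueue()
print(q.enqueue(RAOInstruction("ADD", 0x1000, 8, 1)))   # enqueued
print(q.enqueue(RAOInstruction("ADD", 0x1008, 8, 2)))   # spatially combined
print(q.enqueue(RAOInstruction("AND", 0x1010, 8, 3)))   # rejected
```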
-
Publication No.: US20240362021A1
Publication Date: 2024-10-31
Application No.: US18670427
Filing Date: 2024-05-21
Applicant: Intel Corporation
Inventor: Doddaballapur N. Jayasimha , Jonas Svennebring , Samantika S. Sury , Christopher J. Hughes , Jong Soo Park , Lingxiang Xiang
CPC classification number: G06F9/3004 , G06F9/3001 , G06F9/30185 , G06F9/3836 , G06F9/46 , G06F13/28
Abstract: Disclosed embodiments relate to atomic memory operations. In one example, a method of executing an instruction atomically and with weak order includes: fetching, by fetch circuitry, the instruction from code storage, the instruction including an opcode, a source identifier, and a destination identifier, decoding, by decode circuitry, the fetched instruction, selecting, by a scheduling circuit, an execution circuit among multiple circuits in a system, scheduling, by the scheduling circuit, execution of the decoded instruction out of order with respect to other instructions, with an order selected to optimize at least one of latency, throughput, power, and performance, and executing the decoded instruction, by the execution circuit, to: atomically read a datum from a location identified by the destination identifier, perform an operation on the datum as specified by the opcode, the operation to use a source operand identified by the source identifier, and write a result back to the location.
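A rough Python sketch of the atomic read-modify-write semantics this abstract describes: read a datum from the destination, apply the operation named by the opcode using the source operand, and write the result back as one atomic step, while the instructions themselves may complete in any order. Modeling atomicity with a lock and "scheduling" with a thread pool are illustrative assumptions, not the hardware pipeline.

```python
# Conceptual sketch of an atomic, weakly ordered read-modify-write.
# The lock, opcode table, and thread-pool "scheduler" are assumptions only.
import threading
from concurrent.futures import ThreadPoolExecutor

memory = {0x40: 0}               # destination location -> datum
location_lock = threading.Lock()

OPS = {"ADD": lambda d, s: d + s, "AND": lambda d, s: d & s}


def execute_rao(opcode: str, dst: int, src: int) -> None:
    # Atomic section: read the datum, perform the operation, write back.
    with location_lock:
        datum = memory[dst]
        memory[dst] = OPS[opcode](datum, src)


# "Weak order": instructions may complete in any order relative to one
# another; only each individual read-modify-write is atomic.
with ThreadPoolExecutor(max_workers=4) as pool:
    for i in range(1, 5):
        pool.submit(execute_rao, "ADD", 0x40, i)

print(memory[0x40])  # 10, regardless of completion order
```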
-
Publication No.: US20220303331A1
Publication Date: 2022-09-22
Application No.: US17626119
Filing Date: 2020-08-07
Applicant: Intel Corporation
Inventor: Jonas Svennebring , Carl-Oscar Montelius
IPC: H04L65/752 , H04L41/147
Abstract: In one embodiment, a computing device for receiving a media stream includes processing circuitry to receive a link performance prediction for a network link between the computing device and a network, which indicates a predicted performance of the network link during a future timeframe. Based on the link performance prediction, the processing circuitry identifies a performance objective for the media stream. The performance objective is associated with media stream content that will be received in the media stream over the network link for playback during the future timeframe. Based on the link performance prediction and the performance objective, the processing circuitry adjusts one or more media streaming parameters for the media stream content to be played during the future timeframe. The processing circuitry then receives the media stream content to be played during the future timeframe over the network link based on the media streaming parameter(s).
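A small Python sketch of the adjustment step described here: given a link performance prediction for a future timeframe and a performance objective, choose streaming parameters for the content to be played in that timeframe. The bitrate ladder, 20% headroom, and pre-buffer thresholds are assumptions for illustration, not the claimed method.

```python
# Illustrative only: pick media streaming parameters from a link
# performance prediction (LPP) and a performance objective.
from dataclasses import dataclass


@dataclass
class LinkPerformancePrediction:
    start_s: float                  # when the predicted timeframe begins
    duration_s: float
    predicted_bandwidth_kbps: int


BITRATE_LADDER_KBPS = [500, 1500, 3000, 6000]   # assumed available renditions


def adjust_stream_parameters(lpp: LinkPerformancePrediction,
                             objective_kbps: int) -> dict:
    """Choose bitrate/buffer settings for content played during lpp's timeframe."""
    # Cap the target by both the objective and the prediction, keeping
    # roughly 20% headroom under the predicted bandwidth.
    usable = int(lpp.predicted_bandwidth_kbps * 0.8)
    target = min(objective_kbps, usable)
    bitrate = max([b for b in BITRATE_LADDER_KBPS if b <= target],
                  default=BITRATE_LADDER_KBPS[0])
    # Pre-buffer more aggressively when a dip below the objective is predicted.
    prebuffer_s = 30 if usable < objective_kbps else 10
    return {"bitrate_kbps": bitrate, "prebuffer_s": prebuffer_s}


# Example: a predicted dip to 2 Mbps while the objective is 4 Mbps.
lpp = LinkPerformancePrediction(start_s=60.0, duration_s=30.0,
                                predicted_bandwidth_kbps=2000)
print(adjust_stream_parameters(lpp, objective_kbps=4000))
# -> {'bitrate_kbps': 1500, 'prebuffer_s': 30}
```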
-
Publication No.: US20220038359A1
Publication Date: 2022-02-03
Application No.: US17505919
Filing Date: 2021-10-20
Applicant: Intel Corporation
Inventor: Jonas Svennebring , Antony Vance Jeyaraj
Abstract: Various systems and methods for determining and communicating Link Performance Predictions (LPPs), such as in connection with management of radio communication links, are discussed herein. The LPPs are predictions of future network behaviors/metrics (e.g., bandwidth, latency, capacity, coverage holes, etc.). The LPPs are communicated to applications and/or network infrastructure, which allows the applications/infrastructure to make operational decisions for improved signaling/link resource utilization. In embodiments, the link performance analysis is divided into multiple layers that determine their own link performance metrics, which are then fused together to make an LPP. Each layer runs different algorithms, and provides respective results to an LPP layer/engine that fuses the results together to obtain the LPP. Other embodiments are described and/or claimed.
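A brief Python sketch of the layered analysis described above: several layers each produce their own link-performance metric, and an LPP engine fuses them into a single prediction. The specific layers, confidence weights, and the weighted-average fusion are hypothetical; the abstract does not specify the fusion algorithm.

```python
# Illustrative fusion of per-layer link performance metrics into one LPP.
from dataclasses import dataclass


@dataclass
class LayerResult:
    name: str
    bandwidth_kbps: float    # the layer's own prediction
    confidence: float        # 0..1 weight the layer assigns its result


def fuse_lpp(results: list[LayerResult]) -> dict:
    """LPP engine: confidence-weighted fusion of per-layer predictions."""
    total_weight = sum(r.confidence for r in results)
    if total_weight == 0:
        raise ValueError("no usable layer results")
    fused = sum(r.bandwidth_kbps * r.confidence for r in results) / total_weight
    return {"predicted_bandwidth_kbps": fused,
            "confidence": total_weight / len(results)}


# Each "layer" would run its own algorithm (cell load, UE mobility, history, ...).
layers = [
    LayerResult("cell_load",   4000, 0.9),
    LayerResult("ue_mobility", 2500, 0.6),
    LayerResult("history",     3500, 0.7),
]
print(fuse_lpp(layers))
```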
-
Publication No.: US20210385720A1
Publication Date: 2021-12-09
Application No.: US17184832
Filing Date: 2021-02-25
Applicant: Intel Corporation
Inventor: Jonas Svennebring , Niall D. McDonnell , Andrey Chilikin , Andrew Cunningham , Christopher MacNamara , Carl-Oscar Montelius , Eliezer Tamir , Bjorn Topel
IPC: H04W36/30 , H04W36/32 , H04L12/715 , H04W76/27 , H04L12/717 , H04W40/18
Abstract: Aspects of data re-direction are described, which can include software-defined networking (SDN) data re-direction operations. Some aspects include data re-direction operations performed by one or more virtualized network functions. In some aspects, a network router decodes an indication of a handover of a user equipment (UE) from a first end point (EP) to a second EP. Based on the indication, the router can update a relocation table including the UE identifier, an identifier of the first EP, and an identifier of the second EP. The router can receive a data packet for the UE, configured for transmission to the first EP, and modify the data packet, based on the relocation table, for rerouting to the second EP. In some aspects, the router can decode handover prediction information, including an indication of a predicted future geographic location of the UE, and update the relocation table based on the handover prediction information.
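A conceptual Python sketch of the relocation-table idea in this abstract: on an actual or predicted handover, record UE → (first EP, second EP); packets still addressed to the first EP are rewritten toward the second EP. The table layout and packet fields are assumptions, not the router's actual data structures.

```python
# Illustrative relocation table and re-direction step for a handed-over UE.
from dataclasses import dataclass, replace


@dataclass(frozen=True)
class Packet:
    ue_id: str
    dst_ep: str       # end point the packet was configured for
    payload: bytes


class RelocationRouter:
    def __init__(self):
        # ue_id -> (first_ep, second_ep)
        self.relocation_table: dict[str, tuple[str, str]] = {}

    def on_handover(self, ue_id: str, first_ep: str, second_ep: str) -> None:
        """Decode a handover indication (or prediction) and update the table."""
        self.relocation_table[ue_id] = (first_ep, second_ep)

    def route(self, pkt: Packet) -> Packet:
        entry = self.relocation_table.get(pkt.ue_id)
        if entry and pkt.dst_ep == entry[0]:
            # Re-direct: rewrite the destination to the second EP.
            return replace(pkt, dst_ep=entry[1])
        return pkt


router = RelocationRouter()
router.on_handover("ue-42", first_ep="ep-A", second_ep="ep-B")
print(router.route(Packet("ue-42", "ep-A", b"data")).dst_ep)   # ep-B
```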
-
Publication No.: US20190319868A1
Publication Date: 2019-10-17
Application No.: US16452352
Filing Date: 2019-06-25
Applicant: Intel Corporation
Inventor: Jonas Svennebring , Antony Vance Jeyaraj
Abstract: Various systems and methods for determining and communicating Link Performance Predictions (LPPs), such as in connection with management of radio communication links, are discussed herein. The LPPs are predictions of future network behaviors/metrics (e.g., bandwidth, latency, capacity, coverage holes, etc.). The LPPs are communicated to applications and/or network infrastructure, which allows the applications/infrastructure to make operational decisions for improved signaling/link resource utilization. In embodiments, the link performance analysis is divided into multiple layers that determine their own link performance metrics, which are then fused together to make an LPP. Each layer runs different algorithms, and provides respective results to an LPP layer/engine that fuses the results together to obtain the LPP. Other embodiments are described and/or claimed.
-
Publication No.: US11500636B2
Publication Date: 2022-11-15
Application No.: US16799619
Filing Date: 2020-02-24
Applicant: Intel Corporation
Inventor: Christopher J. Hughes , Joseph Nuzman , Jonas Svennebring , Doddaballapur N. Jayasimha , Samantika S. Sury , David A. Koufaty , Niall D. McDonnell , Yen-Cheng Liu , Stephen R. Van Doren , Stephen J. Robinson
IPC: G06F9/30 , G06F12/0875
Abstract: Disclosed embodiments relate to spatial and temporal merging of remote atomic operations. In one example, a system includes an RAO instruction queue stored in a memory and having entries grouped by destination cache line, each entry to enqueue an RAO instruction including an opcode, a destination identifier, and source data, optimization circuitry to receive an incoming RAO instruction, scan the RAO instruction queue to detect a matching enqueued RAO instruction identifying a same destination cache line as the incoming RAO instruction, the optimization circuitry further to, responsive to no matching enqueued RAO instruction being detected, enqueue the incoming RAO instruction; and, responsive to a matching enqueued RAO instruction being detected, determine whether the incoming and matching RAO instructions have a same opcode and are directed to non-overlapping cache line elements, and, if so, spatially combine the incoming and matching RAO instructions by enqueuing both RAO instructions in a same group of cache line queue entries at different offsets.
-
Publication No.: US20220345931A1
Publication Date: 2022-10-27
Application No.: US17844969
Filing Date: 2022-06-21
Applicant: Intel Corporation
Inventor: Jonas Svennebring , Theoharis Charitidis , Tirthendu Sarkar
Abstract: Devices, systems, and methods for temporary link performance elevation are disclosed herein. In one embodiment, a link performance elevation (LPE) request is received. The LPE request is a request to temporarily boost performance of a network link between an endpoint and a service provider for a finite duration. Based on the LPE request, a temporary performance boost is activated for the network link at the start of the finite duration, and the temporary performance boost is deactivated for the network link at the end of the finite duration.
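A minimal Python sketch of the boost lifecycle described above: an LPE request asks for a temporary elevation of a network link for a finite duration; the boost is activated at the start of that duration and deactivated at its end. The request fields and the timer-based deactivation are illustrative assumptions.

```python
# Illustrative handling of a link performance elevation (LPE) request.
import threading
import time
from dataclasses import dataclass


@dataclass
class LPERequest:
    link_id: str
    extra_bandwidth_kbps: int
    duration_s: float          # the finite duration of the boost


class LinkManager:
    def __init__(self):
        self.boosted: dict[str, int] = {}   # link_id -> extra bandwidth

    def handle_lpe_request(self, req: LPERequest) -> None:
        # Activate the temporary boost at the start of the finite duration...
        self.boosted[req.link_id] = req.extra_bandwidth_kbps
        # ...and schedule its deactivation at the end of that duration.
        threading.Timer(req.duration_s, self._deactivate, args=[req.link_id]).start()

    def _deactivate(self, link_id: str) -> None:
        self.boosted.pop(link_id, None)


mgr = LinkManager()
mgr.handle_lpe_request(LPERequest("link-1", extra_bandwidth_kbps=2000, duration_s=0.1))
print("boost active:", "link-1" in mgr.boosted)   # True
time.sleep(0.2)
print("boost active:", "link-1" in mgr.boosted)   # False
```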
-
Publication No.: US11159408B2
Publication Date: 2021-10-26
Application No.: US16452352
Filing Date: 2019-06-25
Applicant: Intel Corporation
Inventor: Jonas Svennebring , Antony Vance Jeyaraj
Abstract: Various systems and methods for determining and communicating Link Performance Predictions (LPPs), such as in connection with management of radio communication links, are discussed herein. The LPPs are predictions of future network behaviors/metrics (e.g., bandwidth, latency, capacity, coverage holes, etc.). The LPPs are communicated to applications and/or network infrastructure, which allows the applications/infrastructure to make operational decisions for improved signaling/link resource utilization. In embodiments, the link performance analysis is divided into multiple layers that determine their own link performance metrics, which are then fused together to make an LPP. Each layer runs different algorithms, and provides respective results to an LPP layer/engine that fuses the results together to obtain the LPP. Other embodiments are described and/or claimed.
-
Publication No.: US11138112B2
Publication Date: 2021-10-05
Application No.: US16382092
Filing Date: 2019-04-11
Applicant: Intel Corporation
Inventor: Doddaballapur N. Jayasimha , Samantika S. Sury , Christopher J. Hughes , Jonas Svennebring , Yen-Cheng Liu , Stephen R. Van Doren , David A. Koufaty
IPC: G06F12/0831 , G06F12/0815 , G06F12/0808 , G06F9/30 , G06F12/0817
Abstract: Disclosed embodiments relate to remote atomic operations (RAO) in multi-socket systems. In one example, a method, performed by a cache control circuit of a requester socket, includes: receiving the RAO instruction from the requester CPU core, determining a home agent in a home socket for the addressed cache line, providing a request for ownership (RFO) of the addressed cache line to the home agent, waiting for the home agent to either invalidate and retrieve a latest copy of the addressed cache line from a cache, or to fetch the addressed cache line from memory, receiving an acknowledgement and the addressed cache line, executing the RAO instruction on the received cache line atomically, subsequently receiving multiple local RAO instructions to the addressed cache line from one or more requester CPU cores, and executing the multiple local RAO instructions on the received cache line independently of the home agent.
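A conceptual Python sketch of the requester-socket flow in this abstract: obtain ownership of the addressed cache line from its home agent once, execute the RAO on the local copy, then satisfy further local RAO instructions to the same line without involving the home agent again. The classes and the ownership model are simplified assumptions, not the cache-coherence protocol itself.

```python
# Illustrative requester-socket flow for multi-socket RAO execution.

class HomeAgent:
    """Home socket's agent for the addressed line, backed by 'memory'."""
    def __init__(self):
        self.memory: dict[int, int] = {}

    def request_for_ownership(self, line_addr: int) -> int:
        # Invalidate/retrieve the latest copy (or fetch from memory) and
        # acknowledge by handing the line's data to the requester.
        return self.memory.setdefault(line_addr, 0)


class RequesterCacheControl:
    def __init__(self, home: HomeAgent):
        self.home = home
        self.owned: dict[int, int] = {}   # line_addr -> locally owned data

    def execute_rao(self, line_addr: int, operand: int) -> None:
        if line_addr not in self.owned:
            # First RAO to this line: issue an RFO to the home agent and
            # wait for the acknowledgement plus the line's data.
            self.owned[line_addr] = self.home.request_for_ownership(line_addr)
        # Subsequent local RAOs to the same line execute here, atomically on
        # the received copy, independently of the home agent.
        self.owned[line_addr] += operand   # e.g. an atomic ADD


home = HomeAgent()
requester = RequesterCacheControl(home)
for v in (1, 2, 3):                       # one RFO, three local executions
    requester.execute_rao(0x80, v)
print(requester.owned[0x80])              # 6
```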