-
621.
Publication Number: US10340916B1
Publication Date: 2019-07-02
Application Number: US15859124
Application Date: 2017-12-29
Applicant: Advanced Micro Devices, Inc.
Inventor: Thomas J. Gibney , Sridhar V. Gada , Alexander J. Branover , Benjamin Tsien
IPC: G06F17/50 , H03K19/0175 , H03K19/173
CPC classification number: H03K19/017509 , H03K19/1733
Abstract: An electronic device includes a plurality of hardware functional blocks, the hardware functional blocks being logically grouped into two or more islands, with each island including a different one or more of the hardware functional blocks. A hardware controller in the electronic device is configured to determine a present activity being performed by at least one of the hardware functional blocks. The hardware controller then, based on the present activity, configures supply voltages for the hardware functional blocks in some or all of the islands.
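As a rough illustration of the idea in this abstract, the sketch below models islands of functional blocks whose supply voltages a controller sets according to the activity it detects. The island names, activity levels, and voltage table are hypothetical and not taken from the patent.

```python
# Hypothetical sketch of activity-driven per-island voltage configuration.
# Island names, activity states, and voltage values are illustrative only.

ISLANDS = {
    "island_a": ["video_decoder", "display_controller"],
    "island_b": ["dsp", "crypto_engine"],
}

# Supply voltage (in volts) chosen per detected activity level.
VOLTAGE_FOR_ACTIVITY = {"idle": 0.60, "light": 0.75, "heavy": 0.95}

def configure_supply_voltages(present_activity):
    """present_activity maps a functional block to 'idle'/'light'/'heavy'.
    Each island is driven at the voltage needed by its busiest block."""
    settings = {}
    for island, blocks in ISLANDS.items():
        levels = [present_activity.get(block, "idle") for block in blocks]
        # Pick the highest requirement among the island's blocks.
        busiest = max(levels, key=lambda lvl: VOLTAGE_FOR_ACTIVITY[lvl])
        settings[island] = VOLTAGE_FOR_ACTIVITY[busiest]
    return settings

print(configure_supply_voltages({"video_decoder": "heavy", "dsp": "idle"}))
```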
-
622.
Publication Number: US20190197761A1
Publication Date: 2019-06-27
Application Number: US15853207
Application Date: 2017-12-22
Applicant: Advanced Micro Devices, Inc.
Inventor: Skyler Jonathon Saleh , Maxim V. Kazakov , Vineet Goel
Abstract: A texture processor based ray tracing accelerator method and system are described. The system includes a shader, texture processor (TP) and cache, which are interconnected. The TP includes a texture address unit (TA), a texture cache processor (TCP), a filter pipeline unit and a ray intersection engine. The shader sends a texture instruction which contains ray data and a pointer to a bounding volume hierarchy (BVH) node to the TA. The TCP uses an address provided by the TA to fetch BVH node data from the cache. The ray intersection engine performs ray-BVH node type intersection testing using the ray data and the BVH node data. The intersection testing results and indications for BVH traversal are returned to the shader via a texture data return path. The shader reviews the intersection results and the indications to decide how to traverse to the next BVH node.
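As a simplified software analogue of the traversal loop described here (in the patent the intersection testing itself happens in the texture processor's ray intersection engine), the sketch below shows a shader-side loop that submits a ray and a BVH node, receives the child nodes the ray hits, and decides which node to visit next. The node layout and helper names are assumptions for illustration.

```python
# Illustrative shader-side BVH traversal loop; data layout and helper names
# are hypothetical, not taken from the patent.

def ray_hits_box(ray, bbox):
    """Minimal axis-aligned slab test; bbox is ((lox,loy,loz),(hix,hiy,hiz))."""
    tmin, tmax = 0.0, float("inf")
    for axis in range(3):
        origin, direction = ray["origin"][axis], ray["direction"][axis]
        lo, hi = bbox[0][axis], bbox[1][axis]
        if abs(direction) < 1e-12:
            if origin < lo or origin > hi:
                return False
            continue
        t0, t1 = (lo - origin) / direction, (hi - origin) / direction
        tmin, tmax = max(tmin, min(t0, t1)), min(tmax, max(t0, t1))
        if tmin > tmax:
            return False
    return True

def intersect_node(ray, node):
    """Stand-in for the ray intersection engine: returns the child nodes
    whose bounding boxes the ray hits."""
    return [child for child in node["children"] if ray_hits_box(ray, child["bbox"])]

def traverse(ray, root):
    hits = []
    stack = [root]                            # nodes still to be visited
    while stack:
        node = stack.pop()
        if node["is_leaf"]:
            hits.extend(node["primitives"])   # record candidate primitives
            continue
        # "Texture instruction": ray data + node pointer go to the TP,
        # intersection results come back via the texture data return path.
        stack.extend(intersect_node(ray, node))
    return hits
```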
-
623.
Publication Number: US20190196978A1
Publication Date: 2019-06-27
Application Number: US15852442
Application Date: 2017-12-22
Applicant: Advanced Micro Devices, Inc.
Inventor: Arkaprava Basu , Eric Van Tassell , Mark Oskin , Guilherme Cox , Gabriel Loh
IPC: G06F12/1009 , G06F12/1027 , G06F9/38 , G06F13/40 , G06F13/42 , G06F9/48
CPC classification number: G06F12/1009 , G06F9/3887 , G06F9/4843 , G06F12/1027 , G06F13/4022 , G06F13/4282 , G06F2212/65 , G06F2212/68 , G06F2213/0026
Abstract: A data processing system includes a memory and an input/output memory management unit that is connected to the memory. The input/output memory management unit is adapted to receive batches of address translation requests. The input/output memory management unit has instructions that identify, from among the batches of address translation requests, a later batch having a lower number of memory access requests than an earlier batch, and selectively schedules access to a page table walker for each address translation request of a batch.
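A loose software model of the scheduling idea, with the batch representation and the walker interface invented for illustration: given queued batches of translation requests, a smaller, later batch may be serviced ahead of a larger, earlier one, and each request in the chosen batch is handed to a page table walker.

```python
# Hypothetical sketch of batch-aware translation scheduling; the walker
# interface and batch representation are illustrative only.
from collections import deque

def schedule_batches(batches, walk):
    """batches: list of lists of translation requests (oldest first).
    walk: callable standing in for the page table walker."""
    pending = deque(batches)
    while pending:
        earliest = pending[0]
        # Prefer a later batch if it carries fewer requests than the earliest.
        smaller = min(pending, key=len)
        batch = smaller if len(smaller) < len(earliest) else earliest
        pending.remove(batch)
        for request in batch:           # one page table walk per request
            walk(request)

# Example: the 2-request batch is walked before the 5-request batch.
schedule_batches([[f"va{i}" for i in range(5)], ["va5", "va6"]], print)
```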
-
624.
Publication Number: US10331196B2
Publication Date: 2019-06-25
Application Number: US15626847
Application Date: 2017-06-19
Applicant: Advanced Micro Devices, Inc.
Inventor: Russell Schreiber
IPC: G06F1/324 , H03K5/159 , G06F1/3296 , G06F1/3287
Abstract: A system and method for providing efficient clock gating capability for functional units are described. A functional unit uses a clock gating circuit for power management. A setup time of a single device propagation delay is provided for a received enable signal. When each of a clock signal, the enable signal and a delayed clock signal is asserted, an evaluate node of the clock gating circuit is discharged. When each of the clock signal and a second clock signal is asserted and the enable signal is negated, the evaluate node is left floating for a duration equal to the hold time. Afterward, the devices in a delayed onset keeper are turned on and the evaluate node has a path to the power supply. When the clock signal is negated, the evaluate node is precharged.
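Purely as a behavioral illustration of the evaluate-node conditions described above (this is a coarse digital approximation of an analog circuit, with signal names assumed), the following sketch returns the node state for one combination of clock, delayed clock, and enable signals.

```python
# Behavioral approximation of the evaluate node described in the abstract;
# signal names and the three-state model are illustrative, not circuit-accurate.

def evaluate_node_state(clk, clk_delayed, enable):
    """Return 'discharged', 'floating', or 'precharged' for one clock phase."""
    if not clk:
        return "precharged"       # clock negated: precharge phase
    if enable and clk_delayed:
        return "discharged"       # clock, enable, and delayed clock asserted
    if not enable and clk_delayed:
        return "floating"         # floating for the hold-time window, then
                                  # held high once the delayed onset keeper turns on
    return "precharged"           # delayed clock not yet asserted: node keeps its level
```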
-
625.
Publication Number: US20190188557A1
Publication Date: 2019-06-20
Application Number: US15849617
Application Date: 2017-12-20
Applicant: Advanced Micro Devices, Inc.
Inventor: Daniel I. Lowell , Sergey Voronov , Mayank Daga
Abstract: Methods, devices, systems, and instructions for adaptive quantization in an artificial neural network (ANN) calculate a distribution of ANN information; select a quantization function from a set of quantization functions based on the distribution; apply the quantization function to the ANN information to generate quantized ANN information; load the quantized ANN information into the ANN; and generate an output based on the quantized ANN information. Some examples recalculate the distribution of ANN information and reselect the quantization function from the set of quantization functions based on the resampled distribution if the output does not sufficiently correlate with a known correct output. In some examples, the ANN information includes a set of training data. In some examples, the ANN information includes a plurality of link weights.
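To make the flow concrete, here is a hedged sketch of the adaptive-quantization loop: compute a distribution over the ANN information, pick a quantization function from a candidate set based on that distribution, quantize, and reselect if the output correlates poorly with a known correct output. The candidate functions, the distribution statistic, and the correlation threshold are all assumptions, not the patent's specifics.

```python
# Illustrative adaptive quantization loop; quantizer choices, the statistic
# used to characterize the distribution, and the threshold are hypothetical.
import numpy as np

def quantize_symmetric(x, bits=8):
    """Uniform symmetric quantizer."""
    scale = np.max(np.abs(x)) / (2 ** (bits - 1) - 1) or 1.0
    return np.round(x / scale) * scale

def quantize_log(x, bits=8):
    """Power-of-two (logarithmic) quantizer."""
    sign = np.sign(x)
    mag = np.abs(x) + 1e-12
    return sign * 2.0 ** np.round(np.log2(mag))

QUANTIZERS = [quantize_symmetric, quantize_log]

def select_quantizer(values):
    # Heavy-tailed distributions (large peak-to-std ratio) get the log quantizer.
    tail_ratio = np.max(np.abs(values)) / (np.std(values) + 1e-12)
    return quantize_log if tail_ratio > 6.0 else quantize_symmetric

def adaptive_quantize(weights, run_network, reference_output, min_corr=0.95):
    """weights: list of arrays (link weights); run_network: callable that
    produces an output from quantized weights."""
    values = np.concatenate([w.ravel() for w in weights])
    quantizer = select_quantizer(values)
    for _ in range(len(QUANTIZERS)):
        q_weights = [quantizer(w) for w in weights]
        output = run_network(q_weights)
        corr = np.corrcoef(output.ravel(), reference_output.ravel())[0, 1]
        if corr >= min_corr:
            return q_weights            # quantized weights correlate well enough
        # Otherwise recharacterize and try the next candidate quantizer.
        quantizer = QUANTIZERS[(QUANTIZERS.index(quantizer) + 1) % len(QUANTIZERS)]
    return q_weights
```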
-
626.
Publication Number: US10320695B2
Publication Date: 2019-06-11
Application Number: US15165953
Application Date: 2016-05-26
Applicant: Advanced Micro Devices, Inc.
Inventor: Steven K. Reinhardt , Marc S. Orr , Bradford M. Beckmann , Shuai Che , David A. Wood
IPC: G06F15/173 , H04L12/805 , H04L12/811
Abstract: A system and method for efficient network traffic management for highly data-parallel computing are described. A processing node includes one or more processors capable of generating network messages. A network interface is used to receive and send network messages across a network. The processing node reduces at least one of the number or the storage size of the original network messages, combining them into one or more new network messages. The new network messages are sent to the network interface for transmission across the network.
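As a rough sketch of the reduction step described above (the coalescing policy and message format are assumptions, not the patent's), small messages destined for the same node are combined into fewer, larger messages before being handed to the network interface.

```python
# Hypothetical message-coalescing sketch; message format and the size limit
# are illustrative only.
from collections import defaultdict

MAX_PAYLOAD = 4096   # assumed per-message payload budget in bytes

def reduce_messages(messages):
    """messages: list of (destination, payload_bytes) produced by the processors.
    Returns fewer, larger messages grouped by destination."""
    by_dest = defaultdict(list)
    for dest, payload in messages:
        by_dest[dest].append(payload)

    reduced = []
    for dest, payloads in by_dest.items():
        combined = b""
        for payload in payloads:
            if combined and len(combined) + len(payload) > MAX_PAYLOAD:
                reduced.append((dest, combined))   # flush a full message
                combined = b""
            combined += payload
        if combined:
            reduced.append((dest, combined))
    return reduced   # handed to the network interface for transmission
```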
-
627.
Publication Number: US10318363B2
Publication Date: 2019-06-11
Application Number: US15338172
Application Date: 2016-10-28
Applicant: Advanced Micro Devices, Inc.
Inventor: Greg Sadowski , Steven E. Raasch , Shomit N. Das , Wayne Burleson
Abstract: A system and method for managing operating parameters within a system for optimal power and reliability are described. A device includes a functional unit and a corresponding reliability evaluator. The functional unit provides reliability information to one or more reliability monitors, which translate the information into reliability values. The reliability evaluator determines an overall reliability level for the system based on the reliability values. The reliability monitor compares the actual usage values with the expected usage values. When the system has maintained a relatively high level of reliability for a given time interval, the reliability evaluator sends an indication to update operating parameters to reduce reliability of the system, which also reduces power consumption for the system.
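A toy model of the feedback loop in this abstract, with the reliability metric, threshold, and parameter adjustment invented for illustration: reliability values from the monitors are combined into an overall level, and if that level has stayed high over an interval, operating parameters are relaxed to trade some reliability margin for lower power.

```python
# Illustrative reliability-vs-power feedback loop; the metric, threshold,
# and the voltage adjustment are hypothetical.

HIGH_RELIABILITY = 0.98     # assumed threshold for "relatively high" reliability

def overall_reliability(reliability_values):
    """Combine per-monitor reliability values into one system-level number."""
    return min(reliability_values)   # system is as reliable as its weakest unit

def update_operating_parameters(history, params):
    """history: overall reliability samples over the evaluation interval.
    params: dict holding the current operating point (e.g. supply voltage)."""
    if history and min(history) >= HIGH_RELIABILITY:
        # Margin to spare: shave the voltage guardband to save power,
        # accepting a lower reliability level.
        params["voltage"] -= 0.01
    return params

params = {"voltage": 0.90}
print(update_operating_parameters([0.99, 0.985, 0.992], params))
```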
-
628.
Publication Number: US10312221B1
Publication Date: 2019-06-04
Application Number: US15844575
Application Date: 2017-12-17
Inventor: Rahul Agarwal , Kaushik Mysore Srinivasa Setty , Milind S. Bhagavat , Brett P. Wilkerson
IPC: H01L23/52 , H01L25/065 , H01L23/538 , H01L23/498 , H01L23/00
CPC classification number: H01L25/0657 , H01L23/49811 , H01L23/5384 , H01L24/10
Abstract: Various semiconductor chip devices with stacked chips are disclosed. In one aspect, a semiconductor chip device includes a stack of plural semiconductor chips. Each two adjacent semiconductor chips of the plural semiconductor chips are electrically connected by plural interconnects and physically connected by a first insulating bonding layer. A first stack of dummy chips is positioned opposite a first side of the stack of semiconductor chips and separated from the plural semiconductor chips by a first gap. Each two adjacent of the first dummy chips are physically connected by a second insulating bonding layer. A second stack of dummy chips is positioned opposite a second side of the stack of semiconductor chips and separated from the plural semiconductor chips by a second gap. Each two adjacent of the second dummy chips are physically connected by a third insulating bonding layer. The first, second and third insulating bonding layers each include a first insulating layer and a second insulating layer bonded to the first insulating layer. An insulating layer is in the first gap and another insulating layer is in the second gap.
-
629.
Publication Number: US10311236B2
Publication Date: 2019-06-04
Application Number: US15358640
Application Date: 2016-11-22
Applicant: Advanced Micro Devices, Inc. , ATI Technologies ULC
Inventor: Kathirkamanathan Nadarajah , Oswin Housty , Sergey Blotsky , Tan Peng , Hary Devapriyan Mahesan
IPC: G06F9/00 , G06F15/177 , G06F21/57 , G06F9/4401
Abstract: Systems, apparatuses, and methods for performing secure system memory training are disclosed. In one embodiment, a system includes a boot media, a security processor with a first memory, a system memory, and one or more main processors coupled to the system memory. The security processor is configured to retrieve first data from the boot media and store and authenticate the first data in the first memory. The first data includes a first set of instructions which are executable to retrieve, from the boot media, a configuration block with system memory training parameters. The security processor also executes a second set of instructions to initialize and train the system memory using the training parameters. After training the system memory, the security processor retrieves, authenticates, and stores boot code in the system memory and releases the one or more main processors from reset to execute the boot code.
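The boot flow described here can be summarized as a sequence of steps carried out by the security processor; the sketch below lays them out in order, with the boot-media access, authentication, and training calls as stand-ins rather than the patent's actual interfaces.

```python
# High-level stand-in for the secure memory-training boot flow; function names
# and the authentication scheme are assumptions for illustration.

def authenticate(image):
    """Stand-in for signature verification against a stored public key."""
    return image is not None

def train_memory(system_memory, training_parameters):
    """Stand-in for DRAM initialization and training."""
    system_memory.configure(training_parameters)

def secure_boot(boot_media, security_sram, system_memory, main_cpus):
    # 1. Pull the first-stage code into the security processor's own memory
    #    and authenticate it before executing anything from it.
    first_stage = boot_media.read("first_stage")
    security_sram.store(first_stage)
    assert authenticate(first_stage), "first-stage image failed authentication"

    # 2. The first-stage instructions fetch the configuration block holding
    #    the system memory training parameters.
    config_block = boot_media.read("memory_config_block")

    # 3. Initialize and train system memory using those parameters.
    train_memory(system_memory, config_block["training_parameters"])

    # 4. Retrieve, store, and authenticate the boot code in the now-usable
    #    system memory.
    boot_code = boot_media.read("boot_code")
    system_memory.store(boot_code)
    assert authenticate(boot_code), "boot code failed authentication"

    # 5. Only then release the main processors from reset to run the boot code.
    for cpu in main_cpus:
        cpu.release_from_reset()
```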
-
630.
Publication Number: US10311191B2
Publication Date: 2019-06-04
Application Number: US15416731
Application Date: 2017-01-26
Applicant: Advanced Micro Devices, Inc.
Inventor: John J. Wuu , Patrick J. Shyvers , Ryan Alan Selby
Abstract: A system and method for floorplanning a memory. A computing system includes a processing unit which generates memory access requests and a memory. The size of each memory line in the memory is M bits. A memory macro block includes at least a primary array and a sidecar array. The primary array stores a first portion of a memory line and the sidecar array stores a second, smaller portion of the memory line being accessed. The primary array and the sidecar array have different heights. The height of the sidecar array is based on a notch height in at least one corner of the memory macro block. The notch creates space for a reserved area on the die. The notches result in cross-shaped, T-shaped, and/or L-shaped memory macro blocks.
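To illustrate the split described in this abstract, the sketch below divides an M-bit memory line between a primary array and a smaller sidecar array and derives the sidecar height from a notch height. The bit counts and row sizing are made-up numbers, not values from the patent.

```python
# Hypothetical numbers illustrating a primary/sidecar split of a memory line.

LINE_BITS = 576          # M bits per memory line (assumed)
PRIMARY_BITS = 512       # first, larger portion kept in the primary array
SIDECAR_BITS = LINE_BITS - PRIMARY_BITS   # second, smaller portion

MACRO_HEIGHT_ROWS = 256  # rows in the primary array (assumed)
NOTCH_HEIGHT_ROWS = 64   # rows given up to the reserved area at the corner

# The sidecar array is shortened by the notch so the reserved area fits,
# giving the macro its cross, T, or L shape.
SIDECAR_HEIGHT_ROWS = MACRO_HEIGHT_ROWS - NOTCH_HEIGHT_ROWS

def split_line(line_bits):
    """Split one memory line's bits between the primary and sidecar arrays."""
    assert len(line_bits) == LINE_BITS
    return line_bits[:PRIMARY_BITS], line_bits[PRIMARY_BITS:]

primary, sidecar = split_line([0] * LINE_BITS)
print(len(primary), len(sidecar), SIDECAR_HEIGHT_ROWS)   # 512 64 192
```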
-