Abstract:
A network interface controller (NIC) capable of facilitating efficient memory address translation is provided. The NIC can be equipped with a host interface, a cache, and an address translation unit (ATU). During operation, the ATU can determine an operating mode. The operating mode can indicate whether the ATU is to perform a memory address translation at the NIC. The ATU can then determine whether a memory address indicated in a memory access request is available in the cache. If the memory address is not available in the cache, the ATU can perform an operation on the memory address based on the operating mode.
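A minimal C sketch of the cache-then-translate flow described above, assuming hypothetical types and helpers (atu_cache_lookup, translate_on_nic, forward_translation_to_host) that do not appear in the abstract:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical operating modes: translate locally on the NIC, or defer to the host. */
typedef enum { ATU_MODE_NIC_TRANSLATE, ATU_MODE_HOST_TRANSLATE } atu_mode_t;

/* Assumed helpers; their names and signatures are illustrative only. */
bool     atu_cache_lookup(uint64_t va, uint64_t *pa_out);
uint64_t translate_on_nic(uint64_t va);
uint64_t forward_translation_to_host(uint64_t va);

/* Handle one memory access request: check the cache first, then act per the operating mode. */
uint64_t atu_handle_request(uint64_t va, atu_mode_t mode)
{
    uint64_t pa;
    if (atu_cache_lookup(va, &pa))           /* hit: translation already cached on the NIC */
        return pa;

    if (mode == ATU_MODE_NIC_TRANSLATE)      /* miss: perform the translation at the NIC */
        return translate_on_nic(va);

    return forward_translation_to_host(va);  /* miss: let the host perform the translation */
}
```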
Abstract:
A switch capable of on-the-fly reduction in a network is provided. The switch is equipped with a reduction engine that can be dynamically configured to perform on-the-fly reduction. As a result, the network can facilitate an efficient and scalable environment for high performance computing.
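A minimal sketch of what "on-the-fly" reduction could look like, assuming a configurable operation and an accumulator; the field names and operations are illustrative, not from the abstract:

```c
#include <stdint.h>

/* Hypothetical reduction operations the engine can be configured with. */
typedef enum { RED_SUM, RED_MIN, RED_MAX } red_op_t;

typedef struct {
    red_op_t op;        /* currently configured operation */
    int64_t  acc;       /* running result, updated as each contribution arrives */
    int      expected;  /* contributions still outstanding */
} red_engine_t;

/* Fold one arriving contribution into the running result ("on the fly"),
 * rather than buffering all contributions and reducing at the end. */
void red_contribute(red_engine_t *e, int64_t value)
{
    switch (e->op) {
    case RED_SUM: e->acc += value; break;
    case RED_MIN: if (value < e->acc) e->acc = value; break;
    case RED_MAX: if (value > e->acc) e->acc = value; break;
    }
    e->expected--;  /* when this reaches zero, the result can be forwarded upstream */
}
```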
Abstract:
A network interface controller (NIC) capable of efficient packet forwarding is provided. The NIC can be equipped with a host interface, a packet generation logic block, and a forwarding logic block. During operation, the packet generation logic block can obtain, via the host interface, a message from the host device that is destined for a remote device. The packet generation logic block may generate a plurality of packets for the remote device from the message. The forwarding logic block can then send a first subset of packets of the plurality of packets based on ordered delivery. If a first condition is met, the forwarding logic block can send a second subset of packets of the plurality of packets based on unordered delivery. Furthermore, if a second condition is met, the forwarding logic block can send a third subset of packets of the plurality of packets based on ordered delivery.
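A minimal sketch of the three-phase forwarding described above. The helper names and the specific conditions (e.g., initial packets acknowledged, approaching the end of the message) are assumptions, not taken from the abstract:

```c
#include <stdbool.h>
#include <stddef.h>

typedef struct { size_t seq; /* ... payload ... */ } packet_t;

/* Assumed helpers; names and conditions are illustrative only. */
void send_ordered(const packet_t *p);
void send_unordered(const packet_t *p);
bool first_condition_met(size_t sent, size_t total);   /* e.g., initial packets acknowledged */
bool second_condition_met(size_t sent, size_t total);  /* e.g., approaching the final packets */

/* Forward the packets generated from one message in three phases:
 * ordered, then unordered once the first condition holds, then ordered
 * again once the second condition holds. */
void forward_message(const packet_t *pkts, size_t count)
{
    for (size_t i = 0; i < count; i++) {
        if (!first_condition_met(i, count))
            send_ordered(&pkts[i]);        /* first subset: ordered delivery */
        else if (!second_condition_met(i, count))
            send_unordered(&pkts[i]);      /* second subset: unordered delivery */
        else
            send_ordered(&pkts[i]);        /* third subset: ordered delivery */
    }
}
```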
Abstract:
A network interface controller (NIC) capable of hybrid message matching is provided. The NIC can be equipped with a host interface, a hardware endpoint, and an endpoint management logic block. The host interface can couple the NIC to a host device. The hardware endpoint can facilitate a point of communication for an application running on the host device. The endpoint management logic block can maintain a list for storing a message associated with an endpoint represented by the hardware endpoint. The endpoint management logic block can then determine whether the utilization of the list is higher than a threshold. If the utilization is higher than the threshold, the endpoint management logic block can set a state of the endpoint to indicate that the endpoint is software managed. The NIC can thus transfer control of the endpoint from the hardware endpoint to a software process of the host device.
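A minimal sketch of the utilization check and hardware-to-software handoff, assuming a fractional threshold and a hypothetical notification hook (notify_host_software) not named in the abstract:

```c
#include <stddef.h>

typedef enum { EP_HARDWARE_MANAGED, EP_SOFTWARE_MANAGED } ep_state_t;

typedef struct {
    ep_state_t state;
    size_t     list_used;      /* entries currently stored in the message list */
    size_t     list_capacity;  /* total entries the hardware list can hold */
} endpoint_t;

/* Assumed notification hook; the name is illustrative only. */
void notify_host_software(endpoint_t *ep);

/* If list utilization exceeds the threshold (expressed as a fraction of capacity),
 * mark the endpoint software managed and hand control to the host process. */
void check_endpoint_utilization(endpoint_t *ep, double threshold)
{
    double utilization = (double)ep->list_used / (double)ep->list_capacity;
    if (utilization > threshold && ep->state == EP_HARDWARE_MANAGED) {
        ep->state = EP_SOFTWARE_MANAGED;
        notify_host_software(ep);  /* software on the host takes over message matching */
    }
}
```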
Abstract:
A network interface controller (NIC) capable of on-demand paging is provided. The NIC can be equipped with a host interface, an operation logic block, and an address logic block. The host interface can couple the NIC to a host device. The operation logic block can obtain, from a remote device, a request for an operation based on a virtual memory address. The address logic block can obtain, from the operation logic block, a request for an address translation for the virtual memory address and issue an address translation request to the host device via the host interface. If the address translation is unsuccessful, the address logic block can send a page request to a processor of the host device via the host interface. The address logic block can then determine that a page has been allocated in response to the page request and reissue the address translation request.
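A minimal sketch of the translate/page-fault/retry sequence described above, with assumed host-interface helpers (host_translate, host_page_request, wait_for_page_allocation) that are illustrative only:

```c
#include <stdbool.h>
#include <stdint.h>

/* Assumed host-interface helpers; names and signatures are illustrative only. */
bool host_translate(uint64_t va, uint64_t *pa_out);  /* returns false if no mapping exists */
void host_page_request(uint64_t va);                 /* asks the host CPU to allocate a page */
void wait_for_page_allocation(uint64_t va);          /* returns once the host reports success */

/* Resolve a virtual address for an incoming remote operation, paging on demand. */
uint64_t resolve_remote_address(uint64_t va)
{
    uint64_t pa;
    if (host_translate(va, &pa))
        return pa;                   /* translation succeeded on the first attempt */

    host_page_request(va);           /* translation failed: request a page from the host */
    wait_for_page_allocation(va);    /* host allocates and maps the page */
    host_translate(va, &pa);         /* reissue the address translation request */
    return pa;
}
```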
Abstract:
A switch equipped with a self-managing reduction engine is provided. During operation, the reduction engine can use a timeout mechanism to manage itself in different latency-induced or error scenarios. As a result, the network can facilitate an efficient and scalable environment for high performance computing.
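A minimal sketch of timeout-based self-management, assuming per-reduction state with a deadline; the structure fields and helper names are illustrative, not from the abstract:

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint64_t deadline;     /* time by which all contributions must have arrived */
    int      outstanding;  /* contributions not yet received */
    bool     active;       /* whether this reduction slot is in use */
} reduction_slot_t;

/* Assumed helpers; names are illustrative only. */
uint64_t now(void);
void     report_timeout_and_release(reduction_slot_t *slot);

/* Periodic self-management: if a reduction has stalled past its deadline
 * (late contributions or an error elsewhere in the tree), abandon it and
 * free the slot so one slow participant cannot wedge the engine. */
void reduction_engine_poll(reduction_slot_t *slots, int nslots)
{
    for (int i = 0; i < nslots; i++) {
        if (slots[i].active && slots[i].outstanding > 0 && now() > slots[i].deadline)
            report_timeout_and_release(&slots[i]);
    }
}
```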
Abstract:
A switch is provided, which can receive a data communication at an edge of a network. The network may be made up of a plurality of switches. The switch may generate a flow channel based upon an identified source and destination for the data communication. The data communication can be routed across the plurality of switches based on minimizing a number of hops between a subset of the plurality of switches and in accordance with the flow channel.
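A minimal sketch of flow-channel creation at the edge and hop-minimizing next-hop selection. The identifiers and helpers (allocate_channel_id, hop_count) are assumptions for illustration:

```c
#include <stdint.h>

/* A flow channel identifies per-flow state created at the network edge. */
typedef struct {
    uint32_t src;  /* identified source of the data communication */
    uint32_t dst;  /* identified destination */
    uint32_t id;   /* channel identifier associated with subsequent packets */
} flow_channel_t;

/* Assumed helpers; names are illustrative only. */
uint32_t allocate_channel_id(uint32_t src, uint32_t dst);
int      hop_count(uint32_t from_switch, uint32_t to_switch);

/* Create a flow channel at the edge switch for a new (source, destination) pair. */
flow_channel_t create_flow_channel(uint32_t src, uint32_t dst)
{
    flow_channel_t fc = { src, dst, allocate_channel_id(src, dst) };
    return fc;
}

/* Choose the neighboring switch that minimizes the remaining hop count to the destination. */
uint32_t select_next_hop(uint32_t dst_switch, const uint32_t *neighbors, int n)
{
    uint32_t best = neighbors[0];
    for (int i = 1; i < n; i++)
        if (hop_count(neighbors[i], dst_switch) < hop_count(best, dst_switch))
            best = neighbors[i];
    return best;
}
```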
Abstract:
Systems and methods for cooling computer components in large computer systems are disclosed herein. In one embodiment, a computer system configured in accordance with aspects of the invention can include a computer module positioned in a chassis, and an air mover configured to move air through the chassis and past the computer module. The computer system can further include a pressure sensor operably coupled to the air mover. If the pressure sensor determines that the difference between a first air pressure inside the chassis and a second air pressure outside the chassis is less than a preselected pressure, the air mover can increase the flow of air through the chassis and past the computer module.
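A minimal sketch of the described control rule for one iteration, with assumed sensor and actuator interfaces (the function names are illustrative only):

```c
/* Assumed sensor and actuator interfaces; names are illustrative only. */
double read_pressure_inside_chassis(void);
double read_pressure_outside_chassis(void);
void   increase_air_mover_speed(void);

/* One iteration of the control rule: if the difference between the inside and
 * outside air pressures falls below the preselected value, drive more air
 * through the chassis and past the computer module. */
void cooling_control_step(double preselected_pressure)
{
    double diff = read_pressure_inside_chassis() - read_pressure_outside_chassis();
    if (diff < preselected_pressure)
        increase_air_mover_speed();
}
```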
Abstract:
A memory controller and method provide a read-refresh (also called "distributed-refresh") mode of operation, in which every row of memory is read within the refresh-rate requirements of the memory parts. Data from different columns within the rows are read on subsequent read-refresh cycles until every column address of every row has been read, and any errors found are scrubbed. The scrubbing function is thus integrated into the read-refresh operation rather than being an independent operation. For scrubbing, an atomic read-correct-write operation is scheduled. A variable-priority, variable-timing refresh interval is described. An integrated card self-tester and/or card reciprocal-tester is described. A memory bit-swapping-within-address-range circuit, and a method and apparatus for bit swapping on the fly and testing, are also described.
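A minimal sketch of one read-refresh pass with integrated scrubbing. The primitives (ecc_read, atomic_read_correct_write) and the row/column counts are assumptions for illustration, not taken from the abstract:

```c
#include <stdbool.h>
#include <stdint.h>

/* Assumed memory-controller primitives; names are illustrative only. */
uint64_t ecc_read(uint32_t row, uint32_t col, bool *correctable_error);
void     atomic_read_correct_write(uint32_t row, uint32_t col);

#define NUM_ROWS 8192
#define NUM_COLS 1024

/* One read-refresh pass: every row is read (which also refreshes it) at a single
 * column address; successive passes advance the column until every location has
 * been covered. Scrubbing is folded into the same pass: a correctable error
 * triggers an atomic read-correct-write instead of a separate scrub operation. */
void read_refresh_pass(uint32_t col)
{
    for (uint32_t row = 0; row < NUM_ROWS; row++) {
        bool err = false;
        (void)ecc_read(row, col, &err);           /* the read refreshes the row */
        if (err)
            atomic_read_correct_write(row, col);  /* scrub the detected error in place */
    }
}
```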