Abstract:
Techniques are described herein that can be used to control which packets or other data are able to be processed by, or otherwise utilize, logic of a computing device. For example, a signature may be associated with a packet or other data received from a network. The signature and the packet or other data may be transferred to the computing device. Before the computing device decides whether to allow logic, such as hardware or software, to use, process, or act upon the packet or other data, the computing device may inspect the signature to determine whether it permits the packet or other data to be used, processed, or acted upon.
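As a rough illustration of the gating idea in this abstract, the C sketch below checks a signature's permissions before any downstream logic is allowed to touch the received data. The structure layout, the `PERM_PROCESS` flag, and the `sig_permits`/`handle_rx` helpers are hypothetical names invented for the example, not part of the disclosure.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical signature associated with data received from the network. */
struct pkt_signature {
    uint32_t source_id;     /* entity that attached the signature */
    uint32_t permissions;   /* bitmask of operations it permits   */
};

#define PERM_PROCESS (1u << 0)

/* Policy check: does the signature permit the requested operation? */
static bool sig_permits(const struct pkt_signature *sig, uint32_t op)
{
    return sig != NULL && (sig->permissions & op) != 0;
}

/* Downstream logic (hardware or software) that would act on the data. */
static void process_packet(const uint8_t *data, size_t len)
{
    (void)data;
    (void)len;
}

/* The device inspects the signature before letting its logic use the data. */
static bool handle_rx(const struct pkt_signature *sig,
                      const uint8_t *data, size_t len)
{
    if (!sig_permits(sig, PERM_PROCESS))
        return false;               /* signature does not permit processing */
    process_packet(data, len);
    return true;
}

int main(void)
{
    struct pkt_signature sig = { .source_id = 7, .permissions = PERM_PROCESS };
    uint8_t frame[64] = { 0 };
    return handle_rx(&sig, frame, sizeof(frame)) ? 0 : 1;
}
```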
Abstract:
A system and method of providing security mechanisms for securing traffic communicated from a server system to a client system, independent of the state of the client system. The server system determines whether the client system has entered an operational state. When the client system is operational, key exchange processes are initiated between the two systems, the results of which are the parameters used to secure traffic communication between the two systems. The results are stored in the client system. The stored results are inhibited from being updated until the server system successfully executes another complete set of key exchange processes, at which point they are replaced with the results of that execution. The traffic communication is thus secured based on whatever results are stored in the client system.
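A minimal C sketch of the commit-on-success behavior described above, assuming a hypothetical `sec_params` record and a `run_key_exchange` placeholder; a real implementation would perform the actual server-driven exchange.

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Parameters produced by a completed key exchange (fields illustrative). */
struct sec_params {
    uint8_t  session_key[32];
    uint32_t cipher_suite;
    bool     valid;
};

/* Stored on the client; traffic is always secured with whatever is here. */
static struct sec_params stored;

/* Stand-in for the server-driven key exchange; returns true only if every
 * step of the exchange completed. */
static bool run_key_exchange(struct sec_params *out)
{
    (void)out;
    return false;   /* placeholder: a real exchange would fill *out */
}

/* Re-keying: the stored results are only replaced when another set of key
 * exchange processes executes completely; a partial or failed exchange
 * leaves the previous parameters in place. */
void rekey(void)
{
    struct sec_params fresh;
    memset(&fresh, 0, sizeof(fresh));
    if (run_key_exchange(&fresh))
        memcpy(&stored, &fresh, sizeof(stored));   /* commit on success only */
}
```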
Abstract:
Compounds of formula I are useful in treating diseases or conditions prevented by or ameliorated with histamine-3 receptor ligands. Also disclosed are histamine-3 receptor ligand compositions and methods of antagonizing or agonizing histamine-3 receptors.
Abstract:
Methods for performing efficient receive interrupt signaling and associated apparatus, computing platform, software, and firmware. Receive (RX) queues in which descriptors associated with packets are enqueued are implemented in host memory and logically partitioned into pools, with each RX queue pool associated with a respective interrupt vector. Receive event queues (REQs) associated with respective RX queue pools and interrupt vectors are also implemented in host memory. Event generation is selectively enabled for some RX queues, while event generation is masked for others. In response to event causes for RX queues that are event generation-enabled, associated events are generated and enqueued in the REQs and interrupts on associated interrupt vectors are asserted. The events are serviced by accessing the events in the REQs, which identify the RX queue for the event and a next activity location at which a next descriptor to be processed is located. After asserting an interrupt, an RX queue may be auto-masked to prevent generation of additional events when new descriptors are enqueued in the RX queue.
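The following C sketch illustrates, under assumed names and sizes (`rxq_pool`, `rx_event`, a 64-entry REQ, `QUEUES_PER_POOL`), how an event might be generated and the queue auto-masked when a descriptor is enqueued on an event-enabled RX queue; it is not the actual driver or firmware interface.

```c
#include <stdbool.h>
#include <stdint.h>

#define QUEUES_PER_POOL 4
#define REQ_ENTRIES     64

/* Event placed on a receive event queue (REQ): identifies the RX queue and
 * the next activity location holding the next descriptor to process. */
struct rx_event {
    uint16_t rxq_id;
    uint16_t next_activity;
};

struct rx_queue {
    uint16_t id;
    uint16_t tail;            /* where the next descriptor lands */
    bool     event_enabled;   /* masked queues generate no events */
};

struct rxq_pool {
    struct rx_queue q[QUEUES_PER_POOL];
    struct rx_event req[REQ_ENTRIES];   /* event queue for this pool     */
    unsigned        req_head;
    int             irq_vector;         /* interrupt vector for the pool */
};

/* Stand-in for asserting an interrupt on the pool's vector. */
static void assert_irq(int vector) { (void)vector; }

/* Called when a new descriptor is enqueued on an RX queue in the pool. */
void on_descriptor_enqueued(struct rxq_pool *p, struct rx_queue *q)
{
    q->tail++;
    if (!q->event_enabled)
        return;                                    /* event generation masked */

    struct rx_event *ev  = &p->req[p->req_head++ % REQ_ENTRIES];
    ev->rxq_id           = q->id;
    ev->next_activity    = (uint16_t)(q->tail - 1);  /* descriptor to service */
    assert_irq(p->irq_vector);
    q->event_enabled     = false;                  /* auto-mask until serviced */
}

/* Servicing: events in the REQ tell the handler which RX queue to process
 * and where its next descriptor sits; events are re-enabled afterward. */
void service_pool(struct rxq_pool *p, unsigned *req_tail)
{
    while (*req_tail != p->req_head) {
        struct rx_event *ev = &p->req[(*req_tail)++ % REQ_ENTRIES];
        p->q[ev->rxq_id % QUEUES_PER_POOL].event_enabled = true;   /* unmask */
    }
}
```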
Abstract:
An embodiment may include at least one server processor that may control, at least in part, data plane and control plane processing of server switch circuitry. The at least one processor may include at least one cache memory that is capable of being involved in at least one data transfer that involves at least one component of the server. The at least one data transfer may be carried out in a manner that bypasses involvement of server system memory. The switch circuitry may be communicatively coupled to the at least one processor and to at least one node via communication links. The at least one processor may select, at least in part, at least one communication protocol to be used by the links. The switch circuitry may forward, at least in part, via at least one of the links, at least one received packet. Many modifications are possible.
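As a very loose C sketch of the protocol-selection aspect only, the fragment below has the processor assign a protocol to each communication link; every name and the selection policy are invented for illustration.

```c
/* Illustrative protocol choices for the communication links between the
 * switch circuitry, the server processor, and attached nodes. */
enum link_proto { PROTO_ETHERNET, PROTO_PCIE };

struct comm_link {
    int             id;
    int             to_node;   /* nonzero if the link attaches a node */
    enum link_proto proto;
};

/* The server processor selects, at least in part, the protocol each link
 * uses; the policy below is purely illustrative. */
void select_protocols(struct comm_link *links, int n)
{
    for (int i = 0; i < n; i++)
        links[i].proto = links[i].to_node ? PROTO_ETHERNET : PROTO_PCIE;
}
```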
Abstract:
An embodiment may include circuitry that may be capable of selecting, from network devices, at least one network device to which at least one packet is to be transmitted. The network devices may be associated, at least in part, with each other in at least one link aggregation. The circuitry may select the at least one network device based at least in part upon a relative degree of affinity that the at least one network device may have with respect to at least one central processing unit (CPU) socket that may be associated, at least in part, with at least one flow to which the at least one packet may belong. The relative degree of affinity may be relative to respective degrees of affinity that one or more others of the network devices may have with respect to the at least one CPU socket. Many modifications are possible.
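A small C sketch of the affinity-based selection, assuming an invented `netdev` record holding a per-socket affinity score and two CPU sockets; the scoring and field names are illustrative, not from the disclosure.

```c
#define NUM_NETDEVS 4
#define NUM_SOCKETS 2

/* Network devices associated in one link aggregation; each carries a degree
 * of affinity (higher = closer, e.g. NUMA locality) to each CPU socket. */
struct netdev {
    int id;
    int affinity[NUM_SOCKETS];
};

/* Select the aggregated device with the greatest affinity to the CPU socket
 * associated with the flow to which the packet belongs. */
int select_netdev(const struct netdev devs[NUM_NETDEVS], int flow_socket)
{
    int best = 0;
    for (int i = 1; i < NUM_NETDEVS; i++)
        if (devs[i].affinity[flow_socket] > devs[best].affinity[flow_socket])
            best = i;
    return devs[best].id;
}
```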
Abstract:
Compounds of formula (I) wherein R1, R2, R3, m, and Y are defined in the specification are TRPA1 antagonists. Compositions comprising such compounds and methods for treating conditions and disorders using such compounds and compositions are also disclosed.
Abstract:
This disclosure describes enhancements to Ethernet for use in higher-performance applications such as storage, HPC, and Ethernet-based fabric interconnects. It provides mechanisms for lossless fabric operation with error detection and retransmission to improve link reliability, frame pre-emption to allow higher-priority traffic to interrupt lower-priority traffic, virtual channel support for deadlock avoidance that extends the class-of-service functionality defined in IEEE 802.1Q, a new header format for efficient forwarding/routing in the fabric interconnect, and a header CRC for reliable cut-through forwarding. The enhancements described herein, when added to standard and/or proprietary Ethernet protocols, broaden the applicability of Ethernet to newer usage models and fabric interconnects that are currently served by alternative fabric technologies such as InfiniBand, Fibre Channel, and other proprietary technologies.
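To make the header-CRC/cut-through idea concrete, here is a C sketch with an invented header layout and a generic CRC-8; the actual header format and CRC defined by the disclosure are not reproduced here.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical fabric header: fields and sizes are illustrative, not the
 * header format defined in the disclosure. */
struct fabric_hdr {
    uint16_t dest_id;      /* forwarding/routing handle within the fabric */
    uint8_t  vc;           /* virtual channel, for deadlock avoidance     */
    uint8_t  priority;     /* class of service / pre-emption priority     */
    uint8_t  hdr_crc;      /* CRC over the preceding header bytes         */
};

/* Simple CRC-8 (polynomial 0x07) over a byte range; illustrative only. */
static uint8_t crc8(const uint8_t *p, size_t len)
{
    uint8_t crc = 0;
    while (len--) {
        crc ^= *p++;
        for (int i = 0; i < 8; i++)
            crc = (crc & 0x80) ? (uint8_t)((crc << 1) ^ 0x07)
                               : (uint8_t)(crc << 1);
    }
    return crc;
}

/* Cut-through forwarding: the header CRC is checked as soon as the header
 * arrives, so a frame can be routed before its payload (and payload CRC)
 * has been received. */
bool header_ok(const struct fabric_hdr *h)
{
    return crc8((const uint8_t *)h, offsetof(struct fabric_hdr, hdr_crc))
           == h->hdr_crc;
}
```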
Abstract:
Examples are disclosed for establishing a window for a queue structure maintained in a cache for a processing element for a network device. The processing element may be configured to operate in cooperation with an input/output device such as a network interface card. In some of these examples, the window may include portions of the queue structure having identifiers to active allocated buffers maintained in memory for the network device. The active allocated buffers may be configured to maintain or store data received or to be forwarded by the input/output device. For these examples, the window may be adjusted based on information gathered while the identifiers are read from or written to the portions of the queue structure.
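A compact C sketch of the window idea, using invented names (`queue_window`, producer/consumer indices) to show how the window over the active buffer identifiers could be adjusted from information gathered during reads and writes; it is not the disclosed interface.

```c
#include <stdint.h>

#define QUEUE_SIZE 1024u   /* entries in the full queue structure */

/* A window over the portion of the queue that currently holds identifiers
 * for active allocated buffers; only this window is kept warm in the
 * processing element's cache. */
struct queue_window {
    uint32_t start;   /* first queue index covered by the window */
    uint32_t len;     /* number of entries covered               */
};

/* Adjust the window using information gathered while identifiers are
 * written to (producer index) and read from (consumer index) the queue. */
void adjust_window(struct queue_window *w, uint32_t producer, uint32_t consumer)
{
    w->start = consumer % QUEUE_SIZE;
    w->len   = (producer - consumer) % QUEUE_SIZE;   /* active identifiers */
}
```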
Abstract:
A method and apparatus for enhancing/extending a serial point-to-point interconnect architecture, such as Peripheral Component Interconnect Express (PCIe), is described herein. Temporal and locality caching hints and prefetching hints are provided to improve system-wide caching and prefetching. Message codes for atomic operations to arbitrate ownership between system devices/resources are included to allow efficient access to and ownership of shared data. Loose transaction ordering is provided for while maintaining corresponding transaction priority to memory locations to ensure data integrity and efficient memory access. Active power sub-states and the setting thereof are included to allow for more efficient power management. And caching of device-local memory in a host address space, as well as caching of system memory in a device-local memory address space, is provided for to improve bandwidth and latency for memory accesses.
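The C sketch below encodes the hint, atomic-operation, and loose-ordering notions in an invented transaction descriptor; field names and values are illustrative and are not taken from the PCIe specification.

```c
#include <stdint.h>

/* Illustrative encoding of the caching/prefetch hint, atomic-operation, and
 * loose-ordering ideas described above. */
enum cache_hint { HINT_NONE, HINT_TEMPORAL, HINT_NON_TEMPORAL };
enum atomic_op  { ATOMIC_NONE, ATOMIC_FETCH_ADD, ATOMIC_CMP_SWAP };

struct transaction {
    uint64_t        addr;      /* target memory location                     */
    uint32_t        len;
    enum cache_hint hint;      /* temporal/locality caching or prefetch hint */
    enum atomic_op  atomic;    /* message code for an atomic operation       */
    uint8_t         relaxed;   /* nonzero: loose ordering permitted          */
};

/* Build an atomic fetch-add request used to arbitrate ownership of shared
 * data between devices (illustrative). */
struct transaction make_fetch_add(uint64_t addr, uint32_t len)
{
    struct transaction t = { .addr    = addr,
                             .len     = len,
                             .hint    = HINT_TEMPORAL,
                             .atomic  = ATOMIC_FETCH_ADD,
                             .relaxed = 0 };
    return t;
}
```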