Abstract:
Transmission of data over a serial link based on a unidirectional clock signal is provided. A unidirectional clock signal is generated based on a first clock of a master device. The unidirectional clock signal is sent to a slave device that is connected to the serial link. The master device transmits data to the slave device over the serial link based on the first clock. The slave device receives the unidirectional clock signal from the master device. The slave device transmits data over the serial link to the master device based on the unidirectional clock signal.
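The clocking scheme above can be sketched in a few lines: the master derives the unidirectional clock from its own clock, and the slave times its transmissions off the received clock rather than a local one. This is an illustrative model only; all class and method names are assumptions, and the abstract does not specify an implementation.

```python
# Toy model of the unidirectional-clock scheme: the slave has no clock of
# its own and transmits on the clock edge it last received from the master.

class Master:
    def __init__(self):
        self.ticks = 0          # the "first clock" of the master device

    def clock_out(self):
        """Generate one edge of the unidirectional clock signal."""
        self.ticks += 1
        return self.ticks

    def transmit(self, data):
        # The master transmits based on its own first clock.
        return (self.clock_out(), data)

class Slave:
    def __init__(self):
        self.rx_clock = 0       # recovered from the unidirectional clock

    def receive_clock(self, tick):
        self.rx_clock = tick

    def transmit(self, data):
        # The slave transmits based on the received clock, not a local one.
        return (self.rx_clock, data)

master, slave = Master(), Slave()
tick, payload = master.transmit("cmd")
slave.receive_clock(tick)
echo = slave.transmit("resp")
```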
Abstract:
Systems and methods for secure content distribution to playback devices connected to a local network via a residential gateway using secure links are disclosed. One embodiment of the invention includes a content server, a rights management server, a residential gateway configured to communicate with the content server and the rights management server via a network, and a playback device configured to communicate with the residential gateway via a local network. In addition, the residential gateway is configured to receive protected content from the content server, and the playback device is configured to request access to the protected content from the residential gateway. The residential gateway is configured to request access to the protected content from the rights management server, where the request includes information uniquely identifying the playback device. The rights management server is configured to provide access information to the residential gateway when the information uniquely identifying the playback device satisfies at least one predetermined criterion with respect to playback devices associated with the residential gateway. The residential gateway and the playback device are configured to create a secure link between them via the local network, and the residential gateway is configured to decrypt the protected content using the access information provided by the rights management server and to encrypt the decrypted content for distribution to the playback device via the secure link.
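The access-grant step described above can be sketched as follows, assuming a simple in-memory table on the rights management server that associates playback devices with gateways. The class, method, and field names are hypothetical.

```python
# Minimal sketch of the rights-management check: access information is
# returned only when the requesting playback device is associated with
# the gateway (the "predetermined criterion" in the abstract).

class RightsServer:
    def __init__(self, associations):
        # Maps gateway id -> set of playback-device ids associated with it.
        self.associations = associations

    def request_access(self, gateway_id, device_id):
        if device_id in self.associations.get(gateway_id, set()):
            # Illustrative access information; a real system would return
            # key material for decrypting the protected content.
            return {"content_key": "k-%s" % gateway_id}
        return None             # criterion not satisfied: no access

server = RightsServer({"gw1": {"tv", "tablet"}})
granted = server.request_access("gw1", "tv")
denied = server.request_access("gw1", "phone")
```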
Abstract:
A PCIe fabric (100) is made up of two or more slices (104A, 104B), where each slice is directly connected to a processor (112A, 112B) and one or more clients (102A-102D). Each slice (104A, 104B) includes an Input/Output (IO) tier switch (108A, 108B), a hub tier switch (106A, 106B), and target devices, which may be one or more persistent storage modules PSM A-D (110A, 110B, 110C, 110D). Each IO tier switch may have multiple downstream ports, where one or more of these downstream ports is connected, via a crosslink, to a hub tier switch in a different slice. The IO tier switch (108A) is configured to receive a transaction layer packet (TLP) from a client (102A, 102B), make a determination that an address in the TLP is not associated with any multicast address range in the IO tier switch (108A) and is not associated with any downstream port in the IO tier switch (108A), and, based on these determinations, route the TLP to a first hub tier switch (106A) via an upstream port on the IO tier switch. The first hub tier switch (106A) is configured to make a determination that the TLP is associated with a multicast group and, based on the determination, generate a rewritten TLP and route the rewritten TLP to a target device via a downstream port on the first hub tier switch (106A). The rewritten TLP includes a new address but the same data payload as the original TLP. Accordingly, address-based routing may be used to achieve a fully connected mesh between the tiers, with all clients accessing all endpoints.
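The two-stage routing decision above can be sketched as follows: the IO tier switch forwards a TLP upstream when its address matches neither a multicast range nor a downstream port, and the hub tier switch rewrites the address for a multicast group member while carrying the payload unchanged. The address ranges and table shapes here are invented for illustration.

```python
# Sketch of address-based TLP routing across the two switch tiers.

def io_tier_route(tlp, multicast_ranges, downstream_ports):
    addr = tlp["addr"]
    in_mc = any(lo <= addr < hi for lo, hi in multicast_ranges)
    in_ds = any(lo <= addr < hi for lo, hi in downstream_ports.values())
    if not in_mc and not in_ds:
        return "upstream"           # route to the hub tier switch
    return "local"                  # handled within the IO tier switch

def hub_tier_route(tlp, multicast_groups):
    # If the TLP belongs to a multicast group, rewrite its address for
    # the target device; the data payload is carried over unchanged.
    for (lo, hi), target_addr in multicast_groups.items():
        if lo <= tlp["addr"] < hi:
            return {"addr": target_addr, "data": tlp["data"]}
    return tlp

tlp = {"addr": 0x9000, "data": b"payload"}
port = io_tier_route(tlp, [(0x0, 0x1000)], {"psm_a": (0x2000, 0x3000)})
rewritten = hub_tier_route(tlp, {(0x8000, 0xA000): 0x2100})
```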
Abstract:
The present invention discloses a method, an apparatus, and a system for determining a service transmission path. The method includes: receiving a service chaining object sent by a client based on an extended Path Computation Element Communication Protocol (PCEP), where the service chaining object contains the service processing capabilities that service nodes need to provide when a service is transmitted in a network; determining, in the network, at least one service node matching the service chaining object according to the service processing capabilities that service nodes in pre-stored service node attribute information are capable of providing; and generating a service transmission path based on the determined service nodes for transmitting the service initiated by the client. This alleviates the problems of heavy traffic pressure on, and low utilization of, the service nodes deployed in the network.
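The node-matching step above can be sketched as a selection over stored node attributes: for each capability the service chain requires, pick a node that advertises it. The data shapes and the tie-breaking rule (lexicographic) are assumptions for illustration.

```python
# Sketch of matching a service chaining object against pre-stored
# service node attribute information to build a transmission path.

def compute_service_path(required_capabilities, node_attributes):
    """Pick one matching node per required capability, in chain order."""
    path = []
    for capability in required_capabilities:
        candidates = [node for node, caps in node_attributes.items()
                      if capability in caps]
        if not candidates:
            return None         # no node can provide this service
        path.append(sorted(candidates)[0])   # illustrative tie-break
    return path

nodes = {"n1": {"firewall", "nat"}, "n2": {"dpi"}, "n3": {"nat"}}
path = compute_service_path(["firewall", "dpi"], nodes)
```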
Abstract:
One embodiment describes a network system. The system includes a primary enclosure including a network switch system that includes a plurality of physical interface ports. A first one of the plurality of physical interface ports is to communicatively couple to a network. The system further includes a sub-enclosure comprising a network interface card (NIC) to which a computer system is communicatively coupled and a downlink extension module (DEM) that is communicatively coupled with the NIC and a second one of the plurality of physical interface ports of the network switch system to provide network connectivity of the computer system to the network via the network switch system.
Abstract:
In an example, there is provided a network apparatus for providing native load balancing within a switch, including a first network interface operable to communicatively couple to a first network; a plurality of second network interfaces operable to communicatively couple to a second network, the second network comprising a service pool of service nodes; one or more logic elements providing a switching engine operable for providing network switching; and one or more logic elements comprising a load balancing engine operable for: load balancing incoming network traffic to the service pool via native hardware according to a load balancing configuration; detecting a new service node added to the service pool; and adjusting the load balancing configuration to account for the new service node; wherein the switching engine and load balancing engine are configured to be provided on the same hardware as each other and as the first network interface and plurality of second network interfaces.
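The control logic of the load balancing engine above can be modelled as follows: flows are hashed deterministically across the service pool, and the configuration is adjusted when a new service node is detected. A real switch would implement this in native hardware; this Python model, with invented names, only illustrates the behaviour.

```python
# Toy model of native load balancing: deterministic per-flow hashing
# over a service pool, with the pool adjusted when a node is added.

import hashlib

class LoadBalancer:
    def __init__(self, service_pool):
        self.service_pool = list(service_pool)

    def pick_node(self, flow_key):
        # Deterministic hash so one flow always maps to the same node.
        digest = hashlib.sha256(flow_key.encode()).digest()
        return self.service_pool[digest[0] % len(self.service_pool)]

    def add_service_node(self, node):
        # "Detecting a new service node ... adjusting the load
        # balancing configuration to account for the new service node."
        self.service_pool.append(node)

lb = LoadBalancer(["s1", "s2"])
before = lb.pick_node("10.0.0.1:80")
lb.add_service_node("s3")
after = lb.pick_node("10.0.0.1:80")
```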
Abstract:
The present invention discloses a network resource processing apparatus, method, and system, relates to the field of communications network technologies, and is used to resolve a communication congestion problem between network devices during network resource processing. A receiving module receives computation environment information corresponding to a computation task and network information of each computation node, which are transferred by a scheduler platform; it provides the computation environment information to a bandwidth decision module and a generation module, and provides the network information of each computation node to the generation module. The bandwidth decision module decides a to-be-allocated bandwidth for the computation task according to the computation environment information. The generation module generates routing configuration policy information for the computation task according to the network information of each computation node, the decided to-be-allocated bandwidth, and the computation environment information, and provides the routing configuration policy information to a sending module. The sending module sends the routing configuration policy information to a routing configuration controller. The solutions provided in the embodiments of the present invention are used when a network resource is being processed.
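The module pipeline above can be sketched as two functions: a bandwidth decision sized from the computation environment information, and a policy generator combining it with per-node network information. The sizing heuristic and all field names are invented; the abstract does not define them.

```python
# Sketch of the bandwidth decision and policy generation modules.

def decide_bandwidth(env_info):
    # Toy policy: bandwidth proportional to the task's data volume.
    # The factor of 10 Mbps per GB is purely illustrative.
    return env_info["data_volume_gb"] * 10

def generate_routing_policy(node_network_info, bandwidth_mbps):
    # One policy entry per computation node, carrying the decided
    # to-be-allocated bandwidth, ready to send to the routing
    # configuration controller.
    return [{"node": node, "port": info["port"],
             "bandwidth_mbps": bandwidth_mbps}
            for node, info in node_network_info.items()]

env = {"task": "t1", "data_volume_gb": 4}
nodes = {"c1": {"port": 1}, "c2": {"port": 2}}
bw = decide_bandwidth(env)
policy = generate_routing_policy(nodes, bw)
```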
Abstract:
A switch system allows the granularity of monitoring to be changed freely without regard to routing control. For example, a control protocol based on the OpenFlow technique is used to control the monitoring function of the switch system, so that monitoring can be controlled centrally across the whole network and the monitoring result can be reflected in the routing control. Here, the switch has one flow table for packet transfer and another flow table for monitoring. Both tables are searched for each packet, and a multi-hit operation is performed to execute the action of each matching entry. That is, both tables are searched and the packet is transferred according to the corresponding flow entries.
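The multi-hit lookup above can be sketched as follows: one packet is matched against both the transfer table and the monitoring table, and the actions of all matching entries are collected and executed. The table representation (match-function, action pairs) is an assumption for illustration.

```python
# Sketch of a multi-hit lookup over separate transfer and monitoring
# flow tables: every matching entry contributes its action.

def multi_hit_lookup(packet, transfer_table, monitor_table):
    actions = []
    for match, action in transfer_table:     # forwarding entries
        if match(packet):
            actions.append(("forward", action))
    for match, action in monitor_table:      # monitoring entries
        if match(packet):
            actions.append(("monitor", action))
    return actions

transfer = [(lambda p: p["dst"] == "10.0.0.2", "port2")]
monitor = [(lambda p: p["src"].startswith("10.0."), "count")]
pkt = {"src": "10.0.0.1", "dst": "10.0.0.2"}
hits = multi_hit_lookup(pkt, transfer, monitor)
```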
Abstract:
The present application provides a flow table-based table entry addressing method, a switch, and a controller. The method includes: receiving, by a switch, a packet; matching, by the switch, the packet against a previous flow table; after matching succeeds, sending, by the switch based on a write storage index instruction in the flow table entry that is successfully matched, storage index information along with the packet to a lower-level flow table, where the storage index information corresponds to a flow table entry in the lower-level flow table and the write storage index instruction is delivered by a controller; and directly addressing, by the switch in the lower-level flow table based on the storage index information, the flow table entry corresponding to the storage index information.
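The addressing scheme above can be sketched in two steps: a match in the previous flow table yields a storage index that travels with the packet, so the lower-level table entry is fetched by direct indexing rather than re-matched. The table layout here is an assumption for illustration.

```python
# Sketch of storage-index direct addressing across two flow tables.

def match_previous_table(packet, previous_table):
    # Each entry: (match_fn, storage_index). The write-storage-index
    # instruction installed by the controller supplies the index that
    # is sent along with the packet to the lower-level table.
    for match, storage_index in previous_table:
        if match(packet):
            return storage_index
    return None

def lookup_lower_table(storage_index, lower_table):
    # Direct addressing: index straight into the lower-level table,
    # with no linear matching over its entries.
    return lower_table[storage_index]

previous = [(lambda p: p["vlan"] == 100, 2)]
lower = ["drop", "flood", "output:3"]
pkt = {"vlan": 100}
idx = match_previous_table(pkt, previous)
action = lookup_lower_table(idx, lower)
```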