Abstract:
An interface is operable to receive an element for deletion from a bloom filter. The bloom filter includes multiple hash functions and an array. A processor is operable to generate hash function output values for the element using the hash functions. The hash function output values correspond to indices identifying bits in the array. A memory is operable to maintain supplemental data structure entries. The supplemental data structure has entries associated with the indices. The processor is operable to modify the supplemental data structure entries to delete the element from the bloom filter.
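A common realization of such a supplemental data structure is a per-index counter array (a counting Bloom filter). The sketch below is a minimal illustration under that assumption; the class and method names are illustrative and are not taken from the patent.

```python
import hashlib

class DeletableBloomFilter:
    """Bloom filter with a per-index counter array as the supplemental structure."""

    def __init__(self, size, num_hashes):
        self.size = size
        self.num_hashes = num_hashes
        self.bits = [0] * size          # the bit array consulted on lookups
        self.counters = [0] * size      # supplemental entries, one per index

    def _indices(self, element):
        # Derive the k indices from k hash function outputs for the element.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{element}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, element):
        for idx in self._indices(element):
            self.counters[idx] += 1
            self.bits[idx] = 1

    def __contains__(self, element):
        return all(self.bits[idx] for idx in self._indices(element))

    def delete(self, element):
        # Modify the supplemental entries; clear a bit only when its counter hits zero.
        idxs = list(self._indices(element))
        if not all(self.counters[i] > 0 for i in idxs):
            return False
        for i in idxs:
            self.counters[i] -= 1
            if self.counters[i] == 0:
                self.bits[i] = 0
        return True
```

Deleting an element decrements its counters and clears a bit only when no other element still maps to that index, which is what keeps membership queries correct after deletion.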
Abstract:
The present invention defines a new protocol for communicating with an offload engine that provides Transmission Control Protocol (“TCP”) termination over a Fibre Channel (“FC”) fabric. The offload engine terminates all protocols up to and including TCP and performs the processing associated with those layers. The offload protocol guarantees delivery and is encapsulated within FCP-formatted frames. Thus, the TCP streams are reliably passed to the host. Additionally, using this scheme, the offload engine can provide parsing of the TCP stream to further assist the host. The present invention also provides network devices (and components thereof) that are configured to perform the foregoing methods. The invention further defines how network attached storage (“NAS”) protocol data units (“PDUs”) are parsed and delivered.
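The abstract does not give a frame layout, but the idea of a reliable offload protocol carried inside FCP-formatted frames can be pictured with a toy encapsulation. Everything below (the header fields, their sizes, and the function names) is hypothetical and is only meant to show sequence-numbered, already-terminated TCP stream data being wrapped for delivery to the host.

```python
import struct

# Hypothetical offload-protocol header carried in the FCP payload:
# a stream identifier, a sequence number for guaranteed delivery,
# and the length of the terminated TCP stream data that follows.
OFFLOAD_HEADER = struct.Struct("!IQI")   # stream_id, seq_no, length

def encapsulate(stream_id: int, seq_no: int, tcp_payload: bytes) -> bytes:
    """Wrap already-terminated TCP stream data for transport to the host."""
    return OFFLOAD_HEADER.pack(stream_id, seq_no, len(tcp_payload)) + tcp_payload

def decapsulate(frame_payload: bytes):
    """Recover the stream id, sequence number, and stream data on the host side."""
    stream_id, seq_no, length = OFFLOAD_HEADER.unpack_from(frame_payload)
    data = frame_payload[OFFLOAD_HEADER.size:OFFLOAD_HEADER.size + length]
    return stream_id, seq_no, data

wire = encapsulate(stream_id=7, seq_no=42, tcp_payload=b"GET / HTTP/1.1\r\n")
print(decapsulate(wire))   # (7, 42, b'GET / HTTP/1.1\r\n')
```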
Abstract:
Techniques are disclosed for abstracting write acceleration techniques and tape acceleration techniques away from transport providers (e.g., away from an FC or FCIP interlink between two storage area networks) and allowing acceleration to be provided as a service by nodes within the storage area network (SAN). Doing so allows the acceleration service to be provided anywhere in the SAN. Further, doing so allows users to scale the acceleration service as needed, without having to create awkward topologies of multiple VSANs. Further still, because the acceleration service is offered independently of the transport, compression, encryption, and other services may be offered as part of the transport over the FC/FCIP connection alongside the acceleration service.
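As a purely illustrative sketch (none of these keys or node names come from the abstract), the decoupling can be pictured as configuration in which acceleration is bound to service nodes inside the SAN, while compression and encryption remain properties of the FC/FCIP transport:

```python
# Hypothetical configuration sketch: acceleration is a service hosted by
# SAN nodes, not a property of the FC/FCIP transport link itself.
san_config = {
    "transport": {
        "type": "FCIP",
        "compression": True,      # offered as part of the transport
        "encryption": True,
    },
    "services": [
        # Acceleration nodes can be added or removed to scale the service
        # without reshaping VSAN topology or touching the transport link.
        {"node": "sw-accel-1", "service": "write-acceleration"},
        {"node": "sw-accel-2", "service": "tape-acceleration"},
    ],
}

def acceleration_nodes(config):
    """Return the SAN nodes currently providing an acceleration service."""
    return [s["node"] for s in config["services"] if "acceleration" in s["service"]]

print(acceleration_nodes(san_config))   # ['sw-accel-1', 'sw-accel-2']
```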
Abstract:
A method and apparatus for achieving maximal, full connection in a multi-processor system having a plurality of processors. Each of the multiple processors has a respective memory. The invention includes communicatively connecting the processors. Following a disruption in the communicative connection, the invention collects connectivity information on one of the processors and selects certain of the processors to cease operations, based on the connectivity information collected. The invention further communicates the selection to each of the processors communicatively coupled to the one processor. The selected processors cease operations.
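The abstract leaves the selection rule open; one simple policy, shown below as an assumption rather than the patented method, is to keep a fully connected (pairwise reachable) subset of processors found greedily from the collected connectivity information and to tell the remaining processors to cease operations.

```python
def processors_to_cease(connectivity):
    """Given collected connectivity info {processor: set of reachable processors},
    greedily keep a fully connected subset and return the processors that
    should cease operations. This is an illustrative heuristic only."""
    keep = set()
    # Try the best-connected processors first.
    for proc in sorted(connectivity, key=lambda p: len(connectivity[p]), reverse=True):
        # Add proc only if it is mutually reachable with everything kept so far.
        if all(proc in connectivity[m] and m in connectivity[proc] for m in keep):
            keep.add(proc)
    return set(connectivity) - keep

# After a disruption, P4 can no longer reach P2 or P3.
links = {
    "P1": {"P2", "P3", "P4"},
    "P2": {"P1", "P3"},
    "P3": {"P1", "P2"},
    "P4": {"P1"},
}
print(processors_to_cease(links))   # {'P4'}
```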
Abstract:
Systems and methods for providing service virtualization endpoint (SVE) redundancy in a two-node, active-standby form. An active-standby pair of SVEs registers with a cloud-centric-network control point (CCN-CP) as a single service node (SN) using a virtual IP address for both the control plane and the data plane. At any given time, only the active SVE hosts the control plane and the data plane. When a failover happens, the standby SVE takes over the hosting operation, so the failover is transparent to the CCN-CP and the SN.
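A minimal sketch of the active-standby idea, assuming a simple in-process controller; the class and method names are illustrative, and the real SVEs, CCN-CP registration, and virtual IP plumbing are outside the scope of this toy example.

```python
class SVEPair:
    """Active-standby pair that appears to the control point as one service node
    reachable at a single virtual IP for both control plane and data plane."""

    def __init__(self, virtual_ip, active, standby):
        self.virtual_ip = virtual_ip
        self.active = active
        self.standby = standby

    def register(self):
        # Registers once, as a single service node, under the virtual IP.
        return {"service_node": self.virtual_ip,
                "control_plane": self.virtual_ip,
                "data_plane": self.virtual_ip}

    def handle(self, plane, message):
        # Only the active SVE hosts control-plane and data-plane traffic.
        return f"{self.active} handled {plane} message: {message}"

    def failover(self):
        # The standby takes over hosting; the virtual IP does not change,
        # so the control point and service node see no difference.
        self.active, self.standby = self.standby, self.active

pair = SVEPair("10.0.0.100", active="sve-1", standby="sve-2")
print(pair.register())
print(pair.handle("data-plane", "flow-setup"))
pair.failover()
print(pair.handle("data-plane", "flow-setup"))   # now served by sve-2
```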
Abstract:
Techniques and a network edge device are provided herein to extend local area networks (LANs) and storage area networks (SANs) beyond a data center while converging the associated local area network and storage area network host layers. A packet is received at a device in a network. It is determined whether the packet is routed to a local or remote storage area network or to a local area network. In response to determining that the packet is routed to a remote storage area network, storage area network extension services are performed with respect to the packet in order to extend the storage area network on behalf of a remote location. In response to determining that the packet is routed to a local area network, local area network extension services are performed with respect to the packet in order to extend the local area network on behalf of the remote location.
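The decision logic described above can be pictured as a small dispatcher. The sketch below is an assumption-laden illustration (the classification helper and service names are hypothetical), showing only the branch between SAN extension and LAN extension services.

```python
def classify(packet):
    """Hypothetical classification: decide which network the packet is routed to.
    Returns 'remote-san', 'local-san', or 'lan'."""
    return packet.get("destination_class", "lan")

def handle_packet(packet):
    destination = classify(packet)
    if destination == "remote-san":
        return perform_san_extension(packet)   # extend the SAN for the remote location
    if destination == "lan":
        return perform_lan_extension(packet)   # extend the LAN for the remote location
    return forward_locally(packet)             # local SAN traffic needs no extension

def perform_san_extension(packet):
    return f"SAN extension services applied to {packet['id']}"

def perform_lan_extension(packet):
    return f"LAN extension services applied to {packet['id']}"

def forward_locally(packet):
    return f"{packet['id']} forwarded without extension services"

print(handle_packet({"id": "pkt-1", "destination_class": "remote-san"}))
print(handle_packet({"id": "pkt-2", "destination_class": "lan"}))
```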
Abstract:
Disclosed are apparatus and methods for facilitating caching in a storage area network (SAN). In general, data transfer traffic between one or more hosts and one or more memory portions in one or more storage device(s) is redirected to one or more cache modules. One or more network devices (e.g., switches) of the SAN can be configured to redirect data transfer for a particular memory portion of one or more storage device(s) to a particular cache module. As needed, data transfer traffic for any number of memory portions and storage devices can be identified for or removed from being redirected to a particular cache module. Also, any number of cache modules can be utilized for receiving redirected traffic so that such redirected traffic is divided among such cache modules in any suitable proportion for enhanced flexibility.
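The redirection decision can be pictured as a lookup from (storage device, memory portion) to a cache module. The sketch below is illustrative only; the names and the LBA-range notion of a "memory portion" are assumptions, not details from the abstract.

```python
class CacheRedirector:
    """Maps (storage device, LBA range) to the cache module that should
    receive the redirected data transfer traffic."""

    def __init__(self):
        self.rules = []   # list of (device, first_lba, last_lba, cache_module)

    def add_rule(self, device, first_lba, last_lba, cache_module):
        self.rules.append((device, first_lba, last_lba, cache_module))

    def remove_rules_for(self, cache_module):
        # Stop redirecting any memory portions to this cache module.
        self.rules = [r for r in self.rules if r[3] != cache_module]

    def redirect_target(self, device, lba):
        # Return the cache module for this I/O, or None to pass it through
        # to the storage device directly.
        for dev, lo, hi, module in self.rules:
            if dev == device and lo <= lba <= hi:
                return module
        return None

switch = CacheRedirector()
switch.add_rule("disk-A", 0, 999_999, "cache-1")
switch.add_rule("disk-A", 1_000_000, 1_999_999, "cache-2")   # traffic split across modules
print(switch.redirect_target("disk-A", 42))   # cache-1
print(switch.redirect_target("disk-B", 42))   # None (not redirected)
```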
Abstract:
According to the present invention, methods and apparatus are provided for improving data transfers between a host and a tape device on fibre channel fabrics connected through an IP fabric. A fibre channel switch preemptively responds to write requests and data transfers from a host even before acknowledgments are received from the tape device. Flow control and error handling mechanisms are implemented to provide error recovery and to allow accelerated response without overrun.
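A toy model of the preemptive-response idea, under the assumption that the switch simply proxies write status back to the host while tracking the writes still outstanding at the tape device; all names and the window-based flow control are illustrative.

```python
class TapeWriteAccelerator:
    """Switch-side proxy that acknowledges host writes before the tape device
    responds, within a flow-control window, and tracks outstanding writes
    so errors can be reported and recovered later."""

    def __init__(self, window=8):
        self.window = window          # max writes outstanding at the tape device
        self.outstanding = {}         # write_id -> data not yet acked by the tape

    def host_write(self, write_id, data):
        if len(self.outstanding) >= self.window:
            return "BUSY"             # flow control: avoid overrunning the device
        self.outstanding[write_id] = data
        self.forward_to_tape(write_id, data)
        return "GOOD"                 # preemptive status back to the host

    def forward_to_tape(self, write_id, data):
        pass                          # placeholder for sending over the IP fabric

    def tape_ack(self, write_id, ok=True):
        data = self.outstanding.pop(write_id)
        if not ok:
            # Error handling: the proxy must recover, e.g. by replaying the write.
            self.forward_to_tape(write_id, data)
            self.outstanding[write_id] = data

accel = TapeWriteAccelerator(window=2)
print(accel.host_write(1, b"block-1"))   # GOOD, acked before the tape responds
print(accel.host_write(2, b"block-2"))   # GOOD
print(accel.host_write(3, b"block-3"))   # BUSY, window full
accel.tape_ack(1)
print(accel.host_write(3, b"block-3"))   # GOOD
```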
Abstract:
A method and apparatus to improve the performance of a SCSI write over a high latency network. The apparatus includes a first Switch close to the initiator in a first SAN and a second Switch close to the target in a second SAN. In various embodiments, the two Switches are border switches connecting their respective SANs to a relatively high latency network between the two SANs. In addition, the initiator can be either directly connected or indirectly connected to the first Switch in the first SAN. The target can also be either directly or indirectly connected to the second Switch in the second SAN. During operation, the method includes the first Switch sending Transfer Ready (Xfr_rdy) frame(s) to the initiating Host, based on buffer availability, in response to a SCSI Write command from the Host directed to the target. The first and second Switches then coordinate with one another by sending Transfer Ready commands to each other independent of the target's knowledge. The second Switch buffers the data received from the Host until the target indicates it is ready to receive the data. Since the Switches send frames to the initiating Host independent of the target, the Switches manipulate the OX_ID and RX_ID fields in the Fibre Channel header of the various commands associated with the SCSI Write. The OX_ID and RX_ID fields are manipulated so as to trap the commands and so that the Switches can keep track of the various commands associated with the SCSI Write.
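A highly simplified sketch of the first Switch's role, assuming it keeps an exchange table keyed by the Fibre Channel exchange identifiers so it can trap and track frames; the field names, buffer model, and RX_ID assignment are illustrative, not the patented implementation.

```python
class WriteAcceleratorSwitch:
    """First-Switch sketch: answers a SCSI Write with Xfr_rdy based on local
    buffer availability and tracks the exchange by its (OX_ID, RX_ID) pair."""

    def __init__(self, buffer_frames=16):
        self.free_buffers = buffer_frames
        self.exchanges = {}   # (ox_id, rx_id) -> {"grant": n, "frames": [...]}

    def on_scsi_write(self, ox_id, frames_requested):
        # Assign a locally chosen RX_ID so later frames can be trapped
        # and matched to this exchange without involving the target.
        rx_id = len(self.exchanges) + 1
        grant = min(frames_requested, self.free_buffers)
        if grant == 0:
            return None                       # no buffers: do not accelerate
        self.free_buffers -= grant
        self.exchanges[(ox_id, rx_id)] = {"grant": grant, "frames": []}
        return {"type": "XFER_RDY", "ox_id": ox_id, "rx_id": rx_id, "grant": grant}

    def on_data_frame(self, ox_id, rx_id, payload):
        # Buffer host data until the peer switch (and ultimately the target)
        # indicates readiness to receive it.
        self.exchanges[(ox_id, rx_id)]["frames"].append(payload)

    def on_peer_ready(self, ox_id, rx_id):
        exchange = self.exchanges.pop((ox_id, rx_id))
        self.free_buffers += exchange["grant"]
        return exchange["frames"]             # forward across the high-latency link

switch = WriteAcceleratorSwitch(buffer_frames=4)
xfer_rdy = switch.on_scsi_write(ox_id=0x1234, frames_requested=2)
switch.on_data_frame(0x1234, xfer_rdy["rx_id"], b"data-0")
switch.on_data_frame(0x1234, xfer_rdy["rx_id"], b"data-1")
print(switch.on_peer_ready(0x1234, xfer_rdy["rx_id"]))   # [b'data-0', b'data-1']
```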