Packet reorder resolution in a load-balanced network architecture
    11.
    Granted Patent
    Packet reorder resolution in a load-balanced network architecture (In force)

    Publication No.: US07515543B2

    Publication Date: 2009-04-07

    Application No.: US11018282

    Filing Date: 2004-12-21

    Abstract: A load-balanced network architecture is disclosed in which a traffic flow deliverable from a source node to a destination node via intermediate nodes is split into parts, and the parts are distributed to respective ones of the intermediate nodes. Path delay differences for the parts are substantially equalized by delay adjustment at one or more of the intermediate nodes, and packets of one or more of the parts are scheduled for routing from respective ones of the intermediate nodes to the destination node based on arrival times of the packets at the source node.
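The scheduling step described above amounts to a k-way merge keyed on each packet's arrival time at the source node. A minimal sketch, assuming each intermediate node tags its packets with that timestamp (the data layout and names are illustrative, not from the patent):

```python
import heapq

def schedule_by_source_arrival(parts):
    """Merge packet parts held at intermediate nodes, ordering delivery
    to the destination by each packet's arrival time at the source.
    `parts` is a list of lists of (source_arrival_time, packet_id),
    one list per intermediate node, each already time-ordered."""
    # A k-way merge on source arrival time restores the original order
    # even though the parts traveled paths with different delays.
    return [pkt for _, pkt in heapq.merge(*parts)]

# Three intermediate nodes, each holding an interleaved share of the flow.
parts = [[(1, "p1"), (4, "p4")], [(2, "p2"), (5, "p5")], [(3, "p3")]]
print(schedule_by_source_arrival(parts))
```

The merge only works because the patent first equalizes path delays; without that, later-arriving parts would stall the merge at the destination.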

    Routing for networks with content filtering
    12.
    Patent Application
    Routing for networks with content filtering (In force)

    Publication No.: US20050259648A1

    Publication Date: 2005-11-24

    Application No.: US10851493

    Filing Date: 2004-05-21

    Abstract: A network of nodes interconnected by links has content filtering specified at certain nodes, and routing of packet connections through the network is generated based on the specified content-filtering nodes. The network is specified via a content-filtering node placement method and a network-capacity maximization method so as to apply content filtering to packets for substantially all traffic (packet streams) carried by the network.
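One way to realize "routing based on the specified content-filtering nodes" is a path search whose state records whether a filtering node has been traversed. This is a hedged sketch of that idea only; the patent's node-placement and capacity-maximization methods are not shown:

```python
from collections import deque

def filtered_route(graph, filter_nodes, src, dst):
    """Breadth-first search for a shortest path from src to dst that
    passes through at least one content-filtering node. State is
    (node, seen_filter), so visits before and after filtering are
    tracked separately."""
    start = (src, src in filter_nodes)
    queue = deque([(start, [src])])
    visited = {start}
    while queue:
        (node, seen), path = queue.popleft()
        if node == dst and seen:
            return path
        for nxt in graph[node]:
            state = (nxt, seen or nxt in filter_nodes)
            if state not in visited:
                visited.add(state)
                queue.append((state, path + [nxt]))
    return None  # no path touches a filtering node

# A->B->D is shorter, but only C filters content, so A->C->D is chosen.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(filtered_route(graph, {"C"}, "A", "D"))
```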

    Low RAM space, high-throughput persistent key-value store using secondary memory

    Publication No.: US10558705B2

    Publication Date: 2020-02-11

    Application No.: US12908153

    Filing Date: 2010-10-20

    Abstract: Described is using flash memory (or other secondary storage), RAM-based data structures and mechanisms to access key-value pairs stored in the flash memory using only a low RAM space footprint. A mapping (e.g. hash) function maps key-value pairs to a slot in a RAM-based index. The slot includes a pointer that points to a bucket of records on flash memory that each had keys that mapped to the slot. The bucket of records is arranged as a linear-chained linked list, e.g., with pointers from the most-recently written record to the earliest written record. Also described are compacting non-contiguous records of a bucket onto a single flash page, and garbage collection. Still further described is load balancing to reduce variation in bucket sizes, using a bloom filter per slot to avoid unnecessary searching, and splitting a slot into sub-slots.
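The core layout above — a small RAM slot pointing at a linked chain of records on flash — can be sketched as follows. This is a simplified model under illustrative assumptions (a Python list stands in for the flash log; bloom filters, compaction, and garbage collection are omitted):

```python
class FlashKVIndex:
    """Sketch of the RAM index over flash-resident records: each RAM
    slot holds a single pointer to the newest record in a chain on
    flash, and every record carries a back-pointer to the previously
    written record that hashed to the same slot."""

    def __init__(self, num_slots=8):
        self.slots = [None] * num_slots  # RAM: one pointer per slot
        self.flash = []                  # append-only stand-in for flash

    def _slot(self, key):
        return hash(key) % len(self.slots)

    def put(self, key, value):
        i = self._slot(key)
        # Append the record to flash, chaining it to the prior head.
        self.flash.append((key, value, self.slots[i]))
        self.slots[i] = len(self.flash) - 1  # newest record first

    def get(self, key):
        # Walk the chain from newest to oldest; the first match wins,
        # so a re-put naturally shadows older versions.
        ptr = self.slots[self._slot(key)]
        while ptr is not None:
            k, v, prev = self.flash[ptr]
            if k == key:
                return v
            ptr = prev
        return None
```

The RAM footprint is one pointer per slot regardless of how many records share the slot, which is the low-RAM property the abstract claims.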

    Structuring storage based on latch-free B-trees
    15.
    Granted Patent
    Structuring storage based on latch-free B-trees (In force)

    Publication No.: US09003162B2

    Publication Date: 2015-04-07

    Application No.: US13527880

    Filing Date: 2012-06-20

    Abstract: A request to modify an object in storage that is associated with one or more computing devices may be obtained, the storage organized based on a latch-free B-tree structure. A storage address of the object may be determined, based on accessing a mapping table that includes map indicators mapping logical object identifiers to physical storage addresses. A prepending of a first delta record to a prior object state of the object may be initiated, the first delta record indicating an object modification associated with the obtained request. Installation of a first state change associated with the object modification may be initiated via a first atomic operation on a mapping table entry that indicates the prior object state of the object. For example, the latch-free B-tree structure may include a B-tree like index structure over records as the objects, and logical page identifiers as the logical object identifiers.
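The delta-prepend-plus-atomic-install mechanism can be sketched roughly as below. Python has no hardware compare-and-swap, so a lock simulates only that single atomic instruction; everything else runs latch-free with a retry loop, which is the point of the design:

```python
import threading

class MappingTable:
    """Sketch of delta updating: a logical page ID maps to a chain of
    delta records over a base page state; an update prepends a delta
    with one atomic compare-and-swap on the mapping-table entry."""

    def __init__(self):
        self.table = {}                 # logical page id -> chain head
        self._lock = threading.Lock()   # simulates the CAS instruction only

    def _cas(self, pid, expected, new):
        # Atomic compare-and-swap on a single mapping-table entry.
        with self._lock:
            if self.table.get(pid) is expected:
                self.table[pid] = new
                return True
            return False

    def prepend_delta(self, pid, delta):
        while True:                     # retry on contention; no latches held
            head = self.table.get(pid)
            if self._cas(pid, head, ("delta", delta, head)):
                return

    def read(self, pid):
        # Fold the delta chain (newest first) over the base state.
        node, deltas = self.table.get(pid), []
        while node is not None and node[0] == "delta":
            deltas.append(node[1])
            node = node[2]
        return deltas, node
```

A failed CAS simply means another writer installed its delta first; the retry re-reads the new head and prepends on top of it, so no update is lost.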

    Content aware chunking for achieving an improved chunk size distribution
    16.
    Granted Patent
    Content aware chunking for achieving an improved chunk size distribution (In force)

    Publication No.: US08918375B2

    Publication Date: 2014-12-23

    Application No.: US13222198

    Filing Date: 2011-08-31

    Abstract: The subject disclosure is directed towards partitioning a file into chunks that satisfy a chunk size restriction, such as maximum and minimum chunk sizes, using a sliding window. For file positions within the chunk size restriction, a signature representative of a window fingerprint is compared with a target pattern, with a chunk boundary candidate identified if matched. Other signatures and patterns are then checked to determine a highest ranking signature (corresponding to a lowest numbered Rule) to associate with that chunk boundary candidate, or set an actual boundary if the highest ranked signature is matched. If the maximum chunk size is reached without matching the highest ranked signature, the chunking mechanism regresses to set the boundary based on the candidate with the next highest ranked signature (if no candidates, the boundary is set at the maximum). Also described is setting chunk boundaries based upon pattern detection (e.g., runs of zeros).
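The single-pattern core of sliding-window chunking can be sketched as follows. This is a deliberately simplified illustration: a cheap window sum stands in for a real rolling fingerprint, and the patent's ranked multi-signature rules and regression to candidates are not shown:

```python
def chunk(data, min_size=4, max_size=16, window=3, mask=0x07):
    """Content-defined chunking sketch: declare a boundary wherever
    the window fingerprint matches the target pattern (low bits zero),
    subject to minimum and maximum chunk sizes."""
    chunks, start = [], 0
    for i in range(len(data)):
        size = i - start + 1
        if size < min_size:
            continue  # never cut below the minimum chunk size
        # Cheap stand-in for a rolling fingerprint over the last bytes.
        fp = sum(data[max(start, i - window + 1): i + 1]) & 0xFF
        if (fp & mask) == 0 or size >= max_size:
            chunks.append(data[start:i + 1])
            start = i + 1
    if start < len(data):
        chunks.append(data[start:])  # trailing remainder
    return chunks
```

Because boundaries depend on content rather than offsets, an insertion early in the data shifts at most a few nearby boundaries instead of every boundary after it, which is what keeps deduplication effective.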

    STRUCTURING STORAGE BASED ON LATCH-FREE B-TREES
    17.
    Patent Application
    STRUCTURING STORAGE BASED ON LATCH-FREE B-TREES (In force)

    Publication No.: US20130346725A1

    Publication Date: 2013-12-26

    Application No.: US13527880

    Filing Date: 2012-06-20

    Abstract: A request to modify an object in storage that is associated with one or more computing devices may be obtained, the storage organized based on a latch-free B-tree structure. A storage address of the object may be determined, based on accessing a mapping table that includes map indicators mapping logical object identifiers to physical storage addresses. A prepending of a first delta record to a prior object state of the object may be initiated, the first delta record indicating an object modification associated with the obtained request. Installation of a first state change associated with the object modification may be initiated via a first atomic operation on a mapping table entry that indicates the prior object state of the object. For example, the latch-free B-tree structure may include a B-tree like index structure over records as the objects, and logical page identifiers as the logical object identifiers.

    FLASH MEMORY CACHE INCLUDING FOR USE WITH PERSISTENT KEY-VALUE STORE
    18.
    Patent Application
    FLASH MEMORY CACHE INCLUDING FOR USE WITH PERSISTENT KEY-VALUE STORE (In force)

    Publication No.: US20130282965A1

    Publication Date: 2013-10-24

    Application No.: US13919738

    Filing Date: 2013-06-17

    Abstract: Described is using flash memory, RAM-based data structures and mechanisms to provide a flash store for caching data items (e.g., key-value pairs) in flash pages. A RAM-based index maps data items to flash pages, and a RAM-based write buffer maintains data items to be written to the flash store, e.g., when a full page can be written. A recycle mechanism makes used pages in the flash store available by destaging a data item to a hard disk or reinserting it into the write buffer, based on its access pattern. The flash store may be used in a data deduplication system, in which the data items comprise chunk-identifier, metadata pairs, in which each chunk-identifier corresponds to a hash of a chunk of data. The RAM and flash are accessed with the chunk-identifier (e.g., as a key) to determine whether a chunk is a new chunk or a duplicate.
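The deduplication lookup path described at the end of the abstract reduces to a keyed membership test on the chunk hash. A minimal sketch, with a plain dictionary standing in for the RAM index plus flash store (the write buffer and page recycling are omitted):

```python
import hashlib

class DedupCache:
    """Sketch of the dedup lookup path: a chunk's hash is its
    chunk-identifier. A hit means the chunk is a duplicate and only
    metadata is consulted; a miss means the chunk is new and its
    metadata is cached for future lookups."""

    def __init__(self):
        self.index = {}  # stands in for the RAM index + flash store

    def is_duplicate(self, chunk):
        chunk_id = hashlib.sha256(chunk).digest()
        if chunk_id in self.index:
            return True                              # duplicate: skip storing
        self.index[chunk_id] = {"len": len(chunk)}   # new: record metadata
        return False
```

Keying on the content hash rather than the data itself is what lets the index stay small enough for RAM while the chunks live on slower media.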

    Optimized transport protocol for delay-sensitive data
    20.
    Granted Patent
    Optimized transport protocol for delay-sensitive data (In force)

    Publication No.: US08228800B2

    Publication Date: 2012-07-24

    Application No.: US12364520

    Filing Date: 2009-02-03

    Abstract: Transmission delays are minimized when packets are transmitted from a source computer over a network to a destination computer. The source computer measures the network's available bandwidth, forms a sequence of output packets from a sequence of data packets, and transmits the output packets over the network to the destination computer, where the transmission rate is ramped up to the measured bandwidth. In conjunction with the transmission, the source computer monitors a transmission delay indicator which it computes using acknowledgement packets it receives from the destination computer. Whenever the indicator specifies that the transmission delay is increasing, the source computer reduces the transmission rate until the indicator specifies that the delay is unchanged. The source computer dynamically decides whether each output packet will be a forward error correction packet or a single data packet, where the decision is based on minimizing the expected transmission delays.
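The ramp-up/back-off control loop in the abstract can be sketched as a single rate-update rule. The gains below are illustrative assumptions, not values from the patent, and the FEC-vs-data decision is not modeled:

```python
def adjust_rate(rate, measured_bw, delay_trend, ramp=1.25, backoff=0.8):
    """One step of the sending-rate control loop: ramp toward the
    measured available bandwidth while the delay indicator is flat,
    and back off whenever the indicator reports growing delay.
    delay_trend > 0 means the ack-derived delay is increasing."""
    if delay_trend > 0:
        return rate * backoff               # delay rising: reduce rate
    return min(rate * ramp, measured_bw)    # delay flat: ramp toward bw
```

Reacting to the delay trend rather than to packet loss lets the sender slow down before queues overflow, which is what makes the scheme suitable for delay-sensitive traffic.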
