Scalable multi-tenant network architecture for virtualized datacenters
    5.
    Granted invention patent
    Scalable multi-tenant network architecture for virtualized datacenters (In force)

    Publication No.: US09304798B2

    Publication Date: 2016-04-05

    Application No.: US14122164

    Filing Date: 2011-06-07

    Abstract: A scalable, multi-tenant network architecture for a virtualized datacenter is provided. The network architecture includes a network having a plurality of servers connected to a plurality of switches. The plurality of servers hosts a plurality of virtual interfaces for a plurality of tenants. A configuration repository is connected to the network and each server in the plurality of servers has a network agent hosted therein. The network agent encapsulates packets for transmission across the network from a source virtual interface to a destination virtual interface in the plurality of virtual interfaces for a tenant in the plurality of tenants. The packets are encapsulated with information identifying and locating the destination virtual interface, and the information is interpreted by switches connected to the source virtual interface and the destination virtual interface.
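
    The encapsulation step described above lends itself to a short illustration. The Python below is a minimal sketch, not the patented implementation: the per-server network agent prepends a header that both identifies the destination virtual interface (tenant ID, VIF ID) and locates it (hosting server, edge-switch port), which is the information the switches attached to the source and destination interfaces would act on. All field names and widths are assumptions made for the example.

```python
# Minimal sketch of agent-side encapsulation/decapsulation (assumed wire layout).
import struct
from dataclasses import dataclass

@dataclass
class VifLocator:
    tenant_id: int      # which tenant the destination virtual interface belongs to
    vif_id: int         # identifies the destination virtual interface
    server_id: int      # locates the server hosting that interface
    switch_port: int    # port on the edge switch attached to that server

HEADER_FMT = "!IIHH"    # assumed layout: tenant, vif, server, port (network byte order)

def encapsulate(payload: bytes, dst: VifLocator) -> bytes:
    """Prepend the locator header the network agent would add before transmission."""
    header = struct.pack(HEADER_FMT, dst.tenant_id, dst.vif_id,
                         dst.server_id, dst.switch_port)
    return header + payload

def decapsulate(frame: bytes) -> tuple[VifLocator, bytes]:
    """Recover the locator and the original packet on the receiving side."""
    size = struct.calcsize(HEADER_FMT)
    tenant, vif, server, port = struct.unpack(HEADER_FMT, frame[:size])
    return VifLocator(tenant, vif, server, port), frame[size:]
```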


    GENERATING NETWORK TOPOLOGIES
    7.
    Invention application
    GENERATING NETWORK TOPOLOGIES (In force)

    Publication No.: US20130111070A1

    Publication Date: 2013-05-02

    Application No.: US13285842

    Filing Date: 2011-10-31

    IPC Class: G06F15/16

    Abstract: A method of generating a plurality of potential network topologies is provided herein. The method includes receiving parameters that specify a number of servers, a number of switches, and a number of ports in the switches. The parameters are for configuring a network topology. The method also includes generating, for each of a number of dimensions, one or more potential network topologies comprising the set of potential network topologies. The number of dimensions is based on the number of switches. The method further includes determining that the set of potential network topologies is structurally feasible. Additionally, the method includes determining an optimal link aggregation (LAG) factor in each dimension of each of the set of potential network topologies.
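
    A rough sketch of the enumeration described above is given below, under assumptions not taken from the patent: switches are arranged HyperX-style with the switches along each dimension fully connected, structural feasibility means each switch has enough ports for its servers plus its inter-switch links, and spare ports are spread evenly as the per-dimension LAG factor. Function and parameter names are illustrative.

```python
def factorizations(n, parts):
    """All ordered ways to write n as a product of `parts` integers >= 2."""
    if parts == 1:
        if n >= 2:
            yield (n,)
        return
    for f in range(2, n + 1):
        if n % f == 0:
            for rest in factorizations(n // f, parts - 1):
                yield (f,) + rest

def candidate_topologies(num_servers, num_switches, ports_per_switch, max_dims=4):
    servers_per_switch = -(-num_servers // num_switches)    # ceiling division
    for dims in range(1, max_dims + 1):
        for extents in factorizations(num_switches, dims):
            spare = ports_per_switch - servers_per_switch    # ports left for the fabric
            base_links = sum(e - 1 for e in extents)         # one link per peer per dimension
            if spare < base_links:
                continue                                     # structurally infeasible
            lag = spare // base_links                        # assumed LAG heuristic
            yield {"dims": dims, "extents": extents, "lag": lag}

# Example: 1024 servers, 64 switches with 48 ports each.
for topo in candidate_topologies(1024, 64, 48):
    print(topo)
```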


    System and method for energy efficient data prefetching
    8.
    Granted invention patent
    System and method for energy efficient data prefetching (Expired)

    Publication No.: US07437438B2

    Publication Date: 2008-10-14

    Application No.: US10033404

    Filing Date: 2001-12-27

    IPC Class: G06F15/173

    CPC Class: G06F17/30902 Y02D10/45

    Abstract: A computer system uses a prefetch prediction model having energy usage parameters to predict the impact of prefetching specified files on the system's energy usage. A prefetch prediction engine utilizes the prefetch prediction model to evaluate the specified files with respect to prefetch criteria, including energy efficiency prefetch criteria, and generates a prefetch decision with respect to each file of the specified files. For each specified file for which the prefetch prediction engine generates an affirmative prefetch decision, an identifying entry is stored in a queue. The computer system fetches files identified by entries in the queue, although some or all of the entries in the queue at any one time may be deleted if it is determined that the identified files are no longer likely to be needed by the computer system.
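
    The decision flow can be sketched as follows. Everything numeric below, including the energy-usage parameters and the scoring rule, is an assumption used only to show the shape of the method: a prediction model scores each candidate file, an energy-efficiency criterion gates the prefetch decision, affirmative decisions are queued, and queued entries can later be pruned if the files are no longer likely to be needed.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    access_probability: float   # model's estimate that the file will be needed
    size_bytes: int

# Hypothetical energy-usage parameters of the prediction model (assumed values).
ENERGY_PER_BYTE_J = 2e-9        # energy to move one byte over the network interface
ON_DEMAND_WAKEUP_J = 0.05       # energy to wake the device for a later on-demand fetch

prefetch_queue: deque[Candidate] = deque()

def prefetch_decision(c: Candidate) -> bool:
    """Affirmative iff the expected energy saved by prefetching beats the expected waste."""
    expected_saving = c.access_probability * ON_DEMAND_WAKEUP_J
    expected_waste = (1.0 - c.access_probability) * c.size_bytes * ENERGY_PER_BYTE_J
    return expected_saving > expected_waste

def evaluate(candidates: list[Candidate]) -> None:
    """Store an identifying entry in the queue for each affirmative decision."""
    for c in candidates:
        if prefetch_decision(c):
            prefetch_queue.append(c)

def prune(still_needed) -> None:
    """Drop queued entries whose files are no longer likely to be needed."""
    kept = [c for c in prefetch_queue if still_needed(c)]
    prefetch_queue.clear()
    prefetch_queue.extend(kept)
```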


    System and method for receiver based allocation of network bandwidth
    9.
    Granted invention patent
    System and method for receiver based allocation of network bandwidth (In force)

    Publication No.: US06560243B1

    Publication Date: 2003-05-06

    Application No.: US09302781

    Filing Date: 1999-04-30

    IPC Class: H04J3/16

    CPC Class: H04L47/10 H04L47/263

    Abstract: A system receives a flow of data packets via the link and determines a target bandwidth to be allocated to the flow on the link. In response to the flow, the receiving system transmits data to the sending system. The transmitted data control the sending system such that when the sending system transmits subsequent data packets to the receiving system, such subsequent data packets are transmitted at a rate approximating the target bandwidth allocated to the flow. In one embodiment, the rate at which the transmitted data from the receiving system arrive at the sending system determines the rate at which the sending system transmits the subsequent data packets. The receiving system can control the rate by delaying its response to the sending system according to a calculated delay factor. In another embodiment, the data transmitted from the receiving system to the sending system indicate a maximum amount of data that the receiving system will accept from the sending system in a subsequent data transmission. The maximum amount is determined so that when the sending system transmits subsequent data packets according to that amount, data is transmitted by the sending system to the receiving system at a rate approximating the target bandwidth.
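
    A back-of-the-envelope sketch of the two receiver-side controls mentioned in the abstract is given below, with assumed parameter names: delaying acknowledgements by a calculated factor so the sender's ack-clocked rate approximates the target bandwidth, and advertising a maximum amount of data (a window) sized so that the window divided by the round-trip time approximates the target.

```python
def ack_delay_seconds(bytes_being_acked: int,
                      target_bandwidth_bps: float,
                      elapsed_since_last_ack_s: float) -> float:
    """Extra delay to add before sending this acknowledgement.

    If the sender releases `bytes_being_acked` upon each ACK, spacing ACKs
    `bytes * 8 / target` seconds apart holds its sending rate near the target.
    """
    desired_spacing = (bytes_being_acked * 8) / target_bandwidth_bps
    return max(0.0, desired_spacing - elapsed_since_last_ack_s)

def advertised_window_bytes(target_bandwidth_bps: float,
                            round_trip_time_s: float) -> int:
    """Maximum amount to advertise: the bandwidth-delay product of the target rate."""
    return int(target_bandwidth_bps / 8 * round_trip_time_s)

# Example: cap a flow at 10 Mbit/s over a 50 ms round-trip path.
print(advertised_window_bytes(10e6, 0.050))   # about 62500 bytes
print(ack_delay_seconds(1460, 10e6, 0.0005))  # delay before acking one segment
```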


    Cache memory system and method for selectively removing stale aliased entries
    10.
    Granted invention patent
    Cache memory system and method for selectively removing stale aliased entries (Expired)

    Publication No.: US5675763A

    Publication Date: 1997-10-07

    Application No.: US514350

    Filing Date: 1995-08-04

    CPC Class: G06F12/1063 G06F12/0897

    Abstract: A cache memory system and method are described for selectively removing, from a virtually addressed cache, stale "aliased" entries, which arise when portions of several address spaces are mapped into a single region of real memory. The cache memory system includes a central processor unit (CPU) and a first-level cache on an integrated circuit chip. The CPU receives tag and data information from the first level cache via virtual address lines and data lines respectively. An off-chip second level cache is additionally coupled to provide data to the data lines. The CPU is coupled to a translation lookaside buffer (TLB) via the virtual address lines, while the second level cache is coupled to the TLB via physical address lines. The first and second level caches each comprise a plurality of entries. Each of the entries includes a status bit, indicating possible membership in a class of entries that might require flushing. Address translation database entries (page table entries or translation lookaside buffer (TLB) entries) are augmented with a field that contains the appropriate value of the status bits of each first and second level cache entry. Status bits are set for any page in which stale aliases may potentially occur (i.e., those shared pages that can be modified by at least one process or device). The cache-fill mechanism includes a path combining the status bits with the data being loaded into the first-level cache.
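
    A simplified software model of the idea follows; the patent describes a hardware cache-fill path, so the Python below is only an analogy with assumed names. Each cache entry carries a status bit copied from its augmented page-table/TLB entry at fill time, the bit is set only for shared pages that can be modified by another process or device, and a selective flush removes just the entries whose bit is set.

```python
from dataclasses import dataclass

@dataclass
class PageTableEntry:
    physical_page: int
    alias_risk: bool          # set for shared pages writable by another process or device

@dataclass
class CacheEntry:
    virtual_tag: int
    data: bytes
    alias_risk: bool          # status bit copied in on the fill path

class VirtuallyAddressedCache:
    def __init__(self) -> None:
        self.entries: dict[int, CacheEntry] = {}

    def fill(self, virtual_tag: int, data: bytes, pte: PageTableEntry) -> None:
        # The fill path combines the status bit with the data being loaded.
        self.entries[virtual_tag] = CacheEntry(virtual_tag, data, pte.alias_risk)

    def flush_possible_aliases(self) -> None:
        # Selective flush: remove only entries that might hold stale aliased data.
        self.entries = {tag: e for tag, e in self.entries.items()
                        if not e.alias_risk}
```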
