Abstract:
In one embodiment, the present invention includes a method for maintaining data in a first level cache non-inclusively with data in a second level cache coupled to the first level cache. At the same time, at least a portion of directory information associated with the data in the first level cache may be maintained inclusively with a directory portion of the second level cache. Other embodiments are described and claimed.
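A minimal C sketch of the arrangement, with sizes, field names, and the address split chosen purely for illustration (they are not taken from the patent): the L1 data need not also reside in the L2, but a directory of L1 tags kept alongside the L2 remains inclusive, so external snoops can be filtered without probing the L1 data arrays.

```c
/* Hypothetical sketch: non-inclusive data, inclusive L1 tag directory.
 * Names, sizes, and the address split are illustrative only. */
#include <stdbool.h>
#include <stdint.h>

#define L1_SETS 64
#define L1_WAYS 8

typedef struct {
    uint64_t tag;
    bool     valid;
} dir_entry_t;

/* Directory of L1 tags maintained next to the L2: every line cached in
 * the L1 has its tag recorded here, even if its data was evicted from
 * the L2 (data are non-inclusive, the directory is inclusive). */
static dir_entry_t l1_dir[L1_SETS][L1_WAYS];

/* On an external snoop, the L2 consults the directory instead of
 * probing the L1 data arrays. */
bool snoop_hits_l1(uint64_t addr)
{
    uint64_t tag = addr >> 12;                   /* 64-byte lines, 64 sets */
    unsigned set = (addr >> 6) & (L1_SETS - 1);
    for (unsigned w = 0; w < L1_WAYS; w++)
        if (l1_dir[set][w].valid && l1_dir[set][w].tag == tag)
            return true;                         /* L1 must be probed      */
    return false;                                /* safe to skip the L1    */
}
```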
Abstract:
A melt-kneading method for a filling material-containing resin or elastomer includes: a step of preparing a filling material serving as a filler and a resin or elastomer comprising an incompatible blend; and a step of introducing the filling material-containing resin or elastomer into a material feed part provided at an end of a cylindrical melt-kneading part that has a heater and is provided with a screw, and then melt-kneading the filling material-containing resin or elastomer under conditions where the rotation speed of the screw is about 600 rpm to about 3,000 rpm and the shear rate is about 900 to about 4,500 sec⁻¹, thereby forming a co-continuous structure comprising the incompatible blend.
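As a rough sanity check (not stated in the abstract), the usual estimate of shear rate in a screw channel, with N the screw speed in revolutions per second, reproduces the quoted range for an assumed geometry of roughly D/h ≈ 29, for example a hypothetical screw diameter D = 43 mm and channel depth h = 1.5 mm:

$$
\dot{\gamma} \approx \frac{\pi D N}{h}, \qquad
\frac{\pi \cdot 0.043\,\mathrm{m} \cdot 10\,\mathrm{s^{-1}}}{0.0015\,\mathrm{m}} \approx 900\ \mathrm{s^{-1}}, \qquad
\frac{\pi \cdot 0.043\,\mathrm{m} \cdot 50\,\mathrm{s^{-1}}}{0.0015\,\mathrm{m}} \approx 4{,}500\ \mathrm{s^{-1}}.
$$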
Abstract:
In general, in one aspect, the disclosure describes a method that includes receiving multiple ingress Internet Protocol packets, each of the multiple ingress Internet Protocol packets having an Internet Protocol header and a Transmission Control Protocol segment having a Transmission Control Protocol header and a Transmission Control Protocol payload, where the multiple packets belong to a same Transmission Control Protocol/Internet Protocol flow. The method also includes preparing an Internet Protocol packet having a single Internet Protocol header and a single Transmission Control Protocol segment having a single Transmission Control Protocol header and a single payload formed by a combination of the Transmission Control Protocol segment payloads of the multiple Internet Protocol packets. The method further includes generating a signal that causes receive processing of the Internet Protocol packet.
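A simplified C sketch of the coalescing step, assuming flat header structures and helper names (same_flow, coalesce) invented here for illustration rather than taken from the disclosure: payloads of same-flow segments are concatenated behind one IP header and one TCP header before receive processing is signaled.

```c
/* Illustrative sketch of coalescing same-flow TCP/IP packets into one
 * larger packet before receive processing. */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

struct pkt {
    uint32_t src_ip, dst_ip;      /* IP header fields of interest  */
    uint16_t src_port, dst_port;  /* TCP header fields of interest */
    uint32_t seq;                 /* TCP sequence number           */
    uint32_t payload_len;
    uint8_t *payload;
};

static int same_flow(const struct pkt *a, const struct pkt *b)
{
    return a->src_ip == b->src_ip && a->dst_ip == b->dst_ip &&
           a->src_port == b->src_port && a->dst_port == b->dst_port;
}

/* Build one packet whose single payload is the concatenation of the
 * payloads of up to n same-flow ingress packets. */
struct pkt coalesce(const struct pkt *in, size_t n)
{
    struct pkt out = in[0];                 /* reuse the first headers */
    size_t total = 0;
    for (size_t i = 0; i < n; i++)
        total += in[i].payload_len;
    out.payload = malloc(total);

    size_t off = 0;
    for (size_t i = 0; i < n; i++) {
        if (i > 0 && !same_flow(&in[0], &in[i]))
            break;                          /* only same-flow packets combine;
                                             * a real check would also verify
                                             * sequence-number continuity    */
        memcpy(out.payload + off, in[i].payload, in[i].payload_len);
        off += in[i].payload_len;
    }
    out.payload_len = (uint32_t)off;
    return out;                             /* then signal RX processing */
}
```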
Abstract:
A system and method for producing interior trim components is described. The system includes producing a substrate material from either metal or reinforced polymer materials. The substrate is then coated with adhesive. The adhesive-coated substrate is then covered with a thick film containing artwork or a piece of natural wood veneer, which is applied using a mechanism that utilizes a membrane to apply hydrostatic pressure to the thick film or the piece of natural wood veneer over the substrate in the presence of heat.
Abstract:
The invention discloses a two-way adjustable shock-absorbing backpack comprising a backpack main body and back pads arranged on one side of the backpack main body. Two back pads are provided and are symmetrically arranged on the backpack main body. A back pad pocket with a top opening is formed between each back pad and the backpack main body, a support fabric bag is sewn into the back pad pocket, and a support rod is inserted into the support fabric bag. The side of the backpack main body above the support fabric bag is provided with a fixed fabric belt. In the invention, the shock-absorption adjustment of the top of the backpack is realized by adjusting the length of the secondary braces at the upper adjustment button, and the shock-absorbing stroke adjustment of the elastic band is realized by adjusting the lower adjustment button.
Abstract:
Methods and apparatus to schedule applications in heterogeneous multiprocessor computing platforms are described. In one embodiment, information regarding performance (e.g., execution performance and/or power consumption performance) of a plurality of processor cores of a processor is stored (and tracked) in counters and/or tables. Logic in the processor determines which processor core should execute an application based on the stored information. Other embodiments are also claimed and disclosed.
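A hedged C sketch of the selection step, with the counter layout and the instructions-per-energy scoring rule assumed for illustration (the abstract does not prescribe them): logic consults the stored per-core performance and power figures and returns the core on which the application should run.

```c
/* Sketch: pick a core for an application from per-core performance and
 * power counters.  Table contents and the scoring rule are illustrative. */
#include <stdint.h>
#include <stdio.h>

struct core_stats {
    uint64_t retired_instructions;  /* tracked execution performance */
    uint64_t energy_microjoules;    /* tracked power consumption     */
};

/* Choose the core with the best instructions-per-energy ratio. */
int pick_core(const struct core_stats *c, int ncores)
{
    int best = 0;
    double best_score = 0.0;
    for (int i = 0; i < ncores; i++) {
        double score = (double)c[i].retired_instructions /
                       (double)(c[i].energy_microjoules + 1);
        if (score > best_score) { best_score = score; best = i; }
    }
    return best;
}

int main(void)
{
    struct core_stats cores[4] = {
        { 9000000, 200000 },   /* big core: fast but power hungry    */
        { 9100000, 210000 },
        { 4000000,  50000 },   /* small core: slower, more efficient */
        { 4100000,  52000 },
    };
    printf("schedule on core %d\n", pick_core(cores, 4));
    return 0;
}
```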
Abstract:
The present document provides a method and a wireless device for implementing route transmission based on a single IPv6 address prefix. The method includes: when a wireless device succeeds in IPv6-based dialing and obtains one 64-bit-long IPv6 address prefix from a network side, the wireless device first sets apart a 126-bit IPv6 address prefix from the prefix, then allocates the 126-bit IPv6 address prefix to a WAN interface, allocates the 64-bit-long IPv6 address prefix to a LAN interface, and notifies a user terminal connected to the LAN interface of the IPv6 prefix of the LAN interface, so that the user terminal connected to the LAN interface generates its own IPv6 address through a stateless address auto-configuration mechanism for communication. With the technical solutions of the present document, in an IPv4/IPv6 dual-stack mode, the IPv4 and IPv6 protocol stacks operate normally, and radio resource consumption is reduced.
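A small C sketch of the address handling, using example prefix values and an invented interface identifier (none are taken from the document): the delegated /64 is kept for the LAN and advertised for stateless autoconfiguration, a /126 sharing the same leading bits is used on the WAN side, and a LAN host forms its own address from the advertised prefix plus a 64-bit interface identifier.

```c
/* Sketch of the prefix split and SLAAC address formation; address
 * values and the interface identifier are illustrative assumptions. */
#include <arpa/inet.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    struct in6_addr lan_prefix;                 /* the delegated /64   */
    inet_pton(AF_INET6, "2001:db8:1:2::", &lan_prefix);

    /* The WAN /126 shares the same leading bits; the remaining bits of
     * the block are simply left zero here. */
    struct in6_addr wan_prefix = lan_prefix;

    /* Host side of SLAAC: append a 64-bit interface identifier to the
     * advertised /64 prefix. */
    struct in6_addr host = lan_prefix;
    unsigned char iid[8] = { 0x02, 0x11, 0x22, 0xff, 0xfe, 0x33, 0x44, 0x55 };
    memcpy(&host.s6_addr[8], iid, 8);

    char buf[INET6_ADDRSTRLEN];
    printf("WAN prefix: %s/126\n", inet_ntop(AF_INET6, &wan_prefix, buf, sizeof buf));
    printf("LAN prefix: %s/64\n",  inet_ntop(AF_INET6, &lan_prefix, buf, sizeof buf));
    printf("host SLAAC: %s\n",     inet_ntop(AF_INET6, &host, buf, sizeof buf));
    return 0;
}
```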
Abstract:
A method, computer readable medium, and system are disclosed. In one embodiment, the method comprises setting a quality of service (QoS) priority level value for one or more computer system platform resources, other than a central processor core, relating to a task running on the computer system, and determining whether the one or more computer system platform resources will be allocated to the task based on the QoS priority level setting.
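A hedged C sketch of the idea, with resource names, priority levels, and the threshold rule chosen for illustration only: a per-task QoS priority value is recorded for platform resources other than the processor core, and allocation of each resource to the task is decided from that setting.

```c
/* Sketch: a per-task QoS priority drives whether a non-CPU platform
 * resource (e.g., cache space or memory bandwidth) is granted to the
 * task.  Names, levels, and the threshold rule are illustrative. */
#include <stdbool.h>
#include <stdio.h>

enum platform_resource { RES_LLC_WAYS, RES_MEM_BANDWIDTH, RES_COUNT };

struct task_qos {
    int priority[RES_COUNT];        /* QoS priority level per resource */
};

/* Grant the resource only if the task's priority meets the level the
 * platform currently requires for that resource. */
bool allocate(const struct task_qos *t, enum platform_resource r,
              int required_level)
{
    return t->priority[r] >= required_level;
}

int main(void)
{
    struct task_qos video = { .priority = { [RES_LLC_WAYS] = 3,
                                            [RES_MEM_BANDWIDTH] = 1 } };
    printf("LLC ways granted: %d\n", allocate(&video, RES_LLC_WAYS, 2));
    printf("bandwidth granted: %d\n", allocate(&video, RES_MEM_BANDWIDTH, 2));
    return 0;
}
```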
Abstract:
Methods and apparatus to provide for power consumption reduction in memories (such as cache memories) are described. In one embodiment, a virtual tag is used to determine whether to access a cache way. The virtual tag access and comparison may be performed earlier in the read pipeline than the actual tag access or comparison. In another embodiment, a speculative way hit may be used based on pre-ECC partial tag match to wake up a subset of data arrays. Other embodiments are also described.
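A brief C sketch of the speculative way selection, with the tag widths and partial-tag extraction assumed for illustration: a small partial tag is compared ahead of the full, ECC-protected tag compare, and only the data ways whose partial tags match are woken up.

```c
/* Sketch: use a small partial tag, compared before the full
 * (ECC-protected) tag is available, to decide which data ways to wake.
 * Field widths and the partial-tag extraction are illustrative. */
#include <stdbool.h>
#include <stdint.h>

#define WAYS 8

struct way_meta {
    uint16_t partial_tag;   /* low-order tag bits, checked early      */
    uint64_t full_tag;      /* full tag, checked later with ECC       */
    bool     valid;
};

/* Returns a bitmask of ways whose partial tag matches; only these data
 * arrays need to be powered for the speculative read. */
uint8_t speculative_way_mask(const struct way_meta *set, uint64_t tag)
{
    uint16_t ptag = (uint16_t)(tag & 0x3ff);     /* 10-bit partial tag */
    uint8_t mask = 0;
    for (int w = 0; w < WAYS; w++)
        if (set[w].valid && set[w].partial_tag == ptag)
            mask |= (uint8_t)(1u << w);
    return mask;        /* the full tag compare later confirms the hit */
}
```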
Abstract:
An apparatus, method, and system are disclosed. In one embodiment, the apparatus includes a cache memory that has a number of sets. Each of the sets in the cache memory has several cache lines. The apparatus also includes at least one process resource table. The process resource table maintains a cache line occupancy count for a number of cache lines. Specifically, the cache line occupancy count describes the number of cache lines in the cache memory storing information utilized by a process running on a computer system. Additionally, the process resource table stores occupancy counts for fewer cache lines than the total number of cache lines in the cache memory.
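A compact C sketch of such a table, with entry counts and names invented for illustration: the table keeps, for each tracked process, a count of the cache lines it currently occupies, and it deliberately has far fewer entries than the cache has lines.

```c
/* Sketch: a small table tracking, per process, how many cache lines
 * that process currently occupies.  Names and sizes are illustrative. */
#include <stddef.h>
#include <stdint.h>

#define PRT_ENTRIES 16           /* far fewer than total cache lines    */

struct prt_entry {
    uint32_t process_id;
    uint32_t occupancy;          /* lines in the cache owned by the pid */
    int      in_use;
};

static struct prt_entry prt[PRT_ENTRIES];

static struct prt_entry *prt_lookup(uint32_t pid)
{
    for (size_t i = 0; i < PRT_ENTRIES; i++)
        if (prt[i].in_use && prt[i].process_id == pid)
            return &prt[i];
    for (size_t i = 0; i < PRT_ENTRIES; i++)
        if (!prt[i].in_use) {
            prt[i] = (struct prt_entry){ .process_id = pid, .in_use = 1 };
            return &prt[i];
        }
    return NULL;                 /* table full: occupancy not tracked   */
}

/* Called when a line owned by pid is filled into or evicted from the cache. */
void prt_on_fill(uint32_t pid)
{
    struct prt_entry *e = prt_lookup(pid);
    if (e) e->occupancy++;
}

void prt_on_evict(uint32_t pid)
{
    struct prt_entry *e = prt_lookup(pid);
    if (e && e->occupancy) e->occupancy--;
}
```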