    3. MICROPROCESSOR WITH COMPARE OPERATION OF COMPOSITE OPERANDS
    Invention Publication (Lapsed)

    Publication No.: EP0795154A4

    Publication Date: 1999-03-10

    Application No.: EP95943654

    Filing Date: 1995-12-01

    Applicant: INTEL CORP

    Abstract: A processor includes a decoder (202) coupled to receive a control signal (207). The control signal has a first source address (602), a second source address (603), a destination address (605), and an operation field (601). The first source address corresponds to a first location, and the second source address corresponds to a second location. The destination address corresponds to a third location. The operation field indicates that a type of packed data compare operation is to be performed. The processor includes a circuit coupled to the decoder for comparing first packed data stored at the first location with second packed data stored at the second location and for communicating corresponding result packed data to the third location.
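
    The compare described above is an element-wise SIMD operation: each element of the result is a mask derived from comparing the corresponding elements of the two packed sources. Below is a minimal C sketch that emulates such a byte-wise packed compare in scalar code; the function name, element width, and equality predicate are illustrative assumptions, not details taken from the patent.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical model of a packed "compare for equality" operation:
 * each 8-bit element of the result is set to all ones (0xFF) when the
 * corresponding elements of the two 64-bit packed sources are equal,
 * and to all zeros otherwise. */
static uint64_t packed_cmpeq_bytes(uint64_t src1, uint64_t src2)
{
    uint64_t result = 0;
    for (int i = 0; i < 8; i++) {
        uint8_t a = (uint8_t)(src1 >> (8 * i));
        uint8_t b = (uint8_t)(src2 >> (8 * i));
        uint64_t mask = (a == b) ? 0xFFu : 0x00u;
        result |= mask << (8 * i);
    }
    return result;
}

int main(void)
{
    uint64_t a = 0x1122334455667788ULL;
    uint64_t b = 0x1122FF44556677FFULL;
    /* Matching byte lanes yield 0xFF, mismatching lanes yield 0x00. */
    printf("result = %016llx\n",
           (unsigned long long)packed_cmpeq_bytes(a, b));
    return 0;
}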

    4. LOCALIZED PERFORMANCE THROTTLING TO REDUCE IC POWER CONSUMPTION
    Invention Publication (Lapsed)

    Publication No.: EP1023656A4

    Publication Date: 2002-07-03

    Application No.: EP97944556

    Filing Date: 1997-09-29

    Applicant: INTEL CORP

    IPC Classification: G06F1/32

    Abstract: The power consumed within an integrated circuit (IC) is reduced by throttling the performance of particular functional units (105) within the IC. The recent utilization levels of particular functional units within an IC are monitored (108), for example, by computing each functional unit's average duty cycle over its recent operating history (106). If this activity level (109) is greater than a threshold, the functional unit is operated in a reduced-power mode (110). The threshold value is set large enough to allow short bursts of high utilization to occur. An IC can dynamically make the tradeoff between high-speed operation and low-power operation by throttling back the performance of functional units when their utilization exceeds a sustainable level. This dynamic power/speed tradeoff can be optimized across multiple functional units within an IC or among multiple ICs within a system, and it can be altered by providing software control over the throttling parameters.
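
    The throttling policy amounts to a moving-average duty-cycle monitor with a threshold test. The C sketch below models that policy in software; the window length, threshold value, and all identifiers are assumptions made for illustration, not parameters from the patent.

#include <stdbool.h>
#include <stdio.h>

#define WINDOW 16          /* length of the "recent operating history" */

struct functional_unit {
    bool active_history[WINDOW];  /* 1 = unit busy in that cycle        */
    int  head;                    /* circular-buffer index              */
    double threshold;             /* sustainable utilization, e.g. 0.75 */
    bool reduced_power;           /* current operating mode             */
};

/* Record one cycle of activity and update the operating mode. */
static void monitor_cycle(struct functional_unit *fu, bool busy)
{
    fu->active_history[fu->head] = busy;
    fu->head = (fu->head + 1) % WINDOW;

    int busy_cycles = 0;
    for (int i = 0; i < WINDOW; i++)
        busy_cycles += fu->active_history[i] ? 1 : 0;

    double duty_cycle = (double)busy_cycles / WINDOW;
    /* Short bursts stay below the threshold; sustained activity trips it. */
    fu->reduced_power = (duty_cycle > fu->threshold);
}

int main(void)
{
    struct functional_unit alu = { .threshold = 0.75 };
    bool was_throttled = false;
    for (int cycle = 0; cycle < 64; cycle++) {
        monitor_cycle(&alu, (cycle % 8) != 0);   /* ~87% sustained load */
        if (alu.reduced_power != was_throttled) {
            printf("cycle %d: %s\n", cycle,
                   alu.reduced_power ? "entering reduced-power mode"
                                     : "returning to full performance");
            was_throttled = alu.reduced_power;
        }
    }
    return 0;
}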

    6. A SYSTEM FOR SIGNAL PROCESSING USING MULTIPLY-ADD OPERATIONS
    Invention Publication (Lapsed)

    Publication No.: EP0870224A4

    Publication Date: 1999-02-10

    Application No.: EP96945274

    Filing Date: 1996-12-24

    Applicant: INTEL CORP

    Abstract: A computer system includes a multimedia input device which generates an audio or video input signal and a processor coupled to the multimedia input device. The system further includes a storage device coupled to the processor and having stored therein a signal processing routine for multiplying and accumulating input values representative of the audio or video input signal. The signal processing routine, when executed by the processor, causes the processor to perform several steps. These steps include performing a packed multiply-add on a first set of values packed into a first source and a second set of values packed into a second source, each representing input signals, to generate a packed intermediate result. The packed intermediate result is added to an accumulator to generate a packed accumulated result in the accumulator. These steps may be iterated with the first set of values and successive portions of the second set of values, accumulating into the accumulator to generate the packed accumulated result. Subsequently, the packed accumulated result in the accumulator is unpacked into a first result and a second result, and the first result and the second result are added together to generate an accumulated result.
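
    The flow in the abstract (packed multiply-add, packed accumulation, then an unpack-and-fold of the accumulator) can be emulated in plain C as shown below. The lane widths, function names, and sample data are illustrative assumptions; real hardware would perform each step as a single packed instruction.

#include <stdint.h>
#include <stdio.h>

/* Multiply four 16-bit lanes pairwise and add adjacent products,
 * yielding two packed 32-bit intermediate results. */
static void packed_multiply_add(const int16_t a[4], const int16_t b[4],
                                int32_t out[2])
{
    out[0] = (int32_t)a[0] * b[0] + (int32_t)a[1] * b[1];
    out[1] = (int32_t)a[2] * b[2] + (int32_t)a[3] * b[3];
}

int main(void)
{
    /* Packed input samples and coefficients (e.g. from an audio filter). */
    const int16_t samples[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };
    const int16_t coeffs[8]  = { 1, 1, 1, 1, 2, 2, 2, 2 };

    int32_t accumulator[2] = { 0, 0 };      /* packed accumulated result  */

    for (int i = 0; i < 8; i += 4) {
        int32_t intermediate[2];
        packed_multiply_add(&samples[i], &coeffs[i], intermediate);
        accumulator[0] += intermediate[0];  /* packed add into accumulator */
        accumulator[1] += intermediate[1];
    }

    /* Unpack the accumulator and add its two halves for the final sum. */
    int32_t result = accumulator[0] + accumulator[1];
    printf("accumulated result = %d\n", result);  /* 1+2+3+4 + 2*(5+6+7+8) */
    return 0;
}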

    7. CACHE HIERARCHY MANAGEMENT WITH LOCALITY HINTS FOR DIFFERENT CACHE LEVELS
    Invention Publication (Lapsed)

    Publication No.: EP1012723A4

    Publication Date: 2002-11-20

    Application No.: EP97953136

    Filing Date: 1997-12-12

    Applicant: INTEL CORP

    Inventor: MITTAL MILLIND

    IPC Classification: G06F12/08 G06F13/00

    Abstract: A computer system and method in which allocation of a cache memory (21a, 22a) is managed by utilizing a locality hint value (17, 18), included within an instruction (19), which controls whether a cache allocation is to be made. The locality hint value is based on spatial and/or temporal locality for a data access and may be assigned to each level of a cache hierarchy where allocation control is desired. The locality hint value may be used to identify the lowest level at which management of cache allocation is desired, with cache allocated at that level and at any higher level or levels. If the locality hint identifies a particular access for data as temporal or non-temporal with respect to a particular cache level, the particular access may be determined to be temporal or non-temporal with respect to the higher and lower cache levels.
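
    A rough present-day analogue of per-access locality hints is GCC/Clang's __builtin_prefetch, whose third argument (0 through 3) expresses expected temporal locality and so influences how aggressively a line is retained in the cache hierarchy. The sketch below uses it to contrast a reused array with a streamed one; this is an analogy to the idea in the abstract, not the mechanism the patent claims.

#include <stddef.h>
#include <stdio.h>

#define N 1024

/* Sum an array that will be reused soon: hint high temporal locality. */
static long sum_reused(const int *data, size_t n)
{
    long total = 0;
    for (size_t i = 0; i < n; i++) {
        if (i + 16 < n)
            __builtin_prefetch(&data[i + 16], 0, 3);  /* keep in all levels */
        total += data[i];
    }
    return total;
}

/* Stream over an array touched only once: hint non-temporal access. */
static long sum_streamed(const int *data, size_t n)
{
    long total = 0;
    for (size_t i = 0; i < n; i++) {
        if (i + 16 < n)
            __builtin_prefetch(&data[i + 16], 0, 0);  /* avoid cache pollution */
        total += data[i];
    }
    return total;
}

int main(void)
{
    static int data[N];
    for (int i = 0; i < N; i++)
        data[i] = i;
    printf("%ld %ld\n", sum_reused(data, N), sum_streamed(data, N));
    return 0;
}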

    8. CONTROLLING MEMORY ACCESS ORDERING IN A MULTI-PROCESSING SYSTEM
    Invention Publication (Lapsed)

    Publication No.: EP1008053A4

    Publication Date: 2001-12-19

    Application No.: EP97951664

    Filing Date: 1997-12-12

    Applicant: INTEL CORP

    Inventor: MITTAL MILLIND

    Abstract: As shown in the Figure, a technique for controlling memory access ordering in a multi-processing system (11) in which a sequence of accesses to acquire, access, and release a shared space of memory (15) is strictly adhered to by the use of two specialized instructions for controlling memory (15) access. Two instructions, denoted MFDA (Memory Fence Directional - Acquire) and MFDR (Memory Fence Directional - Release), are utilized to control the ordering. The MFDA instruction ensures that all previous accesses to the specified address (typically to a lock controlling access to the shared space (15)) become visible to other processors before all future accesses are permitted. The MFDR instruction ensures that all previous accesses become visible to other processors before any future accesses to the specified address.
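
    In today's terms, the acquire/release discipline that MFDA and MFDR enforce around the lock can be expressed with C11 acquire and release atomics, as in the sketch below. The mapping is an analogy for illustration; the identifiers are invented and the code does not implement the patented instructions themselves.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_bool lock_flag = false;   /* the lock controlling access     */
static int shared_space[4];             /* the shared region it protects   */

static void acquire_lock(void)
{
    /* Spin until the lock is ours; acquire ordering ensures later accesses
     * to shared_space are not observed before the lock is held. */
    while (atomic_exchange_explicit(&lock_flag, true, memory_order_acquire))
        ;  /* busy-wait */
}

static void release_lock(void)
{
    /* Release ordering makes all prior writes to shared_space visible to
     * the next processor that acquires the lock. */
    atomic_store_explicit(&lock_flag, false, memory_order_release);
}

int main(void)
{
    acquire_lock();                 /* acquire, then access, then release  */
    shared_space[0] = 42;
    release_lock();
    printf("%d\n", shared_space[0]);
    return 0;
}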