Abstract:
A method and system for allowing a processor or I/O master to address more system memory than physically exists are described. A Compressed Memory Management Unit (CMMU) may keep least recently used pages compressed, and most recently and/or frequently used pages uncompressed in physical memory. The CMMU translates system addresses into physical addresses, and may manage the compression and/or decompression of data at the physical addresses as required. The CMMU may provide data to be compressed or decompressed to a compression/decompression engine. In some embodiments, the data to be compressed or decompressed may be provided to a plurality of compression/decompression engines that may be configured to operate in parallel. The CMMU may pass the resulting physical address to the system memory controller to access the physical memory. A CMMU may be integrated in a processor, a system memory controller or elsewhere within the system.
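To make the page-management idea above concrete, the following Python sketch models a compressed-memory scheme along the lines the abstract describes: most recently used pages stay uncompressed, and the least recently used page is compressed when the uncompressed pool fills. This is an illustrative software model only, not the patented CMMU; zlib stands in for the hardware compression/decompression engine, and the class, method, and parameter names are hypothetical.

```python
# Illustrative model (not the patented CMMU design): keep MRU pages
# uncompressed, compress the LRU page when the uncompressed pool is full.
import zlib
from collections import OrderedDict

PAGE_SIZE = 4096

class CompressedMemoryModel:
    def __init__(self, max_uncompressed_pages):
        self.max_uncompressed = max_uncompressed_pages
        self.uncompressed = OrderedDict()   # page number -> bytearray (LRU order)
        self.compressed = {}                # page number -> compressed bytes

    def _evict_lru(self):
        # Compress the least recently used page to free an uncompressed slot.
        page_no, data = self.uncompressed.popitem(last=False)
        self.compressed[page_no] = zlib.compress(bytes(data))

    def _fetch(self, page_no):
        # Return an uncompressed page, decompressing it on demand.
        if page_no in self.uncompressed:
            self.uncompressed.move_to_end(page_no)       # mark most recently used
        else:
            if len(self.uncompressed) >= self.max_uncompressed:
                self._evict_lru()
            packed = self.compressed.pop(page_no, None)
            data = bytearray(zlib.decompress(packed)) if packed else bytearray(PAGE_SIZE)
            self.uncompressed[page_no] = data
        return self.uncompressed[page_no]

    def read(self, system_addr):
        page = self._fetch(system_addr // PAGE_SIZE)
        return page[system_addr % PAGE_SIZE]

    def write(self, system_addr, value):
        page = self._fetch(system_addr // PAGE_SIZE)
        page[system_addr % PAGE_SIZE] = value

# Usage: address more pages than fit uncompressed in the modeled memory.
mem = CompressedMemoryModel(max_uncompressed_pages=4)
for p in range(16):
    mem.write(p * PAGE_SIZE, p)
assert mem.read(0) == 0 and mem.read(15 * PAGE_SIZE) == 15
```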
Abstract:
A set of concrete pavers is formed by a plurality of pavers arranged in a paver row, and a plurality of paver rows arranged to form the set. A number of parallel, adjacent rows form linear or meandering joints in the end-paver region of the rows when the rows are laid together. At least one section of the upper delimiting surface of the pavers is convexly cambered towards the outside, and projections on the side faces of the pavers form support elements for the adjacent pavers. The support elements, in combination with projections on adjacent pavers, create spaces that act as water-drainage openings in the joint regions. The pavers in each set include not only paving blocks with a length-to-height ratio of less than or equal to 4, but also at least one paving slab with a length-to-height ratio greater than 4.
Abstract:
A panel arrangement for a domestic appliance has a panel which can be fitted to the domestic appliance such that it is visible from the outside. Indicator and/or operator controls are mounted on the panel. A printed circuit board comprising control electronics and, where applicable, power electronics can be arranged in the interior of the domestic appliance. The indicator and/or operator controls are connected to the printed circuit board. At least some of the indicator and/or operator controls are mounted on the panel substantially independently of the layout of the printed circuit board and are connected to the printed circuit board via at least one flexible supply line.
Abstract:
An integrated memory controller (IMC) including MemoryF/X Technology, which includes data compression and decompression engines for improved performance. The memory controller (IMC) of the present invention preferably selectively uses a combination of lossless, lossy, and no-compression modes. Data transfers to and from the integrated memory controller of the present invention can thus be in a plurality of formats: normal (non-compressed) or compressed, with compressed data being lossy, lossless, or a combination of lossy and lossless. The invention also indicates preferred methods for specific compression and decompression of particular data formats such as digital video, 3D textures, and image data using a combination of novel lossy and lossless compression algorithms in block- or span-addressable formats. To improve latency and reduce the performance degradation normally associated with compression and decompression techniques, the MemoryF/X Technology encompasses multiple novel techniques such as: 1) parallel lossless compression/decompression; 2) selectable compression modes such as lossless, lossy, or no compression; 3) a priority compression mode; 4) data cache techniques; 5) variable compression block sizes; 6) compression reordering; and 7) unique address translation, attribute, and address caches. The parallel compression and decompression algorithm allows high-speed parallel compression and high-speed parallel decompression operation. The IMC also preferably uses a special memory allocation and directory technique for reduced table size and low-latency operation. The integrated data compression and decompression capabilities of the IMC remove system bottlenecks and increase performance, allowing lower-cost systems due to smaller data storage, reduced bandwidth requirements, and reduced power and noise.
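As an illustration of the selectable-compression-mode idea described above, the sketch below routes a data block to a no-compression, lossless, or lossy path based on a data-type tag. This is a simplified model only, assuming zlib as a stand-in for a lossless engine and crude 4-bit quantization as a stand-in for a lossy mode for image-like data; neither is the MemoryF/X algorithm, and the tags and the function name compress_block are hypothetical.

```python
# Toy dispatcher for the idea of selectable compression modes
# (none / lossless / lossy); not the MemoryF/X Technology itself.
import zlib

def compress_block(data: bytes, data_type: str) -> tuple[str, bytes]:
    if data_type in ("texture", "image"):
        # Lossy stand-in: keep only the high 4 bits of each byte, then pack
        # pairs of 4-bit values into single bytes (2:1, not reversible).
        nibbles = [b >> 4 for b in data]
        if len(nibbles) % 2:
            nibbles.append(0)
        packed = bytes((nibbles[i] << 4) | nibbles[i + 1]
                       for i in range(0, len(nibbles), 2))
        return "lossy", packed
    if data_type == "incompressible":
        return "none", data                 # pass through untouched
    return "lossless", zlib.compress(data)  # default: lossless path

mode, out = compress_block(b"AAAABBBBCCCC" * 32, "generic")
print(mode, len(out))   # "lossless", far fewer than the 384 input bytes
```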
Abstract:
A parallel decompression system and method that decompresses input compressed data in one or more decompression cycles, with a plurality of tokens typically being decompressed in each cycle in parallel. A parallel decompression engine may include an input for receiving compressed data, a history window, and a plurality of decoders for examining and decoding a plurality of tokens from the compressed data in parallel in a series of decompression cycles. Several devices are described that may include the parallel decompression engine, including intelligent devices, network devices, adapters and other network connection devices, consumer devices, set-top boxes, digital-to-analog and analog-to-digital converters, digital data recording, reading and storage devices, optical data recording, reading and storage devices, solid state storage devices, processors, bus bridges, memory modules, and cache controllers.
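The following sketch models, under simplifying assumptions, the notion of decoding several tokens per decompression cycle: each decoder first determines its token's output size independently, the sizes then fix each token's write position within the cycle, and the output is produced into a shared history window. The token format, the DECODERS count, and the staging are illustrative assumptions, not the patented parallel decompression engine.

```python
# Simplified model of decoding several tokens per "cycle". Tokens are either
# ("lit", byte) or ("copy", offset, length) back-references into the history.
DECODERS = 4  # tokens examined per decompression cycle in this model

def decompress(tokens):
    history = bytearray()
    for cycle_start in range(0, len(tokens), DECODERS):
        cycle = tokens[cycle_start:cycle_start + DECODERS]
        # Stage 1 (independent per decoder): determine each token's output size.
        sizes = [1 if t[0] == "lit" else t[2] for t in cycle]
        # Stage 2: reserve space for the cycle's combined output; each token's
        # write position is the running sum of the sizes before it.
        pos = len(history)
        history.extend(b"\x00" * sum(sizes))
        # Stage 3: produce output. Copies may reference bytes written earlier
        # in the same cycle; here they are resolved sequentially.
        for token, size in zip(cycle, sizes):
            if token[0] == "lit":
                history[pos] = token[1]
            else:
                offset = token[1]
                for i in range(size):
                    history[pos + i] = history[pos + i - offset]
            pos += size
    return bytes(history)

# Usage: "ab" as two literals, then a copy of the previous two bytes -> "abab".
print(decompress([("lit", ord("a")), ("lit", ord("b")), ("copy", 2, 2)]))
```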
Abstract:
The invention relates to a brick kit comprising a number of essentially prismatic concrete stone blocks and to a method for producing said stone blocks. According to the invention, to permit automatic alignment and fixation of the stone blocks in brickwork, the stone blocks (1, 10, 18, 19, 19′, 20) have a projection (2) and a depression (3) configured symmetrically in the centre of their upper and lower faces, respectively; the projections and depressions are configured with approximately the same shape and dimensions, and stone blocks that are stacked on top of one another can be adjusted and/or fixed in relation to one another by means of the mutual engagement of the projections (2) and depressions (3).
Abstract:
A device and a method for producing concrete paving stones. These paving stones can be made from coarse-grained concrete and a series of layers of colored concrete material. The layers of colored concrete material are placed in a container on top of a slider that is transversely displaceable. An additional slider and a closing member are disposed at the bottom of the container. The first slider can be moved to form a passage slot between the first slider and the housing, and the second slider can be moved to form a passage slot between the second slider and the housing. In addition, between the first slider and the second slider there is a first deflecting curve around which the colored concrete mixes, and between the second slider and the closing member there is a second deflecting curve for further mixing of the colored concrete. This colored concrete is then added to the coarse-grained concrete and compacted with it to create the paving stones.
Abstract:
The invention relates to a stone structure assembly comprising a plurality of stones, especially concrete blocks, having similar heights, upper and lower sides extending on parallel planes, and vertical side walls. In order to place the thirteen stone blocks pertaining to said assembly in such a way that they cannot be displaced and that similarly narrow spacing gaps are formed between said stones, two stones are shaped as prisms with the same surface area and eleven stones are shaped as trapeziums with differing surface areas. The stones in the stone structure assembly are configured as a single piece with a base section and a top section set back from the peripheral surface of the base section at least in certain areas.
Abstract:
A system and method for performing parallel data compression which processes stream data more than a single byte or symbol (character) at a time. The parallel compression engine modifies a single-stream dictionary-based (or history-table-based) data compression method, such as that described by Lempel and Ziv, to provide scalable, high-bandwidth compression. The parallel compression method examines a plurality of symbols in parallel, thus providing greatly increased compression performance. The method first involves receiving uncompressed data, wherein the uncompressed data comprises a plurality of symbols. The method maintains a history table comprising entries, wherein each entry comprises at least one symbol. The method operates to compare a plurality of symbols with entries in the history table in a parallel fashion, wherein this comparison produces compare results. The method then determines match information for each of the plurality of symbols based on the compare results. The step of determining match information involves determining zero or more matches of the plurality of symbols with each entry in the history table. The method then outputs compressed data in response to the match information.
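To illustrate the comparison of input symbols against history-table entries, the sketch below implements a simple LZ77-style matcher that examines a small group of symbols against every position in a sliding history window and emits literal or match tokens. In this software model the comparisons run in a loop and the group size (GROUP) caps the match length; a hardware realization would perform the comparisons in parallel. This is not the patented parallel compression algorithm, and the window size, group size, and token format are illustrative assumptions.

```python
# Simple LZ77-style matcher illustrating symbol-vs-history comparisons;
# not the patented parallel compression method.
WINDOW = 256      # history window size in this model
GROUP = 4         # symbols examined together per step in this model

def compress(data: bytes):
    tokens, i = [], 0
    while i < len(data):
        start = max(0, i - WINDOW)
        best_len, best_off = 0, 0
        # Compare the current group of symbols against each history position.
        for j in range(start, i):
            length = 0
            while (i + length < len(data) and length < GROUP
                   and data[j + length] == data[i + length]):
                length += 1
            if length > best_len:
                best_len, best_off = length, i - j
        if best_len >= 2:
            tokens.append(("copy", best_off, best_len))  # match information
            i += best_len
        else:
            tokens.append(("lit", data[i]))              # raw symbol
            i += 1
    return tokens

print(compress(b"abababab"))   # literals for "ab", then back-references
```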