Abstract:
A cooling assembly and method of cooling a heat-generating electronic component on a circuit board. A heat collector collects heat from the electronic component. A heat pipe transfers the heat to a location remote from the electronic component. A heat sink is mounted to the circuit board at the remote location. The heat sink has at least one groove formed on an underside thereof. The heat sink is mounted so that it overlies the heat pipe and the heat pipe is introduced into the groove, thereby securing the heat pipe between the heat sink and the circuit board.
Abstract:
The read latency caused by job start preparation of a future job is at least partly hidden within the current job by reading information for job start preparation of the future job integrated with the execution of the current job. Instructions for job start preparation are preferably instrumented (701) into the current job and executed (702), whenever possible, in parallel with the instructions of the current job. The integrated job start preparation may include table look-ups, register file updating, and instruction fetching and preparation. If the scheduled job order is allowed to change during execution, it is typically necessary to test (703) whether the next job is still valid before starting its execution, and to take appropriate actions (704; 705, 706) depending on the outcome of the test. In addition to reduced job start preparation time, unused slots in the instruction-parallel execution of the current job may be filled up by the added read instructions, thus providing more efficient utilization of the multiple functional execution units of the processor.
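A minimal sketch of the idea follows, under assumed job and instruction representations (the names Job, instrument and execute are illustrative, not taken from the abstract): preparation reads for the next job are slotted into unused issue positions of the current job, so their latency overlaps with useful work, and validity of the next job is re-checked before it is started.

```python
# Illustrative sketch only: interleave "job start preparation" reads for the
# next job into unused issue slots of the current job (step 701), execute them
# in parallel with the current job (702), then test validity (703).
from dataclasses import dataclass, field


@dataclass
class Job:
    name: str
    instructions: list                                # instructions of the job itself
    prep_reads: list = field(default_factory=list)    # table look-ups, fetches, ...


def instrument(current: Job, nxt: Job, issue_width: int = 4):
    """Fill unused slots of each issue cycle with prep reads for `nxt`."""
    prep = list(nxt.prep_reads)
    schedule = []
    for i in range(0, len(current.instructions), issue_width):
        slot = current.instructions[i:i + issue_width]
        while len(slot) < issue_width and prep:
            slot.append(prep.pop(0))                  # hide the read in the current job
        schedule.append(slot)
    for p in prep:                                    # anything left runs at the end
        schedule.append([p])
    return schedule


def execute(schedule, next_job_still_valid):
    for cycle, slot in enumerate(schedule):
        print(f"cycle {cycle}: " + ", ".join(slot))   # instruction-parallel execution
    if next_job_still_valid():                        # step 703
        print("next job valid -> start immediately (704)")
    else:
        print("next job invalid -> discard preparation, reschedule (705, 706)")


current = Job("A", ["add", "mul", "load"] * 2)
nxt = Job("B", ["sub"], prep_reads=["table look-up", "fetch first instr", "load regs"])
execute(instrument(current, nxt), next_job_still_valid=lambda: True)
```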
Abstract:
The invention is directed toward a multiprocessing system having multiple processing units. For at least one of the processing units in the multiprocessing system, a first job signal is assigned to the processing unit for speculative execution of a corresponding first job, and a further job signal is assigned to the processing unit for speculative execution of a corresponding further job. The speculative execution of said further job is initiated when the processing unit has completed execution of the first job. If desirable, even more job signals may be assigned to the processing unit for speculative execution. In this way, multiple job signals are assigned to the processing units of the processing system, and the processing units are allowed to execute a plurality of jobs speculatively while waiting for commit priority. By assigning multiple job signals for speculative execution by one or more processing units, the effects of variations in execution time between jobs are neutralized, and the overall performance of the processing system is substantially improved.
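The mechanism can be sketched as follows, with hypothetical class and method names (ProcessingUnit, execute_speculatively, commit_next are assumptions): each unit is assigned several job signals, executes them speculatively as soon as the previous one finishes, and buffers the results until it is granted commit priority.

```python
# Sketch under assumed semantics: multiple job signals per processing unit,
# speculative execution while waiting for commit priority.
from collections import deque


class ProcessingUnit:
    def __init__(self, uid, assigned_jobs):
        self.uid = uid
        self.pending = deque(assigned_jobs)   # multiple job signals assigned up front
        self.speculative = deque()            # executed, waiting for commit priority

    def execute_speculatively(self):
        # As soon as one job completes, start the next assigned job speculatively
        # instead of idling until commit priority arrives.
        while self.pending:
            job = self.pending.popleft()
            result = f"result-of-{job}"       # placeholder for real execution
            self.speculative.append((job, result))

    def commit_next(self):
        # Called when this unit is granted commit priority for its oldest job.
        job, result = self.speculative.popleft()
        print(f"unit {self.uid}: committed {job} -> {result}")


units = [ProcessingUnit(0, ["job1", "job3"]), ProcessingUnit(1, ["job2", "job4"])]
for u in units:
    u.execute_speculatively()
# A global commit order absorbs per-job execution-time variation:
for u in (units[0], units[1], units[0], units[1]):
    u.commit_next()
```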
Abstract:
Method and sensing module for sensing pollution of outside air. The sensing module (1) comprises an electrochemical sensing element (3) and a processor (2). A sensing module output signal is provided based on the measurement signal from the sensing element and a baseline signal level. The baseline signal level is adapted depending on two threshold levels (13-15). A pollution concentration value is determined from the measurement signal, and a classification level of air pollution is provided as the sensing module output signal. The classification level is determined using a plurality of classification threshold values and the pollution concentration value. The plurality of classification threshold values are dynamically adjustable.
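A rough sketch of such a processing chain is given below; the baseline adaptation rule and all numeric values are assumptions for illustration, not the patented algorithm. It shows a concentration derived from the measurement signal relative to an adapting baseline, then mapped onto a classification level via adjustable threshold values.

```python
# Illustrative sketch: adaptive baseline, concentration value, and
# classification against dynamically adjustable threshold values.
class SensingModule:
    def __init__(self, class_thresholds):
        self.baseline = None
        self.class_thresholds = list(class_thresholds)   # dynamically adjustable

    def update_baseline(self, signal, low=0.02, high=0.10):
        # Baseline adapts depending on two threshold levels (assumed rule).
        if self.baseline is None:
            self.baseline = signal
        elif signal < self.baseline * (1 - low):
            self.baseline = signal                            # track downward quickly
        elif signal < self.baseline * (1 + high):
            self.baseline += 0.01 * (signal - self.baseline)  # drift slowly upward

    def classify(self, signal):
        self.update_baseline(signal)
        concentration = max(0.0, signal - self.baseline)
        level = sum(concentration >= t for t in self.class_thresholds)
        return concentration, level                           # module output signal


module = SensingModule(class_thresholds=[0.1, 0.3, 0.6])
for s in [1.0, 1.05, 1.2, 1.7]:
    print(module.classify(s))
```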
Abstract:
The present invention discloses a processor system comprising a processor (31) and at least a first memory (32) and a second memory (34, 36, 37). The first memory (32) is normally faster than the second one, and means for memory allocation (38, 41, 48) perform periodic static allocation of data into the first memory (32). The means for memory allocation (38, 41, 48) are run-time updateable by software. An execution profiling section (39) is provided for continuously or intermittently providing execution data used for updating the means for memory allocation (38, 41, 48). According to the invention, the memory allocation is performed on a variable or record (49, 50) level. The means for memory allocation preferably use linking tables (41, 48) supporting dynamic software changes. The first memory (32) is preferably an SRAM, connected to the processor by a dedicated bus (33).
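A simplified sketch of this interplay is given below; the linking-table format and the class names are hypothetical. A profiling step counts accesses per variable or record, and a periodic reallocation step re-decides which records are placed in the fast memory, with the indirection through the table allowing the placement to be updated at run time.

```python
# Sketch under assumptions: profiling-driven periodic allocation of the
# hottest records into the fast memory, with a run-time updateable table.
class MemoryAllocator:
    def __init__(self, fast_capacity):
        self.fast_capacity = fast_capacity
        self.access_counts = {}          # execution profiling data
        self.linking_table = {}          # record name -> "fast" or "slow"

    def record_access(self, name):
        self.access_counts[name] = self.access_counts.get(name, 0) + 1
        return self.linking_table.get(name, "slow")   # where the access goes now

    def reallocate(self):
        # Periodic static allocation on a variable/record level:
        # the most frequently accessed records go into the fast memory (e.g. SRAM).
        hottest = sorted(self.access_counts, key=self.access_counts.get, reverse=True)
        self.linking_table = {
            name: ("fast" if i < self.fast_capacity else "slow")
            for i, name in enumerate(hottest)
        }


alloc = MemoryAllocator(fast_capacity=2)
for name in ["a", "b", "a", "c", "a", "b", "d"]:
    alloc.record_access(name)
alloc.reallocate()
print(alloc.linking_table)   # {'a': 'fast', 'b': 'fast', 'c': 'slow', 'd': 'slow'}
```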
Abstract:
A computer system performs coarse-grained dependency checking between concurrently executed jobs that share a memory. First and second jobs are defined, each having a set of shared individually addressable data items stored in a corresponding set of locations within a memory. The set of locations is partitioned into a set of data areas, wherein at least one of the data areas stores more than one of the data items. The first and second jobs are then run. To determine whether a collision has occurred between the first job and the second job, it is determined whether the first job accessed a same data area as was accessed by the second job, regardless of whether a same data item within the same data area was accessed by both the first job and the second job.
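A minimal sketch, assuming a simple division of addresses into fixed-size data areas (the partitioning scheme and area size are illustrative): a collision is declared whenever two jobs touched the same data area, even if they touched different data items inside it.

```python
# Coarse-grained dependency check: track data areas, not individual items.
AREA_SIZE = 64   # one data area covers several individually addressable items


def area_of(address):
    return address // AREA_SIZE


def run_job(accesses):
    """Record the set of data areas a job touches while it runs."""
    return {area_of(a) for a in accesses}


def collides(areas_job1, areas_job2):
    # Same area accessed by both jobs counts as a collision, regardless of
    # whether the same data item within that area was accessed.
    return bool(areas_job1 & areas_job2)


job1_areas = run_job([0, 8, 130])       # items 0 and 8 both lie in area 0
job2_areas = run_job([40, 300])         # item 40 is a *different* item in area 0
print(collides(job1_areas, job2_areas))  # True: coarse-grained collision detected
```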
Abstract:
A data processing system and method involving a data requesting element and a first memory element from which said data requesting element requests data are described. An example of such a system is a processor and a first-level cache memory, or two memories arranged in a hierarchy. A second memory element is provided between the first memory element and the requesting element. The second memory element stores data units read out of said first memory element, and performs a prefetch procedure, where said prefetch procedure contains both a sequential sub-procedure and a sub-procedure based on prefetch data identifiers associated with some of the data units.
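The two sub-procedures can be sketched as follows; the data-unit layout and class names are assumptions. On each request the intermediate memory element prefetches both the next sequential data unit and, where present, the unit named by a prefetch data identifier attached to the fetched unit.

```python
# Illustrative sketch of a second memory element with a combined
# sequential and identifier-based prefetch procedure.
class FirstMemory:
    def __init__(self, units):
        self.units = units     # id -> (payload, optional prefetch identifier)

    def read(self, uid):
        return self.units.get(uid)


class SecondMemory:
    def __init__(self, first):
        self.first = first
        self.store = {}        # data units read out of the first memory element

    def request(self, uid):
        if uid not in self.store:
            self.store[uid] = self.first.read(uid)
        payload, hint = self.store[uid]
        # Sequential sub-procedure: prefetch the next data unit.
        self.store.setdefault(uid + 1, self.first.read(uid + 1))
        # Identifier-based sub-procedure: prefetch the unit the identifier names.
        if hint is not None:
            self.store.setdefault(hint, self.first.read(hint))
        return payload


mem = SecondMemory(FirstMemory({1: ("A", 7), 2: ("B", None), 7: ("G", None)}))
print(mem.request(1), sorted(mem.store))   # A  [1, 2, 7]
```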
Abstract:
A fault-tolerant client-server system has a primary server, a backup server, and a client. The client sends a request to the primary server, which receives and processes the request, including sending a response to the client, independent of any backup processing. The response includes the primary server state information. The primary server also performs backup processing that includes periodically sending the primary server state information to the backup server. The client receives the response from the primary server and sends the primary server state information to the backup server. The primary server state information includes all request-reply pairs that the primary server has handled since a most recent transmission of primary server state information from the primary server to the backup server. The primary server's backup processing may be activated periodically based on a predetermined time interval. Alternatively, it may be activated when the primary server's memory for storing the primary server state information is filled to a predetermined amount.
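The message flow can be sketched as below, with assumed data structures and class names (Primary, Backup, Client are illustrative). The primary answers the client immediately and piggybacks its state information on the response, the client forwards that state to the backup, and the primary separately flushes its accumulated request-reply pairs to the backup when its buffer reaches a set amount.

```python
# Sketch of the flow: response to client carries state info; client forwards
# it to the backup; primary also flushes when its state buffer fills up.
class Backup:
    def __init__(self):
        self.state = []

    def absorb(self, pairs):
        # A real backup would deduplicate pairs arriving via both paths.
        self.state.extend(pairs)


class Primary:
    def __init__(self, backup, flush_limit=3):
        self.unsynced = []            # request-reply pairs since last transmission
        self.backup = backup
        self.flush_limit = flush_limit

    def handle(self, request):
        reply = f"reply({request})"
        self.unsynced.append((request, reply))
        response = (reply, list(self.unsynced))     # response carries state info
        if len(self.unsynced) >= self.flush_limit:  # memory-fill activation
            self.backup.absorb(self.unsynced)
            self.unsynced = []
        return response


class Client:
    def __init__(self, primary, backup):
        self.primary, self.backup = primary, backup

    def call(self, request):
        reply, state_info = self.primary.handle(request)
        self.backup.absorb(state_info)   # client forwards state info to the backup
        return reply


backup = Backup()
client = Client(Primary(backup), backup)
print(client.call("req1"), len(backup.state))   # reply(req1) 1
```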
Abstract:
For fault testing in a digital system, a processor unit is made available from other activities and the logical units to be tested are set to a predetermined state. An output response analyzer is activated and the processor unit generates a set of stimuli, influencing the appropriate logical units. The output response analyzer collects responses to the stimuli at different nodes in the digital system and creates signatures from them. The signatures are verified and, if a fault is detected, the error is noted. The present state of the processor and other logical units is stored in a storage device prior to the test and recovered after the testing is finished. This fault testing can be performed both at chip and board levels, and on systems with several units.
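A rough sketch of the test sequence is given below; the CRC-based signature and the function names are assumptions chosen for illustration. State is saved, generated stimuli drive the logic under test, the collected node responses are compressed into a signature and compared with a known-good reference, and the saved state is then restored.

```python
# Illustrative sketch: stimuli generation, response collection, signature
# comparison, and state save/restore around the test.
import zlib


def run_fault_test(logic_under_test, stimuli, expected_signature, saved_state):
    responses = []
    for s in stimuli:                            # processor-generated stimuli
        responses.extend(logic_under_test(s))    # responses at different nodes
    # Output response analyzer: compress responses into a single signature.
    signature = zlib.crc32(bytes(responses))
    fault = signature != expected_signature
    if fault:
        print("fault noted: signature mismatch")
    restore(saved_state)                         # recover the pre-test state
    return not fault


def restore(state):
    pass   # placeholder: reload processor/logic state from the storage device


good_logic = lambda s: [s & 0xFF, (s >> 1) & 0xFF]
stimuli = [3, 5, 9]
reference = zlib.crc32(bytes(b for s in stimuli for b in good_logic(s)))
print(run_fault_test(good_logic, stimuli, reference, saved_state=None))   # True
```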
Abstract:
A computer system uses paged memory mapping techniques to maintain speculative data generated by concurrent execution of speculative jobs. In some embodiments, a set of shared virtual pages is defined that stores data that are shared by a first job and a second job. A set of shared physical pages in the paged physical memory is also defined, wherein there is a one-to-one correspondence between the set of shared virtual pages and the set of shared physical pages. When a job is to generate speculative data, a private physical page in which the data is to reside is created. The contents of the corresponding shared physical page are copied to the private physical page, and the speculative job's accesses are then mapped to the private physical page instead of to the shared physical page. If speculation fails, the private page may be discarded, and the job restarted. If speculation succeeds, memory mapping is adjusted so that the private page replaces the formerly shared physical page.
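A minimal sketch, assuming a dictionary-based page table and bytearray pages (illustrative, not the patented mechanism): a speculative job's first write to a shared page copies it into a private physical page and remaps the job's accesses to that copy; a successful commit installs the private page in place of the formerly shared one, while a failed speculation simply discards it.

```python
# Copy-on-speculative-write sketch with private physical pages.
shared_physical = {0: bytearray(b"AAAA"), 1: bytearray(b"BBBB")}   # shared pages


class SpeculativeJob:
    def __init__(self):
        self.private = {}   # virtual page -> private physical page

    def write(self, vpage, offset, value):
        if vpage not in self.private:
            # Create a private physical page and copy the shared contents into it.
            self.private[vpage] = bytearray(shared_physical[vpage])
        self.private[vpage][offset] = value   # access mapped to the private page

    def read(self, vpage, offset):
        page = self.private.get(vpage, shared_physical[vpage])
        return page[offset]

    def commit(self):
        # Speculation succeeded: private pages replace the formerly shared pages.
        shared_physical.update(self.private)

    def abort(self):
        self.private.clear()   # speculation failed: discard and restart the job


job = SpeculativeJob()
job.write(0, 1, ord("X"))
print(shared_physical[0], job.read(0, 1))   # bytearray(b'AAAA') 88
job.commit()
print(shared_physical[0])                   # bytearray(b'AXAA')
```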