Abstract:
Source code subtasks are compiled into byte code subtasks, and the byte code subtasks are translated into processor-specific object code subtasks at runtime. Processor-type selection is based upon one of three approaches: 1) a brute force approach, 2) a higher-level approach, or 3) a processor availability approach. Each object code subtask is loaded onto a corresponding processor type for execution. In one embodiment, a compiler stores a pointer in a byte code file that references the location of a byte code subtask. In this embodiment, the byte code subtask is stored in a shared library and, at runtime, a runtime loader uses the pointer to identify the location of the byte code subtask in order to translate it.
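The sketch below illustrates this flow in Python under stated assumptions: a byte code file holds pointers to subtasks stored in a shared library, a processor type is chosen by one of the three approaches, and each subtask is translated and loaded at runtime. All names (SHARED_LIBRARY, select_processor_type, translate) are illustrative, not terms from the abstract.

```python
# Hypothetical sketch of the runtime-loader flow: follow pointers into a shared
# library, pick processor types per the selection approach, translate, and load.

SHARED_LIBRARY = {
    0x10: "bytecode-for-subtask-A",
    0x20: "bytecode-for-subtask-B",
}

def select_processor_type(approach, available_types):
    if approach == "brute_force":
        return available_types                    # translate for every type
    if approach == "higher_level":
        return [available_types[0]]               # a higher-level policy picks one type
    if approach == "availability":
        idle = [t for t in available_types if t.endswith("idle")]
        return idle or available_types[:1]        # prefer currently available processors
    raise ValueError(approach)

def translate(bytecode, processor_type):
    # Stand-in for the byte-code-to-object-code translator.
    return f"object-code({bytecode})@{processor_type}"

def load_and_run(bytecode_file, approach, available_types):
    for pointer in bytecode_file["subtask_pointers"]:
        bytecode = SHARED_LIBRARY[pointer]        # runtime loader follows the stored pointer
        for ptype in select_processor_type(approach, available_types):
            obj = translate(bytecode, ptype)
            print(f"loading {obj} onto {ptype}")

load_and_run({"subtask_pointers": [0x10, 0x20]}, "availability", ["PU-busy", "SPU-idle"])
```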
Abstract:
A system and method for grouping processors is presented. A processing unit (PU) initiates an application and identifies the application's requirements. The PU assigns one or more synergistic processing units (SPUs) and a memory space to the application in the form of a group. The application specifies whether it requires shared memory or private memory. Shared memory is a memory space that is accessible by the SPUs and the PU. Private memory, however, is a memory space that is only accessible by the SPUs that are included in the group. When the application executes, the resources within the group are allocated to the application's execution thread. Each group has its own group properties, such as address space, policies (e.g., real-time, FIFO, run-to-completion) and priority (e.g., low or high). These group properties are used during thread execution to determine which groups take precedence over others.
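As a rough illustration only, the following Python sketch models a group as a record of SPUs, memory type, and group properties, with a simplified scheduler that uses priority to decide precedence. The structure and field names are assumptions, not defined by the abstract.

```python
# Illustrative sketch: a PU-side helper builds a group for an application and a
# scheduler picks the next group to run based on its recorded priority.

from dataclasses import dataclass

@dataclass
class Group:
    name: str
    spus: list                     # SPUs assigned to the application
    memory: str                    # "shared" (PU + SPUs) or "private" (group SPUs only)
    policy: str = "FIFO"           # e.g. real-time, FIFO, run-to-completion
    priority: int = 0              # higher value -> scheduled first (assumed convention)

def create_group(name, spu_ids, needs_shared_memory, policy, priority):
    memory = "shared" if needs_shared_memory else "private"
    return Group(name, spu_ids, memory, policy, priority)

def pick_next(groups):
    # Simplified precedence rule: the highest-priority group wins.
    return max(groups, key=lambda g: g.priority)

groups = [
    create_group("video-app", ["SPU0", "SPU1"], needs_shared_memory=True,
                 policy="real-time", priority=10),
    create_group("batch-job", ["SPU2"], needs_shared_memory=False,
                 policy="run-to-completion", priority=1),
]
print(pick_next(groups).name)   # -> video-app
```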
Abstract:
A system and method for concurrent WLAN and WPAN wireless modes from a single device is presented. A client uses a Wi-Fi device's infrastructure mode to communicate in a WLAN environment and, during idle WLAN times, uses the Wi-Fi device's ad hoc mode to communicate in a WPAN environment. The Wi-Fi device uses a watchdog timer to switch between infrastructure mode and ad hoc mode. When the client's Wi-Fi device switches to infrastructure mode, it uses an infrastructure register and an infrastructure device driver to transfer data over the WLAN environment. Likewise, when the client's Wi-Fi device switches to ad hoc mode, it uses an ad hoc register and an ad hoc device driver to transfer data over the WPAN environment. The client uses a code shim to act as a virtual device driver at times when either the infrastructure device driver or the ad hoc device driver is inactive.
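A minimal Python sketch of this switching behavior follows, assuming invented class and driver names: a watchdog timer toggles the device between the two modes, each mode has its own driver, and a shim stands in for whichever driver is currently inactive.

```python
# Hypothetical sketch: one Wi-Fi device alternating between infrastructure (WLAN)
# and ad hoc (WPAN) mode on a watchdog timer, with a shim covering the idle driver.

class Driver:
    def __init__(self, name):
        self.name = name
    def send(self, data):
        print(f"{self.name} sends {data!r}")

class ShimDriver:
    # Acts as a virtual driver while the real driver for the other mode is inactive.
    def send(self, data):
        print(f"shim queues {data!r} until its driver becomes active")

class WiFiDevice:
    def __init__(self, watchdog_ticks=3):
        self.mode = "infrastructure"
        self.watchdog_ticks = watchdog_ticks
        self.ticks = 0
        self.drivers = {"infrastructure": Driver("infra-driver"),
                        "adhoc": Driver("adhoc-driver")}
        self.shim = ShimDriver()

    def tick(self):
        # Watchdog timer: switch modes when the timer expires.
        self.ticks += 1
        if self.ticks >= self.watchdog_ticks:
            self.ticks = 0
            self.mode = "adhoc" if self.mode == "infrastructure" else "infrastructure"

    def send(self, data, for_mode):
        driver = self.drivers[for_mode] if for_mode == self.mode else self.shim
        driver.send(data)

dev = WiFiDevice()
dev.send("wlan-frame", for_mode="infrastructure")   # active driver handles it
dev.send("wpan-frame", for_mode="adhoc")            # shim holds it until the switch
for _ in range(3):
    dev.tick()
dev.send("wpan-frame", for_mode="adhoc")            # now the ad hoc driver sends it
```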
Abstract:
A memory coherence protocol is provided for using cache line access frequencies to dynamically switch from an invalidation protocol to an update protocol. A frequency access count (FAC) is associated with each line of data in a memory area, such as each cache line in a private cache corresponding to a CPU in a multiprocessor system. Each time the line is accessed, the FAC associated with the line is incremented. When the CPU, or process, receives an invalidate signal for a particular line, the CPU checks the FAC for the line. If the CPU, or process, determines that it is a frequent accessor of a particular line that has been modified by another CPU, or process, the CPU sends an update request in order to obtain the modified data. If the CPU is not a frequent accessor of a line that has been modified, the line is simply invalidated in the CPU's memory area. By dynamically switching from an invalidate protocol to an update protocol, based on cache line access frequencies, efficiency is maintained while cache misses are minimized. Preferably, all FACs are periodically reset in order to ensure that the most recent cache line access data is considered.
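The Python sketch below illustrates the decision rule described above; the threshold value and data layout are assumptions for illustration only. Each line keeps a FAC, an invalidate signal triggers either an update request or an invalidation depending on that count, and the counts are periodically reset.

```python
# Sketch of frequency-based coherence: frequently accessed lines are updated on an
# invalidate signal instead of being dropped; rarely accessed lines are invalidated.

FREQUENT_THRESHOLD = 4   # assumed cutoff for being a "frequent accessor"

class PrivateCache:
    def __init__(self):
        self.lines = {}  # address -> {"data": ..., "fac": int, "valid": bool}

    def access(self, addr, data=None):
        line = self.lines.setdefault(addr, {"data": data, "fac": 0, "valid": True})
        line["fac"] += 1                          # count every access to the line
        return line["data"]

    def on_invalidate(self, addr, fetch_update):
        line = self.lines.get(addr)
        if line is None:
            return
        if line["fac"] >= FREQUENT_THRESHOLD:
            line["data"] = fetch_update(addr)     # update protocol: request modified data
            line["valid"] = True
        else:
            line["valid"] = False                 # invalidation protocol: drop the line

    def reset_facs(self):
        for line in self.lines.values():
            line["fac"] = 0                       # keep only recent access history

cache = PrivateCache()
for _ in range(5):
    cache.access(0x100, data="old")
cache.on_invalidate(0x100, fetch_update=lambda a: "new")
print(cache.lines[0x100]["data"])   # -> new (frequent line was updated, not invalidated)
```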
Abstract:
A system and method for identifying compatible threads in a Simultaneous Multithreading (SMT) processor environment is provided by calculating a performance metric, such as cycles per instruction (CPI), that results when two threads are running on the SMT processor. The CPI achieved while both threads were executing on the SMT processor is determined. If the achieved CPI is better than a compatibility threshold, then information indicating the compatibility is recorded. When a thread is about to complete, the scheduler looks at the run queue to which the completing thread belongs in order to dispatch another thread. The scheduler identifies a thread that is (1) compatible with the thread that is still running on the SMT processor (i.e., the thread that is not about to complete), and (2) ready to execute. The CPI data is continually updated so that threads that are compatible with one another are continually identified.
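As a sketch of the bookkeeping only (the threshold value and helper names are assumptions), the Python snippet below records thread pairs whose measured CPI beats a compatibility threshold and has the scheduler prefer a ready thread known to be compatible with the thread still running.

```python
# Sketch: record compatible thread pairs by measured CPI, then dispatch a ready
# thread that pairs well with the thread remaining on the SMT processor.

CPI_THRESHOLD = 1.5           # assumed: lower CPI is better
compatible = set()            # unordered pairs of thread ids

def record_pair(t1, t2, measured_cpi):
    # Remember pairs whose combined CPI beats the compatibility threshold.
    if measured_cpi < CPI_THRESHOLD:
        compatible.add(frozenset((t1, t2)))

def pick_next(run_queue, still_running):
    # Prefer a ready thread recorded as compatible with the running thread.
    for t in run_queue:
        if frozenset((t, still_running)) in compatible:
            return t
    return run_queue[0] if run_queue else None

record_pair("T1", "T2", measured_cpi=1.2)   # good pairing, remembered
record_pair("T1", "T3", measured_cpi=2.4)   # poor pairing, not recorded
print(pick_next(["T3", "T2"], still_running="T1"))   # -> T2
```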
Abstract:
A method and apparatus for managing write-to-read turnarounds in an early read after write memory system are presented. Memory controller logic identifies a write operation's bank set, allows a different bank set read operation to issue prior to the write operation's completion, and allows a same bank set read operation to issue once the write operation completes. The memory controller includes operation counter logic, operation selection logic, operation acceptance logic, command formatting logic, and memory interface logic. The operation counter logic receives new-operation-related signals from the operation acceptance logic and, in turn, provides signals to the operation selection logic and the operation acceptance logic as to when to issue a read operation that corresponds to either an even DRAM bank or an odd DRAM bank.
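The following Python sketch illustrates only the bank-set rule described above; the class, timing model, and even/odd mapping are assumptions for illustration. A read to the other bank set may issue while a write is outstanding, while a read to the same bank set waits for the write to complete.

```python
# Simplified sketch of the write-to-read turnaround rule for even/odd DRAM bank sets.

def bank_set(bank_id):
    return "even" if bank_id % 2 == 0 else "odd"

class TurnaroundController:
    def __init__(self):
        self.pending_write_set = None          # bank set of the in-flight write, if any

    def issue_write(self, bank_id):
        self.pending_write_set = bank_set(bank_id)
        print(f"write issued to {self.pending_write_set} bank set")

    def write_complete(self):
        self.pending_write_set = None

    def try_issue_read(self, bank_id):
        rs = bank_set(bank_id)
        if self.pending_write_set is not None and rs == self.pending_write_set:
            print(f"read to {rs} bank set held until the write completes")
            return False
        print(f"read issued to {rs} bank set")
        return True

ctrl = TurnaroundController()
ctrl.issue_write(bank_id=2)        # even bank set
ctrl.try_issue_read(bank_id=3)     # odd set: issues before the write finishes
ctrl.try_issue_read(bank_id=4)     # even set: must wait
ctrl.write_complete()
ctrl.try_issue_read(bank_id=4)     # now allowed
```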
Abstract:
A system and method for virtualization of processor resources is presented. A thread is created on a processor and the processor's local memory is mapped into an effective address space. As a result, the processor's local memory is accessible by other processors, regardless of whether the processor is running. Additional threads create additional local memory mappings into the effective address space. The effective address space corresponds to either a physical local memory or a “soft” copy area. When the processor is running, a different processor may access data located in the first processor's local memory directly from that processor's local storage area. When the processor is not running, a soft copy of the processor's local memory is stored in a memory location (e.g., locked cache memory, pinned system memory, or virtual memory) for other processors to continue accessing.
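As an illustration only (all names and structures are assumed), the Python sketch below keeps a per-mapping descriptor in an effective address space that points either at a live local store or at a saved soft copy, so reads succeed whether or not the owning processor is running.

```python
# Sketch: effective-address mappings that transparently fall back to a soft copy
# of a processor's local memory when that processor is not running.

effective_address_space = {}   # effective address -> backing-store descriptor

def map_local_store(ea, processor):
    effective_address_space[ea] = {"kind": "local", "processor": processor}

def suspend_processor(ea, soft_copy_location, snapshot):
    # Processor stops running: keep a soft copy so the mapping stays readable.
    effective_address_space[ea] = {"kind": "soft", "where": soft_copy_location,
                                   "data": snapshot}

def read(ea):
    entry = effective_address_space[ea]
    if entry["kind"] == "local":
        return entry["processor"]["local_memory"]   # read the live local store
    return entry["data"]                            # read the soft copy

spu = {"local_memory": b"thread state"}
map_local_store(0x8000, spu)
print(read(0x8000))                                  # live access while running
suspend_processor(0x8000, "pinned system memory", spu["local_memory"])
print(read(0x8000))                                  # still accessible via the soft copy
```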
Abstract:
A system and method for displaying objects in a plurality of layers is presented. The layers are distinguished from one another using a variety of display attributes in order to emphasize objects in upper layers and de-emphasize objects in lower layers. The display attributes may include use of color (hue, saturation, and value), three dimensional images, fill patterns, and other display techniques. The user is able to change the layering in order to emphasize a different group, or category, of objects and de-emphasize other groups. The layers can be predefined, for example, hardware and software layers, or may be defined by analyzing the attributes corresponding to the objects. Objects and their attributes are stored in a data store, such as a relational database. Predefined layers include one or more of these attributes to use for matching.
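A small Python sketch of this layering idea follows; the attribute names and display values are assumptions. Objects are matched against predefined layers by attribute, and display properties emphasize the selected layer while de-emphasizing the others.

```python
# Sketch: assign objects to layers by attribute matching, then style the emphasized
# layer differently from the de-emphasized ones.

objects = [
    {"name": "router",  "type": "hardware"},
    {"name": "webapp",  "type": "software"},
    {"name": "storage", "type": "hardware"},
]

layers = {                       # predefined layers keyed by their matching attributes
    "hardware": {"type": "hardware"},
    "software": {"type": "software"},
}

def render(objects, layers, emphasized_layer):
    for obj in objects:
        layer = next((name for name, match in layers.items()
                      if all(obj.get(k) == v for k, v in match.items())), None)
        if layer == emphasized_layer:
            style = {"saturation": 1.0, "fill": "solid", "three_d": True}
        else:
            style = {"saturation": 0.3, "fill": "hatched", "three_d": False}
        print(obj["name"], layer, style)

render(objects, layers, emphasized_layer="hardware")
# Re-rendering with emphasized_layer="software" flips which group stands out.
```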
Abstract:
A system and method for tuning TCP/IP acknowledgments is provided. The system and method reduces the number of acknowledgments sent by a TCP/IP receiver by determining whether the connection state with the sender warrants using minimal acknowledgments. If minimal acknowledgments are used, the receiver sends fewer acknowledgments to the sender in response to received packets. The number of packets that are received before an acknowledgment is returned is increased until the delay value reaches a threshold value. The threshold value can be determined based on the size of the buffer set up to receive packets from the sender during the session. If errors, such as TCP/IP timeouts or duplicate packets, are detected, the threshold is changed to the last delay value that did not cause errors. If further errors are detected, the system is programmed to revert to sending traditional acknowledgments for the session.
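A Python sketch of this tuning loop is shown below, under stated assumptions: the buffer-based threshold formula, field names, and back-off steps are invented for illustration. The receiver stretches the per-acknowledgment packet count toward a threshold, drops back to the last good value on an error, and abandons minimal acknowledgments if errors persist.

```python
# Sketch of the acknowledgment-tuning state machine for a single session.

class AckTuner:
    def __init__(self, buffer_size, packet_size):
        # Assumed sizing rule: tie the maximum delay to the receive buffer capacity.
        self.threshold = max(2, buffer_size // (2 * packet_size))
        self.delay = 2                 # packets received per acknowledgment
        self.last_good = 2
        self.minimal_acks = True

    def on_window_ok(self):
        # No timeouts or duplicate packets: stretch the delay a little further.
        self.last_good = self.delay
        self.delay = min(self.delay + 1, self.threshold)

    def on_error(self):
        # Timeout or duplicate packet seen: fall back, then give up on minimal acks.
        if self.delay > self.last_good:
            self.delay = self.last_good
        else:
            self.minimal_acks = False
            self.delay = 1             # revert to traditional acknowledgments

tuner = AckTuner(buffer_size=64 * 1024, packet_size=1460)
tuner.on_window_ok(); tuner.on_window_ok()
print(tuner.delay)                       # delay has grown toward the threshold
tuner.on_error()
print(tuner.delay, tuner.minimal_acks)   # back to the last value that worked
```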
Abstract:
Calibration factors determine how topography components are designed and built in order to support the management philosophies and methodologies. A marketing analysis may be used to identify the calibration factors that are needed to support a large market. In this manner, many calibration factors may be applied to a single topography requirement so that multiple operating environments and multiple management philosophies are supported by the topography. The components are stored in a component library and calibration factors corresponding to the components are stored in a data store. A customer's management philosophy, methodology, and operating environments are compared with the component metadata in order to identify suitable topography components, which are installed on client computer systems to form the topography. Topography-neutral application components are adapted for installation on any topography regardless of the customer's management characteristics and operating environments.
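Purely as a hypothetical illustration of the matching step (the library contents, field names, and selection rule are assumptions), the Python snippet below compares a customer's management philosophy and operating environment against per-component metadata and always includes topography-neutral components.

```python
# Sketch: select topography components whose calibration-factor metadata matches
# the customer's management characteristics; topography-neutral components always fit.

component_library = [
    {"name": "monitoring-agent", "philosophies": {"proactive"}, "environments": {"linux", "aix"}},
    {"name": "ticketing-bridge", "philosophies": {"reactive"},  "environments": {"windows"}},
    {"name": "neutral-app",      "philosophies": set(),         "environments": set()},  # topography-neutral
]

def select_components(customer):
    selected = []
    for comp in component_library:
        neutral = not comp["philosophies"] and not comp["environments"]
        matches = (customer["philosophy"] in comp["philosophies"]
                   and customer["environment"] in comp["environments"])
        if neutral or matches:
            selected.append(comp["name"])
    return selected

print(select_components({"philosophy": "proactive", "environment": "linux"}))
# -> ['monitoring-agent', 'neutral-app']
```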