Abstract:
A parallel processing method and apparatus for initializing libraries is disclosed. Libraries for an application are identified, an initialization order for the libraries is determined, and the libraries are initialized in asynchronous stages. The initialization order is determined by forming a library tree of the libraries' references and assigning a load order to the references according to their levels in the library tree. The asynchronous stages comprise a loading stage that includes a load queue, a snapping stage that includes a snap queue, and an initializing stage that includes an initialize queue.
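The staged, queue-driven flow described above can be illustrated with a small sketch. The one below is a toy simulation rather than the patented loader: the library names, the reference tree, and the three worker stages are illustrative assumptions. Each stage drains its own queue asynchronously, and the load order is derived from each library's level in the reference tree.

```python
import queue
import threading

# Hypothetical reference tree: each library maps to the libraries it references.
REFS = {
    "app.dll": ["ui.dll", "net.dll"],
    "ui.dll": ["core.dll"],
    "net.dll": ["core.dll"],
    "core.dll": [],
}

def levels(refs, root):
    """Assign each library a level (depth) in the library tree rooted at `root`."""
    depth = {root: 0}
    stack = [root]
    while stack:
        lib = stack.pop()
        for ref in refs.get(lib, []):
            if depth.get(ref, -1) < depth[lib] + 1:
                depth[ref] = depth[lib] + 1
                stack.append(ref)
    return depth

def stage(name, in_q, out_q):
    """One asynchronous stage: drain in_q, 'process' each library, pass it downstream."""
    while True:
        lib = in_q.get()
        if lib is None:                  # sentinel: shut this stage down
            if out_q is not None:
                out_q.put(None)
            return
        print(f"{name}: {lib}")
        if out_q is not None:
            out_q.put(lib)

load_q, snap_q, init_q = queue.Queue(), queue.Queue(), queue.Queue()
workers = [
    threading.Thread(target=stage, args=("load", load_q, snap_q)),
    threading.Thread(target=stage, args=("snap", snap_q, init_q)),
    threading.Thread(target=stage, args=("initialize", init_q, None)),
]
for w in workers:
    w.start()

# Deeper references load first so dependencies are ready before their dependents.
depth = levels(REFS, "app.dll")
for lib in sorted(depth, key=depth.get, reverse=True):
    load_q.put(lib)
load_q.put(None)
for w in workers:
    w.join()
```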
Abstract:
One or more techniques and/or systems are provided for suspending logically related processes associated with an application, determining whether to resume a suspended process based upon one or more wake policies, and/or managing an application state of an application, such as timer and/or system message data. That is, logically related processes associated with an application, such as child processes, may be identified and suspended based upon logical relationships between the processes (e.g., a logical container hierarchy may be traversed to identify logically related processes). A suspended process may be resumed based upon a set of wake policies. For example, a suspended process may be resumed based upon an inter-process communication call policy that may be triggered by an application attempting to communicate with the suspended process. Application data may be managed while an application is suspended so that the application may be resumed in a current and/or relevant state.
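As a rough illustration of the suspension step, the sketch below uses a hypothetical Process type whose children stand in for a logical container hierarchy; traversing that hierarchy yields the logically related processes to suspend. None of the names are drawn from an actual implementation.

```python
class Process:
    """Toy stand-in for a process; `children` models the logical container hierarchy."""
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []
        self.suspended = False

def logically_related(root):
    """Traverse the hierarchy, yielding the root process and all of its descendants."""
    stack = [root]
    while stack:
        proc = stack.pop()
        yield proc
        stack.extend(proc.children)

def suspend_application(root):
    """Suspend every process that is logically related to the application."""
    for proc in logically_related(root):
        proc.suspended = True
        print(f"suspended {proc.name}")

app = Process("mail_app", [Process("renderer"),
                           Process("sync", [Process("sync_worker")])])
suspend_application(app)
```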
Abstract:
One or more techniques and/or systems are provided for suspending logically related processes associated with an application, determining whether to resume a suspended process based upon a wake policy, and/or managing an application state of an application, such as timer and/or system message data. That is, logically related processes associated with an application, such as child processes, may be identified and suspended based upon logical relationships between the processes (e.g., a logical container hierarchy may be traversed to identify logically related processes). A suspended process may be resumed based upon a wake policy. For example, a suspended process may be resumed based upon an inter-process communication call policy that may be triggered by an application attempting to communicate with the suspended process. Application data may be managed while an application is suspended so that the application may be resumed in a current and/or relevant state.
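The wake-policy idea can be sketched in a few lines. In the self-contained example below, the SuspendedProcess type, the policy names, and the deliver_ipc helper are illustrative assumptions; the point is simply that an IPC call aimed at a suspended process triggers its resumption before the message is delivered.

```python
from dataclasses import dataclass

@dataclass
class SuspendedProcess:
    name: str
    suspended: bool = True

# Illustrative policy set; a real system might also include timer or user-input policies.
ENABLED_WAKE_POLICIES = {"ipc_call"}

def deliver_ipc(sender: str, target: SuspendedProcess, message: str) -> None:
    """Resume the target under the IPC-call wake policy, then deliver the message."""
    if target.suspended and "ipc_call" in ENABLED_WAKE_POLICIES:
        target.suspended = False
        print(f"wake policy 'ipc_call': resumed {target.name}")
    print(f"{sender} -> {target.name}: {message}")

deliver_ipc("shell", SuspendedProcess("mail_app"), "new message available")
```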
Abstract:
Aspects of the present invention are directed at providing safe and efficient ways for a program to perform a one-time initialization of a data item in a multi-threaded environment. In accordance with one embodiment, a method is provided that allows a program to perform a synchronized initialization of a data item that may be accessed by multiple threads. More specifically, the method includes receiving a request to initialize the data item from a current thread. In response to receiving the request, the method determines whether the current thread is the first thread to attempt to initialize the data item. If the current thread is the first thread to attempt to initialize the data item, the method enforces mutual exclusion and blocks other attempts to initialize the data item made by concurrent threads. Then, the current thread is allowed to execute program code provided by the program to initialize the data item.
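A minimal sketch of the described behavior follows, using standard threading primitives rather than any particular OS API: the first thread to request initialization runs the program-provided init code under mutual exclusion, and concurrent threads block until it completes and then reuse the result.

```python
import threading

class OneTimeInit:
    """First requester runs init_fn exactly once; concurrent requesters block until it finishes."""
    def __init__(self):
        self._lock = threading.Lock()
        self._done = threading.Event()
        self._value = None

    def get(self, init_fn):
        if not self._done.is_set():
            with self._lock:                 # mutual exclusion for the initialization
                if not self._done.is_set():  # only the first thread gets here
                    self._value = init_fn()  # program-provided initialization code
                    self._done.set()
        return self._value

item = OneTimeInit()

def expensive_init():
    print("initializing once")
    return {"ready": True}

threads = [threading.Thread(target=lambda: item.get(expensive_init)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(item.get(expensive_init))   # the already-initialized value is reused
```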
Abstract:
In one embodiment, the present invention includes a method for receiving control in a kernel mode via a ring transition from a user thread during execution of an unbounded transactional memory (UTM) transaction, updating a state of a transaction status register (TSR) associated with the user thread and storing the TSR with a context of the user thread, and later restoring the context during a transition from the kernel mode to the user thread. In this way, the UTM transaction may continue on resumption of the user thread. Other embodiments are described and claimed.
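Because the mechanism is hardware- and kernel-level, the following is only a conceptual simulation: a ThreadContext with a tsr field stands in for the saved transaction status register, and the two functions mimic the user-to-kernel and kernel-to-user transitions. All names and bit values are assumptions made for illustration.

```python
from dataclasses import dataclass, field

TSR_SUSPENDED_BIT = 0x1   # made-up bit meaning "transaction suspended for the kernel"

@dataclass
class ThreadContext:
    registers: dict = field(default_factory=dict)
    tsr: int = 0          # snapshot of the transaction status register

def enter_kernel(ctx: ThreadContext, live_tsr: int) -> ThreadContext:
    """Ring transition user -> kernel: update the TSR and save it with the context."""
    ctx.tsr = live_tsr | TSR_SUSPENDED_BIT
    print(f"kernel entry: saved TSR {ctx.tsr:#x} with the thread context")
    return ctx

def exit_kernel(ctx: ThreadContext) -> int:
    """Ring transition kernel -> user: restore the TSR so the UTM transaction continues."""
    restored = ctx.tsr & ~TSR_SUSPENDED_BIT
    print(f"kernel exit: restored TSR {restored:#x}, transaction resumes")
    return restored

ctx = ThreadContext(registers={"ip": 0x401000})
ctx = enter_kernel(ctx, live_tsr=0x10)
exit_kernel(ctx)
```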
Abstract:
Various aspects are disclosed herein for attenuating spin waiting in a virtual machine environment comprising a plurality of virtual machines and virtual processors. Selected virtual processors can be given time slice extensions in order to prevent such virtual processors from becoming de-scheduled (and hence causing other virtual processors to have to spin wait). Selected virtual processors can also be expressly scheduled so that they can be given higher priority to resources, resulting in reduced spin waits for other virtual processors waiting on such selected virtual processors. Finally, various spin wait detection techniques can be incorporated into the time slice extension and express scheduling mechanisms, in order to identify potential and existing spin waiting scenarios.
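A toy sketch of the time slice extension policy follows; spin-wait detection is reduced to a flag, and the extension length, types, and names are illustrative assumptions rather than the detection techniques the text alludes to.

```python
from dataclasses import dataclass

EXTENSION_MS = 2   # small, bounded extension so overall fairness is preserved

@dataclass
class VirtualProcessor:
    name: str
    remaining_slice_ms: int
    holds_contended_lock: bool = False   # stand-in for real spin-wait detection

def on_slice_expiry(vp: VirtualProcessor) -> str:
    """Extend the slice of a virtual processor others would otherwise spin-wait on."""
    if vp.holds_contended_lock and vp.remaining_slice_ms <= 0:
        vp.remaining_slice_ms += EXTENSION_MS
        return f"{vp.name}: time slice extended by {EXTENSION_MS} ms"
    return f"{vp.name}: de-scheduled"

print(on_slice_expiry(VirtualProcessor("vp0", 0, holds_contended_lock=True)))
print(on_slice_expiry(VirtualProcessor("vp1", 0)))
```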
Abstract:
Loading and unloading a plurality of libraries on a computing device having a loader lock and internal and external counts for each library in the plurality of libraries is disclosed. The libraries assume an initialize state, followed by an initialized state, a pending unload state, and an unload state according to when the internal and external counts are incremented and decremented. When in the pending unload state, the library's functions, including those that require acquiring the loader lock, exit, the internal count is decremented by one, and the loader lock is released. Prior to entering the pending unload state, a library may be placed into a reloadable state. A library in the reloadable state may be reloaded upon request until a timer times out. When the timer times out, the library in the reloadable state transitions into the pending unload state.
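The lifecycle reads naturally as a state machine. The sketch below is one possible rendering under stated assumptions: the state names, the way the counts are incremented and decremented, and the reload timer window are illustrative, not the patented logic.

```python
import time

STATES = ("INITIALIZE", "INITIALIZED", "RELOADABLE", "PENDING_UNLOAD", "UNLOAD")
INITIALIZE, INITIALIZED, RELOADABLE, PENDING_UNLOAD, UNLOAD = range(5)

class Library:
    def __init__(self, name, reload_window_s=0.1):
        self.name = name
        self.state = INITIALIZE
        self.internal = 0            # reference held by the loader itself
        self.external = 0            # references held by callers
        self.reload_deadline = None
        self.reload_window_s = reload_window_s

    def load(self):
        self.internal += 1
        self.external += 1
        self.state = INITIALIZED

    def release(self):
        self.external -= 1
        if self.external == 0:
            # Keep the library around briefly in case it is requested again.
            self.state = RELOADABLE
            self.reload_deadline = time.monotonic() + self.reload_window_s

    def request(self):
        """Reload from the reloadable state, if the timer has not yet expired."""
        if self.state == RELOADABLE and time.monotonic() < self.reload_deadline:
            self.external += 1
            self.state = INITIALIZED
            return True
        return False

    def tick(self):
        """Timer expiry: fall through pending unload and, once counts drain, unload."""
        if self.state == RELOADABLE and time.monotonic() >= self.reload_deadline:
            self.state = PENDING_UNLOAD   # loader-lock-holding functions exit here
            self.internal -= 1
            if self.internal == 0:
                self.state = UNLOAD

lib = Library("plugin.dll")
lib.load(); lib.release()
print("reloaded within the window:", lib.request())
lib.release(); time.sleep(0.15); lib.tick()
print("final state:", STATES[lib.state])   # expected: UNLOAD
```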
Abstract:
A secure process may be created that does not allow code to be injected into it and does not allow modification or inspection of its memory. The resources protected in a secure process include all the internal state and threads running in the secure process. Once a secure process is created, the secure process is protected from access by non-secure processes. Process creation occurs atomically in kernel mode. Creating the infrastructure of a process in kernel mode enables security features to be applied that are difficult or impossible to apply in user mode. By moving setup actions previously occurring in user mode (such as creating the initial thread, allocating the stack, and initializing the parameter block, environment block, and context record) into kernel mode, the caller's need for full access rights to the created process is removed. Instead, enough state is passed from the caller to the kernel with the first system call so that the kernel is able to perform the actions previously performed using a number of calls back and forth between caller and kernel. When the kernel returns the handle to the set-up process, some of the access rights accompanying the handle are not returned. Specifically, those access rights that enable the caller to inject threads, read/write virtual memory, and interrogate or modify state of the threads of the process are not returned to the caller.
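To make the access-rights point concrete, here is a conceptual sketch: process setup happens in one atomic "kernel" call, and the handle returned to the caller has the thread-injection, memory read/write, and thread-inspection rights masked off. The rights constants and types are made up for illustration and are not the real Windows access flags.

```python
from dataclasses import dataclass

# Illustrative access-right bits (not the real Windows flags).
TERMINATE, QUERY_LIMITED, INJECT_THREAD, VM_READ_WRITE, THREAD_INSPECT = 0x1, 0x2, 0x4, 0x8, 0x10
ALL_RIGHTS = TERMINATE | QUERY_LIMITED | INJECT_THREAD | VM_READ_WRITE | THREAD_INSPECT
DENIED_FOR_SECURE = INJECT_THREAD | VM_READ_WRITE | THREAD_INSPECT

@dataclass
class Handle:
    pid: int
    access_mask: int

def kernel_create_secure_process(image: str, caller_state: dict) -> Handle:
    """Single 'system call': the whole process is set up atomically in kernel mode."""
    process = {
        "image": image,
        "initial_thread": "created",
        "stack": "allocated",
        "parameter_block": caller_state.get("parameters", {}),
        "environment_block": caller_state.get("environment", {}),
        "context_record": "initialized",
    }
    print(f"secure process set up atomically: {sorted(process)}")
    # Strip the dangerous rights before handing the handle back to the caller.
    return Handle(pid=1234, access_mask=ALL_RIGHTS & ~DENIED_FOR_SECURE)

h = kernel_create_secure_process(
    "protected_player.exe",
    {"parameters": {"cmdline": "/play"}, "environment": {"PATH": "/bin"}},
)
print("caller may inject threads:", bool(h.access_mask & INJECT_THREAD))   # False
```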
Abstract:
A computer operating system with a map that relates API namespaces to components that implement the interface contracts for those namespaces. When an API namespace is to be used, a loader within the operating system uses the map to load the appropriate components. An application can reference an API namespace in the same way as it references a dynamically linked library, but the implementation of the interface contract for the API namespace is not tied to a single file or to a static collection of files. The map may identify versions of the API namespace or values of runtime parameters that may be used to select appropriate files to implement an interface contract in scenarios that may depend on factors such as hardware in the execution environment, the version of the API namespace against which an application was developed, or the application accessing the API namespace.
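A toy resolver can illustrate the map: an API namespace, together with a version and a runtime parameter such as the hardware in the execution environment, selects the files that implement the interface contract. The namespace names, versions, and file names below are invented examples, not real namespaces or components.

```python
# Hypothetical map from (namespace, version) to the files implementing its contract.
API_MAP = {
    ("Contoso.Devices.Sensors", "1.0"): {
        "default": ["sensors_core.dll"],
        "arm": ["sensors_core.dll", "sensors_arm.dll"],
    },
    ("Contoso.Devices.Sensors", "2.0"): {
        "default": ["sensors2.dll"],
    },
}

def resolve(namespace: str, version: str, hardware: str = "default") -> list:
    """Return the component files that implement the namespace's interface contract."""
    implementations = API_MAP[(namespace, version)]
    return implementations.get(hardware, implementations["default"])

# The application references the namespace much like a DLL import; the loader
# consults the map instead of binding to one fixed file.
print(resolve("Contoso.Devices.Sensors", "1.0", hardware="arm"))
print(resolve("Contoso.Devices.Sensors", "2.0"))
```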