Abstract:
A loadable module design that can be implemented on hardware-limited platforms. In particular, according to various aspects, literal memory space (424) may be reserved for the loadable module on a processor (610) running in an absolute addressing mode, wherein the literal space (424) may be reserved in a valid address range accessible to one or more address-restricted instructions. At build time, a partially linked image combining one or more object files (320, 322, 324) associated with the loadable module with operating system exported symbols (330) may be generated. Accordingly, at load time, literal space may be allocated to the loadable module from the reserved literal memory space (424), and text and data spaces may be allocated from unused memory (440), such that addresses associated with all internal functions and variables may be relocated according to start addresses associated with the literal, text, and data spaces.
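As a rough, non-authoritative illustration of the load-time step described above, the following Python sketch allocates literal space from a reserved pool and text/data space from unused memory, then relocates internal symbols by section start address. The Module structure, base addresses, and symbol table are invented for illustration and are not taken from the disclosure.

    # Illustrative sketch only: place a partially linked module's sections and
    # relocate its internal symbols. All names and addresses are hypothetical.
    from dataclasses import dataclass, field

    RESERVED_LITERAL_BASE = 0x0000_2000   # reserved literal space inside the
    RESERVED_LITERAL_SIZE = 0x0000_0800   # address range reachable by the
                                          # address-restricted instructions
    UNUSED_MEMORY_BASE    = 0x2000_0000   # start of unused memory for text/data

    @dataclass
    class Module:
        literal_size: int
        text_size: int
        data_size: int
        # symbol name -> (section, offset) as produced by the partial link
        symbols: dict = field(default_factory=dict)

    def load(module: Module) -> dict:
        """Assign section start addresses and relocate internal symbols."""
        if module.literal_size > RESERVED_LITERAL_SIZE:
            raise MemoryError("module literals exceed the reserved literal space")
        starts = {
            "literal": RESERVED_LITERAL_BASE,              # from the reserved pool
            "text":    UNUSED_MEMORY_BASE,                 # from unused memory
            "data":    UNUSED_MEMORY_BASE + module.text_size,
        }
        # Relocate every internal function/variable by its section's start address.
        return {name: starts[section] + offset
                for name, (section, offset) in module.symbols.items()}

    if __name__ == "__main__":
        m = Module(literal_size=0x40, text_size=0x200, data_size=0x80,
                   symbols={"init": ("text", 0x0), "counter": ("data", 0x4),
                            "lit_pool": ("literal", 0x0)})
        print(load(m))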
Abstract:
The disclosure pertains to the operation of graphics systems and to a variety of architectures for the design and/or operation of a graphics system, spanning from the output of an application program to the presentation of visual content in the form of pixels or otherwise. In general, many embodiments of the invention envision the processing of graphics programming according to an on-the-fly decision regarding how best to use the specific available hardware and software. In some embodiments, a software arrangement may be used to evaluate the specific system hardware and software capabilities and then decide the best graphics programming path to follow for any particular graphics request. The decision regarding the best path may be made after evaluating the hardware and software alternatives for the path in view of the particulars of the graphics program to be processed.
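Purely as a sketch of that decision step, the Python snippet below scores hypothetical candidate paths against hypothetical capability flags; the path names, capability fields, and scores are invented and do not reflect any particular graphics stack.

    # Illustrative only: choose a graphics path per request by scoring the
    # available hardware/software alternatives.
    def select_path(request: dict, capabilities: dict) -> str:
        """Return the best-scoring path that can satisfy this request."""
        candidates = []
        if capabilities.get("gpu_compute") and request.get("needs_shaders"):
            candidates.append(("gpu_shader_path", 3))
        if capabilities.get("fixed_function_gpu"):
            candidates.append(("gpu_fixed_path", 2))
        candidates.append(("software_rasterizer", 1))   # always-available fallback
        # Pick the highest-scoring remaining alternative.
        return max(candidates, key=lambda c: c[1])[0]

    print(select_path({"needs_shaders": True},
                      {"gpu_compute": True, "fixed_function_gpu": True}))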
Abstract:
An apparatus adapted to transform nodes of a dataflow graph for execution on a processing system, in particular a distributed processing system, comprising: an interface adapted to receive a dataflow graph that comprises nodes, each node representing a high-level operation; and a compiler adapted to transform at least one high-level operation node into at least one low-level operation node corresponding to the at least one high-level operation to create a transformed dataflow graph, the at least one low-level operation adapted for execution on a processor of a plurality of processors of a runtime environment executing the transformed dataflow graph, the transformation performed according to a performance measure calculated, for each processor, for executing the at least one high-level operation using the at least one low-level operation adapted for execution by the respective processor.
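A minimal sketch of the lowering decision, assuming an invented table of low-level variants and per-processor cost estimates (nothing here reflects the actual apparatus): each high-level node is replaced by the low-level variant whose performance measure is best across the available processors.

    from dataclasses import dataclass

    @dataclass
    class Node:
        op: str                 # high-level operation name, e.g. "matmul"

    # Low-level variants per high-level op, per processor kind (illustrative).
    LOWERINGS = {
        "matmul": {"cpu": ("blocked_matmul", 8.0), "gpu": ("tiled_matmul_kernel", 1.5)},
        "filter": {"cpu": ("vectorized_filter", 2.0), "gpu": ("filter_kernel", 2.5)},
    }

    def lower(graph, processors):
        """Replace each high-level node by its cheapest low-level counterpart."""
        lowered = []
        for node in graph:
            variants = LOWERINGS[node.op]
            # Performance measure: smallest estimated execution cost wins.
            proc = min(processors, key=lambda p: variants[p][1])
            lowered.append((variants[proc][0], proc))
        return lowered

    print(lower([Node("matmul"), Node("filter")], ["cpu", "gpu"]))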
Abstract:
A method for compiling source code comprises translating the source code into a plurality of intermediate representations using a plurality of alternative translation schemes, respectively, applying fusion and/or transformations to each of the plurality of intermediate representations to obtain a plurality of transformed representations, generating a plurality of executable programs from the plurality of transformed representations, respectively, and selecting one or more of the plurality of executable programs as an output program depending on one or more performance metrics of the plurality of executable programs.
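The overall flow can be sketched as follows; the scheme, transform, codegen, and benchmark callables are toy stand-ins, and wall-clock time of a benchmark run stands in for whatever performance metrics an implementation might actually use.

    import time

    def compile_best(source, schemes, transforms, codegen, benchmark):
        candidates = []
        for scheme in schemes:
            ir = scheme(source)                 # alternative translation scheme
            for t in transforms:
                ir = t(ir)                      # fusion / other transformations
            candidates.append(codegen(ir))      # executable program
        # Performance metric: keep the fastest candidate.
        return min(candidates, key=benchmark)

    # Toy stand-ins so the sketch runs end to end.
    schemes    = [lambda s: ("loop_ir", s), lambda s: ("dataflow_ir", s)]
    transforms = [lambda ir: ir]
    codegen    = lambda ir: (lambda: sum(range(10_000 if ir[0] == "loop_ir" else 1_000)))

    def benchmark(prog):
        start = time.perf_counter()
        prog()
        return time.perf_counter() - start

    best = compile_best("x = a + b", schemes, transforms, codegen, benchmark)
    print(best())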
Abstract:
Systems and methods for performing dynamic code management, such as dynamic management of JavaScript tags in webpages or code segments in native applications, are disclosed. A user device loading a web or native application can access a factor, such as a user device-specific attribute or a piece of content of the webpage or native application being loaded. That factor can be applied to a rule that is evaluated (e.g., by the user device or a code server) to select one or more desired segments of code (e.g., JavaScript tags or native application code) to be executed by the user device from a pool of available code (e.g., pre-embedded code or dynamically injected code).
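For example, the rule evaluation could be sketched as below; the factor fields, rule predicates, tag pool, and placeholder URLs are all hypothetical and not drawn from the disclosure.

    # Illustrative only: apply a factor to ordered rules to select code
    # segments (here, script tags) from a pool of available code.
    TAG_POOL = {
        "analytics_tag": {"src": "https://example.com/analytics.js"},
        "video_tag":     {"src": "https://example.com/video.js"},
        "lite_tag":      {"src": "https://example.com/lite.js"},
    }

    RULES = [
        # (predicate over the factor, tags selected when it matches)
        (lambda f: f.get("connection") == "slow", ["lite_tag"]),
        (lambda f: f.get("page_has_video", False), ["analytics_tag", "video_tag"]),
        (lambda f: True, ["analytics_tag"]),          # default rule
    ]

    def select_tags(factor):
        """Return the tag set of the first rule whose predicate matches."""
        for predicate, tags in RULES:
            if predicate(factor):
                return [TAG_POOL[t] for t in tags]

    print(select_tags({"connection": "slow"}))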
Abstract:
Information representative of a graph-based program specification (110) includes components and directed links between ports of said components, the directed links defining dependencies between said components. A directed link exists between a port of a first component and a port of a second component. The first component specifies first-component execution code that, when compiled, enables execution of a first task. The second component specifies second-component execution code that, when compiled, enables execution of a second task. Compiling (120) the graph-based program specification includes grafting first control code onto said first-component execution code, the first control code changing a state of said second component to a pending state, an active state, or a suppressed state. Based on said state, said first control code causes at least one of: invoking said second component if said state changes from pending to active, or suppressing said second component if said state changes from pending to suppressed.
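One way to picture the grafted control code, with invented component, task, and state names (a sketch under those assumptions, not the specification's generated code):

    from enum import Enum, auto

    class State(Enum):
        PENDING = auto()
        ACTIVE = auto()
        SUPPRESSED = auto()

    class Component:
        def __init__(self, name, task):
            self.name, self.task, self.state = name, task, State.PENDING

    def first_component_task():
        print("first task ran")
        return True            # e.g. data was produced on the linked port

    def grafted_control(first_result, second: Component):
        """Control code appended to the first component's execution code."""
        second.state = State.ACTIVE if first_result else State.SUPPRESSED
        if second.state is State.ACTIVE:
            second.task()                       # invoke the second component
        else:
            print(f"{second.name} suppressed")  # skip it entirely

    second = Component("second", lambda: print("second task ran"))
    grafted_control(first_component_task(), second)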
Abstract:
Examples relate to providing distributed compilation of statically typed languages. In some examples, first order dependencies of a target module are identified, where each of the first order dependencies is associated with one of a plurality of dependent modules. Next, each first order dependency is traversed to remove code references from source code of a corresponding module of the plurality of dependent modules, where each of the code references refers to a type defined in an indirect dependency of the target module, and to compile the source code of the corresponding module to generate a module stub of a plurality of module stubs. At this stage, source code of the target module is compiled using the module stubs to generate a target program.
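A minimal sketch of the stubbing step, assuming a toy module model (the Module fields, the visibility rule, and the example modules are invented): each first order dependency keeps only members whose types are not defined in an indirect dependency of the target, and the target is then compiled against the resulting stubs.

    from dataclasses import dataclass, field

    @dataclass
    class Module:
        name: str
        deps: list = field(default_factory=list)     # first order dependencies
        members: dict = field(default_factory=dict)  # member name -> type name
        defined_types: set = field(default_factory=set)

    def make_stub(dep: Module, first_order: list) -> Module:
        # Types visible without indirect dependencies: the dep's own types plus
        # types defined by the target's first order dependencies.
        visible = dep.defined_types.union(*(m.defined_types for m in first_order))
        kept = {m: t for m, t in dep.members.items() if t in visible}
        return Module(dep.name + "_stub", [], kept, dep.defined_types)

    def compile_target(target: Module) -> str:
        stubs = [make_stub(d, target.deps) for d in target.deps]
        return f"compiled {target.name} against " + ", ".join(s.name for s in stubs)

    util = Module("util", defined_types={"Logger"}, members={"log": "Logger"})
    net  = Module("net", deps=[util], defined_types={"Socket"},
                  members={"open": "Socket", "trace": "Logger"})  # Logger is indirect
    app  = Module("app", deps=[net])
    print(compile_target(app))   # net's "trace" member is dropped from the stub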
Abstract:
Methods for reducing memory loads for accessing global variables (globals) when creating executables for position independent (PI) code are disclosed. A first method includes compiling PI code (404), identifying globals (405), and determining whether globals are defined in the executable (407). If a global is not defined in the executable, a definition is created in the executable (409). A second method includes receiving a list of defined globals from an instrumented PI code binary (601) and comparing the list with globals in the PI code (607). Memory loads are created for globals that are unlisted (609). A third method includes compiling PI code with special relocations for globals (801) and determining whether globals are defined in the executable (803). If a global is defined in the executable, the special relocation is replaced with a direct load of the global (805). If not, the special relocation is replaced with a two-instruction sequence that loads the global's address and then the global's value (805).
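A hedged sketch of the third method's substitution step only, using invented pseudo-instructions rather than any real instruction set, relocation type, or linker output format:

    def resolve_special_relocation(global_name, executable_defined_globals):
        """Rewrite a special relocation emitted for a global in PI code."""
        if global_name in executable_defined_globals:
            # Global is defined in the executable: a single direct load suffices.
            return [f"load  r0, {global_name}"]
        # Otherwise fall back to the two-instruction sequence: first load the
        # global's address, then load the global's value through it.
        return [f"load  r0, &{global_name}",
                "load  r0, [r0]"]

    for g in ("counter", "external_table"):
        print(g, resolve_special_relocation(g, {"counter"}))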
Abstract:
A method and system for file processing may include the steps of scanning a source file, identifying a target code block, and generating a first abstract syntax tree (AST) reflecting the structure of the target code block. The file processing method may further include the steps of identifying a position to place plugin code, placing the plugin code into the first AST, generating a second AST reflecting the structure of the target code block with the plugin code, and using a write-back interface to write the second AST into the source file.
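Because the steps map naturally onto tooling such as Python's ast module, a concrete and purely illustrative version of the flow might look like this; the target function name and plugin statement are made up, and ast.unparse stands in for the write-back interface.

    import ast

    source = """
    def target_block(x):
        return x * 2
    """

    tree = ast.parse(source)                       # first AST of the source file
    plugin = ast.parse("print('plugin: entering target_block')").body[0]

    for node in ast.walk(tree):
        # Identify the target code block and the position for the plugin code.
        if isinstance(node, ast.FunctionDef) and node.name == "target_block":
            node.body.insert(0, plugin)            # place plugin at position 0

    ast.fix_missing_locations(tree)                # second AST, now with the plugin
    new_source = ast.unparse(tree)                 # write the result back out
    print(new_source)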
Abstract:
Systems and methods for improving the performance of mobile applications are disclosed. An exemplary method can include receiving a request for an application, where the request can include target device information. The method can also determine whether the application has been cached before. If the application has not been cached, the method can download the application as bytecode and process the bytecode into native code format using an ahead-of-time (AOT) compiler. The method can also provide the application in the native code format to the target device over the network.
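The request handling can be sketched as follows; the in-memory cache, download, and compile functions are stand-ins (no real AOT compiler or network transfer is invoked), and the app identifier and ABI string are made up.

    native_cache = {}   # (app id, target ABI) -> "native code"

    def download_bytecode(app_id):
        return f"<bytecode for {app_id}>"

    def aot_compile(bytecode, target_abi):
        return f"<native {target_abi} build of {bytecode}>"

    def handle_request(app_id, target_device_info):
        abi = target_device_info["abi"]
        key = (app_id, abi)
        if key not in native_cache:                    # not cached before
            bytecode = download_bytecode(app_id)       # fetch the app as bytecode
            native_cache[key] = aot_compile(bytecode, abi)
        return native_cache[key]                       # serve native code

    print(handle_request("com.example.app", {"abi": "arm64-v8a"}))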