Abstract:
Methods, apparatus, and articles of manufacture to perform runtime trace filtering associated with application performance analysis are disclosed. A disclosed example method involves generating a first performance value based on first performance data associated with a first function of a first application process. A difference value is generated based on the first performance value and a historical performance value associated with the first function. The difference value is compared to a threshold value, and first trace data associated with execution of the first application process is collected based on the comparison of the difference value to the threshold value.
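A minimal sketch of the decision logic this abstract describes, under stated assumptions: the names, the use of an absolute difference, and the concrete numbers are illustrative, since the abstract does not say how the performance value, historical value, or threshold are derived.

```cpp
#include <cmath>
#include <cstdio>

// Hypothetical sketch: decide whether to collect trace data for a function
// based on how far its current performance value drifts from a historical one.
bool should_collect_trace(double perf_value, double historical, double threshold) {
    double difference = std::fabs(perf_value - historical);  // difference value
    return difference > threshold;                            // compare to threshold
}

int main() {
    // Example: the current call took 12.5 ms, the historical value is 10.0 ms,
    // and tracing is enabled only when the drift exceeds 2.0 ms.
    if (should_collect_trace(12.5, 10.0, 2.0)) {
        std::puts("collect trace data for this function");
    }
    return 0;
}
```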
Abstract:
Embodiments of the invention provide a programming model for CPU-GPU platforms. In particular, embodiments of the invention provide a uniform programming model for both integrated and discrete devices. The model also works uniformly for multiple GPU cards and hybrid GPU systems (discrete and integrated). This allows software vendors to write a single application stack and target it to all the different platforms. Additionally, embodiments of the invention provide a shared memory model between the CPU and GPU. Instead of sharing the entire virtual address space, only a part of the virtual address space needs to be shared. This allows efficient implementation in both discrete and integrated settings.
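A minimal sketch of the usage pattern implied by this model, assuming a hypothetical `shared_alloc` allocator and `offload_to_gpu` launcher (neither is a real API from the abstract): only data placed in a designated shared region is visible to both sides, and the rest of the address space stays private.

```cpp
#include <cstddef>
#include <cstdio>
#include <cstdlib>

// Hypothetical stand-in: a real implementation would reserve memory inside the
// shared part of the virtual address range; here it is simulated with malloc.
static void* shared_alloc(std::size_t bytes) {
    return std::malloc(bytes);
}

// Stand-in for launching a kernel on the GPU; simulated by running it on the CPU.
template <typename Kernel>
static void offload_to_gpu(Kernel kernel, float* shared_data, std::size_t n) {
    kernel(shared_data, n);
}

int main() {
    const std::size_t n = 4;
    float* data = static_cast<float*>(shared_alloc(n * sizeof(float)));  // shared
    for (std::size_t i = 0; i < n; ++i) data[i] = static_cast<float>(i);

    // In the described model the same pointer value is meaningful on both sides.
    offload_to_gpu([](float* p, std::size_t len) {
        for (std::size_t i = 0; i < len; ++i) p[i] *= 2.0f;
    }, data, n);

    std::printf("data[3] = %f\n", data[3]);
    std::free(data);
    return 0;
}
```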
Abstract:
A mechanism of image signal processing and color-space conversion is applied to convert the captured Green components into Y (luminance) components for only those pixels having raw Green data, without interpolation; to convert the Blue components into U (chrominance) components for only those pixels having raw Blue data; and to convert the Red components into V (chrominance) components for only those pixels having raw Red data. The converted YUV components are input to a predetermined video compression codec to reduce the intra- and inter-frame redundant information.
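A simplified sketch of one reading of this conversion, assuming an RGGB Bayer layout and a direct one-to-one mapping of each raw sample to a YUV sample; the abstract does not give the exact conversion formulas, so the pass-through mapping below is an assumption.

```cpp
#include <cstdint>
#include <vector>

// Assumed RGGB Bayer layout: each pixel carries exactly one raw component, and
// that component alone is mapped to a YUV sample (G to Y, B to U, R to V) with
// no interpolation of the missing components.
struct SparseYuv {
    std::vector<uint8_t> y;  // one sample per raw Green pixel
    std::vector<uint8_t> u;  // one sample per raw Blue pixel
    std::vector<uint8_t> v;  // one sample per raw Red pixel
};

SparseYuv convert_bayer_rggb(const std::vector<uint8_t>& raw, int width, int height) {
    SparseYuv out;
    for (int row = 0; row < height; ++row) {
        for (int col = 0; col < width; ++col) {
            uint8_t sample = raw[row * width + col];
            bool even_row = (row % 2 == 0), even_col = (col % 2 == 0);
            if (even_row && even_col)        out.v.push_back(sample);  // R pixel to V
            else if (!even_row && !even_col) out.u.push_back(sample);  // B pixel to U
            else                             out.y.push_back(sample);  // G pixel to Y
        }
    }
    return out;  // fed to a video compression codec in the described system
}

int main() {
    std::vector<uint8_t> raw(4 * 4, 128);     // a tiny 4x4 RGGB mosaic
    SparseYuv yuv = convert_bayer_rggb(raw, 4, 4);
    return static_cast<int>(yuv.y.size());    // 8 Green samples in a 4x4 tile
}
```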
Abstract:
The invention discloses an overlapping command-committing method for a dynamic-cycle pipeline, applied to a chip having a pipeline structure. The method comprises the following steps: reading a command from the command buffer; decoding the command; judging whether its operator is valid and, if the command is illegal, deleting it; otherwise preprocessing the operator of the command and preparing the initial operator of each pipeline; observing the status of the pipeline and waiting for the pipeline's command-exit signal; and judging whether there is a command dependence and, if not, committing a new command to the pipeline while the previous command is exiting the last cycle of the pipeline. The overlapping command-committing method of the invention avoids the appearance of bubbles, improves the parallelism of the pipeline execution units, and thus shortens the processing period of a command in the chip, letting the chip process more commands per unit time.
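A sketch of the commit flow described above, under stated assumptions: `Command`, the validity flag, the dependence flag, and `pipeline_exit_signaled()` are hypothetical stand-ins for the chip's real pipeline interfaces.

```cpp
#include <cstdio>
#include <queue>
#include <string>

struct Command {
    std::string op;
    bool legal;                 // result of operator validation
    bool depends_on_previous;   // command dependence on the in-flight command
};

bool pipeline_exit_signaled() { return true; }  // stand-in for the hardware signal

void commit_loop(std::queue<Command>& command_buffer) {
    while (!command_buffer.empty()) {
        Command cmd = command_buffer.front();   // read from the command buffer
        command_buffer.pop();

        if (!cmd.legal) {                       // illegal command: delete and continue
            std::printf("dropping illegal command %s\n", cmd.op.c_str());
            continue;
        }

        // Preprocess the command's operator, then observe the pipeline status
        // until the previous command signals it is exiting its last cycle.
        while (!pipeline_exit_signaled()) { /* wait for the exit signal */ }

        if (!cmd.depends_on_previous) {
            // No dependence: commit the new command while the previous one is
            // still draining, so no bubble is inserted.
            std::printf("overlapped commit of %s\n", cmd.op.c_str());
        } else {
            // Dependence: the commit must wait, which is where a bubble appears.
            std::printf("stalled commit of %s\n", cmd.op.c_str());
        }
    }
}

int main() {
    std::queue<Command> buf;
    buf.push({"add", true, false});
    buf.push({"bad", false, false});
    buf.push({"mul", true, true});
    commit_loop(buf);
    return 0;
}
```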
Abstract:
This invention relates to a noninvasive method for urodynamics testing and analysis, comprising: modeling the bladder before the release of urine as a topological sphere; modeling the circle formed by cutting the topological sphere through its center as an elastic element; determining a functional relation between the length L of the elastic element and the urine volume a within the bladder, L = F(a); determining a functional relation between the length contraction ΔL of the elastic element and both the urinary flow rate Q and the urine volume a within the bladder, ΔL = ξ(Q, a); determining a functional relation between the contraction velocity ν of the elastic element and the length contraction ΔL of the elastic element, ν = ΔL; and calculating a value of an index DC for assessing bladder contractility, to determine the bladder contractility of the subject.
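Restated in display form, the functional relations given in the abstract (the concrete forms of F and ξ, and the formula for the index DC, are not specified there; the third relation is quoted as given):

```latex
\begin{align*}
  L        &= F(a)      && \text{elastic-element length vs.\ urine volume } a\\
  \Delta L &= \xi(Q, a) && \text{length contraction vs.\ flow rate } Q \text{ and volume } a\\
  \nu      &= \Delta L  && \text{contraction velocity of the elastic element}
\end{align*}
```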
Abstract:
A computing platform may include heterogeneous processors (e.g., a CPU and a GPU) to support sharing of virtual functions between such processors. In one embodiment, a CPU-side vtable pointer used to access a shared object from the CPU 110 may be used to determine a GPU vtable if a GPU-side table exists. In another embodiment, a shared non-coherent region, which may not maintain data consistency, may be created within the shared virtual memory. The CPU-side and GPU-side data stored within the shared non-coherent region may have the same address as seen from the CPU side and the GPU side. However, the contents of the CPU-side data may differ from those of the GPU-side data, as the shared virtual memory may not maintain coherency during run-time. In one embodiment, the vptr may be modified to point to the CPU vtable and the GPU vtable stored in the shared virtual memory.
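An illustrative sketch of the lookup step described above: given the CPU-side vtable pointer embedded in a shared object, find the corresponding GPU-side vtable if one exists. The registry, the `vtable_of()` helper, and the fake GPU table are hypothetical, and reading the vptr this way relies on a common but non-portable object layout.

```cpp
#include <cstdio>
#include <unordered_map>

using VTablePtr = const void*;

// Hypothetical registry mapping a CPU-side vtable to its GPU-side counterpart.
static std::unordered_map<VTablePtr, VTablePtr> gpu_vtable_for_cpu_vtable;

struct Shared {                     // a shared object with virtual functions
    virtual ~Shared() = default;
    virtual int work(int x) const { return x + 1; }
};

VTablePtr vtable_of(const Shared& obj) {
    // Layout assumption: the hidden vptr occupies the first word of the object.
    return *reinterpret_cast<VTablePtr const*>(&obj);
}

int main() {
    Shared obj;
    VTablePtr cpu_vt = vtable_of(obj);

    // Registration would normally happen when the GPU-side table is generated.
    static const int fake_gpu_table[4] = {};     // stand-in for a real GPU vtable
    gpu_vtable_for_cpu_vtable[cpu_vt] = fake_gpu_table;

    auto it = gpu_vtable_for_cpu_vtable.find(cpu_vt);
    std::printf("GPU vtable %s\n",
                it != gpu_vtable_for_cpu_vtable.end() ? "found" : "not registered");
    return 0;
}
```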
Abstract:
Embodiments of the invention provide language support for CPU-GPU platforms. In one embodiment, code can be flexibly executed on both the CPU and the GPU. CPU code can offload a kernel to the GPU. That kernel may in turn call preexisting libraries on the CPU, or make other calls into CPU functions. This allows an application to be built without requiring the entire call chain to be recompiled. Additionally, in one embodiment data may be shared seamlessly between the CPU and the GPU, including objects that have virtual functions. Embodiments thus ensure that the right virtual function is invoked on the CPU or the GPU when a virtual function is called by either side.
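A minimal sketch of the call pattern described above, assuming a hypothetical `offload()` launcher: a preexisting "CPU library" function is called from inside an offloaded kernel without recompiling the whole call chain. In the described model the runtime would arrange for that call to be serviced on the CPU; here the launch is simply simulated locally.

```cpp
#include <cmath>
#include <cstdio>

// Preexisting CPU-side library code that is not recompiled for the GPU.
double cpu_library_log(double x) {
    return std::log(x);
}

// Stand-in for a real GPU kernel launch; runs the kernel in place.
template <typename Kernel>
void offload(Kernel kernel) { kernel(); }

int main() {
    double result = 0.0;
    offload([&]() {
        // Kernel body: calls back into the preexisting CPU library function.
        result = cpu_library_log(2.718281828);
    });
    std::printf("result = %f\n", result);
    return 0;
}
```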
Abstract:
For an error-resilient pipeline, a Dynamically Adaptable Resilient Pipeline (DARP) controller determines the minimum-error pipeline stage of a processor instruction pipeline, the stage with the minimum number of errors. In addition, the DARP controller determines the maximum-error pipeline stage, the stage with the maximum number of errors. The DARP controller increases the clock frequency for the instruction pipeline if the minimum number of errors of the minimum-error pipeline stage is zero and the maximum number of errors of the maximum-error pipeline stage does not exceed an error threshold. In addition, the DARP controller decreases the clock frequency if the minimum number of errors exceeds an error constant.
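A sketch of the frequency-adjustment rule stated above. The per-stage error counts, the error threshold, the error constant, and the frequency step are all illustrative values; the abstract does not give concrete numbers or units.

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// Apply the DARP rule: speed up when the cleanest stage has zero errors and the
// worst stage is within the threshold; slow down when even the cleanest stage
// exceeds the error constant; otherwise hold the current frequency.
double adjust_clock(double freq_mhz, const std::vector<int>& stage_errors,
                    int error_threshold, int error_constant, double step_mhz) {
    int min_errors = *std::min_element(stage_errors.begin(), stage_errors.end());
    int max_errors = *std::max_element(stage_errors.begin(), stage_errors.end());

    if (min_errors == 0 && max_errors <= error_threshold)
        return freq_mhz + step_mhz;   // headroom available: increase frequency
    if (min_errors > error_constant)
        return freq_mhz - step_mhz;   // every stage is erroring: decrease frequency
    return freq_mhz;
}

int main() {
    std::vector<int> errors_per_stage = {0, 1, 2, 0, 1};  // one count per stage
    double f = adjust_clock(1000.0, errors_per_stage, 3, 2, 50.0);
    std::printf("new frequency: %.1f MHz\n", f);
    return 0;
}
```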
Abstract:
Various embodiments are generally directed to an apparatus and method for configuring an execution environment in user space for device driver operations and redirecting a device driver operation for execution in that execution environment, including copying instructions of the device driver operation from kernel space to a user process in user space. In addition, the redirected device driver operation may be executed in the execution environment in user space.
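An illustrative sketch of the redirection idea only: a dispatcher that routes named driver operations to handlers running in a user-space execution environment and falls back to the in-kernel path otherwise. All names here are hypothetical, and the actual mechanism of copying the operation's instructions into a user process is not modeled.

```cpp
#include <cstdio>
#include <functional>
#include <string>
#include <unordered_map>

using Handler = std::function<void()>;

struct Dispatcher {
    // Operations registered here have been set up to run in the user-space
    // execution environment; everything else stays on the kernel path.
    std::unordered_map<std::string, Handler> user_space_handlers;

    void invoke(const std::string& op) {
        auto it = user_space_handlers.find(op);
        if (it != user_space_handlers.end()) {
            it->second();                       // redirected: run in user space
        } else {
            std::printf("%s handled in kernel space\n", op.c_str());
        }
    }
};

int main() {
    Dispatcher d;
    d.user_space_handlers["read_register"] = [] {
        std::printf("read_register executed in the user-space environment\n");
    };
    d.invoke("read_register");   // redirected operation
    d.invoke("reset_device");    // non-redirected operation
    return 0;
}
```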