-
Publication Number: US09880875B2
Publication Date: 2018-01-30
Application Number: US14692354
Application Date: 2015-04-21
Inventors: Jongchul Park, Jinkyu Koo, Sangbok Han, Myungsun Kim
IPC Classification: G06F9/48
CPC Classification: G06F9/4881, G06F2209/483, G06F2209/486, G06F2209/509, Y02B70/1441, Y02D10/24
Abstract: Provided are a method and apparatus for hardware-based task scheduling. The method, performed in a hardware-based scheduler accelerator, includes: managing task-related information for the tasks in a system; updating the task-related information in response to a request from a CPU; selecting, for each CPU, a candidate task to run next after the currently running task on the basis of the updated task-related information; and providing the selected candidate task to each CPU. The scheduler accelerator supports this hardware-based task scheduling method.
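The abstract leaves the selection policy unspecified; as a rough illustration only, the following Python sketch models the accelerator's per-CPU bookkeeping with an assumed priority-then-runtime ordering (names such as SchedulerAcceleratorModel and pick_candidate are invented for this sketch, not taken from the patent).

```python
# Minimal software model of the bookkeeping such an accelerator could keep:
# one runnable queue per CPU, updated on request, with the "next" candidate
# chosen by an assumed priority-then-runtime policy.
import heapq

class SchedulerAcceleratorModel:
    def __init__(self, num_cpus):
        # Per-CPU heaps of (priority, accrued_runtime, task_name).
        self.queues = {cpu: [] for cpu in range(num_cpus)}

    def update(self, cpu, task_name, priority, runtime):
        # Called when a CPU reports new task-related information.
        heapq.heappush(self.queues[cpu], (priority, runtime, task_name))

    def pick_candidate(self, cpu):
        # Select the task to run next after the currently running one.
        queue = self.queues[cpu]
        return heapq.heappop(queue)[2] if queue else None

acc = SchedulerAcceleratorModel(num_cpus=2)
acc.update(0, "io_worker", priority=1, runtime=5)
acc.update(0, "ui_thread", priority=0, runtime=9)
print(acc.pick_candidate(0))  # ui_thread (lower priority value wins under the assumed policy)
```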
-
Publication Number: US09870267B2
Publication Date: 2018-01-16
Application Number: US11386443
Application Date: 2006-03-22
Applicants: Anthony Nguyen, Engin Ipek, Victor Lee, Daehyun Kim, Mikhail Smelyanskiy
Inventors: Anthony Nguyen, Engin Ipek, Victor Lee, Daehyun Kim, Mikhail Smelyanskiy
CPC Classification: G06F9/5044, G06F15/8053, G06F2209/509
Abstract: Methods and apparatus to provide virtualized vector processing are disclosed. In one embodiment, a processor includes a decode unit that decodes a first instruction and a second instruction, and an execution unit that executes them. Executing the decoded first instruction allocates a first portion of the operations corresponding to a virtual vector request to a first processor core and generates a signal that causes a second portion of those operations to be allocated to a second processor core. Executing the decoded second instruction causes the computational results of the first and second portions to be aggregated and stored to a memory location.
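The claimed behavior is an instruction pair in hardware; the sketch below is only a loose software analogy, assuming Python's concurrent.futures as a stand-in for the two cores and an invented function name (virtual_vector_request).

```python
# Rough software analogy: the first "instruction" splits a virtual vector
# request across two workers (standing in for two processor cores); the
# second aggregates both partial results and stores them to one location.
from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk):
    # Work assigned to one core: operate on its portion of the vector.
    return sum(x * x for x in chunk)

def virtual_vector_request(vector):
    mid = len(vector) // 2
    first, second = vector[:mid], vector[mid:]           # allocate the two portions
    with ProcessPoolExecutor(max_workers=2) as pool:
        f1 = pool.submit(partial_sum, first)             # "first processor core"
        f2 = pool.submit(partial_sum, second)            # "second processor core"
        result = f1.result() + f2.result()               # aggregation step
    memory_location = {"sum_of_squares": result}         # store the aggregate
    return memory_location

if __name__ == "__main__":
    print(virtual_vector_request(list(range(8))))  # {'sum_of_squares': 140}
```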
-
Publication Number: US09830191B2
Publication Date: 2017-11-28
Application Number: US14253740
Application Date: 2014-04-15
Applicant: Seven Networks, Inc.
Inventor: Abhay Nirantar
CPC Classification: G06F9/5027, G06F9/485, G06F2209/509, Y02D10/22, Y02D10/24
Abstract: Techniques for temporarily and/or partially offloading mobile applications to one or more remote virtual machines in a server include establishing, at a remote virtual machine, a copy of a mobile application installed on a mobile device, suspending the mobile application on the mobile device, and offloading its operations to the application copy at the remote virtual machine for a period of time. Suspending the mobile application and offloading its operations for that period reduces resource consumption on the mobile device. The virtual machine executes the application copy in the same manner the mobile device would execute the mobile application and, at the end of the period, transfers the resulting data to the mobile application so it can update itself and resume operation without any loss of data or functionality.
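Below is a toy, in-process simulation of the suspend/offload/resume flow described above, with plain Python objects standing in for the mobile device and the remote virtual machine; the class names and the state format are assumptions made for illustration.

```python
# Toy simulation of the suspend/offload/resume flow from the abstract.
import copy

class AppInstance:
    def __init__(self, state=None):
        self.state = state or {"unread": 0}

    def run(self, seconds):
        # Stand-in for normal execution: accumulate some state over time.
        self.state["unread"] += seconds

class RemoteVM:
    app_copy = None

class MobileDevice:
    def __init__(self):
        self.app = AppInstance()
        self.suspended = False

    def offload(self, vm, seconds):
        vm.app_copy = copy.deepcopy(self.app)   # establish the application copy
        self.suspended = True                   # suspend the local app
        vm.app_copy.run(seconds)                # VM executes on the device's behalf
        self.app.state = vm.app_copy.state      # transfer data back at period end
        self.suspended = False                  # resume without loss of data

device, vm = MobileDevice(), RemoteVM()
device.offload(vm, seconds=5)
print(device.app.state)  # {'unread': 5} -- state produced remotely, resumed locally
```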
-
Publication Number: US20170308401A1
Publication Date: 2017-10-26
Application Number: US15633645
Application Date: 2017-06-26
CPC Classification: G06F9/5027, G06F2209/509, G06F2209/549
Abstract: A service provider may provide a companion container instance associated with a mobile device in order to facilitate operation of the mobile device. The companion container instance and the mobile device may be associated in a database operated by the service provider. Furthermore, the companion container instance may execute various operations on behalf of the mobile device based at least in part on a task definition indicating a software function to be executed by the companion container instance; the software function is configured to execute those operations on behalf of the mobile device.
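As a minimal sketch, assuming an invented registry layout, field names, and functions (not the provider's actual API), this shows how a task definition naming a software function might be resolved and run on the device's behalf by its companion instance.

```python
# Sketch of a task definition naming the software function a companion
# container instance runs on a mobile device's behalf. All names are assumed.
TASK_DEFINITIONS = {
    "thumbnail-task": {
        "function": "generate_thumbnails",   # software function to execute
        "memory_mb": 256,
    }
}

def generate_thumbnails(device_id, payload):
    return f"thumbnails for {len(payload)} photos from {device_id}"

FUNCTIONS = {"generate_thumbnails": generate_thumbnails}

# Association between devices and companion container instances, standing in
# for the provider-operated database mentioned in the abstract.
DEVICE_TO_CONTAINER = {"device-123": "container-abc"}

def run_on_behalf_of(device_id, task_name, payload):
    container = DEVICE_TO_CONTAINER[device_id]   # look up the companion instance
    task = TASK_DEFINITIONS[task_name]
    fn = FUNCTIONS[task["function"]]             # resolve the named software function
    return container, fn(device_id, payload)

print(run_on_behalf_of("device-123", "thumbnail-task", ["a.jpg", "b.jpg"]))
```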
-
Publication Number: US20170286170A1
Publication Date: 2017-10-05
Application Number: US15084333
Application Date: 2016-03-29
Inventors: Arup De, Kiran Kumar Gunnam
CPC Classification: G06F3/067, G06F3/0613, G06F3/0647, G06F9/5033, G06F2209/509
Abstract: Systems and methods for offloading processing from a host to one or more storage processing units (SPUs) over an interconnection network are provided. One such system includes a host having a processing task, a plurality of SPUs, a host interface configured to enable communication between the host and each of the SPUs, and an interconnection network coupled to at least two of the SPUs. The host is configured to command at least one of the SPUs to perform the processing task and to command the interconnection network to couple two or more of the SPUs.
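An illustrative software model of the host/SPU/interconnect split described above follows, assuming invented classes (StorageProcessingUnit, InterconnectionNetwork, Host) rather than any real driver interface.

```python
# Sketch: the host commands SPUs to perform a task and commands the
# interconnection network to couple them. Classes and names are assumed.
class StorageProcessingUnit:
    def __init__(self, name):
        self.name, self.peers = name, []

    def perform(self, task, data):
        return task(data)

class InterconnectionNetwork:
    def couple(self, a, b):
        # Host-commanded coupling of two SPUs so they can exchange data.
        a.peers.append(b)
        b.peers.append(a)

class Host:
    def __init__(self, spus, network):
        self.spus, self.network = spus, network

    def offload(self, task, data):
        self.network.couple(self.spus[0], self.spus[1])     # command the interconnect
        half = len(data) // 2
        return (self.spus[0].perform(task, data[:half])     # command SPU 0
                + self.spus[1].perform(task, data[half:]))  # command SPU 1

spus = [StorageProcessingUnit("spu0"), StorageProcessingUnit("spu1")]
host = Host(spus, InterconnectionNetwork())
print(host.offload(sum, [1, 2, 3, 4]))  # 10, computed on the SPUs instead of the host
```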
-
Publication Number: US09760159B2
Publication Date: 2017-09-12
Application Number: US14682088
Application Date: 2015-04-08
Inventors: Andrew R. Putnam, Douglas Christopher Burger, Stephen F. Heil, Eric S. Chung, Adrian M. Caulfield
CPC Classification: G06F1/3287, G06F1/3293, G06F9/5044, G06F9/5094, G06F2209/509, Y02D10/122, Y02D10/171
Abstract: Dynamic power routing reroutes power from other components, which are transitioned to lower power-consuming states, so that hardware accelerators can process computational tasks more efficiently while staying within electrical power thresholds that could not otherwise accommodate simultaneous full-power operation of those components and the accelerators. Once a portion of a workflow is being processed by hardware accelerators, the workflow or the accelerators can be self-throttling to stay within power thresholds, or they can be throttled by independent coordinators, including device-centric and system-wide coordinators. Additionally, predictive mechanisms can obtain available power in advance by proactively transitioning other components to reduced power-consuming states, or reactive mechanisms can transition components to reduced power-consuming states only when a specific need for increased hardware accelerator power is identified.
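A toy coordinator showing the budgeting idea only: before granting an accelerator its draw, other components are stepped down to reduced-power states until the total fits under a threshold. The components, wattages, and threshold below are illustrative assumptions, not figures from the patent.

```python
# Toy power-routing coordinator: transition other components to lower-power
# states until the accelerator's draw fits under the overall power threshold.
POWER_THRESHOLD_W = 100

components = {            # component -> (active watts, reduced-state watts)
    "cpu":  (40, 25),
    "dram": (20, 12),
    "nic":  (15, 5),
}

def route_power_for(accelerator_draw_w):
    state = {name: active for name, (active, _) in components.items()}
    total = sum(state.values()) + accelerator_draw_w
    for name, (active, reduced) in components.items():
        if total <= POWER_THRESHOLD_W:
            break
        total -= state[name] - reduced      # reclaim power from this component
        state[name] = reduced               # transition it to a reduced-power state
    if total > POWER_THRESHOLD_W:
        raise RuntimeError("cannot fit accelerator within the power threshold")
    return state, total

state, total = route_power_for(accelerator_draw_w=45)
print(state, total)  # cpu and dram throttled; total stays at or under 100 W
```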
-
Publication Number: US20170255489A1
Publication Date: 2017-09-07
Application Number: US15446311
Application Date: 2017-03-01
Inventors: Ted Abebe, Jonathan C. Gray, Richard Allen Kerslake, Timothy D. Karnes, Edward J. Nadrotowicz, JR., Lawrence A. Lagrosa, Joseph S. Rizzo, JR., Scott Alan Loverich
CPC Classification: G06F9/505, G06F2209/509
Abstract: Various embodiments are directed to a distributed computing task processor comprising a central computing entity and one or more mobile computing entities. The central computing entity is configured to generate task records corresponding to one or more tasks and to delegate various tasks to the one or more mobile computing entities. The mobile computing entities, in turn, are configured to complete the assigned tasks by retrieving data from various locations within a physical environment and/or by generating data locally on the mobile computing entity to be included in the task record. The mobile computing entities may be configured to transmit the updated task records to the central computing entity for storage.
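A small sketch of the task-record round trip follows, assuming invented record fields and class names; it mirrors the create/delegate/update/store flow only, not any specific embodiment.

```python
# Sketch of the task-record flow: a central entity creates records, delegates
# them to mobile entities, and stores the updated records they send back.
from dataclasses import dataclass, field
import itertools

@dataclass
class TaskRecord:
    task_id: int
    description: str
    collected_data: dict = field(default_factory=dict)
    status: str = "pending"

class CentralEntity:
    _ids = itertools.count(1)

    def __init__(self):
        self.store = {}

    def create_task(self, description):
        rec = TaskRecord(next(self._ids), description)
        self.store[rec.task_id] = rec
        return rec

    def receive_update(self, record):
        self.store[record.task_id] = record   # persist the completed record

class MobileEntity:
    def complete(self, record, central):
        # Data generated locally on the mobile entity, added to the record.
        record.collected_data["scan"] = f"reading for: {record.description}"
        record.status = "complete"
        central.receive_update(record)        # transmit the updated record back

central = CentralEntity()
rec = central.create_task("verify inventory at aisle 7")
MobileEntity().complete(rec, central)
print(central.store[1].status)  # complete
```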
-
Publication Number: US09727942B2
Publication Date: 2017-08-08
Application Number: US14065528
Application Date: 2013-10-29
Inventor: Norio Nagai
CPC Classification: G06T1/20, G06F9/5044, G06F17/30442, G06F17/30519, G06F2209/509
Abstract: A method for the selective use of graphics processing unit (GPU) acceleration of database queries in database management is provided. The method includes receiving a database query in a database management system executing in memory of a host computing system. The method also includes estimating a time to complete processing of one or more operations of the database query using GPU-accelerated computing on a GPU and a time to complete processing of the same operations using sequential computing on a central processing unit (CPU). Finally, the method routes the operations for GPU-accelerated processing if the estimated GPU time is less than the estimated CPU time, and otherwise routes them for CPU sequential processing.
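The routing rule itself reduces to a comparison of two time estimates; the sketch below shows that rule with the cost estimators stubbed out by assumed linear models (the abstract does not disclose the real estimators).

```python
# Core routing rule: send the operations wherever the estimated completion
# time is lower. The estimate functions are placeholders, not the patent's models.
def estimate_gpu_seconds(op):
    return 0.5 + 0.001 * op["rows"]   # assumed: fixed transfer cost + fast scan

def estimate_cpu_seconds(op):
    return 0.01 * op["rows"]          # assumed: no transfer cost, slower scan

def route(op):
    if estimate_gpu_seconds(op) < estimate_cpu_seconds(op):
        return "GPU accelerated computing"
    return "CPU sequential computing"

print(route({"rows": 1_000_000}))  # GPU accelerated computing
print(route({"rows": 10}))         # CPU sequential computing
```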
-
Publication Number: US09720726B2
Publication Date: 2017-08-01
Application Number: US13534900
Application Date: 2012-06-27
CPC Classification: G06F9/4843, G06F9/5044, G06F2209/5017, G06F2209/509
Abstract: A method and an apparatus are described that partition a total number of threads to concurrently execute executable codes compiled from a single source for target processing units, in response to an API (Application Programming Interface) request from an application running in a host processing unit. The total number of threads is based on a multi-dimensional value for a global thread number specified in the API. The target processing units include GPUs (graphics processing units) and CPUs (central processing units). Thread group sizes for the target processing units are determined to partition the total number of threads according to either a dimension of a data-parallel task associated with the executable codes or a dimension of a multi-dimensional value for a local thread group number. The executable codes are loaded and executed concurrently in thread groups of the determined sizes on the target processing units.
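Here is a sketch of the partitioning arithmetic, assuming a two-dimensional global thread number and local thread-group size; the function names are invented and only the group-count math is shown, not the runtime's actual API.

```python
# Given a multi-dimensional global thread count and a local thread-group size,
# compute the number of thread groups and split them across target devices.
from math import ceil

def partition(global_size, local_size):
    # One thread group spans local_size threads in each dimension.
    groups_per_dim = [ceil(g / l) for g, l in zip(global_size, local_size)]
    total_groups = 1
    for n in groups_per_dim:
        total_groups *= n
    return groups_per_dim, total_groups

def split_across_devices(total_groups, devices):
    base, extra = divmod(total_groups, len(devices))
    return {d: base + (1 if i < extra else 0) for i, d in enumerate(devices)}

groups_per_dim, total = partition(global_size=(1024, 768), local_size=(16, 16))
print(groups_per_dim, total)                          # [64, 48] 3072
print(split_across_devices(total, ["gpu0", "cpu0"]))  # {'gpu0': 1536, 'cpu0': 1536}
```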
-
Publication Number: US09720708B2
Publication Date: 2017-08-01
Application Number: US13214083
Application Date: 2011-08-19
Applicant: Eric R. Caspole
Inventor: Eric R. Caspole
CPC Classification: G06F9/445, G06F9/5055, G06F2209/509
Abstract: Techniques are disclosed relating to data transformation for distributing workloads between processors or cores within a processor. In various embodiments, a first processing element receives a set of bytecode that specifies a set of tasks and a first data structure identifying the data to be operated on during performance of those tasks. The first data structure is stored non-contiguously in memory of the computer system. In response to determining to offload the set of tasks to a second processing element of the computer system, the first processing element generates a second data structure that specifies the same data and is stored contiguously in memory. The first processing element provides the second data structure to the second processing element for performance of the set of tasks.
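A minimal sketch of the transformation step, assuming a linked list as the non-contiguous first data structure and a packed array as the contiguous second one; names such as flatten are invented for illustration and do not reflect the patent's bytecode model.

```python
# Walk a non-contiguous structure (a linked list of nodes scattered through
# memory) and pack its payload into one contiguous buffer before handing it
# to another processing element.
from array import array

class Node:
    def __init__(self, value, nxt=None):
        self.value, self.next = value, nxt

def flatten(head):
    contiguous = array("d")        # contiguous block of doubles
    node = head
    while node is not None:
        contiguous.append(node.value)
        node = node.next
    return contiguous

# Build a small linked list (non-contiguous in memory), then flatten it.
head = Node(1.0, Node(2.0, Node(3.0)))
buf = flatten(head)
print(list(buf), buf.itemsize * len(buf), "bytes, contiguous")  # [1.0, 2.0, 3.0] 24 bytes, contiguous
```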