-
Publication Number: US10754953B2
Publication Date: 2020-08-25
Application Number: US16109870
Application Date: 2018-08-23
Inventor: Hai Jin, Weiqi Dai, Jun Deng, Deqing Zou
Abstract: The present invention provides a TrustZone-based security isolation system for shared libraries, the system at least comprising a sandbox creator, a library controller, and an interceptor. The sandbox creator, in the normal world, dynamically creates a sandbox isolated from the Rich OS; the interceptor intercepts corresponding system-call information and/or Android framework APIs by means of inter-process stack inspection; the library controller performs analysis based on the intercepted system-call information and/or Android framework APIs, redirects library functions to the sandbox, switches calling states of the library functions in the sandbox, and sets up library authority. The present invention has good versatility, low cost, and high security. It realizes isolation of the library without enlarging the trusted base in the Secure World of TrustZone, effectively reducing the risk of attack.
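To make the stack-inspection interception idea concrete, below is a minimal Python sketch of intercepting a library call and checking the caller before redirecting it. The trusted-caller whitelist, function names, and policy are hypothetical stand-ins for illustration, not the patented TrustZone mechanism.

```python
import inspect

# Hypothetical whitelist of top-level modules allowed to call into the library.
TRUSTED_CALLERS = {"__main__", "app_main"}

def sandboxed(func):
    """Wrap a shared-library function: inspect the caller's stack frame and
    forward the call only if it originates from a trusted module."""
    def wrapper(*args, **kwargs):
        caller_frame = inspect.stack()[1].frame
        module = inspect.getmodule(caller_frame)
        name = (module.__name__ if module else "<unknown>").split(".")[0]
        if name not in TRUSTED_CALLERS:
            raise PermissionError(f"call from '{name}' blocked by sandbox policy")
        return func(*args, **kwargs)   # call proceeds inside the "sandbox"
    return wrapper

@sandboxed
def shared_lib_function(x):
    return x * 2

if __name__ == "__main__":
    print(shared_lib_function(21))     # allowed: caller module is __main__
```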
-
Publication Number: US10735329B2
Publication Date: 2020-08-04
Application Number: US16299820
Application Date: 2019-03-12
Inventor: Duoqiang Wang, Hai Jin, Chi Zhang
IPC: G06F9/54, H04L12/801, H04L9/12, H04L9/14
Abstract: A container communication method for parallel applications and a system using the method are disclosed. The method includes: when a first process of a first container has to communicate with a second process of a second container and the two containers reside in the same host machine, creating a second channel that is different from the TCP (Transmission Control Protocol)-based first channel; the first container sending communication data to a shared memory area assigned to the first container and/or the second container by the host machine, and sending metadata of the communication data to the second container through the first channel; and when the second process acknowledges receipt of the communication data based on the received metadata, transmitting the communication data to the second container through the second channel and feeding acknowledgement of the data back to the first process through the first channel.
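As an illustration of the two-channel design, the sketch below uses Python's multiprocessing.shared_memory for the bulk-data channel, with a Pipe standing in for the TCP-based metadata channel; all names are hypothetical, and the real method operates between containers rather than between processes.

```python
from multiprocessing import Process, Pipe, shared_memory

def sender(meta_conn, payload: bytes):
    # Second channel: place the bulk data in a shared memory segment.
    shm = shared_memory.SharedMemory(create=True, size=len(payload))
    shm.buf[:len(payload)] = payload
    # First channel: send only metadata (segment name and length).
    meta_conn.send((shm.name, len(payload)))
    assert meta_conn.recv() == "ACK"        # acknowledgement over first channel
    shm.close()
    shm.unlink()

def receiver(meta_conn):
    name, size = meta_conn.recv()           # metadata arrives on first channel
    shm = shared_memory.SharedMemory(name=name)
    data = bytes(shm.buf[:size])            # bulk data read via second channel
    shm.close()
    meta_conn.send("ACK")
    print("received:", data)

if __name__ == "__main__":
    a, b = Pipe()
    p = Process(target=receiver, args=(b,))
    p.start()
    sender(a, b"hello from container 1")
    p.join()
```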
-
Publication Number: US10678584B2
Publication Date: 2020-06-09
Application Number: US16135598
Application Date: 2018-09-19
Inventor: Fangming Liu, Hai Jin, Xiaoyao Li
Abstract: The present invention relates to an FPGA-based method and system for network function acceleration. The method comprises: building a network function accelerating system that includes a physical machine and an accelerator card connected through a PCIe channel, wherein the physical machine includes a processor and the accelerator card includes an FPGA that provides network function acceleration for the processor; the processor being configured to: when it requires the accelerator card to provide acceleration, check whether the required accelerator module is present in the FPGA; if yes, acquire the accelerating function ID corresponding to the required accelerator module; if not, select at least one partial-reconfigurable region in the FPGA, configure it into the required accelerator module, and generate a corresponding accelerating function ID; and/or send an accelerating request to the FPGA.
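A minimal Python sketch of the lookup-or-reconfigure flow described above; the FpgaManager class, region bookkeeping, and ID generation are illustrative assumptions, and the actual loading of a partial bitstream is elided.

```python
import itertools

_id_gen = itertools.count(1)

class FpgaManager:
    """Sketch of the accelerator lookup-or-reconfigure flow (names hypothetical)."""
    def __init__(self, num_pr_regions: int):
        self.free_regions = list(range(num_pr_regions))
        self.loaded = {}   # network function name -> accelerating function ID

    def get_accel_id(self, nf_name: str) -> int:
        if nf_name in self.loaded:            # module already present in the FPGA
            return self.loaded[nf_name]
        if not self.free_regions:
            raise RuntimeError("no free partial-reconfigurable region")
        region = self.free_regions.pop(0)     # select a PR region ...
        # ... and (on real hardware) load the partial bitstream for nf_name here
        accel_id = next(_id_gen)
        self.loaded[nf_name] = accel_id
        print(f"configured region {region} as '{nf_name}' -> ID {accel_id}")
        return accel_id

if __name__ == "__main__":
    fpga = FpgaManager(num_pr_regions=2)
    fpga.get_accel_id("firewall")   # miss: triggers partial reconfiguration
    fpga.get_accel_id("firewall")   # hit: reuses the existing accelerator ID
```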
-
Publication Number: US20190278573A1
Publication Date: 2019-09-12
Application Number: US16216155
Application Date: 2018-12-11
Inventor: Xuanhua SHI, Hai Jin, Zhixiang Ke, Wenchao Wu
Abstract: The present invention relates to a method of memory estimation and configuration optimization for a distributed data processing system. The method involves matching an application data stream, which has undergone analysis and processing of the conditional branches and/or loop bodies of the application code in the application's Java archive, against a data feature library; estimating a memory limit for at least one stage of the application based on the successful match; optimizing configuration parameters of the application accordingly; and acquiring static features and/or dynamic features of the application data from runs of the optimized application and recording them persistently. Unlike machine-learning-based memory estimation, which neither ensures accuracy nor provides fine-grained estimation for individual stages, this method uses application analysis and existing data features to estimate overall memory occupation more precisely and to estimate the memory use of individual job stages for more fine-grained configuration optimization.
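The matching-then-estimating step might look like the following sketch; the feature library contents, application signatures, and per-record memory costs are invented purely for illustration.

```python
# Hypothetical feature library: application signature -> per-record memory cost.
FEATURE_LIBRARY = {
    ("wordcount", "shuffle-heavy"): 48,   # bytes of heap per input record
    ("pagerank",  "iterative"):     96,
}

def estimate_stage_memory(app_signature, records_per_stage):
    """Estimate a memory limit (in MB) for each stage by matching the
    application's analyzed signature against the feature library."""
    per_record = FEATURE_LIBRARY.get(app_signature)
    if per_record is None:
        raise KeyError("no match in feature library; fall back to profiling")
    return [n * per_record / 2**20 for n in records_per_stage]

if __name__ == "__main__":
    stages = [10_000_000, 2_500_000]        # records entering each job stage
    print(estimate_stage_memory(("wordcount", "shuffle-heavy"), stages))
```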
-
Publication Number: US20190244402A1
Publication Date: 2019-08-08
Application Number: US16237053
Application Date: 2018-12-31
Inventor: Qiangsheng HUA, Xuanhua Shi, Hai Jin, Yangyang Li
IPC: G06T11/20, G06F16/2458, G06F16/901
CPC classification number: G06T11/206, G06F16/2465, G06F16/9024, G06F2216/03
Abstract: The present invention relates to a game-based method and system for streaming-graph partitioning. The method comprises partitioning a streaming graph using one or more processors, the one or more processors being configured to: read an edge stream having a predetermined number of edges in an unpartitioned area of the streaming graph as a sub-graph; based on a first pre-partitioning model, pre-partition the edges of the sub-graph into at least two partition blocks as the initial state of a game process; and sequentially select an optimal partition block for each edge of the sub-graph through the game process until the game process converges. The disclosed method and system can partition a streaming graph using only local information, without loading the whole streaming graph into memory, and thus have good scalability and support dynamic graph partitioning while providing better partitioning results.
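The sketch below illustrates the game process on one chunk of edges: hash-based pre-partitioning as the initial state, then best-response moves until no edge changes block. The cost function (vertex replication plus a load-balance term weighted by alpha) is an assumption for illustration, not the patented model.

```python
from collections import defaultdict

def partition_chunk(edges, k, alpha=1.0, max_rounds=10):
    """Best-response game sketch: each edge repeatedly moves to the block that
    minimizes replication-plus-balance cost until the assignment converges."""
    assign = {e: hash(e) % k for e in edges}   # pre-partitioning (initial state)
    for _ in range(max_rounds):
        moved = False
        for e in edges:
            u, v = e
            load, verts = defaultdict(int), defaultdict(set)
            for e2, b in assign.items():       # local view of the other edges
                if e2 == e:
                    continue
                load[b] += 1
                verts[b].update(e2)

            def cost(b):
                replication = (u not in verts[b]) + (v not in verts[b])
                return replication + alpha * load[b] / max(1, len(edges))

            best = min(range(k), key=cost)
            if best != assign[e]:
                assign[e] = best
                moved = True
        if not moved:                          # game process has converged
            break
    return assign

if __name__ == "__main__":
    chunk = [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]
    print(partition_chunk(chunk, k=2))
```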
-
Publication Number: US20190220073A1
Publication Date: 2019-07-18
Application Number: US16135404
Application Date: 2018-09-19
Inventor: Song WU, Yang Chen, Xinhou Wang, Hai Jin
CPC classification number: G06F1/26, G06F8/60, G06F11/3062, G06F11/3414, G06F11/3433, G06F11/3452, G06F2201/81, H04L67/32, H04L67/34, H05K7/1492, Y02D10/22, Y02D10/36
Abstract: The present invention relates to a server deployment method based on datacenter power management, wherein the method comprises: constructing a tail latency table and/or a tail latency curve corresponding to application requests based on CPU utilization data of at least one server; and determining an optimal power budget for the server and deploying servers based on the tail latency requirement of the application requests. By analyzing the tail latency table or curve, the present invention can maximize the deployment density of servers in a datacenter within the limit of the datacenter's rated power while ensuring the performance of latency-sensitive applications.
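A compact sketch of reading an optimal power budget off a tail-latency table: the lowest per-server power cap that still meets the latency requirement maximizes how many servers fit under the datacenter's rated power. The table values, SLO, and rated power below are hypothetical.

```python
# Hypothetical tail-latency table: per-server power cap (W) -> 99th-pct latency (ms),
# built from CPU-utilization measurements of a latency-sensitive application.
TAIL_LATENCY_TABLE = {200: 4.1, 180: 4.8, 160: 6.3, 140: 9.7, 120: 18.5}

def optimal_power_budget(slo_ms: float) -> int:
    """Lowest per-server power budget whose tail latency still meets the SLO."""
    feasible = [w for w, lat in TAIL_LATENCY_TABLE.items() if lat <= slo_ms]
    if not feasible:
        raise ValueError("SLO unattainable at any measured power cap")
    return min(feasible)

if __name__ == "__main__":
    datacenter_rated_power = 100_000               # W, hypothetical
    budget = optimal_power_budget(slo_ms=10.0)     # -> 140 W in this table
    print("budget:", budget, "W; deployable servers:",
          datacenter_rated_power // budget)
```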
-
Publication Number: US20190208467A1
Publication Date: 2019-07-04
Application Number: US16193805
Application Date: 2018-11-16
Inventor: Feng Lu, Ziqian Shi, Ruoxue Liu, Song Wu, Hai Jin
Abstract: The present invention relates to a method for cloudlet-based optimization of energy consumption, comprising: building a cloudlet system that comprises at least two cloudlets and a mobile device wirelessly connected to the cloudlets, so that the cloudlets provide the mobile device with cloud computing services; acquiring system data related to the mobile device from the cloudlet system, analyzing the data volume to be handled by the mobile device locally, and setting an initial operating frequency FLinitial for a first processor of the mobile device according to that data volume; and, based on a tolerable latency range T of a task queue, dynamically deciding a scheduling strategy for each task of the task queue using a Markov Decision Process and setting a present operating frequency FL for the first processor corresponding to the scheduling strategy, so as to enable the mobile device to complete the tasks within the tolerable latency range T with minimal energy consumption.
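As a toy illustration of the decision step, the sketch below solves a deterministic, finite-horizon special case by dynamic programming rather than a full Markov Decision Process; the action set, energy/time costs, and deadline penalty are invented for illustration.

```python
# Hypothetical model: state = tasks left in the queue; actions = run the next
# task locally at a low/high frequency, or offload it to a cloudlet.
ACTIONS = {
    "local_low":  {"energy": 1.0, "time": 4.0},
    "local_high": {"energy": 3.0, "time": 2.0},
    "offload":    {"energy": 0.5, "time": 3.0},  # radio energy + cloudlet latency
}

def plan(num_tasks: int, T: float):
    """Finite-horizon dynamic program: minimize total energy subject to a soft
    deadline T on completing the whole task queue."""
    PENALTY = 100.0      # cost of exceeding the tolerable latency range T
    policy = {}

    def value(tasks_left: int, time_used: float) -> float:
        if tasks_left == 0:
            return 0.0 if time_used <= T else PENALTY
        costs = {a: c["energy"] + value(tasks_left - 1, time_used + c["time"])
                 for a, c in ACTIONS.items()}
        best = min(costs, key=costs.get)
        policy[(tasks_left, time_used)] = best
        return costs[best]

    return value(num_tasks, 0.0), policy

if __name__ == "__main__":
    total_cost, policy = plan(num_tasks=3, T=9.0)
    print("minimal cost:", total_cost)     # 1.5: offloading all three tasks
```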
-