-
Publication Number: US20190205193A1
Publication Date: 2019-07-04
Application Number: US16135480
Application Date: 2018-09-19
Inventor: Song WU , Zhuang XIONG , Hai JIN
CPC classification number: G06F11/008 , G06F11/2221 , G06F11/2268 , G06F11/2273 , G06F11/3034 , G06F11/3485 , G06F2201/81 , G06K9/6269 , G06N20/00
Abstract: An S.M.A.R.T. threshold optimization method for disk failure detection includes the steps of: analyzing S.M.A.R.T. attributes based on the correlation between the S.M.A.R.T. attribute information of plural failed and non-failed disks and failure information, and sieving out weakly correlated attributes and/or strongly correlated attributes; and setting threshold intervals, multivariate thresholds and/or native thresholds corresponding to the S.M.A.R.T. attributes based on the distribution patterns of the strongly or weakly correlated attributes. Compared to reactive fault tolerance, the disclosed method has no negative effect on the read/write performance of disks or on the performance of storage systems as a whole. Compared to known methods that use native disk S.M.A.R.T. thresholds, the disclosed method significantly improves the disk failure detection rate while keeping a low false alarm rate. Compared to disk failure forecasting based on machine learning algorithms, the disclosed method offers good interpretability and allows easy adjustment of its forecast performance.
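A minimal sketch of the idea behind this abstract, not the patented algorithm: the attribute sieve is approximated with a point-biserial correlation against the failure label, and the threshold interval of each strongly correlated attribute is taken from quantiles of the non-failed-disk distribution. The 0.3 cut-off and the 1st/99th-percentile bounds are assumptions chosen only for illustration.

```python
import math
from statistics import mean, pstdev

def point_biserial(values, failed):
    """Point-biserial correlation between a numeric attribute and a 0/1 failure label."""
    ones = [v for v, f in zip(values, failed) if f]
    zeros = [v for v, f in zip(values, failed) if not f]
    s = pstdev(values)
    if s == 0 or not ones or not zeros:
        return 0.0
    p = len(ones) / len(values)
    return (mean(ones) - mean(zeros)) / s * math.sqrt(p * (1 - p))

def sieve_attributes(samples, labels, strong=0.3):
    """Keep attributes whose |correlation| with failure exceeds the assumed cut-off."""
    corr = {name: point_biserial([s[name] for s in samples], labels)
            for name in samples[0]}
    return {name: r for name, r in corr.items() if abs(r) >= strong}

def threshold_interval(healthy_values, lo_q=0.01, hi_q=0.99):
    """Interval covering the bulk of healthy-disk values (quantiles are assumptions)."""
    v = sorted(healthy_values)
    return v[int(lo_q * (len(v) - 1))], v[int(hi_q * (len(v) - 1))]

def at_risk(disk, intervals):
    """Flag a disk whose strongly correlated attribute leaves its threshold interval."""
    return any(not (lo <= disk[a] <= hi) for a, (lo, hi) in intervals.items())
```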
-
Publication Number: US20190205179A1
Publication Date: 2019-07-04
Application Number: US16180460
Application Date: 2018-11-05
Abstract: The present invention relates to a dynamic distributed processing method for stream data, at least comprising: analyzing and predicting an execution mode for at least one data feature block of a user stream data processing program; dynamically adjusting the execution mode based on the average queue latency and a queue latency threshold of the stream data; and processing the corresponding data feature block based on the execution mode. By associating and combining the otherwise independent data stream mode and micro-batch mode of stream data computing, the present invention realizes automatic switching and data processing between the two modes, thereby achieving both high throughput and low latency.
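The mode switch described above can be pictured with the following sketch, which is an assumption about one plausible realization rather than the patented implementation: a controller keeps a sliding window of queue latencies and flips each data feature block between stream mode and micro-batch mode when the window average crosses a latency threshold. The window size, threshold value, and engine callables are illustrative.

```python
from collections import deque
from statistics import mean

class ModeController:
    def __init__(self, latency_threshold_ms=100.0, window=50):
        self.threshold = latency_threshold_ms   # assumed queue-latency threshold
        self.latencies = deque(maxlen=window)   # recent queue latencies
        self.mode = "micro-batch"               # start in high-throughput mode

    def observe(self, queue_latency_ms):
        self.latencies.append(queue_latency_ms)

    def decide(self):
        """Pick the execution mode for the next data feature block."""
        if not self.latencies:
            return self.mode
        avg = mean(self.latencies)
        # High average queue latency -> favour low-latency stream mode;
        # otherwise micro-batching keeps throughput high.
        self.mode = "stream" if avg > self.threshold else "micro-batch"
        return self.mode

def process_block(block, controller, stream_engine, batch_engine):
    """Dispatch one data feature block to the engine matching the current mode."""
    if controller.decide() == "stream":
        for record in block:
            stream_engine(record)   # per-record, low-latency processing
    else:
        batch_engine(block)         # whole block handed to a micro-batch engine
```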
-
Publication Number: US20200322263A1
Publication Date: 2020-10-08
Application Number: US16773051
Application Date: 2020-01-27
Inventor: Song WU , Hai JIN , Junjian GUAN
IPC: H04L12/725 , H04L12/913 , H04L12/923 , H04L12/721 , H04L12/911
Abstract: The present invention relates to a network resource isolation method for container networks and a system implementing the method.
-
Publication Number: US20170289059A1
Publication Date: 2017-10-05
Application Number: US15258763
Application Date: 2016-09-07
IPC: H04L12/911 , H04L29/08 , H04L29/06 , G06F9/455
CPC classification number: H04L47/70 , G06F9/455 , G06F9/45558 , G06F9/5027 , G06F2209/509 , H04L41/0896 , H04L67/1097 , H04L67/2833 , H04L67/2838 , H04L67/325
Abstract: The present invention discloses a container-based mobile code offloading support system in a cloud environment and the offloading method thereof, comprising a front-end processing layer, a runtime layer and a back-end resource layer. The front-end processing layer is responsible for responding to arriving requests and managing the status of containers, and is realized by a request distribution module, a code caching module and a monitoring and scheduling module; the runtime layer provides the same execution environment as that of a terminal, and is realized by a runtime module consisting of a plurality of mobile cloud containers; and the back-end resource layer resolves the incompatibility between the cloud platform and the mobile terminal environment and provides underlying resource support for the runtime, and is realized by a resource sharing module and an extended kernel module within the host operating system. The present invention utilizes the built mobile cloud containers as the runtime environment for offloaded code, meeting the execution requirements of offloading tasks and improving the computing performance of the cloud; cooperation between the respective modules further optimizes the performance of the platform, guaranteeing efficient operation of the system.
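As a rough, assumed illustration of the front-end processing layer only (not the patented system), the sketch below caches offloaded code by content hash and dispatches each request to the least-loaded mobile cloud container. The class names, the scheduling rule, and the use of exec as a stand-in runtime are all illustrative.

```python
import hashlib

class MobileCloudContainer:
    """Toy stand-in for one mobile cloud container in the runtime layer."""
    def __init__(self, name):
        self.name, self.active_tasks = name, 0

    def execute(self, code, args):
        self.active_tasks += 1
        try:
            scope = {}
            exec(code, scope)            # run the offloaded code (defines main())
            return scope["main"](*args)
        finally:
            self.active_tasks -= 1

class FrontEnd:
    """Request distribution + code caching + monitoring/scheduling (simplified)."""
    def __init__(self, containers):
        self.containers = containers     # runtime layer: pool of mobile cloud containers
        self.code_cache = {}             # code caching module: hash -> code text

    def handle(self, request):
        digest = hashlib.sha256(request["code"].encode()).hexdigest()
        self.code_cache.setdefault(digest, request["code"])
        # monitoring-and-scheduling: pick the container with the fewest active tasks
        target = min(self.containers, key=lambda c: c.active_tasks)
        return target.execute(self.code_cache[digest], request["args"])

front_end = FrontEnd([MobileCloudContainer("c1"), MobileCloudContainer("c2")])
print(front_end.handle({"code": "def main(x, y):\n    return x + y", "args": (2, 3)}))
```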
-
Publication Number: US20230092214A1
Publication Date: 2023-03-23
Application Number: US17661991
Application Date: 2022-05-04
Inventor: Song WU , Hang HUANG , Kun WANG , Honglei WANG , Hai JIN
Abstract: The present invention relates to a container-oriented Linux kernel virtualizing system, at least comprising: a virtual kernel constructing module, configured to provide a virtual kernel customization template for a user to edit and customize a virtual kernel of a container, and to generate the virtual kernel, taking the form of a loadable kernel module, based on the edited virtual kernel customization template; and a virtual kernel instance module, configured to reconstruct and isolate the Linux kernel, and to operate a virtual kernel instance in a separate address space in response to kernel requests from the corresponding container. The container-oriented Linux kernel virtualizing system of the present invention is based on the use of loadable kernel modules.
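The following is a purely conceptual sketch of the control flow suggested by the abstract, with hypothetical names and no real kernel APIs: a user-edited customization template is turned into a build description for a per-container loadable module that would answer that container's kernel requests in a separate address space.

```python
from dataclasses import dataclass, field

@dataclass
class VirtualKernelTemplate:
    """Hypothetical customization template edited by the user for one container."""
    container: str
    syscalls: list = field(default_factory=list)   # kernel interfaces the container needs
    sysctl: dict = field(default_factory=dict)     # per-container kernel parameters

def build_vkernel_module(tpl: VirtualKernelTemplate) -> dict:
    """Produce a build description for a loadable module serving one container."""
    return {
        "module_name": f"vkernel_{tpl.container}",
        "exported_syscalls": sorted(set(tpl.syscalls)),
        "isolated_sysctl": dict(tpl.sysctl),
        # the generated module would serve this container's kernel requests
        # in its own address space, isolated from shared host kernel state
        "address_space": "separate",
    }

tpl = VirtualKernelTemplate("web", syscalls=["clone", "openat"],
                            sysctl={"net.core.somaxconn": 1024})
print(build_vkernel_module(tpl))
```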
-
Publication Number: US20210216583A1
Publication Date: 2021-07-15
Application Number: US17012392
Application Date: 2020-09-04
Inventor: Song WU , Shengwei BIAN , Hai JIN , Hao FAN
Abstract: The present invention relates to an edge-computing-oriented construction method for a container image, the construction method at least comprising the steps of: having an image reconstruction module reconstruct an old container image so as to obtain a new container image comprising an index and a set of spare files that correspond to each other, and having an image management module store the index and the spare files separately in an image repository and a spare file storage module, respectively; and having a download engine module scrape the index from the image repository to the corresponding container at the edge end, so that a container instance service module searches a local file sharing module according to the configuration information contained in the index and thereby retrieves the local shared files corresponding to the configuration information, an image file consulting module downloads from the spare file storage module any default files that are absent from the retrieved local shared files, and a service processor uploads the local shared files and the default files recorded according to the configuration information to the image reconstruction module, so as to match and generate the index.
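One way to picture the index-plus-spare-files layout described above, as an assumed simplification rather than the patented format: the index maps file paths to content hashes, files already present in the local file sharing module are reused, and only the missing default files are fetched from the spare file storage module.

```python
def resolve_image(index, local_shared, download_spare):
    """Assemble an image at the edge.

    index: {path: content_hash} scraped from the image repository.
    local_shared: {content_hash: bytes}, the local file sharing store.
    download_spare: callable fetching a missing hash from spare-file storage.
    """
    files, missing = {}, []
    for path, digest in index.items():
        if digest in local_shared:           # reuse the local shared file
            files[path] = local_shared[digest]
        else:
            missing.append((path, digest))   # default file that must be fetched
    for path, digest in missing:
        blob = download_spare(digest)        # pull only what the edge node lacks
        local_shared[digest] = blob          # later containers can share it too
        files[path] = blob
    return files
```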
-
Publication Number: US20200326984A1
Publication Date: 2020-10-15
Application Number: US16752918
Application Date: 2020-01-27
Inventor: Song WU , Hai JIN , Ximing CHEN
Abstract: The present invention relates to a Docker-container-oriented method for isolation of file system resources, which allocates host file system resources according to access requests from containers and checks lock resources corresponding to the access requests.
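A minimal sketch of the kind of check the abstract describes, assumed rather than taken from the patent: each access request is granted only if the container's remaining allocation covers it and no other container holds the lock resource for that host path. The quota policy and the names are illustrative.

```python
import threading

class FsIsolation:
    def __init__(self, quotas):
        self.quotas = dict(quotas)      # container -> remaining byte budget (assumed policy)
        self.locks = {}                 # host path -> container holding the lock resource
        self._guard = threading.Lock()

    def request_access(self, container, path, nbytes):
        """Grant the access request only if quota and lock checks both pass."""
        with self._guard:
            owner = self.locks.get(path)
            if owner not in (None, container):
                return False            # lock resource held by another container
            if self.quotas.get(container, 0) < nbytes:
                return False            # would exceed the container's allocation
            self.locks[path] = container
            self.quotas[container] -= nbytes
            return True

    def release(self, container, path):
        with self._guard:
            if self.locks.get(path) == container:
                del self.locks[path]
```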
-
Publication Number: US20190220073A1
Publication Date: 2019-07-18
Application Number: US16135404
Application Date: 2018-09-19
Inventor: Song WU , Yang Chen , Xinhou Wang , Hai Jin
CPC classification number: G06F1/26 , G06F8/60 , G06F11/3062 , G06F11/3414 , G06F11/3433 , G06F11/3452 , G06F2201/81 , H04L67/32 , H04L67/34 , H05K7/1492 , Y02D10/22 , Y02D10/36
Abstract: The present invention relates to a server deployment method based on datacenter power management, wherein the method comprises: constructing a tail latency table and/or a tail latency curve corresponding to application requests based on CPU utilization rate data of at least one server; and determining an optimal power budget for the server and deploying the server based on the tail latency requirement of the application requests. By analyzing the tail latency table or curve, the present invention can maximize the deployment density of servers in the datacenter within the limit of the datacenter's rated power while ensuring the performance of latency-sensitive applications.
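As an assumed simplification of the procedure (the patent derives the tail latency table from CPU utilization data; this sketch instead measures per-power-cap tail latencies directly), the following shows how a tail latency table can yield the smallest per-server power budget that meets a latency requirement and, from it, the number of servers deployable under the datacenter's rated power. All numbers are illustrative.

```python
def tail_latency_table(samples, quantile=0.99):
    """samples: {power_cap_watts: [request latencies in ms]} -> {cap: p99 latency}."""
    table = {}
    for cap, latencies in samples.items():
        ordered = sorted(latencies)
        table[cap] = ordered[int(quantile * (len(ordered) - 1))]
    return table

def optimal_power_budget(table, latency_slo_ms):
    """Smallest power cap whose tail latency still satisfies the application SLO."""
    feasible = [cap for cap, p99 in table.items() if p99 <= latency_slo_ms]
    if not feasible:
        raise ValueError("no power budget meets the tail-latency requirement")
    return min(feasible)

def deployable_servers(rated_power_watts, per_server_budget_watts):
    """Deployment density under the datacenter's rated power."""
    return rated_power_watts // per_server_budget_watts

table = tail_latency_table({120: [8, 9, 30], 150: [6, 7, 12], 180: [5, 6, 9]})
budget = optimal_power_budget(table, latency_slo_ms=15)
print(budget, deployable_servers(60_000, budget))
```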
-