S.M.A.R.T. THRESHOLD OPTIMIZATION METHOD USED FOR DISK FAILURE DETECTION

    Publication No.: US20190205193A1

    Publication Date: 2019-07-04

    Application No.: US16135480

    Application Date: 2018-09-19

    Abstract: An S.M.A.R.T. threshold optimization method used for disk failure detection includes the steps of: analyzing S.M.A.R.T. attributes based on the correlation between the S.M.A.R.T. attribute information of plural failed and non-failed disks and failure information, and screening out weakly correlated and/or strongly correlated attributes; and setting threshold intervals, multivariate thresholds and/or native thresholds corresponding to the S.M.A.R.T. attributes based on the distribution patterns of the strongly or weakly correlated attributes. As compared to reactive fault tolerance, the disclosed method has no negative effect on the read/write performance of disks or on the performance of storage systems as a whole. As compared to known methods that use native disk S.M.A.R.T. thresholds, the disclosed method significantly improves the disk failure detection rate while keeping a low false alarm rate. As compared to disk failure forecasting based on machine learning algorithms, the disclosed method offers good interpretability and allows easy adjustment of its forecast performance.
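    The correlation-screening step described in the abstract can be sketched as follows. This is a minimal illustration only: the choice of Pearson correlation as the measure, the 0.5 cutoff, and all attribute names are our assumptions, not taken from the patent.

    ```python
    def pearson(xs, ys):
        """Pearson correlation coefficient between two equal-length sequences."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs)
        vy = sum((y - my) ** 2 for y in ys)
        return cov / ((vx * vy) ** 0.5) if vx and vy else 0.0

    def screen_attributes(samples, labels, cutoff=0.5):
        """Split S.M.A.R.T. attributes into strongly and weakly correlated sets
        by correlating each attribute's values with the 0/1 failure labels."""
        strong, weak = {}, {}
        for attr in samples[0]:
            r = pearson([s[attr] for s in samples], labels)
            (strong if abs(r) >= cutoff else weak)[attr] = r
        return strong, weak
    ```

    Thresholds or threshold intervals would then be fitted to the value distributions of the attributes that land in the strongly correlated set.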

    DISTRIBUTED DYNAMIC PROCESSING METHOD FOR STREAM DATA WITHIN A COMPUTER SYSTEM

    Publication No.: US20190205179A1

    Publication Date: 2019-07-04

    Application No.: US16180460

    Application Date: 2018-11-05

    Abstract: The present invention relates to a dynamic distributed processing method for stream data, at least comprising: analyzing and predicting an execution mode for at least one data feature block of a user's stream data processing program; dynamically adjusting the execution mode based on the average queue latency of the stream data and a queue latency threshold; and processing the corresponding data feature block(s) under the chosen execution mode. By associating and combining the otherwise independent stream mode and micro-batch mode of stream data computing, the present invention realizes automatic switching, with data handover, between the two modes, thereby achieving both high throughput and low latency.
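    The mode-switching rule can be sketched as a comparison of average queue latency against a latency threshold. Which mode each side of the threshold maps to is our assumption (micro-batching for throughput while latency is acceptable, per-record stream processing once latency exceeds the threshold); the patent abstract does not state the direction.

    ```python
    def choose_mode(queue_latencies_ms, threshold_ms):
        """Pick the execution mode for the next data feature block from
        recent queue latency samples (milliseconds)."""
        avg = sum(queue_latencies_ms) / len(queue_latencies_ms)
        # Backlog building up: fall back to per-record stream processing
        # for low latency; otherwise keep batching records for throughput.
        return "stream" if avg > threshold_ms else "micro-batch"
    ```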

    CONTAINER-BASED MOBILE CODE OFFLOADING SUPPORT SYSTEM IN CLOUD ENVIRONMENT AND OFFLOADING METHOD THEREOF

    Publication No.: US20170289059A1

    Publication Date: 2017-10-05

    Application No.: US15258763

    Application Date: 2016-09-07

    Abstract: The present invention discloses a container-based mobile code offloading support system in a cloud environment and the offloading method thereof, comprising a front-end processing layer, a runtime layer and a back-end resource layer. The front-end processing layer is responsible for responding to incoming requests and managing the status of containers, and is realized by a request distribution module, a code caching module and a monitoring and scheduling module; the runtime layer provides the same execution environment as that of the terminal, and is realized by a runtime module consisting of a plurality of mobile cloud containers; and the back-end resource layer resolves the incompatibility of the cloud platform with the mobile terminal environment and provides underlying resource support for the runtime, and is realized by a resource sharing module and an extended kernel module within the host operating system. The present invention uses the built mobile cloud container as the runtime environment for offloaded code, satisfying the execution requirements of offloading tasks and improving the computing performance of the cloud; cooperation between the respective modules further optimizes platform performance, ensuring efficient operation of the system.
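    The interaction between the request distribution and code caching modules in the front-end layer can be modeled as below. This is a toy sketch; the class and method names, the app-id cache key, and the string container handles are illustrative assumptions, not the patent's design.

    ```python
    class OffloadFrontEnd:
        """Toy model of the front-end processing layer: a request
        distribution module backed by a code cache, so repeated offloads
        of the same app reuse both the cached code and its container."""

        def __init__(self):
            self.code_cache = {}   # app id -> offloaded code blob
            self.containers = {}   # app id -> container handle

        def dispatch(self, app_id, code_blob):
            """Route an offload request; return (container, cache_hit)."""
            cache_hit = app_id in self.code_cache
            if not cache_hit:
                self.code_cache[app_id] = code_blob
            container = self.containers.setdefault(app_id, f"container-{app_id}")
            return container, cache_hit
    ```

    A real monitoring and scheduling module would additionally track container load and evict cold cache entries; the sketch keeps only the routing and caching behavior.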

    CONTAINER-ORIENTED LINUX KERNEL VIRTUALIZING SYSTEM AND METHOD THEREOF

    Publication No.: US20230092214A1

    Publication Date: 2023-03-23

    Application No.: US17661991

    Application Date: 2022-05-04

    Abstract: The present invention relates to a container-oriented Linux kernel virtualizing system, at least comprising: a virtual kernel constructing module configured to provide a virtual kernel customization template for a user to edit so as to customize a virtual kernel for a container, and to generate the virtual kernel in the form of a loadable kernel module from the edited template; and a virtual kernel instance module configured to reconstruct and isolate the Linux kernel, and to run a virtual kernel instance in a separate address space in response to kernel requests from the corresponding container. The container-oriented Linux kernel virtualizing system of the present invention is thus based on the use of loadable kernel modules.
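    The dispatch behavior of the virtual kernel instance module can be illustrated abstractly: each container gets its own instance, so a kernel request from one container is served in isolation from the others. Representing a virtual kernel as a per-container handler table is our simplification for illustration; a real implementation would be a loadable kernel module in C, as the abstract states.

    ```python
    class VirtualKernelRegistry:
        """Illustrative model of the virtual kernel instance module:
        each container id maps to its own handler table, standing in for
        a virtual kernel instance running in a separate address space."""

        def __init__(self):
            self._instances = {}

        def load(self, container_id, handlers):
            # A private copy per container models per-instance isolation.
            self._instances[container_id] = dict(handlers)

        def handle(self, container_id, request):
            """Serve a kernel request using that container's own instance."""
            return self._instances[container_id][request]()
    ```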

    EDGE-COMPUTING-ORIENTED CONSTRUCTION METHOD FOR CONTAINER MIRROR IMAGE

    Publication No.: US20210216583A1

    Publication Date: 2021-07-15

    Application No.: US17012392

    Application Date: 2020-09-04

    Abstract: The present invention relates to an edge-computing-oriented construction method for a container image, at least comprising the steps of: having an image reconstruction module reconstruct an old container image into a new container image comprising an index and a set of spare files that correspond to each other, and having an image management module store the index and the spare files separately, in an image repository and a spare file storage module respectively; having a download engine module fetch the index from the image repository to the corresponding container at the edge end, so that a container instance service module searches a local file sharing module according to the configuration information contained in the index and thereby retrieves the local shared files corresponding to that configuration information; having an image file consulting module download, from the spare file storage module, the default files that are absent from the retrieved local shared files; and having a service processor upload the local shared files and the default files recorded according to the configuration information to the image reconstruction module, so that the index can be matched and generated.
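    The two core steps, splitting an image into an index plus separately stored spare files, and materializing it at the edge from local shared files first, can be sketched as below. The content-digest layout (SHA-256 keys) is our assumption for illustration; the patent abstract only specifies that the index and spare files correspond to each other and are stored separately.

    ```python
    import hashlib

    def reconstruct_image(files):
        """Split an image (file name -> bytes) into an index mapping names
        to content digests and a spare-file store keyed by digest."""
        index, spare = {}, {}
        for name, blob in files.items():
            digest = hashlib.sha256(blob).hexdigest()
            index[name] = digest
            spare[digest] = blob
        return index, spare

    def materialize(index, local_shared, spare_store):
        """Resolve each indexed file from the node-local shared files
        first, downloading only the missing ones from the spare store."""
        out, downloaded = {}, []
        for name, digest in index.items():
            if digest in local_shared:
                out[name] = local_shared[digest]
            else:
                out[name] = spare_store[digest]
                downloaded.append(name)
        return out, downloaded
    ```

    Shipping only the small index to the edge node and reusing locally shared files is what keeps the download traffic low in this scheme; only genuinely absent files travel from the spare file storage.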
