Distributed dynamic processing method for stream data within a computer system

    Publication (Announcement) Number: US10719370B2

    Publication (Announcement) Date: 2020-07-21

    Application Number: US16180460

    Application Date: 2018-11-05

    Abstract: The present invention relates to a dynamic distributed processing method for stream data, at least comprising: analyzing and predicting an execution mode for at least one data feature block of the data of a user's stream data processing program; dynamically adjusting the execution mode based on the average queue latency and a queue latency threshold of the stream data; and processing the corresponding at least one data feature block based on the execution mode. By associating and combining the otherwise independent stream mode and micro-batch mode of stream data computing, the present invention realizes automatic switching and data processing between the two modes, thereby providing both high throughput and low latency.
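
    The mode-switching rule described above can be pictured with a minimal Python sketch. It assumes, purely for illustration, that the system periodically samples queue latencies, compares their average against the configured latency threshold, and switches to micro-batch execution when the queue is backing up (favoring throughput) and back to stream execution when there is latency headroom; the names ExecutionMode and choose_mode, the hysteresis band, and the direction of the policy are assumptions, not taken from the patent.

```python
from enum import Enum
from statistics import mean


class ExecutionMode(Enum):
    STREAM = "stream"            # record-at-a-time: low latency, lower throughput
    MICRO_BATCH = "micro_batch"  # batched: high throughput, higher latency


def choose_mode(queue_latencies_ms, latency_threshold_ms, current_mode,
                hysteresis=0.1):
    """Pick the execution mode for the next data feature block."""
    avg = mean(queue_latencies_ms)
    if avg > latency_threshold_ms:
        # Queue is backing up: favor throughput to drain it.
        return ExecutionMode.MICRO_BATCH
    if avg < latency_threshold_ms * (1.0 - hysteresis):
        # Plenty of latency headroom: favor per-record latency.
        return ExecutionMode.STREAM
    return current_mode  # inside the hysteresis band: keep the current mode


# Example with a 100 ms latency budget: the backlog is far below the
# threshold, so the sketch switches to low-latency stream mode.
print(choose_mode([35, 42, 50], 100, ExecutionMode.MICRO_BATCH))
```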

    NVM-BASED METHOD FOR PERFORMANCE ACCELERATION OF CONTAINERS

    Publication (Announcement) Number: US20200334066A1

    Publication (Announcement) Date: 2020-10-22

    Application Number: US16773004

    Application Date: 2020-01-27

    Abstract: The present disclosure discloses an NVM-based method for performance acceleration of containers. The method comprises: classifying each image layer of container images as either an LAL (Layer above LDL) or an LBL (Layer below LDL) during deployment of containers; storing the LALs in a non-volatile memory and selectively storing each LBL in either the non-volatile memory or a hard drive; acquiring the hot image files required by the containers during startup and/or operation of the containers and storing them in the non-volatile memory; and ranking the container images by access frequency, based at least on the number of accesses to their hot image files, so that when the non-volatile memory is short of storage space, the non-volatile memory currently occupied by the image with the lowest access frequency can be released.
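
    A minimal sketch of the layer-placement step, assuming the layers of each container image are ordered bottom-up and the position of the dividing layer (the LDL) is already known; the selective rule used for LBLs here, a simple size cut-off, is an illustrative stand-in for whatever criterion the method actually applies, and all names are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class ImageLayer:
    layer_id: str
    position: int    # bottom-up position of the layer within the image
    size_mb: float


def place_layers(layers, ldl_position, nvm, hdd, lbl_size_cutoff_mb=64.0):
    """Store LALs (layers above the LDL) in NVM and place each LBL either
    in NVM or on the hard drive, here using a size cut-off as the rule."""
    for layer in layers:
        if layer.position > ldl_position:
            nvm.append(layer)        # LAL: always kept in non-volatile memory
        elif layer.size_mb <= lbl_size_cutoff_mb:
            nvm.append(layer)        # small LBL: cheap enough to keep in NVM
        else:
            hdd.append(layer)        # large LBL: spilled to the hard drive


nvm_store, hdd_store = [], []
image = [ImageLayer("base-os", 0, 180.0),
         ImageLayer("runtime", 1, 40.0),
         ImageLayer("app", 2, 12.0)]
place_layers(image, ldl_position=1, nvm=nvm_store, hdd=hdd_store)
print([l.layer_id for l in nvm_store])  # ['runtime', 'app']
print([l.layer_id for l in hdd_store])  # ['base-os']
```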

    ACCELERATION METHOD FOR FPGA-BASED DISTRIBUTED STREAM PROCESSING SYSTEM

    Publication (Announcement) Number: US20200326992A1

    Publication (Announcement) Date: 2020-10-15

    Application Number: US16752870

    Application Date: 2020-01-27

    Inventors: Hai Jin, Song Wu

    Abstract: The present invention relates to an acceleration method for an FPGA-based distributed stream processing system, which accomplishes the computational processing of stream processing operations through collaborative computing by FPGA devices and a CPU module, and at least comprises the following steps: building the FPGA-based distributed stream processing system, having a master node, by installing the FPGA devices on slave nodes; dividing stream applications into first tasks suitable for execution by the FPGA devices and second tasks suitable for execution by the CPU module; and, where the stream applications submitted to the master node are configured with kernel files that can be compiled and executed by the FPGA devices, or with upload paths for the kernel files, having the master node allocate and schedule resources by pre-processing the stream applications.
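
    The task-division step might look like the following sketch, which uses an illustrative criterion (an operator is FPGA-suitable if a compilable kernel file is available for it) that is an assumption rather than the patent's actual rule; Operator and divide_tasks are hypothetical names.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Operator:
    name: str
    kernel_file: Optional[str] = None  # FPGA kernel file or upload path, if any


def divide_tasks(operators: List[Operator]):
    """Split a stream application into first tasks (FPGA-suitable) and
    second tasks (CPU-suitable), using kernel-file availability as the rule."""
    first_tasks = [op for op in operators if op.kernel_file is not None]
    second_tasks = [op for op in operators if op.kernel_file is None]
    return first_tasks, second_tasks


app = [Operator("parse"),
       Operator("window-agg", kernel_file="kernels/window_agg.xclbin"),
       Operator("sink")]
fpga_tasks, cpu_tasks = divide_tasks(app)
print([op.name for op in fpga_tasks])  # ['window-agg']
print([op.name for op in cpu_tasks])   # ['parse', 'sink']
```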

    Method for cloudlet-based optimization of energy consumption

    Publication (Announcement) Number: US10736032B2

    Publication (Announcement) Date: 2020-08-04

    Application Number: US16193805

    Application Date: 2018-11-16

    Abstract: The present invention relates to a method for cloudlet-based optimization of energy consumption, comprising: building a cloudlet system that comprises at least two cloudlets and a mobile device wirelessly connected to the cloudlets, so that the cloudlets provide the mobile device with cloud computing services; acquiring system data related to the mobile device from the cloudlet system, analyzing the data volume to be handled locally by the mobile device, and setting an initial operating frequency FLinitial for a first processor of the mobile device according to the data volume; and, based on a tolerable latency range T of a task queue, dynamically deciding a scheduling strategy for each task in the task queue using a Markov Decision Process and setting a present operating frequency FL for the first processor corresponding to the scheduling strategy, so as to enable the mobile device to complete the tasks within the tolerable latency range T with minimal energy consumption.
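
    The full Markov Decision Process is beyond a short example, but the underlying energy/latency trade-off can be sketched as a single-step frequency choice: pick the lowest frequency that still finishes the queued work within the tolerable latency range T, under the common CMOS approximation that energy per task grows roughly with the square of the frequency. The function name, frequency list, and energy coefficient below are hypothetical.

```python
def pick_frequency(cycles_to_run, tolerable_latency_s, available_freqs_hz,
                   energy_coeff=1e-27):
    """Return the lowest-energy operating frequency that still completes
    the queued work within the tolerable latency range T, plus the
    estimated energy under E ~ k * cycles * f^2."""
    feasible = [f for f in available_freqs_hz
                if cycles_to_run / f <= tolerable_latency_s]
    # Energy grows with frequency, so the slowest feasible frequency wins;
    # if no frequency meets the deadline, run as fast as possible.
    chosen = min(feasible) if feasible else max(available_freqs_hz)
    energy_j = energy_coeff * cycles_to_run * chosen ** 2
    return chosen, energy_j


freq, energy = pick_frequency(cycles_to_run=2e9, tolerable_latency_s=1.5,
                              available_freqs_hz=[0.8e9, 1.2e9, 1.6e9, 2.0e9])
print(f"{freq / 1e9:.1f} GHz, about {energy:.2f} J")  # 1.6 GHz, about 5.12 J
```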

    Container-oriented Linux kernel virtualizing system and method thereof

    Publication (Announcement) Number: US12242877B2

    Publication (Announcement) Date: 2025-03-04

    Application Number: US17661991

    Application Date: 2022-05-04

    Abstract: The present invention relates to a container-oriented Linux kernel virtualizing system, at least comprising: a virtual kernel constructing module configured to provide a virtual kernel customization template for a user to edit and customize a virtual kernel for a container, and to generate the virtual kernel in the form of a loadable kernel module from the edited virtual kernel customization template; and a virtual kernel instance module configured to reconstruct and isolate the Linux kernel and to operate a virtual kernel instance in a separate address space in response to kernel requests from the corresponding container. The container-oriented Linux kernel virtualizing system of the present invention is based on the use of a loadable module.
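
    As a rough user-space analogy (not actual kernel-module code), the dispatch idea can be modeled in Python: each container gets its own virtual kernel instance with an isolated, customizable handler table, and kernel requests are routed to that instance instead of a single shared kernel. All names and the handler table are illustrative assumptions.

```python
def host_getpid():
    return 4242  # placeholder for the host kernel's implementation


class VirtualKernelInstance:
    """Toy stand-in for a per-container virtual kernel that runs in its own
    address space; the real system uses a loadable kernel module instead."""

    def __init__(self, container_id, handler_table):
        self.container_id = container_id
        self.handler_table = dict(handler_table)  # isolated, customizable copy

    def handle(self, request, *args):
        return self.handler_table[request](*args)


instances = {}


def dispatch(container_id, request, *args):
    """Route a container's kernel request to that container's own virtual
    kernel instance rather than to a single shared kernel."""
    vk = instances.setdefault(
        container_id,
        VirtualKernelInstance(container_id, {"getpid": host_getpid}))
    return vk.handle(request, *args)


print(dispatch("web-1", "getpid"))  # 4242, served by web-1's own instance
```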

    Non-volatile memory (NVM) based method for performance acceleration of containers

    Publication (Announcement) Number: US11449355B2

    Publication (Announcement) Date: 2022-09-20

    Application Number: US16773004

    Application Date: 2020-01-27

    Abstract: The present disclosure discloses an NVM-based method for performance acceleration of containers. The method comprises: classifying each image layer of container images as either an LAL (Layer above LDL) or an LBL (Layer below LDL) during deployment of containers; storing the LALs in a non-volatile memory and selectively storing each LBL in either the non-volatile memory or a hard drive; acquiring the hot image files required by the containers during startup and/or operation of the containers and storing them in the non-volatile memory; and ranking the container images by access frequency, based at least on the number of accesses to their hot image files, so that when the non-volatile memory is short of storage space, the non-volatile memory currently occupied by the image with the lowest access frequency can be released.
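
    Complementing the placement sketch given for the counterpart application above, the space-reclamation step can be sketched as follows: when NVM usage exceeds capacity, the image with the fewest recorded accesses to its hot files is released first. The bookkeeping structures and numbers are hypothetical.

```python
def release_coldest_images(nvm_usage_mb, nvm_capacity_mb,
                           image_access_counts, image_nvm_footprint_mb):
    """While NVM is short of space, release the NVM occupied by the image
    whose hot files have been accessed the fewest times."""
    released = []
    while nvm_usage_mb > nvm_capacity_mb and image_access_counts:
        coldest = min(image_access_counts, key=image_access_counts.get)
        nvm_usage_mb -= image_nvm_footprint_mb.pop(coldest)
        image_access_counts.pop(coldest)
        released.append(coldest)
    return released, nvm_usage_mb


freed, usage = release_coldest_images(
    nvm_usage_mb=900, nvm_capacity_mb=768,
    image_access_counts={"nginx": 120, "redis": 45, "legacy-app": 3},
    image_nvm_footprint_mb={"nginx": 300, "redis": 250, "legacy-app": 200})
print(freed, usage)  # ['legacy-app'] 700
```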

    Edge-computing-oriented construction method for container mirror image

    Publication (Announcement) Number: US11341181B2

    Publication (Announcement) Date: 2022-05-24

    Application Number: US17012392

    Application Date: 2020-09-04

    Abstract: The present invention relates to an edge-computing-oriented construction method for a container image, at least comprising the steps of: having an image reconstruction module reconstruct an old container image to obtain a new container image comprising an index and a set of spare files that correspond to each other, and having an image management module store the index and the spare files separately, in an image repository and a spare file storage module respectively; and having a download engine module pull the index from the image repository to the corresponding container at the edge end, so that a container instance service module searches a local file sharing module according to the configuration information contained in the index and thereby retrieves the local shared files corresponding to the configuration information, an image file consulting module downloads from the spare file storage module any default files that are missing from the local shared files retrieved according to the configuration information, and a service processor uploads the local shared files and the default files recorded according to the configuration information to the image reconstruction module for matching and generating the index.
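
    A minimal sketch of the file-resolution step at the edge: walk the index, reuse any file already present in the local file sharing module, and download from the spare file storage only what is missing. The index schema (path plus digest) and all names are assumptions for illustration.

```python
def resolve_image_files(index_entries, local_shared_files, download_spare_file):
    """Resolve the files an image needs: reuse anything already present in
    the local file sharing module and fetch only the missing files from
    the spare file storage."""
    resolved, fetched = {}, []
    for entry in index_entries:          # hypothetical schema: path + digest
        digest = entry["digest"]
        if digest in local_shared_files:
            resolved[entry["path"]] = local_shared_files[digest]
        else:
            resolved[entry["path"]] = download_spare_file(digest)
            fetched.append(entry["path"])
    return resolved, fetched


local = {"sha256:aaa": "/shared/layers/aaa"}
index = [{"path": "/bin/app", "digest": "sha256:aaa"},
         {"path": "/etc/app.conf", "digest": "sha256:bbb"}]
files, fetched = resolve_image_files(index, local,
                                     lambda digest: f"/spare-store/{digest}")
print(fetched)  # ['/etc/app.conf'] -- only the file missing locally is fetched
```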

    Acceleration method for FPGA-based distributed stream processing system

    Publication (Announcement) Number: US11023285B2

    Publication (Announcement) Date: 2021-06-01

    Application Number: US16752870

    Application Date: 2020-01-27

    Inventors: Hai Jin, Song Wu, Die Hu

    Abstract: The present invention relates to an acceleration method for an FPGA-based distributed stream processing system, which accomplishes the computational processing of stream processing operations through collaborative computing by FPGA devices and a CPU module, and at least comprises the following steps: building the FPGA-based distributed stream processing system, having a master node, by installing the FPGA devices on slave nodes; dividing stream applications into first tasks suitable for execution by the FPGA devices and second tasks suitable for execution by the CPU module; and, where the stream applications submitted to the master node are configured with kernel files that can be compiled and executed by the FPGA devices, or with upload paths for the kernel files, having the master node allocate and schedule resources by pre-processing the stream applications.
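
    In addition to the task-division sketch shown for the counterpart application above, the master node's placement step can be sketched as a simple greedy assignment: first tasks go to slave nodes that still have a free FPGA device, second tasks to the node with the most free CPU slots. The slot-based resource model and the greedy policy are illustrative assumptions, not the patent's scheduler.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class SlaveNode:
    name: str
    free_fpga_slots: int
    free_cpu_slots: int


def schedule_on_master(first_tasks, second_tasks, slaves: List[SlaveNode]):
    """Place first tasks on slave nodes that still have a free FPGA device
    and second tasks on the node with the most free CPU slots."""
    placement = {}
    for task in first_tasks:
        node = next((s for s in slaves if s.free_fpga_slots > 0), None)
        if node is None:
            raise RuntimeError(f"no free FPGA device for task {task!r}")
        node.free_fpga_slots -= 1
        placement[task] = (node.name, "fpga")
    for task in second_tasks:
        node = max(slaves, key=lambda s: s.free_cpu_slots)
        node.free_cpu_slots -= 1
        placement[task] = (node.name, "cpu")
    return placement


slaves = [SlaveNode("slave-1", free_fpga_slots=1, free_cpu_slots=4),
          SlaveNode("slave-2", free_fpga_slots=0, free_cpu_slots=8)]
print(schedule_on_master(["window-agg"], ["parse", "sink"], slaves))
# {'window-agg': ('slave-1', 'fpga'), 'parse': ('slave-2', 'cpu'), 'sink': ('slave-2', 'cpu')}
```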

    S.M.A.R.T. threshold optimization method used for disk failure detection

    Publication (Announcement) Number: US10896080B2

    Publication (Announcement) Date: 2021-01-19

    Application Number: US16135480

    Application Date: 2018-09-19

    Abstract: An S.M.A.R.T. threshold optimization method used for disk failure detection includes the steps of: analyzing S.M.A.R.T. attributes based on the correlation between the S.M.A.R.T. attribute information of plural failed and non-failed disks and the failure information, and screening out weakly correlated attributes and/or strongly correlated attributes; and setting threshold intervals, multivariate thresholds, and/or native thresholds corresponding to the S.M.A.R.T. attributes based on the distribution patterns of the strongly or weakly correlated attributes. Compared with reactive fault tolerance, the disclosed method has no negative effect on the read and write performance of disks or on the performance of storage systems as a whole. Compared with known methods that use native disk S.M.A.R.T. thresholds, the disclosed method significantly improves the disk failure detection rate while keeping the false alarm rate low. Compared with machine-learning-based disk failure prediction, the disclosed method has good interpretability and allows easy adjustment of its forecast performance.
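
    The correlation-based screening step can be sketched as follows, assuming (hypothetically) that the absolute Pearson correlation between each attribute and the failure label is used to split attributes into strongly and weakly correlated sets; the cut-off value and the toy data are illustrative only.

```python
import statistics


def screen_smart_attributes(samples, corr_cutoff=0.6):
    """Split S.M.A.R.T. attributes into strongly and weakly correlated sets
    by the absolute Pearson correlation between each attribute and the
    failure label. `samples` is a list of (attribute_values, failed) pairs."""
    labels = [1.0 if failed else 0.0 for _, failed in samples]
    strong, weak = {}, {}
    for attr in samples[0][0]:
        values = [float(vals[attr]) for vals, _ in samples]
        corr = statistics.correlation(values, labels)  # needs Python 3.10+
        (strong if abs(corr) >= corr_cutoff else weak)[attr] = round(corr, 3)
    return strong, weak


data = [({"reallocated_sectors": 0, "temperature": 34}, False),
        ({"reallocated_sectors": 2, "temperature": 36}, False),
        ({"reallocated_sectors": 57, "temperature": 35}, True),
        ({"reallocated_sectors": 81, "temperature": 33}, True)]
strong, weak = screen_smart_attributes(data)
print(strong)  # {'reallocated_sectors': 0.97}  -- strongly correlated
print(weak)    # {'temperature': -0.447}        -- weakly correlated
```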
