Lempel-Ziv (LZ)-based data compression employing implicit variable-length distance coding
    Granted invention patent (in force)

    Publication number: US09160362B1

    Publication date: 2015-10-13

    Application number: US14272929

    Application date: 2014-05-08

    CPC classification number: H03M7/3088 H03M7/3086 H03M7/40

    Abstract: Lempel-Ziv (LZ)-based data compression employing implicit variable-length distance coding is disclosed. Distances in LZ length-and-distance blocks are implicit variable-length encoded during compression, so that distances requiring fewer bits than the maximum distance length are not padded with extra bits (e.g., trailing 0's). This shortens the distance fields in the compressed output, further reducing data size. During compression, a distance table is generated whose entries each hold an assigned base and a number of extra bits to be read from the compressed data during decompression. In this manner, during decompression, the distance table entries can be consulted to determine how many bits of each variable-encoded distance to read, since encoded distances are stored in fewer bits and without bit padding.

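The base-plus-extra-bits scheme described in the abstract can be sketched as follows. This is an illustrative toy table, not the patent's actual code assignment: each entry covers a range of distances, and only `extra_bits` bits of offset are stored, so short distances are never padded out to the maximum width.

```python
# Hypothetical sketch of implicit variable-length distance coding.
# Names and the table layout are illustrative assumptions.

def build_distance_table(num_entries=16):
    """Entry i covers distances [base, base + 2**extra_bits - 1]."""
    table = []
    base = 1
    for i in range(num_entries):
        extra_bits = max(0, i - 1)   # wider offsets for larger distances
        table.append((base, extra_bits))
        base += 1 << extra_bits
    return table

def encode_distance(distance, table):
    """Return (table_index, offset); offset needs only `extra_bits` bits."""
    for index, (base, extra_bits) in enumerate(table):
        if base <= distance < base + (1 << extra_bits):
            return index, distance - base
    raise ValueError("distance out of range")

def decode_distance(index, offset, table):
    base, _ = table[index]
    return base + offset

table = build_distance_table()
idx, off = encode_distance(100, table)
assert decode_distance(idx, off, table) == 100
```

The decoder reads the table index first, looks up `extra_bits`, and then knows exactly how many more bits to consume, which is why no padding is needed.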

    MULTI-SOURCE POSE MERGING FOR DEPTH ESTIMATION

    Publication number: US20240354979A1

    Publication date: 2024-10-24

    Application number: US18435458

    Application date: 2024-02-07

    Abstract: This disclosure provides systems, methods, and devices for image signal processing that support multi-source pose merging for depth estimation. In a first aspect, a method of image processing includes generating, in accordance with first image data of a first image frame and second image data of a second image frame, a first mask indicating one or more pixels determined not to change position between the first image frame and the second image frame, generating, in accordance with the first image data and the second image data, a second mask indicating one or more pixels determined not to change position between the first image frame and the second image frame, and combining the first mask with the second mask to generate a third mask. Other aspects and features are also claimed and described.
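The mask-combining step above can be sketched in a few lines. The boolean representation and the logical-AND combination rule are assumptions for illustration (the abstract does not specify the operator); AND keeps only pixels that both sources agree did not move.

```python
# Illustrative sketch: combine two static-pixel masks into a third.
# Mask values are assumed booleans marking pixels that did NOT change
# position between the two frames.

def combine_masks(mask_a, mask_b):
    return [[a and b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(mask_a, mask_b)]

first  = [[True, True], [False, True]]   # e.g., derived from one pose source
second = [[True, False], [False, True]]  # e.g., derived from another source
third = combine_masks(first, second)
# third -> [[True, False], [False, True]]
```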

    Generating compressed data streams with lookback pre-fetch instructions for pre-fetching decompressed data from a lookback buffer

    Publication number: US10120581B2

    Publication date: 2018-11-06

    Application number: US15085399

    Application date: 2016-03-30

    Abstract: Aspects for generating compressed data streams with lookback pre-fetch instructions are disclosed. A data compression system is provided and configured to receive and compress an uncompressed data stream as part of a lookback-based compression scheme. The data compression system determines if a current data block was previously compressed. If so, the data compression system is configured to insert a lookback instruction corresponding to the current data block into the compressed data stream. Each lookback instruction includes a lookback buffer index that points to an entry in a lookback buffer where decompressed data corresponding to the data block will be stored during a separate decompression scheme. Once the data blocks have been compressed, the data compression system is configured to move a lookback buffer index of each lookback instruction in the compressed data stream into a lookback pre-fetch instruction located earlier than the corresponding lookback instruction in the compressed data stream.
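A minimal sketch of the two passes described above, with all names hypothetical: repeated blocks become lookback instructions carrying a buffer index, and a pre-fetch instruction carrying the same index is then hoisted earlier in the stream so a decompressor could begin fetching the buffered data before reaching the lookback itself.

```python
# Hedged sketch of lookback compression with hoisted pre-fetch
# instructions. Instruction names and the one-slot hoist distance are
# illustrative assumptions, not the patent's actual encoding.

def compress_with_lookback(blocks):
    seen = {}        # block content -> lookback buffer index
    stream = []
    for block in blocks:
        if block in seen:
            stream.append(("LOOKBACK", seen[block]))
        else:
            seen[block] = len(seen)
            stream.append(("LITERAL", block))
    # Second pass: hoist a PREFETCH one slot ahead of each LOOKBACK.
    out = []
    for instr in stream:
        if instr[0] == "LOOKBACK":
            out.insert(max(0, len(out) - 1), ("PREFETCH", instr[1]))
        out.append(instr)
    return out

compress_with_lookback(["A", "B", "A"])
# -> [('LITERAL', 'A'), ('PREFETCH', 0), ('LITERAL', 'B'), ('LOOKBACK', 0)]
```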

    Self-adaptive Cache Architecture Based on Run-time Hardware Counters and Offline Profiling of Applications
    Patent application (pending, published)

    Publication number: US20170017576A1

    Publication date: 2017-01-19

    Application number: US14801329

    Application date: 2015-07-16

    Abstract: Aspects include computing devices, systems, and methods for generating a cache memory configuration. A server may apply machine learning to context data. The server may determine a cache memory configuration, related to the context data, for a cache memory of a computing device and predict execution of an application on the computing device. Aspects also include computing devices, systems, and methods for configuring the cache memory of the computing device. The computing device may classify a plurality of cache memory configurations, related to a predicted application execution, based on at least a hardware data threshold and first hardware data. The computing device may select a first cache memory configuration from the plurality in response to that configuration being classified for the first hardware data, and may configure the cache memory at runtime based on the selected configuration.

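The runtime selection step can be sketched roughly as below. The threshold rule, counter semantics, and configuration fields are all illustrative assumptions; the abstract only states that configurations are classified against a hardware data threshold and that a matching one is selected.

```python
# Hedged sketch of threshold-based cache-configuration selection.
# All names and the classification rule are hypothetical.

def classify(configs, threshold, hw_counter):
    """Keep only configurations whose target class matches the counter."""
    label = "high_miss" if hw_counter > threshold else "low_miss"
    return [c for c in configs if c["target"] == label]

configs = [
    {"name": "large_llc",  "target": "high_miss"},
    {"name": "small_fast", "target": "low_miss"},
]
# Observed miss rate 0.12 exceeds the 0.05 threshold, so the
# high-miss configuration is selected at runtime.
selected = classify(configs, threshold=0.05, hw_counter=0.12)[0]
# selected["name"] -> "large_llc"
```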

    DIRECT DEPTH PREDICTION
    Patent application

    Publication number: US20250094796A1

    Publication date: 2025-03-20

    Application number: US18467804

    Application date: 2023-09-15

    Abstract: Example systems and techniques are described for training a machine learning model. A system includes memory configured to store image data captured by a plurality of cameras and one or more processors communicatively coupled to the memory. The one or more processors are configured to execute a machine learning model on the image data, the machine learning model including a plurality of layers. The one or more processors are configured to apply a non-linear mapping function to output of one layer of the plurality of layers to generate depth data. The one or more processors are configured to train the machine learning model based on the depth data to generate a trained machine learning model.
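One common choice of non-linear mapping from a layer's raw output to depth, offered here purely as an assumption since the abstract does not name the function, is to squash the activation through a sigmoid and interpret it as inverse depth between a minimum and maximum range:

```python
import math

# Illustrative sigmoid-to-inverse-depth mapping; the function choice and
# the d_min/d_max bounds are assumptions, not the patent's mapping.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def output_to_depth(raw, d_min=0.1, d_max=100.0):
    """Map an unbounded activation to a metric depth in [d_min, d_max]."""
    inv = 1.0 / d_max + sigmoid(raw) * (1.0 / d_min - 1.0 / d_max)
    return 1.0 / inv
```

Because the mapping is smooth and bounded, gradients flow through it during training while predicted depths stay within the valid range.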

    Compression of sparse deep convolutional network weights

    Publication number: US12210958B2

    Publication date: 2025-01-28

    Application number: US16137491

    Application date: 2018-09-20

    Abstract: The present disclosure describes methods, computer-readable media, and apparatuses for operating neural networks. For example, a first apparatus may receive a set of sparse weight vectors. The first apparatus may compress the set of sparse weight vectors to produce a compressed set of sparse weight vectors. The first apparatus may operate a neural network based on the compressed set of sparse weight vectors. In another example, a second apparatus may receive a set of sparse weight vectors. The second apparatus may perform a sparse computation based on the set of sparse weight vectors, and the performance of the sparse computation may produce one or more partial sums. The second apparatus may operate a neural network based at least in part on the one or more partial sums.
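A simple way to picture the compression and partial-sum steps above, using a (value, index) pair format that is an illustrative stand-in for whatever format the patent actually claims:

```python
# Illustrative sparse-weight compression: keep only nonzero weights as
# (value, index) pairs, then compute a partial sum against activations.

def compress(weights, eps=0.0):
    return [(w, i) for i, w in enumerate(weights) if abs(w) > eps]

def partial_sum(compressed, activations):
    return sum(w * activations[i] for w, i in compressed)

weights = [0.0, 0.5, 0.0, -1.25]
compressed = compress(weights)          # [(0.5, 1), (-1.25, 3)]
acts = [1.0, 2.0, 3.0, 4.0]
partial_sum(compressed, acts)           # 0.5*2.0 + (-1.25)*4.0 = -4.0
```

Skipping the zero entries both shrinks storage and removes multiplications, which is the benefit the abstract attributes to operating directly on the compressed vectors.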
