METHODS AND APPARATUS FOR UPDATING A DEVICE CONFIGURATION
    1.
    Invention Application
    METHODS AND APPARATUS FOR UPDATING A DEVICE CONFIGURATION (Granted)

    Publication No.: US20140162620A1

    Publication Date: 2014-06-12

    Application No.: US14099385

    Filing Date: 2013-12-06

    Abstract: Methods and apparatus are provided for device configuration (e.g., feature segment loading and system selection). Certain aspects of the present disclosure generally relate to operating a user equipment (UE) in a first radio access network (RAN) with a first set of modem features that supports the first RAN, detecting a second RAN not supported by the first set of modem features, and rebooting the modem software to load a second set of modem features that supports the detected RAN. For certain aspects, the first RAN may be a Time Division-Synchronous Code Division Multiple Access (TD-SCDMA) network and the second RAN may be a Wideband-Code Division Multiple Access (W-CDMA) network or Long-Term Evolution (LTE) network. This allows features to be loaded into memory (e.g., only) when they are required to support a detected RAN, rather than loading an entire image, thereby conserving DRAM and increasing efficiency.
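    The segment-loading idea in this abstract can be illustrated with a minimal Python sketch. All names here (FEATURE_SEGMENTS, Modem, the segment strings) are hypothetical illustrations, not identifiers from the patent:

```python
# Illustrative sketch: keep only the feature segment for the current
# RAN resident in memory, and reload on a RAN change instead of
# holding one monolithic image in DRAM.
FEATURE_SEGMENTS = {
    "TD-SCDMA": {"td_scdma_stack"},
    "W-CDMA": {"wcdma_stack"},
    "LTE": {"lte_stack"},
}

class Modem:
    def __init__(self, initial_ran):
        # Load only the segment that supports the initial RAN.
        self.loaded = set(FEATURE_SEGMENTS[initial_ran])

    def supports(self, ran):
        return FEATURE_SEGMENTS[ran] <= self.loaded

    def on_ran_detected(self, ran):
        # If the detected RAN is unsupported, "reboot" the modem
        # software with the segment that does support it.
        if not self.supports(ran):
            self.loaded = set(FEATURE_SEGMENTS[ran])
        return self.loaded

modem = Modem("TD-SCDMA")
loaded = modem.on_ran_detected("LTE")  # LTE unsupported, so reload
```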

    PRIORITY-BASED CACHE-LINE FITTING IN COMPRESSED MEMORY SYSTEMS OF PROCESSOR-BASED SYSTEMS

    Publication No.: US20230236979A1

    Publication Date: 2023-07-27

    Application No.: US17572472

    Filing Date: 2022-01-10

    Abstract: A compressed memory system includes a memory region that includes cache lines having priority levels. The compressed memory system also includes a compressed memory region that includes compressed cache lines. Each compressed cache line includes a first set of data bits configured to hold, in a first direction, either a portion of a first cache line or a portion of the first cache line after compression, the first cache line having a first priority level. Each compressed cache line also includes a second set of data bits configured to hold, in a second direction opposite to the first direction, either a portion of a second cache line or a portion of the second cache line after compression, the second cache line having a priority level lower than the first priority level. The first set of data bits includes a greater number of bits than the second set of data bits.
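    A rough sketch of the bidirectional packing described above, assuming byte granularity for simplicity (the function name and sizes are illustrative, not from the patent; the real design works in bit fields):

```python
def pack_line(high, low, total=64, high_budget=40):
    """Pack two compressed cache lines into one fixed-size physical
    line: high-priority bytes fill forward from the start, low-priority
    bytes fill backward from the end, and the high-priority region is
    the larger of the two."""
    assert high_budget > total - high_budget  # first region must be larger
    line = bytearray(total)
    h = high[:high_budget]           # high-priority portion, first direction
    line[:len(h)] = h
    l = low[:total - high_budget]    # low-priority portion, opposite direction
    if l:
        line[-len(l):] = l
    return bytes(line)
```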

    Priority-Based Cache-Line Fitting in Compressed Memory Systems of Processor-Based Systems

    Publication No.: US20230236961A1

    Publication Date: 2023-07-27

    Application No.: US17572471

    Filing Date: 2022-01-10

    CPC classification number: G06F12/023 G06F2212/401

    Abstract: A compressed memory system of a processor-based system includes a memory partitioning circuit for partitioning a memory region into data regions with different priority levels. The system also includes a cache line selection circuit for selecting a first cache line from a high priority data region and a second cache line from a low priority data region. The system also includes a compression circuit for compressing the cache lines to obtain a first and a second compressed cache line. The system also includes a cache line packing circuit for packing the compressed cache lines such that the first compressed cache line is written to a first predetermined portion and the second cache line or a portion of the second compressed cache line is written to a second predetermined portion of the candidate compressed cache line. The first predetermined portion is larger than the second predetermined portion.

    EFFICIENT UTILIZATION OF MEMORY GAPS
    5.
    Invention Application
    EFFICIENT UTILIZATION OF MEMORY GAPS (Under Examination, Published)

    Publication No.: US20170046274A1

    Publication Date: 2017-02-16

    Application No.: US14827255

    Filing Date: 2015-08-14

    Abstract: Systems and methods pertain to a method of memory management. Gaps are unused portions of a physical memory in sections of the physical memory mapped to virtual addresses by entries of a translation look-aside buffer (TLB). Sizes and alignment of the sections in the physical memory may be based on the number of entries in the TLB, which leads to the gaps. One or more gaps identified in the physical memory are reclaimed or reused, where the one or more gaps are collected to form a dynamic buffer, by mapping physical addresses of the gaps to virtual addresses of the dynamic buffer.
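    The gap-identification step can be sketched as follows. This is an illustrative assumption of how the arithmetic works (section size, function name, and the (address, size) representation are all hypothetical), not the patented method itself:

```python
def reclaim_gaps(allocations, section_size=0x10000):
    """Each allocation is mapped by TLB entries covering fixed-size,
    aligned sections, so the mapped span is rounded up to a section
    boundary. The unused tail of the last section is a gap whose
    physical range can be remapped into a dynamic buffer."""
    gaps = []
    for phys_base, length in allocations:
        mapped = -(-length // section_size) * section_size  # round up
        if mapped > length:
            gaps.append((phys_base + length, mapped - length))
    # The collected (address, size) pairs would then be mapped to
    # contiguous virtual addresses to form the dynamic buffer.
    total = sum(size for _, size in gaps)
    return gaps, total
```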

    TRAINING AND UTILIZATION OF A NEURAL BRANCH PREDICTOR

    Publication No.: US20190303158A1

    Publication Date: 2019-10-03

    Application No.: US15940896

    Filing Date: 2018-03-29

    Abstract: Systems and methods for branch prediction include identifying a subset of branch instructions executable by a processor as a neural subset of branch instructions, based on information obtained from using an execution trace, wherein the neural subset of branch instructions are determined to have larger benefit from a neural branch predictor than a non-neural branch predictor. The neural branch predictor is pre-trained for the neural subset based on the execution trace. Annotations are added to the neural subset of branch instructions, wherein the annotations are preserved across software revisions. At runtime, when the neural subset of branch instructions are encountered during any future software revision, the branch instructions thereof are detected as belonging to the neural subset of branch instructions based on the annotations, and the pre-trained neural branch predictor is used for making their branch predictions.
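    The trace-driven selection step described above can be sketched roughly as follows, assuming per-branch accuracies have already been measured with both predictor types (the function name, data layout, and margin are illustrative assumptions):

```python
def select_neural_subset(trace_stats, margin=0.02):
    """From per-branch accuracies measured on an execution trace, keep
    the branches where the neural predictor beats the non-neural one by
    at least `margin`. These branch PCs would then be annotated so the
    pre-trained neural predictor is used for them across software
    revisions."""
    return {
        pc
        for pc, (neural_acc, baseline_acc) in trace_stats.items()
        if neural_acc - baseline_acc >= margin
    }
```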

    TRAINING AND UTILIZATION OF NEURAL BRANCH PREDICTOR

    Publication No.: US20190087193A1

    Publication Date: 2019-03-21

    Application No.: US15712112

    Filing Date: 2017-09-21

    CPC classification number: G06F9/3848 G06F9/3806

    Abstract: Systems and methods for branch prediction include identifying a subset of branch instructions from an execution trace of instructions executed by a processor. The identified subset of branch instructions have greater benefit from branch predictions made by a neural branch predictor than branch predictions made by a non-neural branch predictor. During runtime, the neural branch predictor is selectively used for obtaining branch predictions of the identified subset of branch instructions. For remaining branch instructions outside the identified subset of branch instructions, branch predictions are obtained from a non-neural branch predictor. Further, a weight vector matrix comprising weight vectors for the identified subset of branch instructions of the neural branch predictor is pre-trained based on the execution trace.
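    For context, the "weight vector per branch" scheme the abstract refers to resembles the classic perceptron branch predictor, sketched minimally below. This is a generic textbook illustration, not the patented design; the class name and parameters are assumptions:

```python
class PerceptronPredictor:
    """Minimal perceptron branch predictor: one weight vector per
    branch PC, dotted with a shared global history. Such weight vectors
    are what could be pre-trained from an execution trace."""
    def __init__(self, history_len=8, threshold=16):
        self.h = [1] * history_len   # global history, entries in {-1, +1}
        self.w = {}                  # per-branch weight vectors (w[0] is bias)
        self.threshold = threshold

    def predict(self, pc):
        w = self.w.setdefault(pc, [0] * (len(self.h) + 1))
        y = w[0] + sum(wi * hi for wi, hi in zip(w[1:], self.h))
        return y >= 0, y             # predict taken when the dot product >= 0

    def update(self, pc, taken):
        pred, y = self.predict(pc)
        t = 1 if taken else -1
        w = self.w[pc]
        # Train only on a misprediction or when confidence is low.
        if pred != taken or abs(y) <= self.threshold:
            w[0] += t
            for i, hi in enumerate(self.h):
                w[i + 1] += t * hi
        self.h = self.h[1:] + [t]    # shift the outcome into the history
```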

    OVER-THE-AIR (OTA) UPDATING OF PARTIALLY COMPRESSED FIRMWARE

    Publication No.: US20190012164A1

    Publication Date: 2019-01-10

    Application No.: US16028321

    Filing Date: 2018-07-05

    Abstract: Embodiments of the present disclosure include systems and methods for efficient over-the-air updating of firmware having compressed and uncompressed segments. The method includes receiving a first update to the firmware via a radio, wherein the first update includes a first uncompressed segment and a first compressed segment, receiving a second update to the firmware, wherein the second update corresponds to the first compressed segment, compressing the second update to generate a compressed second update, applying the first update to the firmware, and applying the compressed second update to the firmware to generate an updated firmware.
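    The update flow in this abstract can be sketched in a few lines of Python, using zlib as a stand-in compressor (the segment names and function signature are assumptions for illustration):

```python
import zlib

def apply_updates(firmware, first_update, second_update_payload):
    """The first OTA update carries one uncompressed segment and one
    already-compressed segment; the second update targets the
    compressed segment, so the device compresses it locally before
    writing it into the firmware image."""
    fw = dict(firmware)
    # Apply the first update: one segment as-is, one pre-compressed.
    fw["boot"] = first_update["uncompressed_segment"]
    fw["app"] = first_update["compressed_segment"]
    # The second update replaces the compressed segment's contents,
    # so compress it on-device, then apply it.
    fw["app"] = zlib.compress(second_update_payload)
    return fw
```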
