-
1.
Publication No.: US20140162620A1
Publication Date: 2014-06-12
Application No.: US14099385
Filing Date: 2013-12-06
Applicant: QUALCOMM INCORPORATED
Inventor: Nieyan GENG , Gurvinder Singh CHHABRA , Thomas KLINGENBRUNN , Shyamal RAMACHANDRAN , Francesco GRILLI , Uttam PATTANAYAK
CPC classification number: H04W88/06 , G06F9/4401 , H04L69/321 , H04M1/72525 , H04W8/22 , H04W48/18
Abstract: Methods and apparatus are provided for device configuration (e.g., feature segment loading and system selection). Certain aspects of the present disclosure generally relate to operating a user equipment (UE) in a first radio access network (RAN) with a first set of modem features that supports the first RAN, detecting a second RAN not supported by the first set of modem features, and rebooting the modem software to load a second set of modem features that supports the detected RAN. For certain aspects, the first RAN may be a Time Division-Synchronous Code Division Multiple Access (TD-SCDMA) network and the second RAN may be a Wideband-Code Division Multiple Access (W-CDMA) network or Long-Term Evolution (LTE) network. This allows features to be loaded into memory (e.g., only) when they are required to support a detected RAN, rather than loading an entire image, thereby conserving DRAM and increasing efficiency.
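The feature-segment scheme this abstract describes can be sketched roughly as follows. The feature-set contents, class, and method names are illustrative assumptions, not the actual modem software interface:

```python
# Hypothetical sketch of on-demand feature-segment loading: only the
# segment for the currently detected RAN is resident, and detecting an
# unsupported RAN triggers a modem software reboot with a new segment.
FEATURE_SEGMENTS = {
    "TD-SCDMA": {"td_scdma_stack"},
    "W-CDMA": {"wcdma_stack"},
    "LTE": {"lte_stack"},
}

class Modem:
    def __init__(self, initial_ran):
        self.loaded_features = set(FEATURE_SEGMENTS[initial_ran])
        self.reboots = 0

    def supports(self, ran):
        return FEATURE_SEGMENTS[ran] <= self.loaded_features

    def on_ran_detected(self, ran):
        """Reboot and load only the segment for the detected RAN,
        rather than keeping a monolithic all-RAN image in DRAM."""
        if not self.supports(ran):
            self.reboots += 1
            self.loaded_features = set(FEATURE_SEGMENTS[ran])
        return self.supports(ran)

modem = Modem("TD-SCDMA")
modem.on_ran_detected("LTE")   # unsupported RAN detected: reboot with LTE segment
```

The DRAM saving comes from `loaded_features` holding one segment at a time instead of the union of all three.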
-
2.
Publication No.: US20230236979A1
Publication Date: 2023-07-27
Application No.: US17572472
Filing Date: 2022-01-10
Applicant: QUALCOMM Incorporated
Inventor: Norris GENG , Richard SENIOR , Gurvinder Singh CHHABRA , Kan WANG
IPC: G06F12/084 , G06F12/0811 , G06F3/06
CPC classification number: G06F12/084 , G06F3/0608 , G06F3/0659 , G06F3/0679 , G06F12/0811
Abstract: A compressed memory system includes a memory region that includes cache lines having priority levels. The compressed memory system also includes a compressed memory region that includes compressed cache lines. Each compressed cache line includes a first set of data bits configured to hold, in a first direction, either a portion of a first cache line or a portion of the first cache line after compression, the first cache line having a first priority level. Each compressed cache line also includes a second set of data bits configured to hold, in a second direction opposite to the first direction, either a portion of a second cache line or a portion of the second cache line after compression, the second cache line having a priority level lower than the first priority level. The first set of data bits includes a greater number of bits than the second set of data bits.
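The two-direction packing described above can be sketched in a few lines. The 64-byte line with a 48/16 split between the first (high-priority) and second (low-priority) sets of data bits is an assumed example, not a size taken from the patent:

```python
# Minimal sketch: one compressed cache line holds a high-priority line
# written left-to-right in the larger front portion, and a low-priority
# line written right-to-left in the smaller back portion.
LINE = 64
HI_PORTION = 48            # first set of data bits (larger)
LO_PORTION = LINE - HI_PORTION

def pack(hi_bytes: bytes, lo_bytes: bytes) -> bytearray:
    assert len(hi_bytes) <= HI_PORTION and len(lo_bytes) <= LO_PORTION
    line = bytearray(LINE)
    line[:len(hi_bytes)] = hi_bytes          # first direction (forward)
    line[LINE - len(lo_bytes):] = lo_bytes   # second direction (reverse)
    return line

line = pack(b"H" * 10, b"L" * 5)
```

Packing from opposite ends lets either portion shrink after compression without the two lines ever needing a movable boundary marker between them.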
-
3.
Publication No.: US20230236961A1
Publication Date: 2023-07-27
Application No.: US17572471
Filing Date: 2022-01-10
Applicant: QUALCOMM Incorporated
Inventor: Norris GENG , Richard SENIOR , Gurvinder Singh CHHABRA , Kan WANG
IPC: G06F12/02
CPC classification number: G06F12/023 , G06F2212/401
Abstract: A compressed memory system of a processor-based system includes a memory partitioning circuit for partitioning a memory region into data regions with different priority levels. The system also includes a cache line selection circuit for selecting a first cache line from a high priority data region and a second cache line from a low priority data region. The system also includes a compression circuit for compressing the cache lines to obtain a first and a second compressed cache line. The system also includes a cache line packing circuit for packing the compressed cache lines such that the first compressed cache line is written to a first predetermined portion, and the second compressed cache line, or a portion of it, is written to a second predetermined portion of the candidate compressed cache line. The first predetermined portion is larger than the second predetermined portion.
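The select/compress/pack pipeline can be illustrated end to end, with `zlib` standing in for the hardware compression circuit and an assumed 48/16 split of the candidate compressed cache line:

```python
# Illustrative sketch of the pipeline: compress a high- and a
# low-priority cache line, then pack them into fixed predetermined
# portions of one 64-byte candidate compressed cache line.
import zlib

LINE = 64
HI_PORTION = 48     # first predetermined portion (larger)
LO_PORTION = LINE - HI_PORTION

def pack_pair(hi_line: bytes, lo_line: bytes) -> bytes:
    hi_c = zlib.compress(hi_line)
    lo_c = zlib.compress(lo_line)
    out = bytearray(LINE)
    out[:min(len(hi_c), HI_PORTION)] = hi_c[:HI_PORTION]
    # Per the abstract, only a *portion* of the low-priority compressed
    # line may fit in the second predetermined portion.
    fit = lo_c[:LO_PORTION]
    out[LINE - len(fit):] = fit
    return bytes(out)

out = pack_pair(b"\x00" * 64, b"\x01" * 64)
```

Fixed predetermined portions trade some packing density for constant-time location of the high-priority data.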
-
4.
Publication No.: US20200272520A1
Publication Date: 2020-08-27
Application No.: US16801776
Filing Date: 2020-02-26
Applicant: QUALCOMM Incorporated
Inventor: Richard SENIOR , Sundeep KUSHWAHA , Harsha Gordhan JAGASIA , Christopher AHN , Gurvinder Singh CHHABRA , Nieyan GENG , Maksim KRASNYANSKIY , UNNI PRASAD
IPC: G06F9/50 , G06F9/48 , G06F12/0806
Abstract: A method of managing a stack includes detecting, by a stack manager of a processor, that a size of a frame to be allocated exceeds available space of a first stack. The first stack is used by a particular task executing at the processor. The method also includes designating a second stack for use by the particular task. The method further includes copying metadata associated with the first stack to the second stack. The metadata enables the stack manager to transition from the second stack to the first stack upon detection that the second stack is no longer in use by the particular task. The method also includes allocating the frame in the second stack.
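The stack-switching flow above can be sketched with Python lists standing in for hardware stacks; the class and field names are illustrative, and the doubling growth policy is an assumption:

```python
# Rough sketch of segmented stacks: when a frame would overflow the
# first stack, a second stack is designated, metadata (here, a link
# back to the first stack) is copied, and the frame is allocated there.
class StackManager:
    def __init__(self, first_capacity=4):
        self.stack = []                 # first stack, used by the task
        self.capacity = first_capacity
        self.prev = []                  # metadata: saved (stack, capacity)

    def allocate(self, frame):
        if len(self.stack) + 1 > self.capacity:
            # Frame exceeds available space: designate a second stack.
            self.prev.append((self.stack, self.capacity))
            self.stack, self.capacity = [], self.capacity * 2
        self.stack.append(frame)

    def free(self):
        self.stack.pop()
        if not self.stack and self.prev:
            # Second stack no longer in use: transition back via metadata.
            self.stack, self.capacity = self.prev.pop()

sm = StackManager(first_capacity=2)
for f in ("f1", "f2", "f3"):
    sm.allocate(f)                      # third frame triggers the switch
sm.free()                               # second stack empties: transition back
```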
-
5.
Publication No.: US20170046274A1
Publication Date: 2017-02-16
Application No.: US14827255
Filing Date: 2015-08-14
Applicant: QUALCOMM Incorporated
Inventor: Andres Alejandro OPORTUS VALENZUELA , Gurvinder Singh CHHABRA , Nieyan GENG , John BRENNEN , BalaSubrahmanyam CHINTAMNEEDI
IPC: G06F12/10
CPC classification number: G06F12/1036 , G06F12/023 , G06F12/0253 , G06F12/04 , G06F12/1027 , G06F2212/1044 , G06F2212/50
Abstract: Systems and methods pertain to memory management. Gaps are unused portions of physical memory within sections of the physical memory that are mapped to virtual addresses by entries of a translation look-aside buffer (TLB). The sizes and alignment of the sections in physical memory may be based on the number of entries in the TLB, which leads to the gaps. One or more gaps identified in the physical memory are reclaimed or reused by collecting them into a dynamic buffer, mapping the physical addresses of the gaps to the virtual addresses of the dynamic buffer.
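The gap-collection step can be sketched as simple address arithmetic. The section layout below (base, size, used bytes in KB) is an assumed example of the alignment-induced gaps the abstract describes:

```python
# Sketch: collect the unused tail of each TLB-mapped section into one
# dynamic buffer by assigning the gaps sequential virtual addresses.
def build_dynamic_buffer(sections):
    """sections: list of (phys_base_kb, section_kb, used_kb) tuples.
    Returns (virtual->physical mapping for the gaps, total buffer KB)."""
    mapping, vaddr = [], 0
    for base, size, used in sections:
        gap = size - used
        if gap > 0:
            mapping.append({"virt_kb": vaddr,
                            "phys_kb": base + used,   # gap starts after used part
                            "size_kb": gap})
            vaddr += gap
    return mapping, vaddr

# Three 1 MB sections; the middle one is fully used and contributes no gap.
mapping, total_kb = build_dynamic_buffer(
    [(0, 1024, 900), (1024, 1024, 1024), (2048, 1024, 512)])
```

Physically scattered gaps become one contiguous virtual buffer, which is the reclamation the abstract refers to.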
-
6.
Publication No.: US20190303158A1
Publication Date: 2019-10-03
Application No.: US15940896
Filing Date: 2018-03-29
Applicant: QUALCOMM Incorporated
Inventor: Gurkanwal BRAR , Christopher AHN , Gurvinder Singh CHHABRA
Abstract: Systems and methods for branch prediction include identifying a subset of branch instructions executable by a processor as a neural subset of branch instructions, based on information obtained from using an execution trace, wherein the neural subset of branch instructions are determined to have larger benefit from a neural branch predictor than a non-neural branch predictor. The neural branch predictor is pre-trained for the neural subset based on the execution trace. Annotations are added to the neural subset of branch instructions, wherein the annotations are preserved across software revisions. At runtime, when the neural subset of branch instructions are encountered during any future software revision, the branch instructions thereof are detected as belonging to the neural subset of branch instructions based on the annotations, and the pre-trained neural branch predictor is used for making their branch predictions.
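The offline selection pass can be sketched as a comparison of per-branch accuracies from the execution trace. The threshold value and the keying of annotations by branch address are illustrative assumptions:

```python
# Hypothetical selection of the "neural subset": keep branches whose
# neural-predictor accuracy beats the non-neural predictor by more than
# a margin, and annotate them by branch PC so the choice survives
# software revisions.
def select_neural_subset(trace_stats, threshold=0.02):
    """trace_stats: {branch_pc: (neural_acc, non_neural_acc)}.
    Returns the set of branch PCs to annotate."""
    return {pc for pc, (neural, non_neural) in trace_stats.items()
            if neural - non_neural > threshold}

stats = {
    0x400: (0.97, 0.90),   # clear neural win: annotate
    0x404: (0.95, 0.96),   # non-neural already better: skip
    0x408: (0.99, 0.98),   # win below the margin: skip
}
annotated = select_neural_subset(stats)
```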
-
7.
Publication No.: US20190087193A1
Publication Date: 2019-03-21
Application No.: US15712112
Filing Date: 2017-09-21
Applicant: QUALCOMM Incorporated
Inventor: Gurkanwal BRAR , Christopher AHN , Gurvinder Singh CHHABRA
IPC: G06F9/38
CPC classification number: G06F9/3848 , G06F9/3806
Abstract: Systems and methods for branch prediction include identifying a subset of branch instructions from an execution trace of instructions executed by a processor. The identified subset of branch instructions have greater benefit from branch predictions made by a neural branch predictor than branch predictions made by a non-neural branch predictor. During runtime, the neural branch predictor is selectively used for obtaining branch predictions of the identified subset of branch instructions. For remaining branch instructions outside the identified subset of branch instructions, branch predictions are obtained from a non-neural branch predictor. Further, a weight vector matrix comprising weight vectors for the identified subset of branch instructions of the neural branch predictor is pre-trained based on the execution trace.
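A per-branch weight vector of the kind the abstract pre-trains can be illustrated with the classic perceptron predictor; this is a generic stand-in, not the patent's specific design, and the history length and training threshold are assumed:

```python
# Minimal perceptron-style branch predictor: one weight vector per
# selected branch, dotted with a shared global history of +1/-1 outcomes.
class PerceptronPredictor:
    def __init__(self, history_len=8):
        self.h = history_len
        self.weights = {}                   # branch_pc -> [bias, w1..wh]
        self.history = [1] * history_len    # +1 taken, -1 not taken

    def predict(self, pc):
        w = self.weights.setdefault(pc, [0] * (self.h + 1))
        y = w[0] + sum(wi * xi for wi, xi in zip(w[1:], self.history))
        return y >= 0

    def train(self, pc, taken, threshold=16):
        w = self.weights.setdefault(pc, [0] * (self.h + 1))
        y = w[0] + sum(wi * xi for wi, xi in zip(w[1:], self.history))
        t = 1 if taken else -1
        # Update on a misprediction or while confidence is low.
        if (y >= 0) != taken or abs(y) <= threshold:
            w[0] += t
            for i, xi in enumerate(self.history):
                w[i + 1] += t * xi
        self.history = self.history[1:] + [t]

p = PerceptronPredictor()
for _ in range(20):            # pre-training stands in for the trace pass
    p.predict(0x400)
    p.train(0x400, taken=True)
```

In the scheme above, only the selected subset would get weight vectors; every other branch falls back to the cheaper non-neural predictor.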
-
8.
Publication No.: US20190012164A1
Publication Date: 2019-01-10
Application No.: US16028321
Filing Date: 2018-07-05
Applicant: QUALCOMM Incorporated
Inventor: Nieyan GENG , Gurvinder Singh CHHABRA , Chenyang LIU , Chuguang HE
Abstract: Embodiments of the present disclosure include systems and methods for efficient over-the-air updating of firmware having compressed and uncompressed segments. The method includes receiving a first update to the firmware via a radio, wherein the first update includes a first uncompressed segment and a first compressed segment, receiving a second update to the firmware, wherein the second update corresponds to the first compressed segment, compressing the second update to generate a compressed second update, applying the first update to the firmware, and applying the compressed second update to the firmware to generate an updated firmware.
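The update flow can be sketched with `zlib` standing in for the firmware's compression scheme; the segment names and the dict-based firmware image are assumptions made for illustration:

```python
# Sketch of the OTA flow: the first update carries an uncompressed and a
# compressed segment as-is; the second update targets the compressed
# segment but arrives uncompressed, so the device compresses it before
# applying, keeping the stored segment compressed.
import zlib

def apply_updates(firmware, first_update, second_update):
    """first_update: {'uncompressed': bytes, 'compressed': bytes}.
    second_update: raw bytes replacing the compressed segment."""
    fw = dict(firmware)
    # Apply the first update as received over the air.
    fw["plain_segment"] = first_update["uncompressed"]
    fw["compressed_segment"] = first_update["compressed"]
    # Compress the second update on-device, then apply it.
    fw["compressed_segment"] = zlib.compress(second_update)
    return fw

fw = apply_updates(
    {},
    {"uncompressed": b"boot", "compressed": zlib.compress(b"old code")},
    b"new code",
)
```

Sending the second update uncompressed keeps the radio payload independent of the on-device compression format.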
-
9.
Publication No.: US20170371797A1
Publication Date: 2017-12-28
Application No.: US15192984
Filing Date: 2016-06-24
Applicant: QUALCOMM Incorporated
Inventor: Andres Alejandro OPORTUS VALENZUELA , Nieyan GENG , Gurvinder Singh CHHABRA , Richard SENIOR , Anand JANAKIRAMAN
IPC: G06F12/0877 , G06F12/0842
CPC classification number: G06F12/0877 , G06F12/023 , G06F12/0842 , G06F12/0855 , G06F12/0886 , G06F2212/1024 , G06F2212/401 , G06F2212/604 , H03M7/30
Abstract: Some aspects of the disclosure relate to a pre-fetch mechanism for a cache line compression system that increases RAM capacity and optimizes overflow area reads. For example, a pre-fetch mechanism may allow the memory controller to pipeline the reads from an area with fixed size slots (main compressed area) and the reads from an overflow area. The overflow area is arranged so that a cache line most likely containing the overflow data for a particular line may be calculated by a decompression engine. In this manner, the cache line decompression engine may fetch, in advance, the overflow area before finding the actual location of the overflow data.
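The speculative read can be sketched as address arithmetic over an assumed layout: fixed 64-byte slots in the main compressed area, and an overflow area arranged so the most likely overflow line for slot `i` is computable from `i` alone. All constants below are illustrative:

```python
# Sketch of the pre-fetch: issue the main-area read and the *likely*
# overflow read together, before the slot is parsed to find the actual
# overflow location, so the two memory accesses pipeline.
SLOT_SIZE = 64
OVERFLOW_BASE = 0x8000_0000
SLOTS_PER_OVERFLOW_LINE = 4   # assumed overflow-area arrangement

def likely_overflow_addr(slot_index):
    return OVERFLOW_BASE + (slot_index // SLOTS_PER_OVERFLOW_LINE) * SLOT_SIZE

def read_line(slot_index, issue_read):
    issue_read(slot_index * SLOT_SIZE)            # main compressed area
    issue_read(likely_overflow_addr(slot_index))  # speculative overflow fetch

reads = []
read_line(5, reads.append)
```

If the guess is right, the overflow data is already in flight when the slot header is decoded; if wrong, only the one speculative read is wasted.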
-
10.
Publication No.: US20170371792A1
Publication Date: 2017-12-28
Application No.: US15193001
Filing Date: 2016-06-24
Applicant: QUALCOMM Incorporated
Inventor: Andres Alejandro OPORTUS VALENZUELA , Nieyan GENG , Christopher Edward KOOB , Gurvinder Singh CHHABRA , Richard SENIOR , Anand JANAKIRAMAN
IPC: G06F12/0871 , G06F12/0868
CPC classification number: G06F12/0871 , G06F12/02 , G06F12/0802 , G06F12/0868 , G06F2212/1024 , G06F2212/1044 , G06F2212/281 , G06F2212/282 , G06F2212/313 , G06F2212/401 , G06F2212/601 , G06F2212/608
Abstract: In an aspect, high priority lines are stored starting at an address aligned to the cache line size (for instance, 64 bytes), and low priority lines are stored in the memory space left over by the compression of the high priority lines. The space left by the high priority lines, and hence the low priority lines themselves, are managed through pointers that are also stored in memory. In this manner, the contents of low priority lines can be moved to different memory locations as needed. The efficiency of high priority compressed memory accesses is improved by removing the indirection otherwise required to find and access compressed memory lines, which is especially advantageous for immutable compressed contents. The use of pointers for low priority lines is advantageous due to the full flexibility of placement, especially for mutable compressed contents that may need to move within memory, for instance as they change in size over time.
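The asymmetry between direct high-priority addressing and pointer-managed low-priority lines can be sketched as follows; the dict-based memory model and method names are illustrative assumptions:

```python
# Sketch: high priority lines live at addresses derivable from the line
# ID alone (no indirection); low priority lines go through a pointer
# table, so their contents can relocate as their compressed size changes.
CACHE_LINE = 64

class CompressedMemory:
    def __init__(self):
        self.mem = {}        # addr -> bytes (the physical memory)
        self.low_ptrs = {}   # line_id -> addr (pointers, also in memory in HW)

    def write_high(self, line_id, data):
        self.mem[line_id * CACHE_LINE] = data   # aligned, computed address

    def read_high(self, line_id):
        return self.mem[line_id * CACHE_LINE]   # no pointer lookup needed

    def write_low(self, line_id, addr, data):
        self.low_ptrs[line_id] = addr           # placed in leftover space
        self.mem[addr] = data

    def move_low(self, line_id, new_addr):
        # Relocation only rewrites the pointer; readers are unaffected.
        self.mem[new_addr] = self.mem.pop(self.low_ptrs[line_id])
        self.low_ptrs[line_id] = new_addr

    def read_low(self, line_id):
        return self.mem[self.low_ptrs[line_id]]

cm = CompressedMemory()
cm.write_high(1, b"HI")
cm.write_low(9, 1000, b"LO")
cm.move_low(9, 2000)   # e.g. the line grew and needed a bigger slot
```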
-