-
1.
Publication No.: US20140162620A1
Publication Date: 2014-06-12
Application No.: US14099385
Filing Date: 2013-12-06
Applicant: QUALCOMM INCORPORATED
Inventor: Nieyan GENG , Gurvinder Singh CHHABRA , Thomas KLINGENBRUNN , Shyamal RAMACHANDRAN , Francesco GRILLI , Uttam PATTANAYAK
CPC classification number: H04W88/06 , G06F9/4401 , H04L69/321 , H04M1/72525 , H04W8/22 , H04W48/18
Abstract: Methods and apparatus are provided for device configuration (e.g., feature segment loading and system selection). Certain aspects of the present disclosure generally relate to operating a user equipment (UE) in a first radio access network (RAN) with a first set of modem features that supports the first RAN, detecting a second RAN not supported by the first set of modem features, and rebooting the modem software to load a second set of modem features that supports the detected RAN. For certain aspects, the first RAN may be a Time Division-Synchronous Code Division Multiple Access (TD-SCDMA) network and the second RAN may be a Wideband-Code Division Multiple Access (W-CDMA) network or Long-Term Evolution (LTE) network. This allows features to be loaded into memory (e.g., only) when they are required to support a detected RAN, rather than loading an entire image, thereby conserving DRAM and increasing efficiency.
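The abstract above describes loading only the feature segment needed for the detected RAN, rebooting the modem software when an unsupported RAN is found. A minimal sketch of that selection logic, with a hypothetical feature-set mapping and class names that are illustrative assumptions rather than the patented implementation:

```python
# Hypothetical feature segments per radio access technology (illustrative only).
FEATURE_SEGMENTS = {
    "TD-SCDMA": {"td_scdma_stack"},
    "W-CDMA": {"wcdma_stack"},
    "LTE": {"lte_stack"},
}

class ModemSketch:
    def __init__(self, initial_ran):
        # Load only the segment for the RAN in use, not the entire image.
        self.loaded_features = set(FEATURE_SEGMENTS[initial_ran])
        self.reboots = 0

    def supports(self, ran):
        return FEATURE_SEGMENTS[ran] <= self.loaded_features

    def on_ran_detected(self, ran):
        """Reboot and load the segment for `ran` only if it is unsupported."""
        if not self.supports(ran):
            self.reboots += 1                          # modem software restart
            self.loaded_features = set(FEATURE_SEGMENTS[ran])

modem = ModemSketch("TD-SCDMA")
modem.on_ran_detected("LTE")   # unsupported -> reboot, load the LTE segment
```

Only one segment is resident at a time in this sketch, which mirrors the stated DRAM-saving rationale.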
-
2.
Publication No.: US20200272520A1
Publication Date: 2020-08-27
Application No.: US16801776
Filing Date: 2020-02-26
Applicant: QUALCOMM Incorporated
Inventor: Richard SENIOR , Sundeep KUSHWAHA , Harsha Gordhan JAGASIA , Christopher AHN , Gurvinder Singh CHHABRA , Nieyan GENG , Maksim KRASNYANSKIY , UNNI PRASAD
IPC: G06F9/50 , G06F9/48 , G06F12/0806
Abstract: A method of managing a stack includes detecting, by a stack manager of a processor, that a size of a frame to be allocated exceeds available space of a first stack. The first stack is used by a particular task executing at the processor. The method also includes designating a second stack for use by the particular task. The method further includes copying metadata associated with the first stack to the second stack. The metadata enables the stack manager to transition from the second stack to the first stack upon detection that the second stack is no longer in use by the particular task. The method also includes allocating the frame in the second stack.
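The steps in the abstract can be sketched as follows; the classes and the capacity rule are assumptions made for illustration, not the claimed design:

```python
class Stack:
    def __init__(self, capacity):
        self.capacity = capacity
        self.frames = []        # sizes of allocated frames
        self.metadata = None    # link back to the previous stack

    def used(self):
        return sum(self.frames)

class StackManager:
    def allocate(self, stack, frame_size):
        """Allocate a frame; designate a second stack if it will not fit."""
        if stack.used() + frame_size <= stack.capacity:
            stack.frames.append(frame_size)
            return stack
        # Frame exceeds available space: designate a second stack and copy
        # metadata so the manager can transition back to the first later.
        second = Stack(capacity=max(stack.capacity, frame_size))
        second.metadata = stack
        second.frames.append(frame_size)
        return second

    def release(self, stack):
        """Second stack no longer in use: transition back via its metadata."""
        return stack.metadata

mgr = StackManager()
first = Stack(capacity=64)
cur = mgr.allocate(first, 48)   # fits in the first stack
cur = mgr.allocate(cur, 32)     # overflows -> second stack is designated
```

The metadata link is what lets the manager unwind to the first stack without any global bookkeeping, matching the abstract's transition condition.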
-
3.
Publication No.: US20170046274A1
Publication Date: 2017-02-16
Application No.: US14827255
Filing Date: 2015-08-14
Applicant: QUALCOMM Incorporated
Inventor: Andres Alejandro OPORTUS VALENZUELA , Gurvinder Singh CHHABRA , Nieyan GENG , John BRENNEN , BalaSubrahmanyam CHINTAMNEEDI
IPC: G06F12/10
CPC classification number: G06F12/1036 , G06F12/023 , G06F12/0253 , G06F12/04 , G06F12/1027 , G06F2212/1044 , G06F2212/50
Abstract: Systems and methods pertain to memory management. Gaps are unused portions of sections of physical memory that are mapped to virtual addresses by entries of a translation look-aside buffer (TLB). The sizes and alignment of the sections in the physical memory may be based on the number of entries in the TLB, which leads to the gaps. One or more gaps identified in the physical memory are reclaimed or reused by mapping the physical addresses of the gaps to virtual addresses of a dynamic buffer, into which the gaps are collected.
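A small sketch of the gap-collection idea: sections are rounded up to aligned sizes (a power-of-two rule is assumed here purely for illustration), the unused tails are identified, and their physical ranges are remapped to contiguous virtual addresses forming a dynamic buffer. All names and constants are hypothetical:

```python
def section_size_for(length):
    """Round a region length up to the next power of two
    (illustrative alignment rule, not the patented one)."""
    size = 1
    while size < length:
        size *= 2
    return size

def collect_gaps(regions, buffer_base=0x8000_0000):
    """regions: list of (phys_base, used_length).
    Returns {virtual addr in dynamic buffer: (phys addr of gap, gap length)}."""
    mapping = {}
    vaddr = buffer_base
    for phys_base, used in regions:
        size = section_size_for(used)
        gap = size - used              # unused tail of the aligned section
        if gap:
            mapping[vaddr] = (phys_base + used, gap)
            vaddr += gap               # gaps become contiguous virtually
    return mapping

# One region leaves a 1096-byte gap; the other exactly fills its section.
gaps = collect_gaps([(0x1000_0000, 3000), (0x2000_0000, 4096)])
```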
-
4.
Publication No.: US20200089616A1
Publication Date: 2020-03-19
Application No.: US16130069
Filing Date: 2018-09-13
Applicant: QUALCOMM Incorporated
Inventor: Nieyan GENG , Gurvinder Singh CHHABRA , Caoye SHEN , Samir THAKKAR , Chuguang HE
IPC: G06F12/109 , G06F9/50 , G06F9/4401 , G06F12/02 , G06F9/455
Abstract: Various embodiments include methods and devices for implementing external paging and swapping for dynamic modules on a computing device. Embodiments may include assigning static virtual addresses to a base image and dynamic modules of a static image of firmware of the computing device from a virtual address space for the static image, decomposing the static image into the base image and the dynamic modules, loading the base image to an execution memory from a first partition of a storage memory during a boot time, reserving a swap pool in the execution memory during the boot time, and loading a dynamic module of the dynamic modules to the swap pool from a second partition of the storage memory during a run time.
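The boot-time/run-time split described above can be sketched as below; the partition names, module names, and in-memory layout are assumptions for illustration only:

```python
class ExecutionMemory:
    def __init__(self):
        self.base_image = None
        self.swap_pool = None       # currently resident dynamic module

# Hypothetical storage layout: base image in one partition, modules in another.
STORAGE = {
    "partition1": {"base": "base-image-bytes"},
    "partition2": {"modA": "module-A-bytes", "modB": "module-B-bytes"},
}

def boot(mem):
    """Boot time: load the base image and reserve an (empty) swap pool."""
    mem.base_image = STORAGE["partition1"]["base"]
    mem.swap_pool = None

def load_module(mem, name):
    """Run time: swap the requested dynamic module into the reserved pool,
    evicting whatever module was resident before."""
    mem.swap_pool = STORAGE["partition2"][name]

mem = ExecutionMemory()
boot(mem)
load_module(mem, "modA")
load_module(mem, "modB")   # modA is swapped out; modB becomes resident
```

Because modules were assigned static virtual addresses up front, a swapped-in module can (per the abstract) run from the pool without relocation; this sketch models only the residency bookkeeping.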
-
5.
Publication No.: US20190012164A1
Publication Date: 2019-01-10
Application No.: US16028321
Filing Date: 2018-07-05
Applicant: QUALCOMM Incorporated
Inventor: Nieyan GENG , Gurvinder Singh CHHABRA , Chenyang LIU , Chuguang HE
Abstract: Embodiments of the present disclosure include systems and methods for efficient over-the-air updating of firmware having compressed and uncompressed segments. The method includes receiving a first update to the firmware via a radio, wherein the first update includes a first uncompressed segment and a first compressed segment, receiving a second update to the firmware, wherein the second update corresponds to the first compressed segment, compressing the second update to generate a compressed second update, applying the first update to the firmware, and applying the compressed second update to the firmware to generate an updated firmware.
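A sketch of that update flow, using zlib as a stand-in compressor (the segment names and the dict-based firmware image are illustrative assumptions, not the patented format):

```python
import zlib

# Firmware with one uncompressed and one compressed segment.
firmware = {
    "uncompressed": b"boot code v1",
    "compressed": zlib.compress(b"app code v1"),
}

def apply_first_update(fw, uncompressed_seg, compressed_seg):
    """First update carries both an uncompressed and a compressed segment."""
    fw["uncompressed"] = uncompressed_seg
    fw["compressed"] = compressed_seg

def apply_second_update(fw, plain_update):
    """Second update targets the compressed segment: compress it first,
    then apply it to produce the updated firmware."""
    fw["compressed"] = zlib.compress(plain_update)

apply_first_update(firmware, b"boot code v2", zlib.compress(b"app code v2"))
apply_second_update(firmware, b"app code v3")
```

Compressing the second update on-device keeps the stored image's compressed layout intact while letting the update itself travel in whatever form is convenient over the air.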
-
6.
Publication No.: US20170371797A1
Publication Date: 2017-12-28
Application No.: US15192984
Filing Date: 2016-06-24
Applicant: QUALCOMM Incorporated
Inventor: Andres Alejandro OPORTUS VALENZUELA , Nieyan GENG , Gurvinder Singh CHHABRA , Richard SENIOR , Anand JANAKIRAMAN
IPC: G06F12/0877 , G06F12/0842
CPC classification number: G06F12/0877 , G06F12/023 , G06F12/0842 , G06F12/0855 , G06F12/0886 , G06F2212/1024 , G06F2212/401 , G06F2212/604 , H03M7/30
Abstract: Some aspects of the disclosure relate to a pre-fetch mechanism for a cache line compression system that increases RAM capacity and optimizes overflow area reads. For example, a pre-fetch mechanism may allow the memory controller to pipeline the reads from an area with fixed size slots (main compressed area) and the reads from an overflow area. The overflow area is arranged so that a cache line most likely containing the overflow data for a particular line may be calculated by a decompression engine. In this manner, the cache line decompression engine may fetch, in advance, the overflow area before finding the actual location of the overflow data.
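The key point is that the overflow location is computable before the slot read completes, so both reads can be pipelined. A toy model, where the slot size, base address, and in-order overflow layout are all assumptions chosen so the address is calculable:

```python
SLOT_SIZE = 32          # bytes per fixed-size slot (illustrative)
OVERFLOW_BASE = 0x4000  # base address of the overflow area (illustrative)
CACHE_LINE = 64

def predicted_overflow_line(line_index):
    """Address of the cache line most likely to hold this line's overflow,
    assuming overflow slots are laid out in line-index order."""
    addr = OVERFLOW_BASE + line_index * SLOT_SIZE
    return addr - (addr % CACHE_LINE)   # align down to a cache-line boundary

def read_line(line_index, slot_bytes):
    """Pipeline the two reads: issue the overflow pre-fetch before the slot
    contents are decoded, then use it only if the line actually overflowed."""
    prefetch_addr = predicted_overflow_line(line_index)   # issued early
    overflowed = len(slot_bytes) > SLOT_SIZE
    return prefetch_addr, overflowed

addr, overflowed = read_line(3, b"x" * 40)   # 40 > 32, so this line overflows
```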
-
7.
Publication No.: US20170371792A1
Publication Date: 2017-12-28
Application No.: US15193001
Filing Date: 2016-06-24
Applicant: QUALCOMM Incorporated
Inventor: Andres Alejandro OPORTUS VALENZUELA , Nieyan GENG , Christopher Edward KOOB , Gurvinder Singh CHHABRA , Richard SENIOR , Anand JANAKIRAMAN
IPC: G06F12/0871 , G06F12/0868
CPC classification number: G06F12/0871 , G06F12/02 , G06F12/0802 , G06F12/0868 , G06F2212/1024 , G06F2212/1044 , G06F2212/281 , G06F2212/282 , G06F2212/313 , G06F2212/401 , G06F2212/601 , G06F2212/608
Abstract: In an aspect, high priority lines are stored starting at an address aligned to a cache line size (for instance, 64 bytes), and low priority lines are stored in the memory space left by the compression of the high priority lines. The space left by the high priority lines, and hence the low priority lines themselves, are managed through pointers also stored in memory. In this manner, the contents of low priority lines can be moved to different memory locations as needed. The efficiency of higher priority compressed memory accesses is improved by removing the indirection otherwise required to find and access compressed memory lines; this is especially advantageous for immutable compressed contents. The use of pointers for low priority lines is advantageous due to the full flexibility of placement, especially for mutable compressed contents that may need to move within memory, for instance as they change in size over time.
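A sketch of the placement scheme: each high-priority line gets an aligned, directly addressable slot (no indirection), while low-priority lines are first-fit packed into the space that compression left free, tracked by pointers. The line size and first-fit policy are illustrative assumptions:

```python
LINE = 64  # cache-line size in bytes (illustrative)

def place(high_sizes, low_sizes):
    """high_sizes / low_sizes: compressed line sizes in bytes.
    Returns (aligned addresses of high lines, pointers to low lines)."""
    high_addrs, free_ranges = [], []
    addr = 0
    for size in high_sizes:
        high_addrs.append(addr)          # aligned -> found without indirection
        if size < LINE:
            # Compression left a tail free in this slot.
            free_ranges.append((addr + size, LINE - size))
        addr += LINE
    low_ptrs = []                        # pointers, themselves stored in memory
    for size in low_sizes:
        for i, (base, avail) in enumerate(free_ranges):
            if size <= avail:            # first-fit into leftover space
                low_ptrs.append(base)
                free_ranges[i] = (base + size, avail - size)
                break
    return high_addrs, low_ptrs

high_addrs, low_ptrs = place([40, 64, 20], [30, 24])
```

Because low-priority lines are reached only through their pointers, they can later be moved (for instance when a mutable line grows) by rewriting the pointer, without disturbing the aligned high-priority layout.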
-