-
Publication No.: US20240428048A1
Publication Date: 2024-12-26
Application No.: US18574786
Filing Date: 2021-11-24
Applicant: Intel Corporation
Inventor: Wenjie Wang , Yi Zhang , Yi Qian , Wanglei Shen , Junjie Li , Lingyun Zhu
IPC: G06N3/04
Abstract: Technology providing in-memory neural network protection can include a memory to store a neural network, and a processor executing instructions to generate a neural network memory structure having a plurality of memory blocks in the memory, scatter the neural network among the plurality of memory blocks based on a randomized memory storage pattern, and reshuffle the neural network among the plurality of memory blocks based on a neural network memory access pattern. Scattering the neural network model can include dividing each layer of the neural network into a plurality of chunks, for each layer, selecting, for each chunk of the plurality of chunks, one of the plurality of memory blocks based on the randomized memory storage pattern, and storing each chunk in the respective selected memory block. The plurality of memory blocks can be organized into groups of memory blocks and be divided between stack space and heap space.
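A minimal sketch of the scatter-and-reshuffle idea from the abstract is shown below; the class and function names, the chunk size, and the reshuffle trigger are illustrative assumptions, not the disclosed implementation.

```python
# Sketch only: scatter layer chunks across memory blocks with a randomized
# pattern, then reshuffle them; all names here are illustrative assumptions.
import random

class MemoryBlock:
    """Stands in for one fixed-size region in stack or heap space."""
    def __init__(self, block_id):
        self.block_id = block_id
        self.chunks = {}          # (layer_name, chunk_index) -> bytes

def split_into_chunks(layer_bytes, chunk_size):
    return [layer_bytes[i:i + chunk_size]
            for i in range(0, len(layer_bytes), chunk_size)]

def scatter_model(layers, blocks, chunk_size, rng):
    """Scatter each layer's chunks across blocks using a randomized pattern."""
    placement = {}                # (layer_name, chunk_index) -> block_id
    for layer_name, layer_bytes in layers.items():
        for idx, chunk in enumerate(split_into_chunks(layer_bytes, chunk_size)):
            block = rng.choice(blocks)          # randomized storage pattern
            block.chunks[(layer_name, idx)] = chunk
            placement[(layer_name, idx)] = block.block_id
    return placement

def reshuffle(blocks, placement, rng):
    """Move every stored chunk to a newly chosen block, e.g. after a number
    of memory accesses determined by the access pattern."""
    all_chunks = [(key, blk.chunks.pop(key))
                  for blk in blocks for key in list(blk.chunks)]
    for key, chunk in all_chunks:
        block = rng.choice(blocks)
        block.chunks[key] = chunk
        placement[key] = block.block_id
    return placement

if __name__ == "__main__":
    rng = random.Random(0)
    blocks = [MemoryBlock(i) for i in range(8)]
    layers = {"conv1": bytes(1024), "fc1": bytes(4096)}
    placement = scatter_model(layers, blocks, chunk_size=256, rng=rng)
    placement = reshuffle(blocks, placement, rng)
```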
-
Publication No.: US20240203025A1
Publication Date: 2024-06-20
Application No.: US18460183
Filing Date: 2023-09-01
Applicant: Intel Corporation
Inventor: Linlin Zhang , Ning Luo , Changliang Wang , Yi Qian , Zhisheng Zhou
IPC: G06T15/00 , H04N13/366
CPC classification number: G06T15/005 , H04N13/366 , H04N2013/0092
Abstract: The disclosure relates to content-aware foveated ASW for low-latency rendering. A device for graphics processing comprises processing resources and a pipeline at least partly implemented by the processing resources. The pipeline is to: render input data to generate a first frame; perform ASW on the first frame on a block basis based on content- and focusing-related information associated with the first frame to generate a second frame; and output the first frame and the second frame.
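A rough illustration of a per-block decision for foveated, content-aware warping is sketched below; the fovea radius, motion threshold, and block layout are assumed values, not taken from the disclosure.

```python
# Illustrative per-block policy: blocks near the focus point are re-rendered,
# peripheral blocks with motion are warped, static peripheral blocks are reused.
# Thresholds and block format are assumptions for this example.
import math

def block_warp_quality(frame_w, frame_h, block, gaze_xy, motion_mag,
                       fovea_radius=0.2, motion_threshold=0.05):
    """Return 'full', 'warp', or 'skip' for one block of the next frame.

    block    : (x, y, width, height) in pixels
    gaze_xy  : normalized (x, y) focus point in [0, 1]
    motion_mag: normalized motion magnitude of the block's content
    """
    bx, by, bw, bh = block
    cx = (bx + bw / 2) / frame_w
    cy = (by + bh / 2) / frame_h
    dist = math.hypot(cx - gaze_xy[0], cy - gaze_xy[1])
    if dist < fovea_radius:
        return "full"       # foveal region: render normally
    if motion_mag > motion_threshold:
        return "warp"       # periphery with motion: extrapolate from last frame
    return "skip"           # static periphery: reuse last frame

if __name__ == "__main__":
    print(block_warp_quality(1920, 1080, (960, 540, 64, 64), (0.5, 0.5), 0.01))
```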
-
Publication No.: US20240130002A1
Publication Date: 2024-04-18
Application No.: US18264214
Filing Date: 2022-03-03
Applicant: Intel Corporation
Inventor: Rahul Khanna , Yi Qian , Greeshma Pisharody , Raju Arvind , Jiejie Wang , Laura M. Rumbel , Christopher R. Carlson , Jennifer M. Williams , Prince Adu Agyeman
CPC classification number: H04W76/40 , H04W12/009 , H04W74/006
Abstract: Various technologies relating to wireless sensor networks (WSNs) are disclosed, including, but not limited to, device onboarding and authentication, network association and synchronization, data logging and reporting, asset tracking, and automated flight state detection.
-
Publication No.: US20220311594A1
Publication Date: 2022-09-29
Application No.: US17569488
Filing Date: 2022-01-05
Applicant: Intel Corporation
Inventor: Akshay Kadam , Sivakumar B , Lawrence Booth, JR. , Niraj Gupta , Steven Tu , Ricardo Becker , Subba Mungara , Tuyet-Trang Piel , Mitul Shah , Raynald Lim , Mihai Bogdan Bucsa , Cliodhna Ni Scanaill , Roman Zubarev , Dmitry Budnikov , Lingyun Zhu , Yi Qian , Stewart Taylor
Abstract: An accelerator includes a memory, a compute zone to receive an encrypted workload downloaded from a tenant application running in a virtual machine on a host computing system attached to the accelerator, and a processor subsystem to execute a cryptographic key exchange protocol with the tenant application to derive a session key for the compute zone and to program the session key into the compute zone. The compute zone is to decrypt the encrypted workload using the session key, receive an encrypted data stream from the tenant application, decrypt the encrypted data stream using the session key, and process the decrypted data stream by executing the workload to produce metadata.
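The tenant-to-compute-zone flow can be sketched as below, using X25519, HKDF, and AES-GCM from the Python cryptography package as stand-ins; the actual key-exchange protocol and cipher used by the accelerator are not specified here, and the function names are illustrative.

```python
# Sketch of the described flow: derive a session key via a key exchange,
# program it into the compute zone, then decrypt the workload and data stream.
# X25519/HKDF/AES-GCM are stand-in choices, not the disclosed protocol.
import os
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def derive_session_key(own_private, peer_public):
    """Derive a symmetric session key from an ECDH shared secret."""
    shared = own_private.exchange(peer_public)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"compute-zone-session").derive(shared)

# Tenant application and accelerator each generate a key pair and exchange
# public keys; both sides derive the same session key.
tenant_priv = X25519PrivateKey.generate()
accel_priv = X25519PrivateKey.generate()
session_key_tenant = derive_session_key(tenant_priv, accel_priv.public_key())
session_key_accel = derive_session_key(accel_priv, tenant_priv.public_key())
assert session_key_tenant == session_key_accel

# Tenant encrypts the workload; the compute zone decrypts it with the
# programmed session key, then does the same for each data-stream packet.
workload = b"compiled inference graph"
nonce = os.urandom(12)
encrypted_workload = AESGCM(session_key_tenant).encrypt(nonce, workload, None)
decrypted_workload = AESGCM(session_key_accel).decrypt(nonce, encrypted_workload, None)
assert decrypted_workload == workload
```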
-
Publication No.: US20240396711A1
Publication Date: 2024-11-28
Application No.: US18785435
Filing Date: 2024-07-26
Applicant: Intel Corporation
Inventor: Akshay Kadam , Sivakumar B , Lawrence Booth, JR. , Niraj Gupta , Steven Tu , Ricardo Becker , Subba Mungara , Tuyet-Trang Piel , Mitul Shah , Raynald Lim , Mihai Bogdan Bucsa , Cliodhna Ni Scanaill , Roman Zubarev , Dmitry Budnikov , Lingyun Zhu , Yi Qian , Stewart Taylor
Abstract: An accelerator includes a memory, a compute zone to receive an encrypted workload downloaded from a tenant application running in a virtual machine on a host computing system attached to the accelerator, and a processor subsystem to execute a cryptographic key exchange protocol with the tenant application to derive a session key for the compute zone and to program the session key into the compute zone. The compute zone is to decrypt the encrypted workload using the session key, receive an encrypted data stream from the tenant application, decrypt the encrypted data stream using the session key, and process the decrypted data stream by executing the workload to produce metadata.
-
Publication No.: US09928067B2
Publication Date: 2018-03-27
Application No.: US13976359
Filing Date: 2012-09-21
Applicant: Intel Corporation
Inventor: Xueliang Zhong , Jianhui Li , Jian Ping Jane Chen , Gang Wang , Yi Qian , Huifeng Gu
CPC classification number: G06F9/30181 , G06F8/52 , G06F9/30072 , G06F9/3861 , G06F9/4552
Abstract: Systems and methods are provided in example embodiments for performing binary translation. A binary translation system converts, by a translator module, source instructions to target instructions. The binary translation system identifies a condition code block in the source instructions, where the condition code block includes a plurality of condition bits. In response to identifying the condition code block, the binary translation system provides an optimizer module to convert the condition code block. The binary translation system then performs pre-execution of the condition code block to resolve the plurality of condition bits in the condition code block.
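As a toy illustration of the pre-execution idea (not the patented translator), the snippet below interprets a small source block just far enough to resolve its condition bits before emitting target code; the instruction set and flag model are assumptions.

```python
# Toy example: resolve zero/sign condition bits by pre-executing a source
# condition-code block, so the translated code can branch directly instead of
# materialising every condition bit. Opcodes and flags are assumptions.

def pre_execute_flags(block, state):
    """Interpret a block of (op, dst, src) tuples to resolve condition bits."""
    flags = {"ZF": 0, "SF": 0}
    for op, dst, src in block:
        if op == "sub":
            result = state[dst] - state[src]
        elif op == "add":
            result = state[dst] + state[src]
        else:
            raise ValueError(f"unsupported op {op}")
        state[dst] = result
        flags["ZF"] = int(result == 0)
        flags["SF"] = int(result < 0)
    return flags

def translate_conditional(block, state):
    """Emit target instructions, folding the resolved condition into a
    direct jump rather than recomputing the condition bits at runtime."""
    flags = pre_execute_flags(block, dict(state))   # pre-execution pass
    return ["jmp taken" if flags["ZF"] else "jmp not_taken"]

if __name__ == "__main__":
    source_block = [("sub", "r1", "r2")]
    print(translate_conditional(source_block, {"r1": 5, "r2": 5}))  # ['jmp taken']
```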
-
Publication No.: US20250068916A1
Publication Date: 2025-02-27
Application No.: US18725028
Filing Date: 2022-02-21
Applicant: Intel Corporation
Inventor: Yurong Chen , Anbang Yao , Yi Qian , Yu Zhang , Shandong Wang
IPC: G06N3/088
Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed for teacher-free self-feature distillation training of machine-learning (ML) models. An example apparatus includes at least one memory, instructions, and processor circuitry to at least one of execute or instantiate the instructions to perform a first comparison of (i) a first group of a first set of feature channels (FCs) of an ML model and (ii) a second group of the first set, perform a second comparison of (iii) a first group of a second set of FCs of the ML model and one of (iv) a third group of the first set or a first group of a third set of FCs of the ML model, adjust parameter(s) of the ML model based on the first and/or second comparisons, and, in response to an error value satisfying a threshold, deploy the ML model to execute a workload based on the parameter(s).
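A rough sketch of a teacher-free, self-feature-distillation loss in the spirit of the abstract is given below; the group sizes, the mean-squared-error criterion, and the choice of feature sets are assumptions for illustration.

```python
# Sketch only: groups of feature channels from the model's own feature sets
# are compared against each other (no external teacher); the resulting loss
# would drive the parameter updates mentioned in the abstract.
import numpy as np

def group_channels(features, num_groups):
    """Split a (channels, H, W) feature map into channel groups."""
    return np.array_split(features, num_groups, axis=0)

def self_distillation_loss(feature_set_a, feature_set_b, num_groups=3):
    """Compare groups within one feature set and across two feature sets
    produced by the same model."""
    groups_a = group_channels(feature_set_a, num_groups)
    groups_b = group_channels(feature_set_b, num_groups)
    # first comparison: two groups drawn from the same feature set
    intra = np.mean((groups_a[0].mean(axis=0) - groups_a[1].mean(axis=0)) ** 2)
    # second comparison: a group from one set against a group from another set
    inter = np.mean((groups_b[0].mean(axis=0) - groups_a[2].mean(axis=0)) ** 2)
    return intra + inter

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fa = rng.standard_normal((12, 8, 8))   # e.g. features from one layer
    fb = rng.standard_normal((6, 8, 8))    # e.g. features from a deeper layer
    print(self_distillation_loss(fa, fb))
```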
-
Publication No.: US20240135485A1
Publication Date: 2024-04-25
Application No.: US18460044
Filing Date: 2023-09-01
Applicant: Intel Corporation
Inventor: Fan He , Yi Qian , Ning Luo , Yunbiao Lin , Changliang Wang , Ximin Zhang
CPC classification number: G06T1/20 , G06N3/092 , G06T15/005
Abstract: The disclosure relates to tuning configuration parameters of a graphics pipeline for a better user experience. A device for graphics processing comprises hardware engines, a graphics pipeline at least partly implemented by the hardware engines, and a tuner coupled to the hardware engines and the graphics pipeline. The tuner is to: collect statuses of the device during runtime for a previous frame; determine configuration parameters based on the collected statuses, the configuration parameters being associated with three-dimensional (3D) rendering, pre-processing, and video encoding of the graphics pipeline; and tune the graphics pipeline with the determined configuration parameters for processing a next frame.
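An illustrative per-frame tuner loop in the spirit of the abstract is sketched below; the status fields, thresholds, and parameter names are assumptions rather than the disclosed parameter set.

```python
# Sketch only: pick next-frame configuration from the previous frame's status.
# Status fields, thresholds, and parameter names are assumed for illustration.
from dataclasses import dataclass

@dataclass
class Status:                     # collected for the previous frame
    frame_time_ms: float
    gpu_utilization: float        # 0.0 .. 1.0
    encode_bitrate_kbps: int

@dataclass
class PipelineConfig:             # parameters for rendering, pre-processing, encoding
    render_scale: float = 1.0
    denoise_strength: float = 0.5
    target_bitrate_kbps: int = 20000

def tune(status: Status, cfg: PipelineConfig, budget_ms: float = 16.6) -> PipelineConfig:
    """Adjust the pipeline configuration for the next frame."""
    if status.frame_time_ms > budget_ms and status.gpu_utilization > 0.9:
        cfg.render_scale = max(0.5, cfg.render_scale - 0.1)     # ease GPU load
        cfg.denoise_strength = max(0.0, cfg.denoise_strength - 0.1)
    elif status.frame_time_ms < 0.8 * budget_ms:
        cfg.render_scale = min(1.0, cfg.render_scale + 0.05)    # spend headroom
    if status.encode_bitrate_kbps > cfg.target_bitrate_kbps:
        cfg.target_bitrate_kbps = int(cfg.target_bitrate_kbps * 0.9)
    return cfg

if __name__ == "__main__":
    cfg = PipelineConfig()
    cfg = tune(Status(frame_time_ms=19.2, gpu_utilization=0.95,
                      encode_bitrate_kbps=22000), cfg)
    print(cfg)
```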