Optical Cross Apparatus
    11.
    Invention Application

    Publication No.: US20220155528A1

    Publication Date: 2022-05-19

    Application No.: US17587091

    Filing Date: 2022-01-28

    Abstract: An optical cross apparatus including a single-row fiber array and a single-row input multidimensional output optical waveguide element, where the single-row fiber array is coupled to the single-row input multidimensional output optical waveguide element, an arbitrarily curved spatial three-dimensional waveguide is generated inside the single-row input multidimensional output optical waveguide element, and a coupling surface of the single-row fiber array is the same as that of the single-row input multidimensional output optical waveguide element.

    Bulk memory initialization
    14.
    Granted Invention

    Publication No.: US12298906B2

    Publication Date: 2025-05-13

    Application No.: US17902263

    Filing Date: 2022-09-02

    Abstract: The disclosure relates to technology for bulk initialization of memory in a computer system. The computer system comprises a processor core, which includes a load store unit, and a last level cache in communication with the processor core. The last level cache is configured to receive bulk store operations from the load store unit, each of which includes a physical address in the memory to be initialized. For each bulk store operation, the last level cache sends multiple write transactions to the memory to perform a bulk initialization of the memory. The last level cache is also configured to track the status of the bulk store operations.
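
    The mechanism in the abstract can be illustrated with a minimal sketch, assuming a 64-byte cache line and a simple status map; all names, sizes, and the status-tracking shape here are illustrative assumptions, not details from the patent.

```python
CACHE_LINE = 64  # bytes per write transaction (assumed size)

class LastLevelCache:
    """Toy model of a last-level cache that expands one bulk store
    operation into multiple cache-line-sized write transactions."""

    def __init__(self, memory_size):
        self.memory = bytearray(memory_size)  # stand-in for DRAM
        self.status = {}                      # op id -> "pending" / "done"

    def bulk_store(self, op_id, phys_addr, length, value=0):
        """Receive a bulk store op (physical address + length) from the
        load store unit and issue one write per cache line."""
        self.status[op_id] = "pending"
        for line_addr in range(phys_addr, phys_addr + length, CACHE_LINE):
            n = min(CACHE_LINE, phys_addr + length - line_addr)
            self.memory[line_addr:line_addr + n] = bytes([value]) * n
        self.status[op_id] = "done"

llc = LastLevelCache(memory_size=1024)
llc.memory[:] = b"\xff" * 1024          # pretend memory starts dirty
llc.bulk_store(op_id=1, phys_addr=128, length=256)  # zero-initialize
assert llc.status[1] == "done"
assert all(b == 0 for b in llc.memory[128:384])
assert llc.memory[127] == 0xFF and llc.memory[384] == 0xFF  # bounds intact
```

    The single `bulk_store` call above turns into four 64-byte writes, which mirrors the claim that the cache sends multiple write transactions per bulk store operation while tracking its status.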

    BULK MEMORY INITIALIZATION
    15.
    Invention Application

    Publication No.: US20230004493A1

    Publication Date: 2023-01-05

    Application No.: US17902263

    Filing Date: 2022-09-02

    Abstract: The disclosure relates to technology for bulk initialization of memory in a computer system. The computer system comprises a processor core, which includes a load store unit, and a last level cache in communication with the processor core. The last level cache is configured to receive bulk store operations from the load store unit, each of which includes a physical address in the memory to be initialized. For each bulk store operation, the last level cache sends multiple write transactions to the memory to perform a bulk initialization of the memory. The last level cache is also configured to track the status of the bulk store operations.

    Associated plug-in management method, device and system
    16.
    Granted Invention

    Publication No.: US09195480B2

    Publication Date: 2015-11-24

    Application No.: US14522385

    Filing Date: 2014-10-23

    CPC classification number: G06F9/44526

    Abstract: An associated plug-in management method, device, and system are provided. A first associated plug-in and a second component that uses it are determined from description information of the first associated plug-in and information about the second component, both provided by a first component. Based on this information, the first associated plug-in is then installed onto the device on which the second component is located. This decouples the deployment of components related to an associated plug-in.

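
    The deployment flow in the abstract can be sketched as follows; the `PluginManager` class, the dictionary shapes, and all component and device names are hypothetical illustrations, not from the patent.

```python
class PluginManager:
    """Toy manager that installs an associated plug-in onto the device
    on which the component that uses it is located."""

    def __init__(self):
        self.devices = {}  # device id -> set of installed plug-in names

    def register_device(self, device_id):
        self.devices.setdefault(device_id, set())

    def deploy(self, plugin_desc, second_component):
        # Use the second component's location (provided by the first
        # component) to pick the install target, decoupling the plug-in
        # from where the first component itself runs.
        device_id = second_component["device"]
        self.devices[device_id].add(plugin_desc["name"])
        return device_id

mgr = PluginManager()
mgr.register_device("dev-42")
# Description info and component info, as provided by the first component:
plugin = {"name": "log-collector", "version": "1.0"}
component_b = {"name": "billing-service", "device": "dev-42"}
target = mgr.deploy(plugin, component_b)
assert target == "dev-42"
assert "log-collector" in mgr.devices["dev-42"]
```

    The point of the sketch is that the first component only *describes* the plug-in and names the second component; the manager resolves the actual install location, so the two components need not be deployed together.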

    APPARATUS AND METHOD FOR EFFICIENT BRANCH PREDICTION USING MACHINE LEARNING

    Publication No.: US20220091850A1

    Publication Date: 2022-03-24

    Application No.: US17543096

    Filing Date: 2021-12-06

    Abstract: The disclosure relates to branch prediction techniques that can improve the performance of pipelined microprocessors. A microprocessor for branch predictor selection includes a fetch stage configured to retrieve instructions from a memory, a buffer configured to store instructions retrieved by the fetch stage, and one or more pipelined stages configured to execute the instructions stored in the buffer. The branch predictor, communicatively coupled to the buffer and the one or more pipelined stages, is configured to select a branch target predictor from a set of branch target predictors. Each branch target predictor comprises a trained model, associated with a previously executed instruction, that identifies a target branch path for the instruction currently being executed.
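
    As a rough illustration of selecting among trained predictors, here is a minimal sketch; the perceptron-style models, the PC-hash selection rule, and all names are assumptions for illustration, not the patent's actual mechanism.

```python
class TrainedModel:
    """Perceptron-style branch model: a weight per global history bit."""

    def __init__(self, weights):
        self.weights = weights

    def predict(self, history):
        # Dot product of branch history (+1 taken, -1 not taken) with
        # learned weights; non-negative score predicts "taken".
        score = sum(w * h for w, h in zip(self.weights, history))
        return score >= 0

class BranchPredictor:
    """Holds a set of branch target predictors and selects one per
    instruction before making a prediction."""

    def __init__(self, models):
        self.models = models

    def select(self, pc):
        # Illustrative selection rule: hash the program counter.
        return self.models[pc % len(self.models)]

    def predict(self, pc, history):
        return self.select(pc).predict(history)

bp = BranchPredictor([TrainedModel([1, -1, 1]), TrainedModel([-1, 1, 1])])
taken = bp.predict(pc=0x40, history=[1, 1, -1])
assert taken is False  # model 0 scores 1 - 1 - 1 = -1 -> not taken
```

    The key structural idea mirrored here is the two-step flow: first choose which trained model applies to the current instruction, then let that model identify the branch path.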
