1.
Publication No.: US20230316710A1
Publication Date: 2023-10-05
Application No.: US17707612
Application Date: 2022-03-29
Applicant: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventor: SATISH KUMAR MOPUR , Gunalan Perumal Vijayan , Shounak Bandopadhyay , Krishnaprasad Lingadahalli Shastry
IPC: G06V10/762 , G06V10/776 , G06V10/82 , G06N3/04
CPC classification number: G06V10/763 , G06V10/776 , G06V10/82 , G06N3/0454
Abstract: Systems and methods are provided for implementing a Siamese neural network using improved “sub” neural networks and an improved loss function. For example, the system can detect a granular change in images using a Siamese Neural Network with Convolutional Autoencoders as the twin sub-networks (e.g., Siamese AutoEncoder or “SAE”). In some examples, the loss function applied to the SAE network may be an adaptive loss function rather than a contrastive loss function, which can enable smooth control of the granularity of change detection across the images. In some examples, an image separation distance value may be calculated to quantify the change between the image pairs. The image separation distance value may be determined using a Euclidean distance associated with the latent space of the encoder portion of the twin autoencoder networks.
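A minimal sketch of the idea described in this abstract, assuming PyTorch and illustrative layer sizes: two inputs pass through the same convolutional autoencoder (weight-shared twins), and the change score is the Euclidean distance between their latent vectors. The class names, dimensions, and the omission of the adaptive loss are assumptions for illustration, not the patented design.

# Sketch only: Siamese pair of weight-shared convolutional autoencoders;
# change between two images is scored by the Euclidean distance between
# their encoder latent vectors ("image separation distance").
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvAutoencoder(nn.Module):
    def __init__(self, latent_dim=64):
        super().__init__()
        # Encoder: 1x64x64 image -> latent vector (sizes are illustrative)
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, latent_dim),
        )
        # Decoder: latent vector -> reconstructed image
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * 16 * 16),
            nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

class SiameseAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared weights: both images pass through the same twin module.
        self.twin = ConvAutoencoder()

    def forward(self, img_a, img_b):
        z_a, recon_a = self.twin(img_a)
        z_b, recon_b = self.twin(img_b)
        # Euclidean distance in the encoder's latent space.
        separation = F.pairwise_distance(z_a, z_b, p=2)
        return separation, (recon_a, recon_b)

sae = SiameseAutoencoder()
a, b = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
dist, _ = sae(a, b)
print(dist.item())  # larger distance -> more change between the image pair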
2.
Publication No.: US20210295139A1
Publication Date: 2021-09-23
Application No.: US16826552
Application Date: 2020-03-23
Applicant: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventor: Jitendra Onkar Kolhe , Gustavo Knuppe , Shyam Sankar Gopalakrishnan , Vaithyalingam Nagendran , Shounak Bandopadhyay
Abstract: In some examples, a system generates a neural network comprising logical identifiers of compute resources. For executing the neural network, the system maps the logical identifiers to physical addresses of physical resources, and loads instructions of the neural network onto the physical resources, wherein the loading comprises converting the logical identifiers in the neural network to the physical addresses.
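A minimal sketch of the described flow, using hypothetical names and a toy instruction format (none of this is taken from the actual system): the network's instructions reference only logical resource identifiers, which are mapped to physical addresses and converted while the instructions are loaded.

# Sketch only: logical-to-physical resource mapping during instruction loading.
# Network instructions reference logical compute-resource identifiers.
instructions = [
    {"op": "matmul", "resource": "logical:pe0"},
    {"op": "relu",   "resource": "logical:pe1"},
]

def map_logical_to_physical(logical_ids, available_physical):
    """Assign each logical identifier to a physical resource address."""
    return dict(zip(logical_ids, available_physical))

def load(instructions, mapping):
    """Convert logical identifiers to physical addresses while loading."""
    loaded = []
    for instr in instructions:
        loaded.append({**instr, "resource": mapping[instr["resource"]]})
    return loaded

logical_ids = sorted({i["resource"] for i in instructions})
mapping = map_logical_to_physical(logical_ids, ["0x1000", "0x2000"])
print(load(instructions, mapping))
# [{'op': 'matmul', 'resource': '0x1000'}, {'op': 'relu', 'resource': '0x2000'}]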
3.
Publication No.: US12254416B2
Publication Date: 2025-03-18
Application No.: US17229497
Application Date: 2021-04-13
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Jitendra Onkar Kolhe , Soumitra Chatterjee , Vaithyalingam Nagendran , Shounak Bandopadhyay
Abstract: Examples disclosed herein relate to using a compiler for implementing tensor operations in a neural network based computing system. A compiler defines the tensor operations to be implemented. The compiler identifies a binary tensor operation receiving input operands from a first output tensor of a first tensor operation and a second output tensor of a second tensor operation, arriving from two different paths of the convolutional neural network. For the binary tensor operation, the compiler allocates buffer space for a first input operand in the binary tensor operation based on a difference between a count of instances of the first output tensor and a count of instances of the second output tensor.
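One possible reading of the allocation rule, sketched with assumed names and a simplified sizing policy (the base_slots parameter, the skew formula, and the instance counts are illustrative, not the compiler's actual algorithm): where a binary tensor operation joins two branches, the first operand's buffer grows with the difference between the instance counts of the two branches' output tensors, so the earlier-produced tensor can be held until the other branch's output is available.

# Sketch only: size the first operand's buffer from the instance-count skew
# between the two output tensors feeding a binary tensor operation.
def buffer_slots_for_first_operand(count_first, count_second, base_slots=1):
    """Allocate buffer space for the first input operand of a binary tensor op."""
    skew = abs(count_first - count_second)
    return base_slots + skew

# Example: branch A emits 4 instances of its output tensor before the join,
# branch B emits 1; the first operand gets buffer space covering the skew.
print(buffer_slots_for_first_operand(count_first=4, count_second=1))  # 4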
4.
Publication No.: US11556766B2
Publication Date: 2023-01-17
Application No.: US16826552
Application Date: 2020-03-23
Applicant: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventor: Jitendra Onkar Kolhe , Gustavo Knuppe , Shyam Sankar Gopalakrishnan , Vaithyalingam Nagendran , Shounak Bandopadhyay
Abstract: In some examples, a system generates a neural network comprising logical identifiers of compute resources. For executing the neural network, the system maps the logical identifiers to physical addresses of physical resources, and loads instructions of the neural network onto the physical resources, wherein the loading comprises converting the logical identifiers in the neural network to the physical addresses.