SPLASH DETECTION FOR SURFACE SPLASH SCORING
    Invention Application

    Publication Number: WO2022010816A1

    Publication Date: 2022-01-13

    Application Number: PCT/US2021/040400

    Application Date: 2021-07-05

    Applicant: ECTO, INC.

    Abstract: A method of surface splash scoring includes receiving, at an electronic device, a set of camera frames corresponding to images of a water surface. The electronic device processes the set of camera frames with a trained machine learning model to generate one or more quantifications associated with fish activity proximate the water surface. In some embodiments, a surface splash score is computed that represents an appetite level anticipated to be exhibited for a first time period. Subsequently, the electronic device generates an output indicative of the surface splash score.
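
    The abstract above describes a two-stage pipeline: a trained model quantifies fish activity per camera frame, and the per-frame quantifications are aggregated into a single surface splash score. The minimal Python sketch below illustrates only that aggregation step; the function names, the mean-based aggregation and the 0-100 clipping are illustrative assumptions rather than the claimed implementation, and the trained model is represented by a plain callable.

    # Sketch only: quantify_splashes stands in for the trained ML model that
    # maps one camera frame to a measure of fish activity near the surface.
    from typing import Callable, Sequence
    import numpy as np

    def surface_splash_score(
        frames: Sequence[np.ndarray],
        quantify_splashes: Callable[[np.ndarray], float],
    ) -> float:
        """Aggregate per-frame splash quantifications into one score."""
        per_frame = np.array([quantify_splashes(f) for f in frames])
        # Assumed aggregation rule: mean activity, scaled and clipped to 0-100.
        return float(np.clip(per_frame.mean() * 100.0, 0.0, 100.0))

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        fake_frames = [rng.random((64, 64)) for _ in range(30)]
        # Stand-in "model": fraction of bright pixels as a crude activity proxy.
        proxy_model = lambda f: float((f > 0.9).mean())
        print(surface_splash_score(fake_frames, proxy_model))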

    CLASSIFICATION OF IMAGE DATA WITH ADAPTATION OF THE LEVEL OF DETAIL

    Publication Number: WO2021245156A1

    Publication Date: 2021-12-09

    Application Number: PCT/EP2021/064833

    Application Date: 2021-06-02

    Abstract: A device (1) for the classification of image data (2), comprising: a trainable preprocessing unit (11, 11a-11d) configured to retrieve (111), from a trained relationship and on the basis of the image data (2), at least one specification (3) as to the extent to which the level of detail of the image data (2) is to be reduced, and to reduce (112) the level of detail of the image data (2) in accordance with this specification (3); and a trainable classifier (12) configured to map the detail-reduced image data (4) onto an assignment (5) to one or more classes of a predefined classification. Also a method (100) for training the device (1), wherein parameters (11*) that characterize the behavior of the trainable preprocessing unit (11, 11a-11d) are optimized (130) toward the objectives that the device (1) maps training image data (2a) onto training assignments (5a) and that, at the same time, the reduction of the level of detail which the preprocessing unit (11, 11a-11d) applies to the training image data (2a) corresponds, on average, to a specification (3a).
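
    The device couples a learned "how much detail to drop" decision with an ordinary classifier, and training balances classification accuracy against hitting a target average reduction. The PyTorch sketch below illustrates that coupling; the blur-and-blend realisation of detail reduction, the layer sizes and the squared-error penalty on the mean reduction level are assumptions for illustration, not the patented design.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DetailReducingPreprocessor(nn.Module):
        """Predicts, per image, how strongly to reduce detail, then applies it."""
        def __init__(self):
            super().__init__()
            self.predictor = nn.Sequential(
                nn.AdaptiveAvgPool2d(8), nn.Flatten(),
                nn.Linear(3 * 8 * 8, 1), nn.Sigmoid())  # reduction level in [0, 1]

        def forward(self, x):
            level = self.predictor(x)                    # (N, 1)
            # Detail reduction modelled here as blending with a blurred copy.
            blurred = F.interpolate(F.avg_pool2d(x, 4), size=x.shape[-2:],
                                    mode="bilinear", align_corners=False)
            lvl = level.view(-1, 1, 1, 1)
            return (1 - lvl) * x + lvl * blurred, level

    class Classifier(nn.Module):
        def __init__(self, num_classes=10):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, num_classes))

        def forward(self, x):
            return self.net(x)

    def training_loss(pre, clf, images, labels, target_reduction=0.5):
        reduced, level = pre(images)
        logits = clf(reduced)
        # Two objectives from the abstract: classify the training images correctly
        # and keep the average detail reduction close to the given target.
        return F.cross_entropy(logits, labels) + (level.mean() - target_reduction) ** 2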

    POSE ESTIMATION METHOD AND APPARATUS, COMPUTER DEVICE, AND STORAGE MEDIUM

    Publication Number: WO2022237688A1

    Publication Date: 2022-11-17

    Application Number: PCT/CN2022/091484

    Application Date: 2022-05-07

    Inventor: 贾配洋 侯俊

    Abstract: This application relates to a pose estimation method and apparatus, a computer device, and a storage medium. The method includes: acquiring a target image on which pose estimation is to be performed, the target image containing a target object to be processed; performing feature extraction on the target image to obtain first extracted features; performing feature expansion on the first extracted features by means of an image feature expansion network to obtain expanded image features; performing feature extraction on the expanded image features to obtain second extracted features; performing feature compression on the second extracted features by means of an image feature compression network to obtain compressed image features; determining, on the basis of the compressed image features, key point position information corresponding to the target object in the target image; and performing pose estimation on the target object on the basis of the key point position information. The method can improve the efficiency of pose estimation.
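
    The claimed sequence is: extract features, expand them, extract again, compress, read out key points, then estimate the pose. The compact PyTorch sketch below mirrors that sequence; the layers standing in for the expansion and compression networks, the channel counts and the heat-map/argmax key-point readout are illustrative assumptions only.

    import torch
    import torch.nn as nn

    class PoseEstimator(nn.Module):
        def __init__(self, num_keypoints=17):
            super().__init__()
            self.backbone = nn.Sequential(                   # first feature extraction
                nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU())
            self.expand = nn.ConvTranspose2d(128, 128, 4, stride=2, padding=1)  # expansion
            self.refine = nn.Sequential(                     # second feature extraction
                nn.Conv2d(128, 128, 3, padding=1), nn.ReLU())
            self.compress = nn.Conv2d(128, 64, 1)            # feature compression
            self.head = nn.Conv2d(64, num_keypoints, 1)      # per-keypoint heat maps

        def forward(self, image):
            x = self.compress(self.refine(self.expand(self.backbone(image))))
            heatmaps = self.head(x)
            # Key point positions: argmax of each heat map, split into row/col.
            n, k, h, w = heatmaps.shape
            idx = heatmaps.view(n, k, -1).argmax(dim=-1)
            rows = torch.div(idx, w, rounding_mode="floor")
            return torch.stack((rows, idx % w), dim=-1)      # (N, K, 2)

    if __name__ == "__main__":
        print(PoseEstimator()(torch.randn(1, 3, 256, 256)).shape)  # torch.Size([1, 17, 2])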

    A METHOD FOR DEFINING THE CLOTHES TYPE OF AN OCCUPANT OF A VEHICLE

    Publication Number: WO2022268537A1

    Publication Date: 2022-12-29

    Application Number: PCT/EP2022/065906

    Application Date: 2022-06-10

    Abstract: The invention relates to a method (1) for defining the clothes type (t1) of an occupant (o) of a vehicle (2), said method (1) comprising: acquiring (E1) at least one video stream (Vs) of the cabin (20) of said vehicle (2); performing (E2) skin segmentation on images (I) of said at least one video stream (Vs) to classify each image pixel (p) either as a human pixel (p0, p1), i.e. skin (s0) versus non-skin (s1), or as a background pixel (p2); detecting (E3), from said images (I) of said video stream (Vs), body key points (Ki) per occupant (o) within said cabin (20); determining (E4), from said body key points (Ki), body parts (B) of each occupant (o) within said cabin (20), a body part (B) being a connection between at least two body key points (Ki); for each image (I) of the video stream (Vs), calculating (E5) skin (s0) versus non-skin (s1) percentage levels (L) over the body parts (B) of each occupant (o); and, for each occupant (o) of the vehicle (2), defining (E6) the clothes type (t1) based on said skin (s0) versus non-skin (s1) percentage levels (L) of each body part (B) of said occupant (o).
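
    The decisive quantity is the skin-versus-non-skin percentage computed per body part, i.e. per connection between two key points. The Python sketch below shows one way such a percentage could be computed and mapped to a clothes label; the chosen body parts, the line-sampling approach, the 50% threshold and the two labels are hypothetical, not taken from the patent.

    from typing import Dict, Tuple
    import numpy as np

    # Body parts as connections between two key points (illustrative selection).
    PARTS: Dict[str, Tuple[str, str]] = {
        "left_forearm": ("left_elbow", "left_wrist"),
        "right_forearm": ("right_elbow", "right_wrist"),
    }

    def skin_percentage(skin_mask: np.ndarray, p0, p1, samples: int = 50) -> float:
        """Fraction of points sampled along the segment p0 -> p1 that lie on skin."""
        rows = np.linspace(p0[0], p1[0], samples).round().astype(int)
        cols = np.linspace(p0[1], p1[1], samples).round().astype(int)
        rows = rows.clip(0, skin_mask.shape[0] - 1)
        cols = cols.clip(0, skin_mask.shape[1] - 1)
        return float(skin_mask[rows, cols].mean())

    def clothes_type(skin_mask: np.ndarray, keypoints: Dict[str, Tuple[int, int]]) -> str:
        """Coarse clothes label from skin coverage over the defined body parts."""
        levels = [skin_percentage(skin_mask, keypoints[a], keypoints[b])
                  for a, b in PARTS.values()]
        return "short sleeves" if np.mean(levels) > 0.5 else "long sleeves"

    if __name__ == "__main__":
        mask = np.zeros((100, 100), dtype=bool)   # pretend skin-segmentation output
        mask[40:60, :] = True                     # a band of "skin" pixels
        kps = {"left_elbow": (50, 10), "left_wrist": (50, 30),
               "right_elbow": (50, 70), "right_wrist": (50, 90)}
        print(clothes_type(mask, kps))            # -> short sleeves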

    2D/3D TRACKING AND CAMERAS/ANIMATION PLUG-INS

    Publication Number: WO2022212183A1

    Publication Date: 2022-10-06

    Application Number: PCT/US2022/021825

    Application Date: 2022-03-24

    Inventor: ZHANG, Xinyu

    Abstract: Methods and systems for re-mastering animation files used in a video game include identifying a rig used for representing a virtual character of the video game. Virtual markers are applied to the rig to generate a modified rig. Animation files for the virtual character are executed using the modified rig, and a virtual camera is activated to capture images of the animation of the modified rig. Images of the modified rig are used to define performance data. The performance data is applied to a new rig of the virtual character to generate re-mastered animation files. The re-mastered animation files are used in the video game to generate gameplay data.
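
    At its core the described flow records marker positions frame by frame from a marker-augmented rig ("performance data") and then writes those trajectories onto a new rig. The heavily simplified Python sketch below mirrors that data flow with plain data structures; the Rig type, the name-based joint matching and the position-only keyframes are assumptions, and a real pipeline would run inside the game engine rather than over dictionaries.

    from dataclasses import dataclass, field
    from typing import Dict, List, Tuple

    Vec3 = Tuple[float, float, float]

    @dataclass
    class Rig:
        joints: Dict[str, Vec3]                  # joint/marker name -> rest position
        keyframes: Dict[str, List[Vec3]] = field(default_factory=dict)

    def capture_performance(modified_rig: Rig, animation: List[Dict[str, Vec3]]):
        """Record, per frame, the positions of the markers present on the rig
        (stand-in for the virtual-camera capture of the modified rig)."""
        return [{name: pos for name, pos in frame.items() if name in modified_rig.joints}
                for frame in animation]

    def retarget(performance: List[Dict[str, Vec3]], new_rig: Rig) -> Rig:
        """Apply captured performance data to a new rig by matching joint names."""
        for frame in performance:
            for name, pos in frame.items():
                if name in new_rig.joints:
                    new_rig.keyframes.setdefault(name, []).append(pos)
        return new_rig

    if __name__ == "__main__":
        old = Rig(joints={"hip": (0, 0, 0), "knee": (0, -1, 0)})
        frames = [{"hip": (0, 0, 0), "knee": (0, -1, 0.1 * t)} for t in range(3)]
        new = retarget(capture_performance(old, frames),
                       Rig(joints={"hip": (0, 0, 0), "knee": (0, -1, 0)}))
        print(new.keyframes["knee"])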

    AUTONOMOUS LIVESTOCK MONITORING
    Invention Application

    Publication Number: WO2022002443A1

    Publication Date: 2022-01-06

    Application Number: PCT/EP2021/051991

    Application Date: 2021-01-28

    Applicant: CATTLE EYE LTD

    Abstract: The present invention provides a method and a system for monitoring the mobility levels of individual farm animals and accordingly determining their corresponding mobility score. The mobility score may be indicative of the health and/or welfare status of the animals. The present invention processes a 2D video recording obtained from an imaging device to detect the movement of individual animals through a space. The video recording is segmented into a set of individual frames, and in each frame the individual instances of the animals appearing in the video frame are detected. The detected instances of each animal over a number of frames are grouped together. From each detected instance of an individual animal, a set of reference points is extracted. The reference points are associated with locations on the animal's body. The present invention determines the mobility score of each animal by monitoring the relative position between reference points in each frame and the relative position of each reference point across the set of individual frames associated with an animal.
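
    The score is driven by two observations per tracked animal: how the reference points sit relative to one another within a frame, and how each reference point moves across frames. The numpy sketch below combines both into a single number; the particular statistics and the way they are combined are illustrative assumptions, not CattleEye's scoring model.

    import numpy as np

    def mobility_score(points: np.ndarray) -> float:
        """points: tracked reference points for one animal, shape (n_frames, n_points, 2)."""
        # (a) within-frame geometry: mean pairwise distance between reference
        #     points, summarising the animal's posture in each frame.
        diffs = points[:, :, None, :] - points[:, None, :, :]
        posture = np.linalg.norm(diffs, axis=-1).mean(axis=(1, 2))
        # (b) across-frame motion: average frame-to-frame displacement per point.
        displacement = np.linalg.norm(np.diff(points, axis=0), axis=-1).mean()
        # Combine posture variability and motion into one number; a real system
        # would map this onto a calibrated mobility scale.
        return float(posture.std() + displacement)

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        track = rng.random((120, 8, 2)) * 100     # 120 frames, 8 reference points
        print(round(mobility_score(track), 2))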
