    31. COLORIZING X-RAY IMAGES
    Invention Publication

    Publication No.: US20230186529A1

    Publication Date: 2023-06-15

    Application No.: US17548169

    Application Date: 2021-12-10

    Abstract: Implementations are described herein for colorizing an X-ray image and predicting one or more phenotypic traits about a plant based on the colorized X-ray image. In various implementations, an X-ray image that depicts a plant with a canopy of the plant partially occluding a part-of-interest is obtained, where the part-of-interest is visible through the canopy in the X-ray image. The X-ray image is colorized to predict one or more phenotypic traits of the part-of-interest. The colorization includes processing the X-ray image based on a machine learning model to generate a colorized version of the X-ray image, and predicting the one or more phenotypic traits based on one or more visual features of the colorized version of the X-ray image.
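
    The following is an illustrative sketch of the pipeline this abstract describes, not the patent's actual implementation: the trained colorization network is stood in for by a hypothetical colorize_model function (a fixed grayscale-to-RGB mapping), and the trait prediction is a simple color heuristic over an assumed part-of-interest mask.

        # Hedged sketch in Python/NumPy; colorize_model and the maturity rule
        # are stand-ins for the learned models described in the abstract.
        import numpy as np

        def colorize_model(xray: np.ndarray) -> np.ndarray:
            """Stand-in for a trained image-to-image colorization model.
            Takes a single-channel X-ray (H, W) in [0, 1], returns RGB (H, W, 3)."""
            r = xray
            g = 0.8 * xray
            b = 0.6 * (1.0 - xray)
            return np.stack([r, g, b], axis=-1)

        def predict_traits(xray: np.ndarray, part_mask: np.ndarray) -> dict:
            """Colorize the X-ray, then predict traits from visual features of the
            part-of-interest (the region visible through the canopy)."""
            colorized = colorize_model(xray)
            part_pixels = colorized[part_mask > 0]
            mean_rgb = part_pixels.mean(axis=0)
            # Hypothetical rule: redder parts read as more mature.
            maturity = float(mean_rgb[0] / (mean_rgb[0] + mean_rgb[1] + 1e-6))
            return {"mean_rgb": mean_rgb.tolist(), "estimated_maturity": maturity}

        # Toy data standing in for a real capture and a segmentation mask.
        xray = np.random.rand(256, 256)
        mask = np.zeros((256, 256), dtype=np.uint8)
        mask[100:140, 100:160] = 1
        print(predict_traits(xray, mask))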

    32. ADAPTIVELY REALLOCATING RESOURCES OF RESOURCE-CONSTRAINED DEVICES

    Publication No.: US20230102495A1

    Publication Date: 2023-03-30

    Application No.: US17485903

    Application Date: 2021-09-27

    Abstract: Implementations are disclosed for adaptively reallocating computing resources of resource-constrained devices between tasks performed in situ by those resource-constrained devices. In various implementations, while the resource-constrained device is transported through an agricultural area, computing resource usage of the resource-constrained device may be monitored. Additionally, phenotypic output generated by one or more phenotypic tasks performed onboard the resource-constrained device may be monitored. Based on the monitored computing resource usage and the monitored phenotypic output, a state may be generated and processed based on a policy model to generate a probability distribution over a plurality of candidate reallocation actions. Based on the probability distribution, candidate reallocation action(s) may be selected and performed to reallocate at least some computing resources between a first phenotypic task of the one or more phenotypic tasks and a different task while the resource-constrained device is transported through the agricultural area.
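
    As a minimal sketch of the decision step in this abstract, assuming the policy model reduces to a linear map plus softmax over a hypothetical list of candidate reallocation actions (the state features and action names below are illustrative, not from the patent):

        # State from monitored resource usage and phenotypic output -> policy
        # -> probability distribution over candidate reallocation actions.
        import numpy as np

        candidate_actions = [
            "no_change",
            "shift_cores_from_weed_detection_to_fruit_counting",
            "shift_cores_from_fruit_counting_to_navigation",
            "reduce_camera_resolution",
        ]

        def build_state(cpu_load, mem_used, frames_per_sec, detections_per_frame):
            return np.array([cpu_load, mem_used, frames_per_sec, detections_per_frame])

        def policy(state, weights):
            """Map the state to action probabilities (softmax over linear scores)."""
            logits = weights @ state
            exp = np.exp(logits - logits.max())
            return exp / exp.sum()

        rng = np.random.default_rng(0)
        weights = rng.normal(size=(len(candidate_actions), 4))  # stand-in policy model
        state = build_state(cpu_load=0.92, mem_used=0.75,
                            frames_per_sec=3.0, detections_per_frame=14)
        probs = policy(state, weights)
        action = rng.choice(candidate_actions, p=probs)  # sample a reallocation action
        print(dict(zip(candidate_actions, probs.round(3))), "->", action)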

    33. Using empirical evidence to generate synthetic training data for plant detection

    Publication No.: US11544920B2

    Publication Date: 2023-01-03

    Application No.: US17463360

    Application Date: 2021-08-31

    Abstract: Implementations are described herein for automatically generating synthetic training images that are usable as training data for training machine learning models to detect, segment, and/or classify various types of plants in digital images. In various implementations, a digital image may be obtained that captures an area. The digital image may depict the area under a lighting condition that existed in the area when a camera captured the digital image. Based at least in part on an agricultural history of the area, a plurality of three-dimensional synthetic plants may be generated. A synthetic training image may then be generated to depict the plurality of three-dimensional synthetic plants in the area. In some implementations, the generating may include graphically incorporating the plurality of three-dimensional synthetic plants with the digital image based on the lighting condition.
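
    A hedged sketch of the compositing step only, assuming the synthetic plants have already been rendered to RGBA sprites from 3D models; the brightness matching below is one simple way to respect the captured image's lighting condition, not necessarily the patent's:

        import numpy as np

        def composite(field_img, plant_sprite, top_left):
            """Alpha-blend an RGBA plant sprite onto the captured field image,
            scaling the sprite's brightness toward the scene's overall lighting."""
            scene_brightness = field_img.mean() / 255.0
            y, x = top_left
            h, w = plant_sprite.shape[:2]
            rgb = plant_sprite[..., :3].astype(float) * scene_brightness
            alpha = plant_sprite[..., 3:4].astype(float) / 255.0
            patch = field_img[y:y+h, x:x+w].astype(float)
            field_img[y:y+h, x:x+w] = (alpha * rgb + (1 - alpha) * patch).astype(np.uint8)
            return field_img

        # Random stand-ins for a captured image (dim scene) and one rendered plant.
        field = np.random.randint(60, 120, (480, 640, 3), dtype=np.uint8)
        sprite = np.random.randint(0, 255, (64, 64, 4), dtype=np.uint8)
        synthetic_training_image = composite(field, sprite, top_left=(200, 300))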

    34. LOCALIZATION OF INDIVIDUAL PLANTS BASED ON HIGH-ELEVATION IMAGERY

    Publication No.: US20220398415A1

    Publication Date: 2022-12-15

    Application No.: US17344328

    Application Date: 2021-06-10

    Inventor: Zhiqiang Yuan

    Abstract: Implementations are described herein for localizing individual plants by aligning high-elevation images using invariant anchor points while disregarding variant feature points, such as deformable plants. High-elevation images that capture the plurality of plants at a resolution at which wind-triggered deformation of individual plants is perceptible between the high-elevation images may be obtained. First regions of the high-elevation images that depict the plurality of plants may be classified as variant features that are unusable as invariant anchor points. Second regions of the high-elevation images that are disjoint from the first set of regions may be classified as invariant anchor points. The high-elevation images may be aligned based on invariant anchor point(s) that are common among at least some of the high-elevation images. Based on the aligned high-elevation images, individual plant(s) may be localized within one of the high-elevation images for performance of one or more agricultural tasks.
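
    One plausible realization of that alignment, sketched with OpenCV's ORB features (the patent does not name a specific detector): keypoints are only detected outside the plant masks, so the homography is estimated from invariant anchor points rather than wind-deformed plants.

        import cv2
        import numpy as np

        def align_images(img_a, img_b, plant_mask_a, plant_mask_b):
            """Warp img_b onto img_a using keypoints found outside the regions
            classified as plants (variant features). plant_mask_* are uint8
            masks where 255 marks plant pixels."""
            gray_a = cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY)
            gray_b = cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY)
            anchor_a = cv2.bitwise_not(plant_mask_a)  # invariant anchor regions only
            anchor_b = cv2.bitwise_not(plant_mask_b)

            orb = cv2.ORB_create(nfeatures=2000)
            kp_a, des_a = orb.detectAndCompute(gray_a, anchor_a)
            kp_b, des_b = orb.detectAndCompute(gray_b, anchor_b)

            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            matches = sorted(matcher.match(des_b, des_a), key=lambda m: m.distance)[:200]

            pts_b = np.float32([kp_b[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
            pts_a = np.float32([kp_a[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
            H, _ = cv2.findHomography(pts_b, pts_a, cv2.RANSAC, 5.0)

            h, w = img_a.shape[:2]
            return cv2.warpPerspective(img_b, H, (w, h))

        # Usage: aligned = align_images(image_t0, image_t1, mask_t0, mask_t1),
        # after which individual plants can be localized in a common frame.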

    35. GENERATING LABELED SYNTHETIC IMAGES TO TRAIN MACHINE LEARNING MODELS

    Publication No.: US20220391752A1

    Publication Date: 2022-12-08

    Application No.: US17342196

    Application Date: 2021-06-08

    Abstract: Implementations are described herein for automatically generating labeled synthetic images that are usable as training data for training machine learning models to make an agricultural prediction based on digital images. A method includes: generating a plurality of simulated images, each simulated image depicting one or more simulated instances of a plant; for each of the plurality of simulated images, labeling the simulated image with at least one ground truth label that identifies an attribute of the one or more simulated instances of the plant depicted in the simulated image, the attribute describing both a visible portion and an occluded portion of the one or more simulated instances of the plant depicted in the simulated image; and training a machine learning model to make an agricultural prediction using the labeled plurality of simulated images.
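
    A small sketch of the labeling idea, under the assumption that the simulator can report per-plant ground truth for parts hidden by occlusion; the class and attribute names (SimulatedPlant, pod counts) are illustrative only:

        from dataclasses import dataclass
        from typing import List

        @dataclass
        class SimulatedPlant:
            visible_pod_count: int   # pods that appear in the rendered image
            occluded_pod_count: int  # pods hidden behind leaves or other plants

        @dataclass
        class LabeledSyntheticImage:
            pixels: bytes                  # rendered image (placeholder type)
            plants: List[SimulatedPlant]

            @property
            def ground_truth_pod_count(self) -> int:
                """The label covers both the visible and the occluded portions."""
                return sum(p.visible_pod_count + p.occluded_pod_count
                           for p in self.plants)

        example = LabeledSyntheticImage(
            pixels=b"",  # rendered output would go here
            plants=[SimulatedPlant(12, 5), SimulatedPlant(9, 8)],
        )
        print(example.ground_truth_pod_count)  # 34, the training target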

    36. COORDINATING AGRICULTURAL ROBOTS
    Invention Application

    Publication No.: US20220219329A1

    Publication Date: 2022-07-14

    Application No.: US17683696

    Application Date: 2022-03-01

    Abstract: Implementations are described herein for coordinating semi-autonomous robots to perform agricultural tasks on a plurality of plants with minimal human intervention. In various implementations, a plurality of robots may be deployed to perform a respective plurality of agricultural tasks. Each agricultural task may be associated with a respective plant of a plurality of plants, and each plant may have been previously designated as a target for one of the agricultural tasks. It may be determined that a given robot has reached an individual plant associated with the respective agricultural task that was assigned to the given robot. Based at least in part on that determination, a manual control interface may be provided at output component(s) of a computing device in network communication with the given robot. The manual control interface may be operable to manually control the given robot to perform the respective agricultural task.
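
    A toy coordination loop illustrating the hand-off described above; the robot and plant identifiers, and the notify_operator hook standing in for the manual control interface, are assumptions for the sketch:

        from dataclasses import dataclass
        from typing import List, Optional

        @dataclass
        class Robot:
            robot_id: str
            assigned_plant: Optional[str] = None
            at_plant: bool = False

        def notify_operator(robot_id: str, plant_id: str) -> None:
            """Stand-in for surfacing a manual control interface on a computing
            device in network communication with the robot."""
            print(f"Manual control available: robot {robot_id} at plant {plant_id}")

        def coordinate(robots: List[Robot], target_plants: List[str]) -> None:
            # Assign each previously designated target plant to a robot.
            for robot, plant in zip(robots, target_plants):
                robot.assigned_plant = plant
            # When a robot reports reaching its plant, hand control to a human.
            for robot in robots:
                robot.at_plant = True  # would come from robot telemetry in practice
                if robot.at_plant and robot.assigned_plant:
                    notify_operator(robot.robot_id, robot.assigned_plant)

        coordinate([Robot("r1"), Robot("r2")], ["plant-017", "plant-042"])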

    37. INFERRING MOISTURE FROM COLOR
    Invention Application

    Publication No.: US20220036070A1

    Publication Date: 2022-02-03

    Application No.: US16943247

    Application Date: 2020-07-30

    Abstract: Techniques are described herein for using artificial intelligence to predict crop yields based on observational crop data. A method includes: obtaining a first digital image of at least one plant; segmenting the first digital image of the at least one plant to identify at least one seedpod in the first digital image; for each of the at least one seedpod in the first digital image: determining a color of the seedpod; determining a number of seeds in the seedpod; inferring, using one or more machine learning models, a moisture content of the seedpod based on the color of the seedpod; and estimating, based on the moisture content of the seedpod and the number of seeds in the seedpod, a weight of the seedpod; and predicting a crop yield based on the moisture content and the weight of each of the at least one seedpod.
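
    A hedged sketch of the per-seedpod arithmetic; the color-to-moisture mapping and the weight formula below are illustrative stand-ins for the trained models, not the patent's actual equations:

        import numpy as np

        def infer_moisture(mean_rgb):
            """Stand-in for the learned color-to-moisture model: greener pods
            are treated as holding more moisture (fraction in [0, 0.9])."""
            r, g, b = mean_rgb
            return float(np.clip(0.2 + 0.6 * (g / 255.0), 0.0, 0.9))

        def estimate_pod_weight(moisture, seed_count, dry_seed_grams=0.18):
            """Illustrative model: wet weight from dry seed mass plus water content."""
            dry = seed_count * dry_seed_grams
            return dry / (1.0 - moisture)

        def predict_yield(seedpods):
            """seedpods: list of (mean_rgb, seed_count) from the segmentation step."""
            return sum(estimate_pod_weight(infer_moisture(rgb), n)
                       for rgb, n in seedpods)

        pods = [((90, 180, 60), 4), ((150, 140, 70), 3), ((60, 200, 50), 5)]
        print(f"predicted yield: {predict_yield(pods):.1f} g")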

    38. Using empirical evidence to generate synthetic training data for plant detection

    Publication No.: US11113525B1

    Publication Date: 2021-09-07

    Application No.: US16877138

    Application Date: 2020-05-18

    Abstract: Implementations are described herein for automatically generating synthetic training images that are usable as training data for training machine learning models to detect, segment, and/or classify various types of plants in digital images. In various implementations, a digital image may be obtained that captures an area. The digital image may depict the area under a lighting condition that existed in the area when a camera captured the digital image. Based at least in part on an agricultural history of the area, a plurality of three-dimensional synthetic plants may be generated. A synthetic training image may then be generated to depict the plurality of three-dimensional synthetic plants in the area. In some implementations, the generating may include graphically incorporating the plurality of three-dimensional synthetic plants with the digital image based on the lighting condition.
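
    Complementing the compositing sketch after the earlier copy of this abstract (US11544920B2), here is an illustrative take on the "agricultural history" step: a hypothetical history record drives the parameters of each synthetic 3D plant (species, growth stage, size). All names and lookup values are assumptions, not the patent's.

        import random

        GROWTH_STAGE_BY_WEEKS = [(2, "seedling"), (6, "vegetative"), (12, "flowering")]

        def synth_plant_params(crop: str, weeks_since_planting: int,
                               rng: random.Random) -> dict:
            """Choose species, growth stage and a jittered size for one synthetic
            plant, conditioned on the area's recorded agricultural history."""
            stage = next((s for w, s in GROWTH_STAGE_BY_WEEKS
                          if weeks_since_planting <= w), "mature")
            return {
                "species": crop,
                "growth_stage": stage,
                "height_cm": rng.uniform(5, 15) * (1 + weeks_since_planting / 4),
                "leaf_count": rng.randint(4, 10) + 2 * weeks_since_planting,
            }

        rng = random.Random(0)
        history = {"crop": "soybean", "weeks_since_planting": 5}
        plants = [synth_plant_params(history["crop"],
                                     history["weeks_since_planting"], rng)
                  for _ in range(20)]  # parameters for plants to render and composite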

    39. TRANSLATING BETWEEN PROGRAMMING LANGUAGES USING MACHINE LEARNING

    Publication No.: US20210011694A1

    Publication Date: 2021-01-14

    Application No.: US16506161

    Application Date: 2019-07-09

    Abstract: Techniques are described herein for translating source code in one programming language to source code in another programming language using machine learning. In various implementations, one or more components of one or more generative adversarial networks, such as a generator machine learning model, may be trained to generate “synthetically-naturalistic” source code that can be used as a translation of source code in an unfamiliar language. In some implementations, a discriminator machine learning model may be employed to aid in training the generator machine learning model, e.g., by being trained to discriminate between human-generated (“genuine”) and machine-generated (“synthetic”) source code.
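
    A compact sketch of the adversarial setup this abstract outlines, written in PyTorch with toy byte-level "token" data; the architectures, the Gumbel-softmax workaround for discrete outputs, and the random corpora are assumptions made so the loop runs end to end, not the patent's design:

        import torch
        import torch.nn as nn

        VOCAB, EMB, HID, SEQ, BATCH = 128, 32, 64, 40, 8

        class Generator(nn.Module):
            """Maps a source-language snippet to target-language token logits."""
            def __init__(self):
                super().__init__()
                self.emb = nn.Embedding(VOCAB, EMB)
                self.rnn = nn.GRU(EMB, HID, batch_first=True)
                self.out = nn.Linear(HID, VOCAB)
            def forward(self, src):
                h, _ = self.rnn(self.emb(src))
                return self.out(h)  # (batch, seq, vocab)

        class Discriminator(nn.Module):
            """Scores whether a snippet looks human-written (1) or synthetic (0)."""
            def __init__(self):
                super().__init__()
                self.emb = nn.Embedding(VOCAB, EMB)
                self.rnn = nn.GRU(EMB, HID, batch_first=True)
                self.out = nn.Linear(HID, 1)
            def forward(self, tokens):
                _, h = self.rnn(self.emb(tokens))
                return self.out(h[-1]).squeeze(-1)  # (batch,)

        G, D = Generator(), Discriminator()
        opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
        opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
        bce = nn.BCEWithLogitsLoss()

        # Toy stand-ins for source snippets and human-written target snippets.
        src = torch.randint(0, VOCAB, (BATCH, SEQ))
        real_tgt = torch.randint(0, VOCAB, (BATCH, SEQ))

        for step in range(100):
            # Discriminator: human-written target code vs. generator output.
            with torch.no_grad():
                fake_tgt = G(src).argmax(-1)
            d_loss = (bce(D(real_tgt), torch.ones(BATCH)) +
                      bce(D(fake_tgt), torch.zeros(BATCH)))
            opt_d.zero_grad(); d_loss.backward(); opt_d.step()

            # Generator: make its output look human-written to the discriminator.
            # argmax is non-differentiable, so pass a straight-through Gumbel-softmax
            # sample through the discriminator's embedding instead.
            soft = torch.nn.functional.gumbel_softmax(G(src), tau=1.0, hard=True)
            _, h = D.rnn(soft @ D.emb.weight)
            g_loss = bce(D.out(h[-1]).squeeze(-1), torch.ones(BATCH))
            opt_g.zero_grad(); g_loss.backward(); opt_g.step()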
