Abstract:
A system includes a carriage track positioned adjacent to a rotary milking platform, a robot carriage mounted to the carriage track, and a controller. The controller is operable to receive both a first rotary encoder signal indicating a first rotational position of a milking stall of the rotary milking platform (corresponding to a starting linear position of the robot carriage) and a second rotary encoder signal indicating a second rotational position of the milking stall. The controller is further operable to determine, based on a difference between the first and second signals, a desired linear position of the robot carriage on the carriage track (a position corresponding to the second rotational position of the milking stall). The controller is further operable to communicate a position signal to a carriage actuator, the position signal causing the carriage actuator to move the robot carriage to the desired linear position.
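The mapping from encoder readings to a carriage position can be illustrated with a minimal sketch. It assumes the carriage track runs parallel to the stall's circular path, so the linear travel equals the arc length swept between the two encoder readings; the function and parameter names are illustrative, not from the source.

```python
import math

def desired_linear_position(start_linear_pos, first_angle_deg,
                            second_angle_deg, platform_radius):
    """Map a change in a milking stall's rotational position to a
    linear carriage position along the track.

    Assumes the track parallels the stall's arc, so the carriage
    must travel the arc length swept between the first and second
    rotary encoder readings (angles in degrees, radius and
    positions in meters)."""
    delta_deg = second_angle_deg - first_angle_deg
    arc_length = math.radians(delta_deg) * platform_radius
    return start_linear_pos + arc_length
```

The controller would send this value to the carriage actuator as the position signal.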
Abstract:
In certain embodiments, a system includes a three-dimensional camera and a processor communicatively coupled to the three-dimensional camera. The processor is operable to determine a first hind location of a first hind leg of a dairy livestock based at least in part on visual data captured by the three-dimensional camera and determine a second hind location of a second hind leg of the dairy livestock based at least in part on the visual data. The processor is further operable to determine a measurement, wherein the measurement is the distance between the first hind location and the second hind location. Additionally, the processor is operable to determine whether the measurement exceeds a minimum threshold.
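The distance check can be sketched as follows, treating each hind location as an (x, y, z) coordinate from the 3D camera; the function name and the idea that the threshold guards clearance for equipment between the legs are illustrative assumptions.

```python
import math

def hind_leg_spacing_ok(first_hind, second_hind, minimum_threshold):
    """Return True when the Euclidean distance between the two
    hind-leg locations (x, y, z tuples from the 3D camera)
    exceeds the minimum threshold."""
    return math.dist(first_hind, second_hind) > minimum_threshold
```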
Abstract:
In certain embodiments, a system includes a controller operable to access an image signal generated by a camera. The accessed image signal corresponds to one or more features of the rear of a dairy livestock. The controller is further operable to determine positions of each of the hind legs of the dairy livestock based on the accessed image signal. The controller is further operable to determine a position of an udder of the dairy livestock based on the accessed image signal and the determined positions of the hind legs of the dairy livestock. The controller is further operable to determine, based on the image signal and the determined position of the udder of the dairy livestock, a spray position from which a spray tool may apply disinfectant to the teats of the dairy livestock.
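One way the udder and spray positions could be derived from the hind-leg positions is sketched below. The midpoint-plus-offset heuristic and all offset values are hypothetical illustrations, not values from the source.

```python
def estimate_spray_position(left_leg, right_leg,
                            udder_offset=(0.0, 0.25, -0.1),
                            standoff=(0.0, -0.3, 0.0)):
    """Hypothetical heuristic: place the udder estimate at the
    midpoint between the detected hind legs plus a fixed
    forward/upward offset, then back the spray tool off from the
    udder by a fixed standoff. Coordinates are (x, y, z) in
    meters; both offsets are illustrative assumptions."""
    midpoint = tuple((a + b) / 2 for a, b in zip(left_leg, right_leg))
    udder = tuple(m + o for m, o in zip(midpoint, udder_offset))
    spray = tuple(u + s for u, s in zip(udder, standoff))
    return udder, spray
```

In practice the controller refines such an estimate against the image signal rather than relying on fixed offsets.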
Abstract:
A defrost bypass dehumidifier includes an air flow path with first, second, and third segments in series from upstream to downstream, passing ambient air to an evaporator coil, then to a condenser coil, and then discharging the air. The air flow path has a bypass segment passing ambient air to the evaporator coil in parallel with the noted first air flow path segment.

Abstract:
A footbath system for livestock includes a water and/or chemical containment tank, a footbath pan with a drainage exit door, a non-turbulent flow path, and a multiple-branch system.
Abstract:
A leg detection system comprising: a robotic arm comprising a gripping portion for holding a teat cup for attaching to a teat of a dairy livestock; an imaging system coupled to the robotic arm and configured to capture a first three-dimensional (3D) image of a rearview of the dairy livestock in a stall, the imaging system comprising a 3D camera or a laser, wherein each pixel of the first 3D image is associated with a depth value; one or more memory devices configured to store a reference 3D image of the stall without any dairy livestock; and a processor communicatively coupled to the imaging system and the one or more memory devices, the processor configured to: access the first 3D image and the reference 3D image; subtract the first 3D image from the reference 3D image to produce a second 3D image; perform morphological image processing on the second 3D image to produce a third 3D image; perform image thresholding on the third 3D image to produce a fourth 3D image; cluster data from the fourth 3D image; identify, using the clustered data from the fourth 3D image, one or more legs of the dairy livestock; and provide instructions for movements of the robotic arm to avoid the identified one or more legs while attaching the teat cup to the teat of the dairy livestock.
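The claimed subtract/morphology/threshold/cluster pipeline can be sketched in plain NumPy. For simplicity the sketch thresholds first and then applies a binary 3x3 opening, which for a flat structuring element is equivalent to the claimed grey-level morphology followed by thresholding; all parameter values are illustrative assumptions.

```python
import numpy as np

def find_leg_clusters(depth_image, reference_image,
                      depth_delta=0.05, min_pixels=4):
    """Sketch of the claimed pipeline: subtract the live 3D image
    from the empty-stall reference, threshold the difference into
    a binary foreground mask, clean it with a 3x3 morphological
    opening, then cluster the remaining pixels into connected
    components (candidate legs)."""
    diff = reference_image - depth_image   # second image: foreground is closer
    mask = diff > depth_delta              # image thresholding
    mask = _dilate(_erode(mask))           # opening removes speckle noise
    return [c for c in _connected_components(mask) if len(c) >= min_pixels]

def _shifted(mask, fill):
    """Yield the nine 3x3-neighborhood views of a padded mask."""
    p = np.pad(mask, 1, constant_values=fill)
    h, w = mask.shape
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            yield p[dy:dy + h, dx:dx + w]

def _erode(mask):
    out = np.ones(mask.shape, dtype=bool)
    for view in _shifted(mask, False):
        out &= view
    return out

def _dilate(mask):
    out = np.zeros(mask.shape, dtype=bool)
    for view in _shifted(mask, False):
        out |= view
    return out

def _connected_components(mask):
    """8-connected flood fill; returns lists of (row, col) pixels."""
    seen = np.zeros(mask.shape, dtype=bool)
    comps = []
    for y, x in np.argwhere(mask):
        if seen[y, x]:
            continue
        stack, comp = [(y, x)], []
        seen[y, x] = True
        while stack:
            cy, cx = stack.pop()
            comp.append((cy, cx))
            for ny in range(cy - 1, cy + 2):
                for nx in range(cx - 1, cx + 2):
                    if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                            and mask[ny, nx] and not seen[ny, nx]):
                        seen[ny, nx] = True
                        stack.append((ny, nx))
        comps.append(comp)
    return comps
```

Each returned cluster is a candidate leg; the robot arm's motion planner would then avoid the pixel regions the clusters occupy.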
Abstract:
A method of desulfurizing a liquid hydrocarbon having the steps of: adding a liquid hydrocarbon to a vessel, the hydrocarbon having a sulfur content; adding a catalyst and an oxidizer to create a mixture; oxidizing at least some of the sulfur content of the liquid hydrocarbon to form oxidized sulfur in the liquid hydrocarbon; separating the liquid hydrocarbon from the mixture; and removing at least some of the oxidized sulfur from the liquid hydrocarbon. Such methods can be carried out by batch or continuously. Systems for undertaking such methods are likewise disclosed.
Abstract:
A system that includes a three-dimensional (3D) camera configured to capture a 3D image of a rearview of a dairy livestock in a stall and a processor. The processor is configured to obtain the 3D image, identify one or more regions within the 3D image comprising depth values greater than a depth value threshold, and identify a thigh gap region from the one or more regions. The processor is further configured to demarcate an access region within the thigh gap region and demarcate a tail detection region. The processor is further configured to identify one or more tail candidates within the tail detection region, to identify a tail candidate that corresponds with a tail model as the tail, and to determine position information for the tail.
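The thigh-gap identification step can be illustrated with a simplified one-dimensional sketch: in a single row of the rearview depth image, columns whose depth exceeds the threshold look *through* the animal (e.g. between the hind legs), and the widest contiguous run of such columns is taken as the thigh gap. Reducing the problem to one row and picking the widest run are illustrative assumptions.

```python
def find_thigh_gap(depth_row, depth_threshold):
    """Hypothetical 1D sketch: find contiguous runs of columns in
    one depth-image row whose depth exceeds the threshold, and
    return the widest run as (start_col, end_col), or None if no
    column exceeds the threshold."""
    runs, start = [], None
    for i, d in enumerate(depth_row):
        if d > depth_threshold and start is None:
            start = i                       # run begins
        elif d <= depth_threshold and start is not None:
            runs.append((start, i - 1))     # run ends
            start = None
    if start is not None:
        runs.append((start, len(depth_row) - 1))
    return max(runs, key=lambda r: r[1] - r[0], default=None)
```

The access region and tail detection region would then be demarcated relative to the returned column span.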
Abstract:
A system comprising a robotic arm, a plurality of grabbers, a sensor, and a preparation cup. The robotic arm has a first end and a recessed portion. The grabbers are coupled to the robotic arm at the first end. The sensor is positioned inside the recessed portion of the robotic arm at a first distance from the first end and at a first angle. The preparation cup is coupled to wings having a body portion, a first extended portion, and a second extended portion. The body portion is coupled to a portion of the preparation cup; the first extended portion extends in a first direction, and the second extended portion extends in a second direction. The wings are operable to be magnetically coupled to the plurality of grabbers via the first and second extended portions.
Abstract:
A robotic arm maneuvers a teat preparation cup and executes instructions from a robotic arm controller. The controller comprises an interface, a memory, and a processor. The processor instructs the sensor to perform a first scan. If the first scan discovers a first set of teats, the processor moves the robotic arm a first distance and instructs the sensor to perform a second scan. If the second scan discovers a second set of teats, the processor moves the robotic arm to a location under the first teat, and instructs the sensor to perform a third scan. The processor determines if the third scan discovers a third set of teats. If each of the first set, second set, and third set of discovered teats comprises the first teat, the processor instructs the robotic arm to attach the preparation cup to the first teat.
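The three-scan confirmation logic can be sketched as a small control-flow function. The `scan`, `move_arm`, and `attach` callables and the move labels are injected assumptions standing in for the sensor and arm interfaces, not the source's API; each scan returns the set of teats the sensor discovered at the current arm position.

```python
def attach_sequence(scan, move_arm, attach, first_teat):
    """Sketch of the claimed decision flow: attach the preparation
    cup only when the first teat is discovered in all three
    successive scans, moving the arm between scans."""
    if first_teat not in scan():            # first scan
        return False
    move_arm("forward_first_distance")
    if first_teat not in scan():            # second scan
        return False
    move_arm("under_first_teat")
    if first_teat not in scan():            # third scan
        return False
    attach(first_teat)
    return True
```

Early return on any failed scan means the arm never attaches without all three confirmations.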