HVAC Airflow Throttling Mechanism
    Invention Publication

    Publication Number: US20240336110A1

    Publication Date: 2024-10-10

    Application Number: US18614026

    Filing Date: 2024-03-22

    CPC classification number: B60H1/00692 B60H1/00849 B60H2001/00721

    Abstract: A vehicle HVAC system including a ram air throttling assembly. The ram air throttling assembly includes: a housing defining a ram air inlet, a recirculation air inlet, and an outlet; a first door within the housing, the first door movable to direct airflow from at least one of the ram air inlet and the recirculation air inlet to the outlet; and a second door slidably movable to throttle airflow into the housing through the ram air inlet.

    Air circulation system for a vehicle

    Publication Number: US12077033B2

    Publication Date: 2024-09-03

    Application Number: US17740758

    Filing Date: 2022-05-10

    CPC classification number: B60H1/243 B60H1/3407

    Abstract: An air circulation system configured to mount within a B-pillar, C-pillar, other pillar, or interior trim or cross trims through an air channel of a vehicle. The air circulation system includes a scroll within the pillar, which defines an inlet configured to receive inlet air from a vehicle cabin at the pillar. A fan within the pillar is configured to circulate the inlet air within the scroll. An elongated throat extending from the scroll is configured to receive forced air from the scroll. The throat defines a length and an outlet extending along the length, wherein the outlet delivers the forced air into the vehicle cabin from the pillar in an air stream. The outlet is configured such that the forced air delivered from the outlet creates a Coandă effect that attracts air in the vehicle cabin to move in the direction of the air stream.

    PESSIMISTIC OFFLINE REINFORCEMENT LEARNING SYSTEM AND METHOD

    Publication Number: US20240037445A1

    Publication Date: 2024-02-01

    Application Number: US17969129

    Filing Date: 2022-10-19

    CPC classification number: G06N20/00 G06N7/005

    Abstract: Systems and methods for pessimistic offline reinforcement learning are described herein. In one example, a method for performing offline reinforcement learning determines when sampled states are out of distribution, assigns high probability weights to the sampled states that are out of distribution, generates a fitted Q-function by solving an optimization problem with a minimization term and a maximization term, estimates a Q-value using the fitted Q-function by estimating the overall expected reward assuming the agent is in the present state and performs a present action, and updates the policy according to an existing reinforcement learning algorithm. The minimization term penalizes an overall expected reward when a present state is out of distribution. The maximization term cancels the minimization term when the present state is an in-distribution state.
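The min/max penalty structure described in this abstract resembles a conservative Q-learning regularizer: push Q-values down on out-of-distribution state-action pairs while a matching term cancels the penalty on pairs seen in the data. The following tabular sketch illustrates that idea only; the toy dataset, the count-based out-of-distribution weights, and all hyperparameters are illustrative assumptions, not the patented method:

```python
import numpy as np

# Toy offline dataset of (state, action, reward, next_state) tuples
# over 4 states and 2 actions (illustrative, not from the patent).
n_states, n_actions, gamma, alpha, lr = 4, 2, 0.9, 1.0, 0.1
dataset = [(0, 0, 1.0, 1), (1, 1, 0.0, 2), (2, 0, 1.0, 3), (3, 1, 0.0, 0)]

# Empirical visitation counts: (state, action) pairs never seen in the
# data get a high out-of-distribution weight, as the abstract describes.
counts = np.zeros((n_states, n_actions))
for s, a, _, _ in dataset:
    counts[s, a] += 1
ood_weight = (counts == 0).astype(float)  # 1.0 for unseen pairs

Q = np.zeros((n_states, n_actions))
for _ in range(200):
    for s, a, r, s2 in dataset:
        # Standard fitted-Q (Bellman) target from the offline data.
        target = r + gamma * Q[s2].max()
        # Pessimistic regularizer: the first term minimizes Q on
        # high-weight (out-of-distribution) actions; the second term
        # cancels the penalty for in-distribution pairs.
        penalty_grad = alpha * (ood_weight[s] - (counts[s] > 0))
        Q[s] = Q[s] - lr * penalty_grad
        Q[s, a] -= lr * (Q[s, a] - target)

# The learned policy avoids actions unsupported by the dataset.
policy = Q.argmax(axis=1)
```

After training, Q-values for actions absent from the dataset are driven well below those of in-distribution actions, so the greedy policy stays within the support of the offline data.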
