Reward-driven adaptive agents for video games
    1.
    Invention application
    Reward-driven adaptive agents for video games (In force)

    Publication Number: US20050245303A1

    Publication Date: 2005-11-03

    Application Number: US10837415

    Filing Date: 2004-04-30

    IPC Classification: G06F17/00 G06F19/00

    Abstract: Adaptive agents are driven by rewards they receive based on the outcome of their behavior during actual game play. Accordingly, the adaptive agents are able to learn from experience within the gaming environment. Reward-driven adaptive agents can be trained at game time, at development time, or both. Computer-controlled agents receive rewards (either positive or negative) at individual action intervals based on the effectiveness of the agents' actions (e.g., compliance with defined goals). The adaptive computer-controlled agent is motivated to perform actions that maximize its positive rewards and minimize its negative rewards.

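    The abstract does not name a specific learning algorithm. As an illustration only, the sketch below applies tabular Q-learning, one standard reward-driven method, to a computer-controlled game agent; the class, actions, and parameters are hypothetical and not taken from the patent.

```python
import random
from collections import defaultdict

class QLearningAgent:
    """Hypothetical reward-driven game agent. Tabular Q-learning is an
    assumption for illustration; the patent does not specify it."""

    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)  # (state, action) -> learned value
        self.actions = actions
        self.alpha = alpha           # learning rate
        self.gamma = gamma           # discount on future rewards
        self.epsilon = epsilon       # exploration probability

    def choose_action(self, state):
        # Mostly exploit the highest-valued known action, sometimes explore.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # A positive reward reinforces the action taken in this state;
        # a negative reward discourages it.
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])
```

    At each action interval the game would call choose_action, score the action's effectiveness against the defined goals, and pass the resulting positive or negative reward to update, so that experience during actual play (or during development-time training) reshapes the agent's behavior.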

    Reward-driven adaptive agents for video games
    2.
    Invention grant
    Reward-driven adaptive agents for video games (In force)

    Publication Number: US07837543B2

    Publication Date: 2010-11-23

    Application Number: US10837415

    Filing Date: 2004-04-30

    Abstract: Adaptive agents are driven by rewards they receive based on the outcome of their behavior during actual game play. Accordingly, the adaptive agents are able to learn from experience within the gaming environment. Reward-driven adaptive agents can be trained at game time, at development time, or both. Computer-controlled agents receive rewards (either positive or negative) at individual action intervals based on the effectiveness of the agents' actions (e.g., compliance with defined goals). The adaptive computer-controlled agent is motivated to perform actions that maximize its positive rewards and minimize its negative rewards.


    Optimized meshlet ordering
    3.
    Invention application
    Optimized meshlet ordering (Under examination, published)

    Publication Number: US20070013694A1

    Publication Date: 2007-01-18

    Application Number: US11181596

    Filing Date: 2005-07-13

    IPC Classification: G06T17/00

    CPC Classification: G06T17/20 G06T15/005

    Abstract: A mesh model may be divided into nontrivial meshlets that together form the mesh model, where each meshlet has a single associated render state and some meshlets have different respective render states. Cost metrics are assigned to respective render state transitions, where each render state transition comprises a transition between a different pair of render states. A render state can be anything whose modification incurs a slowdown or other cost in the renderer. The cost metrics may be provided to an optimization algorithm that automatically determines an optimal order for rendering the meshlets, that is, an order that minimizes the total cost of render state changes incurred while rendering them.

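    The abstract leaves the optimization algorithm open (the problem is essentially a traveling-salesman-style ordering over pairwise transition costs). As an illustration only, the sketch below uses a greedy nearest-neighbor heuristic; the function names and data layout are assumptions, not the patent's method.

```python
def transition_cost(state_a, state_b, cost_metrics):
    """Cost of switching the renderer from state_a to state_b."""
    if state_a == state_b:
        return 0.0  # no render state change, no cost
    return cost_metrics[(state_a, state_b)]

def order_meshlets(meshlets, render_state_of, cost_metrics):
    """Greedily pick, at each step, the remaining meshlet whose render
    state is cheapest to transition to from the current one. (A greedy
    heuristic is assumed here; the patent does not mandate one.)"""
    remaining = list(meshlets)
    ordered = [remaining.pop(0)]  # arbitrary starting meshlet
    while remaining:
        current_state = render_state_of[ordered[-1]]
        cheapest = min(
            remaining,
            key=lambda m: transition_cost(
                current_state, render_state_of[m], cost_metrics),
        )
        remaining.remove(cheapest)
        ordered.append(cheapest)
    return ordered

# Meshlets sharing a render state end up adjacent, so the costly
# opaque <-> alpha switch happens only once:
states = {"m0": "opaque", "m1": "alpha", "m2": "opaque"}
costs = {("opaque", "alpha"): 5.0, ("alpha", "opaque"): 5.0}
print(order_meshlets(["m0", "m1", "m2"], states, costs))  # ['m0', 'm2', 'm1']
```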