Query system for structured multimedia content retrieval
    5.
    Invention Application (Pending, Published)

    Publication No.: US20040267720A1

    Publication Date: 2004-12-30

    Application No.: US10609257

    Filing Date: 2003-06-27

    IPC Classification: G06F007/00

    CPC Classification: G06F16/40 G06F16/7343

    Abstract: A query system for structured multimedia content retrieval comprises a query language based on a logic formalism for content retrieval. The language includes query constructs and formalisms for specifying different aspects of XML documents, and these constructs and formalisms are particularly adapted to spatial, temporal and visual datatypes. Certain critical specification issues in MPEG-7 XML queries are identified. An XML query language with multimedia query constructs is described which is based on a logic formalism called path predicate calculus. In this path predicate calculus, the atomic logic formulas are element predicates rather than the relation predicates of relational calculus. In this query language, evaluating a query is equivalent to finding all proofs of the existential closure of logical assertions, expressed as path predicates, that the tree document elements must satisfy. Spatial, temporal and visual datatypes and relationships can also be described in this formalism for content retrieval.

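    The following sketch is not the patent's actual grammar; the XML snippet and the element_predicate and witnesses helpers are illustrative assumptions. It shows the core idea in Python: a query is a conjunction of element predicates over paths in an MPEG-7-like tree, and evaluating it amounts to collecting every element binding that witnesses the existential closure of those path predicates.

```python
# Minimal sketch: element predicates over an MPEG-7-like XML tree, evaluated by
# collecting all witnesses ("proofs") of the existentially closed conjunction.
import xml.etree.ElementTree as ET

DOC = """
<Mpeg7>
  <Video id="v1">
    <Segment id="s1"><MediaTime start="0" duration="12"/><Keyword>goal</Keyword></Segment>
    <Segment id="s2"><MediaTime start="30" duration="8"/><Keyword>replay</Keyword></Segment>
  </Video>
</Mpeg7>
"""

def element_predicate(path, test):
    """An atomic formula: 'there exists an element on `path` satisfying `test`'."""
    return (path, test)

def witnesses(root, predicates):
    """Return every element that satisfies all element predicates, i.e. all
    proofs of the existential closure of the conjunction."""
    solutions = []
    for path, test in predicates:
        solutions.append({e for e in root.findall(path) if test(e)})
    # All predicates here range over the same segment variable, so the witnesses
    # are the intersection of the per-predicate solution sets.
    common = set.intersection(*solutions) if solutions else set()
    return sorted(common, key=lambda e: e.get("id"))

root = ET.fromstring(DOC)
query = [
    # temporal predicate: segment starts within the first 20 seconds
    element_predicate(".//Segment", lambda e: int(e.find("MediaTime").get("start")) < 20),
    # content predicate: segment is annotated with the keyword "goal"
    element_predicate(".//Segment", lambda e: e.find("Keyword").text == "goal"),
]
print([e.get("id") for e in witnesses(root, query)])   # ['s1']
```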

    SYSTEM AND METHOD FOR NATURAL LANGUAGE DRIVEN SEARCH AND DISCOVERY IN LARGE DATA SOURCES
    8.
    Invention Application (Pending, Published)

    Publication No.: US20170026705A1

    Publication Date: 2017-01-26

    Application No.: US14808354

    Filing Date: 2015-07-24

    Abstract: Presenting natural-language-understanding (NLU) results can produce redundancies and awkward sentence structures. In an embodiment of the present invention, a method includes, responsive to receiving a result of an NLU query, loading a matching template from a plurality of templates stored in a memory. Each template has mask fields, each associated with at least one property. The method compares the properties of the mask fields of each template to the properties of the query and of the result, and selects the matching template. The method then completes the matching template by inserting fields of the result into the corresponding mask fields. The method may further suppress certain mask fields of the matching template, when appropriate based on the results of the NLU query, to increase brevity and improve the naturalness of the response. The method then presents the completed matching template to a user via a display.

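    A minimal Python sketch of the idea, using illustrative names (MaskField, Template, select_template, complete) that are not from the patent: templates carry mask fields tagged with properties, the template whose field properties best overlap the query and result is selected, the result's fields are inserted into the mask fields, and optional fields can be suppressed for brevity.

```python
# Sketch of property-based template selection and completion for NLU results.
from dataclasses import dataclass

@dataclass
class MaskField:
    name: str            # key expected in the NLU result
    properties: set      # properties this mask field is associated with
    optional: bool = False  # optional fields may be suppressed for brevity

@dataclass
class Template:
    text: str            # surface form with {name} placeholders
    fields: list

TEMPLATES = [
    Template("{city} is {temp} degrees and {condition}.",
             [MaskField("city", {"weather", "location"}),
              MaskField("temp", {"weather", "temperature"}),
              MaskField("condition", {"weather"}, optional=True)]),
    Template("The population of {city} is {population}.",
             [MaskField("city", {"demographics", "location"}),
              MaskField("population", {"demographics"})]),
]

def select_template(templates, query_props, result):
    """Pick the template whose mask-field properties overlap the query/result
    properties the most and whose required fields the result can fill."""
    def score(t):
        if any(f.name not in result for f in t.fields if not f.optional):
            return -1
        return sum(len(f.properties & query_props) for f in t.fields)
    return max(templates, key=score)

def complete(template, result, brief=False):
    """Insert result values into the mask fields; drop optional fields when brief."""
    values, text = {}, template.text
    for f in template.fields:
        if brief and f.optional:
            # crude suppression: strip the placeholder and its connective
            text = text.replace(" and {%s}" % f.name, "")
        else:
            values[f.name] = result.get(f.name, "")
    return text.format(**values)

result = {"city": "Boston", "temp": 72, "condition": "sunny"}
tpl = select_template(TEMPLATES, {"weather", "location"}, result)
print(complete(tpl, result))              # Boston is 72 degrees and sunny.
print(complete(tpl, result, brief=True))  # Boston is 72 degrees.
```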

    Video based question and answer
    10.
    Granted Patent

    Publication No.: US11995412B1

    Publication Date: 2024-05-28

    Application No.: US18482828

    Filing Date: 2023-10-06

    Abstract: Disclosed are systems and methods that convert digital video data, such as two-dimensional digital video data, into a natural language text description of the subject matter represented in the video. For example, the disclosed implementations may process video data in real time, in near real time, or after the video data is created, and generate a text-based video narrative describing the subject matter of the video. In addition, the disclosed implementations may also support a question and answer session in which a user may submit queries about the subject matter of one or more videos, and the disclosed implementations will present natural language responses based on the subject matter of the video and any corresponding context.
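
    A minimal Python sketch under stated assumptions (the captioner stub and keyword-overlap retrieval stand in for whatever vision-language and retrieval models the real system would use): frames are captioned into a timestamped narrative, and a question is answered from the narrative sentences that best match it.

```python
# Sketch: build a text-based video narrative, then answer questions from it.
from dataclasses import dataclass

@dataclass
class NarrativeEntry:
    timestamp: float   # seconds into the video
    sentence: str      # natural-language description of that moment

def build_narrative(frames, caption_frame):
    """Turn (timestamp, frame) pairs into a text-based video narrative.
    `caption_frame` stands in for whatever model produces frame captions."""
    return [NarrativeEntry(t, caption_frame(frame)) for t, frame in frames]

def answer(question, narrative):
    """Answer a question from the narrative by keyword overlap (a stand-in
    for whatever retrieval or language model the real system would use)."""
    q_terms = set(question.lower().split())
    best = max(narrative,
               key=lambda e: len(q_terms & set(e.sentence.lower().split())))
    return f"Around {best.timestamp:.0f}s: {best.sentence}"

# Stub captioner so the sketch runs without any video model.
captions = {1: "A dog chases a ball across the yard.",
            2: "The dog drops the ball at a child's feet."}
frames = [(3.0, 1), (8.5, 2)]   # (timestamp, frame handle)

narrative = build_narrative(frames, lambda frame: captions[frame])
print(answer("who has the ball at the end", narrative))
```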