Abstract:
A Virtual Reality (VR) method and system for text manipulation. According to an exemplary embodiment, a VR method displays documents in a Virtual Environment (VE) and provides a user with interaction with the displayed documents, the VE including a VR head-mounted display and one or more gesture sensors. Text manipulation is performed using natural human body interactions with the VR system.
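A minimal sketch of how such gesture-driven text manipulation could be wired together, assuming a recognized-gesture label arrives from the gesture sensors for each interaction; the gesture names, the VRDocument class, and the handle_gesture dispatcher are illustrative assumptions, not the claimed implementation:

```python
# Illustrative sketch: dispatching recognized body gestures to text-manipulation
# actions on a document displayed in the virtual environment. All names here
# (VRDocument, the gesture labels) are hypothetical.
from dataclasses import dataclass


@dataclass
class VRDocument:
    text: str
    selection: tuple = (0, 0)   # (start, end) character offsets
    clipboard: str = ""

    def select(self, start: int, end: int):
        self.selection = (start, end)

    def copy(self):
        start, end = self.selection
        self.clipboard = self.text[start:end]

    def paste(self, position: int):
        self.text = self.text[:position] + self.clipboard + self.text[position:]


def handle_gesture(doc: VRDocument, gesture: str, **params):
    """Route a recognized hand/body gesture to the corresponding text action."""
    if gesture == "pinch_drag":      # pinch and sweep to select a text span
        doc.select(params["start"], params["end"])
    elif gesture == "grab":          # closing the hand copies the selection
        doc.copy()
    elif gesture == "release":       # opening the hand pastes at the pointed-at position
        doc.paste(params["position"])
```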
Abstract:
A system and method for document image acquisition and retrieval find application in litigation for responding to discovery requests. The method includes receiving automatically acquired electronic image logs comprising image data and associated records for documents processed by a plurality of image output devices within an organization. When a request for document production is received, the image logs (and/or information extracted therefrom) are automatically filtered through at least one classifier trained to return documents responsive to the document request, and documents corresponding to the image logs that pass the filtering are output. One of the filters may be configured for filtering out documents that include attorney-client exchanges.
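A minimal sketch of the filtering step, assuming each image log carries text extracted from the acquired image data and that two trained classifiers are available as callables (one for responsiveness, one for attorney-client privilege); the ImageLog fields and function names are illustrative assumptions rather than the patented implementation:

```python
# Illustrative sketch: filtering acquired image logs through trained classifiers
# to decide which documents to produce. ImageLog, is_responsive, and is_privileged
# are hypothetical stand-ins for the logs and models described in the abstract.
from dataclasses import dataclass
from typing import Callable, Iterable, List


@dataclass
class ImageLog:
    document_id: str
    device_id: str        # which image output device produced the log
    extracted_text: str   # text extracted from the acquired image data


def respond_to_request(
    logs: Iterable[ImageLog],
    is_responsive: Callable[[str], bool],   # trained to identify request-responsive documents
    is_privileged: Callable[[str], bool],   # flags attorney-client exchanges
) -> List[str]:
    """Return the IDs of documents to produce: responsive and not privileged."""
    produced = []
    for log in logs:
        if is_responsive(log.extracted_text) and not is_privileged(log.extracted_text):
            produced.append(log.document_id)
    return produced
```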
Abstract:
An apparatus and a method facilitate data exploration and switching between exploratory and iterative searching. A virtual widget is movable on a display device in response to detected user gestures. Graphic objects are displayed on the display device, representing respective documents in a search document collection. The virtual widget is populated with a first query term, which can be used for an iterative search. A set of semantic terms predicted to be semantically related to the first query term is identified, based on a computed similarity between multidimensional representations of terms in a training document collection. The multidimensional representations are output by a semantic model which takes into account the context of the respective terms in the training document collection. A user selects one of the set of semantic terms for generating a semantic query for an exploratory search. Documents in the search document collection that are responsive to the semantic query are identified.
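A minimal sketch of the similarity step, assuming the multidimensional term representations have already been produced by the semantic model and are available as a mapping from term to vector; cosine similarity is used here purely as an example measure:

```python
# Illustrative sketch: identifying terms semantically related to the first query
# term by cosine similarity between their multidimensional representations.
# The `embeddings` mapping is assumed to come from a semantic model trained on
# the training document collection.
import numpy as np


def related_terms(query_term: str, embeddings: dict, k: int = 5) -> list:
    """Return the k terms whose vectors are most similar to the query term's vector."""
    q = embeddings[query_term]
    q = q / np.linalg.norm(q)
    scored = []
    for term, vec in embeddings.items():
        if term == query_term:
            continue
        score = float(np.dot(q, vec / np.linalg.norm(vec)))  # cosine similarity
        scored.append((score, term))
    return [term for _, term in sorted(scored, reverse=True)[:k]]
```

The user could then be shown the returned terms and pick one to form the semantic query for the exploratory search.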
Abstract:
A system and method are provided for dynamically generating a query using touch gestures. A virtual magnet is movable on a display device of a tactile user interface in response to touch. A user selects one of a set of text documents for review, which is then displayed on the display device. The system is configured for recognizing a highlighting gesture on the tactile user interface over the displayed document as a selection of a text fragment from the document text. The virtual magnet is populated with a query which is based on the text fragment selected with the highlighting gesture. The populated magnet can cause a subset of the displayed graphic objects to exhibit a response to the magnet, as a function of the query and the text content of the respective documents that the objects represent, and/or can cause responsive instances in a text document to be displayed.
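A minimal sketch of how the populated magnet's effect on the displayed objects could be computed, assuming the query is simply the set of words in the highlighted fragment and the response strength is a term-overlap score; both choices are illustrative assumptions:

```python
# Illustrative sketch: building a query from a highlighted text fragment and
# scoring each displayed document against it. The overlap score stands in for
# whatever responsiveness measure the system actually uses.
import re


def build_query(fragment: str) -> set:
    """Populate the magnet with the terms of the highlighted fragment."""
    return set(re.findall(r"[a-z]+", fragment.lower()))


def magnet_response(query: set, documents: dict) -> dict:
    """Map each document id to a response strength in [0, 1]."""
    strengths = {}
    for doc_id, text in documents.items():
        terms = set(re.findall(r"[a-z]+", text.lower()))
        strengths[doc_id] = len(query & terms) / len(query) if query else 0.0
    return strengths
```

Graphic objects whose strength exceeds a threshold could then be animated toward the magnet, while responsive instances within an open document could be highlighted.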
Abstract:
A system and method for selection of a batch of objects are provided. Each object in a pool is assigned to a subset of a set of buckets. The assignment is based on signatures generated, for example, by applying locality-sensitive hashing (LSH) to representations of the objects in the pool. The signatures are then segmented into bands, each of which is assigned to a respective bucket in the set based on the elements of the band. An entropy value is computed, using a current classifier model, for each of a set of objects remaining in the pool. A batch of objects for retraining the model is then selected from this set, based on the objects' computed entropy values and respective assigned buckets.
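A minimal sketch of the two stages, assuming the signatures are lists of integers (e.g., MinHash-style) and the classifier returns a probability for a binary label; the band size, the bucket hashing, and the rule of skipping objects whose buckets are already covered are illustrative assumptions:

```python
# Illustrative sketch: (1) assign each object to buckets by hashing bands of its
# signature, (2) select a retraining batch of high-entropy objects while spreading
# the selections across buckets. Signature generation itself is omitted.
import math
from collections import defaultdict


def assign_buckets(signatures: dict, band_size: int) -> dict:
    """Map each object id to the set of buckets its signature bands hash into."""
    assignment = defaultdict(set)
    for obj_id, sig in signatures.items():
        for i in range(0, len(sig), band_size):
            band = tuple(sig[i:i + band_size])
            assignment[obj_id].add(hash((i, band)))  # bucket keyed by band position and contents
    return assignment


def binary_entropy(p: float) -> float:
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)


def select_batch(pool: dict, assignment: dict, classify, batch_size: int) -> list:
    """Pick high-entropy objects, skipping ones whose buckets are already represented."""
    ranked = sorted(pool, key=lambda obj_id: binary_entropy(classify(pool[obj_id])), reverse=True)
    batch, covered = [], set()
    for obj_id in ranked:
        if len(batch) == batch_size:
            break
        if assignment[obj_id] & covered:   # shares a bucket with an already selected object
            continue
        batch.append(obj_id)
        covered |= assignment[obj_id]
    return batch
```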
Abstract:
A processing method includes associating, in memory, each of a plurality of hand gestures detectable with a three-dimensional sensor with a respective one of a plurality of item processing tasks. A plurality of graphic objects is displayed on a touch-sensitive display device, each graphic object being associated with a respective item. A hand gesture is detected with the three-dimensional sensor. The item processing task associated with the detected hand gesture is identified and implemented on the displayed graphic objects, which includes causing at least a subset of the displayed graphic objects to respond on the display device based on attributes of the respective items. Item processing tasks may also be implemented through predefined touch gestures.
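A minimal sketch of the in-memory association between gestures and item processing tasks, assuming the three-dimensional sensor delivers a recognized gesture as a string label; the gesture names and task functions are hypothetical examples:

```python
# Illustrative sketch: associating detectable hand gestures with item processing
# tasks and applying the task matched to a detected gesture to the displayed
# graphic objects. Gesture labels and task functions are hypothetical.
def sort_by_date(objects):
    """Order the graphic objects by the date attribute of their items."""
    return sorted(objects, key=lambda obj: obj["date"])


def group_by_sender(objects):
    """Group the graphic objects by the sender attribute of their items."""
    groups = {}
    for obj in objects:
        groups.setdefault(obj["sender"], []).append(obj)
    return groups


GESTURE_TASKS = {
    "swipe_left": sort_by_date,      # association stored in memory
    "spread_hands": group_by_sender,
}


def on_gesture(detected_gesture, displayed_objects):
    """Identify and implement the task associated with the detected gesture."""
    task = GESTURE_TASKS.get(detected_gesture)
    return task(displayed_objects) if task else displayed_objects
```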