Abstract:
At least some embodiments of the present disclosure feature systems and methods for assessing the impact of visual features within a region of a scene. With the input of a visual representation of a scene and at least one selected region within the scene, the system applies a visual attention model to the visual representation to determine visual conspicuity of the at least one selected region. The system computes feature-related data associated with a plurality of features of the at least one selected region. Based on the visual conspicuity and the feature-related data, the system assesses an impact that at least one of the features within the at least one selected region has on the visual conspicuity.
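The assessment flow described above can be sketched as follows. This is a minimal illustration, not the disclosed model: the saliency proxy (center-surround intensity contrast) and the perturbation-based impact estimate are assumptions standing in for whatever visual attention model and feature analysis an implementation would use.

```python
import numpy as np

def saliency_map(image):
    """Crude saliency proxy: |pixel - global mean| intensity contrast."""
    return np.abs(image - image.mean())

def region_conspicuity(image, region):
    """Mean saliency inside the selected region (row/col slices)."""
    rows, cols = region
    return saliency_map(image)[rows, cols].mean()

def feature_impact(image, region, feature_name, perturb):
    """Estimate a feature's impact as the drop in conspicuity when the
    feature is perturbed within the region (a simple ablation approach)."""
    base = region_conspicuity(image, region)
    perturbed = image.copy()
    rows, cols = region
    perturbed[rows, cols] = perturb(perturbed[rows, cols])
    return feature_name, base - region_conspicuity(perturbed, region)

# Usage: a dark scene with a bright patch; halving the patch's brightness
# lowers its conspicuity, so the "brightness" feature has positive impact.
img = np.zeros((64, 64))
region = (slice(20, 30), slice(20, 30))
img[region] = 1.0
name, impact = feature_impact(img, region, "brightness", lambda p: p * 0.5)
```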
Abstract:
Methods for generating multiple orthodontic treatment options for a digital 3D model of teeth in malocclusion. The method generates a plurality of different orthodontic treatment plans for the teeth and displays in a user interface the digital 3D model of teeth in malocclusion with a visual indication of each of the plurality of different orthodontic treatment plans. The visual indication of the treatment plans can be overlaid on the digital 3D model of teeth in malocclusion and can include aligners, brackets, or a combination of aligners and brackets. A doctor, technician, or other user can then select one of the treatment plans for a particular patient.
Abstract:
Methods for generating stages for a portion of orthodontic aligner treatment for a digital 3D model of teeth in malocclusion. The methods generate a subset of stages of setups among a complete set of stages of setups for aligner treatment of the teeth. The subset of stages can be selected from a complete set of stages, based upon a target intermediate setup, or sequentially generated from one stage to the next in the subset. Aligners for the subset of stages of setups can then be manufactured without having to make a complete set of aligners. A method to generate a setup for the aligner treatment compares the digital 3D model of teeth in malocclusion to a plurality of setups for historical cases of teeth in malocclusion that have undergone aligner treatment.
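The sequential generation of a subset of stages can be sketched as below. This is an illustrative simplification, not the disclosed method: tooth poses are reduced to 3D translations and the per-stage motion limit (0.25 mm) is an assumed value.

```python
import numpy as np

MAX_STEP_MM = 0.25  # assumed per-stage motion limit

def next_stage(current, target, max_step=MAX_STEP_MM):
    """Move each tooth toward its target pose, clamping per-stage motion."""
    delta = target - current
    dist = np.linalg.norm(delta, axis=1, keepdims=True)
    scale = np.minimum(1.0, max_step / np.maximum(dist, 1e-9))
    return current + delta * scale

def generate_subset(malocclusion, intermediate_target, max_stages=50):
    """Sequentially generate stages from one to the next until the
    target intermediate setup is reached."""
    stages = [malocclusion]
    for _ in range(max_stages):
        nxt = next_stage(stages[-1], intermediate_target)
        stages.append(nxt)
        if np.allclose(nxt, intermediate_target, atol=1e-6):
            break
    return stages

# Two teeth: one needs 1.0 mm of motion, the other 0.5 mm, so four
# clamped stages carry both to the target intermediate setup.
malocc = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
target = np.array([[1.0, 0.0, 0.0], [1.0, 0.5, 0.0]])
stages = generate_subset(malocc, target)
```

Aligners would then be manufactured only for this subset, without generating the complete set of stages.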
Abstract:
This disclosure describes a computer-implemented method and system for evaluating, modifying, and determining setups for orthodontic treatment using metrics computed during virtual articulation. The virtual articulation techniques of this disclosure may further be combined with techniques for determining dynamic collision metrics, determining a comfort measurement, determining treatment efficacy, and/or determining dental conditions, such as bruxism. The techniques of this disclosure further include user interface techniques that provide an orthodontist/dentist/technician with information regarding various treatment plans based on metrics gathered during virtual articulation.
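One virtual-articulation metric can be sketched as below. This is a rough illustration under strong assumptions, not the disclosed technique: teeth are modeled as spheres and the dynamic collision metric is taken as the worst-case interpenetration depth over a set of simulated lateral jaw excursions.

```python
import numpy as np

def penetration(upper, lower):
    """Total sphere-interpenetration depth between two arches,
    where each tooth is a (center, radius) pair."""
    total = 0.0
    for cu, ru in upper:
        for cl, rl in lower:
            depth = (ru + rl) - np.linalg.norm(np.asarray(cu) - np.asarray(cl))
            total += max(0.0, depth)
    return total

def dynamic_collision_metric(upper, lower, excursions):
    """Max interpenetration over simulated lateral jaw displacements."""
    scores = []
    for dx in excursions:
        shifted = [(np.asarray(c) + np.array([dx, 0.0, 0.0]), r)
                   for c, r in lower]
        scores.append(penetration(upper, shifted))
    return max(scores)

# One upper and one lower tooth, 1.0 mm apart with radii summing to 1.1 mm:
# they collide (0.1 mm depth) at centric occlusion but clear in excursion.
upper = [((0.0, 0.0, 1.0), 0.6)]
lower = [((0.0, 0.0, 0.0), 0.5)]
metric = dynamic_collision_metric(upper, lower, excursions=[-0.5, 0.0, 0.5])
```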
Abstract:
A method for generating digital setups for an orthodontic treatment path. The method includes receiving a digital 3D model of teeth, performing interproximal reduction (IPR) on the model and, after performing the IPR, generating an initial treatment path with stages including an initial setup, a final setup, and a plurality of intermediate setups. The method also includes computing IPR accessibility for each tooth at each stage of the initial treatment path, applying IPR throughout the initial treatment path based upon the computed IPR accessibility, and dividing the initial treatment path into steps of feasible motion of the teeth resulting in a final treatment path with setups corresponding with the steps. The setups can be used to make orthodontic appliances, such as clear tray aligners, for each stage of the treatment path.
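The per-stage IPR accessibility computation can be sketched as below. This is an illustrative simplification, not the disclosed method: teeth are reduced to 1D intervals along the arch, and IPR at a contact is treated as accessible at a stage once the neighboring teeth no longer overlap there.

```python
def overlap(a, b):
    """Signed overlap between two 1D tooth intervals (center, width):
    positive means the teeth interpenetrate at that contact."""
    (ca, wa), (cb, wb) = a, b
    return (wa + wb) / 2 - abs(cb - ca)

def ipr_accessibility(path):
    """For each stage of the path, list contacts (i, i+1) where IPR
    is accessible, i.e. the contact has cleared."""
    schedule = []
    for stage in path:
        accessible = [i for i in range(len(stage) - 1)
                      if overlap(stage[i], stage[i + 1]) <= 0]
        schedule.append(accessible)
    return schedule

# A two-stage path for three teeth: in stage 0, teeth 0 and 1 still
# overlap (crowding); by stage 1 both contacts have cleared, so IPR
# can be applied at either contact.
path = [
    [(0.0, 8.0), (7.0, 8.0), (16.0, 8.0)],   # stage 0
    [(0.0, 8.0), (8.5, 8.0), (17.0, 8.0)],   # stage 1
]
schedule = ipr_accessibility(path)
```

IPR would then be distributed across the treatment path according to this schedule before the path is divided into steps of feasible motion.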
Abstract:
Methods for generating intermediate stages for orthodontic aligners using machine learning or deep learning techniques. The method receives a malocclusion of teeth and a planned setup position of the teeth. The malocclusion can be represented by translations and rotations, or by digital 3D models. The method generates intermediate stages for aligners, between the malocclusion and the planned setup position, using one or more deep learning methods. The intermediate stages can be used to generate setups that are output in a format, such as digital 3D models, suitable for use in manufacturing the corresponding aligners.
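The learned staging step can be sketched as below. This is an assumed architecture for illustration only, not the disclosed network: a tooth's malocclusion and planned setup poses are encoded as 6-vectors (translation plus rotation angles), concatenated with a stage fraction t, and mapped by a small MLP to the intermediate pose. The network is shown untrained with random weights; in practice it would be trained on historical staging data.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(in_dim=13, hidden=32, out_dim=6):
    """Two-layer MLP parameters (assumed sizes, randomly initialized)."""
    return {
        "W1": rng.normal(0, 0.1, (in_dim, hidden)), "b1": np.zeros(hidden),
        "W2": rng.normal(0, 0.1, (hidden, out_dim)), "b2": np.zeros(out_dim),
    }

def predict_stage(params, malocc_pose, setup_pose, t):
    """Forward pass: concatenate both poses and the stage fraction,
    then apply a ReLU MLP to predict the intermediate pose."""
    x = np.concatenate([malocc_pose, setup_pose, [t]])
    h = np.maximum(0.0, x @ params["W1"] + params["b1"])
    return h @ params["W2"] + params["b2"]

params = init_mlp()
malocc = np.array([0.0, 0.0, 0.0, 0.1, 0.0, 0.0])  # tx,ty,tz,rx,ry,rz
setup = np.array([1.0, 0.5, 0.0, 0.0, 0.0, 0.0])
intermediate = predict_stage(params, malocc, setup, t=0.5)
```

The predicted poses for each t would then be applied to the digital 3D models to produce setups suitable for manufacturing the corresponding aligners.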
Abstract:
A method for generating and reusing digital setups for an orthodontic treatment path. The method receives a digital 3D model of teeth, optionally performs interproximal reduction on the model, and generates an initial treatment path with stages including initial, final, and intermediate setups. The method divides the initial treatment path into initial steps of feasible motion of the teeth resulting in a final treatment path with setups corresponding with the initial steps. For a treatment redesign, the method computes new steps of feasible motion for only a portion of the initial treatment path, based upon the initial steps, and generates the final treatment path with new setups corresponding with the new steps. The setups can be used to make orthodontic appliances, such as clear tray aligners.
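The redesign reuse can be sketched as below. This is an illustrative simplification, not the disclosed method: poses are reduced to 1D positions and the feasible-motion step limit is an assumed value. Steps computed for the unchanged portion of the path are kept; new steps are computed only from the redesign point onward.

```python
import math

MAX_STEP = 0.25  # assumed feasible-motion limit per step

def steps_toward(start, target, max_step=MAX_STEP):
    """Divide a motion into equal steps no larger than max_step."""
    n = max(1, math.ceil(abs(target - start) / max_step))
    return [start + (target - start) * (k / n) for k in range(1, n + 1)]

def redesign_path(initial_steps, redesign_index, new_target):
    """Reuse steps up to redesign_index; recompute only the remainder
    of the path toward the new target."""
    kept = initial_steps[:redesign_index]
    start = kept[-1] if kept else 0.0
    return kept + steps_toward(start, new_target)

initial = steps_toward(0.0, 1.0)        # four steps of 0.25 toward 1.0
final = redesign_path(initial, 2, 0.8)  # keep first two, retarget to 0.8
```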
Abstract:
Methods for tracking gum line changes by comparing digital 3D models of teeth and gingiva taken at different times. The digital 3D models are segmented to digitally identify the teeth from the gingiva, and the segmented digital 3D models are compared to detect gum line changes by determining differences between them relating to the gum line. Gum line changes can also be estimated by comparing one of the digital 3D models with a 3D model having a predicted original location of the gum line. Gum line change maps can be displayed to show the gum line changes determined through the tracking or estimating of changes. The digital 3D models can also be displayed with periodontal measurements placed upon them.
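The change-detection step can be sketched as below. This is a simplified illustration, not the disclosed method: the gum line around one tooth is reduced to heights sampled at matched positions in the two segmented scans, and the change map is the per-sample difference (positive indicating recession). Real implementations would compare full 3D meshes; the sampling scheme and noise threshold are assumptions.

```python
import numpy as np

def gum_line_change_map(heights_t0, heights_t1, noise_floor=0.1):
    """Per-sample gum height change (mm) between two timepoints,
    zeroing sub-threshold differences as scan noise."""
    change = np.asarray(heights_t0) - np.asarray(heights_t1)
    change[np.abs(change) < noise_floor] = 0.0
    return change

t0 = np.array([5.0, 5.1, 5.0, 4.9])   # baseline gum heights (mm)
t1 = np.array([5.0, 4.6, 4.4, 4.85])  # follow-up scan
change = gum_line_change_map(t0, t1)
recession_sites = np.nonzero(change > 0)[0]
```

A displayed gum line change map would color the model by these per-sample values, alongside any periodontal measurements placed on the model.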