Abstract:
An example embodiment may involve obtaining (i) an a×b attribute macro-cell, and (ii) a×b pixel macro-cells for each of a luminance plane, a first color plane, and a second color plane of an input image. The a×b attribute macro-cell and the a×b pixel macro-cells may each contain 4 non-overlapping m×n cells. The example embodiment may also involve determining 4 attribute-plane output values that represent the 4 non-overlapping m×n attribute cells, 1 to 4 luminance-plane output values that represent the a×b pixel macro-cell of the luminance plane, a first color-plane output value that represents the a×b pixel macro-cell of the first color plane, and a second color-plane output value that represents the a×b pixel macro-cell of the second color plane. The example embodiment may further involve writing an interleaved representation of the output values to a computer-readable output medium.
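As an illustrative sketch only, the snippet below assumes a = b = 8 and m = n = 4 and uses plain cell averaging as a stand-in for however the output values are actually derived; the function name and the byte layout of the interleaved record are hypothetical.

```python
import numpy as np

def encode_macro_cell(attr_mc, y_mc, cb_mc, cr_mc):
    """Encode one 8x8 macro-cell position and return an interleaved record.

    attr_mc, y_mc, cb_mc, cr_mc: 8x8 arrays for the attribute, luminance,
    and two color planes. Averaging stands in for whatever reduction the
    real encoder uses.
    """
    # 4 attribute-plane output values, one per non-overlapping 4x4 cell.
    attr_out = [int(attr_mc[r:r + 4, c:c + 4].mean())
                for r in (0, 4) for c in (0, 4)]

    # 1 to 4 luminance-plane output values; here we keep all 4 cell averages.
    y_out = [int(y_mc[r:r + 4, c:c + 4].mean())
             for r in (0, 4) for c in (0, 4)]

    # A single output value for each color plane's 8x8 macro-cell.
    cb_out = int(cb_mc.mean())
    cr_out = int(cr_mc.mean())

    # Interleaved representation: attribute, luminance, then color values.
    return bytes(attr_out + y_out + [cb_out, cr_out])

# Usage: write the record for one macro-cell to an output medium.
rng = np.random.default_rng(0)
record = encode_macro_cell(*(rng.integers(0, 256, (8, 8)) for _ in range(4)))
with open("encoded.bin", "wb") as f:
    f.write(record)
```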
Abstract:
Methods and apparatus for radial gradient rendering are provided. A graphics computing device can include a radial gradient module (RGM), which can include circuitry for radial gradient rendering. The RGM can receive one or more parameters associated with rendering at least a portion of an image utilizing radial gradient rendering. The RGM can map one or more input coordinates of the image to one or more source domain coordinates. The RGM can determine a t-value for the source domain coordinates, the t-value specifying an ellipse in the source domain whose edge includes the source domain coordinates. The RGM can determine a color value for the input coordinates based on the specified ellipse. The RGM can generate an output that is based on the color value.
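The following sketch illustrates the t-value idea for the simplest case, a concentric-ellipse gradient in the source domain; the function name, parameters, and the linear color interpolation are assumptions, not the RGM's actual circuitry or math.

```python
import numpy as np

def radial_gradient_color(x, y, cx, cy, rx, ry, inner_rgb, outer_rgb):
    """Return a color for pixel (x, y) under a concentric-ellipse radial gradient.

    (cx, cy) is the gradient center and (rx, ry) are the radii of the t = 1
    ellipse in the source domain. The t-value selects the ellipse whose edge
    passes through the mapped point; colors are interpolated over t.
    """
    # Map the input coordinates into the source domain (unit-circle space).
    u = (x - cx) / rx
    v = (y - cy) / ry

    # t = 0 at the center, t = 1 on the outer ellipse; clamp so points
    # outside the outer ellipse take the outer color.
    t = min(np.hypot(u, v), 1.0)

    inner = np.asarray(inner_rgb, dtype=float)
    outer = np.asarray(outer_rgb, dtype=float)
    return tuple(np.rint((1.0 - t) * inner + t * outer).astype(int))

# Usage: color for a pixel halfway out along the x-axis of the ellipse.
print(radial_gradient_color(60, 40, cx=40, cy=40, rx=40, ry=30,
                            inner_rgb=(255, 0, 0), outer_rgb=(0, 0, 255)))
```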
Abstract:
An example embodiment may involve obtaining an a×b pixel macro-cell from an input image. Pixels in the a×b pixel macro-cell may have respective pixel values and may be associated with respective tags. It may be determined whether at least e of the respective tags indicate that their associated pixels represent edges in the input image. Based on this determination, either a first encoding or a second encoding of the a×b pixel macro-cell may be selected. The first encoding may weight pixels that represent edges in the input image more heavily than pixels that do not represent edges, and the second encoding might not consider whether pixels represent edges. The selected encoding may be performed and written to a computer-readable output medium.
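A minimal sketch of the selection between the two encodings, assuming the first encoding is a weighted average with a hypothetical edge_weight factor and the second is a plain average; the threshold e and the weighting scheme shown are illustrative, not the embodiment's actual encodings.

```python
import numpy as np

def encode_cell(pixels, tags, e, edge_weight=4.0):
    """Encode a macro-cell, preferring edge pixels when at least e tags
    mark their associated pixels as edges.

    pixels: a x b array of pixel values; tags: boolean a x b array where
    True means "edge". The weighted form and edge_weight are illustrative.
    """
    pixels = np.asarray(pixels, dtype=float)
    tags = np.asarray(tags, dtype=bool)

    if tags.sum() >= e:
        # First encoding: weight edge pixels more heavily than non-edge pixels.
        weights = np.where(tags, edge_weight, 1.0)
        value = np.average(pixels, weights=weights)
    else:
        # Second encoding: ignore the tags entirely.
        value = pixels.mean()

    return int(round(value))

# Usage: a 4x4 cell where 5 pixels are tagged as edges and e = 4,
# so the edge-weighted encoding is selected.
pix = np.arange(16).reshape(4, 4)
tag = np.zeros((4, 4), dtype=bool)
tag[0, :] = True
tag[1, 0] = True
print(encode_cell(pix, tag, e=4))
```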
Abstract:
An example embodiment may involve obtaining an a×b pixel macro-cell from an input image. The a×b pixel macro-cell may contain 4 non-overlapping m×n pixel cells. The a×b pixels in the a×b pixel macro-cell may have respective color values and may be associated with respective object type tags. The example embodiment may also include selecting a compression technique to either (i) compress the a×b pixel macro-cell as a whole, or (ii) compress the a×b pixel macro-cell by compressing each of the 4 non-overlapping m×n pixel cells separately. The example embodiment may further include compressing the a×b pixel macro-cell according to the selected compression technique, and writing a representation of the compressed a×b pixel macro-cell to a computer-readable output medium.
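The sketch below assumes a = b = 8 and m = n = 4 and uses a hypothetical variance threshold to choose between compressing the macro-cell as a whole and compressing its four cells separately; the averaging stands in for whatever codec the embodiment actually applies.

```python
import numpy as np

def compress_macro_cell(mc, flat_threshold=25.0):
    """Compress an 8x8 macro-cell either as a whole or per 4x4 cell.

    The variance-based selection and the single-average "compression" are
    stand-ins for the real system's criteria and codecs.
    """
    mc = np.asarray(mc, dtype=float)

    if mc.var() < flat_threshold:
        # Technique (i): the macro-cell is nearly uniform, so compress it
        # as a whole with a single representative value.
        return ("whole", int(mc.mean()))

    # Technique (ii): compress each of the 4 non-overlapping 4x4 cells
    # separately, preserving more local detail.
    cells = [int(mc[r:r + 4, c:c + 4].mean()) for r in (0, 4) for c in (0, 4)]
    return ("per-cell", cells)

# Usage: a macro-cell with a sharp left/right split selects per-cell compression.
mc = np.zeros((8, 8))
mc[:, 4:] = 255
print(compress_macro_cell(mc))
```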
Abstract:
Example systems and related methods relate to rendering using smooth shading. A computing device can receive information about an image, the information including one or more patches, where a particular patch of the one or more patches is specified using a plurality of non-linear equations. The computing device can determine one or more linear approximations to a particular non-linear equation of the plurality of non-linear equations. The computing device can update the particular patch to replace the particular non-linear equation with at least one linear approximation of the one or more linear approximations. The computing device can render at least part of the image, at least in part by rendering the updated particular patch. The computing device can generate an output that includes the rendered part of the image.
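As a sketch of the linearization step, assuming the patch boundary is a cubic Bézier curve and that uniform sampling in the curve parameter is acceptable; the function name, segment count, and sampling strategy are illustrative rather than the method's actual approximation.

```python
import numpy as np

def linearize_cubic_bezier(p0, p1, p2, p3, segments=8):
    """Replace a cubic Bezier boundary curve with a piecewise-linear approximation.

    p0..p3: control points as (x, y) pairs. Returns a list of straight-line
    segments whose endpoints lie on the curve. Uniform sampling in t is an
    illustrative choice; an adaptive (flatness-driven) split could be used instead.
    """
    pts = np.asarray([p0, p1, p2, p3], dtype=float)
    t = np.linspace(0.0, 1.0, segments + 1)[:, None]

    # Evaluate the cubic Bezier with the Bernstein basis.
    curve = ((1 - t) ** 3 * pts[0] + 3 * (1 - t) ** 2 * t * pts[1]
             + 3 * (1 - t) * t ** 2 * pts[2] + t ** 3 * pts[3])

    # Consecutive sample points define the linear segments that replace
    # the non-linear equation in the updated patch.
    return list(zip(curve[:-1], curve[1:]))

# Usage: approximate one curved patch edge with 8 line segments.
segments = linearize_cubic_bezier((0, 0), (10, 40), (30, 40), (40, 0))
print(len(segments), segments[0])
```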