Abstract:
A neural network system includes a separable convolution subnetwork. The separable convolution subnetwork includes a plurality of separable convolutional neural network (SCNN) layers arranged in a stack in sequence. Each of the SCNN layers applies a first grouped convolution to an input to that layer. The input to the first grouped convolution includes a plurality of channels, and the first grouped convolution is a spatial convolution that divides the channels of its input into groups in a channel-wise manner, convolves each group, and combines the convolved channels to generate an output.
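As an illustration of the grouped spatial convolution described above, the following is a minimal sketch assuming a PyTorch-style implementation; the class name, group count, and kernel size are illustrative choices, not taken from the abstract.

```python
# Hypothetical sketch of one SCNN layer's first grouped convolution.
import torch
import torch.nn as nn

class GroupedSpatialConv(nn.Module):
    def __init__(self, channels: int, groups: int, kernel_size: int = 3):
        super().__init__()
        # Channel-wise grouping: the input channels are divided into `groups`
        # groups, and each group is convolved independently (a spatial
        # convolution per group).
        self.conv = nn.Conv2d(
            in_channels=channels,
            out_channels=channels,
            kernel_size=kernel_size,
            padding=kernel_size // 2,
            groups=groups,
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The per-group outputs are produced side by side and combined along
        # the channel dimension to form the layer output.
        return self.conv(x)

# Usage: a stack of such layers, mirroring the "plurality of SCNN layers".
stack = nn.Sequential(*[GroupedSpatialConv(channels=32, groups=8) for _ in range(3)])
out = stack(torch.randn(1, 32, 64, 64))
```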
Abstract:
A quantization unit executes quantization processing on first print data corresponding to a first scan using a first threshold matrix and a third threshold matrix, and executes the quantization processing on second print data corresponding to a second scan using a second threshold matrix and a fourth threshold matrix. A first degree to which the dot arrangement resulting from quantization using the third threshold matrix and the dot arrangement resulting from quantization using the fourth threshold matrix hold an exclusive relationship is smaller than a second degree to which the dot arrangement resulting from quantization using the first threshold matrix and the dot arrangement resulting from quantization using the second threshold matrix hold the exclusive relationship.
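The following sketch illustrates one possible reading of this quantization scheme. The assumptions are mine: the threshold matrices are simple tiled 2D arrays, "quantization" places a dot where the print data meets or exceeds the threshold, and the "degree of exclusive relationship" is measured as the fraction of dot positions that do not overlap between the two scans.

```python
# Illustrative sketch only; matrix contents and the exclusivity metric are
# assumptions, not the patent's definitions.
import numpy as np

def quantize(print_data: np.ndarray, threshold_matrix: np.ndarray) -> np.ndarray:
    """Return a binary dot arrangement: 1 where a dot is printed."""
    h, w = print_data.shape
    th, tw = threshold_matrix.shape
    tiled = np.tile(threshold_matrix, (h // th + 1, w // tw + 1))[:h, :w]
    return (print_data >= tiled).astype(np.uint8)

def exclusivity_degree(dots_a: np.ndarray, dots_b: np.ndarray) -> float:
    """Fraction of dot positions that do NOT overlap between the two scans."""
    both = np.logical_and(dots_a, dots_b).sum()
    any_dot = np.logical_or(dots_a, dots_b).sum()
    return 1.0 - both / max(any_dot, 1)

rng = np.random.default_rng(0)
data1 = rng.integers(0, 256, (64, 64))   # first print data (first scan)
data2 = rng.integers(0, 256, (64, 64))   # second print data (second scan)
m1, m2, m3, m4 = (rng.integers(0, 256, (16, 16)) for _ in range(4))

first_degree = exclusivity_degree(quantize(data1, m3), quantize(data2, m4))
second_degree = exclusivity_degree(quantize(data1, m1), quantize(data2, m2))
# The abstract's condition corresponds to: first_degree < second_degree.
```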
Abstract:
In an information processing system in which a plurality of modules are connected to a ring bus, data transfer efficiency is enhanced by deleting an unnecessary packet from the ring bus. This invention relates to an information processing system in which a plurality of modules that execute data processing are connected to a ring bus, and more particularly to a ring bus operation technique that allows efficient data transfer by monitoring a flag of a packet and removing an unnecessary packet from the ring bus.
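Below is a toy model of this mechanism, offered only as an illustration: packets circulating on the ring carry a validity flag, and a monitoring step frees the slot of any packet whose flag marks it as unnecessary. The flag semantics and class names are my assumptions, not the patent's exact definitions.

```python
# Hypothetical ring-bus model: unnecessary packets are removed so their slots
# can carry new data, improving transfer efficiency.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Packet:
    data: int
    valid: bool = True      # cleared once the packet is no longer needed

class RingBus:
    def __init__(self, num_slots: int):
        self.slots: List[Optional[Packet]] = [None] * num_slots

    def rotate(self) -> None:
        """Advance every packet one slot around the ring."""
        self.slots = [self.slots[-1]] + self.slots[:-1]

    def monitor_and_clean(self) -> None:
        """Remove packets whose flag marks them as unnecessary, freeing slots."""
        for i, pkt in enumerate(self.slots):
            if pkt is not None and not pkt.valid:
                self.slots[i] = None   # slot becomes free for new data

bus = RingBus(num_slots=4)
bus.slots[0] = Packet(data=42)
bus.slots[1] = Packet(data=7, valid=False)   # already consumed -> unnecessary
bus.rotate()
bus.monitor_and_clean()                      # deletes the stale packet
```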