Abstract:
Disclosed is an electronic device. An electronic device according to various embodiments comprises: a housing elongated between a first end portion and a second end portion; a resonant circuit having a coil disposed inside the housing; a wireless communication circuit disposed inside the housing; a rectifier configured to rectify alternating-current power received via the resonant circuit into direct-current power; a battery charged using the direct-current power; a switch for selectively connecting the rectifier and the battery; and a voltage detector configured to detect a voltage value of the direct-current power and, on the basis of the detected voltage value, transmit a control signal for turning the switch on and off to the switch and the wireless communication circuit, wherein, when an interval of time between control signals sequentially received from the voltage detector is equal to or less than a predetermined value, the wireless communication circuit may be set to ignore at least one control signal received after the sequentially received control signals. Various other embodiments may be provided.
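The signal-filtering behavior described above can be illustrated with a minimal sketch: if two control signals from the voltage detector arrive within the predetermined interval, the wireless communication circuit drops at least one signal that follows. The class name, the interval value, and the signal representation below are illustrative assumptions, not taken from the patent.

```python
import time

DEBOUNCE_INTERVAL_S = 0.05  # predetermined interval (assumed value)

class WirelessCommCircuit:
    """Ignores a control signal that follows a closely spaced pair of signals."""

    def __init__(self, debounce_interval: float = DEBOUNCE_INTERVAL_S):
        self.debounce_interval = debounce_interval
        self.last_signal_time = None
        self.ignore_next = False

    def on_control_signal(self, switch_on: bool, now: float = None) -> None:
        now = time.monotonic() if now is None else now
        if self.ignore_next:
            # Drop an additional signal received after the sequential pair.
            self.ignore_next = False
            self.last_signal_time = now
            return
        if (self.last_signal_time is not None
                and now - self.last_signal_time <= self.debounce_interval):
            # Interval between sequentially received control signals is at or
            # below the predetermined value: flag the next signal to be ignored.
            self.ignore_next = True
        self.last_signal_time = now
        self.handle(switch_on)

    def handle(self, switch_on: bool) -> None:
        print("switch", "on" if switch_on else "off")
```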
Abstract:
Disclosed is an electronic device including a housing, a display panel viewable through a part of the housing and configured to detect an input by a stylus pen, a processor operatively connected to the display panel, and a memory operatively connected to the processor, wherein the memory stores instructions which, when executed, cause the processor to receive a signal from the stylus pen via the display panel, determine a strength of the signal, a first phase of the signal, and a location of an input by the stylus pen based at least on the received signal, and adjust a threshold value used for determining a type of an input by the stylus pen based at least on the first phase.
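A rough sketch of the phase-dependent threshold adjustment is given below. The baseline threshold, the phase-to-threshold mapping, and the input-type labels (hover versus contact) are assumptions for illustration only; the patent does not specify them.

```python
import math

BASE_THRESHOLD = 100.0  # assumed baseline for classifying input type
PHASE_GAIN = 0.5        # assumed sensitivity of the threshold to phase

def adjust_threshold(first_phase_rad: float) -> float:
    """Return a classification threshold shifted according to the measured
    first phase of the stylus pen signal (illustrative mapping)."""
    return BASE_THRESHOLD * (1.0 - PHASE_GAIN * abs(math.sin(first_phase_rad)))

def classify_input(signal_strength: float, first_phase_rad: float) -> str:
    # Compare the signal strength against the phase-adjusted threshold.
    threshold = adjust_threshold(first_phase_rad)
    return "contact" if signal_strength >= threshold else "hover"
```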
Abstract:
Reducing computations in a neural network may include determining a group including a plurality of convolution kernels of a convolution stage of a neural network. The convolution kernels of the group are similar to one another. A base convolution kernel for the group may be determined. Scaling factors for a plurality of input feature maps processed by the group may be calculated. The convolution stage of the neural network may be modified to calculate a composite input feature map using the scaling factors and apply the base convolution kernel to the composite input feature map.
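The composite-input-feature-map idea rests on the approximation that each kernel in the group is a scaled copy of the base kernel, so the per-kernel convolutions can be folded into one convolution of a weighted sum of input maps. The sketch below shows this under that assumption; the least-squares scaling estimate and the use of scipy's 2-D correlation are illustrative choices, not the patented method's exact procedure.

```python
import numpy as np
from scipy.signal import correlate2d  # 2-D cross-correlation (CNN-style "convolution")

def scale_factor(kernel: np.ndarray, base_kernel: np.ndarray) -> float:
    # Least-squares scale that best maps the base kernel onto this kernel.
    return float(np.vdot(base_kernel, kernel) / np.vdot(base_kernel, base_kernel))

def grouped_convolution(input_maps, kernels, base_kernel):
    """Approximate sum_i conv(input_maps[i], kernels[i]) with a single
    convolution of a composite input feature map by the base kernel."""
    # Composite input feature map: scaling-factor-weighted sum of input maps.
    composite = sum(scale_factor(k, base_kernel) * x
                    for x, k in zip(input_maps, kernels))
    # One convolution with the base kernel replaces len(kernels) convolutions.
    return correlate2d(composite, base_kernel, mode="same")
```

Because convolution is linear, sum_i (x_i * s_i·K_base) equals (sum_i s_i·x_i) * K_base, which is where the computation saving comes from when the kernels of the group are nearly proportional to the base kernel.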
Abstract:
Executing a neural network includes generating an output tile of a first layer of the neural network by processing an input tile to the first layer and storing the output tile of the first layer in an internal memory of a processor. An output tile of a second layer of the neural network can be generated using the processor by processing the output tile of the first layer stored in the internal memory.
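A minimal sketch of this tile-wise, layer-fused execution follows. The placeholder layer functions and the use of a local variable to stand in for the processor's internal memory are assumptions for illustration.

```python
import numpy as np

def layer1(tile: np.ndarray) -> np.ndarray:
    return np.maximum(tile * 2.0, 0.0)   # placeholder first-layer operation

def layer2(tile: np.ndarray) -> np.ndarray:
    return np.maximum(tile - 1.0, 0.0)   # placeholder second-layer operation

def process_tile(input_tile: np.ndarray) -> np.ndarray:
    # The first layer's output tile stays in (simulated) internal memory
    # instead of being written back to external memory.
    internal_buffer = layer1(input_tile)   # output tile of the first layer
    return layer2(internal_buffer)         # output tile of the second layer

def run(feature_map: np.ndarray, tile_h: int, tile_w: int) -> np.ndarray:
    out = np.empty_like(feature_map)
    for r in range(0, feature_map.shape[0], tile_h):
        for c in range(0, feature_map.shape[1], tile_w):
            out[r:r+tile_h, c:c+tile_w] = process_tile(
                feature_map[r:r+tile_h, c:c+tile_w])
    return out
```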
Abstract:
According to various exemplary embodiments, there may be provided an electronic device including a housing having a first side and an opposite second side, a display disposed between the first side and the second side, an ElectroMagnetic Resonance (EMR) sensor pad disposed between the display and the second side, a pen placing space disposed between the first side and the second side to accommodate an electronic pen, and a detecting member disposed in the vicinity of the electronic pen for detecting the electronic pen when the electronic pen is fully inserted into the pen placing space.
Abstract:
A spiking neural network having a plurality of layers partitioned into a plurality of frustums using a first partitioning may be implemented, where each frustum includes one tile of each partitioned layer of the spiking neural network. A first tile of a first layer of the spiking neural network may be read. Using a processor, a first tile of a second layer of the spiking neural network may be generated using the first tile of the first layer while storing intermediate data within an internal memory of the processor. The first tile of the first layer and the first tile of the second layer belong to a same frustum.
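The following sketch illustrates frustum-based processing under stated assumptions: the Frustum data structure, the dictionary used to simulate the processor's internal memory, and the thresholded spiking update are all illustrative and not taken from the patent.

```python
from dataclasses import dataclass, field
from typing import Dict
import numpy as np

@dataclass
class Frustum:
    """Holds one tile per partitioned layer of the spiking neural network."""
    tiles: Dict[int, np.ndarray] = field(default_factory=dict)  # layer -> tile

def process_layer(tile: np.ndarray, threshold: float = 1.0) -> np.ndarray:
    # Placeholder spiking update: neurons whose membrane potential reaches
    # the threshold emit a spike (1.0), others stay silent (0.0).
    return (tile >= threshold).astype(np.float32)

def process_frustum(frustum: Frustum, num_layers: int) -> None:
    internal_memory: Dict[int, np.ndarray] = {}    # simulated on-chip storage
    internal_memory[0] = frustum.tiles[0]          # read the first layer's tile
    for layer in range(1, num_layers):
        # Generate the next layer's tile from the previous layer's tile;
        # the intermediate data never leaves the simulated internal memory.
        internal_memory[layer] = process_layer(internal_memory[layer - 1])
        frustum.tiles[layer] = internal_memory[layer]
```

All tiles touched by process_frustum belong to the same frustum, which is the property that keeps the intermediate data small enough to remain in internal memory.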