Abstract:
A semiconductor memory device includes a memory cell array including first memory cells and second memory cells, and a peripheral circuit. When a first command, a first address, and first input data are received, the peripheral circuit reads first data from the first memory cells based on the first address in response to the first command, performs a first operation by using the first data and the first input data, and reads second data from the second memory cells by using a result of the first operation.
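The abstract does not specify what the first operation is; the behavioral sketch below assumes, purely for illustration, that it is an address computation (first data plus first input data) whose result selects the second read. The class, command name, and cell contents are hypothetical.

class MemoryDeviceModel:
    def __init__(self, first_cells, second_cells):
        self.first_cells = first_cells    # models the first memory cells
        self.second_cells = second_cells  # models the second memory cells

    def handle_command(self, command, first_address, first_input_data):
        # Peripheral-circuit behavior triggered by the first command.
        if command != "CHAINED_READ":                       # hypothetical command name
            raise ValueError("unsupported command")
        first_data = self.first_cells[first_address]        # read first data
        result = first_data + first_input_data              # assumed first operation
        # Second read uses the result of the first operation.
        return self.second_cells[result % len(self.second_cells)]

device = MemoryDeviceModel(first_cells=[3, 7, 1], second_cells=list(range(16)))
print(device.handle_command("CHAINED_READ", first_address=1, first_input_data=2))  # -> 9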
Abstract:
A method controls a memory device that includes a page buffer circuit comprising a plurality of page buffers, each comprising at least one latch. The method includes generating, by an internal voltage circuit, at least one internal voltage among internal voltages used for an operation of the page buffer circuit, the internal voltage circuit providing the at least one internal voltage to the page buffer circuit; and providing, to the page buffer circuit, a control signal for forming an electrical connection between the internal voltage circuit and a first electrical node of a first page buffer that is unused for buffering in the page buffer circuit, during a set operation for a first latch of a second page buffer.
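A minimal sketch of the control flow described above, assuming the unused page buffer's node is connected only for the duration of the latch set operation on the other page buffer. The class and method names, the voltage value, and the release of the connection after the set are assumptions, not the patented circuit.

class PageBuffer:
    def __init__(self, name):
        self.name = name
        self.latch = 0
        self.node_connected_to_ivc = False   # state of the first electrical node

class MemoryControllerModel:
    def generate_internal_voltage(self):
        return 2.5                           # placeholder value for the internal voltage

    def set_latch(self, target: PageBuffer, unused: PageBuffer, value: int):
        v = self.generate_internal_voltage()  # internal voltage circuit output
        unused.node_connected_to_ivc = True   # control signal: connect the unused buffer's node
        target.latch = value                  # set operation on the first latch
        unused.node_connected_to_ivc = False  # assumed: connection released after the set
        return v

ctrl = MemoryControllerModel()
pb_unused, pb_target = PageBuffer("PB0"), PageBuffer("PB1")
ctrl.set_latch(pb_target, pb_unused, value=1)
print(pb_target.latch)   # -> 1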
Abstract:
A method of operating an accelerator includes receiving, from a central processing unit (CPU), commands for the accelerator and a peripheral device of the accelerator, processing the received commands according to the subject that is to perform each of the commands, and, after the performance of the commands is completed, transmitting to the CPU a completion message indicating that the performance of the commands is completed.
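A minimal sketch of this command flow, assuming each command is dispatched to whichever entity performs it (the accelerator itself or its peripheral device) and a single completion is reported to the CPU afterward. All names and payloads are hypothetical.

from dataclasses import dataclass

@dataclass
class Command:
    target: str    # "accelerator" or "peripheral": the subject of performance
    payload: str

def run_on_accelerator(cmd): print(f"accelerator executes {cmd.payload}")
def run_on_peripheral(cmd): print(f"peripheral executes {cmd.payload}")

def process_commands(commands):
    for cmd in commands:                      # dispatch according to who performs it
        if cmd.target == "accelerator":
            run_on_accelerator(cmd)
        else:
            run_on_peripheral(cmd)
    return "COMPLETION"                       # completion message returned to the CPU

print(process_commands([Command("accelerator", "matmul"), Command("peripheral", "dma_copy")]))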
Abstract:
A ternary content addressable memory device (TCAM) may include: a cache memory storing a look-up table of calculation results of a plurality of functions; an approximation unit configured to generate mask bits; and a controller configured to obtain an approximation input value corresponding to an input key based on the mask bits and to retrieve an output value corresponding to the obtained approximation input value from the look-up table.
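A minimal sketch of the approximate lookup, assuming the mask bits simply zero out low-order bits of the input key so that nearby keys map to the same look-up-table entry. The function, mask width, and table contents are illustrative only.

MASK_BITS = 0b11111000            # approximation-unit output: keep only the upper bits

def build_lookup_table(func, mask):
    # Precompute results only for the representative (masked) input values.
    return {k & mask: func(k & mask) for k in range(256)}

table = build_lookup_table(lambda x: x * x, MASK_BITS)   # cached calculation results

def tcam_lookup(input_key):
    approx_key = input_key & MASK_BITS   # approximation input value derived from the mask bits
    return table[approx_key]             # controller retrieves the stored result

print(tcam_lookup(0b10110110))   # same result as tcam_lookup(0b10110001)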
Abstract:
A method of processing multimedia data includes: separating a defined application kernel into a data patch kernel and a data processing kernel; requesting, by the data processing kernel, access to patch data of the multimedia data, from the data patch kernel; performing, by the data patch kernel, an operation that is independent of the request and preparing data for the data access based on the request; and performing, by the data processing kernel, an arithmetic operation on work items of the prepared data when the data has been prepared by the data patch kernel.
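A minimal sketch of the kernel split described above: a data-patch part that prepares patch data on request while also performing request-independent work, and a data-processing part that runs an arithmetic operation on the prepared work items. The class and method names, and the doubling operation, are assumptions.

class DataPatchKernel:
    def __init__(self, multimedia_data):
        self.data = multimedia_data
    def independent_work(self):
        pass                                     # e.g., prefetching; independent of the request
    def prepare_patch(self, offset, size):
        self.independent_work()
        return self.data[offset:offset + size]   # patch data prepared for the request

class DataProcessingKernel:
    def __init__(self, patch_kernel):
        self.patch_kernel = patch_kernel
    def process(self, offset, size):
        patch = self.patch_kernel.prepare_patch(offset, size)   # request access to patch data
        return [x * 2 for x in patch]                           # arithmetic on each work item

frames = list(range(100))
proc = DataProcessingKernel(DataPatchKernel(frames))
print(proc.process(offset=10, size=4))   # -> [20, 22, 24, 26]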
Abstract:
An interconnect device may include one or more hardware-implemented modules configured to: receive a command from a processing core; perform, based on the received command, an operation including either one or both of an accumulation of sets of data stored in a memory and an aggregation of results processed by the processing core; and provide a result of the performing of the operation.
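A minimal sketch of the interconnect behavior: on a command from a processing core it accumulates data sets held in memory, aggregates results produced by the core, or both, and returns the outcome. The command names and the use of summation as the reduction are assumptions.

class InterconnectModel:
    def __init__(self, memory):
        self.memory = memory                     # sets of data stored in memory

    def handle(self, command, core_results=None):
        if command == "ACCUMULATE":              # accumulate the memory-resident data sets
            return sum(sum(s) for s in self.memory)
        if command == "AGGREGATE":               # aggregate results from the processing core
            return sum(core_results)
        if command == "BOTH":                    # either one or both, per the abstract
            return self.handle("ACCUMULATE") + self.handle("AGGREGATE", core_results)
        raise ValueError("unknown command")

ic = InterconnectModel(memory=[[1, 2], [3, 4]])
print(ic.handle("BOTH", core_results=[10, 20]))   # -> 40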
Abstract:
Provided are a multimedia data processing system and a selective caching method. The selective caching method in the multimedia data processing system includes inserting cacheability indicator information into an address translation table descriptor undergoing memory allocation to a graphics resource when the graphics resource needs to be cached, and selectively controlling, with reference to the cacheability indicator information during an address translation operation of a graphics processing unit (GPU), whether or not to prefetch multimedia data of the graphics resource present in a main memory to a system-level cache memory. The inventive concept can be implemented in a wide variety of computer-based systems having a graphical output, such as cell phones, laptops, tablets, and personal computers, as only a few examples.
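A minimal sketch of the selective-caching decision, assuming the cacheability indicator is a flag stored in the address-translation descriptor that drives whether the resource's data is prefetched into the system-level cache at GPU translation time. The table, cache, and address values are illustrative.

page_table = {}           # virtual page -> descriptor
system_level_cache = {}   # models the system-level cache
main_memory = {0x1000: b"texture-data"}

def allocate(virtual_page, physical_addr, cacheable):
    # Insert the cacheability indicator when memory is allocated to the graphics resource.
    page_table[virtual_page] = {"pa": physical_addr, "cacheable": cacheable}

def gpu_translate(virtual_page):
    desc = page_table[virtual_page]
    if desc["cacheable"]:                                          # indicator checked at translation time
        system_level_cache[desc["pa"]] = main_memory[desc["pa"]]   # selective prefetch
    return desc["pa"]

allocate(virtual_page=7, physical_addr=0x1000, cacheable=True)
print(gpu_translate(7), 0x1000 in system_level_cache)   # -> 4096 True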
Abstract:
A multimedia data processing method is provided which includes providing a conflict detection unit at a load/store pipeline unit; generating, by the conflict detection unit, speculative conflict information that predictively determines, before a cache access operation is performed, whether an address of a load/store instruction of a current thread causes a conflict miss, by performing a history search over load/store instruction addresses of previous threads without referring to a cache memory; and storing information of the current thread directly in a standby buffer, without execution of the cache access operation, in response to the generated speculative conflict information indicating the conflict miss.
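A minimal sketch of the speculative path: previous threads' load/store addresses are kept in a small history, a matching cache-set index is treated as a predicted conflict miss, and the current thread is parked in a standby buffer without touching the cache. The set-index hash (64-byte lines, 64 sets) and the buffer handling are assumptions.

NUM_SETS = 64
history = []          # cache-set indices touched by previous threads
standby_buffer = []   # threads parked without performing a cache access

def set_index(addr):
    return (addr >> 6) % NUM_SETS             # assumed 64-byte lines, 64 sets

def speculative_conflict(addr):
    return set_index(addr) in history         # history search, no reference to the cache memory

def issue_load_store(thread_id, addr):
    if speculative_conflict(addr):
        standby_buffer.append((thread_id, addr))   # conflict miss predicted: skip the cache access
        return "standby"
    history.append(set_index(addr))
    return "cache-access"

print(issue_load_store(0, 0x1000))   # -> cache-access
print(issue_load_store(1, 0x2000))   # same set index -> standby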