Invention Grant
- Patent Title: Cache affinity and processor utilization technique
- Application No.: US15051947
- Application Date: 2016-02-24
- Publication No.: US09842008B2
- Publication Date: 2017-12-12
- Inventors: Jeffrey S. Kimmel, Christopher Joseph Corsi, Venkatesh Babu Chitlur Srinivasa
- Applicant: NetApp, Inc.
- Applicant Address: Sunnyvale, CA, US
- Assignee: NetApp, Inc.
- Current Assignee: NetApp, Inc.
- Current Assignee Address: Sunnyvale, CA, US
- Agency: Cesari and McKenna, LLP
- Main IPC: G06F9/46
- IPC: G06F9/46 ; G06F9/50 ; G06F12/084

Abstract:
A cache affinity and processor utilization technique efficiently load balances work in a storage input/output (I/O) stack among a plurality of processors and associated processor cores of a node. The storage I/O stack employs one or more non-blocking messaging kernel (MK) threads that execute non-blocking message handlers (i.e., non-blocking services). The technique load balances work between the processor cores sharing a last level cache (LLC) (i.e., intra-LLC processor load balancing), and load balances work between the processors having separate LLCs (i.e., inter-LLC processor load balancing). The technique may allocate a predetermined number of logical processors for use by an MK scheduler to schedule the non-blocking services within the storage I/O stack, as well as allocate a remaining number of logical processors for use by blocking services, e.g., scheduled by an operating system kernel scheduler.
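The abstract describes two cooperating policies: a fixed partition of the node's logical processors between non-blocking MK services and blocking OS-scheduled services, and a two-level balancing rule that prefers a core sharing the source core's LLC (intra-LLC) and spills to another LLC only when the local cache group is saturated. The sketch below illustrates that policy in simplified form; the topology map, the `spill_threshold` parameter, and all function names are illustrative assumptions, not the patent's actual implementation.

```python
# Illustrative sketch of the load-balancing policy summarized in the abstract.
# LLC_OF, partition_processors, pick_cpu, and spill_threshold are hypothetical
# names; real systems would derive topology from the hardware (e.g., sysfs).

# Assumed topology: logical processor -> LLC id (two sockets, 4 CPUs each).
LLC_OF = {0: 0, 1: 0, 2: 0, 3: 0, 4: 1, 5: 1, 6: 1, 7: 1}

def partition_processors(all_cpus, mk_count):
    """Reserve mk_count logical processors for non-blocking MK services;
    the remainder is left to the OS kernel scheduler for blocking services."""
    return all_cpus[:mk_count], all_cpus[mk_count:]

def pick_cpu(load, source_cpu, spill_threshold=0.75):
    """Prefer the least-loaded CPU sharing source_cpu's LLC (intra-LLC
    balancing); spill to another LLC only when every peer in the local
    cache group is loaded at or above spill_threshold."""
    llc = LLC_OF[source_cpu]
    peers = [c for c in load if LLC_OF[c] == llc]
    best_peer = min(peers, key=lambda c: load[c])
    if load[best_peer] < spill_threshold:
        return best_peer                      # intra-LLC: stay cache-warm
    return min(load, key=lambda c: load[c])   # inter-LLC: spill to other LLC
```

Keeping work within one LLC preserves cache affinity for the message handlers, while the spill path prevents a saturated cache group from starving work that an idle remote LLC could absorb.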
Public/Granted literature
- US20160246655A1 CACHE AFFINITY AND PROCESSOR UTILIZATION TECHNIQUE, Published: 2016-08-25