- Patent title: Cache affinity and processor utilization technique
- Application number: US15051947
- Filing date: 2016-02-24
- Publication (grant) number: US09842008B2
- Publication (grant) date: 2017-12-12
- Inventors: Jeffrey S. Kimmel, Christopher Joseph Corsi, Venkatesh Babu Chitlur Srinivasa
- Applicant: NetApp, Inc.
- Applicant address: Sunnyvale, CA, US
- Assignee: NetApp, Inc.
- Current assignee: NetApp, Inc.
- Current assignee address: Sunnyvale, CA, US
- Attorney/Agent: Cesari and McKenna, LLP
- Primary classification: G06F9/46
- IPC classifications: G06F9/46; G06F9/50; G06F12/084
Abstract:
A cache affinity and processor utilization technique efficiently load balances work in a storage input/output (I/O) stack among a plurality of processors and associated processor cores of a node. The storage I/O stack employs one or more non-blocking messaging kernel (MK) threads that execute non-blocking message handlers (i.e., non-blocking services). The technique load balances work between the processor cores sharing a last level cache (LLC) (i.e., intra-LLC processor load balancing), and load balances work between the processors having separate LLCs (i.e., inter-LLC processor load balancing). The technique may allocate a predetermined number of logical processors for use by an MK scheduler to schedule the non-blocking services within the storage I/O stack, as well as allocate a remaining number of logical processors for use by blocking services, e.g., scheduled by an operating system kernel scheduler.
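The abstract describes partitioning a node's logical processors so that a predetermined set is reserved for the MK scheduler's non-blocking services while the remaining processors run blocking services under the operating system kernel scheduler. The following is a minimal sketch, not the patented implementation, of how such a partition might be expressed on Linux using thread CPU affinity; the constant MK_CPUS and the choice of CPU ranges are assumptions made for illustration only.

```c
/*
 * Sketch only: reserve a fixed set of logical CPUs for non-blocking
 * message-handler ("MK") threads and leave the rest to the OS kernel
 * scheduler for blocking services. MK_CPUS and the CPU numbering are
 * illustrative assumptions, not values from the patent.
 */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

#define MK_CPUS 4   /* logical CPUs reserved for non-blocking services (assumed) */

/* Pin the calling thread to a single logical CPU to preserve cache affinity. */
static int pin_to_cpu(int cpu)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    return pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

/* Non-blocking message-handler loop, pinned to one of the reserved CPUs. */
static void *mk_service(void *arg)
{
    int cpu = (int)(long)arg;
    if (pin_to_cpu(cpu) != 0)
        fprintf(stderr, "failed to pin MK thread to CPU %d\n", cpu);
    /* ... poll and run non-blocking message handlers here ... */
    return NULL;
}

int main(void)
{
    long ncpus = sysconf(_SC_NPROCESSORS_ONLN);
    pthread_t tids[MK_CPUS];

    /* Reserve CPUs [0, MK_CPUS) for the MK scheduler's non-blocking services. */
    for (int cpu = 0; cpu < MK_CPUS && cpu < ncpus; cpu++)
        pthread_create(&tids[cpu], NULL, mk_service, (void *)(long)cpu);

    /*
     * Blocking services would run on the remaining CPUs [MK_CPUS, ncpus),
     * scheduled normally by the operating system kernel scheduler.
     */
    for (int cpu = 0; cpu < MK_CPUS && cpu < ncpus; cpu++)
        pthread_join(tids[cpu], NULL);
    return 0;
}
```

In practice, grouping the reserved CPUs so they share a last level cache corresponds to the intra-LLC load balancing the abstract mentions, while distributing work across groups with separate LLCs corresponds to inter-LLC load balancing.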
Publication/grant documents
- US20160246655A1 CACHE AFFINITY AND PROCESSOR UTILIZATION TECHNIQUE Publication/grant date: 2016-08-25