swap: turn get_swap_page() into folio_alloc_swap()
This removes an assumption that a large folio is HPAGE_PMD_NR pages in size.

Link: https://lkml.kernel.org/r/20220504182857.4013401-8-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
parent d33e4e1412
commit e2e3fdc7d4

6 changed files with 35 additions and 31 deletions
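For orientation (this is not part of the diff below), here is a simplified sketch of what the rename amounts to: the allocator now takes a folio and sizes the request with folio_nr_pages() instead of assuming HPAGE_PMD_NR for a large folio. The body below is an approximation of the idea, not the exact kernel source:

/*
 * Illustrative sketch only, not the exact kernel source: the old
 * get_swap_page(struct page *) becomes folio_alloc_swap(), and the
 * number of swap slots requested for a large folio comes from
 * folio_nr_pages() rather than the fixed HPAGE_PMD_NR.
 */
swp_entry_t folio_alloc_swap(struct folio *folio)
{
	swp_entry_t entry = { .val = 0 };

	if (folio_test_large(folio)) {
		if (IS_ENABLED(CONFIG_THP_SWAP))
			/* was: get_swap_pages(1, &entry, HPAGE_PMD_NR) */
			get_swap_pages(1, &entry, folio_nr_pages(folio));
		return entry;
	}

	/* order-0 folios still ask for a single slot */
	get_swap_pages(1, &entry, 1);
	return entry;
}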
@@ -77,9 +77,9 @@ static PLIST_HEAD(swap_active_head);
 /*
  * all available (active, not full) swap_info_structs
  * protected with swap_avail_lock, ordered by priority.
- * This is used by get_swap_page() instead of swap_active_head
+ * This is used by folio_alloc_swap() instead of swap_active_head
  * because swap_active_head includes all swap_info_structs,
- * but get_swap_page() doesn't need to look at full ones.
+ * but folio_alloc_swap() doesn't need to look at full ones.
  * This uses its own lock instead of swap_lock because when a
  * swap_info_struct changes between not-full/full, it needs to
  * add/remove itself to/from this list, but the swap_info_struct->lock
@@ -2109,11 +2109,12 @@ retry:
 	 * Under global memory pressure, swap entries can be reinserted back
 	 * into process space after the mmlist loop above passes over them.
 	 *
-	 * Limit the number of retries? No: when mmget_not_zero() above fails,
-	 * that mm is likely to be freeing swap from exit_mmap(), which proceeds
-	 * at its own independent pace; and even shmem_writepage() could have
-	 * been preempted after get_swap_page(), temporarily hiding that swap.
-	 * It's easy and robust (though cpu-intensive) just to keep retrying.
+	 * Limit the number of retries? No: when mmget_not_zero()
+	 * above fails, that mm is likely to be freeing swap from
+	 * exit_mmap(), which proceeds at its own independent pace;
+	 * and even shmem_writepage() could have been preempted after
+	 * folio_alloc_swap(), temporarily hiding that swap. It's easy
+	 * and robust (though cpu-intensive) just to keep retrying.
 	 */
 	if (READ_ONCE(si->inuse_pages)) {
 		if (!signal_pending(current))
@@ -2327,7 +2328,7 @@ static void _enable_swap_info(struct swap_info_struct *p)
 	 * which on removal of any swap_info_struct with an auto-assigned
 	 * (i.e. negative) priority increments the auto-assigned priority
 	 * of any lower-priority swap_info_structs.
-	 * swap_avail_head needs to be priority ordered for get_swap_page(),
+	 * swap_avail_head needs to be priority ordered for folio_alloc_swap(),
 	 * which allocates swap pages from the highest available priority
 	 * swap_info_struct.
 	 */
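On the caller side (in hunks not shown above), sites that previously did "entry = get_swap_page(page)" now pass the folio directly. A hedged sketch of such a caller, with the surrounding swap-cache work omitted and details approximate:

/*
 * Hedged caller-side sketch (names and details approximate): the
 * point is only the switch from get_swap_page(page) to
 * folio_alloc_swap(folio).
 */
bool add_to_swap(struct folio *folio)
{
	swp_entry_t entry;

	entry = folio_alloc_swap(folio);	/* was: get_swap_page(&folio->page) */
	if (!entry.val)
		return false;

	/* ... add to the swap cache, mark the folio dirty, etc. ... */
	return true;
}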