android_kernel_xiaomi_sdm845/mm
Mel Gorman dc83edd941 mm: kswapd: use the classzone idx that kswapd was using for sleeping_prematurely()
When kswapd is woken up for a high-order allocation, it takes into account
the highest zone usable by the caller (the classzone idx).  During
allocation, this index is used to select the lowmem_reserve[] that should
be applied to the watermark calculation in zone_watermark_ok().

When balancing a node, kswapd considers the highest unbalanced zone to be
the classzone index.  This will always be at least the caller's
classzone_idx and can be higher.  However, sleeping_prematurely() always
considers the lowest zone (e.g. ZONE_DMA) to be the classzone index.
This means that sleeping_prematurely() can consider a zone to be balanced
that is unusable by the allocation request that originally woke kswapd.
This patch changes sleeping_prematurely() to use a classzone_idx matching
the value it used in balance_pgdat().
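
A minimal user-space sketch of the mismatch being fixed (this is not the
kernel patch itself; the zone layout, watermark and lowmem_reserve numbers
are invented for illustration, and zone_watermark_ok()/node_balanced() are
simplified stand-ins for the real kernel helpers):

/*
 * Illustrative only: models how a zone that passes its watermark for
 * classzone index 0 can fail it for a higher classzone index, because
 * a larger lowmem_reserve[] entry is added to the watermark.
 */
#include <stdbool.h>
#include <stdio.h>

#define MAX_NR_ZONES 3          /* e.g. DMA, NORMAL, HIGHMEM */

struct zone {
    long free_pages;
    long watermark_high;
    long lowmem_reserve[MAX_NR_ZONES];
};

/* Simplified zone_watermark_ok(): free pages must cover the high
 * watermark plus the reserve protecting this zone from allocations
 * whose highest usable zone is classzone_idx. */
static bool zone_watermark_ok(const struct zone *z, int classzone_idx)
{
    return z->free_pages >=
           z->watermark_high + z->lowmem_reserve[classzone_idx];
}

/* A node is treated as balanced only if every zone up to and including
 * classzone_idx passes its watermark for that classzone_idx. */
static bool node_balanced(const struct zone *zones, int classzone_idx)
{
    for (int i = 0; i <= classzone_idx; i++)
        if (!zone_watermark_ok(&zones[i], classzone_idx))
            return false;
    return true;
}

int main(void)
{
    struct zone zones[MAX_NR_ZONES] = {
        /* "DMA": just above its own watermark, but heavily reserved
         * against allocations aimed at higher zones. */
        { .free_pages = 120, .watermark_high = 100,
          .lowmem_reserve = { 0, 50, 200 } },
        { .free_pages = 900, .watermark_high = 800,
          .lowmem_reserve = { 0, 0, 100 } },
        { .free_pages = 700, .watermark_high = 600,
          .lowmem_reserve = { 0, 0, 0 } },
    };

    printf("balanced for classzone 0: %d\n", node_balanced(zones, 0));
    printf("balanced for classzone 2: %d\n", node_balanced(zones, 2));
    return 0;
}

Checked against classzone 0, as the old sleeping_prematurely() effectively
did, the node looks balanced; checked against the classzone index that
balance_pgdat() actually used, it does not, so kswapd would have gone back
to sleep prematurely.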

Signed-off-by: Mel Gorman <mel@csn.ul.ie>
Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
Reviewed-by: Eric B Munson <emunson@mgebm.net>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Simon Kirby <sim@hostway.ca>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Shaohua Li <shaohua.li@intel.com>
Cc: Dave Hansen <dave@linux.vnet.ibm.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Rik van Riel <riel@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2011-01-13 17:32:37 -08:00
backing-dev.c
bootmem.c
bounce.c
compaction.c mm: compaction: perform a faster migration scan when migrating asynchronously 2011-01-13 17:32:34 -08:00
debug-pagealloc.c
dmapool.c
fadvise.c
failslab.c
filemap_xip.c
filemap.c mm: remove likely() from grab_cache_page_write_begin() 2011-01-13 17:32:36 -08:00
fremap.c
highmem.c
hugetlb.c
hwpoison-inject.c
init-mm.c
internal.h mlock: do not hold mmap_sem for extended periods of time 2011-01-13 17:32:36 -08:00
Kconfig
Kconfig.debug
kmemcheck.c
kmemleak-test.c
kmemleak.c
ksm.c
maccess.c
madvise.c
Makefile
memblock.c
memcontrol.c
memory_hotplug.c mm: migration: cleanup migrate_pages API by matching types for offlining and sync 2011-01-13 17:32:34 -08:00
memory-failure.c mm: migration: allow migration to operate asynchronously and avoid synchronous compaction in the faster path 2011-01-13 17:32:34 -08:00
memory.c mlock: do not hold mmap_sem for extended periods of time 2011-01-13 17:32:36 -08:00
mempolicy.c mempolicy: remove tasklist_lock from migrate_pages 2011-01-13 17:32:36 -08:00
mempool.c
migrate.c mm: migration: cleanup migrate_pages API by matching types for offlining and sync 2011-01-13 17:32:34 -08:00
mincore.c
mlock.c mlock: do not hold mmap_sem for extended periods of time 2011-01-13 17:32:36 -08:00
mm_init.c
mmap.c
mmu_context.c
mmu_notifier.c
mmzone.c mm: page allocator: adjust the per-cpu counter threshold when memory is low 2011-01-13 17:32:31 -08:00
mprotect.c
mremap.c
msync.c
nommu.c mlock: do not hold mmap_sem for extended periods of time 2011-01-13 17:32:36 -08:00
oom_kill.c
page_alloc.c mm: kswapd: stop high-order balancing when any suitable zone is balanced 2011-01-13 17:32:37 -08:00
page_cgroup.c
page_io.c
page_isolation.c
page-writeback.c mm/page-writeback.c: fix __set_page_dirty_no_writeback() return value 2011-01-13 17:32:32 -08:00
pagewalk.c
percpu-km.c
percpu-vm.c mm: remove gfp mask from pcpu_get_vm_areas 2011-01-13 17:32:34 -08:00
percpu.c
prio_tree.c
quicklist.c
readahead.c
rmap.c
shmem.c
slab.c
slob.c
slub.c mm: convert sprintf_symbol to %pS 2011-01-13 17:32:33 -08:00
sparse-vmemmap.c
sparse.c
swap_state.c
swap.c
swapfile.c
thrash.c
truncate.c
util.c
vmalloc.c vmalloc: remove redundant unlikely() 2011-01-13 17:32:36 -08:00
vmscan.c mm: kswapd: use the classzone idx that kswapd was using for sleeping_prematurely() 2011-01-13 17:32:37 -08:00
vmstat.c mm: vmstat: use a single setter function and callback for adjusting percpu thresholds 2011-01-13 17:32:31 -08:00