If a node page is truncated, we should drop the page from the node_inode's page
cache to reduce the memory footprint.
Change-Id: I3c7187676b5899c211857b9b94fad4f2a412e34d
Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
This patch adds NODE_MAPPING, which is similar to the META_MAPPING introduced by
Gu Zheng.
Change-Id: I535e5fc3573f408245c4630934a308db06e1f999
Cc: Gu Zheng <guz.fnst@cn.fujitsu.com>
Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
Since orphan_blocks can be as large as 504, it is neither safe nor rigorous
to store such a large array on the kernel stack, as Dan Carpenter pointed out.
In fact, grab_meta_page has already locked the page in the page cache, so we
can use find_get_page() to fetch the page safely later on and remove the page
array entirely.
Change-Id: Ie73897038bc06f6a6342d6fbdab8f8ff36b2cd96
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Gu Zheng <guz.fnst@cn.fujitsu.com>
Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
Introduce the helper function META_MAPPING() to get the address space of the
cached meta blocks.
Change-Id: I5dcfb055444bbdd6160c3b5b631878f135c767ad
Signed-off-by: Gu Zheng <guz.fnst@cn.fujitsu.com>
Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
This patch moves a function in f2fs_delete_entry for code readability.
Change-Id: I00419bcbd9d353a95491af6d8ec8e418b45171f3
Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
If a dentry page is updated, we should call mark_inode_dirty to add the inode
to the dirty list, so that its dentry pages are flushed to disk.
Otherwise, the inode can be evicted without being flushed.
Change-Id: Ia31244451976a13a69e99342c5c4f5e5238a7b4f
Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
Fixed a variety of trivial checkpatch warnings. The only delta should
be some minor formatting on log strings that were split / too long.
Change-Id: I1477b17f7d4e2acd2d5370bb389f0bd064834059
Signed-off-by: Chris Fries <cfries@motorola.com>
Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
When doing sync_meta_pages with META_FLUSH at checkpoint time, we override rw
with WRITE_FLUSH_FUA. At this time, we should also set REQ_META|REQ_PRIO.
Change-Id: I079dc24c8d03451c942b150df7650c3967bb3ea1
Signed-off-by: Changman Lee <cm224.lee@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
This patch should resolve the following bug.
=========================================================
[ INFO: possible irq lock inversion dependency detected ]
3.13.0-rc5.f2fs+ #6 Not tainted
---------------------------------------------------------
kswapd0/41 just changed the state of lock:
(&sbi->gc_mutex){+.+.-.}, at: [<ffffffffa030503e>] f2fs_balance_fs+0xae/0xd0 [f2fs]
but this lock took another, RECLAIM_FS-READ-unsafe lock in the past:
(&sbi->cp_rwsem){++++.?}
and interrupts could create inverse lock ordering between them.
other info that might help us debug this:
Chain exists of:
&sbi->gc_mutex --> &sbi->cp_mutex --> &sbi->cp_rwsem
Possible interrupt unsafe locking scenario:
       CPU0                    CPU1
       ----                    ----
  lock(&sbi->cp_rwsem);
                               local_irq_disable();
                               lock(&sbi->gc_mutex);
                               lock(&sbi->cp_mutex);
  <Interrupt>
    lock(&sbi->gc_mutex);
 *** DEADLOCK ***
This bug is caused by the f2fs_balance_fs call in f2fs_write_data_page.
If f2fs_write_data_page is triggered by wbc->for_reclaim via kswapd, it should
not call f2fs_balance_fs, which tries to take a mutex already held by the
original syscall flow.
Change-Id: I795c071696885b5d048750fb10ae53594f896a2f
Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
Add support for f2fs-tools/tools/f2stat to monitor
/sys/kernel/debug/f2fs/status.
Change-Id: Ic97f5b9da15129a24959d0fbaeb91e1c84c7f1ac
Signed-off-by: Changman Lee <cm224.lee@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
With the two previous changes, all the long-running operations have been moved
out of the protected region, so we can now use a spinlock rather than a mutex
(orphan_inode_mutex) for lower overhead.
Change-Id: Iafe2320add741da9e119de9f30b0f90799d68b27
Signed-off-by: Gu Zheng <guz.fnst@cn.fujitsu.com>
Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
Change-Id: I311a4ca284237c557bdbfbc37d1929c905d3c059
Move the allocation of new orphan nodes out of the lock-protected region.
Change-Id: Iea8bae6e1561e1a0644416ae07177c8f165e5393
Signed-off-by: Gu Zheng <guz.fnst@cn.fujitsu.com>
Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
"boo sync" parameter is never referenced in f2fs_wait_on_page_writeback.
We should remove this parameter.
Change-Id: I14f0cb0eba62c04a53181674d254753b6cdc0f7f
Signed-off-by: Yuan Zhong <yuan.mark.zhong@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
Previously, during SSR and GC, the maximum number of retries to find a victim
segment was hard-coded by MAX_VICTIM_SEARCH, 4096 by default.
This number affects IO locality when SSR mode is activated, which results in
performance fluctuation on some low-end devices.
If max_victim_search = 4, the victim will be searched as below.
("D" represents a dirty segment, and "*" indicates a selected victim segment.)
D1 D2 D3 D4 D5 D6 D7 D8 D9
[ * ]
[ * ]
[ * ]
[ ....]
This patch adds a sysfs entry to control the number dynamically through:
/sys/fs/f2fs/$dev/max_victim_search
Change-Id: I0f98609540351fd7451e3c23bc65f8d165ba13b6
Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
When considering a bunch of data writes with very frequent fsync calls, we can
expect the following performance regression.
N: Node IO, D: Data IO, IO scheduler: cfq
Issue pending IOs
D1 D2 D3 D4
D1 D2 D3 D4 N1
D2 D3 D4 N1 N2
N1 D3 D4 N2 D1
--> N1 can be selected by cfq because N and D have the same priority.
Then D3 and D4 would be delayed, resulting in performance degradation.
So, when processing the fsync call, it is better to give higher priority to
data IOs than to node IOs by assigning WRITE and WRITE_SYNC respectively.
This patch improves random write performance with frequent fsync calls by up
to 10%.
Change-Id: I4812ac05db179d83914c7bb0942fa6738280e1ea
Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
Initial backporting was done by nowcomputing
(https://github.com/nowcomputing/f2fs-backports.git).
Additional patches required by upstream jaegeuk/f2fs.git/linux-3.4 were done by arter97.
Change-Id: Ibbd3a608857338482f974fa4b1a8d3c02c267d9f
Signed-off-by: Park Ju Hyung <qkrwngud825@gmail.com>
Change-Id: I20b6a2877e072a4b9639001fa0198837cfa74aff
Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs.git (branch dev)
Up-to-date with
git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs.git @04a17fb17fafada39f96bfb41ceb2dc1c11b2af6
(f2fs: avoid to read inline data except first page)
Change-Id: I1fc76a61defd530c4e97587980ba43e98db6119e
Signed-off-by: Park Ju Hyung <qkrwngud825@gmail.com>