- 16 Feb, 2017 1 commit
-
-
Miklos Szeredi authored
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Fixes: d82718e3 ("fuse_dev_splice_read(): switch to add_to_pipe()")
Cc: <stable@vger.kernel.org> # 4.9+
-
- 15 Feb, 2017 1 commit
-
-
Sahitya Tummala authored
There is a potential race between the fuse_dev_do_write() and request_wait_answer() contexts, as shown below:

TASK 1:
__fuse_request_send():
  |--spin_lock(&fiq->waitq.lock);
  |--queue_request();
  |--spin_unlock(&fiq->waitq.lock);
  |--request_wait_answer():
       |--if (test_bit(FR_SENT, &req->flags))
          <gets pre-empted after it is validated true>

TASK 2:
fuse_dev_do_write():
  |--clears bit FR_SENT,
  |--request_end():
       |--sets bit FR_FINISHED
       |--spin_lock(&fiq->waitq.lock);
       |--list_del_init(&req->intr_entry);
       |--spin_unlock(&fiq->waitq.lock);
       |--fuse_put_request();

TASK 1 (resumes):
       |--queue_interrupt();
          <request gets queued to interrupts list>
       |--wake_up_locked(&fiq->waitq);
       |--wait_event_freezable();
          <as FR_FINISHED is set, it returns and then the caller frees this request>

Now the next fuse_dev_do_read() sees that the interrupts list is not empty and calls fuse_read_interrupt(), which tries to access the request that has already been freed, producing the crash below:

[11432.401266] Unable to handle kernel paging request at virtual address 6b6b6b6b6b6b6b6b
...
[11432.418518] Kernel BUG at ffffff80083720e0
[11432.456168] PC is at __list_del_entry+0x6c/0xc4
[11432.463573] LR is at fuse_dev_do_read+0x1ac/0x474
...
[11432.679999] [<ffffff80083720e0>] __list_del_entry+0x6c/0xc4
[11432.687794] [<ffffff80082c65e0>] fuse_dev_do_read+0x1ac/0x474
[11432.693180] [<ffffff80082c6b14>] fuse_dev_read+0x6c/0x78
[11432.699082] [<ffffff80081d5638>] __vfs_read+0xc0/0xe8
[11432.704459] [<ffffff80081d5efc>] vfs_read+0x90/0x108
[11432.709406] [<ffffff80081d67f0>] SyS_read+0x58/0x94

As the FR_FINISHED bit is set before intr_entry is deleted (with the input queue lock held) in the request completion path, test this flag and queue the interrupt atomically under the same lock in queue_interrupt().

Signed-off-by: Sahitya Tummala <stummala@codeaurora.org>
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Fixes: fd22d62e ("fuse: no fc->lock for iqueue parts")
Cc: <stable@vger.kernel.org> # 4.2+
-
- 13 Jan, 2017 1 commit
-
-
Tahsin Erdogan authored
fuse_abort_conn() moves requests from the pending list to a temporary list before canceling them. This operation races with request_wait_answer(), which also tries to remove the request after it gets a fatal signal. It checks the FR_PENDING flag to determine whether the request is still on the pending list. Make fuse_abort_conn() clear FR_PENDING so that request_wait_answer() does not remove the request from the temporary list.

This bug causes an Oops when trying to delete an already deleted list entry in end_requests().

Fixes: ee314a87 ("fuse: abort: no fc->lock needed for request ending")
Signed-off-by: Tahsin Erdogan <tahsin@google.com>
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Cc: <stable@vger.kernel.org> # 4.2+
-
- 05 Oct, 2016 4 commits
-
-
Miklos Szeredi authored
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Miklos Szeredi authored
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Miklos Szeredi authored
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Miklos Szeredi authored
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
- 04 Oct, 2016 2 commits
-
-
Al Viro authored
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Al Viro authored
* splice_to_pipe() stops at pipe overflow and does *not* take pipe_lock
* ->splice_read() instances do the same
* vmsplice_to_pipe() and do_splice() (the ultimate callers of splice_to_pipe()) arrange for waiting, looping, etc. themselves.

That should make pipe_lock the outermost one.

Unfortunately, the existing rules for the amount passed by vmsplice_to_pipe() and do_splice() are quite ugly _and_ userland code can be easily broken by changing those. It's not even "no more than the maximal capacity of this pipe" - it's "once we'd fed pipe->nr_buffers pages into the pipe, leave instead of waiting".

Considering how poorly these rules are documented, let's try "wait for some space to appear, unless given SPLICE_F_NONBLOCK, then push into pipe and if we run into overflow, we are done".

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
- 01 Oct, 2016 1 commit
-
-
Miklos Szeredi authored
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
-
- 19 Jul, 2016 1 commit
-
-
Al Viro authored
Just use wait_event_killable{,_exclusive}().

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
- 11 Jun, 2016 1 commit
-
-
Linus Torvalds authored
We have always mixed the parent pointer into the dentry name hash, but we did it late, at lookup time. It turns out that we can simplify that lookup-time action by salting the hash with the parent pointer early instead of late. A few other users of our string hashes also wanted to mix their own pointers into the hash, and those are updated to use the same mechanism. Hash users that don't have any particular initial salt can just use the NULL pointer as a no-salt.

Cc: Vegard Nossum <vegard.nossum@oracle.com>
Cc: George Spelvin <linux@sciencehorizons.net>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 04 Apr, 2016 1 commit
-
-
Kirill A. Shutemov authored
PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} macros were introduced a *long* time ago with the promise that one day it would be possible to implement the page cache with bigger chunks than PAGE_SIZE. This promise never materialized, and unlikely ever will.

We have many places where PAGE_CACHE_SIZE is assumed to be equal to PAGE_SIZE, and it's a constant source of confusion on whether PAGE_CACHE_* or PAGE_* constants should be used in a particular case, especially on the border between fs and mm.

Global switching to PAGE_CACHE_SIZE != PAGE_SIZE would cause too much breakage to be doable.

Let's stop pretending that pages in the page cache are special. They are not.

The changes are pretty straightforward:

 - <foo> << (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
 - <foo> >> (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
 - PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} -> PAGE_{SIZE,SHIFT,MASK,ALIGN};
 - page_cache_get() -> get_page();
 - page_cache_release() -> put_page();

This patch contains automated changes generated with coccinelle using the script below. For some reason, coccinelle doesn't patch header files; I've called spatch for them manually. The only adjustment after coccinelle is a revert of the changes to the PAGE_CACHE_ALIGN definition: we are going to drop it later.

There are a few places in the code that coccinelle didn't reach; I'll fix them manually in a separate patch. Comments and documentation will also be addressed in a separate patch.

virtual patch

@@
expression E;
@@
- E << (PAGE_CACHE_SHIFT - PAGE_SHIFT)
+ E

@@
expression E;
@@
- E >> (PAGE_CACHE_SHIFT - PAGE_SHIFT)
+ E

@@
@@
- PAGE_CACHE_SHIFT
+ PAGE_SHIFT

@@
@@
- PAGE_CACHE_SIZE
+ PAGE_SIZE

@@
@@
- PAGE_CACHE_MASK
+ PAGE_MASK

@@
expression E;
@@
- PAGE_CACHE_ALIGN(E)
+ PAGE_ALIGN(E)

@@
expression E;
@@
- page_cache_get(E)
+ get_page(E)

@@
expression E;
@@
- page_cache_release(E)
+ put_page(E)

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 16 Aug, 2015 1 commit
-
-
Jann Horn authored
fuse_dev_ioctl() performed fuse_get_dev() on a user-supplied fd, leading to a type confusion issue. Fix it by checking file->f_op.

Signed-off-by: Jann Horn <jann@thejh.net>
Acked-by: Miklos Szeredi <miklos@szeredi.hu>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 01 Jul, 2015 26 commits
-
-
Miklos Szeredi authored
Make each fuse device clone refer to a separate processing queue. The only constraint on userspace code is that the request answer must be written to the same device clone from which it was read.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
-
Miklos Szeredi authored
Allow fuse device clones to be distinguished. This patch just adds the infrastructure by associating a separate "struct fuse_dev" with each clone.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>
-
Miklos Szeredi authored
Allow an open fuse device to be "cloned". Userspace can create a clone by:

  newfd = open("/dev/fuse", O_RDWR)
  ioctl(newfd, FUSE_DEV_IOC_CLONE, &oldfd);

At this point newfd will refer to the same fuse connection as oldfd.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>
-
Miklos Szeredi authored
In fuse_abort_conn(), when all requests are on private lists we no longer need fc->lock protection.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>
-
Miklos Szeredi authored
Remove fc->lock protection from processing queue members, now protected by fpq->lock.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>
-
Miklos Szeredi authored
No longer need to call request_end() with the connection lock held. We still protect the background counters and queue with fc->lock, so acquire it if necessary.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>
-
Miklos Szeredi authored
Now that we atomically test whether everything has already been done, we no longer need other protection.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>
-
Miklos Szeredi authored
When the connection is aborted it is possible that request_end() will be called twice. Use atomic test and set to do the actual ending only once. test_and_set_bit() also provides the necessary barrier semantics, so no explicit smp_wmb() is necessary.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>
-
Miklos Szeredi authored
When an unlocked request is aborted, it is moved from fpq->io to a private list. Then, after unlocking fpq->lock, the private list is processed and the requests are finished off. To protect the private list, we need to mark the request with a flag, so that if the request is unlocked in the meantime, the list is not corrupted.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>
-
Miklos Szeredi authored
Add fpq->lock for protecting members of struct fuse_pqueue and the FR_LOCKED request flag.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>
-
Miklos Szeredi authored
Rearrange fuse_abort_conn() so that processing queue accesses are grouped together.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>
-
Miklos Szeredi authored
- locked list_add() + list_del_init() cancel out
- common handling of the case when the request is ended here, in the read phase

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>
-
Miklos Szeredi authored
Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
-
Miklos Szeredi authored
This will allow checking ->connected just with the processing queue lock.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>
-
Miklos Szeredi authored
This is just two fields: fc->io and fc->processing. This patch just rearranges the fields, no functional change.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>
-
Miklos Szeredi authored
wait_event_interruptible_exclusive_locked() will do everything request_wait() does, so replace it.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>
-
Miklos Szeredi authored
Remove fc->lock protection from input queue members, now protected by fiq->waitq.lock.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>
-
Miklos Szeredi authored
An interrupt is only queued after the request has been sent to userspace. This is done either in request_wait_answer() or in fuse_dev_do_read(), depending on which state the request is in at the time of the interrupt. If it's not yet sent, then queuing the interrupt is postponed until the request is read. Otherwise (the request has already been read and is waiting for an answer) the interrupt is queued immediately.

We want to call queue_interrupt() without fc->lock protection, in which case there can be a race between the two functions:

- neither of them queues the interrupt (each thinking the other one has already done it)
- both of them queue the interrupt

The first is prevented by adding memory barriers, the second by checking (under fiq->waitq.lock) whether the interrupt has already been queued.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
-
Miklos Szeredi authored
Use fiq->waitq.lock for protecting members of struct fuse_iqueue and the FR_PENDING request flag, previously protected by fc->lock. Following patches will remove fc->lock protection from these members.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>
-
Miklos Szeredi authored
Different lists will need different locks.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>
-
Miklos Szeredi authored
Rearrange fuse_abort_conn() so that input queue accesses are grouped together.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>
-
Miklos Szeredi authored
This will allow checking ->connected just with the input queue lock.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>
-
Miklos Szeredi authored
The input queue contains normal requests (fc->pending), forgets (fc->forget_*) and interrupts (fc->interrupts). There are also fc->waitq and fc->fasync for waking up the readers of the fuse device when a request is available. The fc->reqctr is also moved to the input queue (it is assigned to the request when the request is added to the input queue).

This patch just rearranges the fields, no functional change.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>
-
Miklos Szeredi authored
Use flags for representing the state in fuse_req. This is needed since req->list will be protected by different locks in different states, hence we'll want the state itself to be split into distinct bits, each protected by the relevant lock in that state.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
-
Miklos Szeredi authored
FUSE_REQ_INIT is actually the same state as FUSE_REQ_PENDING, and FUSE_REQ_READING and FUSE_REQ_WRITING can be merged into a common FUSE_REQ_IO state.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>
-
Miklos Szeredi authored
Only hold fc->lock over the sections of request_wait_answer() that actually need it. If wait_event_interruptible() returns zero, it means that the request has finished. We need to add memory barriers, though, to make sure that all relevant data in the request is synchronized.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
-