linux-arm / linux-iv (branch: linux-iv) · fs/eventpoll.c
    fs/epoll: reduce the scope of wq lock in epoll_wait() · c5a282e9
    Davidlohr Bueso authored Jan 03, 2019
    This patch aims at reducing ep->wq.lock hold times in epoll_wait(2).  For
    the blocking case, there is no need to constantly take and drop the
    spinlock, which is only needed to manipulate the waitqueue.
    
    The call to ep_events_available() is now lockless and exposed only to
    benign races.  If it returns a false positive (it reports available
    events but does not see another thread deleting an epi from the list),
    we call into send_events and the list's state is then seen correctly.
    On the other hand, on a false negative we may miss a list_add_tail(),
    for example from the irq callback, but the list is rechecked again
    before blocking, and that recheck sees the correct state.
    
    For more accuracy in observing a concurrent list_del_init(), use the
    list_empty_careful() variant -- of course, this won't be safe against
    concurrent insertions from the wakeup path.
    
    For the overflow list we obviously need to prevent load/store tearing as
    we don't want to see partial values while the ready list is disabled.
    
    [dave@stgolabs.net: forgotten fixlets]
      Link: http://lkml.kernel.org/r/20181109155258.jxcr4t2pnz6zqct3@linux-r8p5
    Link: http://lkml.kernel.org/r/20181108051006.18751-6-dave@stgolabs.net
    
    
    Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
    Suggested-by: Jason Baron <jbaron@akamai.com>
    Cc: Al Viro <viro@zeniv.linux.org.uk>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>