Commit 8f744bde authored by Linus Torvalds

Merge tag 'virtio-fs-5.4' of git://git.kernel.org/pub/scm/linux/kernel/git/mszeredi/fuse

Pull fuse virtio-fs support from Miklos Szeredi:
 "Virtio-fs allows exporting directory trees on the host and mounting
  them in guest(s).

  This isn't actually a new filesystem, but a glue layer between the
  fuse filesystem and a virtio based back-end.

  It's similar in functionality to the existing virtio-9p solution, but
  significantly faster in benchmarks and has better POSIX compliance.
  Further performance improvements can be achieved by sharing the page
  cache between host and guest, allowing for faster I/O and reduced
  memory use.

  Kata Containers have been including the out-of-tree virtio-fs (with
  the shared page cache patches as well) since version 1.7 as an
  experimental feature. They have been active in development and plan to
  switch from virtio-9p to virtio-fs as their default solution. There
  has been interest from other sources as well.

  The userspace infrastructure is slated to be merged into qemu once the
  kernel part hits mainline.

  This was developed by Vivek Goyal, Dave Gilbert and Stefan Hajnoczi"

* tag 'virtio-fs-5.4' of git://git.kernel.org/pub/scm/linux/kernel/git/mszeredi/fuse:
  virtio-fs: add virtiofs filesystem
  virtio-fs: add Documentation/filesystems/virtiofs.rst
  fuse: reserve values for mapping protocol
parents 9977b1a7 a62a8ef9
@@ -37,3 +37,13 @@ filesystem implementations.
   journalling
   fscrypt
   fsverity

Filesystems
===========

Documentation for filesystem implementations.

.. toctree::
   :maxdepth: 2

   virtiofs
.. SPDX-License-Identifier: GPL-2.0

===================================================
virtiofs: virtio-fs host<->guest shared file system
===================================================

- Copyright (C) 2019 Red Hat, Inc.
Introduction
============

The virtiofs file system for Linux implements a driver for the paravirtualized
VIRTIO "virtio-fs" device for guest<->host file system sharing. It allows a
guest to mount a directory that has been exported on the host.

Guests often require access to files residing on the host or remote systems.
Use cases include making files available to new guests during installation,
booting from a root file system located on the host, persistent storage for
stateless or ephemeral guests, and sharing a directory between guests.

Although it is possible to use existing network file systems for some of these
tasks, they require configuration steps that are hard to automate and they
expose the storage network to the guest. The virtio-fs device was designed to
solve these problems by providing file system access without networking.

Furthermore the virtio-fs device takes advantage of the co-location of the
guest and host to increase performance and provide semantics that are not
possible with network file systems.
Usage
=====

Mount file system with tag ``myfs`` on ``/mnt``:

.. code-block:: sh

  guest# mount -t virtiofs myfs /mnt

Please see https://virtio-fs.gitlab.io/ for details on how to configure QEMU
and the virtiofsd daemon.
Internals
=========

Since the virtio-fs device uses the FUSE protocol for file system requests, the
virtiofs file system for Linux is integrated closely with the FUSE file system
client. The guest acts as the FUSE client while the host acts as the FUSE
server. The /dev/fuse interface between the kernel and userspace is replaced
with the virtio-fs device interface.

FUSE requests are placed into a virtqueue and processed by the host. The
response portion of the buffer is filled in by the host and the guest handles
the request completion.

Mapping /dev/fuse to virtqueues requires solving differences in semantics
between /dev/fuse and virtqueues. Each time the /dev/fuse device is read, the
FUSE client may choose which request to transfer, making it possible to
prioritize certain requests over others. Virtqueues have queue semantics and
it is not possible to change the order of requests that have been enqueued.
This is especially important if the virtqueue becomes full since it is then
impossible to add high priority requests. In order to address this difference,
the virtio-fs device uses a "hiprio" virtqueue specifically for requests that
have priority over normal requests.
@@ -17280,6 +17280,18 @@ S:	Supported
F:	drivers/s390/virtio/
F:	arch/s390/include/uapi/asm/virtio-ccw.h

VIRTIO FILE SYSTEM
M:	Vivek Goyal <vgoyal@redhat.com>
M:	Stefan Hajnoczi <stefanha@redhat.com>
M:	Miklos Szeredi <miklos@szeredi.hu>
L:	virtualization@lists.linux-foundation.org
L:	linux-fsdevel@vger.kernel.org
W:	https://virtio-fs.gitlab.io/
S:	Supported
F:	fs/fuse/virtio_fs.c
F:	include/uapi/linux/virtio_fs.h
F:	Documentation/filesystems/virtiofs.rst

VIRTIO GPU DRIVER
M: David Airlie <airlied@linux.ie>
M: Gerd Hoffmann <kraxel@redhat.com>
@@ -27,3 +27,14 @@ config CUSE
	  If you want to develop or use a userspace character device
	  based on CUSE, answer Y or M.

config VIRTIO_FS
	tristate "Virtio Filesystem"
	depends on FUSE_FS
	select VIRTIO
	help
	  The Virtio Filesystem allows guests to mount file systems from the
	  host.

	  If you want to share files between guests or with the host, answer Y
	  or M.
@@ -5,5 +5,6 @@
obj-$(CONFIG_FUSE_FS) += fuse.o
obj-$(CONFIG_CUSE) += cuse.o
obj-$(CONFIG_VIRTIO_FS) += virtio_fs.o

fuse-objs := dev.o dir.o file.o inode.o control.o xattr.o acl.o readdir.o
@@ -353,6 +353,10 @@ struct fuse_req {
	/** Used to wake up the task waiting for completion of request */
	wait_queue_head_t waitq;

#if IS_ENABLED(CONFIG_VIRTIO_FS)
	/** virtio-fs's physically contiguous buffer for in and out args */
	void *argbuf;
#endif
};

struct fuse_iqueue;
@@ -383,6 +387,11 @@ struct fuse_iqueue_ops {
	 */
	void (*wake_pending_and_unlock)(struct fuse_iqueue *fiq)
		__releases(fiq->lock);

	/**
	 * Clean up when fuse_iqueue is destroyed
	 */
	void (*release)(struct fuse_iqueue *fiq);
};

/** /dev/fuse input queue operations */
@@ -630,6 +630,10 @@ EXPORT_SYMBOL_GPL(fuse_conn_init);
void fuse_conn_put(struct fuse_conn *fc)
{
	if (refcount_dec_and_test(&fc->count)) {
		struct fuse_iqueue *fiq = &fc->iq;

		if (fiq->ops->release)
			fiq->ops->release(fiq);
		put_pid_ns(fc->pid_ns);
		put_user_ns(fc->user_ns);
		fc->release(fc);
// SPDX-License-Identifier: GPL-2.0
/*
 * virtio-fs: Virtio Filesystem
 * Copyright (C) 2018 Red Hat, Inc.
 */

#include <linux/fs.h>
#include <linux/module.h>
#include <linux/virtio.h>
#include <linux/virtio_fs.h>
#include <linux/delay.h>
#include <linux/fs_context.h>
#include <linux/highmem.h>
#include "fuse_i.h"

/* List of virtio-fs device instances and a lock for the list. Also provides
 * mutual exclusion in device removal and mounting path
 */
static DEFINE_MUTEX(virtio_fs_mutex);
static LIST_HEAD(virtio_fs_instances);
enum {
	VQ_HIPRIO,
	VQ_REQUEST
};

/* Per-virtqueue state */
struct virtio_fs_vq {
	spinlock_t lock;
	struct virtqueue *vq;     /* protected by ->lock */
	struct work_struct done_work;
	struct list_head queued_reqs;
	struct delayed_work dispatch_work;
	struct fuse_dev *fud;
	bool connected;
	long in_flight;
	char name[24];
} ____cacheline_aligned_in_smp;

/* A virtio-fs device instance */
struct virtio_fs {
	struct kref refcount;
	struct list_head list;    /* on virtio_fs_instances */
	char *tag;
	struct virtio_fs_vq *vqs;
	unsigned int nvqs;               /* number of virtqueues */
	unsigned int num_request_queues; /* number of request queues */
};

struct virtio_fs_forget {
	struct fuse_in_header ih;
	struct fuse_forget_in arg;
	/* This request can be temporarily queued on virtqueue */
	struct list_head list;
};

static inline struct virtio_fs_vq *vq_to_fsvq(struct virtqueue *vq)
{
	struct virtio_fs *fs = vq->vdev->priv;

	return &fs->vqs[vq->index];
}

static inline struct fuse_pqueue *vq_to_fpq(struct virtqueue *vq)
{
	return &vq_to_fsvq(vq)->fud->pq;
}
static void release_virtio_fs_obj(struct kref *ref)
{
	struct virtio_fs *vfs = container_of(ref, struct virtio_fs, refcount);

	kfree(vfs->vqs);
	kfree(vfs);
}

/* Make sure virtio_fs_mutex is held */
static void virtio_fs_put(struct virtio_fs *fs)
{
	kref_put(&fs->refcount, release_virtio_fs_obj);
}

static void virtio_fs_fiq_release(struct fuse_iqueue *fiq)
{
	struct virtio_fs *vfs = fiq->priv;

	mutex_lock(&virtio_fs_mutex);
	virtio_fs_put(vfs);
	mutex_unlock(&virtio_fs_mutex);
}
static void virtio_fs_drain_queue(struct virtio_fs_vq *fsvq)
{
	WARN_ON(fsvq->in_flight < 0);

	/* Wait for in flight requests to finish. */
	while (1) {
		spin_lock(&fsvq->lock);
		if (!fsvq->in_flight) {
			spin_unlock(&fsvq->lock);
			break;
		}
		spin_unlock(&fsvq->lock);

		/* TODO use completion instead of timeout */
		usleep_range(1000, 2000);
	}

	flush_work(&fsvq->done_work);
	flush_delayed_work(&fsvq->dispatch_work);
}

static inline void drain_hiprio_queued_reqs(struct virtio_fs_vq *fsvq)
{
	struct virtio_fs_forget *forget;

	spin_lock(&fsvq->lock);
	while (1) {
		forget = list_first_entry_or_null(&fsvq->queued_reqs,
						  struct virtio_fs_forget, list);
		if (!forget)
			break;
		list_del(&forget->list);
		kfree(forget);
	}
	spin_unlock(&fsvq->lock);
}
static void virtio_fs_drain_all_queues(struct virtio_fs *fs)
{
	struct virtio_fs_vq *fsvq;
	int i;

	for (i = 0; i < fs->nvqs; i++) {
		fsvq = &fs->vqs[i];
		if (i == VQ_HIPRIO)
			drain_hiprio_queued_reqs(fsvq);

		virtio_fs_drain_queue(fsvq);
	}
}

static void virtio_fs_start_all_queues(struct virtio_fs *fs)
{
	struct virtio_fs_vq *fsvq;
	int i;

	for (i = 0; i < fs->nvqs; i++) {
		fsvq = &fs->vqs[i];
		spin_lock(&fsvq->lock);
		fsvq->connected = true;
		spin_unlock(&fsvq->lock);
	}
}

/* Add a new instance to the list or return -EEXIST if tag name exists */
static int virtio_fs_add_instance(struct virtio_fs *fs)
{
	struct virtio_fs *fs2;
	bool duplicate = false;

	mutex_lock(&virtio_fs_mutex);
	list_for_each_entry(fs2, &virtio_fs_instances, list) {
		if (strcmp(fs->tag, fs2->tag) == 0)
			duplicate = true;
	}

	if (!duplicate)
		list_add_tail(&fs->list, &virtio_fs_instances);

	mutex_unlock(&virtio_fs_mutex);

	if (duplicate)
		return -EEXIST;
	return 0;
}
/* Return the virtio_fs with a given tag, or NULL */
static struct virtio_fs *virtio_fs_find_instance(const char *tag)
{
	struct virtio_fs *fs;

	mutex_lock(&virtio_fs_mutex);

	list_for_each_entry(fs, &virtio_fs_instances, list) {
		if (strcmp(fs->tag, tag) == 0) {
			kref_get(&fs->refcount);
			goto found;
		}
	}

	fs = NULL; /* not found */

found:
	mutex_unlock(&virtio_fs_mutex);

	return fs;
}

static void virtio_fs_free_devs(struct virtio_fs *fs)
{
	unsigned int i;

	for (i = 0; i < fs->nvqs; i++) {
		struct virtio_fs_vq *fsvq = &fs->vqs[i];

		if (!fsvq->fud)
			continue;

		fuse_dev_free(fsvq->fud);
		fsvq->fud = NULL;
	}
}
/* Read filesystem name from virtio config into fs->tag (must kfree()). */
static int virtio_fs_read_tag(struct virtio_device *vdev, struct virtio_fs *fs)
{
	char tag_buf[sizeof_field(struct virtio_fs_config, tag)];
	char *end;
	size_t len;

	virtio_cread_bytes(vdev, offsetof(struct virtio_fs_config, tag),
			   &tag_buf, sizeof(tag_buf));
	end = memchr(tag_buf, '\0', sizeof(tag_buf));
	if (end == tag_buf)
		return -EINVAL; /* empty tag */
	if (!end)
		end = &tag_buf[sizeof(tag_buf)];

	len = end - tag_buf;
	fs->tag = devm_kmalloc(&vdev->dev, len + 1, GFP_KERNEL);
	if (!fs->tag)
		return -ENOMEM;
	memcpy(fs->tag, tag_buf, len);
	fs->tag[len] = '\0';
	return 0;
}
/* Work function for hiprio completion */
static void virtio_fs_hiprio_done_work(struct work_struct *work)
{
	struct virtio_fs_vq *fsvq = container_of(work, struct virtio_fs_vq,
						 done_work);
	struct virtqueue *vq = fsvq->vq;

	/* Free completed FUSE_FORGET requests */
	spin_lock(&fsvq->lock);
	do {
		unsigned int len;
		void *req;

		virtqueue_disable_cb(vq);

		while ((req = virtqueue_get_buf(vq, &len)) != NULL) {
			kfree(req);
			fsvq->in_flight--;
		}
	} while (!virtqueue_enable_cb(vq) && likely(!virtqueue_is_broken(vq)));
	spin_unlock(&fsvq->lock);
}

static void virtio_fs_dummy_dispatch_work(struct work_struct *work)
{
}
static void virtio_fs_hiprio_dispatch_work(struct work_struct *work)
{
	struct virtio_fs_forget *forget;
	struct virtio_fs_vq *fsvq = container_of(work, struct virtio_fs_vq,
						 dispatch_work.work);
	struct virtqueue *vq = fsvq->vq;
	struct scatterlist sg;
	struct scatterlist *sgs[] = {&sg};
	bool notify;
	int ret;

	pr_debug("virtio-fs: worker %s called.\n", __func__);
	while (1) {
		spin_lock(&fsvq->lock);
		forget = list_first_entry_or_null(&fsvq->queued_reqs,
						  struct virtio_fs_forget, list);
		if (!forget) {
			spin_unlock(&fsvq->lock);
			return;
		}

		list_del(&forget->list);
		if (!fsvq->connected) {
			spin_unlock(&fsvq->lock);
			kfree(forget);
			continue;
		}

		sg_init_one(&sg, forget, sizeof(*forget));

		/* Enqueue the request */
		dev_dbg(&vq->vdev->dev, "%s\n", __func__);
		ret = virtqueue_add_sgs(vq, sgs, 1, 0, forget, GFP_ATOMIC);
		if (ret < 0) {
			if (ret == -ENOMEM || ret == -ENOSPC) {
				pr_debug("virtio-fs: Could not queue FORGET: err=%d. Will try later\n",
					 ret);
				list_add_tail(&forget->list,
					      &fsvq->queued_reqs);
				schedule_delayed_work(&fsvq->dispatch_work,
						      msecs_to_jiffies(1));
			} else {
				pr_debug("virtio-fs: Could not queue FORGET: err=%d. Dropping it.\n",
					 ret);
				kfree(forget);
			}
			spin_unlock(&fsvq->lock);
			return;
		}

		fsvq->in_flight++;
		notify = virtqueue_kick_prepare(vq);
		spin_unlock(&fsvq->lock);

		if (notify)
			virtqueue_notify(vq);
		pr_debug("virtio-fs: worker %s dispatched one forget request.\n",
			 __func__);
	}
}
/* Allocate and copy args into req->argbuf */
static int copy_args_to_argbuf(struct fuse_req *req)
{
	struct fuse_args *args = req->args;
	unsigned int offset = 0;
	unsigned int num_in;
	unsigned int num_out;
	unsigned int len;
	unsigned int i;

	num_in = args->in_numargs - args->in_pages;
	num_out = args->out_numargs - args->out_pages;
	len = fuse_len_args(num_in, (struct fuse_arg *) args->in_args) +
	      fuse_len_args(num_out, args->out_args);

	req->argbuf = kmalloc(len, GFP_ATOMIC);
	if (!req->argbuf)
		return -ENOMEM;

	for (i = 0; i < num_in; i++) {
		memcpy(req->argbuf + offset,
		       args->in_args[i].value,
		       args->in_args[i].size);
		offset += args->in_args[i].size;
	}

	return 0;
}
/* Copy args out of and free req->argbuf */
static void copy_args_from_argbuf(struct fuse_args *args, struct fuse_req *req)
{
	unsigned int remaining;
	unsigned int offset;
	unsigned int num_in;
	unsigned int num_out;
	unsigned int i;

	remaining = req->out.h.len - sizeof(req->out.h);
	num_in = args->in_numargs - args->in_pages;
	num_out = args->out_numargs - args->out_pages;
	offset = fuse_len_args(num_in, (struct fuse_arg *)args->in_args);

	for (i = 0; i < num_out; i++) {
		unsigned int argsize = args->out_args[i].size;

		if (args->out_argvar &&
		    i == args->out_numargs - 1 &&
		    argsize > remaining) {
			argsize = remaining;
		}

		memcpy(args->out_args[i].value, req->argbuf + offset, argsize);
		offset += argsize;

		if (i != args->out_numargs - 1)
			remaining -= argsize;
	}

	/* Store the actual size of the variable-length arg */
	if (args->out_argvar)
		args->out_args[args->out_numargs - 1].size = remaining;

	kfree(req->argbuf);
	req->argbuf = NULL;
}
/* Work function for request completion */
static void virtio_fs_requests_done_work(struct work_struct *work)
{
	struct virtio_fs_vq *fsvq = container_of(work, struct virtio_fs_vq,
						 done_work);
	struct fuse_pqueue *fpq = &fsvq->fud->pq;
	struct fuse_conn *fc = fsvq->fud->fc;
	struct virtqueue *vq = fsvq->vq;
	struct fuse_req *req;
	struct fuse_args_pages *ap;
	struct fuse_req *next;
	struct fuse_args *args;
	unsigned int len, i, thislen;
	struct page *page;
	LIST_HEAD(reqs);

	/* Collect completed requests off the virtqueue */
	spin_lock(&fsvq->lock);
	do {
		virtqueue_disable_cb(vq);

		while ((req = virtqueue_get_buf(vq, &len)) != NULL) {
			spin_lock(&fpq->lock);
			list_move_tail(&req->list, &reqs);
			spin_unlock(&fpq->lock);
		}
	} while (!virtqueue_enable_cb(vq) && likely(!virtqueue_is_broken(vq)));
	spin_unlock(&fsvq->lock);

	/* End requests */
	list_for_each_entry_safe(req, next, &reqs, list) {
		/*
		 * TODO verify that server properly follows FUSE protocol
		 * (oh.uniq, oh.len)
		 */
		args = req->args;
		copy_args_from_argbuf(args, req);

		if (args->out_pages && args->page_zeroing) {
			len = args->out_args[args->out_numargs - 1].size;
			ap = container_of(args, typeof(*ap), args);
			for (i = 0; i < ap->num_pages; i++) {
				thislen = ap->descs[i].length;
				if (len < thislen) {
					WARN_ON(ap->descs[i].offset);
					page = ap->pages[i];
					zero_user_segment(page, len, thislen);
					len = 0;
				} else {
					len -= thislen;
				}
			}
		}

		spin_lock(&fpq->lock);
		clear_bit(FR_SENT, &req->flags);
		list_del_init(&req->list);
		spin_unlock(&fpq->lock);

		fuse_request_end(fc, req);
		spin_lock(&fsvq->lock);
		fsvq->in_flight--;
		spin_unlock(&fsvq->lock);
	}
}
/* Virtqueue interrupt handler */
static void virtio_fs_vq_done(struct virtqueue *vq)
{
	struct virtio_fs_vq *fsvq = vq_to_fsvq(vq);

	dev_dbg(&vq->vdev->dev, "%s %s\n", __func__, fsvq->name);

	schedule_work(&fsvq->done_work);
}