1. 14 Jun, 2018 1 commit
  2. 14 May, 2018 1 commit
  3. 23 Aug, 2017 1 commit
    • block: replace bi_bdev with a gendisk pointer and partitions index · 74d46992
      Christoph Hellwig authored
      
      
      This way we don't need a block_device structure to submit I/O.  The
      block_device has different lifetime rules from the gendisk and
      request_queue, and is usually only available when the block device node
      is open.  Other callers need to explicitly create one (e.g. the lightnvm
      passthrough code, or the new nvme multipathing code).
      
      For the actual I/O path all that we need is the gendisk, which exists
      once per block device.  But given that the block layer also does
      partition remapping we additionally need a partition index, which is
      used for said remapping in generic_make_request.
      
      Note that all the block drivers generally want request_queue or
      sometimes the gendisk, so this removes a layer of indirection all
      over the stack.
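      
      A minimal sketch of the resulting fields and the helper that replaces
      direct bi_bdev assignment (a simplified sketch of the upstream shape,
      not the full struct):
      
      	struct bio {
      		struct gendisk	*bi_disk;	/* replaces struct block_device *bi_bdev */
      		unsigned int	bi_partno;	/* partition index for remapping */
      		/* ... */
      	};
      
      	/* callers that do hold a block_device set both fields at once */
      	#define bio_set_dev(bio, bdev)				\
      	do {							\
      		(bio)->bi_disk = (bdev)->bd_disk;		\
      		(bio)->bi_partno = (bdev)->bd_partno;		\
      	} while (0)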
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  4. 07 Jun, 2016 1 commit
  5. 19 Oct, 2014 1 commit
  6. 22 May, 2014 3 commits
    • ore: Support for raid 6 · ce5d36aa
      Boaz Harrosh authored
      
      
      This simple patch adds support for raid6 to the ORE.
      Most operations and calculations were already done for the general
      case. The only things left:
      * call async_gen_syndrome() in the case of raid6
        (NOTE that the raid6 math is the one supported by the Linux Kernel,
         see: crypto/async_tx/async_pq.c; a sketch of such a call follows
         this list)
      * call _ore_add_parity_unit() twice, with only the last call generating
        the redundancy pages.
      
      * Fix a couple of BUGs in the old code:
        a. In reads when parity==2 it can happen that per_dev->length=0
           but per_dev->offset was set and adjusted by _ore_add_sg_seg().
           Don't let it be overwritten.
        b. The whole 'cur_comp > starting_dev' thing to determine if:
             "per_dev->offset is in the current stripe number or the
             next one."
           was a raid5/4 accident. When parity==2 this usually does not
           hold. All we need to do is increment si->ob_offset
           once we pass by the first parity device.
           (This also greatly simplifies the code, amen)
        c. Calculation of si->dev rotation can overflow when parity==2.
      
      * Finally, enable raid6 in ore_verify_layout()
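      
      A hedged sketch of such a syndrome-generation call with the kernel's
      async_tx API (the exact ORE call site differs; the function below is
      illustrative):
      
      	#include <linux/async_tx.h>
      
      	/* Compute P/Q for one page-row of a stripe. 'disks' counts the
      	 * data devices plus the two parity devices; by async_pq convention
      	 * blocks[disks-2] receives P and blocks[disks-1] receives Q. */
      	static void gen_pq_sketch(struct page **blocks, int disks, size_t len,
      				  addr_conv_t *addr_conv)
      	{
      		struct async_submit_ctl submit;
      
      		init_async_submit(&submit, ASYNC_TX_ACK, NULL, NULL, NULL,
      				  addr_conv);
      		async_gen_syndrome(blocks, 0, disks, len, &submit);
      	}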
      
      I want to deeply thank Daniel Gryniewicz, who first found all the
      bugs in the old raid code and inspired these patches:
      	Inspired-by: Daniel Gryniewicz <dang@linuxbox.com>
      Signed-off-by: Boaz Harrosh <bharrosh@panasas.com>
    • ore: Remove redundant dev_order(), more cleanups · 455682ce
      Boaz Harrosh authored
      
      
      Two cleanups:
      * si->cur_comp, si->cur_pg were always calculated after
        the call to ore_calc_stripe_info() with the help of
        _dev_order(...). But these are already calculated by
        ore_calc_stripe_info() and can just be set there.
        (This is left over from the time that si->cur_comp, si->cur_pg
         were only used by raid code, but now the main loop manages
         them anyway, even though they are ultimately not used in
         non-raid code)
      
      * si->cur_comp - for the very last stripe case, was set inside
        _ore_add_parity_unit(). This is not clear and will be wrong
        for the coming raid6, so move it to its only caller. Now si->cur_comp
        is only manipulated within _prepare_for_striping(), always next
        to the manipulation of cur_dev, which is much easier to
        understand and follow.
      Signed-off-by: Boaz Harrosh <bharrosh@panasas.com>
    • ore: (trivial) reformat some code · 101a6427
      Boaz Harrosh authored
      
      
      Rearrange some source lines. Nothing changed.
      Signed-off-by: Boaz Harrosh <bharrosh@panasas.com>
  7. 23 Jan, 2014 2 commits
    • ore: Don't crash on NULL bio in _clear_bio · 2deb76db
      Boaz Harrosh authored
      
      
      In the case of the target returning OSD_ERR_PRI_CLEAR_PAGES when we
      only sent for attributes, don't crash on a NULL bio.
      This is an osd-target bug, but don't crash regardless.
      Signed-off-by: Boaz Harrosh <bharrosh@panasas.com>
    • ore: Fix wrong math in allocation of per device BIO · aad560b7
      Boaz Harrosh authored
      
      
      At IO preparation we calculate the max pages at each device and
      allocate a BIO per device of that size. The calculation was wrong
      for some unaligned corner-case offset/length combinations and would
      make prepare return with -ENOMEM. This would be bad for pnfs-objects,
      which would in that case IO through the MDS, and fatal for exofs,
      where it would fail writes with EIO.
      
      Fix it by doing the proper math, which will work in all cases. (I
      ran a test with all possible offset/length combinations this time
      round).
      
      Also, when reading we do not need to allocate for the parity units,
      since we jump over them.
      
      Also lower the max_io_length to take into account the parity pages,
      so as not to allocate BIOs bigger than PAGE_SIZE.
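      
      For illustration only (this is not the exact formula the patch
      commits): the shape of a safe per-device page bound for an unaligned
      range is an even split of the range's pages, rounded up, plus slack
      for the partial units at either end:
      
      	/* Illustrative upper bound on pages needed per data device for an
      	 * IO of [offset, offset + length): round-up for the uneven split,
      	 * +2 slack for partially covered units at both ends. */
      	static unsigned pages_per_dev_bound(u64 offset, u64 length,
      					    unsigned data_devs)
      	{
      		u64 pages = DIV_ROUND_UP(offset + length, PAGE_SIZE) -
      			    offset / PAGE_SIZE;
      
      		return DIV_ROUND_UP(pages, data_devs) + 2;
      	}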
      
      CC: Stable Kernel <stable@vger.kernel.org>
      Signed-off-by: Boaz Harrosh <bharrosh@panasas.com>
  8. 23 Mar, 2013 1 commit
    • block: Add bio_for_each_segment_all() · d74c6d51
      Kent Overstreet authored
      
      
      __bio_for_each_segment() iterates bvecs from the specified index
      instead of bio->bi_idx.  Currently, the only usage is to walk all the
      bvecs after the bio has been advanced, by specifying index 0.
      
      For immutable bvecs, we need to split these apart;
      bio_for_each_segment() is going to have a different implementation.
      This will also help document the intent of code that's using it -
      bio_for_each_segment_all() is only legal to use for code that owns the
      bio.
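      
      A minimal usage sketch of the new iterator (legal only for code that
      owns the bio; the body here is illustrative):
      
      	static void dump_segments(struct bio *bio)
      	{
      		struct bio_vec *bvec;
      		int i;
      
      		/* walks every bvec of a fully built bio we own */
      		bio_for_each_segment_all(bvec, bio, i)
      			pr_debug("segment %d: page %p len %u offset %u\n",
      				 i, bvec->bv_page, bvec->bv_len,
      				 bvec->bv_offset);
      	}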
      Signed-off-by: Kent Overstreet <koverstreet@google.com>
      CC: Jens Axboe <axboe@kernel.dk>
      CC: Neil Brown <neilb@suse.de>
      CC: Boaz Harrosh <bharrosh@panasas.com>
  9. 09 Sep, 2012 1 commit
    • block: Add bio_clone_bioset(), bio_clone_kmalloc() · bf800ef1
      Kent Overstreet authored
      
      
      Previously, there was bio_clone() but it only allocated from the fs bio
      set; as a result various users were open coding it and using
      __bio_clone().
      
      This changes bio_clone() to become bio_clone_bioset(), and then we add
      bio_clone() and bio_clone_kmalloc() as wrappers around it, making use of
      the functionality the last patch added.
      
      This will also help in a later patch changing how bio cloning works.
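      
      The wrappers are thin; a sketch of their shape (bio_clone() keeps
      allocating from fs_bio_set, while a NULL bio_set means the clone is
      kmalloc()ed):
      
      	static inline struct bio *bio_clone(struct bio *bio, gfp_t gfp_mask)
      	{
      		return bio_clone_bioset(bio, gfp_mask, fs_bio_set);
      	}
      
      	static inline struct bio *bio_clone_kmalloc(struct bio *bio,
      						    gfp_t gfp_mask)
      	{
      		return bio_clone_bioset(bio, gfp_mask, NULL);
      	}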
      Signed-off-by: Kent Overstreet <koverstreet@google.com>
      CC: Jens Axboe <axboe@kernel.dk>
      CC: NeilBrown <neilb@suse.de>
      CC: Alasdair Kergon <agk@redhat.com>
      CC: Boaz Harrosh <bharrosh@panasas.com>
      CC: Jeff Garzik <jeff@garzik.org>
      Acked-by: Jeff Garzik <jgarzik@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  10. 02 Aug, 2012 1 commit
    • ore: Fix out-of-bounds access in _ios_obj() · 9e62bb44
      Boaz Harrosh authored
      
      
      _ios_obj() is accessed by group_index, not device_table index.
      
      The oc->comps array is only a group_full of devices at a time;
      it is not like ore_comp_dev(), which is indexed by a global
      device_table index.
      
      This did not BUG until now because exofs only uses a single
      COMP for all devices. But with other FSs, like PanFS, this is
      not true.
      
      This bug was only in the write path; all other users were
      using it correctly.
      
      [This is a bug since the 3.2 Kernel]
      CC: Stable Tree <stable@kernel.org>
      Signed-off-by: Boaz Harrosh <bharrosh@panasas.com>
  11. 20 Jul, 2012 1 commit
    • ore: Remove support of partial IO request (NFS crash) · 62b62ad8
      Boaz Harrosh authored
      
      
      Due to OOM situations the ore might fail to allocate all resources
      needed for IO of the full request. If some progress was possible
      it would proceed with a partial/short request, for the sake of
      forward progress.
      
      Since this crashes NFS-core, and exofs is just fine without it, just
      remove this contraption and fail.
      
      TODO:
      	Support real forward progress with some reserved allocations
      	of resources, such as mem pools and/or bio_sets
      
      [Bug since 3.2 Kernel]
      CC: Stable Tree <stable@kernel.org>
      CC: Benny Halevy <bhalevy@tonian.com>
      Signed-off-by: Boaz Harrosh <bharrosh@panasas.com>
  12. 06 Jan, 2012 2 commits
    • ore: fix BUG_ON, too few sgs when reading · 361aba56
      Boaz Harrosh authored
      
      
      When reading RAID5 files, in rare cases, we calculated too
      few sg segments. There should be two extra for the beginning
      and end partial units.
      
      Also, "too few sg segments" should not be a BUG_ON; all the
      mechanics are in place to handle it as a short read.
      So just return -ENOMEM and the rest of the code will gracefully
      split the IO.
      
      [Bug in 3.2.0 Kernel]
      CC: Stable Tree <stable@kernel.org>
      Signed-off-by: Boaz Harrosh <bharrosh@panasas.com>
    • ore: Fix crash in case of an IO error. · ffefb8ea
      Boaz Harrosh authored
      
      
      The users of ore_check_io() expect the reported device
      (in case of error) to be indexed relative to the passed-in
      ore_components table, and not by the logical dev index.
      
      This causes a crash inside objlayoutdriver in case of
      an IO error.
      
      [Bug in 3.2.0 Kernel]
      CC: Stable Tree <stable@kernel.org>
      Signed-off-by: Boaz Harrosh <bharrosh@panasas.com>
  13. 31 Oct, 2011 1 commit
    • fs: add module.h to files that were implicitly using it · 143cb494
      Paul Gortmaker authored
      
      
      Some files were using the complete module.h infrastructure without
      actually including the header at all.  Fix them up in advance so
      once the implicit presence is removed, we won't get failures like this:
      
        CC [M]  fs/nfsd/nfssvc.o
      fs/nfsd/nfssvc.c: In function 'nfsd_create_serv':
      fs/nfsd/nfssvc.c:335: error: 'THIS_MODULE' undeclared (first use in this function)
      fs/nfsd/nfssvc.c:335: error: (Each undeclared identifier is reported only once
      fs/nfsd/nfssvc.c:335: error: for each function it appears in.)
      fs/nfsd/nfssvc.c: In function 'nfsd':
      fs/nfsd/nfssvc.c:555: error: implicit declaration of function 'module_put_and_exit'
      make[3]: *** [fs/nfsd/nfssvc.o] Error 1
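      
      The fix in each affected file is simply to make the dependency
      explicit:
      
      	#include <linux/module.h>	/* THIS_MODULE, module_put_and_exit() */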
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
  14. 25 Oct, 2011 2 commits
    • ore: Enable RAID5 mounts · 44231e68
      Boaz Harrosh authored
      
      
      Now that we support raid5, enable it at mount. Raid6 will come next;
      raid4 is not in demand, so it will probably not be enabled
      (until someone wants it).
      
      NOTE: mkfs.exofs has had support for raid5/6 for a long time.
      (Making an empty raidX FS is just as easy as raid0 ;-})
      Signed-off-by: Boaz Harrosh <bharrosh@panasas.com>
    • ore: RAID5 Write · 769ba8d9
      Boaz Harrosh authored
      
      
      This is finally the RAID5 Write support.
      
      The bigger part of this patch is not the XOR engine itself, but the
      read4write logic, which is a complete mini prepare_for_striping
      reading engine that can read scattered pages of a stripe into cache
      so they can be used for XOR calculation. That is, if the write was not
      stripe aligned.
      
      The main algorithm behind the XOR engine is the 2-dimensional array:
      	struct __stripe_pages_2d.
      A drawing might save 1000 words:
      ---
      
      __stripe_pages_2d
             |
       n = pages_in_stripe_unit;
       w = group_width - parity;
             |                            pages array presented to the XOR lib
             |                                                |
             V                                                |
       __1_page_stripe[0].pages --> [c0][c1]..[cw][c_par] <---|
             |                                                |
       __1_page_stripe[1].pages --> [c0][c1]..[cw][c_par] <---
             |
      ...    |                         ...
             |
       __1_page_stripe[n].pages --> [c0][c1]..[cw][c_par]
                                     ^
                                     |
                 data added columns first then row
      
      ---
      The pages are put on this array columns first, i.e.:
      	p0-of-c0, p1-of-c0, ... pn-of-c0, p0-of-c1, ...
      So we are doing a corner turn of the pages.
      
      Note that pages will zigzag down and left, but are put sequentially
      in growing order. So when the time comes to XOR the stripe, only the
      beginning and end of the array need be checked. We scan the array
      and any NULL spot will be filled by pages-to-be-read.
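      
      A hedged sketch of the structure behind the drawing (the in-tree
      version carries more bookkeeping; member names here are abridged):
      
      	struct __stripe_pages_2d {
      		unsigned pages_in_unit;		/* n in the drawing above */
      		unsigned data_devs;		/* w = group_width - parity */
      
      		/* one row per page-offset inside the stripe_unit */
      		struct __1_page_stripe {
      			struct async_submit_ctl submit;	/* per-row XOR context */
      			/* data_devs + parity pages, handed to the XOR lib */
      			struct page **pages;
      		} _1p_stripes[];
      	};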
      
      The FS that wants to support RAID5 needs to supply an
      operations-vector that searches for a given page in cache, and specifies
      if the page is uptodate or needs reading. All these pages to be read
      are put on a slave ore_io_state and synchronously read. All the pages
      of a stripe are read in one IO, using the scatter gather mechanism.
      
      In write we constrain our IO to only be incomplete on a single
      stripe. Meaning either the complete IO is within a single stripe, so
      we might have pages to read at both the beginning and end of the
      stripe, or we have some reading to do at the beginning but end at a
      stripe boundary. The leftover pages are pushed to the next IO by the
      API already established by previous work, where an IO offset/length
      combination presented to the ORE might get the length truncated and
      the user must re-submit the leftover pages. (Both exofs and NFS
      support this.)
      
      But any ORE user should make its best effort to align its IO
      beforehand and avoid complications. A cached ore_layout->stripe_size
      member can be used for that calculation. (NOTE: the ORE demands
      that stripe_size fit in 32 bits.)
      
      What else? Well read it and tell me.
      Signed-off-by: Boaz Harrosh <bharrosh@panasas.com>
  15. 24 Oct, 2011 2 commits
    • ore: RAID5 read · a1fec1db
      Boaz Harrosh authored
      
      
      This patch introduces the first stage of RAID5 support,
      mainly the skip-over-raid-units when reading. For
      writes it inserts BLANK units, into where XOR blocks
      should be calculated and written to.
      
      It introduces the new "general raid maths", and the main
      additional parameters and components needed for raid5.
      
      Since at this stage it could corrupt future versions that
      actually do support raid5, the enablement of raid5
      mounting and the setting of parity-count > 0 are disabled, so
      the raid5 code will never be used. Mounting of raid5 is
      only enabled later, once the basic XOR write is also in.
      But if the patch "enable RAID5" is applied, this code has
      been tested to properly read raid5 volumes
      and conforms to the standard.
      
      It has also been tested that the new maths still properly
      support RAID0 and the grouping code just as before.
      (BTW: I have found more bugs in the pnfs-obj RAID math,
       fixed here)
      
      The ore.c file is getting too big, so new ore_raid.[hc]
      files are added that will include the special raid stuff
      that is not used in striping and mirrors. In future write
      support these will get bigger.
      When adding ore_raid.c to the Kbuild file I was forced to
      rename ore.ko to libore.ko. Is it possible to keep the source
      file, say ore.c, and the module file ore.ko the same even if there
      are multiple files inside ore.ko?
      Signed-off-by: Boaz Harrosh <bharrosh@panasas.com>
    • ore: Make ore_calc_stripe_info EXPORT_SYMBOL · 611d7a5d
      Boaz Harrosh authored
      
      
      ore_calc_stripe_info is needed by exofs::export.c
      for the layout calculations. Make it exportable.
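      
      The change amounts to the usual export pattern, roughly:
      
      	EXPORT_SYMBOL(ore_calc_stripe_info);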
      Signed-off-by: Boaz Harrosh <bharrosh@panasas.com>
  16. 14 Oct, 2011 7 commits
    • ore/exofs: Change ore_check_io API · 4b46c9f5
      Boaz Harrosh authored
      
      
      The current ore_check_io API receives a residual
      pointer to report partial IO, but it is actually
      not used, because in a multiple-device IO there
      is never any linearity in the IO failure.
      
      On the other hand, if every failing device is reported
      through a received callback, measures can be taken to
      handle only the failed devices, one at a time.
      
      This will also be needed by the objects-layout-driver
      for its error reporting facility.
      
      Exofs is not currently using the new information and
      keeps the old behaviour of failing the complete IO in
      case of an error. (No partial completion)
      
      TODO: Use an ore_check_io callback to set_page_error only
      the failing pages. And re-dirty write pages.
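      
      A hedged sketch of the callback-style API this implies (the signature
      is reconstructed from the description; treat the exact types as an
      assumption):
      
      	/* reported once per failing device, indexed relative to the
      	 * passed-in ore_components table */
      	typedef void (*ore_on_dev_error)(struct ore_io_state *ios,
      					 struct ore_dev *od,
      					 unsigned dev_index,
      					 enum osd_err_priority oep,
      					 u64 dev_offset, u64 dev_len);
      
      	int ore_check_io(struct ore_io_state *ios,
      			 ore_on_dev_error on_dev_error);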
      Signed-off-by: Boaz Harrosh <bharrosh@panasas.com>
    • ore/exofs: Define new ore_verify_layout · 5a51c0c7
      Boaz Harrosh authored
      
      
      All users of the ore will need to check if the current code
      supports the given layout. For example RAID5/6 is not
      currently supported.
      
      So move all the checks from exofs/super.c to a new
      ore_verify_layout() to be used by ore users.
      
      Note that any new layout should be passed through
      ore_verify_layout(), because the ore engine will prepare
      and verify some internal members of ore_layout, and
      assumes it has been called.
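      
      A hedged sketch of the call's shape (the signature is reconstructed
      from the description and may differ):
      
      	/* verifies and finalizes the layout; must run before any IO */
      	int ore_verify_layout(unsigned total_comps,
      			      struct ore_layout *layout);
      
      Users would call this once at mount/setup time, before the first
      ore_get_rw_state(), since the engine assumes the prepared layout.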
      Signed-off-by: Boaz Harrosh <bharrosh@panasas.com>
    • ore: Support for partial component table · 3bd98568
      Boaz Harrosh authored
      
      
      Users like the objlayout-driver would like to only pass
      a partial device table that covers the IO in question.
      For example, exofs divides the file into raid-group-sized
      chunks and only serves group_width number of devices at
      a time.
      
      The partiality is communicated by setting
      ore_components->first_dev, and the array covers all logical
      devices from oc->first_dev up to (oc->first_dev + oc->numdevs).
      
      The ore_comp_dev() API receives a logical device index
      and returns the actual present device in the table.
      An out-of-range dev_index will BUG.
      
      The logical device index is the theoretical device index as if
      all the devices of a file are present, i.e.:
      	total_devs = group_width * mirror_p1 * group_count
      	0 <= dev_index < total_devs
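      
      A hedged sketch of what ore_comp_dev() does with the partial table
      (reconstructed from the description above):
      
      	static inline struct osd_dev *ore_comp_dev(
      		const struct ore_components *oc, unsigned dev_index)
      	{
      		/* an out-of-range logical index is a caller bug */
      		BUG_ON((dev_index < oc->first_dev) ||
      		       (oc->first_dev + oc->numdevs <= dev_index));
      		return oc->ods[dev_index - oc->first_dev]->od;
      	}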
      Signed-off-by: Boaz Harrosh <bharrosh@panasas.com>
    • ore: Support for short read/writes · bbf9a31b
      Boaz Harrosh authored
      
      
      Memory conditions and max_bio constraints might cause us to
      not comply with the full length of the requested IO. Instead of
      failing the complete IO we can issue a shorter read/write and
      report how much was actually executed in the ios->length
      member.
      
      All users must check ios->length at IO_done or upon return of
      ore_read/write, and re-issue the remainder of the bytes, because
      otherwise no error is returned like before. (A sketch of such a
      loop follows.)
      
      This is part of the effort to support the pnfs-obj layout driver.
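      
      A hedged sketch of the re-issue loop a user now needs (helper usage is
      illustrative; real callers also attach their pages to the ios before
      issuing):
      
      	static int ore_write_all_sketch(struct ore_layout *layout,
      					struct ore_components *oc,
      					u64 offset, u64 length)
      	{
      		while (length) {
      			struct ore_io_state *ios;
      			int ret;
      
      			ret = ore_get_rw_state(layout, oc, false /* write */,
      					       offset, length, &ios);
      			if (unlikely(ret))
      				return ret;
      
      			/* real code sets ios->pages / ios->nr_pages here */
      
      			ret = ore_write(ios);
      			if (likely(!ret)) {
      				/* the ORE may have trimmed the request */
      				offset += ios->length;
      				length -= ios->length;
      			}
      			ore_put_io_state(ios);
      			if (unlikely(ret))
      				return ret;
      		}
      		return 0;
      	}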
      Signed-off-by: Boaz Harrosh <bharrosh@panasas.com>
    • ore: Remove check for ios->kern_buff in _prepare_for_striping to later · 6851a5e5
      Boaz Harrosh authored
      
      
      Move the check for, and preparation of, the ios->kern_buff case to
      later, inside _write_mirror().
      
      Since read was never used with ios->kern_buff, its support is removed
      instead of fixed.
      Signed-off-by: Boaz Harrosh <bharrosh@panasas.com>
    • ore: cleanup: Embed an ore_striping_info inside ore_io_state · 98260754
      Boaz Harrosh authored
      
      
      Now that each ore_io_state covers only a single raid group,
      only a single striping_info calculation is needed. Embed one inside
      ore_io_state to cache the calculation results and eliminate
      an extra call.
      
      Also the outer _prepare_for_striping is removed, since it does nothing.
      Signed-off-by: Boaz Harrosh <bharrosh@panasas.com>
    • ore: Only IO one group at a time (API change) · b916c5cd
      Boaz Harrosh authored
      
      
      Usually a single IO is confined to one group of devices
      (group_width), and at the boundary of a raid group it can
      spill into a second group. Current code would allocate a
      full device_table-sized array at each io_state so it could
      comply with requests that span two groups. Needless to say
      that is very wasteful, especially when the device_table count
      can get very large (hundreds, even thousands), while a
      group_width is usually 8 or 10.
      
      * Change the ore API to trim an IO that spans two raid groups.
        The user passes offset+length to ore_get_rw_state; the
        ore might trim on that length if spanning a group boundary.
        The user must check ios->length or ios->nrpages to see
        how much IO will be performed. It is the responsibility
        of the user to re-issue the remainder of the IO.
      
      * Modify exofs to copy spilled pages onto the next IO.
        This means one last kick is needed after all coalescing
        of pages is done.
      Signed-off-by: Boaz Harrosh <bharrosh@panasas.com>
  17. 04 Oct, 2011 1 commit
    • ore/exofs: Change the type of the devices array (API change) · d866d875
      Boaz Harrosh authored
      
      
      In the pNFS obj-LD the device table at the layout level needs
      to point to a device_cache node, where it is possible and likely
      that many layouts will point to the same device-nodes.
      
      In Exofs we have a more orderly structure, where we have a single
      array of devices that repeats twice, for a round-robin view of the
      device table.
      
      This patch moves to a model that can be used by the pNFS obj-LD
      where struct ore_components holds an array of ore_dev-pointers.
      (ore_dev is newly defined and contains a struct osd_dev *od
       member)
      
      Each pointer in the array of pointers will point to a bigger
      user-defined dev_struct, which can be accessed by use of the
      container_of macro (see the sketch below).
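      
      A hedged sketch of that embedding pattern (the extra exofs_dev members
      are illustrative):
      
      	struct ore_dev {
      		struct osd_dev *od;
      	};
      
      	struct exofs_dev {
      		struct ore_dev ored;	/* the part the ORE sees */
      		unsigned did;		/* illustrative user state */
      	};
      
      	static inline struct exofs_dev *exofs_dev_from_ored(
      		struct ore_dev *ored)
      	{
      		/* recover the containing user-defined struct */
      		return container_of(ored, struct exofs_dev, ored);
      	}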
      
      In Exofs an __alloc_dev_table() function allocates the
      ore_dev-pointers array as well as an exofs_dev array, in one
      allocation and does the addresses dance to set everything pointing
      correctly. It still keeps the double allocation trick for the
      inodes round-robin view of the table.
      
      The device table is always allocated dynamically, also for the
      single device case. So it is unconditionally freed at umount.
      Signed-off-by: Boaz Harrosh <bharrosh@panasas.com>
  18. 03 Oct, 2011 3 commits
  19. 07 Aug, 2011 5 commits
    • ore: Make ore its own module · cf283ade
      Boaz Harrosh authored
      
      
      Export everything from ore that needs exporting. Change Kbuild and Kconfig
      to build ore.ko as an independent module. Import ore from exofs.
      Signed-off-by: Boaz Harrosh <bharrosh@panasas.com>
    • exofs: Rename raid engine from exofs/ios.c => ore · 8ff660ab
      Boaz Harrosh authored
      
      
      ORE stands for "Objects Raid Engine"
      
      This patch is a mechanical rename of everything that was in ios.c
      and its API declaration to an ore.c and an osd_ore.h header. The ore
      engine will later be used by the pnfs objects layout driver.
      
      * File ios.c => ore.c
      
      * Declaration of types and API are moved from exofs.h to a new
        osd_ore.h
      
      * All used types are prefixed by ore_ from their exofs_ name.
      
      * Shift includes from exofs.h to osd_ore.h so osd_ore.h is
        independent; include it from exofs.h.
      
      Other than a pure rename there are no other changes. The next patch
      will move the ore into its own module and will export the API
      to be used by exofs, and later the layout driver.
      Signed-off-by: Boaz Harrosh <bharrosh@panasas.com>
    • exofs: ios: Move to a per inode components & device-table · 9e9db456
      Boaz Harrosh authored
      
      
      The Exofs raid engine was saving on memory space by having a single
      layout-info, a single pid, and a single device-table, global to the
      filesystem, then passing credential and object_id info at the io_state
      level, private for each inode. It would also devise this contraption of
      rotating the device-table view for each inode->ino, to spread out the
      device usage.
      
      This is not compatible with the pnfs-objects standard, which demands that
      each inode can have its own layout-info and device-table, and each object
      component its own pid, oid and creds.
      
      So: Bring the exofs raid engine to be usable for generic pnfs-objects use by:
      
      * Define an exofs_comp structure that holds obj_id and credential info.
      
      * Break up the exofs_layout struct into an exofs_components structure that
        holds a possible array of exofs_comp and the array of devices + the size
        of the arrays (sketched after this list).
      
      * Add a "comps" parameter to get_io_state() that specifies the ids, creds
        and device array to use for each IO.
      
        This enables keeping the layout global, but the device-table view, creds
        and IDs at the inode level. It only adds two 64-bit words to each inode,
        since some of these members already existed in another form.
      
      * The ios raid engine now accesses layout-info and comps-info through the
        passed pointers. Everything is pre-prepared by the caller for generic
        access of these structures and arrays.
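      
      A hedged sketch of the split described above (member names are
      reconstructed from the description; the size fields are assumptions):
      
      	/* per-object id + credential info */
      	struct exofs_comp {
      		struct osd_obj_id	obj;	/* pid + oid */
      		u8			cred[OSD_CAP_LEN];
      	};
      
      	struct exofs_components {
      		unsigned		numcomps;	/* size of comps[] */
      		unsigned		numdevs;	/* size of ods[] */
      		struct exofs_comp	*comps;
      		struct osd_dev		**ods;	/* device-table view */
      	};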
      
      At the exofs level:
      
      * The super block holds an exofs_components struct that holds the device
        array, previously in the layout. The devices there are in device-table
        order. The device array is twice as big and repeats the device-table
        twice, so now each inode's device array can point to a random device
        and have a round-robin view of the table, making it compatible with
        previous exofs versions.
      
      * Each inode has an exofs_components struct that is initialized at
        load time, with its own view of the device-table IDs and creds.
        When doing IO this gets passed to the io_state together with the
        layout.
      
      While performing this change, bugs were found where credentials with the
      wrong IDs were used to access the different SB objects (super.c), as well
      as some dead code. It was never noticed because the target we use does not
      check the credentials.
      Signed-off-by: Boaz Harrosh <bharrosh@panasas.com>
    • exofs: Move exofs specific osd operations out of ios.c · 85e44df4
      Boaz Harrosh authored
      
      
      ios.c will be moving to an external library, for use by the
      objects-layout-driver. Remove from it some exofs-specific functions.
      
      Also, g_attr_logical_length is used by both inode.c and ios.c;
      move the definition to the latter, to keep it independent.
      Signed-off-by: Boaz Harrosh <bharrosh@panasas.com>
    • exofs: Add offset/length to exofs_get_io_state · e1042ba0
      Boaz Harrosh authored
      
      
      In future raid code we will need to know the IO offset/length,
      and whether it's a read or write, to determine some of the array
      sizes we'll need.
      
      So add a new exofs_get_rw_state() API for use when
      writing/reading. All other simple cases are left using the
      old way.
      
      The major change is that now we need to call
      exofs_get_io_state later, at inode.c::read_exec and
      inode.c::write_exec, when we actually know these things. So this
      patch is kept separate so I can test things apart from other
      changes.
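      
      A hedged sketch of the new call's shape (the signature is reconstructed
      from the description; parameters may differ):
      
      	int exofs_get_rw_state(struct exofs_layout *layout, bool is_reading,
      			       u64 offset, u64 length,
      			       struct exofs_io_state **ios);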
      Signed-off-by: Boaz Harrosh <bharrosh@panasas.com>
  20. 04 Aug, 2011 1 commit
    • exofs: Fix truncate for the raid-groups case · 16f75bb3
      Boaz Harrosh authored
      
      
      In the general raid-group case the truncate was wrong, in that
      it did not also fix the object length of the neighboring groups.
      
      There are two bad cases in the old code:
      1. Space that should be freed was not.
      2. If a file that was big is truncated small, then made bigger
         again, the holes would not contain zeros but could expose old data
         (if the growing of the file expands to more than a full
          group's cycle + group size (> S + T)).
      Signed-off-by: Boaz Harrosh <bharrosh@panasas.com>
  21. 25 Oct, 2010 1 commit
  22. 09 Aug, 2010 1 commit