Lines Matching refs:vdev

163 * to get the vdev stats associated with the imported devices.
485 * Make sure the vdev config is bootable
764 * the root vdev's guid, our own pool guid, and then mark all of our
1094 offsetof(struct vdev, vdev_txg_node));
1174 * Verify a pool configuration, and construct the vdev tree appropriately. This
1175 * will create all the necessary vdevs in the appropriate layout, with each vdev
1177 * All vdev validation is done by the vdev_alloc() routine.
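The hits at 1174-1177 describe the recursive config parser. A minimal standalone sketch of that pattern follows; the types and names are hypothetical stand-ins (calloc plays the role of the vdev_alloc() validation step), not the ZFS API:

    #include <stdlib.h>

    /* Toy stand-ins for ZFS's nvlist config and vdev_t (names hypothetical). */
    struct cfg_node { struct cfg_node **children; int nchildren; };
    struct vdev_node {
        struct vdev_node *parent;
        struct vdev_node **child;
        int id;
        int nchildren;
    };

    /*
     * Mirrors the shape described at 1174-1177: allocate (and thereby
     * validate) this vdev, then recurse over the config's children,
     * wiring up parent pointers and child ids as we go.
     */
    static struct vdev_node *
    parse_config(const struct cfg_node *cfg, struct vdev_node *parent, int id)
    {
        struct vdev_node *vd = calloc(1, sizeof (*vd));
        if (vd == NULL)
            return (NULL);          /* the vdev_alloc() step failed */
        vd->parent = parent;
        vd->id = id;
        if (cfg->nchildren == 0)
            return (vd);            /* leaf vdev: nothing below it */
        vd->child = calloc(cfg->nchildren, sizeof (*vd->child));
        if (vd->child == NULL) {
            free(vd);
            return (NULL);
        }
        vd->nchildren = cfg->nchildren;
        for (int c = 0; c < cfg->nchildren; c++)
            vd->child[c] = parse_config(cfg->children[c], vd, c);
        return (vd);
    }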
1381 * for basic validation purposes) and one in the active vdev
1383 * validate each vdev on the spare list. If the vdev also exists in the
1384 * active configuration, then we also mark this vdev as an active spare.
1402 * able to load the vdev. Otherwise, importing a pool
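The hits at 1381-1402 explain that an in-use hot spare appears twice in a config: once in the pool's spare list (for validation) and once in the active vdev tree. A minimal sketch of the membership test, with toy guid arrays standing in for the real structures:

    #include <stdint.h>
    #include <stdbool.h>

    /*
     * A spare that also appears among the active config's guids is
     * in use and should be marked as an active spare.
     */
    static bool
    spare_is_active(uint64_t spare_guid, const uint64_t *active, int nactive)
    {
        for (int i = 0; i < nactive; i++)
            if (active[i] == spare_guid)
                return (true);
        return (false);
    }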
1492 * Retain previous vdev for add/remove ops.
1502 * Create new vdev
1510 * Commit this vdev as an l2cache device,
1604 * Checks to see if the given vdev could not be opened, in which case we post a
1687 * Compare the root vdev tree with the information we have
1688 * from the MOS config (mrvd). Check each top-level vdev
1698 * about the top-level vdev then use that vdev instead.
1722 * Swap the missing vdev with the data we were
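The hits at 1687-1722 describe reconciling the vdev tree built from device labels with the one recorded in the MOS config (mrvd), slot by slot. A toy sketch of the swap, assuming a hypothetical 'missing' flag in place of the real vdev state:

    /* Toy top-level vdev; the real comparison walks vdev_t children of
     * rvd (label config) and mrvd (MOS config) slot by slot. */
    struct tvd { int missing; };

    static void
    swap_missing_tvds(struct tvd **rvd_child, struct tvd **mrvd_child, int n)
    {
        for (int c = 0; c < n; c++) {
            /* The label copy knows nothing about this top-level vdev,
             * but the MOS copy does: prefer the MOS copy. */
            if (rvd_child[c]->missing && !mrvd_child[c]->missing) {
                struct tvd *tmp = rvd_child[c];
                rvd_child[c] = mrvd_child[c];
                mrvd_child[c] = tmp;
            }
        }
    }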
1749 * Per-vdev ZAP info is stored exclusively in the MOS.
2031 spa_vdev_err(vdev_t *vdev, vdev_aux_t aux, int err)
2033 vdev_set_state(vdev, B_TRUE, VDEV_STATE_CANT_OPEN, aux);
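Hits 2031 and 2033 are the signature and body of a small load-error helper; in the illumos spa.c the surrounding function is essentially just a state change plus error passthrough:

    static int
    spa_vdev_err(vdev_t *vdev, vdev_aux_t aux, int err)
    {
            vdev_set_state(vdev, B_TRUE, VDEV_STATE_CANT_OPEN, aux);
            return (err);
    }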
2046 * we do a reopen() call. If the vdev label for every disk that was
2076 if (glist[i] == 0) /* vdev is hole */
2190 * Count the number of per-vdev ZAPs associated with all of the vdevs in the
2191 * vdev tree rooted in the given vd, and ensure that each ZAP is present in the
2192 * spa's per-vdev ZAP list.
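The hits at 2190-2192 describe a consistency check: walk the vdev tree, count the ZAPs each vdev carries, and verify each one is present in the spa's per-vdev ZAP list. A standalone sketch of the recursive count, with toy types in place of vdev_t:

    #include <stdint.h>

    /* Toy vdev with optional top-level and leaf ZAP object ids;
     * zero means "no ZAP allocated". Names are hypothetical. */
    struct vd {
        uint64_t top_zap;
        uint64_t leaf_zap;
        struct vd **child;
        int nchildren;
    };

    /* Recursively count every per-vdev ZAP in the tree rooted at vd;
     * the real check then verifies each is present in the spa's list. */
    static int
    count_vdev_zaps(const struct vd *vd)
    {
        int n = (vd->top_zap != 0) + (vd->leaf_zap != 0);
        for (int c = 0; c < vd->nchildren; c++)
            n += count_vdev_zaps(vd->child[c]);
        return (n);
    }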
2266 * Parse the configuration into a vdev tree. We explicitly set the
2295 * We need to validate the vdev labels against the configuration that
2297 * mosconfig is true then we're validating the vdev labels based on
2301 * the vdev config.
2396 * If the vdev guid sum doesn't match the uberblock, we have an
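Hit 2396 refers to the guid-sum check: the uberblock records a wrapping 64-bit sum of the pool's vdev guids, and a mismatch on load means a device is missing, extra, or from another pool. A toy version of the sum:

    #include <stdint.h>

    struct gvd { uint64_t guid; struct gvd **child; int nchildren; };

    /*
     * Wrapping 64-bit sum of every vdev guid in the tree; ZFS
     * compares this against the value stored in the uberblock
     * (ub_guid_sum).
     */
    static uint64_t
    guid_sum(const struct gvd *vd)
    {
        uint64_t sum = vd->guid;
        for (int c = 0; c < vd->nchildren; c++)
            sum += guid_sum(vd->child[c]);
        return (sum);
    }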
2646 * Load the per-vdev ZAP map. If we have an older pool, this will not
2666 * we have orphaned per-vdev ZAPs in the MOS. Defer their
2764 * Load the vdev state for all toplevel vdevs.
2806 * root vdev. If it can't be opened, it indicates one or
3045 * The stats information (gen/count/ustats) is used to gather vdev statistics at
3113 * information: the state of each vdev after the
3250 * Add l2cache device information to the nvlist, including vdev stats.
3415 * array of nvlists, each of which describes a valid leaf vdev. If this is an
3660 * Create the root vdev.
3856 * Add this top-level vdev to the child array.
3865 * Put this pool's top-level vdevs into a root vdev.
3876 * Replace the existing vdev_tree with the new root vdev in
3885 * Walk the vdev tree and see if we can find a device with "better"
3921 * the vdev (e.g. "id1,sd@SSEAGATE..." or "/pci@1f,0/ide@d/disk@0,0:a").
3922 * The GRUB "findroot" command will return the vdev we should boot.
3976 * Build up a vdev tree based on the boot device's label config.
3993 * Get the boot vdev.
3996 cmn_err(CE_NOTE, "Can not find the boot vdev for guid %llu",
4015 * If the boot device is part of a spare vdev then ensure that
4514 * Transfer each new top-level vdev from vd to rvd.
4519 * Set the vdev id to the first hole, if one exists.
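Hit 4519 ("Set the vdev id to the first hole, if one exists") is the slot-reuse rule when transferring a new top-level vdev into the root: a previously removed child leaves a hole whose index gets recycled. A sketch of the scan, with a hypothetical is_hole array standing in for checking each child's hole state:

    /* Return the index of the first hole in a top-level child array,
     * or nchildren if there is none (i.e. append at the end). */
    static int
    first_hole_id(const int *is_hole, int nchildren)
    {
        for (int c = 0; c < nchildren; c++)
            if (is_hole[c])
                return (c);
        return (nchildren);
    }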
4574 * a device that is not mirrored, we automatically insert the mirror vdev.
4578 * mirror using the 'replacing' vdev, which is functionally identical to
4579 * the mirror vdev (it actually reuses all the same ops) but has a few
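The hits at 4574-4579 note that a 'replacing' vdev is a mirror in all but name, reusing the same ops. A toy ops-table illustration of that reuse (ZFS's vdev_ops_t plays this role; the structure and function names below are hypothetical):

    struct toy_ops {
        int (*op_open)(void);
        void (*op_close)(void);
        const char *op_type;
    };

    static int  mirror_open(void)  { return (0); }
    static void mirror_close(void) { }

    /* Two vdev types, one implementation: only the type name differs. */
    const struct toy_ops mirror_ops    = { mirror_open, mirror_close, "mirror" };
    const struct toy_ops replacing_ops = { mirror_open, mirror_close, "replacing" };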
4633 * vdev.
4653 * want to create a replacing vdev. The user is not allowed to
4654 * attach to a spared vdev child unless the 'isspare' state is
4680 * than the top-level vdev.
4706 * mirror/replacing/spare vdev above oldvd.
4770 spa_history_log_internal(spa, "vdev attach", NULL,
4771 "%s vdev=%s %s vdev=%s",
4783 * Detach a device from a mirror or replacing vdev.
4786 * is a replacing vdev.
4816 * vdev that's replacing B with C. The user's intent in replacing
4824 * that C's parent is still the replacing vdev R.
4857 * If we are detaching the second disk from a replacing vdev, then
4858 * check to see if we changed the original vdev's path to have "/old"
4913 * do it now, marking the vdev as no longer a spare in the process.
4929 * If the parent mirror/replacing vdev only has one child,
4941 * may have been the previous top-level vdev.
4947 * Reevaluate the parent vdev state.
4955 * add metaslabs (i.e. grow the pool). We need to reopen the vdev
4985 "vdev=%s", vdpath);
4989 * If this was the removal of the original device in a hot spare vdev,
5036 vdev_t *rvd, **vml = NULL; /* vdev modify list */
5093 /* then, loop over each vdev and validate it */
5153 /* transfer per-vdev ZAPs */
5266 "vdev=%s", vml[c]->vdev_path);
5389 * associated with this vdev, and wait for these changes to sync.
5439 * Reassess the health of our root vdev.
5447 * Removing a device from the vdev namespace requires several steps
5514 * Stop allocating from this vdev.
5526 * Attempt to evacuate the vdev.
5533 * If we couldn't evacuate the vdev, unwind.
5541 * Clean up the vdev namespace.
5553 * There is no vdev of any kind with the specified guid.
5568 * Find any device that's done replacing, or a vdev marked 'unspare' that's
5584 * vdev in the list to be the oldest vdev, and the last one to be
5586 * the case where the newest vdev is faulted, we will not automatically
5685 * Update the stored path or FRU for this vdev.
5875 spa_history_log_internal(spa, "vdev online", NULL,
6121 * Rebuild spa's all-vdev ZAP from the vdev ZAPs indicated in each vdev_t.
6122 * The all-vdev ZAP must be empty.
6147 * If the pool is being imported from a pre-per-vdev-ZAP version of ZFS,
6148 * its config may not be dirty but we still need to build per-vdev ZAPs.
6339 * to do this for pool creation since the vdev's
6522 * If there are any pending vdev state changes, convert them
6531 * eliminate the aux vdev wart by integrating all vdevs
6532 * into the root vdev tree.
6573 * Set the top-level vdev's max queue depth. Evaluate each
6673 * the number of ZAPs in the per-vdev ZAP list. This only gets
6686 * Rewrite the vdev configuration (which includes the uberblock)
6696 * We hold SCL_STATE to prevent vdev open/close/etc.
6697 * while we're attempting to write the vdev labels.
7009 * filled in from the spa and (optionally) the vdev. This doesn't do anything