Lines Matching refs:vdev

66 /* maximum scrub/resilver I/O queue per leaf vdev */
70 * When a vdev is added, it will be divided into approximately (but no
76 * Given a vdev type, return the appropriate ops vector.
110 * the vdev's asize rounded to the nearest metaslab. This allows us to
127 * The top-level vdev just returns the allocatable size rounded
134 * The allocatable space for a raidz vdev is N * sizeof(smallest child),
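The raidz rule above reduces to a few lines of C. A minimal sketch, assuming a hypothetical helper name (the real computation lives in the raidz ops vector's asize routine):

/*
 * Illustrative only: allocatable raidz space is N times the
 * smallest child, so any excess on larger disks is unused.
 */
static uint64_t
raidz_asize_sketch(const uint64_t *child_asize, uint64_t children)
{
        uint64_t smallest = child_asize[0];

        for (uint64_t c = 1; c < children; c++) {
                if (child_asize[c] < smallest)
                        smallest = child_asize[c];
        }
        return (smallest * children);
}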
153 vdev_lookup_top(spa_t *spa, uint64_t vdev)
159 if (vdev < rvd->vdev_children) {
160 ASSERT(rvd->vdev_child[vdev] != NULL);
161 return (rvd->vdev_child[vdev]);
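The fragment above is nearly the whole function. Filled in as a sketch against the fields shown (the real vdev_lookup_top() additionally asserts that the spa config lock is held):

vdev_t *
vdev_lookup_top(spa_t *spa, uint64_t vdev)
{
        vdev_t *rvd = spa->spa_root_vdev;

        if (vdev < rvd->vdev_children) {
                ASSERT(rvd->vdev_child[vdev] != NULL);
                return (rvd->vdev_child[vdev]);
        }
        return (NULL);
}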
327 * The root vdev's guid will also be the pool guid,
333 * Any other vdev's guid must be unique within the pool.
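A hedged sketch of the two guid rules just stated; vdev_guid_ok_sketch() is a hypothetical helper, though spa_guid() and vdev_lookup_by_guid() are real spa/vdev routines:

static boolean_t
vdev_guid_ok_sketch(spa_t *spa, vdev_t *vd, uint64_t guid)
{
        /* The root vdev must carry the pool guid itself. */
        if (vd == spa->spa_root_vdev)
                return (guid == spa_guid(spa));

        /* Every other guid must be unique within the vdev tree. */
        return (vdev_lookup_by_guid(spa->spa_root_vdev, guid) == NULL);
}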
359 offsetof(struct vdev, vdev_dtl_node));
368 * Allocate a new vdev. The 'alloctype' is used to control whether we are
369 * creating a new vdev or loading an existing one - the behavior is slightly
390 * If this is a load, get the vdev guid from the nvlist.
414 * The first allocated vdev must be of type 'root'.
420 * Determine whether we're a log vdev.
502 * Retrieve the vdev creation time.
508 * If we're a top-level vdev, try to load the allocation parameters.
544 * If we're a leaf vdev, try to load the DTL object and other state.
614 * vdev_free() implies closing the vdev first. This is simpler than
644 * Remove this vdev from its parent's child list.
651 * Clean up vdev structure.
693 * Transfer top-level vdev state from svd to tvd.
773 * Add a mirror/replacing vdev above an existing vdev.
806 * Remove a 1-way mirror/replacing vdev from the tree.
826 * If cvd will replace mvd as a top-level vdev, preserve mvd's guid.
829 * instead of a different version of the same top-level vdev.
862 * This vdev is not being allocated from yet or is a hole.
911 * If the vdev is being removed we don't activate
1007 * vdev label but the first, which we leave alone in case it contains
1028 * this vdev will become parents of the probe io.
1066 * We can't change the vdev state in this context, so we
1175 * If this vdev is not removed, check its fault status. If it's
1195 * the vdev on error.
1215 * the vdev is accessible. If we're faulted, bail.
1321 * vdev open for business.
1342 * If a leaf vdev has a DTL, and seems healthy, then kick off a
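A condensed sketch of that kick-off, using vdev.c names but simplified; the real check also skips reopens issued by an in-progress scrub so the scrub is not restarted:

        if (vd->vdev_ops->vdev_op_leaf &&
            vdev_resilver_needed(vd, NULL, NULL))
                spa_async_request(spa, SPA_ASYNC_RESILVER);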
1360 * to all of the vdev labels, but not the cached config. The strict check
1365 * /etc/zfs/zpool.cache was readonly at the time. Otherwise, the vdev state
1398 * Determine if this vdev has been split off into another
1424 * If this vdev just became a top-level vdev because its
1426 * vdev guid -- but the label may or may not be on disk yet.
1428 * same top guid, so if we're a top-level vdev, we can
1431 * If we split this vdev off instead, then we also check the
1432 * original pool's guid. We don't want to consider the vdev
1467 * If we were able to open and validate a vdev that was
1556 /* set the reopening flag unless we're taking the vdev offline */
1577 * Reassess parent vdev's health.
1616 * Aim for roughly metaslabs_per_vdev (default 200) metaslabs per vdev.
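A sketch of that sizing rule, assuming power-of-two metaslab sizes; highbit64() and SPA_MAXBLOCKSHIFT are real illumos names, the helper is not:

static void
metaslab_shift_sketch(vdev_t *vd)
{
        /* ~200 metaslabs per vdev, but never smaller than one max block. */
        vd->vdev_ms_shift = highbit64(vd->vdev_asize / 200);
        vd->vdev_ms_shift = MAX(vd->vdev_ms_shift, SPA_MAXBLOCKSHIFT);
}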
1652 * A vdev's DTL (dirty time log) is the set of transaction groups for which
1653 * the vdev has less than perfect replication. There are four kinds of DTL:
1655 * DTL_MISSING: txgs for which the vdev has no valid copies of the data
1673 * A vdev's DTL_PARTIAL is the union of its children's DTL_PARTIALs, because
1675 * A vdev's DTL_MISSING is a modified union of its children's DTL_MISSINGs,
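To make the two union rules concrete, a hedged sketch built on the real vdev_dtl_contains() predicate; the helper and its parity test are illustrative simplifications for a raidz parent:

static boolean_t
dtl_missing_sketch(vdev_t *vd, uint64_t txg)
{
        uint64_t missing = 0;

        for (uint64_t c = 0; c < vd->vdev_children; c++) {
                if (vdev_dtl_contains(vd->vdev_child[c], DTL_MISSING,
                    txg, 1))
                        missing++;
        }

        /*
         * DTL_PARTIAL would test (missing > 0): one absent child
         * already means less than perfect replication. DTL_MISSING
         * needs more children absent than parity can reconstruct.
         */
        return (missing > vd->vdev_nparity);
}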
1765 * Determine if a resilvering vdev should remove any DTL entries from
1766 * its range. If the vdev was resilvering for the entire duration of the
1768 * vdev is considered partially resilvered and should leave its DTL
1827 * if this vdev should remove any DTLs. We only want to
1875 * If the vdev was resilvering and no longer has any
2085 * Determine whether the specified vdev can be offlined/detached/removed
2103 * whether this results in any DTL outages in the top-level vdev.
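The outage check described above amounts to a thought experiment in code: mark the device unreadable, reassess, and see whether a top-level outage appears. A sketch shaped like the real vdev_dtl_required(), with locking and special cases omitted:

static boolean_t
dtl_required_sketch(vdev_t *vd)
{
        vdev_t *tvd = vd->vdev_top;
        boolean_t cant_read = vd->vdev_cant_read;
        boolean_t required;

        /* Pretend the device is gone, then look for an outage. */
        vd->vdev_cant_read = B_TRUE;
        vdev_dtl_reassess(tvd, 0, 0, B_FALSE);
        required = !vdev_dtl_empty(tvd, DTL_OUTAGE);

        /* Restore reality. */
        vd->vdev_cant_read = cant_read;
        vdev_dtl_reassess(tvd, 0, 0, B_FALSE);

        return (required);
}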
2168 * If this is a top-level vdev, initialize its metaslabs.
2177 * If this is a leaf vdev, load its DTL.
2185 * The special vdev case is used for hot spares and l2cache devices. Its
2186 * sole purpose is to set the vdev state for the associated vdev. To do this,
2251 * If the metaslab was not loaded when the vdev
2321 * Remove the metadata associated with this vdev once it's empty.
2344 * Mark the given vdev faulted. A faulted vdev behaves as if the device could
2379 * back off and simply mark the vdev as degraded instead.
2399 * Mark the given vdev degraded. A degraded vdev is purely an indication to the
2400 * user that something is wrong. The vdev continues to operate as normal as far
2417 * If the vdev is already faulted, then don't do anything.
2431 * Online the given vdev.
2534 * then proceed. We check that the vdev's metaslab group
2536 * added this vdev but not yet initialized its metaslabs.
2564 * Offline this device and reopen its top-level vdev.
2565 * If the top-level vdev is a log device then just offline
2567 * vdev becoming unusable, undo it and fail the request.
2605 * Clear the error counts associated with this vdev. Unlike vdev_online() and
2629 * also mark the vdev config dirty, so that the new faulted state is
2705 * the proper locks. Note that we have to get the vdev state
2732 * Get statistics for the given vdev.
2764 * If we're getting stats on the root vdev, aggregate the I/O counts
2830 * (Holes never create vdev children, so all the counters
2836 * one top-level vdev does not imply a root-level error.
2946 * Update the in-core space usage stats for this vdev, its metaslab class,
2947 * and the root vdev.
2963 * factor. We must calculate this here and not at the root vdev
2964 * because the root vdev's psize-to-asize is simply the max of its
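A sketch of the deflation step described here, with the surrounding accounting stripped away; the field and macro names follow vdev.c:

static int64_t
deflate_sketch(vdev_t *vd, int64_t space_delta)
{
        /*
         * Scale a raw space delta by this top-level vdev's own
         * psize-to-asize ratio before rolling it up to the root.
         */
        return ((space_delta >> SPA_MINBLOCKSHIFT) *
            vd->vdev_deflate_ratio);
}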
2996 * Mark a top-level vdev's config as dirty, placing it on the dirty list
2997 * so that it will be written out next time the vdev configuration is synced.
2998 * If the root vdev is specified (vdev_top == NULL), dirty all top-level vdevs.
3010 * If this is an aux vdev (as with l2cache and spare devices), then we
3011 * update the vdev config manually and set the sync flag.
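A sketch of the non-aux path just described, following the real list-based mechanism (locking assertions and the aux/sync handling omitted):

static void
config_dirty_sketch(vdev_t *vd)
{
        spa_t *spa = vd->vdev_spa;

        if (vd == spa->spa_root_vdev) {
                /* Dirtying the root dirties every top-level vdev. */
                for (uint64_t c = 0; c < vd->vdev_children; c++)
                        config_dirty_sketch(vd->vdev_child[c]);
        } else if (!list_link_active(&vd->vdev_config_dirty_node)) {
                list_insert_head(&spa->spa_config_dirty_list, vd);
        }
}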
3087 * Mark a top-level vdev's state as dirty, so that the next pass of
3128 * Propagate vdev state up from children to parent.
3153 * device, treat the root vdev as if it were
3171 * Root special: if there is a top-level vdev that cannot be
3173 * vdev's aux state as 'corrupt' rather than 'insufficient
3187 * Set a vdev's state. If this is during an open, we don't update the parent
3211 * If we are setting the vdev state to anything but an open state, then
3225 * If we have brought this vdev back into service, we need
3231 * double-check the state of the vdev before repairing it.
3255 * If we fail to open a vdev during an import or recovery, we
3322 * Check the vdev configuration to ensure that it's capable of supporting
3324 * In addition, only a single top-level vdev is allowed.
3348 * Load the state from the original vdev tree (ovd) which
3350 * vdev was offline or faulted then we transfer that state to the
3351 * device in the current vdev tree (nvd).
3367 * Restore the persistent vdev state
3377 * Determine if a log device has valid content. If the vdev was
3396 * Expand a vdev if possible.
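Expansion, as described, only needs to grow the metaslab array once the new asize covers more metaslabs than are initialized; a sketch shaped like the real vdev_expand():

void
vdev_expand_sketch(vdev_t *vd, uint64_t txg)
{
        if ((vd->vdev_asize >> vd->vdev_ms_shift) > vd->vdev_ms_count) {
                VERIFY(vdev_metaslab_init(vd, txg) == 0);
                vdev_config_dirty(vd);
        }
}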
3411 * Split a vdev.