Lines Matching refs:zone

30  *   A zone is a named collection of processes, namespace constraints,
36 * (zoneid_t) are used to track zone association. Zone IDs are
37 * dynamically generated when the zone is created; if a persistent
39 * etc.), the zone name should be used.
44 * The global zone (zoneid 0) is automatically associated with all
45 * system resources that have not been bound to a user-created zone.
47 * have a global zone, and all processes, mounts, etc. are
48 * associated with that zone. The global zone is generally
55 * The states a zone may be in and the transitions between them are as
58 * ZONE_IS_UNINITIALIZED: primordial state for a zone. The partially
59 * initialized zone is added to the list of active zones on the system but
63 * not yet completed. Not possible to enter the zone, but attributes can
66 * ZONE_IS_READY: zsched (the kernel dummy process for a zone) is
67 * ready. The zone is made visible after the ZSD constructor callbacks are
68 * executed. A zone remains in this state until it transitions into
72 * init. Should that fail, the zone proceeds to the ZONE_IS_SHUTTING_DOWN
75 * ZONE_IS_RUNNING: The zone is open for business: zsched has
76 * successfully started init. A zone remains in this state until
80 * killing all processes running in the zone. The zone remains
81 * in this state until there are no more user processes running in the zone.
82 * zone_create(), zone_enter(), and zone_destroy() on this zone will fail.
84 * multiple times for the same zone_t. Setting of the zone's state to
86 * the zone's status without worrying about it being a moving target.
89 * are no more user processes in the zone. The zone remains in this
91 * zone. zone_create(), zone_enter(), and zone_destroy() on this zone will
94 * ZONE_IS_DOWN: All kernel threads doing work on behalf of the zone
96 * join the zone or create kernel threads therein.
98 * ZONE_IS_DYING: zone_destroy() has been called on the zone; zone
103 * processes or threads doing work on behalf of the zone. The zone is
105 * the zone can be recreated.
108 * callbacks are executed, and all memory associated with the zone is
111 * Threads can wait for the zone to enter a requested state by using
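The state machine sketched in the comments above (a monotonically advancing status, with threads blocking until the zone reaches at least a requested state) can be modeled in user space. The following is a hedged sketch using POSIX threads; the names (`fake_zone_t`, `fake_status_set`, `fake_status_wait`) are illustrative analogs, not the kernel's `zone_status_set()`/`zone_status_wait()`:

```c
#include <pthread.h>

/* Illustrative user-space analog of the zone status machinery. */
typedef enum {
    Z_UNINITIALIZED, Z_INITIALIZED, Z_READY, Z_BOOTING, Z_RUNNING,
    Z_SHUTTING_DOWN, Z_EMPTY, Z_DOWN, Z_DYING, Z_DEAD
} zstatus_t;

typedef struct {
    pthread_mutex_t lock;   /* plays the role of zone_status_lock */
    pthread_cond_t  cv;     /* plays the role of zone->zone_cv */
    zstatus_t       status;
} fake_zone_t;

void fake_zone_init(fake_zone_t *z)
{
    pthread_mutex_init(&z->lock, NULL);
    pthread_cond_init(&z->cv, NULL);
    z->status = Z_UNINITIALIZED;
}

/* Status only moves forward; wake every waiter so each rechecks. */
void fake_status_set(fake_zone_t *z, zstatus_t s)
{
    pthread_mutex_lock(&z->lock);
    if (s > z->status)
        z->status = s;
    pthread_cond_broadcast(&z->cv);
    pthread_mutex_unlock(&z->lock);
}

/* Block until the zone has reached at least the requested state. */
void fake_status_wait(fake_zone_t *z, zstatus_t s)
{
    pthread_mutex_lock(&z->lock);
    while (z->status < s)
        pthread_cond_wait(&z->cv, &z->lock);
    pthread_mutex_unlock(&z->lock);
}
```

Because states never regress, a waiter that arrives after the broadcast simply finds `status >= s` and returns without sleeping.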
119 * Subsystems needing to maintain zone-specific data can store that
120 * data using the ZSD mechanism. This provides a zone-specific data
123 * to register callbacks to be invoked when a zone is created, shut
124 * down, or destroyed. This can be used to initialize zone-specific
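The ZSD mechanism described above amounts to a per-zone list of (key, value) entries, with keys allocated globally. A minimal single-threaded user-space sketch (names are hypothetical; the real `zone_setspecific()`/`zone_getspecific()` additionally take `zone_lock` and interact with the create/shutdown/destroy callback state machine):

```c
#include <stdlib.h>

/* Illustrative sketch of ZSD-style per-zone data. */
typedef unsigned long zsd_key_t;

struct zsd_entry {
    zsd_key_t         key;
    void             *data;
    struct zsd_entry *next;
};

struct fake_zone {
    struct zsd_entry *zsd;   /* plays the role of zone->zone_zsd */
};

static zsd_key_t next_key = 1;

/* Allocate a new global key (no callbacks in this sketch). */
zsd_key_t
fake_key_create(void)
{
    return (next_key++);
}

/* Set (or overwrite) the data bound to key in this zone. */
int
fake_setspecific(struct fake_zone *z, zsd_key_t key, void *data)
{
    struct zsd_entry *t;

    for (t = z->zsd; t != NULL; t = t->next) {
        if (t->key == key) {
            t->data = data;
            return (0);
        }
    }
    t = malloc(sizeof (*t));
    if (t == NULL)
        return (-1);
    t->key = key;
    t->data = data;
    t->next = z->zsd;
    z->zsd = t;
    return (0);
}

/* Look up the data bound to key, NULL if never set. */
void *
fake_getspecific(struct fake_zone *z, zsd_key_t key)
{
    struct zsd_entry *t;

    for (t = z->zsd; t != NULL; t = t->next)
        if (t->key == key)
            return (t->data);
    return (NULL);
}
```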
130 * The per-zone structure (zone_t) is reference counted, and freed
139 * zone_find_by_name. Both return zone_t pointers with the zone
142 * returns the zone with which a path name is associated (global
143 * zone if the path is not within some other zone's file system
144 * hierarchy). This currently requires iterating through each zone,
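The path-to-zone association described above boils down to checking whether some zone's root path is a prefix of the path in question, falling back to the global zone when no zone matches. A simplified sketch of just that prefix test (no locking, no held references; function and parameter names are illustrative):

```c
#include <string.h>

/*
 * Illustrative sketch: a path belongs to a zone when the zone's root
 * path (assumed to end in '/') is a prefix of it; otherwise it falls
 * back to the global zone, as described in the comment above.
 */
const char *
find_zone_by_path(const char *path, const char *const roots[],
    const char *const names[], int nzones)
{
    int i;

    for (i = 0; i < nzones; i++) {
        size_t len = strlen(roots[i]);

        if (strncmp(path, roots[i], len) == 0)
            return (names[i]);
    }
    return ("global");
}
```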
151 * zone hash tables and lists. Zones cannot be created or destroyed
153 * zone_status_lock: This is a global lock protecting zone state.
155 * protects the list of kernel threads associated with a zone.
156 * zone_lock: This is a per-zone lock used to protect several fields of
157 * the zone_t (see <sys/zone.h> for details). In addition, holding
158 * this lock means that the zone cannot go away.
159 * zone_nlwps_lock: This is a per-zone lock used to protect the fields
160 * related to the zone.max-lwps rctl.
161 * zone_mem_lock: This is a per-zone lock used to protect the fields
162 * related to the zone.max-locked-memory and zone.max-swap rctls.
163 * zone_rctl_lock: This is a per-zone lock used to protect other rctls,
178 * zone locks.
183 * The zone subsystem can be managed and queried from user level with
184 * the following system calls (all subcodes of the primary "zone"
186 * - zone_create: creates a zone with selected attributes (name,
188 * - zone_enter: allows the current process to enter a zone
189 * - zone_getattr: reports attributes of a zone
190 * - zone_setattr: set attributes of a zone
191 * - zone_boot: set 'init' running for the zone
193 * - zone_lookup: looks up zone id based on name
248 #include <sys/zone.h>
256 * subsystems to release a zone's general-purpose references will wait before
257 * they log the zone's reference counts. The constant's value shouldn't
265 /* List of data link IDs which are accessible from the zone */
273 * cv used to signal that all references to the zone have been released. This
275 * wake up will free the zone_t, hence we cannot use zone->zone_cv.
279 * Lock used to serialize access to zone_cv. This could have been per-zone,
293 * Global list of registered keys. We use this when a new zone is created.
304 * The global zone (aka zone0) is the all-seeing, all-knowing zone in which the
308 * except for by code that needs to reference the global zone early on in boot,
313 zone_t *global_zone = NULL; /* Set when the global zone is initialized */
331 /* Event channel to send zone state change notifications */
335 * This table holds the mapping from kernel zone states to
355 * (see sys/zone.h).
383 static char * const zone_prefix = "/zone/";
405 * Bump this number when you alter the zone syscall interfaces; this is
424 * Certain filesystems (such as NFS and autofs) need to know which zone
426 * ensure that a zone isn't in the process of being created/destroyed such
427 * that nfs_mount() thinks it is in the global/NGZ zone, while by the time
428 * it gets added to the list of mounted zones, it ends up on the wrong zone's
429 * mount list. Since a zone can't reside on an NFS file system, we don't
434 * layer (respectively) to synchronize zone state transitions and new
435 * mounts within a zone. This synchronization is on a per-zone basis, so
436 * activity for one zone will not interfere with activity for another zone.
439 * either be multiple mounts (or zone state transitions, if that weren't
447 * "current" operation. This means that zone halt may starve if
448 * there is a rapid succession of new mounts coming in to the zone.
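The mount-versus-halt interlock described above can be modeled as a counter drained under a condition variable. This is only an analogy, not the kernel implementation; note that nothing here holds off new mounts while halt waits, which is exactly the starvation caveat noted in the comment:

```c
#include <pthread.h>

/* Illustrative analog of the mount/halt gate; names are not kernel APIs. */
typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  drained;
    int             mounts_in_progress;
} mount_gate_t;

void
gate_init(mount_gate_t *g)
{
    pthread_mutex_init(&g->lock, NULL);
    pthread_cond_init(&g->drained, NULL);
    g->mounts_in_progress = 0;
}

void
mount_begin(mount_gate_t *g)
{
    pthread_mutex_lock(&g->lock);
    g->mounts_in_progress++;
    pthread_mutex_unlock(&g->lock);
}

void
mount_end(mount_gate_t *g)
{
    pthread_mutex_lock(&g->lock);
    if (--g->mounts_in_progress == 0)
        pthread_cond_broadcast(&g->drained);
    pthread_mutex_unlock(&g->lock);
}

/*
 * Halt waits until no mount is in flight. A rapid stream of new
 * mount_begin() calls can keep this waiter asleep indefinitely.
 */
void
halt_wait_for_mounts(mount_gate_t *g)
{
    pthread_mutex_lock(&g->lock);
    while (g->mounts_in_progress > 0)
        pthread_cond_wait(&g->drained, &g->lock);
    pthread_mutex_unlock(&g->lock);
}
```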
497 * The VFS layer is busy with a mount; this zone should wait until all
530 * callbacks to be executed when a zone is created, shut down, or
543 * ZSD_CREATE_NEEDED and a copy of the ZSD entry added to the per-zone
613 struct zone *zone;
623 * Insert in global list of callbacks. Makes future zone creations
637 for (zone = list_head(&zone_active); zone != NULL;
638 zone = list_next(&zone_active, zone)) {
641 mutex_enter(&zone->zone_lock);
644 status = zone_status_get(zone);
647 mutex_exit(&zone->zone_lock);
651 t = zsd_find_mru(&zone->zone_zsd, key);
657 mutex_exit(&zone->zone_lock);
668 zone_t *, zone, zone_key_t, key);
670 list_insert_tail(&zone->zone_zsd, t);
671 mutex_exit(&zone->zone_lock);
682 * always successfully return the zone specific data associated
702 zone_t *zone;
714 for (zone = list_head(&zone_active); zone != NULL;
715 zone = list_next(&zone_active, zone)) {
718 mutex_enter(&zone->zone_lock);
719 del = zsd_find_mru(&zone->zone_zsd, key);
722 * Somebody else got here first, e.g. the zone going
725 mutex_exit(&zone->zone_lock);
734 zone_t *, zone, zone_key_t, key);
740 zone_t *, zone, zone_key_t, key);
742 mutex_exit(&zone->zone_lock);
751 /* Now we can free up the zsdp structures in each zone */
753 for (zone = list_head(&zone_active); zone != NULL;
754 zone = list_next(&zone_active, zone)) {
757 mutex_enter(&zone->zone_lock);
758 del = zsd_find(&zone->zone_zsd, key);
760 list_remove(&zone->zone_zsd, del);
764 mutex_exit(&zone->zone_lock);
780 zone_setspecific(zone_key_t key, zone_t *zone, const void *data)
784 mutex_enter(&zone->zone_lock);
785 t = zsd_find_mru(&zone->zone_zsd, key);
791 mutex_exit(&zone->zone_lock);
794 mutex_exit(&zone->zone_lock);
802 zone_getspecific(zone_key_t key, zone_t *zone)
807 mutex_enter(&zone->zone_lock);
808 t = zsd_find_mru(&zone->zone_zsd, key);
810 mutex_exit(&zone->zone_lock);
815 * Function used to initialize a zone's list of ZSD callbacks and data
816 * when the zone is being created. The callbacks are initialized from
818 * executed later (once the zone exists and with locks dropped).
821 zone_zsd_configure(zone_t *zone)
827 ASSERT(list_head(&zone->zone_zsd) == NULL);
828 mutex_enter(&zone->zone_lock);
833 * Since this zone is ZONE_IS_UNCONFIGURED, zone_key_create
836 ASSERT(zsd_find(&zone->zone_zsd, zsdp->zsd_key) == NULL);
846 zone_t *, zone, zone_key_t, zsdp->zsd_key);
848 list_insert_tail(&zone->zone_zsd, t);
851 mutex_exit(&zone->zone_lock);
860 zone_zsd_callbacks(zone_t *zone, enum zsd_callback_type ct)
865 ASSERT(ct != ZSD_SHUTDOWN || zone_status_get(zone) >= ZONE_IS_EMPTY);
866 ASSERT(ct != ZSD_DESTROY || zone_status_get(zone) >= ZONE_IS_DOWN);
869 * Run the callback solely based on what is registered for the zone
872 * callbacks for a zone that is in the process of going away.
874 mutex_enter(&zone->zone_lock);
875 for (t = list_head(&zone->zone_zsd); t != NULL;
876 t = list_next(&zone->zone_zsd, t)) {
886 zone_t *, zone, zone_key_t, key);
893 zone_t *, zone, zone_key_t, key);
897 mutex_exit(&zone->zone_lock);
900 zsd_apply_all_keys(zsd_apply_shutdown, zone);
901 zsd_apply_all_keys(zsd_apply_destroy, zone);
906 * Called when the zone is going away; free ZSD-related memory, and
910 zone_free_zsd(zone_t *zone)
915 * Free all the zsd_entry's we had on this zone.
917 mutex_enter(&zone->zone_lock);
918 for (t = list_head(&zone->zone_zsd); t != NULL; t = next) {
919 next = list_next(&zone->zone_zsd, t);
920 list_remove(&zone->zone_zsd, t);
924 list_destroy(&zone->zone_zsd);
925 mutex_exit(&zone->zone_lock);
948 zone_t *zone;
951 zone = list_head(&zone_active);
952 while (zone != NULL) {
953 if ((applyfn)(&zonehash_lock, B_FALSE, zone, key)) {
955 zone = list_head(&zone_active);
957 zone = list_next(&zone_active, zone);
964 * Apply a function to all keys for a particular zone.
980 zsd_apply_all_keys(zsd_applyfn_t *applyfn, zone_t *zone)
984 mutex_enter(&zone->zone_lock);
985 t = list_head(&zone->zone_zsd);
987 if ((applyfn)(NULL, B_TRUE, zone, t->zsd_key)) {
989 t = list_head(&zone->zone_zsd);
991 t = list_next(&zone->zone_zsd, t);
994 mutex_exit(&zone->zone_lock);
998 * Call the create function for the zone and key if CREATE_NEEDED
1012 zone_t *zone, zone_key_t key)
1022 ASSERT(MUTEX_HELD(&zone->zone_lock));
1024 mutex_enter(&zone->zone_lock);
1027 t = zsd_find(&zone->zone_zsd, key);
1030 * Somebody else got here first, e.g. the zone going
1034 mutex_exit(&zone->zone_lock);
1038 if (zsd_wait_for_inprogress(zone, t, lockp))
1045 zone_t *, zone, zone_key_t, key);
1046 mutex_exit(&zone->zone_lock);
1053 zone_t *, zone, zone_key_t, key);
1055 result = (*t->zsd_create)(zone->zone_id);
1058 zone_t *, zone, void *, result);
1063 mutex_enter(&zone->zone_lock);
1069 zone_t *, zone, zone_key_t, key);
1072 mutex_exit(&zone->zone_lock);
1077 * Call the shutdown function for the zone and key if SHUTDOWN_NEEDED
1091 zone_t *zone, zone_key_t key)
1101 ASSERT(MUTEX_HELD(&zone->zone_lock));
1103 mutex_enter(&zone->zone_lock);
1106 t = zsd_find(&zone->zone_zsd, key);
1109 * Somebody else got here first, e.g. the zone going
1113 mutex_exit(&zone->zone_lock);
1117 if (zsd_wait_for_creator(zone, t, lockp))
1120 if (zsd_wait_for_inprogress(zone, t, lockp))
1127 zone_t *, zone, zone_key_t, key);
1128 mutex_exit(&zone->zone_lock);
1137 zone_t *, zone, zone_key_t, key);
1139 (t->zsd_shutdown)(zone->zone_id, data);
1141 zone_t *, zone, zone_key_t, key);
1145 mutex_enter(&zone->zone_lock);
1150 zone_t *, zone, zone_key_t, key);
1153 mutex_exit(&zone->zone_lock);
1158 * Call the destroy function for the zone and key if DESTROY_NEEDED
1172 zone_t *zone, zone_key_t key)
1182 ASSERT(MUTEX_HELD(&zone->zone_lock));
1184 mutex_enter(&zone->zone_lock);
1187 t = zsd_find(&zone->zone_zsd, key);
1190 * Somebody else got here first, e.g. the zone going
1194 mutex_exit(&zone->zone_lock);
1198 if (zsd_wait_for_creator(zone, t, lockp))
1201 if (zsd_wait_for_inprogress(zone, t, lockp))
1208 zone_t *, zone, zone_key_t, key);
1209 mutex_exit(&zone->zone_lock);
1217 zone_t *, zone, zone_key_t, key);
1219 (t->zsd_destroy)(zone->zone_id, data);
1221 zone_t *, zone, zone_key_t, key);
1225 mutex_enter(&zone->zone_lock);
1231 zone_t *, zone, zone_key_t, key);
1234 mutex_exit(&zone->zone_lock);
1243 zsd_wait_for_creator(zone_t *zone, struct zsd_entry *t, kmutex_t *lockp)
1249 zone_t *, zone, struct zsd_entry *, t);
1254 cv_wait(&t->zsd_cv, &zone->zone_lock);
1257 mutex_exit(&zone->zone_lock);
1259 mutex_enter(&zone->zone_lock);
1270 zsd_wait_for_inprogress(zone_t *zone, struct zsd_entry *t, kmutex_t *lockp)
1276 zone_t *, zone, struct zsd_entry *, t);
1281 cv_wait(&t->zsd_cv, &zone->zone_lock);
1284 mutex_exit(&zone->zone_lock);
1286 mutex_enter(&zone->zone_lock);
1293 * Frees memory associated with the zone dataset list.
1296 zone_free_datasets(zone_t *zone)
1300 for (t = list_head(&zone->zone_datasets); t != NULL; t = next) {
1301 next = list_next(&zone->zone_datasets, t);
1302 list_remove(&zone->zone_datasets, t);
1306 list_destroy(&zone->zone_datasets);
1310 * zone.cpu-shares resource control support.
1327 if (e->rcep_p.zone == NULL)
1330 e->rcep_p.zone->zone_shares = nv;
1342 * zone.cpu-cap resource control support.
1357 zone_t *zone = e->rcep_p.zone;
1362 if (zone == NULL)
1368 return (cpucaps_zone_set(zone, nv));
1383 zone_t *zone = p->p_zone;
1387 mutex_enter(&zone->zone_nlwps_lock);
1388 nlwps = zone->zone_nlwps;
1389 mutex_exit(&zone->zone_nlwps_lock);
1403 if (e->rcep_p.zone == NULL)
1405 ASSERT(MUTEX_HELD(&(e->rcep_p.zone->zone_nlwps_lock)));
1406 nlwps = e->rcep_p.zone->zone_nlwps;
1420 if (e->rcep_p.zone == NULL)
1422 e->rcep_p.zone->zone_nlwps_ctl = nv;
1438 zone_t *zone = p->p_zone;
1442 mutex_enter(&zone->zone_nlwps_lock);
1443 nprocs = zone->zone_nprocs;
1444 mutex_exit(&zone->zone_nlwps_lock);
1458 if (e->rcep_p.zone == NULL)
1460 ASSERT(MUTEX_HELD(&(e->rcep_p.zone->zone_nlwps_lock)));
1461 nprocs = e->rcep_p.zone->zone_nprocs;
1475 if (e->rcep_p.zone == NULL)
1477 e->rcep_p.zone->zone_nprocs_ctl = nv;
1504 v = e->rcep_p.zone->zone_shmmax + incr;
1533 v = e->rcep_p.zone->zone_ipc.ipcq_shmmni + incr;
1562 v = e->rcep_p.zone->zone_ipc.ipcq_semmni + incr;
1591 v = e->rcep_p.zone->zone_ipc.ipcq_msgmni + incr;
1624 z = e->rcep_p.zone;
1640 if (e->rcep_p.zone == NULL)
1642 e->rcep_p.zone->zone_locked_mem_ctl = nv;
1675 z = e->rcep_p.zone;
1691 if (e->rcep_p.zone == NULL)
1693 e->rcep_p.zone->zone_max_swap_ctl = nv;
1726 z = e->rcep_p.zone;
1742 if (e->rcep_p.zone == NULL)
1744 e->rcep_p.zone->zone_max_lofi_ctl = nv;
1756 * Helper function to brand the zone with a unique ID.
1759 zone_uniqid(zone_t *zone)
1764 zone->zone_uniqid = uniqid++;
1768 * Returns a held pointer to the "kcred" for the specified zone.
1773 zone_t *zone;
1776 if ((zone = zone_find_by_id(zoneid)) == NULL)
1778 cr = zone->zone_kcred;
1780 zone_rele(zone);
1787 zone_t *zone = ksp->ks_private;
1793 zk->zk_usage.value.ui64 = zone->zone_locked_mem;
1794 zk->zk_value.value.ui64 = zone->zone_locked_mem_ctl;
1801 zone_t *zone = ksp->ks_private;
1807 zk->zk_usage.value.ui64 = zone->zone_nprocs;
1808 zk->zk_value.value.ui64 = zone->zone_nprocs_ctl;
1815 zone_t *zone = ksp->ks_private;
1821 zk->zk_usage.value.ui64 = zone->zone_max_swap;
1822 zk->zk_value.value.ui64 = zone->zone_max_swap_ctl;
1827 zone_kstat_create_common(zone_t *zone, char *name,
1833 ksp = rctl_kstat_create_zone(zone, name, KSTAT_TYPE_NAMED,
1841 ksp->ks_data_size += strlen(zone->zone_name) + 1;
1843 kstat_named_setstr(&zk->zk_zonename, zone->zone_name);
1847 ksp->ks_private = zone;
1856 zone_t *zone = ksp->ks_private;
1862 zmp->zm_pgpgin.value.ui64 = zone->zone_pgpgin;
1863 zmp->zm_anonpgin.value.ui64 = zone->zone_anonpgin;
1864 zmp->zm_execpgin.value.ui64 = zone->zone_execpgin;
1865 zmp->zm_fspgin.value.ui64 = zone->zone_fspgin;
1866 zmp->zm_anon_alloc_fail.value.ui64 = zone->zone_anon_alloc_fail;
1872 zone_mcap_kstat_create(zone_t *zone)
1877 if ((ksp = kstat_create_zone("memory_cap", zone->zone_id,
1878 zone->zone_name, "zone_memory_cap", KSTAT_TYPE_NAMED,
1880 KSTAT_FLAG_VIRTUAL, zone->zone_id)) == NULL)
1883 if (zone->zone_id != GLOBAL_ZONEID)
1887 ksp->ks_data_size += strlen(zone->zone_name) + 1;
1888 ksp->ks_lock = &zone->zone_mcap_lock;
1889 zone->zone_mcap_stats = zmp;
1893 kstat_named_setstr(&zmp->zm_zonename, zone->zone_name);
1902 ksp->ks_private = zone;
1911 zone_t *zone = ksp->ks_private;
1918 tmp = zone->zone_utime;
1921 tmp = zone->zone_stime;
1924 tmp = zone->zone_wtime;
1928 zmp->zm_avenrun1.value.ui32 = zone->zone_avenrun[0];
1929 zmp->zm_avenrun5.value.ui32 = zone->zone_avenrun[1];
1930 zmp->zm_avenrun15.value.ui32 = zone->zone_avenrun[2];
1932 zmp->zm_ffcap.value.ui32 = zone->zone_ffcap;
1933 zmp->zm_ffnoproc.value.ui32 = zone->zone_ffnoproc;
1934 zmp->zm_ffnomem.value.ui32 = zone->zone_ffnomem;
1935 zmp->zm_ffmisc.value.ui32 = zone->zone_ffmisc;
1937 zmp->zm_nested_intp.value.ui32 = zone->zone_nested_intp;
1939 zmp->zm_init_pid.value.ui32 = zone->zone_proc_initpid;
1940 zmp->zm_boot_time.value.ui64 = (uint64_t)zone->zone_boot_time;
1946 zone_misc_kstat_create(zone_t *zone)
1951 if ((ksp = kstat_create_zone("zones", zone->zone_id,
1952 zone->zone_name, "zone_misc", KSTAT_TYPE_NAMED,
1954 KSTAT_FLAG_VIRTUAL, zone->zone_id)) == NULL)
1957 if (zone->zone_id != GLOBAL_ZONEID)
1961 ksp->ks_data_size += strlen(zone->zone_name) + 1;
1962 ksp->ks_lock = &zone->zone_misc_lock;
1963 zone->zone_misc_stats = zmp;
1967 kstat_named_setstr(&zmp->zm_zonename, zone->zone_name);
1986 ksp->ks_private = zone;
1993 zone_kstat_create(zone_t *zone)
1995 zone->zone_lockedmem_kstat = zone_kstat_create_common(zone,
1997 zone->zone_swapresv_kstat = zone_kstat_create_common(zone,
1999 zone->zone_nprocs_kstat = zone_kstat_create_common(zone,
2002 if ((zone->zone_mcap_ksp = zone_mcap_kstat_create(zone)) == NULL) {
2003 zone->zone_mcap_stats = kmem_zalloc(
2007 if ((zone->zone_misc_ksp = zone_misc_kstat_create(zone)) == NULL) {
2008 zone->zone_misc_stats = kmem_zalloc(
2027 zone_kstat_delete(zone_t *zone)
2029 zone_kstat_delete_common(&zone->zone_lockedmem_kstat,
2031 zone_kstat_delete_common(&zone->zone_swapresv_kstat,
2033 zone_kstat_delete_common(&zone->zone_nprocs_kstat,
2035 zone_kstat_delete_common(&zone->zone_mcap_ksp,
2037 zone_kstat_delete_common(&zone->zone_misc_ksp,
2118 * The global zone has all privileges
2122 * Add p0 to the global zone
2189 * Create ID space for zone IDs. ID 0 is reserved for the
2190 * global zone.
2195 * Initialize generic zone resource controls, if any.
2197 rc_zone_cpu_shares = rctl_register("zone.cpu-shares",
2202 rc_zone_cpu_cap = rctl_register("zone.cpu-cap",
2208 rc_zone_nlwps = rctl_register("zone.max-lwps", RCENTITY_ZONE,
2212 rc_zone_nprocs = rctl_register("zone.max-processes", RCENTITY_ZONE,
2219 rc_zone_msgmni = rctl_register("zone.max-msg-ids",
2223 rc_zone_semmni = rctl_register("zone.max-sem-ids",
2227 rc_zone_shmmni = rctl_register("zone.max-shm-ids",
2231 rc_zone_shmmax = rctl_register("zone.max-shm-memory",
2237 * this at the head of the rctl_dict_entry for ``zone.cpu-shares''.
2246 rde = rctl_dict_lookup("zone.cpu-shares");
2249 rc_zone_locked_mem = rctl_register("zone.max-locked-memory",
2254 rc_zone_max_swap = rctl_register("zone.max-swap",
2259 rc_zone_max_lofi = rctl_register("zone.max-lofi",
2265 * Initialize the ``global zone''.
2270 e.rcep_p.zone = &zone0;
2284 * take care of making sure the global zone is in the default pool.
2288 * Initialize global zone kstats
2293 * Initialize zone label.
2340 * The global zone is fully initialized (except for zone_rootvp which
2346 * Set up an event channel to send zone status change notifications on
2352 panic("Sysevent_evc_bind failed during zone setup.\n");
2357 zone_free(zone_t *zone)
2359 ASSERT(zone != global_zone);
2360 ASSERT(zone->zone_ntasks == 0);
2361 ASSERT(zone->zone_nlwps == 0);
2362 ASSERT(zone->zone_nprocs == 0);
2363 ASSERT(zone->zone_cred_ref == 0);
2364 ASSERT(zone->zone_kcred == NULL);
2365 ASSERT(zone_status_get(zone) == ZONE_IS_DEAD ||
2366 zone_status_get(zone) == ZONE_IS_UNINITIALIZED);
2367 ASSERT(list_is_empty(&zone->zone_ref_list));
2370 * Remove any zone caps.
2372 cpucaps_zone_remove(zone);
2374 ASSERT(zone->zone_cpucap == NULL);
2377 if (zone_status_get(zone) == ZONE_IS_DEAD) {
2378 ASSERT(zone->zone_ref == 0);
2380 list_remove(&zone_deathrow, zone);
2384 list_destroy(&zone->zone_ref_list);
2385 zone_free_zsd(zone);
2386 zone_free_datasets(zone);
2387 list_destroy(&zone->zone_dl_list);
2389 if (zone->zone_rootvp != NULL)
2390 VN_RELE(zone->zone_rootvp);
2391 if (zone->zone_rootpath)
2392 kmem_free(zone->zone_rootpath, zone->zone_rootpathlen);
2393 if (zone->zone_name != NULL)
2394 kmem_free(zone->zone_name, ZONENAME_MAX);
2395 if (zone->zone_slabel != NULL)
2396 label_rele(zone->zone_slabel);
2397 if (zone->zone_nodename != NULL)
2398 kmem_free(zone->zone_nodename, _SYS_NMLN);
2399 if (zone->zone_domain != NULL)
2400 kmem_free(zone->zone_domain, _SYS_NMLN);
2401 if (zone->zone_privset != NULL)
2402 kmem_free(zone->zone_privset, sizeof (priv_set_t));
2403 if (zone->zone_rctls != NULL)
2404 rctl_set_free(zone->zone_rctls);
2405 if (zone->zone_bootargs != NULL)
2406 strfree(zone->zone_bootargs);
2407 if (zone->zone_initname != NULL)
2408 strfree(zone->zone_initname);
2409 if (zone->zone_fs_allowed != NULL)
2410 strfree(zone->zone_fs_allowed);
2411 if (zone->zone_pfexecd != NULL)
2412 klpd_freelist(&zone->zone_pfexecd);
2413 id_free(zoneid_space, zone->zone_id);
2414 mutex_destroy(&zone->zone_lock);
2415 cv_destroy(&zone->zone_cv);
2416 rw_destroy(&zone->zone_mlps.mlpl_rwlock);
2417 rw_destroy(&zone->zone_mntfs_db_lock);
2418 kmem_free(zone, sizeof (zone_t));
2422 * See block comment at the top of this file for information about zone
2426 * Convenience function for setting zone status.
2429 zone_status_set(zone_t *zone, zone_status_t status)
2435 status >= zone_status_get(zone));
2438 nvlist_add_string(nvl, ZONE_CB_NAME, zone->zone_name) ||
2442 zone_status_table[zone->zone_status]) ||
2443 nvlist_add_int32(nvl, ZONE_CB_ZONEID, zone->zone_id) ||
2449 "Failed to allocate and send zone state change event.\n");
2454 zone->zone_status = status;
2456 cv_broadcast(&zone->zone_cv);
2460 * Public function to retrieve the zone status. The zone status may
2464 zone_status_get(zone_t *zone)
2466 return (zone->zone_status);
2470 zone_set_bootargs(zone_t *zone, const char *zone_bootargs)
2475 ASSERT(zone != global_zone);
2479 if (zone->zone_bootargs != NULL)
2480 strfree(zone->zone_bootargs);
2482 zone->zone_bootargs = strdup(buf);
2490 zone_set_brand(zone_t *zone, const char *brand)
2507 * This is the only place where a zone can change its brand.
2508 * We already need to hold zone_status_lock to check the zone
2509 * status, so we'll just use that lock to serialize zone
2514 /* Re-Branding is not allowed and the zone can't be booted yet */
2515 if ((ZONE_IS_BRANDED(zone)) ||
2516 (zone_status_get(zone) >= ZONE_IS_BOOTING)) {
2523 zone->zone_brand = bp;
2524 ZBROP(zone)->b_init_brand_data(zone);
2531 zone_set_secflags(zone_t *zone, const psecflags_t *zone_secflags)
2536 ASSERT(zone != global_zone);
2541 if (zone_status_get(zone) > ZONE_IS_READY)
2547 (void) memcpy(&zone->zone_secflags, &psf, sizeof (psf));
2549 /* Set security flags on the zone's zsched */
2550 (void) memcpy(&zone->zone_zsched->p_secflags, &zone->zone_secflags,
2551 sizeof (zone->zone_zsched->p_secflags));
2557 zone_set_fs_allowed(zone_t *zone, const char *zone_fs_allowed)
2562 ASSERT(zone != global_zone);
2567 if (zone->zone_fs_allowed != NULL)
2568 strfree(zone->zone_fs_allowed);
2570 zone->zone_fs_allowed = strdup(buf);
2578 zone_set_initname(zone_t *zone, const char *zone_initname)
2584 ASSERT(zone != global_zone);
2588 if (zone->zone_initname != NULL)
2589 strfree(zone->zone_initname);
2591 zone->zone_initname = kmem_alloc(strlen(initname) + 1, KM_SLEEP);
2592 (void) strcpy(zone->zone_initname, initname);
2597 zone_set_phys_mcap(zone_t *zone, const uint64_t *zone_mcap)
2603 zone->zone_phys_mcap = mcap;
2609 zone_set_sched_class(zone_t *zone, const char *new_class)
2615 ASSERT(zone != global_zone);
2621 zone->zone_defaultcid = classid;
2622 ASSERT(zone->zone_defaultcid > 0 &&
2623 zone->zone_defaultcid < loaded_classes);
2632 zone_status_wait(zone_t *zone, zone_status_t status)
2637 while (zone->zone_status < status) {
2638 cv_wait(&zone->zone_cv, &zone_status_lock);
2647 zone_status_wait_cpr(zone_t *zone, zone_status_t status, char *str)
2656 while (zone->zone_status < status) {
2658 cv_wait(&zone->zone_cv, &zone_status_lock);
2668 * Block until zone enters requested state or signal is received. Return (0)
2672 zone_status_wait_sig(zone_t *zone, zone_status_t status)
2677 while (zone->zone_status < status) {
2678 if (!cv_wait_sig(&zone->zone_cv, &zone_status_lock)) {
2688 * Block until the zone enters the requested state or the timeout expires,
2693 zone_status_timedwait(zone_t *zone, clock_t tim, zone_status_t status)
2700 while (zone->zone_status < status && timeleft != -1) {
2701 timeleft = cv_timedwait(&zone->zone_cv, &zone_status_lock, tim);
2708 * Block until the zone enters the requested state, the current process is
2713 zone_status_timedwait_sig(zone_t *zone, clock_t tim, zone_status_t status)
2720 while (zone->zone_status < status) {
2721 timeleft = cv_timedwait_sig(&zone->zone_cv, &zone_status_lock,
2733 * This is so we can allow a zone to be rebooted while there are still
2736 * 0 (actually 1), but not zone_cred_ref. The zone structure itself is
2738 * than the zone id and privilege set should be accessed once the zone
2746 * Zones also provide a tracked reference counting mechanism in which zone
2748 * debuggers determine the sources of leaked zone references. See
2763 * Increment the specified zone's reference count. The zone's zone_t structure
2764 * will not be freed as long as the zone's reference count is nonzero.
2765 * Decrement the zone's reference count via zone_rele().
2768 * time. Use zone_hold_ref() if the zone must be held for a long time.
2780 * is 0 or we aren't waiting for cred references, the zone is ready to
2783 #define ZONE_IS_UNREF(zone) ((zone)->zone_ref == 1 && \
2784 (!zone_wait_for_cred || (zone)->zone_cred_ref == 0))
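The ZONE_IS_UNREF condition above (the count has dropped back to the single creation reference) can be illustrated with a small user-space analog of `zone_hold()`/`zone_rele()`. Names and simplifications are mine: there is no cred-reference count here, and the last releaser simply broadcasts so the teardown thread can proceed:

```c
#include <pthread.h>

/* Illustrative analog of zone reference counting. */
typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  cv;
    int             ref;
} refzone_t;

void
refzone_init(refzone_t *z)
{
    pthread_mutex_init(&z->lock, NULL);
    pthread_cond_init(&z->cv, NULL);
    z->ref = 1;                 /* the creation reference */
}

void
refzone_hold(refzone_t *z)
{
    pthread_mutex_lock(&z->lock);
    z->ref++;
    pthread_mutex_unlock(&z->lock);
}

void
refzone_rele(refzone_t *z)
{
    pthread_mutex_lock(&z->lock);
    if (--z->ref == 1)          /* only the creation reference remains */
        pthread_cond_broadcast(&z->cv);
    pthread_mutex_unlock(&z->lock);
}

/* Teardown waits until it holds the only remaining reference. */
void
refzone_wait_unref(refzone_t *z)
{
    pthread_mutex_lock(&z->lock);
    while (z->ref > 1)
        pthread_cond_wait(&z->cv, &z->lock);
    pthread_mutex_unlock(&z->lock);
}
```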
2787 * Common zone reference release function invoked by zone_rele() and
2789 * zone's subsystem-specific reference counters are not affected by the
2791 * removed from the specified zone's reference list. ref must be non-NULL iff
2813 /* signal zone_destroy so the zone can finish halting */
2829 * Decrement the specified zone's reference count. The specified zone will
2840 * Initialize a zone reference structure. This function must be invoked for
2851 * Acquire a reference to zone z. The caller must specify the
2853 * zone_ref_t structure will represent a reference to the specified zone. Use
2857 * zone_status field is not ZONE_IS_DEAD and the zone has outstanding
2884 * Release the zone reference represented by the specified zone_ref_t.
2957 zone_task_rele(zone_t *zone)
2961 mutex_enter(&zone->zone_lock);
2962 ASSERT(zone->zone_ntasks != 0);
2963 refcnt = --zone->zone_ntasks;
2965 mutex_exit(&zone->zone_lock);
2968 zone_hold_locked(zone); /* so we can use the zone_t later */
2969 mutex_exit(&zone->zone_lock);
2972 * See if the zone is shutting down.
2975 if (zone_status_get(zone) != ZONE_IS_SHUTTING_DOWN) {
2983 mutex_enter(&zone->zone_lock);
2984 if (refcnt != zone->zone_ntasks) {
2985 mutex_exit(&zone->zone_lock);
2988 mutex_exit(&zone->zone_lock);
2991 * No more user processes in the zone. The zone is empty.
2993 zone_status_set(zone, ZONE_IS_EMPTY);
2999 * zsched has exited; the zone is dead.
3001 zone->zone_zsched = NULL; /* paranoia */
3003 zone_status_set(zone, ZONE_IS_DEAD);
3006 zone_rele(zone);
3017 * check the validity of a zone's state.
3023 zone_t *zone = NULL;
3029 zone = (zone_t *)hv;
3030 return (zone);
3037 zone_t *zone = NULL;
3047 zone = (zone_t *)hv;
3048 return (zone);
3055 zone_t *zone = NULL;
3060 zone = (zone_t *)hv;
3061 return (zone);
3065 * Public interface for looking up a zone by zoneid. Only returns the zone if
3067 * Caller must call zone_rele() once it is done with the zone.
3069 * The zone may begin the zone_destroy() sequence immediately after this
3075 zone_t *zone;
3079 if ((zone = zone_find_all_by_id(zoneid)) == NULL) {
3083 status = zone_status_get(zone);
3086 * For all practical purposes the zone doesn't exist.
3091 zone_hold(zone);
3093 return (zone);
3097 * Similar to zone_find_by_id, but using zone label as the key.
3102 zone_t *zone;
3106 if ((zone = zone_find_all_by_label(label)) == NULL) {
3111 status = zone_status_get(zone);
3114 * For all practical purposes the zone doesn't exist.
3119 zone_hold(zone);
3121 return (zone);
3125 * Similar to zone_find_by_id, but using zone name as the key.
3130 zone_t *zone;
3134 if ((zone = zone_find_all_by_name(name)) == NULL) {
3138 status = zone_status_get(zone);
3141 * For all practical purposes the zone doesn't exist.
3146 zone_hold(zone);
3148 return (zone);
3153 * if there is a zone "foo" rooted at /foo/root, and the path argument
3155 * zone "foo".
3158 * very least every path will be contained in the global zone.
3166 zone_t *zone;
3179 for (zone = list_head(&zone_active); zone != NULL;
3180 zone = list_next(&zone_active, zone)) {
3181 if (ZONE_PATH_VISIBLE(path, zone))
3182 zret = zone;
3198 * Public interface for updating per-zone load averages. Called once per
3224 /* For all practical purposes the zone doesn't exist. */
3246 * until the zone has been up for at least 10 seconds and our
3284 * Get the number of cpus visible to this zone. The system-wide global
3286 * global zone, or a NULL zone argument is passed in.
3289 zone_ncpus_get(zone_t *zone)
3291 int myncpus = zone == NULL ? 0 : zone->zone_ncpus;
3297 * Get the number of online cpus visible to this zone. The system-wide
3299 * is in the global zone, or a NULL zone argument is passed in.
3302 zone_ncpus_online_get(zone_t *zone)
3304 int myncpus_online = zone == NULL ? 0 : zone->zone_ncpus_online;
3310 * Return the pool to which the zone is currently bound.
3313 zone_pool_get(zone_t *zone)
3317 return (zone->zone_pool);
3321 * Set the zone's pool pointer and update the zone's visibility to match
3325 zone_pool_set(zone_t *zone, pool_t *pool)
3330 zone->zone_pool = pool;
3331 zone_pset_set(zone, pool->pool_pset->pset_id);
3336 * zone is currently bound. The value will be ZONE_PS_INVAL if the pools
3340 zone_pset_get(zone_t *zone)
3344 return (zone->zone_psetid);
3348 * Set the cached value of the id of the processor set to which the zone
3349 * is currently bound. Also update the zone's visibility to match the
3353 zone_pset_set(zone_t *zone, psetid_t newpsetid)
3358 oldpsetid = zone_pset_get(zone);
3363 * Global zone sees all.
3365 if (zone != global_zone) {
3366 zone->zone_psetid = newpsetid;
3368 pool_pset_visibility_add(newpsetid, zone);
3370 pool_pset_visibility_remove(oldpsetid, zone);
3377 zone->zone_ncpus = 0;
3378 zone->zone_ncpus_online = 0;
3393 zone_t *zone;
3398 for (zone = list_head(&zone_active); zone != NULL;
3399 zone = list_next(&zone_active, zone)) {
3403 status = zone_status_get(zone);
3410 ret = (*cb)(zone, data);
3419 zone_set_root(zone_t *zone, const char *upath)
3470 zone->zone_rootvp = vp; /* we hold a reference to vp */
3471 zone->zone_rootpath = path;
3472 zone->zone_rootpathlen = pathlen;
3474 zone->zone_flags |= ZF_IS_SCRATCH;
3488 zone_set_name(zone_t *zone, const char *uname)
3521 zone->zone_name = kname;
3526 * Gets the 32-bit hostid of the specified zone as an unsigned int. If 'zonep'
3527 * is NULL or it points to a zone with no hostid emulation, then the machine's
3528 * hostid (i.e., the global zone's hostid) is returned. This function returns
3529 * zero if neither the zone nor the host machine (global zone) have hostids. It
3548 * zone's zsched process (curproc->p_zone->zone_zsched) before returning.
3561 zone_t *zone = curproc->p_zone;
3562 proc_t *pp = zone->zone_zsched;
3564 zone_hold(zone); /* Reference to be dropped when thread exits */
3567 * No-one should be trying to create threads if the zone is shutting
3571 ASSERT(!(zone->zone_kthreads == NULL &&
3572 zone_status_get(zone) >= ZONE_IS_EMPTY));
3580 if (zone->zone_kthreads == NULL) {
3583 kthread_t *tx = zone->zone_kthreads;
3590 zone->zone_kthreads = t;
3620 zone_t *zone = pp->p_zone;
3638 * If the zone is empty, once the thread count
3641 * in the zone, then it must have exited before the zone
3644 * zone, the thread count is non-zero.
3646 * This really means that non-zone kernel threads should
3647 * not create zone kernel threads.
3649 zone->zone_kthreads = NULL;
3650 if (zone_status_get(zone) == ZONE_IS_EMPTY) {
3651 zone_status_set(zone, ZONE_IS_DOWN);
3653 * Remove any CPU caps on this zone.
3655 cpucaps_zone_remove(zone);
3660 if (zone->zone_kthreads == t)
3661 zone->zone_kthreads = t->t_forw;
3664 zone_rele(zone);
3742 * Non-global zone version of start_init.
3766 * global zone is shutting down.
3795 zone_t *zone;
3800 * Per-zone "sched" workalike. The similarity to "sched" doesn't have
3802 * per-zone kernel threads are parented to zsched, just like regular
3805 * zsched is also responsible for launching init for the zone.
3813 zone_t *zone = za->zone;
3833 * We are this zone's "zsched" process. As the zone isn't generally
3837 zone_hold(zone); /* this hold is released by zone_destroy() */
3838 zone->zone_zsched = pp;
3840 pp->p_zone = zone;
3867 upcount_inc(crgetruid(kcred), zone->zone_id);
3871 * getting out of global zone, so decrement lwp and process counts
3882 * Decrement locked memory counts on old zone and project.
3890 * Create and join a new task in project '0' of this zone.
3897 tk = task_create(0, zone);
3903 mutex_enter(&zone->zone_mem_lock);
3904 zone->zone_locked_mem += pp->p_locked_mem;
3906 mutex_exit(&zone->zone_mem_lock);
3909 * add lwp and process counts to zsched's zone, and increment
3913 mutex_enter(&zone->zone_nlwps_lock);
3916 zone->zone_nlwps += pp->p_lwpcnt;
3918 zone->zone_nprocs++;
3919 mutex_exit(&zone->zone_nlwps_lock);
3926 * The process was created by a process in the global zone, hence the
3929 cr = zone->zone_kcred;
3950 zone_chdir(zone->zone_rootvp, &PTOU(pp)->u_cdir, pp);
3951 zone_chdir(zone->zone_rootvp, &PTOU(pp)->u_rdir, pp);
3954 * Initialize zone's rctl set.
3959 e.rcep_p.zone = zone;
3961 zone->zone_rctls = rctl_set_init(RCENTITY_ZONE, pp, &e, set, gp);
4022 * At this point we want to set the zone status to ZONE_IS_INITIALIZED
4023 * and atomically set the zone's processor set visibility. Once
4024 * we drop pool_lock() this zone will automatically get updated
4029 * now proceed and observe the zone. That is the reason for doing a
4035 zone_uniqid(zone);
4036 zone_zsd_configure(zone);
4038 zone_pset_set(zone, pool_default->pool_pset->pset_id);
4040 ASSERT(zone_status_get(zone) == ZONE_IS_UNINITIALIZED);
4041 zone_status_set(zone, ZONE_IS_INITIALIZED);
4048 zsd_apply_all_keys(zsd_apply_create, zone);
4052 ASSERT(zone_status_get(zone) == ZONE_IS_INITIALIZED);
4053 zone_status_set(zone, ZONE_IS_READY);
4057 * Once we see the zone transition to the ZONE_IS_BOOTING state,
4060 zone_status_wait_cpr(zone, ZONE_IS_BOOTING, "zsched");
4062 if (zone_status_get(zone) == ZONE_IS_BOOTING) {
4067 * zone's pool's scheduling class ID; note that by now, we
4070 * state). *But* the scheduling class for the zone's 'init'
4080 if (zone->zone_defaultcid > 0)
4081 cid = zone->zone_defaultcid;
4083 cid = pool_get_class(zone->zone_pool);
4089 * state of the zone will be set to SHUTTING_DOWN-- userland
4090 * will have to tear down the zone, and fail, or try again.
4092 if ((zone->zone_boot_err = newproc(zone_start_init, NULL, cid,
4095 zone_status_set(zone, ZONE_IS_SHUTTING_DOWN);
4098 zone->zone_boot_time = gethrestime_sec();
4108 zone_status_wait_cpr(zone, ZONE_IS_DYING, "zsched");
4126 crfree(zone->zone_kcred);
4127 zone->zone_kcred = NULL;
4134 * provided path. Used to make sure the zone doesn't "inherit" any
4172 * Helper function to make sure that a zone created on 'rootpath'
4178 zone_t *zone;
4191 for (zone = list_head(&zone_active); zone != NULL;
4192 zone = list_next(&zone_active, zone)) {
4193 if (zone == global_zone)
4195 len = strlen(zone->zone_rootpath);
4196 if (strncmp(rootpath, zone->zone_rootpath,
4204 zone_set_privset(zone_t *zone, const priv_set_t *zone_privs,
4219 zone->zone_privset = privs;
4273 if (strncmp(nvpair_name(nvp), "zone.", sizeof ("zone.") - 1)
4313 zone_set_label(zone_t *zone, const bslabel_t *lab, uint32_t doi)
4325 zone->zone_slabel = tsl;
4330 * Parses a comma-separated list of ZFS datasets into a per-zone dictionary.
4333 parse_zfs(zone_t *zone, caddr_t ubuf, size_t buflen)
4366 list_insert_head(&zone->zone_datasets, zd);
4379 * System call to create/initialize a new zone named 'zone_name', rooted
4380 * at 'zone_root', with a zone-wide privilege limit set of 'zone_privs',
4381 * and initialized with the zone-wide rctls described in 'rctlbuf', and
4398 zone_t *zone, *ztmp;
4409 /* can't boot zone from within chroot environment */
4414 * As the first step of zone creation, we want to allocate a zoneid.
4417 * freed asynchronously with respect to zone destruction. This means
4425 * referencing zone -- and changing them to have such a pointer would
4455 "zone IDs have netstacks still in use");
4459 cmn_err(CE_WARN, "unable to reuse zone ID %d; "
4463 zone = kmem_zalloc(sizeof (zone_t), KM_SLEEP);
4464 zone->zone_id = zoneid;
4465 zone->zone_status = ZONE_IS_UNINITIALIZED;
4466 zone->zone_pool = pool_default;
4467 zone->zone_pool_mod = gethrtime();
4468 zone->zone_psetid = ZONE_PS_INVAL;
4469 zone->zone_ncpus = 0;
4470 zone->zone_ncpus_online = 0;
4471 zone->zone_restart_init = B_TRUE;
4472 zone->zone_brand = &native_brand;
4473 zone->zone_initname = NULL;
4474 mutex_init(&zone->zone_lock, NULL, MUTEX_DEFAULT, NULL);
4475 mutex_init(&zone->zone_nlwps_lock, NULL, MUTEX_DEFAULT, NULL);
4476 mutex_init(&zone->zone_mem_lock, NULL, MUTEX_DEFAULT, NULL);
4477 cv_init(&zone->zone_cv, NULL, CV_DEFAULT, NULL);
4478 list_create(&zone->zone_ref_list, sizeof (zone_ref_t),
4480 list_create(&zone->zone_zsd, sizeof (struct zsd_entry),
4482 list_create(&zone->zone_datasets, sizeof (zone_dataset_t),
4484 list_create(&zone->zone_dl_list, sizeof (zone_dl_t),
4486 rw_init(&zone->zone_mlps.mlpl_rwlock, NULL, RW_DEFAULT, NULL);
4487 rw_init(&zone->zone_mntfs_db_lock, NULL, RW_DEFAULT, NULL);
4490 zone->zone_flags |= ZF_NET_EXCL;
4493 if ((error = zone_set_name(zone, zone_name)) != 0) {
4494 zone_free(zone);
4498 if ((error = zone_set_root(zone, zone_root)) != 0) {
4499 zone_free(zone);
4502 if ((error = zone_set_privset(zone, zone_privs, zone_privssz)) != 0) {
4503 zone_free(zone);
4507 /* initialize node name to be the same as zone name */
4508 zone->zone_nodename = kmem_alloc(_SYS_NMLN, KM_SLEEP);
4509 (void) strncpy(zone->zone_nodename, zone->zone_name, _SYS_NMLN);
4510 zone->zone_nodename[_SYS_NMLN - 1] = '\0';
4512 zone->zone_domain = kmem_alloc(_SYS_NMLN, KM_SLEEP);
4513 zone->zone_domain[0] = '\0';
4514 zone->zone_hostid = HW_INVALID_HOSTID;
4515 zone->zone_shares = 1;
4516 zone->zone_shmmax = 0;
4517 zone->zone_ipc.ipcq_shmmni = 0;
4518 zone->zone_ipc.ipcq_semmni = 0;
4519 zone->zone_ipc.ipcq_msgmni = 0;
4520 zone->zone_bootargs = NULL;
4521 zone->zone_fs_allowed = NULL;
4528 zone->zone_initname =
4530 (void) strcpy(zone->zone_initname, zone_default_initname);
4531 zone->zone_nlwps = 0;
4532 zone->zone_nlwps_ctl = INT_MAX;
4533 zone->zone_nprocs = 0;
4534 zone->zone_nprocs_ctl = INT_MAX;
4535 zone->zone_locked_mem = 0;
4536 zone->zone_locked_mem_ctl = UINT64_MAX;
4537 zone->zone_max_swap = 0;
4538 zone->zone_max_swap_ctl = UINT64_MAX;
4539 zone->zone_max_lofi = 0;
4540 zone->zone_max_lofi_ctl = UINT64_MAX;
4547 zone->zone_rctls = NULL;
4550 zone_free(zone);
4554 if ((error = parse_zfs(zone, zfsbuf, zfsbufsz)) != 0) {
4555 zone_free(zone);
4563 zone->zone_match = match;
4564 if (is_system_labeled() && !(zone->zone_flags & ZF_IS_SCRATCH)) {
4567 zone_free(zone);
4570 /* Always apply system's doi to the zone */
4571 error = zone_set_label(zone, label, default_doi);
4573 zone_free(zone);
4579 zone->zone_slabel = l_admin_low;
4590 zone_free(zone);
4595 if (block_mounts(zone) == 0) {
4600 zone_free(zone);
4610 zone->zone_kcred = crdup(kcred);
4611 crsetzone(zone->zone_kcred, zone);
4612 priv_intersect(zone->zone_privset, &CR_PPRIV(zone->zone_kcred));
4613 priv_intersect(zone->zone_privset, &CR_EPRIV(zone->zone_kcred));
4614 priv_intersect(zone->zone_privset, &CR_IPRIV(zone->zone_kcred));
4615 priv_intersect(zone->zone_privset, &CR_LPRIV(zone->zone_kcred));
4619 * Make sure zone doesn't already exist.
4621 * If the system and zone are labeled,
4622 * make sure no other zone exists that has the same label.
4624 if ((ztmp = zone_find_all_by_name(zone->zone_name)) != NULL ||
4626 (ztmp = zone_find_all_by_label(zone->zone_slabel)) != NULL)) {
4642 * Don't allow zone creations which would cause one zone's rootpath to
4643 * be accessible from that of another (non-global) zone.
4645 if (zone_is_nested(zone->zone_rootpath)) {
4656 if (zone_mount_count(zone->zone_rootpath) != 0) {
4664 * zsched() initializes this zone's kernel process. We
4665 * optimistically add the zone to the hashtable and associated
4667 * same zone.
4671 (mod_hash_key_t)(uintptr_t)zone->zone_id,
4672 (mod_hash_val_t)(uintptr_t)zone);
4673 str = kmem_alloc(strlen(zone->zone_name) + 1, KM_SLEEP);
4674 (void) strcpy(str, zone->zone_name);
4676 (mod_hash_val_t)(uintptr_t)zone);
4679 (mod_hash_key_t)zone->zone_slabel, (mod_hash_val_t)zone);
4680 zone->zone_flags |= ZF_HASHED_LABEL;
4685 * on the zone, but everyone else knows not to use it, so we can
4689 list_insert_tail(&zone_active, zone);
4692 zarg.zone = zone;
4706 list_remove(&zone_active, zone);
4707 if (zone->zone_flags & ZF_HASHED_LABEL) {
4708 ASSERT(zone->zone_slabel != NULL);
4710 (mod_hash_key_t)zone->zone_slabel);
4713 (mod_hash_key_t)(uintptr_t)zone->zone_name);
4715 (mod_hash_key_t)(uintptr_t)zone->zone_id);
4726 * Create zone kstats
4728 zone_kstat_create(zone);
4739 * Wait for zsched to finish initializing the zone.
4741 zone_status_wait(zone, ZONE_IS_READY);
4743 * The zone is fully visible, so we can let mounts progress.
4745 resume_mounts(zone);
4760 resume_mounts(zone);
4763 * There is currently one reference to the zone, a cred_ref from
4764 * zone_kcred. To free the zone, we call crfree, which will call
4767 ASSERT(zone->zone_cred_ref == 1);
4768 ASSERT(zone->zone_kcred->cr_ref == 1);
4769 ASSERT(zone->zone_ref == 0);
4770 zkcr = zone->zone_kcred;
4771 zone->zone_kcred = NULL;
4777 * Cause the zone to boot. This is pretty simple, since we let zoneadmd do
4779 * at the "top" of the zone; if this is NULL, we use the system default,
4786 zone_t *zone;
4795 * Look for zone under hash lock to prevent races with calls to
4798 if ((zone = zone_find_all_by_id(zoneid)) == NULL) {
4804 if (zone_status_get(zone) != ZONE_IS_READY) {
4809 zone_status_set(zone, ZONE_IS_BOOTING);
4812 zone_hold(zone); /* so we can use the zone_t later */
4815 if (zone_status_wait_sig(zone, ZONE_IS_RUNNING) == 0) {
4816 zone_rele(zone);
4821 * Boot (starting init) might have failed, in which case the zone
4823 * be placed in zone->zone_boot_err, and so we return that.
4825 err = zone->zone_boot_err;
4826 zone_rele(zone);
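The boot handshake above relies on a zone's status only moving forward through the ordering given in the state-machine comment at the top of the file, so that callers can treat an observed state as a floor rather than a moving target. A minimal user-space sketch of that monotonic setter; the enum and `sketch_status_set` are illustrative names, not the kernel's zone_status_set().

```c
/* state ordering taken from the comment block at the top of the file */
enum zsketch_status {
	ZS_UNINITIALIZED, ZS_INITIALIZED, ZS_READY, ZS_BOOTING,
	ZS_RUNNING, ZS_SHUTTING_DOWN, ZS_EMPTY, ZS_DOWN, ZS_DYING, ZS_DEAD
};

/*
 * A zone's status may only advance.  Returns 0 on success, -1 if the
 * requested transition would move backward.
 */
int
sketch_status_set(enum zsketch_status *cur, enum zsketch_status next)
{
	if (next < *cur)
		return (-1);
	*cur = next;
	return (0);
}
```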
4831 * Kills all user processes in the zone, waiting for them all to exit
4835 zone_empty(zone_t *zone)
4845 while ((waitstatus = zone_status_timedwait_sig(zone,
4847 killall(zone->zone_id);
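The loop above re-signals the zone's processes each time the timed wait elapses, until the zone drains or the wait is interrupted. A simplified, hypothetical model of that retry pattern: a counter stands in for the process list, and `max_passes` stands in for the signal-interruptible timed wait.

```c
/*
 * Model of the kill-and-wait loop: each pass "signals" the remaining
 * processes and then re-checks emptiness, giving up after max_passes
 * (the real code instead waits on the zone status with a timeout).
 */
int
sketch_zone_empty(int *nprocs, int reaped_per_pass, int max_passes)
{
	int pass;

	for (pass = 0; pass < max_passes && *nprocs > 0; pass++) {
		/* killall(): processes exit asynchronously */
		*nprocs -= (*nprocs < reaped_per_pass) ?
		    *nprocs : reaped_per_pass;
	}
	return (*nprocs == 0 ? 0 : -1);
}
```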
4858 * This function implements the policy for zone visibility.
4860 * In standard Solaris, a non-global zone can only see itself.
4862 * In Trusted Extensions, a labeled zone can look up any zone whose label
4863 * it dominates. For this test, the label of the global zone is treated as
4866 * Returns true if zone attributes are viewable, false otherwise.
4869 zone_list_access(zone_t *zone)
4873 curproc->p_zone == zone) {
4875 } else if (is_system_labeled() && !(zone->zone_flags & ZF_IS_SCRATCH)) {
4880 zone_label = label2bslabel(zone->zone_slabel);
4882 if (zone->zone_id != GLOBAL_ZONEID &&
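The policy above condenses into one predicate. A hedged sketch in which a plain integer level stands in for the real `bslabel_t` dominance test; all names and parameters here are illustrative, and the scratch-zone exclusion is assumed to be handled by the caller.

```c
typedef int sketch_bool;

/*
 * Visibility: the global zone, and a zone looking at itself, always
 * qualify; on a labeled system, a zone may also see any zone whose
 * (simplified, integer) label it dominates.
 */
sketch_bool
sketch_zone_list_access(int viewer_is_global, int viewer_id, int target_id,
    int system_labeled, int viewer_label, int target_label)
{
	if (viewer_is_global || viewer_id == target_id)
		return (1);
	if (system_labeled)
		return (viewer_label >= target_label);	/* "dominates" */
	return (0);
}
```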
4894 * System call to start the zone's halt sequence. By the time this
4897 * and the zone status set to ZONE_IS_DOWN.
4900 * parent of any process running in the zone, and doesn't have SIGCHLD blocked.
4906 zone_t *zone;
4916 * Look for zone under hash lock to prevent races with other
4919 if ((zone = zone_find_all_by_id(zoneid)) == NULL) {
4926 * Hold the zone so we can continue to use the zone_t.
4928 zone_hold(zone);
4933 * the zone's status with regards to ZONE_IS_SHUTTING down.
4935 * e.g. NFS can fail the mount if it determines that the zone
4939 if (block_mounts(zone) == 0) {
4940 zone_rele(zone);
4946 status = zone_status_get(zone);
4948 * Fail if the zone isn't fully initialized yet.
4953 resume_mounts(zone);
4954 zone_rele(zone);
4964 resume_mounts(zone);
4965 zone_rele(zone);
4976 mutex_enter(&zone->zone_lock);
4977 if ((ntasks = zone->zone_ntasks) != 1) {
4981 zone_status_set(zone, ZONE_IS_SHUTTING_DOWN);
4983 mutex_exit(&zone->zone_lock);
4988 * zonehash_lock. The zone is empty.
4990 if (zone->zone_kthreads == NULL) {
4994 zone_status_set(zone, ZONE_IS_DOWN);
4996 zone_status_set(zone, ZONE_IS_EMPTY);
5002 resume_mounts(zone);
5004 if ((error = zone_empty(zone)) != 0) {
5005 zone_rele(zone);
5009 * After the zone status goes to ZONE_IS_DOWN this zone will no
5017 * This rebinding of the zone can happen multiple times
5022 zone_rele(zone);
5027 zone_pool_set(zone, pool_default);
5029 * The zone no longer needs to be able to see any cpus.
5031 zone_pset_set(zone, ZONE_PS_INVAL);
5040 zone_zsd_callbacks(zone, ZSD_SHUTDOWN);
5043 if (zone->zone_kthreads == NULL && zone_status_get(zone) < ZONE_IS_DOWN)
5044 zone_status_set(zone, ZONE_IS_DOWN);
5050 if (!zone_status_wait_sig(zone, ZONE_IS_DOWN)) {
5051 zone_rele(zone);
5061 zone_rele(zone);
5066 * Log the specified zone's reference counts. The caller should not be
5067 * holding the zone's zone_lock.
5070 zone_log_refcounts(zone_t *zone)
5092 * NOTE: We have to grab the zone's zone_lock to create a consistent
5093 * snapshot of the zone's reference counters.
5099 mutex_enter(&zone->zone_lock);
5100 zone->zone_flags |= ZF_REFCOUNTS_LOGGED;
5101 ref = zone->zone_ref;
5102 cred_ref = zone->zone_cred_ref;
5104 if (zone->zone_subsys_ref[index] != 0)
5113 mutex_exit(&zone->zone_lock);
5115 "Zone '%s' (ID: %d) is shutting down, but %u zone "
5117 zone->zone_name, zone->zone_id, ref, cred_ref);
5139 if (zone->zone_subsys_ref[index] != 0)
5142 zone->zone_subsys_ref[index]);
5144 mutex_exit(&zone->zone_lock);
5154 "Zone '%s' (ID: %d) is shutting down, but %u zone references and "
5155 "%u credential references are still extant %s", zone->zone_name,
5156 zone->zone_id, ref, cred_ref, buffer);
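The NOTE above captures the pattern used by zone_log_refcounts(): copy the counters as one snapshot under zone_lock, then format the log message from the private copy. A single-threaded sketch under stated assumptions: the names are illustrative, the boolean "lock" merely marks where the mutex is held, and the real code also snapshots the per-subsystem zone_subsys_ref[] array.

```c
#include <stdint.h>

/* hypothetical, simplified counters */
struct sketch_zone_refs {
	int	locked;		/* stands in for zone_lock */
	uint32_t ref;
	uint32_t cred_ref;
};

struct sketch_snapshot {
	uint32_t ref;
	uint32_t cred_ref;
};

/*
 * Copy the counters while "holding" the lock, then report from the
 * copy so the logged values are mutually consistent even if the live
 * counters keep changing afterwards.
 */
struct sketch_snapshot
sketch_snapshot_refs(struct sketch_zone_refs *z)
{
	struct sketch_snapshot s;

	z->locked = 1;			/* mutex_enter(&zone->zone_lock) */
	s.ref = z->ref;
	s.cred_ref = z->cred_ref;
	z->locked = 0;			/* mutex_exit(&zone->zone_lock) */
	return (s);
}
```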
5161 * System call entry point to finalize the zone halt process. The caller
5164 * Upon successful completion, the zone will have been fully destroyed:
5165 * zsched will have exited, destructor callbacks executed, and the zone
5172 zone_t *zone;
5184 * Look for zone under hash lock to prevent races with other
5187 if ((zone = zone_find_all_by_id(zoneid)) == NULL) {
5192 if (zone_mount_count(zone->zone_rootpath) != 0) {
5197 status = zone_status_get(zone);
5203 zone_status_set(zone, ZONE_IS_DYING); /* Tell zsched to exit */
5206 zone_hold(zone);
5212 zone_status_wait(zone, ZONE_IS_DEAD);
5213 zone_zsd_callbacks(zone, ZSD_DESTROY);
5214 zone->zone_netstack = NULL;
5215 uniqid = zone->zone_uniqid;
5216 zone_rele(zone);
5217 zone = NULL; /* potentially free'd */
5226 if ((zone = zone_find_all_by_id(zoneid)) == NULL ||
5227 zone->zone_uniqid != uniqid) {
5229 * The zone has gone away. Necessary conditions
5235 mutex_enter(&zone->zone_lock);
5236 unref = ZONE_IS_UNREF(zone);
5237 refs_have_been_logged = (zone->zone_flags &
5239 mutex_exit(&zone->zone_lock);
5242 * There is only one reference to the zone -- that
5243 * added when the zone was added to the hashtables --
5246 * zone.
5254 * some zone's general-purpose reference count reaches one.
5256 * on zone_destroy_cv, then log the zone's reference counts and
5265 * seconds) for the zone's references to clear.
5276 * wait timed out. The zone might have
5291 * destroyed the zone.
5293 * If the zone still exists and has more than
5295 * then log the zone's reference counts.
5303 * waiting for subsystems to release the zone's last
5304 * general-purpose references. Log the zone's reference
5307 zone_log_refcounts(zone);
5317 * Remove CPU cap for this zone now since we're not going to
5320 cpucaps_zone_remove(zone);
5322 /* Get rid of the zone's kstats */
5323 zone_kstat_delete(zone);
5326 if (zone->zone_pfexecd != NULL) {
5327 klpd_freelist(&zone->zone_pfexecd);
5328 zone->zone_pfexecd = NULL;
5332 if (ZONE_IS_BRANDED(zone))
5333 ZBROP(zone)->b_free_brand_data(zone);
5336 brand_unregister_zone(zone->zone_brand);
5339 * It is now safe to let the zone be recreated; remove it from the
5343 ASSERT(zonecount > 1); /* must be > 1; can't destroy global zone */
5346 list_remove(&zone_active, zone);
5348 (mod_hash_key_t)zone->zone_name);
5350 (mod_hash_key_t)(uintptr_t)zone->zone_id);
5351 if (zone->zone_flags & ZF_HASHED_LABEL)
5353 (mod_hash_key_t)zone->zone_slabel);
5360 if (zone->zone_rootvp != NULL) {
5361 VN_RELE(zone->zone_rootvp);
5362 zone->zone_rootvp = NULL;
5367 list_insert_tail(&zone_deathrow, zone);
5372 * free the zone unless there are outstanding cred references.
5374 zone_rele(zone);
5386 zone_t *zone;
5397 if ((zone = zone_find_all_by_id(zoneid)) == NULL) {
5401 zone_status = zone_status_get(zone);
5406 zone_hold(zone);
5410 * If not in the global zone, don't show information about other zones,
5411 * unless the system is labeled and the local zone's label dominates
5412 * the other zone.
5414 if (!zone_list_access(zone)) {
5415 zone_rele(zone);
5424 * the global zone).
5426 if (zone != global_zone)
5427 size = zone->zone_rootpathlen - 1;
5429 size = zone->zone_rootpathlen;
5431 bcopy(zone->zone_rootpath, zonepath, size);
5436 * Caller is not in the global zone.
5437 * if the query is on the current zone
5439 * just return faked-up path for current zone.
5445 * Return related path for current zone.
5448 int zname_len = strlen(zone->zone_name);
5453 bcopy(zone->zone_name, zonepath +
5470 size = strlen(zone->zone_name) + 1;
5474 err = copyoutstr(zone->zone_name, buf, bufsize, NULL);
5482 * Since we're not holding zonehash_lock, the zone status
5488 zone_status = zone_status_get(zone);
5494 size = sizeof (zone->zone_flags);
5497 flags = zone->zone_flags;
5507 copyout(zone->zone_privset, buf, bufsize) != 0)
5511 size = sizeof (zone->zone_uniqid);
5515 copyout(&zone->zone_uniqid, buf, bufsize) != 0)
5527 pool = zone_pool_get(zone);
5541 if (zone->zone_slabel == NULL)
5544 copyout(label2bslabel(zone->zone_slabel), buf,
5552 initpid = zone->zone_proc_initpid;
5562 size = strlen(zone->zone_brand->b_name) + 1;
5567 err = copyoutstr(zone->zone_brand->b_name, buf,
5574 size = strlen(zone->zone_initname) + 1;
5578 err = copyoutstr(zone->zone_initname, buf, bufsize,
5585 if (zone->zone_bootargs == NULL)
5588 outstr = zone->zone_bootargs;
5599 size = sizeof (zone->zone_phys_mcap);
5603 copyout(&zone->zone_phys_mcap, buf, bufsize) != 0)
5609 if (zone->zone_defaultcid >= loaded_classes)
5612 outstr = sclass[zone->zone_defaultcid].cl_name;
5625 if (zone->zone_hostid != HW_INVALID_HOSTID &&
5626 bufsize == sizeof (zone->zone_hostid)) {
5627 size = sizeof (zone->zone_hostid);
5628 if (buf != NULL && copyout(&zone->zone_hostid, buf,
5636 if (zone->zone_fs_allowed == NULL)
5639 outstr = zone->zone_fs_allowed;
5650 size = sizeof (zone->zone_secflags);
5653 if ((err = copyout(&zone->zone_secflags, buf, bufsize)) != 0)
5668 if ((attr >= ZONE_ATTR_BRAND_ATTRS) && ZONE_IS_BRANDED(zone)) {
5670 error = ZBROP(zone)->b_getattr(zone, attr, buf, &size);
5675 zone_rele(zone);
5689 zone_t *zone;
5699 * global zone.
5706 if ((zone = zone_find_all_by_id(zoneid)) == NULL) {
5710 zone_hold(zone);
5717 zone_status = zone_status_get(zone);
5725 err = zone_set_initname(zone, (const char *)buf);
5728 zone->zone_restart_init = B_FALSE;
5732 err = zone_set_bootargs(zone, (const char *)buf);
5735 err = zone_set_brand(zone, (const char *)buf);
5738 err = zone_set_fs_allowed(zone, (const char *)buf);
5741 err = zone_set_secflags(zone, (psecflags_t *)buf);
5744 err = zone_set_phys_mcap(zone, (const uint64_t *)buf);
5747 err = zone_set_sched_class(zone, (const char *)buf);
5750 if (bufsize == sizeof (zone->zone_hostid)) {
5751 if (copyin(buf, &zone->zone_hostid, bufsize) == 0)
5774 if ((attr >= ZONE_ATTR_BRAND_ATTRS) && ZONE_IS_BRANDED(zone))
5775 err = ZBROP(zone)->b_setattr(zone, attr, buf, bufsize);
5781 zone_rele(zone);
5791 * swap. This is because the counting for zone.max-swap does not allow swap
5792 * reservation to be shared between zones. Zone swap reservation is counted
5793 * on zone->zone_max_swap.
5809 * Cannot enter zone with shared anon memory which
5854 * The current process is injected into said zone. In the process
5856 * zone-wide rctls, and pool association to match those of the zone.
5858 * The first zone_enter() called while the zone is in the ZONE_IS_READY
5860 * enter a zone that is "ready" or "running".
5865 zone_t *zone;
5916 zone = zone_find_all_by_id(zoneid);
5917 if (zone == NULL) {
5924 * To prevent processes in a zone from holding contracts on
5947 * restarted init (or other zone-penetrating process) its
5954 if (contract_getzuniqid(next) != zone->zone_uniqid) {
5967 status = zone_status_get(zone);
5980 if (!priv_issubset(zone->zone_privset, &CR_OPPRIV(CRED()))) {
5988 * since the zone can't disappear (we have a hold on it).
5990 zone_hold(zone);
5996 * until we join the zone.
5999 zone_rele(zone);
6005 * Bind ourselves to the pool currently associated with the zone.
6008 newpool = zone_pool_get(zone);
6013 zone_rele(zone);
6024 * Make sure the zone hasn't moved on since we dropped zonehash_lock.
6026 if (zone_status_get(zone) >= ZONE_IS_SHUTTING_DOWN) {
6037 zone_rele(zone);
6044 * reservation from the global zone to the non global zone because
6052 zone_proj0 = zone->zone_zsched->p_task->tk_proj;
6054 mutex_enter(&zone->zone_nlwps_lock);
6055 /* add new lwps to zone and zone's proj0 */
6057 zone->zone_nlwps += pp->p_lwpcnt;
6058 /* add 1 task to zone's proj0 */
6062 zone->zone_nprocs++;
6063 mutex_exit(&zone->zone_nlwps_lock);
6065 mutex_enter(&zone->zone_mem_lock);
6066 zone->zone_locked_mem += pp->p_locked_mem;
6068 zone->zone_max_swap += swap;
6069 mutex_exit(&zone->zone_mem_lock);
6075 /* remove lwps and process from proc's old zone and old project */
6094 pp->p_zone = zone;
6099 * Joining the zone cannot fail from now on.
6108 * extra zone information, svc_fmri in this case
6117 * Reset the encapsulating process contract's zone.
6120 contract_setzuniqid(ct, zone->zone_uniqid);
6126 * We might as well be in project 0; the global zone's projid doesn't
6127 * make much sense in a zone anyhow.
6131 tk = task_create(0, zone);
6138 e.rcep_p.zone = zone;
6140 (void) rctl_set_dup(NULL, NULL, pp, &e, zone->zone_rctls, NULL,
6146 * the process and zone aren't going away, we know its session isn't
6150 * global zone of init's sid being the pid of sched. We extend this
6154 sp = zone->zone_zsched->p_sessp;
6155 sess_hold(zone->zone_zsched);
6160 pgjoin(pp, zone->zone_zsched->p_pidp);
6163 * If any threads are scheduled to be placed on zone wait queue they
6185 * If there is a default scheduling class for the zone and it is not
6190 if (zone->zone_defaultcid > 0 &&
6191 zone->zone_defaultcid != curthread->t_cid) {
6194 pcparms.pc_cid = zone->zone_defaultcid;
6198 * If setting the class fails, we still want to enter the zone.
6212 * We're firmly in the zone; let pools progress.
6217 * We don't need to retain a hold on the zone since we already
6218 * incremented zone_ntasks, so the zone isn't going anywhere.
6220 zone_rele(zone);
6225 vp = zone->zone_rootvp;
6234 &zone->zone_secflags.psf_lower);
6236 &zone->zone_secflags.psf_upper);
6238 &zone->zone_secflags.psf_inherit);
6247 crsetzone(newcr, zone);
6251 * Restrict all process privilege sets to zone limit
6253 priv_intersect(zone->zone_privset, &CR_PPRIV(newcr));
6254 priv_intersect(zone->zone_privset, &CR_EPRIV(newcr));
6255 priv_intersect(zone->zone_privset, &CR_IPRIV(newcr));
6256 priv_intersect(zone->zone_privset, &CR_LPRIV(newcr));
6261 * Adjust upcount to reflect zone entry.
6289 * Processes running in a (non-global) zone only see themselves.
6296 zone_t *zone, *myzone;
6309 /* just return current zone */
6322 for (zone = list_head(&zone_active);
6323 zone != NULL;
6324 zone = list_next(&zone_active, zone)) {
6325 if (zone->zone_id == GLOBAL_ZONEID)
6327 if (zone != myzone &&
6328 (zone->zone_flags & ZF_IS_SCRATCH))
6336 label2bslabel(zone->zone_slabel))) {
6338 zone->zone_id;
6351 for (zone = list_head(&zone_active); zone != NULL;
6352 zone = list_next(&zone_active, zone))
6353 zoneids[domi_nzones++] = zone->zone_id;
6394 zone_t *zone;
6399 /* return caller's zone id */
6410 zone = zone_find_all_by_name(kname);
6413 * In a non-global zone, a process can only look up the global zone and its own name.
6414 * In Trusted Extensions zone label dominance rules apply.
6416 if (zone == NULL ||
6417 zone_status_get(zone) < ZONE_IS_READY ||
6418 !zone_list_access(zone)) {
6422 zoneid = zone->zone_id;
6440 zone(int cmd, void *arg1, void *arg2, void *arg3, void *arg4)
6537 zone_t *zone;
6571 zone_t *zone;
6578 zone = zargp->zone;
6582 zone_namelen = strlen(zone->zone_name) + 1;
6584 bcopy(zone->zone_name, zone_name, zone_namelen);
6585 zoneid = zone->zone_id;
6586 uniqid = zone->zone_uniqid;
6588 * zoneadmd may be down, but at least we can empty out the zone.
6593 (void) zone_empty(zone);
6594 ASSERT(zone_status_get(zone) >= ZONE_IS_EMPTY);
6595 zone_rele(zone);
6607 * Since we're not holding a reference to the zone, any number of
6608 * things can go wrong, including the zone disappearing before we get a
6653 if ((zone = zone_find_by_id(zoneid)) == NULL) {
6659 if (zone->zone_uniqid != uniqid) {
6663 zone_rele(zone);
6671 zone_rele(zone);
6684 * Entry point for uadmin() to tell the zone to go away or reboot. Analog to
6685 * kadmin(). The caller is a process in the zone.
6687 * In order to shutdown the zone, we will hand off control to zoneadmd
6688 * (running in the global zone) via a door. We do a half-hearted job at
6689 * killing all processes in the zone, create a kernel thread to contact
6690 * zoneadmd, and make note of the "uniqid" of the zone. The uniqid is
6692 * zone_destroy()) know exactly which zone they're talking about.
6699 zone_t *zone;
6701 zone = curproc->p_zone;
6743 * is in the zone.
6745 ASSERT(zone_status_get(zone) < ZONE_IS_EMPTY);
6746 if (zone_status_get(zone) > ZONE_IS_RUNNING) {
6748 * This zone is already on its way down.
6756 zone_status_set(zone, ZONE_IS_SHUTTING_DOWN);
6764 killall(zone->zone_id);
6767 * work. This thread can't be created in our zone otherwise
6772 zargp->arg.uniqid = zone->zone_uniqid;
6773 zargp->zone = zone;
6779 zone_hold(zone);
6789 * Entry point so kadmin(A_SHUTDOWN, ...) can set the global zone's
6804 /* Modify the global zone's status first. */
6811 * could cause assertions to fail (e.g., assertions about a zone's
6814 * fail to boot the new zones when they see that the global zone is
6827 * Returns true if the named dataset is visible in the current zone.
6836 zone_t *zone = curproc->p_zone;
6848 for (zd = list_head(&zone->zone_datasets); zd != NULL;
6849 zd = list_next(&zone->zone_datasets, zd)) {
6869 for (zd = list_head(&zone->zone_datasets); zd != NULL;
6870 zd = list_next(&zone->zone_datasets, zd)) {
6888 * zone_vfslist of this zone. If found, return true and note that it is
6902 vfsp = zone->zone_vfslist;
6934 } while (vfsp != zone->zone_vfslist);
6944 * effectively compares against zone paths rather than zonerootpath
6947 * paths, whether zone-visible or not, including those which are parallel
6950 * If the specified path does not fall under any zone path then global
6951 * zone is returned.
6957 * The caller is responsible for zone_rele of the returned zone.
6962 zone_t *zone;
6976 for (zone = list_head(&zone_active); zone != NULL;
6977 zone = list_next(&zone_active, zone)) {
6982 if (zone == global_zone) /* skip global zone */
6986 c = zone->zone_rootpath + zone->zone_rootpathlen - 2;
6991 pathlen = c - zone->zone_rootpath + 1 - path_offset;
6992 rootpath_start = (zone->zone_rootpath + path_offset);
6996 if (zone == NULL)
6997 zone = global_zone;
6998 zone_hold(zone);
7000 return (zone);
7004 * Finds a zone_dl_t with the given linkid in the given zone. Returns the
7008 zone_find_dl(zone_t *zone, datalink_id_t linkid)
7012 ASSERT(mutex_owned(&zone->zone_lock));
7013 for (zdl = list_head(&zone->zone_dl_list); zdl != NULL;
7014 zdl = list_next(&zone->zone_dl_list, zdl)) {
7022 zone_dl_exists(zone_t *zone, datalink_id_t linkid)
7026 mutex_enter(&zone->zone_lock);
7027 exists = (zone_find_dl(zone, linkid) != NULL);
7028 mutex_exit(&zone->zone_lock);
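The find/exists pair above is a plain linked-list scan performed under the zone's lock. A single-threaded sketch with a hypothetical singly linked node type standing in for `zone_dl_t` (no locking shown, since the list in this sketch is private).

```c
#include <stddef.h>

typedef unsigned int sketch_linkid_t;

/* hypothetical singly linked analogue of zone_dl_t */
struct sketch_dl {
	sketch_linkid_t	zdl_id;
	struct sketch_dl *zdl_next;
};

/* Return the entry holding linkid, or NULL if the zone doesn't own it. */
struct sketch_dl *
sketch_find_dl(struct sketch_dl *head, sketch_linkid_t linkid)
{
	struct sketch_dl *zdl;

	for (zdl = head; zdl != NULL; zdl = zdl->zdl_next) {
		if (zdl->zdl_id == linkid)
			return (zdl);
	}
	return (NULL);
}
```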
7033 * Add a data link name for the zone.
7039 zone_t *zone;
7045 /* Verify that the datalink ID doesn't already belong to a zone. */
7047 for (zone = list_head(&zone_active); zone != NULL;
7048 zone = list_next(&zone_active, zone)) {
7049 if (zone_dl_exists(zone, linkid)) {
7052 return (set_errno((zone == thiszone) ? EEXIST : EPERM));
7071 zone_t *zone;
7074 if ((zone = zone_find_by_id(zoneid)) == NULL)
7077 mutex_enter(&zone->zone_lock);
7078 if ((zdl = zone_find_dl(zone, linkid)) == NULL) {
7081 list_remove(&zone->zone_dl_list, zdl);
7085 mutex_exit(&zone->zone_lock);
7086 zone_rele(zone);
7091 * Using the zoneidp as ALL_ZONES, we can look up which zone has been assigned
7098 zone_t *zone;
7102 if ((zone = zone_find_by_id(*zoneidp)) != NULL) {
7103 if (zone_dl_exists(zone, linkid))
7105 zone_rele(zone);
7111 for (zone = list_head(&zone_active); zone != NULL;
7112 zone = list_next(&zone_active, zone)) {
7113 if (zone_dl_exists(zone, linkid)) {
7114 *zoneidp = zone->zone_id;
7124 * Get the list of datalink IDs assigned to a zone.
7136 zone_t *zone;
7142 if ((zone = zone_find_by_id(zoneid)) == NULL)
7146 mutex_enter(&zone->zone_lock);
7147 for (zdl = list_head(&zone->zone_dl_list); zdl != NULL;
7148 zdl = list_next(&zone->zone_dl_list, zdl)) {
7156 mutex_exit(&zone->zone_lock);
7157 zone_rele(zone);
7162 mutex_exit(&zone->zone_lock);
7163 zone_rele(zone);
7174 * Public interface for looking up a zone by zoneid. It's a customized version
7176 * callbacks, since it doesn't hold a reference on the zone structure; hence if
7177 * it is called elsewhere the zone could disappear after the zonehash_lock
7181 * 1. Doesn't check the status of the zone.
7185 * 3. Returns without the zone being held.
7190 zone_t *zone;
7194 zone = &zone0;
7196 zone = zone_find_all_by_id(zoneid);
7198 return (zone);
7202 * Walk the datalinks for a given zone
7208 zone_t *zone;
7214 if ((zone = zone_find_by_id(zoneid)) == NULL)
7221 mutex_enter(&zone->zone_lock);
7222 for (zdl = list_head(&zone->zone_dl_list); zdl != NULL;
7223 zdl = list_next(&zone->zone_dl_list, zdl)) {
7228 mutex_exit(&zone->zone_lock);
7229 zone_rele(zone);
7235 mutex_exit(&zone->zone_lock);
7236 zone_rele(zone);
7240 for (i = 0, zdl = list_head(&zone->zone_dl_list); zdl != NULL;
7241 i++, zdl = list_next(&zone->zone_dl_list, zdl)) {
7245 mutex_exit(&zone->zone_lock);
7252 zone_rele(zone);
7273 zone_t *zone;
7294 if ((zone = zone_find_by_id(zoneid)) == NULL) {
7298 mutex_enter(&zone->zone_lock);
7299 if ((zdl = zone_find_dl(zone, linkid)) == NULL) {
7318 mutex_exit(&zone->zone_lock);
7319 zone_rele(zone);
7329 zone_t *zone;
7349 if ((zone = zone_find_by_id(zoneid)) == NULL)
7352 mutex_enter(&zone->zone_lock);
7353 if ((zdl = zone_find_dl(zone, linkid)) == NULL) {
7371 mutex_exit(&zone->zone_lock);
7372 zone_rele(zone);