inetd.c revision 16ba0fac26f672b18447f2e17a2f91f14ed3ce40
/*
 * The contents of this file are subject to the terms of the
 * Common Development and Distribution License (the "License").
 * You may not use this file except in compliance with the License.
 *
 * See the License for the specific language governing permissions
 * and limitations under the License.
 *
 * When distributing Covered Code, include this CDDL HEADER in each file.
 * If applicable, add the following below this CDDL HEADER, with the
 * fields enclosed by brackets "[]" replaced with your own identifying
 * information: Portions Copyright [yyyy] [name of copyright owner]
 */

/*
 * Copyright 2010 Sun Microsystems, Inc.  All rights reserved.
 * Use is subject to license terms.
 */

/*
 * Below are some high level notes of the operation of the SMF inetd. The
 * notes don't go into any real detail, and the viewer of this file is
 * encouraged to look at the code and its associated comments to better
 * understand inetd's operation. This saves the potential for the code
 * and these notes diverging over time.
 *
 * Inetd's major work is done from the context of event_loop(). Within this
 * loop, inetd polls for events arriving from a number of different file
 * descriptors, representing the following event types, and initiates
 * any necessary event processing:
 * - notification of terminated processes (discovered via contract events).
 * - instance specific events originating from the SMF master restarter.
 * - stop/refresh requests from the inetd method processes (coming in on a
 *   Unix Domain socket).
 * There's also a timeout set for the poll, which is set to the nearest
 * scheduled timer in a timer queue that inetd uses to perform delayed
 * processing, such as bind retries.
 * The SIGHUP and SIGINT signals can also interrupt the poll, and will
 * result in inetd being refreshed or stopped respectively, as was the
 * behavior with the old inetd.
 *
 * Inetd implements a state machine for each instance. The states within the
 * machine are: offline, online, disabled, maintenance, uninitialized and
 * specializations of the offline state for when an instance exceeds one of
 * its DOS limits. The state of an instance can be changed as a
 * [...] started up. The ongoing state of an instance is stored in the SMF
 * repository, as required of SMF restarters. This enables an administrator
 * to view the state of each instance, and, if inetd was to terminate
 * unexpectedly, it could use the stored state to re-commence where it left
 * off.
 *
 * Within the state machine a number of methods are run (if provided) as part
 * of a state transition to aid/effect a change in an instance's state. The
 * supported methods are: offline, online, disable, refresh and start. The
 * latter of these is the equivalent of the server program and its arguments.
 *
 * Events from the SMF master restarter come in on a number of threads
 * created in the registration routine of librestart, the delegated restarter
 * library. These threads call into the restarter_event_proxy() function
 * when an event arrives. To serialize the processing of instances, these
 * events are then written down a pipe to the process's main thread, which
 * listens for these events via a poll call, with the file descriptor of the
 * other end of the pipe in its read set, and processes the event
 * appropriately. When the event has been processed (which may be delayed if
 * the instance the event is for is in the process of executing one of its
 * methods as part of a state transition) it writes an acknowledgement back
 * down the pipe the event was received on.
 * The thread in restarter_event_proxy() that wrote the event will read the
 * acknowledgement it was blocked upon, and will then be able to return to
 * its caller, thus implicitly acknowledging the event, and allowing another
 * event to be written down the pipe for the main thread to process.
 */

/* path to inetd's binary */

/* ... be the primary file, so it is checked before /etc/inetd.conf. */

/* Arguments passed to this binary to request which method to execute. */

/* connection backlog for unix domain socket */

/* number of retries to recv() a request on the UDS socket before giving up */

/* enumeration of the different ends of a pipe */

/*
 * Collection of information for each state.
 * NOTE: This table is indexed into using the internal_inst_state_t
 * enumeration, so the ordering needs to be kept in synch.
 */

/*
 * Pipe used to send events from the threads created by
 * restarter_bind_handle() to the main thread of control.
 */

/*
 * Used to protect the critical section of code in restarter_event_proxy()
 * that involves writing an event down the event pipe and reading an
 * acknowledgement.
 */

/* handle used in communication with the master restarter */

/* set to indicate a refresh of inetd is requested */

/* set by the SIGTERM handler to flag we got a SIGTERM */

/*
 * Timer queue used to store timers for delayed event processing, such as
 * bind retries.
 */

/*
 * fd of Unix Domain socket used to communicate stop and refresh requests
 * to the inetd start method process.
 */

/*
 * List of inetd's currently managed instances; each containing its state,
 * and in certain states its configuration.
 */

/* set to indicate we're being stopped */

/* TCP wrappers syslog globals. Consumed by libwrap. */

/* path of the configuration file being monitored by check_conf_file() */

/* Auditing session handle */

/* Number of pending connections */

/*
 * The following two functions are callbacks that libumem uses to determine
 * ... exported by FMA and is consolidation private. The comments in the two
 * functions give the environment variable that will effectively be set to
 * their returned value, and thus whose behavior for this value, described in
 * umem_debug(3MALLOC), will be followed. The first returns
 * "default,verbose" (the UMEM_DEBUG setting); the second returns
 * "fail,contents" (the UMEM_LOGGING setting).
 */

/* "Invalid configuration for instance %s, placing in maintenance" */
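As a sketch of what those two callbacks look like, reconstructed from the fragments above and the usual libumem debug-callback convention (not copied verbatim from the shipped file):

/*
 * Sketch of the libumem debug callbacks described above; the return values
 * behave as if UMEM_DEBUG and UMEM_LOGGING had been set in the environment.
 */
const char *
_umem_debug_init(void)
{
	return ("default,verbose");	/* UMEM_DEBUG setting */
}

const char *
_umem_logging_init(void)
{
	return ("fail,contents");	/* UMEM_LOGGING setting */
}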
/*
 * Returns B_TRUE if the instance is in a suitable state for inetd to stop.
 */

/*
 * Given the instance fmri, obtain the corresponding scf_instance.
 * Caller is responsible for freeing the returned scf_instance and
 * its scf_handle.
 */

/*
 * Updates the current and next repository states of instance 'inst'. If
 * any errors occur an error message is output.
 */

/*
 * If transitioning to maintenance, check auxiliary_tty set
 * by svcadm and assign appropriate value to auxiliary_state.
 */

/*
 * If the maintenance event comes from a service request,
 * validate auxiliary_fmri and copy it to the repository.
 */
	aux = "administrative_request";
	/* "... auxiliary_fmri property for %s" */
	/* "... auxiliary_fmri property for %s" */

/* update the repository SMF state */

/*
 * Sends a refresh event to the inetd start method process and returns
 * SMF_EXIT_OK if it managed to send it. If it fails to send the request for
 * some reason it returns SMF_EXIT_ERR_OTHER.
 */

/* write the request and return success */
	/* on failure: gettext("Failed to send refresh request to inetd: %s") */
/*
 * Sends a stop event to the inetd start method process and waits till it goes
 * away. If inetd is determined to have stopped SMF_EXIT_OK is returned, else
 * SMF_EXIT_ERR_OTHER is returned.
 */

/*
 * Assume connect_to_inetd() failed because inetd was already
 * stopped, and return success.
 */

/*
 * This is safe to do since we're fired off in a separate process
 * from inetd and in the case we get wedged, the stop method timeout
 * will occur and we'd be killed by our restarter.
 */

/* write the stop request to inetd and wait till it goes away */

/* wait until remote end of socket is closed */

/*
 * This function is called to handle restarter events coming in from the
 * master restarter. It is registered with the master restarter via
 * restarter_bind_handle() and simply passes a pointer to the event down
 * the event pipe, which will be discovered by the poll in the event loop
 * and processed there. It waits for an acknowledgement to be written back
 * down the pipe before returning.
 * Writing a pointer to the function's 'event' parameter down the pipe will
 * be safe, as the thread in restarter_event_proxy() doesn't return until
 * the main thread has finished its processing of the passed event, thus
 * the referenced event will remain around until the function returns.
 * To impose the limit of only one event being in the pipe and processed
 * at once, a lock is taken on entry to this function and returned on exit.
 */

/* write the event to the main worker thread down the pipe */

/*
 * Wait for an acknowledgement that the event has been processed from
 * the same pipe. In the case that inetd is stopping, any thread in
 * this function will simply block on this read until inetd eventually
 * exits. This will result in this function not returning success to
 * its caller, and the event that was being processed when the
 * function exited will be re-sent when inetd is next started.
 */

/*
 * Something's seriously wrong with the event pipe. Notify the
 * worker thread by closing this end of the event pipe and pause till ...
 */

/*
 * Let restarter_event_proxy() know we're finished with the event it's blocked
 * upon. The 'processed' argument denotes whether we successfully processed
 * the event.
 */

/*
 * If safe_write returns -1 something's seriously wrong with the event
 * pipe, so start the shutdown proceedings.
 */

/* Switch the syslog identification string to 'ident'. */

/*
 * Perform TCP wrappers checks on this instance. Due to the fact that the
 * current wrappers code used in Solaris is taken untouched from the open
 * source version, we're stuck with using the daemon name for the checks, as
 * opposed to making use of instance FMRIs. Sigh.
 * Returns B_TRUE if the check passed, else B_FALSE.
 */

/*
 * Wrap the service using libwrap functions. The code below implements
 * the functionality of tcpd. This is done only for stream,nowait
 * services, following the convention of other vendors. udp/dgram and
 * stream/wait can NOT be wrapped with this libwrap, so be wary of
 * changing the test below.
 */

/*
 * Change the syslog message identity to the name of the
 * daemon being wrapped, as opposed to "inetd".
 */

	/* denied: "refused connect from %s (access denied)" */
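The libwrap interaction that comment describes looks roughly as follows. This is a sketch of the tcpd-style check, not the exact code in this file; the daemon_name and fd parameters are placeholders:

#include <sys/types.h>
#include <syslog.h>
#include <tcpd.h>

int allow_severity = LOG_INFO;		/* consumed by libwrap */
int deny_severity = LOG_WARNING;	/* consumed by libwrap */

static boolean_t
tcp_wrappers_ok(int fd, const char *daemon_name)
{
	struct request_info req;

	/* identify the wrapped daemon and the connected endpoint to libwrap */
	(void) request_init(&req, RQ_DAEMON, daemon_name, RQ_FILE, fd, 0);
	fromhost(&req);			/* fill in the client host details */

	if (hosts_access(&req) == 0) {
		syslog(deny_severity,
		    "refused connect from %s (access denied)",
		    eval_client(&req));
		return (B_FALSE);
	}
	return (B_TRUE);
}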
/* Revert syslog identity back to "inetd". */

/*
 * Handler registered with the timer queue code to remove an instance from
 * the connection rate offline state when it has been there for its allotted
 * time.
 */

/*
 * Check whether this instance in the offline state is in transition to
 * another state and do the work to continue this transition.
 */

/*
 * If inetd is in the process of stopping, we don't want to enter
 * any states but offline, disabled and maintenance.
 */

/*
 * Schedule a timer to bring the instance out of the
 * connection rate offline state.
 * ("... won't be brought on line after %d ...")
 */

/*
 * Create a socket bound to the instance's configured address. If the
 * bind fails, returns -1, else the fd of the bound socket.
 * ("Socket creation failure for instance %s, proto %s: %s")
 */

/* set the keepalive option */
	/* on failure: "... service instance %s, proto %s: %s", fmri, ... */

/* restrict socket to IPv6 communications only */
	/* on failure: "... service instance %s, proto %s: %s", fmri, proto, ... */

	/* "Failed to bind to the port of service instance %s, ..." */

/* Retrieve and store the address bound to for RPC services. */
"Listening for service %s with backlog queue" * Handler registered with the timer queue code to retry the creation * For each of the fds for the given instance that are bound, if 'listen' is * set add them to the poll set, else remove them from it. If proto_name is * not NULL then apply the change only to this specific protocol endpoint. * If any additions fail, returns -1, else 0 on success. * Handle the case were we either fail to create a bound fd or we fail * to add a bound fd to the poll set for the given instance. * We must be being called as a result of a failed poll_bound_fds() * as a bind retry is already scheduled. Just return and let it do * Check if the rebind retries limit is operative and if so, * if it has been reached. /* check if any of the fds are being poll'd upon */ if (
pi !=
NULL) {
/* polling on > 0 fds */ "all protocols for instance %s, " "transitioning to degraded"),
/*
 * In the case we failed the 'bind' because set_pollfd()
 * failed on all bound fds, use the offline handling.
 * ("... protocols for instance %s, instance will go to ...")
 */

/*
 * Set the retries exceeded flag so when the method
 * completes the instance goes to the degraded state.
 */

	/* "%s:%d: Unknown instance state %d.\n" */

/*
 * bind re-scheduled, so if we're offline reflect this in the
 * state.
 */

/* Check if two transport protocols for RPC conflict. */

/*
 * If the protocol isn't TCP/IP or UDP/IP assume that it has its own
 * port namespace and that conflicts can be detected by literal string
 * comparison.
 */

/* Check if inetd thinks this RPC program number is already registered. */

/*
 * An RPC protocol conflict occurs if
 * a) the program numbers are the same, and
 * b) the version numbers overlap, and
 * c) the protocols (TCP vs UDP vs tic*) are the same.
 */

/*
 * Independent of the transport, for each of the entries in the instance's
 * proto list this function first attempts to create an associated network fd;
 * for RPC services these are then bound to a kernel chosen port and the
 * fd is registered with rpcbind; for non-RPC services the fds are bound
 * to the port associated with the instance's service name. On any successful
 * binds the instance is taken online. Failed binds are handled by
 * the bind failure handler.
 */

/* Loop through and try and bind any unbound protos. */

/*
 * We cast pi to a void so we can then go on to cast
 * it to a socket_info_t without lint complaining
 * about alignment. This is done because the x86
 * version of lint thinks a lint suppression directive
 * is unnecessary and flags it as such, yet the sparc
 * version complains if it's absent.
 */

/*
 * Don't register the same RPC program number twice.
 * Doing so silently discards the old service
 * without causing an error.
 */

/*
 * If we've managed to bind at least one proto let's run the
 * online method, so we can start listening for it.
 */
	return;
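A sketch of the conflict test the comment above describes; the type and field names here are illustrative, not the structures inetd.c actually uses:

#include <string.h>
#include <sys/types.h>

typedef struct {
	ulong_t		prognum;	/* RPC program number */
	ulong_t		lowver;		/* lowest supported version */
	ulong_t		highver;	/* highest supported version */
	const char	*netid;		/* e.g. "tcp", "udp", "ticlts" */
} rpc_reg_sketch_t;

static boolean_t
rpc_conflict(const rpc_reg_sketch_t *a, const rpc_reg_sketch_t *b)
{
	/* a) the program numbers are the same */
	if (a->prognum != b->prognum)
		return (B_FALSE);

	/* b) the version numbers overlap */
	if (a->highver < b->lowver || b->highver < a->lowver)
		return (B_FALSE);

	/* c) the protocols (TCP vs UDP vs tic*) are the same */
	return (strcmp(a->netid, b->netid) == 0 ? B_TRUE : B_FALSE);
}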
	/* instance gone to maintenance */

/*
 * We're 'online', so start polling on any bound fds we're
 * not already polling on.
 */

/*
 * We've successfully bound and poll'd upon all protos,
 * so reset the failure count.
 */

/*
 * Nothing to do here as the method completion code will start
 * listening for any successfully bound fds.
 */

/*
 * Counter to create_bound_fds(), for each of the bound network fds this
 * function unregisters the instance from rpcbind if it's an RPC service,
 * stops listening for new connections for it and then closes the listening
 * fd.
 */

/* cancel any bind retries */

/*
 * Perform %A address expansion and return a pointer to a static string
 * array containing crafted arguments. This expansion is provided for
 * compatibility with 4.2BSD daemons, and as such we've copied the logic of
 * the legacy inetd to maintain this compatibility as much as possible. This
 * logic is a bit scatty, but it dates back at least as far as SunOS 4.x.
 */
	static char addrbuf[sizeof ("ffffffff.65536")];
/*
 * We cast pi to a void so we can then go on to cast it to a
 * socket_info_t without lint complaining about alignment. This
 * is done because the x86 version of lint thinks a lint suppression
 * directive is unnecessary and flags it as such, yet the sparc
 * version complains if it's absent.
 */

/* set ret[0] to the basename of exec path */

/*
 * Returns the state associated with the supplied method being run for an
 * instance.
 */

/*
 * Store the method's PID and CID in the repository. If the store fails
 * we ignore it and just drive on.
 */

/*
 * Remove the method's PID and CID from the repository. If the removal
 * fails we ignore it and drive on.
 */

/*
 * Retrieves the current and next internal states. Returns 0 on success,
 * else returns one of the following on error:
 * SCF_ERROR_NO_MEMORY if memory allocation failed.
 * SCF_ERROR_CONNECTION_BROKEN if the connection to the repository was broken.
 * SCF_ERROR_TYPE_MISMATCH if the property was of an unexpected type.
 * SCF_ERROR_NO_RESOURCES if the server doesn't have adequate resources.
 * SCF_ERROR_NO_SERVER if the server isn't running.
 */

/* retrieve internal states */
	/* on failure: "Failed to read state of instance %s: %s" */
	debug_msg("instance with no previous int state - "
	    "setting state to uninitialized");

/* update convenience states */

/*
 * Retrieve stored process ids and register each of them so we process their
 * termination.
 * (on failure: "... instance %s from repository: %s", inst->fmri, ...)
 */

/* The process must have already terminated. Remove it. */

/* synch the repository pid list to remove any terminated pids */

/* Remove the passed instance from inetd control. */

/* stop listening for network connections */
/* stop listening for terminated methods */

/*
 * Refresh the configuration of instance 'inst'. This method gets called as
 * a result of a refresh event for the instance from the master restarter, so
 * we can rely upon the instance's running snapshot having been updated from
 * its configuration snapshot.
 */

/*
 * Ignore any possible changes, we'll re-read the configuration
 * automatically when we exit these states.
 */

/* Cancel scheduled bind retries. */

/*
 * Take the instance to the copies
 * offline state, via the offline
 * state.
 */

/*
 * Since we're already in a DOS state,
 * don't bother evaluating the copies
 * limit. This will be evaluated in
 * process_offline_inst().
 */

/*
 * Check if the copies limit has been increased
 * above the current count.
 */

/*
 * Try to avoid the overhead of taking an instance
 * offline and back on again. We do this by limiting
 * this behavior to two eventualities:
 * - there needs to be a re-bind to listen on behalf
 *   of the instance with its new configuration. This
 *   could be because for example its service has been
 *   associated with a different port, or because the
 *   v6only protocol option has been newly applied to
 *   the instance;
 * - one or both of the start or online methods of the
 *   instance have changed in the new configuration.
 *   Without taking the instance offline when the
 *   start method changed the instance may be running
 *   with unwanted parameters (or even an unwanted
 *   binary); and without taking the instance offline
 *   if its online method was to change, some part of
 *   its running environment may have changed and would
 *   not be picked up until the instance next goes
 *   offline for another reason.
 */
} else {			/* no bind config / method changes */

	/*
	 * swap the proto list over from the old
	 * configuration to the new, so we retain
	 * our set of network fds.
	 */

	/* re-evaluate copies limits based on new cfg */

	/*
	 * Since the instance isn't being
	 * taken offline, where we assume it
	 * would pick up any configuration
	 * changes automatically when it goes
	 * back online, run its refresh method
	 * to allow it to pick up any changes.
	 */
	debug_msg(
"Unhandled current state %d for instance in " * Called by process_restarter_event() to handle a restarter event for an * When startd restarts, it sends _ADD_INSTANCE to delegated * restarters for all those services managed by them. We should * acknowledge this event, as startd's graph needs to be updated * about the current state of the service, when startd is * update_state() is ok to be called here, as commands for * instances in transition are deferred by * process_restarter_event(). * We've got a restart event, so if the instance is online * in any way initiate taking it offline, and rely upon * our restarter to send us an online event to bring * inetd must be closing down as we wouldn't get this * event in one of these states from the master * restarter. Take the instance to the offline resting * Dependencies are met, let's take the service online. * Only try and bind for a wait type service if * no process is running on its behalf. Otherwise, just * mark the service online and binding will be attempted * when the process exits. * The instance should be disabled, so run the * instance's disabled method that will do the work * The master restarter has requested the instance * go to maintenance; since we're already offline * just update the state to the maintenance state. * The instance should be disabled. Firstly, as for * the above dependencies unmet comment, cancel * the bind retry timer and update the state to * offline. Then, run the disable method to do the * work to take the instance from offline to * The master restarter has requested the instance * be placed in the maintenance state. Cancel the * outstanding retry timer, and since we're already * offline, update the state to maintenance. * The instance needs to be disabled. Do the same work * as for the dependencies unmet event below to * take the instance offline. * Indicate that the offline method is being run * as part of going to the disabled state, and to * carry on this transition. * The master restarter has requested the instance be * placed in the maintenance state. This involves * firstly taking the service offline, so do the * same work as for the dependencies unmet event * below. We set the maintenance_req flag to * indicate that when we get to the offline state * we should be placed directly into the maintenance * Dependencies have become unmet. Close and * stop listening on the instance's network file * descriptor, and run the offline method to do * any work required to take us to the offline state. * Ignore other events until we know whether we're * We've got an enabled event; make use of the handling in the * The instance needs enabling. Commence reading its * configuration and if successful place the instance * in the offline state and let process_offline_inst() * The master restarter has requested the instance be * placed in the maintenance state, so just update its * The master restarter has requested that the instance * be taken out of maintenance. Read its configuration, * and if successful place the instance in the offline * state and call process_offline_inst() to take it * The configuration was invalid. If the * service has disabled requested, let's * just place the instance in disabled even * though we haven't been able to run its * disable method, as the slightly incorrect * state is likely to be less of an issue to * an administrator than refusing to move an * instance to disabled. 
If disable isn't * requested, re-mark the service's state * as maintenance, so the administrator can * see the request was processed. * The instance wants disabling. Take the instance * offline as for the dependencies unmet event above, * and then from there run the disable method to do * the work to take the instance to the disabled state. * The master restarter has requested the instance * be taken to maintenance. Cancel the timer setup * when we entered this state, and go directly to * The instance wants disabling. Update the state * to offline, and run the disable method to do the * work to take it to the disabled state. * The master restarter has requested the instance be * placed in maintenance. Since it's already offline * simply update the state. debug_msg(
"handle_restarter_event: instance in an " * Tries to read and process an event from the event pipe. If there isn't one * or an error occurred processing the event it returns -1. Else, if the event * is for an instance we're not already managing we read its state, add it to * our list to manage, and if appropriate read its configuration. Whether it's * new to us or not, we then handle the specific event. * Returns 0 if an event was read and processed successfully, else -1. * Try to read an event pointer from the event pipe. /* other end of pipe closed */ default:
/* unexpected read error */ * There's something wrong with the event pipe. Let's * shutdown and be restarted. * Check if we're currently managing the instance which the event * pertains to. If not, read its complete state and add it to our "Failed to adopt contracts of instance %s: %s"),
* Only read configuration for instances that aren't in any of * the disabled, maintenance or uninitialized states, since * they'll read it on state exit. * If the instance is currently running a method, don't process the * event now, but attach it to the instance for processing when * the instance finishes its transition. * Do the state machine processing associated with the termination of instance * 'inst''s start method for the 'proto_name' protocol if this parameter is not * A wait type service's start method has exited. * Check if the method was fired off in this inetd's * lifetime, or a previous one; if the former, * re-commence listening on the service's behalf; if * the latter, mark the service offline and let bind * If a bound fd exists, the method was fired * off during this inetd's lifetime. * Check if a nowait service should be brought back online * after exceeding its copies limit. * If the instance has a pending event process it and initiate the debug_msg(
"Injecting pending event %d for instance %s",
* Do the state machine processing associated with the termination * of the specified instance's non-start method with the specified status. * Once the processing of the termination is done, the function also picks up * any processing that was blocked on the method running. "transitioning to maintenance"),
/* non-failure method return */ * An instance method never returned a supported return code. * We'll assume this means the method succeeded for now whilst * non-GL-cognizant methods are used - eg. pkill. debug_msg(
"The %s method of instance %s returned " "non-compliant exit code: %d, assuming success",
* Update the state from the in-transition state. * If we've exhausted the bind retries, flag that by setting * the instance's state to degraded. * This instance was found during refresh to need * taking offline because its newly read configuration * was sufficiently different. Now we're offline, * activate this new configuration. * We've just successfully executed the online method. We have * a set of bound network fds that were created before running * this method, so now we're online start listening for * If we're now out of transition (process_offline_inst() could have * fired off another method), carry out any jobs that were blocked by * us being in transition. * inetd is stopping, and this instance hasn't * been stopped. Inject a stop event. * Check if configuration file specified is readable. If not return B_FALSE, "file %s for performing modification checks: %s"),
* Check whether the configuration file has changed contents since inetd * inetconv needs to be run. * No explicit config file specified, so see if one of the * default two are readable, checking the primary one first * followed by the secondary. /* modified config file */ "Configuration file %s has been modified since " "inetconv was last run. \"inetconv -i %s\" must be " "run to apply any changes to the SMF"),
file,
file);
/* No message if hash not yet computed */ "configuration file %s has been modified: %s"),
* Refresh all inetd's managed instances and check the configuration file * for any updates since inetconv was last run, logging a message if there * are. We call the SMF refresh function to refresh each instance so that * the refresh request goes through the framework, and thus results in the * running snapshot of each instance being updated from the configuration /* call libscf to send refresh requests for all managed instances */ * Log a message if the configuration file has changed since inetconv * Initiate inetd's shutdown. /* Block handling signals for stop and refresh */ /* Indicate inetd is coming down */ /* Stop polling on restarter events. */ * Send a stop event to all currently unstopped instances that * aren't in transition. For those that are in transition, the * event will get sent when the transition completes. * Sets up the intra-inetd-process Unix Domain Socket. * Returns -1 on error, else 0. * Handle an incoming request on the Unix Domain Socket. Returns -1 if there * was an error handling the event, else 0. /* Check peer credentials before acting on the request */ (
void)
poll(
NULL, 0,
100);
/* 100ms pause */ /* flag the request for event_loop() to process */ * Perform checks for common exec string errors. We limit the checks to * whether the file exists, is a regular file, and has at least one execute * bit set. We leave the core security checks to exec() so as not to duplicate * and thus incur the associated drawbacks, but hope to catch the common /* check the file exists */ "Can't stat the %s method of instance %s: %s"),
* Check if the file is a regular file and has at least one execute "The %s method of instance %s isn't a regular file"),
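The checks described above (file exists, is a regular file, has at least one execute bit set) reduce to a stat() along these lines; a sketch, with the surrounding error reporting omitted:

#include <sys/types.h>
#include <sys/stat.h>

/* Returns B_TRUE if 'path' exists, is a regular file and is executable. */
static boolean_t
passes_basic_exec_checks(const char *path)
{
	struct stat sbuf;

	/* check the file exists */
	if (stat(path, &sbuf) == -1)
		return (B_FALSE);

	/*
	 * Check if the file is a regular file and has at least one execute
	 * bit set.
	 */
	if (!S_ISREG(sbuf.st_mode) ||
	    (sbuf.st_mode & (S_IXUSR | S_IXGRP | S_IXOTH)) == 0)
		return (B_FALSE);

	return (B_TRUE);
}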
* If wrappers checks fail, pretend the method was exec'd and * Revert the disposition of handled signals and ignored signals to * their defaults, unblocking any blocked ones as a side effect. * Setup exec arguments. Do this before the fd setup below, so our * logging related file fd doesn't get taken over before we call /* Generate audit trail for start operations */ "the %s method of instance %s"),
* The inetd_connect audit record consists of: * Remote address and port * Set method context before the fd setup below so we can output an * error message if it fails. "for the %s method of instance %s");
"control for the %s method of instance %s");
} else if (strcmp(errf, "pool_set_binding") == 0) {
"instance %s to a pool due to a system " "for the %s method of instance %s");
"instance %s to a pool due to invalid " "instance %s to a pool due to invalid " "%s method of instance %s (%s: %s)"),
msg = gettext("Failed to set credentials for the %s "
    "method of instance %s (out of memory)");
msg =
gettext(
"Failed to set credentials for the %s " "method of instance %s (no passwd or shadow " /* let exec() free mthd_ctxt */ /* start up logging again to report the error */ gettext(
"Failed to exec %s method of instance %s: %s"),
* We couldn't exec the start method for a wait type service. * Eat up data from the endpoint, so that hopefully the * service's fd won't wake poll up on the next time round * event_loop(). This behavior is carried over from the old * inetd, and it seems somewhat arbitrary that it isn't * also done in the case of fork failures; but I guess * it assumes an exec failure is less likely to be the result * of a resource shortage, and is thus not worth retrying. /* Carry out process assassination */ "start process (%ld) of instance %s: %s"),
* Runs the specified method of the specified service instance. * If the method was never specified, we handle it the same as if the * method was called and returned success, carrying on any transition the * instance may be in the midst of. * If the method isn't executable in its specified profile or an error occurs * forking a process to run the method in the function returns -1. * If a method binary is successfully executed, the function switches the * instance's cur state to the method's associated 'run' state and the next * state to the methods associated next state. * Returns -1 if there's an error before forking, else 0. * Don't bother updating the instance's state for the start method * as there isn't a separate start method state. * If the absent method is IM_OFFLINE, default action needs * to be taken to avoid lingering processes which can prevent * the upcoming rebinding from happening. "is unspecified. Taking default action: kill."),
/* Handle special method tokens, not allowed on start */ /* :true means nothing should be done */ /* Carry out contract assassination */ /* ENOENT means we didn't find any contracts */ "to contracts of instance %s: %s"),
sig,
* Get the associated method context before the fork so we can * modify the instances state if things go wrong. * Perform some basic checks before we fork to limit the possibility * of exec failures, so we can modify the instance state if necessary. "Unable to fork %s method of instance %s: %s"),
/*
 * Register this method so its termination is noticed and
 * the state transition this method participates in is
 * continued.
 */

/*
 * Since we will never find out about the termination
 * of this method, if it's a non-start method treat
 * it as a failure so we don't block restarter event
 * processing on it whilst it languishes in a method
 * running state.
 */

/* do tcp tracing for those nowait instances that request it */

/*
 * Only place a start method in maintenance if we're sure
 * that the failure was non-transient.
 */

/* treat the failure as if the method ran and failed */

/*
 * Handle an incoming connection request for a nowait service.
 * This involves accepting the incoming connection on a new fd. Connection
 * rate checks are then performed, transitioning the service to the
 * conrate offline state if these fail. Otherwise, the service's start method
 * is run (performing TCP wrappers checks if applicable as we do), and on
 * success concurrent copies checking is done, transitioning the service to
 * the copies offline state if this fails.
 */

/* accept nowait service connections on a new fd */

/*
 * Failed accept. Return and allow the event loop to initiate
 * another attempt later if the request is still present.
 */

/*
 * Limit connection rate of nowait services. If either conn_rate_max
 * or conn_rate_offline are <= 0, no connection rate limit checking
 * is done. If the configured rate is exceeded, the instance is taken
 * to the connrate_offline state and a timer scheduled to try and
 * bring the instance back online after the configured offline time.
 */

/* Generate audit record */
	/* "... rate limit audit event" */
* The inetd_ratelimit audit "Instance %s has exceeded its configured " "connection rate, additional connections " "will not be accepted for %d seconds"),
if (ret == -1)
/* the method wasn't forked */ * Limit concurrent connections of nowait services. /* Generate audit record */ * The inetd_copylimit audit record consists of: "configured copies, no new connections will be accepted"),
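The connection-rate check described above amounts to counting accepts within a window; a sketch under assumed names (and assuming a one-second window), not the structures inetd actually uses:

#include <time.h>
#include <sys/types.h>

typedef struct {
	int64_t	conn_rate_max;		/* max accepts per window; <= 0: off */
	int64_t	conn_rate_offline;	/* seconds to stay offline; <= 0: off */
	int64_t	count;			/* accepts seen in current window */
	time_t	window_start;		/* start of current window */
} conn_rate_sketch_t;

/*
 * Returns B_TRUE if this accept pushes the instance over its configured
 * connection rate, in which case the caller would take it to the
 * connrate_offline state and schedule a timer to bring it back online.
 */
static boolean_t
conn_rate_exceeded(conn_rate_sketch_t *cr)
{
	time_t now = time(NULL);

	if (cr->conn_rate_max <= 0 || cr->conn_rate_offline <= 0)
		return (B_FALSE);	/* rate limiting disabled */

	if (now - cr->window_start >= 1) {	/* new one-second window */
		cr->window_start = now;
		cr->count = 0;
	}

	return (++cr->count > cr->conn_rate_max ? B_TRUE : B_FALSE);
}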
* Handle an incoming request for a wait type service. * Failure rate checking is done first, taking the service to the maintenance * state if the checks fail. Following this, the service's start method is run, * and on success, we stop listening for new requests for this service. * Detect broken servers and transition them to maintenance. If a * wait type service exits without accepting the connection or * consuming (reading) the datagram, that service's descriptor will * select readable again, and inetd will fork another instance of * the server. If either wait_fail_cnt or wait_fail_interval are <= 0, * no failure rate detection is done. /* Generate audit record */ "failure rate audit event"));
/*
 * The inetd_failrate audit record:
 * the last two are expressed as k=v pairs, "limit=%lld,interval=%d".
 * ("Instance %s has exceeded its configured failure rate, transitioning
 * to ...")
 */

/*
 * Stop listening for connections now we've fired off the
 * server for a wait type instance.
 */

/* Process any network requests for each proto for each instance. */

/*
 * Ignore instances in states that definitely don't have any
 * connections to process.
 */

/*
 * inetd's major work loop. This function sits in poll waiting for events
 * to occur, processing them when they do. The possible events are
 * master restarter requests, expired timer queue timers, stop/refresh signal
 * requests, contract events indicating process termination, stop/refresh
 * requests originating from one of the stop/refresh inetd processes and
 * incoming network connections.
 * The loop is exited when a stop request is received and processed, and
 * all the instances have reached a suitable 'stopping' state.
 */

/* Process any stop/refresh requests from the Unix Domain Socket. */

/*
 * Process refresh request. We do this check after the UDS
 * event check above, as it would be wasted processing if we
 * started refreshing inetd based on a SIGHUP, and then were
 * told to shut-down via a UDS event.
 */

/*
 * We were interrupted by a signal. Don't waste any more
 * time processing a potentially inaccurate poll return.
 */

/* Process any instance restarter events. */

/* Process any expired timers (bind retry, con-rate offline, ...). */

/*
 * If inetd is stopping, check whether all our managed
 * instances have been stopped and we can return.
 */

/* if all instances are stopped, return */

/*
 * We don't bother to undo the restarter interface at all.
 * Because of quirks in the interface, there is no way to
 * disconnect from the channel and cause any new events to be
 * queued. However, any events which are received and not
 * acknowledged will be re-sent when inetd restarts as long as inetd
 * uses the same subscriber ID, which it does.
 * By keeping the event pipe open but ignoring it, any events which
 * occur will cause restarter_event_proxy to hang without breaking
 * anything.
 */

/* Close audit session */

/* Setup instance list. */
"Failed to create instance pool"),
gettext(
"Failed to create instance list"),
* Create event pipe to communicate events with the main event * loop and add it to the event loop's fdset. * We only leave the producer end to block on reads/writes as we * can't afford to block in the main thread, yet need to in * the restarter event thread, so it can sit and wait for an * acknowledgement to be written to the pipe. * Register with master restarter for managed service events. This * will fail, amongst other reasons, if inetd is already running. "Failed to register for restarter events: %s"),
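The pipe set-up described above, where only the producer end blocks, can be sketched as follows; PE_CONSUMER/PE_PRODUCER mirror the "ends of a pipe" enumeration mentioned earlier, and the remaining names are assumptions:

#include <unistd.h>
#include <fcntl.h>

enum { PE_CONSUMER = 0, PE_PRODUCER = 1 };	/* ends of the event pipe */

/*
 * Create the event pipe and make only the consumer (main thread) end
 * non-blocking; the producer end used by restarter_event_proxy() is left
 * blocking so that thread can sit and wait for the acknowledgement.
 */
static int
create_event_pipe_sketch(int rst_event_pipe[2])
{
	int flags;

	if (pipe(rst_event_pipe) == -1)
		return (-1);

	flags = fcntl(rst_event_pipe[PE_CONSUMER], F_GETFL, 0);
	if (flags == -1 ||
	    fcntl(rst_event_pipe[PE_CONSUMER], F_SETFL,
	    flags | O_NONBLOCK) == -1)
		return (-1);

	return (0);
}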
/* Initialize auditing session */

/* Create pipe for child to notify parent of initialization success. */

} else if (child > 0) {
/* parent */ /* Wait on child to return success of initialization. */ "Initialization failed, unable to start"));
* Batch all initialization errors as 'other' errors, * resulting in retries being attempted. * Perform initialization and return success code down * Log a message if the configuration file has changed since * When inetd is run from outside the SMF, this message is output to provide * the person invoking inetd with further information that will help them * understand how to start and stop inetd, and to achieve the other * behaviors achievable with the legacy inetd command line interface, if "inetd is now an smf(5) managed service and can no longer be run " "command line. To enable or disable inetd refer to svcadm(1M) on\n" "how to enable \"%s\", the inetd instance.\n" "The traditional inetd command line option mappings are:\n" "\t-d : there is no supported debug output\n" "\t-s : inetd is only runnable from within the SMF\n" "\t-t : See inetadm(1M) on how to enable TCP tracing\n" "\t-r : See inetadm(1M) on how to set a failure rate\n" "To specify an alternative configuration file see svccfg(1M)\n" "for how to modify the \"%s/%s\" string type property of\n" "the inetd instance, and modify it according to the syntax:\n" "\"%s [alt_config_file] %%m\".\n" "For further information on inetd see inetd(1M).\n",
* Usage message printed out for usage errors when running under the SMF. * Returns B_TRUE if we're being run from within the SMF, else B_FALSE. * check if the instance fmri environment variable has been set by /* inetd invocation syntax is inetd [alt_conf_file] method_name */