_utility.c revision 67dbe2be0c0f1e2eb428b89088bb5667e8f0b9f6
* Not found or a forced sync is required:
* check if this is a valid TLI/XTI descriptor.

* Not a stream, or a TLI endpoint with no timod.
* XXX Note: If it is an XTI call, we push "timod" and try to convert it
* into a transport endpoint later. We do not do it for TLI and "retain"
* the old buggy behavior, because ypbind and a lot of other daemons seem
* to use a buggy logic test of the form
* "(t_getstate(0) != -1 || t_errno != TBADF)" to see if they were ever
* invoked with a request on stdin, and drop into untested code. This
* test is in code generated by rpcgen, which is why the test is
* replicated in many daemons too. We will need to fix that test as well,
* with an "IsaTLIendpoint" test, if we ever fix this for TLI.

* "timod" not already on stream, then push it.
* Assumes (correctly) that I_PUSH is atomic w.r.t. signals (EINTR error).

* Try to (re)constitute the info at user level from state in the kernel.
* This could be information that was lost due to an exec, or the endpoint
* being instantiated at a new descriptor due to open(), dup2(), etc.

* _t_create() requires that all signals be blocked.
* Note that sig_mutex_lock() only defers signals, it does not block them,
* so interruptible syscalls could still get EINTR.

* Restore the stream to its state before timod was pushed. It may not
* have been a network transport stream.

* Copy data to the output buffer, making sure the output buffer is
* 32-bit aligned even though the input buffer may not be.

* Aligned copy will overflow buffer.

* Append data and control info to the look buffer (a list in the MT
* case). The only things that can be in the look buffer are a
* T_DISCON_IND, a T_ORDREL_IND or a T_UDERROR_IND.
* It also enforces priority of T_DISCON_INDs over any T_ORDREL_IND
* already in the buffer. It assumes no T_ORDREL_IND is appended when
* there is already something on the looklist (error case), and that a
* T_ORDREL_IND, if present, will always be first on the list.
* This also assumes ti_lock is held via sig_mutex_lock(), so signals
* are deferred here.
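The 32-bit-aligned copy with an overflow check can be sketched as a small helper. This is a minimal illustration, not the library's actual code; the name `copy_aligned` and the buffer layout are assumptions:

```c
#include <stdint.h>
#include <string.h>

/*
 * Hypothetical sketch: copy 'len' bytes into 'out' starting at the next
 * 32-bit-aligned offset at or after 'offset'. Returns the new offset
 * past the copied data, or -1 if the aligned copy would overflow
 * 'outsize' - the "aligned copy will overflow buffer" error case.
 */
static long
copy_aligned(char *out, size_t outsize, size_t offset,
    const void *src, size_t len)
{
	/* round offset up to a 4-byte boundary */
	size_t aligned = (offset + (sizeof (int32_t) - 1)) &
	    ~(sizeof (int32_t) - 1);

	if (aligned > outsize || len > outsize - aligned)
		return (-1);		/* aligned copy would overflow */
	(void) memcpy(out + aligned, src, len);
	return ((long)(aligned + len));
}
```

Note that the input pointer `src` need not be aligned; only the destination offset is rounded up.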
/* can't fit - return error */

* Enforce priority of T_DISCON_IND over T_ORDREL_IND.
* Note: Since there can be at most one T_ORDREL_IND queued (more than
* one is an error case), and we look for it on each append of a
* T_DISCON_IND, it can only be at the head of the list if it is there.

/* LINTED pointer cast */
/* appending discon ind */
/* LINTED pointer cast */

* Allocate and append a new lookbuf to the existing list.
* (Should only happen in the MT case.)
* Signals are deferred, so calls to malloc() are safe.

* Allocate the buffers. The sizes are derived from the sizes of other
* related buffers. See _t_alloc_bufs().

/* giving up - free other memory chunks */
/* giving up - free other memory chunks */
return (0);
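The priority rule above (a queued T_ORDREL_IND can only sit at the head, and an arriving T_DISCON_IND displaces it) can be sketched with an in-memory list. The event codes, node layout, and function name are illustrative stand-ins, not the real lookbuf structures:

```c
#include <stdlib.h>

/* Illustrative event codes and node type; not the library's layout. */
enum look_ev { LOOK_ORDREL = 1, LOOK_DISCON = 2, LOOK_UDERR = 3 };

struct lookbuf {
	enum look_ev	ev;
	struct lookbuf	*next;
};

/*
 * Hypothetical sketch of the append rule: a T_DISCON_IND being queued
 * displaces a T_ORDREL_IND, which can only ever be at the head of the
 * list. Returns the (possibly new) head, or NULL on allocation failure.
 */
static struct lookbuf *
look_append(struct lookbuf *head, enum look_ev ev)
{
	struct lookbuf *node, *tail;

	if (ev == LOOK_DISCON && head != NULL && head->ev == LOOK_ORDREL) {
		head->ev = LOOK_DISCON;	/* discon overrides queued ordrel */
		return (head);
	}
	if ((node = malloc(sizeof (*node))) == NULL)
		return (NULL);
	node->ev = ev;
	node->next = NULL;
	if (head == NULL)
		return (node);
	for (tail = head; tail->next != NULL; tail = tail->next)
		;			/* walk to the end of the list */
	tail->next = node;
	return (head);
}
```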
/* ok return */

* Is there something that needs attention?
* Assumes tiptr->ti_lock is held and this thread's signals are blocked.

* Assumes tiptr->ti_lock is held in the MT case.

* Temporarily convert a non-blocking endpoint to a blocking one and
* restore its status later.

/* did I get the entire message? */

* Is the ctl part large enough to determine the type?

/* LINTED pointer cast */

* If the error is out-of-state and there is something on the read
* queue, then indicate to the user that there is something that
* needs attention.

/* fallthru to err_out: */

* Alloc scratch buffers and look buffers.

/* compensate for XTI level options */

* We compute the largest buffer size needed for this provider by
* adding the components. [An extra sizeof (t_scalar_t) is added for
* each buffer to take care of rounding off for alignment.]
* The goal here is to compute the size of the largest possible buffer
* that might be needed to hold a TPI message for the transport
* provider.
* Note: T_ADDR_ACK contains potentially two address buffers.

/* first addr buffer plus alignment */
/* second addr buffer plus alignment */
/* option buffer plus alignment */

* Note: The head of the lookbuffers list (and associated buffers) is
* allocated here on initialization. More are allocated on demand.

* Note: This routine is designed for "reinitialization".
* The following fields are not modified here and are preserved.
* The above fields have to be separately initialized if this is used
* for a fresh initialization.

* Link manipulation routines.
* NBUCKETS hash buckets are used to give fast access. The number is
* derived from the file descriptor softlimit.

* Allocates a new link and returns a pointer to it.
* Assumes that the caller is holding _ti_userlock via sig_mutex_lock(),
* so signals are deferred here.

* Walk along the bucket looking for a duplicate entry or the end.
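The buffer-size computation described above (fixed header, two address buffers, one option buffer, each padded by sizeof (t_scalar_t) for alignment) can be written out directly. The function name and the parameterization are illustrative; the real sizes come from the T_info_ack fields:

```c
#include <stddef.h>
#include <stdint.h>

typedef int32_t t_scalar_t;	/* stand-in for the <sys/tihdr.h> type */

/*
 * Hypothetical sketch: the largest TPI control message is bounded by a
 * fixed header (T_ADDR_ACK is the worst case, with two address buffers)
 * plus the provider's address and option limits, each padded by
 * sizeof (t_scalar_t) to absorb alignment round-off.
 */
static size_t
max_ctlbuf_size(size_t hdrsize, size_t addrsize, size_t optsize)
{
	size_t csize = hdrsize;

	csize += addrsize + sizeof (t_scalar_t); /* first addr + alignment */
	csize += addrsize + sizeof (t_scalar_t); /* second addr + alignment */
	csize += optsize + sizeof (t_scalar_t);  /* options + alignment */
	return (csize);
}
```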
* This can happen when the user has close(2)'ed a descriptor and then
* been allocated it again. We will re-use the existing _ti_user struct
* in this case rather than the one we allocated above. If there are
* buffers associated with the existing _ti_user struct, they may not
* be the correct size, so we cannot use them. We free them here and
* re-allocate new ones.

* Allocate and link in a new one.
* First initialize fields common with reinitialization.

* Find a link by descriptor.
* Assumes that the caller is holding _ti_userlock.
* Walk along the bucket looking for the descriptor.

* Assumes that the caller is holding _ti_userlock.
* Also assumes that all signals are blocked.
* Walk along the bucket looking for the descriptor.
* Free resources associated with curptr.

* Allocate a TLI state structure and sync it with the kernel.
* Assumes that the caller is holding _ti_userlock and has blocked
* signals.
* This function may fail the first time it is called with a given
* transport if it doesn't support the T_CAPABILITY_REQ TPI message.

* Aligned data buffer for ioctl.
/* preferred location first local variable */

* Note: We use "ioctlbuf" allocated on the stack above, with room to
* grow, since (struct ti_sync_ack) can grow in size on future kernels.
* (We do not use the malloc'd "ti_ctlbuf" as that is part of the
* instance structure, which may not exist yet.)
* Its preferred declaration location is the first local variable in
* this procedure, so bugs causing overruns will be detectable on
* platforms where procedure calling conventions place the return
* address on the stack (such as x86), instead of causing silent
* corruption.

* Use the ioctls required for sync'ing state with the kernel.
* We use two ioctls: TI_CAPABILITY is used to get TPI information, and
* TI_SYNC is used to synchronize state with timod. Statically linked
* TLI applications will no longer work on older releases where there
* is no TI_SYNC or TI_CAPABILITY.

* Request info about the transport.
* Assumes that TC1_INFO should always be implemented.
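The descriptor-to-instance lookup described above (NBUCKETS hash buckets keyed by file descriptor, each bucket a linked list that is walked for a match) can be sketched as follows. The structure layout, bucket count, and head-insertion are simplifications; the real code sizes NBUCKETS from the descriptor softlimit and checks for duplicates before linking:

```c
#include <stdlib.h>

#define	NBUCKETS	64	/* illustrative; really derived from the
				 * file descriptor softlimit */

/* Minimal stand-in for the per-endpoint instance structure. */
struct ti_user {
	int		ti_fd;
	struct ti_user	*ti_next;
};

static struct ti_user *hash_bucket[NBUCKETS];

/* Walk the bucket for 'fd'; mirrors the "find link by descriptor" walk. */
static struct ti_user *
find_tilink(int fd)
{
	struct ti_user *curptr;

	for (curptr = hash_bucket[fd % NBUCKETS]; curptr != NULL;
	    curptr = curptr->ti_next)
		if (curptr->ti_fd == fd)
			return (curptr);
	return (NULL);
}

/* Allocate and link a new entry at the head of its bucket. */
static struct ti_user *
add_tilink(int fd)
{
	struct ti_user *newptr = malloc(sizeof (*newptr));

	if (newptr == NULL)
		return (NULL);
	newptr->ti_fd = fd;
	newptr->ti_next = hash_bucket[fd % NBUCKETS];
	hash_bucket[fd % NBUCKETS] = newptr;
	return (newptr);
}
```

Collisions (two descriptors hashing to the same bucket) are resolved by the list walk, so lookup stays correct regardless of the bucket count.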
* For TI_CAPABILITY, the size argument to ioctl specifies the maximum
* buffer size.
* TI_CAPABILITY may fail when the transport provider doesn't support
* the T_CAPABILITY_REQ message type. In this case the file descriptor
* may be unusable (when the transport provider sent an M_ERROR in
* response to T_CAPABILITY_REQ). This should only happen once during
* system lifetime for a given transport provider, since timod will
* emulate TI_CAPABILITY from then on.

* XTI ONLY - TLI "struct t_info" does not have a flags field.
* Some day there MAY be a NEW bit in the T_info_ack PROVIDER_flag
* namespace exposed by the TPI header <sys/tihdr.h> which will
* functionally correspond to the role played by T_ORDRELDATA in the
* info->flags namespace. When that bit exists, we can add a test to
* see if it is set, and set T_ORDRELDATA accordingly.
* Note: Currently only the mOSI ("minimal OSI") provider is specified
* to use T_ORDRELDATA, so the probability of needing this is low.

* If this is the first time, or there is no instance (after fork/exec,
* dup, etc.), then create and initialize the data structure.

* Allocate buffers for the new descriptor.

/* Fill instance structure */

* Restore state from the kernel (caveat: some heuristics).

* Sync information with timod.

* This is a "less than" check, as "struct ti_sync_ack" returned by
* TI_SYNC can grow in size in future kernels. If/when a statically
* linked application is run on a future kernel, it should not fail.

char databuf[sizeof (int)];
/* size unimportant - anything > 0 */

* Peek at the message on the stream head (if any).
* If the peek shows something at the stream head, then adjust
* "outstate" based on some heuristics.

* The following heuristic handles data ahead of T_DISCON_IND
* indications that might be at the stream head waiting to be read
* (T_DATA_IND or M_DATA).
/* LINTED pointer cast */

* The following heuristic handles the case where the connection is
* established and in data transfer state at the provider, but the
* T_CONN_CON has not yet been read.
/* LINTED pointer cast */

* The following heuristic handles data ahead of T_ORDREL_IND
* indications that might be at the stream head waiting to be read
* (T_DATA_IND or M_DATA).
/* LINTED pointer cast */

* Assumes the caller has blocked signals, at least in this thread (for
* safe malloc/free operations).

* Assumes the caller has blocked signals, at least in this thread (for
* safe malloc/free operations).

* Free lookbuffer structures and associated resources.
* Assumes ti_lock is held in the MT case.

* The structure lock or the global list manipulation lock should be
* held. The assumption is that nothing else can access the descriptor,
* since the global list manipulation lock is held, so it is OK to
* manipulate fields without the structure lock.

* Free only the buffers in the first lookbuf.
* Free the node and the buffers in the rest of the list.

* Free the lookbuffer event list head.

* Consume the current lookbuffer event.
* Assumes ti_lock is held in the MT case.
* Note: The head of this list is part of the instance structure, so
* the code is a little unorthodox.

* Free the control and data buffers.
* Replace with the next lookbuf event's contents.
* Decrement the flag - it should never get to zero here.
* No more look buffer events - just clear the flag and leave the
* buffers alone.

* Discard lookbuffer events.
* Assumes ti_lock is held in the MT case.
* Leave the first node's buffers alone (i.e. allocated).
* Blow away the rest of the list.

* This routine checks if the control buffer in the instance structure
* is available (non-null).
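One plausible reading of the three "outstate" heuristics above can be expressed as a pure state-adjustment function. The state and message constants below are illustrative stand-ins (not the real TPI values from <sys/tihdr.h>), and the exact mapping is an assumption about the heuristics, not the library's verified behavior:

```c
/* Illustrative stand-ins for TPI states and peeked message types. */
enum tpi_state { S_IDLE, S_OUTCON, S_DATAXFER, S_OUTREL };
enum peeked_msg { M_NONE, M_DATA_AT_HEAD, M_CONN_CON_AT_HEAD };

/*
 * Hypothetical sketch of the heuristics: data queued at the stream
 * head means the endpoint must still have been in data transfer when
 * a T_DISCON_IND or T_ORDREL_IND arrived behind it; an unread
 * T_CONN_CON means user-level state should still be "outgoing connect
 * pending" even though the provider has reached data transfer.
 */
static enum tpi_state
adjust_outstate(enum tpi_state outstate, enum peeked_msg head)
{
	switch (head) {
	case M_DATA_AT_HEAD:
		if (outstate == S_IDLE || outstate == S_OUTREL)
			return (S_DATAXFER);
		break;
	case M_CONN_CON_AT_HEAD:
		if (outstate == S_DATAXFER)
			return (S_OUTCON);
		break;
	default:
		break;
	}
	return (outstate);
}
```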
* If it is, the buffer is acquired and marked busy (null). If it is
* busy (possible in MT programs), it allocates a new buffer and sets a
* flag indicating new memory was allocated and the caller has to free
* it.

* tiptr->ti_ctlbuf is in use:
* allocate new buffer and free after use.

* This routine checks if the receive buffer in the instance structure
* is available (non-null). If it is, the buffer is acquired and marked
* busy (null). If it is busy (possible in MT programs), it allocates a
* new buffer and sets a flag indicating new memory was allocated and
* the caller has to free it.
* Note: The receive buffer pointer can also be null, not just when it
* is "busy". In that case, ti_rcvsize will be 0, and that is used to
* instantiate the databuf, which points to a null buffer of length 0,
* which is the right thing to do for that case.

* tiptr->ti_rcvbuf is in use:
* allocate new buffer and free after use.

* This routine requests timod to look for any expedited data queued in
* the "receive buffers" in the kernel. Used for XTI t_look() semantics
* for transports that send expedited data.
* On a successful return, the location pointed to by
* "expedited_queuedp" contains:
*   0 if no expedited data is found queued in "receive buffers"
*   1 if expedited data is found queued in "receive buffers"

/* preferred location first local variable */
/* see note in _t_create above */
/* request info on rq expinds */

* This is a "less than" check, as "struct ti_sync_ack" returned by
* TI_SYNC can grow in size in future kernels. If/when a statically
* linked application is run on a future kernel, it should not fail.

* Routines like t_sndv(), t_rcvv(), etc. follow below.

* _t_bytecount_upto_intmax():
* Sum of the lengths of the individual buffers in the t_iovec array.
* If the sum exceeds INT_MAX, it is truncated to INT_MAX.

return ((unsigned int)nbytes);
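The _t_bytecount_upto_intmax() summation can be sketched directly from its description: sum the t_iovec lengths and truncate to INT_MAX. The struct definition is a stand-in for the XTI type; the real function lives in the library:

```c
#include <limits.h>
#include <stddef.h>

/* Stand-in for the XTI t_iovec element. */
struct t_iovec {
	void	*iov_base;
	size_t	iov_len;
};

/*
 * Sketch of _t_bytecount_upto_intmax(): sum of the lengths of the
 * individual buffers in the t_iovec array, truncated to INT_MAX if
 * the sum would exceed it.
 */
static unsigned int
t_bytecount_upto_intmax(const struct t_iovec *tiov, unsigned int tiovcount)
{
	size_t nbytes = 0;
	unsigned int i;

	for (i = 0; i < tiovcount; i++) {
		if (tiov[i].iov_len > INT_MAX ||
		    nbytes + tiov[i].iov_len > INT_MAX) {
			nbytes = INT_MAX;	/* truncate on overflow */
			break;
		}
		nbytes += tiov[i].iov_len;
	}
	return ((unsigned int)nbytes);
}
```

Checking `iov_len > INT_MAX` before the addition keeps the running sum bounded, so the `nbytes + iov_len` test cannot itself overflow.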
* Gather the data in the t_iovec buffers into a single linear buffer
* starting at dataptr. The caller must have allocated sufficient space
* starting at dataptr. The total amount of data that is gathered is
* limited to INT_MAX. Any remaining data in the t_iovec buffers is not
* copied.

* Scatter the data from the single linear buffer at pdatabuf->buf into
* the t_iovec buffers. There cannot be any uncopied data left over in
* pdatabuf at the conclusion of this function (asserted below).

* Adjust the iovec array for subsequent use. Examine each element in
* the iovec array and zero out the iov_len if the buffer was sent
* fully; otherwise the buffer was only partially sent, so adjust both
* iov_len and iov_base.

* Copy the t_iovec array to the iovec array while taking care to see
* that the sum of the buffer lengths in the result is not more than
* INT_MAX. This function requires that T_IOV_MAX is no larger than
* IOV_MAX; otherwise the resulting array is not a suitable input to
* writev(). If the sum of the lengths in t_iovec is zero, so is the
* sum of the lengths in the resulting iovec.

* Routine called after connection establishment on transports where
* connection establishment changes certain transport attributes.

* This T_CAPABILITY_REQ should not fail, even if it is unsupported by
* the transport provider; timod will emulate it in that case.

* T_capability TPI messages are extensible and can grow in the future.
* However, timod will take care of returning no more information than
* what was requested, truncating the "extended" information towards
* the end of the T_capability_ack if necessary.

* The T_info_ack part of the T_capability_ack is guaranteed to be
* present only if the corresponding TC1_INFO bit is set.

* Note: Sync with the latest information returned in "struct
* T_info_ack", but we deliberately do not sync the state here, as
* user-level state construction is not required, only an update of
* attributes which may have changed because of negotiations during
* connection establishment.
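The gather step described above can be sketched as follows. The struct is again a stand-in for the XTI t_iovec type and the function name is illustrative; the point is the INT_MAX clamp, with any remaining data simply left uncopied:

```c
#include <limits.h>
#include <stddef.h>
#include <string.h>

/* Stand-in for the XTI t_iovec element. */
struct t_iovec {
	void	*iov_base;
	size_t	iov_len;
};

/*
 * Sketch of the gather step: copy the t_iovec buffers into the single
 * linear buffer at dataptr, stopping once INT_MAX bytes have been
 * gathered; any remaining data in the t_iovec buffers is not copied.
 * Returns the number of bytes gathered.
 */
static unsigned int
t_gather(char *dataptr, const struct t_iovec *tiov, unsigned int tiovcount)
{
	size_t nbytes = 0;
	unsigned int i;

	for (i = 0; i < tiovcount && nbytes < INT_MAX; i++) {
		size_t take = tiov[i].iov_len;

		if (take > (size_t)INT_MAX - nbytes)
			take = (size_t)INT_MAX - nbytes;  /* clamp */
		(void) memcpy(dataptr + nbytes, tiov[i].iov_base, take);
		nbytes += take;
	}
	return ((unsigned int)nbytes);
}
```

The scatter direction is the mirror image: copy from the linear buffer back into the t_iovec buffers until the linear buffer is exhausted.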