kcf_sched.c revision 95014fbbfdc010ab9f3ed20db2154dc3322e9270
/* Initialize the context for the consumer. */

/*
 * Allocate a new async request node.
 *
 * ictx - Framework private context pointer
 * crq - Has callback function and argument. Should be non-NULL.
 * req - The parameters to pass to the SPI
 */

/*
 * Requests for context-less operations do not use the
 * fields an_is_my_turn and an_ctxchain_next.
 */

/* Chain this request to the context. */

/* Insert the new request at the end of the chain. */

/*
 * Queue the request node and do one of the following:
 *	- If there is an idle thread, signal it to run.
 *	- If there is no idle thread and the maximum number of running
 *	  threads is not reached, signal the creator thread for more threads.
 *
 * If neither condition is met, we don't need to do anything. The
 * request will be picked up by one of the worker threads when it
 * becomes available.
 */

/* Signal an idle thread to run. */

/*
 * We keep the number of running threads at kcf_minthreads
 * to reduce gs_lock contention.
 */

/*
 * The following ensures the number of threads in the pool
 * does not exceed kcf_maxthreads.
 */

/* Signal the creator thread for more threads. */

/*
 * This routine is called by the taskq associated with each hardware
 * provider. We notify the kernel consumer via the callback routine in
 * case of CRYPTO_SUCCESS or a failure.
 *
 * A request can be of type kcf_areq_node_t or of type kcf_sreq_node_t.
 */

/*
 * Wait if flow control is in effect for the provider. A
 * CRYPTO_PROVIDER_READY or CRYPTO_PROVIDER_FAILED notification will
 * signal us. We also get signaled if the provider is unregistering.
 */

/*
 * Bump the internal reference count while the request is being
 * processed. This is how we know when it's safe to unregister a
 * provider. This step must precede the pd_state check below.
 */

/*
 * Fail the request if the provider has failed. We return a recoverable
 * error and the notified clients attempt any recovery. For async
 * clients this is done in kcf_aop_done() and for sync clients it is
 * done in the k-api routines.
 */

/*
 * We are in the per-hardware provider thread context and hence can
 * sleep.
 */
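The three-way queueing policy described above (wake an idle worker, else ask the creator thread for one more worker while under the thread cap, else do nothing) can be sketched as a small userland decision function. This is illustrative only; `pool_state_t`, the field names, and the `enqueue_action_t` values are assumptions, not the framework's real types.

```c
#include <stddef.h>

/* Hypothetical pool snapshot; field names are illustrative only. */
typedef struct {
	size_t idle_threads;	/* workers sleeping on the global queue */
	size_t total_threads;	/* threads currently in the pool */
	size_t max_threads;	/* analogous to kcf_maxthreads */
} pool_state_t;

typedef enum {
	ACTION_SIGNAL_IDLE,	/* wake a sleeping worker */
	ACTION_SIGNAL_CREATOR,	/* ask the creator thread for a new worker */
	ACTION_NONE		/* a busy worker will pick it up later */
} enqueue_action_t;

/*
 * Decide what to do after appending a request to the global queue,
 * following the policy in the comment above.
 */
enqueue_action_t
enqueue_action(const pool_state_t *ps)
{
	if (ps->idle_threads > 0)
		return (ACTION_SIGNAL_IDLE);
	if (ps->total_threads < ps->max_threads)
		return (ACTION_SIGNAL_CREATOR);
	return (ACTION_NONE);
}
```

The "do nothing" branch is safe because a worker that finishes its current request re-checks the queue before sleeping.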
/*
 * Note that the caller would have done a
 * taskq_dispatch(..., TQ_NOSLEEP) and would have returned.
 */

/*
 * We need to maintain ordering for multi-part requests. an_is_my_turn
 * is set to B_TRUE initially for a request when it is enqueued and
 * there are no other requests for that context. It is set later from
 * kcf_aop_done() when the request before us in the chain of requests
 * for the context completes. We get signaled at that point.
 */

/*
 * The request is queued by the provider and we should get a
 * crypto_op_notification() from the provider later. We notify the
 * consumer at that time.
 */

/* CRYPTO_SUCCESS or other failure */

/*
 * This routine checks if a request can be retried on another provider.
 * If true, mech1 is initialized to point to the mechanism structure.
 * mech2 is also initialized in case of a dual operation. fg is
 * initialized to the correct crypto_func_group_t bit flag. They are
 * initialized by this routine, so that the caller can pass them to
 * kcf_get_mech_provider() or kcf_get_dual_provider() with no further
 * change.
 *
 * We check that the request is for an init or atomic routine and that
 * it is for one of the operation groups used from the k-api.
 */

/*
 * This routine is called when a request to a provider has failed with
 * a recoverable error. It tries to find another provider and
 * dispatches the request to the new provider, if one is available.
 * We reuse the request structure.
 */

/*
 * A return value of NULL from kcf_get_mech_provider() indicates
 * we have tried the last provider.
 */

/*
 * Add old_pd to the list of providers already tried.
 * We release the new hold on old_pd in kcf_free_triedlist().
 */

/*
 * We reuse the old context by resetting the provider specific fields
 * in it.
 */

/* We reuse areq by resetting the provider and context fields. */

/*
 * Routine called by both ioctl and k-api. The consumer should bundle
 * the parameters into a kcf_req_params_t structure. A bunch of macros
 * are available in ops_impl.h for this bundling. They are:
 *
 *	KCF_WRAP_DIGEST_OPS_PARAMS()
 *	KCF_WRAP_MAC_OPS_PARAMS()
 *	KCF_WRAP_ENCRYPT_OPS_PARAMS()
 *	KCF_WRAP_DECRYPT_OPS_PARAMS() ... etc.
 *
 * It is the caller's responsibility to free the ctx argument when
 * appropriate. See the KCF_CONTEXT_COND_RELEASE macro for details.
 */

/*
 * Special case for CRYPTO_SYNCHRONOUS providers that never return a
 * CRYPTO_QUEUED error. We skip any request allocation and call the SPI
 * directly.
 *
 * Note that we do not need to hold the context for the synchronous
 * case as the context will never become invalid underneath us. We do
 * not need to hold the provider here either as the caller has a hold.
 */
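The multi-part ordering scheme above (an `is_my_turn` flag plus a per-context chain, with the turn handed to the next request on completion) can be sketched as a plain linked list. The types and names here are modeled on the description, not on the framework's real `kcf_areq_node_t`, and completion is assumed to happen in chain order.

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical per-context request chain; illustrative only. */
typedef struct req {
	bool		is_my_turn;
	struct req	*ctxchain_next;
} req_t;

typedef struct {
	req_t	*chain_first;
	req_t	*chain_last;
} ctx_t;

/* Enqueue: the request may run immediately only if the chain is empty. */
void
ctx_enqueue(ctx_t *ctx, req_t *req)
{
	req->ctxchain_next = NULL;
	req->is_my_turn = (ctx->chain_first == NULL);
	if (ctx->chain_first == NULL)
		ctx->chain_first = req;
	else
		ctx->chain_last->ctxchain_next = req;
	ctx->chain_last = req;
}

/* Completion of the head request: hand the turn to the next in line. */
void
ctx_done(ctx_t *ctx, req_t *req)
{
	ctx->chain_first = req->ctxchain_next;
	if (ctx->chain_first == NULL)
		ctx->chain_last = NULL;
	else
		ctx->chain_first->is_my_turn = true; /* cv_signal() here */
}
```

In the real scheduler the hand-off in `ctx_done()` is where the waiter sleeping on the turn condition variable gets signaled.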
/*
 * Call the SPI directly if the taskq is empty and the provider is not
 * busy, else dispatch to the taskq. Calling directly is fine as this
 * is the synchronous case. This is unlike the asynchronous case where
 * we must always dispatch to the taskq.
 */

/*
 * We can not tell from the taskq_dispatch() return value whether we
 * exceeded maxalloc. Hence the check here. Since we are allowed to
 * wait in the synchronous case, we wait for the taskq to become empty.
 */

/*
 * Wait for the notification to arrive, if the operation is not done
 * yet. Bug# 4722589 will make the wait a cv_wait_sig().
 */

/* Asynchronous cases */

/*
 * This case has less overhead since there is no switching of context.
 */

/*
 * CRYPTO_ALWAYS_QUEUE is set. We need to queue the request and return.
 */

/*
 * Set the request handle. This handle is used for any
 * crypto_cancel_req(9f) calls from the consumer. We have to do this
 * before dispatching the request.
 */

/*
 * There is an error processing this request. Remove the handle and
 * release the request structure.
 */

/* We need to queue the request and return. */

/*
 * We can not tell from the taskq_dispatch() return value whether we
 * exceeded maxalloc. Hence the check here.
 */

/*
 * Set the request handle. This handle is used for any
 * crypto_cancel_req(9f) calls from the consumer. We have to do this
 * before dispatching the request.
 */

/*
 * We're done with this framework context, so free it. Note that
 * freeing the framework context (kcf_context) frees the global context
 * (crypto_ctx).
 *
 * The provider is responsible for freeing the provider private context
 * after a final or single operation and resetting the
 * cc_provider_private field to NULL. It should do this before it
 * notifies the framework of the completion. We still need to call
 * KCF_PROV_FREE_CONTEXT to handle cases like crypto_cancel_ctx(9f).
 */

/* Release the second context, if any. */

/*
 * Increment the provider's internal refcnt so it doesn't unregister
 * from the framework while we're calling the entry point.
 */

/* kcf_ctx->kc_prov_desc has a hold on pd. */

/* Check if this context is shared with a software provider. */

/* Free the request after releasing all the holds. */

/*
 * Utility routine to remove a request from the chain of requests
 * hanging off a context. Get the context lock, search for areq in the
 * chain and remove it.
 */

/*
 * Remove the specified node from the global software queue. The caller
 * must hold the queue lock and the request lock (an_lock).
 */

/*
 * Remove and return the first node in the global software queue. The
 * caller must hold the queue lock.
 */

/*
 * Add the request node to the end of the global software queue. The
 * caller should not hold the queue lock.
 */
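The direct-call-versus-dispatch choice for synchronous requests described above reduces to a small predicate. This is a sketch under stated assumptions; the flag names are invented for illustration and the real code consults the provider descriptor and its taskq directly.

```c
#include <stdbool.h>

typedef enum { CALL_DIRECT, DISPATCH_TASKQ } submit_path_t;

/*
 * Illustrative only: a synchronous request may bypass the taskq when
 * nothing is queued ahead of it and the provider is idle; an
 * asynchronous request must always be dispatched to preserve ordering.
 */
submit_path_t
choose_submit_path(bool is_sync, bool taskq_empty, bool provider_busy)
{
	if (is_sync && taskq_empty && !provider_busy)
		return (CALL_DIRECT);
	return (DISPATCH_TASKQ);
}
```

Bypassing the taskq in the synchronous case saves a context switch without risking reordering, since the submitting thread blocks until the operation completes anyway.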
/*
 * Returns 0 if the request is successfully queued. Returns CRYPTO_BUSY
 * if the limit on the number of jobs is exceeded.
 */

/* an_lock is not needed here as we hold gs_lock. */

/*
 * Decrement the thread pool count and signal the failover thread if we
 * are the last one out.
 */

/*
 * Function run by a thread from kcfpool to work on the global software
 * queue. It is called from ioctl(CRYPTO_POOL_RUN, ...).
 */

/*
 * A signal (as in kill(2)) is pending. We did not get any cv_signal().
 */

/*
 * Timed out and we are not signaled. Let us see if this thread should
 * exit. We should keep at least kcf_minthreads.
 */

/* Resume the wait for work. */

/* We are signaled to work on the queue. */

/* Context-less operation (ictx == NULL). */

/*
 * We check if we can work on the request now. Solaris does not
 * guarantee any order on how the threads are scheduled or how the
 * waiters on a mutex are chosen. So, we need to maintain our own
 * order.
 *
 * is_my_turn is set to B_TRUE initially for a request when it is
 * enqueued and there are no other requests for that context. Note that
 * a thread sleeping on an_turn_cv is not counted as an idle thread.
 * This is because we define an idle thread as one that sleeps on the
 * global queue waiting for new requests.
 */

/* kmem_cache_alloc constructor for the sync request structure. */

/* kmem_cache_alloc constructor for the async request structure. */

/* kmem_cache_alloc constructor for the kcf_context structure. */

/*
 * Creates and initializes all the structures needed by the framework.
 */

/*
 * Create all the kmem caches needed by the framework. We set the align
 * argument to 64, to get a slab aligned to 64 bytes as well as have
 * the objects (cache_chunksize) be a 64-byte multiple. This helps to
 * avoid false sharing as this is the size of the CPU cache line.
 */

/* Initialize the global reqid table. */

/* Allocate and initialize the thread pool. */

/* Initialize the event notification list variables. */

/* Initialize the crypto_bufcall list variables. */

/* Create the kcf kstat. */

/*
 * The kcf_sched_running flag isn't protected by a lock. But we are
 * safe because the first thread ("cryptoadm refresh") calling this
 * routine during boot time completes before any other thread that can
 * call this routine.
 */

/* Start the failover kernel thread for now. */

/* Start the background processing thread. */

/* Signal the waiting sync client. */

/*
 * Callback the async client with the operation status. We free the
 * async request node and possibly the context. We also handle any
 * chain of requests hanging off of the context.
 */

/*
 * Handle recoverable errors. This has to be done first before doing
 * anything else in this routine so that we do not change the state of
 * the request. We try another provider, if one is available.
 */
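The 64-byte cache alignment rationale above (align the slab and size each object to a multiple of the cache line so two objects never share a line) has a direct C11 analogue using `alignas`. The type below is a toy stand-in, not one of the framework's cached structures.

```c
#include <stdalign.h>
#include <stddef.h>

/*
 * Illustrative C11 analogue of the kmem cache alignment described
 * above: aligning a member to the 64-byte cache line forces the whole
 * struct's size up to a 64-byte multiple, so adjacent objects in an
 * array (or slab) never share a cache line.
 */
#define	CACHE_LINE	64

typedef struct {
	alignas(CACHE_LINE) long counter;	/* hot, per-thread field */
} padded_counter_t;
```

Without the alignment, two counters updated by different CPUs could land in one cache line and ping-pong it between cores, which is exactly the false sharing the comment is avoiding.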
/*
 * Else we continue with the failure notification to the client.
 */

/*
 * A request, after it is removed from the request queue, still stays
 * on a chain of requests hanging off of its context structure. It
 * needs to be removed from this chain at this point.
 */

/*
 * NOTE - We do not release the context in case of update operations.
 * We require the consumer to free it explicitly, in case it wants to
 * abandon an update operation. This is done as there may be mechanisms
 * in ECB mode that can continue even if an operation on a block fails.
 */

/* Deal with the internal continuation to this request first. */

/*
 * If the CRYPTO_NOTIFY_OPDONE flag is set, we should always notify. If
 * this flag is clear, we skip the notification provided there are no
 * errors. We check this flag only for init or update operations. It is
 * ignored for single, final or atomic operations.
 */

/* Allocate the thread pool and initialize all the fields. */

/*
 * This function is run by the 'creator' thread in the pool. It is
 * called from ioctl(CRYPTO_POOL_WAIT, ...).
 */

/* Check if there's already a user thread waiting on this kcfpool. */

/* Go to sleep, waiting for the signaled flag. */

/* Interrupted; return to handle exit or signal. */

/* kcfd is exiting. Release the door and return. */

/* Timed out. Recalculate the min/max threads. */

/* A worker thread did a cv_signal(). */

/* Return to userland for possible thread creation. */

/*
 * This routine introduces a locking order for gswq->gs_lock followed
 * by cpu_lock. This means that no consumer of the k-api should hold
 * cpu_lock when calling into the framework.
 */

/*
 * This is the main routine of the failover kernel thread. If there are
 * any threads in the pool we sleep. The last thread in the pool to
 * exit will signal us to get to work. We get back to sleep once we
 * detect that the pool has threads.
 *
 * Note that in the hand-off from us to a pool thread we get to run
 * once. Since this hand-off is a rare event this should be fine.
 */

/* Wait if there are any threads in the pool. */

/* Get the requests from the queue and wait if needed. */

/*
 * We check kp_threads since kcfd could have started while we are
 * waiting on the global software queue.
 */

/* "... and restart kcfd. Using the failover kernel ..." */

/* Get to work on the request. */

/*
 * Insert the async request in the hash table after assigning it an ID.
 * The ID is used by the caller to pass as an argument to a
 * cancel_req() routine later.
 */

/* Delete the async request from the hash table. */

/*
 * Cancel a single asynchronous request.
 *
 * We guarantee that no problems will result from calling
 * crypto_cancel_req() for a request which is either running, or has
 * already completed. We remove the request from any queues if it is
 * possible. We wait for request completion if the request is
 * dispatched to a provider.
 *
 * Can be called from user context only.
 *
 * NOTE: We acquire the following locks in this routine (in order):
 *	- rt_lock (kcf_reqid_table_t)
 *	- ictx->kc_in_use_lock (from kcf_removereq_in_ctxchain())
 *
 * This locking order MUST be maintained in code everywhere else.
 */

/*
 * We found the request. It is either still waiting in the framework
 * queues or running at the provider.
 */

/* This request can be safely canceled. */

/* Remove from gswq, the global software queue. */

/* Remove areq from the hash table and free it. */

/*
 * There is no interface to remove an entry once it is on the taskq.
 * So, we do not do anything for a hardware provider.
 */

/*
 * The request is running. Wait for the request completion to notify
 * us.
 */

/*
 * Cancel all asynchronous requests associated with the passed in
 * crypto context and free it.
 *
 * A client SHOULD NOT call this routine after calling a crypto_*_final
 * routine. This routine is called only during intermediate operations.
 * The client should not use the crypto context after this function
 * returns, since it is freed.
 *
 * Can be called from user context only.
 */

/* Walk the chain and cancel each request. */

/*
 * We have to drop the lock here as we may have to wait for request
 * completion. We hold the request before dropping the lock though, so
 * that it won't be freed underneath us.
 */
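The insert/lookup/delete cycle on the request-ID table described above can be sketched with a toy fixed-size table. The real table is hashed, locked by `rt_lock`, and encodes more than a slot index in the ID; everything below is an illustrative assumption.

```c
#include <stddef.h>

/* Toy request-ID table; illustrative only. */
#define	TABLE_SZ	16

typedef struct {
	void	*slots[TABLE_SZ];
	int	next_id;	/* rotates so IDs are not reused at once */
} reqid_table_t;

/* Assign an ID and remember the request; returns -1 when full. */
int
reqid_insert(reqid_table_t *t, void *req)
{
	for (int i = 0; i < TABLE_SZ; i++) {
		int id = (t->next_id + i) % TABLE_SZ;
		if (t->slots[id] == NULL) {
			t->slots[id] = req;
			t->next_id = (id + 1) % TABLE_SZ;
			return (id);
		}
	}
	return (-1);
}

/* Look a request up by the ID handed back to the consumer. */
void *
reqid_find(const reqid_table_t *t, int id)
{
	return (id >= 0 && id < TABLE_SZ ? t->slots[id] : NULL);
}

/* Forget a completed or canceled request. */
void
reqid_delete(reqid_table_t *t, int id)
{
	if (id >= 0 && id < TABLE_SZ)
		t->slots[id] = NULL;
}
```

The consumer only ever sees the integer ID, which is why `crypto_cancel_req(9f)` can be called safely after completion: a stale ID simply fails the lookup.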
/*
 * The failover thread is counted in kp_idlethreads in some corner
 * cases. This is done to avoid doing more checks when submitting a
 * request. We account for those cases below.
 */

/*
 * Allocate and initialize a kcf_dual_req, used for saving the
 * arguments of a dual operation or an atomic operation that has to be
 * internally simulated with multiple single steps. crq determines the
 * memory allocation flags.
 */

/*
 * Copy the whole crypto_call_req struct, as it isn't persistent.
 */

/*
 * Callback routine for the next part of a simulated dual part.
 * Schedules the next step.
 *
 * This routine can be called from interrupt context.
 */

/* Stop the processing if an error occurred at this step. */

/*
 * The next req is submitted with the same reqid as the first part. The
 * consumer only got back that reqid, and should still be able to
 * cancel the operation during its course.
 */

/* No expected recoverable failures, so no retry list. */

/* Validate the MAC context template here. */

/* No expected recoverable failures, so no retry list. */

/* The second step uses len2 and offset2 of the dual_data. */

/* Preserve if the caller is restricted. */

/*
 * We would like to call kcf_submit_request() here. But that is not
 * possible as that routine allocates a new kcf_areq_node_t request
 * structure, while we need to reuse the existing request structure.
 */

/* Set the params for the second step in the request. */

/*
 * Note that we have to do a taskq_dispatch() here as we may be in
 * interrupt context.
 */

/* We have to release the holds on the request and the provider. */

/* Restore, clean up, and invoke the client's callback. */

/*
 * Last part of an emulated dual operation. Clean up and restore ...
 */

/* The submitter used kcf_last_req as its callback. */
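The "copy the whole crypto_call_req struct, as it isn't persistent" step above is a plain by-value struct copy: the caller's request lives on its stack, so the saved dual-operation state must own its own copy. The types below are hypothetical stand-ins for the real `crypto_call_req` and `kcf_dual_req_t`.

```c
#include <stddef.h>

/* Hypothetical stand-ins; the real structures hold more state. */
typedef void (*callback_fn_t)(void *, int);

typedef struct {
	callback_fn_t	cr_callback_func;
	void		*cr_callback_arg;
	int		cr_flag;
} call_req_t;

typedef struct {
	call_req_t	kr_callreq;	/* owned copy of the caller's req */
} dual_req_t;

/*
 * Save the caller's request by value, so it stays valid after the
 * caller returns and its stack copy disappears.
 */
void
dual_req_init(dual_req_t *dr, const call_req_t *crq)
{
	dr->kr_callreq = *crq;	/* whole-struct copy */
}
```

A pointer to `crq` would dangle as soon as the submitting function returned; the copy is what lets the simulated second step run later, possibly from a taskq thread.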