mod_cache.c revision 5b6a4b0e8d6d52394b68b51e0fa439d0eee16e37
 * limitations under the License.
 */

/* -------------------------------------------------------------- */

/* Handles for cache filters, resolved at startup to eliminate
 * a name-to-function mapping on each request
 */

/*
 * Can we deliver this request from the cache?
 * If so:
 *   deliver the content by installing the CACHE_OUT filter.
 * If not:
 *   check whether we're allowed to try to cache it.
 *
 * By default, the cache handler runs in the quick handler, bypassing
 * virtually all server processing and offering the cache its optimal
 * performance. In this mode, the cache bolts onto the front of the
 * server, and behaves as a discrete RFC2616 caching proxy.
 *
 * Under certain circumstances, an admin might want to run the cache as
 * a normal handler instead of a quick handler, allowing the cache to
 * run after the authorisation hooks, or allowing fine control over
 * the placement of the cache in the filter chain. This option comes at
 * a performance penalty, and should only be used to achieve specific
 * caching goals where the admin understands what they are doing.
 */

/* only run if the quick handler is enabled */

/*
 * Which cache module (if any) should handle this request?
 */

/* make space for the per request config */

/* save away the possible providers */

/*
 * Are we allowed to serve cached info at all?
 */

/* find certain cache controlling headers */

/* First things first - does the request allow us to return
 * cached information at all? If not, just decline the request.
 */

/* Are we PUT/POST/DELETE? If so, prepare to invalidate the cached entities. */

/* Add cache_invalidate filter to this request to force a
 * cache entry to be invalidated if the response is
 * ultimately successful (2xx).
 */

/*
 * Try to serve this request from the cache.
 * If there is no existing cache file (DECLINED):
 *
 * try to obtain a cache lock at this point. if we succeed,
 * we are the first to try and cache this url. if we fail,
 * it means someone else is already trying to cache this
 * url, and we should just let the request through to the
 * backend without any attempt to cache. this stops
 * duplicated simultaneous attempts to cache an entity.
 *
 * Add cache_save filter to cache this request. Choose
 * the correct filter by checking if we are a subrequest.
 */

r, APLOGNO(00749) "Adding CACHE_SAVE_SUBREQ filter for %s",
r, APLOGNO(00750) "Adding CACHE_SAVE filter for %s",
"Adding CACHE_REMOVE_URL filter for %s",
/* Add cache_remove_url filter to this request to remove a * stale cache entry if needed. Also put the current cache * request rec in the filter context, as the request that * is available later during running the filter may be * different due to an internal redirect. r,
APLOGNO(00
752)
"Cache locked for url, not caching " r,
APLOGNO(00
753)
"Restoring request headers for %s",
/* we've got a cache hit! tell everyone who cares */

/* if we are a lookup, we are exiting soon one way or another; Restore
 * the request headers.
 */
"Restoring request headers.");
/* If we are a lookup, we have to return DECLINED as we have no
 * way of knowing if we will be able to serve the content.
 */

/* Return cached status. */

/* If we're a lookup, we can exit now instead of serving the content. */

/* Serve up the content */

/* We are in the quick handler hook, which means that no output
 * filters have been set. So let's run the insert_filter hook.
 *
 * Add cache_out filter to serve this request. Choose
 * the correct filter by checking if we are a subrequest.
 *
 * Remove all filters that are before the cache_out filter. This ensures
 * that we kick off the filter stack with our cache_out filter being the
 * first in the chain. This makes sense because we want to restore things
 * in the same manner as we saved them.
 * There may be filters before our cache_out filter, because
 * 1. We call ap_set_content_type during cache_select. This causes
 *    Content-Type specific filters to be added.
 * 2. We call the insert_filter hook. This causes filters, e.g. the
 *    ones set with SetOutputFilter, to be added.
 */

/* kick off the filter stack */
"cache_quick_handler(%s): ap_pass_brigade returned",
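
/*
 * Illustrative sketch only, not the original quick handler code, assuming
 * the CACHE_OUT filter instance "cache_out" has already been added to
 * r->output_filters: strip any filters inserted ahead of it, then kick off
 * the stack with a brigade carrying just an EOS bucket; the CACHE_OUT
 * filter recalls the cached body itself.
 */
#include "httpd.h"
#include "util_filter.h"
#include "apr_buckets.h"

static apr_status_t kick_off_cache_out_example(request_rec *r,
                                               ap_filter_t *cache_out)
{
    apr_bucket_brigade *out;
    apr_bucket *eos;

    /* remove everything queued in front of the CACHE_OUT filter */
    while (r->output_filters && r->output_filters != cache_out) {
        ap_remove_output_filter(r->output_filters);
    }

    out = apr_brigade_create(r->pool, r->connection->bucket_alloc);
    eos = apr_bucket_eos_create(r->connection->bucket_alloc);
    APR_BRIGADE_INSERT_TAIL(out, eos);

    /* the cached response is generated inside the filter chain */
    return ap_pass_brigade(r->output_filters, out);
}
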
/*
 * If the two filter handles are present within the filter chain, replace
 * the last instance of the first filter with the last instance of the
 * second filter, and return true. If the second filter is not present at
 * all, the first filter is removed, and false is returned. If neither
 * filter is present, false is returned and this function does nothing.
 * If a stop filter is specified, processing will stop once this filter is
 * reached.
 */

/* Find the given filter, and return it if found, or NULL otherwise. */

/*
 * The cache handler is functionally similar to the cache_quick_handler,
 * however a number of steps that are required by the quick handler are
 * not required here, as the normal httpd processing has already handled
 * them.
 */

/* only run if the quick handler is disabled */

/*
 * Which cache module (if any) should handle this request?
 */

/* make space for the per request config */

/* save away the possible providers */

/*
 * Are we allowed to serve cached info at all?
 */

/* Are we PUT/POST/DELETE? If so, prepare to invalidate the cached entities. */

/* Add cache_invalidate filter to this request to force a
 * cache entry to be invalidated if the response is
 * ultimately successful (2xx).
 */

/*
 * Try to serve this request from the cache.
 * If there is no existing cache file (DECLINED):
 *
 * try to obtain a cache lock at this point. if we succeed,
 * we are the first to try and cache this url. if we fail,
 * it means someone else is already trying to cache this
 * url, and we should just let the request through to the
 * backend without any attempt to cache. this stops
 * duplicated simultaneous attempts to cache an entity.
 *
 * Add cache_save filter to cache this request. Choose
 * the correct filter by checking if we are a subrequest.
 */

r, APLOGNO(00756) "Adding CACHE_SAVE_SUBREQ filter for %s",
r, APLOGNO(00757) "Adding CACHE_SAVE filter for %s",
/*
 * Did the user indicate the precise location of the
 * CACHE_SAVE filter by inserting the CACHE filter as a
 * marker?
 *
 * If so, we get cunning and replace CACHE with the
 * CACHE_SAVE filter. This has the effect of inserting
 * the CACHE_SAVE filter at the precise location where
 * the admin wants to cache the content. All filters that
 * lie before and after the original location of the CACHE
 * filter will remain in place.
 */

r, APLOGNO(00758) "Replacing CACHE with CACHE_SAVE " "filter for %s", r->uri);
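
/*
 * Illustrative sketch only, modelled on the behaviour described earlier
 * ("replace the last instance of the first filter with the last instance
 * of the second filter"); it is not necessarily the exact mod_cache
 * implementation. The function name is an assumption.
 */
#include "httpd.h"
#include "util_filter.h"

static int replace_filter_example(ap_filter_t *chain, ap_filter_rec_t *from,
                                  ap_filter_rec_t *to, ap_filter_rec_t *stop)
{
    ap_filter_t *ffrom = NULL, *fto = NULL;

    /* remember the last instance of each handle seen before "stop" */
    while (chain && chain->frec != stop) {
        if (chain->frec == from) {
            ffrom = chain;
        }
        else if (chain->frec == to) {
            fto = chain;
        }
        chain = chain->next;
    }

    if (ffrom && fto) {
        /* morph the "from" instance into the "to" filter in place, so the
         * admin-chosen position in the chain is preserved */
        ffrom->frec = fto->frec;
        ffrom->ctx = fto->ctx;
        ap_remove_output_filter(fto);
        return 1;
    }
    if (ffrom) {
        ap_remove_output_filter(ffrom);
    }
    return 0;
}
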
/* save away the save filter stack */

"Adding CACHE_REMOVE_URL filter for %s",

/* Add cache_remove_url filter to this request to remove a
 * stale cache entry if needed. Also put the current cache
 * request rec in the filter context, as the request that
 * is available later during running the filter may be
 * different due to an internal redirect.
 */

r, APLOGNO(00760) "Cache locked for url, not caching "

/* we've got a cache hit! tell everyone who cares */

/* Serve up the content */

/*
 * Add cache_out filter to serve this request. Choose
 * the correct filter by checking if we are a subrequest.
 *
 * Did the user indicate the precise location of the CACHE_OUT filter by
 * inserting the CACHE filter as a marker?
 * If so, we get cunning and replace CACHE with the CACHE_OUT filter.
 * This has the effect of inserting the CACHE_OUT filter at the precise
 * location where the admin wants to cache the content. All filters that
 * lie *after* the original location of the CACHE filter will remain in
 * place.
 */

r, APLOGNO(00761) "Replacing CACHE with CACHE_OUT filter for %s",
/*
 * Remove all filters that are before the cache_out filter. This ensures
 * that we kick off the filter stack with our cache_out filter being the
 * first in the chain. This makes sense because we want to restore things
 * in the same manner as we saved them.
 * There may be filters before our cache_out filter, because
 * 1. We call ap_set_content_type during cache_select. This causes
 *    Content-Type specific filters to be added.
 * 2. We call the insert_filter hook. This causes filters, e.g. the
 *    ones set with SetOutputFilter, to be added.
 */

/* kick off the filter stack */

/*
 * Deliver cached content (headers and body) up the stack.
 */

/* user likely configured CACHE_OUT manually; they should use mod_cache
 * configuration to do that
 */
"CACHE/CACHE_OUT filter enabled while caching is disabled, ignoring");

"cache: running CACHE_OUT filter");

/* clean out any previous response up to EOS, if any */

/* restore content type of cached response if available */
/* Needed especially when stale content gets served. */

/* restore status of cached response */

/* recall_headers() was called in cache_select() */

/* This filter is done once it has served up its content */
"cache: serving %s", r->uri);
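
/*
 * Illustrative sketch only, not the original CACHE_OUT filter code,
 * assuming the cache_provider recall_body() hook and the provider/handle
 * members of cache_request_rec: recall the cached body into the passed
 * brigade, remove this filter, and send the result downstream.
 */
#include "httpd.h"
#include "util_filter.h"
#include "mod_cache.h"

static apr_status_t cache_out_example(ap_filter_t *f, apr_bucket_brigade *bb)
{
    cache_request_rec *cache = f->ctx;   /* set when the filter was added */
    request_rec *r = f->r;
    apr_status_t rv;

    /* headers were already recalled in cache_select(); now fetch the body
     * into the brigade handed to us */
    rv = cache->provider->recall_body(cache->handle, r->pool, bb);
    if (rv != APR_SUCCESS) {
        return rv;
    }

    /* this filter is done once it has served up its content */
    ap_remove_output_filter(f);
    return ap_pass_brigade(f->next, bb);
}
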
/*
 * Having jumped through all the hoops and decided to cache the
 * response, call store_body() for each brigade, handling the
 * case where the provider can't swallow the full brigade. In this
 * case, we write the brigade we were passed out downstream, and
 * loop around to try and cache some more until the in brigade is
 * completely empty. As soon as the out brigade contains eos, call
 * commit_entity() to finalise the cached element.
 */
/* give someone else the chance to cache the file */ /* give up trying to cache, just step out the way */ /* does the out brigade contain eos? if so, we're done, commit! */ /* conditionally remove the lock as soon as we see the eos bucket */ /* cache provider wants more data before passing the brigade * upstream, oblige the provider by leaving to fetch more. /* oops, no data out, but not all data read in either, be * safe and stand down to prevent a spin. "cache: Cache provider's store_body returned an " "empty brigade, but didn't consume all of the" "input brigade, standing down to prevent a spin");
/* give someone else the chance to cache the file */

/*
 * Sanity check for 304 Not Modified responses, as per RFC2616 Section 10.3.5.
 */

/*
 * Decide whether or not this content should be cached.
 * If we decide no it should not:
 *   remove the filter from the chain
 * If we decide yes it should:
 *   Have we already started saving the response?
 *   If we have started, pass the data to the storage manager via store_body
 *   Check to see if we *can* save this particular response.
 *   If we can, call cache_create_entity() and save the headers and body
 *   Finally, pass the data to the next filter (the network or whatever)
 *
 * After the various failure cases, the cache lock is proactively removed, so
 * that another request is given the opportunity to attempt to cache without
 * waiting for a potentially slow client to acknowledge the failure.
 */

/* Setup cache_request_rec */

/* user likely configured CACHE_SAVE manually; they should really use
 * mod_cache configuration to do that
 */

/*
 * This section passes the brigades into the cache modules, but only
 * if the setup section (see below) is complete.
 */

/* We've already sent down the response and EOS. So, ignore
 * anything else.
 */

/* have we already run the cacheability check and set up the */

/*
 * This section opens the cache entity and sets various caching
 * parameters, and decides whether this URL should be cached at
 * all. This section is run before the above section.
 */

/* RFC2616 13.8 Errors or Incomplete Response Cache Behavior:
 * If a cache receives a 5xx response while attempting to revalidate an
 * entry, it MAY either forward this response to the requesting client,
 * or act as if the server failed to respond. In the latter case, it MAY
 * return a previously received response unless the cached entry
 * includes the "must-revalidate" cache-control directive (see section
 *
 * This covers the case where an error was generated behind us, for example
 * by a backend server via mod_proxy.
 */

/* morph the current save filter into the out filter, and serve from
 * cache
 */

/* add a revalidation warning */
"111 Revalidation failed");
"cache hit: %d status; stale content returned",
/* give someone else the chance to cache the file */

/* pass brigade to our morphed out filter */

/* read expiry date; if a bad date, then leave it so the client can */

/* read the last-modified date; if the date is bad, then delete it */

/* read the etag and cache-control from the entity */

/* Have we received a 304 response without any headers at all? Fall back to
 * the original headers in the original cached request.
 */

/* Parse the cache control header */

/*
 * what responses should we not cache?
 *
 * At this point we decide based on the response headers whether it
 * is appropriate _NOT_ to cache the data from the server. There are
 * a whole lot of conditions that prevent us from caching this data.
 * They are tested here one by one to be clear and unambiguous.
 */

/* RFC2616 13.4 we are allowed to cache 200, 203, 206, 300, 301 or 410
 * We allow the caching of 206, but a cache implementation might choose
 * to decline to cache a 206 if it doesn't know how to.
 * We include 304 Not Modified here too as this is the origin server
 * telling us to serve the cached copy.
 */

/* We are also allowed to cache any response given that it has a
 * valid Expires or Cache-Control header. If we find either of
 * those here, we pass the request through the rest of the tests. From
 *
 *   A response received with any other status code (e.g. status
 *   codes 302 and 307) MUST NOT be returned in a reply to a
 *   subsequent request unless there are cache-control directives or
 *   another header(s) that explicitly allow it. For example, these
 *   include the following: an Expires header (section 14.21); a
 *   "max-age", "s-maxage", "must-revalidate", "proxy-revalidate",
 *   "public" or "private" cache-control directive (section 14.9).
 */

/* if a broken Expires header is present, don't cache it */

/* if an Expires header is in the past, don't cache it */
reason = "Expires header already expired; not cacheable";

/* if we're already stale, but can never revalidate, don't cache it */
reason = "s-maxage or max-age zero and no Last-Modified or Etag; not cacheable";

/* if a query string is present but no explicit expiration time,
 * don't cache it (RFC 2616/13.9 & 13.2.1)
 */
reason = "Query string present but no explicit expiration time";

/* if the server said 304 Not Modified but we have no cache
 * file - pass this untouched to the user agent, it's not for us.
 */
reason = "HTTP Status 304 Not Modified";

/* 200 OK response from HTTP/1.0 and up without Last-Modified,
 * Etag, Expires, Cache-Control:max-age, or Cache-Control:s-maxage
 * is why we have an optional function for a key-gen ;-)
 */
reason = "No Last-Modified; Etag; Expires; Cache-Control:max-age or Cache-Control:s-maxage headers";

/* RFC2616 14.9.2 Cache-Control: no-store response
 * indicating do not cache, or stop now if you are
 */
reason = "Cache-Control: no-store present";

/* RFC2616 14.9.1 Cache-Control: private response
 * this object is marked for this user's eyes only. Behave
 */
reason = "Cache-Control: private present";

/* RFC2616 14.8 Authorisation:
 * if authorisation is included in the request, we don't cache,
 * but we can cache if the following exceptions are true:
 * 1) If Cache-Control: s-maxage is included
 * 2) If Cache-Control: must-revalidate is included
 * 3) If Cache-Control: public is included
 */
reason = "Authorization required";

reason = "Vary header contains '*'";

reason = "environment variable 'no-cache' is set";

/* or we've been asked not to cache it above */
reason = "r->no_cache present";

/*
 * 13.12 Cache Replacement:
 * Note: a new response that has an older Date header value than
 * existing cached responses is not cacheable.
 */
reason = "updated entity is older than cached entity";
/* while this response is not cacheable, the previous response still is */
"cache: Removing CACHE_REMOVE_URL filter.");
/* and lastly, contradiction checks for revalidated responses
 * as per RFC2616 Section 10.3.5
 */

/* contradiction: 304 Not Modified, but entity header modified */

/* Enforce RFC2616 Section 10.3.5, just in case. We caught any
 *
 *   If the conditional GET used a strong cache validator (see section
 *   13.3.3), the response SHOULD NOT include other entity-headers.
 *   Otherwise (i.e., the conditional GET used a weak validator), the
 *   response MUST NOT include other entity-headers; this prevents
 *   inconsistencies between cached entity-bodies and updated headers.
 */

/* Hold the phone. Some servers might allow us to cache a 2xx, but
 * then make their 304 responses non-cacheable. RFC2616 says this:
 *
 *   If a 304 response indicates an entity not currently cached, then
 *   the cache MUST disregard the response and repeat the request
 *   without the conditional.
 *
 * A 304 response with contradictory headers is technically a
 * different entity; to be safe, we remove the entity from the cache.
 */

/* we've got a cache conditional miss! tell anyone who cares */
"conditional cache miss: 304 was uncacheable, entity removed: %s",
/* remove the cached entity immediately, we might cache it again */

/* let someone else attempt to cache */

/* remove this filter from the chain */

/* retry without the conditionals */

/* we've got a cache miss! tell anyone who cares */

/* remove this filter from the chain */

/* remove the lock file unconditionally */

/* ship the data up the stack */

/* Make it so that we don't execute this path again. */

/* Set the content length if known. */
cl = NULL;

/* parse error, see next 'if' block */

/* if we don't get the content-length, see if we have all the
 * buckets and use their length to calculate the size
 */

/* remember content length to check response size against later */

/* It's safe to cache the response.
 *
 * There are two possibilities at this point:
 * - cache->handle == NULL. In this case there is no previously
 *   cached entity anywhere on the system. We must create a brand
 *   new entity and store the response in it.
 *
 * - cache->stale_handle != NULL. In this case there is a stale
 *   entity in the system which needs to be replaced by new
 *   content (unless the result was 304 Not Modified, which means
 *   the cached entity is actually fresh, and we should update
 *   the headers).
 */

/* Did we have a stale cache entry that really is stale? */

/* Oh, hey. It isn't that stale! Yay! */

/* Treat the request as if it wasn't conditional. */

/* Restore the original request headers as they may be needed
 * by further output filters like the byterange filter to make
 * it work.
 */

/* no cache handle, create a new entity */

/* We only set info->status upon the initial creation. */

/* we've got a cache miss! tell anyone who cares */
"cache miss: cache unwilling to store response");
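
/*
 * Illustrative sketch only, not the original code: determine the response
 * size either from the Content-Length header or, failing that, from the
 * brigade itself if it already contains the complete (EOS-terminated)
 * response. The helper name and the "-1 means unknown" convention are
 * assumptions.
 */
#include "httpd.h"
#include "apr_strings.h"
#include "apr_buckets.h"

static apr_off_t response_size_example(request_rec *r, apr_bucket_brigade *in)
{
    const char *cl = apr_table_get(r->headers_out, "Content-Length");
    apr_off_t size = -1;   /* -1 means "unknown" in this sketch */

    if (cl) {
        char *end;
        if (apr_strtoff(&size, cl, &end, 10) != APR_SUCCESS || *end) {
            size = -1;     /* parse error: fall back to the brigade */
        }
    }

    if (size < 0 && !APR_BRIGADE_EMPTY(in)
            && APR_BUCKET_IS_EOS(APR_BRIGADE_LAST(in))) {
        /* all buckets are here: ask for the actual length (this may read
         * morphing buckets into memory) */
        if (apr_brigade_length(in, 1, &size) != APR_SUCCESS) {
            size = -1;
        }
    }
    return size;
}
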
/* Caching layer declined the opportunity to cache the response */

/* We are actually caching this response. So it does not
 * make sense to remove this entity any more.
 */
"cache: Removing CACHE_REMOVE_URL filter.");
/*
 * We now want to update the cache file header information with
 * the new date, last modified, expire and content length and write
 * it away to our cache file. First, we determine these values from
 * the response, using heuristics if appropriate.
 *
 * In addition, we make HTTP/1.1 age calculations and write them away.
 */

/* store away the previously parsed cache control headers */

/* Read the date. Generate one if one is not supplied */

/* no date header (or bad header)! */

/* set response_time for HTTP/1.1 age calculations */

/* get the request time */

/* check last-modified date */

/* if it's in the future, then replace by date */
r, APLOGNO(00771) "cache: Last modified is in the future, "

/* if no expiry date then
 *   if Cache-Control: max-age
 *      expiry date = date + max-age
 *   else if last-modified
 *      expiry date = date + min((date - lastmod) * factor, maxexpire)
 *   else
 *      expire date = date + defaultexpire
 */

/* if lastmod == date then you get 0*conf->factor which results in
 * an expiration time of now. This causes some problems with
 * freshness calculations, so we choose the else path...
 */

/* We found a stale entry which wasn't really stale. */

/* RFC 2616 10.3.5 states that entity headers are not supposed
 * to be in the 304 response. Therefore, we need to combine the
 * response headers with the cached headers *before* we update
 * the cached headers.
 *
 * However, before doing that, we need to first merge in
 * err_headers_out and we also need to strip any hop-by-hop
 * headers that might have snuck in.
 */

/* Merge in our cached headers. However, keep any updated values. */

/* take output, overlay on top of cached */

/* Write away header information to cache. It is possible that we are
 * trying to update headers for an entity which has already been cached.
 * This may fail, due to an unwritable cache area. E.g. filesystem full,
 * permissions problems or a read-only (re)mount. This must be handled.
 */

/* Did we just update the cached headers on a revalidated response?
 * If so, we can now decide what to serve to the client. This is done in
 * the same way as with a regular response, but conditions are now checked
 * against the cached or merged response headers.
 */

/* Load in the saved status and clear the status line. */

/* We're just saving response headers, so we are done. Commit
 * the response at this point, unless there was a previous error.
 */

/* Restore the original request headers and see if we need to
 * return anything other than the cached response (ie. the original
 * request was conditional).
 */

/* Before returning we need to handle the possible case of an
 * unwritable cache. Rather than leaving the entity in the cache
 * and having it constantly re-validated, now that we have recalled
 * the body it is safe to try and remove the url from the cache.
 */
"cache: updating headers with store_headers failed. "

/* Probably a mod_cache_disk cache area has been (re)mounted
 * read-only, or that there is a permissions problem.
 */
"cache: attempt to remove url from cache unsuccessful.");
/* we've got a cache conditional hit! tell anyone who cares */
"conditional cache hit: entity refresh failed");

/* we've got a cache conditional hit! tell anyone who cares */
"conditional cache hit: entity refreshed");

/* let someone else attempt to cache */
"cache: store_headers failed");

/* we've got a cache miss! tell anyone who cares */
"cache miss: store_headers failed");

/* we've got a cache miss! tell anyone who cares */
"cache miss: attempting entity save");
/*
 * CACHE_REMOVE_URL filter
 * -----------------------
 *
 * This filter gets added in the quick handler every time the CACHE_SAVE filter
 * gets inserted. Its purpose is to remove a confirmed stale cache entry from
 * the cache.
 *
 * CACHE_REMOVE_URL has to be a protocol filter to ensure that it is run even if
 * the response is a canned error message, which removes the content filters
 * and thus the CACHE_SAVE filter from the chain.
 *
 * CACHE_REMOVE_URL expects the cache request rec within its context because the
 * request this filter runs on can be different from the one whose cache entry
 * should be removed, due to internal redirects.
 *
 * Note that CACHE_SAVE_URL (as a content-set filter, hence run before the
 * protocol filters) will remove this filter if it decides to cache the file.
 * Therefore, if this filter is left in, it must mean we need to toss any
 */

/* Setup cache_request_rec */

/* user likely configured CACHE_REMOVE_URL manually; they should really
 * use mod_cache configuration to do that. So:
 * 2. Do nothing and bail out
 */
"cache: CACHE_REMOVE_URL enabled unexpectedly");

/* Now remove this cache entry from the cache */

/*
 * CACHE_INVALIDATE filter
 * -----------------------
 *
 * This filter gets added in the quick handler should a PUT, POST or DELETE
 * method be detected. If the response is successful, we must invalidate any
 * cached entity as per RFC2616 section 13.10.
 *
 * CACHE_INVALIDATE has to be a protocol filter to ensure that it is run even if
 * the response is a canned error message, which removes the content filters.
 *
 * CACHE_INVALIDATE expects the cache request rec within its context because the
 * request this filter runs on can be different from the one whose cache entry
 * should be removed, due to internal redirects.
 */

/* Setup cache_request_rec */

/* user likely configured CACHE_INVALIDATE manually; they should really
 * use mod_cache configuration to do that. So:
 * 2. Do nothing and bail out
 */
"cache: CACHE_INVALIDATE enabled unexpectedly: %s", r->uri);
"cache: response status to '%s' method is %d (>299), not invalidating cached entity: %s", r->
method, r->
status, r->
uri);
"cache: Invalidating all cached entities in response to '%s' request for %s",
/* we've got a cache invalidate! tell everyone who cares */ "cache invalidated by %s", r->
method));
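
/*
 * Illustrative sketch only: the shape of a protocol-level CACHE_INVALIDATE
 * filter as described above. invalidate_cached_entity_example() is a
 * hypothetical stand-in for the provider-specific invalidation call; it is
 * not a real mod_cache function.
 */
#include "httpd.h"
#include "http_log.h"
#include "util_filter.h"

/* hypothetical stand-in for the provider-specific invalidation call */
static apr_status_t invalidate_cached_entity_example(request_rec *r)
{
    (void)r;
    return APR_SUCCESS;
}

static apr_status_t cache_invalidate_example(ap_filter_t *f,
                                             apr_bucket_brigade *in)
{
    request_rec *r = f->r;

    if (r->status > 299) {
        /* the PUT/POST/DELETE did not succeed: leave the cached entity alone */
        ap_log_rerror(APLOG_MARK, APLOG_DEBUG, 0, r,
                "cache: response status to '%s' method is %d (>299), "
                "not invalidating cached entity: %s",
                r->method, r->status, r->uri);
    }
    else {
        /* RFC2616 13.10: a successful unsafe method invalidates the entity */
        invalidate_cached_entity_example(r);
    }

    /* we only needed to run once */
    ap_remove_output_filter(f);
    return ap_pass_brigade(f->next, in);
}
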
/*
 * This filter can be optionally inserted into the filter chain by the admin as
 * a marker representing the precise location within the filter chain where
 * caching is to be performed.
 *
 * When the filter chain is set up in the non-quick version of the URL handler,
 * the CACHE filter is replaced by the CACHE_OUT or CACHE_SAVE filter,
 * effectively inserting the caching filters at the point indicated by the
 * admin. The CACHE filter is then removed.
 *
 * This allows caching to be performed before the content is passed to the
 * INCLUDES filter, or to a filter that might perform transformations unique
 * to the specific request and that would otherwise be non-cacheable.
 */

/* was the quick handler enabled */
"cache: CACHE filter was added in quick handler mode and "

/* otherwise we may have been bypassed, nothing to see here */
"cache: CACHE filter was added twice, or was added where "
"the cache has been bypassed and will be ignored: %s",
/* we are just a marker, so let's just remove ourselves */

/*
 * If configured, add the status of the caching attempt to the subprocess
 * environment, and if configured, to headers in the response.
 *
 * The status is saved below the broad category of the status (hit, miss,
 * revalidate), as well as a single cache-status key. This can be used for
 *
 * The status is optionally saved to an X-Cache header, and the detail of
 * why a particular cache entry was cached (or not cached) is optionally
 * saved to an X-Cache-Detail header. This extra detail is useful for
 * service developers who may need to know whether their Cache-Control headers
 */

/*
 * If an error has occurred, but we have a stale cached entry, restore the
 * filter stack from the save filter onwards. The canned error message will
 * be discarded in the process, and replaced with the cached response.
 */

/* ignore everything except for 5xx errors */

/* RFC2616 13.8 Errors or Incomplete Response Cache Behavior:
 * If a cache receives a 5xx response while attempting to revalidate an
 * entry, it MAY either forward this response to the requesting client,
 * or act as if the server failed to respond. In the latter case, it MAY
 * return a previously received response unless the cached entry
 * includes the "must-revalidate" cache-control directive (see section
 *
 * This covers the case where the error was generated by our server via
 */

/* morph the current save filter into the out filter, and serve from
 * cache
 */

/* add a revalidation warning */
"111 Revalidation failed");

"cache hit: %d status; stale content returned",
/* give someone else the chance to cache the file */

/* -------------------------------------------------------------- */

/* Setup configurable data */

/* maximum time to cache a document */
/* default time to cache a document */
/* factor used to estimate Expires date from LastModified date */
/* array of providers for this URL space */

/* maximum time to cache a document */
/* default time to cache a document */
/* factor used to estimate Expires date from LastModified date */
/* array of URL prefixes for which caching is enabled */
/* array of URL prefixes for which caching is disabled */
/* array of headers that should not be stored in cache */
/* flag indicating that query-string should be ignored when caching */
/* by default, run in the quick handler */
/* array of identifiers that should not be used for key calculation */

ps->lock = 0; /* thundering herd lock defaults to off */

/* array of URL prefixes for which caching is disabled */
/* array of URL prefixes for which caching is enabled */

/* if header None is listed clear array */

/* Only add header if no "None" has been found in header list
 * (When 'None' is passed, IGNORE_HEADERS_SET && nelts == 0.)
 */

/* if identifier None is listed clear array */

/* Only add identifier if no "None" has been found in identifier
 * list
 */

"provider (%s) starts with a '/'. Are url and provider switched?",

"CacheEnable provider (%s) is missing an URL.", type);
return "When in a Location, CacheEnable must specify a path or an URL below " return "CacheDisable must be followed by the word 'on' when in a Location.";
return "CacheDisable must specify a path or an URL.";
return "CacheLastModifiedFactor value must be a float";
return "CacheLockMaxAge value must be a non-zero positive integer";
/* This is the means by which unusual (non-unix) os's may find alternate
 *
 * Consider a new config directive that enables loading specific cache
 * implementations (like mod_cache_mem, mod_cache_file, etc.).
 * Rather than using a LoadModule directive, admin would use something
 * like CacheModule mem_cache_module | file_cache_module, etc,
 * which would cause the appropriate cache module to be loaded.
 * This is more intuitive than requiring a LoadModule directive.
 */

"A cache type and partial URL prefix below which "

"A partial URL prefix below which caching is disabled"),
"The maximum time in seconds to cache a document"),
"The minimum time in seconds to cache a document"),
"The default time in seconds to cache a document"),
"Run the cache in the quick handler, default on"),
"Ignore Responses where there is no Last Modified Header"),
"Ignore requests from the client for uncached content"),
"Ignore expiration dates when populating cache, resulting in " "an If-Modified-Since request to the backend on retrieval"),
"Ignore 'Cache-Control: private' and store private content"),
"Ignore 'Cache-Control: no-store' and store sensitive content"),
"A space separated list of headers that should not be " "Ignore query-string when caching"),
"identifiers that should be ignored for creating the key " "of the cached entity."),
"The factor used to estimate Expires date from " "Enable or disable the thundering herd lock."),
"The thundering herd lock path. Defaults to the '" "DefaultRuntimeDir setting."),
"Maximum age of any thundering herd lock."),
"Add a X-Cache header to responses. Default is off."),
"Add a X-Cache-Detail header to responses. Default is off."),
"Override the base URL of reverse proxied cache keys."),
"Serve stale content on 5xx errors if present. Defaults to on."),
/* cache quick handler */
/* cache error handler */

/*
 * XXX The cache filters need to run right after the handlers and before
 * any other filters. Consider creating AP_FTYPE_CACHE for this purpose.
 *
 * Depending on the type of request (subrequest / main request) they
 * need to be run before AP_FTYPE_CONTENT_SET / after AP_FTYPE_CONTENT_SET
 * filters. Thus create two filter handles for each type:
 * cache_save_filter_handle / cache_out_filter_handle to be used by
 * main requests, and
 * cache_save_subreq_filter_handle / cache_out_subreq_filter_handle
 * to be run by subrequests.
 *
 * CACHE is placed into the filter chain at an admin specified location,
 * and when the cache_handler is run, the CACHE filter is swapped with
 * the CACHE_OUT filter, or CACHE_SAVE filter as appropriate. This has
 * the effect of offering optional fine control of where the cache is
 * inserted into the filter chain.
 *
 * CACHE_SAVE must go into the filter chain after a possible DEFLATE
 * filter to ensure that the compressed content is stored.
 * Incrementing filter type by 1 ensures this happens.
 *
 * CACHE_SAVE_SUBREQ must go into the filter chain before SUBREQ_CORE to
 * handle subrequests. Decrementing filter type by 1 ensures this
 * happens.
 *
 * CACHE_OUT must go into the filter chain after a possible DEFLATE
 * filter to ensure that already compressed cache objects do not
 * get compressed again. Incrementing filter type by 1 ensures
 * this happens.
 *
 * CACHE_OUT_SUBREQ must go into the filter chain before SUBREQ_CORE to
 * handle subrequests. Decrementing filter type by 1 ensures this
 * happens.
 */

/* CACHE_REMOVE_URL has to be a protocol filter to ensure that it is
 * run even if the response is a canned error message, which
 * removes the content filters.
 */
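
/*
 * Illustrative sketch only of the registration pattern described above.
 * The handle variable names, the shared stub callbacks, and the hook
 * ordering shown are assumptions, not necessarily what mod_cache uses.
 */
#include "httpd.h"
#include "http_config.h"
#include "util_filter.h"

static int quick_handler_stub(request_rec *r, int lookup)
{
    (void)r; (void)lookup;
    return DECLINED;   /* stub */
}

static apr_status_t passthrough_filter_stub(ap_filter_t *f,
                                            apr_bucket_brigade *bb)
{
    return ap_pass_brigade(f->next, bb);   /* stub */
}

static ap_filter_rec_t *save_handle, *save_subreq_handle;
static ap_filter_rec_t *out_handle, *out_subreq_handle;

static void register_hooks_example(apr_pool_t *p)
{
    (void)p;

    ap_hook_quick_handler(quick_handler_stub, NULL, NULL, APR_HOOK_FIRST);

    /* after CONTENT_SET (e.g. DEFLATE) for main requests ... */
    save_handle = ap_register_output_filter("CACHE_SAVE",
            passthrough_filter_stub, NULL, AP_FTYPE_CONTENT_SET + 1);
    out_handle = ap_register_output_filter("CACHE_OUT",
            passthrough_filter_stub, NULL, AP_FTYPE_CONTENT_SET + 1);

    /* ... but before SUBREQ_CORE for subrequests */
    save_subreq_handle = ap_register_output_filter("CACHE_SAVE_SUBREQ",
            passthrough_filter_stub, NULL, AP_FTYPE_CONTENT_SET - 1);
    out_subreq_handle = ap_register_output_filter("CACHE_OUT_SUBREQ",
            passthrough_filter_stub, NULL, AP_FTYPE_CONTENT_SET - 1);
}
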