mail-cache-transaction.c revision 8bb360f9e5de1c25e4f875205bb06e8bf15dae14
/* see if we should try to reopen the cache file */
/* index doesn't have a cache extension, but the cache file exists
   (corrupted indexes fixed?). fix it. */
/* index offsets don't match the cache file */
/* the cache file appears to be too old. reopening should help. */
/* cache file sequence might be broken. it's also possible that it was
   just compressed and we just haven't yet seen the changes in index.
   try if refreshing index helps. if not, compress the cache file. */
/* get the latest reset ID */
"Invalid magic in hole header"
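The "Invalid magic in hole header" string implies that freed regions of the cache file carry a small header with a magic value that is validated before the hole is reused. A minimal sketch of such a check follows; the struct layout, field names, and magic constant are assumptions for illustration, not the file's actual definitions.

    #include <stdint.h>
    #include <stdbool.h>

    /* hypothetical hole header: freed file regions form a linked list */
    struct hole_header {
            uint32_t next_offset; /* offset of next hole, 0 = end of list */
            uint32_t size;        /* size of this hole in bytes */
            uint32_t magic;       /* sanity marker, detects corruption */
    };

    #define HOLE_HEADER_MAGIC 0xdeadbeefU /* placeholder value */

    /* returns false if a header read from the file looks corrupted */
    static bool hole_header_is_valid(const struct hole_header *hdr,
                                     uint32_t file_size)
    {
            if (hdr->magic != HOLE_HEADER_MAGIC)
                    return false; /* "Invalid magic in hole header" */
            /* the next-hole link must stay inside the file */
            return hdr->next_offset == 0 || hdr->next_offset < file_size;
    }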
for (i = 0; i < count; i++) {
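The loop above lost its body in extraction. Judging from the neighboring comments ("found a large enough hole"), it plausibly scans the free-hole list for a block big enough for the data being written. A speculative reconstruction, reusing the hypothetical hole_header from the previous sketch:

    /* hypothetical scan over an in-memory copy of the hole list;
       returns the index of the first hole that fits, or -1 if none
       does and the file must be grown instead */
    static int find_large_enough_hole(const struct hole_header *holes,
                                      unsigned int count,
                                      uint32_t wanted_size)
    {
            unsigned int i;

            for (i = 0; i < count; i++) {
                    if (holes[i].size >= wanted_size)
                            return (int)i; /* found a large enough hole */
            }
            return -1; /* append at end of file instead */
    }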
/* found a large enough hole. */
/* mail_cache_unlink_hole() could have noticed corruption */
/* allocate some more space than we need */
/* we can simply grow it */
/* grow reservation. it's probably the last one in the buffer, but it's
   not guaranteed because we might have used holes as well */
/* we can just set used_file_size back */
/* set it up as a hole */
/* free flushed data as well. do it from end to beginning so we have a
   better chance of updating used_file_size instead of adding holes */
/* check again - locking might have reopened the cache file */
/* not enough preallocated space in transaction, get more */
/* cache file reopened - need to abort */
/* final commit - see if we can free the rest of the reserved space */
/* write the cache_offsets to index file. records' prev_offset is
   updated to point to old cache record when index is being synced */
/* we added records for this message multiple times in this same
   uncommitted transaction. only the new one will be written to
   transaction log, we need to do the linking ourselves here */
/* if we're combining multiple transactions, make sure the one with the
   smallest offset is written into index. this is required for
   non-file-mmaped cache to work properly. */
/* committing, remove the last dummy record */
/* cache file reopened - need to abort */
/* error / couldn't lock / cache file reopened */
/* see how much we can really write there */
/* drop the written data from buffer */
/* FIXME: here would be a good place to set prev_offset to avoid doing
   it later, but avoid circular prev_offsets when cache is updated
   multiple times within the same transaction */
/* Here would be a good place to do fdatasync() to make sure everything
   is written before offsets are updated to index. However it slows
   down I/O needlessly and we're pretty good at catching and fixing
   cache corruption, so we no longer do it. */
/* if we rollback the transaction, we must not overwrite this area
   because it's already committed after updating the header offset */
/* after it's guaranteed to be in disk, update header offset */
/* we're adding the first field. hdr_copy needs to be kept in sync so
   unlocking won't overwrite it. */
/* we want to avoid adding all the fields one by one to the cache file,
   so just add all of them at once in here. the unused ones get dropped
   later when compressing. */
/* if we compressed the cache, the field should be there now. it's
   however possible that someone else just compressed it and we only
   reopened the cache file. */
/* re-read header to make sure we don't lose any fields. */
/* it was already added */
/* we wrote all the headers, so there are no pending changes */
"Cache file %s: Newly added field got "
/* cache was compressed within this transaction */
/* we'll have to add this field to headers */
/* remember roughly what we have modified, so cache lookups can look
   into transactions to see changes. */
/* remember that this value exists, in case we try to look it up */
/* time to flush our buffer. if flushing fails because the cache file
   had been compressed and was reopened, return without adding the
   cached data since cache_data buffer doesn't contain the cache_rec
   anymore. */
/* make sure the transaction is reset, so we don't constantly try to
   flush for each call to this function */
/* add it only if it's newer than what we would drop when compressing */
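Several of the reservation comments above describe the same trade-off: space given back at the very end of the file can be reclaimed by moving used_file_size back, while space in the middle has to be turned into a hole. A sketch of that decision, with invented names (the cache state struct and the hole-setup helper) standing in for the real ones:

    #include <stdint.h>

    struct cache_state { /* hypothetical stand-in for the cache struct */
            uint32_t used_file_size;
    };

    /* placeholder: real code would write a hole header at 'offset'
       and link it into the free list */
    static void set_up_as_hole(struct cache_state *cache,
                               uint32_t offset, uint32_t size)
    {
            (void)cache; (void)offset; (void)size;
    }

    static void free_reservation(struct cache_state *cache,
                                 uint32_t offset, uint32_t size)
    {
            if (offset + size == cache->used_file_size) {
                    /* reservation sits at the end of the file:
                       we can just set used_file_size back */
                    cache->used_file_size = offset;
            } else {
                    /* space in the middle of the file: set it up as a
                       hole so a later write can reuse it */
                    set_up_as_hole(cache, offset, size);
            }
    }

This also explains the "do it from end to beginning" comment: freeing the last reservation first keeps the remaining ones adjacent to the file end, so the cheap used_file_size branch fires more often than the hole branch.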
/* this function is called for each added cache record (or cache
   extension record update actually) with new_offset pointing to the
   new record and old_offset pointing to the previous record. we want
   to keep the old and new records linked so both old and new cached
   data is found. normally they are already linked correctly. the
   problem only comes when multiple processes are adding cache records
   at the same time. we'd rather not lose those additions, so force
   the linking order to be new_offset -> old_offset if it isn't. */
"Cache record offset %u points outside file"
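A sketch of the forced linking order the comment above describes: each record stores a prev_offset, and when two processes raced, the newest record is repointed at the older one so neither addition is lost. Struct and function names here are assumptions:

    #include <stdint.h>
    #include <stdbool.h>

    struct cache_record { /* hypothetical on-disk record header */
            uint32_t prev_offset; /* older record for the same message */
            uint32_t size;
    };

    /* force the linking order new_offset -> old_offset; returns false
       on the "Cache record offset %u points outside file" condition */
    static bool force_link_order(unsigned char *mmap_base,
                                 uint32_t file_size,
                                 uint32_t new_offset, uint32_t old_offset)
    {
            struct cache_record *rec;

            if (new_offset >= file_size || old_offset >= file_size)
                    return false; /* offset points outside file */

            rec = (struct cache_record *)(mmap_base + new_offset);
            if (rec->prev_offset == old_offset)
                    return true; /* link is already correct */

            /* two processes appended concurrently; repoint the newest
               record at the older one. a full implementation would
               splice the existing chain rather than overwrite it. */
            rec->prev_offset = old_offset;
            return true;
    }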
/* link is already correct */
/* we'll only update the deleted_space in header. we can't really do
   any actual deleting as other processes might still be using the
   data. also it's actually useful as some index views are still able
   to ask cached data from messages that have already been expunged. */
"record list is circular"
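Finally, deletion is logical rather than physical: the header's deleted_space counter grows and a later compression pass reclaims the bytes, while concurrent readers keep working. A sketch combining that accounting with the loop guard implied by "record list is circular"; names are again invented:

    #include <stdint.h>
    #include <stdbool.h>

    struct cache_header { /* hypothetical file header */
            uint32_t deleted_space; /* bytes logically freed, reclaimed
                                       by the next compression */
    };

    struct cache_record {
            uint32_t prev_offset;
            uint32_t size;
    };

    /* logically delete a record chain: bump deleted_space per link but
       leave the data on disk for concurrent readers. a step limit
       catches circular prev_offset lists. */
    static bool delete_record_chain(struct cache_header *hdr,
                                    const unsigned char *mmap_base,
                                    uint32_t file_size, uint32_t offset)
    {
            const struct cache_record *rec;
            uint32_t steps = 0;

            while (offset != 0) {
                    if (offset >= file_size)
                            return false; /* offset points outside file */
                    /* a valid chain can't have more links than records
                       that fit in the file */
                    if (++steps > file_size / sizeof(*rec))
                            return false; /* record list is circular */
                    rec = (const struct cache_record *)
                            (mmap_base + offset);
                    hdr->deleted_space += rec->size;
                    offset = rec->prev_offset;
            }
            return true;
    }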