Redis Source Code Analysis (15) --- aof (append-only file)

來(lái)源:懂視網(wǎng) 責(zé)編:小采 時(shí)間:2020-11-09 14:35:21
繼續(xù)學(xué)習(xí)redis源碼下的Data數(shù)據(jù)相關(guān)文件的代碼分析,今天我看的是一個(gè)叫aof的文件,這個(gè)字母是append ONLY file的簡(jiǎn)稱,意味只進(jìn)行追加文件操作。這里的文件追加記錄時(shí)為了記錄數(shù)據(jù)操作的改變記錄,用以異常情況的數(shù)據(jù)恢復(fù)的。類似于之前我說(shuō)的redo,undo日志的作用。我們都知道,redis作為一個(gè)內(nèi)存數(shù)據(jù)庫(kù),數(shù)據(jù)的每次操作改變是先放在內(nèi)存中,等到內(nèi)存數(shù)據(jù)滿了,在刷新到磁盤文件中,達(dá)到持久化的目的。所以aof的操作模式,也是采用了這樣的方式。這里引入了一個(gè)block塊的概念,其實(shí)就是一個(gè)緩沖區(qū)塊。關(guān)于塊的一些定義如下:

/* The AOF code below uses a simple buffer block to accumulate records of
 * data-changing operations; once the buffer holds enough data, it is
 * persisted to a file. Redis writes in append mode, which means the
 * storage would have to grow with every append. Since a block cannot be
 * unbounded, Redis introduces a list of blocks: each block has a fixed
 * size, and once a block is full, writing continues in a new one. The
 * block size is defined as 10 MB. */
#define AOF_RW_BUF_BLOCK_SIZE (1024*1024*10) /* 10 MB per block */

/* 標(biāo)準(zhǔn)的aof文件讀寫塊 */
typedef struct aofrwblock {
	//當(dāng)前文件塊被使用了多少,空閑的大小
 unsigned long used, free;
 //具體存儲(chǔ)內(nèi)容,大小10M
 char buf[AOF_RW_BUF_BLOCK_SIZE];
} aofrwblock;
也就是說(shuō),每個(gè)塊的大小默認(rèn)為10M,這個(gè)大小說(shuō)大不大,說(shuō)小不小了,如果填入的數(shù)據(jù)超出長(zhǎng)度了,系統(tǒng)會(huì)動(dòng)態(tài)申請(qǐng)一個(gè)新的緩沖塊,在server端是通過(guò)一個(gè)塊鏈表的形式,組織整個(gè)塊的:
/* Append data to the AOF rewrite buffer, allocating new blocks if needed.
 * When the current block runs out of space, a new block is allocated. */
void aofRewriteBufferAppend(unsigned char *s, unsigned long len) {
    /* Locate the last block of the buffer list; appends go there. */
    listNode *ln = listLast(server.aof_rewrite_buf_blocks);
    aofrwblock *block = ln ? ln->value : NULL;

    while(len) {
        /* If we already got at least an allocated block, try appending
         * at least some piece into it. */
        if (block) {
            /* If the block's free space can hold (part of) len, write
             * directly into it. */
            unsigned long thislen = (block->free < len) ? block->free : len;
            if (thislen) {  /* The current block is not already full. */
                memcpy(block->buf+block->used, s, thislen);
                block->used += thislen;
                block->free -= thislen;
                s += thislen;
                len -= thislen;
            }
        }

        if (len) { /* First block to allocate, or need another block. */
            int numblocks;

            /* Not enough room: allocate a brand new block... */
            block = zmalloc(sizeof(*block));
            block->free = AOF_RW_BUF_BLOCK_SIZE;
            block->used = 0;
            /* ...and append it to the server's list of buffer blocks. */
            listAddNodeTail(server.aof_rewrite_buf_blocks,block);

            /* Log every time we cross more 10 or 100 blocks, respectively
             * as a notice or warning. */
            numblocks = listLength(server.aof_rewrite_buf_blocks);
            if (((numblocks+1) % 10) == 0) {
                int level = ((numblocks+1) % 100) == 0 ? REDIS_WARNING :
                                                         REDIS_NOTICE;
                redisLog(level,"Background AOF buffer size: %lu MB",
                    aofRewriteBufferSize()/(1024*1024));
            }
        }
    }
}
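The appending scheme above (fill the tail block, then chain a new one when it is full) can be exercised on its own. The following is a hypothetical miniature, not the Redis code: it uses tiny 8-byte blocks and a hand-rolled singly linked list in place of Redis's adlist, but the copy loop is the same shape as aofRewriteBufferAppend():

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define BLOCK_SIZE 8   /* tiny on purpose; Redis uses 10 MB */

typedef struct block {
    unsigned long used, free;
    char buf[BLOCK_SIZE];
    struct block *next;
} block;

typedef struct {
    block *head, *tail;
    int nblocks;
} blocklist;

static void list_append(blocklist *l, const char *s, unsigned long len) {
    block *b = l->tail;
    while (len) {
        if (b) {
            /* Copy as much as still fits into the tail block. */
            unsigned long thislen = (b->free < len) ? b->free : len;
            if (thislen) {
                memcpy(b->buf + b->used, s, thislen);
                b->used += thislen;
                b->free -= thislen;
                s += thislen;
                len -= thislen;
            }
        }
        if (len) {
            /* Tail block full (or list empty): chain a fresh block. */
            b = calloc(1, sizeof(*b));
            b->free = BLOCK_SIZE;
            if (l->tail) l->tail->next = b; else l->head = b;
            l->tail = b;
            l->nblocks++;
        }
    }
}
```

Appending 20 bytes to an empty list fills two 8-byte blocks completely and leaves 4 bytes in a third, which is exactly the behavior the 10 MB version exhibits at scale.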
當(dāng)想要主動(dòng)的將緩沖區(qū)中的數(shù)據(jù)刷新到持久化到磁盤中時(shí),調(diào)用下面的方法:
/* Write the append only file buffer on disk.
 *
 * Since we are required to write the AOF before replying to the client,
 * and the only way the client socket can get a write is entering when
 * the event loop, we accumulate all the AOF writes in a memory
 * buffer and write it on disk using this function just before entering
 * the event loop again.
 *
 * About the 'force' argument:
 *
 * When the fsync policy is set to 'everysec' we may delay the flush if there
 * is still an fsync() going on in the background thread, since for instance
 * on Linux write(2) will be blocked by the background fsync anyway.
 * When this happens we remember that there is some aof buffer to be
 * flushed ASAP, and will try to do that in the serverCron() function.
 *
 * However if force is set to 1 we'll write regardless of the background
 * fsync. */
#define AOF_WRITE_LOG_ERROR_RATE 30 /* Seconds between errors logging. */
/* Flush the contents of the AOF buffer to disk. */
void flushAppendOnlyFile(int force) {
    ssize_t nwritten;
    int sync_in_progress = 0;
    mstime_t latency;

    if (sdslen(server.aof_buf) == 0) return;

    if (server.aof_fsync == AOF_FSYNC_EVERYSEC)
        sync_in_progress = bioPendingJobsOfType(REDIS_BIO_AOF_FSYNC) != 0;

    if (server.aof_fsync == AOF_FSYNC_EVERYSEC && !force) {
        /* With this append fsync policy we do background fsyncing.
         * If the fsync is still in progress we can try to delay
         * the write for a couple of seconds. */
        if (sync_in_progress) {
            if (server.aof_flush_postponed_start == 0) {
                /* No previous write postponing, remember that we are
                 * postponing the flush and return. */
                server.aof_flush_postponed_start = server.unixtime;
                return;
            } else if (server.unixtime - server.aof_flush_postponed_start < 2) {
                /* We were already waiting for fsync to finish, but for less
                 * than two seconds this is still ok. Postpone again. */
                return;
            }
            /* Otherwise fall through, and go write since we can't wait
             * over two seconds. */
            server.aof_delayed_fsync++;
            redisLog(REDIS_NOTICE,"Asynchronous AOF fsync is taking too long (disk is busy?). Writing the AOF buffer without waiting for fsync to complete, this may slow down Redis.");
        }
    }
    /* We want to perform a single write. This should be guaranteed atomic
     * at least if the filesystem we are writing is a real physical one.
     * While this will save us against the server being killed I don't think
     * there is much to do about the whole server stopping for power problems
     * or alike */

    /* The write itself is also monitored for latency. */
    latencyStartMonitor(latency);
    nwritten = write(server.aof_fd,server.aof_buf,sdslen(server.aof_buf));
    latencyEndMonitor(latency);
    /* We want to capture different events for delayed writes:
     * when the delay happens with a pending fsync, or with a saving child
     * active, and when the above two conditions are missing.
     * We also use an additional event name to save all samples which is
     * useful for graphing / monitoring purposes. */
    if (sync_in_progress) {
        latencyAddSampleIfNeeded("aof-write-pending-fsync",latency);
    } else if (server.aof_child_pid != -1 || server.rdb_child_pid != -1) {
        latencyAddSampleIfNeeded("aof-write-active-child",latency);
    } else {
        latencyAddSampleIfNeeded("aof-write-alone",latency);
    }
    latencyAddSampleIfNeeded("aof-write",latency);

    /* We performed the write so reset the postponed flush sentinel to zero. */
    server.aof_flush_postponed_start = 0;

    if (nwritten != (signed)sdslen(server.aof_buf)) {
        static time_t last_write_error_log = 0;
        int can_log = 0;

        /* Limit logging rate to 1 line per AOF_WRITE_LOG_ERROR_RATE seconds. */
        if ((server.unixtime - last_write_error_log) > AOF_WRITE_LOG_ERROR_RATE) {
            can_log = 1;
            last_write_error_log = server.unixtime;
        }

        /* Log the AOF write error and record the error code. */
        if (nwritten == -1) {
            if (can_log) {
                redisLog(REDIS_WARNING,"Error writing to the AOF file: %s",
                    strerror(errno));
                server.aof_last_write_errno = errno;
            }
        } else {
            if (can_log) {
                redisLog(REDIS_WARNING,"Short write while writing to "
                                       "the AOF file: (nwritten=%lld, "
                                       "expected=%lld)",
                                       (long long)nwritten,
                                       (long long)sdslen(server.aof_buf));
            }

            if (ftruncate(server.aof_fd, server.aof_current_size) == -1) {
                if (can_log) {
                    redisLog(REDIS_WARNING, "Could not remove short write "
                             "from the append-only file. Redis may refuse "
                             "to load the AOF the next time it starts. "
                             "ftruncate: %s", strerror(errno));
                }
            } else {
                /* If the ftruncate() succeeded we can set nwritten to
                 * -1 since there is no longer partial data into the AOF. */
                nwritten = -1;
            }
            server.aof_last_write_errno = ENOSPC;
        }

        /* Handle the AOF write error. */
        if (server.aof_fsync == AOF_FSYNC_ALWAYS) {
            /* We can't recover when the fsync policy is ALWAYS since the
             * reply for the client is already in the output buffers, and we
             * have the contract with the user that on acknowledged write data
             * is synched on disk. */
            redisLog(REDIS_WARNING,"Can't recover from AOF write error when the AOF fsync policy is 'always'. Exiting...");
            exit(1);
        } else {
            /* Recover from failed write leaving data into the buffer. However
             * set an error to stop accepting writes as long as the error
             * condition is not cleared. */
            server.aof_last_write_status = REDIS_ERR;

            /* Trim the sds buffer if there was a partial write, and there
             * was no way to undo it with ftruncate(2). */
            if (nwritten > 0) {
                server.aof_current_size += nwritten;
                sdsrange(server.aof_buf,nwritten,-1);
            }
            return; /* We'll try again on the next call... */
        }
    } else {
        /* Successful write(2). If AOF was in error state, restore the
         * OK state and log the event. */
        if (server.aof_last_write_status == REDIS_ERR) {
            redisLog(REDIS_WARNING,
                "AOF write error looks solved, Redis can write again.");
            server.aof_last_write_status = REDIS_OK;
        }
    }
    server.aof_current_size += nwritten;

    /* Re-use AOF buffer when it is small enough. The maximum comes from the
     * arena size of 4k minus some overhead (but is otherwise arbitrary). */
    if ((sdslen(server.aof_buf)+sdsavail(server.aof_buf)) < 4000) {
        sdsclear(server.aof_buf);
    } else {
        sdsfree(server.aof_buf);
        server.aof_buf = sdsempty();
    }

    /* Don't fsync if no-appendfsync-on-rewrite is set to yes and there are
     * children doing I/O in the background. */
    if (server.aof_no_fsync_on_rewrite &&
        (server.aof_child_pid != -1 || server.rdb_child_pid != -1))
            return;

    /* Perform the fsync if needed. */
    if (server.aof_fsync == AOF_FSYNC_ALWAYS) {
        /* aof_fsync is defined as fdatasync() for Linux in order to avoid
         * flushing metadata. */
        latencyStartMonitor(latency);
        aof_fsync(server.aof_fd); /* Let's try to get this data on the disk */
        latencyEndMonitor(latency);
        latencyAddSampleIfNeeded("aof-fsync-always",latency);
        server.aof_last_fsync = server.unixtime;
    } else if ((server.aof_fsync == AOF_FSYNC_EVERYSEC &&
                server.unixtime > server.aof_last_fsync)) {
        if (!sync_in_progress) aof_background_fsync(server.aof_fd);
        server.aof_last_fsync = server.unixtime;
    }
}
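The everysec postponement rule buried in the middle of that function is worth isolating: while a background fsync is in flight, the write(2) may be postponed, but never for 2 seconds or more. A minimal sketch, where decide_flush() is a hypothetical helper standing in for the inline checks (it is not a Redis function):

```c
#include <assert.h>
#include <time.h>

typedef enum { FLUSH_NOW, FLUSH_POSTPONE } flush_decision;

/* Distilled everysec logic: postponed_start is 0 when no flush is
 * currently postponed; otherwise it records when postponing began. */
static flush_decision decide_flush(int sync_in_progress, int force,
                                   time_t now, time_t *postponed_start) {
    if (!force && sync_in_progress) {
        if (*postponed_start == 0) {
            *postponed_start = now;      /* start postponing */
            return FLUSH_POSTPONE;
        }
        if (now - *postponed_start < 2)  /* under 2s: keep waiting */
            return FLUSH_POSTPONE;
        /* Waited 2s or more: write anyway despite the pending fsync. */
    }
    return FLUSH_NOW;
}
```

Calling this once per second while a background fsync never finishes shows the pattern: postpone at t, postpone again at t+1, then give up and write at t+2.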
當(dāng)然有操作會(huì)對(duì)數(shù)據(jù)庫(kù)中的所有數(shù)據(jù),做操作記錄,便宜用此文件進(jìn)行全盤恢復(fù):
/* Write a sequence of commands able to fully rebuild the dataset into
 * "filename". Used both by REWRITEAOF and BGREWRITEAOF.
 *
 * In order to minimize the number of commands needed in the rewritten
 * log Redis uses variadic commands when possible, such as RPUSH, SADD
 * and ZADD. However at max REDIS_AOF_REWRITE_ITEMS_PER_CMD items per time
 * are inserted using a single command. */
/* Completely rewrite the database contents, key by key, into the file. */
int rewriteAppendOnlyFile(char *filename) {
    dictIterator *di = NULL;
    dictEntry *de;
    rio aof;
    FILE *fp;
    char tmpfile[256];
    int j;
    long long now = mstime();

    /* Note that we have to use a different temp name here compared to the
     * one used by rewriteAppendOnlyFileBackground() function. */
    snprintf(tmpfile,256,"temp-rewriteaof-%d.aof", (int) getpid());
    fp = fopen(tmpfile,"w");
    if (!fp) {
        redisLog(REDIS_WARNING, "Opening the temp file for AOF rewrite in rewriteAppendOnlyFile(): %s", strerror(errno));
        return REDIS_ERR;
    }

    rioInitWithFile(&aof,fp);
    if (server.aof_rewrite_incremental_fsync)
        rioSetAutoSync(&aof,REDIS_AOF_AUTOSYNC_BYTES);
    for (j = 0; j < server.dbnum; j++) {
        char selectcmd[] = "*2\r\n$6\r\nSELECT\r\n";
        redisDb *db = server.db+j;
        dict *d = db->dict;
        if (dictSize(d) == 0) continue;
        di = dictGetSafeIterator(d);
        if (!di) {
            fclose(fp);
            return REDIS_ERR;
        }

        /* SELECT the new DB */
        if (rioWrite(&aof,selectcmd,sizeof(selectcmd)-1) == 0) goto werr;
        if (rioWriteBulkLongLong(&aof,j) == 0) goto werr;

        /* Iterate this DB, writing a log entry for every record. */
        while((de = dictNext(di)) != NULL) {
            sds keystr;
            robj key, *o;
            long long expiretime;

            keystr = dictGetKey(de);
            o = dictGetVal(de);
            initStaticStringObject(key,keystr);

            expiretime = getExpire(db,&key);

            /* If this key is already expired skip it */
            if (expiretime != -1 && expiretime < now) continue;

            /* Save the key and associated value */
            if (o->type == REDIS_STRING) {
                /* Emit a SET command */
                char cmd[]="*3\r\n$3\r\nSET\r\n";
                if (rioWrite(&aof,cmd,sizeof(cmd)-1) == 0) goto werr;
                /* Key and value */
                if (rioWriteBulkObject(&aof,&key) == 0) goto werr;
                if (rioWriteBulkObject(&aof,o) == 0) goto werr;
            } else if (o->type == REDIS_LIST) {
                if (rewriteListObject(&aof,&key,o) == 0) goto werr;
            } else if (o->type == REDIS_SET) {
                if (rewriteSetObject(&aof,&key,o) == 0) goto werr;
            } else if (o->type == REDIS_ZSET) {
                if (rewriteSortedSetObject(&aof,&key,o) == 0) goto werr;
            } else if (o->type == REDIS_HASH) {
                if (rewriteHashObject(&aof,&key,o) == 0) goto werr;
            } else {
                redisPanic("Unknown object type");
            }
            /* Save the expire time */
            if (expiretime != -1) {
                char cmd[]="*3\r\n$9\r\nPEXPIREAT\r\n";
                if (rioWrite(&aof,cmd,sizeof(cmd)-1) == 0) goto werr;
                if (rioWriteBulkObject(&aof,&key) == 0) goto werr;
                if (rioWriteBulkLongLong(&aof,expiretime) == 0) goto werr;
            }
        }
        dictReleaseIterator(di);
    }

    /* Make sure data will not remain on the OS's output buffers */
    if (fflush(fp) == EOF) goto werr;
    if (fsync(fileno(fp)) == -1) goto werr;
    if (fclose(fp) == EOF) goto werr;

    /* Use RENAME to make sure the DB file is changed atomically only
     * if the generate DB file is ok. */
    if (rename(tmpfile,filename) == -1) {
        redisLog(REDIS_WARNING,"Error moving temp append only file on the final destination: %s", strerror(errno));
        unlink(tmpfile);
        return REDIS_ERR;
    }
    redisLog(REDIS_NOTICE,"SYNC append only file rewrite performed");
    return REDIS_OK;

werr:
    fclose(fp);
    unlink(tmpfile);
    redisLog(REDIS_WARNING,"Write error writing append only file on disk: %s", strerror(errno));
    if (di) dictReleaseIterator(di);
    return REDIS_ERR;
}
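Notice the literals such as "*3\r\n$3\r\nSET\r\n" above: the rewritten file contains ordinary RESP multi-bulk commands, "*&lt;argc&gt;\r\n" followed by "$&lt;len&gt;\r\n&lt;arg&gt;\r\n" per argument, so it can be replayed through the normal command path on load. A sketch with a hypothetical resp_set() helper (not a Redis API) shows the full wire form of one SET:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Format a complete RESP SET command into out; returns the length
 * snprintf() would have written. */
static int resp_set(char *out, size_t outlen,
                    const char *key, const char *val) {
    return snprintf(out, outlen,
                    "*3\r\n$3\r\nSET\r\n$%zu\r\n%s\r\n$%zu\r\n%s\r\n",
                    strlen(key), key, strlen(val), val);
}
```

For key "k" and value "hello" this produces `*3\r\n$3\r\nSET\r\n$1\r\nk\r\n$5\r\nhello\r\n`, which is exactly what the rewrite emits via rioWrite() and rioWriteBulkObject().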
系統(tǒng)同樣開放了后臺(tái)的此方法操作:
/* This is how rewriting of the append only file in background works:
 *
 * 1) The user calls BGREWRITEAOF
 * 2) Redis calls this function, that forks():
 *    2a) the child rewrite the append only file in a temp file.
 *    2b) the parent accumulates differences in server.aof_rewrite_buf.
 * 3) When the child finishes '2a' it exits.
 * 4) The parent will trap the exit code, if it's OK, will append the
 *    data accumulated into server.aof_rewrite_buf into the temp file, and
 *    finally will rename(2) the temp file in the actual file name.
 *    The new file is reopened as the new append only file. Profit!
 */
/* Rewrite the AOF data file in the background. */
int rewriteAppendOnlyFileBackground(void)
The principle is the same as what we analyzed yesterday: fork() is used to create a child process that does the rewrite. Finally, here are the APIs this file exposes:
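The fork-based flow in the comment above can be sketched in a few lines. This is a POSIX-only toy, with an illustrative file name and a synchronous waitpid() where Redis actually reaps the child asynchronously from serverCron(); it only demonstrates the fork / child-writes-temp-file / parent-checks-exit-code shape:

```c
#include <assert.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

/* Hypothetical miniature of BGREWRITEAOF: the child produces a temp
 * "AOF", the parent traps the exit code. Returns 0 on success. */
static int background_rewrite(const char *tmpfile) {
    pid_t pid = fork();
    if (pid == 0) {
        /* Child: write the rewritten "AOF" contents and exit. */
        FILE *fp = fopen(tmpfile, "w");
        if (!fp) _exit(1);
        fputs("*2\r\n$6\r\nSELECT\r\n$1\r\n0\r\n", fp);
        fclose(fp);
        _exit(0);
    }
    if (pid < 0) return -1;       /* fork failed */
    int status;
    waitpid(pid, &status, 0);     /* Redis does this via SIGCHLD instead */
    return (WIFEXITED(status) && WEXITSTATUS(status) == 0) ? 0 : -1;
}
```

After a successful run the parent would append the accumulated rewrite buffer and rename(2) the temp file into place, as step 4 of the comment describes.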
/* aof.c API */
void aofRewriteBufferReset(void) /* Free the server's old buffer and create a fresh one */
unsigned long aofRewriteBufferSize(void) /* Return the total size of the current AOF buffer */
void aofRewriteBufferAppend(unsigned char *s, unsigned long len) /* Append data to the buffer, allocating a new block when space runs out */
ssize_t aofRewriteBufferWrite(int fd) /* Write the in-memory buffer contents to a file, block by block */
void aof_background_fsync(int fd) /* Start a background thread to fsync the file */
void stopAppendOnly(void) /* Stop appending data; implemented in command style */
int startAppendOnly(void) /* Enable append mode */
void flushAppendOnlyFile(int force) /* Flush the contents of the buffer to disk */
sds catAppendOnlyGenericCommand(sds dst, int argc, robj **argv) /* Wrap the given arguments into command form and output them */
sds catAppendOnlyExpireAtCommand(sds buf, struct redisCommand *cmd, robj *key, robj *seconds) /* Translate the various expire commands into PEXPIREAT, converting the time to an absolute timestamp */
void feedAppendOnlyFile(struct redisCommand *cmd, int dictid, robj **argv, int argc) /* Translate commands differently depending on cmd */
struct redisClient *createFakeClient(void) /* Commands are always executed by a client, so a fake client is introduced */
void freeFakeClientArgv(struct redisClient *c) /* Free the fake client's arguments */
void freeFakeClient(struct redisClient *c) /* Free the fake client */
int loadAppendOnlyFile(char *filename) /* Load the contents of an AOF file */
int rioWriteBulkObject(rio *r, robj *obj) /* Write a bulk object, either a LongLong or a plain String */
int rewriteListObject(rio *r, robj *key, robj *o) /* Write a List object, handling both ZIPLIST and LINKEDLIST encodings */
int rewriteSetObject(rio *r, robj *key, robj *o) /* Write a Set object */
int rewriteSortedSetObject(rio *r, robj *key, robj *o) /* Write a sorted set object */
static int rioWriteHashIteratorCursor(rio *r, hashTypeIterator *hi, int what) /* Write the object the hash iterator currently points to */
int rewriteHashObject(rio *r, robj *key, robj *o) /* Write a hash object */
int rewriteAppendOnlyFile(char *filename) /* Completely rewrite the database contents, key by key, into the file */
int rewriteAppendOnlyFileBackground(void) /* Rewrite the AOF data file in the background */
void bgrewriteaofCommand(redisClient *c) /* Command-style entry point for the background AOF rewrite */
void aofRemoveTempFile(pid_t childpid) /* Remove the temporary aof file produced by the child with pid childpid */
void aofUpdateCurrentSize(void) /* Update the recorded size of the current aof file */
void backgroundRewriteDoneHandler(int exitcode, int bysignal) /* Callback invoked when the background child finishes writing */

聲明:本網(wǎng)頁(yè)內(nèi)容旨在傳播知識(shí),若有侵權(quán)等問(wèn)題請(qǐng)及時(shí)與本網(wǎng)聯(lián)系,我們將在第一時(shí)間刪除處理。TEL:177 7030 7066 E-MAIL:11247931@qq.com

文檔

Redis源碼解析(十五)---aof-appendonlyfile解析

Redis源碼解析(十五)---aof-appendonlyfile解析:繼續(xù)學(xué)習(xí)redis源碼下的Data數(shù)據(jù)相關(guān)文件的代碼分析,今天我看的是一個(gè)叫aof的文件,這個(gè)字母是append ONLY file的簡(jiǎn)稱,意味只進(jìn)行追加文件操作。這里的文件追加記錄時(shí)為了記錄數(shù)據(jù)操作的改變記錄,用以異常情況的數(shù)據(jù)恢復(fù)的。類似于之前我說(shuō)的redo,un
推薦度:
標(biāo)簽: 解析 十五 源碼
  • 熱門焦點(diǎn)

最新推薦

猜你喜歡

熱門推薦

專題
Top
主站蜘蛛池模板: 欧美一区二区在线视频 | 日本黄一级日本黄二级 | 色综合欧美综合天天综合 | 日韩欧美一区二区三区久久 | 国产aaaaa一级毛片无下载 | 欧美日韩在线观看视频 | 亚洲国产欧美91 | 久久亚洲私人国产精品va | 福利毛片 | 欧美一区二区三区四区在线观看 | 国产一级片在线 | 一级黄免费| 亚洲视频在线观看网站 | 精品一区二区视频 | 精品国产一区二区三区久久影院 | 国产区二区| 在线一区观看 | 亚洲一区中文字幕在线 | 国产精品免费网站 | 精品视频在线观看视频免费视频 | 国产在线视频在线 | 国产精品久久久久久久久久久久 | 国产成人精品免费视频大全可播放的 | 国内精品视频在线播放 | 久久久久成人精品一区二区 | 欧美成a人片在线观看 | 亚洲国产精品热久久 | 亚洲一区 中文字幕 久久 | 久久精品一区 | 国产精品视频一区二区噜噜 | 国产精品日本 | 高清亚洲 | 国产成人精品视频一区二区不卡 | 欧美日韩综合精品一区二区三区 | 亚洲欧美另类第一页 | 国产精品亚洲国产三区 | 在线精品欧美日韩 | 国产精品123区 | 色综合久久中文字幕综合网 | 最新国产精品视频 | 国产一区亚洲欧美成人 |