Notes:
(1) Experiment environment.
Three servers: test165, test62, test63. test165 serves as both the JobTracker and a TaskTracker.
Test case: the SSSP (single-source shortest paths) example that ships with Giraph, run on simulated data I generated myself (a sample of the expected input format follows these notes).
Run command: hadoop jar giraph-examples-1.0.0-for-hadoop-0.20.203.0-jar-with-dependencies.jar org.apache.giraph.GiraphRunner org.apache.giraph.examples.SimpleShortestPathsVertex -vif org.apache.giraph.io.formats.JsonLongDoubleFloatDoubleVertexInputFormat -vip /user/giraph/SSSP -of org.apache.giraph.io.formats.IdWithValueTextOutputFormat -op /user/giraph/output-sssp-debug-7 -w 5
(2) To save space, all code below consists of core fragments only.
(3) hadoop.tmp.dir in core-site.xml is set to /home/hadoop/hadooptmp.
(4) This article was written over several debugging sessions, so the JobIDs shown in it differ; readers may treat them as one and the same JobID.
(5) Follow-up articles in this series observe the same conventions.
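For reference, JsonLongDoubleFloatDoubleVertexInputFormat reads one vertex per line in the form [vertexId, vertexValue, [[targetVertexId, edgeValue], ...]]; the simulated data mentioned in note (1) follows this shape. The lines below are illustrative sample data only, not the actual test input:

    [0,0,[[1,1],[3,3]]]
    [1,0,[[0,1],[2,2],[3,1]]]
    [2,0,[[1,2],[4,4]]]
    [3,0,[[0,3],[1,1],[4,4]]]
    [4,0,[[3,4],[2,4]]]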
1. The org.apache.giraph.graph.GraphMapper class
Giraph defines its own org.apache.giraph.graph.GraphMapper class, which extends Hadoop's org.apache.hadoop.mapreduce.Mapper.
Its Javadoc reads: "This mapper will execute the BSP graph tasks allotted to this worker. All tasks will be performed by calling the GraphTaskManager object managed by this GraphMapper wrapper class. Since this mapper will not be passing data by key-value pairs through the MR framework, the Mapper parameter types are irrelevant, and set to Object type."
In other words, the BSP computation logic is encapsulated in GraphMapper, which owns a GraphTaskManager object that manages the job's tasks. Each GraphMapper instance corresponds to one compute node in the BSP model.
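A minimal sketch of the class declaration implied by the Javadoc above (the I/V/E/M generic bounds are the standard Giraph/Hadoop writable types; treat the exact signature as an assumption):

    import org.apache.hadoop.io.Writable;
    import org.apache.hadoop.io.WritableComparable;
    import org.apache.hadoop.mapreduce.Mapper;

    // Since no key-value pairs flow through the MR framework, all four
    // Mapper type parameters are simply Object. I/V/E/M are the vertex id,
    // vertex value, edge value, and message value types of the graph job.
    public class GraphMapper<I extends WritableComparable,
        V extends Writable, E extends Writable, M extends Writable>
        extends Mapper<Object, Object, Object, Object> {
      // Owns the GraphTaskManager that performs all BSP work for this task.
      private GraphTaskManager<I, V, E, M> graphTaskManager;
    }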
GraphMapper's setup() method creates the GraphTaskManager object and calls its setup() method to do the initialization work, as follows:
    @Override
    public void setup(Context context)
      throws IOException, InterruptedException {
      // Execute all Giraph-related role(s) assigned to this compute node.
      // Roles can include "master," "worker," "zookeeper," or . . . ?
      graphTaskManager = new GraphTaskManager(context);
      graphTaskManager.setup(
        DistributedCache.getLocalCacheArchives(context.getConfiguration()));
    }
All of the actual work is then driven by the run() method:

    @Override
    public void run(Context context) throws IOException, InterruptedException {
      // Notify the master quicker if there is worker failure rather than
      // waiting for ZooKeeper to timeout and delete the ephemeral znodes
      try {
        setup(context);
        while (context.nextKeyValue()) {
          graphTaskManager.execute();
        }
        cleanup(context);
        // Checkstyle exception due to needing to dump ZooKeeper failure
      } catch (RuntimeException e) {
        graphTaskManager.zooKeeperCleanup();
        graphTaskManager.workerFailureCleanup();
      }
    }
2. The org.apache.giraph.graph.GraphTaskManager class
Purpose (from its Javadoc): "The Giraph-specific business logic for a single BSP compute node in whatever underlying type of cluster our Giraph job will run on. Owning object will provide the glue into the underlying cluster framework and will call this object to perform Giraph work."
Its setup() method is examined below:
    /**
     * Called by owner of this GraphTaskManager on each compute node
     * @param zkPathList the path to the ZK jars we need to run the job
     */
    public void setup(Path[] zkPathList)
      throws IOException, InterruptedException {
      context.setStatus("setup: Initializing Zookeeper services.");
      locateZookeeperClasspath(zkPathList);
      serverPortList = conf.getZookeeperList();
      if (serverPortList == null && startZooKeeperManager()) {
        return; // ZK connect/startup failed
      }
      if (zkManager != null && zkManager.runsZooKeeper()) {
        LOG.info("setup: Chosen to run ZooKeeper...");
      }
      context.setStatus("setup: Connected to Zookeeper service " +
          serverPortList);
      this.graphFunctions = determineGraphFunctions(conf, zkManager);
      instantiateBspService(serverPortList, sessionMsecTimeout);
    }
1) locateZookeeperClasspath(zkPathList): finds the local copy of the ZK jar, whose path here is /home/hadoop/hadooptmp/mapred/local/taskTracker/root/jobcache/job_201403270456_0001/jars/job.jar; this classpath is used later to start the ZooKeeper service.
2) startZooKeeperManager(): instantiates and configures the ZooKeeperManager. It is defined as follows:
    /**
     * Instantiate and configure ZooKeeperManager for this job. This will
     * result in a Giraph-owned Zookeeper instance, a connection to an
     * existing quorum as specified in the job configuration, or task failure
     * @return true if this task should terminate
     */
    private boolean startZooKeeperManager()
      throws IOException, InterruptedException {
      zkManager = new ZooKeeperManager(context, conf);
      context.setStatus("setup: Setting up Zookeeper manager.");
      zkManager.setup();
      if (zkManager.computationDone()) {
        done = true;
        return true;
      }
      zkManager.onlineZooKeeperServers();
      serverPortList = zkManager.getZooKeeperServerPortString();
      return false;
    }
The org.apache.giraph.zk.ZooKeeperManager class "manages the election of ZooKeeper servers, starting/stopping the services, etc." (per its Javadoc).
ZooKeeperManager's setup() is defined as follows:
    /**
     * Create the candidate stamps and decide on the servers to start if
     * you are partition 0.
     */
    public void setup() throws IOException, InterruptedException {
      createCandidateStamp();
      getZooKeeperServerList();
    }
The job was run with 5 workers (-w 5); together with one master that makes 6 tasks in total, each of which calls createCandidateStamp().
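What createCandidateStamp() does can be sketched as follows (a simplified sketch: fs, taskDirectory, taskPartition, and HOSTNAME_TASK_SEPARATOR are fields/constants of ZooKeeperManager, and logging/error handling is trimmed):

    // Each of the 6 tasks announces itself as a ZooKeeper server candidate
    // by creating an empty file "<hostname><separator><taskPartition>"
    // (e.g. "test162 0") under taskDirectory.
    public void createCandidateStamp() throws IOException {
      String myHostname =
          InetAddress.getLocalHost().getCanonicalHostName();
      Path myCandidacyPath = new Path(taskDirectory,
          myHostname + HOSTNAME_TASK_SEPARATOR + taskPartition);
      fs.createNewFile(myCandidacyPath);
    }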
In getZooKeeperServerList(), the task whose taskPartition is 0 calls createZooKeeperServerList() to create the ZooKeeper server list; this list is again an empty file whose filename encodes the chosen ZooKeeper servers.
The core of createZooKeeperServerList() is:
    /**
     * Task 0 will call this to create the ZooKeeper server list. The result is
     * a file that describes the ZooKeeper servers through the filename.
     */
    private void createZooKeeperServerList()
      throws IOException, InterruptedException {
      Map<String, Integer> hostnameTaskMap = Maps.newTreeMap();
      while (true) {
        FileStatus[] fileStatusArray = fs.listStatus(taskDirectory);
        hostnameTaskMap.clear();
        if (fileStatusArray.length > 0) {
          for (FileStatus fileStatus : fileStatusArray) {
            String[] hostnameTaskArray =
                fileStatus.getPath().getName().split(HOSTNAME_TASK_SEPARATOR);
            if (!hostnameTaskMap.containsKey(hostnameTaskArray[0])) {
              hostnameTaskMap.put(hostnameTaskArray[0],
                  new Integer(hostnameTaskArray[1]));
            }
          }
          if (hostnameTaskMap.size() >= serverCount) {
            break;
          }
          Thread.sleep(pollMsecs);
        }
      }
    }
Across repeated tests, task 0 was always elected as the ZooKeeper server. The reason: when task 0 scans taskDirectory, usually only its own candidate file exists yet (the other tasks' files have not been created), so after the for loop hostnameTaskMap has size 1, which already reaches serverCount (1 by default), and the while loop exits immediately. The election therefore picked "test162 0" here.
In the end it creates the file: _bsp/_defaultZkManagerDir/job_201403301409_0006/zkServerList_test162 0
onlineZooKeeperServers(): guided by the zkServerList_test162 0 file, task 0 first generates the zoo.cfg configuration file and spawns the ZooKeeper service process with a ProcessBuilder. Task 0 then connects to that process through a socket, and finally creates the file _bsp/_defaultZkManagerDir/job_201403301409_0006/_zkServer/test162 0 to mark that the master's part is done. Each worker keeps polling for this file in a loop, i.e. the workers wait until the ZooKeeper service on the master has started.
The command that starts the ZooKeeper service takes the following form.
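Assuming the ProcessBuilder mechanism described above, the launch amounts to roughly the following Java sketch. The heap size and the zoo.cfg location are illustrative assumptions; the classpath is the local job.jar found by locateZookeeperClasspath(), and QuorumPeerMain is ZooKeeper's standard server entry point:

    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    public class ZkLaunchSketch {
      public static void main(String[] args) throws IOException {
        String zkClasspath = "/home/hadoop/hadooptmp/mapred/local/taskTracker/"
            + "root/jobcache/job_201403270456_0001/jars/job.jar";
        String zkConfigFile = "/tmp/giraph-zk/zoo.cfg"; // hypothetical path
        List<String> commandList = new ArrayList<String>();
        commandList.add("java");
        commandList.add("-Xmx512m"); // illustrative JVM heap option
        commandList.add("-cp");
        commandList.add(zkClasspath);
        commandList.add("org.apache.zookeeper.server.quorum.QuorumPeerMain");
        commandList.add(zkConfigFile);
        // Equivalent command line:
        // java -Xmx512m -cp <job.jar> \
        //   org.apache.zookeeper.server.quorum.QuorumPeerMain <zoo.cfg>
        ProcessBuilder processBuilder = new ProcessBuilder(commandList);
        processBuilder.redirectErrorStream(true);
        Process zkProcess = processBuilder.start(); // ZooKeeper server process
        System.out.println("Launched ZooKeeper: " + zkProcess);
      }
    }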
3) determineGraphFunctions().
GraphTaskManager holds a CentralizedServiceMaster object and a CentralizedServiceWorker object, corresponding to the master and the worker respectively. The role(s) each BSP compute node plays are decided as follows:
a) If not split master, everyone does everything and/or runs ZooKeeper.
b) If split master/worker, masters also run ZooKeeper.
c) If split master/worker == true and giraph.zkList is set, the master will not instantiate a ZK instance, but will assume a quorum is already active on the cluster for Giraph to use.
This decision is implemented in the static method determineGraphFunctions() of GraphTaskManager; the core fragment is:
    private static GraphFunctions determineGraphFunctions(
        ImmutableClassesGiraphConfiguration conf,
        ZooKeeperManager zkManager) {
      boolean splitMasterWorker = conf.getSplitMasterWorker();
      int taskPartition = conf.getTaskPartition();
      boolean zkAlreadyProvided = conf.getZookeeperList() != null;
      GraphFunctions functions = GraphFunctions.UNKNOWN;
      // What functions should this mapper do?
      if (!splitMasterWorker) {
        if ((zkManager != null) && zkManager.runsZooKeeper()) {
          functions = GraphFunctions.ALL;
        } else {
          functions = GraphFunctions.ALL_EXCEPT_ZOOKEEPER;
        }
      } else {
        if (zkAlreadyProvided) {
          int masterCount = conf.getZooKeeperServerCount();
          if (taskPartition < masterCount) {
            functions = GraphFunctions.MASTER_ONLY;
          } else {
            functions = GraphFunctions.WORKER_ONLY;
          }
        } else {
          if ((zkManager != null) && zkManager.runsZooKeeper()) {
            functions = GraphFunctions.MASTER_ZOOKEEPER_ONLY;
          } else {
            functions = GraphFunctions.WORKER_ONLY;
          }
        }
      }
      return functions;
    }
By default, Giraph splits master and worker: the ZooKeeper service is started on the master and not on any worker. Hence task 0 acts as master plus ZooKeeper, and the remaining tasks act as workers.
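The branches above can be exercised from the job configuration. A hedged sketch follows; the option keys "giraph.SplitMasterWorker" and "giraph.zkList" follow Giraph 1.0 conventions, and the two settings are shown together only for illustration (a real job would set one or the other):

    import org.apache.giraph.conf.GiraphConfiguration;

    public class GraphFunctionsConfigSketch {
      public static void main(String[] args) {
        GiraphConfiguration conf = new GiraphConfiguration();
        // Case a): don't split master/worker; every task computes, and the
        // elected task also runs ZooKeeper (GraphFunctions.ALL).
        conf.setBoolean("giraph.SplitMasterWorker", false);
        // Case c): point Giraph at an already-running external quorum;
        // masters then skip starting their own ZooKeeper instance
        // (MASTER_ONLY / WORKER_ONLY).
        conf.set("giraph.zkList", "zkhost1:2181,zkhost2:2181");
      }
    }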