This article covers running your first program on Hadoop, and how to debug it locally. If you have not yet set up a Hadoop environment, see the earlier article on installing and deploying Hadoop on a cluster.
A brief introduction to the Hadoop Map/Reduce framework
Hadoop Map/Reduce is an easy-to-use software framework: applications written against it can run on large clusters of thousands of commodity machines and process terabyte-scale datasets in parallel, in a reliable, fault-tolerant way.
A Map/Reduce job usually splits the input dataset into independent blocks, which are processed by map tasks in a completely parallel manner. The framework sorts the map outputs and then feeds the results to the reduce tasks. Typically both the input and the output of a job are stored in a file system. The framework takes care of scheduling and monitoring the tasks, and re-executes any that fail.
Typically the Map/Reduce framework and the distributed file system run on the same set of nodes; that is, the compute nodes and the storage nodes are usually the same. This configuration lets the framework schedule tasks efficiently on the nodes where the data already resides, making very effective use of the cluster's aggregate network bandwidth.
The Map/Reduce framework consists of a single master JobTracker and one slave TaskTracker per cluster node. The master schedules all the tasks that make up a job across the slaves, monitors their execution, and re-executes failed tasks; the slaves simply run the tasks the master assigns to them.
At a minimum, an application specifies the input/output locations (paths) and supplies map and reduce functions by implementing the appropriate interfaces or abstract classes. Together with other job parameters, these make up the job configuration. The Hadoop job client then submits the job (a jar, executable, etc.) and its configuration to the JobTracker, which distributes the software and configuration to the slaves, schedules the tasks, monitors their execution, and provides status and diagnostic information back to the job client.
Input and Output
The Map/Reduce framework operates exclusively on <key, value> pairs: the framework views the input to a job as a set of <key, value> pairs and produces a set of <key, value> pairs as the job's output, conceivably of different types.
The framework needs to serialize the key and value classes, so these classes must implement the Writable interface. In addition, to let the framework sort them, key classes must implement the WritableComparable interface.
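The Writable/WritableComparable contract can be illustrated with only the JDK. The class below is a hypothetical example, not one of Hadoop's actual types: the real interfaces declare write(DataOutput) and readFields(DataInput) with exactly these shapes, and keys are additionally comparable so the framework can sort them during the shuffle.

```java
import java.io.*;

// Sketch of the Writable/WritableComparable contract using only java.io:
// serialize with write(DataOutput), deserialize in place with readFields(DataInput),
// and provide an ordering for the sort phase.
public class IntPairKey implements Comparable<IntPairKey> {
    private int first;
    private int second;

    public IntPairKey() {}                       // no-arg constructor, needed for deserialization
    public IntPairKey(int f, int s) { first = f; second = s; }

    public void write(DataOutput out) throws IOException {
        out.writeInt(first);
        out.writeInt(second);
    }

    public void readFields(DataInput in) throws IOException {
        first = in.readInt();
        second = in.readInt();
    }

    @Override
    public int compareTo(IntPairKey o) {         // sort order used when keys are compared
        int c = Integer.compare(first, o.first);
        return c != 0 ? c : Integer.compare(second, o.second);
    }

    public int first()  { return first; }
    public int second() { return second; }

    public static void main(String[] args) throws IOException {
        // Round-trip: serialize, then deserialize into a fresh object.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        new IntPairKey(3, 7).write(new DataOutputStream(buf));
        IntPairKey copy = new IntPairKey();
        copy.readFields(new DataInputStream(new ByteArrayInputStream(buf.toByteArray())));
        System.out.println(copy.first() + "," + copy.second()); // prints 3,7
    }
}
```

A real key class would implement org.apache.hadoop.io.WritableComparable instead of plain Comparable, but the methods and the round-trip discipline are the same.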
The input and output types of a Map/Reduce job are as follows:
(input) <k1, v1> -> map -> <k2, v2> -> combine -> <k2, v2> -> reduce -> <k3, v3> (output)
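The flow above can be sketched in plain Java without the Hadoop runtime: map emits (word, 1) pairs, the framework groups and sorts them by key, and reduce sums each group. This is a minimal in-memory analogy for illustration, not Hadoop's actual API.

```java
import java.util.*;

public class MiniWordCount {
    // "map": tokenize one input line and emit (word, 1) pairs
    static List<Map.Entry<String, Integer>> map(String line) {
        List<Map.Entry<String, Integer>> out = new ArrayList<>();
        for (String w : line.split("\\s+")) {
            if (!w.isEmpty()) out.add(new AbstractMap.SimpleEntry<>(w, 1));
        }
        return out;
    }

    // "shuffle/sort": group values by key; TreeMap keeps keys in sorted order
    static SortedMap<String, List<Integer>> shuffle(List<Map.Entry<String, Integer>> pairs) {
        SortedMap<String, List<Integer>> groups = new TreeMap<>();
        for (Map.Entry<String, Integer> p : pairs) {
            groups.computeIfAbsent(p.getKey(), k -> new ArrayList<>()).add(p.getValue());
        }
        return groups;
    }

    // "reduce": sum the counts for each key
    static SortedMap<String, Integer> reduce(SortedMap<String, List<Integer>> groups) {
        SortedMap<String, Integer> result = new TreeMap<>();
        for (Map.Entry<String, List<Integer>> g : groups.entrySet()) {
            int sum = 0;
            for (int v : g.getValue()) sum += v;
            result.put(g.getKey(), sum);
        }
        return result;
    }

    public static void main(String[] args) {
        List<Map.Entry<String, Integer>> emitted = new ArrayList<>();
        emitted.addAll(map("Hello World Bye World"));
        emitted.addAll(map("Hello Hadoop Goodbye Hadoop"));
        System.out.println(reduce(shuffle(emitted)));
        // prints {Bye=1, Goodbye=1, Hadoop=2, Hello=2, World=2}
    }
}
```

In real Hadoop the combine step would also run this reduce logic on each mapper's local output before the shuffle, which is why WordCount can reuse its reducer as the combiner.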
Running the wordcount program
The wordcount program already ships with the Hadoop distribution, under {HADOOP_HOME}/src/examples.
- [hadoop@hadoop hadoop]$ cd /home/hadoop/
- [hadoop@hadoop hadoop]$ mkdir wordcount_classes
- [hadoop@hadoop hadoop]$ javac -classpath hadoop-0.17.2.1-core.jar -d wordcount_classes ./com/beoop/WordCount.java
- [hadoop@hadoop hadoop]$ jar -cvf /home/hadoop/wordcount.jar -C wordcount_classes/ .
Create a wordcount directory on HDFS:
- [hadoop@hadoop hadoop]$ ./bin/hadoop dfs -mkdir wordcount
- [hadoop@hadoop hadoop]$ ./bin/hadoop dfs -mkdir wordcount/input
Put the test files file01 and file02 into the input directory; the layout looks like this:
- [hadoop@hadoop hadoop]$ ./bin/hadoop dfs -ls wordcount/input/
- /user/hadoop/wordcount/input/file01
- /user/hadoop/wordcount/input/file02
The contents of the files in input; file01 and file02 can be uploaded from the local filesystem with dfs -put:
- [hadoop@hadoop hadoop]$ ./bin/hadoop dfs -cat /user/hadoop/wordcount/input/file01
- Hello World Bye World you are a big star
- [hadoop@hadoop hadoop]$ ./bin/hadoop dfs -cat /user/hadoop/wordcount/input/file02
- Hello Hadoop Goodbye Hadoop
Run the wordcount program. The jar file can live on the local filesystem, but the input and output paths must be on HDFS:
- [hadoop@hadoop hadoop]$ ./bin/hadoop jar /home/hadoop/wordcount.jar com.beoop.WordCount /user/hadoop/wordcount/input /user/hadoop/wordcount/output
- Run output:
- 08/12/11 19:39:39 INFO mapred.FileInputFormat: Total input paths to process : 2
- 08/12/11 19:39:39 INFO mapred.JobClient: Running job: job_200811260234_0027
- 08/12/11 19:39:40 INFO mapred.JobClient: map 0% reduce 0%
- 08/12/11 19:39:47 INFO mapred.JobClient: map 66% reduce 0%
- 08/12/11 19:39:48 INFO mapred.JobClient: map 100% reduce 0%
- 08/12/11 19:39:53 INFO mapred.JobClient: map 100% reduce 11%
- 08/12/11 19:39:55 INFO mapred.JobClient: map 100% reduce 100%
- 08/12/11 19:39:56 INFO mapred.JobClient: Job complete: job_200811260234_0027
- 08/12/11 19:39:56 INFO mapred.JobClient: Counters: 16
- 08/12/11 19:39:56 INFO mapred.JobClient: File Systems
- 08/12/11 19:39:56 INFO mapred.JobClient: Local bytes read=663
- 08/12/11 19:39:56 INFO mapred.JobClient: Local bytes written=1580
- 08/12/11 19:39:56 INFO mapred.JobClient: HDFS bytes read=242
- 08/12/11 19:39:56 INFO mapred.JobClient: HDFS bytes written=228
- 08/12/11 19:39:56 INFO mapred.JobClient: Job Counters
- 08/12/11 19:39:56 INFO mapred.JobClient: Launched map tasks=3
- 08/12/11 19:39:56 INFO mapred.JobClient: Launched reduce tasks=1
- 08/12/11 19:39:56 INFO mapred.JobClient: Data-local map tasks=3
- 08/12/11 19:39:56 INFO mapred.JobClient: Map-Reduce Framework
- 08/12/11 19:39:56 INFO mapred.JobClient: Map input records=4
- 08/12/11 19:39:56 INFO mapred.JobClient: Map output records=38
- 08/12/11 19:39:56 INFO mapred.JobClient: Map input bytes=199
- 08/12/11 19:39:56 INFO mapred.JobClient: Map output bytes=351
- 08/12/11 19:39:56 INFO mapred.JobClient: Combine input records=38
- 08/12/11 19:39:56 INFO mapred.JobClient: Combine output records=31
- 08/12/11 19:39:56 INFO mapred.JobClient: Reduce input groups=30
- 08/12/11 19:39:56 INFO mapred.JobClient: Reduce input records=31
- 08/12/11 19:39:56 INFO mapred.JobClient: Reduce output records=30
Local debugging
We can now run programs on Hadoop, but this is cumbersome for day-to-day debugging. IBM developed the IBM MapReduce Tools plugin for Eclipse (http://www.alphaworks.ibm.com/tech/mapreducetools) and has since donated it to Hadoop; from version 0.17 on, the plugin ships in the Hadoop distribution under contrib/eclipse-plugin, with some improvements over the IBM release. Hadoop has its own RPC framework, so the client-side hadoop-core.jar must match the server's version, otherwise the RPC protocol may be incompatible. For that reason, use the plugin bundled with Hadoop, to avoid mysterious problems.
After installing the plugin, restart Eclipse. As usual, choose New -> Project and select Map/Reduce Project, as shown in the figure below.
Click "Configure Hadoop install directory" on the right and select your local Hadoop directory.
Import the files from Hadoop's src/examples into the new project.
The WordCount.java we need is in org.apache.hadoop.examples.
An elephant icon appears in the control panel at the lower right of the workspace; clicking it brings up the Hadoop server configuration view.
Any name will do here; what matters are the host and port. The plugin defaults to localhost:50020, which must be changed to match the hadoop-site.xml used when Hadoop was deployed earlier.
One more thing to note: Map/Reduce Master corresponds to mapred.job.tracker, while DFS Master corresponds to fs.default.name.
I got these two reversed the first time I configured it, which produced the following error:
- 2008-12-10 02:38:06,434 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 9001, call getProtocolVersion(org.apache.hadoop.dfs.ClientProtocol, 29) from 10.10.1.34:2282: error: java.io.IOException: Unknown protocol to job tracker: org.apache.hadoop.dfs.ClientProtocol
- java.io.IOException: Unknown protocol to job tracker: org.apache.hadoop.dfs.ClientProtocol
- at org.apache.hadoop.mapred.JobTracker.getProtocolVersion(JobTracker.java:173)
- at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
- at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
- at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
- at java.lang.reflect.Method.invoke(Method.java:597)
- at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:446)
- at org.apache.hadoop.ipc.Server$Handler.run(Server.java:896)
According to the hadoop-site.xml configuration from the cluster deployment article:
- <property>
-   <name>fs.default.name</name>
-   <value>hdfs://hadoop:9000/</value>
- </property>
- <property>
-   <name>mapred.job.tracker</name>
-   <value>hadoop:9001</value>
- </property>
Fill in the matching host and port as shown in the figure above. If you have not configured hosts locally, use the IP address instead, or add an entry to C:\WINDOWS\system32\drivers\etc\hosts. The Advanced tab offers more fine-grained Hadoop settings, which are omitted here.
Set the input parameters in the run dialog.
Modify WordCount.java, adding the following two lines to the run method:
- conf.set("hadoop.job.ugi", "hadoop,hadoop"); // set the Hadoop user and group
- conf.set("mapred.system.dir", "/home/hadoop/HadoopInstall/tmp/mapred/system/"); // specify the system directory
Choose Run As -> Run on Hadoop. In the dialog that pops up, select the Hadoop server configured above; you can also configure a new server here.
If all goes well, the console shows the same output as the run above, and the status of the submitted job can be monitored at http://hadoop:50030/jobtracker.jsp.
Error analysis
The first run produced the following error, mainly caused by not having set the user name; setting it manually with conf.set as shown above fixes it.
- 08/12/11 14:33:04 WARN fs.FileSystem: uri=hdfs://hadoop:9000/
- javax.security.auth.login.LoginException: Login failed: Cannot run program "whoami": CreateProcess error=2, ?????????
- at org.apache.hadoop.security.UnixUserGroupInformation.login(UnixUserGroupInformation.java:250)
- at org.apache.hadoop.security.UnixUserGroupInformation.login(UnixUserGroupInformation.java:275)
- at org.apache.hadoop.security.UnixUserGroupInformation.login(UnixUserGroupInformation.java:257)
- at org.apache.hadoop.security.UserGroupInformation.login(UserGroupInformation.java:67)
- at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:1353)
- at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1289)
- at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:203)
- at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:108)
- at org.apache.hadoop.mapred.JobConf.getWorkingDirectory(JobConf.java:352)
- at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:331)
- at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:304)
- at org.apache.hadoop.examples.WordCount.run(WordCount.java:148)
- at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
- at org.apache.hadoop.examples.WordCount.main(WordCount.java:159)
- Exception in thread "main" java.lang.RuntimeException: java.io.IOException
- at org.apache.hadoop.mapred.JobConf.getWorkingDirectory(JobConf.java:356)
- at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:331)
- at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:304)
- at org.apache.hadoop.examples.WordCount.run(WordCount.java:148)
- at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
- at org.apache.hadoop.examples.WordCount.main(WordCount.java:159)
- Caused by: java.io.IOException
- at org.apache.hadoop.dfs.DFSClient.<init>(DFSClient.java:175)
- at org.apache.hadoop.dfs.DistributedFileSystem.initialize(DistributedFileSystem.java:68)
- at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1280)
- at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:56)
- at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1291)
- at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:203)
- at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:108)
- at org.apache.hadoop.mapred.JobConf.getWorkingDirectory(JobConf.java:352)
- ... 5 more
- Caused by: javax.security.auth.login.LoginException: Login failed: Cannot run program "whoami": CreateProcess error=2, ?????????
- at org.apache.hadoop.security.UnixUserGroupInformation.login(UnixUserGroupInformation.java:250)
- at org.apache.hadoop.security.UnixUserGroupInformation.login(UnixUserGroupInformation.java:275)
- at org.apache.hadoop.dfs.DFSClient.<init>(DFSClient.java:173)
- ... 12 more
The following error came up at run time, saying the file /home/hadoop/HadoopInstall/tmp/mapred/system/job_200811260234_0022/job.xml could not be found:
- 2008-12-10 21:26:48,680 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 9001, call submitJob(job_200811260234_0022) from 10.10.1.34:1328: error: java.io.IOException: /home/hadoop/HadoopInstall/tmp/mapred/system/job_200811260234_0022/job.xml: No such file or directory
- java.io.IOException: /home/hadoop/HadoopInstall/tmp/mapred/system/job_200811260234_0022/job.xml: No such file or directory
- at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:215)
- at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:149)
- at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1155)
- at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1136)
- at org.apache.hadoop.mapred.JobInProgress.<init>(JobInProgress.java:174)
- at org.apache.hadoop.mapred.JobTracker.submitJob(JobTracker.java:1755)
- at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
- at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
- at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
- at java.lang.reflect.Method.invoke(Method.java:597)
- at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:446)
- at org.apache.hadoop.ipc.Server$Handler.run(Server.java:896)
I created the system directory on the server (the Hadoop master host), but the error persisted; changing the mapred.system.dir property in the plugin's advanced settings also kept failing. It finally worked via:
- conf.set("mapred.system.dir", "/home/hadoop/HadoopInstall/tmp/mapred/system/");
With that, it ran successfully, though I never figured out why setting mapred.system.dir in the advanced settings had no effect. A bug in the plugin itself, perhaps?