Hive / Hadoop stuck at "Running job"


I want to run the Hadoop word count program on a document, but the program gets stuck at "Running job":

16/09/02 10:51:13 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/09/02 10:51:13 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
16/09/02 10:51:13 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
16/09/02 10:51:14 INFO input.FileInputFormat: Total input paths to process : 1
16/09/02 10:51:14 INFO mapreduce.JobSubmitter: number of splits:2
16/09/02 10:51:14 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1472783047951_0003
16/09/02 10:51:14 INFO impl.YarnClientImpl: Submitted application application_1472783047951_0003
16/09/02 10:51:14 INFO mapreduce.Job: The url to track the job: http://hadoop-master:8088/proxy/application_1472783047951_0003/
16/09/02 10:51:14 INFO mapreduce.Job: Running job: job_1472783047951_0003

The tracking page at http://hadoop-master:8088/proxy/application_1472783047951_0003/ shows the job's status (screenshot not included).

The ApplicationMaster runs on http://hadoop-slave2:8042, which shows the container status (screenshot not included).

However, since the word count job gets stuck, Hive gets stuck as well:

hive (default)> select a, b, count(1) as cnt from newtb group by a, b;
Query ID = hadoop_20160902110124_d2b2680b-c493-4986-aa84-f65794bfd8e4
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1472783047951_0004, Tracking URL = http://hadoop-master:8088/proxy/application_1472783047951_0004/
Kill Command = /opt/hadoop-2.6.4/bin/hadoop job -kill job_1472783047951_0004

There is nothing wrong with a plain select *, though:

hive (default)> select * from newtb;
OK
1   2   3
1   3   4
2   3   4
5   6   7
8   9   0
1   8   3
Time taken: 0.101 seconds, Fetched: 6 row(s)

So I think there is something wrong with MapReduce. There is enough disk space and memory. How can I solve this?

You are having these issues because the ApplicationMaster is unable to start containers and run the job. First try restarting the system; if that doesn't change anything, you will have to adjust the memory allocations in yarn-site.xml and mapred-site.xml. Go with basic memory settings.

Use the following link: http://www.alexjf.net/blog/distributed-systems/hadoop-yarn-installation-definitive-guide/#yarn-configuration_1
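As a rough sketch of what such settings look like, the fragment below uses standard YARN/MapReduce property names, but the values are placeholders for a small node with roughly 2 GB of RAM; they must be tuned to your actual hardware (container sizes must fit within yarn.nodemanager.resource.memory-mb, or the AM can never be allocated):

```xml
<!-- yarn-site.xml: example values only, tune to your node's RAM -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>1536</value> <!-- total memory YARN may hand out on this node -->
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>128</value>  <!-- smallest container YARN will grant -->
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>1536</value> <!-- largest single container request allowed -->
</property>

<!-- mapred-site.xml: example values only -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>256</value>  <!-- per map-task container -->
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>256</value>  <!-- per reduce-task container -->
</property>
<property>
  <name>yarn.app.mapreduce.am.resource.mb</name>
  <value>512</value>  <!-- the ApplicationMaster's own container -->
</property>
```

After changing these, restart the ResourceManager and NodeManagers so the new limits take effect.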

