
Hello,

This past Monday I installed HDFS + HiveServer2 from the just-released CDH 5.0.1 distribution on our Hadoop lab cluster. When I execute the following using beeline:

beeline> !connect jdbc:hive2://localhost:10000 hive passwdmasked org.apache.hive.jdbc.HiveDriver
beeline> use s5;
0: jdbc:hive2://localhost:10000>
+--------------------------------------------+
| tab_name                                   |
+--------------------------------------------+
| sales_transaction                          |
| sales_transaction_region1                  |
+--------------------------------------------+
beeline> INSERT OVERWRITE TABLE `sales_transaction_region1` SELECT * FROM `sales_transaction` WHERE state in ('IN', 'TX', 'VA', 'CA');
Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask (state=08S01,code=1)

I am able to create both of the tables shown in that s5 database, but for some unknown reason I am unable to insert into that table, or even drop those tables. After that error message, when I query the sales_transaction_region1 table, it is populated with the right number of rows. When I run select count(*) from sales_transaction_region1, the Map job returns with NO error. This suggests to me that the MR job itself is working fine. I have spent more than 20 hours on this and decided it would be better to ask the experts in this group :-)

The setup (all from CDH 5):
- hiveserver2
- hive 0.12.0
- MR1
- hive-metastore
- No authentication (no Kerberos, no LDAP)
- The cluster consists of 1 namenode/jobtracker + 5 datanode/tasktracker nodes, so this is *not* in pseudo-distributed mode.

Things I have checked:

#1 HDFS /user/hive/ and /user/hive/warehouse are chmod 0777 and owned by hive:hive.

#2 I think I have followed all the instructions in the Cloudera write-ups on Hive configuration, but NOT the Kerberos/LDAP authentication.

#3 hive-site.xml config dump below:

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://localhost/metastore</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>passwrdmasked</value>
</property>
<property>
  <name>datanucleus.autoCreateSchema</name>
  <value>false</value>
</property>
<property>
  <name>datanucleus.fixedDatastore</name>
  <value>true</value>
</property>
<property>
  <name>datanucleus.autoStartMechanism</name>
  <value>SchemaTable</value>
</property>
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://localhost:9083</value>
  <description>IP address (or fully-qualified domain name) and port of the metastore host</description>
</property>
<property>
  <name>hive.support.concurrency</name>
  <description>Whether Hive supports concurrency or not. A ZooKeeper instance must be up and running for the default Hive lock manager to support read-write locks</description>
  <value>true</value>
</property>
<property>
  <name>hive.zookeeper.quorum</name>
  <description>ZooKeeper quorum used by Hive's Table Lock Manager</description>
  <value>MN.</value>
</property>
<property>
  <name>ipc.client.connection.maxidletime</name>
  <value>10000</value>
</property>
<property>
  <name>hive.metastore.client.socket.timeout</name>
  <value>3600</value>
  <description>MetaStore Client socket timeout in seconds</description>
</property>
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/user/hive/warehouse</value>
  <description>location of the warehouse directory</description>
</property>
<!-- my Hive experimental setting -->
<property>
  <name>hive.metastore.execute.setugi</name>
  <value>false</value>
  <description>metastore server to use the client's user and group permissions</description>
</property>
<property>
  <name>fs.hdfs.impl.disable.cache</name>
  <description>disable HDFS filesystem cache, default false</description>
  <value>true</value>
</property>
<property>
  <name>fs.file.impl.disable.cache</name>
  <description>disable local file system cache, default false</description>
  <value>true</value>
</property>
<property>
  <name>hive.server2.enable.impersonation</name>
  <description>Enable user impersonation for HiveServer2</description>
  <value>true</value>
</property>
<property>
  <name>hive.server2.enable.doAs</name>
  <description>this is the hadoop proxy user</description>
  <value>true</value>
</property>
<!-- Hive SECURITY Parameters -->
<property>
  <name>hive.server2.authentication</name>
  <value>NONE</value>
  <description>
    DISABLE CLIENT/SERVER Authentication.
    Client authentication types:
      NONE: no authentication check
      LDAP: LDAP/AD based authentication
      KERBEROS: Kerberos/GSSAPI authentication
      CUSTOM: Custom authentication provider
              (Use with property hive.server2.custom.authentication.class)
  </description>
</property>
</configuration>

Log file snippets from running the above query in Hue:

.....
14/05/15 10:49:06 INFO ql.Driver: Total MapReduce jobs = 3
14/05/15 10:49:06 INFO ql.Driver: Launching Job 1 out of 3
14/05/15 10:49:06 INFO exec.Task: Number of reduce tasks is set to 0 since there's no reduce operator
14/05/15 10:49:06 INFO mr.ExecDriver: Generating plan file file:/tmp/hive/hive__10-49-04_764_/-local-10004/plan.xml
14/05/15 10:49:06 INFO mr.ExecDriver: Executing: /usr/lib/hadoop/bin/hadoop jar /usr/lib/hive/lib/hive-common-0.12.0-cdh5.0.1.jar org.apache.hadoop.hive.ql.exec.mr.ExecDriver -plan file:/tmp/hive/hive__10-49-04_764_/-local-10004/plan.xml -jobconffile file:/tmp/hive/hive__10-49-04_764_/-local-10003/jobconf.xml
14/05/15 10:49:17 INFO exec.Task: Execution completed successfully
14/05/15 10:49:17 INFO exec.Task: MapredLocal task succeeded
14/05/15 10:49:17 INFO mr.ExecDriver: Execution completed successfully
14/05/15 10:49:17 INFO exec.Task: Stage-4 is selected by condition resolver.
14/05/15 10:49:17 INFO exec.Task: Stage-3 is filtered out by condition resolver.
14/05/15 10:49:17 INFO exec.Task: Stage-5 is filtered out by condition resolver.
14/05/15 10:49:17 INFO exec.Task: Moving data to: hdfs://MN.:8020/tmp/hive-hive/hive__10-49-04_764_/-ext-10000 from hdfs://MN.:8020/tmp/hive-hive/hive__10-49-04_764_/-ext-10002
14/05/15 10:49:17 INFO exec.Task: Loading data to table s5.interaction_handled from hdfs://MN.:8020/tmp/hive-hive/hive__10-49-04_764_/-ext-10000
14/05/15 10:49:18 ERROR exec.Task: Failed with exception Unable to alter table.
org.apache.hadoop.hive.ql.metadata.HiveException: Unable to alter table.
.....
14/05/15 10:49:18 ERROR ql.Driver: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask
14/05/15 10:49:18 ERROR operation.Operation: Error: org.apache.hive.service.cli.HiveSQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask

Thanks for your help, in advance.
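For anyone retracing the permission checks described above, a minimal shell sketch follows. The s5.db table path assumes Hive's default warehouse layout for a non-default database, and /tmp/hive-hive is taken from the MoveTask log lines; adjust both if the actual locations differ.

  # Ownership and mode of the warehouse parents and the target table directory
  hdfs dfs -ls -d /user/hive /user/hive/warehouse
  hdfs dfs -ls -d /user/hive/warehouse/s5.db/sales_transaction_region1

  # The HiveServer2 staging area referenced by the "Moving data to" log lines
  hdfs dfs -ls /tmp/hive-hive

Comparing the owner of these directories with the user actually running the query is usually the quickest way to see whether the move/alter step can be expected to succeed.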
Is the HDFS location of your table (e.g. /user/hive/warehouse/YOUR_TABLE) also writable by your user?

Romain
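One direct way to answer Romain's question is to attempt a write into the table directory as the connecting user. The path and user name below are assumptions based on the thread; with hive.server2.enable.doAs=true the query may actually run as the end user rather than hive, so the same test is worth repeating as that user.

  # Try creating and removing a zero-byte file in the table directory as the user in question
  sudo -u hive hdfs dfs -touchz /user/hive/warehouse/s5.db/sales_transaction_region1/_perm_test
  sudo -u hive hdfs dfs -rm /user/hive/warehouse/s5.db/sales_transaction_region1/_perm_test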
Romain,

> Is the HDFS location of your table (e.g. /user/hive/warehouse/YOUR_TABLE) also writable by your user?

Yes, it is writable by the world (and, to be sure, I even turned OFF the sticky bit). But thanks to you pointing at that, I found a bizarre thing happening regarding this; details below. Here I used the actual names of the tables; the ones in my original post are made-up (foobar) names.

1. In the attached screenshot, the top part shows the 0777 permissions of /user/hive/warehouse/YOUR_TABLE in HDFS. The bottom part shows the error I described in the original post when an INSERT OVERWRITE TABLE HiveQL statement is executed using beeline.
2. Then I executed the TRUNCATE TABLE command against that same database and table.
3. *After* I truncated the table, the permissions of /user/hive/warehouse/YOUR_TABLE changed from 0777 to 0755. What is causing that? This may be a symptom of the same problem(?).

Thanks,
Andy
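One plausible, unconfirmed explanation for the 0777 -> 0755 flip: TRUNCATE TABLE (and the final move step of INSERT OVERWRITE) removes and recreates the table directory, and a freshly created HDFS directory takes its mode from the server-side umask (022 yields 0755) unless Hive is configured to re-apply the parent directory's permissions. Two quick checks, sketched under those assumptions:

  # Default umask HDFS applies to newly created directories (022 => directories come out 0755)
  hdfs getconf -confKey fs.permissions.umask-mode

  # Whether this Hive build re-applies the parent directory's permissions to directories it recreates
  hive -e "set hive.warehouse.subdir.inherit.perms;"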
Hello Andy,

Can you do the following and upload the resulting log?

hive --hiveconf hive.root.logger=DEBUG,console -e "INSERT OVERWRITE TABLE `sales_transaction_region1` SELECT * FROM `sales_transaction` WHERE state in ('IN', 'TX', 'VA', 'CA');" > insert_query_debug.log

- Johndee Burks
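A small caveat about capturing that output: with the console logger, Hive's DEBUG lines typically go to stderr rather than stdout, so a plain > redirect can leave the log file nearly empty. A variant of the same command that also captures stderr is sketched below; the backticks around the table names are dropped here only so the shell does not treat them as command substitution inside the double quotes.

  hive --hiveconf hive.root.logger=DEBUG,console \
    -e "INSERT OVERWRITE TABLE sales_transaction_region1 SELECT * FROM sales_transaction WHERE state in ('IN', 'TX', 'VA', 'CA');" \
    > insert_query_debug.log 2>&1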
Hello Johndee,

Attached is the log.

I noticed the syntax you gave uses the Hive CLI, but my setup runs HiveServer2 (from CDH 5.0.1). Does the Hive CLI support HiveServer2?

Thanks,
Andy
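On the CLI-versus-HiveServer2 question: the Hive CLI does not go through HiveServer2 at all; it is a standalone client that talks to the metastore and runs the query in its own process, while beeline is the JDBC client for HiveServer2. Johndee's command therefore exercises the same metastore and HDFS paths but bypasses HiveServer2's doAs/impersonation settings. To reproduce the failure through HiveServer2 itself, a beeline run plus the HiveServer2 server log is the closer equivalent; the connection details and log location below are assumptions based on the thread and typical CDH packaging.

  beeline -u jdbc:hive2://localhost:10000 -n hive -p passwdmasked \
    -e "INSERT OVERWRITE TABLE sales_transaction_region1 SELECT * FROM sales_transaction WHERE state in ('IN', 'TX', 'VA', 'CA');"

  # Then collect the matching stack trace from the HiveServer2 server log,
  # e.g. somewhere under /var/log/hive/ (the exact file name depends on how CDH was installed).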