1. List the files under the HDFS directory /user/hadoop1/myfile
[hadoop1@node1]$ hadoop fs -ls /user/hadoop1/myfile/
Found 1 items
-rw-r--r--   3 hadoop1 supergroup     567839 2014-10-29 16:50 /user/hadoop1/myfile/tb_class.txt
2. Create an external table pointing at the files under the myfile directory
hive (hxl)> create external table tb_class_info_external
          > (id int,
          > class_name string,
          > createtime timestamp,
          > modifytime timestamp)
          > ROW FORMAT DELIMITED
          > FIELDS TERMINATED BY '|'
          > location '/user/hadoop1/myfile';
OK
Time taken: 0.083 seconds
Note that location here refers to a path on HDFS, not on the local file system, and that the table tb_class_info_external will read every file under the myfile directory.
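If you want to confirm which directory the table is actually bound to, describe formatted prints the table's Location among its metadata; the command below is just a quick check (the URI shown in the output depends on your NameNode address):
hive (hxl)> describe formatted tb_class_info_external;
Look for the "Location:" line in the output, which should end with /user/hadoop1/myfile.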
3. Query the external table
hive (hxl)> select count(1) from tb_class_info_external;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Starting Job = job_201410300915_0009, Tracking URL = http://node1:50030/jobdetails.jsp?jobid=job_201410300915_0009
Kill Command = /usr1/hadoop/libexec/../bin/hadoop job -Dmapred.job.tracker=http://192.168.56.101:9001 -kill job_201410300915_0009
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2014-10-30 15:25:10,652 Stage-1 map = 0%, reduce = 0%
2014-10-30 15:25:12,664 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 0.48 sec
2014-10-30 15:25:13,671 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 0.48 sec
2014-10-30 15:25:14,682 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 0.48 sec
2014-10-30 15:25:15,690 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 0.48 sec
2014-10-30 15:25:16,697 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 0.48 sec
2014-10-30 15:25:17,704 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 0.48 sec
2014-10-30 15:25:18,710 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 0.48 sec
2014-10-30 15:25:19,718 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 0.48 sec
2014-10-30 15:25:20,725 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 1.21 sec
2014-10-30 15:25:21,730 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 1.21 sec
2014-10-30 15:25:22,737 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 1.21 sec
MapReduce Total cumulative CPU time: 1 seconds 210 msec
Ended Job = job_201410300915_0009
MapReduce Jobs Launched:
Job 0: Map: 1  Reduce: 1   Cumulative CPU: 1.21 sec   HDFS Read: 568052 HDFS Write: 6 SUCCESS
Total MapReduce CPU Time Spent: 1 seconds 210 msec
OK
10001
Time taken: 14.742 seconds
The table shows 10001 rows. Next we add another file, tb_class_bak.txt, to the myfile directory.
4. Add another file under the myfile directory
$ hadoop fs -cp /user/hadoop1/myfile/tb_class.txt /user/hadoop1/myfile/tb_class_bak.txt
5. Query the row count again
hive (hxl)> select count(1) from tb_class_info_external;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Starting Job = job_201410300915_0010, Tracking URL = http://node1:50030/jobdetails.jsp?jobid=job_201410300915_0010
Kill Command = /usr1/hadoop/libexec/../bin/hadoop job -Dmapred.job.tracker=http://192.168.56.101:9001 -kill job_201410300915_0010
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2014-10-30 15:32:02,275 Stage-1 map = 0%, reduce = 0%
2014-10-30 15:32:04,286 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 0.48 sec
2014-10-30 15:32:05,292 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 0.48 sec
2014-10-30 15:32:06,300 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 0.48 sec
2014-10-30 15:32:07,306 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 0.48 sec
2014-10-30 15:32:08,313 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 0.48 sec
2014-10-30 15:32:09,319 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 0.48 sec
2014-10-30 15:32:10,327 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 0.48 sec
2014-10-30 15:32:11,331 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 0.48 sec
2014-10-30 15:32:12,338 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 1.16 sec
2014-10-30 15:32:13,343 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 1.16 sec
2014-10-30 15:32:14,350 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 1.16 sec
MapReduce Total cumulative CPU time: 1 seconds 160 msec
Ended Job = job_201410300915_0010
MapReduce Jobs Launched:
Job 0: Map: 1  Reduce: 1   Cumulative CPU: 1.16 sec   HDFS Read: 1135971 HDFS Write: 6 SUCCESS
Total MapReduce CPU Time Spent: 1 seconds 160 msec
OK
20002
Time taken: 14.665 seconds
The row count has doubled, which shows that the table has picked up the newly added file; no ALTER or reload was required, because Hive scans whatever files sit under the table's directory at query time.
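As a convenience, the directory can also be listed without leaving the Hive CLI via the dfs command; this is equivalent to the hadoop fs -ls call used in step 1:
hive (hxl)> dfs -ls /user/hadoop1/myfile/;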
6. Drop the table
hive (hxl)> drop table tb_class_info_external;
OK
Time taken: 1.7 seconds
The files the table pointed at are not deleted:
[hadoop1@node1]$ hadoop fs -ls /user/hadoop1/myfile/
Found 2 items
-rw-r--r--   3 hadoop1 supergroup     567839 2014-10-29 16:50 /user/hadoop1/myfile/tb_class.txt
-rw-r--r--   3 hadoop1 supergroup     567839 2014-10-30 15:28 /user/hadoop1/myfile/tb_class_bak.txt
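For comparison, a managed (internal) table, i.e. one created without the external keyword and without a location clause, behaves differently: dropping it removes both the metadata and the data files under its warehouse directory. A minimal sketch, with a made-up table name (tb_class_info_managed):
hive (hxl)> create table tb_class_info_managed (id int, class_name string)
          > ROW FORMAT DELIMITED
          > FIELDS TERMINATED BY '|';
hive (hxl)> drop table tb_class_info_managed;
After the drop, the table's directory under the warehouse path (hive.metastore.warehouse.dir) is removed as well, which is exactly what the external keyword avoids.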
------------------------------------------------------ External partitioned table -------------------------------------------------
1. Create the directories for the external partitions
[flowrate@richinfo109 ~]$ hadoop fs -mkdir /tmp/bill/20161206
[flowrate@richinfo109 ~]$ hadoop fs -mkdir /tmp/bill/20161206/18
2. Copy a data file into the external directory
hadoop fs -cp /hive/warehouse/richmail.db/t_part_usernumber_t1/statedate=20161206/provcode=18/b.txt /tmp/bill/20161206/18/b.txt
3. Create the partitioned external table
create external table t_part_ext_usernumber_t1(
  usernumber string)
partitioned by (statedate string, provcode string)
row format delimited
fields terminated by '|';
No table-level location is specified here; each partition will be pointed at its own directory in the next step.
4. Add a new partition and point it at the external directory
alter table t_part_ext_usernumber_t1 add partition(statedate='20161206',provcode='18') location '/tmp/bill/20161206/18';
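The new partition should now be registered in the metastore, which can be checked before querying:
show partitions t_part_ext_usernumber_t1;
The expected output is a single entry, statedate=20161206/provcode=18.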
5. Query the data
select * from t_part_ext_usernumber_t1;
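Because the table is partitioned, filtering on the partition columns lets Hive prune to just the directory added above instead of reading every partition location; a typical filtered query would look like this:
select usernumber
from t_part_ext_usernumber_t1
where statedate = '20161206'
  and provcode = '18';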