

Hive External Tables

When creating a table, you can add the external keyword to create an external table. The files backing an external table live in the directory named by the location clause: when new files are added to that directory, the table reads them as well (provided, of course, that the file format matches the table definition), and dropping the external table does not delete the files under the location directory.
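For contrast, here is a minimal sketch of the managed-table counterpart (the table name tb_class_info_managed is made up for illustration): without the external keyword, Hive stores the data under its own warehouse directory and removes it when the table is dropped.

    create table tb_class_info_managed
    (id int,
     class_name string,
     createtime timestamp,
     modifytime timestamp)
    ROW FORMAT DELIMITED
    FIELDS TERMINATED BY '|';
    -- drop table tb_class_info_managed;  -- would delete the warehouse files too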

1. List the files under the HDFS directory /user/hadoop1/myfile

    [hadoop1@node1]$ hadoop fs -ls /user/hadoop1/myfile/
    Found 1 items
    -rw-r--r--   3 hadoop1 supergroup     567839 2014-10-29 16:50 /user/hadoop1/myfile/tb_class.txt
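Before binding a table to this directory, it can help to eyeball a few rows of the data file and confirm the field delimiter; a quick check using standard hadoop fs commands:

    hadoop fs -cat /user/hadoop1/myfile/tb_class.txt | head -3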

2. Create an external table pointing at the files under the myfile directory

    hive (hxl)> create external table tb_class_info_external
              > (id int,
              > class_name string,
              > createtime timestamp,
              > modifytime timestamp)
              > ROW FORMAT DELIMITED
              > FIELDS TERMINATED BY '|'
              > location '/user/hadoop1/myfile';
    OK
    Time taken: 0.083 seconds

Note that location here refers to a path on HDFS, not on the local machine; the table tb_class_info_external will read every file under the myfile directory.
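To confirm which HDFS directory a table is bound to, Hive's describe formatted output includes a Location field (a sanity check not shown in the original session):

    hive (hxl)> describe formatted tb_class_info_external;
    -- look for the Location: line, which should end in /user/hadoop1/myfile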

3. Query the external table

    hive (hxl)> select count(1) from tb_class_info_external;
    Total MapReduce jobs = 1
    Launching Job 1 out of 1
    Number of reduce tasks determined at compile time: 1
    In order to change the average load for a reducer (in bytes):
      set hive.exec.reducers.bytes.per.reducer=<number>
    In order to limit the maximum number of reducers:
      set hive.exec.reducers.max=<number>
    In order to set a constant number of reducers:
      set mapred.reduce.tasks=<number>
    Starting Job = job_201410300915_0009, Tracking URL = http://node1:50030/jobdetails.jsp?jobid=job_201410300915_0009
    Kill Command = /usr1/hadoop/libexec/../bin/hadoop job  -Dmapred.job.tracker=http://192.168.56.101:9001 -kill job_201410300915_0009
    Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
    2014-10-30 15:25:10,652 Stage-1 map = 0%,  reduce = 0%
    2014-10-30 15:25:12,664 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.48 sec
    2014-10-30 15:25:13,671 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.48 sec
    2014-10-30 15:25:14,682 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.48 sec
    2014-10-30 15:25:15,690 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.48 sec
    2014-10-30 15:25:16,697 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.48 sec
    2014-10-30 15:25:17,704 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.48 sec
    2014-10-30 15:25:18,710 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.48 sec
    2014-10-30 15:25:19,718 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.48 sec
    2014-10-30 15:25:20,725 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 1.21 sec
    2014-10-30 15:25:21,730 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 1.21 sec
    2014-10-30 15:25:22,737 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 1.21 sec
    MapReduce Total cumulative CPU time: 1 seconds 210 msec
    Ended Job = job_201410300915_0009
    MapReduce Jobs Launched:
    Job 0: Map: 1  Reduce: 1   Cumulative CPU: 1.21 sec   HDFS Read: 568052 HDFS Write: 6 SUCCESS
    Total MapReduce CPU Time Spent: 1 seconds 210 msec
    OK
    10001
    Time taken: 14.742 seconds

The table contains 10001 rows. Next, we add another file, tb_class_bak.txt, to the myfile directory.

4. Add a file to the myfile directory

    $ hadoop fs -cp /user/hadoop1/myfile/tb_class.txt /user/hadoop1/myfile/tb_class_bak.txt
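The copy above duplicates a file already on HDFS; uploading a fresh file from the local filesystem works the same way. A hedged variant, with an illustrative local path:

    hadoop fs -put /home/hadoop1/tb_class_new.txt /user/hadoop1/myfile/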

5. Query the row count again

    hive (hxl)> select count(1) from tb_class_info_external;
    Total MapReduce jobs = 1
    Launching Job 1 out of 1
    Number of reduce tasks determined at compile time: 1
    In order to change the average load for a reducer (in bytes):
      set hive.exec.reducers.bytes.per.reducer=<number>
    In order to limit the maximum number of reducers:
      set hive.exec.reducers.max=<number>
    In order to set a constant number of reducers:
      set mapred.reduce.tasks=<number>
    Starting Job = job_201410300915_0010, Tracking URL = http://node1:50030/jobdetails.jsp?jobid=job_201410300915_0010
    Kill Command = /usr1/hadoop/libexec/../bin/hadoop job  -Dmapred.job.tracker=http://192.168.56.101:9001 -kill job_201410300915_0010
    Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
    2014-10-30 15:32:02,275 Stage-1 map = 0%,  reduce = 0%
    2014-10-30 15:32:04,286 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.48 sec
    2014-10-30 15:32:05,292 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.48 sec
    2014-10-30 15:32:06,300 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.48 sec
    2014-10-30 15:32:07,306 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.48 sec
    2014-10-30 15:32:08,313 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.48 sec
    2014-10-30 15:32:09,319 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.48 sec
    2014-10-30 15:32:10,327 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.48 sec
    2014-10-30 15:32:11,331 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.48 sec
    2014-10-30 15:32:12,338 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 1.16 sec
    2014-10-30 15:32:13,343 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 1.16 sec
    2014-10-30 15:32:14,350 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 1.16 sec
    MapReduce Total cumulative CPU time: 1 seconds 160 msec
    Ended Job = job_201410300915_0010
    MapReduce Jobs Launched:
    Job 0: Map: 1  Reduce: 1   Cumulative CPU: 1.16 sec   HDFS Read: 1135971 HDFS Write: 6 SUCCESS
    Total MapReduce CPU Time Spent: 1 seconds 160 msec
    OK
    20002
    Time taken: 14.665 seconds

The row count has doubled, which shows the table has picked up the newly added file.
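Dropping files straight into the location directory is one way to feed the table; Hive's load data inpath statement achieves the same thing by moving an HDFS file into the table's directory (a sketch only, the source path is illustrative):

    hive (hxl)> load data inpath '/tmp/tb_class_more.txt' into table tb_class_info_external;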

6. Drop the table

    hive (hxl)> drop table tb_class_info_external;
    OK
    Time taken: 1.7 seconds

The files backing the table were not deleted:

    [hadoop1@node1]$ hadoop fs -ls /user/hadoop1/myfile/
    Found 2 items
    -rw-r--r--   3 hadoop1 supergroup     567839 2014-10-29 16:50 /user/hadoop1/myfile/tb_class.txt
    -rw-r--r--   3 hadoop1 supergroup     567839 2014-10-30 15:28 /user/hadoop1/myfile/tb_class_bak.txt
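Had this been a managed table, the same drop would have removed the data files along with the metadata; continuing the illustrative managed-table example from earlier:

    hive (hxl)> drop table tb_class_info_managed;
    -- for a managed table, the files under its warehouse directory are deleted as well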

------------------------------------------------------ External Partitioned Tables -------------------------------------------------

1. Create the external table directories

    [flowrate@richinfo109 ~]$ hadoop fs -mkdir /tmp/bill/20161206
    [flowrate@richinfo109 ~]$ hadoop fs -mkdir /tmp/bill/20161206/18
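On Hadoop 2 and later, the two commands can be collapsed into one with the -p flag, which creates missing parent directories:

    hadoop fs -mkdir -p /tmp/bill/20161206/18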

2. Copy a file into the external table directory

    hadoop fs -cp /hive/warehouse/richmail.db/t_part_usernumber_t1/statedate=20161206/provcode=18/b.txt /tmp/bill/20161206/18/b.txt

3. Create the partitioned table

    create external table t_part_ext_usernumber_t1
    (usernumber string)
    partitioned by (statedate string, provcode string)
    row format delimited
    fields terminated by '|';

4. Add a new partition, pointing it at the external directory

    alter table t_part_ext_usernumber_t1 add partition(statedate='20161206',provcode='18') location '/tmp/bill/20161206/18';
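To verify that the partition was registered against the external directory, show partitions lists it:

    hive> show partitions t_part_ext_usernumber_t1;
    -- statedate=20161206/provcode=18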

5. Query the data

    select * from t_part_ext_usernumber_t1;
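A note on bulk registration: when partition directories sit under the table's own location in key=value layout (e.g. .../statedate=20161206/provcode=18), msck repair table can discover and register them all at once; for directories outside the table location, as in this example, each partition must be added explicitly with alter table ... add partition ... location:

    hive> msck repair table t_part_ext_usernumber_t1;
    -- only finds partitions laid out under the table's own location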

