1. Add the following to hdfs-site.xml:

   <property>
     <name>dfs.hosts.exclude</name>
     <value>file_path</value>
   </property>

2. In the file at file_path, list the host names of the nodes to be decommissioned.

3. Run:

   hdfs dfsadmin -refreshNodes

   Problem: after running this, the block counts on the 50070 web UI stopped changing. The namenode log showed:

   2017-02-08 15:19:10,145 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK, ARCHIVE], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=false) All required storage types are unavailable: unavailableStorages=[DISK, ARCHIVE], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}

   Cause: the replica count recorded by the namenode does not match the number of replicas actually stored. You can check this with:

   hadoop fsck / > test.log

   Fix: set a uniform replication factor for all files on the cluster:

   hadoop fs -setrep -R -w 2 /

   then run hadoop fsck / > test1.log again and check whether any blocks are still reported as Missing.

4. Re-run hdfs dfsadmin -refreshNodes.

5. Decommissioning is complete once the block transfer finishes. Hadoop will later delete the data on the datanode automatically; alternatively, you can stop the datanode and delete the data by hand.

Decommissioning a TaskTracker or NodeManager (the process is similar to decommissioning a datanode; only the differences are listed below):
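The fsck check in step 3 can be scripted so you don't have to eyeball the log. The fsck summary below is a made-up excerpt (not real cluster output) written to test.log only so the parsing logic runs self-contained; on a real cluster you would produce test.log with `hadoop fsck / > test.log` instead:

```shell
#!/bin/sh
# Fake a small fsck summary so the sketch runs without a cluster.
# On a real cluster: hadoop fsck / > test.log
cat > test.log <<'EOF'
Status: HEALTHY
 Total blocks (validated):      120 (avg. block size 67108864 B)
 Minimally replicated blocks:   120 (100.0 %)
 Under-replicated blocks:       3 (2.5 %)
 Missing blocks:                0
EOF

# Count lines reporting a non-zero number of missing or
# under-replicated blocks.
missing=$(grep -c 'Missing blocks:[[:space:]]*[1-9]' test.log)
under=$(grep -c 'Under-replicated blocks:[[:space:]]*[1-9]' test.log)

# Only re-run 'hdfs dfsadmin -refreshNodes' once both counts are zero.
if [ "$missing" -eq 0 ] && [ "$under" -eq 0 ]; then
  echo "fsck clean, safe to refresh nodes"
else
  echo "fsck reports problems: missing=$missing under-replicated=$under"
fi
```

With the sample log above, the script reports one under-replicated line, which is exactly the situation the setrep fix in step 3 is meant to clear.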
Add the following to mapred-site.xml:

<property>
  <name>mapred.hosts.exclude</name>
  <value>mrhosts.exclude</value>
</property>

Then run yarn rmadmin -refreshNodes. If YARN is not enabled (i.e. you are decommissioning a TaskTracker under classic MapReduce), run hadoop mradmin -refreshNodes instead.
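The exclude-file setup and refresh step above can be sketched as a small script. The host names and the USE_YARN switch are hypothetical placeholders, and the refresh command is echoed rather than executed so the sketch runs without a cluster:

```shell
#!/bin/sh
# Hypothetical hosts to decommission; substitute your own node names.
cat > mrhosts.exclude <<'EOF'
worker03.example.com
worker04.example.com
EOF

# Pick the refresh command for the framework in use:
# set USE_YARN=1 on a YARN cluster, 0 for classic MapReduce (TaskTracker).
USE_YARN=1
if [ "$USE_YARN" -eq 1 ]; then
  refresh_cmd="yarn rmadmin -refreshNodes"
else
  refresh_cmd="hadoop mradmin -refreshNodes"
fi

# Echo instead of executing, so this sketch is safe to run anywhere.
echo "would run: $refresh_cmd"
```

On a real cluster you would run the selected command directly instead of echoing it, after placing mrhosts.exclude at the path configured in mapred-site.xml.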