[Hadoop Secondary Development] The NameNodeResourceChecker Constructor


    006 - Hadoop Secondary Development - NameNode Startup Flow

    In the source file NameNode.java, startCommonServices(Configuration conf) calls namesystem.startCommonServices(conf, haContext). In FSNamesystem.java, startCommonServices(Configuration conf, HAContext haContext) then executes nnResourceChecker = new NameNodeResourceChecker(conf), which takes us into the source file NameNodeResourceChecker.java.
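    For orientation, here is a minimal, self-contained sketch of that call chain. The classes below are stand-in stubs, not the real org.apache.hadoop.hdfs.server.namenode classes; their bodies only trace the order of calls described above.

    // Hedged sketch: stub classes tracing the startup call chain described above.
    // The real classes do far more work (RPC server, HTTP server, HA handling, ...).
    public class StartupFlowSketch {

        static class NameNodeResourceChecker {
            NameNodeResourceChecker(Object conf) {
                System.out.println("3. NameNodeResourceChecker(conf): register volumes to check");
            }
        }

        static class FSNamesystem {
            NameNodeResourceChecker nnResourceChecker;

            void startCommonServices(Object conf, Object haContext) {
                System.out.println("2. FSNamesystem.startCommonServices(conf, haContext)");
                nnResourceChecker = new NameNodeResourceChecker(conf);
            }
        }

        static class NameNode {
            final FSNamesystem namesystem = new FSNamesystem();
            final Object haContext = new Object();

            void startCommonServices(Object conf) {
                System.out.println("1. NameNode.startCommonServices(conf)");
                namesystem.startCommonServices(conf, haContext);
            }
        }

        public static void main(String[] args) {
            new NameNode().startCommonServices(new Object());
        }
    }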

    /**
     * Create a NameNodeResourceChecker, which will check the edits dirs and any
     * additional dirs to check set in <code>conf</code>.
     *
     * This constructor mainly:
     * 1. Declares the disk space threshold the NameNode will tolerate
     * 2. Collects the disk paths that need checking (fsimage and edits)
     * 3. Adds those paths to the volumes map via addDirToCheck; a
     *    NameNodeResourceMonitor thread in FSNamesystem then repeatedly calls
     *    checkAvailableResources to inspect volumes (the disk resource status)
     */
    public NameNodeResourceChecker(Configuration conf) throws IOException {
      this.conf = conf;
      // Initialize the volumes map, which holds the paths whose disk space will be checked
      volumes = new HashMap<String, CheckedVolume>();

      // Free-space threshold per volume, 100 MB by default
      duReserved = conf.getLong(DFSConfigKeys.DFS_NAMENODE_DU_RESERVED_KEY,
          DFSConfigKeys.DFS_NAMENODE_DU_RESERVED_DEFAULT);

      // Load any additional directories to check that are configured in conf
      Collection<URI> extraCheckedVolumes = Util.stringCollectionAsURIs(conf
          .getTrimmedStringCollection(DFSConfigKeys.DFS_NAMENODE_CHECKED_VOLUMES_KEY));

      // Collect the local edits directories, filtering out shared (JournalNode) URIs
      Collection<URI> localEditDirs = Collections2.filter(
          // All namespace edits dirs (shared + local)
          FSNamesystem.getNamespaceEditsDirs(conf),
          new Predicate<URI>() {
            @Override
            public boolean apply(URI input) {
              if (input.getScheme().equals(NNStorage.LOCAL_URI_SCHEME)) {
                return true;
              }
              return false;
            }
          });

      // Add all the local edits dirs, marking some as required if they are
      // configured as such.
      for (URI editsDirToCheck : localEditDirs) {
        // Register each directory that needs a disk space check
        addDirToCheck(editsDirToCheck,
            FSNamesystem.getRequiredNamespaceEditsDirs(conf).contains(
                editsDirToCheck));
      }

      // All extra checked volumes are marked "required"
      for (URI extraDirToCheck : extraCheckedVolumes) {
        addDirToCheck(extraDirToCheck, true);
      }

      minimumRedundantVolumes = conf.getInt(
          DFSConfigKeys.DFS_NAMENODE_CHECKED_VOLUMES_MINIMUM_KEY,
          DFSConfigKeys.DFS_NAMENODE_CHECKED_VOLUMES_MINIMUM_DEFAULT);
    }

    First, a HashMap is created to record the resource paths, i.e. which directories and disks hold the edits and fsimage files; volumes keeps track of this information. Next, the minimum disk space threshold is declared: the disk holding the metadata directories must keep at least 100 MB free by default, and if it drops below that the NameNode enters safe mode. Two collections are then loaded: the extra checked directories configured in conf, and the local edits directories (the namespace edits dirs filtered down to local file URIs). Whichever of the two for loops runs, each entry goes through addDirToCheck. Finally, minimumRedundantVolumes is the minimum number of redundant volumes that must remain available.
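    As a concrete illustration of the three knobs the constructor reads, the sketch below sets them on a Configuration object using the same DFSConfigKeys constants that appear in the constructor. This is not part of the original article; the values and the extra path are made up for the example.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hdfs.DFSConfigKeys;

    public class ResourceCheckerConfSketch {
        public static void main(String[] args) {
            Configuration conf = new Configuration();

            // duReserved: per-volume free-space threshold, 100 MB by default; raised here to 200 MB
            conf.setLong(DFSConfigKeys.DFS_NAMENODE_DU_RESERVED_KEY, 200L * 1024 * 1024);

            // Extra volumes to check; the constructor marks every one of these as "required"
            conf.set(DFSConfigKeys.DFS_NAMENODE_CHECKED_VOLUMES_KEY, "/data/nn/extra");

            // Minimum number of redundant (non-required) volumes that must stay available
            conf.setInt(DFSConfigKeys.DFS_NAMENODE_CHECKED_VOLUMES_MINIMUM_KEY, 1);

            System.out.println("du reserved = "
                + conf.getLong(DFSConfigKeys.DFS_NAMENODE_DU_RESERVED_KEY, 0) + " bytes");
        }
    }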

    /**
     * Add the volume of the passed-in directory to the list of volumes to check.
     * If <code>required</code> is true, and this volume is already present, but
     * is marked redundant, it will be marked required. If the volume is already
     * present but marked required then this method is a no-op.
     *
     * @param directoryToCheck
     *          The directory whose volume will be checked for available space.
     */
    private void addDirToCheck(URI directoryToCheck, boolean required)
        throws IOException {
      File dir = new File(directoryToCheck.getPath());
      if (!dir.exists()) {
        throw new IOException("Missing directory " + dir.getAbsolutePath());
      }

      CheckedVolume newVolume = new CheckedVolume(dir, required);
      CheckedVolume volume = volumes.get(newVolume.getVolume());
      if (volume == null || !volume.isRequired()) {
        // TODO: this is a metadata path whose volume needs to be checked
        volumes.put(newVolume.getVolume(), newVolume);
      }
    }

    Each local edits URI is turned into a File, the File is wrapped in a CheckedVolume, and that CheckedVolume is put into volumes, keyed by its volume and only when the volume is not yet present or is currently marked redundant.
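    To make that bookkeeping concrete, here is a small standalone sketch of the dedup-and-upgrade logic. It assumes a simplified volume key derived from the path, whereas the real CheckedVolume resolves the underlying volume with Hadoop's DF utility; the class name and path below are hypothetical.

    import java.io.File;
    import java.util.HashMap;
    import java.util.Map;

    public class AddDirSketch {

        static class CheckedVolume {
            final String volume;
            final boolean required;

            CheckedVolume(File dir, boolean required) {
                // Illustrative volume key; the real class asks DF which volume the directory is on
                this.volume = dir.getAbsolutePath();
                this.required = required;
            }

            boolean isRequired() {
                return required;
            }
        }

        static final Map<String, CheckedVolume> volumes = new HashMap<>();

        static void addDirToCheck(File dir, boolean required) {
            CheckedVolume newVolume = new CheckedVolume(dir, required);
            CheckedVolume existing = volumes.get(newVolume.volume);
            // Add if unseen, or upgrade a redundant entry to required; never downgrade
            if (existing == null || !existing.isRequired()) {
                volumes.put(newVolume.volume, newVolume);
            }
        }

        public static void main(String[] args) {
            File editsDir = new File("/tmp/hadoop/dfs/name"); // hypothetical path
            addDirToCheck(editsDir, false); // first seen: recorded as redundant
            addDirToCheck(editsDir, true);  // same volume again: upgraded to required
            System.out.println("checked volumes: " + volumes.size() + ", required: "
                + volumes.get(editsDir.getAbsolutePath()).isRequired());
        }
    }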
