Download the source code. Apache download address: http://archive.apache.org/dist (everything from Apache is at this address, which is very convenient).
Download the Linux edition of Hadoop 1.1.2 (a .tar.gz file). After downloading, decompress it; the source code is in the src directory.
Create the source project. In Eclipse, create a new Web Project rather than a Java Project: Hadoop uses J2EE packages, and a Java Project only provides the J2SE packages.
Remember to change the project encoding to UTF-8 and set the Java Compiler to 1.6.
Import the source code. In the project, create a Source Folder named core, and copy the org directory under src\core of the extracted tarball into the project's core folder.
Copy core-default.xml from src\core of the extracted tarball into the project's src folder (after compilation the configuration file must end up under the classes directory, which is why it goes on the build path; see the check sketched after this list).
(The code will report errors after this step; they will be resolved later when we import the jars.)
Repeat the same operation as for core with examples, hdfs, mapred, test, and tools; test, tools, and examples have no xml configuration file to copy.
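Why the xml must land under classes: in the 1.x branch, org.apache.hadoop.conf.Configuration registers core-default.xml as a default classpath resource, so it is only found if it was compiled into the output directory. A quick throwaway check (a minimal sketch assuming the project now compiles; ConfCheck is my own scratch class, not part of Hadoop):

    import org.apache.hadoop.conf.Configuration;

    public class ConfCheck {
        public static void main(String[] args) {
            // Configuration loads core-default.xml from the classpath; this
            // only works if the file was compiled into the classes directory.
            Configuration conf = new Configuration();
            // fs.default.name is one of the keys defined in core-default.xml
            System.out.println("fs.default.name = " + conf.get("fs.default.name"));
        }
    }

If the copy worked, this prints the default value (file:///) instead of null.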
Import the jars. Right-click the project > Build Path > Libraries > Add External JARs. Import all the jars in the root of the extracted Hadoop directory (hadoop-core-1.1.2.jar etc.), all the jars under lib in the root directory (asm-3.2.jar etc.), and all the jars under src\test\lib in the root directory. With all the Hadoop source jars imported, you will find that most of the source code no longer reports errors; the remaining errors say that classes under ...tools.ant... cannot be loaded.
So we still need to go to http://archive.apache.org/dist/ and download Ant: the file ant-current-bin.zip under the ant directory (http://archive.apache.org/dist/ant/ant-current-bin.zip). After downloading and extracting it, import all the jars under lib in its root directory.
A project created in Eclipse will also report errors saying that sun-related packages cannot be found, because by default Eclipse forbids access to sun.* and other restricted packages (MyEclipse does not have this problem).
We need to add access manually: under Build Path > Libraries > JRE System Library > Access rules, add a rule (typically Resolution: Accessible, Rule Pattern: **).
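A hypothetical minimal reproduction of the restriction, outside the Hadoop tree (RestrictedDemo is a throwaway class; sun.misc.Signal is just a convenient restricted class on the Sun JDK):

    import sun.misc.Signal;  // Eclipse flags sun.* imports as restricted by default

    public class RestrictedDemo {
        public static void main(String[] args) {
            // Compiles and runs on a Sun JDK 6 once the access rule is added.
            System.out.println(Signal.class.getName());
        }
    }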
How to read the source code: start from the startup script, bin/start-all.sh in the root directory. The first thing the script does is run the config script to set up environment variables and so on, and then it starts dfs and mapred. In start-dfs.sh you can see which dfs daemons get started: namenode and datanode (plus the secondary namenode).
Find the NameNode class in the project (org/apache/hadoop/hdfs/server/namenode/); searching for it with Ctrl+Shift+R is more convenient.
At the bottom of NameNode there is a main method; read the hdfs source starting from this entry point. You can see the path it takes: first createNameNode, then initialize, and so on.
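For orientation, the entry point looks roughly like this (paraphrased from memory of org.apache.hadoop.hdfs.server.namenode.NameNode in the 1.x branch; details may differ slightly, so check it against your copy of the source):

    public static void main(String argv[]) throws Exception {
      try {
        StringUtils.startupShutdownMessage(NameNode.class, argv, LOG);
        NameNode namenode = createNameNode(argv, null); // parse args, construct the NameNode
        if (namenode != null)
          namenode.join();                              // block until the servers shut down
      } catch (Throwable e) {
        LOG.error(StringUtils.stringifyException(e));
        System.exit(-1);
      }
    }

createNameNode parses the command-line arguments and constructs a NameNode, whose constructor in turn calls initialize; that is the thread to pull on when tracing how hdfs starts up.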
Reading the source of the other packages works the same way.