Disclaimer
Copyright IBM Corporation 2012. All rights reserved. U.S. Government Users Restricted Rights - Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

THE INFORMATION CONTAINED IN THIS PRESENTATION IS PROVIDED FOR INFORMATIONAL PURPOSES ONLY. WHILE EFFORTS WERE MADE TO VERIFY THE COMPLETENESS AND ACCURACY OF THE INFORMATION CONTAINED IN THIS PRESENTATION, IT IS PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED. IN ADDITION, THIS INFORMATION IS BASED ON IBM'S CURRENT PRODUCT PLANS AND STRATEGY, WHICH ARE SUBJECT TO CHANGE BY IBM WITHOUT NOTICE. IBM SHALL NOT BE RESPONSIBLE FOR ANY DAMAGES ARISING OUT OF THE USE OF, OR OTHERWISE RELATED TO, THIS PRESENTATION OR ANY OTHER DOCUMENTATION. NOTHING CONTAINED IN THIS PRESENTATION IS INTENDED TO, NOR SHALL HAVE THE EFFECT OF, CREATING ANY WARRANTIES OR REPRESENTATIONS FROM IBM (OR ITS SUPPLIERS OR LICENSORS), OR ALTERING THE TERMS AND CONDITIONS OF ANY AGREEMENT OR LICENSE GOVERNING THE USE OF IBM PRODUCTS AND/OR SOFTWARE.

IBM, the IBM logo, ibm.com, and DB2 are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. If these and other IBM trademarked terms are marked on their first occurrence in this information with a trademark symbol (® or ™), these symbols indicate U.S. registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at "Copyright and trademark information" at http://www.ibm.com/legal/copytrade.shtml. Other company, product, or service names may be trademarks or service marks of others.
Agenda
- Terminology review
- HDFS
- MapReduce
- Types of nodes
- Topology awareness
- Configuring Hadoop
[Stack diagram: User Interface (Analytics Visualization); Application (Zookeeper, Avro, Pig, Hive, MapReduce, AdaptiveMR, Oozie, Jaql); Analytics (ML Analytics); Storage (HDFS, HBase, GPFS-SNC); alongside Netezza, Teradata, Oracle.]
Agenda
- Terminology review
- HDFS
- MapReduce
- Types of nodes
- Topology awareness
- Configuring Hadoop
Terminology review
[Diagram build-up across slides: a node is a single machine. Nodes (Node 1, Node 2, ..., Node n) are grouped into a rack, and multiple racks (Rack 1, Rack 2, ..., Rack n) together form the Hadoop cluster.]
Hadoop architecture
[Architecture diagram]
Agenda
- Terminology review
- HDFS
- MapReduce
- Types of nodes
- Topology awareness
- Configuring Hadoop
Hadoop Distributed File System (HDFS)
- Hadoop file system that runs on top of the existing file system
- Designed to handle very large files with streaming data access patterns
- Uses blocks to store a file or parts of a file
- Can create, delete, and copy files, but NOT update them
HDFS - Blocks
- Block size: 64MB (default), 128MB (recommended); compare to 4KB in UNIX
- Behind the scenes, 1 HDFS block is supported by multiple operating system (OS) blocks
[Diagram: a file divided into 128 MB HDFS blocks]
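As a quick worked example (simple arithmetic, not from the deck): one 128 MB HDFS block spans 128 MB / 4 KB = 32,768 OS blocks.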
HDFS - Blocks
Advantages of blocks:
- Fixed size: easy to calculate how many fit on a disk
- A file can be larger than any single disk in the network
- If a file or a chunk of the file is smaller than the block size, only the needed space is used
- Blocks fit well with replication to provide fault tolerance and availability
E.g.: a 420MB file is split as: 128 MB + 128 MB + 128 MB + 36 MB
HDFS - Replication
- Blocks with data are replicated to multiple nodes
- Allows for node failure without data loss
[Diagram: one block replicated across Node 1, Node 2, and Node 3]
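To inspect replication in practice, fsck can report where each block's replicas live (the path below is illustrative):
hadoop fsck /mydir/test_file -files -blocks -locations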
namenode -format
Before it can be used, a new HDFS installation needs to be formatted
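A minimal sketch of the command, run once on the NameNode host before starting the cluster (Hadoop 1.x syntax):
hadoop namenode -format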
File system (FS) shell commands are invoked as follows:
hadoop fs <args>
Example: listing the current directory in HDFS:
hadoop fs -ls .
Paths are specified as URIs: scheme://authority/path. For HDFS the scheme is hdfs, and for the local file system the scheme is file. The scheme and authority are optional; if not specified, the default file system set in the configuration (fs.default.name) is used.
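For example, when fs.default.name is hdfs://namenodehost:9000 (an illustrative host and port), the following two commands refer to the same file:
hadoop fs -ls hdfs://namenodehost:9000/mydir/test_file
hadoop fs -ls /mydir/test_file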
setrep
Usage: hadoop fs -setrep [-R] [-w] <rep> <path>
Changes the replication factor of files. The -R flag requests a recursive change of replication factor for an entire tree. If -w is specified, the command waits until the new replication factor is achieved.
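Example (the replication factor and path are illustrative):
hadoop fs -setrep -w 3 -R hdfs:/mydir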
cat
Usage: hadoop fs -cat URI [URI ...]
Copies source paths to stdout.
chgrp
Usage: hadoop fs -chgrp [-R] GROUP URI [URI ...]
Change the group association of files. With -R, make the change recursively through the directory structure.
chmod
Usage: hadoop fs -chmod [-R] <MODE[,MODE]... | OCTALMODE> URI [URI ...]
Change the permissions of files. With -R, make the change recursively through the directory structure.
chown
Usage: hadoop fs -chown [-R] [OWNER][:[GROUP]] URI [URI ...]
Change the owner of files. With -R, make the change recursively through the directory structure.
count
Usage: hadoop fs -count [-q] <paths>
Count the number of directories, files and bytes under the paths that match the specified file pattern. The output columns are: DIR_COUNT, FILE_COUNT, CONTENT_SIZE, FILE_NAME. The output columns with -q are: QUOTA, REMAINING_QUOTA, SPACE_QUOTA, REMAINING_SPACE_QUOTA, DIR_COUNT, FILE_COUNT, CONTENT_SIZE, FILE_NAME.
Example:
hadoop fs -count hdfs:/mydir/test_file1 hdfs:/mydir/test_file2
hadoop fs -count -q hdfs:/mydir/test_file1
cp
Usage: hadoop fs -cp URI [URI ...] <dest>
Copy files from source to destination. This command allows multiple sources as well, in which case the destination must be a directory. Example:
hadoop fs -cp hdfs:/mydir/test_file file:///home/hdpadmin/foo
hadoop fs -cp file:///home/hdpadmin/foo file:///home/hdpadmin/boo hdfs:/mydir
du
Usage: hadoop fs -du URI [URI ...]
Displays the aggregate length of files contained in the directory, or the length of a file in case it is just a file.
dus
Usage: hadoop fs -dus <args>
Displays a summary of file lengths.
expunge
Usage: hadoop fs -expunge
Empty the Trash
When a file is deleted by a user or an application, it is not immediately removed from HDFS. Instead, HDFS first renames it to a file in the /trash directory. The file can be restored quickly as long as it remains in /trash. A file remains in /trash for a configurable amount of time (see fs.trash.interval in core-site.xml).
ls
Usage: hadoop fs -ls <args>
For a file, returns stat on the file with the following format:
permissions number_of_replicas userid groupid filesize modification_date modification_time filename
For a directory, it returns a list of its direct children, as in UNIX. A directory is listed as:
permissions userid groupid modification_date modification_time dirname
Example:
hadoop fs -ls hdfs:/mydir/test_file
lsr
Usage: hadoop fs -lsr <args>
Recursive version of ls. Similar to Unix ls -R. Example:
hadoop fs -lsr hdfs:/mydir
mkdir
Usage: hadoop fs -mkdir <paths>
Takes path URIs as arguments and creates directories. The behavior is much like UNIX mkdir -p, creating parent directories along the path.
Example:
hadoop fs -mkdir hdfs:/mydir/foodir hdfs:/mydir/boodir
mv
Usage: hadoop fs -mv URI [URI ...] <dest>
Moves files from source to destination. This command allows multiple sources as well, in which case the destination needs to be a directory. Moving files across filesystems is not permitted. Example:
hadoop fs -mv file:///home/hdpadmin/test_file file:///home/hdpadmin/test_file1
hadoop fs -mv hdfs:/mydir/file1 hdfs:/mydir/file2 hdfs:/mydir2
rm
Usage: hadoop fs -rm [-skipTrash] URI [URI ...]
Delete files specified as args. Deletes only files and empty directories; refer to rmr for recursive deletes. If the -skipTrash option is specified, the trash, if enabled, is bypassed and the specified file(s) are deleted immediately. This can be useful when it is necessary to delete files from an over-quota directory. Example:
hadoop fs -rm hdfs:/home/hdpadmin/test_file
rmr
Usage: hadoop fs -rmr [-skipTrash] URI [URI ...]
Recursive version of delete. If the -skipTrash option is specified, the trash, if enabled, is bypassed and the specified file(s) are deleted immediately. Example:
hadoop fs -rmr file:///home/hdpadmin/mydir
hadoop fs -rmr -skipTrash hdfs:/mydir
stat
Usage: hadoop fs -stat URI [URI ...]
Returns the stat information on the path. Example:
hadoop fs -stat hdfs:/mydir/test_file
tail
Usage: hadoop fs -tail [-f] URI
Displays the last kilobyte of the file to stdout. The -f option can be used as in UNIX. Example:
hadoop fs -tail hdfs:/mydir/test_file
test
Usage: hadoop fs -test -[ezd] URI
Options: -e checks whether the path exists; -z checks whether the file is zero length; -d checks whether the path is a directory. Example:
hadoop fs -test -e hdfs:/mydir/test_file
Agenda
- Terminology review
- HDFS
- MapReduce
- Types of nodes
- Topology awareness
- Configuring Hadoop
MapReduce engine
- Technology from Google
- A MapReduce program consists of map and reduce functions
- A MapReduce job is broken into tasks that run in parallel
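To make the map and reduce functions concrete, here is a minimal sketch using Hadoop Streaming, which lets any executable act as the mapper or reducer (the jar path and HDFS paths below are assumptions that vary by version and distribution, not values from this deck). The mapper passes each record through unchanged, the framework sorts the map output by key, and the reducer runs wc over its partition:
hadoop jar $HADOOP_HOME/contrib/streaming/hadoop-streaming-*.jar \
    -input hdfs:/mydir/input \
    -output hdfs:/mydir/output \
    -mapper /bin/cat \
    -reducer /usr/bin/wc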
Agenda
- Terminology review
- HDFS
- MapReduce
- Types of nodes
- Topology awareness
- Configuring Hadoop
Checkpoint node
- Should run on a different machine than the NameNode
- Should have the same storage requirements as the NameNode
- There can be many Checkpoint nodes per cluster
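In Apache Hadoop releases that include the Checkpoint node (0.21 and later; an assumption, since not every distribution ships it), it is started on the checkpoint machine with:
hdfs namenode -checkpoint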
Secondary NameNode
- If there is a problem with the NameNode, it can read from the Secondary NameNode
- Should have the same storage requirements as the NameNode
Agenda
- Terminology review
- HDFS
- MapReduce
- Types of nodes
- Topology awareness
- Configuring Hadoop
Topology awareness
Bandwidth becomes progressively smaller in the following scenarios:
1. Processes on the same node
2. Different nodes on the same rack
3. Nodes on different racks in the same data center
4. Nodes in different data centers
Agenda
- Terminology review
- HDFS
- MapReduce
- Types of nodes
- Topology awareness
- Configuring Hadoop
Configuration modes
Standalone (local) mode
- Single machine
- No daemons are running; everything runs in a single JVM
- Standard OS storage
- Good for development and testing with small data, but will not catch all errors
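One way to exercise standalone mode is to run a bundled example program against local files (the jar name below is an assumption that varies by release):
hadoop jar $HADOOP_HOME/hadoop-examples-*.jar wordcount /tmp/input /tmp/output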
Configuration modes
Pseudo-distributed mode
- Single machine, but a cluster is simulated
- Daemons run, in separate JVMs
- Good for development and debugging (see the configuration sketch below)
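A minimal pseudo-distributed configuration sketch; localhost and the ports below are the conventional Hadoop 1.x values, not settings taken from this deck:

core-site.xml:
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9000</value>
</property>

hdfs-site.xml:
<property>
  <name>dfs.replication</name>
  <value>1</value>  <!-- a single node cannot hold multiple replicas -->
</property>

mapred-site.xml:
<property>
  <name>mapred.job.tracker</name>
  <value>localhost:9001</value>
</property>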
Fully-distributed mode
- Run Hadoop on a cluster of machines
- Daemons run
- Production environment
Configuration files
hadoop-env.sh - Environment variables that are used in the scripts to run Hadoop.
core-site.xml - Configuration settings for Hadoop Core, such as I/O settings that are common to HDFS and MapReduce.
hdfs-site.xml - Configuration settings for the HDFS daemons: the name node, the secondary name node, and the data nodes.
mapred-site.xml - Configuration settings for the MapReduce daemons: the jobtracker and the tasktrackers.
masters - A list of machines (one per line) that each run a secondary NameNode.
slaves - A list of machines (one per line) that each run a data node and a tasktracker.
hadoop-metrics.properties - Properties for controlling how metrics are published in Hadoop.
log4j.properties - Properties for system logfiles, the NameNode audit log, and the task log for the tasktracker child process.
hadoop-env.sh settings
- Most variables have defaults and are not set
- Only JAVA_HOME is required; export it pointing to the Java JDK installation
- HADOOP_HEAPSIZE sets the heap size used by the JVM of each daemon
- The heap can be overridden for each daemon:
  NameNode: HADOOP_NAMENODE_OPTS
  DataNode: HADOOP_DATANODE_OPTS
  Secondary NameNode: HADOOP_SECONDARYNAMENODE_OPTS
  JobTracker: HADOOP_JOBTRACKER_OPTS
  TaskTracker: HADOOP_TASKTRACKER_OPTS
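A hadoop-env.sh sketch; the JDK path and heap values below are illustrative assumptions, not recommendations from this deck:
export JAVA_HOME=/opt/ibm/java                # path to the JDK (illustrative)
export HADOOP_HEAPSIZE=1000                   # MB of heap for each daemon
export HADOOP_NAMENODE_OPTS=-Xmx2048m         # larger heap for the NameNode only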
BIGINSIGHTS_HOME - points to code and configuration: /opt/ibm/biginsights
BIGINSIGHTS_VAR - keeps logs: /var/ibm/biginsights
Other environment variables: HADOOP_CLASSPATH, HADOOP_PID_DIR, JAQL_HOME
core-site.xml settings
fs.default.name - The name of the default file system. A URI whose scheme and authority determine the FileSystem implementation. The URI's scheme determines the config property (fs.SCHEME.impl) naming the FileSystem implementation class. The URI's authority is used to determine the host, port, etc. for a filesystem. Default: file:///
hadoop.tmp.dir - A base for other temporary directories. Default: /tmp/hadoop-${user.name}
fs.trash.interval - Number of minutes between trash checkpoints. If zero, the trash feature is disabled (the default). When greater than zero, erased files are moved to .trash in the user's home directory.
io.file.buffer.size - The size of the buffer for use in sequence files. The size of this buffer should be a multiple of the hardware page size (4096 on Intel x86), and it determines how much data is buffered during read and write operations.
hadoop.rpc.socket.factory.class.ClientProtocol - SocketFactory to use to connect to a DFS. If null or empty, use hadoop.rpc.socket.class.default. This socket factory is also used by DFSClient to create sockets to DataNodes.
hadoop.rpc.socket.factory.class.JobSubmissionProtocol - SocketFactory to use to connect to the JobTracker. If null or empty, uses hadoop.rpc.socket.class.default.
Recommendation: Leave all three socket factory parameters empty and mark them as FINAL.
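A core-site.xml sketch for two of the settings above (the values are illustrative):
<property>
  <name>fs.trash.interval</name>
  <value>1440</value>  <!-- keep deleted files recoverable for 24 hours -->
</property>
<property>
  <name>io.file.buffer.size</name>
  <value>131072</value>  <!-- 128 KB, a multiple of the 4 KB page size -->
</property>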
hdfs-site.xml settings
dfs.name.dir - List of directories where the NameNode stores its persistent metadata. Recommendation: remote-mount an NFS disk to back up the metadata on the NameNode (soft mount).
dfs.data.dir - List of directories where the DataNode stores its data blocks.
dfs.block.size - HDFS block size. Default is 64MB. Recommendation: set the block size to 128MB, or as appropriate for your data.
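An hdfs-site.xml sketch for these settings (the directory paths are illustrative assumptions):
<property>
  <name>dfs.name.dir</name>
  <value>/data/nn,/nfs/backup/nn</value>  <!-- local directory plus NFS-mounted backup -->
</property>
<property>
  <name>dfs.block.size</name>
  <value>134217728</value>  <!-- 128 MB, expressed in bytes -->
</property>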
mapred-site configuration
mapred.hosts - Names a file that contains the list of nodes that may connect to the jobtracker. If the value is empty, all hosts are permitted.
mapred.hosts.exclude - Names a file that contains the list of hosts that should be excluded by the jobtracker. If the value is empty, no hosts are excluded.
mapred.max.tracker.failures - The number of task failures on a tasktracker for a given job after which new tasks of that job aren't assigned to it. Default is 4.
mapred.max.tracker.blacklists - The number of blacklists for a tasktracker by various jobs after which the tasktracker could be blacklisted across all jobs. The tracker will be given tasks again later (after a day), and becomes a healthy tracker after a restart. Default is 4.
mapred.reduce.tasks - The default number of reduce tasks per job. Typically set to 99% of the cluster's reduce capacity, so that if a node fails the reduces can still be executed in a single wave. Ignored when mapred.job.tracker is "local". Default: 1. Recommendation: set it to 90%.
mapred.map.tasks.speculative.execution - If true, then multiple instances of some map tasks may be executed in parallel. Default: true.
mapred.reduce.tasks.speculative.execution - If true, then multiple instances of some reduce tasks may be executed in parallel. Default: true. Recommended: false.
mapred.tasktracker.map.tasks.maximum - The maximum number of map tasks that will be run simultaneously by a tasktracker. Default: 2. Recommendation: set relative to the number of CPUs and the amount of memory on each data node.
mapred.tasktracker.reduce.tasks.maximum - The maximum number of reduce tasks that will be run simultaneously by a tasktracker. Default: 2. Recommendation: set relative to the number of CPUs and the amount of memory on each data node.
mapred.jobtracker.taskScheduler - The class responsible for scheduling the tasks. The default points to the FIFO scheduler. Recommendation: use the Fair scheduler, org.apache.hadoop.mapred.FairScheduler.
mapred.jobtracker.restart.recover - Recover failed jobs when the JobTracker restarts. For production clusters, recommended to be set to TRUE.
mapred.local.dir - The local directory where MapReduce stores intermediate data files. May be a comma-separated list of directories on different devices in order to spread disk I/O. Directories that do not exist are ignored. Default: ${hadoop.tmp.dir}/mapred/local
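A mapred-site.xml sketch applying two of the recommendations above (the slot count is an illustrative assumption, to be sized to each node's CPUs and memory):
<property>
  <name>mapred.reduce.tasks.speculative.execution</name>
  <value>false</value>  <!-- recommended above: disable speculative reduces -->
</property>
<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>8</value>  <!-- illustrative; match CPUs and memory on each data node -->
</property>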
Setting rack topology (rack awareness)
Rack topology can be defined by a script which specifies which node is on which rack:
<property>
  <name>topology.script.file.name</name>
  <value>/opt/ibm/biginsights/hadoop-conf/rack-aware.sh</value>
</property>
The network topology script (topology.script.file.name in the above example) receives as arguments one or more IP addresses of nodes in the cluster. It returns on stdout a list of rack names, one for each input. The input and output order must be consistent.
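A minimal sketch of such a script (the subnet-to-rack mapping is an invented example; a real script must reflect your own network):
#!/bin/bash
# Print one rack name per input argument (IP address or hostname),
# in the same order as the arguments were given.
for node in "$@"; do
  case "$node" in
    192.168.1.*) echo "/rack1" ;;
    192.168.2.*) echo "/rack2" ;;
    *)           echo "/default-rack" ;;
  esac
done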
Thank you!
http://bit.ly/cascon2012 @leonsp, @bsteinfe, @mariusbutuc