
Cluster hosts:

thw-dv-Hadoop-01   10.1.30.165
thw-dv-Hadoop-02   10.1.30.168
thw-dv-Hadoop-03   10.1.30.169

DNS resolvers (/etc/resolv.conf):

nameserver 10.1.204.241
nameserver 8.8.8.8

Static network configuration for thw-dv-Hadoop-03 (e.g. /etc/sysconfig/network-scripts/ifcfg-eno16777984):

TYPE=Ethernet
BOOTPROTO=static
IPV4_FAILURE_FATAL=no
IPV6INIT=no
IPV6_AUTOCONF=no
IPV6_DEFROUTE=no
IPV6_PEERDNS=no
IPV6_PEERROUTES=no
IPV6_FAILURE_FATAL=no
NAME=eno16777984
UUID=3852c517-4069-4cf8-bcb9-089c16ac25a3
DEVICE=eno16777984
ONBOOT=yes
IPADDR=10.1.30.169
NETMASK=255.255.255.0
GATEWAY=10.1.30.254
ZONE=public
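
After editing the ifcfg file, restart networking so the static address takes effect (standard CentOS 7 command, not part of the original notes):

systemctl restart network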

To disable SELinux at boot, edit the kernel boot entry in GRUB: go to the main kernel command line (usually starting with linux16 or something similar) and add selinux=0 as one of the parameters:
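
For example, on a stock CentOS 7 system the edited entry would look roughly like this (the kernel version, root device, and other parameters are illustrative and will differ on your machine):

linux16 /vmlinuz-3.10.0-327.el7.x86_64 root=/dev/mapper/centos-root ro crashkernel=auto rhgb quiet selinux=0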

EPEL-RELEASE DOWNLOAD & INSTALL

yum install epel-release

Get the latest repo address from https://ambari.apache.org/ and download it, e.g.:

sudo wget -nv http://public-repo-1.hortonworks.com/ambari/centos7/2.x/updates/2.2.2.0/ambari.repo -O /etc/yum.repos.d/ambari.repo

Configuring Passwordless SSH for root user

Enable SSH Key Logon and Disable Password (Password-Less) Logon in CentOS

This brief tutorial is going to show you how to log on to an SSH server without passwords, using only SSH encryption keys. The reason you may want to do this is to enable a more secure form of authentication to your SSH-enabled servers.

Using password authentication against SSH isn't bad as long as the password is long and complicated, well beyond normal password strength. But creating long and complicated passwords may also encourage you to write them down on a piece of paper or store them somewhere in an unsecured manner.

That's why using encryption keys to authenticate SSH connections is a more secure alternative.

Passwords also run the risk of being guessed or cracked. Key-based SSH authentication, on the other hand, makes it virtually impossible for anyone to brute-force their way into your servers. So, if you need a more secure way to sign on to your SSH server, implement password-less authentication and enable SSH key exchange.


This simple tutorial is going to show you how to do it in CentOS.

The first thing is to verify that SSH is installed. If it's not installed, run the command below to install SSH on CentOS.

yum -y install openssh-server

Create the client private/public key pair


When you run the command to generate a public/private key pair, it creates a pair of encryption keys on the client computer: a private key and its corresponding public key.

The private key always stays with the client. The public key is shared or copied to computers the client wishes to trust. Authentication is only allowed on the server when the correct private and public keys of the client requesting access are paired.

If the server that has the client's public key isn't able to match or pair the correct private key submitted by the client with the public key stored on the server, the connection will be rejected.

So, let's create the client's private/public key pair. To do that, run the command below on the client computer.

ssh-keygen -t rsa

After running the above command, you'll be prompted to complete a series of tasks. The first asks where to save the keys; press Enter to choose the default location, which is the hidden .ssh folder in your home directory (/root/.ssh/).

The next prompt asks you to enter a passphrase. I personally leave this blank (just press Enter) to continue. It will then create the key pair and you're done.

After generating the keys, you then need to copy the client's public key to the SSH server computer or host it wants to create a trust relationship with.

Copy the client public key to the SSH server


After generating the key pair, you must copy the client computer's public key to the SSH server host. The public key should be stored in the ~/.ssh/authorized_keys file on the server.

This file contains the public keys of all clients that have sent or copied their keys to the server. The server uses this file to match the public and private key pairs.

To copy the client's public key to the server, run the command below.

cat ~/.ssh/id_rsa.pub | ssh root@Server_IP_Address "cat >> ~/.ssh/authorized_keys"


Alternatively, you may run the command below to copy the key from the client to the server.

ssh-copy-id root@Server_IP_Address (or use the hostname instead of the IP address)

Edit the SSH configuration file to only allow key logon


Finally, edit the SSH configuration file to only allow SSH key login and disable password login. This is also known as password-less logon.

To do that, open the file using the commands below.

vi /etc/ssh/sshd_config

Then uncomment and change the lines to match the ones below. Make sure these lines are un-commented, meaning they don't have a (#) in front of them.

PubkeyAuthentication yes
AuthorizedKeysFile .ssh/authorized_keys
PasswordAuthentication no
ChallengeResponseAuthentication no

Save the file and reload SSH server by running the commands below.

service sshd reload

Now try accessing the SSH server; it shouldn't prompt you to enter your password.
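
For example, from thw-dv-Hadoop-01 you could check key-based login to the other nodes listed at the top of this document (the IP addresses are this cluster's; adjust for your own):

ssh root@10.1.30.168 hostname
ssh root@10.1.30.169 hostname

Each command should print the remote hostname without asking for a password.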

How to Install JAVA 8 (JDK/JRE 8u131) on CentOS/RHEL 7/6 and Fedora 25

After a long wait, Java SE Development Kit 8 is finally available to download. JDK 8 was released for general availability on March 18, 2014 with many feature enhancements. You can find all of the enhancements in the JDK 8 release notes.


This article will help you install Java 8 (JDK/JRE 8u131) or update it on your system. Read the instructions carefully before downloading Java from the Linux command line.

Downloading Latest Java Archive

Download the latest Java SE Development Kit 8 release from its official download page, or use the following commands to download it from the shell.

For 64-bit:
# cd /opt/
# wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u131-b11/d54c1d3a095b4ff2b6607d096fa80163/jdk-8u131-linux-x64.tar.gz"

# tar xzf jdk-8u131-linux-x64.tar.gz

For 32-bit:
# cd /opt/
# wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u131-b11/d54c1d3a095b4ff2b6607d096fa80163/jdk-8u131-linux-i586.tar.gz"

# tar xzf jdk-8u131-linux-i586.tar.gz

Install Java with Alternatives

After extracting the archive, use the alternatives command to register the new JDK. The alternatives command is provided by the chkconfig package.
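
If it is missing (for example on a minimal install), it can be installed first; this is a standard CentOS package, not something specific to this tutorial:

# yum -y install chkconfig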

# cd /opt/jdk1.8.0_131/
# alternatives --install /usr/bin/java java /opt/jdk1.8.0_131/bin/java 2
# alternatives --config java

There are 4 programs which provide 'java'.

Selection Command
-----------------------------------------------
* 1 /opt/jdk1.7.0_71/bin/java
+ 2 /opt/jdk1.8.0_45/bin/java
3 /opt/jdk1.8.0_91/bin/java
4 /opt/jdk1.8.0_131/bin/java

Enter to keep the current selection[+], or type selection number: 4

At this point Java 8 has been successfully installed on your system. We also recommend setting up paths for the javac and jar commands using alternatives:

# alternatives --install /usr/bin/jar jar /opt/jdk1.8.0_131/bin/jar 2


# alternatives --install /usr/bin/javac javac /opt/jdk1.8.0_131/bin/javac 2
# alternatives --set jar /opt/jdk1.8.0_131/bin/jar
# alternatives --set javac /opt/jdk1.8.0_131/bin/javac
Check Installed Java Version

Check the installed Java version on your system using the following command:

root@tecadmin ~# java -version

java version "1.8.0_131"


Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
Configuring Environment Variables

Most Java-based applications use environment variables to work. Set the Java environment variables using the following commands:

#Setup JAVA_HOME Variable


export JAVA_HOME=/opt/jdk1.8.0_131
#Setup JRE_HOME Variable
export JRE_HOME=/opt/jdk1.8.0_131/jre
#Setup PATH Variable
export PATH=$PATH:/opt/jdk1.8.0_131/bin:/opt/jdk1.8.0_131/jre/bin

Also put all of the above environment variables in the /etc/environment file so they are loaded automatically at system boot.
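
Note that /etc/environment is read by pam_env and does not expand variables such as $PATH, so a common alternative (an assumption on my part, not part of the original instructions) is to drop the exports into a profile script instead:

# /etc/profile.d/java.sh -- assumed location; sourced for login shells
export JAVA_HOME=/opt/jdk1.8.0_131
export JRE_HOME=/opt/jdk1.8.0_131/jre
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin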

FIREWALL DISABLE

systemctl stop firewalld
systemctl disable firewalld
service ip6tables stop
chkconfig ip6tables off
service iptables status
service iptables save
service iptables stop
chkconfig iptables off

(The systemctl commands are sufficient on CentOS 7; the service/chkconfig lines apply to the older iptables services on CentOS 6.)

SELINUX DISABLE

setenforce 0
vi /etc/selinux/config
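
Inside /etc/selinux/config, set the SELINUX directive so the change survives a reboot (setenforce 0 only lasts until the next boot):

SELINUX=disabled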

Disable Transparent Huge Pages (THP)


echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled
echo never > /sys/kernel/mm/redhat_transparent_hugepage/defrag

To make these changes persistent across reboots, add this to the bottom of /etc/rc.local:

#disable THP at boot time
if test -f /sys/kernel/mm/redhat_transparent_hugepage/enabled; then
echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled
fi
if test -f /sys/kernel/mm/redhat_transparent_hugepage/defrag; then
echo never > /sys/kernel/mm/redhat_transparent_hugepage/defrag
fi

(On CentOS 7 the paths are /sys/kernel/mm/transparent_hugepage/enabled and .../defrag; the redhat_transparent_hugepage paths are the RHEL/CentOS 6 names.)

HADOOP INSTALL

root@NameNode:~# cd /usr/local/
root@NameNode:/usr/local/# wget http://www.us.apache.org/dist/hadoop/common/hadoop-2.7.2/hadoop-2.7.2.tar.gz
root@NameNode:/usr/local/# tar -xzvf hadoop-2.7.2.tar.gz >> /dev/null
root@NameNode:/usr/local/# mv hadoop-2.7.2 /usr/local/hadoop
root@NameNode:/usr/local/# mkdir -p /usr/local/hadoop_work/hdfs/namenode
root@NameNode:/usr/local/# mkdir -p /usr/local/hadoop_work/hdfs/namesecondary

Setup JAVA_HOME under hadoop environment

It is suggested that you also set the JAVA_HOME environment variable in the Hadoop environment file. Open the hadoop-env.sh file:

root@NameNode:~# vi /usr/local/hadoop/etc/hadoop/hadoop-env.sh

Inside the file, find the line export JAVA_HOME=${JAVA_HOME} and replace it with the path of the JDK installed earlier, e.g.:

export JAVA_HOME=/opt/jdk1.8.0_131

Configure core-site.xml

This XML configuration file lets you set up site-specific properties, such as I/O settings, that are common to HDFS and MapReduce. Open the file and add the following properties:

root@NameNode:/usr/local/hadoop/etc/hadoop# vi core-site.xml
core-site.xml
<?xml version="1.0"?>
<!-- core-site.xml -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://NameNode:8020/</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
</configuration>

Configure hdfs-site.xml

In this file, under the configuration tag, we tell Hadoop where the NameNode directory is (which we created previously at the end of the Hadoop installation) and how many copies of each data block to keep in the system (called the replication factor).

root@NameNode:/usr/local/hadoop/etc/hadoop# vi hdfs-site.xml
hdfs-site.xml
<?xml version="1.0"?>
<!-- hdfs-site.xml -->
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/usr/local/hadoop_work/hdfs/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/usr/local/hadoop_work/hdfs/datanode</value>
  </property>
  <property>
    <name>dfs.namenode.checkpoint.dir</name>
    <value>file:/usr/local/hadoop_work/hdfs/namesecondary</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.block.size</name>
    <value>134217728</value>
  </property>
</configuration>

Configure mapred-site.xml

This file controls the configuration settings for the MapReduce daemons. Here we need to ensure that we use the YARN framework. We will also configure the MapReduce Job History Server. Copy the template file mapred-site.xml.template:

root@NameNode:/usr/local/hadoop/etc/hadoop# cp mapred-site.xml.template mapred-site.xml
root@NameNode:/usr/local/hadoop/etc/hadoop# vi mapred-site.xml

Then add the following properties:

mapred-site.xml
<?xml version="1.0"?>
<!-- mapred-site.xml -->
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>NameNode:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>NameNode:19888</value>
  </property>
  <property>
    <name>yarn.app.mapreduce.am.staging-dir</name>
    <value>/user/app</value>
  </property>
  <property>
    <name>mapred.child.java.opts</name>
    <value>-Djava.security.egd=file:/dev/../dev/urandom</value>
  </property>
</configuration>

Configure yarn-site.xml

This XML configuration file lets you set up YARN site-specific properties for the ResourceManager and NodeManager. Open the file:

root@NameNode:/usr/local/hadoop/etc/hadoop# vi yarn-site.xml

Then put the following properties under the configuration tag:

yarn-site.xml
<?xml version="1.0"?>
<!-- yarn-site.xml -->
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>NameNode</value>
  </property>
  <property>
    <name>yarn.resourcemanager.bind-host</name>
    <value>0.0.0.0</value>
  </property>
  <property>
    <name>yarn.nodemanager.bind-host</name>
    <value>0.0.0.0</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.nodemanager.local-dirs</name>
    <value>file:/usr/local/hadoop_work/yarn/local</value>
  </property>
  <property>
    <name>yarn.nodemanager.log-dirs</name>
    <value>file:/usr/local/hadoop_work/yarn/log</value>
  </property>
  <property>
    <name>yarn.nodemanager.remote-app-log-dir</name>
    <value>hdfs://NameNode:8020/var/log/hadoop-yarn/apps</value>
  </property>
</configuration>

Setup the Master

Now you need to tell the Hadoop NameNode the hostname of the Secondary NameNode. In our case both the NameNode and the Secondary NameNode reside on the same machine. We do this by editing the masters file:

root@NameNode:/usr/local/hadoop/etc/hadoop# vi masters
Inside the file, put one line:

NameNode

Next, we format the NameNode before starting anything:

root@NameNode:/usr/local/hadoop/etc/hadoop# /usr/local/hadoop/bin/hadoop namenode -format

You should see a success message saying "Storage directory /usr/local/hadoop_work/hdfs/namenode has been successfully formatted".

Installing Ambari on Centos 7

Create the damstream user: adduser damstream
Set a password for it: passwd damstream
Give it sudo permission: gpasswd -a damstream wheel
Update the repositories: sudo yum -y update
Disable SELinux (and PackageKit, if installed) and check the umask value: setenforce 0
Edit the hosts file: vi /etc/hosts
Add a line for each host in your cluster, consisting of the IP address and the FQDN, for example: 192.168.41.21 DSambari.novalocal hortonworks (see the /etc/hosts sketch after this list)
Disable the firewall: systemctl disable firewalld; systemctl stop firewalld
Enable NTP on the cluster and on the browser host: sudo yum -y install ntp; sudo systemctl enable ntpd; sudo systemctl start ntpd
Disable Transparent Huge Pages: echo never > /sys/kernel/mm/transparent_hugepage/enabled
Install wget: sudo yum install -y wget
Get the latest repo address from https://ambari.apache.org/ and download it, e.g.: sudo wget -nv http://public-repo-1.hortonworks.com/ambari/centos7/2.x/updates/2.2.2.0/ambari.repo -O /etc/yum.repos.d/ambari.repo
Update the repo list: yum repolist
Install the Ambari server: yum install ambari-server
Create an SSH key: ssh-keygen; cd /home/damstream/.ssh; cat id_rsa.pub >> authorized_keys; chmod 700 ~/.ssh; chmod 600 ~/.ssh/authorized_keys
Run the Ambari server setup: sudo ambari-server setup
Start the Ambari server: sudo ambari-server start
Follow the wizard (next, next, next) and use the SSH key.
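
A sketch of the /etc/hosts entries for the three hosts listed at the top of this document (assuming the short names double as the FQDNs; add your real domain if you have one):

10.1.30.165 thw-dv-Hadoop-01
10.1.30.168 thw-dv-Hadoop-02
10.1.30.169 thw-dv-Hadoop-03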

During troubleshooting, and more specifically while using the Ambari API, it might be necessary to know the Ambari cluster name. This section describes a simple way to find the cluster name via the command line.

Procedure
1. Log into the Ambari node as the user root.
2. Query the clusters endpoint, for example: curl -s -u admin:admin http://localhost:8080/api/v1/clusters/
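
The cluster name appears in the response JSON. A trimmed sketch of the relevant part, assuming the cluster is named Hadoop as in the curl call below:

{
  "items" : [
    {
      "Clusters" : {
        "cluster_name" : "Hadoop"
      }
    }
  ]
}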

Find which servers a given service is running on:


curl -s -u admin:admin -H "X-Requested-By: Ambari" -X GET http://10.1.30.165:8080/api/v1/clusters/Hadoop/services/KAFKA

If you query a service that is not installed, the API returns an error like:

Invalid Request: The service[SPARK2] associated with the component[SPARK2_CLIENT] doesn't exist for the cluster[Hadoop]
