
OneNote: one place for all of your notes

Watch the 2 minute video

1. Take notes anywhere on the page


Write your name here

2. Get organized
You start with "My Notebook" - everything lives in here

Add sections for activities like:

Add pages inside of each section:



(Pages are over there)

3. For more tips, check out 30 second videos

Clip from the web
Plan a trip with others
Search notes instantly
Write notes on slides

4. Create your first page


You're in the Quick Notes section - use it for random notes



OneNote Basics

Remember everything
Add Tags to any notes
Make checklists and to-do lists
Create your own custom tags

Collaborate with others


Keep your notebooks on OneDrive
Share with friends and family
Anyone can edit in a browser

Keep everything in sync


People can edit pages at the same time
Real-Time Sync on the same page
Everything stored in the cloud
Accessible from any device



Clip from the web
Quickly clip anything on your screen
Take screenshots of products online
Save important news articles

Organize with tables


Type, then press TAB to create a table
Quickly sort and shade tables
Convert tables to Excel spreadsheets

Write notes on slides


Send PowerPoint or Word docs to OneNote
Annotate with a stylus on your tablet
Highlight and finger-paint



Integrate with Outlook
Take notes on Outlook or Lync meetings
Insert meeting details
Add Outlook tasks from OneNote

From Outlook:

Add Excel spreadsheets


Track finances, budgets, & more
Preview updates on the page

Brainstorm without clutter


Hide everything but the essentials
Extra space to focus on your notes



Take quick notes
Quickly jot down thoughts and ideas
They go into your Quick Notes section



Stats
Friday, April 22, 2016 10:19 AM

yarn queue -status dev


yarn queue -status <queuename> - Gives the statistics of the queue.

Also, for Maven projects, all dev teams should use the Hortonworks repo instead of the public Apache repo:

<repositories>
  <repository>
    <releases>
      <enabled>true</enabled>
      <updatePolicy>always</updatePolicy>
      <checksumPolicy>warn</checksumPolicy>
    </releases>
    <snapshots>
      <enabled>false</enabled>
      <updatePolicy>never</updatePolicy>
      <checksumPolicy>fail</checksumPolicy>
    </snapshots>
    <id>HDPReleases</id>
    <name>HDP Releases</name>
    <url>http://repo.hortonworks.com/content/repositories/releases/</url>
    <layout>default</layout>
  </repository>
</repositories>
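
For reference, a hypothetical dependency pulled from that repo; the HDP-style version string below is an assumption inferred from the /usr/hdp/2.2.4.2-2 paths used elsewhere in these notes:

<!-- Hypothetical example: the HDP build of hadoop-client from the repo above.
     The 2.6.0.2.2.4.2-2 version string is an assumption, not confirmed here. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>2.6.0.2.2.4.2-2</version>
</dependency>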

Hi All,

Please find the attached script for taking a backup and recreating the HBase table with pre-splits. Please follow the
steps below. The example commands use an HBase table named master:TEST_ADD with two columns, A and B.

Backup the data

o pig -f BackUpHBaseInHdfs.pig -param TABLENAME=master:TEST_ADD -param columnsList=cf1:A,cf1:B -param ColumnNameWithType=A:chararray,B:chararray -param outputDir=/dev/tset/raw/hbase/TEST_ADD

Validate the data generated in the given output folder

Drop the Hbase table (see the sketch after these steps for this and the validation step)

Create the Hbase table with pre-split and Snappy

o create 'master:TEST_ADD', {NAME => 'cf1', COMPRESSION => 'SNAPPY'}, {SPLITS => ['00000', '00001', '00002', '00003', '00004', '00005', '00006', '00007', '00008', '1','2','3','4','5','6','7','8','9','a','c','e','g','l','j','k','m','o','q','s','u','w','y']}

Reload the data

o pig -f LoadHbaseFromBackup.pig -param TABLENAME=master:TEST_ADD -param columnsList=cf1:A,cf1:B -param ColumnNameWithType=A:chararray,B:chararray -param outputDir=/dev/tset/raw/hbase/TEST_ADD
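
The validate and drop steps, as a minimal sketch assuming the standard HDFS CLI and HBase shell (paths and table name from the example above):

# Validate: confirm the backup files landed in the output folder
hdfs dfs -ls /dev/tset/raw/hbase/TEST_ADD

# Drop: HBase requires disabling a table before it can be dropped
# (run these inside `hbase shell`)
disable 'master:TEST_ADD'
drop 'master:TEST_ADD'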



Hbase commands:
Thursday, March 03, 2016 2:03 PM

Creating a pre-split table like below

create 'lineofservice:ln_of_srvc_ofrg_instnc', 'cf1', SPLITS => ['1','2','3','4','5','6','7','8','9','a','c','e','g','l','j','k','m','o','q','s','u','w','y']

create 'party:NPANXX', {NAME => 'cf1', COMPRESSION => 'SNAPPY'}, {SPLITS => ['00000', '00001', '00002', '00003', '00004', '00005', '00006', '00007', '00008', '1','2','3','4','5','6','7','8','9','a','c','e','g','l','j','k','m','o','q','s','u','w','y']}

Hbase Limit command:

scan 'hbase_table', {LIMIT => 5}

snapshot 'sourceTable', 'sourceTable-Snapshot'

clone_snapshot 'sourceTable-Snapshot', 'newTable'

Example:
snapshot 'DEV_RPX.PAG_ADM.NGP_REF', 'DEV_RPX.PAG_ADM.NGP_REF_SNAP'
clone_snapshot 'DEV_RPX.PAG_ADM.NGP_REF_SNAP', 'DEV_RPX.PAG_ADM.NGP_REF_BKP'

HBase commands:
get 'DEV_NCS.CPN_OWN.SKU_REF','984216:1900-01-01 12:00:00','CF1:SKU_ID'
get '<tablename>','<rowkey>','CF1:<columnname>'

deleteall 'DEV_NCS.CPN_OWN.SKU_REF','rowkey'



Hive commands:
Thursday, March 03, 2016 2:04 PM

alter table dimnpanxx drop partition (partdate=20150120031606);


ALTER TABLE sku_ref DROP PARTITION (stagingpartdate > '0');
ALTER TABLE partner_ref DROP PARTITION (stagingpartdate=20160318045645);

ALTER TABLE event_adjustment ADD PARTITION (stagingpartdate=20160415052530) location '/dev/rpx_event_adm_staging/event_adjustment/20160415052530';

set mapred.job.queue.name=dev;
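
A quick way to confirm a partition exists before or after dropping it (same tables as above):

-- Hive: list partitions for the tables used above
SHOW PARTITIONS sku_ref;
SHOW PARTITIONS event_adjustment;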



Pig
Thursday, March 03, 2016 2:04 PM

export PIG_OPTS="-Dhive.metastore.uris=thrift://devehdp004.unix.gsm1900.org:9083 -Dmapred.job.queue.name=dev"
export PIG_CLASSPATH=/usr/hdp/current/hive-webhcat/share/hcatalog/*:/usr/hdp/2.2.4.2-2/hive/lib/*
pig -useHCatalog

REGISTER /usr/hdp/2.2.4.2-2/hbase/lib/*.jar;
REGISTER /usr/hdp/current/hive-webhcat/share/hcatalog/*.jar;
REGISTER /usr/hdp/2.2.4.2-2/hive/lib/*.jar;
REGISTER /usr/local/share/eit_hadoop/applications/idw/Finance_Retrofit/piggybank.jar;
REGISTER /usr/local/share/eit_hadoop/applications/idw/Finance_Retrofit/idwudf-1.0.jar;
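
A minimal sketch of reading a Hive table from Pig once -useHCatalog is in effect; mydb.mytable is a hypothetical placeholder:

-- Load a Hive table through HCatalog (mydb.mytable is a placeholder)
A = LOAD 'mydb.mytable' USING org.apache.hive.hcatalog.pig.HCatLoader();
DUMP A;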



Oozie
Wednesday, March 16, 2016 2:59 PM

Killing an Oozie job:

oozie job -oozie http://devehdp004.unix.gsm1900.org:11000/oozie -kill 0000683-160123215237769-oozie-oozi-W
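
To check a job's status before killing it (same Oozie URL and job id as above):

# Show status and action details of a workflow job
oozie job -oozie http://devehdp004.unix.gsm1900.org:11000/oozie -info 0000683-160123215237769-oozie-oozi-W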



Sqoop commands
Tuesday, March 22, 2016 10:35 AM

Connecting to the MySQL db
mysql --host=devehdp004.unix.gsm1900.org --user=sqoop --password=sqoop

Deleting a Sqoop job in DEV
sqoop job --meta-connect "jdbc:mysql://devehdp004.unix.gsm1900.org:3306/sqoop?user=sqoop&password=sqoop" --delete hdp_event_account_transaction_event_disp_EXPORT

sqoop job --meta-connect 'jdbc:mysql://devehdp004.unix.gsm1900.org:3306/sqoop?user=sqoop&password=sqoop' --show hdp_SAMSON_REFERENCE_SOC_IDW_IMPORT

sqoop job --meta-connect 'jdbc:mysql://prdehdp079.unix.gsm1900.org:3306/sqoop?user=sqoop&password=sqoopaqmin123' --delete hdp_product_offering_samson_disp_EXPORT
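
To find job names before showing or deleting them (same metastore connect string as above):

# List all saved Sqoop jobs in the shared metastore
sqoop job --meta-connect 'jdbc:mysql://devehdp004.unix.gsm1900.org:3306/sqoop?user=sqoop&password=sqoop' --list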



Phoenix
Thursday, April 21, 2016 12:30 PM

Below are the instructions for using the Phoenix client on a Hadoop server. You are seeing the error message because the
client is unable to read the HBase configs.

export HBASE_CONF_PATH=/etc/hbase/conf:/etc/hadoop/conf

cd /usr/hdp/current/phoenix-client/bin
./sqlline.py devehdp001,devehdp002,devehdp003:2181:/hbase-secure

Note: Make sure you have a valid Kerberos ticket before starting the sqlline client

# command
kinit <<NTID>>@GSM1900.ORG

[sparepa@devehdp006 bin]$ ./sqlline.py devehdp001,devehdp002,devehdp003:2181:/hbase-secure


Setting property: [isolation, TRANSACTION_READ_COMMITTED]
issuing: !connect jdbc:phoenix:devehdp001,devehdp002,devehdp003:2181:/hbase-secure none none
org.apache.phoenix.jdbc.PhoenixDriver
Connecting to jdbc:phoenix:devehdp001,devehdp002,devehdp003:2181:/hbase-secure
Connected to: Phoenix (version 4.2)
Driver: PhoenixEmbeddedDriver (version 4.2)
Autocommit status: true
Transaction isolation: TRANSACTION_READ_COMMITTED
Building list of tables and columns for tab-completion (set fastconnect to true to skip)...
134/134 (100%) Done
Done
sqlline version 1.1.2
0: jdbc:phoenix:devehdp001,devehdp002,devehdp>

Commands:
!tables
!describe <tablename>
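
A quick smoke test once connected; <tablename> is a placeholder:

-- Sample a few rows from a table
SELECT * FROM <tablename> LIMIT 10;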



Issues:
Friday, April 22, 2016 2:51 PM

First:
Did you run this job after adding the below property? Any job running beyond 24 hours fails because its
delegation token is cancelled. If you are still seeing the same issue even after adding that property,
then the change has to be made at the cluster level.

<property>
  <name>mapreduce.job.complete.cancel.delegation.tokens</name>
  <value>false</value>
</property>

Second:
Set the below properties to 900000:
1. hbase.rpc.timeout
2. hbase.client.scanner.timeout.period
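
As a sketch, those settings in hbase-site.xml form (values from the note above):

<property>
  <name>hbase.rpc.timeout</name>
  <value>900000</value>
</property>
<property>
  <name>hbase.client.scanner.timeout.period</name>
  <value>900000</value>
</property>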

Third:



Cluster Details:
Monday, April 18, 2016 2:33 PM

Production details:
211 datanodes
24 cores
126 GB RAM
111 HBase region servers
Each data node size: 24 TB (of which 2 TB is used for the hard disk root)
Total HDFS storage is 3.6 PB

Versions:
Hortonworks version: 2.2.4
Hadoop: 2.6
Pig: 0.14
Hive: 0.14
Oozie: 4.1
Hbase: 0.98.4



QA Details
Saturday, March 05, 2016 11:38 AM

QA Cluster Details
IP Address: 10.158.163.15
Hostname: qatehdp005.unix.gsm1900.org

Login: NT id and password


Kerberos problem: kinit aakula@GSM1900.ORG and enter password when prompted



ABC Details
Friday, March 11, 2016 11:27 AM

Dev Details:
Dev UI URL: http://devbeam002.unix.gsm1900.org:8080/abcapp/login#
Credentials: dadmin/abcuiadmin2016

Dev Balance Check Logs URL: http://devbeam002.unix.gsm1900.org:8080/abc/balanceCheckLogs


Dev Control Definition URL: http://devbeam002.unix.gsm1900.org:8080/abc/controlDefinitions

MySql details:
Hostname : devehdp006.unix.gsm1900.org
Port : 3306
DB Name : demo_abc_platform
Username : readonlyabc
Password: readonly123

Username | Password | Role | Comments
financeretrofitui | retrofitui2016 | ROLE_UI_EDIT | User to be used for all UI operations
financeretrofitimport | retrofitimport2016 | ROLE_DADMIN | User to be used to test the Export/Import functionality to the DevInt environment: http://devbeam002.unix.gsm1900.org:8080/abc_devint/import
financeretrofitws | retrofitws2016 | ROLE_USER | To be used by the job for integration with ABC in the Dev env (replacement for the generic TestUser account)

QA Details:
QA UI URL: http://qatbeam001.unix.gsm1900.org:8080/abc_qat
Credentials: dadmin/abcuiadmin2016

QA Balance Check Logs URL: https://qatbeam001.unix.gsm1900.org:8443/abc_qat/balanceCheckLogs


QA Control Definition URL: https://qatbeam001.unix.gsm1900.org:8443/abc_qat/controlDefinitions

For Testing:
For control check service testing, you can look into the following runtime table:
CONTROL_DETAILS
For balance check service testing, you can look into the following runtime tables:
BALANCE_CHECK_LOG
MISMATCH_CHECK_LOG
MISMATCH_JOB_STATUS_CHECK
You can use an individual process id from the table LOAD_RPOCESS_AUDIT_STATS and link it to the runtime tables above.

Notes:
ABC ERD pdf:
https://tmobileusa.sharepoint.com/teams/da/DM/PR207463/PL/000 IDW Architecture Design and Development/Frameworks/ABC/design/abc_ERD_20151101.pdf



Sqoop
Friday, March 11, 2016 3:45 PM

Sqoop Metastore
1) Log in to devehdp004
2) mysql -u sqoop -p
3) password: sqoop



Softwares
Friday, March 11, 2016 4:18 PM

Git: https://git-scm.com/downloads



Github Details
Monday, March 14, 2016 1:54 PM

View Pre-Prep DEV: https://gitenterprise.unix.gsm1900.org/orgs/IDW-DATA-PRE-PREPARATION/teams/pre-prep-dev
View Preparation-DEV: https://gitenterprise.unix.gsm1900.org/orgs/IDW-DATA-PREPARATION/teams/preparation-dev
View dispatch-jobs-DEV: https://gitenterprise.unix.gsm1900.org/orgs/IDW-DATA-DISPATCH/teams/dispatch-jobs-dev

Read more about team permissions here: https://help.github.com/enterprise/2.4/user/articles/what-are-the-different-access-permissions

Git Commands:
Right-click -> Git Bash Here
git config --global user.name aakula
git config --global user.email anusha.akula@T-Mobile.com

git pull
git sync
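
A minimal day-to-day sketch; the repo URL is a hypothetical placeholder on the GitHub Enterprise host above:

# Clone, commit, and push (repo URL is a placeholder)
git clone https://gitenterprise.unix.gsm1900.org/IDW-DATA-PREPARATION/some-repo.git
cd some-repo
git add .
git commit -m "Describe the change"
git push origin master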

Jenkins:
URL: http://prdcicd005.unix.gsm1900.org:8080/login?from=%2F
Username: aakula
Password: NT password

Goto - Finance_Retrofit -> Build with Parameters ->

Nexus:
URL: http://prdcicd003.unix.gsm1900.org:8081/nexus
ID: aakula
Password: password



Edge node details
Wednesday, March 16, 2016 2:49 PM

IP Address: 10.158.31.206
Username: hdpsrvc
Password: G+j4y=z6s@ef-uh_che4ut&7bE?5xa



Dev Details
Monday, April 18, 2016 2:29 PM

Ambari Dev Link:


http://devehdp004.unix.gsm1900.org:8080
Username/password: ambari/ambari



Pig
Tuesday, March 22, 2016 12:06 PM

PIG performance: http://blog.cloudera.com/blog/2015/07/how-to-tune-mapreduce-parallelism-in-apache-pig-jobs/
Good PIG performance: https://www.xplenty.com/blog/2014/05/improving-pig-data-integration-performance-with-join/
PIG performance: http://hortonworks.com/blog/pig-performance-and-optimization-analysis/
Good PIG performance: http://pig.apache.org/docs/r0.9.1/perf.html



Hive
Friday, March 25, 2016 2:56 PM

http://sanjivblogs.blogspot.com/2015/05/10-ways-to-optimizing-hive-queries.html - Hive

https://www.linkedin.com/pulse/orc-files-storage-index-big-data-hive-mich-talebzadeh-ph-d- - ORC file format

https://code.facebook.com/posts/229861827208629/scaling-the-facebook-data-warehouse-to-300-pb/ - ORC Performance

https://snippetessay.wordpress.com/2015/07/25/hive-optimizations-with-indexes-bloom-filters-and-statistics/ - Performance of hive

https://www.linkedin.com/pulse/hive-functions-udfudaf-udtf-examples-gaurav-singh - Hive UDF example



Hbase
Friday, March 25, 2016 4:50 PM

http://phoenix.apache.org/presentations/OC-HUG-2014-10-4x3.pdf - Phoenix

https://www.mapr.com/blog/in-depth-look-hbase-architecture - MapR Hbase Architecture



Misc
Sunday, March 06, 2016 1:30 PM

http://www.cs.brandeis.edu/~rshaull/cs147a-fall-2008/hadoop-troubleshooting/ - Troubleshooting
http://0x0fff.com/hadoop-mapreduce-comprehensive-description/ - MapReduce

http://0x0fff.com/spark-architecture-video/ - Spark Architecture



Phoenix
Tuesday, May 17, 2016 2:36 PM

Phoenix grammar:
https://phoenix.apache.org/language/index.html#create_view
https://phoenix.apache.org/faq.html - FAQs
https://phoenix.apache.org/Phoenix-in-15-minutes-or-less.html - Phoenix in 15 mins

http://kubilaykara.blogspot.com/2015/07/query-existing-hbase-tables-with-sql.html

https://community.hortonworks.com/questions/12538/phoenix-storage-in-pig.html - Pig reading a Phoenix table
https://phoenix.apache.org/pig_integration.html - Pig integration



Spark
Monday, May 23, 2016 3:12 PM

https://github.com/JerryLead/SparkInternals

https://github.com/JerryLead/SparkInternals/blob/master/EnglishVersion/3-JobPhysicalPlan.md

