
ANALYSIS OF OUTPUT

CRAFTED BY:

RASHI GUPTA(07)
PRIYANKA JHA(18)
BARRY CLIFF(42)
PRIYA RANJAN SINGH(57)
DIBYA LOCHAN PRADHAN(65)
Factor Analysis

Descriptive Statistics

Variable         Mean    Std. Deviation    Analysis N
kind             6.57    3.823             7
intelligence     7.71    1.496             7
happy            6.71    2.928             7
likeable         6.71    3.904             7
just             7.29    2.870             7

SCALING:
1-VERY STRONG
9-VERY WEAK

Correlation Matrix

               kind    intelligence    happy    likeable    just
kind           1.000       .296         .881      .995      .545
intelligence    .296      1.000        -.022      .326      .837
happy           .881      -.022        1.000      .867      .130
likeable        .995       .326         .867     1.000      .544
just            .545       .837         .130      .544     1.000

Communalities (Rescaled)

Variable         Initial    Extraction
kind             1.000      .997
intelligence     1.000      .794
happy            1.000      .969
likeable         1.000      .992
just             1.000      .990
Extraction Method: Principal Component Analysis.
• Communalities indicate the amount of variance in each variable that is accounted for.

• Initial communalities are estimates of the variance in each variable accounted for by all components or factors. For principal components extraction,
this is always equal to 1.0 for correlation analyses.

• Extraction communalities are estimates of the variance in each variable accounted for by the components. The communalities in this table are all high,
which indicates that the extracted components represent the variables well. If any communalities are very low in a principal components extraction,
you may need to extract another component.
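• As a quick cross-check, an extraction communality is simply the sum of a variable's squared loadings on the retained components. A minimal Python sketch, using the rescaled loadings reported in the Component Matrix further below (the small differences from the table above come from rounding of the printed loadings):

import numpy as np

# Rescaled loadings on the two retained components, taken from the
# Component Matrix (rows: kind, intelligence, happy, likeable, just).
loadings = np.array([
    [ .998, -.042],   # kind
    [ .354,  .818],   # intelligence
    [ .869, -.462],   # happy
    [ .996, -.029],   # likeable
    [ .577,  .811],   # just
])

# Extraction communality = sum of squared loadings across the retained
# components; this approximately reproduces the Extraction column above.
communalities = (loadings ** 2).sum(axis=1)
print(communalities.round(3))   # ~ [0.998 0.794 0.969 0.993 0.991]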
Total Variance Explained (Rescaled)

Initial Eigenvalues(a)
Component    Total         % of Variance    Cumulative %
1            39.161        80.077            80.077
2             8.780        17.954            98.031
3              .661         1.352            99.383
4              .302          .617           100.000
5           -3.222E-15    -6.588E-15        100.000

Extraction Sums of Squared Loadings
Component    Total    % of Variance    Cumulative %
1            3.201    64.018           64.018
2            1.542    30.847           94.865

Rotation Sums of Squared Loadings
Component    Total    % of Variance    Cumulative %
1            2.802    56.037           56.037
2            1.941    38.828           94.865

a. When analyzing a covariance matrix, the initial eigenvalues are the same across the raw and rescaled solution.
• The variance explained by the initial solution, extracted components, and rotated components is displayed. The first section of the table shows the initial eigenvalues.

• The second section of the table shows the extracted components. They explain nearly 98% of the variability in the original 5 variables, so you can considerably reduce the complexity of the data set by using these components, with only a 2% loss of information.

• The rotation maintains the cumulative percentage of variation explained by the extracted components, but that variation is now spread more evenly over the components. The large changes in the individual totals suggest that the rotated component matrix will be easier to interpret than the unrotated matrix.
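• As a rough check on these figures, the initial eigenvalues can be recomputed from the descriptive statistics and the correlation matrix reported above. A minimal Python sketch, assuming the extraction was run on the covariance matrix (as footnote a indicates); small discrepancies come from rounding in the printed tables:

import numpy as np

# Standard deviations and correlations copied from the Descriptive
# Statistics and Correlation Matrix tables above.
std = np.array([3.823, 1.496, 2.928, 3.904, 2.870])
R = np.array([
    [1.000,  .296,  .881,  .995,  .545],
    [ .296, 1.000, -.022,  .326,  .837],
    [ .881, -.022, 1.000,  .867,  .130],
    [ .995,  .326,  .867, 1.000,  .544],
    [ .545,  .837,  .130,  .544, 1.000],
])

# Rebuild the covariance matrix S = D R D with D = diag(std).
S = np.outer(std, std) * R

eig = np.sort(np.linalg.eigvalsh(S))[::-1]   # initial eigenvalues, largest first
pct = 100 * eig / eig.sum()                  # % of variance per component
print(eig.round(3))                          # ~ [39.161  8.780  0.661  0.302  0.000]
print(np.cumsum(pct).round(3))               # ~ [80.077  98.031  99.383  100.000  100.000]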

Component Matrix(a) (Rescaled)

Variable         Component 1    Component 2
kind                .998           -.042
intelligence        .354            .818
happy               .869           -.462
likeable            .996           -.029
just                .577            .811
Extraction Method: Principal Component Analysis.
a. 2 components extracted.
Rotated Component Matrix(a) (Rescaled)

Variable         Component 1    Component 2
kind                .947            .316
intelligence        .040            .890
happy               .977           -.122
likeable            .941            .327
just                .251            .963
Extraction Method: Principal Component Analysis.
Rotation Method: Varimax with Kaiser Normalization.
a. Rotation converged in 3 iterations.

• After rotation:
• Kind, Happy and Likeable come under Component 1.
• Intelligence and Just come under Component 2.

Naming:
• Component 1: Happy-go-lucky person.
• Component 2: Intelligent and just.
Component Transformation Matrix

Component      1        2
1             .935     .356
2            -.356     .935
Extraction Method: Principal Component Analysis.
Rotation Method: Varimax with Kaiser Normalization.
Component Score Coefficient Matrix

Variable         Component 1    Component 2
kind                .373            .067
intelligence       -.055            .202
happy               .338           -.354
likeable            .380            .090
just               -.157            .754
Extraction Method: Principal Component Analysis.

• For each case and each component, the component score is computed by multiplying the case's standardized variable values (computed using listwise deletion) by the component's score coefficients. The resulting two component score variables are representative of, and can be used in place of, the five original variables with only a 2% loss of information.
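• A minimal Python sketch of this computation; the 7 x 5 matrix of raw ratings is not reproduced in this output, so the ratings argument below is a placeholder to be filled with the original data:

import numpy as np

# Component score coefficients from the table above (rows: kind,
# intelligence, happy, likeable, just; columns: components 1 and 2).
coeff = np.array([
    [ .373,  .067],
    [-.055,  .202],
    [ .338, -.354],
    [ .380,  .090],
    [-.157,  .754],
])

def component_scores(ratings):
    # ratings: (n_cases, 5) array of raw ratings, one column per variable.
    # Each case's values are standardized, then multiplied by the score
    # coefficients to give one score per extracted component.
    z = (ratings - ratings.mean(axis=0)) / ratings.std(axis=0, ddof=1)
    return z @ coeff    # shape (n_cases, 2)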
Scatterplot Matrix of Component Scores

The first plot in the first row shows the first component on the vertical axis versus the second component on the horizontal axis.
CLUSTER ANALYSIS:

Case Processing Summary(a)

Cases:   Valid N = 7 (100.0%)    Missing N = 0 (.0%)    Total N = 7 (100.0%)
a. Squared Euclidean Distance used

Agglomeration Schedule

        Cluster Combined                       Stage Cluster First Appears
Stage   Cluster 1   Cluster 2   Coefficients   Cluster 1   Cluster 2    Next Stage
1           1           4           1.429          0           0            3
2           2           5           8.095          0           0            4
3           1           3           8.095          1           0            4
4           1           2          10.000          3           2            0

• The agglomeration schedule is a numerical summary of the cluster solution.

• At the first stage, cases 1 and 4 are combined because they have the smallest distance.

• The cluster created by their joining next appears in stage 3.

• In stage 3, the clusters 1 and 3 are joined. The resulting cluster next appears in stage 4.

• The largest gaps in the coefficients column occur between stages 1 and 2 and between stages 3 and 4, which splits the variables into essentially 2 clusters, as we will see in the dendrogram.
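• The same schedule can be reproduced outside SPSS with SciPy's hierarchical clustering. A sketch assuming the 5 x 7 matrix of ratings (variables as rows, respondents as columns) is available; a random placeholder stands in for the original data here, so the printed numbers will not match the table above:

import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage

# Placeholder for the original ratings: 5 variables (kind, intelligence,
# happy, likeable, just) rated by 7 respondents.
rng = np.random.default_rng(0)
ratings = rng.integers(1, 10, size=(5, 7)).astype(float)

# Squared Euclidean distances with single linkage, matching the SPSS run.
schedule = linkage(pdist(ratings, metric="sqeuclidean"), method="single")

# Each row of the schedule is one agglomeration stage:
# [cluster i, cluster j, joining distance, size of the new cluster]
print(schedule)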
Dendrogram

* * * * * * * * * * * * * * * * * * * H I E R A R C H I C A L C L U S T E R A N A L Y S I S * * * * * * * * * * * * * * * * * * *

Dendrogram using Single Linkage


Rescaled Distance Cluster Combine

C A S E 0 5 10 15 20 25
Label Num +---------+---------+---------+---------+---------+

kind 1 ─┬─────────────────────────────────────┐
likeable 4 ─┘ ├─────────┐
happy 3 ───────────────────────────────────────┘ │
intellig 2 ───────────────────────────────────────┬─────────┘
just 5 ───────────────────────────────────────┘


• The dendrogram is a graphical summary of the cluster solution.

• The horizontal axis shows the distance between clusters when they are joined.

• Parsing the classification tree to determine the number of clusters is a subjective process. Generally, you begin by looking for "gaps" between joinings along the
horizontal axis.

• Kind and Likeable are clubbed first, and Happy then joins them, so the three form one cluster. Intelligence and Just are clubbed into the other cluster. This result is in accordance with the result of the factor analysis.
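• The same tree can be drawn with SciPy's dendrogram function, continuing the clustering sketch above (again with placeholder data, since the original ratings are not reproduced here):

import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, dendrogram

# Placeholder ratings as in the earlier sketch (5 variables x 7 respondents).
rng = np.random.default_rng(0)
ratings = rng.integers(1, 10, size=(5, 7)).astype(float)

schedule = linkage(pdist(ratings, metric="sqeuclidean"), method="single")
dendrogram(schedule, orientation="right",
           labels=["kind", "intelligence", "happy", "likeable", "just"])
plt.xlabel("Distance at which clusters are joined")
plt.tight_layout()
plt.show()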

• Hence it is shown that the data can be summarized with both cluster analysis and factor analysis.
