
SORTING ALGORITHMS

THE PAPER
SUBMITTED TO FULFILL THE ASSIGNMENT FOR THE SUBJECT
Mathematical Thinking Progress
Supervised by Subhan Ajiz Awalludin, M.Sc.

Arranged by:
Ayu Nafidatul Ummah 1801105106
Hanif Nur Wicaksono 1801105095
Mutiara Dwi Meilinda 1801105085
Novlin Esterindah 1801105126
Nurainy Dwi Novitasari 1801105115
Wulandari Liga Kusumawati 1801105070

PROGRAM STUDI PENDIDIKAN MATEMATIKA


FAKULTAS KEGURUAN DAN ILMU PENDIDIKAN
UNIVERSITAS MUHAMMADIYAH PROF. DR. HAMKA
PREFACE

First of all, the writers want to express their thanks to Allah SWT, because by His blessing and
grace, the paper entitled "Sorting Algorithms" could be finished on time.

This paper is a requirement to fulfill the assignment from Mr. Subhan Ajiz Awalludin, M.Sc.,
the lecturer of Proses Berpikir Matematika (Mathematical Thinking Process). The writers also
thank him for all the guidance given to complete it.

In completing this paper, the writers faced many problems, but with the help of many people,
all of them could be overcome. May Allah SWT give His blessing to them. This paper presents
the definition, history, and classification of sorting algorithms, together with detailed
explanations of five sorting methods. Although it still has many deficiencies in its arrangement
and explanation, the writers hope that it can be used as a reference for readers to understand
sorting algorithms.

2
TABLE OF CONTENTS

PREFACE ........................................................................................... 2

TABLE OF CONTENTS ....................................................................... 3

CHAPTER I ........................................................................................ 4

INTRODUCTION ................................................................................ 4

1.1 Background of the paper ................................................................ 4

1.2 Problem formulation ...................................................................... 4

1.3 Purpose of the paper ....................................................................... 4

CHAPTER II ........................................................................................ 5

THEORY AND DISCUSSION ............................................................. 5

2.1 Definition of Sorting Algorithm .................................................... 5

2.2 History of Sorting Algorithm ......................................................... 5

2.3 Classification of Sorting Algorithm ............................................... 5

2.4 Kinds of Sorting ............................................................................. 6

1. Bubble Sort ................................................................................ 6

2. Selection Sort ............................................................................ 7

3. Insertion Sort ............................................................................. 9

4. Shell Sort ................................................................................. 10

5. Merge Sort ............................................................................... 12

CHAPTER III ..................................................................................... 14

CONCLUSION .................................................................................. 14

3.1 Conclusion .................................................................................... 14

3.2 Suggestions ................................................................................... 14

BIBLIOGRAPHY .............................................................................. 15

CHAPTER I
INTRODUCTION
1.1 Background of the paper

The design and analysis of algorithms is a branch of computer science that studies the
characteristics and performance of algorithms in solving problems, regardless of how those
algorithms are implemented. In this branch of the discipline, algorithms are studied
abstractly, independent of the computer system or programming language used, so that
different algorithms for the same problem can be compared by the same criteria.

The complexity of an algorithm is a measure of how much computation the algorithm requires
to solve a problem. Informally, an algorithm that can solve a problem in a short time has low
complexity, while an algorithm that requires a long time to solve the problem has high
complexity.

Sorting is the process of arranging objects in a particular order and/or into different sets, and
therefore it has two distinct general meanings:
1. sorting: arranging objects of the same kind into a regular order;
2. categorization: grouping and labeling objects with similar properties.

Sorting algorithms include Bubble Sort, Selection Sort, Insertion Sort, Shell Sort, and Merge
Sort, each of which differs from the others.

1.2 Problem formulation

From the background above, the problems discussed in this paper are as follows:

1. What is the definition of a sorting algorithm?
2. What is the history of sorting algorithms?
3. How are sorting algorithms classified?
4. What kinds of sorting are there?

1.3 Purpose of the paper

From the formulation of the problem above, our goals are as follows:
1. To find out the meaning of a sorting algorithm
2. To find out the history and classification of sorting algorithms
3. To find out how each kind of sorting algorithm works

CHAPTER II
THEORY AND DISCUSSION

2.1 Definition of Sorting Algorithm


In computer science, a sorting algorithm is an algorithm that puts elements of a list in
a certain order. The most frequently used orders are numerical order and lexicographical
order. Efficient sorting is important for optimizing the efficiency of other algorithms (such as
search and merge algorithms) which require input data to be in sorted lists. Sorting is also
often useful for canonicalizing data and for producing human-readable output. More formally,
the output of any sorting algorithm must satisfy two conditions:

The output is in nondecreasing order (each element is no smaller than the previous
element according to the desired total order). The output is a permutation (a reordering, yet
retaining all of the original elements) of the input.

Further, the input data is often stored in an array, which allows random access, rather
than a stack, which only allows sequential access; though many algorithms can be applied to
either type of data after suitable modification.
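The two conditions above can be checked mechanically. Below is a minimal Python sketch (the function name `is_valid_sort` is our own, purely illustrative) that verifies that a candidate output is in nondecreasing order and is a permutation of the input:

```python
from collections import Counter

def is_valid_sort(output, original):
    # Condition 1: nondecreasing order (each element no smaller than the previous).
    nondecreasing = all(output[i] <= output[i + 1] for i in range(len(output) - 1))
    # Condition 2: the output is a permutation of the input
    # (same elements with the same multiplicities).
    permutation = Counter(output) == Counter(original)
    return nondecreasing and permutation

print(is_valid_sort([1, 2, 2, 3], [3, 2, 1, 2]))  # True
print(is_valid_sort([1, 2, 3], [3, 2, 1, 2]))     # False: an element was lost
```

Any correct sorting algorithm, whatever its internal strategy, must produce output for which such a check returns true.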

2.2 History of Sorting Algorithm

From the beginning of computing, the sorting problem has attracted a great deal of
research, perhaps due to the complexity of solving it efficiently despite its simple, familiar
statement. Among the authors of early sorting algorithms around 1951 was Betty Holberton
(née Snyder), who worked on ENIAC and UNIVAC.[1][2] Bubble sort was analyzed as early
as 1956.[3] Comparison sorting algorithms have a fundamental requirement of Ω(n log n)
comparisons (some input sequences will require a multiple of n log n comparisons);
algorithms not based on comparisons, such as counting sort, can have better performance.
Although many consider sorting a solved problem—asymptotically optimal algorithms
have been known since the mid-20th century—useful new algorithms are still being invented,
with the now widely used Timsort dating to 2002, and the library sort being first published in
2006.

Sorting algorithms are prevalent in introductory computer science classes, where the
abundance of algorithms for the problem provides a gentle introduction to a variety of core
algorithm concepts, such as big O notation, divide and conquer algorithms, data structures
such as heaps and binary trees, randomized algorithms, best, worst and average case analysis,
time–space tradeoffs, and upper and lower bounds.

2.3 Classification of Sorting Algorithm

Sorting algorithms are often classified by :

1. Computational complexity (worst, average and best behavior) in terms of the size of the list
(n). For typical serial sorting algorithms, good behavior is O(n log n), with parallel sort in
O(log² n), and bad behavior is O(n²). (See Big O notation.) Ideal behavior for a serial sort is
O(n), but this is not possible in the average case. Optimal parallel sorting is O(log n).
Comparison-based sorting algorithms need at least Ω(n log n) comparisons for most inputs.

2. Computational complexity of swaps (for "in-place" algorithms).

3. Memory usage (and use of other computer resources). In particular, some sorting
algorithms are "in-place". Strictly, an in-place sort needs only O(1) memory beyond the items
being sorted; sometimes O(log(n)) additional memory is considered "in-place".

4. Recursion. Some algorithms are either recursive or non-recursive, while others may be both
(e.g., merge sort).

5. Stability: stable sorting algorithms maintain the relative order of records with equal keys
(i.e., values).

6. Whether or not they are a comparison sort. A comparison sort examines the data only by
comparing two elements with a comparison operator.

7. General method: insertion, exchange, selection, merging, etc. Exchange sorts include
bubble sort and quicksort. Selection sorts include shaker sort and heapsort.

8. Whether the algorithm is serial or parallel. The remainder of this discussion almost
exclusively concentrates upon serial algorithms and assumes serial operation.

9. Adaptability: Whether or not the presortedness of the input affects the running time.
Algorithms that take this into account are known to be adaptive.

2.4 Kinds of Sorting


1. Bubble Sort

a. Definition of Bubble Sort

Bubble sort, sometimes referred to as sinking sort, is a simple sorting algorithm that
repeatedly steps through the list, compares adjacent pairs and swaps them if they are in the
wrong order. The pass through the list is repeated until the list is sorted. The algorithm, which
is a comparison sort, is named for the way smaller or larger elements "bubble" to the top of
the list. Although the algorithm is simple, it is too slow and impractical for most problems
even when compared to insertion sort. Bubble sort can be practical if the input is in mostly
sorted order with some out-of-order elements nearly in position.

b. Performance of bubble sort

Bubble sort has a worst-case and average complexity of O(n²), where n is the number
of items being sorted. Most practical sorting algorithms have substantially better worst-case
or average complexity, often O(n log n). Even other O(n²) sorting algorithms, such as
insertion sort, generally run faster than bubble sort, and are no more complex. Therefore,
bubble sort is not a practical sorting algorithm.

The only significant advantage that bubble sort has over most other algorithms, even
quicksort (though not insertion sort), is that the ability to detect that the list is already sorted
is efficiently built into the algorithm. When the list is already sorted (the best case), the
complexity of bubble sort is only O(n). By contrast, most other algorithms, even those with
better average-case complexity, perform their entire sorting process on the set and thus are
more complex. However, insertion sort not only shares this advantage but also performs
better on a list that is substantially sorted (having a small number of inversions).

Bubble sort should be avoided in the case of large collections. It will not be efficient in the
case of a reverse-ordered collection.

c. Example of Bubble Sort

First Pass:
( 5 1 4 2 8 ) –> ( 1 5 4 2 8 ), Here, algorithm compares the first two elements, and swaps since
5 > 1.
( 1 5 4 2 8 ) –> ( 1 4 5 2 8 ), Swap since 5 > 4
( 1 4 5 2 8 ) –> ( 1 4 2 5 8 ), Swap since 5 > 2
( 1 4 2 5 8 ) –> ( 1 4 2 5 8 ), Now, since these elements are already in order (8 > 5), algorithm
does not swap them.

Second Pass:
( 1 4 2 5 8 ) –> ( 1 4 2 5 8 )
( 1 4 2 5 8 ) –> ( 1 2 4 5 8 ), Swap since 4 > 2
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
Now, the array is already sorted, but our algorithm does not know if it is completed. The
algorithm needs one whole pass without any swap to know it is sorted.

Third Pass:
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
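The passes above translate directly into code. The following is a short illustrative Python version (not from any particular source) that uses the pass-without-swaps termination test described above to stop early:

```python
def bubble_sort(arr):
    # Sort arr in place by repeatedly swapping adjacent out-of-order pairs.
    n = len(arr)
    for i in range(n - 1):
        swapped = False
        # After pass i, the last i elements are already in their final positions.
        for j in range(n - 1 - i):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:
            # A full pass with no swaps means the list is sorted: best case O(n).
            break
    return arr

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```

With this early exit, the third pass in the walkthrough above would be the last: it performs no swaps, so the algorithm stops.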

2. Selection Sort

a. Definition of Selection Sort

Selection sort is a sorting algorithm, specifically an in-place comparison sort. It has O(n²)
time complexity, making it inefficient on large lists, and generally performs worse than the
similar insertion sort. Selection sort is noted for its simplicity, and it has performance
advantages over more complicated algorithms in certain situations, particularly where
auxiliary memory is limited.

The algorithm divides the input list into two parts: the sublist of items already sorted, which is
built up from left to right at the front (left) of the list, and the sublist of items remaining to be
sorted that occupy the rest of the list. Initially, the sorted sublist is empty and the unsorted
sublist is the entire input list. The algorithm proceeds by finding the smallest (or largest,
depending on sorting order) element in the unsorted sublist, exchanging (swapping) it with the

leftmost unsorted element (putting it in sorted order), and moving the sublist boundaries one
element to the right.

b. Complexity of Selection Sort

Selection sort is not difficult to analyze compared to other sorting algorithms, since none of
the loops depend on the data in the array. Selecting the minimum requires scanning n elements
(taking n − 1 comparisons) and then swapping it into the first position. Finding the next lowest
element requires scanning the remaining n − 1 elements, and so on. Therefore, the total number
of comparisons is

(n − 1) + (n − 2) + … + 2 + 1.

By the hockey-stick identity, this sum equals n(n − 1)/2, which is of complexity O(n²) in terms
of the number of comparisons. Each of these scans requires one swap, for n − 1 swaps in total
(the final element is already in place).

c. Example of Selection Sort

arr[] = 64 25 12 22 11

// Find the minimum element in arr[0...4]

// and place it at beginning

11 25 12 22 64

// Find the minimum element in arr[1...4]

// and place it at beginning of arr[1...4]

11 12 25 22 64

// Find the minimum element in arr[2...4]

// and place it at beginning of arr[2...4]

11 12 22 25 64

// Find the minimum element in arr[3...4]

// and place it at beginning of arr[3...4]

11 12 22 25 64
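The steps above can be sketched in Python as follows; this is a minimal illustrative implementation, not a reference version:

```python
def selection_sort(arr):
    # Grow the sorted sublist at the front of the list, one element per pass.
    n = len(arr)
    for i in range(n - 1):
        # Find the index of the minimum element in the unsorted sublist arr[i:].
        min_idx = i
        for j in range(i + 1, n):
            if arr[j] < arr[min_idx]:
                min_idx = j
        # Swap it with the leftmost unsorted element, extending the sorted part.
        arr[i], arr[min_idx] = arr[min_idx], arr[i]
    return arr

print(selection_sort([64, 25, 12, 22, 11]))  # [11, 12, 22, 25, 64]
```

Note that the swap can carry an element with an equal key past another, which is why this form of selection sort is not stable.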

3. Insertion Sort

a. Definition of Insertion Sort

A simple sorting algorithm that builds the final sorted array (or list) one item at a time. It is
much less efficient on large lists than more advanced algorithms such as quicksort, heapsort,
or merge sort. However, insertion sort provides several advantages:

• Simple implementation: Jon Bentley shows a three-line C version, and a five-line
optimized version[2]
• Efficient for (quite) small data sets, much like other quadratic sorting algorithms
• More efficient in practice than most other simple quadratic (i.e., O(n²)) algorithms
such as selection sort or bubble sort
• Adaptive, efficient for data sets that are already substantially sorted: the time
complexity is O(nk) when each element in the input is no more than k places away
from its sorted position
• Stable, does not change the relative order of elements with equal keys
• In-place, only requires a constant amount O(1) of additional memory space
• Online, can sort a list as it receives it

When people manually sort cards in a bridge hand, most use a method that is similar to
insertion sort.

b. Algorithm of Insertion sort

// Sort an arr[] of size n

insertionSort(arr, n)
    Loop from i = 1 to n-1:
        Pick element arr[i] and insert it into the sorted sequence arr[0…i-1]

c. Example of Insertion sort

Example:
12, 11, 13, 5, 6

Let us loop from i = 1 (the second element of the array) to 4 (n − 1, where n is the size of the
array):

i = 1. Since 11 is smaller than 12, move 12 and insert 11 before 12


11, 12, 13, 5, 6

i = 2. 13 will remain at its position, as all elements in arr[0..i-1] are smaller than 13
11, 12, 13, 5, 6

i = 3. 5 will move to the beginning and all other elements from 11 to 13 will move one
position ahead of their current position.
5, 11, 12, 13, 6

i = 4. 6 will move to position after 5, and elements from 11 to 13 will move one position
ahead of their current position.
5, 6, 11, 12, 13
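The walkthrough above corresponds to the following short Python sketch (illustrative only):

```python
def insertion_sort(arr):
    # Build the sorted prefix arr[0..i-1] one element at a time.
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        # Shift elements of the sorted prefix that are greater than key
        # one position to the right.
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key  # Insert key into its correct position.
    return arr

print(insertion_sort([12, 11, 13, 5, 6]))  # [5, 6, 11, 12, 13]
```

Because elements are shifted only while they are strictly greater than the key, equal keys keep their relative order, which is what makes insertion sort stable.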

4. Shell Sort

a. Definition of Shell Sort


Shell sort or Shell's method, is an in-place comparison sort. It can be seen as either a
generalization of sorting by exchange (bubble sort) or sorting by insertion (insertion sort). The
method starts by sorting pairs of elements far apart from each other, then progressively
reducing the gap between elements to be compared. Starting with far apart elements, it can
move some out-of-place elements into position faster than a simple nearest neighbor
exchange. Donald Shell published the first version of this sort in 1959. The running time of
Shellsort is heavily dependent on the gap sequence it uses. For many practical variants,
determining their time complexity remains an open problem.

b. Description of Shell sort


Shellsort is a generalization of insertion sort that allows the exchange of items that are
far apart. The idea is to arrange the list of elements so that, starting anywhere, considering
every hth element gives a sorted list. Such a list is said to be h-sorted. Equivalently, it can be
thought of as h interleaved lists, each individually sorted. Beginning with large values of h,
this rearrangement allows elements to move long distances in the original list, reducing large
amounts of disorder quickly, and leaving less work for smaller h-sort steps to do. If the list is
then k-sorted for some smaller integer k, then the list remains h-sorted. Following this idea for
a decreasing sequence of h values ending in 1 is guaranteed to leave a sorted list in the end.
An example run of Shellsort with gaps 5, 3 and 1 is shown below.

The first pass, 5-sorting, performs insertion sort on five separate subarrays (a1, a6, a11), (a2, a7,
a12), (a3, a8), (a4, a9), (a5, a10). For instance, it changes the subarray (a1, a6, a11) from (62, 17,
25) to (17, 25, 62). The next pass, 3-sorting, performs insertion sort on the three subarrays (a1,
a4, a7, a10), (a2, a5, a8, a11), (a3, a6, a9, a12). The last pass, 1-sorting, is an ordinary insertion
sort of the entire array (a1,..., a12).
As the example illustrates, the subarrays that Shellsort operates on are initially short; later
they are longer but almost ordered. In both cases insertion sort works efficiently.
Shellsort is not stable: it may change the relative order of elements with equal values. It is an
adaptive sorting algorithm in that it executes faster when the input is partially sorted.

c. Example of Shell Sort

This can be seen in Figure 6. This list has nine items. If we use an increment of three,
there are three sublists, each of which can be sorted by an insertion sort. After completing
these sorts, we get the list shown in Figure 7. Although this list is not completely sorted,
something very interesting has happened. By sorting the sublists, we have moved the items
closer to where they actually belong.

Figure 8 shows a final insertion sort using an increment of one; in other words, a standard
insertion sort. Note that by performing the earlier sublist sorts, we have now reduced the total
number of shifting operations necessary to put the list in its final order. For this case, we need
only four more shifts to complete the process.

We said earlier that the way in which the increments are chosen is the unique feature of the
shell sort. The function shown in ActiveCode 1 uses a different set of increments: in this case,
we begin with n/2 sublists; on the next pass, n/4 sublists are sorted. Eventually, a single list is
sorted with the basic insertion sort. Figure 9 shows the first sublists for our example using this
increment.

The following invocation of the shellSort function shows the partially sorted lists after each
increment, with the final sort being an insertion sort with an increment of one.
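The n/2, n/4, … increment scheme just described can be sketched in Python as follows. The nine-item list is an arbitrary example of our own, and each inner loop is simply a gapped insertion sort:

```python
def shell_sort(arr):
    n = len(arr)
    gap = n // 2
    while gap > 0:
        # Gapped insertion sort: after this loop, arr is gap-sorted,
        # i.e. every gap-th sublist is individually sorted.
        for i in range(gap, n):
            key = arr[i]
            j = i
            while j >= gap and arr[j - gap] > key:
                arr[j] = arr[j - gap]
                j -= gap
            arr[j] = key
        # Halve the increment; the final pass (gap = 1) is a plain insertion sort.
        gap //= 2
    return arr

print(shell_sort([54, 26, 93, 17, 77, 31, 44, 55, 20]))
# [17, 20, 26, 31, 44, 54, 55, 77, 93]
```

By the time gap reaches 1, earlier passes have removed most of the disorder, so the final insertion sort has little shifting left to do.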

5. Merge Sort

a. Definition of Merge Sort


Merge sort (also commonly spelled mergesort) is an efficient, general-purpose,
comparison-based sorting algorithm. Most implementations produce a stable sort, which
means that the order of equal elements is the same in the input and output. Merge sort is a
divide and conquer algorithm that was invented by John von Neumann in 1945. A detailed
description and analysis of bottom-up mergesort appeared in a report by Goldstine and von
Neumann as early as 1948.

b. Algorithm of Merge Sort


Conceptually, a merge sort works as follows:

1. Divide the unsorted list into n sublists, each containing one element (a list of one element
is considered sorted).

2. Repeatedly merge sublists to produce new sorted sublists until there is only one sublist
remaining. This will be the sorted list.

1. Top-down implementation

Example C-like code using indices for top down merge sort algorithm that recursively splits
the list (called runs in this example) into sublists until sublist size is 1, then merges those
sublists to produce a sorted list. The copy back step is avoided with alternating the direction
of the merge with each level of recursion.

2. Bottom-up implementation

Example C-like code using indices for bottom up merge sort algorithm which treats the list as
an array of n sublists (called runs in this example) of size 1, and iteratively merges sub-lists
back and forth between two buffers:

3. Top-down implementation using lists

Pseudocode for top down merge sort algorithm which recursively divides the input list into
smaller sublists until the sublists are trivially sorted, and then merges the sublists while
returning up the call chain.

4. Bottom-up implementation using lists

Pseudocode for bottom up merge sort algorithm which uses a small fixed size array of
references to nodes, where array[i] is either a reference to a list of size 2^i or nil. node is a
reference or pointer to a node. The merge() function would be similar to the one shown in the
top down merge lists example: it merges two already sorted lists, and handles empty lists. In
this case, merge() would use node for its input parameters and return value.

c. Example

MergeSort(arr[], l, r)

If r > l
1. Find the middle point to divide the array into two halves:
middle m = (l+r)/2
2. Call mergeSort for first half:
Call mergeSort(arr, l, m)
3. Call mergeSort for second half:
Call mergeSort(arr, m+1, r)
4. Merge the two halves sorted in step 2 and 3:
Call merge(arr, l, m, r)
As an illustration, consider the example array {38, 27, 43, 3, 9, 82, 10}. The array is
recursively divided into two halves until each piece has size 1. Once the size becomes 1, the
merge process comes into action and starts merging the arrays back until the complete array is
merged.
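The recursive pseudocode above can be sketched in Python; this illustrative top-down version returns a new list rather than sorting in place:

```python
def merge_sort(arr):
    # Base case: a list of zero or one element is already sorted.
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])    # Sort the first half.
    right = merge_sort(arr[mid:])   # Sort the second half.
    # Merge the two sorted halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:     # <= keeps the sort stable.
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])         # Append any remaining elements.
    merged.extend(right[j:])
    return merged

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]
```

Taking elements from the left half on ties (the `<=` comparison) is what preserves the relative order of equal elements, making this a stable sort.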

CHAPTER III
CONCLUSION
3.1 Conclusion
Bubble Sort, Selection Sort, and Insertion Sort are easy algorithms in terms of
implementation. All three have O(n²) complexity. Among these three, Insertion Sort generally
performs best. More efficient algorithms are Merge Sort and Quick Sort, whose complexity is
O(n log n); of these, Quick Sort is generally regarded as the most effective in practice.

3.2 Suggestions
With the writing of this paper, we hope it can serve as an inspiration in everyday life.
We often unintentionally carry out procedures of this kind every day, such as doing the
laundry or riding a motorcycle; all of these are algorithms performed in daily life, just as in a
computer or a programming language. If we observe the steps we take in everyday life, they
turn out to be not much different from the steps of a program.

Bibliography
https://en.wikipedia.org/wiki/Sorting_algorithm#Comparison_of_algorithms

https://en.wikipedia.org/wiki/Bubble_sort#Performance

https://www.geeksforgeeks.org/bubble-sort/

https://www.geeksforgeeks.org/selection-sort/

https://en.wikipedia.org/wiki/Selection_sort

https://en.wikipedia.org/wiki/Insertion_sort

https://www.geeksforgeeks.org/insertion-sort/

https://en.wikipedia.org/wiki/Shellsort

http://interactivepython.org/runestone/static/pythonds/SortSearch/TheShellSort.html

https://en.wikipedia.org/wiki/Merge_sort

https://www.geeksforgeeks.org/merge-sort/
