
1704178, Seminar Group 1

CS260 Algorithms Coursework 1

1) To find the number of bits needed to write n! in binary, in Θ(·) form as a function of n, I first recall that the number of bits required to represent a positive integer m in binary is ⌈log(m + 1)⌉, where log denotes the base-2 logarithm. Thus, I aim to find bounds on log(n!).
$$n! = \prod_{i=1}^{n} i \le \prod_{i=1}^{n} n \quad \text{for all } n \ge 1, \text{ because } n > n-1 > \dots \ge 2 \ge 1$$
$$\therefore\; n! \le n^n \quad \forall\, n \ge 1$$
Also notice that
$$\left(\tfrac{n}{2}\right)^{n/2} = \prod_{i=1}^{n/2} \tfrac{n}{2} \le \left(\prod_{i=1}^{n/2} \tfrac{n}{2}\right)\left(\prod_{i=1}^{n/2-1} i\right)$$
trivially holds, as $\prod_{i=1}^{n/2-1} i \ge 1$.
Since $\prod_{i=1}^{n/2} \tfrac{n}{2} \le \prod_{i=n/2}^{n} i$ (each of the $n/2$ factors on the left is at most one of the factors on the right), then
$$\left(\tfrac{n}{2}\right)^{n/2} = \prod_{i=1}^{n/2} \tfrac{n}{2} \le \left(\prod_{i=n/2}^{n} i\right)\left(\prod_{i=1}^{n/2-1} i\right) = \prod_{i=1}^{n} i = n!$$
$$\therefore\; \left(\tfrac{n}{2}\right)^{n/2} \le n! \quad \forall\, n \ge 1$$
Combining the derived inequalities yields
$$\left(\tfrac{n}{2}\right)^{n/2} \le n! \le n^n \quad \forall\, n \ge 1.$$
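For example, for $n = 4$ this reads $(4/2)^2 = 4 \le 4! = 24 \le 4^4 = 256$.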
Observe that the function $f: \mathbb{R}^+ \to \mathbb{R}$ given by $f(x) = \log(x)$ is strictly increasing. Thus
$$\left(\tfrac{n}{2}\right)^{n/2} \le n! \le n^n \;\Rightarrow\; \log\!\left(\left(\tfrac{n}{2}\right)^{n/2}\right) \le \log(n!) \le \log(n^n) \;\Rightarrow\; \tfrac{n}{2}\log\!\left(\tfrac{n}{2}\right) \le \log(n!) \le n \log n$$

Since $\log\!\left(\tfrac{n}{2}\right) = \log n - 1 \ge \tfrac{1}{2}\log n$ whenever $n \ge 4$, this gives
$$\tfrac{1}{4}\, n \log n \;\le\; \log(n!) \;\le\; n \log n \quad \forall\, n \ge 4.$$

I have shown that log(n!) is bounded above and below by constant multiples of n log n for all sufficiently large n. Thus log(n!) = Θ(n log n), which means that the number of bits needed to write n! in binary, in Θ(·) form, is Θ(n log n).
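As a quick numerical sanity check of this conclusion (not part of the formal argument), the short Python sketch below compares the exact bit length of n! with the bounds derived above; the function name check_bit_length and the sample values of n are my own choices for illustration.

import math

def check_bit_length(n):
    # Compare the exact number of bits of n! with the derived bounds.
    # For n >= 4 the argument above gives
    #     (1/4) * n * log2(n) <= log2(n!) <= n * log2(n),
    # and the number of bits needed to write n! in binary is floor(log2(n!)) + 1.
    bits = math.factorial(n).bit_length()   # exact number of binary digits of n!
    lower = 0.25 * n * math.log2(n)         # lower bound on log2(n!), valid for n >= 4
    upper = n * math.log2(n)                # upper bound on log2(n!)
    return lower, bits, upper

for n in (4, 16, 64, 256):
    lo, bits, hi = check_bit_length(n)
    # The bit count should lie between the two bounds (up to the final +1 digit).
    print(f"n={n:4d}  lower={lo:10.1f}  bits={bits:8d}  upper={hi:10.1f}")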

2) Let the pivot be the first element of A[1,…,n] in the quicksort process. Quicksort works by splitting A into three sections: the elements of A less than the pivot, the pivot itself, and the elements of A greater than the pivot. In lectures, the quicksort() function pseudocode was given to us, but the partition() function was not, so I give that algorithm here.
function partition(A, left, right):
    pivot = A[left]                  // the pivot is the first element of the subarray
    i = left                         // A[left+1 .. i] holds the elements less than the pivot
    for j = left + 1 to right
        if A[j] < pivot
            i = i + 1
            swap(A[i], A[j])
    swap(A[left], A[i])              // move the pivot into its final sorted position
    return i                         // the pivot's index after partitioning

The number of operations is at most $2 + \sum_{j=left+1}^{right} 3 = 2 + 3(right - left) = O(1) + O(right - left) = O(right - left + 1)$, so partition() takes time linear in the length of the subarray it is given.
After each call to partition(), the pivot element is in its final sorted position in the array. The largest number of recursive calls that can be made is n-1. This occurs if, each time quicksort() and partition() are called,

the split produces an empty subarray and a subarray of n-1 elements (one containing the elements less than the pivot and the other the elements greater than the pivot); this happens, for example, when the input array is already sorted. This yields the recurrence relation T(n) = T(n-1) + O(n), which we can solve as follows:
$$T(n) = T(n-1) + O(n) = T(n-2) + O(n) + O(n-1) = T(n-3) + O(n) + O(n-1) + O(n-2) = \dots$$
By repeatedly substituting $T(n-i) = T(n-i-1) + O(n-i)$, for $1 \le i \le n-1$, we obtain:
$$T(n) = O(n) + O(n-1) + \dots + O(3) + O(2) = O\!\left(\sum_{i=2}^{n} i\right) = O\!\left(\frac{n(n+1)}{2} - 1\right) = O(n^2).$$

I have shown $T(n) = O(n^2)$, so quicksort with the naïve pivot choice has worst-case running time $O(n^2)$.
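To make this concrete, the following short Python sketch (my own illustration, not the lecture pseudocode) implements quicksort with the first element of each subarray as the pivot, using a partition routine equivalent to the one given above; an already sorted input triggers exactly the T(n) = T(n-1) + O(n) worst case analysed above.

def partition(arr, left, right):
    # Partition arr[left..right] around the pivot arr[left]; return the pivot's final index.
    pivot = arr[left]
    i = left                                  # arr[left+1 .. i] holds the elements smaller than the pivot
    for j in range(left + 1, right + 1):
        if arr[j] < pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[left], arr[i] = arr[i], arr[left]     # move the pivot into its final sorted position
    return i

def quicksort(arr, left=0, right=None):
    # Sort arr[left..right] in place, always choosing the first element as the pivot.
    if right is None:
        right = len(arr) - 1
    if left < right:
        p = partition(arr, left, right)
        quicksort(arr, left, p - 1)           # elements smaller than the pivot
        quicksort(arr, p + 1, right)          # elements greater than the pivot

data = [3, 1, 4, 1, 5, 9, 2, 6]
quicksort(data)
print(data)                                   # [1, 1, 2, 3, 4, 5, 6, 9]
# On an already sorted input every call splits off an empty subarray and one of size n-1,
# which is exactly the T(n) = T(n-1) + O(n) worst case analysed above.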

3) function maxObjects(A[1,…,n], B[1,…,n]):
    Input: two n-element arrays denoted A and B, where A[i] < B[i] for 1 ≤ i ≤ n
    Output: the largest number of overlapping intervals [A[i], B[i]] at a single point in time
    A' = mergeSort(A) and B' = mergeSort(B)
    Define integers visible = 1, max = 1, i = 2, j = 1
    while i <= n
        if A'[i] <= B'[j]            // another object appears before the earliest remaining one disappears
            visible = visible + 1
            if visible > max
                max = visible
            i = i + 1
        else                         // the earliest remaining object disappears first
            visible = visible - 1
            j = j + 1
    return max

To analyse the worst-case time complexity of maxObjects(), I first consider the number of iterations of the while loop. Each iteration increments exactly one of i and j, the loop stops once i exceeds n, and j never exceeds i, so the loop runs at most 2(n - 1) times; this bound is approached when the iterations alternate between incrementing i and incrementing j (for instance, when no two intervals overlap). Since constant work is done within each iteration (in the RAM model of computation, memory accesses, comparisons and assignments take constant time), executing the while loop takes O(n) time. Since mergesort is used to sort both A and B and mergesort takes Θ(n log n) time, maxObjects() takes Θ(n log n) + O(n) = Θ(n log n) time.

The algorithm's correctness follows from observing that the number of objects visible at time t is the number of objects that have appeared by time t minus the number of objects that have disappeared by time t. Therefore, by sorting the appearance times and the disappearance times and processing them in order, the algorithm tracks the number of currently visible celestial objects in the variable 'visible'. Whenever 'visible' exceeds 'max', the value of 'visible' is copied into 'max', and whenever an object stops being visible, 'visible' decreases by one, so 'max' ends up holding the largest number of simultaneously visible objects for every n for which the while loop runs. Lastly, consider n = 1, the only case in which the while loop does not run. In this case the algorithm returns 1, the initial value of 'max', which is trivially correct since a single object is visible on its own. Therefore, the algorithm is correct.
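For completeness, here is a minimal runnable Python sketch of maxObjects() (my own illustration; Python's built-in sorted() stands in for mergesort, 0-based indexing replaces the 1-based indexing of the pseudocode, and the variable 'best' plays the role of 'max'). The function name max_objects and the sample intervals are assumptions made for this example.

def max_objects(appear, disappear):
    # Largest number of intervals [appear[i], disappear[i]] visible at one point in time.
    # appear and disappear have equal length and appear[i] < disappear[i] for every i.
    starts = sorted(appear)        # A' in the pseudocode; sorted() stands in for mergesort
    ends = sorted(disappear)       # B' in the pseudocode
    n = len(starts)
    visible, best = 1, 1           # the first object to appear is visible on its own
    i, j = 1, 0                    # 0-based indices of the next start and next end to process
    while i < n:
        if starts[i] <= ends[j]:   # another object appears before the earliest one disappears
            visible += 1
            best = max(best, visible)
            i += 1
        else:                      # the earliest remaining object disappears first
            visible -= 1
            j += 1
    return best

# Three objects visible over [1,4], [2,6] and [5,8]; at time 3 two of them are visible at once.
print(max_objects([1, 2, 5], [4, 6, 8]))      # expected output: 2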
