# mergeSort

Mergesort is an efficient, general-purpose sorting algorithm and a common choice for implementing `sort` methods in standard libraries, as an alternative to quicksort or heapsort.

Mergesort uses a divide-and-conquer strategy. It begins by treating the input list of length N as a set of N “sublists” of length 1, which are considered to be sorted. Adjacent sublists are then “merged” into sorted sublists of length 2, which are merged into sorted sublists of length 4, and so on, until only a single sorted list remains. (Note, if N is odd, an extra sublist of length 1 will be left after the first merge, and so on.)

Time complexity: every merge pass iterates over all n elements, and there are about log n passes because the sublist width doubles each time, so n * log n = O(n log n).

1. Split the array in half
2. Recursively sort each half
3. Merge the sorted halves into a larger sorted array

This can be implemented using either a recursive (“top-down”) or an iterative (“bottom-up”) approach.

Recursive Solution:

```
var mergeSort = function(array) {
  // Base case: a list of 0 or 1 elements is already sorted
  if (array.length < 2) {
    return array;
  }

  var mid = Math.floor(array.length / 2);
  var left = array.slice(0, mid);
  var right = array.slice(mid);
  return merge(mergeSort(left), mergeSort(right));
};

var merge = function(left, right) {
  var output = [];
  var i = 0;
  var j = 0;

  // Repeatedly take the smaller of the two front elements
  while (i < left.length && j < right.length) {
    if (left[i] < right[j]) {
      output.push(left[i]);
      i++;
    } else {
      output.push(right[j]);
      j++;
    }
  }

  // Append whatever remains of either list
  return output.concat(left.slice(i)).concat(right.slice(j));
};
```
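The iterative (“bottom-up”) approach mentioned above can be sketched like this. It is a self-contained illustration, not the original author's code, so it carries its own local copy of the merging logic:

```
var bottomUpMergeSort = function(array) {
  // Same merging logic as the recursive version, kept local to this sketch
  var merge = function(left, right) {
    var output = [];
    var i = 0;
    var j = 0;
    while (i < left.length && j < right.length) {
      if (left[i] < right[j]) {
        output.push(left[i++]);
      } else {
        output.push(right[j++]);
      }
    }
    return output.concat(left.slice(i)).concat(right.slice(j));
  };

  var result = array.slice();
  // Merge runs of width 1, 2, 4, ... until one run covers the whole array
  for (var width = 1; width < result.length; width *= 2) {
    var next = [];
    for (var start = 0; start < result.length; start += 2 * width) {
      var left = result.slice(start, start + width);
      var right = result.slice(start + width, start + 2 * width);
      next = next.concat(merge(left, right));
    }
    result = next;
  }
  return result;
};

bottomUpMergeSort([5, 3, 8, 1, 2]); // => [1, 2, 3, 5, 8]
```

No recursion is needed here: the outer loop plays the role of the call stack, which is why this variant is called “bottom-up.”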


# Big O

Big O notation describes the performance or complexity of an algorithm. O(1) – Constant time complexity – describes an algorithm that always executes in the same time (or space) regardless of the size of the input data set. An example is accessing a value of an array by index.

```
var arr = [1, 2, 3, 4, 5];
arr[2]; // => 3
```

O(N) – Linear time complexity – describes an algorithm (usually a loop) whose performance will grow linearly and in direct proportion to the size of the input data set. For example, if the array has 10 items, we have to print 10 times. If it has 1,000 items, we have to print 1,000 times.

```
// if we used a for loop to print out the values of the array
for (var i = 0; i < array.length; i++) {
  console.log(array[i]);
}
```

O(log N) – Logarithmic time complexity – describes an algorithm where you have a large set of data and you halve the dataset at each iteration until you find the result you’re looking for. An example of this is finding a word in a dictionary (binary search). Sorting a deck of cards (merge sort) would be O(N log N).

Other examples:

```
// Example 1: i doubles each iteration, so the loop runs about log2(n) times
for (var i = 1; i < n; i = i * 2) {
  console.log(i);
}
```
```
// Example 2: i halves each iteration, so this also runs about log2(n) times
for (var i = n; i >= 1; i = i / 2) {
  console.log(i);
}
```

O(N²) – Quadratic time complexity – represents an algorithm whose performance is directly proportional to the square of the size of the input data set. This is common with algorithms that involve nested iterations over the data set. Deeper nested iterations will result in O(N³), O(N⁴), etc. Examples include checking for duplicates in a deck of cards, bubble sort, selection sort, or insertion sort.

```
for (var i = 0; i < length; i++) {     // the outer loop runs O(n) times
  for (var j = 0; j < length; j++) {   // the inner body runs O(n^2) times in total
    // More loops?
  }
}
```

O(2^N) – Exponential time complexity – denotes an algorithm whose growth doubles with each addition to the input data set. An example of an O(2^N) function is the recursive calculation of Fibonacci numbers. Another example is trying to break a password by testing every possible combination (assuming a numerical password of length N).
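As a sketch, the naive recursive Fibonacci mentioned above:

```
// Naive recursive Fibonacci: each call spawns two more calls, so the
// call tree roughly doubles with each increment of n — O(2^n) overall
function fib(n) {
  if (n < 2) {
    return n;
  }
  return fib(n - 1) + fib(n - 2);
}

fib(10); // => 55
```

The same result can of course be computed in O(n) with a simple loop; the exponential blow-up comes purely from recomputing the same subproblems over and over.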

Amortized time:

If you do an operation say a million times, you don’t really care about the worst-case or the best-case of that operation – what you care about is how much time is taken in total when you repeat the operation a million times.

So it doesn’t matter if the operation is very slow once in a while, as long as “once in a while” is rare enough for the slowness to be diluted away (that the cost is “amortized”).  Essentially amortised time means “average time taken per operation, if you do many operations“.

Source: stackoverflow
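A sketch of where this shows up in practice: appending to a dynamic array that doubles its capacity when full. The class below is illustrative (not from any particular library). Most pushes cost O(1); the occasional O(n) copy is rare enough that the average cost per push stays O(1).

```
// Illustrative dynamic array: doubles its backing store when full.
// A resize copies all n elements, but it happens so rarely that the
// total cost of n pushes is O(n) — amortized O(1) per push.
function DynamicArray() {
  this.store = new Array(1);
  this.capacity = 1;
  this.length = 0;
}

DynamicArray.prototype.push = function(value) {
  if (this.length === this.capacity) {
    // Rare, expensive step: copy everything into a store twice as big
    this.capacity *= 2;
    var bigger = new Array(this.capacity);
    for (var i = 0; i < this.length; i++) {
      bigger[i] = this.store[i];
    }
    this.store = bigger;
  }
  this.store[this.length] = value;
  this.length++;
};
```

Pushing 1,000 items triggers only about 10 resizes (at sizes 1, 2, 4, 8, …), which is why the slowness gets “diluted away.”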

Space complexity:

Sometimes we want to optimize for using less memory instead of (or in addition to) using less time. Talking about memory cost (or “space complexity”) is very similar to talking about time cost and we also use the Big O notation.
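As an illustrative sketch, here are two loops over an array: one needs O(1) extra space, the other O(n):

```
// O(1) extra space: a single accumulator, no matter how big the input is
function sum(array) {
  var total = 0;
  for (var i = 0; i < array.length; i++) {
    total += array[i];
  }
  return total;
}

// O(n) extra space: builds a reversed copy the same size as the input
function reverse(array) {
  var output = [];
  for (var i = array.length - 1; i >= 0; i--) {
    output.push(array[i]);
  }
  return output;
}
```

Both run in O(n) time; they differ only in how much memory they allocate beyond the input itself.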

# Bubble Sort

Write a function ‘bubble_sort(arr)’ which will sort an array of integers using the “bubble sort” methodology.  (http://en.wikipedia.org/wiki/Bubble_sort)

Solution:
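A minimal sketch of `bubble_sort` in JavaScript (the snake_case name comes from the prompt; the early-exit flag is an optional optimization):

```
// Bubble sort: repeatedly sweep the array, swapping adjacent
// out-of-order pairs; each pass bubbles the largest remaining
// value to the end. O(n^2) comparisons in the worst case.
function bubble_sort(arr) {
  for (var end = arr.length - 1; end > 0; end--) {
    var swapped = false;
    for (var i = 0; i < end; i++) {
      if (arr[i] > arr[i + 1]) {
        var temp = arr[i];
        arr[i] = arr[i + 1];
        arr[i + 1] = temp;
        swapped = true;
      }
    }
    // Early exit: a pass with no swaps means the array is already sorted
    if (!swapped) {
      break;
    }
  }
  return arr;
}

bubble_sort([4, 2, 7, 1]); // => [1, 2, 4, 7]
```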

First and foremost, understand the difference between insertion sort and bubble sort.

Insertion sort: usually faster, because each pass takes the next value and shifts it directly into its correct position within the already-sorted portion. (Many `.sort` implementations use insertion sort for small arrays.) Bubble sort: slower, because each comparison only looks at the adjacent index, so a value needs many more comparison steps before ending up in the right location.

# Selection Sort

Selection sort loops over positions in the array. For each position, it finds the index of the minimum value in the subarray starting at that position. Then it swaps the values at the position and at the minimum index. Write selection sort, making use of the `swap` and `indexOfMinimum` functions. (Note to self: figure out how to rewrite the selectionSort function in Ruby.)
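A sketch of selection sort in JavaScript, with the `swap` and `indexOfMinimum` helpers written out so the example is self-contained (the exercise provides its own versions; these are illustrative):

```
// Swap the values at two indices
var swap = function(array, firstIndex, secondIndex) {
  var temp = array[firstIndex];
  array[firstIndex] = array[secondIndex];
  array[secondIndex] = temp;
};

// Return the index of the smallest value at startIndex or later
// (leftmost occurrence wins on ties)
var indexOfMinimum = function(array, startIndex) {
  var minValue = array[startIndex];
  var minIndex = startIndex;
  for (var i = minIndex + 1; i < array.length; i++) {
    if (array[i] < minValue) {
      minIndex = i;
      minValue = array[i];
    }
  }
  return minIndex;
};

// For each position, find the minimum of the rest and swap it in
var selectionSort = function(array) {
  for (var i = 0; i < array.length; i++) {
    swap(array, i, indexOfMinimum(array, i));
  }
  return array;
};

selectionSort([22, 11, 99, 88]); // => [11, 22, 88, 99]
```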

# Find minimum in subarray

### Find index of minimum value in a subarray

Finish writing the function `indexOfMinimum`, which takes an array and a number `startIndex`, and returns the index of the smallest value that occurs with index `startIndex` or greater. If this smallest value occurs more than once in this range, then return the index of the leftmost occurrence within this range.

Here’s my work in Ruby, and in JavaScript with a while loop. Still getting used to JavaScript syntax, with all the semicolons and parentheses…

Also, a note on the `for` loop:

```
for (initializer; condition; iterator) {
  // statement(s)
}
```

# Binary Search

Binary Search is a way to efficiently search an array of items by halving the search space each time.

An example that demonstrates Binary Search:

Write a function that returns either the location of the target value in the array, or -1 if the array does not contain the target value.

Pseudocode:
1. Let min = 0 and max = n - 1 (the last index).
2. If max < min, then stop: the target is not present in the array. Return -1.
3. Guess the average of max and min, rounded down so that it is an integer.
4. If the value at the guessed index equals the target, stop. You found it! Return the guess.
5. If the guess was too low, set min to one larger than the guess.
6. If the guess was too high, set max to one smaller than the guess.
7. Go back to step 2. (Use a while loop, not a “for” loop — binary search doesn’t visit indices in sequential order.)
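The pseudocode above can be sketched in JavaScript:

```
// Binary search: halve the search space on every iteration.
// Assumes the array is sorted. Returns the index of target, or -1.
function binarySearch(array, target) {
  var min = 0;
  var max = array.length - 1;

  while (min <= max) {
    // Midpoint, rounded down to an integer index
    var guess = Math.floor((min + max) / 2);
    if (array[guess] === target) {
      return guess;
    } else if (array[guess] < target) {
      min = guess + 1; // guess too low: discard the lower half
    } else {
      max = guess - 1; // guess too high: discard the upper half
    }
  }
  return -1;
}

binarySearch([1, 3, 5, 7, 9], 7); // => 3
binarySearch([1, 3, 5, 7, 9], 4); // => -1
```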

My solution in Ruby follows the same steps; running the code yields the expected outputs.