## Group Anagrams

Many times we need to sort things that are not integers, the most common being strings. How do we go about it using quicksort? The C standard library has a function for exactly this: `qsort()`. It is very powerful and can easily sort anything, given a compare function. So in today's post, we will learn how that function can be used to solve our problem statement.

Given a list of words, group together all words which are anagrams of each other. For example, if the input is:

`cat tac act dog god gdo`

First of all, we want to understand how two strings can be identified as anagrams. There are more efficient ways to do it, but the simplest one is to sort both strings by character; if the two resultant strings match, the strings are anagrams.
Now the problem boils down to sorting each individual string and then sorting the entire array of strings. The anagram strings will automatically be grouped together.
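
This pairwise check can be sketched as a small helper; the class and method names here are illustrative, not from the original:

```java
import java.util.Arrays;

public class AnagramCheck {
    //Two strings are anagrams if their sorted character arrays match.
    public static boolean areAnagrams(String a, String b) {
        char[] first = a.toCharArray();
        char[] second = b.toCharArray();
        Arrays.sort(first);
        Arrays.sort(second);
        return Arrays.equals(first, second);
    }
}
```
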
If we look closely, we have another problem: when we sort the original strings individually, the original strings are lost. So the idea is to have a duplicate array of strings and perform the sorting operations on that. Also, once we sort the entire duplicate array, we will lose the original index of each string, hence we need to store that information too. Let's figure out what we need to store and how. We have an array of strings; let's say there can be a maximum of 100 such strings. So we create an array of character pointers to store these strings. We also need a duplicate array where we will sort strings, and alongside each string in the duplicate array we store its position in the original array.
The next step is to sort each individual string in the duplicate array, for which we will use the library function qsort().
This function is very easy to use; there are four parameters you have to pass:
1. The buffer or the array you want to have sorting performed on.
2. The number of elements in the buffer.
3. The size of an individual element of the buffer.
4. Last, and most important, the compare function, which has to be written by you and passed as a function pointer. For further details of qsort() usage, see the documentation of qsort.
Once all the strings are individually sorted, sort the entire array of strings using the same qsort(); the only difference will be the compare function, which will now compare entire strings instead of characters. We are almost done! Now we have all anagrams placed next to each other; however, we need to print the original words. That's where the index we stored in the duplicate array helps: we print words from the original array in the order given by the duplicate array.
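
The qsort-based idea above can also be sketched in Java, with a comparator standing in for the compare function; the class and method names here are illustrative:

```java
import java.util.Arrays;

public class GroupAnagramsBySorting {
    /* Returns the words reordered so that anagrams are adjacent,
       by sorting indices of a duplicate array of sorted strings. */
    public static String groupBySorting(String[] words) {
        int n = words.length;
        String[] sortedWords = new String[n]; //duplicate array
        Integer[] index = new Integer[n];     //original positions
        for (int i = 0; i < n; i++) {
            char[] chars = words[i].toCharArray();
            Arrays.sort(chars);               //sort each individual string
            sortedWords[i] = String.valueOf(chars);
            index[i] = i;
        }
        //Sort the indices by the sorted form; anagrams become adjacent.
        Arrays.sort(index, (i, j) -> sortedWords[i].compareTo(sortedWords[j]));

        //Collect original words in the grouped order.
        StringBuilder sb = new StringBuilder();
        for (int i : index) {
            if (sb.length() > 0) sb.append(" ");
            sb.append(words[i]);
        }
        return sb.toString();
    }
}
```

For the input above, this produces `cat tac act dog god gdo`, with the anagram groups adjacent.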

```java
class Solution {
    public List<List<String>> groupAnagrams(String[] strs) {

        Map<String, List<String>> map = new HashMap<>();

        for (int i = 0; i < strs.length; i++) {
            char[] temp = strs[i].toCharArray();
            Arrays.sort(temp);

            String keyString = String.valueOf(temp);

            if (!map.containsKey(keyString)) {
                map.put(keyString, new ArrayList<>());
            }
            //Add the original word to its anagram group.
            map.get(keyString).add(strs[i]);
        }

        return new ArrayList<List<String>>(map.values());
    }
}
```

This code sorts M strings, each of size N characters, so the complexity of the algorithm to group all anagrams together is O(MNlogN + MlogM): NlogN to sort each string of size N, and there are M strings; MlogM to sort the M strings in the array.

## Clone linked list with random pointer

Given a linked list where every node has two pointers, next and random, the task is to clone this linked list, including the random pointers, into a new linked list.

This problem can be solved using O(n) extra space: store a mapping from each existing node to its new node in a hash map, then go through the linked list again; for each node, find its copy, find the copy of its next, and link them. Then find the node the current node's random pointer points to, look up the corresponding cloned node, and set the random pointer of the current node's copy to that cloned node. This takes two scans of the linked list as well.
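
A minimal sketch of this hashmap approach, assuming a node class with `val`, `next`, and `random` fields like the one used in the implementation later (the name `ListNode` is illustrative):

```java
import java.util.HashMap;
import java.util.Map;

class ListNode {
    int val;
    ListNode next;
    ListNode random;
    ListNode(int val) { this.val = val; }
}

class CloneWithMap {
    public static ListNode copyRandomList(ListNode head) {
        if (head == null) return null;

        //First scan: create a clone for every original node.
        Map<ListNode, ListNode> map = new HashMap<>();
        for (ListNode current = head; current != null; current = current.next) {
            map.put(current, new ListNode(current.val));
        }

        //Second scan: wire up next and random pointers of the clones.
        for (ListNode current = head; current != null; current = current.next) {
            ListNode clone = map.get(current);
            clone.next = map.get(current.next);     //map.get(null) is null
            clone.random = map.get(current.random);
        }
        return map.get(head);
    }
}
```
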

However, the challenge is to do it in O(1) space complexity.

### Thought process

We are using a hashmap to know the corresponding cloned node of each node in the linked list. Can we do that without a hashmap? Can we use the linked list itself to store the information?
The idea is to insert the cloned node right after the original node in the given linked list: for each node, we insert its clone between the node and its next node. After inserting the clones this way, we get the same mapping we were storing in the hashmap: the clone of any node is simply its next node.

Once all the nodes are interleaved this way, we can copy the random pointer of the original node to the random pointer of the cloned node (when the original's random is not null) as

```java
node.next.random = node.random.next;
```

The last step is to separate the two lists. We go through the list; for each node, we get the cloned node as node.next, and the next of the current original node should be the next of the cloned node.

```java
Node clonedNode = node.next;
node.next = clonedNode.next;
```

Last, we have to link the cloned node's next to node.next.next (the clone of the next original node) and move forward to the next node of the original list.

Overall, this implementation requires 3 passes over the linked list: first to insert cloned nodes in between, then to copy the random pointers, and then to detach the cloned linked list. Passes 2 and 3 can be combined, but it is easier to understand this way.

### Show me the implementation

```java
/*
// Definition for a Node.
class Node {
    public int val;
    public Node next;
    public Node random;

    public Node() {}

    public Node(int _val, Node _next, Node _random) {
        val = _val;
        next = _next;
        random = _random;
    }
};
*/
class Solution {
    public Node copyRandomList(Node head) {
        if (head == null) return null;

        /* Step 1. Create a clone of each node and
           insert it next to the original node.
           List [1,2,3] will look like [1,1,2,2,3,3]
        */
        Node current = head;
        while (current != null) {
            //Create the cloned node.
            Node newNode = new Node(current.val, null, null);

            //Insert it right after the current node.
            newNode.next = current.next;
            current.next = newNode;

            current = newNode.next;
        }

        /* Step 2. Copy the random pointers.
           The cloned node's random will point to the
           node next to the original node's random.
        */
        current = head;
        while (current != null) {
            if (current.random != null) {
                //current.next is the cloned node; its random points
                //to the clone of the current node's random.
                current.next.random = current.random.next;
            }
            current = current.next.next;
        }

        /* Step 3. Detach the cloned list from the original list. */
        Node clonedHead = head.next;
        current = head;
        while (current != null) {
            Node node = current.next;
            current.next = current.next.next;
            //IMPORTANT: check for the last node.
            if (current.next != null) {
                node.next = current.next.next;
            }
            current = current.next;
        }

        return clonedHead;
    }
}
```

The time complexity of the above implementation is O(N) and the space complexity is O(1).

## Median of integers stream

We have solved two problems involving streams: finding the first non-repeated character in a stream, and the LRU cache. Let's discuss another problem: finding the median of a stream of integers. The problem statement is: given a continuous stream of integers, find the median of the integers received till a given point of time. The median can be asked for multiple times.

To understand the problem better, ask yourself: what is a median?

The median is the value separating the higher half from the lower half of a data sample. For a data set, it may be thought of as the “middle” value.

Wikipedia

For example, in the data set {1, 3, 3, 6, 7, 8, 9}, the median is 6: the fourth largest and also the fourth smallest number in the sample.

The median of a sorted array of integers is the element at the middle index if the size of the array is odd, and the average of the two middle elements if the size of the array is even.
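
As a quick sketch of that definition (using 0-based indexing, so the two middle elements of an even-sized array sit at indices n/2 - 1 and n/2):

```java
public class MedianOfSortedArray {
    public static double median(int[] sorted) {
        int n = sorted.length;
        if (n % 2 == 1) {
            return sorted[n / 2];                         //middle element
        }
        return (sorted[n / 2 - 1] + sorted[n / 2]) / 2.0; //average of the two middles
    }
}
```
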

Now that we understand the definition of median, let's go back to our problem and take an example. We get integers one by one from a stream, and at any given point of time, we have to return the median of the set of integers received so far.
First, the stream gives 12, then 7, and then 8. What will be the median now? It will be 8, because if we arrange 12, 7, 8 in sorted order, 8 is the middle element. What if we get 11 next? Now the sorted order is 7, 8, 11, 12. As the size of the set is even, we take the average of the mid and mid+1 elements, which is 9.5.

## Median of integers stream : thoughts

What would be the brute force solution? As integers come in from the stream, store them in an array. Can we store elements in arbitrary order? If yes, to find the median we have to sort the array every time. The complexity of this method is O(n log n) per median query, dominated by the sorting.
How about inserting each element into the array in sorted order? This makes the complexity of processing an integer from the stream O(n) (O(n²) over the whole stream), as we may have to move n elements to the right in the worst case.
Another underlying problem with using an array here is that we do not know how many integers will come from the stream, so it is difficult to pre-allocate memory. A linked list can solve that problem; however, it does not reduce the processing complexity, and at the same time it increases the complexity of finding the median from O(1) to O(n).

Think about it: do we need a completely sorted set of integers before we can calculate the median? Actually, we only need the kth smallest element of the array if the size of the set is odd, and the average of the kth and (k+1)th elements if the size is even, where k = n/2.

However, we do not have a pre-processed array with us. What is the property of the median? The median is greater than all elements on its left and less than all elements on its right, where the numbers of elements in the two groups are equal or differ by 1.

### Median of integers stream : Heaps

How about we split the incoming integers into two halves? Whenever the median is asked for, we return the maximum of one half if the sizes of the two halves differ by 1, or the average of the maximum of one half and the minimum of the other half if the sizes are equal.

What data structure is best to find the min and max in constant time? A heap. In this case, we need two heaps: a max heap and a min heap. The max heap stores all the elements on the left side of the median and the min heap stores all the elements on the right side of the median.

How do we keep the size difference between the two heaps in check? Insert each new integer into the max heap; if the size of the max heap becomes 2 more than the min heap, extract the maximum element from the max heap and put it into the min heap.

Also, we must maintain the property that all the elements in the max heap are less than all the elements in the min heap. So whenever the root of the max heap is greater than the root of the min heap, it should be removed from the max heap and added to the min heap.

Let's take an example and understand the method first, and then make a concrete algorithm out of it. The first number from the stream is 12. What should we do? We decided to put every number into the max heap to start with.

Now comes the integer 7. First of all, we add the new integer to the max heap. This makes the difference in size between the min and max heaps more than one, so we take the max out of the max heap and put it into the min heap.

The next integer is 18. We add it to the max heap. The difference between the sizes is not more than 1; however, the root of the max heap (18) is greater than the root of the min heap (12). In this case too, we take the root of the max heap and move it to the min heap. At this point, if the median of the integer stream is asked for, we return the root of the min heap, which is 12.

Next comes the integer 10. It goes into the max heap, does not create any size difference, and the root of the max heap is less than the root of the min heap. At this point, the median of the stream of integers is 11 ((10+12)/2).

The next integer from the stream is 11. As usual, we add it to the max heap; the size difference remains less than 2, and 11 is less than the root of the min heap (12).
What should the median be now? At this point, the size of the max heap is greater than that of the min heap, hence we return the root of the max heap (11).

### Median of a stream of integers: Algorithm

1. Read the integer from the stream and add it to the max heap.
2. If the root of the max heap is greater than the root of the min heap:
   1. Delete the root from the max heap.
   2. Add the removed integer to the min heap.
3. If the size difference between the two heaps is more than 1:
   1. Remove the root of the heap which has more elements.
   2. Add the removed element to the other heap.
4. To calculate the median:
   1. If the sizes of both heaps are equal, return the average of their roots.
   2. Else, return the root of the heap with more elements.

#### Median of integers stream : Implementation

The implementation uses a priority queue in Java; refer to the Stack Overflow question on how to use a priority queue as a max heap.

```java
package com.company;

import java.util.Collections;
import java.util.PriorityQueue;

/**
 * Created by sangar on 18.10.18.
 */
public class MedianOfIntegerStream {
    private PriorityQueue<Integer> maxHeap;
    private PriorityQueue<Integer> minHeap;

    public MedianOfIntegerStream() {
        maxHeap = new PriorityQueue<>(Collections.reverseOrder());
        minHeap = new PriorityQueue<>();
    }

    public double getMedian() {
        if (maxHeap.size() == minHeap.size())
            return (double) (maxHeap.peek() + minHeap.peek()) / 2;

        if (maxHeap.size() > minHeap.size())
            return (double) maxHeap.peek();

        return (double) minHeap.peek();
    }

    public void processInteger(int data) {
        //Every new integer goes into the max heap first.
        maxHeap.add(data);

        /* Rebalance: move the max heap's root to the min heap if
           the max heap grew too big or violates the ordering. */
        if (maxHeap.size() - minHeap.size() > 1
                || (minHeap.size() > 0
                    && maxHeap.peek() > minHeap.peek())) {
            minHeap.add(maxHeap.poll());
        }

        if (minHeap.size() - maxHeap.size() > 1) {
            maxHeap.add(minHeap.poll());
        }
    }
}
```

#### Test cases for median in integers stream

```java
package test;

import com.company.MedianOfIntegerStream;
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;

/**
 * Created by sangar on 23.9.18.
 */
public class MedianOfIntegerStreamTest {

    MedianOfIntegerStream tester = new MedianOfIntegerStream();

    @Test
    public void baseTest() {
        tester.processInteger(12);
        tester.processInteger(7);

        assertEquals(9.5, tester.getMedian());
    }

    @Test
    public void maxHeapWithMoreElementsTest() {
        tester.processInteger(12);
        tester.processInteger(7);
        tester.processInteger(9);

        assertEquals(9, tester.getMedian());
    }

    @Test
    public void minHeapWithMoreElementsTest() {
        tester.processInteger(12);
        tester.processInteger(7);
        tester.processInteger(9);
        tester.processInteger(13);
        tester.processInteger(15);

        assertEquals(12, tester.getMedian());
    }

    @Test
    public void minHeapSizeMoreThanTwoDifferenceTest() {
        tester.processInteger(12);
        tester.processInteger(7);
        tester.processInteger(9);
        tester.processInteger(13);
        tester.processInteger(15);
        tester.processInteger(17);
        tester.processInteger(19);

        assertEquals(13, tester.getMedian());
    }

    @Test
    public void maxHeapGetsTheElementTest() {
        tester.processInteger(12);
        tester.processInteger(7);
        tester.processInteger(9);
        tester.processInteger(13);
        tester.processInteger(15);
        tester.processInteger(17);
        tester.processInteger(5);

        assertEquals(12, tester.getMedian());
    }
}
```

The complexity of processing an integer is O(log n), the cost of inserting an element into a heap. However, fetching the median of the stream of integers at any given time is O(1).

## Merge overlapping intervals

Given N intervals S = {E1, E2, ..., En}, where each Ei has a start time si and an end time ei. Some of these intervals overlap. The problem statement is to merge the overlapping intervals.

Ei and Ej (with si <= sj) overlap when the start time of Ej, i.e. sj, is less than the end time of Ei, i.e. ei.

For example:

```
Input:
[(1,3), (2,4), (5,8), (6,9)]
Output:
[(1,4), (5,9)]
Explanation:
Intervals (1,3) and (2,4) overlap, and intervals (5,8) and (6,9) overlap.
```

## Merge overlapping intervals solution

As we always do, let's first try to come up with a brute force solution: given enough time, space, and money, how would you solve this?
The natural course is to take the ith interval and compare the start times of all jth intervals with the end time of the ith; if the start time of the jth interval is less than the end time of the ith interval, the two can be merged. What should the end time of the merged interval be? It should be the maximum of the end times of the two merged intervals.

What will be the time complexity of this approach? We are not using any additional space; however, the worst-case time complexity is O(n^2). Can we do better?

Which two times are we comparing in the brute force solution? The start time of one interval with the end time of another. If we arrange the input in a specific order, can we avoid processing some entries?

If we sort all intervals based on their start time, then si <= si+1 <= si+2. Also, an interval always looks forward: ei > si, ei+1 > si+1, and so on.

If si is greater than ei-1, then si+1 will also be greater than ei-1, so there is no need to compare si+1 with ei-1; that is, no interval Ei needs to look beyond its immediate previous interval.

If si is less than ei-1, update ei-1 with the maximum of ei-1 and ei and move to Ei+1.

Notice that we need the last interval Ei-1 to decide whether to merge the new interval into the previous one or keep it standalone. A stack is the best data structure for this. The algorithm will look like:

1. Consider interval Ei.
2. If the stack is empty, push Ei onto the stack.
3. If the stack is not empty, pop the interval at the top of the stack; call it Ei-1.
4. Compare si, the start time of Ei, with ei-1, the end time of Ei-1.
5. If si is less than ei-1, update ei-1 as max(ei-1, ei), the maximum of the end times of the two intervals, and push Ei-1 back onto the stack.
6. Else push Ei-1 back, and then push Ei onto the stack.
7. Continue till all intervals are considered.
8. At the end of processing, the stack contains all merged intervals.

Let's take an example and see how this algorithm works. We have the following intervals and have to merge the overlapping ones.

Whenever two intervals overlap, find the maximum of their end times, update the previous interval with that end time, and push it back onto the stack.

At the end, when no more intervals remain, the stack contains all merged overlapping intervals.

### Merge intervals Java implementation

```java
package com.company;

import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.Stack;

/**
 * Created by sangar on 8.4.18.
 */
public class OverlappingIntervals {
    public static ArrayList<Interval>
            mergeOverlappingIntervals(ArrayList<Interval> intervals) {

        ArrayList<Interval> mergedIntervals = new ArrayList<>();
        Stack<Interval> s = new Stack<>();

        //Sort the ArrayList of intervals based on start time.
        Collections.sort(intervals,
                Comparator.comparing(p -> p.getStartTime()));

        for (Interval currentInterval : intervals) {
            if (s.empty()) s.push(currentInterval);
            else {
                Interval previousInterval = s.pop();
                if (previousInterval.getEndTime() >
                        currentInterval.getStartTime()) {
                    /*
                    If the current interval's start time is less than the
                    end time of the previous interval, find the max of the
                    end times of the two intervals and push a new interval
                    on to the stack.
                    */
                    int endTime = Integer.max(previousInterval.getEndTime(),
                            currentInterval.getEndTime());
                    /* Notice that we create a new interval and
                       do not update the old one.
                       This concept is called immutability of a class.
                    */
                    s.push(new Interval(previousInterval.getStartTime(),
                            endTime));
                } else {
                    s.push(previousInterval);
                    s.push(currentInterval);
                }
            }
        }

        //Pop the merged intervals off the stack; add at the front
        //to restore ascending order of start times.
        while (!s.empty()) {
            mergedIntervals.add(0, s.pop());
        }

        return mergedIntervals;
    }

    public static void main(String[] args) {
        ArrayList<Interval> intervals = new ArrayList<>();
        intervals.add(new Interval(1, 3));
        intervals.add(new Interval(2, 4));
        intervals.add(new Interval(5, 8));
        intervals.add(new Interval(6, 9));

        ArrayList<Interval> mergedIntervals
                = mergeOverlappingIntervals(intervals);
        for (Interval interval : mergedIntervals) {
            System.out.print("(" + interval.getStartTime() + ","
                    + interval.getEndTime() + ")");
        }
    }
}
```

The complexity of the algorithm to merge overlapping intervals is O(nlogn) due to sorting, with O(n) extra space for the stack; copying into the list to return also takes O(n) space.

There is another way to implement the same function without using a stack. Here we use the fact that ArrayList in Java is backed by an array, so getting an element at a particular index is an O(1) operation. The code looks more or less the same; however, there is no traversal of the stack at the end to create the list to return.


```java
public List<Interval> mergeOptimized(List<Interval> intervals) {

    if (intervals.size() == 0) return intervals;

    Collections.sort(intervals,
        (Interval a, Interval b) -> a.getStartTime() - b.getStartTime());

    List<Interval> mergedIntervals = new ArrayList<Interval>();
    for (Interval interval : intervals) {
        /* If the merged list is empty, add the interval to it;
           otherwise check if the last interval in the merged list
           overlaps with the current one.

           Remember the get function on ArrayList is an O(1) operation
           because ArrayLists in Java are backed by arrays. */
        if (mergedIntervals.isEmpty()
                || mergedIntervals.get(
                    mergedIntervals.size() - 1).getEndTime() <
                    interval.getStartTime()) {
            mergedIntervals.add(interval);
        } else {
            int lastEndTime = Math.max(
                mergedIntervals.get(mergedIntervals.size() - 1)
                    .getEndTime(),
                interval.getEndTime()
            );
            mergedIntervals.get(mergedIntervals.size() - 1)
                .setEndTime(lastEndTime);
        }
    }

    return mergedIntervals;
}
```

You can use the above snippet of code to submit for this LeetCode problem and it should be accepted.

Please share if there is something missing or wrong. Also, please reach out to us at [email protected] if you want to contribute to the website and help others to learn by sharing your knowledge. If you are preparing for an interview and need some coaching to prepare for it, please sign up for the free session with us.

## Right view of a binary tree

We learned different traversals of a binary tree, like inorder, preorder, postorder, and level order, in previous posts. Today's problem also involves a traversal of a binary tree, but in a different manner. The problem statement is to write a program to print the right view of a binary tree. The first question is: what is the right view of a binary tree?

The right view of a binary tree consists of all the nodes which are visible when the tree is looked at from the right-hand side. For example, we would see nodes 10, 15, and 19 when we look at the binary tree below from the right side, as node 7 is hidden by node 15, and nodes 8, 9, and 18 are hidden by node 19.

### Right view of a binary tree: thoughts

What do we see when we look at the tree from the right-hand side? The observation is that once we see a node, we cannot see any node which is on the same level behind it. The visible node obstructs all the others.
Which node will be the first one visible? The rightmost node on that level. So we have to visit the right child of a node before we visit the left child. If a node has a right child, the left child will not be visible. How can we make sure even the cousins of the rightmost node are not visible?

The idea is simple: we do a preorder traversal of the binary tree with the right child visited first. Why? Because if we see the right child, the left child will not be visible, as explained above.

To make sure none of the cousins of a rightmost node are visible, we keep track of levels. When we reach a node, we check if the level of the node is deeper than the maximum level seen so far. If yes, this node is the rightmost node on that level (because we visit the right child first) and should be visible. The maximum visited level becomes this new level, and no other node on this level will be visible.

### Right view of a binary tree: example

Let’s take an example and see how this method works.

We start with the current maximum traversed level as -1. At `node(10)`, we are at level 0, which is greater than the current maximum, so `node(10)` should be visible in the right view of the binary tree.

At `node(15)`, we have moved down a level, so the current level is 1, whereas the current max visited level is 0. `node(15)` will be visible from the right-hand side of the tree, and the max level visited becomes 1.

As we are doing preorder traversal, we will visit `node(19)` next, which is at level 2 which is greater than max level, so, `node(19)` will be visible in the right view of the binary tree.

Next, we visit `node(18)`, which is at level 2; this is equal to the max level, hence `node(18)` will not be visible.

`node(7)` is at level 1, which is less than the current max level 2, so it will not be visible. The same is the case for `node(8)` and `node(9)`.

#### Right view of a binary tree: implementation

```c
#include <stdio.h>
#include <stdlib.h>

struct node{
    int value;
    struct node *left;
    struct node *right;
};
typedef struct node Node;

void printRightView(Node *node, int currLevel, int *maxLevel){

    if(node == NULL) return;

    if(currLevel > *maxLevel){
        printf("%d  ", node->value);
        *maxLevel = currLevel;
    }
    printRightView(node->right, currLevel+1, maxLevel);
    printRightView(node->left, currLevel+1, maxLevel);
}

/* driver program */
Node * createNode(int value){
    Node *temp = (Node *)malloc(sizeof(Node));
    temp->value = value;
    temp->right = NULL;
    temp->left  = NULL;
    return temp;
}

/* Insert a value into a binary search tree (helper; not used
   for the example tree below, which is not a BST). */
Node * insertNode(Node *node, int value){
    if(node == NULL){
        return createNode(value);
    }
    if(node->value > value){
        node->left = insertNode(node->left, value);
    }
    else{
        node->right = insertNode(node->right, value);
    }
    return node;
}

int main(){

    //Creating the binary tree used in the example above.
    Node *root = createNode(10);
    root->left = createNode(7);
    root->right = createNode(15);
    root->left->left = createNode(8);
    root->left->right = createNode(9);
    root->right->left = createNode(18);
    root->right->right = createNode(19);

    int max = -1;
    printRightView(root, 0, &max);

    return 0;
}
```

We visit each node only once, so the complexity of the above code is O(n).


## Balanced partition problem

Given a set of integers, partition the integers into two parts such that the difference between the sums of the two parts is minimum. This is known as the balanced partition problem. For example,

```
Input:
A = [1,7,4,11]
Output:
1
Explanation:
The two subsets can be {1,11} and {7,4}; they have a difference of 1, which is the minimum difference we can get by splitting this array.
```
Mathematically, you have a set of n integers, each in the range 0, . . . , K. Partition these integers into two subsets such that you minimize |S1 − S2|, where S1 and S2 denote the sums of the elements in each of the two subsets.

The balanced partition problem can be asked in many other ways. For instance: given a list of 22 players and their strengths, divide them into two teams so that both teams are balanced. Another version: you have n candies, each with a value associated with it, and you want to distribute the candies between two kids as equally as possible.

No matter what version is asked, the approach remains the same.

## Balance partition problem: thoughts

The brute force method is to list all the subsets of the given set, find the sum of each of them, and then scan through the sums to find the two closest ones. For a set of n elements, there can be `2^n` subsets, so the complexity of this brute force solution is already exponential.

Let me tweak the balanced partition problem a bit: find whether there are two subsets of the set of integers such that the difference between their sums is zero. This is a special case of the original problem. If the difference between the sums of the two subsets is zero, the sum of each subset must be exactly half of the sum of all elements in the set.

So the problem reduces to a smaller one: is there a subset of the integers which adds up to half the sum of all integers in the set? This is the subset sum problem, which we have already solved.

How can we use the information provided by the subset sum problem above? Let's say S is the sum of all the integers in the set, so S/2 is half of that sum. We have to find a subset with sum i such that S/2 − i is minimum.

Whether or not there is a subset with sum i in the set is given by solving the subset sum problem. Among the sums i which are achievable by subsets of the set, find the one at the least distance from S/2. The remaining elements then form the subset whose sum is least greater than half of the total, and the difference between the two subsets, S − 2i, is the minimum possible.

So, the expression would be:

```
min(S/2 - i) where T[n][i] = True and i >= 0 and i <= S/2
```

Why did we take i >= 0 and i <= S/2? Because we want the partition to be balanced, so i cannot be more than half of the total sum in any case.

### Balanced partition problem implementation

```java
package com.company;

/**
 * Created by sangar on 25.11.18.
 */
public class BalancedPartition {
    public int findBalancePartition(int[] a){

        // Calculate the sum of all the elements in the set.
        int S = 0;
        for (int i = 0; i < a.length; i++)
            S += a[i];

        boolean T[][] = new boolean[a.length + 1][S + 1];

        /* Initialize the first column as true:
           sum 0 is possible with any number of elements.
        */
        for (int i = 0; i <= a.length; i++)
            T[i][0] = true;

        /* Initialize the top row, except T[0][0], as false:
           with 0 elements, no sum other than 0 is possible.
        */
        for (int i = 1; i <= S; i++)
            T[0][i] = false;

        for (int i = 1; i <= a.length; i++) {
            for (int j = 1; j <= S; j++) {
                // If the ith element is excluded
                T[i][j] = T[i - 1][j];

                // If the ith element is included
                if (a[i - 1] <= j)
                    T[i][j] |= T[i - 1][j - a[i - 1]];
            }
        }

        // Initialize the difference of the two sums.
        int diff = Integer.MAX_VALUE;

        // Find the largest j <= S/2 for which a subset with sum j exists.
        for (int j = S / 2; j >= 0; j--) {
            if (T[a.length][j])
            {
                diff = S - 2 * j;
                break;
            }
        }
        return diff;
    }
}
```
```

Once we get the nearest sum, we can always backtrack through the table and find the elements of the subset itself. In fact, this problem reduces to the 0/1 knapsack problem, where the maximum value we can get is j from the set of integers.
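
A sketch of that backtracking, rebuilding the same table and then walking it from T[n][j] upwards; the names here are illustrative:

```java
import java.util.ArrayList;
import java.util.List;

public class PartitionBacktrack {
    /* Returns one subset whose sum is the achievable sum closest to S/2;
       the other partition is simply the remaining elements. */
    public static List<Integer> nearestHalfSubset(int[] a) {
        int S = 0;
        for (int value : a) S += value;

        //Same subset-sum table as in findBalancePartition.
        boolean[][] T = new boolean[a.length + 1][S + 1];
        for (int i = 0; i <= a.length; i++) T[i][0] = true;
        for (int i = 1; i <= a.length; i++) {
            for (int j = 1; j <= S; j++) {
                T[i][j] = T[i - 1][j];
                if (a[i - 1] <= j)
                    T[i][j] |= T[i - 1][j - a[i - 1]];
            }
        }

        //Largest achievable sum not exceeding S/2.
        int best = 0;
        for (int j = S / 2; j >= 0; j--) {
            if (T[a.length][j]) { best = j; break; }
        }

        /* Walk the table backwards: if sum j was achievable without
           element i, skip the element; otherwise it is in the subset. */
        List<Integer> subset = new ArrayList<>();
        int i = a.length, j = best;
        while (j > 0) {
            if (T[i - 1][j]) {
                i--;
            } else {
                subset.add(a[i - 1]);
                j -= a[i - 1];
                i--;
            }
        }
        return subset;
    }
}
```

For A = [1,7,4,11], this returns a subset summing to 11, e.g. {7,4}, giving the minimum difference of 1.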

The complexity of splitting a set into two balanced partitions is O(n * S) with a space complexity of O(n * S), where S is the sum of all elements in the array.

## Minimum jumps to reach end of array

Given an array of integers, find the minimum number of jumps needed to reach the end of the array. The condition is that from index i you can jump at most a[i] indices ahead.

For example, in the array [2, 3, 1, 1, 4], the minimum number of jumps required is 2.

At index 0, we can jump 1 or 2 indices ahead. If we jump 2 indices, we would require two more jumps (at the two 1s) to reach index 4, so the total number of jumps would be 3.

However, if we jump only 1 index ahead, the next a[i] allows us to jump 3 indices ahead, reaching the end of the array. So the minimum number of jumps to reach the end of the array is 2.

## Minimum number of jumps : thought process

What would be the brute force method? At each index, try all possible jumps and take the combination which gives the minimum number of jumps. This method has exponential complexity, which we do not want.

What is the original problem? It's `minJumps(start, end)`. Of all the jumps possible from start, let's say we go to index k; how does the problem reduce? Now we have to find the minimum number of jumps from k to end. How do we decide on k? We try all values of k from start+1 to start+a[start].

```
minJumps(start, end) = 1 + min(minJumps(k, end))
                       for all k reachable from start
```

Now that we have a clear recurrence, what should be the base case? When `k == end`, we return 0, as no more jumps are needed; and when `k + a[k] >= end`, only one jump is required from k to the end.

```java
package com.company;

/**
 * Created by sangar on 10.10.18.
 */
public class MinimumJumps {

    public int minimumNumberOfJump(int[] a, int start, int end){
        //If start == end, we reached the end, return 0.
        if(start == end) return 0;

        //If the current element is 0, you cannot reach the end at all.
        if(a[start] == 0) return Integer.MAX_VALUE;

        int minimumJumps = Integer.MAX_VALUE;

        for(int k = start+1; k <= start+a[start] && k <= end; k++){
            /*
            For each k reachable from start, find the minimum jumps.
            */
            int jumps = minimumNumberOfJump(a, k, end);
            if(jumps != Integer.MAX_VALUE && jumps + 1 < minimumJumps){
                minimumJumps = jumps + 1;
            }
        }
        return minimumJumps;
    }
}
```

Test cases for the above function:

```java
package test;

import com.company.MinimumJumps;
import org.junit.jupiter.api.Test;

import static org.junit.Assert.assertEquals;

/**
 * Created by sangar on 23.9.18.
 */
public class MinimumJumpTest {

    MinimumJumps tester = new MinimumJumps();

    @Test
    public void baseTest() {
        int[] a = {1, 3, 5, 8, 9, 2, 6, 7, 6, 8, 9};
        assertEquals(3,
            tester.minimumNumberOfJump(a, 0, a.length-1));
    }

    @Test
    public void arrayContainsZeroTest() {
        int[] a = {1, 3, 0, 0, 0, 2, 6, 7, 6, 8, 9};
        assertEquals(Integer.MAX_VALUE,
            tester.minimumNumberOfJump(a, 0, a.length-1));
    }

    @Test
    public void nullArrayTest() {
        assertEquals(0, tester.minimumNumberOfJump(null, 0, 0));
    }

    @Test
    public void arrayWithTwoElementsTest() {
        int[] a = {1, 0};
        assertEquals(1,
            tester.minimumNumberOfJump(a, 0, a.length-1));
    }
}
```
```

Let's see the execution trace of the above function for an input.

From the execution tree, we notice that some subproblems are calculated again and again. This is known as overlapping subproblems.
Also, the optimal solution to a subproblem leads us to the optimal solution of the original problem, which is the optimal substructure property. These two properties are a must for applying dynamic programming to a problem.

What if we store the minimum number of jumps required to reach a particular index? To reach the first index, 0 jumps are required. `jump[i]` represents the minimum number of jumps to reach index i, so the solution for the end of the array is `jump[n-1]`. How do we fill this array? For each i, go over j = 0 to i-1 and check if j + a[j] >= i; if yes, update `jump[i] = min(jump[i], jump[j] + 1)`.

### Minimum number of jumps: dynamic programming approach

```java
package com.company;

/**
 * Created by sangar on 10.10.18.
 */
public class MinimumJumps {

    public int minimumNumberOfJumpDP(int[] a){

        if(a == null || a.length == 0) return 0;

        if(a[0] == 0) return Integer.MAX_VALUE;

        int[] jump = new int[a.length];

        //No jumps required for the first element.
        jump[0] = 0;

        for(int i = 1; i < a.length; i++){
            jump[i] = Integer.MAX_VALUE;

            for(int j = 0; j < i; j++){
                if(j + a[j] >= i && jump[j] != Integer.MAX_VALUE){
                    jump[i] = Integer.min(jump[i], 1 + jump[j]);
                }
            }
        }
        return jump[a.length-1];
    }
}
```

The complexity of the dynamic programming approach to find the minimum number of jumps to reach the end of an array is `O(n^2)`, with a space complexity of `O(n)`.

If you are interested in solving this problem in `O(n)` time, please visit the Stack Overflow discussion.
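
For completeness, here is a sketch of the well-known greedy O(n) approach (not part of the original post): track the farthest index reachable with the current number of jumps, and take one more jump only when the scan crosses the current jump's boundary. Note this sketch assumes the end is reachable; a production version should detect when `farthest` stops advancing.

```java
public class MinimumJumpsGreedy {
    public static int minJumps(int[] a) {
        if (a == null || a.length <= 1) return 0;

        int jumps = 0;
        int currentEnd = 0; //boundary of the range covered by 'jumps' jumps
        int farthest = 0;   //farthest index reachable so far

        for (int i = 0; i < a.length - 1; i++) {
            farthest = Math.max(farthest, i + a[i]);
            if (i == currentEnd) { //must jump again to go past this boundary
                jumps++;
                currentEnd = farthest;
            }
        }
        return jumps;
    }
}
```
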
