Runway reservation system

Given an airport with a single runway, we have to design a runway reservation system for that airport. To add more detail: each reservation request comes with a requested landing time, say t. The landing can go through if there is no landing already scheduled within k minutes of the requested time; in that case, t can be added to the set of scheduled landings. k can vary and depends on external conditions. This system handles reservations for future landings.
Once a plane lands safely, we have to remove its entry from the set of scheduled landings.

This scenario is perfectly possible even at airports with multiple runways, where only one runway can be used because of weather conditions, maintenance, etc. Also, one landing cannot follow another immediately due to safety reasons, which is why there has to be a minimum gap before the next landing takes place. We have to build this system with the given constraints.

In a nutshell, we have to maintain a set of landings which are compatible with each other, i.e., they do not violate the given constraint. There are two operations to be performed on this set: insertion and removal. Insertion involves checking the constraint.

Example: let's say below is the timeline of all the landings currently scheduled and k = 3 minutes.

reservation system

Now, if a new request comes for landing at 48.5, it should be added to the set as it does not violate the constraint of k minutes. However, if a request comes for landing at 53, it cannot be added as it violates the constraint.
If a new request comes for 35, it is invalid as the requested time is in the past.

Reservation system: thoughts

What is the most brute force solution that comes to mind? We can store all the incoming requests in an unsorted array. Insertion is an O(1) operation as it happens at the end. However, checking that the constraint is satisfied will take O(n), because we have to scan through the entire array.
The same is true even if we use an unsorted linked list.

How about a sorted array? We can use binary search to check the constraint, which takes O(log n). However, insertion will still be O(n), as we have to shift all the elements to the right of the insertion position.

A sorted linked list solves the problem of insertion in O(1), but then checking the constraint will be O(n). It just moves the problem from one place to another.
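To make the brute-force baseline concrete, below is a minimal sketch of the unsorted-array approach. The function name canSchedule and the sample landing times are my own illustrations, not part of the original post; the point is simply that the constraint check has to scan every stored request, which is where the O(n) cost comes from.

#include <stdio.h>
#include <stdlib.h>

/* Returns 1 if time t can be scheduled, i.e. no existing landing lies
   strictly within k minutes of t; returns 0 otherwise. O(n) scan. */
int canSchedule(int landings[], int n, int t, int k){
    for(int i = 0; i < n; i++){
        if(abs(landings[i] - t) < k) return 0;
    }
    return 1;
}

int main(){
    int landings[10] = {41, 44, 55};   /* illustrative scheduled landings */
    int n = 3, k = 3;

    if(canSchedule(landings, n, 49, k))
        landings[n++] = 49;            /* O(1) append at the end */

    if(!canSchedule(landings, n, 53, k))
        printf("53 violates the %d minute constraint\n", k);

    return 0;
}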

Reservation system using binary search tree

We have to optimize two things: first, checking whether the new request meets the constraint; second, inserting the new request into the set.

Let's think of a binary search tree. To check the constraint, we do not have to check every node of the tree; based on the relationship between the current node and the new request time, we can discard half of the remaining tree at each step. (This follows from the binary search tree property: all nodes in the left subtree are smaller than the current node and all nodes in the right subtree are greater.)

When a new request comes, we check it against the root node. If it does not violate the constraint, we check whether the requested time is less than the root. If yes, we go to the left subtree and repeat the check there. If the requested landing time is greater than the root node, we go to the right subtree.
When we fall off the tree below a leaf, we add a new node with the new landing time as its value.

If at any node the constraint is violated, i.e., the new landing time is within k minutes of the time stored in that node, we just return, stating that it is not possible to add the new landing.

What will be the complexity of checking the constraint? It will be O(h), where h is the height of the binary search tree. The insert itself is then an O(1) operation.

Reservation system implementation

#include<stdio.h>
#include<stdlib.h>
#include<math.h>
 
struct node{
    int value;
    struct node *left;
    struct node *right;
};
typedef struct node Node;

void inorderTraversal(Node * root){
    if(!root) return;
	
    inorderTraversal(root->left);
    printf("%d ", root->value);
    inorderTraversal(root->right);
}

Node *createNode(int value){
    Node * newNode =  (Node *)malloc(sizeof(Node));
    
    newNode->value = value;
    newNode->right= NULL;
    newNode->left = NULL;
	
    return newNode;
}

Node *addNode(Node *node, int value, int K){
	if(!node)
        return createNode(value);
    
    /* Constraint violated: the new value lies strictly within K of this node, do not insert */
    if ( node->value + K > value && node->value - K < value ){
        return node;
    }
    if (node->value > value)
        node->left = addNode(node->left, value, K);
    else
        node->right = addNode(node->right, value, K);
    return node;
}

/* Driver program for the function written above */
int main(){
    Node *root = NULL;
	
    //Creating a binary tree
    root = addNode(root, 30, 3);
    root = addNode(root, 20, 3);
    root = addNode(root, 15, 3);
    root = addNode(root, 25, 3);
    root = addNode(root, 40, 3);
    root = addNode(root, 38, 3);
    root = addNode(root, 45, 3);
    inorderTraversal(root);
	
    return 0;
}

Let's say a new requirement comes in: find how many flights are scheduled up to time t.

This problem can easily be solved using the binary search tree, by keeping track of the size of the subtree rooted at each node, as shown in the figure below.

runway reservation system

While inserting a new node, update the subtree-size counter of every node on the path from the root to the new node. When the query is run, walk down the tree: each time the current node's value is less than or equal to t, add the size of its left subtree plus one to the answer and move right; otherwise move left.
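A minimal sketch of this augmentation is below. It follows the idea of the Node structure above but adds a size field; the names SNode, insertWithSize and rankQuery are my own, and the k-minute constraint check is omitted to keep the focus on the counting logic.

#include <stdio.h>
#include <stdlib.h>

typedef struct sNode{
    int value;
    int size;                 /* number of nodes in the subtree rooted here */
    struct sNode *left;
    struct sNode *right;
} SNode;

SNode *insertWithSize(SNode *node, int value){
    if(!node){
        SNode *newNode = (SNode *)malloc(sizeof(SNode));
        newNode->value = value;
        newNode->size = 1;
        newNode->left = newNode->right = NULL;
        return newNode;
    }
    node->size++;             /* every node on the insertion path grows by one */
    if(value < node->value)
        node->left = insertWithSize(node->left, value);
    else
        node->right = insertWithSize(node->right, value);
    return node;
}

/* How many scheduled landings have time <= t */
int rankQuery(SNode *node, int t){
    if(!node) return 0;
    if(t < node->value)
        return rankQuery(node->left, t);
    /* current node counts, plus its whole left subtree */
    return 1 + (node->left ? node->left->size : 0) + rankQuery(node->right, t);
}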

Please share if there is something wrong or missing. If you are preparing for an interview, please signup to receive free preparation material.

Prune nodes not on paths with given sum

Prune nodes not on paths with a given sum is a very commonly asked question in Amazon interviews. It involves two concepts in one problem: first, how to find a path with a given sum, and second, how to prune nodes from a binary tree. The problem statement is:

Given a binary tree, prune nodes which are not on paths with a given sum.

For example, given the binary tree below and the sum 43, the red nodes will be pruned, as they are not on any path with sum 43.

Prune nodes not on path with given sum

Prune nodes in a binary tree: thoughts

To solve this problem, first understand how to find paths with a given sum in a binary tree. One way to prune all nodes which are not on these paths is to collect all the nodes which are not part of any such path and then delete them one by one. That requires two traversals of the binary tree.
Is it possible to delete a node while calculating the paths with the given sum? At what point do we find out that a path does not add up to the given sum? At the leaf node.
Once we know that a leaf node is not part of a path with the given sum, we can safely delete it. What about its parent? We cannot directly delete the parent node, as its other subtree may lead to a path with the given sum. Hence, for every node, the pruning decision depends on what comes back from processing its subtrees.

At a leaf node, if it cannot be part of a path with the given sum, we delete it and return false to the parent. At a parent node, we look at the return values from both subtrees. If both subtrees return false, this node is not on any path with the given sum and is deleted too. If at least one subtree returns true, the current node is part of a path with the given sum; it should not be deleted and should return true to its parent.

Prune nodes from a binary tree: implementation

#include<stdio.h>
#include<stdlib.h>
#include<math.h>
 
struct node{
	int value;
	struct node *left;
	struct node *right;
};
typedef struct node Node;

#define true 1
#define false 0

int prunePath(Node *node, int sum ){
	
	/* A null child cannot contribute a path with the given sum */
	if( !node ) return false;
	
	int subSum =  sum - node->value;
	/* To check if left tree or right sub tree 
	contributes to total sum  */
	
	int leftVal = false, rightVal = false;
	
	/*Check if node is leaf node */
	int isLeaf = !( node->left || node->right );
	
	/* If node is a leaf node and it is part of a path with sum
	equal to the given sum, return true to the parent node so that
	the parent node is not deleted */
	if(isLeaf && !subSum )
		return true;
		
	/* If node is a leaf and it is not part of a path with sum
	equal to the given sum,
    return false to the parent node */
    else if(isLeaf && subSum ){
    	free(node);
    	return false;
    }
    /* If node is not leaf and there is left child 
	Traverse to left subtree*/
    leftVal = prunePath(node->left, subSum);
    
    /* If node is not leaf and there is right child
	 Traverse to right subtree*/
    rightVal = prunePath(node->right, subSum);
    
    /* This is the crux of the algorithm.
    1. Detach any child which was pruned (it has already been freed
	inside the recursive call).
    2. If neither left subtree nor right subtree can lead to a path
	with the given sum, delete this node as well.
    3. If at least one subtree can lead to a path with the given sum,
	keep the node. */
    if(!leftVal)
    	node->left = NULL;
    if(!rightVal)
    	node->right = NULL;

    if(!(leftVal || rightVal) ){
    	free(node);
    	return false;
    }
    return true;
}

void inorderTraversal(Node * root){
	if(!root)
		return;
	
	inorderTraversal(root->left);
	printf("%d ", root->value);
	inorderTraversal(root->right);
}
Node *createNode(int value){
	Node * newNode =  (Node *)malloc(sizeof(Node));
	newNode->value = value;
	newNode->right= NULL;
	newNode->left = NULL;
	
	return newNode;
}
Node *addNode(Node *node, int value){
	if(node == NULL){
		return createNode(value);
	}
	else{
		if (node->value > value){
			node->left = addNode(node->left, value);
		}
		else{
			node->right = addNode(node->right, value);
		}
	}
	return node;
}

/* Driver program for the function written above */
int main(){
	Node *root = NULL;
	//Creating a binary tree
	root = addNode(root,30);
	root = addNode(root,20);
	root = addNode(root,15);
	root = addNode(root,25);
	root = addNode(root,40);
	root = addNode(root,37);
	root = addNode(root,45);
	
	inorderTraversal(root);
	printf( "\n");
	
	/* If the root itself gets pruned, it is freed inside prunePath,
	   so traverse again only when prunePath returns true */
	if( prunePath(root, 65) ){
		inorderTraversal(root);
	}
	return 0;
}

The complexity of this algorithm to prune all nodes which are not on a path with a given sum is O(n), as every node is visited once.

Please share if there is something wrong or missing. If you are preparing for interviews, please signup for free interview material.

Print paths in a binary tree

We have learned various kinds of traversals of a binary tree, like inorder, preorder and postorder. The paths in a binary tree problem requires a traversal of the binary tree too, like almost every other problem on binary trees. The problem statement is:

Given a binary tree, print all paths in that binary tree

What is a path in a binary tree? A path is a collection of nodes from the root to any leaf of the tree. By definition, a leaf node is a node which does not have a left or a right child. For example, one of the paths in the binary tree below is 10, 7, 9.

Paths in a binary tree

Paths in a binary tree: thoughts

It is clear from the problem statement that we have to start at the root and go all the way to the leaf nodes. The question is: do we need to start from the root again for each path in the binary tree? Well, no. Paths share common nodes. Once we reach the end of a path (a leaf node), we just move up one node at a time and explore other paths from the parent node. Once all those paths are explored, we go up one more level and explore all paths from there.

This is a typical postorder traversal of a binary tree: we finish the paths in the left subtree of a node before exploring paths in the right subtree. We process the current node before going into the left or right subtree to check whether it is a leaf node, and we add it to the path built so far. Once we have explored the left and right subtrees, the current node is removed from the path.

Let's take an example and see how it works. Below is the tree for which we have to print all the paths.

Paths in a binary tree

First of all, our list of paths is empty. We create a current path and start from the root node, which is node(10). Add node(10) to the current path. As node(10) is not a leaf node, we move towards the left subtree.

print paths in a binary tree

node(7) is added to the current path. It is not a leaf node either, so we again go down the left subtree.

paths in binary search tree

node(8) is added to the current path and this time, it is a leaf node. We put the entire path into the list of paths or print the entire path based on how we want the output.

paths in a binary tree

At this point, we take out node(8) from the current path and move up to node(7). As we have traversed the left subtree of node(7), we will now traverse the right subtree of node(7).

paths in binary search tree

node(9) is added now to the current path. It is also a leaf node, so again, put the path in the list of paths. node(9) is moved out of the current path.

Now, left and right subtrees of node(7) have been traversed, we remove node(7) from the current path too.

At this point, we have only one node in the current path, which is node(10). We have already traversed its left subtree, so we start traversing the right subtree: next we visit node(15) and add it to the current path.

node(15) is not a leaf node, so we move down the left subtree. node(18) is added to the current path. node(18) is a leaf node too. So, add the entire path to the list of paths. Remove node(18) from the current path.

We go next to the right subtree of the node(15), which is the node(19). It is added to the current path. node(19) is also a leaf node, so the path is added to the list of paths.

Now that the left and right subtrees of node(15) are traversed, it is removed from the current path, and so is node(10).

Print paths in a binary tree: implementation

package com.company.BST;

import java.util.ArrayList;

/**
 * Created by sangar on 21.10.18.
 */
public class PrintPathInBST {
    public void printPath(BinarySearchTree tree){
        ArrayList<TreeNode> path  = new ArrayList<>();
        this.printPathRecursive(tree.getRoot(), path);
    }

    private void printPathRecursive(TreeNode root,
									ArrayList<TreeNode> path){
        if(root == null) return;

        path.add(root);

        //If node is leaf node
        if(root.getLeft() == null && root.getRight() == null){
            path.forEach(node -> System.out.print(" " 
							+ node.getValue()));
            path.remove(path.size()-1);
            System.out.println();
            return;
        }

        /* Not a leaf node: the node is already on the path,
		continue the traversal into both subtrees */
        printPathRecursive(root.getLeft(),path);
        printPathRecursive(root.getRight(), path);

		//Remove the root node from the path
        path.remove(path.size()-1);
    }
}

Test cases

package com.company.BST;
 
/**
 * Created by sangar on 10.5.18.
 */
public class BinarySearchTreeTests {
    public static void main (String[] args){
        BinarySearchTree binarySearchTree = new BinarySearchTree();
 
        binarySearchTree.insert(7);
        binarySearchTree.insert(8);
        binarySearchTree.insert(6);
        binarySearchTree.insert(9);
        binarySearchTree.insert(3);
        binarySearchTree.insert(4);
 
        PrintPathInBST printer = new PrintPathInBST();
        printer.printPath(binarySearchTree);
    }
}

Tree node definition

package com.company.BST;

/**
 * Created by sangar on 21.10.18.
 */
public class TreeNode<T> {
    private T value;
    private TreeNode left;
    private TreeNode right;

    public TreeNode(T value) {
        this.value = value;
        this.left = null;
        this.right = null;
    }

    public T getValue(){
        return this.value;
    }
    public TreeNode getRight(){
        return this.right;
    }
    public TreeNode getLeft(){
        return this.left;
    }

    public void setValue(T value){
        this.value = value;
    }

    public void setRight(TreeNode node){
        this.right = node;
    }

    public void setLeft(TreeNode node){
        this.left = node;
    }
}

Complexity of above algorithm to print all paths in a binary tree is O(n).

Please share if there is something wrong or missing. If you are preparing for Amazon, Microsoft or Google interviews, please signup for interview material for free.

First non repeated character in string

Given a string, find the first non-repeated character in it. For example, if the string is abcbdbdebab, the first non-repeating character would be c. Even though e is also non-repeating in the string, c is the output because it is the first non-repeating character.

Non repeating character : thoughts

What does it mean to be a non-repeating character? Well, the character should occur in the string only once. How about we scan the string and find the count for each character? Store the character and its count in a map as a key-value pair.
Now that we have a <character, count> key-value pair for every unique character in the string, how can we find the first non-repeating character? Refer back to the original string: scan it again and, for each character, check the corresponding count; if it is 1, return that character.

package com.company;

import java.util.HashMap;

/**
 * Created by sangar on 4.10.18.
 */
public class FirstNonRepeatingChar {

    public char firstNonRepeatingCharacter(String s){
        //Best to discuss with the interviewer what we should return here
        if(s == null) return ' ';

        if(s.length() == 0) return ' ';

        //Keep the map local so repeated calls do not accumulate stale counts
        HashMap<Character, Integer> characterCount = new HashMap<>();

        for (char c: s.toCharArray()){
            if(!characterCount.containsKey(c)){
                characterCount.put(c,1);
            }
            else {
                characterCount.put(c, characterCount.get(c) + 1);
            }
        }
        for (char c: s.toCharArray()) {
            if(characterCount.get(c) == 1) return c;
        }

        return ' ';
    }
}
package test;

import com.company.FirstNonRepeatingChar;
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;

/**
 * Created by sangar on 28.8.18.
 */
public class FirstNonRepeatingCharTest {

    FirstNonRepeatingChar tester = new FirstNonRepeatingChar();

    @Test
    public void firstNonRepeatingCharExists() {
        String s = "abcbcbcbcbad";
        assertEquals('d', tester.firstNonRepeatingCharacter(s));
    }

    @Test
    public void firstNonRepeatingCharDoesNotExists() {
        String s = "abcbcbcbcbadd";
        assertEquals(' ', tester.firstNonRepeatingCharacter(s));
    }

    @Test
    public void firstNonRepeatingCharWithEmptyString() {
        String s = "";
        assertEquals(' ', tester.firstNonRepeatingCharacter(s));
    }

    @Test
    public void firstNonRepeatingCharWithNull() {
        assertEquals(' ', tester.firstNonRepeatingCharacter(null));
    }
}

The complexity of this method to find the first non-repeating character in a string is O(n), along with a space complexity of O(1) to store the character-to-count map.

There is always some confusion about the space complexity of the above method: since up to 256 characters may be stored, should that not be counted as space? It should, but in asymptotic notation this space is independent of the size of the input, so the space complexity remains O(1).

One more thing: even though the time complexity is O(n), the input string is scanned twice, the first time to get the count of each character and the second time to find the first non-repeating character.

Optimization

Consider a case where the string is very large, with millions of characters, most of them repeated; the above solution may become slow in the last step, where we look for a character with count 1 in the map. How can we avoid scanning the string a second time?
How about we store some extra information along with the count in the map, so that we can figure out whether a character is the first non-repeating one or not?
Alternatively, we can keep two maps: one stores the count and the other stores the first index of each character.

Once we have created the two maps as mentioned above, go through the first map and find all the characters with count 1. Among these characters, pick the one with the minimum index in the second map and return it.

The complexity of the algorithm remains the same; however, the second scan of the string is no longer required. In other words, the second pass is now independent of the size of the input, as it depends only on the size of the first map, which is bounded by 256, the number of possible 8-bit characters.

Find first non repeating character : Implementation

package com.company;

import java.util.HashMap;

/**
 * Created by sangar on 4.10.18.
 */
public class FirstNonRepeatingChar {
    public char firstNonRepeatingCharacterOptimized(String s){
        HashMap<Character, Integer> characterCount = new HashMap<>();
        HashMap<Character, Integer>characterIndex = new HashMap<>();
        //Best to discuss it with interviewer, what should we return here?
        if(s == null) return ' ';

        if(s.length() == 0) return ' ';

        for (int i=0; i<s.length(); i++){
            char c  = s.charAt(i);
            if(!characterCount.containsKey(c)){
                characterCount.put(c,1);
                characterIndex.put(c,i);
            }
            else {
                characterCount.put(c, characterCount.get(c) + 1);
            }
        }
        char nonRepeatedCharacter = ' ';
        int prevIndex = s.length();
        for (char c : characterCount.keySet()) {
            if(characterCount.get(c) == 1 
			&& characterIndex.get(c) < prevIndex){
                prevIndex = characterIndex.get(c);
                nonRepeatedCharacter = c;
            }
        }
        return nonRepeatedCharacter;
    }
}
package test;

import com.company.FirstNonRepeatingChar;
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;

/**
 * Created by sangar on 28.8.18.
 */
public class FirstNonRepeatingCharTest {

    FirstNonRepeatingChar tester = new FirstNonRepeatingChar();

    @Test
    public void firstNonRepeatingOptimizedCharExists() {
        String s = "aebcbcbcbcbad";
        assertEquals('e', tester.firstNonRepeatingCharacterOptimized(s));
    }

    @Test
    public void firstNonRepeatingCharOptimizedDoesNotExists() {
        String s = "abcbcbcbcbadd";
        assertEquals(' ', tester.firstNonRepeatingCharacterOptimized(s));
    }

    @Test
    public void firstNonRepeatingCharOptimizedWithEmptyString() {
        String s = "";
        assertEquals(' ', tester.firstNonRepeatingCharacterOptimized(s));
    }

    @Test
    public void firstNonRepeatingCharOptimizedWithNull() {
        assertEquals(' ', tester.firstNonRepeatingCharacterOptimized(null));
    }
}

Please share if there is anything wrong or missing. If you are interested in taking personalized coaching sessions by our expert teachers, please signup to website and get first session free.


Difference between array and linked list

In the last post, Linked list data structure, we discussed the basics of linked lists, and I promised to go into the details of the difference between arrays and linked lists. Before going into the post, I want to make sure you understand that there is no such thing as one data structure being better than the other. Based on your requirements and use cases, you choose one or the other. It depends on which operation your algorithm performs most frequently in its lifetime. That's why interviews have a data structure round: to understand if you can choose the correct one for the problem.

What is an array?
An array is a linear, sequential and contiguous collection of elements which can be addressed using an index.

What is a linked list?
A linked list is a linear, sequential and non-contiguous collection of nodes, where each node stores a reference to the next node. To understand more, please refer to Linked list data structure.

Difference between arrays and linked list

Static Vs dynamic size

The size of an array is defined statically at compile time, whereas a linked list grows dynamically at run time based on need. Consider a case where you know the maximum number of elements the algorithm would ever handle; then you can confidently declare it as an array. However, if you do not know, a linked list is better. There is a catch: what if there is only a rare chance that the number of elements will reach the maximum, and most of the time it will be far less? In this case, an array would unnecessarily allocate extra memory which may never be used.

Memory allocation

An array is given contiguous memory in the system. So, if you know the address of any element in the array, you can access any other element based on its position.

linked list vs arrays
Statically allocated contiguous memory

Linked list nodes are not stored contiguously in memory; they are scattered around. So from a given node you can traverse forward in the linked list (using the next node reference), but you cannot access the nodes before it.

arrays vs linked list
Dynamically allocated non-contiguous memory

Contiguous allocation requires sufficient memory up front for an array to be stored; for example, to store 20 integers in an array, we would require an 80-byte contiguous memory chunk. However, with a linked list we can start with the memory for a single node and request more as and when required, wherever it may be available. Contiguous allocation also makes it difficult to resize an array: we have to look for a different chunk of memory which fits the new size and move all existing elements to that location. Linked lists, on the other hand, are dynamically sized and can grow without relocating existing elements.

Memory requirement

So is non-contiguous memory always good? It comes with a cost. Each node of a linked list has to store a reference to the next node in memory. This adds an extra payload of a pointer (4 or 8 bytes) to each node. An array does not require this extra payload. You have to trade off the extra space against the advantages you are getting. Also, sometimes spending extra space is better than having cumbersome operations like shifting elements while adding and deleting in an array, or the value stored in a node is big enough to make those extra bytes negligible in the analysis.

Operation efficiency

We perform operations on a data structure to get some output. There are four basic operations we should consider: read, search, insert/update and delete.

Read on an array is O(1): you can directly access any element in the array given its index. By O(1), we mean that a read does not depend on the size of the array.
In contrast, the time complexity of a read on a linked list is O(n), where n is the number of nodes. So, if you have a problem which requires many random reads, an array will outweigh a linked list.

Given the contiguous memory allocation of an array, there are optimized algorithms like binary search to look up elements, with a complexity of O(log n) on a sorted array. Search on a linked list, on the other hand, requires O(n).
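As a small illustration of the read-cost difference (a sketch only; the Node type here mirrors the one used in the linked list post and getNth is my own helper name):

#include <stdio.h>

typedef struct Node {
    int data;
    struct Node *next;
} Node;

/* Array read: a single address computation, O(1) */
int readArray(int arr[], int index){
    return arr[index];
}

/* Linked list read: must walk index nodes from the head, O(n) */
int getNth(Node *head, int index){
    Node *current = head;
    while(current != NULL && index > 0){
        current = current->next;
        index--;
    }
    return current != NULL ? current->data : -1;   /* -1 used here to signal "not found" */
}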

Insert on an array is O(1) again, if we are writing within the allocated size of the array. In a linked list, the complexity of insert depends on where the new element is written. If the insert happens at the head, it is O(1); on the other hand, if the insert happens at the end, it is O(n).

Insert node at start of linked list
Insert node at the tail of linked list

Update here means changing the size of the array or linked list by adding one more element. For an array it is a costly operation, as it requires reallocation of memory and copying all the existing elements over. It does not matter whether you add the element at the end or at the start, the complexity remains O(n).
For a linked list it varies: adding at the end is O(n), adding at the head is O(1).
In the same vein, delete on an array requires movement of all the remaining elements if the first element is deleted, hence a complexity of O(n). However, delete on a linked list is O(1) if it is the head, and O(n) if it is the tail.

To see the difference between O(1) and O(n), the graph below should be useful.

difference between array and linked list
Complexity analysis graph

Key differences between arrays and linked lists are as follows:

  • Arrays are really bad at insert and delete operations due to shifting of elements and reallocation of memory.
  • Arrays are statically sized at compile time.
  • Array memory allocation is contiguous, which makes accessing elements easy without any additional pointers. You can jump around the array without touching all the elements in between.
  • Linked lists have almost the same complexity when insert and delete happen at the end; however, no memory shuffling happens.
  • Search on a linked list is bad; it usually requires a scan with O(n) complexity.
  • Linked lists are dynamically sized at run time.
  • Linked list memory allocation is non-contiguous; an additional pointer is required to store the neighbour node reference. You cannot jump around in a linked list.

Please share if there is something wrong or missing. If you want to contribute to the website, please reach out to us at communications@algorithmsandme.com

Linked list data structure

A linked list is a very important data structure to understand, as a lot of problems based on linked lists are asked in Amazon, Microsoft and Google interviews. Today, we will understand the basics of the linked list data structure and its implementation.

A linked list represents a linear sequence of elements. Each element is connected to the next using a chain of references. Another data structure which stores a linear sequence of items is the array. There are advantages and use cases where the linked list way of storing a sequence is more efficient than an array; I will cover that in the next post: Arrays Vs Linked lists.

In the last paragraph, I emphasized that a linked list is a linear data structure. In a linear data structure, there is a sequence and order in which elements are inserted, arranged and traversed. In order to reach the tail of a linked list, we have to go through all of the nodes.

linked list data structure
linear data structure when elements can be traversed only in one order


Non-linear data structures are ones where elements are not arranged or traversed in a specific order. One element may be connected to many others, hence we cannot traverse them in the same order every time. Examples of non-linear data structures are maps, dictionaries, trees, graphs, etc.

linked list as data structure
Non  linear data structure when nodes cannot be traversed in one order always

Linked list implementation

A linked list consists of nodes, any number of them. Each node contains two things: first, the value of the node, which can be of any type (integer, string, or another user-defined type); second, a reference which points to the next node in the linked list. A node can be declared as follows:

typedef struct Node {
	int data;
	struct Node * next;
} Node;
Node structure
Linked list

What happens if the node is the last node in the linked list? At the last node, the next pointer points to null. It's very important to understand this bit, as this condition will be used in almost every problem you have to solve on linked lists.

A linked list is a dynamic data structure. By dynamic, we mean that its size is not defined at compile time but at run time. Every time a new node is added to the linked list, a new memory location is allocated and the previous node's next pointer is made to point to the new node.

Operations of linked list

  • Adding a node at the end of the list
    The basic steps to add a node at the end of a linked list are:
  1. Check if the list already has a node.
    1. If not, create a new node and return it as the head of the linked list.
  2. If there is a node,
    1. Scan through the linked list using the next pointer and reach the last node.
    2. Create a new node, and point the next pointer of the last node to this new node.
Node * createNode(int val){
	Node * newNode = (Node *)malloc(sizeof(Node));
	if(newNode){
		newNode->data = val;
		newNode->next = NULL;
	}
	return newNode;
}

void addNode(Node **headRef, int value){
	//create new node
	Node *newNode = createNode(value);

	//find the last node
	Node *currentNode = *headRef;
	while(currentNode && currentNode->next != NULL){
		currentNode = currentNode->next;
	}
	if(currentNode){
		currentNode->next = newNode;
	}
	else{
		//Change headRef to point to new head.
		*headRef = newNode;
	}
}

The complexity of adding a node at the end of a linked list is O(n).

  • Insert a node at the head of the list
    In this case too, we allocate a new node; however, this time we do not have to scan the entire list. Note that every time we add a node this way, the head of the list changes.
  1. Check if the list already has a node.
    1. If not, create a new node and return it as the head of the linked list.
  2. If there is a node,
    1. Create a new node, and point the next pointer of the new node to the current head.
    2. Return the new node as the head pointer.
Node * createNode(int val){
	Node * newNode = (Node *)malloc(sizeof(Node));
	if(newNode){
		newNode->data = val;
		newNode->next = NULL;
	}
	return newNode;
}

void addNode(Node **headRef, int value){
	//create new node
	Node *newNode = createNode(value);
	newNode->next = *headRef;
	*headRef = newNode;
}

Linked list data structure problems

It's very important to understand that a linked list is a recursive data structure. The base case is a linked list with no nodes, represented by NULL. Every problem on a linked list can be solved using the template: process one node, and then recursively process the remaining linked list.

In programming terms, a linked list is divided into two parts, head and tail. The node being processed is called the head and the rest of the linked list is the tail. The tail has exactly the same structure as the original list.

Problems like merging linked lists, reversing a linked list and finding the length of a linked list can all be solved using the same template of processing one node and then recursively calling the function on the remaining list, as the sketch below shows.
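For instance, here is a minimal sketch of the head/tail template applied to finding the length of a linked list, using the Node type declared above (the function name length is mine):

/* Length of a linked list with the head/tail template:
   an empty list has length 0, otherwise it is 1 (the head)
   plus the length of the tail. */
int length(Node *head){
	if(head == NULL) return 0;          /* base case: empty list */
	return 1 + length(head->next);      /* process head, recurse on the tail */
}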

Types of linked list

There are three types of linked lists:
1. Singly linked list
Singly linked lists contain nodes with data and a reference, i.e., next, which points to the next node in the sequence. The next pointer of the last node points to null. In a singly linked list you can traverse in only one direction.

Singly linked list

2. Doubly linked list
In a doubly linked list, each node contains two links: previous, which points to the node before the current node, and next, which points to the next node. The previous pointer of the first node and the next pointer of the last node point to null. In a doubly linked list, you can traverse in both directions. The two references add weight, as extra memory is required; a node declaration is sketched below the figure.

doubly linked list
doubly linked list
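A doubly linked list node could be declared like this, following the same style as the singly linked Node above (DNode is my own name for the type):

typedef struct DNode {
	int data;
	struct DNode *prev;   /* link to the previous node, NULL for the first node */
	struct DNode *next;   /* link to the next node, NULL for the last node */
} DNode;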

3. Circular linked list
In a circular linked list, the next pointer of the last node points to the first node. A circular linked list can be either singly or doubly linked; a small sketch after the figure shows how the last node is linked back.

circular linked list
Circular doubly linked list
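For example, a singly linked circular list can be formed by pointing the last node's next reference back to the head. Here is a small sketch using the Node type from above (makeCircular is my own helper name):

/* Link the last node back to the head to make the list circular. */
void makeCircular(Node *head){
	if(head == NULL) return;

	Node *current = head;
	while(current->next != NULL){
		current = current->next;
	}
	current->next = head;   /* the last node now points to the first node */
}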

That was all for the basics of linked lists. I know problems on them can be hard to solve, but if you look at all the problems, they boil down to one thing: understanding the node and how recursion can be used. In the next posts, we will solve many of these problems and see how we can use these basics.

Please share if there is something wrong or missing. If you are interested in contributing to website and share your knowledge with thousands of users across world, please reach out to us at communications@algorithmsandme.com

Inorder predecessor in binary search tree

What is an inorder predecessor in a binary tree? The inorder predecessor is the node which is traversed just before the given node in the inorder traversal of the binary tree. In a binary search tree, it is the largest value smaller than the given node. For example, the inorder predecessor of node(6) in the tree below is 5, and for node(10) it is 6.

inorder predecessor

If the node is the leftmost node in the BST, i.e., the least node, then there is no inorder predecessor for that node.

Inorder predecessor : Thoughts

To find the inorder predecessor, the first thing to do is find the node itself. As we know, in an inorder traversal the root node is visited after its left subtree. So a node can only be the predecessor of a given node that lies on its right side.

Let's come up with examples and see what algorithm works. First case: if the given node is the leftmost node of the tree, there is no inorder predecessor; in that case, return NULL. For example, the predecessor of node(1) is NULL.

predecessor in BST

What if the node has a left subtree? In that case, the maximum value in the left subtree will be the predecessor of the given node. We can find the maximum value in a tree by going down the right subtree until the right child is NULL, and then returning that last node. For example, the predecessor of node(10) is 6.

inorder predecessor

What are the other cases? What if the node does not have a left subtree but is also not the leftmost node? Then an ancestor of the given node will be the inorder predecessor. While moving down the tree, keep track of the last node at which we turned into a right subtree, as it may be the answer. The predecessor of node(12) will be 10, as that is where we moved into a right subtree last. Note that we update the predecessor candidate only while moving down into a right subtree.

Algorithm to find inorder predecessor

  1. Start with the root: current = root, predecessor = NULL.
  2. If node.value > current.value, then predecessor = current, current = current.right.
  3. If node.value < current.value, current = current.left.
  4. If node.value == current.value and node.left != null, predecessor = maximum(current.left).
  5. Return predecessor.

Inorder predecessor: Implementation

#include<stdio.h>
#include<stdlib.h>
 
struct node{
	int value;
	struct node *left;
	struct node *right;
};

typedef struct node Node;


/* This function return the maximum node in tree rooted at node root */
Node *findMaximum(Node *root){
    if(!root)
        return NULL;
 
    while(root->right) root = root->right;
    return root;
}
/* This function implements the logic described in algorithm to find inorder predecessor
of a given node */
Node *inorderPredecessor(Node *root, int K){
 
    Node *predecessor 	= NULL;
    Node *current  		= root;
    
    if(!root)
        return NULL;
 
    while(current && current->value != K){
        /* If the current value is greater than K, take the left turn;
           no need to update the predecessor pointer */
        if(current->value > K){
            current = current->left;
        }
        /* If the current value is less than the node we are looking for, go to the right subtree.
           When we move right, update the predecessor pointer to keep track of the last right turn */
        else{
            predecessor = current;
            current = current->right;
        }
    }
    /* Once we reach the node whose inorder predecessor is to be found,
    check if it has a left subtree; if yes, the maximum in that left subtree is the predecessor.
    Otherwise, the node at which we last turned right is already stored in the predecessor pointer and will be returned */
    if(current && current->left){
        predecessor = findMaximum(current->left);
    }
    return predecessor;
}
Node * createNode(int value){
  Node *newNode =  (Node *)malloc(sizeof(Node));
  
  newNode->value = value;
  newNode->right= NULL;
  newNode->left = NULL;
  
  return newNode;
  
}

Node * addNode(Node *node, int value){
  if(node == NULL){
      return createNode(value);
  }
  else{
     if (node->value > value){
        node->left = addNode(node->left, value);
      }
      else{
        node->right = addNode(node->right, value);
      }
  }
  return node;
}
 
/* Driver program for the function written above */
int main(){
  Node *root = NULL;
  int n = 0;
  //Creating a binary tree
  root = addNode(root,30);
  root = addNode(root,20);
  root = addNode(root,15);
  root = addNode(root,25);
  root = addNode(root,40);
  root = addNode(root,37);
  root = addNode(root,45);
  
  Node *predecessor = inorderPredecessor(root, 40);
  printf("\n Inorder predecessor node is : %d ", predecessor ? predecessor->value : 0);
  
  return 0;
}

The complexity of the algorithm to find the inorder predecessor is O(log N) in an almost balanced binary search tree. If the tree is skewed, we have a worst-case complexity of O(N).

Please share if there is something wrong or missing. If you want to contribute to website and share your knowledge with thousands of learners across the world, please reach out to us at communications@algorithmsandme.com

Inorder successor in binary search tree

What is an inorder successor in a binary tree? The inorder successor is the node which is traversed right after the given node in the inorder traversal of the binary tree. In a binary search tree, it is the next bigger value after the node. For example, the inorder successor of node(6) in the tree below is 10, and for node(12) it is 14.

inorder successor

If the node is the rightmost node of the BST, i.e., the greatest node, then there is no inorder successor for that node.

Inorder successor : Thoughts

To find the inorder successor, the first thing to do is find the node itself. As we know, in an inorder traversal the root node is visited after its left subtree. So a node can only be the successor of a given node that lies on its left side.

Let's come up with examples and see what algorithm works. First case: if the node is the rightmost node of the tree, there is no inorder successor; in that case, return NULL. For example, the successor of node(16) is NULL.

What if the node has a right subtree? In that case, the minimum value in the right subtree will be the successor of the given node. We can find the minimum value in a tree by going down the left subtree until the left child is NULL, and then returning that last node. For example, the successor of node(5) is 6.

inorder successor

What are the other cases? What if the node does not have a right subtree? Then an ancestor of the given node will be the inorder successor. While moving down the tree, keep track of the last node at which we turned into a left subtree, as it may be the answer. The successor of node(7) will be 10, as that is where we moved into a left subtree last. Note that we update the successor candidate only while moving down into a left subtree.

Algorithm to find inorder successor

  1. Start with root, current = root, successor = NULL.
  2. If node.value < current.value, then successor = current, current = current.left.
  3. If node.value > current.value, current = current.right.
  4. If node.value == current.value and node.right != null, successor = minimum(current.right).
  5. Return successor

Inorder successor : Implementation

#include<stdio.h>
#include<stdlib.h>
 
struct node{
	int value;
	struct node *left;
	struct node *right;
};

typedef struct node Node;


//this function finds the minimum node in given tree rooted at node root
Node * findMinimum(Node *root){
    if(!root)
        return NULL;
   // Minimum node is left most child. hence traverse down till left most node of tree.
    while(root->left) root = root->left;
   // return the left most node
    return root;
}
/* This function implements the logic described in algorithm to find inorder successor
of a given node */
Node *inorderSuccessor(Node *root, Node *node){
 
    Node *successor = NULL;
    Node *current  = root;
    if(!root)
        return NULL;
 
    while(current->value != node->value){
        /* If the current value is greater than the node we are looking for, go to the left subtree.
           When we move left, update the successor pointer to keep track of the last left turn */
        
        if(current->value > node->value){
            successor = current;
            current= current->left;
        }
        /* Else take right turn and no need to update successor pointer */
        else
            current = current->right;
    }
    /* Once we reach the node whose inorder successor is to be found,
    check if it has a right subtree; if yes, the minimum in that right subtree is the successor.
    Otherwise, the node at which we last turned left is already stored in the successor pointer and will be returned */
    if(current && current->right){
        successor = findMinimum(current->right);
    }
 
    return successor;
}


Node * createNode(int value){
  Node *newNode =  (Node *)malloc(sizeof(Node));
  
  newNode->value = value;
  newNode->right= NULL;
  newNode->left = NULL;
  
  return newNode;
  
}

Node * addNode(Node *node, int value){
  if(node == NULL){
      return createNode(value);
  }
  else{
     if (node->value > value){
        node->left = addNode(node->left, value);
      }
      else{
        node->right = addNode(node->right, value);
      }
  }
  return node;
}
 
/* Driver program for the function written above */
int main(){
  Node *root = NULL;
  int n = 0;
  //Creating a binary tree
  root = addNode(root,30);
  root = addNode(root,20);
  root = addNode(root,15);
  root = addNode(root,25);
  root = addNode(root,40);
  Node *node = root;
  root = addNode(root,37);
  root = addNode(root,45);
  
  Node *successor = inorderSuccessor(root, node);
  printf("\n Inorder successor node is : %d ",successor ? successor->value: 0);
  
  return 0;
}

The complexity of the algorithm to find the inorder successor is O(log N) in an almost balanced binary search tree. If the tree is skewed, we have a worst-case complexity of O(N).

Please share if there is something wrong or missing. If you want to contribute to website and share your knowledge with thousands of learners across the world, please reach out to us at communication@algorithmsandme.com

Iterative postorder traversal

In the last two posts, iterative inorder and iterative preorder traversal, we learned how a stack can be used to replace recursion and why a recursive implementation can be dangerous in a production environment. In this post, let's discuss iterative postorder traversal of a binary tree, which is the most complex of all the traversals. What is postorder traversal? A traversal where the left and right subtrees are visited before the root is processed. For example, the postorder traversal of the tree below would be: [1, 6, 5, 12, 16, 14, 10].

iterative postorder traversal

Iterative postorder traversal  : Thoughts

Let’s look at the recursive implementation of postorder.

    private void postOrder(Node root){
        if(root == null) return;

        postOrder(root.left);
        postOrder(root.right);
        System.out.println(root.value);

    }

In the recursive version we go into the left subtree and then directly into the right subtree, and only then visit the root node. Can you see the structural similarity between the preorder and postorder implementations? Can we somehow reverse a preorder traversal to get the postorder traversal? Reversing a normal preorder (root, left, right) gives right, left and then root, while the order we want is left child, right child and then root. Do you remember that in iterative preorder we pushed the children onto the stack with the right child going in before the left? How about reversing that, so the left child is pushed before the right?

There is one more problem with just reversing the preorder. In preorder, a node is processed as soon as it is popped from the stack; for example, the root node is the first node to be processed. However, in postorder, the root node is processed last. So we actually need the order of processing to be reversed as well. What better than another stack to store this reversed processing order of nodes?
All in all, we will be using two stacks: one to drive the traversal by holding the left and right children, and a second to store the processing order of the nodes.

  1. Create two stacks, s and out, and push the root node onto s.
  2. While stack s is not empty
    1. Pop from stack s: current = s.pop().
    2. Push current onto stack out.
    3. Push the left and right children of current onto stack s.
  3. Pop everything from the out stack and process the nodes in that order.

Postorder traversal with two stacks : Implementation

package com.company.BST;

import java.util.Stack;

/**
 * Created by sangar on 22.5.18.
 */
public class BinarySearchTreeTraversal {

    private Node root;

    public void BinarySearchTree(){
        root = null;
    }

    public class Node {
        private int value;
        private Node left;
        private Node right;

        public Node(int value) {
            this.value = value;
            this.left = null;
            this.right = null;
        }
    }

    public void insert(int value){
        this.root =  insertNode(this.root, value);
    }

    private Node insertNode(Node root, int value){
        if(root == null){
            //if this node is root of tree
            root = new Node(value);
        }
        else{
            if(root.value > value){
                //If root is greater than value, node should be added to left subtree
                root.left = insertNode(root.left, value);
            }
            else{
                //If root is less than value, node should be added to right subtree
                root.right = insertNode(root.right, value);
            }
        }
        return root;
    }
    private void postOrder(Node root){
       if(root == null) return;

       postOrder(root.left);
       postOrder(root.right);
       System.out.println(root.value);
    }

    public void postOrderTraversal(){
        postOrderIterative(root);
    }

    private void postOrderIterative(Node root){
        Stack<Node> out = new Stack<>();
        Stack<Node> s = new Stack<>();

        s.push(root);

        while(!s.empty()){
            Node current = s.pop();

            out.push(current);
            if(current.left != null) s.push(current.left);
            if(current.right != null) s.push(current.right);
        }

        while(!out.empty()){
            System.out.println(out.pop().value);
        }
    }
}

Complexity of iterative implementation is O(n) with additional space complexity of O(n).

Can we avoid using two stacks and do it with just one? The thing about a node in postorder traversal is that it is visited three times: moving down from its parent, coming back up from its left child and coming back up from its right child. When should the node be processed? Well, when we are coming up from the right child.

How can we keep track of how the current node was reached? If we keep a previous pointer, there are three cases:

  1. Previous node is parent of current node, we reached node from parent node, nothing is done.
  2. Previous node is left child of current node, it means we have visited left child, but still not visited right child, move to right child of current node.
  3. Previous node is right child of current node, it means  we have visited left and right child of current node,  process the current node.

Let’s formulate  postorder traversal algorithm then.

  1. Push the root node onto stack s, set prev = null.
  2. Repeat the steps below while the stack is not empty (!s.empty()).
  3. current = s.peek(), look at the top of the stack without popping.
  4. If (prev == null || prev.left == current || prev.right == current), we are moving down from the parent:
    1. If current.left != null, push current.left onto the stack.
    2. Else if current.right != null, push current.right onto the stack.
    3. If current has no children, it will be processed and popped in the next iteration.
  5. Else if current.left == prev, i.e. we are moving up from the left child:
    1. If current.right != null, push current.right onto the stack.
  6. Else (we are moving up from the right child, or there is nothing left to visit under current):
    1. Process current and pop it from the stack.
  7. Set prev = current and continue the loop.

Iterative Postorder traversal : Implementation

package com.company.BST;

import java.util.Stack;

/**
 * Created by sangar on 22.5.18.
 */
public class BinarySearchTreeTraversal {

    private Node root;

    public void BinarySearchTree(){
        root = null;
    }

    public class Node {
        private int value;
        private Node left;
        private Node right;

        public Node(int value) {
            this.value = value;
            this.left = null;
            this.right = null;
        }
    }

    public void insert(int value){
        this.root =  insertNode(this.root, value);
    }
  
    private Node insertNode(Node root, int value){
        if(root == null){
            //if this node is root of tree
            root = new Node(value);
        }
        else{
            if(root.value > value){
                //If root is greater than value, node should be added to left subtree
                root.left = insertNode(root.left, value);
            }
            else{
                //If root is less than value, node should be added to right subtree
                root.right = insertNode(root.right, value);
            }
        }
        return root;
    }


    private void inorder(Node root){
            if(root == null) return;

            if(root.left != null) inorder(root.left);
            System.out.println(root.value);
            if(root.right != null) inorder(root.right);
        }

        private void preOrder(Node root){
            if(root == null) return;

            System.out.println(root.value);
            preOrder(root.left);
            preOrder(root.right);
        }

        private void postOrder(Node root){
            if(root == null) return;

            postOrder(root.left);
            postOrder(root.right);
            System.out.println(root.value);

        }
        public void postOrderTraversal(){
          //  postOrder(root);
            postOrderIterative2(root);
            //postOrderIterative(root);
        }

        private void postOrderIterative2(Node root){
            Node prev = null;
            Stack<Node> s = new Stack<>();

            s.push(root);

            while(!s.empty()){
                Node current  = s.peek();
                if(prev == null || ( prev.left == current || prev.right == current )){
                    if(current.left != null) s.push(current.left);
                    else if(current.right != null) s.push(current.right);
                }
                else if(prev == current.left){
                    if(current.right != null) s.push(current.right);
                }else{
                    System.out.println(current.value);
                    s.pop();
                }

                prev = current;
            }
    }

}

Complexity of code is O(n) again, with additional space complexity of O(n).

Please share if there is something wrong or missing. If you want to contribute and share your learning with thousands of learners across the world, please reach out to us at communications@algorithmsandme.com

Iterative preorder traversal

In the last post, Iterative inorder traversal, we learned how to do an inorder traversal of a binary tree without recursion, i.e., in an iterative way. Today we will learn how to do an iterative preorder traversal of a binary tree. In preorder traversal, the root node is processed before the left and right subtrees. For example, the preorder traversal of the tree below would be [10, 5, 1, 6, 14, 12, 15].

iterative preorder traversal without recursion

We already know how to implement preorder traversal recursively; let's understand how to implement it in a non-recursive way.

Iterative preorder traversal : Thoughts

If we look at the recursive implementation, we see that we process the root node as soon as we reach it, and then start with the left subtree before touching anything in the right subtree.

Once the left subtree is processed, control goes to the first node in the right subtree. To emulate this behavior in a non-recursive way, it is best to use a stack. What gets pushed and popped from the stack, and when?
Start by pushing the root node onto the stack. The traversal continues till there is at least one node on the stack.

Pop the node from the stack, process it and push its right and left children onto the stack. Why the right child before the left child? Because we want to process the left subtree before the right subtree. As we push a node's children onto the stack at every node, the entire left subtree of a node will be processed before its right child is popped from the stack. The algorithm is very simple and is as follows.

    1. Start with the root node and push it onto stack s.
    2. While the stack is not empty
      1. Pop from the stack: current = s.pop() and process the node.
      2. Push current.right onto the stack.
      3. Push current.left onto the stack.

Iterative preorder traversal : example

Let's take an example and see how it works. Given the tree below, do a preorder traversal on it without recursion.

iterative preorder traversal without recursion

Let's start from the root, node(10), and push it onto the stack: current = node(10).

Here the loop starts, which checks if there is a node on the stack. If yes, it pops it out. s.pop() will return node(10); we print it and push its right and left children onto the stack. Preorder traversal till now: [10].

Since the stack is not empty, pop from it: current = node(5). Print it and push its right and left children, i.e., node(6) and node(1), onto the stack.

Again, the stack is not empty, so pop from it: current = node(1). Print the node. There is no right or left child for this node, so we do not push anything onto the stack.

The stack is not empty yet, so pop again: current = node(6). Print the node. Similar to node(1), it does not have a right or left subtree either, so nothing gets pushed onto the stack.

However, the stack is still not empty. Pop: current = node(14). Print the node, and as there are left and right children, push them onto the stack with the right child before the left child.

The stack is not empty, so pop from it: current = node(12). Print it; as node(12) has no children, push nothing onto the stack.

Pop again from the stack as it is not empty: current = node(15). Print it. No children, so there is nothing to push.

At this point, the stack becomes empty and we have traversed all the nodes of the tree.

Iterative preorder traversal : Implementation

#include <stdio.h>
#include<stdlib.h>
#include<math.h>
 
struct node{
	int value;
	struct node *left;
	struct node *right;
};
typedef struct node Node;

#define STACK_SIZE 10
 
typedef struct stack{
        int top;
        Node *items[STACK_SIZE];
}stack;
 
void push(stack *ms, Node *item){
   if(ms->top < STACK_SIZE-1){
       ms->items[++(ms->top)] = item;
   }
   else {
       printf("Stack is full\n");
   }
}
 
Node * pop (stack *ms){
   if(ms->top > -1 ){
       return ms->items[(ms->top)--];
   }
   else{
       printf("Stack is empty\n");
       return NULL;
   }
}
Node * peek(stack ms){
  if(ms.top < 0){
      printf("Stack empty\n");
      return NULL;
   }
   return ms.items[ms.top];
}
int isEmpty(stack ms){
   if(ms.top < 0) return 1;
   else return 0;
}
void preorderTraversalWithoutRecursion(Node *root){
	stack ms;
	ms.top = -1;
	
	if(root == NULL) return ;

	Node *currentNode = NULL;
	/* Step 1 : Start with root */
	push(&ms,root);
	
	while(!isEmpty(ms)){
		/* Step 2.1 : Pop the node and process (print) it */
		currentNode = pop(&ms);
		printf("%d  ", currentNode->value);
		/* Step 2.2 : Push the right child first */
		if(currentNode->right){
			push(&ms, currentNode->right);
		}
		/* Step 2.3 : Push the left child */
		if(currentNode->left){
			push(&ms, currentNode->left);
		}
	}
}


void preorder (Node *root){
	if ( !root ) return;
	
 	printf("%d ", root->value );
	preorder(root->left);
	preorder(root->right);
}
 
Node * createNode(int value){
    Node * newNode =  (Node *)malloc(sizeof(Node));
	
    newNode->value = value;
    newNode->right= NULL;
    newNode->left = NULL;
	
    return newNode;
}

Node * addNode(Node *node, int value){
    if(node == NULL){
    	return createNode(value);
    }
    else{
    	if (node->value > value){
    		node->left = addNode(node->left, value);
    	}
    	else{
    		node->right = addNode(node->right, value);
    	}
    }
    return node;
}
 
/* Driver program for the function written above */
int main(){
        Node *root = NULL;
        //Creating a binary tree
        root = addNode(root,30);
        root = addNode(root,20);
        root = addNode(root,15);
        root = addNode(root,25);
        root = addNode(root,40);
        root = addNode(root,37);
        root = addNode(root,45);
        
	preorder(root);
        printf("\n");
	
        preorderTraversalWithoutRecursion(root);
        return 0;
}

The complexity of the iterative preorder traversal is O(n), as we visit each node exactly once. Also, there is the added space complexity of the stack, which is O(n).

Please share if there is something wrong or missing. If you are willing to contribute and share your knowledge with thousands of learners across the world, please reach out to us at communications@algorithmsandme.com