
First non repeated character in string


Given a string, find the first non-repeated character in it. For example, if the string is "abcbdbdebab", the first non-repeating character is c. Even though e is also non-repeating, c is the answer because it appears first.

Non-repeating character : Thoughts

What does it mean for a character to be non-repeating? It should occur in the string exactly once. How about scanning the string and counting how often each character occurs, storing each character and its count in a map as a key-value pair?
Now that we have a <character, count> pair for every unique character in the string, how do we find the first non-repeating one? Go back to the original string: scan it again and, for each character, look up its count; the first character with a count of 1 is the answer.

package com.company;

import java.util.HashMap;

/**
 * Created by sangar on 4.10.18.
 */
public class FirstNonRepeatingChar {

    public char firstNonRepeatingCharacter(String s){
        //Best to discuss with the interviewer what should be returned for null or empty input.
        if(s == null || s.length() == 0) return ' ';

        //Keep the map local so repeated calls do not accumulate stale counts.
        HashMap<Character, Integer> characterCount = new HashMap<>();

        for (char c: s.toCharArray()){
            if(!characterCount.containsKey(c)){
                characterCount.put(c,1);
            }
            else {
                characterCount.put(c, characterCount.get(c) + 1);
            }
        }
        for (char c: s.toCharArray()) {
            if(characterCount.get(c) == 1) return c;
        }

        return ' ';
    }
}
package test;

import com.company.FirstNonRepeatingChar;
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;

/**
 * Created by sangar on 28.8.18.
 */
public class FirstNonRepeatingCharTest {

    FirstNonRepeatingChar tester = new FirstNonRepeatingChar();

    @Test
    public void firstNonRepeatingCharExists() {
        String s = "abcbcbcbcbad";
        assertEquals('d', tester.firstNonRepeatingCharacter(s));
    }

    @Test
    public void firstNonRepeatingCharDoesNotExists() {
        String s = "abcbcbcbcbadd";
        assertEquals(' ', tester.firstNonRepeatingCharacter(s));
    }

    @Test
    public void firstNonRepeatingCharWithEmptyString() {
        String s = "";
        assertEquals(' ', tester.firstNonRepeatingCharacter(s));
    }

    @Test
    public void firstNonRepeatingCharWithNull() {
        assertEquals(' ', tester.firstNonRepeatingCharacter(null));
    }
}

The time complexity of this method to find the first non-repeating character in a string is O(n), with O(1) space for the character-to-count map.

There is often some confusion about the space complexity of this method: since up to 256 characters may be stored, shouldn't that count as space? It does, but in asymptotic terms this space is independent of the input size, so the space complexity remains O(1).

One more thing: even though the time complexity is O(n), the input string is scanned twice, once to count the characters and a second time to find the first non-repeating character.

Optimization

Consider a case where the string is very large, with millions of characters, most of them repeated. The above solution may become slow in the last step, where we rescan the string looking for a character with count 1. How can we avoid scanning the string a second time?
How about storing some extra information with each character in the map, along with its count, so that we can figure out whether it is the first non-repeating character?
Alternatively, we can keep two maps: one stores the count and the other stores the first index of each character.

Once these two maps are built, go through the first map and find all characters with count 1. Among these, pick the one with the minimum index in the second map and return that character.

The complexity of the algorithm remains the same; however, a second scan of the string is no longer required. The second pass now depends on the size of the count map, which is bounded by 256, the number of possible 8-bit characters, and is therefore independent of the input size.

Find first non repeating character : Implementation

package com.company;

import java.util.HashMap;

/**
 * Created by sangar on 4.10.18.
 */
public class FirstNonRepeatingChar {
    public char firstNonRepeatingCharacterOptimized(String s){
        HashMap<Character, Integer> characterCount = new HashMap<>();
        HashMap<Character, Integer> characterIndex = new HashMap<>();
        //Best to discuss it with interviewer, what should we return here?
        if(s == null) return ' ';

        if(s.length() == 0) return ' ';

        for (int i=0; i<s.length(); i++){
            char c  = s.charAt(i);
            if(!characterCount.containsKey(c)){
                characterCount.put(c,1);
                characterIndex.put(c,i);
            }
            else {
                characterCount.put(c, characterCount.get(c) + 1);
            }
        }
        char nonRepeatedCharacter = ' ';
        int prevIndex = s.length();
        for (char c : characterCount.keySet()) {
            if(characterCount.get(c) == 1
                    && characterIndex.get(c) < prevIndex){
                prevIndex = characterIndex.get(c);
                nonRepeatedCharacter = c;
            }
        }
        return nonRepeatedCharacter;
    }
}
package test;

import com.company.FirstNonRepeatingChar;
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;

/**
 * Created by sangar on 28.8.18.
 */
public class FirstNonRepeatingCharTest {

    FirstNonRepeatingChar tester = new FirstNonRepeatingChar();

    @Test
    public void firstNonRepeatingOptimizedCharExists() {
        String s = "aebcbcbcbcbad";
        assertEquals('e', tester.firstNonRepeatingCharacterOptimized(s));
    }

    @Test
    public void firstNonRepeatingCharOptimizedDoesNotExists() {
        String s = "abcbcbcbcbadd";
        assertEquals(' ', tester.firstNonRepeatingCharacterOptimized(s));
    }

    @Test
    public void firstNonRepeatingCharOptimizedWithEmptyString() {
        String s = "";
        assertEquals(' ', tester.firstNonRepeatingCharacterOptimized(s));
    }

    @Test
    public void firstNonRepeatingCharOptimizedWithNull() {
        assertEquals(' ', tester.firstNonRepeatingCharacterOptimized(null));
    }
}

Please share if there is anything wrong or missing. If you are interested in taking personalized coaching sessions from our expert teachers, please sign up on the website and get the first session free.


Difference between array and linked list


In the last post, Linked list data structure, we discussed the basics of linked lists, and I promised to go into detail about the difference between arrays and linked lists. Before diving in, I want to make sure you understand that no data structure is inherently better than another. Based on your requirements and use cases, you choose one or the other; it depends on the most frequent operation your algorithm performs during its lifetime. That is why interviews have a data structure round: to see whether you can choose the right structure for the problem.

What is an array?
An array is a linear, sequential and contiguous collection of elements which can be addressed using an index.

What is a linked list?
A linked list is a linear, sequential and non-contiguous collection of nodes, where each node stores a reference to the next node. To understand more, please refer to Linked list data structure.

Difference between arrays and linked list

Static Vs dynamic size

The size of an array is defined statically at compile time, whereas a linked list grows dynamically at run time based on need. If you know the maximum number of elements your algorithm will ever hold, you can confidently declare an array. If you do not, a linked list is the better choice. There is a catch: what if there is only a rare chance that the number of elements reaches that maximum, and most of the time it stays far below it? In that case an array would unnecessarily allocate extra memory which may never be used.

Memory allocation

An array is given contiguous memory in the system. So, if you know the address of any element of the array, you can access the other elements based on their position relative to it.

Statically allocated contiguous memory

Linked lists are not stored contiguously in memory; nodes are scattered around. So, given a node, you can traverse forward in the linked list (using the next node reference), but you cannot access the nodes before it.

Dynamically allocated non-contiguous memory

Contiguous allocation requires sufficient memory up front for an array to be stored; for example, to store 20 integers in an array, we need an 80-byte contiguous chunk of memory. With a linked list, however, we can start with 8 bytes and request more memory as and when required, wherever it may be available. Contiguous allocation also makes it difficult to resize an array: we have to look for a different chunk of memory which fits the new size and move all existing elements to that location. A linked list, on the other hand, is dynamically sized and can grow without relocating existing elements.
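
To make the resize cost concrete, here is a minimal Java sketch (not from the original post) of growing an array by allocating a larger block and copying the existing elements across, which is essentially what utilities like Arrays.copyOf do:

import java.util.Arrays;

public class ArrayResize {
    public static void main(String[] args) {
        int[] numbers = new int[4];              // fixed block of 4 ints
        for (int i = 0; i < numbers.length; i++) numbers[i] = i + 1;

        // "Growing" the array means allocating a new, larger block
        // and copying every existing element into it.
        int[] bigger = Arrays.copyOf(numbers, 8);
        bigger[4] = 5;

        System.out.println(Arrays.toString(bigger)); // [1, 2, 3, 4, 5, 0, 0, 0]
    }
}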

Memory requirement

So non-contiguous memory is good to have, then? It comes with a cost: each node of a linked list has to store a reference to the next node, which adds an extra payload of 4 bytes (on a 32-bit system) per node. An array does not need this extra payload. You have to trade off the extra space against the advantages you gain. Sometimes spending extra space is better than dealing with cumbersome shifting during insert and delete operations on an array, or the value stored in a node is big enough to make those extra bytes negligible in the analysis.

Operation efficiency

We perform operations on a data structure to get some output. There are four basic operations we should consider: read, search, insert/update and delete.

Read on an array is O(1): you can directly access any element given its index, so the cost of a read does not depend on the size of the array.
The time complexity of read on a linked list is O(n), where n is the number of nodes. So, if your problem requires many random reads, an array outweighs a linked list.

Given the contiguous memory allocation and index-based access of an array, optimized algorithms like binary search can find an element in a sorted array with O(log n) complexity. Search on a linked list, on the other hand, requires O(n).

Insert on an array is O(1) again, as long as we are writing within the allocated size of the array. In a linked list, the complexity of insert depends on where the new element goes: insert at the head is O(1), whereas insert at the tail is O(n) (unless a separate tail pointer is maintained).

Insert node at start of linked list
Insert node at the tail of linked list

Update here means growing the array or linked list by one more element. For an array this is a costly operation, as it requires reallocating memory and copying all existing elements over; whether you add the element at the end or at the start, the complexity is O(n).
For a linked list it varies: adding at the tail is O(n), adding at the head is O(1).
In the same vein, delete on an array requires shifting elements (deleting the first element moves everything else), hence O(n). Delete on a linked list is O(1) at the head and O(n) at the tail.
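
As a rough illustration of the read difference, here is a small Java sketch (the ListNode class below is a hypothetical helper, not something defined in this post) contrasting O(1) indexed access with O(n) traversal:

public class ReadComparison {
    // Minimal singly linked list node used only for this illustration.
    static class ListNode {
        int value;
        ListNode next;
        ListNode(int value) { this.value = value; }
    }

    public static void main(String[] args) {
        int[] array = {10, 20, 30, 40, 50};
        // O(1): jump straight to the 4th element using its index.
        System.out.println(array[3]);

        // Build 10 -> 20 -> 30 -> 40 -> 50 as a linked list.
        ListNode head = new ListNode(10);
        head.next = new ListNode(20);
        head.next.next = new ListNode(30);
        head.next.next.next = new ListNode(40);
        head.next.next.next.next = new ListNode(50);

        // O(n): to reach the 4th node we must walk from the head.
        ListNode current = head;
        for (int i = 0; i < 3; i++) current = current.next;
        System.out.println(current.value);
    }
}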

To see the difference between O(1) and O(n), the graph below should be useful.

Complexity analysis graph

Key differences between arrays and linked lists are as follows:

  • Arrays are really bad at insert and delete operations because of the internal reallocation and shifting of memory.
  • Arrays are statically sized at compile time.
  • Array memory is contiguous, which makes accessing elements easy without any additional pointers; you can jump around the array without touching the elements in between.
  • Linked lists have almost the same complexity when insert and delete happen at the tail, but no memory shuffling is required.
  • Search on a linked list is poor; it usually requires a scan with O(n) complexity.
  • Linked lists are dynamically sized at run time.
  • Linked list memory is non-contiguous; an additional pointer per node is required to store the next node's reference, and you cannot jump around in the list.

Please share if there is something wrong or missing. If you want to contribute to the website, please reach out to us at communications@algorithmsandme.com

Linked list data structure


A linked list is a very important data structure to understand, as a lot of linked list problems are asked in Amazon, Microsoft and Google interviews. Today, we will cover the basics of the linked list data structure and its implementation.

A linked list represents a linear sequence of elements, where each element is connected to the next using a chain of references. Another data structure which stores a linear sequence of items is the array. There are advantages and use cases where the linked list way of storing a sequence is more efficient than an array; I will cover that in the next post: Arrays vs linked lists.

In the last paragraph, I emphasized that a linked list is a linear data structure. In a linear data structure, there is a sequence and an order in which elements are inserted, arranged and traversed. To reach the tail of a linked list, we have to go through all of the nodes.

Linear data structure: elements can be traversed in only one order


Non-linear data structures are the ones where elements are not arranged or traversed in a specific order. One element may be connected to many others, hence we cannot traverse them in the same order every time. Examples of non-linear data structures are maps, dictionaries, trees and graphs.

Non-linear data structure: nodes cannot always be traversed in the same order

Linked list implementation

A linked list consists of nodes, any number of them. Each node contains two things: first, the value of the node, which can be of any type, integer, string, or another user-defined type; second, a reference which points to the next node in the linked list. A node can be declared as follows:

typedef struct Node {
	int data;
	struct Node * next;
} Node;
Node structure
Linked list

What happens at the last node in a linked list? The next pointer of the last node points to NULL. It is very important to understand this bit, as this condition is used in almost every problem you will solve on linked lists.

A linked list is a dynamic data structure. By dynamic, we mean its size is not defined at compile time but determined at run time. Every time a new node is added to the linked list, a new memory location is allocated and the previous node's next pointer is made to point to the new node.

Operations of linked list

  • Adding a node at the end of the list
    The basic steps to add a node at the end of a linked list are:
  1. Check if the list already has a node.
    1. If not, create a new node and return it as the head of the linked list.
  2. If there is a node,
    1. Scan through the linked list using the next pointer and reach the last node.
    2. Create a new node and point the next pointer of the last node to this new node.
Node * createNode(int val){
	Node * newNode = (Node *)malloc(sizeof(Node));
	if(newNode){
		newNode->data = val;
		newNode->next = NULL;
	}
	return newNode;
}

void addNode(Node **headRef, int value){
	//create new node
	Node *newNode = createNode(value);

	//find the last node
	Node *currentNode = *headRef;
	while(currentNode && currentNode->next != NULL){
		currentNode = currentNode->next;
	}
	if(currentNode){
		currentNode->next = newNode;
	}
	else{
		//Change headRef to point to new head.
		*headRef = newNode;
	}
}

Complexity of adding a node at the end of a linked list is O(n).

  • Insert node at the head of the list
    In this case too, we allocate a new node; however, this time we do not have to scan the entire list. Note that every time we add a node at the head, the head of the list changes.
  1. Check if the list already has a node.
    1. If not, create a new node and return it as the head of the linked list.
  2. If there is a node,
    1. Create a new node and point the next pointer of the new node to the current head.
    2. Return the new node as the new head.
Node * createNode(int val){
	Node * newNode = (Node *)malloc(sizeof(Node));
	if(newNode){
		newNode->data = val;
		newNode->next = NULL;
	}
	return newNode;
}

void addNode(Node **headRef, int value){
	//create new node
	Node *newNode = createNode(value);
	newNode->next = *headRef;
	*headRef = newNode;
}

Linked list data structure problems

It is very important to understand that a linked list is a recursive data structure. The base case is a linked list with no nodes, represented by NULL. Almost every problem on linked lists can be solved using the template: process one node, then recursively process the remaining linked list.

In programming terms, a linked list is divided into two parts, head and tail. The node being processed is called the head, and the rest of the linked list is the tail. The tail has exactly the same structure as the original list.

Problems like merging two linked lists, reversing a linked list and finding the length of a linked list can all be solved using the same template: process one node, then recursively call the function on the remaining list.
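
As a quick sketch of this head/tail template (shown in Java for brevity; the same idea maps directly onto the C Node defined above), here is the length of a linked list computed recursively:

public class LinkedListRecursion {
    static class Node {
        int data;
        Node next;
        Node(int data) { this.data = data; }
    }

    // Base case: an empty list (null) has length 0.
    // Recursive case: 1 (the head) + length of the tail.
    static int length(Node head) {
        if (head == null) return 0;
        return 1 + length(head.next);
    }

    public static void main(String[] args) {
        Node head = new Node(1);
        head.next = new Node(2);
        head.next.next = new Node(3);
        System.out.println(length(head)); // 3
    }
}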

Types of linked list

There are three types of linked lists :
1. Singly linked list 
Singly linked lists contain nodes with data and a single reference, next, which points to the next node in the sequence. The next pointer of the last node points to null. In a singly linked list you can traverse in only one direction.


2. Doubly linked list
In a doubly linked list, each node contains two links: previous, which points to the node before the current node, and next, which points to the next node. The previous pointer of the first node and the next pointer of the last node point to null. A doubly linked list can be traversed in both directions. The two references add weight, as extra memory is required per node.
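
A minimal sketch of such a node (in Java, analogous to the C Node above) just carries one extra reference:

// Doubly linked list node: keeps a reference to both neighbours,
// which enables traversal in either direction at the cost of extra memory.
class DoublyNode {
    int data;
    DoublyNode prev;
    DoublyNode next;

    DoublyNode(int data) {
        this.data = data;
    }
}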


3. Circular linked list
In circular linked list, next pointer of  the last node will point to the first node. A circular linked list can be both singly as well as doubly linked list.

Circular doubly linked list

That was all for the basics of linked lists. I know problems on them can be hard to solve, but if you look at all the problems, they boil down to one thing: understanding the node and how recursion can be used. In the next posts, we will solve many of these problems and see how to use these basics.

Please share if there is something wrong or missing. If you are interested in contributing to the website and sharing your knowledge with thousands of users across the world, please reach out to us at communications@algorithmsandme.com

Functional interface in Java


We discussed interfaces and abstract classes earlier; now let's discuss a specific type of interface called a functional interface. A functional interface is an interface with only one abstract method, which is implemented by a class (or by a lambda expression). Below is an example of a functional interface: CustomFunctionalInterface has only one abstract method, abstractMethod.

@FunctionalInterface
public interface CustomFunctionalInterface {
	public boolean abstractMethod(Integer a);
}

Notice the special annotation @FunctionalInterface, added in Java 8 to help with compile-time checks: if you add another abstract method to a functional interface, the compiler will throw an error.

What happens when another interface inherits a functional interface? If no other abstract method is defined in the child interface, it automatically becomes a functional interface too. The child interface can have default methods and still be functional.

Functional interfaces did not arrive with Java 8; they existed before, the Runnable interface being one example. However, many new functional interfaces, like Predicate, were introduced in Java 8. By virtue of their ability to pass functionality (i.e., behavior) around, functional interfaces primarily enable behavior parameterization.
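
For instance, here is a small sketch (not from the original post) that uses java.util.function.Predicate to parameterize a filter with behavior passed in as lambdas:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.function.Predicate;

public class BehaviorParameterization {
    // The filtering logic is fixed; the behavior (what to keep) is passed in.
    static List<Integer> filter(List<Integer> numbers, Predicate<Integer> condition) {
        List<Integer> result = new ArrayList<>();
        for (Integer n : numbers) {
            if (condition.test(n)) result.add(n);
        }
        return result;
    }

    public static void main(String[] args) {
        List<Integer> numbers = Arrays.asList(1, 2, 3, 4, 5, 6);
        // Two different behaviors, same filter method.
        System.out.println(filter(numbers, n -> n % 2 == 0)); // [2, 4, 6]
        System.out.println(filter(numbers, n -> n > 3));      // [4, 5, 6]
    }
}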

Functional interface and anonymous class

An anonymous class is a class without a name, declared and instantiated in a single expression. Anonymous classes are very often created from functional interfaces; one of the most common examples is implementing the Runnable interface when creating a thread.

public class AnonymousInnerClassTest {
	public static void main(String[] args) {
		new Thread(new Runnable() {
			@Override
			public void run() {
				System.out.println("A thread created and running");
			}
		}).start();
	}
}

This does not mean that anonymous classes can only be created from functional interfaces, but the code is less clumsy and more understandable when using one. Notice how the anonymous class is created.

The anonymous class expression consists of the following:

  • The new operator
  • The name of an interface to implement or a class to extend. In this example, the anonymous class is implementing the interface Runnable.
  • Parentheses that contain the arguments to a constructor, just like a normal class instance creation expression. Note: When you implement an interface, there is no constructor, so you use an empty pair of parentheses, as in this example.
  • A body, which is a class declaration body. More specifically, in the body, method declarations are allowed but statements are not

Let’s see an example where we are using functional interfaces.
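
As a sketch (assuming the CustomFunctionalInterface defined above is on the classpath), a lambda expression can provide the single abstract method, and the anonymous Runnable from the previous example can be written as a lambda too, since Runnable is itself a functional interface:

public class FunctionalInterfaceExample {
    public static void main(String[] args) {
        // Lambda implementing the single abstract method of CustomFunctionalInterface.
        CustomFunctionalInterface isPositive = a -> a > 0;
        System.out.println(isPositive.abstractMethod(5));   // true
        System.out.println(isPositive.abstractMethod(-3));  // false

        // Runnable is a functional interface too, so the anonymous class
        // from the earlier example collapses into a lambda.
        new Thread(() -> System.out.println("A thread created and running")).start();
    }
}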

Please share if there is something missing or wrong. If you want to contribute to the website, please reach out to us at communications@algorithmsandme.com

Default methods in interface In Java 8


Before understanding default methods in interfaces, first understand what an interface is. An interface is the contract an object exposes to the outside world. An interface is a reference type in Java: a collection of abstract methods which are implemented by a class.

In plain terms, let's say you are developing a program which interacts with multiple TV sets, each operated by a different type of remote. If you implemented a version of the program for every type of TV and remote on the market, it would be a nightmare.
That's why we expose an interface, a contract, which all TV set implementations adhere to, and all remotes use that interface to operate the desired functionality.
The advantage is that no matter what the internal implementation of the TV is, as long as it follows the interface, the remote will work.

	interface TV { 
		public boolean switchOn();
		public boolean switchOff();
		public int changeChannel();
		public int volumeDown();
		public int volumeUp();
	} 

We can implement these methods in whatever way we like for different TV sets (objects) without having to change the remotes (the outside world).

public class SamsungTV implements TV {
	public boolean switchOn() {
		// implementation
	}

	public boolean switchOff() {
		// implementation
	}

	public int changeChannel() {
		// implementation
	}

	public int volumeDown() {
		// implementation
	}

	public int volumeUp() {
		// implementation
	}
}

As you noticed, the interface only declares abstract methods and does not provide any implementation. With Java 8, that has changed: an interface can now provide a default implementation of a method.

	public interface InterfaceExample {
		// to be implemented by class
		void abstractMethod(String str);
	
		default void defaultMethod(String str){
			System.out.println("logging::"+str);
		}
	}

Why was that required? Imagine you defined an interface and a whole lot of classes implemented it, each in its own way. Then, at some point in your project, you decide to add another method to the interface; every class that implements the interface would need to be updated.
How about providing a default implementation of the new method instead, so that classes only override it if they want different behavior? One of the major reasons for introducing default methods in interfaces was to let the Collections API evolve in Java 8 to support lambda expressions.
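
For example (a quick sketch, not from the original post), Collection.removeIf is one such default method added in Java 8: existing collection implementations did not have to change, yet callers can now pass the removal condition as a lambda:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class DefaultMethodInCollections {
    public static void main(String[] args) {
        List<Integer> numbers = new ArrayList<>(Arrays.asList(1, 2, 3, 4, 5, 6));

        // removeIf is a default method on Collection (since Java 8),
        // so every existing Collection implementation gets it for free.
        numbers.removeIf(n -> n % 2 == 0);

        System.out.println(numbers); // [1, 3, 5]
    }
}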

As you know, Java does not allow multiple inheritance, that is, inheriting from more than one class, because it leads to the "diamond problem". However, with interfaces having default implementations, and classes being able to implement multiple interfaces, the diamond problem is bound to happen: if two interfaces provide the same default method, which one should the class object use?
To solve this, the class must provide its own implementation of the common default method (it can delegate explicitly, e.g. with InterfaceExample1.super.defaultMethod(str)); otherwise the compiler throws an error.

	public interface InterfaceExample1 {
		// to be implemented by class
		void abstractMethod1(String str);
	
		default void defaultMethod(String str){
			System.out.println("logging::"+str);
		}
	}		
	public interface InterfaceExample2 {
		// to be implemented by class
		void abstractMethod2(String str);
	
		default void defaultMethod(String str){
			System.out.println("logging::"+str);
		}
	}		
public class ExampleClass implements InterfaceExample1, InterfaceExample2 {

	@Override
	public void abstractMethod1(String str) {
	}

	@Override
	public void abstractMethod2(String str) {
	}

	// Both interfaces declare the same default method, so overriding it here is mandatory.
	@Override
	public void defaultMethod(String str){
		System.out.println("ExampleClass logging::"+str);
	}
}

Default methods actually bridge the gap between abstract classes and interfaces in Java. They also safeguard against breaking existing code when interfaces are modified.

Please share if there is something wrong or missing. If you want to contribute to website, please reach out to us on communications@algorithmsandme.com

Fill 4xN wall with 4×1 and 1×4 bricks


There is a wall of dimensions 4 x N and we have bricks of dimension 4 x 1. We have to fill the wall with the given bricks and find out how many ways there are to fill it.

For example, for a wall with N = 3, there is only one way to fill it: three bricks laid horizontally.

Whereas with N = 4, there are two ways: putting four bricks horizontally, or four bricks vertically.

Actually, the examples themselves give away the answer to our problem. Let's start small and build on top of it. If N = 1, the wall dimensions are 4 x 1, and there is only one way to fill it with a 4 x 1 brick: lay the brick horizontally.

What if N = 2, i.e. the wall is 4 x 2? Again, there is only one way possible: put two bricks horizontally. We cannot place a brick vertically, because a vertical brick needs 4 units of height and only 2 are available.

Take N = 3, i.e. a 4 x 3 wall: the only way to fill it is to put three bricks horizontally; a vertical brick still cannot fit.

What if N = 4? With a 4 x 4 wall we have two options: put four bricks horizontally or four bricks vertically. So there are two ways to fill a 4 x 4 wall with 4 x 1 bricks.

Now, if the number of ways to fill a wall of dimension 4 x N is f(N), then f(N) for N = 1, 2 and 3 is as follows.

f(1)=1, f(2)=1, f(3)=1

For a wall larger than 4 x 3, we have two choices for the next brick: place it vertically or place it horizontally.

If we place bricks vertically, each brick covers 4 of the N units of the wall's height but only 1 unit of its width, so four vertical bricks side by side are needed to complete that band; the problem then reduces to N-4 units.

If we place a brick horizontally, it covers the full width but only 1 unit of the wall's height, so N-1 units of height remain to be covered.
So, for N we have the relationship

f(N) = f(N-1)  + f(N-4)

We have the recurrence relation and the base conditions, let’s implement it.

Fill wall with brick : recursive implementation

int findWays(int n){
        if(n == 0 || n == 1 || n == 2 || n == 3) return 1;
        return findWays(n-1) + findWays(n-4);
}

int main(void) {
	int N = 5;
	int ways = findWays(N);
	printf("%d", ways);
	return 0;
}

Do you think this solution is optimal? If you look closely at the recursion tree of the implementation, you will see the problem: some of the subproblems are solved repeatedly. Can we avoid solving them again and again? Yes, by storing the result of each subproblem the first time it is solved; that is called memoization.
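
A quick memoized sketch of the same recurrence (written in Java here to keep it short; the recursive C function above would be changed in the same way):

import java.util.Arrays;

public class WallWays {
    // table[i] caches f(i); -1 means "not computed yet".
    static long findWays(int n, long[] table) {
        if (n < 4) return 1;              // base cases f(0)..f(3) = 1
        if (table[n] != -1) return table[n];
        table[n] = findWays(n - 1, table) + findWays(n - 4, table);
        return table[n];
    }

    public static void main(String[] args) {
        int n = 5;
        long[] table = new long[n + 1];
        Arrays.fill(table, -1);
        System.out.println(findWays(n, table)); // 3 ways for N = 5
    }
}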

This problem can be solved using dynamic programming because two properties hold. First, optimal substructure: solutions to subproblems combine to give the solution to the original problem. Second, overlapping subproblems.

The dynamic programming approach is to fill a table bottom-up, where table[N] will be the solution. table[0] = table[1] = table[2] = table[3] = 1, as discussed above.

Now from N = 4, we can fill the table bottom up as

table[N] = table[N-1] + table[N-4]

Fill wall with brick : dynamic programming implementation

void find_ways(int n, int table[]){
	int i;
	for(i = 4; i <= n; i++){
		table[i] = table[i-1] + table[i-4];
	}
}

int main(void) {
	int N =5;
	int table[N+1];
	table[0] = 1;
	table[1] = 1;
	table[2] = 1;
	table[3] = 1;
	find_ways(N, table);
	printf("%d", table[N]);
	return 0;
}

The complexity of the dynamic programming approach is O(N), with space complexity of O(N).

Please share if there is something wrong or missing. If you are willing to share your knowledge and help thousands of learners across the world, please reach out to us on communications@algorithmsandme.com

Inorder predecessor in binary search tree


What is an inorder predecessor in a binary tree? The inorder predecessor is the node which is traversed just before the given node in an inorder traversal of the binary tree. In a binary search tree, it is the largest value smaller than the node. For example, the inorder predecessor of node(6) in the tree below is 5, and for node(10) it is 6.


If the node is the leftmost node in the BST, i.e., the least node, then there is no inorder predecessor for it.

Inorder predecessor : Thoughts

To find the inorder predecessor, the first thing to do is find the node itself. As we know, in an inorder traversal the root node is visited after its left subtree. So an ancestor can be the predecessor of a given node only if the given node lies in its right subtree.

Let's work through examples and see what algorithm emerges. First case: if the given node is the leftmost node of the tree, there is no inorder predecessor, and we return NULL. For example, the predecessor of node(1) is NULL.


What if the node has a left subtree? In that case, the maximum value in the left subtree is the predecessor of the given node. We can find the maximum by going deep down the right side of that subtree, until the right child is NULL, and returning the last node. For example, the predecessor of node(10) is 6.


What are the other cases? What if the node does not have a left subtree but is also not the leftmost node? Then the ancestor at which we last turned into a right subtree is the inorder predecessor. While moving down the tree, keep track of the node where we go right, as it may be the answer. The predecessor of node(12) is 10, as that is where we moved into the right subtree last. Note that we update the predecessor candidate only while moving down a right subtree.

Algorithm to find inorder predecessor

  1. Start with the root: current = root, predecessor = NULL.
  2. If node.value > current.value, then predecessor = current, current = current.right.
  3. If node.value < current.value, current = current.left.
  4. If node.value == current.value and node.left != null, predecessor = maximum(current.left).
  5. Return predecessor.

Inorder predecessor : Implementation

#include<stdio.h>
#include<stdlib.h>
 
struct node{
	int value;
	struct node *left;
	struct node *right;
};

typedef struct node Node;


/* This function return the maximum node in tree rooted at node root */
Node *findMaximum(Node *root){
    if(!root)
        return NULL;
 
    while(root->right) root = root->right;
    return root;
}
/* This function implements the logic described in algorithm to find inorder predecessor
of a given node */
Node *inorderPredecessor(Node *root, int K){
 
    Node *predecessor 	= NULL;
    Node *current  		= root;
    
    if(!root)
        return NULL;
 
    while(current && current->value != K){
         /* If current value is greater than K, take the left turn; no need to update the predecessor pointer */
        if(current->value >K){
            current= current->left;
        }
        /* If current value is less than K, go to the right subtree.
        When we move right, update the predecessor pointer to keep track of the last right turn */
        else{
            predecessor = current;
            current = current->right;
        }
    }
    /* Once we reach the node whose inorder predecessor is to be found,
    check if it has a left subtree; if yes, find the maximum in that left subtree and return that node.
    Else the node where we took the last right turn is already stored in the predecessor pointer and is returned */
    if(current && current->left){
        predecessor = findMaximum(current->left);
    }
    return predecessor;
}
Node * createNode(int value){
  Node *newNode =  (Node *)malloc(sizeof(Node));
  
  newNode->value = value;
  newNode->right= NULL;
  newNode->left = NULL;
  
  return newNode;
  
}

Node * addNode(Node *node, int value){
  if(node == NULL){
      return createNode(value);
  }
  else{
     if (node->value > value){
        node->left = addNode(node->left, value);
      }
      else{
        node->right = addNode(node->right, value);
      }
  }
  return node;
}
 
/* Driver program for the function written above */
int main(){
  Node *root = NULL;
  int n = 0;
  //Creating a binary tree
  root = addNode(root,30);
  root = addNode(root,20);
  root = addNode(root,15);
  root = addNode(root,25);
  root = addNode(root,40);
  root = addNode(root,37);
  root = addNode(root,45);
  
  Node *predecessor = inorderPredecessor(root, 40);
  printf("\n Inorder successor node is : %d ",predecessor ? predecessor->value: 0);
  
  return 0;
}

The complexity of the algorithm to find the inorder predecessor is O(log N) in a nearly balanced binary search tree. If the tree is skewed, the worst-case complexity is O(N).

Please share if there is something wrong or missing. If you want to contribute to website and share your knowledge with thousands of learners across the world, please reach out to us at communications@algorithmsandme.com

Inorder successor in binary search tree


What is an inorder successor in a binary tree? The inorder successor is the node which is traversed just after the given node in an inorder traversal of the binary tree. In a binary search tree, it is the next bigger value after the node. For example, the inorder successor of node(6) in the tree below is 10, and for node(12) it is 14.


If the node is the rightmost node in the BST, i.e., the greatest node, then there is no inorder successor for it.

Inorder successor : Thoughts

To find the inorder successor, the first thing to do is find the node itself. As we know, in an inorder traversal the root node is visited after its left subtree. So an ancestor can be the successor of a given node only if the given node lies in its left subtree.

Let's work through examples and see what algorithm emerges. First case: if the node is the rightmost node of the tree, there is no inorder successor, and we return NULL. For example, the successor of node(16) is NULL.

What if the node has a right subtree? In that case, the minimum value in the right subtree is the successor of the given node. We can find the minimum by going deep down the left side of that subtree, until the left child is NULL, and returning the last node. For example, the successor of node(5) is 6.


What are the other cases? What if the node does not have a right subtree? Then the ancestor at which we last turned into a left subtree is the inorder successor. While moving down the tree, keep track of the node where we go left, as it may be the answer. The successor of node(7) is 10, as that is where we moved into the left subtree last. Note that we update the successor candidate only while moving down a left subtree.

Algorithm to find inorder successor

  1. Start with root, current = root, successor = NULL.
  2. If node.value < current.value, then successor = current, current = current.left.
  3. If node.value > current.value, current = current.right.
  4. If node.value == current.value and node.right != null, successor = minimum(current.right).
  5. Return successor

Inorder successor : Implementation

#include<stdio.h>
#include<stdlib.h>
 
struct node{
	int value;
	struct node *left;
	struct node *right;
};

typedef struct node Node;


//this function finds the minimum node in given tree rooted at node root
Node * findMinimum(Node *root){
    if(!root)
        return NULL;
   // Minimum node is left most child. hence traverse down till left most node of tree.
    while(root->left) root = root->left;
   // return the left most node
    return root;
}
/* This function implements the logic described in algorithm to find inorder successor
of a given node */
Node *inorderSuccessor(Node *root, Node *node){
 
    Node *successor = NULL;
    Node *current  = root;
    if(!root)
        return NULL;
 
    while(current->value != node->value){
        /* If current value is greater than the value we are looking for, go to the left subtree.
        When we move left, update the successor pointer to keep track of the last left turn */
        
        if(current->value > node->value){
            successor = current;
            current= current->left;
        }
        /* Else take right turn and no need to update successor pointer */
        else
            current = current->right;
    }
    /* Once we reach the node whose inorder successor is to be found,
    check if it has a right subtree; if yes, find the minimum in that right subtree and return that node.
    Else the node where we took the last left turn is already stored in the successor pointer and is returned */
    if(current && current->right){
        successor = findMinimum(current->right);
    }
 
    return successor;
}


Node * createNode(int value){
  Node *newNode =  (Node *)malloc(sizeof(Node));
  
  newNode->value = value;
  newNode->right= NULL;
  newNode->left = NULL;
  
  return newNode;
  
}

Node * addNode(Node *node, int value){
  if(node == NULL){
      return createNode(value);
  }
  else{
     if (node->value > value){
        node->left = addNode(node->left, value);
      }
      else{
        node->right = addNode(node->right, value);
      }
  }
  return node;
}
 
/* Driver program for the function written above */
int main(){
  Node *root = NULL;
  int n = 0;
  //Creating a binary tree
  root = addNode(root,30);
  root = addNode(root,20);
  root = addNode(root,15);
  root = addNode(root,25);
  root = addNode(root,40);
  Node *node = root;
  root = addNode(root,37);
  root = addNode(root,45);
  
  Node *successor = inorderSuccessor(root, node);
  printf("\n Inorder successor node is : %d ",successor ? successor->value: 0);
  
  return 0;
}

The complexity of the algorithm to find the inorder successor is O(log N) in a nearly balanced binary search tree. If the tree is skewed, the worst-case complexity is O(N).

Please share if there is something wrong or missing. If you want to contribute to website and share your knowledge with thousands of learners across the world, please reach out to us at communication@algorithmsandme.com

Iterative postorder traversal


In the last two posts, iterative inorder traversal and iterative preorder traversal, we learned how a stack can be used to replace recursion and why a recursive implementation can be dangerous in a production environment. In this post, let's discuss iterative postorder traversal of a binary tree, which is the most complex of all the traversals. What is postorder traversal? A traversal where the left and right subtrees are visited before the root is processed. For example, the postorder traversal of the tree below is [1,6,5,12,16,14,10].


Iterative postorder traversal  : Thoughts

Let’s look at the recursive implementation of postorder.

    private void postOrder(Node root){
        if(root == null) return;

        postOrder(root.left);
        postOrder(root.right);
        System.out.println(root.value);

    }

In postorder, we go into the left subtree and then into the right subtree before processing the root node. Can you see the similarity in structure between the preorder and postorder implementations? Can we simply reverse the entire preorder traversal to get postorder? Not quite: reversing plain preorder puts the root at the end, which is what we want, but it lists the right subtree's nodes before the left subtree's, whereas we want left subtree, right subtree, root.
Do you remember that in iterative preorder we pushed the children onto the stack so that the right child went in before the left? How about reversing that, pushing the left child before the right, and then reversing the output?

There is one more problem: in preorder, a node is processed as soon as it is popped from the stack, so the root is the first node processed. In postorder, the root is processed last. So the order of processing itself needs to be reversed, and what better way to store the reversed processing order than another stack?
All in all, we will use two stacks: one to drive the traversal, and a second to store the processing order of the nodes.

  1. Create two stacks, s and out, and push the root node onto s.
  2. While stack s is not empty
    1. Pop from stack s: current = s.pop().
    2. Push current onto stack out.
    3. Push the left and right children of current onto stack s.
  3. Pop everything from the out stack and process it in that order.

Postorder traversal with two stacks : Implementation

package com.company.BST;

import java.util.Stack;

/**
 * Created by sangar on 22.5.18.
 */
public class BinarySearchTreeTraversal {

    private Node root;

    public BinarySearchTreeTraversal(){
        root = null;
    }

    public class Node {
        private int value;
        private Node left;
        private Node right;

        public Node(int value) {
            this.value = value;
            this.left = null;
            this.right = null;
        }
    }

    public void insert(int value){
        this.root =  insertNode(this.root, value);
    }

    private Node insertNode(Node root, int value){
        if(root == null){
            //if this node is root of tree
            root = new Node(value);
        }
        else{
            if(root.value > value){
                //If root is greater than value, node should be added to left subtree
                root.left = insertNode(root.left, value);
            }
            else{
                //If root is less than value, node should be added to right subtree
                root.right = insertNode(root.right, value);
            }
        }
        return root;
    }
    private void postOrder(Node root){
       if(root == null) return;

       postOrder(root.left);
       postOrder(root.right);
       System.out.println(root.value);
    }

    public void postOrderTraversal(){
        postOrderIterative(root);
    }

    private void postOrderIterative(Node root){
        if(root == null) return;

        Stack<Node> out = new Stack<>();
        Stack<Node> s = new Stack<>();

        s.push(root);

        while(!s.empty()){
            Node current = s.pop();

            out.push(current);
            if(current.left != null) s.push(current.left);
            if(current.right != null) s.push(current.right);
        }

        while(!out.empty()){
            System.out.println(out.pop().value);
        }
    }
}

Complexity of iterative implementation is O(n) with additional space complexity of O(n).

Can we avoid using two stacks and do it with one? The thing about a node in postorder traversal is that it is visited three times: moving down from its parent, coming back up from its left child, and coming back up from its right child. When should the node be processed? When we are coming up from its right child.

How can we keep track of how the current node was reached? If we keep previous pointer, there are three cases:

  1. Previous node is parent of current node, we reached node from parent node, nothing is done.
  2. Previous node is left child of current node, it means we have visited left child, but still not visited right child, move to right child of current node.
  3. Previous node is right child of current node, it means  we have visited left and right child of current node,  process the current node.

Let's formulate the postorder traversal algorithm then.

  1. Push the root node onto stack s, set prev = null.
  2. Repeat the steps below while the stack is not empty (!s.empty())
  3. current = s.peek(), look at the top of the stack without popping it.
  4. If (prev == null || prev.left == current || prev.right == current), we are moving down from the parent:
    1. If current.left != null, push current.left onto the stack.
    2. Else if current.right != null, push current.right onto the stack.
    3. If both children are null, nothing is pushed; current will be processed on the next pass.
  5. Else if prev == current.left, i.e. we are moving up from the left child:
    1. If current.right != null, push it onto the stack.
    2. If current.right == null, nothing is pushed; current will be processed on the next pass.
  6. Else, i.e. we are moving up from the right child (or revisiting a node with nothing left to push):
    1. Process current and pop it from the stack.
  7. Set prev = current and continue the loop.

Iterative Postorder traversal : Implementation

package com.company.BST;

import java.util.Stack;

/**
 * Created by sangar on 22.5.18.
 */
public class BinarySearchTreeTraversal {

    private Node root;

    public BinarySearchTreeTraversal(){
        root = null;
    }

    public class Node {
        private int value;
        private Node left;
        private Node right;

        public Node(int value) {
            this.value = value;
            this.left = null;
            this.right = null;
        }
    }

    public void insert(int value){
        this.root =  insertNode(this.root, value);
    }
  
    private Node insertNode(Node root, int value){
        if(root == null){
            //if this node is root of tree
            root = new Node(value);
        }
        else{
            if(root.value > value){
                //If root is greater than value, node should be added to left subtree
                root.left = insertNode(root.left, value);
            }
            else{
                //If root is less than value, node should be added to right subtree
                root.right = insertNode(root.right, value);
            }
        }
        return root;
    }


    private void inorder(Node root){
            if(root == null) return;

            if(root.left != null) inorder(root.left);
            System.out.println(root.value);
            if(root.right != null) inorder(root.right);
        }

        private void preOrder(Node root){
            if(root == null) return;

            System.out.println(root.value);
            preOrder(root.left);
            preOrder(root.right);
        }

        private void postOrder(Node root){
            if(root == null) return;

            postOrder(root.left);
            postOrder(root.right);
            System.out.println(root.value);

        }
        public void postOrderTraversal(){
          //  postOrder(root);
            postOrderIterative2(root);
            //postOrderIterative(root);
        }

        private void postOrderIterative2(Node root){
            if(root == null) return;

            Node prev = null;
            Stack<Node> s = new Stack<>();

            s.push(root);

            while(!s.empty()){
                Node current  = s.peek();
                if(prev == null || ( prev.left == current || prev.right == current )){
                    if(current.left != null) s.push(current.left);
                    else if(current.right != null) s.push(current.right);
                }
                else if(prev == current.left){
                    if(current.right != null) s.push(current.right);
                }else{
                    System.out.println(current.value);
                    s.pop();
                }

                prev = current;
            }
    }

}

Complexity of code is O(n) again, with additional space complexity of O(n).

Please share if there is something wrong or missing. If you want to contribute and share your learning with thousands of learners across the world, please reach out to us at communications@algorithmsandme.com

Iterative preorder traversal


In the last post, Iterative inorder traversal, we learned how to do an inorder traversal of a binary tree without recursion, i.e., iteratively. Today we will learn how to do an iterative preorder traversal of a binary tree. In a preorder traversal, the root node is processed before the left and right subtrees. For example, the preorder traversal of the tree below is [10,5,1,6,14,12,15].


We already know how to implement preorder traversal recursively; let's understand how to implement it in a non-recursive way.

Iterative preorder traversal : Thoughts

If we look at the recursive implementation, we see that we process the root node as soon as we reach it, then work through the entire left subtree before touching anything in the right subtree.

Once the left subtree is processed, control goes to the first node in the right subtree. To emulate this behavior without recursion, a stack is the natural choice. What gets pushed and popped, and when?
Start by pushing the root node onto the stack. The traversal continues as long as there is at least one node on the stack.

Pop a node from the stack, process it, and push its right and left children onto the stack. Why the right child before the left? Because we want to process the left subtree before the right subtree: since the left child is pushed last, it is popped first, and the entire left subtree is processed before the right child is popped from the stack. The algorithm is very simple and is as follows.

    1. Start with the root node and push it onto stack s.
    2. While the stack is not empty
      1. Pop from the stack, current = s.pop(), and process the node.
      2. Push current.right onto the stack.
      3. Push current.left onto the stack.

Iterative preorder traversal : example

Let’s take and example and see how it works. Given below tree, do preorder traversal on it without recursion.


Let's start with the root, node(10), and push it onto the stack: current = node(10).

The loop starts and checks whether there is a node on the stack. s.pop() returns node(10); we print it and push its right and left children, node(14) and node(5), onto the stack. Preorder traversal so far: [10].

Since the stack is not empty, we pop from it: current = node(5). Print it and push its right and left children, node(6) and node(1), onto the stack.

Again, the stack is not empty, so pop: current = node(1). Print the node. It has no children, so nothing is pushed onto the stack.

The stack is still not empty, so pop again: current = node(6). Print the node. Like node(1), it has no children, so nothing is pushed.

The stack is still not empty. Pop: current = node(14). Print the node and, since it has both children, push them onto the stack, right child before left.

The stack is not empty, so pop: current = node(12). Print it; it has no children, so nothing is pushed.

Pop again, as the stack is still not empty: current = node(15). Print it. No children, so nothing to push.

At this point the stack is empty and we have traversed all the nodes of the tree: [10,5,1,6,14,12,15].

Iterative preorder traversal : Implementation

#include <stdio.h>
#include<stdlib.h>
#include<math.h>
 
struct node{
	int value;
	struct node *left;
	struct node *right;
};
typedef struct node Node;

#define STACK_SIZE 10
 
typedef struct stack{
        int top;
        Node *items[STACK_SIZE];
}stack;
 
void push(stack *ms, Node *item){
   if(ms->top < STACK_SIZE-1){
       ms->items[++(ms->top)] = item;
   }
   else {
       printf("Stack is full\n");
   }
}
 
Node * pop (stack *ms){
   if(ms->top > -1 ){
       return ms->items[(ms->top)--];
   }
   else{
       printf("Stack is empty\n");
       return NULL;
   }
}
Node * peek(stack ms){
  if(ms.top < 0){
      printf("Stack empty\n");
      return 0;
   }
   return ms.items[ms.top];
}
int isEmpty(stack ms){
   if(ms.top < 0) return 1;
   else return 0;
}
void preorderTraversalWithoutRecursion(Node *root){
	stack ms;
	ms.top = -1;
	
	if(root == NULL) return ;

	Node *currentNode = NULL;
	/* Step 1 : Start with root */
	push(&ms,root);
	
	while(!isEmpty(ms)){
		/* Step 2.1 : Pop the node and process (print) it */
		currentNode = pop(&ms);
		printf("%d  ", currentNode->value);
		/* Step 2.2 : Push the right child first, so the left subtree is processed before it */
		if(currentNode->right){
			push(&ms, currentNode->right);
		}
		/* Step 2.3 : Push the left child */
		if(currentNode->left){
			push(&ms, currentNode->left);
		}
	}
}


void preorder (Node *root){
	if ( !root ) return;
	
 	printf("%d ", root->value );
	preorder(root->left);
	preorder(root->right);
}
 
Node * createNode(int value){
    Node * newNode =  (Node *)malloc(sizeof(Node));
	
    newNode->value = value;
    newNode->right= NULL;
    newNode->left = NULL;
	
    return newNode;
}

Node * addNode(Node *node, int value){
    if(node == NULL){
    	return createNode(value);
    }
    else{
    	if (node->value > value){
    		node->left = addNode(node->left, value);
    	}
    	else{
    		node->right = addNode(node->right, value);
    	}
    }
    return node;
}
 
/* Driver program for the function written above */
int main(){
        Node *root = NULL;
        //Creating a binary tree
        root = addNode(root,30);
        root = addNode(root,20);
        root = addNode(root,15);
        root = addNode(root,25);
        root = addNode(root,40);
        root = addNode(root,37);
        root = addNode(root,45);
        
	preorder(root);
        printf("\n");
	
        preorderTraversalWithoutRecursion(root);
        return 0;
}

The complexity of the iterative preorder implementation is O(n), as we visit each node exactly once. There is also the added space complexity of the stack, which is O(n).

Please share if there is something wrong or missing. If you are willing to contribute and share your knowledge with thousands of learners across the world, please reach out to us at communications@algorithmsandme.com