Thursday, March 5, 2015

Virtual memory in operating systems

Virtual memory

What is virtual memory?

Memory is an integral part of process execution: it is where a process reads and writes the data it operates on, and an executable instruction must be in memory before the processor can execute it. What happens when the available memory is less than what a process requires?
That's where virtual memory comes into the picture. Virtual memory is a technique implemented by the operating system to meet a process's memory requirements even when there is not enough physical memory available. This memory, which is not real physical memory, is called virtual memory.

Virtual memory brings in another concept: the logical address. Addresses which map directly to physical memory are called physical addresses, while the addresses actually visible to a process are called logical addresses. So the address space a process sees is not the actual physical address space. How do these two spaces map onto each other? To map them, the operating system implements a memory map function which does the dirty work of translating a process's logical addresses to physical addresses in memory.
One question still remains: what happens to the extra memory which a process requires but which is not available? For example, if a process requires 150 MB and only 80 MB is available in memory, we are 70 MB short. How do we manage that? Operating systems rely on the fact that not all instructions of a process need to be in memory at the same time in order to execute it. Only the instructions which are currently needed are brought into memory; the remaining 70 MB are kept on disk and managed by a component called the Virtual Memory Manager (VMM). Instructions are swapped in and out when an instruction is needed which is on disk and not in main memory.

All in all, the picture below explains the overall virtual memory concept.

Virtual memory overview
Virtual memory allows a process to use much more memory than what is really available. It makes it possible for processes which are very big (bigger than memory) to execute. The other advantage it brings is the ability to keep multiple processes in main memory at a time, which enables multiprogramming.

To go into the details of the concept, let's understand how physical memory is organized. Physical memory is divided into chunks of a predefined size, usually 4 KB. These chunks are called pages. When a process accesses a logical address, the memory map finds its physical address and the page in which that address falls is accessed. What if that page is not available in memory? In that case, the operating system needs to bring it into main memory from the disk. Another question: what if there is no space available in main memory to accommodate the new page? Then the operating system must decide which page to take out of memory to make room for the new one. Page replacement algorithms are a study all of their own; I have discussed the Least Recently Used page replacement algorithm in this post: Least Recently Used cache
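To make the page-in/eviction cycle concrete, here is a minimal sketch (not real OS code; the class and its names are made up for illustration) of a page table that faults missing pages in and evicts the least recently used page when physical memory is full:

```python
from collections import OrderedDict

class SimplePager:
    """Toy model: maps logical page numbers to physical frames,
    evicting the least recently used page when frames run out."""

    def __init__(self, num_frames):
        self.free = list(range(num_frames))   # physical frames not yet in use
        self.frames = OrderedDict()           # page -> frame, kept in LRU order
        self.faults = 0

    def access(self, page):
        """Return the frame holding `page`, loading it from 'disk' on a fault."""
        if page in self.frames:
            self.frames.move_to_end(page)     # mark as most recently used
            return self.frames[page]
        self.faults += 1                      # page fault: page must be brought in
        if not self.free:                     # memory full: evict the LRU page
            _, frame = self.frames.popitem(last=False)
            self.free.append(frame)
        frame = self.free.pop()
        self.frames[page] = frame
        return frame
```

With two frames, accessing pages 0, 1, 0, 2 evicts page 1 (page 0 was used more recently), so a later access to page 1 faults again.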

Once the desired page is loaded into main memory, the process resumes its execution.
This swapping of pages between hard disk and main memory presents another problem, called thrashing. Thrashing occurs when every instruction, or most of them, leads to pages being swapped between main memory and disk, which has a tremendous impact on performance.

This article explained the basic concept of virtual memory. In the next post we will discuss how page address translation works in order to realize virtual memory. Please leave a comment if you think there is something I missed.

Sunday, March 1, 2015

Merge sort algorithm

Merge Sort Algorithm


We can classify sorting algorithms based on their complexity: selection sort, bubble sort and insertion sort have complexity O(n²), while heap sort, merge sort and quicksort (with some assumptions) have complexity O(n log n), and counting and radix sorts are linear, O(n), algorithms.
In this post I will concentrate on one sorting algorithm: merge sort.
Merge sort falls in the divide and conquer class of algorithms, where the input space is divided into subproblems, each subproblem is solved individually, and the solutions are then combined to give the solution to the original problem. The figure below explains the divide and conquer paradigm.

Divide and conquer

In merge sort, the input is repeatedly divided into two subproblems until we reach the smallest subproblem; that smallest subproblem is then solved, that is, sorted, and the results are combined step by step until the whole original input is sorted. The question that arises is: what is the smallest subproblem? It is the point where the input cannot be divided further. For an array of integers, that is when there is only one element in the array.
From the above explanation, it is clear that the subproblems are subjected to the same processing as the original input. That rings a bell for recursion. And the base condition to stop the recursion was derived in the paragraph above: there is only one element in the array.
Now, how do we combine the solutions of two subproblems? This step is the conquer part. Sort the smallest parts and combine them with the merge operation. In the merge operation, we scan both sorted arrays and, based on a comparison, put the smaller of the two items into the output array, until both arrays are scanned. If one array finishes before the other, we just copy all remaining elements from the other array to the output array. This output array is then copied back into the original input array so that it can be combined with bigger subproblems, until the solution to the original problem is derived.
If you don't want to read, watch the merge sort video:


Below figure shows the merge sort operation.

merge step of merge sort

Let's take an example and work out merge sort, and then we will implement it.
Divide step

Divide step of merge sort

Conquer step

Conquer step of merge sort

Merge sort implementation


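The original embedded code is not available here, so below is a Python sketch that follows the explanation in the next section: a `mergesort(a, start, end)` function that recursively splits the array, and a four-argument `merge` that combines two sorted halves in place (indices are inclusive):

```python
def merge(a, start, mid, end):
    """Merge the sorted runs a[start..mid] and a[mid+1..end] in place."""
    left, right = a[start:mid + 1], a[mid + 1:end + 1]  # auxiliary copies
    i = j = 0
    k = start
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:               # <= keeps the sort stable
            a[k] = left[i]
            i += 1
        else:
            a[k] = right[j]
            j += 1
        k += 1
    while i < len(left):                      # left run not exhausted: copy rest
        a[k] = left[i]
        i += 1
        k += 1
    while j < len(right):                     # right run not exhausted: copy rest
        a[k] = right[j]
        j += 1
        k += 1

def mergesort(a, start, end):
    if start >= end:                          # base case: one element (or none)
        return
    mid = (start + end) // 2
    mergesort(a, start, mid)                  # divide: sort left half
    mergesort(a, mid + 1, end)                # divide: sort right half
    merge(a, start, mid, end)                 # conquer: merge the sorted halves
```

For example, calling `mergesort(a, 0, len(a) - 1)` on `a = [38, 27, 43, 3, 9, 82, 10]` sorts the array in place.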
Code explanation
The mergesort function takes three parameters: the input array a[], the start index and the end index of the array. The start and end change in every recursive invocation of mergesort.
We find the middle index using the start and end index of the input array, and invoke the same function on two parts: one from start to mid and the other from mid+1 to end.
Once the base condition is hit, the recursion starts winding up and the merge function is called. The merge function takes four parameters: the input array, the start and end indices, and the middle index derived from start and end.
The merge function merges the two subarrays (one from start to mid and the other from mid+1 to end) into a single sorted run from start to end. Control then returns to the upper calling function, which in turn merges the next bigger pair of subarrays.

Complexity analysis
Every time, the input space is divided into two halves, so, as we know from binary search, the depth of this division is O(log n), where n is the input size. At each level of the recursion, the merge step places every element in its proper position, which takes linear time, O(n). Since O(n) work is done on each of the O(log n) levels, the total complexity of merge sort is O(n log n).
However, there is one caveat: every time we merge two subarrays, an auxiliary array is needed to temporarily store the elements, which incurs O(n) space complexity on the merge sort algorithm.

There are some improvements which can be made to this algorithm.
1. When the number of elements is less than some threshold, use insertion or selection sort to sort them. Why? Because when n is small, O(n²) is as good as O(n log n), and it saves the extra overhead of splitting and merging. All in all, using insertion sort on small subarrays can give better performance.

2. Before calling merge, check whether all the elements in the right subarray are greater than those in the left subarray; if so, there is no need to merge. This can be checked simply by comparing a[mid] with a[mid+1]: if a[mid] is less than or equal to a[mid+1], the two subarrays are already in order and we can skip the merge.
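Improvement 2 can be folded into the recursive function as a one-line guard before the merge. A self-contained sketch (with a compact in-place merge included so the snippet runs on its own):

```python
def merge(a, start, mid, end):
    """Standard in-place merge of sorted runs a[start..mid] and a[mid+1..end]."""
    left, right = a[start:mid + 1], a[mid + 1:end + 1]
    i = j = 0
    for k in range(start, end + 1):
        if j >= len(right) or (i < len(left) and left[i] <= right[j]):
            a[k] = left[i]
            i += 1
        else:
            a[k] = right[j]
            j += 1

def mergesort(a, start, end):
    if start >= end:
        return
    mid = (start + end) // 2
    mergesort(a, start, mid)
    mergesort(a, mid + 1, end)
    if a[mid] <= a[mid + 1]:      # halves already in order: skip the merge
        return
    merge(a, start, mid, end)
```

On an already sorted input, every merge call is skipped and the algorithm only pays for the recursive splits.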

External Merge sort

Merge sort is best used when the data is huge in size compared to the available memory, like sorting 2 GB of data when we have only 100 MB of RAM or physical memory. The data cannot fit in memory and resides on disk, from where it is fetched in chunks and merged.
There are two passes in external merge sort: the sort pass and the merge pass. The figure below explains this.

external merge sort
Sort pass
  • Divide the input into N chunks, where N = (size of total data) / (size of available memory).
  • Load each chunk into main memory, sort it with any conventional sorting algorithm, and write the sorted chunk back to disk.
Merge Pass
  • Load a predefined block from each of the sorted chunks into memory, and keep a buffer to store the sorted output.
  • Do an N-way merge and put the output into the buffer. As the buffer gets full, write it out to disk.
  • If the block taken from one of the chunks is exhausted, fill the next block from the same chunk.
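The two passes can be sketched as follows (an illustrative sketch, not a production implementation: the function names are made up, "available memory" is modeled by `chunk_size`, the runs are one-integer-per-line temporary files, and block-by-block reading is left to Python's file buffering):

```python
import heapq
import tempfile

def _spill(sorted_chunk):
    """Write one sorted run to a temporary file and rewind it for reading."""
    f = tempfile.TemporaryFile(mode="w+")
    f.writelines(f"{v}\n" for v in sorted_chunk)
    f.seek(0)
    return f

def external_sort(values, chunk_size):
    """Sort pass: cut `values` into chunks of at most chunk_size items,
    sort each chunk in memory and spill it to disk.
    Merge pass: N-way merge all the runs with a heap, reading lazily."""
    runs, chunk = [], []
    for v in values:
        chunk.append(v)
        if len(chunk) == chunk_size:          # "memory" is full: sort and spill
            runs.append(_spill(sorted(chunk)))
            chunk = []
    if chunk:
        runs.append(_spill(sorted(chunk)))
    streams = [(int(line) for line in f) for f in runs]
    return list(heapq.merge(*streams))        # N-way merge of the sorted runs
```

`heapq.merge` keeps only one "current" element per run in its heap, which is exactly the N-way merge described above.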
External merge sort can be improved significantly using parallelism: data chunks can be written to different disks, where reads and writes can be performed in parallel, and different chunks can be sorted on different processors in parallel. If multiple processors are not available, the merge routine can still take advantage of multithreading.
A problem which is a classic example of the use of external merge sort is mentioned here as problem number 5 and solved here: External merge sort.

If you have any other optimization trick or a better way to implement merge sort, please comment below.

Sunday, February 22, 2015

Atomic Behaviour: Coding Best Practice

Atomic Behaviour: Coding Best Practice

Many organizations follow a pattern where they have a master database where updates happen, while queries are handled from a separate database which is regularly synchronized with it. Such synchronization is normally handled at the database level, but there are occasions when programs are written to achieve it. This generally happens when frequently changing business rules are used to extract the data that goes from the master database into the query database. Interestingly, most such implementations that update the query database that I have seen are written in the following manner:

  1. Verify the major input data - reject if inconsistent
  2. Delete the previous data present in the query database
  3. Insert the new data with which the process is invoked

At the outset the above steps look fine, and yes, they work in most cases, except in a few scenarios. One instance where I faced a problem with the above was in a JEE environment. The method containing these steps had the transaction semantics "TRANSACTION REQUIRED", and it was being called from a batch program executing in, and with transactions controlled by, the JEE environment.

Problematic Scenario

The scenario involved entities that can be mapped to User, Privileges and User Privileges, and the data regularly updated into the query database was that of the User and User Privileges entities. Privileges were assumed to be more stable and were not synchronized.
The incident reported for this scenario was that a particular user, having lost some of his privileges, was not able to perform some tasks. Not all privileges were lost, and the privileges lost were random and inconsistent, e.g. losing view access while still having update access for the same business functionality.

Causal Analysis

In order to understand why the way the method was written caused the incident, let us look at the method's steps, replicated below for the scenario:
  1. Check whether the user is present - reject input if not
  2. DELETE all the existing privileges of the specified user
  3. INSERT all the specified privileges for the user

Note that step 1 does not validate the presence of the privileges in the query database; it assumes that they are present. The database, however, does this check in step 3, to enforce referential integrity. Consider the scenario where one of the user's privileges is missing from the Privileges entity. This causes step 3 to fail. Step 2, however, has already succeeded, so if the transaction were committed after the failure, the database would be in an inconsistent state from a business point of view. That is exactly how the batch program was written; it ignored the exception from this method. The rationale for doing so is obvious: one cannot roll back the complete transaction just because one user's data failed. The point to note here is that the failure should not have left the database in a business-inconsistent state, which in our case it did. The method relied on the assumption that the caller would roll back the transaction, and hence the problem.
The diagram illustrates the relationship between the data synchronization method and the batch program, and highlights the difference from an online update calling the same method. A batch program performs many more transactional tasks than an equivalent online program. Each task cannot be treated as its own transaction, because that would unnecessarily slow down the system as a whole. Further, if any of the tasks fails, a complete rollback is not possible either, as that would waste the time and effort spent on all the tasks that were successful.
One could argue that this should be handled by transaction semantics. That is correct, but JEE container-managed transaction semantics do not allow nesting of transaction contexts.

Learning & Best Practices

There are a few lessons that can be drawn from this episode, which are also best practices for coding for atomic behaviour. They will certainly not eliminate the risks, but following them will reduce the risks substantially. The best practices are as follows:
  1. Perform all the validations before any update. If the validations are not successful then raise an exception
  2. Perform unrelated operations in insert, update and then delete order – the rationale being that having incorrect data is better than losing correct data
  3. Perform only the operations that are required
  4. In case of throwing an exception, ensure that the database is in a consistent business state. This may require you to perform some more database operations to bring the database to a consistent business state.
The best practices above are generic in nature and should be followed for all development involving a database.
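To illustrate practice 1 (validate everything before any update), here is a sketch of the User Privileges synchronization rewritten in that style. This is not the original JEE code; it uses Python's sqlite3 for brevity, and the table and column names are invented for the example:

```python
import sqlite3

def replace_user_privileges(conn, user_id, privilege_ids):
    """Replace a user's privileges, validating everything up front so a
    failure can never leave the delete committed without the insert."""
    cur = conn.cursor()
    # 1. Perform ALL validations before any update; raise on failure.
    if cur.execute("SELECT 1 FROM users WHERE id = ?", (user_id,)).fetchone() is None:
        raise ValueError("unknown user")
    for pid in privilege_ids:
        if cur.execute("SELECT 1 FROM privileges WHERE id = ?", (pid,)).fetchone() is None:
            raise ValueError("unknown privilege %s" % pid)
    # 2. Only now touch the data. The delete must precede the insert here
    #    (same rows), but the up-front validation guarantees the insert
    #    cannot fail on referential integrity after the delete has run.
    cur.execute("DELETE FROM user_privileges WHERE user_id = ?", (user_id,))
    cur.executemany(
        "INSERT INTO user_privileges (user_id, privilege_id) VALUES (?, ?)",
        [(user_id, pid) for pid in privilege_ids],
    )
```

With this ordering, a bad input raises before step 2, so even a caller that ignores the exception and commits the surrounding transaction finds the user's existing privileges intact.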