Data Structure & Algorithms 2023 | Mid Semester Exam | Previous Year Question & Answers
Q1. (A) -> The contents of a queue Q are as follows:
The queue can store a maximum of eight elements and the front (F) and rear (R) pointers currently point at index 0 and 3 respectively. Show the queue contents and indicate the position of the front and rear pointers after each of the following queue operations. [3 Marks]
(a) Insert(Q, 16),
(b) Delete(Q),
(c) Delete(Q),
(d) Insert(Q, 7),
(e) Delete(Q),
(f) Insert (Q, -2)
Let the four elements initially stored at indices 0 to 3 be e0, e1, e2, and e3 (their actual values are not listed above), with F = 0 pointing at the front element and R = 3 at the last element. The state after each operation is then as follows (a Python sketch of the same operations appears after this list):
- (a) Insert(Q, 16): [e0, e1, e2, e3, 16, _, _, _], F = 0, R = 4
- (b) Delete(Q): [_, e1, e2, e3, 16, _, _, _], F = 1, R = 4
- (c) Delete(Q): [_, _, e2, e3, 16, _, _, _], F = 2, R = 4
- (d) Insert(Q, 7): [_, _, e2, e3, 16, 7, _, _], F = 2, R = 5
- (e) Delete(Q): [_, _, _, e3, 16, 7, _, _], F = 3, R = 5
- (f) Insert(Q, -2): [_, _, _, e3, 16, 7, -2, _], F = 3, R = 6
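The trace above can be reproduced with a minimal array-based queue sketch in Python. The class name and the e0 to e3 placeholders are illustrative (they are not part of the original question), and no wrap-around is implemented because the eight slots are never exhausted in this exercise:

```python
class SimpleQueue:
    """Fixed-size, non-circular queue tracking front (F) and rear (R) indices."""

    def __init__(self, items, size=8):
        self.slots = items + [None] * (size - len(items))
        self.front = 0                   # F: index of the current front element
        self.rear = len(items) - 1       # R: index of the last inserted element

    def insert(self, value):
        self.rear += 1                   # advance R, then store the new element
        self.slots[self.rear] = value

    def delete(self):
        value = self.slots[self.front]
        self.slots[self.front] = None    # mark the vacated slot as empty
        self.front += 1                  # advance F past the removed element
        return value


q = SimpleQueue(["e0", "e1", "e2", "e3"])    # e0-e3 stand in for the unspecified initial elements
for op, arg in [("insert", 16), ("delete", None), ("delete", None),
                ("insert", 7), ("delete", None), ("insert", -2)]:
    q.insert(arg) if op == "insert" else q.delete()
    print(q.slots, "F =", q.front, "R =", q.rear)
```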
Q1. (B) -> What is a heap? Explain min heap and max heap. [2 Marks]
A heap is a complete binary tree-based data structure that satisfies the heap property. In a max heap, the key of every node is greater than or equal to the keys of its children, so the largest element is always at the root. In a min heap, the key of every node is less than or equal to the keys of its children, so the smallest element is always at the root.
Because the tree is complete, its height is O(log n), so inserting an element and deleting the minimum or maximum element both take O(log n) time, where n is the number of elements in the heap. Heaps are commonly used in heap sort, priority queues, and algorithms such as Dijkstra's shortest-path algorithm, because they provide efficient access to the minimum or maximum element, as illustrated in the sketch below.
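A minimal illustration using Python's standard heapq module, which implements a binary min heap as a list; the max-heap behaviour is simulated here by negating the keys, since heapq does not provide a max heap directly:

```python
import heapq

values = [15, 3, 27, 8]

# Min heap: the smallest key ends up at the root (index 0).
min_heap = []
for v in values:
    heapq.heappush(min_heap, v)
print(min_heap[0])       # 3, the minimum

# Max heap: store negated keys, so the largest original key sits at the root.
max_heap = []
for v in values:
    heapq.heappush(max_heap, -v)
print(-max_heap[0])      # 27, the maximum
```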
Q2. The following elements are sorted in ascending order using two-way merge sort: 200, 470, 150, 80, 90, 40, 400, 300, 120, 70. What is the order of these elements after each pass of the merge sort algorithm? Explain the procedure for it. [5 Marks]
Answer: Two-way merge sort follows the divide-and-conquer approach: the array is treated as a collection of sorted runs (initially, every single element is a sorted run of length 1), and in each pass adjacent pairs of runs are merged into sorted runs of twice the length until one fully sorted run remains. A run that has no partner in a pass is carried over unchanged into the next pass. Tracing this pass by pass on the given data (a code sketch of the same procedure follows the trace):
- Initial array: [200, 470, 150, 80, 90, 40, 400, 300, 120, 70] (ten runs of length 1)
- Pass 1: merge adjacent runs of length 1 into sorted runs of length 2: (200, 470) (80, 150) (40, 90) (300, 400) (70, 120). Array after pass 1: [200, 470, 80, 150, 40, 90, 300, 400, 70, 120]
- Pass 2: merge adjacent runs of length 2 into sorted runs of length 4; the last run (70, 120) has no partner and is carried over: (80, 150, 200, 470) (40, 90, 300, 400) (70, 120). Array after pass 2: [80, 150, 200, 470, 40, 90, 300, 400, 70, 120]
- Pass 3: merge adjacent runs of length 4 into a sorted run of length 8; (70, 120) is carried over again: (40, 80, 90, 150, 200, 300, 400, 470) (70, 120). Array after pass 3: [40, 80, 90, 150, 200, 300, 400, 470, 70, 120]
- Pass 4: merge the run of length 8 with the remaining run of length 2. Array after pass 4: [40, 70, 80, 90, 120, 150, 200, 300, 400, 470] (fully sorted)
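The pass-by-pass trace above can be reproduced with a small bottom-up merge sort sketch in Python (the function names are illustrative); the run width doubles after every pass and the array is printed at the end of each pass:

```python
def merge(left, right):
    """Merge two already-sorted runs into one sorted run."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

def two_way_merge_sort(a):
    width = 1
    while width < len(a):
        # One pass: merge adjacent runs of length `width`.
        a = [x for start in range(0, len(a), 2 * width)
               for x in merge(a[start:start + width],
                              a[start + width:start + 2 * width])]
        print("after pass:", a)
        width *= 2
    return a

two_way_merge_sort([200, 470, 150, 80, 90, 40, 400, 300, 120, 70])
```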
Q3. (A)-> In an application there are four functions, each of which takes the same input of size n and produces some output. One of these functions is recursive and halves its input on every call. Help the reviewer represent the time complexity of this function in terms of n in asymptotic notation.
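Answer (assuming the recursive function does only a constant amount of work in each call apart from the recursive call on half the input): the running time satisfies the recurrence T(n) = T(n/2) + c, and since n can be halved only about log2(n) times before it reaches 1, this resolves to O(log n). A minimal sketch that counts the recursive calls for the assumed shape of the function:

```python
def halving(n):
    """Hypothetical recursive function that halves its input on every call."""
    if n <= 1:
        return 0                   # base case: constant work
    return 1 + halving(n // 2)     # constant work plus one recursive call on n/2

for n in (8, 1024, 10**6):
    print(n, "->", halving(n), "recursive calls, roughly log2(n)")
```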
Q3. (B)-> Insert the keys 72, 33, 92, 19, 21 into a hash table of size 10. Resolve all collisions using double hashing, i.e. h(k, i) = (h1(k) + i * h2(k)) mod m, where the first hash function is h1(k) = k mod 10 and the second hash function is h2(k) = 1 + (k mod 7). Show all steps.
- Calculate the index for each key using the first hash function.
- If there's a collision, calculate the step size using the second hash function.
- Use double hashing to resolve collisions and insert keys into the hash table.
- Inserting key 72:
- h1(72) = 72 % 10 = 2
- Index 2 is empty, so insert 72 at index 2.
- Inserting key 33:
- h1(33) = 33 % 10 = 3
- Index 3 is empty, so insert 33 at index 3 (no collision).
- Inserting key 92:
- h1(92) = 92 % 10 = 2
- Collision at index 2 (occupied by 72).
- h2(92) = 1 + (92 % 7) = 1 + 1 = 2 (step size)
- Probe i = 1: (2 + 1*2) % 10 = 4. Index 4 is empty, so insert 92 at index 4.
- Inserting key 19:
- h1(19) = 19 % 10 = 9
- Index 9 is empty, so insert 19 at index 9 (no collision).
- Inserting key 21:
- h1(21) = 21 % 10 = 1
- Index 1 is empty, so insert 21 at index 1.
- Final hash table: index 1 -> 21, index 2 -> 72, index 3 -> 33, index 4 -> 92, index 9 -> 19; all other slots are empty (a code sketch of these steps follows).
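A minimal open-addressing sketch in Python that reproduces these steps; the table size, hash functions, and probe formula come from the question, while the insert helper is illustrative:

```python
M = 10                                    # table size m from the question

def h1(k):
    return k % 10                         # first hash function

def h2(k):
    return 1 + (k % 7)                    # second hash function (step size)

def insert(table, k):
    """Insert k using double hashing: probe (h1(k) + i*h2(k)) % M for i = 0, 1, 2, ..."""
    for i in range(M):
        idx = (h1(k) + i * h2(k)) % M
        if table[idx] is None:
            table[idx] = k
            return idx
    raise RuntimeError("hash table is full")

table = [None] * M
for key in (72, 33, 92, 19, 21):
    print("inserted", key, "at index", insert(table, key))
print(table)    # [None, 21, 72, 33, 92, None, None, None, None, 19]
```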
Q4. Consider the classical sequential search algorithm (of looking for a key in a list of keys) and a variation of sequential search that scans a list to return the number of occurrences of a given search key in the list. Compare the best-case, worst-case, and average-case efficiency as well as the overall time complexity of the classical sequential search with that of its variation. [5 Marks]
- Classical sequential search (stops at the first match):
- Best case: the key is the first element of the list, so only 1 comparison is needed, i.e. O(1).
- Worst case: the key is the last element or is not present at all, so n comparisons are needed, where n is the number of elements in the list, i.e. O(n).
- Average case: about n/2 comparisons for a successful search (and n for an unsuccessful one), which is still O(n).
- Counting variation (returns the number of occurrences of the key): it can never stop early, because a later occurrence may still exist, so it always scans the entire list. Its best, worst, and average cases are therefore identical: n comparisons in every case, i.e. Θ(n).
- Comparison: both algorithms have the same overall time complexity, O(n), but they differ in the best case: the classical search can terminate early after a single comparison, while the variation cannot.
- Overall, the variation trades away early termination for extra information: it reports how many times the key occurs, which makes it more useful in applications where that count is needed (both variants are sketched below).
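The difference in early termination between the two algorithms is easy to see in a short sketch (the function names are illustrative):

```python
def sequential_search(lst, key):
    """Classical version: stops at the first match (best case: 1 comparison)."""
    for i, item in enumerate(lst):
        if item == key:
            return i                 # early exit at the first occurrence
    return -1                        # key not present: n comparisons

def count_occurrences(lst, key):
    """Counting variation: always examines all n elements."""
    count = 0
    for item in lst:
        if item == key:
            count += 1
    return count

data = [4, 2, 7, 2, 9, 2]
print(sequential_search(data, 2))    # 1 (index of the first occurrence)
print(count_occurrences(data, 2))    # 3 (total number of occurrences)
```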