Data Structure & Algorithms 2023 Previous Year | Mid Semester Exam | Previous Year Question & Answers


Q1. (A) -> The contents of a queue Q are as follows:


The queue can store a maximum of eight elements and the front (F) and rear (R) pointers currently point at index 0 and 3 respectively. Show the queue contents and indicate the position of the front and rear pointers after each of the following queue operations. [3 Marks]

(a) Insert(Q, 16),

(b) Delete(Q),

(c) Delete(Q),

(d) Insert(Q, 7),

(e) Delete(Q),

(f) Insert (Q, -2)


Answer: Given: The queue can store a maximum of eight elements and the front (F) and rear (R) pointers currently point at index 0 and 3 respectively.

Since the original element values are not reproduced here, denote the four elements initially occupying indices 0–3 as e0, e1, e2 and e3 (F = 0, R = 3, where R points at the last element).

(a) Insert(Q, 16): 16 goes into index 4.
  • Queue contents: [e0, e1, e2, e3, 16, _, _, _]
  • Front pointer (F): 0, Rear pointer (R): 4
(b) Delete(Q): e0 is removed from index 0.
  • Queue contents: [_, e1, e2, e3, 16, _, _, _]
  • Front pointer (F): 1, Rear pointer (R): 4
(c) Delete(Q): e1 is removed from index 1.
  • Queue contents: [_, _, e2, e3, 16, _, _, _]
  • Front pointer (F): 2, Rear pointer (R): 4
(d) Insert(Q, 7): 7 goes into index 5.
  • Queue contents: [_, _, e2, e3, 16, 7, _, _]
  • Front pointer (F): 2, Rear pointer (R): 5
(e) Delete(Q): e2 is removed from index 2.
  • Queue contents: [_, _, _, e3, 16, 7, _, _]
  • Front pointer (F): 3, Rear pointer (R): 5
(f) Insert(Q, -2): -2 goes into index 6.
  • Queue contents: [_, _, _, e3, 16, 7, -2, _]
  • Front pointer (F): 3, Rear pointer (R): 6

So after all six operations the queue holds e3, 16, 7 and -2 at indices 3–6, with F = 3 and R = 6.
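The operations above can be traced with a minimal sketch of the linear (non-circular) array queue assumed in this answer. Since the question's figure of the initial contents is not reproduced, the four starting elements are labelled e0–e3 here:

```python
# Minimal sketch of a fixed-size linear (non-circular) queue of capacity 8.
# Insert/Delete follow the question's naming; e0-e3 stand in for the
# unknown initial elements shown in the question's figure.
class Queue:
    def __init__(self, capacity=8):
        self.items = [None] * capacity
        self.front = 0   # index of the current front element
        self.rear = -1   # index of the last inserted element

    def insert(self, value):
        if self.rear == len(self.items) - 1:
            raise OverflowError("queue is full")
        self.rear += 1
        self.items[self.rear] = value

    def delete(self):
        if self.front > self.rear:
            raise IndexError("queue is empty")
        value = self.items[self.front]
        self.items[self.front] = None   # clear the vacated slot
        self.front += 1
        return value

q = Queue()
for x in ("e0", "e1", "e2", "e3"):   # initial state: F = 0, R = 3
    q.insert(x)
q.insert(16)                          # (a)
q.delete(); q.delete()                # (b), (c)
q.insert(7)                           # (d)
q.delete()                            # (e)
q.insert(-2)                          # (f)
print(q.front, q.rear)                # 3 6
print(q.items[3:7])                   # ['e3', 16, 7, -2]
```

Because the queue is not circular, freed slots at the front are never reused; a circular queue would wrap the rear pointer around instead.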


Q1. (B) -> What is a heap? Explain min and max heap. [2 Marks]

Answer:

A heap is a specialized tree-based data structure that satisfies the heap property. In a max heap, for any given node i, the value of i is greater than or equal to the values of its children. In a min heap, for any given node i, the value of i is less than or equal to the values of its children.

Here is an example of a max heap:
      10
     /  \
    8    5
   / \   \
  3   2   1

In this max heap, the parent nodes are always greater than or equal to their child nodes.

Here is an example of a min heap:
      1
     /  \
    2    3
   / \   \
  8   5   10

In this min heap, the parent nodes are always less than or equal to their child nodes.

Heaps are commonly used in algorithms such as Dijkstra's shortest path algorithm, heap sort, and priority queues. They provide efficient access to the minimum or maximum element in the data structure, which can be useful in many applications.


The heap property ensures that the tree remains balanced, allowing for efficient insertion and deletion operations. The time complexity for inserting an element into a heap is O(log n), where n is the number of elements in the heap. The time complexity for deleting the minimum or maximum element from a heap is also O(log n).


In summary, a heap is a tree-based data structure that satisfies the heap property. It can be either a max heap or a min heap, depending on the order of the elements. Heaps are commonly used in algorithms for efficient access to the minimum or maximum element in the data structure.
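As a quick sketch of these operations, Python's standard heapq module maintains a min heap; a max heap can be simulated by negating the keys (a common idiom, since heapq provides no max-heap variant):

```python
import heapq

values = [10, 8, 5, 3, 2, 1]

# Min heap: heapq keeps the smallest element at index 0.
min_heap = []
for v in values:
    heapq.heappush(min_heap, v)   # O(log n) per insertion
print(min_heap[0])                # smallest element: 1

# Max heap simulated by storing negated keys.
max_heap = []
for v in values:
    heapq.heappush(max_heap, -v)
print(-max_heap[0])               # largest element: 10
```

Each heappush and heappop costs O(log n), matching the insertion and deletion complexities discussed above.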



Q2. The following elements are sorted in ascending order using two-way merge sort: 200, 470, 150, 80, 90, 40, 400, 300, 120, 70. What is the order of the elements after each pass of the merge sort algorithm? Explain the procedure. [5 Marks]

Answer: Two-way merge sort is a sorting algorithm based on merging. In its bottom-up (iterative) form, the input is treated as n sorted runs of length 1; in each pass, adjacent pairs of runs are merged, doubling the run length, until a single sorted run remains. For n = 10 elements this takes ⌈log₂ 10⌉ = 4 passes.

Initial array: [200, 470, 150, 80, 90, 40, 400, 300, 120, 70]

Pass 1 (merge runs of length 1 into runs of length 2):
  • Merge the pairs (200, 470), (150, 80), (90, 40), (400, 300), (120, 70):
    [200, 470] [80, 150] [40, 90] [300, 400] [70, 120]
  • Array after pass 1: [200, 470, 80, 150, 40, 90, 300, 400, 70, 120]
Pass 2 (merge runs of length 2 into runs of length 4; the last run [70, 120] has no partner and is carried over unchanged):
  • [200, 470] + [80, 150] → [80, 150, 200, 470]
  • [40, 90] + [300, 400] → [40, 90, 300, 400]
  • Array after pass 2: [80, 150, 200, 470, 40, 90, 300, 400, 70, 120]
Pass 3 (merge runs of length 4 into a run of length 8):
  • [80, 150, 200, 470] + [40, 90, 300, 400] → [40, 80, 90, 150, 200, 300, 400, 470]
  • Array after pass 3: [40, 80, 90, 150, 200, 300, 400, 470, 70, 120]
Pass 4 (merge the run of length 8 with the remaining run of length 2):
  • [40, 80, 90, 150, 200, 300, 400, 470] + [70, 120] → [40, 70, 80, 90, 120, 150, 200, 300, 400, 470]
  • Array after pass 4: [40, 70, 80, 90, 120, 150, 200, 300, 400, 470]
After each pass the sorted runs double in length, and after the fourth pass the array is completely sorted. This repeated merging of progressively longer runs is the essence of two-way merge sort, giving O(n log n) time overall.
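The pass-by-pass behaviour of two-way merge sort can be checked with a short bottom-up sketch that prints the array after each pass:

```python
def merge(left, right):
    """Merge two sorted runs into one sorted list."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

def two_way_merge_sort(a):
    """Bottom-up two-way merge sort; prints the array after each pass."""
    width = 1                                   # current run length
    while width < len(a):
        result = []
        for lo in range(0, len(a), 2 * width):  # merge adjacent run pairs
            result += merge(a[lo:lo + width], a[lo + width:lo + 2 * width])
        a = result
        print(a)
        width *= 2                              # runs double each pass
    return a

result = two_way_merge_sort([200, 470, 150, 80, 90, 40, 400, 300, 120, 70])
```

Running this prints four intermediate arrays, one per pass, ending with the fully sorted list.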



Q3. (A) -> In an application software there are four functions, each of which takes n identical inputs and produces some output. Each function is recursive and halves its input on every call. Help the reviewer represent the time complexity of this function in terms of n in asymptotic notation.

Answer: The time complexity of a recursive function is determined by the number of recursive calls and the time taken per call.

In this case, the function halves its input on every call, so it makes about log2(n) recursive calls before reaching the base case. If each call takes a constant time C, the total cost is C * log2(n), which in asymptotic notation is O(log n).

It is also worth noting that this is the power of logarithmic time complexity: each halving removes half of the remaining work, so as the input size grows, the number of operations grows much more slowly. Even though the function is called four times (once for each of the four application functions), the total cost 4 * C * log2(n) is still O(log n), because constant factors are dropped in asymptotic notation.
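A tiny sketch illustrates the logarithmic call depth; the function name `halve` is hypothetical, standing in for the recursive function described in the question:

```python
import math

calls = 0

def halve(n):
    """Recursively halve n down to 1, counting how many calls are made."""
    global calls
    calls += 1
    if n <= 1:
        return n
    return halve(n // 2)

halve(1024)
# 1024 -> 512 -> ... -> 1 takes log2(1024) = 10 halvings,
# plus the base-case call: 11 calls in total.
print(calls)   # 11
```

Doubling the input to 2048 would add only a single extra call, which is exactly the slow growth that O(log n) captures.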


Q3. (B) -> Insert the keys 72, 33, 92, 19, 21 into a hash table of size 10. Resolve all collisions using double hashing (that is, h(k, i) = (h1(k) + i*h2(k)) % m), where the first hash function is h1(k) = k mod 10 and the second hash function is h2(k) = 1 + (k mod 7). Show all steps.

Answer: To insert the keys 72, 33, 92, 19, 21 into a hash table of size m = 10 using double hashing with h1(k) = k mod 10 and h2(k) = 1 + (k mod 7), probe the slots (h1(k) + i*h2(k)) % 10 for i = 0, 1, 2, ... until an empty slot is found:

  1. Inserting key 72:
    • h1(72) = 72 % 10 = 2
    • Slot 2 is empty, so insert 72 at index 2.
  2. Inserting key 33:
    • h1(33) = 33 % 10 = 3
    • Slot 3 is empty, so insert 33 at index 3. (No collision.)
  3. Inserting key 92:
    • h1(92) = 92 % 10 = 2
    • Collision at index 2 (occupied by 72).
    • h2(92) = 1 + (92 % 7) = 1 + 1 = 2 (step size)
    • Probe i = 1: (2 + 1*2) % 10 = 4. Slot 4 is empty, so insert 92 at index 4.
  4. Inserting key 19:
    • h1(19) = 19 % 10 = 9
    • Slot 9 is empty, so insert 19 at index 9. (No collision.)
  5. Inserting key 21:
    • h1(21) = 21 % 10 = 1
    • Slot 1 is empty, so insert 21 at index 1.
Final hash table:
Index: 0  1  2  3  4  5  6  7  8  9
Keys:  -- 21 72 33 92 -- -- -- -- 19

So the keys 72, 33, 92, 19, 21 are inserted with only one collision (key 92), which double hashing resolves in a single extra probe.
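The probing procedure can be sketched with the question's two hash functions; the helper names (`insert`, `table`) are illustrative:

```python
M = 10                         # table size

def h1(k):
    return k % 10              # primary hash function

def h2(k):
    return 1 + (k % 7)         # step size for double hashing (never 0)

def insert(table, k):
    """Probe (h1(k) + i*h2(k)) % M for i = 0, 1, 2, ... until a slot is free."""
    for i in range(M):
        idx = (h1(k) + i * h2(k)) % M
        if table[idx] is None:
            table[idx] = k
            return idx
    raise RuntimeError("probe sequence exhausted")

table = [None] * M
for key in (72, 33, 92, 19, 21):
    print(key, "->", insert(table, key))
print(table)   # [None, 21, 72, 33, 92, None, None, None, None, 19]
```

Note that h2 never returns 0, which guarantees the probe sequence actually moves; choosing a prime table size would further guarantee that every slot is eventually probed.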


Q4. Consider the classical sequential search algorithm (of looking for a key in a list of keys) and a variation of sequential search that scans a list to return the number of occurrences of a given search key in the list. Compare the best-case, worst-case, and average-case efficiency as well as the overall time complexity of the classical sequential search with that of its variation. [5 Marks]

Answer: Compare the classical sequential search (which stops at the first match) with the counting variation (which must always scan the entire list).

Classical Sequential Search:
  • Best Case: the search key is the first element of the list; only one comparison is needed, i.e. O(1).
  • Worst Case: the key is the last element or not present at all; n comparisons are needed, where n is the number of elements, i.e. O(n).
  • Average Case: assuming the key is equally likely to be at any position, a successful search makes about (n + 1)/2 comparisons, which is still O(n).
  • Overall Time Complexity: O(n).
Counting Variation:
  • Because every occurrence of the key must be counted, the scan can never stop early: exactly n comparisons are made no matter where (or whether) the key appears.
  • Best Case = Worst Case = Average Case: n comparisons, i.e. Θ(n).
  • Overall Time Complexity: O(n).
Comparison:
  • Both algorithms are O(n) overall, but the classical search can finish early (best case O(1)), while the variation's cost is exactly n comparisons in every case.
  • The variation returns the number of occurrences of the search key in addition to detecting its presence, providing more information at the price of always scanning the whole list, making it more versatile for applications where occurrence counts are needed.
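A minimal sketch of both versions makes the early-exit difference concrete:

```python
def sequential_search(items, key):
    """Classical sequential search: index of the first match, or -1."""
    for i, x in enumerate(items):
        if x == key:
            return i              # may stop early: best case is O(1)
    return -1

def count_occurrences(items, key):
    """Counting variation: must examine every element, so always Theta(n)."""
    count = 0
    for x in items:               # no early exit possible
        if x == key:
            count += 1
    return count

data = [4, 7, 4, 9, 4, 2]
print(sequential_search(data, 4))   # 0  (stops at the first match)
print(count_occurrences(data, 4))   # 3  (scans all six elements)
```

The classical version's loop can return mid-scan; the counting version's loop body runs for all n elements regardless of input, which is why its best and worst cases coincide.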