When determining the efficiency of an algorithm, which of the following must you consider? (Select two answers)


Thanks to the evolution of new technologies and the huge amounts of data constantly being generated, there is an ever-present demand for faster and more efficient sorting algorithms. If you look up sorting algorithms, you will be amazed by the sheer number of algorithms, even among those with the same space and time complexity. Looking deeper into each algorithm, however, reveals that slight implementation differences can cause huge differences in practice. In this article, I’m going to briefly discuss the main factors to consider when choosing a sorting algorithm aligned with your needs.

1. Simplicity

In the technology industry, where data volumes are large, simplicity of implementation usually counts for little next to factors like performance and speed. Algorithms like Bubble sort, Insertion sort, and Selection sort are easy to understand and implement, but given their quadratic (N²) average and worst-case time complexity, they are not very popular in the computing world. However, thanks to their simplicity, they are used for educational purposes, small data sets, and real-life scenarios. For example, a clothing shop employee will choose insertion sort over merge sort to organize t-shirts by size. For very small inputs, basic sorts like Insertion sort can even outperform Quick sort and Merge sort because they need no extra memory allocations or recursive calls. Consequently, basic sorts such as Insertion sort are often used inside hybrid algorithms, for example to sort the bins in Radix sort.
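For reference, here is a minimal insertion sort sketch in Python; the function name and sample values are illustrative only.

    def insertion_sort(items):
        """Sort a list in place; well suited to small or nearly sorted inputs."""
        for i in range(1, len(items)):
            current = items[i]
            j = i - 1
            # Shift larger elements one position to the right.
            while j >= 0 and items[j] > current:
                items[j + 1] = items[j]
                j -= 1
            items[j + 1] = current
        return items

    print(insertion_sort([38, 27, 43, 3]))  # [3, 27, 38, 43]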

2. Running time

Running time is the main factor used to classify sorting algorithms. In a coding interview, when asked about the time complexity of your algorithm, the interviewer is usually looking for the worst-case time complexity. In practice, however, what matters most is the average case and how the algorithm performs across all kinds of data. A great example is Quick sort, a common sorting algorithm with an O(N²) worst-case time complexity compared to O(N log N) in the average case. The worst case, however, is a very special and rare circumstance (it occurs when the pivot is always the minimum or maximum element). In addition, the actual running time can vary greatly depending on other factors such as the number of recursive calls, memory access patterns, and the number of comparisons required. For instance, Heap sort and Quick sort have the same average time complexity, yet in practice Quick sort performs much better on large data. Another popular fast sorting algorithm is Merge sort, which is used in general cases (Arrays.sort() in Java for objects) and in special cases, for example when an e-commerce website wants to sort results fetched from other websites.
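For illustration, a minimal (not in-place) quicksort sketch in Python; choosing the first element as the pivot, as done here, makes already-sorted input the worst case discussed above. The function name and sample values are illustrative.

    def quick_sort(items):
        # Lists of length 0 or 1 are already sorted.
        if len(items) <= 1:
            return items
        pivot = items[0]  # first-element pivot: already-sorted input triggers the O(N^2) worst case
        smaller = [x for x in items[1:] if x < pivot]
        larger = [x for x in items[1:] if x >= pivot]
        return quick_sort(smaller) + [pivot] + quick_sort(larger)

    print(quick_sort([9, 1, 8, 2, 7]))  # [1, 2, 7, 8, 9]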

There are certain use-cases in critical systems such as aircraft and life-monitoring systems where a guaranteed response time is required. In these cases, Heap sort is a better option than Quick sort because of its better worst-case time complexity.

3. Memory consumption

Space complexity describes how much memory an algorithm uses relative to the size of its input. Some basic algorithms like Insertion or Bubble sort require no additional memory and sort the data in place. More efficient algorithms like Quick sort and Merge sort, on the other hand, require O(log N) and O(N) auxiliary space respectively, meaning extra memory is needed to complete the sort. This can be problematic when the input data is huge or the available memory is limited. Due to memory limitations, embedded systems and the Linux kernel prefer in-place yet efficient sorts like Heap sort. When we talk about memory consumption, the assumption is that the algorithm can fit all data in RAM (internal sorting). In scenarios such as sorting data on file systems and databases, where the input cannot fit in RAM, we need other sorting algorithms like external merge sort, which sort the data using external storage (external sorting).
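A rough sketch of the external-sorting idea in Python; the chunk size, helper name, and sample data are invented for illustration. Chunks that fit in memory are sorted and written to temporary files, then combined with a k-way merge.

    import heapq, os, tempfile

    def write_sorted_chunk(chunk):
        """Sort one chunk in memory and write it to a temporary file, one number per line."""
        fd, path = tempfile.mkstemp(text=True)
        with os.fdopen(fd, "w") as f:
            for value in sorted(chunk):
                f.write(f"{value}\n")
        return path

    def external_sort(values, chunk_size=1000):
        """Sort data that may not fit in RAM: sort chunks on disk, then k-way merge them."""
        paths, chunk = [], []
        for value in values:
            chunk.append(value)
            if len(chunk) == chunk_size:
                paths.append(write_sorted_chunk(chunk))
                chunk = []
        if chunk:
            paths.append(write_sorted_chunk(chunk))
        files = [open(path) for path in paths]
        merged = list(heapq.merge(*(map(int, f) for f in files)))
        for f in files:
            f.close()
        for path in paths:
            os.remove(path)
        return merged

    print(external_sort([5, 3, 9, 1, 7, 2], chunk_size=2))  # [1, 2, 3, 5, 7, 9]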

4. Parallel processing

In real-world scenarios, terabytes of data still need to be sorted efficiently. Procuring extremely powerful machines is often not feasible due to cost. Instead, the data can be divided and assigned to cheaper machines that sort their portions independently, with the results merged later to create the final sorted output. Algorithms that follow a divide-and-conquer approach, such as Quick sort, Merge sort, Map sort, and Tera sort, are good candidates for parallel processing.
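A toy sketch of this divide, sort, and merge idea using Python's multiprocessing module; in a real cluster the chunks would live on separate machines, and the worker count and sample data here are arbitrary.

    import heapq
    from multiprocessing import Pool

    def parallel_sort(data, workers=4):
        """Split the data, sort each chunk in a separate process, then merge the sorted chunks."""
        chunk_size = max(1, len(data) // workers)
        chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
        with Pool(workers) as pool:
            sorted_chunks = pool.map(sorted, chunks)
        return list(heapq.merge(*sorted_chunks))

    if __name__ == "__main__":
        print(parallel_sort([5, 3, 9, 1, 7, 2, 8, 6]))  # [1, 2, 3, 5, 6, 7, 8, 9]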

5. Stability

A sorting algorithm is deemed stable if two objects with equal sorting keys appear in the same order in the sorted output as they did in the original input. This is essential in applications where other attributes of an object matter to us. Imagine that you have a list of students’ records, including their grades, already sorted by name. If we want to sort the list by grade without disturbing the name order whenever grades are identical, we need a stable sort such as Insertion sort or Merge sort. Remember that although some sorts like Quick sort and Heap sort are not stable by nature, they can be modified to support stability.
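Python's built-in sort (Timsort) is stable, so it can illustrate the student example; the records below are invented and already ordered by name.

    # Records already ordered by student name.
    students = [("Alice", 90), ("Bob", 85), ("Carol", 90), ("Dave", 85)]

    # A stable sort by grade keeps Alice before Carol and Bob before Dave.
    by_grade = sorted(students, key=lambda student: student[1])
    print(by_grade)  # [('Bob', 85), ('Dave', 85), ('Alice', 90), ('Carol', 90)]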

6. Assumptions about input data

In many cases, having extra information about the data and its range allows you to choose a reliable sorting algorithm for that specific purpose. Consider non-comparison-based algorithms such as Radix and Counting sorts that give a linear time complexity for sorting numbers. Sometimes, due to special structures of data, applying regular sorting algorithms is impossible. A well-known example is using Topological sort to put graph nodes in order depending on their relations.
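As a sketch, here is a minimal counting sort in Python, assuming the inputs are non-negative integers whose maximum value is known in advance; the function name and sample values are illustrative.

    def counting_sort(nums, max_value):
        """Linear-time sort for integers known to lie in the range 0..max_value."""
        counts = [0] * (max_value + 1)
        for n in nums:
            counts[n] += 1
        result = []
        for value, count in enumerate(counts):
            result.extend([value] * count)
        return result

    print(counting_sort([4, 2, 2, 8, 3, 3, 1], max_value=8))  # [1, 2, 2, 3, 3, 4, 8]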

Conclusion: Know the problem space

In the end, there is no best or worst sorting algorithm. Each algorithm can be useful for certain applications. Also, the same algorithm might change its behavior depending on the size of the input data. The very first approach for choosing a sorting solution is to determine the problem space and your requirements. Occasionally, a single algorithm cannot satisfy the requirements and a hybrid approach produces a better outcome. Once the problem space is known, you can then narrow down the list of potential good-enough algorithms and perform a practical analysis on the input data to reach a final solution.

The word Algorithm means “a set of rules to be followed in calculations or other problem-solving operations” or “a procedure for solving a mathematical problem in a finite number of steps that frequently involves recursive operations”.

Therefore, an algorithm refers to a sequence of finite steps to solve a particular problem.



Algorithms can be simple or complex depending on what you want to achieve.

It can be understood by taking the example of cooking a new recipe. To cook a new recipe, one reads the instructions and steps and executes them one by one, in the given sequence. The result thus obtained is the new dish cooked perfectly. Every time you use your phone, computer, laptop, or calculator you are using Algorithms. Similarly, algorithms help to do a task in programming to get the expected output.

Algorithms are language-independent, i.e. they are just plain instructions that can be implemented in any language, and the output will be the same, as expected.

What are the Characteristics of an Algorithm?

Just as one would not follow arbitrary written instructions to cook a recipe, but only a standard one, not every set of written programming instructions is an algorithm. In order for some instructions to be an algorithm, they must have the following characteristics:

  • Clear and Unambiguous: The algorithm should be clear and unambiguous. Each of its steps should be clear in all aspects and must lead to only one meaning.
  • Well-Defined Inputs: If an algorithm takes inputs, those inputs should be well defined.
  • Well-Defined Outputs: The algorithm must clearly define what output will be yielded, and that output should be well defined as well.
  • Finiteness: The algorithm must be finite, i.e. it should terminate after a finite time.
  • Feasible: The algorithm must be simple, generic, and practical, such that it can be executed with the available resources. It must not rely on any future technology.
  • Language Independent: The algorithm designed must be language-independent, i.e. it must consist of plain instructions that can be implemented in any language, and the output will be the same, as expected.

Properties of Algorithm:

  • It should terminate after a finite time.
  • It should produce at least one output.
  • It should take zero or more input.
  • It should be deterministic, i.e. it should give the same output for the same input.
  • Every step in the algorithm must be effective i.e. every step should do some work.

Types of Algorithms:

There are several types of algorithms available. Some important algorithms are:

1. Brute Force Algorithm: It is the simplest approach to a problem. A brute force algorithm is the first approach that comes to mind when we see a problem.

2. Recursive Algorithm: A recursive algorithm is based on recursion. In this case, a problem is broken into several sub-parts and the same function is called again and again on them.

3. Backtracking Algorithm: The backtracking algorithm builds the solution by searching among all possible solutions. Using this approach, we keep building the solution according to the problem's criteria; whenever a candidate solution fails, we trace back to the failure point, build the next candidate, and continue this process until we find a solution or all possible solutions have been explored.

4. Searching Algorithm: Searching algorithms are the ones that are used for searching elements or groups of elements from a particular data structure. They can be of different types based on their approach or the data structure in which the element should be found.

5. Sorting Algorithm: Sorting is arranging a group of data in a particular manner according to the requirement. The algorithms which help in performing this function are called sorting algorithms. Generally sorting algorithms are used to sort groups of data in an increasing or decreasing manner.

6. Hashing Algorithm: Hashing algorithms work similarly to the searching algorithm. But they contain an index with a key ID. In hashing, a key is assigned to specific data.

7. Divide and Conquer Algorithm: This algorithm breaks a problem into sub-problems, solves each sub-problem, and merges the solutions together to get the final solution. It consists of three steps: divide the problem into sub-problems, conquer (solve) each sub-problem, and combine the partial solutions (see the merge sort sketch after this list).

8. Greedy Algorithm: In this type of algorithm the solution is built part by part; at each step, the option offering the most immediate benefit is chosen as the solution for the next part.

9. Dynamic Programming Algorithm: This algorithm reuses already computed solutions to avoid repetitive calculation of the same part of the problem. It divides the problem into smaller overlapping subproblems and solves them.

10. Randomized Algorithm: In a randomized algorithm we use a random number to make a decision within the algorithm; the random number helps in determining the expected outcome.
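To make the divide-and-conquer steps from item 7 concrete, here is a minimal merge sort sketch in Python; function names and sample values are illustrative only.

    def merge_sort(items):
        # Divide: split the list in half until single elements remain.
        if len(items) <= 1:
            return items
        mid = len(items) // 2
        left = merge_sort(items[:mid])   # Conquer: sort each half recursively.
        right = merge_sort(items[mid:])
        return merge(left, right)        # Combine: merge the two sorted halves.

    def merge(left, right):
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        return merged + left[i:] + right[j:]

    print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]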

To learn more about the types of algorithms refer to the article about “Types of Algorithms“.

Advantages of Algorithms:

  • It is easy to understand.
  • An algorithm is a step-wise representation of a solution to a given problem.
  • In an algorithm, the problem is broken down into smaller pieces or steps; hence, it is easier for the programmer to convert it into an actual program.

Disadvantages of Algorithms:

  • Writing an algorithm takes a long time, so it can be time-consuming.
  • Understanding complex logic through algorithms can be very difficult.
  • Branching and looping statements are difficult to show in algorithms.

How to Design an Algorithm?

In order to write an algorithm, the following things are needed as a pre-requisite: 
 

  1. The problem that is to be solved by this algorithm i.e. clear problem definition.
  2. The constraints of the problem must be considered while solving the problem.
  3. The input to be taken to solve the problem.
  4. The output to be expected when the problem is solved.
  5. The solution to the problem, within the given constraints.

Then the algorithm is written with the help of the above parameters such that it solves the problem.
Example: Consider the example to add three numbers and print the sum.
 

  • Step 1: Fulfilling the pre-requisites 
    As discussed above, in order to write an algorithm, its pre-requisites must be fulfilled. 
    1. The problem that is to be solved by this algorithm: Add 3 numbers and print their sum.
    2. The constraints of the problem that must be considered while solving the problem: The numbers must contain only digits and no other characters.
    3. The input to be taken to solve the problem: The three numbers to be added.
    4. The output to be expected when the problem is solved: The sum of the three numbers taken as the input i.e. a single integer value.
    5. The solution to this problem, within the given constraints: The solution consists of adding the 3 numbers. It can be done with the help of the ‘+’ operator, bit-wise operations, or any other method.
  • Step 2: Designing the algorithm
    Now let’s design the algorithm with the help of the above pre-requisites:

    Algorithm to add 3 numbers and print their sum: 

    1. START
    2. Declare 3 integer variables num1, num2 and num3.
    3. Take the three numbers, to be added, as inputs in variables num1, num2, and num3 respectively.
    4. Declare an integer variable sum to store the resultant sum of the 3 numbers.
    5. Add the 3 numbers and store the result in the variable sum.
    6. Print the value of the variable sum
    7. END
  • Step 3: Testing the algorithm by implementing it.
    In order to test the algorithm, let’s implement it in C language.

Program:

    #include <stdio.h>

    // Program to add three numbers and print their sum (C implementation of the algorithm above).
    int main()
    {
        int num1, num2, num3, sum;

        // Take the three numbers as input.
        printf("Enter the 1st number: ");
        scanf("%d", &num1);
        printf("Enter the 2nd number: ");
        scanf("%d", &num2);
        printf("Enter the 3rd number: ");
        scanf("%d", &num3);

        // Add the 3 numbers and store the result in sum.
        sum = num1 + num2 + num3;

        // Print the value of sum.
        printf("\nSum of the 3 numbers is: %d", sum);

        return 0;
    }

Output:
Enter the 1st number: 0
Enter the 2nd number: 0
Enter the 3rd number: -1577141152
Sum of the 3 numbers is: -1577141152

One problem, many solutions: The solution to a problem may or may not be unique, meaning that while implementing the algorithm, there can be more than one way to implement it. For example, in the above problem of adding 3 numbers, the sum can be calculated in many ways (a short sketch follows the list below):

  • the ‘+’ operator
  • bit-wise operators
  • etc.
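As a sketch of this point, here are two of the listed approaches in Python: the ‘+’ operator and a bitwise addition loop (shown here only for non-negative integers); the function names and sample values are illustrative.

    def add_with_operator(a, b, c):
        return a + b + c

    def add_bitwise(a, b):
        # Add two non-negative integers using XOR for the sum bits and AND for the carry bits.
        while b:
            carry = (a & b) << 1
            a = a ^ b
            b = carry
        return a

    print(add_with_operator(10, 20, 30))         # 60
    print(add_bitwise(add_bitwise(10, 20), 30))  # 60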

How to analyze an Algorithm? 

For a standard algorithm to be good, it must be efficient. Hence the efficiency of an algorithm must be checked and maintained. This can be done at two stages:

  1. Priori Analysis: “Priori” means “before”. Hence priori analysis means checking the algorithm before its implementation, while it is still written in the form of theoretical steps. Here the efficiency of the algorithm is measured by assuming that all other factors, for example processor speed, are constant and have no effect on the implementation. This is usually done by the algorithm designer. The analysis is independent of the type of hardware and the programming language or compiler, and it gives approximate answers for the complexity of the program.
  2. Posterior Analysis: “Posterior” means “after”. Hence posterior analysis means checking the algorithm after its implementation, by implementing it in a programming language and executing it. This analysis yields an actual, real report about correctness, space required, time consumed, etc. It is therefore dependent on the programming language, compiler, and the type of hardware used (a small timing sketch follows this list).
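A small sketch of posterior analysis in Python using the standard timeit module: the actual running time of the built-in sort is measured on two input sizes instead of being estimated theoretically. The list sizes and run count are arbitrary.

    import random
    import timeit

    small = [random.random() for _ in range(1_000)]
    large = [random.random() for _ in range(100_000)]

    # Posterior analysis: measure the real execution time on actual hardware.
    t_small = timeit.timeit(lambda: sorted(small), number=100)
    t_large = timeit.timeit(lambda: sorted(large), number=100)

    print(f"sorting   1,000 items, 100 runs: {t_small:.4f} s")
    print(f"sorting 100,000 items, 100 runs: {t_large:.4f} s")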

What is Algorithm complexity and how to find it?

An algorithm is defined as complex based on the amount of Space and Time it consumes. Hence the Complexity of an algorithm refers to the measure of the Time that it will need to execute and get the expected output, and the Space it will need to store all the data (input, temporary data and output). Hence these two factors define the efficiency of an algorithm. 
The two factors of Algorithm Complexity are:

  • Time Factor: Time is measured by counting the number of key operations such as comparisons in the sorting algorithm.
  • Space Factor: Space is measured by counting the maximum memory space required by the algorithm.

Therefore the complexity of an algorithm can be divided into two types:

1. Space Complexity: The space complexity of an algorithm refers to the amount of memory used by the algorithm to store the variables and get the result. This can be for inputs, temporary operations, or outputs. 

How to calculate Space Complexity?
The space complexity of an algorithm is calculated by determining the following 2 components:

  • Fixed Part: This refers to the space that is definitely required by the algorithm. For example, input variables, output variables, program size, etc.
  • Variable Part: This refers to the space that can be different based on the implementation of the algorithm. For example, temporary variables, dynamic memory allocation, recursion stack space, etc.
    Therefore the space complexity S(P) of any algorithm P is S(P) = C + SP(I), where C is the fixed part and SP(I) is the variable part of the algorithm, which depends on instance characteristic I.

Example: Consider the below algorithm for Linear Search

Step 1: START
Step 2: Get the array in arr and the number to be searched in x
Step 3: Start from the leftmost element of arr[] and one by one compare x with each element of arr[]
Step 4: If x matches with an element, print True
Step 5: If x doesn’t match with any of the elements, print False
Step 6: END

Here there are 2 variables, arr[] and x, where arr[] is the variable part and x is the fixed part, hence S(P) = 1 + 1. The actual space also depends on the data types of the given variables and constants, and it will be multiplied accordingly.
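For reference, the linear search steps above written as a short Python function; the auxiliary space stays constant regardless of the array size, matching the analysis. The sample values are illustrative.

    def linear_search(arr, x):
        """Return True if x occurs in arr, scanning from the leftmost element."""
        for element in arr:      # Step 3: compare x with each element one by one.
            if element == x:
                return True      # Step 4: a match was found.
        return False             # Step 5: no element matched.

    print(linear_search([10, 25, 3, 42], 3))   # True
    print(linear_search([10, 25, 3, 42], 99))  # False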

2. Time Complexity: The time complexity of an algorithm refers to the amount of time that is required by the algorithm to execute and get the result. This can be for normal operations, conditional if-else statements, loop statements, etc.

How to calculate Time Complexity?
The time complexity of an algorithm is also calculated by determining the following 2 components: 

  • Constant time part: Any instruction that is executed just once comes in this part. For example, input, output, if-else, switch, etc.
  • Variable Time Part: Any instruction that is executed more than once, say n times, comes in this part. For example, loops, recursion, etc.
    Therefore the time complexity T(P) of any algorithm P is T(P) = C + TP(I), where C is the constant time part and TP(I) is the variable part of the algorithm, which depends on the instance characteristic I.

Example: In the algorithm of Linear Search above, the time complexity is calculated as follows:

Step 1: Constant time
Step 2: Constant time
Step 3: Variable time (up to the length of the array, say n, or the index of the found element)
Step 4: Constant time
Step 5: Constant time
Step 6: Constant time

Hence, T(P) = 5 + n, which can be expressed as T(n).

