1. Introduction to Algorithms
- Input, output, definiteness, finiteness, effectiveness.
- Top-down & Bottom-up approaches
- Algorithm complexity
- Space complexity
- Time complexity
- Time-space trade-off
- Big ‘O’ notation
An algorithm defines a process: how an operation is carried out. It is specified in terms of simple steps. Each step must be clearly implementable in a mechanical way, perhaps by a single machine instruction, by a single programming language statement or by a previously defined algorithm. An algorithm to solve a problem must be correct; it must produce correct output when given valid input data. When the input data is invalid, we usually want the algorithm to warn us and terminate gracefully. Often we want to know how much time an algorithm will take or how much space (computer memory) it will use; this leads to the analysis of algorithms and to the field of computational complexity.
Algorithms and data structures can be specified in any adequately precise language. English and other natural languages are satisfactory if used with care to avoid ambiguity but more precise mathematical languages and programming languages are generally preferred. The execution of the latter can also be automated. A program is the expression of some algorithm(s) and data structure(s) in a particular programming language. The program can be used on all types of computer for which suitable language compilers or interpreters exist, making it more valuable.
The basic features of an algorithm include appropriate input, output, definiteness, finiteness and effectiveness. An algorithm must operate upon a given set of inputs to give the desired output. It must definitely result in an output that is known or desired from its execution. It must be finite in itself, designed within appropriate boundaries. Finally, it must be effective and efficient in terms of space and time, keeping the complexity of its implementation as small as possible.
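For illustration (this sketch is not part of the original text, and the name find_max and the sample data are purely illustrative), the following C routine, which finds the largest of n values, shows these features in concrete form: a well-defined input (the array and its length), a definite output (the maximum), unambiguous steps, and termination after exactly n-1 comparisons.

#include <stdio.h>

/* Returns the largest of the n values in a[]; assumes n >= 1. */
int find_max(const int a[], int n)
{
    int max = a[0];                  /* input: array a of size n         */
    for (int i = 1; i < n; i++)      /* finiteness: exactly n-1 steps    */
        if (a[i] > max)
            max = a[i];              /* definiteness: each step is clear */
    return max;                      /* output: the desired result       */
}

int main(void)
{
    int data[] = { 4, 17, 9, 2 };
    printf("%d\n", find_max(data, 4));   /* prints 17 */
    return 0;
}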
Top-down and Bottom-up Approaches:
In the Top-down model an overview of the system is formulated first, without going into detail for any part of it. Each part of the system is then refined by designing it in more detail. Each new part may be refined again, defining it in yet more detail, until the entire specification is detailed enough to validate the model. The Top-down model is well suited to giving an overview of the proposed system, but it is often insufficient for understanding the elementary mechanisms.
By contrast, in Bottom-up design the individual parts of the system are specified in detail. The parts are then linked together to form larger components, which are in turn linked until a complete system is formed. Strategies based on the bottom-up information flow seem potentially necessary and sufficient because they are based on knowledge of all the variables that may affect the elements of the system.
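As a rough sketch of this contrast (the helper names read_input, sort_data and print_report are hypothetical), a top-down design in C fixes the overall flow first and refines the individual steps later, while a bottom-up design would build and test those steps first and then combine them into the same routine.

#include <stdio.h>

/* Illustrative stubs; in a real top-down design each would be refined
   into more detailed code, while a bottom-up design would write and
   test these parts first.                                              */
static void read_input(void)   { printf("reading input\n"); }
static void sort_data(void)    { printf("sorting data\n"); }
static void print_report(void) { printf("printing report\n"); }

/* The overall flow: obtain the data, arrange it, produce the output. */
int main(void)
{
    read_input();
    sort_data();
    print_report();
    return 0;
}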
Space Complexity:
Space complexity theory measures the amount of space required to run an algorithm or a program. While a better time complexity means an algorithm will carry out its work faster in practice, its space complexity, essentially the number of memory cells it requires, is equally important. A good algorithm keeps this number as small as possible.
Instruction Space:
Instruction space is the memory needed to store the compiled version of the program's instructions; data space, by contrast, is the space for simple variables, fixed-size structured variables and constants.
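As a minimal sketch (an illustration, not taken from the text above), the two C functions below compute the same sum: the first keeps only a few simple variables, i.e. constant extra space, while the second allocates an auxiliary array of n cells, i.e. O(n) extra space.

#include <stdio.h>
#include <stdlib.h>

/* O(1) extra space: only a counter and an accumulator. */
long sum_constant_space(int n)
{
    long sum = 0;
    for (int i = 1; i <= n; i++)
        sum += i;
    return sum;
}

/* O(n) extra space: stores every value 1..n before adding them. */
long sum_linear_space(int n)
{
    int *values = malloc((size_t)n * sizeof *values);   /* n extra memory cells */
    long sum = 0;
    if (values == NULL)
        return -1;                                      /* allocation failed    */
    for (int i = 0; i < n; i++)
        values[i] = i + 1;
    for (int i = 0; i < n; i++)
        sum += values[i];
    free(values);
    return sum;
}

int main(void)
{
    printf("%ld %ld\n", sum_constant_space(100), sum_linear_space(100));   /* 5050 5050 */
    return 0;
}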
Time Complexity:
Sometimes it is essential to estimate, before a program is actually executed, how much time it will need to run successfully and give the desired result. A good measure of the running time is the number of comparisons executed.
Time complexity theory measures the amount of time required to execute an algorithm or a program. It counts the number of “steps”, or computer operations (such as addition, subtraction, multiplication and assignment). A count of the number of steps needed to run an algorithm is its “raw time complexity”. A raw time complexity of 4n² − 5n + 41, for example, reduces to O(n²); a purely constant cost reduces to O(1).
The number of (machine) instructions which a program executes during its run is called its time complexity. This number depends primarily on the size of the program's input, for example on the number of strings to be sorted (and their length), and on the algorithm used.
There is often a time-space trade-off involved in a problem; that is, the problem cannot be solved with both little computing time and low memory consumption. One then has to make a compromise and exchange computing time for memory consumption, or vice versa, depending on which algorithm one chooses and how one parameterizes it.
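As a hedged illustration of such a trade-off, the two Fibonacci routines below make the choice explicit: one uses almost no extra memory but recomputes the same values again and again, while the other stores every intermediate result in a table and therefore needs far fewer operations.

#include <stdio.h>

/* Little memory, much time: plain recursion recomputes the same
   subproblems over and over (an exponential number of calls).     */
long long fib_slow(int n)
{
    return (n < 2) ? n : fib_slow(n - 1) + fib_slow(n - 2);
}

/* More memory, little time: a table of n+1 cells stores each value
   once, so the answer needs only about n additions (assumes n <= 90). */
long long fib_fast(int n)
{
    long long table[91];
    table[0] = 0;
    table[1] = 1;
    for (int i = 2; i <= n; i++)
        table[i] = table[i - 1] + table[i - 2];
    return table[n];
}

int main(void)
{
    printf("%lld %lld\n", fib_slow(30), fib_fast(30));   /* both print 832040 */
    return 0;
}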
In computer science, time complexities are always specified in the so-called O-notation. One way to describe a complexity is to say that a sorting method has running time O(n²). The symbol O is also called Landau's symbol.
Mathematically speaking, O(n²) stands for a set of functions: exactly those functions which, in the long run, do not grow faster than the function n², that is, those functions for which n² is an upper bound (apart from a constant factor). To be precise, the following holds: a function f is an element of the set O(n²) if there are a constant factor c and an integer n₀ such that for all n greater than or equal to n₀,
f(n) ≤ c · n²
A function f from O(n²) may grow considerably more slowly than n², so that, mathematically speaking, the quotient f(n)/n² converges to 0 as n grows. An example is the function f(n) = n.
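As a worked check of this definition, applied to the raw time complexity quoted earlier: for all n ≥ 1 we have −5n ≤ 0 and 41 ≤ 41n², so
4n² − 5n + 41 ≤ 4n² + 41n² = 45n²,
and the definition is satisfied with c = 45 and n₀ = 1. Hence 4n² − 5n + 41 is indeed O(n²).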
Ways to calculate frequency/efficiency of an algorithm:
Eg. (i):
for i = 1 to n
for j = i to n
i        j             No. of times
1        1 to n        n
2        2 to n        n-1
3        3 to n        n-2
…        …             …
n-1      n-1 to n      2
n        n to n        1
Therefore,
1 + 2 + 3 + … + (n-1) + n
= n(n+1)/2 (sum of n terms)
Frequency, f(n) = n(n+1)/2 = n²/2 + n/2 = 0.5n² + 0.5n
For large n the n² term dominates, and constant factors such as 0.5 are negligible and can be dropped.
Therefore, in Big-O notation, f(n) = O(n²).
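The count above can also be verified directly; the following C sketch (illustrative only) runs the two loops of Eg. (i) and counts how often the inner statement executes, which matches n(n+1)/2.

#include <stdio.h>

int main(void)
{
    int n = 10;
    long count = 0;

    for (int i = 1; i <= n; i++)        /* outer loop: i = 1 to n */
        for (int j = i; j <= n; j++)    /* inner loop: j = i to n */
            count++;                    /* one step per execution */

    /* For n = 10 both values are 55, i.e. n(n+1)/2. */
    printf("counted = %ld, formula = %ld\n", count, (long)n * (n + 1) / 2);
    return 0;
}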
Eg. (ii):
for i = 1 to n
sum = sum + i;
In Big-O notation, f(n) = O(n).
Linear Search:
1. Best Case: when the search value is found at the first position
f(n) = 1
2. Worst Case: when the value is not in the array, or is found at the last position
f(n) = n (n comparisons are made).
3. Average Case: the value is equally likely to be at any of the positions 1, 2, 3, …, n
(1 + 2 + 3 + … + n) / n
= n(n+1)/(2n) = (n+1)/2
= n/2 + 1/2
= 0.5n + 0.5
f(n) = O(n)
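A minimal C sketch of linear search (the function and variable names are illustrative) shows where these counts come from: the loop stops after one comparison in the best case and after n comparisons when the value is absent or sits at the last position.

#include <stdio.h>

/* Returns the index of key in a[0..n-1], or -1 if it is not present. */
int linear_search(const int a[], int n, int key)
{
    for (int i = 0; i < n; i++)    /* at most n comparisons (worst case) */
        if (a[i] == key)
            return i;              /* best case: found at i = 0          */
    return -1;                     /* key not in the array               */
}

int main(void)
{
    int data[] = { 7, 3, 9, 5 };
    printf("%d %d\n", linear_search(data, 4, 7),    /* best case: index 0    */
                      linear_search(data, 4, 8));   /* worst case: not found */
    return 0;
}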
Thus, Big-O notation is a useful tool for measuring time complexity.