As with data structures, in a Computer Science course you’ll get to study algorithms a lot.

You might have heard the terms

- bubble sort
- quick sort
- insertion sort
- binary search
- search by hashing
- serial search

Most CS courses will introduce all of these, and you’ll dig into all sorts of details about them. Not here, because we have very little time and lots of things to do! But I might create a dedicated course about them in the future, because they are important to know.

What’s very important is learning how to calculate the complexity of an algorithm. We express it using **Big-O** notation.

Some algorithms scale very poorly as the number of inputs grows.

Some others scale very well.

Ranging from most efficient to least efficient, here’s how we classify algorithms:

- Constant - O(1)
- Logarithmic - O(log n)
- Linear - O(n)
- Linearithmic - O(n log n)
- Quadratic - O(n²)
- Exponential - O(2^n)
- Factorial - O(n!)

Let’s apply this to the time metric.
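To get a feel for why these classes are ordered this way, here’s a quick sketch in Python (the `growth` function is just an illustration I made up for this example) that computes how many “steps” each class takes as the input size n grows:

```python
import math

# Hypothetical helper: number of "steps" each complexity class
# performs for an input of size n.
def growth(n):
    return {
        "O(1)": 1,
        "O(log n)": round(math.log2(n)),
        "O(n)": n,
        "O(n log n)": round(n * math.log2(n)),
        "O(n^2)": n ** 2,
    }

for n in (10, 100, 1000):
    print(n, growth(n))
```

Notice how O(1) stays flat while O(n²) explodes: at n = 1000 the quadratic algorithm already performs a million steps.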

Constant efficiency means that whether you have 1 input or 10,000 inputs, the time required to complete the algorithm is the same.
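A classic example of constant time in Python is accessing a list element by index. This little sketch (the `first_element` name is mine, just for illustration) does one operation no matter how long the list is:

```python
def first_element(items):
    # O(1): a single index lookup, regardless of list length
    return items[0]

# Both calls take (roughly) the same time:
first_element([1, 2, 3])
first_element(list(range(10_000)))
```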

Linear efficiency means that the time required scales linearly with the number of inputs. If 1 input takes one second, then 600 inputs take 10 minutes.
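Summing a list is a typical linear-time operation: you have to visit every item once, so doubling the input roughly doubles the running time. A minimal sketch (the `total` name is just for this example):

```python
def total(items):
    # O(n): one pass over the list, one addition per item
    s = 0
    for item in items:
        s += item
    return s

total([1, 2, 3])  # 3 items, 3 additions
```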

Of course, creating efficient algorithms is hard. And the more complex an algorithm is, and the more central it is to the core business of your company or program, the bigger the benefits of making it more efficient.