Programmers often talk about the time complexity of an algorithm, e.g. O(log n) or O(n^2).
Time complexity is classified by behavior as the input size tends to infinity, yet, ironically, no real computation ever runs on an infinite input.
Put another way, an algorithm is classified based on a situation it will never actually be in: n = infinity.
Also, consider that a polynomial-time algorithm with a huge exponent is practically useless, while an exponential-time algorithm with a base very close to 1 (e.g., 1.00000001^n) can be perfectly practical.
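To make that point concrete, here is a quick back-of-the-envelope comparison (a throwaway Python sketch; the exponent 100, the base 1.00000001, and the sample sizes are arbitrary illustrations, not drawn from any particular algorithm):

```python
import math

# Rough illustration: a polynomial with a huge exponent (n^100) dwarfs a
# "mild" exponential (1.00000001^n) at every input size you would ever see
# in practice. Values are shown as orders of magnitude (powers of 10) so
# the output stays readable.
for n in [10, 1_000, 1_000_000, 1_000_000_000]:
    poly_exp = 100 * math.log10(n)            # log10 of n^100
    expo_exp = n * math.log10(1.00000001)     # log10 of 1.00000001^n
    print(f"n = {n:>13,}   n^100 ~ 10^{poly_exp:,.0f}   "
          f"1.00000001^n ~ 10^{expo_exp:,.4f}")
```

Even at n = 10^9, the "exponential" algorithm does on the order of 10^4 steps, while the "polynomial" one needs around 10^900.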
Given this, how much can I rely on Big-O time complexity when choosing an algorithm?