I think it is interesting how CSC148 and CSC165 are converging. This gives one a better sense of how big-O is used. Algorithms are an essential part of computer science, and with big-O it is possible to compare algorithms without having to actually run the code. Sorting and tree-search algorithms are two examples that can be evaluated in this way.
Saying that a function f is a member of O(g) just says that f does not grow faster than g times a constant multiplier c, for all inputs greater than a certain breakpoint B. In simple terms: there exist some c > 0 and B such that for every n >= B, f(n) <= c*g(n).
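To make the definition concrete, here is a small numeric illustration (a spot-check, not a proof) using example values I picked myself: with f(n) = 5n + 20 and g(n) = n, the choices c = 6 and B = 20 satisfy the inequality for every n past the breakpoint.

```python
# Numeric illustration of f in O(g), with made-up example values:
# f(n) = 5n + 20, g(n) = n, and the witnesses c = 6, B = 20.
# Past the breakpoint B, f(n) never exceeds c*g(n).

def f(n):
    return 5 * n + 20

def g(n):
    return n

c, B = 6, 20

# Spot-check the inequality for a range of n at and beyond B.
ok = all(f(n) <= c * g(n) for n in range(B, 10_000))
print(ok)  # True: every sampled n >= B satisfies f(n) <= c*g(n)
```

Note that checking finitely many values of n can never prove membership in O(g); the inequality has to be established algebraically for all n >= B.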
Knowing this, there are two possibilities for an algorithm: either f(n) stays below c*g(n) for some choice of c and B, or it does not. One has to either prove or disprove this.
To prove it, one must exhibit a specific c and B and show that f(n) <= c*g(n) for every n >= B. To disprove it, one must prove the negation of the statement: for every choice of c and B, there is some n >= B for which the inequality fails. In practice this means finding such an n as a function of c and B.
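The disproof pattern can be sketched with a classic example (my choice, not from the course notes): n^2 is not in O(n). The negation says that for every c and B there is some n >= B with n*n > c*n, and the witness n = max(B, c + 1) does the job, since then n >= B and n > c.

```python
# Disproof sketch for "n^2 is in O(n)" (so: n^2 NOT in O(n)).
# For ANY c and B, the witness n = max(B, c + 1) satisfies
# n >= B and n > c, hence n*n > c*n -- the inequality fails.

def witness(c, B):
    return max(B, c + 1)

# Spot-check the witness formula over many (c, B) pairs.
for c in range(1, 50):
    for B in range(1, 50):
        n = witness(c, B)
        assert n >= B and n * n > c * n

print("the witness beats every sampled (c, B) pair")
```

The loop only samples finitely many pairs, but the witness formula itself is the heart of the real proof: it works for all c and B, which is exactly what the negation requires.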
Simple to say, not so easy to do.