No disrespect meant to orgrisel, but his answer is incomplete: it ignores the number of features entirely.
AdaBoost's time complexity is trivially O(T f), where T is the number of boosting rounds and f is the runtime of the weak learner in use.
For a typical decision tree such as C4.5 the time complexity is O(N D^2), where N is the number of data points and D is the number of features. A single-level decision tree (a decision stump) would be O(N D), giving an overall complexity of O(T N D) for AdaBoost with stumps.
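To make the O(T f) decomposition concrete, here is a minimal sketch (not a production implementation) of AdaBoost with decision stumps. The stump learner below uses a single candidate threshold per feature purely to keep the per-round cost visibly O(N D); a real stump search over sorted thresholds would cost O(D N log N). All function names here are illustrative, not from any particular library.

```python
import math

def train_stump(X, y, w):
    """Weak learner: a one-level decision tree (stump).
    Scans every feature over every sample once, so one call is O(N * D)."""
    n, d = len(X), len(X[0])
    best = None
    for j in range(d):                         # D features
        thresh = sum(row[j] for row in X) / n  # one candidate threshold (simplification)
        for sign in (1, -1):
            # weighted error of the rule: predict `sign` if x[j] > thresh, else -sign
            err = sum(wi for row, yi, wi in zip(X, y, w)
                      if (sign if row[j] > thresh else -sign) != yi)
            if best is None or err < best[0]:
                best = (err, j, thresh, sign)
    return best

def adaboost(X, y, T):
    """T rounds, each one weak-learner fit plus an O(N) reweighting pass,
    so the total cost is O(T * f) with f = O(N * D) for stumps."""
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(T):
        err, j, thresh, sign = train_stump(X, y, w)
        err = max(err, 1e-10)                  # avoid log(0) on a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, j, thresh, sign))
        for i, (row, yi) in enumerate(zip(X, y)):   # reweight: O(N)
            pred = sign if row[j] > thresh else -sign
            w[i] *= math.exp(-alpha * yi * pred)
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def predict(ensemble, row):
    score = sum(alpha * (sign if row[j] > t else -sign)
                for alpha, j, t, sign in ensemble)
    return 1 if score >= 0 else -1
```

Counting operations in the two loops recovers the stated bound: T iterations, each dominated by the O(N D) stump scan.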
You should never use experimentation to determine the runtime complexity of an algorithm, as has been suggested. First, you will be unable to easily distinguish between similar complexities such as O(N log(N)) and O(N log(N)^2). You also risk being fooled by underlying implementation details. For example, many sorts exhibit O(N) behavior when the data is mostly sorted or contains only a few unique values; feed in such an input and the measured runtime will be faster than the expected general case.
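The sorting pitfall is easy to demonstrate. Python's built-in sort (Timsort) detects pre-existing runs, so it behaves close to O(N) on already-sorted input but O(N log N) on random input; a naive timing experiment on sorted data would therefore wildly underestimate the general-case complexity.

```python
import random
import timeit

N = 1_000_000
data = [random.random() for _ in range(N)]
pre_sorted = sorted(data)

# Timsort exploits existing order: near-linear on sorted input,
# O(N log N) on random input of the same size.
t_sorted = timeit.timeit(lambda: sorted(pre_sorted), number=3)
t_random = timeit.timeit(lambda: sorted(data), number=3)
print(f"sorted input: {t_sorted:.3f}s  random input: {t_random:.3f}s")
```

On typical hardware the sorted-input case runs many times faster, even though both calls are "the same algorithm on N elements".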