Question

For example, in real life, you're working with real-time data that is constantly appended to. Computer science assumes a static problem, like the traveling salesman problem. In reality you would start with a set of cities and add and remove them over time, so the solution at each individual step is trivial, and it's easier to store the result in memory than to run a fancy algorithm every time.

Likewise, if you want to traverse a data structure in a certain way, isn't it easier to create a system of pointers and just store it in memory (x is the last branch, etc.) than to find the last branch at runtime with an algorithm?

Is computer science actually so excessively computational that it ignores easy in-memory or pointer solutions to most data structure problems?

Solution

Your question takes a very narrow and inaccurate view of what computer science is. Computer science doesn't "assume a static problem" -- to take just one example, there is an entire subfield studying online algorithms, which receive their input piece by piece.
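To make that concrete, here is a minimal sketch (my own illustration, not from any particular textbook) of exactly the scenario in your question: cities arrive over time, and instead of re-solving the whole tour from scratch, each new city is spliced into the existing tour wherever it adds the least length. The cheapest-insertion rule and Euclidean distances are assumptions chosen for illustration:

```python
import math

def dist(a, b):
    """Euclidean distance between two (x, y) points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def insert_city(tour, city):
    """Insert `city` into `tour` at the position that adds the least length.

    Cheapest-insertion heuristic: O(len(tour)) work per arrival instead
    of re-solving the whole tour from scratch.
    """
    if len(tour) < 2:
        tour.append(city)
        return tour
    best_i, best_cost = 0, float("inf")
    for i in range(len(tour)):
        a, b = tour[i], tour[(i + 1) % len(tour)]
        cost = dist(a, city) + dist(city, b) - dist(a, b)  # length added by splicing here
        if cost < best_cost:
            best_i, best_cost = i + 1, cost
    tour.insert(best_i, city)
    return tour

tour = []
for city in [(0, 0), (3, 0), (3, 4), (0, 4), (1, 1)]:
    insert_city(tour, city)  # cities arrive over time; the tour is updated in place
print(tour)
```

Maintaining the tour incrementally like this is exactly the update-in-place strategy you describe -- and it is itself an algorithm, with a per-update cost (O(n) per arrival here) and approximation behavior that computer science has, in fact, studied.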

Computer science is the systematic study of computing problems -- that is, of how to make efficient use of computing time and memory. If you have (correctly) worked out a better way to solve a problem, then you are doing computer science. In general, it makes no "static assumptions", nor does it ignore "in-memory pointer solutions". You'll find that nearly any problem has been studied, from nearly any angle, by someone in computer science.

When you describe "a system of pointers and how to store them in memory", what are you doing but describing an algorithm? An algorithm is a description of executable steps -- if you have devised a system of pointer bookkeeping that solves the problem, you have devised an algorithm. Even if that algorithm doesn't find the "optimal" solution, it is still an algorithm, and it may well be analyzable by the standard methods of computer science (e.g., average- and worst-case runtime, or approximation bounds).
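As a minimal sketch of that point (my own illustration, with hypothetical names): storing "x is the last branch" ahead of time is just a tail pointer, and keeping that pointer valid is itself a small, perfectly analyzable algorithm that runs on every append:

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

class LinkedList:
    def __init__(self):
        self.head = None
        self.tail = None  # the stored pointer: always the last node

    def append(self, value):
        node = Node(value)
        if self.tail is None:
            self.head = node
        else:
            self.tail.next = node
        self.tail = node  # the bookkeeping step: O(1) extra work per append

    def last(self):
        return self.tail  # O(1), vs. an O(n) traversal without the stored pointer

lst = LinkedList()
for v in [1, 2, 3]:
    lst.append(v)
print(lst.last().value)  # 3, without walking the list
```

The trade-off is the standard one computer science studies: O(1) extra work and memory per append buys an O(1) answer to a query that would otherwise cost O(n).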

It's true that many problems faced in the "real world" are not addressable solely by the algorithms and data structures taught in the first few years of a computer science education, and it's also true that many real-world problems do not require optimal solutions (not that computer science studies only optimal solutions -- it doesn't; heuristics and approximations are studied very intently).

However, the point of teaching the "standard" algorithms and data structures isn't only to hand you a complete toolkit; it's to provide a breadth of examples that introduce many different techniques:

  • Algorithms for NP-complete problems like the Traveling Salesman Problem introduce the concepts of heuristics (which may or may not be "complete") and approximation -- the insertion sketch above is one such heuristic
  • Data structures like doubling arrays introduce the concept of amortized analysis (see the first sketch after this list)
  • Data structures like balanced trees introduce worst-case analysis, showing both that naive implementations can lead to pathological runtimes and that there are limits on how much you can improve in general (e.g., you can do insertion and lookup better than $\Theta(n)$, but no better than $\Theta(\log n)$; the second sketch after this list shows the pathological case)
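First, a minimal sketch (my own, deliberately simplified) of a doubling array, to make the amortized-analysis point concrete: an individual append occasionally costs O(n) when the backing store is full, but any sequence of n appends costs O(n) in total, i.e., O(1) amortized per append:

```python
class DoublingArray:
    def __init__(self):
        self.capacity = 1
        self.size = 0
        self.data = [None] * self.capacity

    def append(self, value):
        if self.size == self.capacity:
            # The rare expensive step: copy everything into a buffer twice as big.
            self.capacity *= 2
            new_data = [None] * self.capacity
            new_data[:self.size] = self.data
            self.data = new_data
        self.data[self.size] = value
        self.size += 1

arr = DoublingArray()
for i in range(1000):
    arr.append(i)  # ~10 resizes total; all copies sum to fewer than 2n element moves
print(arr.size, arr.capacity)  # 1000 1024
```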
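Second, a sketch of the worst-case point (again my own illustration): a naive, unbalanced binary search tree degenerates into a chain when keys arrive in sorted order, so lookups cost $\Theta(n)$ -- balanced trees exist precisely to guarantee $\Theta(\log n)$ instead:

```python
class BST:
    """Naive binary search tree with no rebalancing."""

    def __init__(self):
        self.root = None

    def insert(self, key):
        if self.root is None:
            self.root = [key, None, None]  # [key, left, right]
            return
        node = self.root
        while True:
            child = 1 if key < node[0] else 2
            if node[child] is None:
                node[child] = [key, None, None]
                return
            node = node[child]

    def depth(self):
        def go(node):
            return 0 if node is None else 1 + max(go(node[1]), go(node[2]))
        return go(self.root)

t = BST()
for k in range(100):  # sorted insertions: the pathological case
    t.insert(k)
print(t.depth())  # 100 -- a "tree" that is really a chain
```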