Then in the back end I plan to build a string that points to the "viewport at the given zoom level", e.g. '02113', and I want to find all points whose tree-coordinates column starts with that prefix ('02113').
An ordinary index should perform well on any modern DBMS as long as you're matching on the left-most five (or six, or seven) characters of a string in an indexed column.
SELECT ...
...
WHERE column_name LIKE '02113%';
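One caveat: a plain b-tree index supports LIKE prefix matching only when the database uses the "C" locale. Under any other collation, you can declare the operator class explicitly when you create the index. Table and column names below are placeholders:

```sql
-- With a non-"C" locale, build the index with text_pattern_ops so that
-- LIKE 'prefix%' queries can still use it:
CREATE INDEX your_index_name ON your_table (column_name text_pattern_ops);

-- This index can then serve the prefix query:
SELECT * FROM your_table WHERE column_name LIKE '02113%';
```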
In PostgreSQL, you can also build an index on an expression. So you could create an index on the first five characters.
CREATE INDEX your_index_name ON your_table (left(column_name, 5));
I'd expect PostgreSQL's query optimizer to pick the right index if there were three or four like that. (One for 5 characters, one for 6 characters, etc.)
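A sketch of what that family of expression indexes might look like, one per prefix length (zoom level) you expect to filter on; names are placeholders:

```sql
-- One expression index per prefix length / zoom level:
CREATE INDEX your_table_left5_idx ON your_table (left(column_name, 5));
CREATE INDEX your_table_left6_idx ON your_table (left(column_name, 6));
CREATE INDEX your_table_left7_idx ON your_table (left(column_name, 7));

-- For the planner to use one of these, the expression in the query
-- must match the indexed expression exactly:
SELECT * FROM your_table WHERE left(column_name, 6) = '021131';
```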
I built a table and populated it with a million rows of random data.
In the following query, PostgreSQL's query optimizer did pick the right index.
explain analyze
select s
from coords
where left(s, 5) = '12345';
It returned in 0.1 ms.
I also tested using GROUP BY. Again, PostgreSQL's query optimizer picked the right index.
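The plan below came from a query of roughly this shape. The exact prefix length is my assumption; the 90 output rows in the plan are consistent with a two-character prefix over digit strings:

```sql
explain analyze
select left(s, 2), count(*)
from coords
group by left(s, 2);
```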
"GroupAggregate (cost=0.00..62783.15 rows=899423 width=8) (actual time=91.300..3096.788 rows=90 loops=1)"
" -> Index Scan using coords_left_idx1 on coords (cost=0.00..46540.36 rows=1000000 width=8) (actual time=0.051..2915.265 rows=1000000 loops=1)"
"Total runtime: 3096.914 ms"
An expression like left(name, 2) in the GROUP BY clause requires PostgreSQL to touch every row in the index, if not every row in the table. That's why my query took 3096 ms: it had to touch a million rows in the index. But you can see from the EXPLAIN plan that it used the index.
Ordinarily, I'd expect a geographic application to use a bounding box against a PostGIS table to reduce the number of rows you access. If your quad tree implementation can't do better than that, I'd stick with PostGIS long enough to become an expert with it. (You won't know for sure that it can't do the job until you've spent some time in it.)
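For comparison, a bounding-box query against a PostGIS table typically looks like the sketch below. The table, column, and coordinates are placeholders; the && operator is an index-assisted overlap test backed by a GiST index, and ST_MakeEnvelope(xmin, ymin, xmax, ymax, srid) builds the box:

```sql
-- GiST index on the geometry column:
CREATE INDEX trees_geom_idx ON trees USING GIST (geom);

-- Fetch only rows whose geometry overlaps the viewport's bounding box:
SELECT *
FROM trees
WHERE geom && ST_MakeEnvelope(-71.07, 42.35, -71.05, 42.37, 4326);
```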