Use the second approach. 1000 users * 10 categories = 10000 rows, which is by no means considered "large" in the database world.
Unless your client library forces you otherwise, you should use a natural key design:
```
Counter(user_id, category_id, count, PRIMARY KEY (user_id, category_id))
```
If your DBMS supports clustering, this whole table can be physically represented as a single B-Tree, which is efficient to query, modify and cache.
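As a concrete sketch, here is that design in SQLite, where `WITHOUT ROWID` plays the role of clustering (the table becomes a single B-Tree keyed on the primary key). The upsert syntax assumes SQLite 3.24+; table and column names are just illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# WITHOUT ROWID stores the table as one B-Tree keyed on the primary key,
# SQLite's closest equivalent to a clustered index.
conn.execute("""
    CREATE TABLE counter (
        user_id     INTEGER NOT NULL,
        category_id INTEGER NOT NULL,
        count       INTEGER NOT NULL DEFAULT 0,
        PRIMARY KEY (user_id, category_id)
    ) WITHOUT ROWID
""")

def record_visit(user_id, category_id):
    # Insert a new counter row or bump the existing one, atomically.
    conn.execute("""
        INSERT INTO counter (user_id, category_id, count)
        VALUES (?, ?, 1)
        ON CONFLICT (user_id, category_id) DO UPDATE SET count = count + 1
    """, (user_id, category_id))

record_visit(1, 7)
record_visit(1, 7)
print(conn.execute(
    "SELECT count FROM counter WHERE user_id = 1 AND category_id = 7"
).fetchone()[0])  # → 2
```

Because `(user_id, category_id)` is a natural key, there is exactly one row per user/category pair and the upsert keeps writes to a single statement.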
That being said, are you sure you need the count for eternity? Perhaps it would be better to keep the count only for the last 30 days¹? That would require: 1000 users * 10 categories * 30 days = 300000 rows, which is still not particularly "large".
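The rolling-window variant just adds the day to the primary key and periodically deletes rows that fall out of the window. A minimal sketch of that idea, again in SQLite (the `prune` helper and its 30-day default are assumptions, not anything from the question):

```python
import sqlite3
import datetime

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE counter (
        user_id     INTEGER NOT NULL,
        category_id INTEGER NOT NULL,
        day         TEXT NOT NULL,           -- ISO date, e.g. '2024-06-01'
        count       INTEGER NOT NULL DEFAULT 0,
        PRIMARY KEY (user_id, category_id, day)
    )
""")

def record_visit(user_id, category_id, day):
    conn.execute("""
        INSERT INTO counter (user_id, category_id, day, count)
        VALUES (?, ?, ?, 1)
        ON CONFLICT (user_id, category_id, day) DO UPDATE SET count = count + 1
    """, (user_id, category_id, day))

def prune(today, keep_days=30):
    # Periodic job: drop rows older than the retention window.
    cutoff = (today - datetime.timedelta(days=keep_days)).isoformat()
    conn.execute("DELETE FROM counter WHERE day < ?", (cutoff,))

record_visit(1, 7, "2024-01-01")
record_visit(1, 7, "2024-06-01")
prune(datetime.date(2024, 6, 15))

# Only the visit inside the window survives.
total = conn.execute(
    "SELECT SUM(count) FROM counter WHERE user_id = 1 AND category_id = 7"
).fetchone()[0]
print(total)  # → 1
```

Querying a user's total per category then becomes a `SUM(count)` over at most 30 rows per pair, which the composite primary key serves directly.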
Alternatively, you might run a periodic batch job that multiplies all counts by some factor less than 1 (say 0.9), which would make old visits less "important" than the new ones. You'd probably want to use some floating-point type (as opposed to integer) for the counter in that scenario.
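The decay approach boils down to one batch `UPDATE` over the whole table; here is a sketch with a `REAL` counter column so repeated scaling doesn't truncate small values to zero (the 0.9 factor and `decay_counts` name are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# REAL (floating-point) counter, since repeated decay produces fractions.
conn.execute("""
    CREATE TABLE counter (
        user_id     INTEGER NOT NULL,
        category_id INTEGER NOT NULL,
        count       REAL NOT NULL DEFAULT 0,
        PRIMARY KEY (user_id, category_id)
    )
""")
conn.execute("INSERT INTO counter VALUES (1, 7, 100.0)")

def decay_counts(factor=0.9):
    # Periodic batch job: scale every counter so older visits
    # contribute exponentially less than recent ones.
    conn.execute("UPDATE counter SET count = count * ?", (factor,))

decay_counts()
decay_counts()
print(conn.execute(
    "SELECT count FROM counter WHERE user_id = 1 AND category_id = 7"
).fetchone()[0])  # → 81.0
```

Run nightly with factor 0.9, a visit's weight halves roughly every week, so the counters behave like an exponentially weighted "recent interest" score rather than an all-time total.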
¹ Or 90 or whatever...