It isn't a concurrent server unless you either fork() a child process or hand the connection off to a (new) thread; that is essentially the definition of a concurrent server.
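To make that concrete, here is a minimal sketch of the thread-per-connection idea in Python (the fork() variant is structurally the same); `handle` and `serve` are names I've made up for illustration:

```python
import socket
import threading

def handle(conn):
    # Echo back whatever the client sends until it closes the connection.
    with conn:
        while data := conn.recv(1024):
            conn.sendall(data)

def serve(srv, max_conns=None):
    # Concurrent accept loop: each connection is handed to its own
    # thread, so one slow client does not block the next accept().
    n = 0
    while max_conns is None or n < max_conns:
        conn, _addr = srv.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()
        n += 1
```

With fork() instead of a thread, the parent would close its copy of the accepted socket and loop back to accept(), while the child handles the client and then exits.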
If I'm reading your code correctly, what you've got is a simple sequential server: it can only process one connection at a time. That's fine if the computation required for each response is minimal, as in your example. It's not so good if each response involves real work, such as accessing a disk or a database.
Note that a sequential server design is completely legitimate, and so is a concurrent one; they suit different workloads. Generally, though, a concurrent server will handle large traffic volumes better than a sequential server. Imagine if Google used sequential servers to respond to search requests!
Another design uses a thread pool or process pool: one thread or process accepts connections and farms the work out to a fixed set of workers. These designs are trickier to write so that they work well, but they avoid the cost of creating a new thread or process for every connection.