Just answered a similar question yesterday. Based on your description and the picture, I don't see a need to change your architecture. If you're using one of the main WCF bindings (webHttpBinding, wsHttpBinding or basicHttpBinding), the service you deploy should easily be able to handle dozens of concurrent users, all saving and reading at the same time.
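To make that concrete, here's roughly what a minimal self-hosted service over basicHttpBinding looks like. The `IPackageService` / `PackageService` names and the endpoint address are placeholders I made up, not anything from your project. Swapping in `WSHttpBinding` is a one-line change (webHttpBinding also needs a WebHttp endpoint behavior):

```csharp
using System;
using System.ServiceModel;

[ServiceContract]
public interface IPackageService
{
    [OperationContract]
    string SavePackage(byte[] payload);
}

public class PackageService : IPackageService
{
    public string SavePackage(byte[] payload)
    {
        // Talk to the database here; each request arrives as its own call.
        return "saved " + payload.Length + " bytes";
    }
}

class Program
{
    static void Main()
    {
        var host = new ServiceHost(typeof(PackageService),
                                   new Uri("http://localhost:8000/packages"));
        // basicHttpBinding in config corresponds to BasicHttpBinding in code.
        host.AddServiceEndpoint(typeof(IPackageService),
                                new BasicHttpBinding(), "");
        host.Open();
        Console.WriteLine("Service listening; press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}
```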
Each client request will generate its own connection and web service objects, each of which can communicate concurrently with your database, whether that request is to read data or write data. When the response is sent back to the client, your WCF service will destroy the objects and clean up the memory for you as long as you're not doing something strange.
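If you'd rather make that per-request lifecycle explicit instead of relying on the binding's defaults (basicHttpBinding has no sessions, so it effectively behaves per-call anyway), you can say so with a ServiceBehavior attribute. Again just a sketch, reusing the made-up contract from above:

```csharp
using System.ServiceModel;

[ServiceContract]
public interface IPackageService
{
    [OperationContract]
    string SavePackage(byte[] payload);
}

// PerCall tells WCF to build a fresh service object for every request and
// dispose of it once the reply is sent -- the lifecycle described above.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
public class PackageService : IPackageService
{
    public string SavePackage(byte[] payload)
    {
        // No instance state survives between calls, so concurrent readers
        // and writers never share this object; open your database
        // connection here and let it go away with the instance.
        return "saved " + payload.Length + " bytes";
    }
}
```

Note that the default ConcurrencyMode (Single) is fine with per-call instancing: each object only ever services one request, and your concurrency comes from WCF dispatching many instances in parallel.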
I've spent the last two years working on WCF web services on an industrial scale. Lately I've been working on a load testing / benchmarking project that spins up hundreds of concurrent users, each of which slams our WCF test server with XML artifacts that get loaded into the database. We've managed to load up to 160 packages per second (each about 110 KB). WCF is not perfect, but it's quick, clean and scales really well.
My experience has been that your database will be your bottleneck, not your WCF web service. If your client wants to scale this architecture up to an Amazon-sized web service, then you bring in an F5 load balancer and scale it out that way.