Question

I have a data-intensive application where speed is vital and network traffic should be kept as low as possible.
The sets of elements returned by the different queries usually overlap but are not identical.
If I optimize every request with the most specific query and DTO, the number of operations and types will most likely grow fast and none of them will be reusable. On the other hand, generalization allows code reuse at the price of losing performance.
Are there any good practices or guidelines for handling this problem, other than using common sense and taking measurements?


Solution

Over time, I have begun shifting to specialized queries.
If you want your back-end to stay generic, you can cherry-pick which properties of an object to serialize.
That way you keep a heavy back-end OO design with great reusability but a small network footprint. This is typically done with JSON.
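As a minimal sketch of that cherry-picking idea, assuming Jackson is the JSON library: its `@JsonView` annotation lets you tag which fields belong to which response shape and pick a view per request. The `Customer` class and the view names here are made up purely for illustration.

```java
import com.fasterxml.jackson.annotation.JsonView;
import com.fasterxml.jackson.databind.ObjectMapper;

public class ViewsDemo {

    // Marker classes naming the serialization views (illustrative names).
    static class Views {
        static class Summary {}
        static class Detail extends Summary {}
    }

    // One rich back-end object; the annotations decide what each view exposes.
    static class Customer {
        @JsonView(Views.Summary.class) public long id = 42;
        @JsonView(Views.Summary.class) public String name = "Acme Ltd";
        @JsonView(Views.Detail.class)  public String billingAddress = "1 Main St";
        @JsonView(Views.Detail.class)  public String notes = "prefers email";
    }

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        Customer customer = new Customer();

        // Lightweight payload for list screens: only Summary fields are written.
        String summary = mapper.writerWithView(Views.Summary.class)
                               .writeValueAsString(customer);

        // Full payload for a detail screen: Detail extends Summary, so all fields go out.
        String detail = mapper.writerWithView(Views.Detail.class)
                              .writeValueAsString(customer);

        System.out.println(summary); // {"id":42,"name":"Acme Ltd"}
        System.out.println(detail);
    }
}
```

Because the same domain type backs every view, the number of operations and DTOs stays small while each response only carries the fields that screen actually needs.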

If you follow layered architecture and OO principles strictly, you will end up shuffling way too much data. For example, if your business tier must be cleanly cut off from your persistence layer, you have to populate every data field. So what I do is forget about the clean cut, let the DB-connected persistence object wander up the layers so I can use lazy loading, and only let go of it just before serializing to the client.
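A rough sketch of that approach, assuming JPA (jakarta.persistence) with hypothetical `Order`/`OrderLine` entities and an `OrderSummary` response type: the managed entity travels through the business code with its collections still lazy, and only a trimmed summary object is handed to the serializer at the boundary.

```java
import jakarta.persistence.*;
import java.util.List;

@Entity
class Order {
    @Id @GeneratedValue
    Long id;

    String customerName;

    // Lazily loaded: rows are only fetched if something actually touches the collection.
    @OneToMany(mappedBy = "order", fetch = FetchType.LAZY)
    List<OrderLine> lines;
}

@Entity
class OrderLine {
    @Id @GeneratedValue
    Long id;

    @ManyToOne(fetch = FetchType.LAZY)
    Order order;

    String sku;
    int quantity;
}

// Plain response object built only at the boundary, just before serialization.
record OrderSummary(Long id, String customerName, int lineCount) {}

class OrderService {
    @PersistenceContext
    EntityManager em;

    // The managed entity wanders through the business code; lazy fields stay
    // unloaded until used. Accessing order.lines must happen while the
    // persistence context is still open, and only the small summary leaves.
    OrderSummary summarize(long orderId) {
        Order order = em.find(Order.class, orderId);
        return new OrderSummary(order.id, order.customerName, order.lines.size());
    }
}
```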

Licensed under: CC-BY-SA with attribution