Question

We have a solution which is spread globally across a few Sybase DB servers and fronted by an Oracle Coherence cache.

Now we need to support 'cache-speed writes', yet due to the internationally replicated nature of our DB, we have to accept data for persistence faster than the DB can actually write it, which you will probably all agree is quite a problem.

I am therefore wondering what the recommended approach to tackle this situation would be.

Points of note:

  • There are no constraints (foreign keys, etc.) on the tables
  • There are multiple shards split according to usage statistics

Solution 2

I have decided to use horizontal partitioning on some of the larger and more frequently accessed tables, something natively supported by Sybase ASE 15+ and transparent to client applications.
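
For illustration, here is a minimal sketch of what that DDL can look like in ASE 15+. The table, columns, and range boundaries are all hypothetical, not taken from the actual system, and semantic partitioning must be enabled (it is a separately licensed option in ASE) before this will run:

```sql
-- Hypothetical table using ASE 15+ native range partitioning.
-- Clients keep querying "orders" exactly as before; the optimizer
-- prunes partitions based on order_date predicates.
create table orders (
    order_id   int          not null,
    order_date datetime     not null,
    payload    varchar(255) null
)
partition by range (order_date)
( p_2024h1 values <= ('Jun 30 2024'),
  p_2024h2 values <= ('Dec 31 2024'),
  p_rest   values <= (MAX) )
```

ASE also offers hash, list, and round-robin partitioning; range suits date-driven usage splits, while hash gives a uniform spread across partitions.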

OTHER TIPS

One approach to consider:

The DB may well write slower than you need if you're writing to a read-optimized database or tables. There can be a lot of constraints and indexes involved, and a lot of time "wasted" having them checked and recalculated on every insert.

You might want to consider a separate schema or set of tables with a write-optimized layout (an appropriate storage engine, where your DBMS offers that choice) and no indexes. There can be substantial performance gains here.
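
Sybase ASE has no pluggable storage engines in the MySQL sense, but you can approximate the same thing with a heap table: no clustered index, no secondary indexes, no constraints, and row-level locking. A hypothetical sketch (all names illustrative):

```sql
-- Write-optimized staging table: a heap with nothing to check or
-- recalculate on insert. "lock datarows" uses row-level locking to
-- avoid page contention under heavy concurrent inserts.
create table orders_stage (
    queue_id   numeric(12,0) identity,   -- monotonic id used for draining
    order_id   int           not null,
    order_date datetime      not null,
    payload    varchar(255)  null
) lock datarows
```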

A separate process then transfers the data from the write-optimized tables to the read-optimized (permanent) schema.
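
That transfer can be a scheduled job that drains the staging table up to a high-water mark, so rows arriving mid-copy are simply left for the next cycle. A sketch against the hypothetical tables above:

```sql
-- Periodic drain job: snapshot the backlog, then move exactly
-- that slice into the read-optimized table.
declare @hwm numeric(12,0)
select @hwm = max(queue_id) from orders_stage

if @hwm is not null
begin
    begin tran
        insert into orders (order_id, order_date, payload)
        select order_id, order_date, payload
          from orders_stage
         where queue_id <= @hwm

        delete from orders_stage
         where queue_id <= @hwm
    commit tran
end
```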

In essence, when a synchronous process runs into limitations, you split it into multiple asynchronous processes, introducing throttling and/or queue mechanisms between them.
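
Throttling drops straight into the drain sketch above: cap how far the high-water mark advances per cycle, so the read-optimized side never takes on more index maintenance than it can absorb. Again assuming the hypothetical staging table:

```sql
-- Move at most ~1000 rows per cycle: with rowcount limited, the
-- assignment select stops after 1000 rows, leaving @hwm at the
-- 1000th-smallest queue_id (or the max, if fewer rows are queued).
declare @hwm numeric(12,0)
set rowcount 1000
select @hwm = queue_id from orders_stage order by queue_id
set rowcount 0

if @hwm is not null
begin
    begin tran
        insert into orders (order_id, order_date, payload)
        select order_id, order_date, payload
          from orders_stage where queue_id <= @hwm

        delete from orders_stage where queue_id <= @hwm
    commit tran
end
```

On the cache side, this is also essentially what Coherence's write-behind mode provides: puts complete at cache speed and are queued internally, then flushed to the CacheStore asynchronously after a configurable write delay.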
