Question

Let's say we're building a DB for storing analytics from web sites or mobile apps. Transactions aren't important (it's OK to drop things) and would only slow things down; supporting as many concurrent calls as possible is important; consistency can be eventual; and part of the data is going to be unstructured (i.e. a user can pass a random JSON blob as a parameter to the analytics logging call, which we will store in the DB).

Something like Mongo would be a reasonable pick for this, as it satisfies most of the requirements listed above. However, one often hears that Postgres can be customized to fill a lot of different roles. I imagine one can disable transactions, etc.

I'm not very familiar with Postgres settings, so I have to ask: is it in fact possible to adapt Postgres to fit the requirements listed above?


Solution

You'd have to try it out and do some tuning to see if it meets your performance needs. But Postgres has had built-in support since 7.1 for transparently and efficiently storing large column values (up to 1 GB), called TOAST. The Postgres-as-a-service provider Heroku takes advantage of this to offer "document store" capabilities using hstore, a Postgres datatype for unstructured key/value text; that page has example applications from hstore adopters.

As far as tuning goes, I think Postgres 9.0 High Performance is a good reference for someone unfamiliar with Postgres to figure out where to look in the online documentation by topic.

For completeness, Postgres also has a large object API that can handle objects up to 2 GB, but that would make migrating to another RDBMS harder than using a transparent solution like TOAST.
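For illustration only, here is a minimal sketch of what this could look like, under assumptions of my own: psycopg2 as the client library, permission to create the hstore extension, and made-up table, column, and database names. It stores the unstructured event parameters in an hstore column and relaxes durability for write throughput:

    # Minimal sketch: analytics events in Postgres, unstructured parameters in
    # an hstore column, durability relaxed for write throughput. Assumes
    # psycopg2 is installed and the hstore extension can be created; table and
    # column names are made up for illustration.
    import psycopg2
    import psycopg2.extras

    conn = psycopg2.connect("dbname=analytics")  # hypothetical connection string

    with conn, conn.cursor() as cur:
        # UNLOGGED tables skip the write-ahead log: faster writes, but the
        # table is truncated after a crash -- acceptable when dropping
        # analytics events is OK.
        cur.execute("CREATE EXTENSION IF NOT EXISTS hstore")
        cur.execute("""
            CREATE UNLOGGED TABLE IF NOT EXISTS events (
                id         bigserial PRIMARY KEY,
                recorded   timestamptz NOT NULL DEFAULT now(),
                event_name text NOT NULL,
                params     hstore  -- flat key/value pairs sent by the client
            )
        """)

    # Once the hstore type exists, psycopg2 can adapt Python dicts to it.
    psycopg2.extras.register_hstore(conn)

    def log_event(event_name, params):
        """Insert one analytics event; params is a flat dict of string values."""
        with conn, conn.cursor() as cur:
            # Trade durability for latency: the commit returns before the WAL
            # record reaches disk, so a crash may lose the last few events but
            # never corrupts the database.
            cur.execute("SET synchronous_commit TO off")
            cur.execute(
                "INSERT INTO events (event_name, params) VALUES (%s, %s)",
                (event_name, params),
            )

    log_event("page_view", {"path": "/pricing", "referrer": "google"})

The UNLOGGED table and synchronous_commit = off both trade durability for write speed, which matches the "it's OK to drop things" requirement, and since the rows are ordinary relational data, TOAST should kick in automatically for oversized params values.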

Other tips

You can look at using https://www.mangodb.io/, which is a wire-compatible MongoDB-to-SQL proxy.

License: CC-BY-SA with attribution
Not affiliated with StackOverflow