Problem

Here is my situation. I have a server application designed for multiple users, so there are a lot of read/write operations happening at the same time, and the responses need to be FAST.

Currently I cache all the data in memory, so the read/write operations are as fast as expected. To prevent data locking from causing me problems, I use a Queue to line up the users' requests and feed them into the handler process one by one.
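Roughly what that looks like, as a minimal sketch (assuming an in-memory dict as the cache and a single consumer thread draining a queue; all names are illustrative):

```python
import queue
import threading

cache = {}                 # all data kept in memory
requests = queue.Queue()   # incoming user requests line up here

def handler():
    # Single consumer: requests are processed strictly one at a time,
    # so no locking on the cache is ever needed.
    while True:
        op, key, value = requests.get()
        if op == "read":
            print(cache.get(key))
        elif op == "write":
            cache[key] = value
        requests.task_done()

threading.Thread(target=handler, daemon=True).start()
requests.put(("write", "user:1", "data"))
requests.put(("read", "user:1", None))
requests.join()  # wait until the queue is drained
```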

But I soon found a problem: the program can only handle one request at a time. Even though the benchmark timer reports zero ms of processing time, there is still a limit on how many requests it can handle per second; right now it's about 100 per second.

So I'm looking for a more concurrent approach, like 8 processes handling 8 requests at the SAME TIME. That would be great, but it raises a bigger problem with data sharing, and I don't want to reinvent the wheel. So I looked at MongoDB, Redis, and SQLite.

Here's my homework; correct me if I'm wrong, thanks a lot:

MongoDB and Redis are really fast, as advertised, but they use the same mechanism: they handle one request at a time, and that's not what I'm looking for.

SQLite is much closer to what I want: multiple processes can open the same db file and read at the same time. The pain point is its write lock (I don't know how much better the new locking in sqlite3 works).
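(The "new lock" here presumably refers to SQLite's WAL journal mode, which lets readers proceed concurrently with a single writer. A minimal sketch of enabling it from Python's sqlite3, with an illustrative file and table name:)

```python
import sqlite3

# Each process opens its own connection to the shared database file.
conn = sqlite3.connect("shared.db")  # hypothetical file name

# WAL mode lets readers run concurrently with one writer, instead of
# blocking on the classic rollback-journal write lock.
conn.execute("PRAGMA journal_mode=WAL")

conn.execute("CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)")
conn.execute("INSERT OR REPLACE INTO kv VALUES (?, ?)", ("user:1", "data"))
conn.commit()
```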

So here's my question: is there a solid, good solution for this scenario? If I dedicate a single process to writes, would that help?
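To illustrate the separate-writer idea: a minimal sketch where all writes are funneled through one dedicated writer process via a multiprocessing queue, so reader processes never contend with each other (names and the db path are placeholders):

```python
import multiprocessing as mp
import sqlite3

def writer(db_path, q):
    # The only process that ever writes; other processes just read.
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)")
    while True:
        item = q.get()
        if item is None:  # sentinel: shut down cleanly
            break
        k, v = item
        conn.execute("INSERT OR REPLACE INTO kv VALUES (?, ?)", (k, v))
        conn.commit()
    conn.close()

if __name__ == "__main__":
    q = mp.Queue()
    p = mp.Process(target=writer, args=("shared.db", q))
    p.start()
    q.put(("user:1", "data"))  # any process can enqueue a write
    q.put(None)
    p.join()
```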

Thanks for any comments.


Solution

The solution with MongoDB is sharding. MongoDB sharding basically allows you to throw more processors at the problem. More processors = more write threads.

Each MongoDB instance has only one write thread. Sharding gives you more instances and therefore allows more writes.
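As a rough sketch of what enabling sharding looks like, assuming an already-deployed sharded cluster reached through a mongos router, and illustrative database/collection names (`enableSharding` and `shardCollection` are the standard admin commands):

```python
from pymongo import MongoClient

# Connect to the mongos router of an existing sharded cluster.
client = MongoClient("mongodb://localhost:27017")  # hypothetical address

# Enable sharding for the database, then shard the collection on a
# hashed key so writes spread evenly across shards (= write threads).
client.admin.command("enableSharding", "mydb")
client.admin.command("shardCollection", "mydb.events",
                     key={"_id": "hashed"})
```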

However, there is a bigger problem here. Disk throughput.

I've had Mongo running at over 1000 inserts/sec when the whole thing is in RAM. But most people use Mongo as a database with an actual file backing, so if you're using Mongo for really heavy writes, you have to be prepared with disks that can accept that level of throughput.
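(A minimal sketch of how you could measure your own insert rate; the connection address, collection name, and document shape are all placeholders:)

```python
import time
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # hypothetical address
coll = client["mydb"]["bench"]

n = 10_000
start = time.perf_counter()
for i in range(n):
    coll.insert_one({"i": i, "payload": "x" * 100})
elapsed = time.perf_counter() - start
print(f"{n / elapsed:.0f} inserts/sec")
```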

Again, the way around the disk throughput problem is sharding. Build out more shards and you'll get fewer writes per disk and basically less locking all around.
