Question

I need to do some performance tests for my program, and I need to evaluate the worst-case I/O access time for data stored in files. I think the best way to evaluate this is to store the data at random HD sectors, in order to avoid contiguous data access and caching effects. I think the only way to do this is to use low-level OS commands, like dd on UNIX, where you can specify the sector you write to, but if I'm not mistaken, this is an insecure method. Does anyone know a good alternative?

PS: A solution for any OS will work; the only requirement is that I have to run the tests over different data sizes, accessing the data through a Java program.


Solution

I think the only way to do this is to use low-level OS commands

No... RandomAccessFile has .seek():

// open for read/write; "rw" creates the file if it does not exist yet
final RandomAccessFile f = new RandomAccessFile(someFile, "rw");

f.seek(someRandomLong); // jump to an arbitrary byte offset in the file
f.write(...);           // then write your data at that offset

Now, it is of course up to you to ensure that writes don't collide with one another.
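If it helps for the performance test itself, here is a minimal sketch of a timing loop built on seek(); the file name test.dat, the 4 KiB block size, and the iteration count are assumptions for illustration, and the file is expected to already exist and be larger than one block:

import java.io.RandomAccessFile;
import java.util.Random;

public class RandomAccessTest {
    public static void main(String[] args) throws Exception {
        Random rnd = new Random();
        byte[] block = new byte[4096]; // read one 4 KiB block per access
        long worst = 0;
        try (RandomAccessFile f = new RandomAccessFile("test.dat", "r")) {
            long max = f.length() - block.length;
            for (int i = 0; i < 1000; i++) {
                long offset = (long) (rnd.nextDouble() * max); // random byte offset
                long start = System.nanoTime();
                f.seek(offset);
                f.readFully(block);
                worst = Math.max(worst, System.nanoTime() - start);
            }
        }
        System.out.println("Worst access time: " + worst + " ns");
    }
}

Note that the OS page cache can still serve repeated hits from memory, so worst-case numbers are only meaningful on a file considerably larger than RAM.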

Another solution is to map the file in memory and set the buffer's position to some random position before writing.
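A minimal sketch of that approach, assuming a hypothetical file test.dat and an arbitrary 1 MiB mapping size:

import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.util.Random;

public class MappedWrite {
    public static void main(String[] args) throws Exception {
        long size = 1024L * 1024L; // 1 MiB region, chosen arbitrarily
        try (RandomAccessFile raf = new RandomAccessFile("test.dat", "rw");
             FileChannel channel = raf.getChannel()) {
            // map the first `size` bytes of the file into memory
            MappedByteBuffer buf = channel.map(FileChannel.MapMode.READ_WRITE, 0, size);
            int pos = new Random().nextInt((int) size - Long.BYTES); // random position
            buf.position(pos);        // move the buffer to that position
            buf.putLong(0xCAFEBABEL); // write eight bytes there
            buf.force();              // flush the change back to the file
        }
    }
}

The same collision caveat applies: if several threads write through the mapping, their positions must be coordinated.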
