Question

After trying to work with MongoDB, I found out that it doesn't work with documents larger than 16 MB.
I need to find a way to load a large, complex JSON file (I guess into memory)
and to transform it to my needs via (I guess) a query processor like JSONiq, but I am open to other solutions.
The key here is that I don't want to change the master JSON, and I don't care which programming language is used;
I just want to find a method to do it right and fast.


Solution

I think the Jackson library can help you. Its official website lists the following features:

Jackson is a:

1) Streaming (reading, writing)

2) FAST (measured to be faster than any other Java json parser and data binder)

3) Powerful (full data binding for common JDK classes as well as any Java bean class, Collection, Map or Enum)

4) Zero-dependency (does not rely on other packages beyond JDK)

5) Open Source (LGPL or AL)

6) Fully conformant

7) Extremely configurable

JSON processor (JSON parser + JSON generator) written in Java. Beyond basic JSON reading/writing (parsing, generating), it also offers full node-based Tree Model, as well as full OJM (Object/Json Mapper) data binding functionality.
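For illustration, here is a minimal sketch of how Jackson's streaming API can scan a file that is too big to hold as a tree in memory. It assumes Jackson 2.x on the classpath; the file name "large.json" and the field name "title" are placeholders, not anything taken from the question:

    import com.fasterxml.jackson.core.JsonFactory;
    import com.fasterxml.jackson.core.JsonParser;
    import com.fasterxml.jackson.core.JsonToken;

    import java.io.File;
    import java.io.IOException;

    public class LargeJsonScan {
        public static void main(String[] args) throws IOException {
            JsonFactory factory = new JsonFactory();
            // The streaming parser holds only one token at a time,
            // so the file size is not limited by the available heap.
            try (JsonParser parser = factory.createParser(new File("large.json"))) {
                while (parser.nextToken() != null) {
                    // Example: print the value of every "title" field we pass.
                    if (parser.getCurrentToken() == JsonToken.FIELD_NAME
                            && "title".equals(parser.getCurrentName())) {
                        parser.nextToken();            // advance to the field value
                        System.out.println(parser.getText());
                    }
                }
            }
        }
    }

If parts of the document do need to be materialized as objects, the same parser can be handed to Jackson's ObjectMapper (readValue(parser, SomeClass.class)) so that only one sub-object at a time is bound instead of the whole file.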

OTHER TIPS

Zorba can also help with this kind of use case.

It is an open-source, in-memory query engine that supports XQuery and JSONiq. It can consume all kinds of input, including JSON from the local filesystem, from the Web, or from other sources.

It is commonly used from the command line, but since it is open source (C++), it can also be adapted and integrated into other environments.

For completeness, other JSONiq implementations are available, such as Xidel. Finally, XQuery was extended in version 3.1 to support JSON as well, so many XQuery engines (Saxon, ...) also qualify for manipulating JSON.
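As a rough sketch of that XQuery 3.1 route, the example below uses Saxon's s9api together with the standard json-doc() function; the query, the file name "large.json" and the ?items lookup are illustrative assumptions, not part of the original answer:

    import net.sf.saxon.s9api.Processor;
    import net.sf.saxon.s9api.SaxonApiException;
    import net.sf.saxon.s9api.XQueryCompiler;
    import net.sf.saxon.s9api.XQueryEvaluator;
    import net.sf.saxon.s9api.XQueryExecutable;
    import net.sf.saxon.s9api.XdmValue;

    public class JsonWithXQuery {
        public static void main(String[] args) throws SaxonApiException {
            Processor processor = new Processor(false);   // Saxon-HE is sufficient here
            XQueryCompiler compiler = processor.newXQueryCompiler();

            // json-doc() (XQuery 3.1) parses the JSON file into maps and arrays;
            // ?items is the 3.1 lookup operator, used here on a placeholder key.
            String query =
                "let $doc := json-doc('large.json') " +
                "return serialize($doc?items, map { 'method' : 'json' })";

            XQueryExecutable executable = compiler.compile(query);
            XQueryEvaluator evaluator = executable.load();
            XdmValue result = evaluator.evaluate();
            System.out.println(result);
        }
    }

The same query could be run unchanged on the Zorba command line, since both engines share the JSON functions of XQuery 3.1 / JSONiq.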

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow