This is not implemented, though it could be a nice feature. (FYI, I would not have it on by default in .get(...)
because it's not explicit enough, e.g. should it ALWAYS read ALL the sub-tables? Too much guessing. But it could take an argument to control which sub-tables are read, I suppose.) If you are interested in implementing this, please open an issue on GitHub.
You can use some internal functions to make this pretty easy, though (and you could even pass a where
to each of the selects):
In [11]: import numpy as np

In [12]: import pandas as pd; from pandas import DataFrame

In [13]: store = pd.HDFStore('test.h5', mode='w')
In [14]: store.append('df/foo1',DataFrame(np.random.randn(10,2)))
In [15]: store.append('df/foo2',DataFrame(np.random.randn(10,2)))
In [16]: pd.concat([ store.select(node._v_pathname) for node in store.get_node('df') ])
Out[16]:
0 1
0 -0.495847 -1.449251
1 -0.494721 1.572560
2 1.219985 0.280878
3 -0.419651 1.975562
4 -0.489689 -2.712342
5 -0.022466 -0.238129
6 -1.195269 -0.028390
7 -0.192648 1.220730
8 1.331892 0.950508
9 -0.790354 -0.743006
0 -0.761820 0.847983
1 -0.126829 1.304889
2 0.667949 -1.481652
3 0.030162 -0.111911
4 -0.433762 -0.596412
5 -1.110968 0.411241
6 -0.428930 0.086527
7 -0.866701 -1.286884
8 -0.649420 0.227999
9 -0.100669 -0.205232
[20 rows x 2 columns]
In [17]: store.close()
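The same pattern can carry a where clause into each per-node select. Here is a minimal sketch, assuming PyTables is installed; the column names ("A", "B"), the filter "B > 5", and the temp-file path are my own illustration choices, not anything from the original session. Note that a column must be declared via data_columns at append time before it can be used in a where clause.

```python
import os
import tempfile

import numpy as np
import pandas as pd

path = os.path.join(tempfile.mkdtemp(), "test.h5")  # throwaway file

with pd.HDFStore(path, mode="w") as store:
    # data_columns=["B"] makes B queryable in a where clause
    store.append("df/foo1",
                 pd.DataFrame({"A": np.random.randn(10), "B": np.arange(10)}),
                 data_columns=["B"])
    store.append("df/foo2",
                 pd.DataFrame({"A": np.random.randn(10), "B": np.arange(10)}),
                 data_columns=["B"])

    # walk the children of the 'df' group, filtering each select
    result = pd.concat(
        store.select(node._v_pathname, where="B > 5")
        for node in store.get_node("df")
    )
```

With B running 0..9 in each sub-table, this keeps 4 rows from each, so `result` has 8 rows total.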
Keep in mind, though: if I were doing this, there is little reason to have SEPARATE nodes when the data is the same; it's MUCH more efficient to keep it in a single table with, say, a field that indicates its name or id or whatever.
I almost always use different nodes for heterogeneous data (not necessarily different dtypes, but different 'types' of data).
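To make the single-table layout concrete, here is a sketch (again assuming PyTables is installed; the "name" column, the "x"/"y" column names, and the file path are hypothetical choices of mine). Both frames go into one appendable table, and an indexed name column plays the role the separate nodes did:

```python
import os
import tempfile

import numpy as np
import pandas as pd

path = os.path.join(tempfile.mkdtemp(), "combined.h5")  # throwaway file

df1 = pd.DataFrame(np.random.randn(10, 2), columns=["x", "y"])
df2 = pd.DataFrame(np.random.randn(10, 2), columns=["x", "y"])

with pd.HDFStore(path, mode="w") as store:
    # one appendable table; the queryable 'name' column replaces sub-nodes
    store.append("df", df1.assign(name="foo1"), data_columns=["name"])
    store.append("df", df2.assign(name="foo2"), data_columns=["name"])

    # pull back just one "sub-table" with a where clause
    foo2 = store.select("df", where="name == 'foo2'")
```

Selecting everything is then a single `store.select("df")`, with no need to walk nodes and concat.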
That said, you can organize however you like!