Question

I'm developing with Hadoop running on top of Cassandra. It's all running very well, but I've now bumped into a problem I can't find a solution to.

One of my columns contains a collection; the table definition is something similar to (the map value types here are illustrative):

create table productUsage ( .... products map<text, int>, productcategories map<text, text> ) ... etc.

In my map/reduce map function, I need to read the values from these columns, but I can't work out how to convert the column data, which is a ByteBuffer, into a usable HashMap variable; the ByteBufferUtil functions don't seem to help.

The map/reduce map code I currently have for extracting the column values looks like this:

String productid;
HashMap products;

for (Map.Entry<String, ByteBuffer> column : columns.entrySet()) {

    if ("productid".equalsIgnoreCase(column.getKey())) {
        productid = ByteBufferUtil.string(column.getValue());
    }

    if ("products".equalsIgnoreCase(column.getKey())) {
        products = ???? // ByteBufferUtil.string(column.getValue());
    }

}

Does anyone have any ideas, or can anyone point me in the right direction?

Thanks, Gerry


Solution

I'll leave it as an answer then. Use MapType.getInstance(kType, vType).compose(column.getValue()), where kType and vType are the AbstractType instances for the map's key and value types (for example Int32Type.instance, UTF8Type.instance, etc.).
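Applied to the code above, a minimal sketch might look like the following. It assumes the products column was declared as map<text, int> (so UTF8Type keys and Int32Type values) and uses the two-argument MapType.getInstance signature from the Cassandra 1.2 era; newer Cassandra versions add an isMultiCell parameter, so adjust both the marshal types and the signature to match your schema and version.

    import java.nio.ByteBuffer;
    import java.util.Map;

    import org.apache.cassandra.db.marshal.Int32Type;
    import org.apache.cassandra.db.marshal.MapType;
    import org.apache.cassandra.db.marshal.UTF8Type;

    public class ProductColumnDecoder {

        // Marshal type matching a hypothetical "products map<text, int>" column;
        // swap in the AbstractType instances that match your actual schema.
        private static final MapType<String, Integer> PRODUCTS_TYPE =
                MapType.getInstance(UTF8Type.instance, Int32Type.instance);

        // compose() deserializes the raw column bytes into a java.util.Map.
        public static Map<String, Integer> decodeProducts(ByteBuffer raw) {
            return PRODUCTS_TYPE.compose(raw);
        }
    }

Inside the loop from the question, the products branch would then become products = ProductColumnDecoder.decodeProducts(column.getValue());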

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow