Question

If I serialize an object using a schema version 1, and later update the schema to version 2 (say by adding a field) - am I required to use schema version 1 when later deserializing the object? Ideally I would like to just use schema version 2 and have the deserialized object have the default value for the field that was added to the schema after the object was originally serialized.

Maybe some code will explain better...

schema1:

{"type": "record",
 "name": "User",
 "fields": [
  {"name": "firstName", "type": "string"}
 ]}

schema2:

{"type": "record",
 "name": "User",
 "fields": [
  {"name": "firstName", "type": "string"},
  {"name": "lastName", "type": "string", "default": ""}
 ]}

using the generic non-code-generation approach:

// serialize
ByteArrayOutputStream out = new ByteArrayOutputStream();
Encoder encoder = EncoderFactory.get().binaryEncoder(out, null);
GenericDatumWriter<GenericRecord> writer = new GenericDatumWriter<GenericRecord>(schema1);
GenericRecord datum = new GenericData.Record(schema1);
datum.put("firstName", "Jack");
writer.write(datum, encoder);
encoder.flush();
out.close();
byte[] bytes = out.toByteArray();

// deserialize
// I would like to not have any reference to schema1 below here
DatumReader<GenericRecord> reader = new GenericDatumReader<GenericRecord>(schema2);
Decoder decoder = DecoderFactory.get().binaryDecoder(bytes, null);
GenericRecord result = reader.read(null, decoder);

This results in an EOFException. Using the jsonEncoder instead results in an AvroTypeException.

I know it will work if I pass both schema1 and schema2 to the GenericDatumReader constructor, but I'd like to not have to keep a repository of all previous schemas and also somehow keep track of which schema was used to serialize each particular object.
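
For reference, the two-schema variant I'm referring to (which does work, but requires keeping schema1 around) looks roughly like this, with the writer's schema first and the reader's schema second:

// resolves schema1-encoded data against schema2, filling in defaults
DatumReader<GenericRecord> resolvingReader =
    new GenericDatumReader<GenericRecord>(schema1, schema2);
Decoder decoder = DecoderFactory.get().binaryDecoder(bytes, null);
GenericRecord result = resolvingReader.read(null, decoder);
// result.get("lastName") is "" (the default from schema2)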

I also tried the code-gen approach, first serializing to a file using the User class generated from schema1:

User user = new User();
user.setFirstName("Jack");
DatumWriter<User> writer = new SpecificDatumWriter<User>(User.class);
FileOutputStream out = new FileOutputStream("user.avro");
Encoder encoder = EncoderFactory.get().binaryEncoder(out, null);
writer.write(user, encoder);
encoder.flush();
out.close();

Then updating the schema to version 2, regenerating the User class, and attempting to read the file:

DatumReader<User> reader = new SpecificDatumReader<User>(User.class);
FileInputStream in = new FileInputStream("user.avro");
Decoder decoder = DecoderFactory.get().binaryDecoder(in, null);
User user = reader.read(null, decoder);

but it also results in an EOFException.

Just for comparison's sake, what I'm trying to do seems to work with protobufs...

format:

option java_outer_classname = "UserProto";
message User {
    optional string first_name = 1;
}

serialize:

UserProto.User.Builder user = UserProto.User.newBuilder();
user.setFirstName("Jack");
FileOutputStream out = new FileOutputStream("user.data");
user.build().writeTo(out);

add optional last_name to format, regen UserProto, and deserialize:

FileInputStream in = new FileInputStream("user.data");
UserProto.User user = UserProto.User.parseFrom(in);

as expected, user.getLastName() is the empty string.

Can something like this be done with Avro?

Solution

Avro and Protocol Buffers have different approaches to handling versioning, and which approach is better depends on your use case.

In Protocol Buffers you have to explicitly tag every field with a number, and those numbers are stored along with the fields' values in the binary representation. Thus, as long as you never change the meaning of a number in a subsequent schema version, you can still decode a record encoded in a different schema version. If the decoder sees a tag number that it doesn't recognise, it can simply skip it.

Avro takes a different approach: there are no tag numbers, instead the binary layout is completely determined by the program doing the encoding — this is the writer's schema. (A record's fields are simply stored one after another in the binary encoding, without any tagging or separator, and the order is determined by the writer's schema.) This makes the encoding more compact, and saves you from having to manually maintain tags in the schema. But it does mean that for reading, you have to know the exact schema with which the data was written, or you won't be able to make sense of it.
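
To make this concrete, here is roughly what the binary encoding of the question's record looks like: just the field values back to back, with no tags.

// Avro binary encoding of {"firstName": "Jack"} under schema1 (roughly):
//   0x08                   -> zigzag varint 4, the string length
//   0x4A 0x61 0x63 0x6B    -> the UTF-8 bytes of "Jack"
// A reader that only knows schema2 consumes these five bytes as firstName,
// then tries to read lastName from an exhausted stream, hence the EOFException.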

Given that knowing the writer's schema is essential for decoding Avro, the reader's schema is a layer of niceness on top of it. If you're doing code generation in a program that needs to read Avro data, you can do the codegen off the reader's schema, which saves you from having to regenerate the code every time the writer's schema changes (assuming it changes in a way that can be resolved). But it doesn't save you from having to know the writer's schema.

Pros & Cons

Avro's approach is good in an environment where you have lots of records that are known to have the exact same schema version, because you can just include the schema in the metadata at the beginning of the file, and know that the next million records can all be decoded using that schema. This happens a lot in a MapReduce context, which explains why Avro came out of the Hadoop project.
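
With the object container file format, for example, the writer's schema travels with the data, so the reading side only needs its own (newer) schema. A minimal sketch, assuming the schema1, schema2 and datum objects from the question (DataFileWriter and DataFileReader live in org.apache.avro.file):

// write: the file header stores schema1, so readers can always find the writer's schema
DataFileWriter<GenericRecord> fileWriter =
    new DataFileWriter<GenericRecord>(new GenericDatumWriter<GenericRecord>(schema1));
fileWriter.create(schema1, new File("user.avro"));
fileWriter.append(datum);
fileWriter.close();

// read: only schema2 is supplied; the writer's schema is taken from the file header
// and resolved against it, so lastName comes back as the default ""
DataFileReader<GenericRecord> fileReader = new DataFileReader<GenericRecord>(
    new File("user.avro"), new GenericDatumReader<GenericRecord>(schema2));
GenericRecord record = fileReader.next();
fileReader.close();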

Protocol Buffers' approach is probably better for RPC, where individual objects are being sent over the network (as request parameters or return value). If you use Avro here, you may have different clients and different servers all with different schema versions, so you'd have to tag every binary-encoded blob with the Avro schema version it's using, and maintain a registry of schemas. At that point you might as well have used Protocol Buffers' built-in tagging.
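
If you do go down the tagging route with Avro, a hedged sketch of the idea looks like this; the one-byte version prefix and the registry map are made up for illustration, only the resolving GenericDatumReader constructor is stock Avro:

// hypothetical registry: version number -> schema that was current at that version
Map<Integer, Schema> schemaRegistry = new HashMap<Integer, Schema>();
schemaRegistry.put(1, schema1);
schemaRegistry.put(2, schema2);

// writing: prefix the Avro payload with the writer's schema version
ByteArrayOutputStream buf = new ByteArrayOutputStream();
buf.write(1);  // this blob is written with schema version 1
Encoder enc = EncoderFactory.get().binaryEncoder(buf, null);
new GenericDatumWriter<GenericRecord>(schema1).write(datum, enc);
enc.flush();
byte[] blob = buf.toByteArray();

// reading: look up the writer's schema from the prefix, then resolve against schema2
Schema writerSchema = schemaRegistry.get((int) blob[0]);
DatumReader<GenericRecord> resolver =
    new GenericDatumReader<GenericRecord>(writerSchema, schema2);
Decoder dec = DecoderFactory.get().binaryDecoder(blob, 1, blob.length - 1, null);
GenericRecord rec = resolver.read(null, dec);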

OTHER TIPS

To do what you are trying to do, you need to make the lastName field optional by allowing null values: its type should be ["null", "string"] instead of "string".
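
That is, the lastName entry in schema2 would become something like the following (note that when the first branch of the union is "null", the default has to be null rather than ""):

{"name": "lastName", "type": ["null", "string"], "default": null}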

I have tried to work around this problem as well; I am putting my approach here:

I used two schemas, where the second schema is just the first one with an additional column, via Avro's reflection API. I have the following schemas:

Employee (having name, age, ssn)
ExtendedEmployee (extending Employee and adding a gender column)

I am assuming a file which previously held Employee objects now also contains ExtendedEmployee objects, and I tried to read that file as:

    RecordHandler rh = new RecordHandler();  // RecordHandler is my own helper class
    Object employeeRecord = rh.readObject(employeeSchema, dbLocation);
    if (employeeRecord instanceof Employee) {
        Employee e = (Employee) employeeRecord;
        System.out.print(e.toString());
    } else {
        Object extendedRecord = rh.readObject(extendedEmployeeSchema, dbLocation);
        if (extendedRecord instanceof ExtendedEmployee) {
            ExtendedEmployee e = (ExtendedEmployee) extendedRecord;
            System.out.print(e.toString());
        }
    }

This works around the problem for me. However, I would love to know if there is an API where we can give just the ExtendedEmployee schema and read the old Employee objects as well.
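
For what it's worth, the resolving-reader constructors mentioned in the answer above can be used here too. A hedged sketch with Avro's reflection API (Employee, ExtendedEmployee and dbLocation are the classes and path from this tip; for the resolution to succeed, the reader's schema needs a default for the added gender field, and how you declare that depends on your Avro version):

// ReflectData and ReflectDatumReader live in org.apache.avro.reflect
Schema writerSchema = ReflectData.get().getSchema(Employee.class);
Schema readerSchema = ReflectData.get().getSchema(ExtendedEmployee.class);

DatumReader<ExtendedEmployee> reader =
    new ReflectDatumReader<ExtendedEmployee>(writerSchema, readerSchema);
Decoder decoder = DecoderFactory.get().binaryDecoder(new FileInputStream(dbLocation), null);
ExtendedEmployee e = reader.read(null, decoder);  // gender falls back to its default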

Licensed under: CC-BY-SA with attribution