Question

I am using the ProtoWriter/ProtoReader classes to implement something similar to the DataTableSerializer included with the protobuf-net source. One difference is that, after the initial transfer of the table contents, all future updates are serialised incrementally.

Currently I'm not disposing the ProtoWriter instance until the program ends (as I want all future updates to be serialised with the same writer). This delays all writes to the output stream until the writer's internal 1024-byte buffer is full.

Should I be creating a new ProtoWriter for each incremental update? Is there another way to force the writer to write to the stream?

Sample code:

    private readonly ProtoWriter _writer;

    private void WriteUpdate(IEnumerable<IReactiveColumn> columns, int rowIndex)
    {
        // Start the row group
        ProtoWriter.WriteFieldHeader(ProtobufOperationTypes.Update, WireType.StartGroup, _writer);
        var token = ProtoWriter.StartSubItem(rowIndex, _writer);

        var rowId = rowIndex;

        // Send the row id so that it can be matched against the local row id at the other end.
        ProtoWriter.WriteFieldHeader(ProtobufFieldIds.RowId, WireType.Variant, _writer);
        ProtoWriter.WriteInt32(rowId, _writer);

        foreach (var column in columns)
        {
            var fieldId = _columnsToFieldIds[column.ColumnId];

            WriteColumn(column, fieldId, rowId);
        }

        ProtoWriter.EndSubItem(token, _writer);
    }

Solution

Interesting question. The flush method isn't exposed because, internally, it is not always appropriate to flush; but I guess there's no huge reason not to expose it and just let it no-op when a flush isn't possible. On the other hand:

  • it is already a lightweight wrapper around a stream: you could dispose and recreate the writer for each update (see the sketch below)
  • or you could just keep writing and make full use of the extra buffering
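
A minimal sketch of the dispose-and-recreate option, assuming the protobuf-net v2 API of that era (a ProtoWriter constructed directly over the target stream, as in the DataTableSerializer example). Disposing the writer flushes its internal buffer but does not close the underlying stream, so each update can use a short-lived writer. The SendUpdate method and the WriteUpdate overload taking the writer as a parameter are hypothetical refactorings of the question's code:

    private void SendUpdate(Stream output, IEnumerable<IReactiveColumn> columns, int rowIndex)
    {
        // Short-lived writer per update; ProtoWriter does not own the stream,
        // so disposing it flushes the buffered bytes and leaves the stream open.
        using (var writer = new ProtoWriter(output, null, null))
        {
            WriteUpdate(writer, columns, rowIndex); // same body as above, with the writer passed in
        }
        output.Flush(); // optionally push any buffering in the stream itself
    }

If updates are small and frequent, the second option may be the better trade-off: the 1024-byte buffer coalesces several updates into each write to the underlying stream.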