Question

While reading the Netty tutorial, I found a short description of how to integrate Netty and Google Protocol Buffers. I started to investigate its example (because there is no more information in the documentation) and wrote a simple application similar to the local time example application. But this example uses static initialization in its pipeline factory class, e.g.:

import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.channel.ChannelPipelineFactory;
import org.jboss.netty.handler.codec.protobuf.ProtobufDecoder;
import org.jboss.netty.handler.codec.protobuf.ProtobufEncoder;
import org.jboss.netty.handler.codec.protobuf.ProtobufVarint32FrameDecoder;
import org.jboss.netty.handler.codec.protobuf.ProtobufVarint32LengthFieldPrepender;

import static org.jboss.netty.channel.Channels.pipeline;

/**
 * @author sergiizagriichuk
 */
class ProtoCommunicationClientPipeFactory implements ChannelPipelineFactory {

    public ChannelPipeline getPipeline() throws Exception {
        ChannelPipeline p = pipeline();
        p.addLast("frameDecoder", new ProtobufVarint32FrameDecoder());
        p.addLast("protobufDecoder", new ProtobufDecoder(Communication.DataMessage.getDefaultInstance()));

        p.addLast("frameEncoder", new ProtobufVarint32LengthFieldPrepender());
        p.addLast("protobufEncoder", new ProtobufEncoder());

        p.addLast("handler", new ProtoCommunicationClientHandler());
        return p;
    }

}

(Please take a look at the line p.addLast("protobufDecoder", new ProtobufDecoder(Communication.DataMessage.getDefaultInstance()));.) As I understand it, only one pipeline factory can be created for the ClientBootstrap class, i.e. via the bootstrap.setPipelineFactory() method. So in this situation I can send ONE message type to the server and receive ONE message type from the server, which is bad for me, and I think not just for me :( How can I use different message types in both directions over a single connection? Perhaps I can register several protobufDecoder handlers like this

p.addLast("protobufDecoder", new ProtobufDecoder(Communication.DataMessage.getDefaultInstance()));
p.addLast("protobufDecoder", new ProtobufDecoder(Communication.TestMessage.getDefaultInstance()));
p.addLast("protobufDecoder", new ProtobufDecoder(Communication.SrcMessage.getDefaultInstance()));

or use some other technique? Thanks a lot.

Solution

I found a thread by the author of Netty in the Google Groups and understood that I have to either change my architecture or write my own decoder, as I wrote above. So now I'm thinking about which way will be easier and better.

OTHER TIPS

If you are going to write your own codecs anyway, you might want to look at implementing the Externalizable interface for custom data objects.

  • Serializable is low-effort, but worst performance (serializes everything).
  • Protobuf is a good trade-off between effort and performance (requires .proto maintenance).
  • Externalizable is high-effort, but best performance (custom minimal codecs).

If you already know your project will have to scale like a mountain goat, you may have to go the hard road. Protobuf is not a silver bullet.
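
For illustration, here is a minimal sketch of an Externalizable data object; the Point class and its fields are made up for this example and are not taken from the question:

import java.io.Externalizable;
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;

// Hypothetical data object: you write exactly the fields you need, in a fixed order,
// instead of letting default Java serialization reflect over everything.
public class Point implements Externalizable {

    private int x;
    private int y;

    public Point() {
        // a public no-arg constructor is required by Externalizable
    }

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    @Override
    public void writeExternal(ObjectOutput out) throws IOException {
        out.writeInt(x);
        out.writeInt(y);
    }

    @Override
    public void readExternal(ObjectInput in) throws IOException {
        x = in.readInt();
        y = in.readInt();
    }
}

On the Netty side you would still pair this with a codec of your own (or Netty's ObjectEncoder/ObjectDecoder), which is where the extra effort mentioned above goes.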

Theoretically this can be done by modifying the pipeline to suit each incoming message. Take a look at the port unification example in Netty.

The sequence would be:
1) In the frame decoder, or in a separate "DecoderMappingDecoder", check the message type of the incoming message.
2) Modify the pipeline dynamically, as shown in that example (a sketch follows below).
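
A rough sketch of those two steps, assuming (this is an assumption, not part of the question) that the peer prepends a one-byte type marker to each varint-framed protobuf payload:

import com.google.protobuf.MessageLite;
import org.jboss.netty.buffer.ChannelBuffer;
import org.jboss.netty.channel.Channel;
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.handler.codec.frame.FrameDecoder;
import org.jboss.netty.handler.codec.protobuf.ProtobufDecoder;
import org.jboss.netty.handler.codec.protobuf.ProtobufVarint32FrameDecoder;

// Hypothetical decoder in the spirit of Netty's port unification example: it reads a
// leading type byte, installs the matching decoders behind itself, removes itself,
// and lets the remaining bytes flow through the newly built pipeline.
public class DecoderMappingDecoder extends FrameDecoder {

    @Override
    protected Object decode(ChannelHandlerContext ctx, Channel channel, ChannelBuffer buffer) {
        if (buffer.readableBytes() < 1) {
            return null; // wait until the type byte has arrived
        }
        int type = buffer.readUnsignedByte();

        ChannelPipeline p = ctx.getPipeline();
        p.addAfter(ctx.getName(), "frameDecoder", new ProtobufVarint32FrameDecoder());
        p.addAfter("frameDecoder", "protobufDecoder", new ProtobufDecoder(prototypeFor(type)));
        p.remove(this);

        // hand the remaining bytes to the decoders that were just installed
        return buffer.readBytes(buffer.readableBytes());
    }

    private MessageLite prototypeFor(int type) {
        switch (type) {
            case 1: return Communication.DataMessage.getDefaultInstance();
            case 2: return Communication.TestMessage.getDefaultInstance();
            case 3: return Communication.SrcMessage.getDefaultInstance();
            default: throw new IllegalArgumentException("unknown message type " + type);
        }
    }
}

The encoders and the business handler would still be added by the pipeline factory as before; this decoder simply sits first in the pipeline and replaces itself with the right decoders for that connection.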

But why not use different connections and follow this sequence:
1) Add the other decoders to the pipeline, based on the incoming message, only once.
2) Add the same instance of a channel upstream handler as the last handler in each pipeline; this way all messages get routed to the same instance, which is almost like having a single connection (see the sketch below).
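
A minimal sketch of that second idea; SharedBusinessHandler is a made-up name, and sharing the instance is only safe if the handler keeps no per-connection state:

import org.jboss.netty.channel.ChannelHandler;
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.channel.ChannelPipelineFactory;
import org.jboss.netty.channel.Channels;
import org.jboss.netty.channel.MessageEvent;
import org.jboss.netty.channel.SimpleChannelUpstreamHandler;

// Hypothetical sharable handler: the same instance is installed in every pipeline,
// so all decoded messages end up in one object regardless of the connection.
@ChannelHandler.Sharable
class SharedBusinessHandler extends SimpleChannelUpstreamHandler {
    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
        System.out.println("Got " + e.getMessage().getClass().getSimpleName());
    }
}

class SharedHandlerPipelineFactory implements ChannelPipelineFactory {

    private final SharedBusinessHandler sharedHandler = new SharedBusinessHandler();

    public ChannelPipeline getPipeline() throws Exception {
        ChannelPipeline p = Channels.pipeline();
        // the per-connection frame/protobuf decoders for that connection's message type go here
        p.addLast("handler", sharedHandler); // same instance in every pipeline
        return p;
    }
}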

The problem is that there is no way to distinguish two different protobuf messages from each other in binary format. But there is a way to solve it within the protobuf file:

message AnyMessage {
    message DataMessage { [...] }
    optional DataMessage dataMessage = 1;
    message TestMessage { [...] }
    optional TestMessage testMessage = 2;
    message SrcMessage { [...] }
    optional SrcMessage srcMessage = 3;
}

Optional fields that are not set produce no overhead. Additionally, you can add an enum field, but it is just a bonus.
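
With that wrapper, the pipeline keeps a single ProtobufDecoder(Communication.AnyMessage.getDefaultInstance()) and the last handler dispatches on whichever field is set. A sketch, assuming the wrapper is generated into the same Communication class used in the question:

import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.MessageEvent;
import org.jboss.netty.channel.SimpleChannelUpstreamHandler;

// Dispatches on whichever optional field of AnyMessage happens to be set.
class AnyMessageHandler extends SimpleChannelUpstreamHandler {

    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
        Communication.AnyMessage msg = (Communication.AnyMessage) e.getMessage();
        if (msg.hasDataMessage()) {
            handleData(msg.getDataMessage());
        } else if (msg.hasTestMessage()) {
            handleTest(msg.getTestMessage());
        } else if (msg.hasSrcMessage()) {
            handleSrc(msg.getSrcMessage());
        }
    }

    private void handleData(Communication.AnyMessage.DataMessage m) { /* business logic */ }
    private void handleTest(Communication.AnyMessage.TestMessage m) { /* business logic */ }
    private void handleSrc(Communication.AnyMessage.SrcMessage m)   { /* business logic */ }
}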

The issue is not quite a Netty limitation or an encoder/decoder limitation. The problem is that Google Protocol Buffers offer only a way to serialize/deserialize objects; they do not provide a protocol. They have some kind of RPC implementation as part of the standard distribution, but if you try to implement their RPC protocol you will end up with three layers of indirection. What I did in one of my projects was to define a message that is basically a union of messages. This message contains one field that is the Type and another field that is the actual message. You still end up with two indirection layers instead of three. This way the example from Netty will work for you, but, as was mentioned in a previous post, you have to put more logic into the business logic handler.

You can use message tunneling to send various types of messages as the payload of a single message. Hope that helps.
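
Purely as an illustration of that tunneling idea (the Envelope message below is invented for this sketch and is not part of the question or of the answers above): the sender serializes the inner message into a bytes field, and the receiver parses it back based on a type field.

import com.google.protobuf.InvalidProtocolBufferException;

// Hypothetical envelope, assumed to be generated from something like:
//   message Envelope {
//       enum Type { DATA = 1; TEST = 2; SRC = 3; }
//       required Type type = 1;
//       required bytes payload = 2;   // the tunneled message
//   }
class TunnelingExample {

    static Communication.Envelope wrap(Communication.DataMessage data) {
        return Communication.Envelope.newBuilder()
                .setType(Communication.Envelope.Type.DATA)   // which message is tunneled
                .setPayload(data.toByteString())             // the serialized inner message
                .build();
    }

    static void unwrap(Communication.Envelope envelope) throws InvalidProtocolBufferException {
        switch (envelope.getType()) {
            case DATA:
                Communication.DataMessage data =
                        Communication.DataMessage.parseFrom(envelope.getPayload());
                // hand the message to the business logic
                break;
            default:
                // handle the other tunneled types here
                break;
        }
    }
}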

After long research and suffering... I came up with the idea of composing the messages into one wrapper message. Inside that message I use the oneof keyword to limit the allowed objects to only one. Check out the example:

message OneMessage {
    MessageType messageType = 1;

    oneof messageBody {
        Event event = 2;
        Request request  = 3;
        Response response = 4;
    }

    string messageCode = 5; //unique message code
    int64 timestamp = 6; //server time
}
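
Dispatching on that oneof in a handler then looks roughly like this; OneMessage stands for the generated Java class (its exact package and outer class depend on your .proto options), and MessageBodyCase is the enum protoc generates for the oneof:

// Sketch of consuming the wrapper above.
class OneMessageDispatcher {

    void dispatch(OneMessage msg) {
        switch (msg.getMessageBodyCase()) {
            case EVENT:
                // handle msg.getEvent()
                break;
            case REQUEST:
                // handle msg.getRequest()
                break;
            case RESPONSE:
                // handle msg.getResponse()
                break;
            case MESSAGEBODY_NOT_SET:
            default:
                // no body set: ignore or report, e.g. using msg.getMessageCode()
                break;
        }
    }
}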
Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow