I'm using Java sockets to make two Android devices running the same app communicate. The communication protocol is:

1. client sends packet size S
2. client sends byte array with size S

I'm using DataOutputStream and writeInt() to write the size as a raw value on the stream. The server then reads this value with DataInputStream and readInt(). The problem is that readInt() reads the right value only for the first packet. The second time, this method returns a seemingly random int.

Relevant code snippets:

Client side: this method is called on top of a working TCP connection with the server

public void write(byte[] packet)
{
    try
    {
        dataOutputStream.writeInt(packet.length);
        dataOutputStream.flush();

        dataOutputStream.write(packet);
        dataOutputStream.flush();        
    }
    catch (IOException e)
    {
        Log.e(ERROR_TAG, "write() failed", e);
    }
}

Server side: this is the loop that reads data

...

int readBytes = 0;
int packetSize = 0;

while (true) {
    byte[] buffer = new byte[NET_BUFF_SIZE];

    try // first it reads the packet size from packet header
    {
        packetSize = dataInputStream.readInt();
    } catch (IOException e) {
        Log.e(ERROR_TAG, "readInt() failed", e);
        return;
    }

    while (readBytes < packetSize) {
        try {
            int readResult = dataInputStream.read(buffer);

            if (readResult != -1) {
                readBytes += readResult;
            } else {
                break;
            }
        } catch (IOException e) {
            Log.e(ERROR_TAG, "read() failed", e);
            break;
        }
    }
}

So, when the client calls write() to send the second packet, the server reads a wrong size from the stream.

DataOutputStream and DataInputStream are initialized in this way:

// Server
inputStream = clientSocket.getInputStream();
dataInputStream = new DataInputStream(inputStream);

// Client
outputStream = socket.getOutputStream();
dataOutputStream = new DataOutputStream(outputStream);

What am I missing?


Solution

On the server side you need to re-initialize the readBytes variable at the start of each iteration of the while (true) loop:

while (true) {
    readBytes = 0;
    ...
}

Without the reset, readBytes keeps the count from the previous packet, so the inner loop exits immediately and the unread payload bytes are later misinterpreted by readInt() as a size. A debugger would help you spot this kind of problem sooner.
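If you keep the manual loop, it also helps to bound each read() so it can never consume bytes belonging to the next packet's header. Below is a minimal, self-contained sketch of that corrected loop; the class name ManualReadLoop and the in-memory streams standing in for the socket are illustrative assumptions, not code from the question.

```java
import java.io.*;

public class ManualReadLoop {
    // Sketch of the corrected server-side read: readBytes is reset for every
    // packet, and each read() is bounded so it never consumes bytes that
    // belong to the next packet's length header.
    static byte[] readPacket(DataInputStream in) throws IOException {
        int packetSize = in.readInt();
        byte[] buffer = new byte[packetSize];
        int readBytes = 0;                       // reset per packet
        while (readBytes < packetSize) {
            int n = in.read(buffer, readBytes, packetSize - readBytes);
            if (n == -1) {
                throw new EOFException("stream closed mid-packet");
            }
            readBytes += n;
        }
        return buffer;
    }

    public static void main(String[] args) throws IOException {
        // In-memory stream standing in for the TCP socket, to keep the demo runnable.
        ByteArrayOutputStream wire = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(wire);
        for (String s : new String[] { "alpha", "beta" }) {
            byte[] b = s.getBytes("UTF-8");
            out.writeInt(b.length);
            out.write(b);
        }
        out.flush();

        DataInputStream in = new DataInputStream(
                new ByteArrayInputStream(wire.toByteArray()));
        System.out.println(new String(readPacket(in), "UTF-8"));
        System.out.println(new String(readPacket(in), "UTF-8"));
    }
}
```

Because both packets travel over the same byte stream, the second readPacket() only works if the first one consumed exactly its own payload, which is what the bounded read guarantees.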

Additional tip

The server reads as much data as is available. It may read more than what's included in the packet that the client sent, or it may read less. With the loop you seem to handle the case when read returns less than what you expect, but you also should handle the case when it reads more than what's included in the packet. Remember that TCP is stream-oriented: even though you call flush there's no guarantee that the remote application receives the data in separate calls to read.

The DataInput interface defines a method called readFully that reads exactly as many bytes as you ask for, no more and no less. This means you can remove the loop entirely, simplifying the packet-reading code to this:

packetSize = dataInputStream.readInt();
dataInputStream.readFully(buffer, 0, packetSize);
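Putting the two sides together, a complete round trip with this framing scheme might look like the sketch below. The class name FramingDemo and the in-memory streams replacing the sockets are assumptions for the sake of a runnable example; on a real connection you would wrap socket.getInputStream() / socket.getOutputStream() exactly as in the question.

```java
import java.io.*;

public class FramingDemo {
    // Writes one length-prefixed packet: 4-byte big-endian size, then the payload.
    static void writePacket(DataOutputStream out, byte[] packet) throws IOException {
        out.writeInt(packet.length);
        out.write(packet);
        out.flush();
    }

    // Reads one length-prefixed packet, blocking until the whole payload arrives.
    static byte[] readPacket(DataInputStream in) throws IOException {
        int packetSize = in.readInt();
        byte[] buffer = new byte[packetSize];
        in.readFully(buffer, 0, packetSize); // no manual loop, no readBytes bookkeeping
        return buffer;
    }

    public static void main(String[] args) throws IOException {
        // In-memory stream standing in for the TCP socket.
        ByteArrayOutputStream wire = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(wire);
        writePacket(out, "first".getBytes("UTF-8"));
        writePacket(out, "second packet".getBytes("UTF-8"));

        DataInputStream in = new DataInputStream(
                new ByteArrayInputStream(wire.toByteArray()));
        System.out.println(new String(readPacket(in), "UTF-8"));
        System.out.println(new String(readPacket(in), "UTF-8"));
    }
}
```

Since readFully either fills the buffer completely or throws EOFException, the stream position is always left exactly at the next packet's header, so consecutive packets can never corrupt each other's size field.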
Licensed under: CC-BY-SA with attribution