Question

I'm trying to implement the Go-Back-N protocol in two separate client and server applications. Say my sequence numbers must fit in 3 bits, giving 2^3 = 8 possible sequence numbers and a window size of 2^3 - 1 = 7.

I initially send my whole window. The first two packets (0 and 1) are received correctly. Packet 2 is dropped. When the receiver gets packets 3 through 6, it is still expecting 2, so it must NAK each of them, indicating that it wants 2.

Sender     Receiver
  0           0
  1           1
  2    (packet dropped)
  3         nack2
  4         nack2
  5         nack2
  6         nack2

When the sender receives the first NAK 2, it understands that 0 and 1 have been received (through piggybacking) and moves its window forward, but it must also resend its window starting at sequence number 2 (so 2-3-4-5-6, and possibly 7-0). By the time the sender receives the second NAK 2, it has already sent those packets. Because of the protocol, the sender will again resend its entire window, including 2. The receiver will now possibly receive 2 (and the others), but the second retransmitted batch delivers 2 again, which is out of sequence, so the receiver has to NAK its expected packet, and so on. Am I correct in all these assumptions?
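The trace above can be sketched in a few lines. This is a minimal, hypothetical simulation (names like `receiver_step` are mine, not from any real stack), and it assumes one detail the question leaves open: the sender acts on the *first* NAK for a given sequence number and discards the duplicate NAKs that are already in flight, which is what prevents the endless retransmission cascade described.

```python
SEQ_BITS = 3
MAX_SEQ = 1 << SEQ_BITS   # 8 sequence numbers
WINDOW = MAX_SEQ - 1      # window size 7

def receiver_step(expected, seq, lost):
    """Go-Back-N receiver: accept only the in-order packet, NAK otherwise."""
    if lost:
        return expected, None                # nothing arrived, nothing to say
    if seq == expected:
        return (expected + 1) % MAX_SEQ, ("ACK", seq)
    return expected, ("NAK", expected)       # out of order: re-request

# --- first transmission of the whole window; packet 2 is dropped ---
expected = 0
replies = []
for seq in range(WINDOW):                    # send 0..6
    expected, reply = receiver_step(expected, seq, lost=(seq == 2))
    if reply:
        replies.append(reply)

print(replies)
# [('ACK', 0), ('ACK', 1), ('NAK', 2), ('NAK', 2), ('NAK', 2), ('NAK', 2)]

# --- sender side: act on the FIRST NAK 2 and ignore the duplicates ---
naks = [r for r in replies if r[0] == "NAK"]
retransmit_from = naks[0][1]                 # go back to 2
duplicates_ignored = len(naks) - 1           # the three later copies of NAK 2
print(retransmit_from, duplicates_ignored)   # 2 3
```

With duplicate NAKs ignored, the sender goes back to 2 exactly once per loss rather than once per out-of-order packet.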

If I am, it seems to me that Go-Back-N sends far more packets than Stop-and-Wait, and the problem only gets worse as the window size grows. What am I not getting?


Solution

The solution I found to this problem was simply to use more bits for the sequence number, and therefore a larger MAX. If your MAX is at least 2 × window size, a delayed NAK or ACK for 2 cannot be misinterpreted as referring to a packet in the current window.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow