Question

In a distributed systems class we've discussed an algorithm for performing a multicast that tolerates process failures and still guarantees that if any process receives the multicast, every process receives it exactly once.

However, this algorithm depends on a reliable unicast. If instead of a reliable unicast we have a lossy one that can randomly drop packets, is there any way to construct a multicast algorithm with the same properties?


Solution

Usually, reliable unicast (such as TCP or RUDP) is implemented on top of an unreliable unicast (IP).

So, yes, you can implement your multicast algorithm on top of unreliable unicast, but it will probably mean that you have to duplicate the functionality that would otherwise be provided by the reliable unicast protocol. This mostly means making sure that if a packet gets dropped, the sender retransmits it, which in practice requires acknowledgements, timeouts, and sequence numbers so that duplicates can be detected.
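As a rough illustration (not part of the multicast algorithm itself), here is a minimal stop-and-wait sketch in Python, assuming UDP as the lossy unicast. The names and constants (reliable_send, reliable_recv, TIMEOUT, MAX_RETRIES) are made up for this example; real reliable-transport code would also need windowing, connection setup, and so on.

# Stop-and-wait over UDP: each packet carries a sequence number, the sender
# retransmits until it sees an ACK, and the receiver deduplicates by sequence.
import socket
import struct

TIMEOUT = 0.5        # seconds to wait before retransmitting (assumed value)
MAX_RETRIES = 20     # give up after this many attempts, much like TCP eventually does

def reliable_send(sock: socket.socket, addr, seq: int, payload: bytes) -> bool:
    """Send one packet over lossy UDP; return True once it is acknowledged."""
    packet = struct.pack("!I", seq) + payload
    sock.settimeout(TIMEOUT)
    for _ in range(MAX_RETRIES):
        sock.sendto(packet, addr)
        try:
            ack, _ = sock.recvfrom(4)
            if struct.unpack("!I", ack)[0] == seq:
                return True            # acknowledged: delivery confirmed
        except socket.timeout:
            continue                   # data or ACK was lost: retransmit
    return False                       # network too lossy: give up

def reliable_recv(sock: socket.socket, expected_seq: int) -> bytes:
    """Wait for the packet with expected_seq, ACKing everything (including duplicates)."""
    while True:
        data, addr = sock.recvfrom(65535)
        seq = struct.unpack("!I", data[:4])[0]
        sock.sendto(struct.pack("!I", seq), addr)   # ACK duplicates too, in case our ACK was lost
        if seq == expected_seq:
            return data[4:]            # new data; stale retransmissions are dropped

The receiver re-ACKs duplicates because the sender cannot tell whether its data packet or the ACK was the one that got dropped; the sequence number is what lets the receiver discard the resulting duplicates, which is exactly the kind of bookkeeping a reliable unicast layer normally hides from you.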

Note that the guarantees given by 'reliable' protocols are usually weaker than they sound; they amount to best-effort delivery with retries. If the underlying network gets bad enough that data cannot be gotten across at all, the protocol simply gives up (TCP, for example, eventually aborts the connection after repeated retransmission timeouts).

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow