Question

I am working on a simple client-server program, with the intention of building a chat program. I am new to socket programming in C. I have learned that, to serve multiple clients, the server should fork a new process each time a client connects: whenever a client requests a connection, accept() returns a new descriptor, the server forks, and the parent closes that descriptor.
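
Roughly, the pattern I was taught looks like this (just a sketch reusing my variable names; handle_client() is a placeholder for the per-client work, and error handling is trimmed):

    for (;;) {
        int nsockfd = accept(lsockfd, (struct sockaddr *) &cli_addr, &cli_len);
        if (nsockfd < 0)
            continue;                   /* accept failed, try again */

        pid_t pid = fork();
        if (pid == 0) {                 /* child: serve this client */
            close(lsockfd);             /* child does not need the listening socket */
            handle_client(nsockfd);
            close(nsockfd);
            _exit(0);
        }
        close(nsockfd);                 /* parent closes its copy and goes back to accept() */
    }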

Instead, I did not close the descriptor, so each new client simply gets a new descriptor when accept() is invoked:

    nsockfd = accept(lsockfd, (struct sockaddr *) &cli_addr, &cli_len);

The returned descriptor is then stored in one of two variables:

    if (client1 < 0) {
        client1 = nsockfd;
        printf("if loop %d\n", nsockfd);
    } else {
        client2 = nsockfd;
        printf("else loop %d\n", nsockfd);
    }

The rest of the code is:

    /* greet the first client, including its address in the message */
    snprintf(buf, sizeof(buf), "Hi client1 (%s), nice to meet you.",
             inet_ntoa(cli_addr.sin_addr));
    ret = send(client1, buf, strlen(buf), 0);
    if (ret == -1) {
        perror("Error sending message");
        exit(1);
    }
    printf("SRV - %s\n", buf);

    /* greet the second client only if it has connected */
    if (client2 > 0) {
        snprintf(buf, sizeof(buf), "Hi client2 (%s), nice to meet you.",
                 inet_ntoa(cli_addr.sin_addr));
        ret = send(client2, buf, strlen(buf), 0);
        if (ret == -1) {
            perror("Error sending message");
            exit(1);
        }
        printf("SRV - %s\n", buf);
    }

The code works as intended here; each client ends up with only one of the two messages.

If this method works flawlessly, why is it taught that fork() should be used to serve each client?

I am testing on localhost; is that the reason this code works for me?

Solution

It isn't a concurrent server if you don't either fork() or process the connection in a (new?) thread. That's the definition of a concurrent server.

If I'm reading your code correctly, what you've got is a simple sequential server. It can only process one connection at a time. That's fine if the computation required for each response is minimal, as in your example. It's not so good if each response involves a lot of work, such as accessing a disk or a database.
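
For contrast, here is roughly what a thread-per-connection accept loop looks like. This is only a sketch, not your code: handle_client() stands in for whatever you do per client, it reuses your lsockfd/cli_addr/cli_len names, and it needs to be compiled with -pthread:

    #include <pthread.h>
    #include <stdlib.h>
    #include <unistd.h>

    void handle_client(int fd);          /* placeholder for the per-client work */

    static void *client_thread(void *arg)
    {
        int fd = *(int *) arg;
        free(arg);                       /* free the heap copy of the descriptor */
        handle_client(fd);
        close(fd);
        return NULL;
    }

    /* ... inside the accept loop ... */
    for (;;) {
        int *fdp = malloc(sizeof *fdp);
        *fdp = accept(lsockfd, (struct sockaddr *) &cli_addr, &cli_len);
        if (*fdp < 0) {
            free(fdp);
            continue;
        }
        pthread_t tid;
        if (pthread_create(&tid, NULL, client_thread, fdp) == 0)
            pthread_detach(tid);         /* thread cleans up after itself */
        else {
            close(*fdp);
            free(fdp);
        }
    }

Each connection then gets its own thread (or, with fork(), its own process), so one slow client no longer holds up the others.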

Note that a sequential server design is completely legitimate. So too is a concurrent server design. They should be applied to different workloads. Generally, though, a concurrent server will handle large traffic volumes better than a sequential server. Imagine if Google used sequential servers for responding to search requests!

Another design uses a thread pool or a process pool, with one thread or process farming out the work to the others. These are trickier to write so that they work well.
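
A minimal pre-forked pool, for example, looks roughly like this. Again this is only a sketch: handle_client() is a placeholder, NWORKERS is an arbitrary pool size, and lsockfd is the already-bound, listening socket:

    #include <sys/wait.h>
    #include <unistd.h>

    #define NWORKERS 4                   /* arbitrary pool size */

    /* Each worker blocks in accept() on the shared listening socket,
       so the kernel hands every new connection to exactly one of them. */
    for (int i = 0; i < NWORKERS; i++) {
        if (fork() == 0) {               /* worker process */
            for (;;) {
                int fd = accept(lsockfd, NULL, NULL);
                if (fd < 0)
                    continue;
                handle_client(fd);       /* placeholder for the per-client work */
                close(fd);
            }
        }
    }

    for (;;)                             /* parent just keeps the pool alive */
        wait(NULL);

This gives you concurrency without paying for a fork() on every connection, at the cost of having to pick a sensible pool size up front.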

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow