Question

So, I've been refactoring my videoconferencing application for a while, covering it with unit tests and more general tests. Finally, I got to the point where I need to write a loopback test with one client sending a video stream over to another client. This is not supposed to run as one process, even on the same machine. Nevertheless, I need to write this test to ensure that all the separate components work properly together. However, once I put it all together, I started to notice that some parts, previously covered with unit tests, behave slightly differently: deadlines are missed and real-time requirements are violated. I tend to believe this is happening because code for two clients runs slower in one executable than it would if the two clients were on separate machines (as they would be in a real-life scenario).

So, how should one design tests for such software and account for these constraints? I'm asking for best practices here.


Solution

Congratulations, your testing has now progressed to the stage where 'integration testing' is required.

You have mentioned that, in addition to unit testing, you have done some general testing so far, and that you have now put everything together and noticed that the testing reveals undesirable performance.

Integration testing is all about how the individual components operate together as larger functional units.

I would say that if you are testing your clients all together (in one process/application?), you are essentially testing that as the scenario. If you won't be doing that in real life, then don't test like that.

Instead, set up your test harness in such a way that you are indeed testing your clients in a manner similar to real life, and then further reduce the scope to specific areas. An example: if your machine can handle it, set up the two clients as separate executables. Maybe first run a test on sending data, then run a test on receiving it. That is, just concentrate on certain major areas, which is the point of integration testing.
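
As a minimal harness sketch, assuming the clients can be built as standalone executables (the binary names and command-line flags below are invented for illustration), you might spawn each client as its own OS process so they get separate address spaces and scheduling, much closer to the real deployment:

    import subprocess
    import sys

    # Hypothetical client binaries and flags; substitute your actual build artifacts.
    SENDER = "./sender_client"
    RECEIVER = "./receiver_client"

    def run_loopback_test(port: int = 5000, duration_s: int = 30) -> int:
        """Spawn receiver and sender as separate processes on one machine,
        instead of running both clients inside a single executable."""
        receiver = subprocess.Popen([RECEIVER, "--listen-port", str(port)])
        try:
            sender = subprocess.run(
                [SENDER, "--target", f"127.0.0.1:{port}",
                 "--duration", str(duration_s)],
                timeout=duration_s + 10,
            )
            return sender.returncode
        finally:
            receiver.terminate()
            receiver.wait(timeout=5)

    if __name__ == "__main__":
        sys.exit(run_loopback_test())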

So, to clarify: don't bother getting data from everything all at once. You can test sending data by creating a mock receiver and just monitoring the sending component. If you see it is really fast and timely, then you know how to narrow down issues when a real receiver starts limiting the sending stream.
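
One possible shape for such a mock receiver, as a sketch assuming the sender streams over UDP to a known local port (the port number and packet count here are illustrative assumptions), is to drain the socket and record only arrival timestamps, so no decoding or rendering work distorts the measurement:

    import socket
    import time

    def mock_receiver(port: int = 5000, expected_packets: int = 1000) -> list[float]:
        """Drain incoming UDP datagrams, recording only arrival times,
        so the sending component is observed in isolation."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("127.0.0.1", port))
        arrivals = []
        for _ in range(expected_packets):
            sock.recv(65535)                 # discard the payload; we only time arrivals
            arrivals.append(time.monotonic())
        sock.close()
        # Inter-arrival gaps show whether the sender keeps its pacing.
        gaps = [b - a for a, b in zip(arrivals, arrivals[1:])]
        print(f"max inter-arrival gap: {max(gaps) * 1000:.2f} ms")
        return arrivals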

So, an idea of some testing items:

  • sending data

  • receiving data

  • maybe sending just voice data

Monitor the latency, throughput, etc. for each item you want to test. It may be just voice, it may be just handshaking.
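
As a rough sketch of that monitoring, assuming each packet carries its send time (time.monotonic()) in its first 8 bytes, a wire layout invented here purely for illustration, latency and throughput for one test item could be summarised like this:

    import statistics
    import struct

    def summarize(samples: list[tuple[float, bytes]]) -> dict:
        """Summarize one test item (e.g. voice only) from
        (arrival_time, payload) pairs. Assumes the sender stamped
        time.monotonic() into the first 8 bytes, and that sender and
        receiver share a machine, so monotonic clocks are comparable."""
        latencies = [arrived - struct.unpack_from("!d", payload, 0)[0]
                     for arrived, payload in samples]
        total_bytes = sum(len(payload) for _, payload in samples)
        elapsed = max(samples[-1][0] - samples[0][0], 1e-9)
        return {
            "p50_latency_ms": statistics.median(latencies) * 1000,
            "max_latency_ms": max(latencies) * 1000,
            "throughput_kbps": total_bytes * 8 / 1000 / elapsed,
        }

Running the same summary separately for a voice-only run and a video-only run makes it obvious which stream breaks its deadlines first.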

Use two machines if one can't cut it with two or more threads, but realise what this might mean for end-user application performance. In the integration phase you are specifically testing functional and performance requirements.

After you have tested the integration of all these components, you can move on to verification and system testing, where everything is end to end and you test against the high-level requirements and goals of the project.
