How to deal with a not-yet-implemented method that will be done by a co-programmer?

softwareengineering.stackexchange https://softwareengineering.stackexchange.com/questions/363076

25-01-2021

Question

This is a question about how to work in teams.

Recently I worked on my first larger (~80 classes, Java) programming project with a team of 6 people, though only 4 of us were continuously working on the code. We distributed the work to be done early on, and at some point I needed to call a method that was not yet implemented by one of my co-programmers. What is the recommended way to deal with this?

Options I saw, though I don't really like any of them:

  1. Writing myself a //TODO and revisiting this line of code later to check if the method has been implemented in the meantime.

  2. Asking the corresponding team member to implement that now.

  3. Throwing a custom RuntimeException with a clear description of what is not yet implemented. (At least we wouldn't have to search for long to find out what is missing.)

  4. Adding the needed method to their class and writing them a //TODO in the method body, possibly also sending them a quick message about that change. (Now it's not my problem anymore, but this can cause annoying merge conflicts if they were working on this method in the meantime.)

  5. Defining abstract classes or interfaces for everything before actually writing the code that does the work. (Didn't work too well because these interfaces were often changed)
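
For reference, option 3 can be as small as this. The class and method names below are invented for illustration, not taken from the project in the question:

```java
// Hypothetical sketch of option 3: fail loudly and descriptively at runtime.
class NotYetImplementedException extends RuntimeException {
    NotYetImplementedException(String what) {
        super("Not yet implemented: " + what);
    }
}

class InvoiceService {
    // Owned by a teammate; remove this throw once the real logic lands.
    String renderInvoice(long orderId) {
        throw new NotYetImplementedException(
                "InvoiceService.renderInvoice (owner: teammate)");
    }
}
```

The message names the missing method and its owner, so a stack trace immediately tells you who to talk to.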

Was it helpful?

Solution

It is an interesting question and the answer might be easier than you think.

Simply put, write tests that validate your assumptions. It does not matter whether you or your fellow programmers do the implementation.

The long answer.

Any of the options that you list are somewhat passive and require you to come back and revisit the code (if any exists) sooner or later.

  • Comments need to be read and handled by the counterpart responsible for the implementation. Your code cannot be compiled in the meantime. If you check such a state into a code repository, your continuous integration pipeline will break, and it is bad practice anyway ... never check in broken code.
  • Runtime exceptions seem better, but are still toxic, because your fellow programmer could assume the implementation was already done without checking, leaving the system in an unstable state as well. If the method is rarely triggered, this could slip into production ... bad practice as well ... never check in "not-implemented" exceptions.
  • Waiting for your fellow programmers to implement the methods or a stub is also daunting. It breaks your workflow and theirs. What happens if they are sick, in a meeting, or on a coffee break? Do you want to spend your time waiting? ... don't wait for somebody if you don't have to.
  • Implementing the missing methods yourself is definitely the best way to go forward. But what happens if your implementation does not satisfy the whole use case and your fellow programmers need to amend or change it? How do you and they make sure it is still compatible with your intended use? The answer is easy again: write tests that verify, describe and document your intentions. If the tests break, it is easy to notice. If changes to that method break your feature ... you see it immediately. You both have a reason to communicate and decide what to do: split the functionality, change your implementation, etc. ... never check in code that is not sufficiently documented by tests.
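
The advice above can be sketched as a plain test. `PriceCalculator` and its contract here are invented for illustration; the point is that the test pins down the behaviour you rely on, and it stays meaningful no matter who ends up owning the implementation:

```java
// Sketch only: PriceCalculator and the 20%-tax contract are hypothetical.
class PriceCalculator {
    // Provisional implementation; the owning teammate may replace it,
    // but the test below must keep passing.
    double grossPrice(double netPrice) {
        // net price plus 20% tax, rounded to cents
        return Math.round(netPrice * 1.20 * 100) / 100.0;
    }
}

public class PriceCalculatorTest {
    public static void main(String[] args) {
        PriceCalculator calc = new PriceCalculator();
        // If a teammate changes grossPrice in an incompatible way, this fails fast.
        if (calc.grossPrice(10.00) != 12.00) {
            throw new AssertionError("grossPrice broke the agreed contract");
        }
        System.out.println("contract holds");
    }
}
```

A failing run of this test is exactly the "reason to communicate" the answer describes: the diff that broke it identifies both the change and the person to talk to.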

To achieve a sufficient level of testing, I would suggest you have a look at two disciplines.

  1. TDD - test-driven development - this will make sure you describe your intent and sufficiently test it. It also gives you the possibility to mock or fake methods and classes (also by using interfaces) that are not implemented yet. The code and tests will still compile and allow you to test your own code in isolation from your fellow programmers' code. (see: https://en.wikipedia.org/wiki/Test-driven_development )

  2. ATDD - acceptance test-driven development - this will create an outer loop (around the TDD loop) which helps you to test the feature as a whole. These tests will only turn green when the whole feature is implemented, thus giving you an automatic indicator when your fellows complete their work. Quite neat if you ask me.

Caveat: In your case, I would only write simple acceptance tests and not try to bring in too much of the business side, as it would just be too much to start with. Write simple integration tests that put together all the parts of the system the feature requires. That's all that is required.
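
A minimal acceptance-style test along those lines might wire the pieces together through the feature's entry point. All names here are hypothetical; until the teammate's real implementation lands, a fixed-rate fake keeps the test compilable and runnable:

```java
// Hypothetical outer-loop test: exercises the feature end to end.
interface TaxRates {
    double rateFor(String country);
}

class Checkout {
    private final TaxRates rates;

    Checkout(TaxRates rates) {
        this.rates = rates;
    }

    double total(String country, double net) {
        return net * (1 + rates.rateFor(country));
    }
}

public class CheckoutAcceptanceTest {
    public static void main(String[] args) {
        // The real TaxRates gets plugged in here once it exists; the fake
        // below stands in for it so the test stays green for your own code.
        Checkout checkout = new Checkout(country -> 0.19);
        if (Math.abs(checkout.total("DE", 100.0) - 119.0) > 1e-9) {
            throw new AssertionError("checkout total is off");
        }
        System.out.println("feature wired together correctly");
    }
}
```

Swapping the lambda for the teammate's implementation later turns this into the outer-loop check the answer describes: it only stays green when the whole feature works.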

This will allow you to put your code in a Continuous Integration pipeline and produce a highly reliable implementation.


Other tips

Ask for stubs.

Or write them yourself. Either way, you and your coworkers need to agree on the interfaces and how they're intended to be used. That agreement needs to be relatively solidified so you can develop against stubs -- not to mention, so you can create your own mocks for your unit testing...
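
A stub of that shape might look like the following; the interface and names are invented for illustration:

```java
// Hypothetical agreed-upon interface plus a stub you can code against today.
interface UserDirectory {
    String displayNameFor(String userId);
}

// TODO(teammate): replace with the real implementation once it is ready.
class StubUserDirectory implements UserDirectory {
    @Override
    public String displayNameFor(String userId) {
        // Canned, deterministic value so callers and their tests can run now.
        return "Stub User (" + userId + ")";
    }
}
```

Because callers depend only on `UserDirectory`, swapping the stub for the real implementation later is a one-line change in the wiring code, and no caller needs to be touched.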

In your situation, I would talk to the team member with responsibility for that function. It may be that they are in a position to prioritise the development of that function so you are able to start using it sooner.

I would steer clear of your fourth option. You've written all your code, and as you say, you no longer consider it to be your problem. Your colleague then writes the implementation of the function, and no longer considers it to be their problem. Who's actually going to test that the code YOU wrote works correctly?

Licensed under: CC-BY-SA with attribution