Question

While reading around the internet, I've seen that people are really into testing front-end applications. Some of them even say they would never hire a front-end developer who doesn't have testing experience.

I understand that testing is necessary when dealing with large amounts of computation, logic, and intertwined modules, which is usually not the case in front-end development. The project I'm working on will have a couple of modules like that, and I will write tests for those, but what should I do with the rest of the app?

For example, my current task is to create an AuthGuard service, and my project lead explicitly said that I need to write tests for it. While looking into it, I found many examples that are, in my opinion, useless.

For example I came across this function:

canActivate(): Observable<boolean> | Promise<boolean> | boolean {
  if (this.authService.isLoggedIn()) {
    return true;
  } else {
    this.router.navigate(['/']);
    return false;
  }
}

being tested this way:

it('should return true for a logged in user', () => {
  authService = { isLoggedIn: () => true };
  router = new MockRouter();
  authGuard = new AuthGuard(authService, router);

  expect(authGuard.canActivate()).toEqual(true);
});

Well, no way, Sherlock! Obviously it's going to return true when there is an if statement, because that is how if statements work. And this is not the worst I've seen. I've seen a person create a mock service and a mock API call with the same data, and then compare the two.

I'm writing to ask whether there is something wrong with the majority of our industry, or whether it's just me. Has test-driven development gained too much attention, with everyone writing articles about how to do it while never mentioning that maybe we don't need it?


Solution

Adversarial vs Aspirational

I think this is the problem you've tripped over.

In TDD the process is this:

  1. Write an Aspirational test describing the desired behaviour.
  2. Write the code in the unit that makes this test pass, while keeping the other tests passing.
  3. Review the Aspirations and make sure they are what you aspire to.
  4. Update your Aspirations.
  5. Repeat.

When your boss/manager/team lead turns around and says that it should be tested (in the context of TDD), they are talking about these Aspirational tests. In this methodology it makes sense to write a test that describes the happy and unhappy paths even for the most trivial of cases, because these aren't really tests: they are a design document. Perhaps it is better to think of them as self-checking properties.

However I suspect you are instead hearing that you should write the other kind of test - the adversarial kind.

In this kind of test you look at general knowledge (check box testing), the spec (black box testing), or the actual implementation (white box testing), and you hunt for actual weak points.

  • A check box test for example would fuzz an input field. Not because you know anything specific about it, but because it is a general attack that might happen, and you want to make sure the system can handle it.

  • A black box test might, for example, see what happens when you pop() an empty stack. Does it behave correctly? What if it is pop()ed four times in a row?

  • A white box test will look for that one statement that dereferences a null, or sets up an infinite loop, and proves that there is a problem here.

In which case, testing that if is just a non-starter. It doesn't make sense: an if(bool) is guaranteed to work, and if it doesn't, there is an issue with the platform/compiler, not the code itself. (Which might be useful to know, but isn't the point either.)
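The stack example above can be sketched as an actual black box test: it only uses the public interface and probes the empty-stack edge cases. The Stack class here is a hypothetical minimal implementation, purely so the attack has something to run against.

```typescript
// Hypothetical minimal stack; the adversarial checks below never look
// inside it (black box), they only abuse its public interface.
class Stack<T> {
  private items: T[] = [];
  push(item: T): void { this.items.push(item); }
  pop(): T {
    if (this.items.length === 0) throw new Error('pop on empty stack');
    return this.items.pop() as T;
  }
  isEmpty(): boolean { return this.items.length === 0; }
}

function expectThrows(fn: () => void, msg: string): void {
  let threw = false;
  try { fn(); } catch { threw = true; }
  if (!threw) throw new Error(msg);
}

const s = new Stack<number>();

// What happens when you pop() an empty stack? It should fail loudly,
// not silently return undefined or corrupt internal state.
expectThrows(() => s.pop(), 'popping an empty stack must fail');

// Quadruple pop(): repeated abuse must not change the failure mode.
for (let i = 0; i < 4; i++) {
  expectThrows(() => s.pop(), 'repeated empty pops must keep failing');
}

// The stack must still be usable after the attack.
s.push(42);
if (s.pop() !== 42) throw new Error('stack corrupted by failed pops');

console.log('black box edge cases hold');
```

Unlike the if-statement "test" from the question, these checks can genuinely fail: a naive implementation that just calls Array.prototype.pop() would return undefined on an empty stack and be caught here.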


It is always easier to write Aspirational tests up-front. Otherwise it feels like you are rehashing the implementation and just Yes-Manning the whole thing.

If you are put in such a position, try to ignore the implementation. Read the user stories, or look at the users of the function (not the function itself), and cobble together the expectations. This paints the acceptability picture from both a business perspective (from the stories) and a usage perspective (from the code that calls the function). It helps keep the tests from prescribing an implementation, and it also helps with the Yes-Man feeling.

Adversarial tests, on the other hand, can only be written against an implementation. By which I mean that you only know how to attack it once something is known about it.

  • Once it is decided that the input is a textbox, then you can write ways to attack it.
  • Once you know the semantics of the interface, you can write ways to attack it.
  • Once you have an implementation to examine, you can write ways to attack it.
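For instance, once it is decided that the input is a textbox, a check box test can throw a general-purpose list of hostile strings at whatever validates it. Everything here (isValidUsername, the particular rule, the input list) is hypothetical; the point is the shape of the attack, which requires no knowledge beyond "this field accepts free text".

```typescript
// Hypothetical validator standing in for whatever sits behind the textbox.
// Accepts 3-20 word characters; everything else is rejected.
function isValidUsername(input: string): boolean {
  return /^\w{3,20}$/.test(input);
}

// A small check box list of inputs any free-text field might receive.
const hostileInputs: string[] = [
  '',                          // empty submission
  ' ',                         // whitespace only
  'a'.repeat(10_000),          // oversized input
  "'; DROP TABLE users; --",   // SQL-injection shaped
  '<script>alert(1)</script>', // HTML/JS-injection shaped
  '\u0000\u202Eevil',          // control / bidi characters
];

// The aspiration is not a specific answer for each case, only that the
// validator never throws and never accepts any of them.
for (const input of hostileInputs) {
  const verdict = isValidUsername(input); // must not throw
  if (verdict) {
    throw new Error(`hostile input accepted: ${JSON.stringify(input)}`);
  }
}

console.log('fuzz list rejected cleanly');
```

A test like this earns its keep precisely because it is written without reading the validator's code: it encodes general attacks, not a restatement of the regex.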

If you are tasked with writing them, be aware of the depth of attack you can provide at this point in the development process. Also be aware of your own attachment to the work: if you are overly attached, it will lead to tests that coddle the code instead of revealing its weaknesses.

  • If it's your own code, leave time, if possible, between when you implement it and when you attack it (several months is a good start).
  • Leverage a checklist of general issues, and tailor it with knowledge about the dev who wrote the code (if that's you, honestly record and generalise the issues you tend to make).
  • Get someone else to have a go at the code (and add those issues to the list).
  • Look at the tests already written and ask whether a test would reveal something by going one step further.
  • Look at logs of known issues in the system; they are the breadcrumb trails of successful adversaries.
License: CC BY-SA with attribution
Not affiliated with softwareengineering.stackexchange