Question

As of C# 7.2, you can add the in parameter modifier before the parameter type to make the parameter essentially const.

It is like the ref or out keywords, except that in arguments cannot be modified by the called method. - MSDN

Example from MSDN:

void InArgExample(in int number)
{
    number = 19; // error CS8331, it prevents modification of this parameter
}

Why is it useful? You can guarantee to the caller that the argument passed will not be modified as a result of calling that function. A promise, in other words.


According to SOLID design principles and Clean Code by Robert Martin, is this considered a code smell along the same lines as out parameters?

Sources:

Code Smell of the Week: Obsessed with Out Parameters

What's wrong with output parameters?


Solution

You've taken the line from MSDN out of context.

The most important part of the page is the first sentence: "The in keyword causes arguments to be passed by reference." It is primarily in this respect that the keyword is similar to out and ref.

This is important when you pass large structs to functions. Passing them without a keyword is inefficient because the entire struct is copied (so that no modifications are visible outside the call). Passing by reference is more efficient.

However, using the old out and ref keywords for this purpose is problematic. out is completely useless: it cannot be used to pass information into the function. ref is possible, but confusing: it allows the function to modify its parameter (with changes visible outside). This is unusual for structs. It is a likely source of bugs if you do it, and it communicates the wrong intent if you don't intend to do it.

Hence the in keyword: a way to pass structs by reference without allowing them to be modified. And thus the sentence you quoted.
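As a rough sketch of the situation in is meant for (the struct and its members are invented purely for illustration):

// A fairly large value type; copying it on every call has a cost.
readonly struct Rect3D
{
    public readonly double X, Y, Z, Width, Height, Depth;

    public Rect3D(double x, double y, double z, double w, double h, double d)
    {
        X = x; Y = y; Z = z; Width = w; Height = h; Depth = d;
    }
}

static class Geometry
{
    // By value: all six doubles are copied on every call.
    public static double VolumeByValue(Rect3D r) => r.Width * r.Height * r.Depth;

    // By readonly reference: no copy, and any assignment to r is a compile error.
    public static double VolumeByIn(in Rect3D r) => r.Width * r.Height * r.Depth;
}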

Using in to prevent modification of the parameter only (as in the MSDN example) instead of using it to improve efficiency is a code smell, by the way. Never pass ints as in. There's no point in preventing modifications of normal by-value parameters, as the modifications cannot be seen outside the function anyway. (This confuses some new programmers, but understanding by-value and by-reference is something they need to learn anyway.)
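A tiny sketch of that last point, in the spirit of the example from the question:

void ByValueExample(int number)
{
    number = 19; // compiles fine, but only the local copy changes
}

int x = 5;
ByValueExample(x);
// x is still 5 here: the caller never sees the assignment,
// so adding in to an int parameter prevents nothing the caller could observe.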

OTHER TIPS

No, using the keyword in to make an argument passed by reference to a method read only is not a code smell.

Modifying an input parameter is a side-effect. It can easily lead to code that is difficult to reason about and hard to change.

Bertrand Meyer’s command-query separation principle (CQS) states that a method should be either a command or a query. If a method is a command, it mutates state (its instance variables) but returns nothing (void). If a method is a query, it has a return value but does not mutate state.

If you follow this principle, your code generally becomes easier to reason about and contains fewer bugs. For example, any method that only returns a value can safely be called multiple times and will always produce the same result.
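For example, a minimal sketch of CQS with an invented BankAccount class:

class BankAccount
{
    private decimal _balance;

    // Command: mutates state, returns nothing.
    public void Deposit(decimal amount)
    {
        _balance += amount;
    }

    // Query: returns a value, mutates nothing.
    // Calling it any number of times gives the same result
    // (until a command changes the state).
    public decimal GetBalance()
    {
        return _balance;
    }
}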

Back to the in keyword. MSDN writes:

Passing those arguments by reference avoids the (potentially) expensive copy

By using the in keyword you signal to the caller that the argument will be passed to the method by reference, yet that this is safe, because the argument won’t be changed.

As a side note, I would advise that when you use in in the method signature, you also specify it at the call site. It shows the intent, and when there is an overload without in, omitting the keyword at the call site causes that overload to be chosen instead.
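Roughly like this (Payload and Processor are invented names, just to illustrate the overload behaviour):

struct Payload { public double A, B, C, D; }

class Processor
{
    public void Process(in Payload p) { /* reads p by readonly reference */ }
    public void Process(Payload p)    { /* receives its own copy of p */ }

    public void Demo()
    {
        var data = new Payload();
        Process(data);     // without in at the call site, the by-value overload wins
        Process(in data);  // in at the call site selects the in overload and shows the intent
    }
}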

While the words “in” and “out” relate to each other, I think they are quite different in this case. While the out keyword provides an alternative way of returning output, the in parameter seems more like a way of marking the input parameter as read-only. Whether or not this is important to the caller, I can’t really say; I think it should be clear from the naming anyway.

It's too new to make a call.

To be a code smell, the snippet has to be indicative of a common bad design. With a new language feature there hasn't been time for people to abuse it enough yet.

I can see your point though. If you define an interface that limits the implementation this way, does it imply that all your other old-style methods do modify their parameters? Are side effects common in the rest of your code?

However, I can see in a few years everyone using in as standard. Not doing so might be the code smell.

Licensed under: CC-BY-SA with attribution