Question

For decades it's been the case that interfaces were only (only!) for specifying method signatures. We were told that this was the "right way to do things™".

Then Java 8 came out and said:

Well, er, uh, now you can define default methods. Gotta run, bye.

I'm curious how this is being digested both by experienced Java developers and by those who started developing in Java recently (in the last few years). I'm also wondering how this fits into Java orthodoxy and practice.

I'm building some experimental code and while I was doing some refactoring, I ended up with an interface that simply extends a standard interface (Iterable) and adds two default methods. And I'll be honest, I feel pretty damn good about it.

I know this is a little open-ended, but now that Java 8 has had some time to be used in real projects, is there an orthodoxy around the usage of default methods yet? Most of what I see when they are discussed is about how to add new methods to an interface without breaking existing consumers. But what about using them from the start, as in the example I gave above? Has anyone run into issues with providing implementations in their interfaces?

Solution

A great use case is what I call "lever" interfaces: interfaces that have only a small number of abstract methods (ideally one), but provide a lot of "leverage" in that they give you a lot of functionality: you only need to implement one method in your class but get a lot of other methods "for free". Think of a collection interface, for example, with a single abstract foreach method and default methods like map, fold, reduce, filter, partition, groupBy, sort, sortBy, etc.

Here are a couple of examples. Let's start with java.util.function.Function<T, R>. It has a single abstract method, R apply(T t). And it has two default methods that let you compose the function with another function in two different ways, either before or after. Both of those composition methods are implemented using just apply:

default <V> Function<V, R> compose(Function<? super V, ? extends T> before) {
    return (V v) -> apply(before.apply(v));
}

default <V> Function<T, V> andThen(Function<? super R, ? extends V> after) {
    return (T t) -> after.apply(apply(t));
}
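
Here is a small usage sketch (the function names are purely illustrative): inside a main method, say, composing two functions yields a third one without writing a new class or overriding anything:

java.util.function.Function<Integer, Integer> doubler = x -> x * 2;
java.util.function.Function<Integer, String> describe = x -> "value = " + x;

// andThen runs doubler first, then describe; compose runs its argument first
System.out.println(doubler.andThen(describe).apply(21)); // value = 42
System.out.println(describe.compose(doubler).apply(21)); // value = 42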

You could also create an interface for comparable objects, something like this:

interface MyComparable<T extends MyComparable<T>> {
  int compareTo(T other);

  default boolean lessThanOrEqual(T other) {
    return compareTo(other) <= 0;
  }

  default boolean lessThan(T other) {
    return compareTo(other) < 0;
  }

  default boolean greaterThanOrEqual(T other) {
    return compareTo(other) >= 0;
  }

  default boolean greaterThan(T other) {
    return compareTo(other) > 0;
  }

  default boolean isBetween(T min, T max) {
    return greaterThanOrEqual(min) && lessThanOrEqual(max);
  }

  default T clamp(T min, T max) {
    if (lessThan(min))    return min;
    if (greaterThan(max)) return max;
    return (T) this; // unchecked cast, but safe: the recursive bound makes T the implementing type
  }
}

class CaseInsensitiveString implements MyComparable<CaseInsensitiveString> {
  CaseInsensitiveString(String s) { this.s = s; }
  private String s;

  @Override public int compareTo(CaseInsensitiveString other) {
    return s.toLowerCase().compareTo(other.s.toLowerCase());
  }
}
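
A quick usage sketch (the values are just illustrative): the class implements nothing but compareTo, yet all the comparison helpers come along for free:

CaseInsensitiveString a = new CaseInsensitiveString("apple");
CaseInsensitiveString b = new CaseInsensitiveString("BANANA");
CaseInsensitiveString c = new CaseInsensitiveString("Cherry");

System.out.println(a.lessThan(b));      // true  ("apple" < "banana", ignoring case)
System.out.println(b.isBetween(a, c));  // true
System.out.println(c.clamp(a, b) == b); // true  ("cherry" is clamped down to the max, b)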

Or an extremely simplified collections framework, where all collection operations return a Collection, regardless of what the original type was:

interface MyCollection<T> {
  void forEach(java.util.function.Consumer<? super T> f);

  default <R> java.util.Collection<R> map(java.util.function.Function<? super T, ? extends R> f) {
    java.util.Collection<R> l = new java.util.ArrayList<>();
    forEach(el -> l.add(f.apply(el)));
    return l;
  }
}

class MyArray<T> implements MyCollection<T> {
  private T[] array;

  MyArray(T[] array) { this.array = array; }

  @Override public void forEach(java.util.function.Consumer<? super T> f) {
    for (T el : array) f.accept(el);
  }

  @Override public String toString() {
    StringBuilder sb = new StringBuilder("(");
    map(el -> el.toString()).forEach(s -> { sb.append(s); sb.append(", "); } );
    if (sb.length() > 1) sb.setLength(sb.length() - 2); // drop the trailing ", " (and guard against an empty array)
    return sb.append(")").toString();
  }

  public static void main(String... args) {
    MyArray<Integer> array = new MyArray<>(new Integer[] {1, 2, 3, 4});
    System.out.println(array);
    // (1, 2, 3, 4)
  }
}

This becomes very interesting in combination with lambdas, because such a "lever" interface can be implemented by a lambda (it is a SAM interface).
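
A hedged sketch using the MyCollection interface above: because forEach is its only abstract method, a lambda can serve as the whole implementation and still pick up the default map:

MyCollection<Integer> oneToThree = consumer -> {
  consumer.accept(1);
  consumer.accept(2);
  consumer.accept(3);
};

// The lambda supplies only forEach; map comes from the default method.
java.util.Collection<String> labels = oneToThree.map(i -> "#" + i);
System.out.println(labels); // [#1, #2, #3]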

This is the same use case that extension methods were added for in C♯, but default methods have one distinct advantage: they are "proper" instance methods, which means they have access to private implementation details of the interface (private interface methods are coming in Java 9), whereas extension methods are only syntactic sugar for static methods.
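
To illustrate that last point with a hypothetical sketch (Java 9+ syntax, names made up for the example): a private interface method can hold shared logic that the default methods call, without exposing it to implementors or callers:

interface Greeter {
  String name();

  default String greet()       { return decorate("Hello, " + name()); }
  default String greetLoudly() { return decorate("HELLO, " + name().toUpperCase()); }

  // Private interface methods (Java 9+) are visible only inside the interface itself.
  private String decorate(String s) { return "*** " + s + " ***"; }
}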

Should Java ever get Interface Injection, it would also allow type-safe, scoped, modular monkey-patching. This would be very interesting for language implementors on the JVM: at the moment, for example, JRuby either inherits from or wraps Java classes to provide them with additional Ruby semantics, but ideally, they want to use the same classes. With Interface Injection and Default Methods, they could inject e.g. a RubyObject interface into java.lang.Object, so that a Java Object and a Ruby Object are the exact same thing.

Licensed under: CC-BY-SA with attribution