Question

I'm writing a large-scale application where I'm trying to conserve as much memory as possible as well as boost performance. As such, when I have a field that I know will only hold values from 0 to 10 or from -100 to 100, I try to use the short data type instead of int.

What this means for the rest of the code, though, is that all over the place when I call these functions, I have to downcast simple ints into shorts. For example:

Method Signature

public void coordinates(short x, short y) ...

Method Call

obj.coordinates((short) 1, (short) 2);

It's like that all throughout my code because the literals are treated as ints and aren't being automatically downcast or typed based on the function parameters.

As such, is any performance or memory gain actually significant once this downcasting occurs? Or is the conversion process so efficient that I can still pick up some gains?

The solution

There is no performance benefit of using short versus int on 32-bit platforms, in all but the case of short[] versus int[] - and even then the cons usually outweigh the pros.

Assuming you're running on either x64, x86 or ARM-32:

  • When in use, 16-bit SHORTs are held in integer registers, which are either 32 or 64 bits wide, just the same as ints. I.e. while the short is in use, you gain no memory or performance benefit versus an int.
  • When on the stack, 16-bit SHORTs are stored in 32-bit or 64-bit "slots" in order to keep the stack aligned (just like ints). You gain no performance or memory benefit from using SHORTs versus INTs for local variables.
  • When being passed as parameters, SHORTs are auto-widened to 32 or 64 bits as they are pushed onto the stack (unlike ints, which are just pushed). Your code here is actually slightly less performant and has a slightly bigger (code) memory footprint than if you used ints.
  • When storing global (static) variables, these are automatically expanded to take up 32-bit or 64-bit slots to guarantee alignment of pointers (references). This means you get no performance or memory benefit from using SHORTs versus INTs for global (static) variables.
  • When storing fields, these live in a structure in heap memory that maps to the layout of the class. Within that structure, fields are automatically padded to 32-bit or 64-bit boundaries to maintain alignment on the heap. You get no performance or memory benefit from using SHORTs versus INTs for fields (see the layout sketch after this list).
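
If you want to see that field padding for yourself, the OpenJDK JOL (Java Object Layout) tool prints the exact layout the JVM chooses. Below is a minimal sketch, assuming the jol-core library (org.openjdk.jol:jol-core) is on the classpath; the class names are only illustrative.

    import org.openjdk.jol.info.ClassLayout;

    public class LayoutDemo {
      static class TwoShorts { short s, t; }
      static class TwoInts   { int s, t; }

      public static void main(String[] args) {
        // Prints field offsets, padding and total instance size for each class,
        // so you can compare how much (or how little) the short fields save.
        System.out.println(ClassLayout.parseClass(TwoShorts.class).toPrintable());
        System.out.println(ClassLayout.parseClass(TwoInts.class).toPrintable());
      }
    }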

The only benefit you'll ever see for using SHORTs versus INTs is in the case where you allocate an array of them. In this case, an array of N shorts is roughly half as long as an array of N ints. For example, new short[1000000] takes roughly 2 MB of heap (2 bytes per element plus a small array header), while new int[1000000] takes roughly 4 MB.

Other than the performance benefit that comes from keeping a hot loop's working set in cache - the case of complex but localized math over a large array of shorts - you'll never see a benefit from using SHORTs versus INTs.

In ALL other cases - shorts used for fields, globals, parameters and locals - there is no difference between a SHORT and an INT other than the number of bits it can store.

My advice, as always: before making your code more difficult to read and more artificially restricted, BENCHMARK it to find out where the memory and CPU bottlenecks actually are, and then tackle those.
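
For example, a micro-benchmark along these lines will tell you whether the short[] case even matters for your workload. This is only a rough sketch, assuming the JMH harness (org.openjdk.jmh) is set up in your build; the class and field names are illustrative.

    import org.openjdk.jmh.annotations.Benchmark;
    import org.openjdk.jmh.annotations.Scope;
    import org.openjdk.jmh.annotations.Setup;
    import org.openjdk.jmh.annotations.State;

    @State(Scope.Benchmark)
    public class ShortVsIntSum {
      private short[] shorts;
      private int[] ints;

      @Setup
      public void setup() {
        shorts = new short[1_000_000];
        ints = new int[1_000_000];
        for (int i = 0; i < ints.length; i++) {
          shorts[i] = (short) i;
          ints[i] = i;
        }
      }

      // Returning the sum keeps the JIT from eliminating the loop as dead code.
      @Benchmark
      public long sumShorts() {
        long sum = 0;
        for (short s : shorts) sum += s;
        return sum;
      }

      @Benchmark
      public long sumInts() {
        long sum = 0;
        for (int i : ints) sum += i;
        return sum;
      }
    }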

I strongly suspect that if you ever come across the case where your app is suffering from use of ints rather than shorts, then you'll have long since ditched Java for a less memory/CPU hungry runtime anyway, so doing all of this work upfront is wasted effort.

Other tips

As far as I can see, the casts per se should have no runtime costs (whether using short instead of int actually improves performance is debatable, and depends on the specifics of your application).

Consider the following:

public class Main {
    public static void f(short x, short y) {
    }

    public static void main(String args[]) {
        final short x = 1;
        final short y = 2;
        f(x, y);
        f((short)1, (short)2);
    }
}

The last two lines of main() compile to:

  // f(x, y)
   4: iconst_1      
   5: iconst_2      
   6: invokestatic  #21                 // Method f:(SS)V

  // f((short)1, (short)2);
   9: iconst_1      
  10: iconst_2      
  11: invokestatic  #21                 // Method f:(SS)V

As you can see, they are identical (you can reproduce this disassembly with javap -c Main). The cast from the int literal to short happens at compile time and has no runtime performance impact.
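
Note that Java only narrows an integer constant implicitly in assignment contexts - which is why final short x = 1; above needs no cast while the method arguments do. A minimal sketch of the difference (the class name is only illustrative):

    public class NarrowingDemo {
      static void coordinates(short x, short y) {
      }

      public static void main(String[] args) {
        short a = 1;                         // OK: a constant that fits in short is narrowed at compile time
        // coordinates(1, 2);                // would NOT compile: arguments get no implicit narrowing
        coordinates((short) 1, (short) 2);   // explicit casts, folded into the constants at compile time
        System.out.println(a);
      }
    }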

You need a way to check the effect of your type choices on the memory use. If short vs. int in a given situation is going to gain performance through lower memory footprint, the effect on memory should be measurable.

Here is a simple method for measuring the amount of memory in use:

      private static long inUseMemory() {
        Runtime rt = Runtime.getRuntime();
        // gc() is only a request to the JVM, so treat the figures as approximate.
        rt.gc();
        final long memory = rt.totalMemory() - rt.freeMemory();
        return memory;
      }

I am also including an example of a program using that method to check memory use in some common situations. The memory increase for allocating an array of a million shorts confirms that short arrays use two bytes per element. The memory increases for the various object arrays indicate that changing the type of one or two fields makes little difference.

Here is the output from one run. YMMV.

Before short[1000000] allocation: In use: 162608 Change 162608
After short[1000000] allocation: In use: 2162808 Change 2000200
After TwoShorts[1000000] allocation: In use: 34266200 Change 32103392
After NoShorts[1000000] allocation: In use: 58162560 Change 23896360
After TwoInts[1000000] allocation: In use: 90265920 Change 32103360
Dummy to keep arrays live -378899459

The rest of this answer is the program source:

    public class Test {
      private static int BIG = 1000000;
      private static long oldMemory = 0;

      public static void main(String[] args) {
        short[] megaShort;
        NoShorts[] megaNoShorts;
        TwoShorts[] megaTwoShorts;
        TwoInts[] megaTwoInts;
        System.out.println("Before short[" + BIG + "] allocation: "
            + memoryReport());
        megaShort = new short[BIG];
        System.out
            .println("After short[" + BIG + "] allocation: " + memoryReport());
        megaTwoShorts = new TwoShorts[BIG];
        for (int i = 0; i < BIG; i++) {
          megaTwoShorts[i] = new TwoShorts();
        }
        System.out.println("After TwoShorts[" + BIG + "] allocation: "
            + memoryReport());
        megaNoShorts = new NoShorts[BIG];
        for (int i = 0; i < BIG; i++) {
          megaNoShorts[i] = new NoShorts();
        }
        System.out.println("After NoShorts[" + BIG + "] allocation: "
            + memoryReport());
        megaTwoInts = new TwoInts[BIG];
        for (int i = 0; i < BIG; i++) {
          megaTwoInts[i] = new TwoInts();
        }
        System.out.println("After TwoInts[" + BIG + "] allocation: "
            + memoryReport());

        System.out.println("Dummy to keep arrays live "
            + (megaShort[0] + megaTwoShorts[0].hashCode() + megaNoShorts[0]
                .hashCode() + megaTwoInts[0].hashCode()));

      }

      private static long inUseMemory() {
        Runtime rt = Runtime.getRuntime();
        rt.gc();
        final long memory = rt.totalMemory() - rt.freeMemory();
        return memory;
      }

      private static String memoryReport() {
        long newMemory = inUseMemory();
        String result = "In use: " + newMemory + " Change "
            + (newMemory - oldMemory);
        oldMemory = newMemory;
        return result;
      }
    }

    class NoShorts {
      //char a, b, c;
    }

    class TwoShorts {
      //char a, b, c;
      short s, t;
    }

    class TwoInts {
      //char a, b, c;
      int s, t;
    }

First, I want to confirm the memory savings, as I saw some doubts raised. Per the documentation of short in the official tutorial (http://docs.oracle.com/javase/tutorial/java/nutsandbolts/datatypes.html):

short: The short data type is a 16-bit signed two's complement integer. It has a minimum value of -32,768 and a maximum value of 32,767 (inclusive). As with byte, the same guidelines apply: you can use a short to save memory in large arrays, in situations where the memory savings actually matters.

By using short you do save memory in large arrays (hopefully that is your case), hence it is a good idea to use it.

Now to your question:

Is the performance/memory benefit of short nullified by downcasting?

Short answer: NO. The downcast from int to short happens at compile time, so it has no performance impact; and since you are saving memory, it may even result in better performance in memory-constrained scenarios.

Licensed under: CC-BY-SA with attribution
Not affiliated with Stack Overflow