Question

I have been programming in Java since 2004, mostly enterprise and web applications. But I have never used short or byte, other than in a toy program just to see how those types work. Even in a for loop of 100 iterations, we usually go with int. And I don't remember ever coming across any code that made use of byte or short, other than in some public APIs and frameworks.

Yes, I know you can use a short or byte to save memory in large arrays, in situations where the memory savings actually matter. Does anyone care to practice that? Or is it just something in the books?

[Edited]

Using byte arrays for network programming and socket communication is quite common. Thanks, Darren, for pointing that out. Now how about short? Ryan gave an excellent example. Thanks, Ryan.

Solution

Keep in mind that Java is also used on mobile devices, where memory is much more limited.

OTHER TIPS

I use byte a lot, usually in the form of byte arrays or a ByteBuffer, for network communication of binary data.
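A minimal sketch of that pattern, assuming a blocking channel (the host, port, and buffer size are purely illustrative):

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

public class BinaryRead {
    public static void main(String[] args) throws IOException {
        try (SocketChannel channel = SocketChannel.open(new InetSocketAddress("example.com", 7000))) {
            ByteBuffer buffer = ByteBuffer.allocate(1024);
            while (channel.read(buffer) > 0) {
                // blocking reads keep filling the buffer until EOF or the buffer is full
            }
            buffer.flip(); // switch from filling the buffer to draining it
            byte[] payload = new byte[buffer.remaining()];
            buffer.get(payload);
            System.out.println("Read " + payload.length + " bytes");
        }
    }
}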

I rarely use float or double, and I don't think I've ever used short.

I used 'byte' a lot in C/C++ code implementing functionality like image compression (i.e. running a compression algorithm over each byte of a black-and-white bitmap) and processing binary network messages (by interpreting the bytes in the message).

However I have virtually never used 'float' or 'double'.

The primary usage I've seen for them is when processing data with an unknown structure, or even no real structure. Network programming is an example of the former (whoever is sending the data knows what it means, but you might not); something like image compression of 256-color (or grayscale) images is an example of the latter.

Off the top of my head, grep comes to mind as another use, as does any sort of file copy. (Sure, the OS will do it, but sometimes that's not good enough.)
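For the file-copy case, a plain byte[] buffer is essentially the whole program; a minimal sketch (the file names are placeholders):

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;

public class Copy {
    public static void main(String[] args) throws IOException {
        try (FileInputStream in = new FileInputStream("src.bin");
             FileOutputStream out = new FileOutputStream("dst.bin")) {
            byte[] buffer = new byte[8192]; // the only place the data ever lives
            int n;
            while ((n = in.read(buffer)) != -1) {
                out.write(buffer, 0, n);
            }
        }
    }
}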

The Java language itself makes it unreasonably difficult to use the byte or short types. Whenever you perform any operation on a byte or short value, Java promotes it to an int first, and the result of the operation is returned as an int. Also, they're signed, and there are no unsigned equivalents, which is another frequent source of frustration.

So you end up using byte a lot because it's still the basic building block of all things cyber, but the short type might as well not exist.
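A quick illustration of the promotion rule and the signed wrap-around (nothing here is project-specific):

public class Promotion {
    public static void main(String[] args) {
        byte a = 10, b = 20;
        // byte c = a + b;        // does not compile: a + b is promoted to int
        byte c = (byte) (a + b);  // a cast is required on every arithmetic result

        short s = 1;
        // s = s + 1;             // does not compile for the same reason
        s += 1;                   // compound assignment hides an implicit cast

        byte big = (byte) 200;    // and since byte is signed, 200 wraps around...
        System.out.println(big);  // prints -56
        System.out.println(c + " " + s);
    }
}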

Until today I hadn't noticed how seldom I use them.

I've used byte for network-related stuff, but most of the time it was for my own tools/learning. In work projects these things are handled by frameworks (JSP, for instance).

Short? Almost never.

Long? Neither.

My preferred integer type is always int: for loops, counters, etc.

When data comes from another place (a database, for instance) I use the proper type, but for literals I always use int.

I use bytes in lots of different places, mostly involving low-level data processing. Unfortunately, the designers of the Java language made bytes signed. I can't think of any situation in which having negative byte values has been useful. Having a 0-255 range would have been much more helpful.

I don't think I've ever used shorts in any proper code. I also never use floats (if I need floating point values, I always use double).
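The standard workaround for the signedness is the & 0xFF masking idiom, or Byte.toUnsignedInt since Java 8; a tiny demonstration:

public class UnsignedBytes {
    public static void main(String[] args) {
        byte b = (byte) 0xE0;                       // bit pattern 1110 0000
        System.out.println(b);                      // -32: byte is signed
        System.out.println(b & 0xFF);               // 224: the classic masking idiom
        System.out.println(Byte.toUnsignedInt(b));  // 224: same thing, Java 8+
    }
}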

I agree with Tom. Ideally, in high-level languages we shouldn't be concerned with the underlying machine representations. We should be able to define our own ranges or use arbitrary precision numbers.

When we are programming for electronic devices like mobile phones, we use byte and short. In those cases we have to take care with memory management.

It's perhaps more interesting to look at the semantics of int. Are those arbitrary limits and silent truncation what you want? Application-level code really wants arbitrary-sized integers; it's just that Java has no way of expressing those reasonably.
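The silent truncation is easy to demonstrate, and the partial remedies Java does offer (Math.addExact since Java 8, and BigInteger) show how clumsy the workarounds are compared to a built-in arbitrary-precision type:

import java.math.BigInteger;

public class Overflow {
    public static void main(String[] args) {
        int max = Integer.MAX_VALUE;
        System.out.println(max + 1);  // -2147483648: silent wrap-around
        System.out.println(BigInteger.valueOf(max).add(BigInteger.ONE)); // 2147483648, but clunky
        try {
            Math.addExact(max, 1);    // throws instead of silently wrapping
        } catch (ArithmeticException e) {
            System.out.println("overflow detected: " + e.getMessage());
        }
    }
}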

I have used bytes when saving state while doing model checking. In that application the space savings are worth the extra work. Otherwise I never use them.

I found I was using byte variables when doing some low-level image processing. The .NET GDI+ draw routines were really slow, so I hand-rolled my own.

Most times, though, I stick with signed integers unless I am forced to use something larger, given the problem constraints. Any sort of physics modeling I do usually requires floats or doubles, even if I don't need the precision.

Apache POI used short in quite a few places, probably because of Excel's row/column number limits.

A few months ago they changed to int, replacing

createCell(short columnIndex)

with

createCell(int column).

In in-memory datagrids, it can be useful. The concept of a datagrid like Gemfire is to have a huge distributed map. When you don't have enough memory you can overflow to disk with an LRU strategy, but the keys of all entries of your map remain in memory (at least with Gemfire).

Thus it is very important to give your keys a small footprint, particularly if you are handling very large datasets. For the entry values, when you can, it's also better to use the appropriate type with a small memory footprint...
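As an illustrative sketch (the key fields and the packing scheme are hypothetical, not Gemfire API): packing two small ids into one long key avoids the object header and per-character storage that a composite String key would cost for every entry.

import java.util.HashMap;
import java.util.Map;

public class CompactKeys {
    // A String key like "12345:678" costs an object header plus a character array
    // per entry; a boxed Long is a single small object with one 64-bit field.
    static long key(int customerId, int regionId) {
        return ((long) customerId << 32) | (regionId & 0xFFFFFFFFL);
    }

    public static void main(String[] args) {
        Map<Long, String> grid = new HashMap<>();
        grid.put(key(12345, 678), "entry value");
        System.out.println(grid.get(key(12345, 678)));
    }
}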

I have used shorts and bytes in Java apps communicating with custom USB or serial micro-controllers, to receive 10-bit values wrapped in 2 bytes as shorts.
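A sketch of unpacking such a value; the framing (low 2 bits of the first byte carry the value's high bits) is my assumption, not something specified above:

public class TenBit {
    // Assumed wire layout: first byte holds the top 2 bits, second byte the low 8 bits.
    static short decode10Bit(byte hi, byte lo) {
        return (short) (((hi & 0x03) << 8) | (lo & 0xFF)); // masks also undo sign extension
    }

    public static void main(String[] args) {
        byte hi = 0x02, lo = (byte) 0xFF;
        System.out.println(decode10Bit(hi, lo)); // 767
    }
}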

Bytes and shorts are extensively used in Java Card development. Take a look at my answer to "Are there any real life uses for the Java byte primitive type?".
