Question

I realized today that I have blindly just followed this requirement for years without ever really asking why. Today, I ran across a NotSerializableException with a model object I created from scratch and I realized enough is enough.

I was told this was because of session replication between load-balanced servers, but I know I've seen other objects at session scope that do not implement Serializable. Is this the real reason?

Solution

It is the real reason. Whether it actually matters in practice depends entirely on whether the web server or application server will in fact serialize the object (or validate that it is serializable, and realistically the only way to do that is to actually serialize it).

Most application servers have at least the option of not being strict about it. If you don't use load-balanced servers that actually share state, you may well have used non-serializable session objects without a problem.
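To make the failure concrete, here is a minimal sketch of what a replicating container effectively does with each session attribute. The class names (`SerializableUser`, `PlainUser`) and the `serialize` helper are illustrative assumptions, not container internals:

```java
import java.io.*;

// Hypothetical session model objects (names are assumptions for illustration).
class SerializableUser implements Serializable {
    private static final long serialVersionUID = 1L;
    String name;
    SerializableUser(String name) { this.name = name; }
}

class PlainUser {                      // does NOT implement Serializable
    String name;
    PlainUser(String name) { this.name = name; }
}

public class SessionReplicationDemo {
    // Mimics what a replicating container does with a session attribute.
    static byte[] serialize(Object attribute) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(attribute);
        }
        return bytes.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        serialize(new SerializableUser("alice"));   // fine
        try {
            serialize(new PlainUser("bob"));        // the container would fail here
        } catch (NotSerializableException e) {
            System.out.println("NotSerializableException: " + e.getMessage());
        }
    }
}
```

A container that replicates sessions hits exactly this exception the moment it tries to ship a non-serializable attribute to another node.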

OTHER TIPS

Because in order to be transferred across a wire, objects need to be serializable to a form that can be put on the wire.

Like binary, XML, JSON, or similar.
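As a minimal sketch of that idea, here is an object turned into wire-ready bytes by hand (the `Point` class and its `toJson` method are assumptions for illustration; real code would use a library such as Jackson):

```java
import java.nio.charset.StandardCharsets;

// Sketch: turning an in-memory object into bytes that can go on the wire.
public class WireFormatDemo {
    static class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
        // Hand-rolled JSON, purely to show the object-to-text-to-bytes path.
        String toJson() { return "{\"x\":" + x + ",\"y\":" + y + "}"; }
    }

    public static void main(String[] args) {
        Point p = new Point(3, 4);
        byte[] wireBytes = p.toJson().getBytes(StandardCharsets.UTF_8);
        System.out.println(new String(wireBytes, StandardCharsets.UTF_8));
        // prints {"x":3,"y":4}
    }
}
```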

There's more info here: Should any domain object not be serializable?

I think the concept is akin to asking why solid food must be chewed before being swallowed for digestion. But of course, the difference is that digested food cannot in any way be deserialized.

I remember using Sun RPC (nowadays called ONC RPC), which performs XDR encoding because computer platforms/systems represent their data in their own respective forms, for example big-endian vs. little-endian.

But the JVM, regardless of the machine it runs on, is big-endian, so endianness should not be a reason.
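The endianness point can be seen directly: the same `int` has a different byte layout under each byte order, and Java's standard I/O (`DataOutputStream`, and `ByteBuffer` by default) always uses big-endian. A small sketch:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Sketch: the same int value laid out under each byte order.
public class EndiannessDemo {
    static byte[] encode(int value, ByteOrder order) {
        return ByteBuffer.allocate(4).order(order).putInt(value).array();
    }

    public static void main(String[] args) {
        byte[] big = encode(1, ByteOrder.BIG_ENDIAN);       // 00 00 00 01
        byte[] little = encode(1, ByteOrder.LITTLE_ENDIAN); // 01 00 00 00
        System.out.println(big[3] + " " + little[0]);       // both 1
    }
}
```

Because serialized Java data is always written big-endian, two JVMs on machines with different native byte orders still agree on the wire format.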

A data structure in computer memory contains pointers, and the elements of an object may not sit in a contiguous memory block. However, when you pass an object through I/O to another system, you cannot pass along that object's memory layout.

An object needs to be serialized before being stored in a database because you cannot (and would not want to) replicate the dynamically changing arrangement of system memory.

Our data representation layers on the network are all bit-stream-based. Therefore, when you wish to pass data from one system to another, you have to convert the multidimensional data represented in memory into a form that can be streamed through the network byte by byte (actually, bit by bit), often passing through compression and security encryption along the way. Compression and encryption routines are blind to OO structure; they presume bit streams. Network switches are equally OO-structure-blind. Network transmission does not even see bits: the bits are encoded into a transmission signal, often an analogue sinusoid, and then modulated. None of these processes work on the multidimensional/hierarchical schemata of OO-structured data.
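That structure-blindness is easy to demonstrate: a compression routine accepts any byte stream, including serialized object bytes, and never needs to know what the bytes represent. A minimal sketch using the JDK's GZIP streams:

```java
import java.io.*;
import java.util.zip.*;

// Sketch: compression sees only a byte stream, never object structure.
public class StreamCompressionDemo {
    static byte[] gzip(byte[] plain) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bytes)) {
            gz.write(plain);
        }
        return bytes.toByteArray();
    }

    static byte[] gunzip(byte[] packed) throws IOException {
        try (GZIPInputStream gz = new GZIPInputStream(new ByteArrayInputStream(packed))) {
            return gz.readAllBytes();
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] payload = "serialized object bytes, serialized object bytes"
                .getBytes(java.nio.charset.StandardCharsets.UTF_8);
        byte[] packed = gzip(payload);
        System.out.println(java.util.Arrays.equals(gunzip(packed), payload)); // true
    }
}
```

The `payload` here stands in for whatever `ObjectOutputStream` produced; the GZIP layer treats it as opaque bytes either way.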

I suppose you could perform object-level obfuscation and encryption, yet you still have to allow the system to convert those objects to bit streams, by converting them to character streams first.

Marshalling is when a shepherd takes a herd of sheep and marshals them through a single-sheep bridge over troubled waters. Likewise, a marshaller has to marshal our objects into a serial schema with the references written in, so that when the flock of information sheep emerges at the other end of the bridge, the conjugate demarshaller can reassemble it into its hierarchical schema. In our case, the sheep are marshalled not just across a bridge but through narrow, precarious windings and landslides, where the network transmission equipment at each turn stores a copy of the marshalled sheep so it can resend the copy in case any sheep falls into a ravine.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow