Question

We are writing an MMORPG, and suppose we have the following tables. The location_dynamic_objects table will be queried and updated HEAVILY. As you can see, the position_x, position_y, and location_id columns are duplicated, as is the object type. But if we normalize and use joins, we have to apply additional filters to the selected data. We plan to send all the location_static_objects ONCE to the client, so there is no real point in keeping them together with location_dynamic_objects. Static objects represent immovable data to be rendered and are sent once to the client on location load. Dynamic objects represent data that is updated frequently (players, rockets, asteroids, etc.) and is constantly sent to the client; selection depends on the client's position and location. Our question is: should we give up normalization to achieve performance?

create table location_static_object_types (
  location_static_object_type_id integer auto_increment primary key,
  object_type_name                varchar(16) not null
);
create table location_static_objects (
  location_static_object_id      integer auto_increment primary key,
  location_static_object_type_id integer not null,
  location_id                    integer not null,
  position_x                     integer not null,
  position_y                     integer not null
);
create table location_dynamic_object_types (
  location_dynamic_object_type_id integer auto_increment primary key,
  object_type_name                varchar(16) not null 
);
create table location_dynamic_objects (
  location_dynamic_object_id      integer auto_increment primary key,
  location_dynamic_object_type_id integer not null,
  object_native_id                integer not null,
  location_id                     integer not null,
  position_x                      integer not null,
  position_y                      integer not null
);
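For reference, the hot read path described in the question would presumably look something like the query below, and a composite index covering its filter columns is one way to keep the normalized design fast. The index name and the bounding-box parameters are illustrative assumptions, not part of the original schema:

```sql
-- Hypothetical hot query: fetch dynamic objects near a client.
-- :location_id, :x_min, :x_max, :y_min, :y_max are placeholders.
select o.location_dynamic_object_id,
       o.object_native_id,
       t.object_type_name,
       o.position_x,
       o.position_y
from   location_dynamic_objects o
join   location_dynamic_object_types t
       on t.location_dynamic_object_type_id = o.location_dynamic_object_type_id
where  o.location_id = :location_id
  and  o.position_x between :x_min and :x_max
  and  o.position_y between :y_min and :y_max;

-- A composite index on the filter columns lets the range scan
-- stay within a single location's rows.
create index idx_dynamic_loc_pos
    on location_dynamic_objects (location_id, position_x, position_y);
```

The join to the type table touches only a handful of rows, so it adds essentially nothing to the cost of the position lookup.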

Solution

Because denormalization increases the redundancy of your data, it increases your total data volume. For this reason it is very rare for denormalization to improve the performance of write accesses (creates and updates); the reverse is typically true. Further, even for read queries, denormalization trades increased performance for a small set of queries, often just one, against decreased performance for every other query accessing the denormalized data. If you have properly employed artificial primary keys for your foreign key constraints, supplemented by corresponding uniqueness constraints on your natural (primary) keys, I would be amazed if you saw an iota of performance gain through denormalization.
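As a sketch of the arrangement suggested above (the constraint names are illustrative, and treating (location_dynamic_object_type_id, object_native_id) as the natural key of a dynamic object is an assumption about the game's rules):

```sql
-- The surrogate key carries the foreign-key relationship...
alter table location_dynamic_objects
  add constraint fk_dynamic_object_type
    foreign key (location_dynamic_object_type_id)
    references location_dynamic_object_types (location_dynamic_object_type_id);

-- ...while uniqueness constraints on the natural keys keep the data honest.
-- Type names must be distinct:
alter table location_dynamic_object_types
  add constraint uq_dynamic_object_type_name
    unique (object_type_name);

-- Assumes each native entity (player, rocket, ...) of a given type
-- appears at most once in the table:
alter table location_dynamic_objects
  add constraint uq_dynamic_object_native
    unique (location_dynamic_object_type_id, object_native_id);
```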

Licensed under: CC-BY-SA with attribution