Nowadays, most game engines adopt a component-based design (e.g. Unity, Unreal). In this kind of design, a `GameObject` is composed of a list of components. In your situation, there could be a `MeshComponent` and a `PhysicalComponent`, both attached to a single game object.
For simplicity, you can put a world-transform variable on the `GameObject`. During the update phase, the `PhysicalComponent` writes the world transform to that variable; during rendering, the `MeshComponent` reads it.
The rationale behind this design is to decouple the components: neither `MeshComponent` nor `PhysicalComponent` knows about the other; each depends only on a common interface. It is also easier to extend the system by composition than through a single inheritance hierarchy.
In a realistic scenario, however, you may need more sophisticated handling of physics/graphics synchronization. For example, the physics simulation may need to run at a fixed time step (e.g. 30 Hz), while rendering runs at a variable rate, so you may need to interpolate between the outputs of the physics engine. Some physics engines (e.g. Bullet) have direct support for this, though.
Unity's reference documentation for its Components is worth a look.