Microsoft provides several examples in the Kinect for Windows Developer Toolkit v1.6 that show how to determine the location of objects on screen and interact with them using a custom cursor that represents the player's hand.
I would suggest working through several of these examples to get a clear picture of how such interactions can work. The toolkit is available from the same page as the official Kinect for Windows SDK:
http://www.microsoft.com/en-us/kinectforwindows/develop/developer-downloads.aspx
ShapeGame
This example generates random shapes (some of them ellipses) that fall from the top of the window and interact with the skeleton tracked by the Kinect. You'll see how to get the position of elements in the window and relate them to the skeleton's joint positions.
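The core of that technique is projecting a 3D skeleton joint into 2D window coordinates. Here is a minimal sketch of that mapping using the SDK v1.6 `CoordinateMapper` — this is my own illustration, not code from the ShapeGame sample, and it assumes you already have a started `KinectSensor` instance:

```csharp
using System.Windows;
using Microsoft.Kinect;

public static class JointMapper
{
    // Convert a joint's 3D skeleton-space position into 2D window
    // coordinates by projecting it through the depth image, then
    // scaling the depth-image pixels to the window's size.
    public static Point JointToWindowPoint(
        KinectSensor sensor, Joint joint,
        double windowWidth, double windowHeight)
    {
        // Project the skeleton point onto the 640x480 depth image.
        DepthImagePoint depthPoint =
            sensor.CoordinateMapper.MapSkeletonPointToDepthPoint(
                joint.Position, DepthImageFormat.Resolution640x480Fps30);

        // Scale from depth-image resolution to window resolution.
        return new Point(
            depthPoint.X * windowWidth / 640.0,
            depthPoint.Y * windowHeight / 480.0);
    }
}
```

In a `SkeletonFrameReady` handler you would pass, for example, `skeleton.Joints[JointType.HandRight]` to this method and then position a cursor element with `Canvas.SetLeft`/`Canvas.SetTop`.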
BasicInteractions
This example does several useful things. It shows how to produce a custom cursor based on the hand position. It also creates a ContentControl
that can be hooked up to Kinect events (such as hand enter, hover, and exit events). Because it is a ContentControl,
anything can be placed inside it -- a single ellipse or a complex layout.
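To make the idea concrete, here is a hypothetical sketch of such a control — the class and event names below are my own, not the ones used in BasicInteractions. It raises enter/leave notifications by hit-testing the hand-cursor point against its own bounds each frame:

```csharp
using System;
using System.Windows;
using System.Windows.Controls;

// A ContentControl that fires events when a hand cursor
// crosses its bounds. Host any content inside it in XAML.
public class HoverContentControl : ContentControl
{
    private bool isHandOver;

    public event EventHandler HandEnter;
    public event EventHandler HandLeave;

    // Call once per skeleton frame with the cursor position
    // expressed in the containing window's coordinate space.
    public void UpdateHandPosition(Point handInWindow)
    {
        Window window = Window.GetWindow(this);
        if (window == null)
        {
            return;
        }

        // Translate the window-space point into this control's
        // local space, then test it against the control's bounds.
        Point local = window.TranslatePoint(handInWindow, this);
        bool inside = local.X >= 0 && local.Y >= 0
                      && local.X < ActualWidth && local.Y < ActualHeight;

        // Raise an event only on transitions (outside -> inside
        // or inside -> outside), not on every frame.
        if (inside && !isHandOver)
        {
            isHandOver = true;
            EventHandler handler = HandEnter;
            if (handler != null) { handler(this, EventArgs.Empty); }
        }
        else if (!inside && isHandOver)
        {
            isHandOver = false;
            EventHandler handler = HandLeave;
            if (handler != null) { handler(this, EventArgs.Empty); }
        }
    }
}
```

Hover/dwell selection can then be built on top by starting a timer in `HandEnter` and canceling it in `HandLeave`.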