A GPU context represents all the state (data, variables, conditions, etc.) that is collectively required and instantiated to perform certain tasks (e.g. CUDA compute, graphics, H.264 encode, etc.). A CUDA context is instantiated to perform CUDA compute activities on the GPU, either implicitly by the CUDA runtime API, or explicitly via the CUDA driver API.
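As a concrete illustration of explicit context management, here is a minimal sketch using the CUDA driver API. It assumes a CUDA-capable GPU and an installed CUDA toolkit, and omits error checking for brevity:

```cuda
#include <cuda.h>

int main(void) {
    CUdevice dev;
    CUcontext ctx;

    // The driver API must be initialized before any other driver call.
    cuInit(0);
    cuDeviceGet(&dev, 0);      // select GPU 0

    // Explicitly create a context on the device. Subsequent driver API
    // work (allocations, kernel launches) occurs within this context.
    cuCtxCreate(&ctx, 0, dev);

    // ... perform CUDA work here ...

    // Tear down the context, releasing its state on the GPU.
    cuCtxDestroy(ctx);
    return 0;
}
```

By contrast, a program written against the runtime API never makes these calls; the runtime creates a context on first use.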
A command is simply a set of data plus instructions to be performed on that data. For example, a command could be issued to the GPU to launch a kernel, or to move a graphical window from one place to another on the desktop.
A channel represents a communication path between the host (CPU) and the GPU. In modern GPUs this makes use of PCI Express; it comprises state and buffers on both the host and the device that are exchanged over PCI Express to issue commands and provide other data to the GPU, as well as to inform the CPU of GPU activity.
For the most part, when using the CUDA runtime API, it's not necessary to be familiar with these concepts, as they are all abstracted (hidden) beneath the runtime API.
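To see this abstraction in action, here is a hedged sketch of a runtime API program (again assuming a working CUDA GPU and toolkit, with error checking omitted). Note that it contains no context or channel management at all; the first runtime call implicitly creates (or attaches to) the device's primary context, and kernel launches issue commands to the GPU through the same hidden machinery:

```cuda
#include <cuda_runtime.h>
#include <stdio.h>

// A trivial kernel; launching it issues a command to the GPU.
__global__ void add_one(int *x) { *x += 1; }

int main(void) {
    int *d_x, h_x = 41;

    // The first runtime API call implicitly initializes a context.
    cudaMalloc(&d_x, sizeof(int));
    cudaMemcpy(d_x, &h_x, sizeof(int), cudaMemcpyHostToDevice);

    add_one<<<1, 1>>>(d_x);

    cudaMemcpy(&h_x, d_x, sizeof(int), cudaMemcpyDeviceToHost);
    printf("%d\n", h_x);
    cudaFree(d_x);
    return 0;
}
```

Everything about contexts, commands, and channels described above happens beneath these few calls.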