Yes, Hadoop writes block data through ordinary file-writing APIs, so it respects Unix-level quotas. In addition, the configuration property dfs.datanode.du.reserved lets you set an amount of reserved space per volume (the same value applies to every volume) that the DataNode will leave free and never write into.
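As a sketch, the reservation can be set in hdfs-site.xml on each DataNode; the 10 GB value below is purely illustrative, not a recommendation:

```xml
<!-- hdfs-site.xml on each DataNode -->
<property>
  <name>dfs.datanode.du.reserved</name>
  <!-- Bytes reserved per volume for non-HDFS use (example: 10 GB) -->
  <value>10737418240</value>
</property>
```

A DataNode restart is needed for the new value to take effect.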
However, it is generally bad practice to let HDFS write into the OS mount in the first place. If you expect to need more storage eventually (and given that you are already hitting limits), it may be better to buy a few more disks, mount them on the DataNodes, and add them to dfs.datanode.data.dir.