Question

I'm building a server application that implements the TFTP protocol. I'm having a hard time understanding the difference between the ASCII format and the binary format (netascii and octet) in TFTP, and how I should read files differently as the protocol requires.

I know that an ASCII character can be represented with a single byte, so I don't understand the difference between reading in ASCII mode (one byte per character) and binary mode (one raw byte).

I can open the file with the ios::binary flag for binary mode (octet in TFTP) and without it for ASCII mode (netascii in TFTP), but I really don't see the difference between reading files in these two ways (either way I end up with an array of bytes).

If someone can help me understand, I'd really appreciate it.

The tftp protocol specification: http://www.rfc-editor.org/rfc/rfc1350.txt

The part that I don't understand is this one:

Three modes of transfer are currently supported: netascii (This is ascii as defined in "USA Standard Code for Information Interchange" [1] with the modifications specified in "Telnet Protocol Specification" [3]. Note that it is 8 bit ascii. The term "netascii" will be used throughout this document to mean this particular version of ascii.); octet (This replaces the "binary" mode of previous versions of this document.) raw 8 bit bytes; mail, netascii characters sent to a user rather than a file. (The mail mode is obsolete and should not be implemented or used.) Additional modes can be defined by pairs of cooperating hosts.


Solution

There are two passages which can help clarify what the purpose of netascii is in RFC-1350/TFTP:

netascii (This is ascii as defined in "USA Standard Code for Information Interchange" [1] with the modifications specified in "Telnet Protocol Specification" [3].)

The "Telnet Protocol Specification" is RFC-764, and it describes the interpretation of various ASCII codes for use on the "Network Virtual Terminal". So, netascii would follow those interpretations (which include that lines must be terminated with a CR/LF sequence).

and:

A host which receives netascii mode data must translate the data to its own format.

So a host that used EBCDIC as its native encoding, for example, would be expected to translate netascii to that encoding, but would leave "octet" data alone.
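On the receiving side, this translation back to the host's own format can be sketched as follows for a Unix host whose native line ending is LF. This is a minimal illustration, not part of any standard API; the function name `from_netascii` is made up for the example. It undoes the two netascii escape sequences: CR LF becomes the local line ending, and CR NUL becomes a literal CR.

```cpp
#include <string>

// Translate data received in netascii mode into the local (Unix, LF)
// text format: CR LF -> LF, CR NUL -> CR. Any other byte passes through.
std::string from_netascii(const std::string& wire) {
    std::string out;
    out.reserve(wire.size());
    for (std::size_t i = 0; i < wire.size(); ++i) {
        if (wire[i] == '\r' && i + 1 < wire.size()) {
            if (wire[i + 1] == '\n') { out += '\n'; ++i; continue; }  // line ending
            if (wire[i + 1] == '\0') { out += '\r'; ++i; continue; }  // literal CR
        }
        out += wire[i];
    }
    return out;
}
```

A host with a different native encoding (EBCDIC, CRLF line endings, etc.) would substitute its own conventions in the same place.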

If you're implementing the TFTP server on a Unix (or other) system that uses LF for line endings, you'd be expected to add the CR for netascii transfers (as well as convert actual CR characters in the file to CR/NUL sequences).
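As a sketch of that sending-side conversion (again assuming a Unix host with LF line endings; the function name `to_netascii` is illustrative only): every LF in the local file goes on the wire as CR LF, and every literal CR goes out as CR NUL, so the receiver can distinguish a real line ending from a stray CR byte.

```cpp
#include <string>

// Prepare local (Unix, LF) text for a netascii-mode transfer:
// LF -> CR LF, literal CR -> CR NUL, per the Telnet NVT rules
// that RFC 1350 references.
std::string to_netascii(const std::string& local) {
    std::string out;
    out.reserve(local.size());
    for (char c : local) {
        if (c == '\n') {            // local line ending -> CR LF on the wire
            out += '\r';
            out += '\n';
        } else if (c == '\r') {     // bare CR -> CR NUL so it survives the round trip
            out += '\r';
            out += '\0';
        } else {
            out += c;
        }
    }
    return out;
}
```

For an octet-mode transfer you would skip this step entirely and send the raw bytes, which is exactly why opening the file with ios::binary and sending it unchanged is the right behavior for octet but not for netascii.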

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow