Question

I'm well acquainted with discrete mathematics, in which, as you may know, sets are conventionally denoted by a single uppercase letter (A-Z). I was wondering whether this convention should be carried over when declaring sets in Python, or when coding in general. From one point of view it makes perfect sense to me; from another it doesn't.

Programmers implement the same data structures over and over, and many treat arrays, lists, dictionaries, and sets as if they were the same thing. They are all collections in a sense, but they clearly differ in what you can achieve with each, and consequently each comes with its own catalog of built-in operations. That's why I'd like sets to stand out in code and follow the mathematical convention, at least in Python.
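For example, here is a small illustration of what I mean about sets having their own catalog of operations (the values are made up for the example):

    evens = {0, 2, 4, 6}
    primes = {2, 3, 5, 7}

    # Operations from set theory, not available on lists:
    print(evens & primes)   # intersection: {2}
    print(evens | primes)   # union: {0, 2, 3, 4, 5, 6, 7}
    print(evens - primes)   # difference: {0, 4, 6}

    # A list allows duplicates and keeps order; a set does not:
    print([2, 2, 3])        # [2, 2, 3]
    print({2, 2, 3})        # {2, 3} (duplicates collapsed)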

I think that if I came across this sort of declaration in someone else's code, it would make sense to me, and it would say something positive about the coder. I'd like to stick to the mathematical convention even when programming, but I wanted to know your thoughts on the matter.

When Math meets Code, which conventions survive?
Thank you in advance.

Solution

For what it is worth, I follow the idioms of the programming language in question, so I would use the Python naming conventions even for mathematical concepts. Someone reading Python code is far more likely to expect and recognize the standard naming conventions. As a discrete mathematician, you see the set as an important data type worthy of being distinguished; to me as a Python programmer, sets are just another tool in the toolbox, to be used whenever I need an unordered collection of unique elements.
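To make the contrast concrete, here is a minimal sketch of the two styles side by side (the names and values are invented for the illustration):

    # Mathematical convention: single uppercase letters.
    A = {1, 2, 3}
    B = {3, 4, 5}
    C = A & B               # the reader must remember what A and B denote

    # Idiomatic Python (PEP 8): descriptive lowercase_with_underscores.
    active_users = {"alice", "bob", "carol"}
    banned_users = {"bob", "mallory"}
    allowed_users = active_users - banned_users   # self-documenting

Both snippets are valid Python; the second simply tells the next reader what the sets contain without requiring any outside context.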

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow