In Python 2.x, the best way to handle blocks of contiguous data is as strings. For the more advanced data structures, you shouldn't really think about how they're stored. Rather, if you need to interface with something that wants a specific binary structure, then think about how to convert Python data to a string with that format, and/or vice versa. Look into the "struct" module.
'\0' * blocksize
There's really no place in Python for the notion of pointers or memory addressing, and that's one of its strengths. Memory is allocated and released as you need it, automatically, in the background.
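The "struct" module mentioned above is the usual bridge between Python values and a fixed binary layout. A minimal sketch (the format string `'<IHH'` is just an illustrative choice, not anything from the question):

```python
import struct

# Pack one 32-bit unsigned int and two 16-bit unsigned shorts into a
# binary string (a str on Python 2, a bytes object on Python 3).
packed = struct.pack('<IHH', 1024, 7, 42)

# '<' means little-endian with no padding, so the layout is exactly
# 4 + 2 + 2 = 8 contiguous bytes.
assert len(packed) == 8

# unpack reverses the conversion and recovers the Python values.
values = struct.unpack('<IHH', packed)
assert values == (1024, 7, 42)
```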
Simplest way to create an arbitrarily sized array in Python 2.7
My background: new to Python; most of my programming experience is with C and Java.
My understanding is that Python uses lists as the basic 'array' data structure. In C, arrays are guaranteed to have contiguous memory allocation. What is the underlying memory layout of lists in Python?
For example, what can be known about the memory layout of 'block' (see below)? I'm tempted to think about accessing elements in the lists via pointers, as one could in C, but I think I need a new paradigm for Python.
block = blocksize * [0]
Anyway, my real question. I need to pass a zeroed-out chunk of memory of 'blocksize' length to a function. What is the best way to do this in Python? Currently, this is what I'm doing:
zero_block = blocksize * [0]
z = SHA256.new(array.array('b',zero_block)).hexdigest()
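(For comparison, assuming `SHA256.new` here comes from PyCrypto's `Crypto.Hash.SHA256`, the same digest can be computed with the standard library alone; `b'\x00' * blocksize` builds the zero-filled buffer directly, with no list or `array.array` detour. The `blocksize = 64` below is just an example value.)

```python
import hashlib

blocksize = 64                      # example size; any positive int works
zero_block = b'\x00' * blocksize    # contiguous buffer of zero bytes
digest = hashlib.sha256(zero_block).hexdigest()

# A SHA-256 hex digest is always 64 hex characters (256 bits).
assert len(digest) == 64
```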
My understanding is that 'zero_block' is a list, and array.array(typecode[, initializer]) will call array.fromlist() to populate the array from it.
To sum it up:
- What is the correct way to think about memory layout for data types such as lists, sets, dicts, etc. in Python?
- In the above code, what would be a better way to create an array of zeros of size 'blocksize'?
- Where/how does the notion of pointers and memory addressing fit in with Python?
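To make the list-vs-array distinction in the questions above concrete: a Python list holds references to objects, while array.array stores machine-level values in a contiguous C buffer. A small sketch (runs on both Python 2 and 3):

```python
import array

# A list of four zeros: four references to int objects, not raw bytes.
nums = [0] * 4

# array.array copies the values into one contiguous buffer of
# signed chars ('b'), one byte per element -- like a C char[4].
packed = array.array('b', nums)
assert packed.itemsize == 1
assert packed.tolist() == [0, 0, 0, 0]

# buffer_info() exposes the (address, element count) of the underlying
# C buffer -- the closest Python gets to handing you a pointer.
addr, length = packed.buffer_info()
assert length == 4
```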
P.S. This is my first StackOverflow question.