Question

In one of our first CS lectures on security we were walked through C's issue with not checking alleged buffer lengths and some examples of the different ways in which this vulnerability could be exploited.

In this case, it looks like it was a malicious read operation, where the application simply read out however many bytes of memory the request asked for.

  1. Am I correct in asserting that the Heartbleed bug is a manifestation of the C buffer length checking issue?

  2. Why didn't the malicious use cause a segmentation fault when it tried to read another application's memory?

  3. Would simply zeroing the memory before writing to it (and then subsequently reading from it) have caused a segmentation fault? Or does this vary between operating systems? Or with some other environmental factor?

  4. Apparently exploitations of the bug cannot be identified. Is that because the heartbeat function does not log when called? Otherwise surely any request for a ~64k string is likely to be malicious?


Solution

Am I correct in asserting that the Heartbleed bug is a manifestation of the C buffer length checking issue?

Yes.

Is the heartbleed bug a manifestation of the classic buffer overflow exploit in C?

No. The "classic" buffer overflow is one where you write more data into a stack-allocated buffer than it can hold, with the overflowing data supplied by the hostile agent. The hostile data overflows the buffer and overwrites the return address of the current method, so that when the method ends it returns to an address containing code of the attacker's choice and starts executing it.
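A minimal sketch of the classic pattern, for illustration only; the precise consequences depend on the compiler, the ABI, and mitigations such as stack canaries:

    #include <string.h>

    void vulnerable(const char *attacker_data)
    {
        char buf[16];
        /* No length check: input longer than 15 characters writes
           past the end of buf, toward the saved return address. */
        strcpy(buf, attacker_data);
    }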

The heartbleed defect, by contrast, does not overwrite a buffer and does not execute arbitrary code; it just reads out of bounds in code that is highly likely to have sensitive data nearby in memory.
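A simplified sketch of the defective pattern (the names here are illustrative, not the actual OpenSSL source): the reply trusts the length field in the request rather than the number of bytes actually received:

    #include <stdlib.h>
    #include <string.h>

    unsigned char *build_heartbeat_reply(const unsigned char *payload,
                                         size_t claimed_len)
    {
        unsigned char *reply = malloc(claimed_len);
        if (reply == NULL)
            return NULL;
        /* Out-of-bounds read: if the attacker sent a 1-byte payload
           but claimed ~64KB, this copies ~64KB of whatever process
           memory happens to follow the payload. */
        memcpy(reply, payload, claimed_len);
        return reply; /* echoed back to the requester */
    }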

Why didn't the malicious use cause a segmentation fault when it tried to read another application's memory?

It did not try to read another application's memory. The exploit reads memory of the current process, not another process.

Why didn't the malicious use cause a segmentation fault when it tried to read memory out of bounds of the buffer?

This is a duplicate of this question:

Why does this not give a segmentation violation fault?

A segmentation fault means that you touched a page that the operating system memory manager has not allocated to you. The bug here is that you touched data on a valid page that the heap manager has not allocated to you. As long as the page is valid, you won't get a segfault. Typically the heap manager asks the OS for a big hunk of memory, and then divides that up amongst different allocations. All those allocations are then on valid pages of memory as far as the operating system is concerned.
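A tiny demonstration of the distinction. This is undefined behaviour, so the outcome is allocator- and platform-dependent, but in practice it usually runs without faulting:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        char *p = malloc(16);
        if (p == NULL)
            return 1;
        /* Reads past the 16-byte allocation. This typically does NOT
           segfault, because the bytes beyond the allocation still lie
           on a page the operating system considers valid; it simply
           prints whatever heap data happens to be adjacent. */
        for (int i = 0; i < 64; i++)
            putchar(p[i]);
        free(p);
        return 0;
    }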

Dereferencing null is a segfault simply because the operating system never makes the page that contains the zero pointer a valid page.

More generally: the compiler and runtime are not required to ensure that undefined behaviour results in a segfault; UB can result in any behaviour whatsoever, and that includes doing nothing. For more thoughts on this matter see:

Can a local variable's memory be accessed outside its scope?

For my argument that UB should always be the equivalent of a segfault in security-critical code, along with some pointers to a discussion of static analysis of the vulnerability, see today's blog article:

http://ericlippert.com/2014/04/15/heartbleed-and-static-analysis/

Would simply zeroing the memory before writing to it (and then subsequently reading from it) have caused a segmentation fault?

Unlikely. If reading out of bounds doesn't cause a segfault then writing out of bounds is unlikely to either. A page of memory can be read-only, in which case a write would fault where a read would not, but in this case that seems unlikely.

Of course, the later consequences of zeroing out all kinds of memory that you should not are segfaults all over the show. If there's a pointer in that zeroed-out memory that you later dereference, that's dereferencing null, which will produce a segfault.
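A contrived sketch of that delayed failure, assuming a typical layout in which a pointer field sits immediately after a fixed-size buffer:

    #include <stdio.h>
    #include <string.h>

    struct session {
        char key[16];
        char *peer_name; /* hypothetical layout, for illustration */
    };

    int main(void)
    {
        char name[] = "example";
        struct session s = { "secret", name };
        /* The out-of-bounds zeroing write clobbers the adjacent
           pointer without faulting, since it stays in valid memory. */
        memset(s.key, 0, sizeof s.key + sizeof s.peer_name);
        /* The segfault happens later, far from the actual bug, when
           the now-null pointer is dereferenced. */
        putchar(s.peer_name[0]);
        return 0;
    }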

does this vary between operating systems?

The question is vague. Let me rephrase it.

Do different operating systems and different C/C++ runtime libraries provide differing strategies for allocating virtual memory, allocating heap memory, and identifying when memory access goes out of bounds?

Yes; different things are different.

Or with some other environmental factor?

Such as?

Apparently exploitations of the bug cannot be identified. Is that because the heartbeat function does not log when called?

Correct.

surely any request for a ~64k string is likely to be malicious?

I'm not following your train of thought. What makes the request likely malicious is a mismatch between bytes sent and bytes requested to be echoed, not the size of the data asked to be echoed.
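A minimal sketch of the kind of length check that addresses that mismatch. The names are illustrative rather than OpenSSL's actual identifiers, but the layout follows RFC 6520: a 1-byte message type, a 2-byte length field, the payload, and at least 16 bytes of padding:

    #include <stddef.h>

    /* Returns nonzero if the claimed payload length actually fits
       inside the record that was received. Since claimed_len comes
       from a 16-bit field, the addition cannot overflow. */
    int heartbeat_length_ok(size_t claimed_len, size_t record_len)
    {
        return 1 + 2 + claimed_len + 16 <= record_len;
    }

A request failing such a check should simply be discarded without a reply, which is what the official fix does.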

Other tips

A segmentation fault does not occur because the data accessed is that immediately adjacent to the data requested, and is generally within the memory of the same process. It might cause an exception if the request were sufficiently large, I suppose, but doing that is not in the exploiter's interest, since crashing the process would prevent them from obtaining the data.

For a clear explanation, this XKCD comic is hard to better:

[XKCD #1354, "Heartbleed Explanation": https://xkcd.com/1354/]
