Question

I want to do some research, but I don't want the number of bits needed to represent the integer input, or the cost of arithmetic on those integers, to matter on the abstract machine. What model of computation addresses this?

Specifically, I want to prove that integer factorization is not in P. If the above is a bad idea, then what model should I be using? Clearly, I would need add, sub, mul, and divide instructions.

Or, should something else be used such as algebraic complexity theory?


Answer

I suspect you're looking for either the RAM model or the transdichotomous model. They differ primarily in how they take into account the size of integers and their effect on the time of various operations. See What's the difference between transdichotomous model and RAM?.
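To make the difference concrete, here is a toy illustration of the two most common ways of charging for an arithmetic operation. The function names are my own for illustration, not a standard API: a unit-cost RAM charges 1 per operation regardless of operand size, while a log-cost (bit-cost) accounting charges roughly the operand length in bits.

```python
def cost_unit(a: int, b: int) -> int:
    # Unit-cost RAM: every arithmetic operation costs 1,
    # no matter how large the operands are.
    return 1

def cost_log(a: int, b: int) -> int:
    # Log-cost accounting: an operation on k-bit operands
    # costs on the order of k (constant factors ignored here).
    return max(a.bit_length(), b.bit_length(), 1)

# Multiplying two 1000-bit numbers: the two models disagree wildly.
a = 1 << 999          # a 1000-bit number
b = (1 << 999) + 1    # another 1000-bit number
print(cost_unit(a, b))  # 1
print(cost_log(a, b))   # 1000
```

The transdichotomous model sits in between: words of width $w = \Theta(\log n)$ each support constant-time operations, so larger integers must be split across multiple words.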

If you want your results to have any applicability at all, the main thing is to pick a model that avoids treating arithmetic on exponential-length integers as taking polynomial time. If you count each arithmetic operation as $O(1)$ time, no matter how large the numbers are, then you end up with absurd results, such as polynomial-time algorithms for factoring and indeed for all of PSPACE. See https://cs.stackexchange.com/a/53885/755.
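The root of the absurdity is that repeated squaring makes numbers grow doubly exponentially: $n$ "unit-cost" multiplications can produce an integer with $2^n + 1$ bits, so a polynomial number of operations can manipulate exponentially long numbers. A minimal sketch:

```python
# n unit-cost squarings turn 2 into 2**(2**n),
# an integer with 2**n + 1 bits.
x = 2
n = 20
for _ in range(n):
    x = x * x          # each squaring doubles the bit length

print(x.bit_length())  # 1048577, i.e. 2**20 + 1
```

On a Turing machine, merely writing this number down takes $2^n$ steps, which is why the unit-cost model and the bit-cost model diverge so sharply here.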

Note that proving a problem is in P is not the same as proving it runs in polynomial time on some strange model of computation. P is defined as the class of problems that have an algorithm that runs in polynomial time on a Turing machine. As mentioned in the previous paragraph, if you change "on a Turing machine" to some other model of computation, that statement might no longer hold.

It's a famous open problem whether factorization is in P or not. The problem is likely to be very difficult, given our current state of understanding. I would not recommend taking on such an ambitious goal.

License: CC-BY-SA with attribution
Not affiliated with cs.stackexchange