Question

I came across the following problem in my complexity-theory course:

Given a finite set of numbers $A := \{a_1, \dots, a_n\} \subset \mathbb{N}$ and a number $b \in \mathbb{N}$ such that the following condition holds: $a_i$ divides $a_{i+1}$ and $a_i < a_{i+1}$ for all $i < n$. Prove that this special case of subset sum is decidable in P.

Due to the given condition, $b$ has to be a multiple of the first $a_i \neq 1$. Taking $a_1 \neq 1$, we get $b = a_1 \cdot x$. Finding this $x$ takes me back to the general subset-sum problem, though, which is surely not in P.

Any help would be appreciated.


Solution

In short, a greedy algorithm works: in each step, find the largest number in $A$ that is at most $b$ and subtract it from $b$. If $b$ becomes zero, output yes. If you reach a point where all remaining numbers in $A$ are greater than $b$, output no.


In the following, I give a formal description of the algorithm and a proof of correctness.

Formally, let $A_0 = A$ and $b_0 = b$, and let $b_i$ and $A_i$ denote the value of $b$ and the numbers left in $A$ after the $i$-th iteration. In each step $i = 1, 2, \dots$, find the largest number $a_j$ in $A_{i-1}$ that is not greater than $b_{i-1}$. If no such number exists, output no. Otherwise, set $b_i = b_{i-1} - a_j$ and $A_i = A_{i-1} \setminus \{a_j\}$. If $b_i = 0$, output yes; otherwise, iterate.
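The greedy procedure described above can be sketched in Python as follows (the function name `restricted_subset_sum` is my own choice, not from the original post):

```python
def restricted_subset_sum(A, b):
    """Decide subset sum when A is a divisibility chain:
    a_1 | a_2 | ... | a_n with a_i < a_{i+1}.

    Greedily subtract the largest element not exceeding the
    remaining target; by Claim 1 this is correct for such chains.
    """
    for a in sorted(A, reverse=True):  # largest element first
        if b == 0:
            return True
        if a <= b:       # a_j = largest element not greater than b
            b -= a       # b_i = b_{i-1} - a_j
    return b == 0        # yes iff the target was reduced to zero
```

Each element is considered at most once, so after the initial sort the running time is polynomial (in fact $O(n \log n)$), as required.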

Claim 1. The algorithm above outputs the correct answer for the given instance of the restricted subset-sum problem described in the question.

Before we prove the claim we prove an auxiliary claim.

Claim 2. Let $a_1, \dots, a_n$ be the numbers in $A$ in ascending order. Then $\sum\limits_{i=1}^{k-1}a_i < a_k$ for all $k \in [n]$.

Proof (Claim 2). By induction over $k$. For $k = 1$, the sum is empty and the claim holds trivially. Now assume the claim holds for $k-1$ and prove it for $k$: $$\sum\limits_{i=1}^{k-1}a_i = \sum\limits_{i=1}^{k-2}a_i + a_{k-1} < 2a_{k-1} \leq a_k,$$ where the first inequality holds by the induction hypothesis and the second holds by assumption, since $a_{k-1}$ divides $a_k$ and $a_{k-1} < a_k$.

Proof (Claim 1). If the algorithm outputs yes, the input is clearly a yes-instance, since the algorithm only chooses numbers from the given set and subtracts their values from $b$.

Now we prove that if our algorithm outputs no, the given instance is a no-instance. To this end, we show that if the algorithm chooses an element $a_j$ at step $i$, then any solution of the given instance must contain this element. We prove this by induction over $i$, assuming by the induction hypothesis that all previously chosen elements belong to every solution, if one exists. Any $a_{j'}$ with $j' > j$ is strictly greater than $b_{i-1}$ and hence can never be included. Moreover, by Claim 2, $\sum\limits_{\ell=1}^{j-1}a_\ell < a_j$, and since we only ever remove elements, the elements of $A_{i-1}$ smaller than $a_j$ sum to less than $a_j$; hence, if we do not choose $a_j$, choosing all smaller elements will not suffice to reach a sum equal to $b_{i-1}$. Therefore, any solution must contain $a_j$.

OTHER TIPS

Consider the following further special case of your problem: $a_i = c^{i-1}$ for some $c \ge 2$. For example, if $c = 10$, then we have $a_1 = 1, a_2 = 10, a_3 = 100, a_4 = 1000, \dots, a_n = 10^{n-1}$.

In this case, there is a solution if and only if $0 \le b < c^n$ and the base $c$ representation of $b$ contains no digits other than 0 and 1. In particular, there can be a solution even for some $b$ that are not multiples of $c$, contradicting your second-to-last paragraph.
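This digit criterion can be checked directly; the sketch below (the function name `powers_subset_sum` is hypothetical, my own naming) tests whether every base-$c$ digit of $b$ is 0 or 1:

```python
def powers_subset_sum(b, c, n):
    """Decide subset sum for A = {1, c, c^2, ..., c^(n-1)}, c >= 2.

    A subset sums to b iff 0 <= b < c^n and the base-c
    representation of b uses only the digits 0 and 1.
    """
    if not (0 <= b < c ** n):
        return False
    while b:
        if b % c > 1:    # a digit of 2 or more cannot be formed
            return False
        b //= c          # move to the next base-c digit
    return True
```

For instance, with $c = 10$ and $n = 3$, the value $b = 101$ is a yes-instance ($100 + 1$), even though $101$ is not a multiple of $10$.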

See if this helps you think about the problem.

Licensed under: CC-BY-SA with attribution
Not affiliated with cs.stackexchange