# Thread: Cryptographic attacks as minimization of degree 4 polynomial (through 3-SAT problem)

1. ## Cryptographic attacks as minimization of degree 4 polynomial (through 3-SAT problem)

Cryptography is based on reason-result chains, like hash functions: inexpensive to propagate in the intended direction, but seemingly hard to reverse. However, decomposing them into the satisfaction of simple (direction-agnostic) relations like 3-SAT clauses may bring the danger of methods able to effectively propagate this information also in the opposite direction, e.g. to reverse a hash function (?)

For example, it turns out that the 3-SAT problem can be translated into minimization of a multivariate degree-4 real polynomial ( https://arxiv.org/pdf/1703.04456 ), allowing the search to move from the discrete final set {0,1}^n of boolean values into its continuous [0,1]^n hypercube, exploiting (inexpensive) local gradients which depend on the entire problem and lead toward a nearby minimum.
Specifically, 'x or y or z' can be translated into zeroing of the degree-6 polynomial
(x+y+z-1)^2 (x+y+z-2)^2 (x+y+z-3)^2
The sum of such polynomials over all clauses (using 1-x instead of x for negated variables), plus the sum of x^2 (x-1)^2 over all variables to enforce final boolean values, gives a nonnegative polynomial which is zero if and only if all clauses are fulfilled. The degree 6 can be reduced to 4 (the minimal nontrivial degree) by introducing one additional variable per clause.
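As a sanity check, here is a minimal Python sketch (my own, not from the paper) of this clause polynomial and the boolean-enforcing terms, verified exhaustively on {0,1}^3:

```python
# Sketch: the degree-6 clause polynomial for "x or y or z" and the
# x^2(x-1)^2 boolean-enforcing terms, checked exhaustively on {0,1}^3.
from itertools import product

def clause_poly(x, y, z):
    """Zero exactly when at least one of boolean x, y, z equals 1."""
    s = x + y + z
    return (s - 1)**2 * (s - 2)**2 * (s - 3)**2

def boolean_penalty(*vars):
    """Zero exactly on {0,1} values; pushes continuous variables toward booleans."""
    return sum(v**2 * (v - 1)**2 for v in vars)

for x, y, z in product((0, 1), repeat=3):
    # the polynomial vanishes iff the clause is satisfied
    assert (clause_poly(x, y, z) == 0) == bool(x or y or z)
```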

Any Turing machine can be translated into a set of 3-SAT clauses: into the problem of finding values for variables such that all alternatives of triples of binary variables (like x or y or z; variables can be negated) are satisfied. For transformations based on OR, AND, XOR, SUM and permutations (like standard hashes and symmetric cryptography) this translation is straightforward.
So we could, for example, translate a hash function into a set of 3-SAT clauses, fix the final bit sequence, and ask for an initial bit sequence leading to this final value. Then translate this 3-SAT instance into a degree-4 polynomial and e.g. try gradient descent from multiple random initial points, or some more sophisticated numerical optimization (e.g. adding a Laplacian term for smoothing).
Reaching the global minimum (zero) would mean finding a collision (or a satisfying nonce for Bitcoin mining).

Efficient 3-SAT solving could also break e.g. symmetric cryptography: find the only key for which the decrypted block contains correlations (wrong keys should lead to i.i.d. Pr(0)=Pr(1)=1/2). Or asymmetric cryptography: by factoring numbers (RSA) or solving the discrete logarithm (ECC).
However, hashing seems simpler, as there is often only a single way to reverse its reason-result chain, and this way should be suggested by the gradients of the minimized polynomial (?)
If so, a complexity barrier could be used for protection: elongating this reason-result chain.

The question is whether this could lead to an essentially faster collision search (or proof-of-work in Bitcoin mining) than brute force - exploiting the original reason-result chain in the opposite direction?
Have you seen this kind of cryptographic attack - e.g. through continuous optimization of a polynomial?

2. ## Thanks (3):

Cyan (7th April 2017),Shelwien (4th April 2017),xinix (4th April 2017)

3. It is certainly an interesting paper, but only the degree-reduction part - the idea of using a polynomial to represent boolean logic is trivial.
For example, we can use AND(x,y)=x*y; OR(x,y)=x+y-x*y; XOR(x,y)=x+y-2*x*y; NOT(x)=1-x
It's not even clear that this is worse than the representation proposed in the paper, because cryptographic functions are not originally built from 3-SAT -
in the end, the degree-reduction methods should apply to the actual functions, rather than some abstract intermediate representation like 3-SAT.
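These gate identities are easy to verify exhaustively - a quick sketch:

```python
# Exhaustive check of the polynomial encodings of the boolean gates above.
from itertools import product

AND = lambda x, y: x * y
OR  = lambda x, y: x + y - x * y
XOR = lambda x, y: x + y - 2 * x * y
NOT = lambda x: 1 - x

for x, y in product((0, 1), repeat=2):
    assert AND(x, y) == (x & y)
    assert OR(x, y)  == (x | y)
    assert XOR(x, y) == (x ^ y)
    assert NOT(x)    == (0 if x else 1)
```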

Also, you're underestimating the complexity of cryptographic functions - even outdated ones like DES and MD5 would produce polynomial degrees on the order of 10000.
(To be specific, an x86 md5 implementation, not counting mov and rol instructions, still has 372 32-bit operations: 132 adds + 240 logic ops; addition needs ~7 logic operations per bit;
(132*7+240)*32 = 37248 is the estimated degree of the md5 polynomial.)

So this method can actually be used for simple hashes, like crc32(x)=y, but not for actual cryptography.

As to bitcoin and SAT - see https://jheusser.github.io/2013/02/03/satcoin.html

4. ## Thanks (3):

Jarek (4th April 2017),schnaader (4th April 2017),xinix (4th April 2017)

5. Indeed, for simple operations we can just write e.g. z=x*y for AND, but to propagate information from z to (x,y), it might be better to write it as minimization of the degree-4 polynomial (z - xy)^2. Then gradient descent should lead to satisfaction of all such operations simultaneously, through successive iterations.
The degree of such a polynomial will not be 10000 as you write - it can be just 4 ... the 10000+ appears instead in the number of variables and terms of the polynomial.
Calculating the gradient of a degree-4 polynomial is relatively cheap ... and for more complex functions it can also be performed effectively thanks to https://en.wikipedia.org/wiki/Automatic_differentiation

While for NP-hard problems the focus is mainly on abstract ones like 3-SAT, Hamiltonian cycles, the knapsack problem, TSP, subgraph isomorphism,
the clique problem, vertex cover, independent set ... we usually forget that an efficient solver could endanger the currently used cryptography.
Any calculation performed by a standard computer can be translated into a 3-SAT problem with at most polynomial growth in size - this transformation officially goes through a Turing machine, which is really costly ... but the fixed sequences of simple formulas used in standard cryptography (mostly AND, OR, XOR, SUM, permutations) can be translated into a 3-SAT problem nearly directly.
Also, for the product needed to attack RSA, we could even translate directly into a polynomial. E.g. for the sum z=x+y, we could just use the degree-2 polynomial
sum_i (x_i + y_i + c_i - 2c_{i+1} - z_i)^2
where c are the carry bits.
A product needs more carry bits, but to decompose a natural number z=x*y and break RSA, we could fix z and try to propagate it back to x and y, e.g. using gradient descent.
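The carry-chain encoding of addition can be checked with a small sketch (the helper names here are mine, for illustration):

```python
# Check: for z = x + y in binary, the ripple-carry terms
# (x_i + y_i + c_i - 2*c_{i+1} - z_i)^2 all vanish, so their sum
# is zero exactly on correct additions.
def add_poly(xbits, ybits, zbits, cbits):
    """Sum of squared carry-chain terms; zero iff the bits encode a valid sum."""
    n = len(xbits)
    return sum((xbits[i] + ybits[i] + cbits[i] - 2 * cbits[i + 1] - zbits[i])**2
               for i in range(n))

def to_bits(v, n):
    return [(v >> i) & 1 for i in range(n)]

n = 5
for x in range(8):
    for y in range(8):
        z = x + y
        xb, yb, zb = to_bits(x, n), to_bits(y, n), to_bits(z, n)
        # reconstruct the carry bits of the schoolbook addition
        c = [0] * (n + 1)
        for i in range(n):
            c[i + 1] = (xb[i] + yb[i] + c[i]) // 2
        assert add_poly(xb, yb, zb, c) == 0
```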

Here are benchmarks from the 2016 SAT solver competition: https://helda.helsinki.fi/bitstream/...roceedings.pdf
All of them use combinatorial approaches - searching and concluding in the discrete boolean values {0,1}^n.
The advantage of translating it into a (degree-4) polynomial is the ability to go inside the [0,1]^n hypercube and use gradients (calculated in linear time) which suggest where to go inside this hypercube.

The question is whether these suggestions from local gradients can essentially speed up the search? If so, our cryptography might be endangered.
Probably the only way to check is through implementation - I will try to do it during the summer break ... but for 1024-bit RSA or SHA-256 we are talking about ~a million clauses ...

6. ## Thanks:

xinix (4th April 2017)

7. I think it's only a formal difference.
Sure, you can rewrite a system { a*b=c; c*d=e; } as (a*b-c)^2+(c*d-e)^2==0.
But the actual dependency would still be a*b*d=e; the different syntax won't make it simpler.
And there are no magic solutions for high-degree equations - at some point it would actually be faster to solve them with SAT methods
rather than the reverse (limited precision etc).

Also, gradient descent doesn't help you in any way here - if 0 < eq1a^2+eq2a^2 < eq1b^2+eq2b^2 (the same equations with different variable values),
how does that tell you which is closer to the actual solution?

Btw, I also tried n-ary (n>2) logic, using probabilities instead of actual bits, trigonometry ( xor(x,y)=sin(Pi/2*(x+y)) ), modular arithmetic -
there's no magic solution.
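The trigonometric xor mentioned above does agree with the boolean one on bit inputs - a quick check:

```python
# sin(pi/2*(x+y)) agrees with x xor y on boolean inputs (up to float rounding):
# sin(0) = 0, sin(pi/2) = 1, sin(pi) = 0.
import math

def xor_trig(x, y):
    return math.sin(math.pi / 2 * (x + y))

for x in (0, 1):
    for y in (0, 1):
        assert abs(xor_trig(x, y) - (x ^ y)) < 1e-9
```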

n-ary logic was my most successful attempt - I actually built a working solver with it.
This is a system for 32-bit multiplication using that method - http://nishi.dreamhosters.com/u/multest1a.png
I need to work on it more at some point - I make attempts once every few years, but usually get stuck in
speed optimizations.

Also, a statistical approach worked pretty well for simple cases, like CRCs and PRNG algorithms,
but on actual cryptohashes I got p=0.5 for every bit in the end.

Recently I was thinking about applying compression methods (rainbow tables in particular) to SAT.
It's actually a compression problem, in a sense, because it's easy to combine and factor binary tables,
just that we'd hit the memory limit really fast (1Tb of RAM would fit an uncompressed bitmask for a 43-bit table, but we need at least 129 bits for md5).
But it could be possible to invent some compressed representation that would let us work around that.
Rainbow tables are a good example.

8. ## Thanks (2):

Jarek (4th April 2017),xinix (4th April 2017)

9. I see you are more experienced with this topic, but have you tried gradient descent?
The question here is whether the nonnegative low-degree polynomial actually reaches 0 (the global minimum), and whether the gradient suggests good local directions to minimize the function?

Generally it should lead to a local minimum - the only danger I can see is that an exponentially large number of strange local minima is created in the hypercube (?)
Otherwise, we could modify it to reach the global minimum - solving the problem.

Boolean variables seem the safest approach as we can focus on the [0,1]^n hypercube - I would expect more pathologies for n-ary logic.

ps. Your xor reminded me of something (very different): factorization of n can be written as maximization of
cos(2pi x) + cos(2pi n/x)
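A quick numeric illustration (my sketch): restricted to integer x, the expression reaches its maximum 2 exactly when both x and n/x are integers, i.e. at divisors of n:

```python
# cos(2*pi*x) + cos(2*pi*n/x) equals 2 exactly when both x and n/x are
# integers; scanning integer x recovers the nontrivial divisors of n.
import math

def g(x, n):
    return math.cos(2 * math.pi * x) + math.cos(2 * math.pi * n / x)

n = 35
divisors = [x for x in range(2, n) if abs(g(x, n) - 2.0) < 1e-9]
# divisors now holds the nontrivial divisors of 35
```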

10. ## Thanks:

xinix (4th April 2017)

11. > I see you are more experienced with this topic, but have you tried gradient descent?

Yes. It was actually my first idea on this topic.
I was 15 then (~1992) and got access to Mathematica 2.0 for DOS,
and it could solve all kinds of equations from school.
So I got very excited and tried to go for the RSA Challenge with it.
But practical tests showed that bruteforce is actually much faster.
I've been playing with various ideas since then.

> Here the question is if nonnegative low degree polynomial actually reaches 0:
> finding global minimum, and gradient suggests local directions to minimize the function?

So you'd have an equation of the form (x-f(y,z))^2 + (y-g(z,t))^2 + ... = 0
Taking the partial derivative with respect to x gives 2*(x-f(y,z)).
Well, you can just set x to f(y,z) and get 0 there, but what's the point,
if you actually need to find y from x?

> So generally it should lead to a local minimum - the only danger I can see
> is that there is created exponentially large number of strange local minima

Of course. The dependency would be very complex (intentionally so, too),
and you'd end up using bruteforce to get out of local minima,
and the end result would be much slower than plain binary bruteforce.

I mean, your method can be used to find the solution, it just won't
be faster than existing SAT solvers - likely much slower.

> I would expect more pathologies for n-ary logic.

The benefit of n-ary logic is that it lets me use predicates of two vars
as basic elements.
x y z      i x   i y   i z
0 0 0      0 0   0 0   0 0
0 1 0 -->  1 0 + 1 1 + 1 0
1 0 0      2 1   2 0   2 0
1 1 1      3 1   3 1   3 1

So I can use a simple 2D array to store and process the whole system -
processing becomes very simple and can be (relatively) easily optimized.
It's also great for visualization.

> your xor has reminded me that (very different):
> decomposition of n can be written as maximization of
> cos(2pi x) + cos(2pi n/x)

The problem is that trigonometric (or polynomial) forms would only be
useful if they let us simplify the equations with methods developed for those areas.
Unfortunately, that doesn't apply to converted forms of cryptography.

Trigonometric form could be used to build an analog computer though.

12. ## Thanks (2):

Jarek (4th April 2017),xinix (4th April 2017)

13. Ok, I will have to try it myself.
If there is indeed an exponentially large number of local minima, there is a nice trick of adding the Laplacian (a degree-2 polynomial here) to smooth the function and reduce the number of local minima - in analogy to evolving according to the diffusion equation (d_t = laplacian):
minimize f + lambda * laplacian(f) for some large lambda, then reduce lambda to zero with successive steps of gradient descent ...

14. It's easy to try, really.
For example, let's try finding {a,b,c,d} for {e=1,f=0} in this:
Code:
a b c d  e f
0 0 0 1  0 0
0 0 1 0  0 1
0 1 0 0  1 0
1 0 0 0  1 1

e == (a || b)
f == (a || c)
d == ! (a || b || c)
(a && b) || (a && c) || (b && c) == False

It's actually correct (within precision), but took more than 500 iterations to get there.
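For reference, here is a minimal Python sketch (my own reconstruction, not the Mathematica code used above) of the sum-of-squares objective for this system, minimized by plain gradient descent with numeric gradients:

```python
# Sum-of-squares objective for the system above, with OR(x,y)=x+y-xy and
# AND(x,y)=xy, minimized by gradient descent from starts near each corner.
import itertools

def OR(x, y):  return x + y - x * y
def AND(x, y): return x * y

def objective(v):
    a, b, c, d = v
    f  = (OR(a, b) - 1)**2                      # e == (a || b), with e = 1
    f += OR(a, c)**2                            # f == (a || c), with f = 0
    f += (1 - OR(OR(a, b), c) - d)**2           # d == !(a || b || c)
    f += (AND(a, b) + AND(a, c) + AND(b, c))**2 # pairwise ANDs all false
    f += sum(x**2 * (1 - x)**2 for x in v)      # push variables toward {0,1}
    return f

def grad(v, h=1e-6):
    # central-difference numeric gradient
    g = []
    for i in range(4):
        vp = list(v); vp[i] += h
        vm = list(v); vm[i] -= h
        g.append((objective(vp) - objective(vm)) / (2 * h))
    return g

best = None
for corner in itertools.product((0.1, 0.9), repeat=4):
    v = list(corner)
    for _ in range(300):
        v = [x - 0.1 * gx for x, gx in zip(v, grad(v))]
    if best is None or objective(v) < objective(best):
        best = v

solution = tuple(round(x) for x in best)
# expected to recover the e=1, f=0 row of the truth table: (0, 1, 0, 0)
```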

15. ## Thanks:

Mike (5th April 2017)

16. So now you are saying that it's not a problem of local minima, but just of convergence speed.
If so, there are lots of ways to speed up convergence, e.g.
- increasing the step size - this is just a degree-4 polynomial, a lot can be said about its smoothness to choose a proper step,
- a smarter termination condition - we don't need 10^-25 accuracy; getting to e.g. 1/4 distance from some vertex should make it a candidate to test,
- smoothing methods like adding the Laplacian,
- using knowledge about the problem, e.g. you can improve convergence of Newton's method by multiplying the difference by the order of the zero,
- maybe using a population of mutually repulsive points, maybe simulated annealing,
- ...

My point is that while e.g. "FindMinimum" is great for getting perfect accuracy on an unknown problem, the situation here definitely requires a much subtler approach ... and time to work on it.
I don't know if the problem of an exponential number of nonzero local minima exists when translating cryptographic tasks here - if not, I believe other obstacles like convergence speed can be solved efficiently.

17. > So now you are saying that it's not a problem of local minima, but just of the convergence speed.

No. Convergence is bad, but it's likely because of local minima (and the solution being in a corner,
which is hard to reach).

> If so, there are lots of ways to speed up the convergence, e.g.

Yeah, but I'm pretty sure that Mathematica already uses all the published ones.

Anyway, I just don't understand why you would think that there'd be useful gradients
in a polynomial model of a boolean function.
For example: x&y==0 --> (x*y)^2+(x*(1-x))^2+(y*(1-y))^2==0.
How are you supposed to find a solution without bruteforce search,
if there are equations like this in the system?

Mathematica actually has multiple different solvers, and FindMinimum is the only
one that finds anything at all with your method.

And while my original method is marginally better (using a system rather than
a single equation), it's still too slow for cryptography applications.

18. ## Thanks:

Mike (5th April 2017)

19. It is easy to show that P = NP would break all cryptography (except the one-time pad) using known plaintext. Build a decryption circuit that compares the plaintexts, then set the key bits one at a time and ask your polynomial SAT solver if there is a solution.

Most SAT problems can be solved quickly using ad-hoc methods, or when the problem is over- or under-constrained. But crypto should not fall into one of these categories. It would be interesting if these general methods worked against the whole class of crypto problems, but I kind of doubt it would be more successful than attacking them one at a time.

20. You see, but there is a fast solution for existing cryptohashes, in the form of the Rainbow Table method.
Sure, it still requires full bruteforce (and more) to build the database. But then the hashes can be cracked fast.

Also, this is not really about proving P=NP, but just about finding efficient SAT solutions.
The fact that the hash is converted to a polynomial doesn't mean that the time to solve it is not exponential :)
But in theory it could be faster than other SAT algorithms, why not.

21. Shelwien, FindMinimum (and similar) searches for some exact minimum point, reducing the step down to zero.
This is a completely different problem from what we have here: we already know that the only interesting minima are in {0,1}^n.
We just need a hint of the right direction, like a gradient, to get something better than blind brute force.
You draw outside [0,1]^n, where the real action is going on - outside there is only boring rapid growth to infinity.

I imagine the search here as taking a random point from e.g. [1/4,3/4]^n, then making gradient descent with relatively large steps like 1 (at most a few of them), then testing the closest vertex (rounding all coordinates) as a boolean candidate ... and so on with a new random initial point.
So compared to plain brute search we now additionally have gradients: local suggestions based on the entire problem.
If these suggestions turn out to be somehow helpful, such a search might be essentially faster than brute force.

I hope to find some time to test it for simple problems this weekend.

22. > Just need a hint of the right direction to get something better than blind

And I'm trying to explain that there won't be many useful gradients,
especially with your method, which hides the polynomial dependency.

> You draw outside [0,1]^n where the real action is going on -
> outside there is only boring rapid growth to infinity.

I included outside parts just to make graphs look better.
The point there is the flat part in the middle,
with the 3 equally valid solutions.

> If these suggestions turn out somehow helpful, such search might be essentially faster than brute force.

Sure, but there'd be lots of local minima in the polynomials, especially with cryptohashes.
So you'd essentially be doing a bruteforce search, just with slower verification.

> I hope to find some time to test it for simple problems this weekend.

I don't think that your method will work, but I am interested in the task itself.

23. Here is a hint:
Evolving the diffusion equation (d_t = laplacian) smooths the function, finally leading to a single local minimum.
Hence we can look at the smoothed "f + lambda * laplacian f",
going with lambda from infinity to zero.

For lambda -> infinity we just minimize the Laplacian, which in our case (a degree-4 polynomial) is a degree-2 polynomial - a trivial paraboloid with a single minimum which is cheap to find ( https://en.wikipedia.org/wiki/Conjugate_gradient_method ) - this minimum is a promising candidate for the initial point of the search, found nearly analytically based on the entire problem.
Now perform gradient descent for f + lambda * laplacian f, with lambda descending to zero - tracing a minimum of the smoothed function, like in adiabatic quantum computers ( https://en.wikipedia.org/wiki/Adiaba...um_computation ).
The question is how to choose the descending sequence of lambdas, and of steps - I will try it in a few days.
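A 1D toy illustration of this lambda schedule (my own sketch, not from the thread): for a biased double well f(x) = x^2(1-x)^2 + 0.1x, tracking the minimum of f + lambda*f'' from large lambda down to 0 lands in the global well near x = 0 rather than the local one near x = 1:

```python
# Continuation/smoothing sketch: minimize f(x) + lambda * f''(x) by gradient
# descent while lowering lambda from a large value to 0.
# f(x) = x^2 (1-x)^2 + 0.1 x has two wells; the left one (near 0) is global.
def fprime(x):    return 4*x**3 - 6*x**2 + 2*x + 0.1  # f'
def laplace_p(x): return 24*x - 12                    # derivative of f'' (quartic part)

x = 0.5  # minimum of the Laplacian alone (the lambda -> infinity limit)
for lam in (1.0, 0.5, 0.2, 0.1, 0.05, 0.02, 0.01, 0.0):
    for _ in range(200):  # gradient descent on the smoothed function
        x -= 0.01 * (fprime(x) + lam * laplace_p(x))
# x should now sit in the global well near 0, not the local one near 1
```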

24. What function are you going to use for testing? I'd prefer a real hash, at least something like zipcrypt, or md5.

25. Hashes are costly and complex to transform into 3-SAT, so I thought about starting with integer factorization (RSA) - which additionally makes it simple to scale the difficulty, and should thus allow understanding how the optimized parameters evolve with problem size.

There are two basic ways to transform a product into degree-4 polynomials:
- build a binary tree of additions (like sum_i (x_i + y_i + c_i - 2c_{i+1} - z_i)^2 ), which needs more variables but seems more stable,
- directly add multiple values, like
((sum_{i=0..n} x_i * y_{n-i}) - z_n + "carry bits from previous" - "carry bits to further")^2

The first question is what the (unique and cheaply calculated) minimum of the Laplacian will suggest ...

26. > I thought about starting with integer factorization (RSA)

Somehow I suspected that. Not sure that it would prove anything, because it's not a cryptohash in the end.
But most likely you would be able to solve it with any method, so at best we'd only learn whether it's faster than existing solvers.

> Hashes are costly and complex to transform to 3-SAT

You can use CBMC (http://www.cprover.org/cbmc/) to transform from C code to SAT format.

Also look at zipcrypt ( http://nishi.dreamhosters.com/u/zipcloak_v0.rar : zipcrypt.inc ) - it's simple and still kinda practical.

27. I have implemented generation of the degree-4 polynomial for factoring integers in Mathematica - attached.
As I suspected, the problem is not the convergence speed, but rather the quickly growing number of local minima ... I plan to test some global optimization tricks, but as expected, it doesn't look very optimistic.

28. ## Thanks:

Shelwien (8th April 2017)

29. Some updates (more details in the arxiv paper):
As expected, the degree-4 polynomials for the factorization problem from the previous post turn out really nasty.
I have tried, among others, various gradient descents exploiting the Laplacian - it doesn't essentially help compared to brute force.
Also, if we restrict to a line, we can analytically find the minima of such a 1D low-degree polynomial - getting a kind of global optimization ... but it turns out that, for random directions, there will most likely be just a single minimum along the line.

Where does this nasty swarm of local minima come from?
The x^2 (1-x)^2 terms make these minima lie close to points of {0,1}^n.
Only one of these points really reaches 0 here (satisfies all binary clauses, solving the factorization); however, each of the remaining ones violates one or a few binary clauses - there is an exponentially large number of subsets of clauses which can be unsatisfied, and many of these subsets have a corresponding local minimum, giving an extremely complex landscape of local minima ... such that the local hints from gradients are mostly just garbage.
So gradient descent just finds a smart way to lie - breaking some weak link (a few binary clauses) in the reason-result chain.

By finding situations where it is sufficient to lie in a single binary clause, it can reduce the value of the polynomial extremely close to zero.
It brings a valuable lesson, especially for understanding that adiabatic quantum computers (like the famous https://en.wikipedia.org/wiki/D-Wave_Systems ) have no chance of being really practical for larger problems - not only would they have to exponentially reduce speed to maintain adiabaticity, but also temperature, to distinguish between local minima.

However, I also have other progress - the degree of the polynomial can be reduced to 2 ... if we additionally enforce {0,1}^n boolean values for all variables, which originally required the degree-4 polynomials x^2(1-x)^2:
- 3-SAT: "x or y or z" as (x+y+z-3u-2v-w)^2 + (u+v+w-1)^2,
- "x and y" as (x+y-2)^2,
- "z=x and y" as (x+y-2z-u)^2,
- "z=x or y" as (x+y-2u-v)^2+(u+v-z)^2,
- "z=x xor y" as (x-y+u-v)^2+(u+v-z)^2.
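These degree-2 encodings can be verified exhaustively: for boolean main variables, some boolean assignment of the auxiliary variables zeroes the polynomial exactly when the encoded relation holds (my sketch):

```python
# Exhaustive check of the degree-2 encodings listed above.
from itertools import product

def zeroable(poly, n_aux):
    """True if some boolean assignment of the auxiliaries makes poly == 0."""
    return any(poly(*aux) == 0 for aux in product((0, 1), repeat=n_aux))

for x, y, z in product((0, 1), repeat=3):
    # 3-SAT clause "x or y or z" as (x+y+z-3u-2v-w)^2 + (u+v+w-1)^2
    assert zeroable(lambda u, v, w: (x+y+z-3*u-2*v-w)**2 + (u+v+w-1)**2, 3) \
        == bool(x or y or z)
    # "z = x and y" as (x + y - 2z - u)^2
    assert zeroable(lambda u: (x + y - 2*z - u)**2, 1) == (z == (x & y))
    # "z = x or y" as (x + y - 2u - v)^2 + (u + v - z)^2
    assert zeroable(lambda u, v: (x + y - 2*u - v)**2 + (u + v - z)**2, 2) \
        == (z == (x | y))
    # "z = x xor y" as (x - y + u - v)^2 + (u + v - z)^2
    assert zeroable(lambda u, v: (x - y + u - v)**2 + (u + v - z)**2, 2) \
        == (z == (x ^ y))
```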

The zeros of the degree-2 polynomial form a plane, which can be found cheaply.
So the question becomes: "does a given plane intersect {0,1}^n?"
Alternatively: "does a given sphere intersect {0,1}^n?"
Both are close to the https://en.wikipedia.org/wiki/Subset_sum_problem ... but maybe such a geometric formulation can help somehow (?)

30. The polynomial/system can be solved even without the x^2*(1-x)^2 terms.
But you would still have the same problem with local minima, because any term like (x = y and/or z) would have multiple roots,
and without those we'd be able to just directly invert the function (z = x xor y -> x = z xor y).

What kind of properties does your polynomial form have that make you expect gradient descent to solve it faster than bruteforce?
Do you think that the value of the whole polynomial is a correct representation of the distance to the solution, or some such?

31. But in the post you linked you also use x^2 (1-x)^2 terms?
I previously naively expected that gradients should bring some local hints suggesting where to search for the answer - at least allowing an improvement over brute force: besides just testing random values, perform a few steps of gradient descent to improve such a random guess ... but as I wrote yesterday, I no longer think so - the local "hints" from the gradient usually turn out to be just garbage.
The problem is that around nearly every point of {0,1}^n there is some local minimum corresponding to violating some small number of binary clauses - from a distance, all of them attract gradient descent.

I have simplified it to a degree-2 polynomial plus the constraint of being in {0,1}^n.
The zeros of the degree-2 polynomial form a plane
Ax=b
where A is a rectangular integer matrix.
Hence the question now is whether this plane intersects the hypercube vertices:
does {x: Ax=b} intersect {0,1}^n?
This can be transformed into the question of whether a sphere intersects {0,1}^n.
I am trying to attack these simplified geometric problems, but they still seem really tough.

32. > But in the post you have linked you also use x^2 (1-x)^2 terms?

I meant this: http://nishi.dreamhosters.com/u/jd_sat3.png

> I have previously naively expected that gradients should bring some local
> hints suggesting where to search for the answer -

Well, that's not impossible, although it's much better (for simplification and convergence)
to build a system of equations rather than a single one in your style.

Just imagine that you have to solve (x xor y = z), where x/y/z are not bits but
e.g. 32-bit integers. It's easily possible to solve it analytically, but with
your method it's as troublesome as anything else.

> at least allow to improve brute force: beside just testing random values,
> perform a few steps of gradient descent to improve such random guess ...

The problem is that cryptographic functions are designed specifically
to make that impossible. For example, a partial solution (which turns
a large part of the terms to zero, but not all) would most likely have very little
in common with the real solution.

> but as I wrote yesterday, I no longer think so -
> local "hints" from the gradient usually turn out to be just garbage.

I also tried a statistical approach, where the variables are probabilities
of bits being 0 or 1 (possibly contextual).
For example, a xor b = c -> c = a*(1-b)+(1-a)*b;
But I wasn't able to produce any skewed probabilities in cryptohashes -
when 0.5s are passed on input, I always got perfect 0.5s on output.

So now I think that SAT is mostly a problem of handling large amounts
of data efficiently. A compression problem, in a way.
I mean, building an inverse function for md5 wouldn't be a problem
if we had ~2^128 bytes of fast memory - it's only a problem because
we can't merge a large enough part of it into a single equation
for it to become possible to notice (and factor out) any dependencies.

33. Ok, I see you represent it as a nonlinear equation F(x)=0.
In practice it is usually solved with some Newton's method ( https://en.wikipedia.org/wiki/Newton%27s_method ), again using linear approximation (differentiation):
x_{n+1} = x_n - (DF)(x_n)^{-1} F(x_n)

x*y=0 for AND says one of x, y is zero and the other is any real ... x*y=1 gives a hyperbola.
x+y-2xy for XOR is zero on the hyperbola y=x/(2x-1), and one on the hyperbola y=(x-1)/(2x-1).
Intersecting such hyperbolas you will probably get an exponential number of intersections - most of them being fake solutions: non-binary ones ...

The likely pathology here is an exponential number of zeros outside the boolean values, which also attract from a distance in such gradient-based methods - I believe the problem is similar; I will think about it.
Without exponential growth of such attracting fake solutions, both methods should be essentially better than brute force.
I have focused on polynomials for simplicity and better understanding (and for preventing fake solutions), but you can differentiate your complex formulas using https://en.wikipedia.org/wiki/Automatic_differentiation

34. Shelwien, try to experiment with the Newton method, but I believe your constraints are too weak - they will usually land far from boolean values.
You can improve it by adding x(1-x)=0 conditions to enforce binary values.

This way you would get a set of constraints \forall_i F_i(x)=0,
which can be formulated as just minimization of sum_i F_i(x)^2.
So your formulation and the sum-of-squares one are practically equivalent - the garbage information from the gradient applies to both. E.g. the Newton method will give up in one of the local minima of the sum of squares.

I believe the simplest we can get this way is a degree-2 polynomial plus constraints of being in {0,1}^n - which is equivalent to the question of whether a plane (or sphere) intersects {0,1}^n.
Try to solve this simplest form to understand its difficulty ...

35. After simplifications, the above considerations lead to the NP-hard subset-sum problem - I have recently been trying to play with it (pages 5-6 of https://arxiv.org/pdf/1703.04456 ) in the form:

we have a set of natural numbers x_i - can they be split into two subsets of the same sum?

(*) in other words: does there exist a \in {-1,1}^n such that sum_i a_i x_i = 0 ?

So we have a simple-looking set of 2^n values, and we would like to use some fancy mathematics to test whether it contains zero.

It turns out that the Fourier transform of this set of 2^n values can be found analytically (a product of cosines) - so testing whether the set contains 0 becomes testing whether this integral is nonzero:

0 \neq \int_0^{2\pi} \prod_i cos(\phi x_i) d\phi

Unfortunately, substituting exponentials for the cosines, or using the residue theorem, we return to the original problem (*).
Monte Carlo would need an exponential number of samples.
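For small instances the equivalence can still be checked numerically (my sketch): the integral equals 2*pi * 2^{-n} times the number of sign choices summing to zero, so it is nonzero exactly when a balanced signing exists:

```python
# Numeric check: integral of prod_i cos(phi * x_i) over [0, 2*pi] is
# nonzero exactly when some signing sum_i a_i x_i vanishes.
import math
from itertools import product

def has_zero_signing(xs):
    """Brute force: does some a in {-1,1}^n give sum a_i x_i = 0?"""
    return any(sum(a * x for a, x in zip(signs, xs)) == 0
               for signs in product((-1, 1), repeat=len(xs)))

def cos_integral(xs, samples=1 << 14):
    # simple Riemann sum; integer frequencies keep the integrand 2*pi-periodic
    h = 2 * math.pi / samples
    return h * sum(math.prod(math.cos(k * h * x) for x in xs)
                   for k in range(samples))

xs_yes = [1, 2, 3]   # 1 + 2 - 3 = 0, so a balanced signing exists
xs_no  = [1, 2, 4]   # total sum is odd, so no signing can reach 0
```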

Another interesting fact is that we can find the sum of k-th powers ( k >= 0 ) of all 2^n such values - for odd powers it is zero, and for low even powers there are inexpensive formulas.

I was also trying to get formulas for sums of other functions over these 2^n values: e.g. a function that is infinite at 0, to test whether the sum diverges (but testing divergence seems extremely difficult), or absolute values, to test differentiability of the sum (difficult to get a formula).

Any other ideas - especially how to squeeze something out of this set of 2^n values?

36. It seems equivalent to https://en.wikipedia.org/wiki/Subset_sum_problem (or https://en.wikipedia.org/wiki/Knapsa...et-sum_problem)
We just have to set s = (sum a_i)/2 .

Not sure if it's related, but the huffman algorithm also gives us the minimum of sum l_i*a_i.

37. Indeed, and I have stated that. From the minimization approaches in the first posts here, the best I could do is a degree-2 polynomial plus conditions forcing binary variables (e.g. by the degree-4 polynomials x^2(1-x)^2).
A degree-2 polynomial is minimized on a linear subspace - so the simplified question is whether a given linear subspace intersects e.g. {0,1}^n.
The simplest nontrivial linear subspace we can get here is a hyperplane (codimension 1) - defined by a normal vector, preferably with integer coordinates ... and this is exactly the subset-sum problem.

Hence, it seems the best we can do from the above minimization discussion is to go down to the subset-sum problem - I have decided to focus on it ... and found some fancy mathematical tricks for it.

The most convenient formulation: for a set of n natural numbers x_i, decide whether the size-2^n set
S = { \sum_i a_i x_i : a_i \in {-1,1} }
contains zero.

So the question is whether we could use some mathematical trick to describe this simple-looking set S, and extract from this description the information whether S contains zero.

It turns out that we can analytically and inexpensively calculate the sum of k-th powers of all elements of S ... we can also analytically find the Fourier transform of this set S: it is the product of cosines.
The question whether S contains zero becomes the question whether the integral below is nonzero:

0 \neq \int_0^{2\pi} \prod_i cos(\phi x_i) d\phi

Here are examples of four such products of cosines, differing only in the last x_i ... we can visually see that only one of them has an average (integral) different from zero (the one with 14) - satisfying the subset-sum problem.
However, this difference drops like 1/2^n with the size of the problem ...

Any other ideas for a mathematical description of this set S? Or for determining whether this integral/average is zero?

ps. Indeed, the Huffman algorithm is extremely interesting from this point of view: a greedy polynomial-time algorithm that somehow turns out to be optimal (among prefix codes).

38. One option I considered is using a single number instead of the a_i coefficients.
We can extract the bits from it, for example, using cosines instead of modulo, and sign(x)=x/Abs(x) to get the a_i.

As to huffman, maybe we can treat S(a)=0 as a minimum of some f(a), where S(a)=f'(a).

39. I don't see how to use a binary representation for the a_i, e.g. to get some simplified description of the set S?

Indeed, the optimality of (polynomial-time) Huffman suggests trying to reach difficult problems with such greedy approaches.

Another algorithm whose polynomial running time is surprising is the one used in compressed sensing/sampling (good introduction):
You have a long vector of size N which is "sparse": only K << N coordinates are nonzero.
To recover these nonzero coefficients (and their positions - an exponential number of possibilities), you perform at least
M ~ K log(N/K) measurements - projections in e.g. random directions.
The trick is to use L1 minimization, which can be performed with (polynomial-time) linear programming.

Any other algorithms whose polynomial running time seems surprising?
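The L1-recovery step can be sketched as a linear program (assuming scipy is available; the sizes N, K, M below are arbitrary toy values): split x = u - v with u, v >= 0, so that ||x||_1 = sum(u + v) becomes a linear objective under the measurement constraints.

```python
# Sketch of compressed-sensing recovery by L1 minimization via linear
# programming: minimize sum(u + v) subject to A @ (u - v) = y, u, v >= 0.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, K, M = 30, 2, 12              # signal length, sparsity, measurement count
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.normal(size=K)
A = rng.normal(size=(M, N))      # random projection directions
y = A @ x_true                   # the M measurements

c = np.ones(2 * N)               # objective: sum(u) + sum(v) = ||x||_1
A_eq = np.hstack([A, -A])        # equality constraint A @ (u - v) = y
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * N))
x_hat = res.x[:N] - res.x[N:]    # recovered signal
```

With these sizes and Gaussian measurements, L1 minimization typically recovers the sparse x_true exactly, which is the surprising part: a combinatorial-looking support search solved by a polynomial-time LP.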
