## Random numbers

1. It occurred to me that pseudorandom and random are really the same thing: pseudorandom is just a man-made version of random for computers.

You get a random number by measuring something natural, like air temperature. What gets measured is generally a large number of simple functions added together, like the impacts of air molecules that were traveling in approximately straight lines at many different velocities. You get something random-looking when you steer away from the predictable part, where the functions average out, and look at a tiny piece of the number over a tiny range, like the digits that are left after the significant part.

Pseudorandom works the same way. Some number of simple functions are added (multiplication and any other complicated function can be broken down into lots of additions; xor is addition mod 2), then you do something equivalent to taking the residue mod something, which is the same as looking at the small digits of a number. The result is a part of the sum over a range where the simple functions interact with each other in complicated ways, so the behavior of each individual function is not apparent and the cumulative pattern is hard to discern.

Finding the value of a polynomial mod m (m is often a power of 2) is a pretty literal version of this procedure, and it turns up everywhere in pseudorandom applications.
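The procedure described above can be sketched as a linear congruential generator: iterate a degree-1 polynomial and keep only the residue mod 2^32, i.e. the low digits of the running sum. The constants below are the well-known Numerical Recipes values, chosen here just for illustration.

```python
# Minimal LCG sketch: evaluate a degree-1 polynomial and keep only the
# residue mod 2**32 (the "small digits" of the sum).
M = 2 ** 32
A, C = 1664525, 1013904223  # Numerical Recipes constants

def lcg(seed, n):
    """Return n pseudorandom values by iterating x -> (A*x + C) mod M."""
    x, out = seed, []
    for _ in range(n):
        x = (A * x + C) % M   # drop the predictable high part of the sum
        out.append(x)
    return out

print(lcg(1, 5))
```

Each output is the low 32 bits of a sum whose predictable high part has been discarded, which is exactly the "look at the small digits" step described above.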

2. This is true only for a creature that knows exactly how the Universe works. Hey, God!

3. Originally Posted by Bulat Ziganshin: "this is true only for creature that exactly know how the Universe works. hey, God!"
Actually, the idea of random was invented by people. How could you determine whether something natural is "truly" random, or the sum of deterministic functions? The idea of truly random is something we made up to do math. It couldn't have come from nature.

But actually that's tangential to my point. What I'm trying to do is share some insight into why some functions you can do on computers actually look random. Those processes look random because they mimic the kind of process in the real world that would give a random-looking result (the digits from the temperature don't have to be thought of as random, they could be part of the actual air temperature, which you could assume is meaningful and therefore not truly random).

4. Random means unpredictable. A hardware random number generator is unpredictable because you don't know the state of the system (the positions and velocities of every atom). A software random number generator is unpredictable because you don't have enough computing power (brute force search for a cryptographic key).
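To make the "not enough computing power" point concrete: a software generator is only unpredictable while its internal state is too large to search. The toy generator below has a deliberately tiny 16-bit state (the generator and its constants are illustrative, not a real cipher), so brute force recovers it easily.

```python
# Toy illustration: a PRNG with a small state is predictable by brute force.
M = 2 ** 16

def weak_prng(seed, n):
    """Return n outputs of a weak 16-bit LCG starting from seed."""
    x, out = seed, []
    for _ in range(n):
        x = (25173 * x + 13849) % M   # small-modulus constants, for show only
        out.append(x)
    return out

observed = weak_prng(12345, 4)   # outputs an attacker has seen
# Brute-force all 2**16 possible seeds until one reproduces the outputs.
recovered = next(s for s in range(M) if weak_prng(s, 4) == observed)
print(recovered)                 # state found: all future outputs predictable
```

With a real cryptographic generator the same search exists in principle, but the state space (e.g. 2^128 or more) makes it infeasible, which is all "unpredictable" means here.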

5. Originally Posted by Matt Mahoney: "Random means unpredictable. A hardware random number generator is unpredictable because you don't know the state of the system (the positions and velocities of every atom). A software random number generator is unpredictable because you don't have enough computing power (brute force search for a cryptographic key)."
It's not intuitive that a deterministic function can be unpredictable. Pseudorandom functions can be thought of as using the same principle as natural random functions: simulate or measure a chaotic system (random.org helped put a name to what I've been describing).

6. Yet for some purposes, we can think of the digits of pi as random.

Strictly speaking, chaotic systems don't exist. Chaotic states are real-valued, like x := 4x(1-x) (initially 0 < x < 1) which never repeats. But all physical systems are discrete (finite number of states) because the universe has finite entropy.

7. Originally Posted by Matt Mahoney: "Yet for some purposes, we can think of the digits of pi as random.

Strictly speaking, chaotic systems don't exist. Chaotic states are real-valued, like x := 4x(1-x) (initially 0 < x < 1) which never repeats. But all physical systems are discrete (finite number of states) because the universe has finite entropy."
Let x = 3/4, which is in the range you state.

Then 4*(3/4)*(1 - 3/4) = 3/4. It repeats with the same value, so you are missing something or wrong.

8. Originally Posted by Matt Mahoney: "Yet for some purposes, we can think of the digits of pi as random.

Strictly speaking, chaotic systems don't exist. Chaotic states are real-valued, like x := 4x(1-x) (initially 0 < x < 1) which never repeats. But all physical systems are discrete (finite number of states) because the universe has finite entropy."
I didn't think chaotic was predicated on whether or not the universe is discrete. That would be a problem, because I don't think there's a way to find out.

9. Chaotic models are quite useful. For example, 3 or more objects in orbit follow chaotic motion, which means that any uncertainty in the current state grows over time. This means that the motions of the planets and satellites in our solar system cannot be predicted accurately far into the future. Weather is also chaotic, which limits our ability to forecast it. But of course these systems really have exact quantum states, which we do not know. The analog models we use are really just approximations (much more useful than the exact models).

The system x := 4x(1-x) is chaotic because if you start with x = 3/4 + e for any small e ≠ 0, it will show chaotic behavior. The more general form of the logistic map, x := rx(1-x), has some interesting properties. https://en.wikipedia.org/wiki/Logistic_map
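Both behaviors discussed in this thread are easy to check numerically: starting exactly at the fixed point x = 3/4 the map stays put, while starting at 3/4 + e the perturbation roughly doubles each step (the map's derivative at that fixed point is -2) until the trajectory is unrecognizable. A minimal sketch:

```python
def logistic(x, n):
    """Iterate the logistic map x -> 4x(1-x) n times."""
    for _ in range(n):
        x = 4 * x * (1 - x)
    return x

print(logistic(0.75, 50))         # exactly the fixed point: stays at 0.75
print(logistic(0.75 + 1e-9, 50))  # a billionth away: wanders off chaotically
```

Note that 0.75, 3.0, and 0.25 are all exactly representable in binary floating point, so the fixed point really is exact here; the perturbed run, by contrast, is sensitive to every rounding error.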

10. I was looking at the Wikipedia entry for Randomized algorithm and I came across something interesting. Here's an excerpt (emphasis is mine):

• In a chemical reaction network (a finite set of reactions like A+B → 2C + D operating on a finite number of molecules), the ability to ever reach a given target state from an initial state is decidable, while even approximating the probability of ever reaching a given target state (using the standard concentration-based probability for which reaction will occur next) is undecidable. More specifically, a Turing machine can be simulated with arbitrarily high probability of running correctly for all time, only if a random chemical reaction network is used. With a simple nondeterministic chemical reaction network (any possible reaction can happen next), the computational power is limited to primitive recursive functions.

It's fascinating that you can have a situation where you can prove that a state is reachable, but the probability of reaching that state is impossible to calculate, even when you know all the rules (I think they meant to say the probability is uncomputable, because deciding is binary true or false). I guess you could probably construct such a situation out of Turing machines pretty easily, without yielding much deep insight. But a Turing machine has an infinite tape, which allows it to create an infinite number of new states. If you're simulating a chemical reaction, the number of states would be limited, so it's surprising that there are reachable states with uncomputable probabilities. (Assuming I'm interpreting this correctly.)

Edit: I went off on a tangent in college and studied some chemistry, so I interpret what it calls the state as meaning the concentrations of each molecule, i.e. the number and types of molecules that exist at a particular point. You could infer from this that it's impossible to assign a probability that a particular molecule ever gets made at all. Which could mean that the probability of spontaneously creating a self-replicating molecule is probably uncomputable. We could owe our existence to an event with an uncomputable probability of happening. It kind of strains comprehension to think that some event must have a probability, but that probability can't be known, even when you know all the parameters. Actually, if a system runs forever, does that mean there could be no way to get all of the probabilities to converge?
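As for what the excerpt calls the "standard concentration-based probability," the usual rule (as in the Gillespie algorithm) is that each reaction fires next with probability proportional to the number of ways its reactants can meet. A toy sketch of that rule, with two hypothetical reactions and starting counts chosen only to show the mechanics (they are not from the article):

```python
import random

# Toy stochastic simulation of a two-reaction network on molecule counts:
#   R1: A + B -> 2C + D   (propensity = number of (A, B) pairs)
#   R2: 2C   -> A + B     (propensity = number of C pairs)
# Each step, the next reaction fires with probability proportional to its
# propensity -- the "concentration-based" rule the excerpt refers to.

def step(counts, rng):
    p1 = counts["A"] * counts["B"]
    p2 = counts["C"] * (counts["C"] - 1) // 2
    total = p1 + p2
    if total == 0:
        return False          # terminal state: no reaction can fire
    if rng.random() * total < p1:
        counts["A"] -= 1; counts["B"] -= 1
        counts["C"] += 2; counts["D"] += 1
    else:
        counts["C"] -= 2
        counts["A"] += 1; counts["B"] += 1
    return True

counts = {"A": 5, "B": 5, "C": 0, "D": 0}
rng = random.Random(0)
for _ in range(20):           # bounded run; the chain itself may never halt
    if not step(counts, rng):
        break
print(counts)
```

Simulating forward like this is easy; the undecidability result is about the opposite direction: no algorithm can compute, to a guaranteed accuracy, the probability that such a walk ever reaches a given target state.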
