It occurred to me that pseudorandom and random are really the same thing. Pseudorandom is just a man-made version of random for computers.
You get a random number by measuring something natural, like air temperature. What gets measured is generally a large number of simple functions added together, like the impacts of air molecules traveling in approximately straight lines at many different velocities. You get something random-looking when you steer away from the predictable part, where the functions average out, and instead look at a tiny piece of the number over a tiny range, like the digits left after the significant part.
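Here's a toy sketch of that (everything in it is made up for illustration, including the 20-degree baseline and the particular frequencies): add up a thousand unrelated sinusoids, and the leading digits of the reading stay predictable while the digits far past the significant part jump around.

```python
import math

# Stand-ins for many simple, overlapping influences (e.g. molecule impacts).
# The frequencies and phases are arbitrary; they just need to be unrelated.
waves = [(math.sqrt(i), (i * i) % 7) for i in range(1, 1001)]

def measurement(t):
    # A mostly predictable reading (around 20) with a tiny fluctuating part.
    return 20.0 + 1e-4 * sum(math.sin(f * t + p) for f, p in waves)

for t in (1.0, 1.1, 1.2, 1.3, 1.4):
    x = measurement(t)
    low_digits = int(x * 1e9) % 1000  # digits far past the significant part
    print(f"t={t:.1f}  reading={x:.4f}  low digits={low_digits:03d}")
```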
Pseudorandom works the same way. Some number of simple functions are added together (multiplication, and any other complicated function, can be broken down into lots of additions; xor is addition mod 2), then you do something equivalent to taking the residue mod something, which is the same as looking at the small digits of a number. The result is a piece of the sum over a range where the simple functions interact with each other in complicated ways, so no individual function's behavior is apparent and the cumulative pattern is hard to discern.
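A quick sketch of the xor flavor, using the shift constants from one of Marsaglia's xorshift generators: each xor adds a shifted copy of the state bit by bit mod 2, and masking to 32 bits is the residue step.

```python
def xorshift32(x):
    # Each xor adds a shifted copy of the state, bit by bit, mod 2.
    # The & 0xFFFFFFFF masks keep only the "small digits":
    # the residue mod 2^32.
    x ^= (x << 13) & 0xFFFFFFFF
    x ^= x >> 17
    x ^= (x << 5) & 0xFFFFFFFF
    return x

x = 2463534242  # any nonzero 32-bit seed works
for _ in range(5):
    x = xorshift32(x)
    print(x)
```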
Finding the value of a polynomial mod m (m is often a power of 2) is a pretty literal version of this procedure, and it turns up everywhere in pseudorandom applications.
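The plainest example is a linear congruential generator: evaluate a degree-one polynomial in the current state and keep the residue mod 2^32. A minimal sketch, using the multiplier and increment popularized by Numerical Recipes:

```python
M = 2**32
A = 1664525       # multiplier and increment popularized by Numerical Recipes
C = 1013904223

def lcg(x):
    # Evaluate the polynomial A*x + C, then keep only the residue mod 2^32,
    # i.e. the low 32 bits of the sum.
    return (A * x + C) % M

x = 42  # seed
for _ in range(5):
    x = lcg(x)
    print(x)
```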