One method of computing is to require that all numbers be real values between 0 and 1, and then, instead of encoding these numbers into bit streams using binary, represent each number x as a long stream of random bits, each of which is 1 with probability x. An advantage is that computations which require many logic gates in binary can be implemented much more simply, provided the randomness in the input bit streams is uncorrelated; eg x*y can be implemented by ANDing the bit streams together, and (x+y)/2 can be implemented by evenly sampling both of the inputs (select about half the bits from x and the other half from y, and concatenate the selected bits, in any order, to produce the output). Another advantage is that this method is naturally tolerant of noise.
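A minimal sketch of those two operations in Python (the stream length, seed, and helper names are my own choices for illustration; real stochastic-computing hardware would use hardware random number generators and gates, not lists):

```python
import random

random.seed(0)
N = 100_000  # stream length; longer streams give more precise estimates

def encode(x, n=N):
    """Encode a value in [0, 1] as a stream of bits, each 1 with probability x."""
    return [1 if random.random() < x else 0 for _ in range(n)]

def decode(bits):
    """Decode a stream back to a value: the fraction of 1s."""
    return sum(bits) / len(bits)

x, y = encode(0.5), encode(0.3)

# Multiplication: AND two independent streams, since P(a AND b) = P(a) * P(b).
prod = [a & b for a, b in zip(x, y)]

# Scaled addition: take each output bit from x or y with equal probability,
# which samples about half the bits from each and gives (P(x) + P(y)) / 2.
avg = [a if random.random() < 0.5 else b for a, b in zip(x, y)]

print(decode(prod))  # ~0.15, i.e. 0.5 * 0.3
print(decode(avg))   # ~0.40, i.e. (0.5 + 0.3) / 2
```

Note that the AND trick only works because the two streams are independent; ANDing a stream with itself returns x, not x².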

Because the circuit tolerates noise, power can be saved: circuit elements can be designed to consume less power at the cost of producing noisier results.
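The noise tolerance can be illustrated with a quick simulation (the flip probability and stream length here are arbitrary assumptions): flipping any single bit changes the decoded value by only 1/N, so flipping each bit independently with probability p shifts the estimate by at most about p.

```python
import random

random.seed(1)
N = 100_000
x = 0.7
bits = [1 if random.random() < x else 0 for _ in range(N)]

# Model noisy circuit elements by flipping each bit with probability p.
p = 0.01
noisy = [b ^ (random.random() < p) for b in bits]

decoded = sum(noisy) / N
# Expected decoded value is x*(1-p) + (1-x)*p = 0.696 here, a shift of
# only 0.004 despite 1% of the bits being corrupted. In a positional
# binary encoding, a single flipped high-order bit could halve the value.
print(decoded)  # ~0.696
```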

A disadvantage is that the number of bits needed to represent each number scales exponentially with the required precision, as opposed to radix encodings such as binary, which scale linearly (eg to represent one of 256 values, you need 8 bits in binary but 256 bits using stochastic computing).
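A small worked comparison of the two scalings (the stochastic count here is the deterministic minimum of one bit per level; in practice sampling variance pushes the required stream length even higher):

```python
import math

for levels in (16, 256, 65_536):
    binary_bits = math.ceil(math.log2(levels))  # radix encoding: logarithmic
    stochastic_bits = levels                    # stochastic: one bit per level, at minimum
    print(f"{levels} levels: {binary_bits} binary bits vs {stochastic_bits} stochastic bits")
```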

Obviously, this sort of encoding is a candidate neural code.

Some links:

- https://en.wikipedia.org/wiki/Stochastic_computing
- https://spectrum.ieee.org/computing/hardware/computing-with-random-pulses-promises-to-simplify-circuitry-and-save-power
- https://www.researchgate.net/profile/Weikang_Qian/publication/224182500_An_Architecture_for_Fault-Tolerant_Computation_with_Stochastic_Logic/links/00b495330bafab0a03000000.pdf
- https://pdfs.semanticscholar.org/120f/1b1c6ed76ca858e1c273ced4b2bfeb7a5a60.pdf