There are approximately N/ln(N) primes between N and 2N

Just saw this very nice video by @numberphile and thought I'd whip up a small Python program to demonstrate the prime number theorem:


#!/usr/bin/env python
#
# "Chebyshev said it, and I say it again: There's always a prime between n and 2n."
#

import sys
import math

class PrimeFinder:

    def __init__( self, n ):
        self.n = n

    def isNPrime( self, N ):
        # 0 and 1 are not prime
        if N < 2:
            return False
        for x in range( 2, int( math.sqrt( N ) ) + 1 ):
            if N % x == 0:
                return False
        return True

    def computeAllPrimesBetweenNAndTwoN( self ):
        result = []
        for N in range( self.n, 2 * self.n + 1 ):
            if self.isNPrime( N ):
                result.append( N )
        return result

def main():
    if len( sys.argv ) != 2:
        print "Prints all prime numbers between N and 2N"
        print "Usage: %s N" % sys.argv[ 0 ]
        print "Where N is some positive, natural number."
        sys.exit( 0 )

    N = int( sys.argv[ 1 ] )
    primeFinder = PrimeFinder( N )
    allPrimes = primeFinder.computeAllPrimesBetweenNAndTwoN()
    print "There are %u primes between %u and %u: %s" % (
        len( allPrimes ), N, 2 * N, str( allPrimes )[ 1 : -1 ]
    )

if __name__ == "__main__":
    main()

And it seems to work, but check WolframAlpha if you don’t trust me 🙂


$ ./myprimes.py 100000
There are 8392 primes between 100000 and 200000: 100003, 100019, 100043 ...
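
For comparison, here is a quick back-of-the-envelope check (not part of the script above): the N/ln(N) estimate from the title works out to roughly 8686 for N = 100000, in the same ballpark as the 8392 primes the script actually finds.


import math

N = 100000
estimate = N / math.log( N )   # prime number theorem: about N / ln(N) primes between N and 2N
print "PNT estimate: %.0f, actual count: 8392" % estimate   # roughly 8686 vs. 8392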

How to use SciPy Least Squares to minimize multiple functions at once

SciPy comes with a least squares Levenberg-Marquardt implementation: scipy.optimize.leastsq. It minimizes the sum of squares of a vector of residuals. By defining those residuals as the differences between your measurements and your model function, you can fit the model to the measurements.

Sometimes your model consists of several functions that share parameters. You can fit all of them at once using this approach:

  • Define the functions you want to fit, A(p0), B(p1), …;
    their combined parameters form a single tuple (p0, p1, …).
  • Define the function to be minimized as f(x0), where x0 is unpacked into the parameter tuple.
  • The function f returns a vector of differences between the discrete measured samples and the individual functions A, B, etc.
  • Let SciPy minimize this function, starting from a reasonably chosen initial parameter vector.

This is an example implementation:


import math
import scipy.optimize

# x -> [ value compared against C, value compared against B, value compared against A ]
measured = {
    1: [ 0, 0.02735, 0.47265 ],
    6: [ 0.0041, 0.09335, 0.40255 ],
    10: [ 0.0133, 0.14555, 0.34115 ],
    20: [ 0.0361, 0.205, 0.2589 ],
    30: [ 0.06345, 0.23425, 0.20225 ],
    60: [ 0.132, 0.25395, 0.114 ],
    90: [ 0.2046, 0.23445, 0.06095 ],
    120: [ 0.2429, 0.20815, 0.04895 ],
    180: [ 0.31755, 0.1618, 0.02065 ],
    240: [ 0.3648, 0.121, 0.0142 ],
    315: [ 0.3992, 0.0989, 0.00195 ]
}

# The three model functions share the parameters a, k and l.
def A( x, a, k ):
    return a * math.exp( -x * k )

def B( x, a, k, l ):
    return k * a / ( l - k ) * ( math.exp( -k * x ) - math.exp( -l * x ) )

def C( x, a, k, l ):
    return a * ( 1 - l / ( l - k ) * math.exp( -x * k ) + k / ( l - k ) * math.exp( -x * l ) )

# Residual vector: differences between the model functions and the measurements.
def f( x0 ):
    a, k, l = x0
    error = []
    for x in measured:
        error += [ C( x, a, k, l ) - measured[ x ][ 0 ],
                   B( x, a, k, l ) - measured[ x ][ 1 ],
                   A( x, a, k ) - measured[ x ][ 2 ]
                 ]
    return error

def main():
    x0 = ( 0.46, 0.01, 0.001 ) # initial parameters for a, k and l
    x, cov, infodict, mesg, ier = scipy.optimize.leastsq( f, x0, full_output = True, epsfcn = 1.0e-2 )
    print x

if __name__ == "__main__":
    main()

SciPy returns a lot more information than just the final parameters; see the documentation for details. You may also want to tweak epsfcn for a better fit, depending on the shape and properties of your functions.
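
As a small sketch of what that extra information looks like (reusing f and x0 from the example above), the residual vector at the solution is available in infodict, and mesg / ier report whether the solver converged:


x, cov, infodict, mesg, ier = scipy.optimize.leastsq( f, x0, full_output = True, epsfcn = 1.0e-2 )

print "sum of squared residuals:", ( infodict[ 'fvec' ] ** 2 ).sum()   # fvec is f evaluated at the solution
print "solver message:", mesg
print "converged:", ier in ( 1, 2, 3, 4 )   # an ier value between 1 and 4 indicates success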