Proof of fixed-point theorem

Hey guys,

I tried to write a proof of Banach's contraction mapping theorem, which is extremely important for fixed-point iteration when numerically solving for the zeroes of an equation. I think it even extends to PDEs, where a function that solves the PDE is a fixed point in an infinite-dimensional function space. Do you guys think my proof is rigorous and technically correct?
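For reference, the statement I'm working from (the standard form, as I understand it): if $(X, d)$ is a complete metric space and $T : X \to X$ satisfies $d(T(x), T(y)) \le q \, d(x, y)$ for some $q \in [0, 1)$ and all $x, y \in X$, then $T$ has a unique fixed point $x^*$, and the iterates $x_{n+1} = T(x_n)$ converge to $x^*$ from any starting point $x_0 \in X$.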

Cheers,
Quasar.
 

[Attachment: the proof write-up mentioned above]

Thanks for sharing, I didn't know about Aitken's acceleration!

It would be a good exercise for me to show that, for the recursive sequence $x_{n+1} = \frac{1}{2}\left(x_n + \frac{2}{x_n}\right)$, Aitken acceleration has a quadratic (if I understood correctly?) rate of convergence, i.e. $\lim_{n \to \infty} \frac{(Ax)_n - l}{|x_n - l|^2} = \mu$, where $\mu > 0$, both analytically and experimentally.
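Here's a minimal sketch of that experimental check in Python (standard Aitken $\Delta^2$ formula; the starting guess $x_0 = 2$ and the iteration count are my own choices):

from math import sqrt

# Iterate x_{n+1} = (x_n + 2/x_n)/2, which converges to sqrt(2),
# then apply Aitken's Delta^2 process and print ((Ax)_n - l)/|x_n - l|^2.
l = sqrt(2.0)
x = [2.0]
for _ in range(6):
    x.append(0.5 * (x[-1] + 2.0 / x[-1]))

for n in range(len(x) - 2):
    d1 = x[n + 1] - x[n]
    d2 = x[n + 2] - 2.0 * x[n + 1] + x[n]
    if d2 == 0.0:                       # iterates have hit machine precision
        break
    ax_n = x[n] - d1 * d1 / d2          # Aitken's Delta^2 formula
    print(n, (ax_n - l) / (x[n] - l) ** 2)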
 
I recently purchased a copy of this book: An Introduction to Numerical Analysis by Süli and Mayers. It assumes only a first course in analysis as background, and the presentation of the material is intuitive, rigorous and clear - no hand-waving. I find it to be such a nicely written undergraduate-level intro.
 
A nice feature of fixed-point/contraction theorems is that they converge for any initial guess, in contrast to, say, Newton's method.
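A quick numerical illustration of that (my own toy example: $g(x) = \frac{1}{2}\cos x$ is a contraction on all of $\mathbb{R}$ since $|g'(x)| \le \frac{1}{2}$, so the iteration converges from any start):

from math import cos

# Fixed-point iteration for a contraction: converges from ANY initial guess.
def fixed_point(g, x0, tol=1e-14, max_iter=200):
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

g = lambda x: 0.5 * cos(x)
for x0 in (-100.0, 0.0, 3.0, 1e6):
    print(x0, "->", fixed_point(g, x0))   # all converge to the same fixed point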
 
If I remember correctly, that's precisely why one has to run a bracketing routine prior to Newton-Raphson. Geometrically speaking, if $x_n$ is such that $f'(x_n)$ is close to zero, then $x_{n+1}$ can be far away from the root.
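A small sketch of that geometric point (my own toy example: $f(x) = \cos x$, whose nearest root is $\pi/2$, starting where the derivative is nearly zero):

from math import cos, sin, pi

# One Newton-Raphson step x1 = x0 - f(x0)/f'(x0) for f(x) = cos(x).
f = lambda x: cos(x)
df = lambda x: -sin(x)

x0 = 0.01                       # f'(x0) ~ -0.01, almost flat here
x1 = x0 - f(x0) / df(x0)        # the Newton step
print("nearest root:", pi / 2)  # ~1.5708
print("x1 =", x1)               # ~100.0, thrown far from the root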
 
By the way, do you have a suggestion for a good intro book to C++ (using C++17)? I definitely have to re-learn modern C++, and I want to implement some of the algorithms I learn in my numerics book.

QuantLib implements many of the solvers in C++, but I'd like to make an attempt at writing my own code.

SO has a well-written post on this here. But (1) the answer was first written a while back, and (2) there aren't a lot of books that combine both numerical methods and modern C++.
 
Do you already know C++11? Or even C++03?
BTW, C++17 is only a stepping stone to C++20. TBH, I don't think you are ready.

It's like judo:

white -> yellow -> orange -> green -> blue -> brown -> black

And where are you, in C++ terms?

You can do all these algos in C. Or Python.
 
If you ask me, I'm still a white belt in C++.
I know basic object-oriented programming, but I need to come up to speed, for example with what you mean when you write C++11.
 
Another way to solve the fixed-point problem

x = g(x)

is to recast it as a least-squares minimisation problem

min (x - g(x))^2

and use your favourite optimiser.
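A minimal sketch of that in Python (my choices: I reuse the square-root iteration g(x) = (x + 2/x)/2 from above, with scipy's bounded scalar minimiser as the optimiser):

from scipy.optimize import minimize_scalar

# Recast the fixed-point problem x = g(x) as min_x (x - g(x))^2.
g = lambda x: 0.5 * (x + 2.0 / x)
residual = lambda x: (x - g(x)) ** 2

# Bounds keep the search away from the singularity at x = 0.
result = minimize_scalar(residual, bounds=(0.5, 3.0), method='bounded')
print("fixed point ~", result.x)   # ~ sqrt(2) = 1.41421...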
 
Yes, for example. And 1) several roots or 2) no roots at all are possible.
 
$f(x) = \frac{1}{2} - \frac{1}{1 + M|x - 1.05|}$ has roots $x_1 = 1.05 - \frac{1}{M}$ and $x_2 = 1.05 + \frac{1}{M}$. It'll be fun to see the behavior of a 1D solver (or a bracketing routine) when $1/M$ is near or smaller than the machine epsilon.

[Attachment: pathological_function.PNG - plot of f(x) showing the two roots near x = 1.05]
 
Even for M = 1.0e+9 etc. it will be a challenge for methods that use derivatives. It is like a boundary/internal layer problem.

SHGO is a global optimiser; applied to the least-squares form of the problem, it finds all the roots.


It's very easy to use.
from scipy.optimize import shgo
import math

# Boundary-layer-like problem: minimise (1/2 - 1/(1 + M|x - 1.05|))^2,
# whose global minima are the two roots x = 1.05 +/- 1/M.
print("boundary layer problem")
M = 1000

def func(x):
    # shgo passes x as a length-1 array
    z = 1.0 / (1.0 + M * math.fabs(x[0] - 1.05))
    return (0.5 - z) ** 2

bnds = [(0.0, 2.0)]   # shgo needs finite bounds; [0, 2] brackets both roots
result = shgo(func, bounds=bnds, sampling_method='sobol')
print("global minimum  ", result.x)
print("success?  ", result.success)
print(result.fun)
print("local minima  ", result.xl)
print("values at local minima  ", result.funl)
 
“The first rule of discovery is to have brains and good luck. The second rule of discovery is to sit tight and wait till you get a bright idea.”

George Pólya
 

