- try:
- from itertools import izip
- except ImportError:
- izip = zip
- from ..libmp.backend import xrange
- from .calculus import defun
- try:
- next = next
- except NameError:
- next = lambda _: _.next()
- @defun
- def richardson(ctx, seq):
- r"""
- Given a list ``seq`` of the first `N` elements of a slowly convergent
- infinite sequence, :func:`~mpmath.richardson` computes the `N`-term
- Richardson extrapolate for the limit.
- :func:`~mpmath.richardson` returns `(v, c)` where `v` is the estimated
- limit and `c` is the magnitude of the largest weight used during the
- computation. The weight provides an estimate of the precision
- lost to cancellation. Due to cancellation effects, the sequence must
- typically be computed at a much higher precision than the target
- accuracy of the extrapolation.
- **Applicability and issues**
- The `N`-step Richardson extrapolation algorithm used by
- :func:`~mpmath.richardson` is described in [1].
- Richardson extrapolation only works for a specific type of sequence,
- namely one converging like partial sums of
- `P(1)/Q(1) + P(2)/Q(2) + \ldots` where `P` and `Q` are polynomials.
- When the sequence does not converge at such a rate,
- :func:`~mpmath.richardson` generally produces garbage.
- Richardson extrapolation has the advantage of being fast: the `N`-term
- extrapolate requires only `O(N)` arithmetic operations, and usually
- produces an estimate that is accurate to `O(N)` digits. Contrast with
- the Shanks transformation (see :func:`~mpmath.shanks`), which requires
- `O(N^2)` operations.
- :func:`~mpmath.richardson` is unable to produce an estimate for the
- approximation error. One way to estimate the error is to perform
- two extrapolations with slightly different `N` and compare the
- results.
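- For instance (a sketch; ``S`` denotes partial sums of a suitable
- series, as in the example below):
- .. code ::
- v1, _ = richardson(S[:8])
- v2, _ = richardson(S[:10])
- err_est = abs(v1 - v2)  # rough error estimate for v2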
- Richardson extrapolation does not work for oscillating sequences.
- As a simple workaround, :func:`~mpmath.richardson` detects if the last
- three elements do not differ monotonically, and in that case
- applies extrapolation only to the even-index elements.
- **Example**
- Applying Richardson extrapolation to the Leibniz series for `\pi`::
- >>> from mpmath import *
- >>> mp.dps = 30; mp.pretty = True
- >>> S = [4*sum(mpf(-1)**n/(2*n+1) for n in range(m))
- ... for m in range(1,30)]
- >>> v, c = richardson(S[:10])
- >>> v
- 3.2126984126984126984126984127
- >>> nprint([v-pi, c])
- [0.0711058, 2.0]
- >>> v, c = richardson(S[:30])
- >>> v
- 3.14159265468624052829954206226
- >>> nprint([v-pi, c])
- [1.09645e-9, 20833.3]
- **References**
- 1. [BenderOrszag]_ pp. 375-376
- """
- if len(seq) < 3:
- raise ValueError("seq should be of minimum length 3")
- if ctx.sign(seq[-1]-seq[-2]) != ctx.sign(seq[-2]-seq[-3]):
- seq = seq[::2]
- N = len(seq)//2-1
- s = ctx.zero
- # The general weight is c[k] = (N+k)**N * (-1)**(k+N) / k! / (N-k)!
- # To avoid repeated factorials, we simplify the quotient
- # of successive weights to obtain a recurrence relation
- c = (-1)**N * N**N / ctx.mpf(ctx._ifac(N))
- maxc = 1
- for k in xrange(N+1):
- s += c * seq[N+k]
- maxc = max(abs(c), maxc)
- c *= (k-N)*ctx.mpf(k+N+1)**N
- c /= ((1+k)*ctx.mpf(k+N)**N)
- return s, maxc
- @defun
- def shanks(ctx, seq, table=None, randomized=False):
- r"""
- Given a list ``seq`` of the first `N` elements of a slowly
- convergent infinite sequence `(A_k)`, :func:`~mpmath.shanks` computes the iterated
- Shanks transformation `S(A), S(S(A)), \ldots, S^{N/2}(A)`. The Shanks
- transformation often provides strong convergence acceleration,
- especially if the sequence is oscillating.
- The iterated Shanks transformation is computed using the Wynn
- epsilon algorithm (see [1]). :func:`~mpmath.shanks` returns the full
- epsilon table generated by Wynn's algorithm, which can be read
- off as follows:
- * The table is a list of lists forming a lower triangular matrix,
- where higher row and column indices correspond to more accurate
- values.
- * The columns with even index hold dummy entries (required for the
- computation) and the columns with odd index hold the actual
- extrapolates.
- * The last element in the last row is typically the most
- accurate estimate of the limit.
- * The difference between the last and the third-to-last element
- in the last row provides an estimate of the approximation error.
- * The magnitude of the second-to-last element provides an estimate
- of the numerical accuracy lost to cancellation.
- For convenience, the extrapolation is stopped at an odd index
- so that ``shanks(seq)[-1][-1]`` always gives an estimate of the
- limit.
- Optionally, an existing table can be passed to :func:`~mpmath.shanks`.
- This can be used to efficiently extend a previous computation after
- new elements have been appended to the sequence. The table will
- then be updated in-place.
- **The Shanks transformation**
- The Shanks transformation is defined as follows (see [2]): given
- the input sequence `(A_0, A_1, \ldots)`, the transformed sequence is
- given by
- .. math ::
- S(A_k) = \frac{A_{k+1}A_{k-1}-A_k^2}{A_{k+1}+A_{k-1}-2 A_k}
- The Shanks transformation gives the exact limit `A_{\infty}` in a
- single step if `A_k = A + a q^k`. Note in particular that it
- extrapolates the exact sum of a geometric series in a single step.
- Applying the Shanks transformation once often improves convergence
- substantially for an arbitrary sequence, but the optimal effect is
- obtained by applying it iteratively:
- `S(S(A_k)), S(S(S(A_k))), \ldots`.
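- As a sketch of the geometric case, applying the formula above directly
- to the partial sums `1, 3/2, 7/4` of `\sum_k 2^{-k}` recovers the
- exact limit 2 in a single step:
- .. code ::
- A = [1.0, 1.5, 1.75]
- S = (A[2]*A[0] - A[1]**2) / (A[2] + A[0] - 2*A[1])
- # S == 2.0, the exact sum of the geometric series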
- Wynn's epsilon algorithm provides an efficient way to generate
- the table of iterated Shanks transformations. It reduces the
- computation of each element to essentially a single division, at
- the cost of requiring dummy elements in the table. See [1] for
- details.
- **Precision issues**
- Due to cancellation effects, the sequence must typically be
- computed at a much higher precision than the target accuracy
- of the extrapolation.
- If the Shanks transformation converges to the exact limit (such
- as if the sequence is a geometric series), then a division by
- zero occurs. By default, :func:`~mpmath.shanks` handles this case by
- terminating the iteration and returning the table it has
- generated so far. With *randomized=True*, it will instead
- replace the zero by a pseudorandom number close to zero.
- (TODO: find a better solution to this problem.)
- **Examples**
- We illustrate by applying Shanks transformation to the Leibniz
- series for `\pi`::
- >>> from mpmath import *
- >>> mp.dps = 50
- >>> S = [4*sum(mpf(-1)**n/(2*n+1) for n in range(m))
- ... for m in range(1,30)]
- >>>
- >>> T = shanks(S[:7])
- >>> for row in T:
- ... nprint(row)
- ...
- [-0.75]
- [1.25, 3.16667]
- [-1.75, 3.13333, -28.75]
- [2.25, 3.14524, 82.25, 3.14234]
- [-2.75, 3.13968, -177.75, 3.14139, -969.937]
- [3.25, 3.14271, 327.25, 3.14166, 3515.06, 3.14161]
- The extrapolated accuracy is about 4 digits, and about 4 digits
- may have been lost due to cancellation::
- >>> L = T[-1]
- >>> nprint([abs(L[-1] - pi), abs(L[-1] - L[-3]), abs(L[-2])])
- [2.22532e-5, 4.78309e-5, 3515.06]
- Now we extend the computation::
- >>> T = shanks(S[:25], T)
- >>> L = T[-1]
- >>> nprint([abs(L[-1] - pi), abs(L[-1] - L[-3]), abs(L[-2])])
- [3.75527e-19, 1.48478e-19, 2.96014e+17]
- The value for pi is now accurate to 18 digits. About 18 digits may
- also have been lost to cancellation.
- Here is an example with a geometric series, where the convergence
- is immediate (the sum is exactly 1)::
- >>> mp.dps = 15
- >>> for row in shanks([0.5, 0.75, 0.875, 0.9375, 0.96875]):
- ... nprint(row)
- [4.0]
- [8.0, 1.0]
- **References**
- 1. [GravesMorris]_
- 2. [BenderOrszag]_ pp. 368-375
- """
- if len(seq) < 2:
- raise ValueError("seq should be of minimum length 2")
- if table:
- START = len(table)
- else:
- START = 0
- table = []
- STOP = len(seq) - 1
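- # make STOP even so the last row ends at an odd column index,
- # i.e. table[-1][-1] is an actual extrapolate rather than a dummy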
- if STOP & 1:
- STOP -= 1
- one = ctx.one
- eps = +ctx.eps
- if randomized:
- from random import Random
- rnd = Random()
- rnd.seed(START)
- for i in xrange(START, STOP):
- row = []
- for j in xrange(i+1):
- if j == 0:
- a, b = 0, seq[i+1]-seq[i]
- else:
- if j == 1:
- a = seq[i]
- else:
- a = table[i-1][j-2]
- b = row[j-1] - table[i-1][j-1]
- if not b:
- if randomized:
- b = (1 + rnd.getrandbits(10))*eps
- elif i & 1:
- return table[:-1]
- else:
- return table
- row.append(a + one/b)
- table.append(row)
- return table
- class levin_class:
- # levin: Copyright 2013 Timo Hartmann (thartmann15 at gmail.com)
- r"""
- This interface implements Levin's (nonlinear) sequence transformation for
- convergence acceleration and summation of divergent series. It performs
- better than the Shanks/Wynn-epsilon algorithm for logarithmic convergent
- or alternating divergent series.
- Let *A* be the series we want to sum:
- .. math ::
- A = \sum_{k=0}^{\infty} a_k
- Attention: all `a_k` must be non-zero!
- Let `s_n` be the partial sums of this series:
- .. math ::
- s_n = \sum_{k=0}^n a_k.
- **Methods**
- Calling ``levin`` returns an object with the following methods.
- ``update(...)`` works with the list of individual terms `a_k` of *A*, and
- ``update_psum(...)`` works with the list of partial sums `s_k` of *A*:
- .. code ::
- v, e = ...update([a_0, a_1,..., a_k])
- v, e = ...update_psum([s_0, s_1,..., s_k])
- ``step(...)`` works with the individual terms `a_k` and ``step_psum(...)``
- works with the partial sums `s_k`:
- .. code ::
- v, e = ...step(a_k)
- v, e = ...step_psum(s_k)
- *v* is the current estimate for *A*, and *e* is an error estimate which is
- simply the difference between the current estimate and the last estimate.
- One should not mix ``update``, ``update_psum``, ``step`` and ``step_psum``.
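- A minimal usage sketch of the stepwise interface, with a hypothetical
- term function ``a(k)`` standing in for the series terms:
- .. code ::
- L = mp.levin(method = "levin", variant = "u")
- for k in range(10):
- v, e = L.step(a(k))  # v: current estimate, e: error estimate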
- **A word of caution**
- One can only hope for good results (i.e. convergence acceleration or
- resummation) if the `s_n` have some well defined asymptotic behavior for
- large `n` and are not erratic or random. Furthermore one usually needs very
- high working precision because of the numerical cancellation. If the working
- precision is insufficient, levin may silently produce numerical garbage.
- Moreover, even if the Levin transformation converges, in the general case
- there is no proof that the result is mathematically sound. Only for very
- special classes of problems (for example Stieltjes-type integrals) can one
- prove that the Levin transformation converges to the expected result.
- Finally, the Levin transform is quite expensive (i.e. slow) in comparison
- to Shanks/Wynn-epsilon, Richardson & co.
- In summary one can say that the Levin-transformation is powerful but
- unreliable and that it may need a copious amount of working precision.
- The Levin transform has several variants differing in the choice of weights.
- Some variants are better suited to particular flavours of convergence
- behaviour of *A* than others:
- .. code ::
- convergence behaviour levin-u levin-t levin-v shanks/wynn-epsilon
- logarithmic + - + -
- linear + + + +
- alternating divergent + + + +
- "+" means the variant is suitable,"-" means the variant is not suitable;
- for comparison the Shanks/Wynn-epsilon transform is listed, too.
- The variant is controlled through the variant keyword (i.e. ``variant="u"``,
- ``variant="t"`` or ``variant="v"``). Overall "u" is probably the best choice.
- Finally it is possible to use the Sidi-S transform instead of the Levin transform
- by using the keyword ``method='sidi'``. The Sidi-S transform works better than the
- Levin transformation for some divergent series (see the examples).
- Parameters:
- .. code ::
- method "levin" or "sidi" chooses either the Levin or the Sidi-S transformation
- variant "u","t" or "v" chooses the weight variant.
- The Levin transform is also accessible through the nsum interface.
- ``method="l"`` or ``method="levin"`` select the normal Levin transform while
- ``method="sidi"``
- selects the Sidi-S transform. The variant is in both cases selected through the
- levin_variant keyword. The stepsize in :func:`~mpmath.nsum` must not be chosen too large, otherwise
- it will miss the point where the Levin transform converges resulting in numerical
- overflow/garbage. For highly divergent series a copious amount of working precision
- must be chosen.
- **Examples**
- First we sum the zeta function::
- >>> from mpmath import mp
- >>> mp.prec = 53
- >>> eps = mp.mpf(mp.eps)
- >>> with mp.extraprec(2 * mp.prec): # levin needs a high working precision
- ... L = mp.levin(method = "levin", variant = "u")
- ... S, s, n = [], 0, 1
- ... while 1:
- ... s += mp.one / (n * n)
- ... n += 1
- ... S.append(s)
- ... v, e = L.update_psum(S)
- ... if e < eps:
- ... break
- ... if n > 1000: raise RuntimeError("iteration limit exceeded")
- >>> print(mp.chop(v - mp.pi ** 2 / 6))
- 0.0
- >>> w = mp.nsum(lambda n: 1 / (n*n), [1, mp.inf], method = "levin", levin_variant = "u")
- >>> print(mp.chop(v - w))
- 0.0
- Now we sum the zeta function outside its range of convergence
- (attention: This does not work at the negative integers!)::
- >>> eps = mp.mpf(mp.eps)
- >>> with mp.extraprec(2 * mp.prec): # levin needs a high working precision
- ... L = mp.levin(method = "levin", variant = "v")
- ... A, n = [], 1
- ... while 1:
- ... s = mp.mpf(n) ** (2 + 3j)
- ... n += 1
- ... A.append(s)
- ... v, e = L.update(A)
- ... if e < eps:
- ... break
- ... if n > 1000: raise RuntimeError("iteration limit exceeded")
- >>> print(mp.chop(v - mp.zeta(-2-3j)))
- 0.0
- >>> w = mp.nsum(lambda n: n ** (2 + 3j), [1, mp.inf], method = "levin", levin_variant = "v")
- >>> print(mp.chop(v - w))
- 0.0
- Now we sum the divergent asymptotic expansion of an integral related to the
- exponential integral (see also [2] p.373). The Sidi-S transform works best here::
- >>> z = mp.mpf(10)
- >>> exact = mp.quad(lambda x: mp.exp(-x)/(1+x/z),[0,mp.inf])
- >>> # exact = z * mp.exp(z) * mp.expint(1,z) # this is the symbolic expression for the integral
- >>> eps = mp.mpf(mp.eps)
- >>> with mp.extraprec(2 * mp.prec): # high working precisions are mandatory for divergent resummation
- ... L = mp.levin(method = "sidi", variant = "t")
- ... n = 0
- ... while 1:
- ... s = (-1)**n * mp.fac(n) * z ** (-n)
- ... v, e = L.step(s)
- ... n += 1
- ... if e < eps:
- ... break
- ... if n > 1000: raise RuntimeError("iteration limit exceeded")
- >>> print(mp.chop(v - exact))
- 0.0
- >>> w = mp.nsum(lambda n: (-1) ** n * mp.fac(n) * z ** (-n), [0, mp.inf], method = "sidi", levin_variant = "t")
- >>> print(mp.chop(v - w))
- 0.0
- Another highly divergent integral is also summable::
- >>> z = mp.mpf(2)
- >>> eps = mp.mpf(mp.eps)
- >>> exact = mp.quad(lambda x: mp.exp( -x * x / 2 - z * x ** 4), [0,mp.inf]) * 2 / mp.sqrt(2 * mp.pi)
- >>> # exact = mp.exp(mp.one / (32 * z)) * mp.besselk(mp.one / 4, mp.one / (32 * z)) / (4 * mp.sqrt(z * mp.pi)) # this is the symbolic expression for the integral
- >>> with mp.extraprec(7 * mp.prec): # we need a copious amount of precision to sum this highly divergent series
- ... L = mp.levin(method = "levin", variant = "t")
- ... n, s = 0, 0
- ... while 1:
- ... s += (-z)**n * mp.fac(4 * n) / (mp.fac(n) * mp.fac(2 * n) * (4 ** n))
- ... n += 1
- ... v, e = L.step_psum(s)
- ... if e < eps:
- ... break
- ... if n > 1000: raise RuntimeError("iteration limit exceeded")
- >>> print(mp.chop(v - exact))
- 0.0
- >>> w = mp.nsum(lambda n: (-z)**n * mp.fac(4 * n) / (mp.fac(n) * mp.fac(2 * n) * (4 ** n)),
- ... [0, mp.inf], method = "levin", levin_variant = "t", workprec = 8*mp.prec, steps = [2] + [1 for x in xrange(1000)])
- >>> print(mp.chop(v - w))
- 0.0
- These examples run with 15-20 decimal digits precision. For higher precision the
- working precision must be raised.
- **Examples for nsum**
- Here we calculate Euler's constant as the constant term in the Laurent
- expansion of `\zeta(s)` at `s=1`. This sum converges extremely slowly because of
- the logarithmic convergence behaviour of the Dirichlet series for zeta::
- >>> mp.dps = 30
- >>> z = mp.mpf(10) ** (-10)
- >>> a = mp.nsum(lambda n: n**(-(1+z)), [1, mp.inf], method = "l") - 1 / z
- >>> print(mp.chop(a - mp.euler, tol = 1e-10))
- 0.0
- The Sidi-S transform performs excellently for the alternating series of `\log(2)`::
- >>> a = mp.nsum(lambda n: (-1)**(n-1) / n, [1, mp.inf], method = "sidi")
- >>> print(mp.chop(a - mp.log(2)))
- 0.0
- Hypergeometric series can also be summed outside their range of convergence.
- The stepsize in :func:`~mpmath.nsum` must not be chosen too large, otherwise it will miss the
- point where the Levin transform converges, resulting in numerical overflow/garbage::
- >>> z = 2 + 1j
- >>> exact = mp.hyp2f1(2 / mp.mpf(3), 4 / mp.mpf(3), 1 / mp.mpf(3), z)
- >>> f = lambda n: mp.rf(2 / mp.mpf(3), n) * mp.rf(4 / mp.mpf(3), n) * z**n / (mp.rf(1 / mp.mpf(3), n) * mp.fac(n))
- >>> v = mp.nsum(f, [0, mp.inf], method = "levin", steps = [10 for x in xrange(1000)])
- >>> print(mp.chop(exact-v))
- 0.0
- References:
- [1] E.J. Weniger - "Nonlinear Sequence Transformations for the Acceleration of
- Convergence and the Summation of Divergent Series" arXiv:math/0306302
- [2] A. Sidi - "Practical Extrapolation Methods"
- [3] H.H.H. Homeier - "Scalar Levin-Type Sequence Transformations" arXiv:math/0005209
- """
- def __init__(self, method = "levin", variant = "u"):
- self.variant = variant
- self.n = 0
- self.a0 = 0
- self.theta = 1
- self.A = []
- self.B = []
- self.last = 0
- self.last_s = False
- if method == "levin":
- self.factor = self.factor_levin
- elif method == "sidi":
- self.factor = self.factor_sidi
- else:
- raise ValueError("levin: unknown method \"%s\"" % method)
- def factor_levin(self, i):
- # original levin
- # [1] p.50,e.7.5-7 (with n-j replaced by i)
- return (self.theta + i) * (self.theta + self.n - 1) ** (self.n - i - 2) / self.ctx.mpf(self.theta + self.n) ** (self.n - i - 1)
- def factor_sidi(self, i):
- # Sidi analogue of Levin (factorial series)
- # [1] p.59,e.8.3-16 (with n-j replaced by i)
- return (self.theta + self.n - 1) * (self.theta + self.n - 2) / self.ctx.mpf((self.theta + 2 * self.n - i - 2) * (self.theta + 2 * self.n - i - 3))
- def run(self, s, a0, a1 = 0):
- if self.variant=="t":
- # levin t
- w=a0
- elif self.variant=="u":
- # levin u
- w=a0*(self.theta+self.n)
- elif self.variant=="v":
- # levin v
- w=a0*a1/(a0-a1)
- else:
- assert False, "unknown variant"
- if w==0:
- raise ValueError("levin: zero weight")
- self.A.append(s/w)
- self.B.append(1/w)
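- # update the numerator (A) and denominator (B) tables in place;
- # self.factor supplies the weight ratios for the chosen method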
- for i in range(self.n-1,-1,-1):
- if i==self.n-1:
- f=1
- else:
- f=self.factor(i)
- self.A[i]=self.A[i+1]-f*self.A[i]
- self.B[i]=self.B[i+1]-f*self.B[i]
- self.n+=1
- ###########################################################################
- def update_psum(self,S):
- """
- This routine applies the convergence acceleration to the list of partial sums.
- A = sum(a_k, k = 0..infinity)
- s_n = sum(a_k, k = 0..n)
- v, e = ...update_psum([s_0, s_1,..., s_k])
- output:
- v current estimate of the series A
- e an error estimate which is simply the difference between the current
- estimate and the last estimate.
- """
- if self.variant!="v":
- if self.n==0:
- self.run(S[0],S[0])
- while self.n<len(S):
- self.run(S[self.n],S[self.n]-S[self.n-1])
- else:
- if len(S)==1:
- self.last=0
- return S[0],abs(S[0])
- if self.n==0:
- self.a1=S[1]-S[0]
- self.run(S[0],S[0],self.a1)
- while self.n<len(S)-1:
- na1=S[self.n+1]-S[self.n]
- self.run(S[self.n],self.a1,na1)
- self.a1=na1
- value=self.A[0]/self.B[0]
- err=abs(value-self.last)
- self.last=value
- return value,err
- def update(self,X):
- """
- This routine applies the convergence acceleration to the list of individual terms.
- A = sum(a_k, k = 0..infinity)
- v, e = ...update([a_0, a_1,..., a_k])
- output:
- v current estimate of the series A
- e an error estimate which is simply the difference between the current
- estimate and the last estimate.
- """
- if self.variant!="v":
- if self.n==0:
- self.s=X[0]
- self.run(self.s,X[0])
- while self.n<len(X):
- self.s+=X[self.n]
- self.run(self.s,X[self.n])
- else:
- if len(X)==1:
- self.last=0
- return X[0],abs(X[0])
- if self.n==0:
- self.s=X[0]
- self.run(self.s,X[0],X[1])
- while self.n<len(X)-1:
- self.s+=X[self.n]
- self.run(self.s,X[self.n],X[self.n+1])
- value=self.A[0]/self.B[0]
- err=abs(value-self.last)
- self.last=value
- return value,err
- ###########################################################################
- def step_psum(self,s):
- """
- This routine applies the convergence acceleration to the partial sums.
- A = sum(a_k, k = 0..infinity)
- s_n = sum(a_k, k = 0..n)
- v, e = ...step_psum(s_k)
- output:
- v current estimate of the series A
- e an error estimate which is simply the difference between the current
- estimate and the last estimate.
- """
- if self.variant!="v":
- if self.n==0:
- self.last_s=s
- self.run(s,s)
- else:
- self.run(s,s-self.last_s)
- self.last_s=s
- else:
- if isinstance(self.last_s,bool):
- self.last_s=s
- self.last_w=s
- self.last=0
- return s,abs(s)
- na1=s-self.last_s
- self.run(self.last_s,self.last_w,na1)
- self.last_w=na1
- self.last_s=s
- value=self.A[0]/self.B[0]
- err=abs(value-self.last)
- self.last=value
- return value,err
- def step(self,x):
- """
- This routine applies the convergence acceleration to the individual terms.
- A = sum(a_k, k = 0..infinity)
- v, e = ...step(a_k)
- output:
- v current estimate of the series A
- e an error estimate which is simply the difference between the current
- estimate and the last estimate.
- """
- if self.variant!="v":
- if self.n==0:
- self.s=x
- self.run(self.s,x)
- else:
- self.s+=x
- self.run(self.s,x)
- else:
- if isinstance(self.last_s,bool):
- self.last_s=x
- self.s=0
- self.last=0
- return x,abs(x)
- self.s+=self.last_s
- self.run(self.s,self.last_s,x)
- self.last_s=x
- value=self.A[0]/self.B[0]
- err=abs(value-self.last)
- self.last=value
- return value,err
- def levin(ctx, method = "levin", variant = "u"):
- L = levin_class(method = method, variant = variant)
- L.ctx = ctx
- return L
- levin.__doc__ = levin_class.__doc__
- defun(levin)
- class cohen_alt_class:
- # cohen_alt: Copyright 2013 Timo Hartmann (thartmann15 at gmail.com)
- r"""
- This interface implements the convergence acceleration of alternating series
- as described in H. Cohen, F.R. Villegas, D. Zagier - "Convergence Acceleration
- of Alternating Series". This series transformation works well only if the
- individual terms of the series have an alternating sign. It belongs to the
- class of linear series transformations (in contrast to the Shanks/Wynn-epsilon
- or Levin transform). This series transformation is also able to sum some types
- of divergent series. See the paper for the conditions under which this
- resummation is mathematically sound.
- Let *A* be the series we want to sum:
- .. math ::
- A = \sum_{k=0}^{\infty} a_k
- Let `s_n` be the partial sums of this series:
- .. math ::
- s_n = \sum_{k=0}^n a_k.
- **Interface**
- Calling ``cohen_alt`` returns an object with the following methods.
- ``update(...)`` works with the list of individual terms `a_k` and
- ``update_psum(...)`` works with the list of partial sums `s_k`:
- .. code ::
- v, e = ...update([a_0, a_1,..., a_k])
- v, e = ...update_psum([s_0, s_1,..., s_k])
- *v* is the current estimate for *A*, and *e* is an error estimate which is
- simply the difference between the current estimate and the last estimate.
- **Examples**
- Here we compute the alternating zeta function using ``update_psum``::
- >>> from mpmath import mp
- >>> AC = mp.cohen_alt()
- >>> S, s, n = [], 0, 1
- >>> while 1:
- ... s += -((-1) ** n) * mp.one / (n * n)
- ... n += 1
- ... S.append(s)
- ... v, e = AC.update_psum(S)
- ... if e < mp.eps:
- ... break
- ... if n > 1000: raise RuntimeError("iteration limit exceeded")
- >>> print(mp.chop(v - mp.pi ** 2 / 12))
- 0.0
- Here we compute the product `\prod_{n=1}^{\infty} \Gamma(1+1/(2n-1)) / \Gamma(1+1/(2n))`::
- >>> A = []
- >>> AC = mp.cohen_alt()
- >>> n = 1
- >>> while 1:
- ... A.append( mp.loggamma(1 + mp.one / (2 * n - 1)))
- ... A.append(-mp.loggamma(1 + mp.one / (2 * n)))
- ... n += 1
- ... v, e = AC.update(A)
- ... if e < mp.eps:
- ... break
- ... if n > 1000: raise RuntimeError("iteration limit exceeded")
- >>> v = mp.exp(v)
- >>> print(mp.chop(v - 1.06215090557106, tol = 1e-12))
- 0.0
- ``cohen_alt`` is also accessible through the :func:`~mpmath.nsum` interface::
- >>> v = mp.nsum(lambda n: (-1)**(n-1) / n, [1, mp.inf], method = "a")
- >>> print(mp.chop(v - mp.log(2)))
- 0.0
- >>> v = mp.nsum(lambda n: (-1)**n / (2 * n + 1), [0, mp.inf], method = "a")
- >>> print(mp.chop(v - mp.pi / 4))
- 0.0
- >>> v = mp.nsum(lambda n: (-1)**n * mp.log(n) * n, [1, mp.inf], method = "a")
- >>> print(mp.chop(v - mp.diff(lambda s: mp.altzeta(s), -1)))
- 0.0
- """
- def __init__(self):
- self.last=0
- def update(self, A):
- """
- This routine applies the convergence acceleration to the list of individual terms.
- A = sum(a_k, k = 0..infinity)
- v, e = ...update([a_0, a_1,..., a_k])
- output:
- v current estimate of the series A
- e an error estimate which is simply the difference between the current
- estimate and the last estimate.
- """
- n = len(A)
- d = (3 + self.ctx.sqrt(8)) ** n
- d = (d + 1 / d) / 2
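- # d now equals ((3+sqrt(8))**n + (3-sqrt(8))**n) / 2, the
- # normalizing denominator of the Cohen-Villegas-Zagier scheme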
- b = -self.ctx.one
- c = -d
- s = 0
- for k in xrange(n):
- c = b - c
- if k % 2 == 0:
- s = s + c * A[k]
- else:
- s = s - c * A[k]
- b = 2 * (k + n) * (k - n) * b / ((2 * k + 1) * (k + self.ctx.one))
- value = s / d
- err = abs(value - self.last)
- self.last = value
- return value, err
- def update_psum(self, S):
- """
- This routine applies the convergence acceleration to the list of partial sums.
- A = sum(a_k, k = 0..infinity)
- s_n = sum(a_k ,k = 0..n)
- v, e = ...update_psum([s_0, s_1,..., s_k])
- output:
- v current estimate of the series A
- e an error estimate which is simply the difference between the current
- estimate and the last estimate.
- """
- n = len(S)
- d = (3 + self.ctx.sqrt(8)) ** n
- d = (d + 1 / d) / 2
- b = self.ctx.one
- s = 0
- for k in xrange(n):
- b = 2 * (n + k) * (n - k) * b / ((2 * k + 1) * (k + self.ctx.one))
- s += b * S[k]
- value = s / d
- err = abs(value - self.last)
- self.last = value
- return value, err
- def cohen_alt(ctx):
- L = cohen_alt_class()
- L.ctx = ctx
- return L
- cohen_alt.__doc__ = cohen_alt_class.__doc__
- defun(cohen_alt)
- @defun
- def sumap(ctx, f, interval, integral=None, error=False):
- r"""
- Evaluates an infinite series of an analytic summand *f* using the
- Abel-Plana formula
- .. math ::
- \sum_{k=0}^{\infty} f(k) = \int_0^{\infty} f(t) dt + \frac{1}{2} f(0) +
- i \int_0^{\infty} \frac{f(it)-f(-it)}{e^{2\pi t}-1} dt.
- Unlike the Euler-Maclaurin formula (see :func:`~mpmath.sumem`),
- the Abel-Plana formula does not require derivatives. However,
- it only works when `|f(it)-f(-it)|` does not
- increase too rapidly with `t`.
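- The identity can be checked directly with :func:`~mpmath.quad`; a
- sketch for the real summand `f(t) = 1/(t+1)^2`, whose sum is `\zeta(2)`:
- .. code ::
- f = lambda t: 1/(t+1)**2
- rhs = quad(f, [0, inf]) + f(0)/2 + \
- quad(lambda t: -2*im(f(mpc(0,t)))/expm1(2*pi*t), [0, inf])
- # rhs should agree with nsum(f, [0, inf]) = zeta(2)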
- **Examples**
- The Abel-Plana formula is particularly useful when the summand
- decreases like a power of `k`; for example when the sum is a pure
- zeta function::
- >>> from mpmath import *
- >>> mp.dps = 25; mp.pretty = True
- >>> sumap(lambda k: 1/k**2.5, [1,inf])
- 1.34148725725091717975677
- >>> zeta(2.5)
- 1.34148725725091717975677
- >>> sumap(lambda k: 1/(k+1j)**(2.5+2.5j), [1,inf])
- (-3.385361068546473342286084 - 0.7432082105196321803869551j)
- >>> zeta(2.5+2.5j, 1+1j)
- (-3.385361068546473342286084 - 0.7432082105196321803869551j)
- If the series is alternating, numerical quadrature along the real
- line is likely to give poor results, so it is better to evaluate
- the first term (the integral) symbolically whenever possible:
- >>> n=3; z=-0.75
- >>> I = expint(n,-log(z))
- >>> chop(sumap(lambda k: z**k / k**n, [1,inf], integral=I))
- -0.6917036036904594510141448
- >>> polylog(n,z)
- -0.6917036036904594510141448
- """
- prec = ctx.prec
- try:
- ctx.prec += 10
- a, b = interval
- if b != ctx.inf:
- raise ValueError("b should be equal to ctx.inf")
- g = lambda x: f(x+a)
- if integral is None:
- i1, err1 = ctx.quad(g, [0,ctx.inf], error=True)
- else:
- i1, err1 = integral, 0
- j = ctx.j
- p = ctx.pi * 2
- if ctx._is_real_type(i1):
- h = lambda t: -2 * ctx.im(g(j*t)) / ctx.expm1(p*t)
- else:
- h = lambda t: j*(g(j*t)-g(-j*t)) / ctx.expm1(p*t)
- i2, err2 = ctx.quad(h, [0,ctx.inf], error=True)
- err = err1+err2
- v = i1+i2+0.5*g(ctx.mpf(0))
- finally:
- ctx.prec = prec
- if error:
- return +v, err
- return +v
- @defun
- def sumem(ctx, f, interval, tol=None, reject=10, integral=None,
- adiffs=None, bdiffs=None, verbose=False, error=False,
- _fast_abort=False):
- r"""
- Uses the Euler-Maclaurin formula to compute an approximation accurate
- to within ``tol`` (which defaults to the present epsilon) of the sum
- .. math ::
- S = \sum_{k=a}^b f(k)
- where `(a,b)` are given by ``interval`` and `a` or `b` may be
- infinite. The approximation is
- .. math ::
- S \sim \int_a^b f(x) \,dx + \frac{f(a)+f(b)}{2} +
- \sum_{k=1}^{\infty} \frac{B_{2k}}{(2k)!}
- \left(f^{(2k-1)}(b)-f^{(2k-1)}(a)\right).
- The last sum in the Euler-Maclaurin formula is not generally
- convergent (a notable exception is if `f` is a polynomial, in
- which case Euler-Maclaurin actually gives an exact result).
- The summation is stopped as soon as the quotient between two
- consecutive terms falls below *reject*. That is, by default
- (*reject* = 10), the summation is continued as long as each
- term adds at least one decimal.
- Although not convergent, convergence to a given tolerance can
- often be "forced" if `b = \infty` by summing up to `a+N` and then
- applying the Euler-Maclaurin formula to the sum over the range
- `(a+N+1, \ldots, \infty)`. This procedure is implemented by
- :func:`~mpmath.nsum`.
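- A sketch of that splitting (``f``, ``a`` and ``N`` assumed defined):
- .. code ::
- head = fsum(f(k) for k in range(a, a + N))  # direct partial sum
- tail = sumem(f, [a + N, inf])               # Euler-Maclaurin tail
- S = head + tail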
- By default, numerical quadrature and differentiation are used.
- If the symbolic values of the integral and endpoint derivatives
- are known, it is more efficient to pass the value of the
- integral explicitly as ``integral`` and the derivatives
- explicitly as ``adiffs`` and ``bdiffs``. The derivatives
- should be given as iterables that yield
- `f(a), f'(a), f''(a), \ldots` (and the equivalent for `b`).
- **Examples**
- Summation of an infinite series, with automatic and symbolic
- integral and derivative values (the second should be much faster)::
- >>> from mpmath import *
- >>> mp.dps = 50; mp.pretty = True
- >>> sumem(lambda n: 1/n**2, [32, inf])
- 0.03174336652030209012658168043874142714132886413417
- >>> I = mpf(1)/32
- >>> D = ((-1)**n*fac(n+1)*32**(-2-n) for n in range(999))
- >>> sumem(lambda n: 1/n**2, [32, inf], integral=I, adiffs=D)
- 0.03174336652030209012658168043874142714132886413417
- An exact evaluation of a finite polynomial sum::
- >>> sumem(lambda n: n**5-12*n**2+3*n, [-100000, 200000])
- 10500155000624963999742499550000.0
- >>> print(sum(n**5-12*n**2+3*n for n in range(-100000, 200001)))
- 10500155000624963999742499550000
- """
- tol = tol or +ctx.eps
- interval = ctx._as_points(interval)
- a = ctx.convert(interval[0])
- b = ctx.convert(interval[-1])
- err = ctx.zero
- prev = 0
- M = 10000
- if a == ctx.ninf: adiffs = (0 for n in xrange(M))
- else: adiffs = adiffs or ctx.diffs(f, a)
- if b == ctx.inf: bdiffs = (0 for n in xrange(M))
- else: bdiffs = bdiffs or ctx.diffs(f, b)
- orig = ctx.prec
- try:
- ctx.prec += 10
- s = ctx.zero
- for k, (da, db) in enumerate(izip(adiffs, bdiffs)):
- if k & 1:
- term = (db-da) * ctx.bernoulli(k+1) / ctx.factorial(k+1)
- mag = abs(term)
- if verbose:
- print("term", k, "magnitude =", ctx.nstr(mag))
- if k > 4 and mag < tol:
- s += term
- break
- elif k > 4 and abs(prev) / mag < reject:
- err += mag
- if _fast_abort:
- return [s, (s, err)][error]
- if verbose:
- print("Failed to converge")
- break
- else:
- s += term
- prev = term
- # Endpoint correction
- if a != ctx.ninf: s += f(a)/2
- if b != ctx.inf: s += f(b)/2
- # Tail integral
- if verbose:
- print("Integrating f(x) from x = %s to %s" % (ctx.nstr(a), ctx.nstr(b)))
- if integral:
- s += integral
- else:
- integral, ierr = ctx.quad(f, interval, error=True)
- if verbose:
- print("Integration error:", ierr)
- s += integral
- err += ierr
- finally:
- ctx.prec = orig
- if error:
- return s, err
- else:
- return s
- @defun
- def adaptive_extrapolation(ctx, update, emfun, kwargs):
- option = kwargs.get
- if ctx._fixed_precision:
- tol = option('tol', ctx.eps*2**10)
- else:
- tol = option('tol', ctx.eps/2**10)
- verbose = option('verbose', False)
- maxterms = option('maxterms', ctx.dps*10)
- method = set(option('method', 'r+s').split('+'))
- skip = option('skip', 0)
- steps = iter(option('steps', xrange(10, 10**9, 10)))
- strict = option('strict')
- summer=[]
- if 'd' in method or 'direct' in method:
- TRY_RICHARDSON = TRY_SHANKS = TRY_EULER_MACLAURIN = False
- else:
- TRY_RICHARDSON = ('r' in method) or ('richardson' in method)
- TRY_SHANKS = ('s' in method) or ('shanks' in method)
- TRY_EULER_MACLAURIN = ('e' in method) or \
- ('euler-maclaurin' in method)
- def init_levin(m):
- variant = kwargs.get("levin_variant", "u")
- if isinstance(variant, str):
- if variant == "all":
- variant = ["u", "v", "t"]
- else:
- variant = [variant]
- for s in variant:
- L = levin_class(method = m, variant = s)
- L.ctx = ctx
- L.name = m + "(" + s + ")"
- summer.append(L)
- if ('l' in method) or ('levin' in method):
- init_levin("levin")
- if ('sidi' in method):
- init_levin("sidi")
- if ('a' in method) or ('alternating' in method):
- L = cohen_alt_class()
- L.ctx = ctx
- L.name = "alternating"
- summer.append(L)
- last_richardson_value = 0
- shanks_table = []
- index = 0
- step = 10
- partial = []
- best = ctx.zero
- orig = ctx.prec
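- # extrapolation amplifies cancellation, so work at a precision well
- # above the target unless the caller supplies an explicit workprec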
- try:
- if 'workprec' in kwargs:
- ctx.prec = kwargs['workprec']
- elif TRY_RICHARDSON or TRY_SHANKS or len(summer)!=0:
- ctx.prec = (ctx.prec+10) * 4
- else:
- ctx.prec += 30
- while 1:
- if index >= maxterms:
- break
- # Get new batch of terms
- try:
- step = next(steps)
- except StopIteration:
- pass
- if verbose:
- print("-"*70)
- print("Adding terms #%i-#%i" % (index, index+step))
- update(partial, xrange(index, index+step))
- index += step
- # Check direct error
- best = partial[-1]
- error = abs(best - partial[-2])
- if verbose:
- print("Direct error: %s" % ctx.nstr(error))
- if error <= tol:
- return best
- # Check each extrapolation method
- if TRY_RICHARDSON:
- value, maxc = ctx.richardson(partial)
- richardson_error = abs(value - last_richardson_value)
- if verbose:
- print("Richardson error: %s" % ctx.nstr(richardson_error))
- # Convergence
- if richardson_error <= tol:
- return value
- last_richardson_value = value
- # Unreliable due to cancellation
- if ctx.eps*maxc > tol:
- if verbose:
- print("Ran out of precision for Richardson")
- TRY_RICHARDSON = False
- if richardson_error < error:
- error = richardson_error
- best = value
- if TRY_SHANKS:
- shanks_table = ctx.shanks(partial, shanks_table, randomized=True)
- row = shanks_table[-1]
- if len(row) == 2:
- est1 = row[-1]
- shanks_error = 0
- else:
- est1, maxc, est2 = row[-1], abs(row[-2]), row[-3]
- shanks_error = abs(est1-est2)
- if verbose:
- print("Shanks error: %s" % ctx.nstr(shanks_error))
- if shanks_error <= tol:
- return est1
- if ctx.eps*maxc > tol:
- if verbose:
- print("Ran out of precision for Shanks")
- TRY_SHANKS = False
- if shanks_error < error:
- error = shanks_error
- best = est1
- for L in summer:
- est, lerror = L.update_psum(partial)
- if verbose:
- print("%s error: %s" % (L.name, ctx.nstr(lerror)))
- if lerror <= tol:
- return est
- if lerror < error:
- error = lerror
- best = est
- if TRY_EULER_MACLAURIN:
- if ctx.mpc(ctx.sign(partial[-1]) / ctx.sign(partial[-2])).ae(-1):
- if verbose:
- print ("NOT using Euler-Maclaurin: the series appears"
- " to be alternating, so numerical\n quadrature"
- " will most likely fail")
- TRY_EULER_MACLAURIN = False
- else:
- value, em_error = emfun(index, tol)
- value += partial[-1]
- if verbose:
- print("Euler-Maclaurin error: %s" % ctx.nstr(em_error))
- if em_error <= tol:
- return value
- if em_error < error:
- best = value
- finally:
- ctx.prec = orig
- if strict:
- raise ctx.NoConvergence
- if verbose:
- print("Warning: failed to converge to target accuracy")
- return best
- @defun
- def nsum(ctx, f, *intervals, **options):
- r"""
- Computes the sum
- .. math :: S = \sum_{k=a}^b f(k)
- where `(a, b)` = *interval*, and where `a = -\infty` and/or
- `b = \infty` are allowed, or more generally
- .. math :: S = \sum_{k_1=a_1}^{b_1} \cdots
- \sum_{k_n=a_n}^{b_n} f(k_1,\ldots,k_n)
- if multiple intervals are given.
- Two examples of infinite series that can be summed by :func:`~mpmath.nsum`,
- where the first converges rapidly and the second converges slowly,
- are::
- >>> from mpmath import *
- >>> mp.dps = 15; mp.pretty = True
- >>> nsum(lambda n: 1/fac(n), [0, inf])
- 2.71828182845905
- >>> nsum(lambda n: 1/n**2, [1, inf])
- 1.64493406684823
- When appropriate, :func:`~mpmath.nsum` applies convergence acceleration to
- accurately estimate the sums of slowly convergent series. If the series is
- finite, :func:`~mpmath.nsum` currently does not attempt to perform any
- extrapolation, and simply calls :func:`~mpmath.fsum`.
- Multidimensional infinite series are reduced to a single-dimensional
- series over expanding hypercubes; if both infinite and finite dimensions
- are present, the finite ranges are moved innermost. For more advanced
- control over the summation order, use nested calls to :func:`~mpmath.nsum`,
- or manually rewrite the sum as a single-dimensional series.
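- A sketch of such a nested rewrite, for a hypothetical summand ``f``:
- .. code ::
- inner = lambda m: nsum(lambda n: f(m, n), [1, inf])
- total = nsum(inner, [1, inf])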
- **Options**
- *tol*
- Desired maximum final error. Defaults roughly to the
- epsilon of the working precision.
- *method*
- Which summation algorithm to use (described below).
- Default: ``'richardson+shanks'``.
- *maxterms*
- Cancel after at most this many terms. Default: 10*dps.
- *steps*
- An iterable giving the number of terms to add between
- each extrapolation attempt. The default sequence is
- [10, 20, 30, 40, ...]. For example, if you know that
- approximately 100 terms will be required, efficiency might be
- improved by setting this to [100, 10]. Then the first
- extrapolation will be performed after 100 terms, the second
- after 110, etc.
- *verbose*
- Print details about progress.
- *ignore*
- If enabled, any term that raises ``ArithmeticError``
- or ``ValueError`` (e.g. through division by zero) is replaced
- by a zero. This is convenient for lattice sums with
- a singular term near the origin.
- **Methods**
- Unfortunately, an algorithm that can efficiently sum any infinite
- series does not exist. :func:`~mpmath.nsum` implements several different
- algorithms that each work well in different cases. The *method*
- keyword argument selects a method.
- The default method is ``'r+s'``, i.e. both Richardson extrapolation
- and Shanks transformation is attempted. A slower method that
- handles more cases is ``'r+s+e'``. For very high precision
- summation, or if the summation needs to be fast (for example if
- multiple sums need to be evaluated), it is a good idea to
- investigate which method works best and use only that one.
- ``'richardson'`` / ``'r'``:
- Uses Richardson extrapolation. Provides useful extrapolation
- when `f(k) \sim P(k)/Q(k)` or when `f(k) \sim (-1)^k P(k)/Q(k)`
- for polynomials `P` and `Q`. See :func:`~mpmath.richardson` for
- additional information.
- ``'shanks'`` / ``'s'``:
- Uses Shanks transformation. Typically provides useful
- extrapolation when `f(k) \sim c^k` or when successive terms
- alternate signs. Is able to sum some divergent series.
- See :func:`~mpmath.shanks` for additional information.
- ``'levin'`` / ``'l'``:
- Uses the Levin transformation. It performs better than the Shanks
- transformation for logarithmic convergent or alternating divergent
- series. The ``'levin_variant'``-keyword selects the variant. Valid
- choices are "u", "t", "v" and "all", where "all" uses all three
- u, t and v simultaneously (this is useful for performance comparison in
- conjunction with "verbose=True"). Instead of the Levin transform one can
- also use the Sidi-S transform by selecting the method ``'sidi'``.
- See :func:`~mpmath.levin` for additional details.
- ``'alternating'`` / ``'a'``:
- This is the convergence acceleration of alternating series developed
- by Cohen, Villegas and Zagier.
- See :func:`~mpmath.cohen_alt` for additional details.
- ``'euler-maclaurin'`` / ``'e'``:
- Uses the Euler-Maclaurin summation formula to approximate
- the remainder sum by an integral. This requires high-order
- numerical derivatives and numerical integration. The advantage
- of this algorithm is that it works regardless of the
- decay rate of `f`, as long as `f` is sufficiently smooth.
- See :func:`~mpmath.sumem` for additional information.
- ``'direct'`` / ``'d'``:
- Does not perform any extrapolation. This can be used
- (and should only be used) for rapidly convergent series.
- The summation automatically stops when the terms
- decrease below the target tolerance.
- **Basic examples**
- A finite sum::
- >>> nsum(lambda k: 1/k, [1, 6])
- 2.45
- Summation of a series going to negative infinity and a doubly
- infinite series::
- >>> nsum(lambda k: 1/k**2, [-inf, -1])
- 1.64493406684823
- >>> nsum(lambda k: 1/(1+k**2), [-inf, inf])
- 3.15334809493716
- :func:`~mpmath.nsum` handles sums of complex numbers::
- >>> nsum(lambda k: (0.5+0.25j)**k, [0, inf])
- (1.6 + 0.8j)
- The following sum converges very rapidly, so it is most
- efficient to sum it by disabling convergence acceleration::
- >>> mp.dps = 1000
- >>> a = nsum(lambda k: -(-1)**k * k**2 / fac(2*k), [1, inf],
- ... method='direct')
- >>> b = (cos(1)+sin(1))/4
- >>> abs(a-b) < mpf('1e-998')
- True
- **Examples with Richardson extrapolation**
- Richardson extrapolation works well for sums over rational
- functions, as well as their alternating counterparts::
- >>> mp.dps = 50
- >>> nsum(lambda k: 1 / k**3, [1, inf],
- ... method='richardson')
- 1.2020569031595942853997381615114499907649862923405
- >>> zeta(3)
- 1.2020569031595942853997381615114499907649862923405
- >>> nsum(lambda n: (n + 3)/(n**3 + n**2), [1, inf],
- ... method='richardson')
- 2.9348022005446793094172454999380755676568497036204
- >>> pi**2/2-2
- 2.9348022005446793094172454999380755676568497036204
- >>> nsum(lambda k: (-1)**k / k**3, [1, inf],
- ... method='richardson')
- -0.90154267736969571404980362113358749307373971925537
- >>> -3*zeta(3)/4
- -0.90154267736969571404980362113358749307373971925538
- **Examples with Shanks transformation**
- The Shanks transformation works well for geometric series
- and typically provides excellent acceleration for Taylor
- series near the border of their disk of convergence.
- Here we apply it to a series for `\log(2)`, which can be
- seen as the Taylor series for `\log(1+x)` with `x = 1`::
- >>> nsum(lambda k: -(-1)**k/k, [1, inf],
- ... method='shanks')
- 0.69314718055994530941723212145817656807550013436025
- >>> log(2)
- 0.69314718055994530941723212145817656807550013436025
- Here we apply it to a slowly convergent geometric series::
- >>> nsum(lambda k: mpf('0.995')**k, [0, inf],
- ... method='shanks')
- 200.0
- Finally, Shanks' method works very well for alternating series
- where `f(k) = (-1)^k g(k)`, and often does so regardless of
- the exact decay rate of `g(k)`::
- >>> mp.dps = 15
- >>> nsum(lambda k: (-1)**(k+1) / k**1.5, [1, inf],
- ... method='shanks')
- 0.765147024625408
- >>> (2-sqrt(2))*zeta(1.5)/2
- 0.765147024625408
- The following slowly convergent alternating series has no known
- closed-form value. Evaluating the sum a second time at higher
- precision indicates that the value is probably correct::
- >>> nsum(lambda k: (-1)**k / log(k), [2, inf],
- ... method='shanks')
- 0.924299897222939
- >>> mp.dps = 30
- >>> nsum(lambda k: (-1)**k / log(k), [2, inf],
- ... method='shanks')
- 0.92429989722293885595957018136
- **Examples with Levin transformation**
- The following example calculates Euler's constant as the constant term in
- the Laurent expansion of zeta(s) at s=1. This sum converges extremely slowly
- because of the logarithmic convergence behaviour of the Dirichlet series
- for zeta.
- >>> mp.dps = 30
- >>> z = mp.mpf(10) ** (-10)
- >>> a = mp.nsum(lambda n: n**(-(1+z)), [1, mp.inf], method = "levin") - 1 / z
- >>> print(mp.chop(a - mp.euler, tol = 1e-10))
- 0.0
- Now we sum the zeta function outside its range of convergence
- (attention: This does not work at the negative integers!):
- >>> mp.dps = 15
- >>> w = mp.nsum(lambda n: n ** (2 + 3j), [1, mp.inf], method = "levin", levin_variant = "v")
- >>> print(mp.chop(w - mp.zeta(-2-3j)))
- 0.0
- The next example resums an asymptotic series expansion of an integral
- related to the exponential integral.
- >>> mp.dps = 15
- >>> z = mp.mpf(10)
- >>> # exact = mp.quad(lambda x: mp.exp(-x)/(1+x/z),[0,mp.inf])
- >>> exact = z * mp.exp(z) * mp.expint(1,z) # this is the symbolic expression for the integral
- >>> w = mp.nsum(lambda n: (-1) ** n * mp.fac(n) * z ** (-n), [0, mp.inf], method = "sidi", levin_variant = "t")
- >>> print(mp.chop(w - exact))
- 0.0
- The following highly divergent asymptotic expansion needs some care. Firstly we
- need a copious amount of working precision. Secondly the stepsize must not be
- chosen too large, otherwise nsum may miss the point where the Levin transform
- converges and reach the region where only numerical garbage is produced due to
- numerical cancellation.
- >>> mp.dps = 15
- >>> z = mp.mpf(2)
- >>> # exact = mp.quad(lambda x: mp.exp( -x * x / 2 - z * x ** 4), [0,mp.inf]) * 2 / mp.sqrt(2 * mp.pi)
- >>> exact = mp.exp(mp.one / (32 * z)) * mp.besselk(mp.one / 4, mp.one / (32 * z)) / (4 * mp.sqrt(z * mp.pi)) # this is the symbolic expression for the integral
- >>> w = mp.nsum(lambda n: (-z)**n * mp.fac(4 * n) / (mp.fac(n) * mp.fac(2 * n) * (4 ** n)),
- ... [0, mp.inf], method = "levin", levin_variant = "t", workprec = 8*mp.prec, steps = [2] + [1 for x in xrange(1000)])
- >>> print(mp.chop(w - exact))
- 0.0
- The hypergeometric function can also be summed outside its range of convergence:
- >>> mp.dps = 15
- >>> z = 2 + 1j
- >>> exact = mp.hyp2f1(2 / mp.mpf(3), 4 / mp.mpf(3), 1 / mp.mpf(3), z)
- >>> f = lambda n: mp.rf(2 / mp.mpf(3), n) * mp.rf(4 / mp.mpf(3), n) * z**n / (mp.rf(1 / mp.mpf(3), n) * mp.fac(n))
- >>> v = mp.nsum(f, [0, mp.inf], method = "levin", steps = [10 for x in xrange(1000)])
- >>> print(mp.chop(exact-v))
- 0.0
- **Examples with Cohen's alternating series resummation**
- The next example sums the alternating zeta function:
- >>> v = mp.nsum(lambda n: (-1)**(n-1) / n, [1, mp.inf], method = "a")
- >>> print(mp.chop(v - mp.log(2)))
- 0.0
- The derivative of the alternating zeta function outside its range of
- convergence:
- >>> v = mp.nsum(lambda n: (-1)**n * mp.log(n) * n, [1, mp.inf], method = "a")
- >>> print(mp.chop(v - mp.diff(lambda s: mp.altzeta(s), -1)))
- 0.0
- **Examples with Euler-Maclaurin summation**
- The sum in the following example has the wrong rate of convergence
- for either Richardson or Shanks to be effective.
- >>> f = lambda k: log(k)/k**2.5
- >>> mp.dps = 15
- >>> nsum(f, [1, inf], method='euler-maclaurin')
- 0.38734195032621
- >>> -diff(zeta, 2.5)
- 0.38734195032621
- Increasing ``steps`` improves speed at higher precision::
- >>> mp.dps = 50
- >>> nsum(f, [1, inf], method='euler-maclaurin', steps=[250])
- 0.38734195032620997271199237593105101319948228874688
- >>> -diff(zeta, 2.5)
- 0.38734195032620997271199237593105101319948228874688
- **Divergent series**
- The Shanks transformation is able to sum some *divergent*
- series. In particular, it is often able to sum Taylor series
- beyond their radius of convergence (this is due to a relation
- between the Shanks transformation and Pade approximations;
- see :func:`~mpmath.pade` for an alternative way to evaluate divergent
- Taylor series). Furthermore, the Levin transform examples above
- contain some divergent series resummations.
- Here we apply it to `\log(1+x)` far outside the region of
- convergence::
- >>> mp.dps = 50
- >>> nsum(lambda k: -(-9)**k/k, [1, inf],
- ... method='shanks')
- 2.3025850929940456840179914546843642076011014886288
- >>> log(10)
- 2.3025850929940456840179914546843642076011014886288
- A particular type of divergent series that can be summed
- using the Shanks transformation is geometric series.
- The result is the same as using the closed-form formula
- for an infinite geometric series::
- >>> mp.dps = 15
- >>> for n in range(-8, 8):
- ... if n == 1:
- ... continue
- ... print("%s %s %s" % (mpf(n), mpf(1)/(1-n),
- ... nsum(lambda k: n**k, [0, inf], method='shanks')))
- ...
- -8.0 0.111111111111111 0.111111111111111
- -7.0 0.125 0.125
- -6.0 0.142857142857143 0.142857142857143
- -5.0 0.166666666666667 0.166666666666667
- -4.0 0.2 0.2
- -3.0 0.25 0.25
- -2.0 0.333333333333333 0.333333333333333
- -1.0 0.5 0.5
- 0.0 1.0 1.0
- 2.0 -1.0 -1.0
- 3.0 -0.5 -0.5
- 4.0 -0.333333333333333 -0.333333333333333
- 5.0 -0.25 -0.25
- 6.0 -0.2 -0.2
- 7.0 -0.166666666666667 -0.166666666666667
- **Multidimensional sums**
- Any combination of finite and infinite ranges is allowed for the
- summation indices::
- >>> mp.dps = 15
- >>> nsum(lambda x,y: x+y, [2,3], [4,5])
- 28.0
- >>> nsum(lambda x,y: x/2**y, [1,3], [1,inf])
- 6.0
- >>> nsum(lambda x,y: y/2**x, [1,inf], [1,3])
- 6.0
- >>> nsum(lambda x,y,z: z/(2**x*2**y), [1,inf], [1,inf], [3,4])
- 7.0
- >>> nsum(lambda x,y,z: y/(2**x*2**z), [1,inf], [3,4], [1,inf])
- 7.0
- >>> nsum(lambda x,y,z: x/(2**z*2**y), [3,4], [1,inf], [1,inf])
- 7.0

    Some nice examples of double series with analytic solutions or
    reductions to single-dimensional series (see [1])::

        >>> nsum(lambda m, n: 1/2**(m*n), [1,inf], [1,inf])
        1.60669515241529
        >>> nsum(lambda n: 1/(2**n-1), [1,inf])
        1.60669515241529
        >>> nsum(lambda i,j: (-1)**(i+j)/(i**2+j**2), [1,inf], [1,inf])
        0.278070510848213
        >>> pi*(pi-3*ln2)/12
        0.278070510848213
        >>> nsum(lambda i,j: (-1)**(i+j)/(i+j)**2, [1,inf], [1,inf])
        0.129319852864168
        >>> altzeta(2) - altzeta(1)
        0.129319852864168
        >>> nsum(lambda i,j: (-1)**(i+j)/(i+j)**3, [1,inf], [1,inf])
        0.0790756439455825
        >>> altzeta(3) - altzeta(2)
        0.0790756439455825
        >>> nsum(lambda m,n: m**2*n/(3**m*(n*3**m+m*3**n)),
        ...     [1,inf], [1,inf])
        0.28125
        >>> mpf(9)/32
        0.28125
        >>> nsum(lambda i,j: fac(i-1)*fac(j-1)/fac(i+j),
        ...     [1,inf], [1,inf], workprec=400)
        1.64493406684823
        >>> zeta(2)
        1.64493406684823

    A hard example of a multidimensional sum is the Madelung constant
    in three dimensions (see [2]). The defining sum converges very
    slowly and only conditionally, so :func:`~mpmath.nsum` is lucky to
    obtain an accurate value through convergence acceleration. The
    second evaluation below uses a much more efficient, rapidly
    convergent 2D sum::

        >>> nsum(lambda x,y,z: (-1)**(x+y+z)/(x*x+y*y+z*z)**0.5,
        ...     [-inf,inf], [-inf,inf], [-inf,inf], ignore=True)
        -1.74756459463318
        >>> nsum(lambda x,y: -12*pi*sech(0.5*pi * \
        ...     sqrt((2*x+1)**2+(2*y+1)**2))**2, [0,inf], [0,inf])
        -1.74756459463318

    Another example of a lattice sum in 2D::

        >>> nsum(lambda x,y: (-1)**(x+y) / (x**2+y**2), [-inf,inf],
        ...     [-inf,inf], ignore=True)
        -2.1775860903036
        >>> -pi*ln2
        -2.1775860903036

    An example of an Eisenstein series::

        >>> nsum(lambda m,n: (m+n*1j)**(-4), [-inf,inf], [-inf,inf],
        ...     ignore=True)
        (3.1512120021539 + 0.0j)

    **References**

    1. [Weisstein]_ http://mathworld.wolfram.com/DoubleSeries.html
    2. [Weisstein]_ http://mathworld.wolfram.com/MadelungConstants.html

    """
    infinite, g = standardize(ctx, f, intervals, options)
    if not infinite:
        return +g()
    # update() extends the list of partial sums s_0, s_1, ... that the
    # extrapolation scheme operates on
    def update(partial_sums, indices):
        if partial_sums:
            psum = partial_sums[-1]
        else:
            psum = ctx.zero
        for k in indices:
            psum = psum + g(ctx.mpf(k))
        partial_sums.append(psum)
    prec = ctx.prec
    # emfun() estimates the tail from `point` onwards via ctx.sumem();
    # adaptive_extrapolation() uses it for Euler-Maclaurin summation
    def emfun(point, tol):
        workprec = ctx.prec
        ctx.prec = prec + 10
        v = ctx.sumem(g, [point, ctx.inf], tol, error=1)
        ctx.prec = workprec
        return v
    return +ctx.adaptive_extrapolation(update, emfun, options)
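
# The helpers below reduce an arbitrary nsum() call to a one-dimensional
# sum over [0, inf], which adaptive_extrapolation() can then accelerate:
#
#   standardize()          -- classifies the intervals as finite/infinite
#                             and chains the transformations below
#   fold_finite()          -- sums out the finite dimensions explicitly
#   standardize_infinite() -- maps every infinite interval onto [0, inf]
#   fold_infinite()        -- collapses [0,inf] x [0,inf] pairs into a
#                             single [0,inf] dimension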

def wrapsafe(f):
    # Wrap f so that arithmetic failures at individual points (e.g. a
    # division by zero at the origin of a lattice sum) contribute zero
    # terms; enabled by the ignore=True option.
    def g(*args):
        try:
            return f(*args)
        except (ArithmeticError, ValueError):
            return 0
    return g

def standardize(ctx, f, intervals, options):
    if options.get("ignore"):
        f = wrapsafe(f)
    finite = []
    infinite = []
    for k, points in enumerate(intervals):
        a, b = ctx._as_points(points)
        if b < a:
            # Empty interval: the sum is zero
            return False, (lambda: ctx.zero)
        if a == ctx.ninf or b == ctx.inf:
            infinite.append((k, (a,b)))
        else:
            finite.append((k, (int(a), int(b))))
    if finite:
        f = fold_finite(ctx, f, finite)
    if not infinite:
        # Purely finite sum: fold_finite() has already done all the work
        return False, lambda: f(*([0]*len(intervals)))
    if infinite:
        f = standardize_infinite(ctx, f, infinite)
        f = fold_infinite(ctx, f, infinite)
        args = [0] * len(intervals)
        d = infinite[0][0]
        def g(k):
            args[d] = k
            return f(*args)
        return True, g

# backwards compatible itertools.product
def cartesian_product(args):
    pools = map(tuple, args)
    result = [[]]
    for pool in pools:
        result = [x+[y] for x in result for y in pool]
    for prod in result:
        yield tuple(prod)
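
# fold_finite() sums out the finite dimensions pointwise. For example,
# with f(x, y) and a finite range [2, 3] in dimension 0, the returned
# function g satisfies g(_, y) = f(2, y) + f(3, y), so that only the
# infinite dimensions remain to be extrapolated over.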
def fold_finite(ctx, f, intervals):
    if not intervals:
        return f
    indices = [v[0] for v in intervals]
    points = [v[1] for v in intervals]
    ranges = [xrange(a, b+1) for (a,b) in points]
    def g(*args):
        args = list(args)
        s = ctx.zero
        for xs in cartesian_product(ranges):
            for dim, x in zip(indices, xs):
                args[dim] = ctx.mpf(x)
            s += f(*args)
        return s
    #print("Folded finite", indices)
    return g
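
# standardize_infinite() rewrites each infinite dimension so that the
# index runs over [0, inf]:
#
#   [a, inf]    -> shift:   k |-> k + a
#   [-inf, b]   -> reflect: k |-> b - k
#   [-inf, inf] -> fold:    f(0) at k = 0, f(k) + f(-k) for k >= 1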
# Standardize each interval to [0,inf]
def standardize_infinite(ctx, f, intervals):
    if not intervals:
        return f
    dim, [a,b] = intervals[-1]
    if a == ctx.ninf:
        if b == ctx.inf:
            def g(*args):
                args = list(args)
                k = args[dim]
                if k:
                    s = f(*args)
                    args[dim] = -k
                    s += f(*args)
                    return s
                else:
                    return f(*args)
        else:
            def g(*args):
                args = list(args)
                args[dim] = b - args[dim]
                return f(*args)
    else:
        def g(*args):
            args = list(args)
            args[dim] += a
            return f(*args)
    #print("Standardized infinity along dimension", dim, a, b)
    return standardize_infinite(ctx, g, intervals[:-1])
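
# fold_infinite() collapses two [0,inf] dimensions into one by summing
# over L-shaped shells: shell n consists of the points (x, n) for
# 0 <= x <= n together with (n, y) for 0 <= y < n, so that
#
#   sum_{x,y >= 0} f(x,y) = sum_{n >= 0} [ sum_{x=0}^{n} f(x,n)
#                                        + sum_{y=0}^{n-1} f(n,y) ]
#
# reducing the double sum to a single sum that can be extrapolated.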
def fold_infinite(ctx, f, intervals):
    if len(intervals) < 2:
        return f
    dim1 = intervals[-2][0]
    dim2 = intervals[-1][0]
    # Assume intervals are [0,inf] x [0,inf] x ...
    def g(*args):
        args = list(args)
        n = int(args[dim1])
        s = ctx.zero
        args[dim2] = ctx.mpf(n)
        for x in xrange(n+1):
            args[dim1] = ctx.mpf(x)
            s += f(*args)
        args[dim1] = ctx.mpf(n)
        for y in xrange(n):
            args[dim2] = ctx.mpf(y)
            s += f(*args)
        return s
    #print("Folded infinite from", len(intervals), "to", (len(intervals)-1))
    return fold_infinite(ctx, g, intervals[:-1])
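
# For example, nsum(f, [1,3], [0,inf], [-inf,inf]) is reduced as follows:
# the finite dimension [1,3] is folded into f, the [-inf,inf] dimension
# is standardized to [0,inf], the two remaining [0,inf] dimensions are
# folded into one, and adaptive_extrapolation() then accelerates the
# resulting one-dimensional sequence of partial sums.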


@defun
def nprod(ctx, f, interval, nsum=False, **kwargs):
    r"""
    Computes the product

    .. math ::

        P = \prod_{k=a}^b f(k)

    where `(a, b)` = *interval*, and where `a = -\infty` and/or
    `b = \infty` are allowed.

    By default, :func:`~mpmath.nprod` uses the same extrapolation methods as
    :func:`~mpmath.nsum`, except applied to the partial products rather than
    partial sums, and the same keyword options as for :func:`~mpmath.nsum` are
    supported. If ``nsum=True``, the product is instead computed via
    :func:`~mpmath.nsum` as

    .. math ::

        P = \exp\left( \sum_{k=a}^b \log(f(k)) \right).

    This is slower, but can sometimes yield better results. It is
    also required (and used automatically) when Euler-Maclaurin
    summation is requested.

    **Examples**

    A simple finite product::

        >>> from mpmath import *
        >>> mp.dps = 25; mp.pretty = True
        >>> nprod(lambda k: k, [1, 4])
        24.0
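
    The ``nsum=True`` route described above can be checked on the same
    product (an added sketch; :func:`~mpmath.chop` absorbs the rounding
    noise of the exp/log round trip)::

        >>> chop(nprod(lambda k: k, [1, 4], nsum=True) - 24)
        0.0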

    A large number of infinite products have known exact values,
    and can therefore be used as a reference. Most of the following
    examples are taken from MathWorld [1].

    A few infinite products with simple values are::

        >>> 2*nprod(lambda k: (4*k**2)/(4*k**2-1), [1, inf])
        3.141592653589793238462643
        >>> nprod(lambda k: (1+1/k)**2/(1+2/k), [1, inf])
        2.0
        >>> nprod(lambda k: (k**3-1)/(k**3+1), [2, inf])
        0.6666666666666666666666667
        >>> nprod(lambda k: (1-1/k**2), [2, inf])
        0.5

    Next, several more infinite products with more complicated
    values::

        >>> nprod(lambda k: exp(1/k**2), [1, inf]); exp(pi**2/6)
        5.180668317897115748416626
        5.180668317897115748416626
        >>> nprod(lambda k: (k**2-1)/(k**2+1), [2, inf]); pi*csch(pi)
        0.2720290549821331629502366
        0.2720290549821331629502366
        >>> nprod(lambda k: (k**4-1)/(k**4+1), [2, inf])
        0.8480540493529003921296502
        >>> pi*sinh(pi)/(cosh(sqrt(2)*pi)-cos(sqrt(2)*pi))
        0.8480540493529003921296502
        >>> nprod(lambda k: (1+1/k+1/k**2)**2/(1+2/k+3/k**2), [1, inf])
        1.848936182858244485224927
        >>> 3*sqrt(2)*cosh(pi*sqrt(3)/2)**2*csch(pi*sqrt(2))/pi
        1.848936182858244485224927
        >>> nprod(lambda k: (1-1/k**4), [2, inf]); sinh(pi)/(4*pi)
        0.9190194775937444301739244
        0.9190194775937444301739244
        >>> nprod(lambda k: (1-1/k**6), [2, inf])
        0.9826842777421925183244759
        >>> (1+cosh(pi*sqrt(3)))/(12*pi**2)
        0.9826842777421925183244759
        >>> nprod(lambda k: (1+1/k**2), [2, inf]); sinh(pi)/(2*pi)
        1.838038955187488860347849
        1.838038955187488860347849
        >>> nprod(lambda n: (1+1/n)**n * exp(1/(2*n)-1), [1, inf])
        1.447255926890365298959138
        >>> exp(1+euler/2)/sqrt(2*pi)
        1.447255926890365298959138

    The following two products are equivalent and can be evaluated in
    terms of a Jacobi theta function. Pi can be replaced by any value
    (as long as convergence is preserved)::

        >>> nprod(lambda k: (1-pi**-k)/(1+pi**-k), [1, inf])
        0.3838451207481672404778686
        >>> nprod(lambda k: tanh(k*log(pi)/2), [1, inf])
        0.3838451207481672404778686
        >>> jtheta(4,0,1/pi)
        0.3838451207481672404778686

    This product does not have a known closed-form value::

        >>> nprod(lambda k: (1-1/2**k), [1, inf])
        0.2887880950866024212788997

    A product taken from `-\infty`::

        >>> nprod(lambda k: 1-k**(-3), [-inf,-2])
        0.8093965973662901095786805
        >>> cosh(pi*sqrt(3)/2)/(3*pi)
        0.8093965973662901095786805

    A doubly infinite product::

        >>> nprod(lambda k: exp(1/(1+k**2)), [-inf, inf])
        23.41432688231864337420035
        >>> exp(pi/tanh(pi))
        23.41432688231864337420035

    A product requiring the use of Euler-Maclaurin summation to compute
    an accurate value::

        >>> nprod(lambda k: (1-1/k**2.5), [2, inf], method='e')
        0.696155111336231052898125

    **References**

    1. [Weisstein]_ http://mathworld.wolfram.com/InfiniteProduct.html

    """
    if nsum or ('e' in kwargs.get('method', '')):
        orig = ctx.prec
        try:
            # TODO: we are evaluating log(1+eps) -> eps, which is
            # inaccurate. This currently works because nsum greatly
            # increases the working precision. But we should be
            # more intelligent and handle the precision here.
            ctx.prec += 10
            v = ctx.nsum(lambda n: ctx.ln(f(n)), interval, **kwargs)
        finally:
            ctx.prec = orig
        return +ctx.exp(v)

    a, b = ctx._as_points(interval)
    if a == ctx.ninf:
        if b == ctx.inf:
            # Doubly infinite product, folded into a product over [1, inf]
            return f(0) * ctx.nprod(lambda k: f(-k) * f(k), [1, ctx.inf], **kwargs)
        return ctx.nprod(f, [-b, ctx.inf], **kwargs)
    elif b != ctx.inf:
        # Finite product: multiply the terms directly
        return ctx.fprod(f(ctx.mpf(k)) for k in xrange(int(a), int(b)+1))

    a = int(a)
    # update() extends the list of partial products, which the
    # extrapolation scheme treats exactly like partial sums
    def update(partial_products, indices):
        if partial_products:
            pprod = partial_products[-1]
        else:
            pprod = ctx.one
        for k in indices:
            pprod = pprod * f(a + ctx.mpf(k))
        partial_products.append(pprod)
    return +ctx.adaptive_extrapolation(update, None, kwargs)


@defun
def limit(ctx, f, x, direction=1, exp=False, **kwargs):
    r"""
    Computes an estimate of the limit

    .. math ::

        \lim_{t \to x} f(t)

    where `x` may be finite or infinite.

    For finite `x`, :func:`~mpmath.limit` evaluates `f(x + d/n)` for
    consecutive integer values of `n`, where the approach direction
    `d` may be specified using the *direction* keyword argument.
    For infinite `x`, :func:`~mpmath.limit` evaluates
    `f(\mathrm{sign}(x) \cdot n)` for consecutive integers `n`.

    If the approach to the limit is not sufficiently fast to give
    an accurate estimate directly, :func:`~mpmath.limit` attempts to find
    the limit using Richardson extrapolation or the Shanks
    transformation. You can select between these methods using
    the *method* keyword (see the documentation of :func:`~mpmath.nsum` for
    more information).

    **Options**

    The following options are available with essentially the
    same meaning as for :func:`~mpmath.nsum`: *tol*, *method*, *maxterms*,
    *steps*, *verbose*.

    If the option *exp=True* is set, `f` will be
    sampled at exponentially spaced points `n = 2^1, 2^2, 2^3, \ldots`
    instead of the linearly spaced points `n = 1, 2, 3, \ldots`.
    This can sometimes improve the rate of convergence so that
    :func:`~mpmath.limit` may return a more accurate answer (and faster).
    However, note that this can only be used if `f`
    supports fast and accurate evaluation for arguments that
    are extremely close to the limit point (or, if the limit point
    is infinite, for very large arguments).

    **Examples**

    A basic evaluation of a removable singularity::

        >>> from mpmath import *
        >>> mp.dps = 30; mp.pretty = True
        >>> limit(lambda x: (x-sin(x))/x**3, 0)
        0.166666666666666666666666666667
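
    One-sided limits can be selected with the *direction* keyword (an
    added illustration; the sampled sequence is constant here, so the
    extrapolation is exact)::

        >>> limit(lambda t: abs(t)/t, 0, direction=1)
        1.0
        >>> limit(lambda t: abs(t)/t, 0, direction=-1)
        -1.0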

    Computing the exponential function using its limit definition::

        >>> limit(lambda n: (1+3/n)**n, inf)
        20.0855369231876677409285296546
        >>> exp(3)
        20.0855369231876677409285296546

    A limit for `\pi`::

        >>> f = lambda n: 2**(4*n+1)*fac(n)**4/(2*n+1)/fac(2*n)**2
        >>> limit(f, inf)
        3.14159265358979323846264338328

    Calculating the coefficient in Stirling's formula::

        >>> limit(lambda n: fac(n) / (sqrt(n)*(n/e)**n), inf)
        2.50662827463100050241576528481
        >>> sqrt(2*pi)
        2.50662827463100050241576528481

    Evaluating Euler's constant `\gamma` using the limit representation

    .. math ::

        \gamma = \lim_{n \rightarrow \infty } \left[ \left(
        \sum_{k=1}^n \frac{1}{k} \right) - \log(n) \right]

    (which converges notoriously slowly)::

        >>> f = lambda n: sum([mpf(1)/k for k in range(1,int(n)+1)]) - log(n)
        >>> limit(f, inf)
        0.577215664901532860606512090082
        >>> +euler
        0.577215664901532860606512090082

    With default settings, the following limit converges too slowly
    to be evaluated accurately. Changing to exponential sampling
    however gives a perfect result::

        >>> f = lambda x: sqrt(x**3+x**2)/(sqrt(x**3)+x)
        >>> limit(f, inf)
        0.992831158558330281129249686491
        >>> limit(f, inf, exp=True)
        1.0

    """
    if ctx.isinf(x):
        direction = ctx.sign(x)
        g = lambda k: f(ctx.mpf(k+1)*direction)
    else:
        direction *= ctx.one
        g = lambda k: f(x + direction/(k+1))
    if exp:
        # Resample at exponentially spaced points g(2), g(4), g(8), ...
        h = g
        g = lambda k: h(2**k)
    def update(values, indices):
        for k in indices:
            values.append(g(k+1))
    # XXX: steps used by nsum don't work well
    if 'steps' not in kwargs:
        kwargs['steps'] = [10]
    return +ctx.adaptive_extrapolation(update, None, kwargs)
|