Ole Peters





Order here.
Rory puts it like this: 100 people doing something once is not the same as one person doing the same thing 100 times. It's surprisingly easy, in mathematical models of group behavior, to accidentally assume that it is.


Order here.
A science thriller by best-selling author Marc Elsberg. The story is woven around our ergodicity research and explores possible socio-political interpretations.


Order here.
Chapter 19, "The logic of risk taking," uses the example of ruin to illustrate a failure of ergodicity: in certain gambles, my expected winnings may be huge but playing in sequence guarantees ruin. The chapter is also available as a blog post.


Order here.
Rick argues that a failure of ergodicity can imply that it is futile to develop far-reaching theories. Imagine a system whose possible states are so numerous and diverse that the future will not meaningfully resemble the past. Navigating such a system is more like driving by vision through a thick fog in unknown territory than it is like programming an autopilot.


AIP Staff
Exploring gambles reveals foundational difficulty behind economic theory (and a solution!).
AIP Publishing (2016).


M. Buchanan
Gamble with time.
Nature Phys. 9, 3 (2013).
doi: 10.1038/nphys2520


O. Peters
Time, for a change.
Gresham College London (2012).
PDF slides

Towers Watson
The irreversibility of time (2012).


M. Mauboussin
Shaking the foundation (2012).


R. Bookstaber
A Crack in the foundation (2011).


O. Peters
Time and chance.
TEDx Goodenough London (2011).
PDF slides


O. Peters
On time and risk.
Santa Fe Institute Bulletin
24, 1, 36--41 (2009).



[34] A. Adamou, Y. Berman, O. Peters
        https://researchers.one/articles/The-Two-Growth-Rates-of-the-Economy/a630235a4df76b98c50bd37e/v1.

We interpret the two growth rates of ergodicity economics for multiplicative growth as follows. The growth rate of the ensemble average is approximately GDP growth; the time-average growth rate is the growth experienced by an individual citizen. Because the two differ, a disconnect can emerge between a central government (working with aggregates) and the typical citizen.
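
As a rough numerical illustration of this distinction (a toy simulation with invented parameters, not the paper's empirical analysis), one can simulate many individual wealths under multiplicative growth and compare the growth rate of the population average with the growth rate experienced by the typical individual:

    import numpy as np

    # Toy illustration: N individuals, each wealth following multiplicative
    # (geometric Brownian motion) growth with the same drift and volatility.
    rng = np.random.default_rng(1)
    N, T, dt = 100_000, 50.0, 0.1
    mu, sigma = 0.05, 0.3
    steps = int(T / dt)

    logw = np.zeros(N)  # log-wealth; everyone starts with wealth 1
    for _ in range(steps):
        logw += (mu - sigma**2 / 2) * dt + sigma * np.sqrt(dt) * rng.standard_normal(N)

    # Growth rate of the ensemble average (the GDP-like aggregate rate) ...
    g_average = np.log(np.exp(logw).mean()) / T
    # ... versus the growth rate experienced by the typical (median) individual.
    g_typical = np.median(logw) / T

    print(f"growth rate of average wealth:      {g_average:.3f}  (approx. mu = {mu})")
    print(f"growth rate of typical individual:  {g_typical:.3f}  (approx. mu - sigma^2/2 = {mu - sigma**2 / 2:.3f})")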



[33] O. Peters, A. Adamou, M. Kirstein, Y. Berman
        https://researchers.one/articles/what-are-we-weighting-for-a-mechanistic-model-for-probability-weighting/5f52699d36a3e45f17ae7e7c/v1.

We note that observations of so-called probability weighting only constitute a difference in opinion between an observer and a subject in an experiment. They do not by themselves say who is right and who is wrong. An appearance of overweighting low-probability events can emerge when subjects protect against surprises by taking into account uncertainties they have about the relative frequencies of events.
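
A hedged toy illustration of how such an appearance can arise (a simplification in the spirit of the mechanism described above, not the paper's actual model): if a subject's internal estimate of a stated probability carries noise in log-odds space, the average of the back-transformed estimates exceeds small probabilities and falls short of large ones, tracing out the familiar inverse-S pattern.

    import numpy as np

    # Toy sketch: the experimenter states a probability p; the subject's internal
    # estimate is p perturbed by Gaussian noise in log-odds (logit) space,
    # standing in for uncertainty about the true relative frequency.
    rng = np.random.default_rng(0)
    noise_sd = 1.0          # assumed spread of the subject's uncertainty
    samples = 200_000

    def mean_perceived(p):
        logit = np.log(p / (1 - p))
        noisy = logit + noise_sd * rng.standard_normal(samples)
        return (1 / (1 + np.exp(-noisy))).mean()   # average back-transformed estimate

    for p in [0.01, 0.05, 0.25, 0.5, 0.75, 0.95, 0.99]:
        print(f"stated p = {p:.2f}   average perceived weight = {mean_perceived(p):.3f}")
    # Small probabilities come out larger than stated and large ones smaller --
    # an inverse-S pattern without any irrational weighting by the subject.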



[32] O. Peters
       
        Nature Phys. 15, 12, 1216--1221 (2019).
        doi: 10.1038/s41567-019-0732-0.

An overview and stock-taking piece: where is ergodicity economics in 2019, and what's the basic story?



[31] A. Adamou, Y. Berman, D. Mavroyiannis, and O. Peters
       
        Decision Analysis 18, 4, 257--272 (2021).
        doi: 10.1287/deca.2021.0436.
https://www.researchers.one/article/2019-10-7 (2019).

The key model of decision theory is expected utility theory (EUT). At the beginning of the ergodicity economics program stands the observation that the 18th-century proponents of this theory overlooked something: in order to say how fast something grows, we have to know the dynamic it follows. If it grows exponentially, for instance, the additive growth rate is not meaningful because it changes all the time. We have to state the exponential growth rate instead. EUT implements a correction away from additive growth rates by introducing a utility function and computing expected changes in that function. It is better, we suggest, to find the ergodic growth rate and compute its time average (which is identical to its expectation value, of course). Our interpretation of EUT: it's a pre-ergodicity attempt at creating ergodic growth rates.
In this paper we simplify further by removing the randomness from the problem. Random or not -- the appropriate functional form of a growth rate is always determined by the dynamic. In the deterministic case the value of that growth rate is constant; in the random case it is ergodic. By considering the deterministic problem we remove potential confusion arising from the minefield that is probabilistic reasoning. In economics, the problem of known payments that happen at different times is called "discounting." Thus we solve the deterministic discounting problem from the point of view of ergodicity economics. It turns out that the problem is richer than the standard economics treatment suggests, and we're able to explain a number of observed behaviors, such as so-called preference reversal and faster discounting by poorer individuals.
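
A minimal check of the point above that the functional form of a growth rate is set by the dynamic (my own illustration, with arbitrary numbers): for an exponentially growing quantity the additive growth rate keeps changing while the exponential growth rate is constant, and vice versa for a linearly growing quantity.

    import numpy as np

    # Additive rate: (x(t+dt) - x(t)) / dt.  Exponential rate: (ln x(t+dt) - ln x(t)) / dt.
    r, x0, dt = 0.05, 100.0, 1.0

    def additive_rate(x, t):
        return (x(t + dt) - x(t)) / dt

    def exponential_rate(x, t):
        return (np.log(x(t + dt)) - np.log(x(t))) / dt

    exp_growth = lambda t: x0 * np.exp(r * t)        # exponential dynamic
    lin_growth = lambda t: x0 + r * x0 * t           # additive (linear) dynamic

    for t in [0.0, 20.0, 40.0]:
        print(f"t = {t:4.0f}  exponential quantity: additive rate {additive_rate(exp_growth, t):7.2f}, "
              f"exponential rate {exponential_rate(exp_growth, t):.4f}")
        print(f"          linear quantity:      additive rate {additive_rate(lin_growth, t):7.2f}, "
              f"exponential rate {exponential_rate(lin_growth, t):.4f}")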



[30] O. Peters
        https://www.researchers.one/article/2019-05-1 (2019).

Formal economic theory, diverse as it is, does have a common cornerstone: expected utility theory. Various forms and extensions of this theory, including prospect theory, dominate formal economics today. The trend that led us here started with Bernoulli's seminal paper of 1738. This paper contains a mathematical inconsistency. A deep understanding of this inconsistency suggests that it lies behind much of the failure of economics to build plausible, first-principles, realistic models of human behavior. For instance, ergodicity economics -- the model that humans prefer faster wealth growth to slower wealth growth -- is not strictly consistent with Bernoulli's form of expected utility theory.



[29] O. Peters and A. Adamou
        arXiv:1802.02939 (2018).

The key observation that drives our research into economics is this: if wealth follows non-additive dynamics, then its time-average growth rate is not the growth rate of its expectation value. This fact has been overlooked in the history of economics from the day the field started to use formal mathematics, some time in the 1650s. As a consequence, economics today is conceptually confused and full of puzzles and riddles that could easily be solved. Our zeroth-order answer to many such puzzles is: just compute the time-average growth rate of wealth, assuming multiplicative dynamics. A question that often comes up in response is this: the expectation value assumes infinitely many parallel systems, and the time average assumes an infinite time horizon. Neither assumption is realistic. What happens when these limits are not good approximations? The answer is this: the problem remains stochastic, of course. Hence the answers you find are qualitatively different -- probabilistic statements instead of deterministic statements. The full distributions of the growth rate for finite time and finite ensemble are not known. But the problem can be mapped onto a famous model in statistical mechanics, called the random energy model. This enables the transfer of much knowledge from what's known about spin glasses to the problem of averaging finitely many geometric Brownian motions, i.e. wealth trajectories under multiplicative dynamics. It's a very neat case of synergies emerging from two fields speaking to each other. Most of the results we present are known to researchers working on spin glasses, but in economics they are less known, wherefore we present the mapping and key results step by step.
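
A small simulation of the crossover described here (illustrative only, and far short of the spin-glass mapping in the paper): average N independent geometric Brownian motions at a finite time T and estimate the growth rate of that partial ensemble average. For N = 1 it sits near the time-average growth rate; for very large N it approaches the growth rate of the expectation value.

    import numpy as np

    # For GBM the log of each trajectory at time T is exactly Gaussian, so we can
    # sample final values directly instead of simulating full paths.
    rng = np.random.default_rng(42)
    mu, sigma, T = 0.05, 0.4, 30.0

    def growth_rate_of_average(N, repeats=100):
        logx_T = (mu - sigma**2 / 2) * T + sigma * np.sqrt(T) * rng.standard_normal((repeats, N))
        partial_avg = np.exp(logx_T).mean(axis=1)     # average over the finite ensemble
        return np.mean(np.log(partial_avg) / T)       # typical growth rate of that average

    for N in [1, 100, 100_000]:
        print(f"N = {N:>7}:  growth rate of the N-average = {growth_rate_of_average(N):.3f}")
    print(f"time-average rate mu - sigma^2/2 = {mu - sigma**2 / 2:.3f},  expectation-value rate mu = {mu:.3f}")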



[28] O. Peters and A. Adamou
        arXiv:1801.03680 (2018).

Economic life (and life in general) is about making choices, facing an uncertain future. In economics, these choices are formalized in a framework called "expected utility theory." The framework was invented in the 1730s, at a time when our mathematical understanding of randomness and uncertainty was very limited. Like a time capsule, the framework carried into the modern world concepts from the 18th century. Specifically, these concepts are a) the utility function and b) the mathematical expectation value. The trouble is that they make the formalism too general -- behavior is deemed "optimal" if the decision maker prefers it. Thus the optimal behavior for a gambling addict may be reckless behavior that will ruin him.
We revisit the general problem using modern mathematical concepts, specifically a) a dynamic that connects decisions to their consequences for future decisions, and b) time-averaging that determines how the consequences of decisions accumulate over time. This modern perspective drastically changes the meaning of the terms of decision theory. A "utility function" now appears as an encryption of a dynamic: people whose behavior is consistent with a given utility function actually just maximize the growth rate of their wealth over time under the corresponding dynamic. The original formalism cannot see this because it doesn't include time, and instead works in a static space of possibilities (a probability space). Two examples were treated in Peters and Gell-Mann (2016): linear utility functions correspond to additive dynamics, and logarithmic utility functions correspond to multiplicative dynamics. Here we treat the general problem (for Ito processes). We find consistency conditions that determine when a dynamic possesses a corresponding utility function (existence), and we provide a recipe for constructing the function if it exists. We provide further examples, mapping the square-root utility function to its dynamic and, inverting the procedure, mapping a curious dynamic to what turns out to be an exponential utility function.
This formal work illustrates our break with traditional epistemology: we observe behavior, make the deliberate assumption that it is sensible, and ask what circumstances the individual must be exposed to so that this behavior is indeed sensible. In contrast, the classic approach observes behavior, generally assumes that it is irrational and describes the particular form of irrationality. Our approach allows a deeper probing of people's motivations: someone in poverty may buy a lottery ticket because it's his only chance of survival and is in that sense rationally the optimal choice. The classic approach tends to be only descriptive here: the observation of anyone buying a lottery ticket would merely imply that that person likes financial risk.
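
A small numerical check of the "utility function as encryption of a dynamic" idea, restricted to the two special cases from Peters and Gell-Mann (2016) mentioned above (not the general Ito construction): under multiplicative dynamics the statistics of changes in log-wealth do not depend on current wealth, while changes in wealth itself do; under additive dynamics it is the other way around.

    import numpy as np

    # An 'ergodicity transformation' (here playing the role of the utility function)
    # is one whose increments have state-independent statistics under the dynamic.
    rng = np.random.default_rng(7)
    mu, sigma, dt, n = 0.05, 0.2, 0.1, 1_000_000

    def mean_increments(x0, dynamic):
        if dynamic == "multiplicative":   # dx = x (mu dt + sigma dW)
            x1 = x0 * np.exp((mu - sigma**2 / 2) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n))
        else:                             # additive: dx = mu dt + sigma dW
            x1 = x0 + mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
        return (x1 - x0).mean(), (np.log(x1) - np.log(x0)).mean()

    for dynamic in ["multiplicative", "additive"]:
        for x0 in [1.0, 100.0]:
            dx, dlnx = mean_increments(x0, dynamic)
            print(f"{dynamic:>14} dynamic, wealth {x0:6.1f}:  mean dx = {dx:8.4f}   mean d(ln x) = {dlnx:.5f}")
    # Only d(ln x) is wealth-independent under multiplicative dynamics (log utility);
    # only dx is wealth-independent under additive dynamics (linear utility).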



[27] O. Peters and M. Werner
        arXiv:1706.07773 (2017).

Our starting point is the curious observation that many published scientific studies have recently been found to be irreproducible. For example, we may wish to confirm the value of some previously measured quantity, measure it again, and find a result that differs from the value reported in the literature. This can happen for many different reasons -- we may have made a mistake, the original study may have made a mistake, the measurement protocol may be underspecified, the measured quantity may not be properly defined, one of the two measurements may be a statistical fluke, or fraudulent, and on and on. We ask what would happen if a non-stationary, non-ergodic quantity were measured. Of course, since the quantity is not stationary, there is no reason for a second measurement to find the same value. But -- and this is the key finding -- in this situation a measurement in a single experiment can look reliable, fooling us into believing that it's reproducible. Using Brownian motion, we show that the finite-time average of a measured value converges in time, in the sense that changes in the average become arbitrarily small. However, the distribution of the time average in an ensemble of independent runs of the experiment becomes increasingly broad. Measurements are stable over time, but irreproducible across the ensemble. Pleasingly, our chosen model is so simple that the key results can be computed exactly.
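
A compact simulation of this effect (the model really is plain Brownian motion, so this stays close to the paper's setup, though the paper's results are exact rather than simulated): within one run the running time average barely moves at late times, while across independent runs the final time averages scatter ever more widely.

    import numpy as np

    # Brownian motion W(t); the measured quantity is the running time average
    #   A(T) = (1/T) * integral_0^T W(t) dt.
    rng = np.random.default_rng(3)
    dt, T_max, runs = 0.1, 1000.0, 500
    steps = int(T_max / dt)

    W = np.cumsum(np.sqrt(dt) * rng.standard_normal((runs, steps)), axis=1)
    A = np.cumsum(W, axis=1) * dt / (np.arange(1, steps + 1) * dt)   # running time averages

    for T in [100.0, 400.0, 1000.0]:
        i, j = int(T / dt) - 1, int((T - 10) / dt) - 1
        late_change = abs(A[0, i] - A[0, j])   # how much run 0's average moved in the last 10 time units
        spread = A[:, i].std()                 # scatter of the average across repeat experiments
        print(f"T = {T:6.0f}:  late change within run 0 = {late_change:5.2f}   "
              f"spread across runs = {spread:5.2f}  (theory sqrt(T/3) = {np.sqrt(T/3):.2f})")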



[26] Y. Berman, O. Peters and A. Adamou
        arXiv:1605.05631 (2016).

We fit observed distributions of wealth -- how many people have how much -- to a model of noisy exponential growth and reallocation. Everyone's wealth is assumed to follow geometric Brownian motion (GBM), enhanced by a term that collects from everyone at a rate in proportion to their wealth and redistributes the collected amount evenly across the population. We use US data from 1917 until 2012. Firstly, we find that the best-fit reallocation rate has been negative since the 1980s, meaning in effect that everyone pays the same dollar amount and the collected amount is redistributed in proportion to wealth -- a flow of wealth from poor to rich. This came as a big surprise: GBM on its own generates indefinitely increasing inequality, and one would expect this extreme model to require a correction that reduces the default tendency of increasing inequality. But that's not the case: recent conditions are such that GBM needs to be corrected to speed up the increase in wealth inequality if we want to describe the observations. Secondly, our model has an equilibrium (ergodic) distribution if reallocation is positive, and it has no such distribution if reallocation is negative. Fitting the reallocation rate thus asks the system: are you ergodic? And the answer is no. With current best-fit parameters the model is non-ergodic, and throughout history, whenever the parameters implied the existence of an ergodic distribution, their values implied equilibration times on the order of decades to centuries, so that the equilibrium state had no practical relevance.
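
A minimal simulation of the reallocating-GBM dynamic described above (simulation only, with parameters chosen for illustration; the paper's contribution is fitting this model to US wealth data, which is not reproduced here): with a positive reallocation rate a measure of inequality settles down, with a negative rate it keeps growing.

    import numpy as np

    # Reallocating geometric Brownian motion: each wealth follows GBM, minus a
    # contribution tau * x_i per unit time, plus an equal share tau * <x> of the
    # collected amount. Negative tau reverses the flow (poor to rich).
    rng = np.random.default_rng(11)
    N, mu, sigma, dt, T = 10_000, 0.02, 0.2, 0.1, 200.0
    steps = int(T / dt)

    def top1_share(tau):
        x = np.ones(N)
        for _ in range(steps):
            noise = sigma * np.sqrt(dt) * rng.standard_normal(N)
            x = x * (1 + mu * dt + noise) - tau * (x - x.mean()) * dt
        top = np.sort(x)[-N // 100:]
        return top.sum() / x.sum()     # share of total wealth held by the richest 1%

    for tau in [+0.03, -0.03]:
        print(f"tau = {tau:+.2f}:  top-1% wealth share after T = {T:.0f} is {top1_share(tau):.2f}")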



[25] O. Peters and A. Adamou
        arXiv:1507.04655 (2015).

Insurance buyers pay a known fee today to reduce an uncertain future loss. Sellers accept the fee and promise to cover the uncertain loss. For an insurance contract to be signed voluntarily, both parties to the deal must perceive it as beneficial to them. If the deal is judged by computing the expectation value of profit, a puzzle arises: the linearity of the expectation value implies that the deal can only be beneficial to one party. From this perspective insurance deals are signed because either people have different information (I know something you don't), or different attitudes towards risk (utility functions), or they are not rational. None of these explanations is satisfying, as e.g. Arrow pointed out at a 2014 Santa Fe Institute meeting. We resolve the puzzle by rejecting the underlying axiom that the expectation value of profit should be used to judge the desirability of the contract. Expectation values are averages over ensembles, but an individual signing an insurance contract is not an ensemble. Individuals are not ensembles, but they do live across time. Averaging over time is therefore the appropriate way of removing randomness from the model, and the time-average growth rate is the object of interest. Computing this object, the puzzle is resolved: a range of insurance fees exists where the contract increases the time-average growth rates of the wealth of both parties. We generalize: the fundamental reason for trading any product or service is a win-win situation. Both trading partners are better off as a result. This is possible because wealth must be modeled as a far-from-equilibrium growth process, such as geometric Brownian motion. In such processes the linear intuition associated with expectation values and equilibrium processes is misleading.
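
A small worked example in the spirit of this argument (the numbers are invented for illustration and are not taken from the paper): compute, for both parties, the expected change in log-wealth over one round with and without the contract, and find the range of fees for which both improve. Expected profit, by contrast, allows no fee that suits both: any fee above the expected loss hurts the buyer, any fee below it hurts the seller.

    import numpy as np

    # Invented numbers: a ship owner with modest wealth faces a possible loss;
    # an insurer with much larger wealth offers to cover it for a fee.
    w_owner, w_insurer = 1_000.0, 1_000_000.0
    loss, p = 500.0, 0.05                  # size and per-round probability of the loss

    def g_owner_uninsured(w):
        # expected change of log-wealth in one round (nothing happens in a good round)
        return p * np.log((w - loss) / w)

    def g_owner_insured(w, fee):
        return np.log((w - fee) / w)       # pays the fee, never bears the loss

    def g_insurer(w, fee):
        return (1 - p) * np.log((w + fee) / w) + p * np.log((w + fee - loss) / w)

    fees = np.linspace(0.0, 60.0, 601)
    owner_gains   = g_owner_insured(w_owner, fees) > g_owner_uninsured(w_owner)
    insurer_gains = g_insurer(w_insurer, fees) > 0.0
    window = fees[owner_gains & insurer_gains]
    print(f"expected loss per round: {p * loss:.1f}")
    print(f"fees that raise both parties' time-average growth: {window.min():.1f} to {window.max():.1f}")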



[24] O. Peters and A. Adamou
        SFI working paper #15-07-028 (2015).
        arXiv:1506.03414 (2015).

Social structure is the difference between a collection of individuals and a social system. We observe in the world around us plenty of structure, such as families, firms, nation states. Many different explanations for specific cases have been put forward. Here we develop a null model that favors the formation of structure in very general settings. We observe that cooperation -- the pooling and redistribution of resources -- confers an evolutionary advantage on cooperators over non-cooperators under multiplicative dynamics. This predicts the emergence of cooperation-promoting structure, such as governments that tax and redistribute some proportion of income. The evolutionary advantage is only visible if the performance of an entity is considered over time; it is invisible if the performance of the expectation value (an average over parallel universes) is considered. Mainstream economics focuses on the latter, thereby overlooking the most fundamental benefits of cooperation.
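
A bare-bones simulation of this mechanism (a sketch with invented parameters, not the paper's general treatment): two cooperators who pool and split their wealth every period end up growing faster over time than a comparable loner, even though the expectation values of all wealths grow at the same rate.

    import numpy as np

    # Each entity's wealth is multiplied every period by an independent random
    # growth factor; cooperators pool their wealth and split it equally afterwards.
    rng = np.random.default_rng(5)
    steps, dt = 100_000, 0.1
    mu, sigma = 0.05, 0.4

    def factors(n):
        return np.exp((mu - sigma**2 / 2) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n))

    loner = 1.0
    coop = np.ones(2)
    for _ in range(steps):
        loner *= factors(1)[0]
        coop *= factors(2)
        coop[:] = coop.mean()              # pool and share equally

    T = steps * dt
    print(f"loner's time-average growth rate:       {np.log(loner) / T:+.4f}")
    print(f"cooperator's time-average growth rate:  {np.log(coop[0]) / T:+.4f}")
    print(f"growth rate of the expectation value (both cases): {mu:+.4f}")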



[23] O. Peters
        arXiv:1110.1578 (2011).

Karl Menger argued in 1934 that only bounded utility functions are permissible in the formalism of expected utility theory. I point out that his mathematical argument does not allow this conclusion. This is important from the dynamic perspective we are developing. As it turns out, bounded utility functions, when translated into ergodic observables, lead to finite-time singularities in the wealth process. To obtain a workable, realistic formalism, utility functions must not be bounded.



[22] O. Peters and A. Adamou
        arXiv:1101.4548 (2011).

We solve the equity premium puzzle by exploiting a fundamental constraint on the stochastic properties of stock markets. In a world of one risky asset (shares) and one risk-free asset (bonds), if investors do better in the long run by borrowing money (short bonds) and buying the shares, an instability will arise. Since any investor is better off borrowing money and buying shares, who should lend? Who should sell the shares? Increased borrowing will lead to inflated prices and higher interest rates, eventually destroying the advantage of a long-shares, short-bonds portfolio. We predict that it will be long-term optimal to simply hold shares, without leverage. Knowing that long-term performance (unlike performance of expectation values) is negatively affected by fluctuations, we arrive at a quantitative relationship between volatility and the excess growth rate of the expectation value of share prices compared to the growth rate of bond prices. This difference is usually called the equity premium, and our treatment is a solution of the equity premium puzzle.
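
The quantitative relationship mentioned at the end can be written down compactly (a condensed rendering under the GBM model, with illustrative numbers; see the paper for the full argument): the time-average growth rate at leverage l is g(l) = r + l(mu - r) - l^2 sigma^2/2, maximized at l* = (mu - r)/sigma^2, and the stability argument that pins l* at 1 then forces the equity premium mu - r to equal sigma^2.

    # Time-average growth rate of a portfolio with constant leverage l in a GBM
    # stock (drift mu, volatility sigma), remainder at the risk-free rate r:
    #   g(l) = r + l*(mu - r) - l**2 * sigma**2 / 2,   maximal at l* = (mu - r)/sigma**2.
    mu, r, sigma = 0.08, 0.04, 0.2     # illustrative numbers only, not fitted values

    def g(l):
        return r + l * (mu - r) - l**2 * sigma**2 / 2

    l_star = (mu - r) / sigma**2
    print(f"optimal leverage l* = {l_star:.2f}")
    print(f"if stability forces l* = 1, the equity premium is mu - r = sigma^2 = {sigma**2:.3f}")
    for l in [0.0, 0.5, 1.0, 1.5, 2.0]:
        print(f"  leverage {l:3.1f}:  time-average growth rate {g(l):+.4f}")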



[21] O. Peters and G. Pruessner
        arXiv:0912.2305v1 (2009).

One theory of how SOC works posits that SOC systems are self-tuned models with absorbing-state phase transitions. The idea is that the tuning and order parameters are coupled linearly, with an increase in the order parameter leading to a decrease, say, in the tuning parameter. Add to this a slow external forcing that drives the tuning parameter up, and you obtain a system that resides close to criticality. We look numerically for evidence of this mechanism. To our surprise, measurements of the order parameter conditioned on a given value of the tuning parameter do not reveal a sudden phase-transition-like pickup of the order parameter. Rather, the relationship looks smooth and linear.



[20] A. Adamou and O. Peters
       

        Significance 13, 3, 32--37 (2016).
        doi:10.1111/j.1740-9713.2016.00918.x

The paper is based on a talk Alexander Adamou gave at the Royal Statistical Society in 2015. We ask how change is described in mathematical models. A common, though big, assumption is that a physical system is well described by a mathematical model that permits an equilibrium state. In the equilibrium state macroscopic observables -- like economic inequality -- don't change. To reproduce, in an equilibrium model, an observation of changing observables, one must assume a change in external conditions. We point out that change can be thought of differently: a mathematical model can naturally lead to changing macroscopic observables if it does not possess an equilibrium distribution, or if its equilibrium distribution cannot be reached on relevant time scales. Simple models of wealth are a case in point: they are non-equilibrium models where increases in real-world economic inequality are the expectation rather than a puzzling observation.



[19] O. Peters and M. Gell-Mann
       

        Chaos 26, 023103 (2016).
        arXiv:1405.0585
        doi:10.1063/1.4940236

Mainstream economics uses mathematical models that resemble economic processes. It insists that expectation values of random variables in these models carry meaningful information. There is no immediate reason to believe this because expectation values need not have any physical meaning. Under what conditions, then, could the formalism work? We point out that some variables can be transformed so that the expectation value of the newly transformed variable is also its time average. The expectation value is then meaningful as an indication of what happens in the long run. Expectation values of variables that do not have this ergodic property should be treated with skepticism.



[18] O. Peters and W. Klein
        Phys. Rev. Lett. 110, 100603 (2013).
        Free PDF file
        arXiv:1209.4517
        doi:10.1103/PhysRevLett.110.100603

In [15] I demonstrated that leverage can be optimized by considering time-average growth rates rather than the growth rate of expectation values. The fact that the two are different (and non-zero) is a specific form of ergodicity breaking. In [15] time-average growth rates for a single system were considered. Here we prove that the time-average growth rate is the same for finite ensembles of non-interacting geometric Brownian motions (GBM) as for an individual GBM. The paper has quantitative statements and nice illustrations of the effects of finiteness of averaging times and finiteness of ensembles.



[17] O. Peters, K. Christensen and D. Neelin
        Eur. Phys. J. Special Topics 205, 147--158 (2012).
        Free PDF file
        doi:10.1140/epjst/e2012-1567-5

We were curious about two things: the relationship between column-integrated water vapor and precipitation under extreme conditions (e.g. hurricanes), and the distribution of event sizes measured not as a total released water column at a specific location (i.e. a time-integrated rain rate at some point in space) but as an intensity integrated over space. Hurricanes really do seem to be more than just a big rain shower, which isn't too surprising given the level of dynamical organization in a hurricane.



[16] O. Peters
       
        Phil. Trans. R. Soc. A 369, 1956, 4913--4931 (2011).
        arXiv:1011.4404v2
        doi:10.1098/rsta.2011.0065

In [15] I showed that under the geometric Brownian motion model leverage can be optimized by maximizing the time-average growth rate of wealth. It cannot be optimized by optimizing the growth rate of the expectation value of wealth. Since classical decision theory is based on expectation values, it has a problem here. While writing [15] I became curious how classical decision theory treated this most basic issue -- how to weigh an increase in risk against an increase in expected gain? The answer is expected utility theory, a framework that phrases the problem and its solution in a very complicated way from my perspective. A poorly constrained element of human psychology, the utility function, is the central part of the solution. The treatment is circular: general risk preferences are encoded in a utility function, and this function is then used to compute the specific risk preferences in the problem under consideration -- we have to input the answer to all questions of risk-reward balance type in order to obtain the answer to one question of this type. Why then have a formalism at all?
I became interested in the history of the development of this framework. How did we end up with such a complicated formalism? The origin of expected utility theory is Daniel Bernoulli's 1738 treatment of the St Petersburg paradox. I wondered whether the paradox could be resolved in the same way as the leverage problem. If I compute the expected exponential growth rate, instead of the expected additive rate of change, for the St Petersburg lottery, would I resolve the paradox? The answer, surprisingly, is yes. Expected utility theory was only developed because it predates any considerations of ergodicity. In 1738 no one questioned whether expectation values and time averages are necessarily the same. This question only arose when physics adopted stochastic reasoning some 200 years after economics, namely in the late 19th century.
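
A numerical sketch of the resolution described here (the paper's treatment is analytical; this just evaluates the relevant sum for a specific wealth): the expected payout of the St Petersburg lottery diverges, but the expected exponential growth rate of the player's wealth after paying a finite fee is finite, and it singles out a maximum fee worth paying that depends on the player's wealth.

    import numpy as np

    # St Petersburg lottery: toss a fair coin until the first heads; if that
    # happens on toss k, the payout is 2**(k-1). The expected payout diverges,
    # but the expected exponential growth rate of wealth is finite.
    def expected_growth_rate(wealth, fee, k_max=60):
        ks = np.arange(1, k_max + 1)
        probs = 0.5**ks
        payouts = 2.0**(ks - 1)
        return np.sum(probs * np.log((wealth - fee + payouts) / wealth))

    wealth = 100.0
    for fee in [2.0, 4.0, 6.0, 8.0]:
        g = expected_growth_rate(wealth, fee)
        print(f"wealth {wealth:.0f}, fee {fee:.0f}:  expected exponential growth per round = {g:+.4f}")
    # The largest fee with non-negative growth is the most this player should pay;
    # it is finite and increases with the player's wealth.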



[15] O. Peters
       
        Quant. Fin. 11, 11, 1593--1602 (2011).
        arXiv:0902.2965v2
        doi: 10.1080/14697688.2010.513338

I show that under the geometric Brownian motion (GBM) model leverage can be optimized by maximizing the time-average growth rate of wealth. It cannot be optimized by optimizing the growth rate of the expectation value of wealth. This is the case because wealth does not have the ergodic property that its time average converges to its expectation value.
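
A quick simulation of this statement (illustrative parameters of my choosing): along a single long GBM market trajectory, the realized growth rate of a continuously rebalanced leveraged portfolio peaks at an intermediate leverage, while the growth rate of the expectation value only keeps increasing with leverage.

    import numpy as np

    # One long realization of GBM market returns; a portfolio holding leverage l
    # in the market and the rest at the risk-free rate r, rebalanced every step.
    rng = np.random.default_rng(2)
    mu, sigma, r = 0.05, 0.3, 0.0
    dt, T = 0.01, 2_000.0
    steps = int(T / dt)

    market = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(steps)

    for l in [0.0, 0.25, 0.5, 0.75, 1.0, 1.5]:
        realized = np.sum(np.log(1 + r * dt + l * (market - r * dt))) / T
        expectation_rate = r + l * (mu - r)          # growth rate of the expectation value
        print(f"l = {l:4.2f}:  realized growth {realized:+.4f}   expectation-value growth {expectation_rate:+.4f}")
    # Theory: realized growth is approximately r + l(mu - r) - l^2 sigma^2 / 2,
    # maximal at l* = (mu - r) / sigma^2 (about 0.56 here).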



[14] O. Peters, A. Deluca, A. Corral, J. D. Neelin and C. E. Holloway
        J. Stat. Mech. P11030 (2010).
        Free PDF file
        arXiv:1010.4201
        doi: 10.1088/1742-5468/2010/11/P11030

We were curious whether the avalanche exponents for rainfall, first reported in [1], would look similar when measured in different locations. Overall, this is the case. We measured exponents all over the world, using data sets from the Atmospheric Radiation Measurement (ARM) network. Exponents are identical within statistical accuracy; only the measurement technique (optical rain gauge vs. micro rain radar) makes a noticeable difference. Cut-offs in the distributions of rain event sizes are as one might expect based on local climatology -- for example, tropical locations support larger events.



[13] O. Peters and M. Girvan,
        J. Stat. Phys. 141, 1, 53--59 (2010).
        Free PDF file
        arXiv:0902.1956v2
        doi:10.1007/s10955-010-0039-0
       
Erratum

Our work in [5] showed that a mechanism for SOC that keys off the order parameter only does not automatically achieve universality. It is possible for a system to tune itself to its critical point in this way, but one would not observe universal finite-size scaling. We asked ourselves what sort of mechanism might achieve universal finite-size scaling, and we came up with the following answer: couple the tuning parameter to a quantity that diverges at criticality. Specifically, if we introduce a coupling between the temperature and the susceptibility of an Ising model so that the tuning parameter maximizes the susceptibility, then, in a finite system, the reduced temperature will approach the critical point with the same scaling behavior as the location of the susceptibility maximum, t ~ L^{-1/ν}. This in turn sets the finite-size scaling exponents of all other observables to their universal values.



[12] D. Neelin, O. Peters, J. W.-B. Lin, K. Hales and C. Holloway in "Stochastic Physics and Climate Modeling", edited by T. Palmer and P. Williams. Cambridge University Press (2010), Chap. 16.

This is a re-publication of [8] as a book chapter.



[11] O. Peters and D. Neelin
        Int. J. Mod. Phys. B 23, 28--29, 5453--5465 (2009).
        doi: 10.1142/S0217979209063778

We discuss how a separation of scales can lead to scale-freedom, try out a method inspired by equilibrium critical phenomena to find the critical point in a presumed phase transition for atmospheric convection, and take a first look at hurricanes -- what was the relationship between column water vapor and precipitation during hurricane Katrina?



[10] O. Peters, D. Neelin and S. Nesbitt
        J. Atmos. Sci. 66, 9, 2913--2924 (2009).
        doi: 10.1175/2008JAS2761.1

This paper explores the spatial structure of convection clusters. The simplest model for clustering is percolation theory, and we ask to what extent this null model predicts the statistics of convection clusters. The most exciting finding in this paper is a prediction from gradient percolation: take a 2-d blank lattice, color in a fraction p of randomly chosen cells in each column, and let p decrease linearly from 1 in the left-most column to 0 in the right-most column. The boundary of the largest cluster of connected colored-in sites is a fractal with dimension 4/3. It is a well-known (and previously unexplained) fact that cloud boundaries are fractals with dimension 4/3 (first reported by Shaun Lovejoy in 1982), but before our paper it had not been pointed out that this is a prediction of the null model -- percolation theory.
We also find that near the transition to convection the size distribution of convective clusters looks a lot like the size distribution for critical percolation.



[9] D. Neelin, O. Peters and K. Hales
        J. Atmos. Sci. 66, 8, 2367--2384 (2009).
        doi: 10.1175/2009JAS2962.1

This paper looks in detail at the statistical properties of convection that we found in [6]. Analyses are conditioned on tropospheric and sea-surface temperature, which sharpens all features significantly. We identify convective transition points for different tropospheric temperatures, which is interesting in the context of climate change because we can now directly observe an important robust property of convection under a change of temperature.



[8] D. Neelin, O. Peters, J. W.-B. Lin, K. Hales and C. Holloway
        Phil. Trans. R. Soc. A 366, 2581--2604 (2008).
        doi: 10.1098/rsta.2008.0056

We explore how the idea to model the onset of convection as a self-tuning continuous phase transition can be integrated into existing climate model components for convection.



[7] G. Pruessner and O. Peters,
        Phys. Rev. E 77, 048102 (2008).
        doi: 10.1103/PhysRevE.77.048102

We clarify comments that were made following the publication of [5].



[6] O. Peters and D. Neelin,
        Nature Phys. 2, 393--396 (2006).
        Free PDF file
        doi: 10.1038/Nphys314

In [1] we reported an observation of a scale-free event-size distribution for rainfall. We speculated that a self-organized critical mathematical model of some sort would capture this aspect. Unfortunately, this seemed like a dead end at the time. We knew we could use a self-organized critical model, but what's the point if it only reproduces an observation we have already made? For this paper [6] we made a list of properties we would expect to observe if convection really were well described by a self-organized critical model similar to the particle models (BTW, Manna, Oslo) that are related to underlying absorbing-state phase transitions. Specifically, we predicted that there should be an underlying transition that could be observed by conditioning observations on some measure of instability. We chose column-integrated water vapor for this measure and found that the rain rate can then be interpreted as an order parameter for a phase transition. At a critical value of water vapor the system becomes unstable and convection suddenly sets in, as reflected in a strong pick-up of the rain rate. This pick-up looks as one would expect for a continuous phase transition.



[5] G. Pruessner and O. Peters,         Phys. Rev. E 73, 025106(R) (2006).
        Free PDF file
        doi: 10.1103/PhysRevE.73.025106

We investigate a mechanism for obtaining SOC. This mechanism is phrased in terms of absorbing-state phase transitions but is formulated sufficiently generally that it should apply to any continuous phase transition. We try it out in the 2-d Ising model because the model is fully solved and we can compute analytically any quantities of interest. We find that the mechanism is capable of generating criticality but not universality. There is strong evidence for universal behavior in SOC, and even for SOC models having finite-size scaling identical to that of the corresponding underlying absorbing-state phase transitions. We conclude, therefore, that something is missing from the absorbing-state mechanism for SOC.



[4] O. Peters and K. Christensen,
        J. Hydrol. 328, 46--55 (2006).
        doi: 10.1016/j.jhydrol.2005.11.045

We summarize the SOC-related statements about rainfall of [1] and [2] and look at the effect of sensitivity thresholds on the apparent fractal dimension of rain durations. Different studies came to different conclusions about the value of this dimension, and the apparent disagreement may be the result of different measurement techniques and corresponding sensitivity thresholds.



[3] K. Christensen, N. Moloney, O. Peters and G. Pruessner,
        Phys. Rev. E. 70, 067101(R) (2004).
        doi: 10.1103/PhysRevE.70.067101

We ask whether the critical exponents, observed numerically through finite-size scaling, are identical in the Oslo model and in its absorbing-state counterpart, i.e. an Oslo model with periodic boundary conditions and particle conservation. The answer is a resounding yes. Even the critical particle densities are identical to 4 significant figures.



[2] O. Peters and K. Christensen,
        Phys. Rev. E. 66, 036120 (2002).
        doi: 10.1103/PhysRevE.66.036120

This paper is an extended version of [1].



[1] O. Peters, C. Hertlein, and K. Christensen,
        Phys. Rev. Lett. 88, 018701 (2002).
        doi: 10.1103/PhysRevLett.88.018701

We define event sizes as rain rates integrated over a period of consecutive non-zero measurements at some location in space. Data from a highly sensitive micro rain radar are analyzed, and we find a scale-free distribution of rain events. The basic idea is the following: the atmosphere is a slowly driven system (heated on the ground by the sun and moistened through evaporation from the oceans). Over time, heating at low levels and radiative cooling aloft, combined with increasing moisture, lead to instability. The result is convection, in its most dramatic form leading to cumulonimbus cloud towers and thunderstorms. Convection is a fast process compared to the speed of the driving mechanism. This places the system as a whole close to the onset of convection and rainfall, a fact famously pointed out by Arakawa and Schubert in 1974. If the transition from a quiescent atmosphere to a convectively active atmosphere is well described as a continuous phase transition, then critical properties of this transition may be observed in the statistics of rainfall. These would include a scale-free (power-law) distribution of event sizes, and that's what we observe.
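
The event definition used here is simple to state in code (a schematic version with made-up numbers, not the actual radar analysis pipeline): scan the rain-rate time series and sum the rate over each maximal run of consecutive non-zero measurements.

    import numpy as np

    # Schematic event-size extraction: an event is a maximal run of consecutive
    # non-zero rain-rate measurements; its size is the rate summed over the run.
    def event_sizes(rain_rate, dt=1.0):
        sizes, current = [], 0.0
        for r in rain_rate:
            if r > 0:
                current += r * dt           # accumulate within an ongoing event
            elif current > 0:
                sizes.append(current)       # a zero measurement closes the event
                current = 0.0
        if current > 0:
            sizes.append(current)
        return np.array(sizes)

    # Tiny made-up series with three events of sizes 5, 2 and 7 (rate units * dt):
    print(event_sizes([0, 1, 4, 0, 0, 2, 0, 3, 4, 0]))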






