Séminaire de probabilités XLVIII

Lecture Notes in Mathematics 2168

Catherine Donati-Martin, Antoine Lejay, Alain Rouault (Editors)

Séminaire de Probabilités XLVIII

Lecture Notes in Mathematics
Editors-in-Chief: J.-M. Morel, Cachan; B. Teissier, Paris
Advisory Board: Camillo De Lellis, Zurich; Mario di Bernardo, Bristol; Michel Brion, Grenoble; Alessio Figalli, Zurich; Davar Khoshnevisan, Salt Lake City; Ioannis Kontoyiannis, Athens; Gabor Lugosi, Barcelona; Mark Podolskij, Aarhus; Sylvia Serfaty, New York; Anna Wienhard, Heidelberg

More information about this series at http://www.springer.com/series/304

Editors:
Catherine Donati-Martin, Laboratoire de Mathématiques, Université de Versailles-St Quentin, Versailles, France
Antoine Lejay, Campus scientifique, IECL, Vandoeuvre-lès-Nancy, France
Alain Rouault, Laboratoire de Mathématiques, Université de Versailles-St Quentin, Versailles, France

ISSN 0075-8434 (Lecture Notes in Mathematics); ISSN 1617-9692 (electronic)
ISBN 978-3-319-44464-2; ISBN 978-3-319-44465-9 (eBook)
DOI 10.1007/978-3-319-44465-9
Mathematics Subject Classification (2010): 60G, 60J, 60K

© Springer International Publishing Switzerland 2016

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made.

Printed on acid-free paper. This Springer imprint is published by Springer Nature. The registered company is Springer International Publishing AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Preface

After the exceptional 47th volume of the Séminaire de Probabilités dedicated to Marc Yor, we continue in this 48th volume with the usual formula: some of the contributions are related to talks given during the Journées de Probabilités held in Luminy (CIRM) in 2014 and in Toulouse in 2015, and the other ones come from spontaneous submissions. Apart from the traditional topics such as stochastic calculus, filtrations and random matrices, this volume continues to explore the subject of peacocks, recently introduced in previous volumes. Other particularly interesting papers involve harmonic measures, random fields and loop soups. We hope that these contributions offer a good sample of the mainstreams of current research on probability and stochastic processes, in particular those active in France.

We would like to remind the reader that the website of the Séminaire is http://portail.mathdoc.fr/SemProba/ and that all the articles of the Séminaire from Volume I in 1967 to Volume XXXVI in 2002 are freely accessible from the web site http://www.numdam.org/numdam-bin/feuilleter?j=SPS. We thank the Cellule MathDoc for hosting all these articles within the NUMDAM project.

Versailles, France: Catherine Donati-Martin
Vandoeuvre-lès-Nancy, France: Antoine Lejay
Versailles, France: Alain Rouault

Contents

Root to Kellerer (Mathias Beiglböck, Martin Huesmann, and Florian Stebegg)
Peacocks Parametrised by a Partially Ordered Set (Nicolas Juillet) 13
Convex Order for Path-Dependent Derivatives: A Dynamic Programming Approach (Gilles Pagès) 33
Stability Problem for One-Dimensional Stochastic Differential Equations with Discontinuous Drift (Dai Taguchi) 97
The Maximum of the Local Time of a Diffusion Process in a Drifted Brownian Potential (Alexis Devulder) 123
A Link Between Bougerol's Identity and a Formula Due to Donati-Martin, Matsumoto and Yor (Mátyás Barczy and Peter Kern) 179
Large Deviation Principle for Bridges of Sub-Riemannian Diffusion Processes (Ismaël Bailleul) 189
Dévissage of a Poisson Boundary Under Equivariance and Regularity Conditions (Jürgen Angst and Camille Tardif) 199
Weitzenböck and Clark-Ocone Decompositions for Differential Forms on the Space of Normal Martingales (Nicolas Privault) 231
On the Range of Exponential Functionals of Lévy Processes (Anita Behme, Alexander Lindner, and Makoto Maejima) 267
t-Martin Boundary of Killed Random Walks in the Quadrant (Cédric Lecouvey and Kilian Raschel) 305
On the Harmonic Measure of Stable Processes (Christophe Profeta and Thomas Simon) 325
On High Moments of Strongly Diluted Large Wigner Random Matrices (Oleksiy Khorunzhiy) 347
Dyson Processes on the Octonion Algebra (Songzi Li) 401
Necessary and Sufficient Conditions for the Existence of $\alpha$-Determinantal Processes (Franck Maunoury) 423
Filtrations of the Erased-Word Processes (Stéphane Laurent) 445
Projections, Pseudo-Stopping Times and the Immersion Property (Anna Aksamit and Libo Li) 459
Stationary Random Fields on the Unitary Dual of a Compact Group (David Applebaum) 469
On the Spatial Markov Property of Soups of Unoriented and Oriented Loops (Wendelin Werner) 481

Root to Kellerer

Mathias Beiglböck, Martin Huesmann, and Florian Stebegg

Abstract We revisit Kellerer's Theorem, that is, we show that for a family of real probability distributions $(\mu_t)_{t \in [0,1]}$ which increases in convex order there exists a Markov martingale $(S_t)_{t \in [0,1]}$ s.t. $S_t \sim \mu_t$. To establish the result, we observe that the set
of martingale measures with given marginals carries a natural compact Polish topology. Based on a particular property of the martingale coupling associated to Root's embedding, this allows for a relatively concise proof of Kellerer's theorem. We emphasize that many of our arguments are borrowed from Kellerer (Math. Ann. 198:99-122, 1972), Lowther (Limits of one dimensional diffusions. ArXiv e-prints, 2007), Hirsch-Roynette-Profeta-Yor (Peacocks and Associated Martingales, with Explicit Constructions. Bocconi & Springer Series, vol. 3, Springer, Milan; Bocconi University Press, Milan, 2011), and Hirsch et al. (Kellerer's Theorem Revisited, vol. 361, Prépublication Université d'Évry, Columbus, OH, 2012).

Mathematics Subject Classification (2010): Primary 60G42, 60G44; Secondary 91G20

M. Beiglböck: Institut für Stochastik und Wirtschaftsmathematik, Technische Universität Wien, Wiedner Hauptstraße 8, 1040 Wien, Austria. e-mail: mathias.beiglboeck@tuwien.ac.at
M. Huesmann: Institut für Angewandte Mathematik, Rheinische Friedrich-Wilhelms-Universität Bonn, Endenicher Allee 60, 53115 Bonn, Germany. e-mail: huesmann@iam.uni-bonn.de
F. Stebegg: Department of Statistics, Columbia University, 1255 Amsterdam Avenue, New York, 10025 NY, USA. e-mail: florian.stebegg@columbia.edu

© Springer International Publishing Switzerland 2016. C. Donati-Martin et al. (eds.), Séminaire de Probabilités XLVIII, Lecture Notes in Mathematics 2168, DOI 10.1007/978-3-319-44465-9_1

1 Introduction

1.1 Problem and Basic Concepts

We consider couplings between probabilities $(\mu_t)_{t \in T}$ on the real line, where $t$ ranges over different choices of time sets $T$. Throughout we assume that all $\mu_t$ have a first moment. We represent these couplings as probabilities (usually denoted by $\pi$ or $P$) on the canonical space $\Omega$ corresponding to the set of times under consideration. More precisely, $\Omega$ may be $\mathbb{R}^T$ or the space $D$ of càdlàg functions if $T = [0,1]$. In each case we will write $(S_t)$ for the canonical process and $F = (F_t)$ for the natural filtration. $\Pi((\mu_t))$ denotes the set of probabilities $P$ for which $S_t \sim_P \mu_t$. $M((\mu_t))$ will denote the subset of probabilities ("martingale measures") for which $S$ is a martingale w.r.t. $F$, resp. the right-continuous filtration $F^+ = (F_{t+})_{t \in [0,1]}$ in the case $\Omega = D$. To have $M((\mu_t)) \neq \emptyset$ it is necessary that $(\mu_t)$ increases in convex order, i.e. $\mu_s(\varphi) \leq \mu_t(\varphi)$ for all convex functions $\varphi$ and $s \leq t$. This is an immediate consequence of Jensen's inequality. We denote the convex order by $\preceq$. Our interest lies in the fact that this condition is also sufficient, and we shall from now on assume that $(\mu_t)_{t \in T}$ increases in convex order, i.e. that $(\mu_t)_{t \in T}$ is a peacock in the terminology of [5, 6].

The proof that $M((\mu_t)_{t \in T}) \neq \emptyset$ gets increasingly difficult as we increase the cardinality of the set of times under consideration. If $T = \{1, 2\}$, this follows from Strassen's Theorem [18] and we take this result for granted. The case $T = \{1, \ldots, n\}$ immediately follows by composition of one-period martingale measures $\pi_k \in M(\mu_k, \mu_{k+1})$. If $T$ is not finite, the fact that $M((\mu_t)_{t \in T}) \neq \emptyset$ is less immediate, and to establish that $M((\mu_t)_{t \in T})$ contains a Markov martingale is harder still; these results were first proved by Kellerer in [11, 12] and now go under the name of Kellerer's theorem. We recover these classical results in a framework akin to that of martingale optimal transport.

1.2 Comparison with Kellerer's Approach

Kellerer [11, 12] works with peacocks indexed by a general totally ordered index set $T$ and the corresponding natural filtration $F$. He establishes compactness of martingale measures on $\mathbb{R}^T$ which correspond to the peacock $(\mu_t)_{t \in T}$. Then Strassen's theorem allows him to show the existence of a martingale with given marginals $(\mu_t)_{t \in T}$ for general $T$. To show that $M((\mu_t)_{t \in T})$ also contains a Markov martingale is more involved. On a technical level, an obstacle is that the property of being a Markovian martingale measure is not suitably closed. Kellerer circumvents this difficulty based on a stronger notion of Markov kernel,
the concept of Lipschitz or Lipschitz-Markov kernels, on which all known proofs of Kellerer's Theorem rely. The key step to showing that $M((\mu_t)_{t \in T})$ contains a Markov martingale is to establish the existence

Spatial Markov Property of Loop-Soups

$F_1$ (this corresponds to $\eta$ without the first and last jumps of each excursion). This can be interpreted as a spatial Markov property of the occupation field on oriented edges (the random function that assigns to each oriented edge the total number of jumps of the soup along this edge) of the $\alpha = 1$ soup of oriented loops. We will discuss this again at the end of this section.

In the same spirit, we can in fact "symmetrize" Proposition 1 also when $F_2$ is a subset of the complement of $F_1$. Let us then define the collection of crossings $\eta_{1 \to 2}$ to be the parts of the loops in the loop-soup of the type $(a_0, e_1, \ldots, a_n)$ with $a_0 \in F_1$, $a_n \in F_2$ and $a_1, \ldots, a_{n-1} \in D \setminus (F_1 \cup F_2)$. We also define $\eta_{2 \to 1}$ similarly, and note that there are as many crossings from $F_1$ to $F_2$ as there are crossings from $F_2$ to $F_1$. Let $X$ (resp. $X'$) denote the vector of endpoints of $\eta_{1 \to 2}$ (resp. $\eta_{2 \to 1}$) and $Y$ (resp. $Y'$) the vector of starting points of $\eta_{2 \to 1}$ (resp. $\eta_{1 \to 2}$). Then, we can note that $X$ and $Y$ are exactly the same as the ones defined in Proposition 1, while $X'$ and $Y'$ correspond to those that one obtains when interchanging $F_1$ and $F_2$. Furthermore, $\eta_{1 \to 2}$ and $\eta_{2 \to 1}$ are fully determined by $\eta$ (or alternatively by the symmetric family $\eta'$ of excursions outside of $F_1$ that reach $F_2$). It follows readily from Proposition 1 that:

Proposition 2 Conditionally on $\eta_{1 \to 2}$ and on $\eta_{2 \to 1}$, the missing parts of the loops that they are part of (these are the loops of the $\alpha = 1$ soup of oriented loops that intersect both $F_1$ and $F_2$) are described by two independent unordered bridges with conditional distributions $B^{D \setminus F_1}_{X,Y}$ and $B^{D \setminus F_2}_{X',Y'}$.

Note that the other loops in the loop-soup (i.e. the loops that do not intersect at least one of the two sets $F_1$ or $F_2$) are just described by a loop-soup in the complement of $F_1$ and a loop-soup in the complement of $F_2$, that are coupled to share exactly the same loops that stay in $D \setminus (F_1 \cup F_2)$.

Let us now prove Proposition 1.

Proof Let us consider a family $E$ of $N$ excursions, such that $P(\eta = E) > 0$ and such that the $N$ excursions $E_1, \ldots, E_N$ of $E$ are all different. Then if $\eta = E$ and $\tilde{L} = L$, all the loops in $L$ are simple, and they occur necessarily exactly once (and not more). Hence, for such an $L$, the probability that $\tilde{L} = L$ is proportional to $g^{-n(L)}$, where $n(L)$ is the sum of the lengths of the loops in $L$ (and the proportionality constant does not depend on $L$). On the other hand, if $X$ and $Y$ are the vectors of end-points and starting points of $E$, the $B^{D \setminus F_1}_{X,Y}$-probability to sample an unordered bridge that gives rise exactly to $L$ when concatenating it to $E$ is proportional to $g^{-K}$ (where $K = n(L) - n(E)$ is the total length of the generalized bridge), because there is just one permutation per bridge that works. It therefore follows immediately that, conditionally on $\eta = E$, the distribution of the missing bridges is indeed $B_{X,Y}$ in $D \setminus F_1$.

Instead of treating directly the case of multiple occurrences of the same excursions in $\eta$, we will use the following trick (a similar idea can be used to show the fact that the loops erased during Wilson's algorithm correspond exactly to an oriented loop-soup, see for instance [19]). We choose a very large integer $W$ (that is going to tend to infinity), and we decide to replace the graph by the graph $\mathcal{G}_W$, which is obtained by keeping the same set of vertices, but where each edge is replaced by $W$ copies of itself. In this way, each site now has $gW$ outgoing edges instead of $g$. There is of course a straightforward relation between random walks, loops and bridges on $\mathcal{G}_W$ and on the original graph. For instance, a loop-soup (resp. bridge, resp. excursion) on $\mathcal{G}_W$ is directly projected on a loop-soup (resp. bridge, resp. excursion) on the original graph. Let us couple loop-soups with intensity $\alpha = 1$ in all of the $\mathcal{G}_W$'s on the same probability space, in such a way that the projections of the loop-soups in $\mathcal{G}_W$ onto the original graph (in the sense described above) are the same for all $W$'s. We fix also $F_1$, $F_2$, and define (with obvious notation) $\tilde{L}^W$, $\eta^W$, etc. Note that the vectors of extremal points $X$ and $Y$ are then the same for all $\eta^W$'s.

We can also note that the probability that some edge is used more than once in the loop-soup tends to $0$ as $W \to \infty$. The probability that all excursions in $\eta^W$ are different therefore tends to $1$ as $W \to \infty$. But conditionally on the fact that all excursions in $\eta^W$ are different (applying our previous result to $\mathcal{G}_W$), we know that the conditional distribution of $\tilde{L}^W \setminus \eta^W$ given $\eta^W$ is the bridge probability measure from $X^W$ to $Y^W$ in $D \setminus F_1$. Projecting this onto the original graph, we get that the conditional distribution of $\beta$ given $\eta^W$ (on the event that in $\eta^W$, no two excursions are the same) is the unordered bridge measure $B_{X,Y}$ in $D \setminus F_1$. If $U(W)$ is the event that no excursion of $\eta^W$ appears twice, we therefore get that, conditionally on $\eta = E$ and $U(W)$, the conditional distribution of $\beta$ is the unordered bridge measure $B_{X,Y}$ in $D \setminus F_1$. We now just let $W \to \infty$, which concludes the proof of the proposition.

3.2 Partial Resampling of Soups of Unoriented Loops at c = 1

Let us now come back to the setting where the graph is unoriented, and consider a soup of unoriented loops with intensity $1$ (recall that this corresponds to $c = 1$ or $\alpha = 1/2$, i.e. to a soup of oriented loops with intensity $1/2$ where we forget the orientation of each loop). We denote the collection of unoriented loops that intersect both $F_1$ and $F_2$ by $\tilde{L} = (\tilde{L}_1, \ldots, \tilde{L}_M)$, the corresponding collection of (unoriented) excursions by $\tilde{\eta} = (\tilde{\eta}_1, \ldots, \tilde{\eta}_N)$ and the endpoints of these $N$ excursions by $Z = (Z_1, \ldots, Z_{2N})$. The missing parts of the (unoriented) loops are unoriented paths that join each $Z_i$ to exactly one other $Z_j$, so that $\beta$ is an unordered $Z$-bridge in $D \setminus F_1$. Note again that it is intuitively possible to explore the excursions $\tilde{\eta}_j$ "starting" from their intersections with $F_1$ in both directions, until hitting $F_2$ (and in this way, one did not yet discover the missing parts $\beta$).

Proposition 3 The conditional distribution of $\beta$ given $\tilde{\eta}$ is exactly the unordered unoriented bridge measure $B_Z$ in $D \setminus F_1$.

Just as in the oriented case, we stress that an important feature in this statement is that this conditional distribution is a measurable function of the vector $Z$ (the other information on the excursions is not needed). We will further comment on this in the next subsection.

Proof We will follow the same idea as in the proof of the oriented case. As in the oriented case, when the $N$ pieces $\tilde{E}_1, \ldots, \tilde{E}_N$ of $E$ are all different, the statement is almost immediate (for each good ordered bridge, only one pairing works in order to complete $E$ into $L$, and the probability to complete these $N$ pieces into $L$ is therefore proportional to $g^{-K}$, where $K$ is the difference between the total number of jumps in the loop-configuration and in $E$). We then use the same trick with copying each edge a large number of times. The very same argument then
works, almost word for word.

Fig. Illustration of Proposition 3: discovering (a) the unoriented excursions away from the right part that reach the small square, (b) sampling the three unoriented bridges in the complement of the small square.

3.3 Spatial Markov Properties

The particular case where $F_2$ is the complement of $F_1$ is also of interest for the soup of unoriented loops. Let us for instance describe how things work for the occupation times of loop-soups (which is the main focus of the papers of Le Jan [9]). If one then conditions on the numbers of jumps of the loop-soup on all edges between a point in $F_1$ and a point of $F_2$ (in either direction, the loops being unoriented there is anyway no direction), then the conditional distribution of the parts $\beta$ in $F_2$ of the loops that intersect both $F_1$ and $F_2$ is described by Proposition 3, and it is an unordered unoriented bridge in $F_2$ (and it is in fact fully described by the knowledge of the number of jumps along the edges between $F_1$ and $F_2$, i.e. this conditional distribution is a function of these numbers of jumps on the edges between $F_1$ and $F_2$). But the situation is symmetric and we can interchange the roles of $F_1$ and $F_2$; we therefore conclude that given $\beta$ and the numbers of jumps along the edges between $F_1$ and $F_2$, the conditional distribution of $\eta'$, defined to be the collection $\eta$ where one has removed the two extremal jumps of each $\eta_j$ (these are the jumps between $F_1$ and $F_2$), is that of an unordered unoriented bridge in $F_1$ (and the law of this bridge is also fully described by the number of jumps between $F_1$ and $F_2$). In other words, when one conditions on these numbers of jumps along the edges between $F_1$ and $F_2$, we can enumerate these jumps (using some deterministic lexicographic rule) by $(Z_j, Z'_j)_{j \leq 2N}$, where $Z_j \in F_1$ and $Z'_j \in F_2$. Then, $\eta'$ and $\beta$ are conditionally independent unordered bridges, respectively following the unordered bridge measures $B^{D \setminus F_2}_Z$ and $B^{D \setminus F_1}_{Z'}$. In particular, when adding on top of this the loop-soups in $F_1$ and the loop-soups in $F_2$, it follows that conditionally on the occupation times (i.e. on the number of jumps $N_e$ across each edge $e$) on the edges between $F_1$ and $F_2$, the occupation times on sites and edges in $F_1$ are independent of the occupation times on sites and edges in $F_2$. We can rephrase this property in the following sentence: The occupation time field on edges of the soup of unoriented loops for $c = 1$ does satisfy the spatial Markov property.

We can note that if $U$ is a non-negative function of the occupation time field on the edges, of the form $U((N_e)) = \prod_e u_e(N_e)$, such that the expectation of $U$ (for the $c = 1$ loop-soup) is equal to one, and if we define the new probability measure $Q$ on occupation times on edges by $dQ/dP((N_e)) = U((N_e))$, then the spatial Markov property also holds for $Q$. This can be used to represent a modification of the Markov chain (i.e. different walks with non-uniform jump probabilities).

If we consider an unoriented graph, but interpret it as an oriented graph (each unoriented edge defines an oriented edge in each direction), on which we define an $\alpha = 1$ soup of oriented loops, then we can also reformulate the results of Sect. 3.1 in a similar way. More precisely, for each edge $e$, we can define the total number of jumps $N_1(e)$ by the soup in one direction of $e$, and $N_2(e)$, the number of jumps in the opposite direction. Then, if we define $N_e := (N_1(e), N_2(e))$, this two-component occupation time field on edges of the $\alpha = 1$ soup of oriented loops satisfies the spatial Markov property in the same sense as above.

Let us now come back to the study of the loops themselves, and not just of the cumulated occupation time of the soup. As in the oriented case, we can also (when $F_2$ is a subset of the complement of $F_1$) rephrase Proposition 3 in a more symmetric way, involving the crossings between $F_1$ and $F_2$. We define $\eta_{1 \leftrightarrow 2}$ as the set of (unoriented) parts of loops in the $c = 1$ loop-soup that join a point of $F_1$ to a point of $F_2$ and otherwise stay in the complement of $F_1 \cup F_2$, and we denote by $Z$ the vector of endpoints of these crossings in $F_2$, and by $Z'$ the set of endpoints in $F_1$. Then:

Proposition 4 Conditionally on $\eta_{1 \leftrightarrow 2}$, the missing parts of the unoriented loops that these crossings are part of (these are the loops in the loop-soup that intersect both $F_1$ and $F_2$) are described by two independent unordered unoriented bridges with respective conditional distributions $B^{D \setminus F_1}_Z$ and $B^{D \setminus F_2}_{Z'}$.

The figure that illustrates the corresponding result in the Brownian case can also be used to illustrate this result.

It is also easy to generalize Proposition 2 and Proposition 4 to more than two sets $F_1$ and $F_2$ (and have instead $n$ disjoint sets $F_1, \ldots, F_n$). For instance, in the unoriented case, one then conditions on the set $\eta_\leftrightarrow$ of all crossings from any $F_i$ to any other $F_j$ that also stay in the complement of all the other $F_k$'s. These crossings define $n$ vectors $Z^1, \ldots, Z^n$ (where $Z^j$ is a list of the even number of endpoints on $F_j$ of the aforementioned crossings). Conditionally on $\eta_\leftrightarrow$, the missing parts of the loops (that are the loops in the loop-soup that touch at least two different $F_j$'s) are described by $n$ conditionally independent unordered unoriented bridges with respective distributions $B^{D' \cup F_j}_{Z^j}$ (where $D' = D \setminus \cup_i F_i$) for $j \leq n$. Such decompositions of the loops in the soup that intersect disjoint compact sets into crossings plus conditionally independent unordered bridges can be immediately transcribed to the case of Brownian loops on the cable system associated to this graph as studied in [10]; we leave this as a simple exercise to the reader.

This is all of course closely related to the Markov property of the Gaussian Free Field, as well as to Dynkin's isomorphism theorem [4], via the relation between the square of the GFF and the loop-soup (see e.g. [9] and the references therein for background). With such Markovian-type properties in hand, a natural next step is to define random
sets that play the role of stopping times for one-dimensional Markov processes. In the setting of the discrete GFF, these are the local sets as defined in [15], which have turned out to be very useful concepts. Just as for one-dimensional stopping times, there are several possible ways to define them, depending on what precise filtration one considers. In the present case (we here describe the definitions in the unoriented loop-soup for $c = 1$, but the oriented case would be almost identical), one can for instance say that:

• A random set of points $F$ is a stopping set for the occupation time field filtration, if for any $F_1$, the event $\{F = F_1\}$ is measurable with respect to the occupation time field on all edges adjacent to $F_1$.

• A random set of points $F$ is a stopping set for the loop-soup filtration, if for any $F_1$, the event $\{F = F_1\}$ is measurable with respect to the trace of the loop-soup on all edges adjacent to $F_1$ (i.e. it is measurable with respect to the set of loops that are fully contained in $F_1$ and the set of excursions $\eta$ defined above, when $F_2$ is the complement of $F_1$).

• A random set of points $F$ is a stopping set for the loop-soup, if for any $F_1$ such that $P(F = F_1) > 0$, conditionally on the event $\{F = F_1\}$, the distribution of the loop-soup outside of $F_1$ consists of the union of an independent loop-soup in the complement $F_2$ of $F_1$ and of a set of bridges in $F_2$, with law described as above via the end-points of the excursions $\eta$ in $F_1$.

Clearly, the first definition implies the second one, which implies the third one by Proposition 3 (the third property for the first two definitions can be viewed as a "strong Markov property" of these fields), but the converse is not true (the last definition allows the use of "external randomness" in the definition of $F$, while the second does not, and the second one allows features of individual loops, while the first does not).

3.4 Brownian Loop-Soup Decompositions

The previous results have almost identical counterparts in the setting of oriented Brownian loop-soups with intensity $\alpha = 1$ and unoriented Brownian loop-soups with intensity $c = 1$. Suppose that $D$ is an open set in $d$-dimensional space, such that the (Dirichlet) Green's function in $D$ is finite (away from the diagonal). Suppose that $F_1$ and $F_2$ are two disjoint compact sets in $D$, that are both non-polar for Brownian motion (i.e. Brownian motion started away from these sets has a non-zero probability to hit them). Then, we can again define:

– The law of unordered oriented Brownian bridges in $D \setminus F_1$ from a finite family $X = (x_1, \ldots, x_n)$ of points to another such family $Y = (y_1, \ldots, y_n)$, and the law of unordered unoriented $Z$-Brownian bridges in $D \setminus F_1$ from a finite family of points $Z = (z_1, \ldots, z_{2n})$ to itself (in the latter case, points of $Z$ are paired, like in the random walk case). This works as long as all Green's functions involved are finite (which is the case as soon as $x_i \neq y_j$ for all $i, j$, and $z_i \neq z_j$ for all $i \neq j$).

– The set $\eta$ of $N$ oriented (resp. unoriented) excursions of the loops in an oriented (resp. unoriented) loop-soup with intensity $\alpha = 1$ (resp. $c = 1$) away from $F_2$ that reach $F_1$. In the oriented case, we call their endpoint vector $X = (X_1, \ldots, X_N)$ and their starting point vector $Y$, and in the unoriented case, we call $Z = (Z_1, \ldots, Z_{2N})$ the extremity vector.

Then, the Brownian counterparts of Proposition 1 and of Proposition 3 go as follows:

Proposition 5
– For the soup of oriented Brownian loops with $\alpha = 1$: Conditionally on $\eta$, the missing pieces of the loops (that the pieces of $\eta$ are part of) are distributed like an unordered Brownian bridge from $X$ to $Y$ in $D \setminus F_1$.
– For the soup of unoriented Brownian loops with $c = 1$: Conditionally on $\eta$, the missing pieces of the loops are distributed like an unordered unoriented $Z$-Brownian bridge in $D \setminus F_1$.

And as before, one can derive the more symmetric results: for instance, if $F_1$ and $F_2$ are two disjoint compact subsets of $D$, we can define the crossings from $F_1$ to $F_2$ and vice versa in the oriented case, and the crossings between $F_1$ and $F_2$ in the unoriented case. When one conditions on these crossings, one can then complete the picture with two conditionally independent unordered oriented bridges (in the oriented case) or with two conditionally independent unordered unoriented bridges (in the unoriented case). We illustrate this result in the two figures below (here we consider the oriented case: $D$ is the rectangle, $F_1$ is the small circle and $F_2$ the large circle). Conditionally on the points (and their status, square or circle depending on the orientation of the loops) on the two circles, the three pictures in the second figure below are independent (this is the oriented version of Fig. 2).

In the context of two-dimensional continuous systems, clusters of loops in a loop-soup are interesting to study, as pointed out in [18]; it has been proved in [16] that boundaries of such clusters for $c \leq 1$ form Conformal Loop Ensembles with parameter $\kappa = \kappa(c)$, where $\kappa(1) = 4$. The CLE$_4$ (and the SLE$_4$ curves) is also known (see [3, 15]) to be related quite directly to the Gaussian Free Field. The role of the $c = 1$ clusters of loops in the framework of cable-systems and in relation to the Gaussian Free Field has been pointed out by Lupu [10] (the clusters provide a direct link between the loop-soups and the Gaussian Free Field itself, rather than just to its square).

Fig. Sketch of the oriented Brownian case: (a) the two oriented loops that touch the two circles, (b) keeping only the endpoints of these crossings on each circle, with trace of the orientation.

Fig. (c) The outer bridges joining each circle point to a square point, (d) sampling the inner bridges joining each circle point to a square point, (e) the six crossings, joining a circle point to a square point. The final loops are oriented so that the crossings from small to large circle go from a circle point to a square point.

The present result sheds some light
on the recently derived [14] decomposition of critical 2d loop-soup clusters (for $c = 1$) in terms of Poisson point processes of Brownian excursions (we refer to [14] for comments and questions).

4 Resampling for Continuous-Time Loop-Soups, the GFF and Random Currents

We now devote a short separate section to the case of discrete continuous-time loop-soups, that have been studied by Le Jan [9]. As we shall see, in that setting, it is natural to consider the conditioned distribution of the loop-soup (unoriented for $c = 1$, i.e. $\alpha = 1/2$, or oriented for $\alpha = 1$) given the value of their local times on a given family of sites. Some of the results are very closely related to Dynkin's isomorphism theorem (i.e. it will be a pathwise version of a generalization of it). Just as previously, we will describe the case of simple random walk on the graph where each point $x$ has the same number $g$ of outgoing edges, but the results can easily be generalized to the case of general Markov chains. Some of the following considerations will be reminiscent of arguments in [9]. In the first subsections, we will focus on the case of unoriented loop-soups, and we will briefly indicate the similar type of results that one gets in the oriented case.

4.1 Slight Reformulation of the Resampling Property of the Discrete Loop-Soup

We can start with the same setting as before, with the same graph and the set $D$, the random walk on this graph killed upon hitting the complement of $D$, and its Green's function $G_D(\cdot,\cdot)$. In the previous sections, we chose for expository reasons (as this was for instance the natural preparation for the Brownian case) to study loops in the loop-soup that visit two different sets of sites $F_1$ and $F_2$. But in fact, the following setting is a little more natural and more general: Consider now a family $e_1, \ldots, e_n$ of edges of $D$, and the graph $D'$ obtained by removing these $n$ edges from $D$. We can now sample an unoriented loop-soup (for $c = 1$), and observe the numbers $N_1, \ldots, N_n$ of jumps along those $n$ unoriented edges. We now want to know the conditional distribution of the entire loop-soup given this information. In particular, we would like to know how these $N_1 + \cdots + N_n$ jumps are hooked together into loops (clearly, the loop-soup in $D'$ consisting of the loops that use none of these $n$ edges is independent of $N := (N_1, \ldots, N_n)$). We can associate to $N$ the vector $Z$ consisting of the $2N_1 + \cdots + 2N_n$ endpoints of these jumps. Once we label them, we can define as before the collection $\beta$ of pairings and bridges that join them in the loop-soup. Note that a bridge is allowed to contain no jump when one pairs two identical end-points. We can also define the unordered bridge measures in $D'$ (corresponding to paths that use no edge of $D \setminus D'$) as before. Then, exactly as before, one can prove the following version of the resampling:

Proposition 6 The conditional distribution of $\beta$ given $N_1, \ldots, N_n$ is exactly the unordered unoriented bridge measure $B_Z$ in $D'$.

Note that for some choices of the family of edges $e_1, \ldots, e_n$, it can happen that an even number of endpoints of the discovered jumps are at a certain vertex where no neighboring edge is in $D'$. In that case, the bridge measure pairs these jumps at random, and the corresponding bridge is anyway the empty bridge from $x$ to $x$. A trivial example is of course the case where $e_1, \ldots, e_n$ are all the edges of $D$. Then, the proposition just says that the conditional distribution of the loops given the occupation time measure is obtained by just pairing at random the incoming edges at each site: "Loops can exchange their hats uniformly at random at each site." This reformulation makes it clear that in the discrete-time setting, the Markov property of the occupation time field is really a Markov property on the edges (which is not surprising, given that the field is actually naturally defined on the edges).

4.2 Continuous-Time Loops

Following Le Jan's approach [9], we now introduce the associated
4.2 Continuous-Time Loops

Following Le Jan's approach [9], we now introduce the associated continuous-time Markov chain: at each site x, the chain stays an exponential waiting time of mean 1/g before jumping along one of the g outgoing edges, chosen uniformly at random (for expository reasons, we describe this in the case where each site has the same number of outgoing edges). Note that we allowed stationary edges in the graph, so that the continuous-time Markov chain can also "jump" along those (and we can keep track of these jumps, even if they do not affect the occupation time at sites). As pointed out by Le Jan, the loop-soup of such continuous-time loops for α = 1/2 is particularly interesting, as its cumulated occupation time (on sites) is exactly the square of a Gaussian Free Field on this graph (here one may introduce one or more killing points, so that the loop-soup occupation time is finite, and the free field with boundary value 0 at these points is well-defined). In this setting, the loops of the discrete Markov chain correspond exactly to loops of the continuous-time chain, but the latter also contains some additional stationary loops, which just stay at one single point without jumping during their entire life-time. When one considers a continuous-time loop and a finite set A of vertices in the graph that it does visit, one can cut out from the loop the time that it spends at these points and obtain a finite sequence of excursions away from this set. This corresponds to the usual excursion theory of continuous-time Markov processes (an excursion from x to y will be a path that jumps out of x at time 0 and jumps into y at the endpoint of the excursion). One can then introduce the natural excursion measure μ^A_{x,y}, which is the natural measure on the set of unoriented excursions that go from x to y while avoiding all the points in A (it corresponds to the discrete excursion measure that puts a mass g^(−n) on such an excursion with n jumps, to which one then adds n − 1 independent exponential waiting times at the n − 1 points inside the excursion).
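In formulas, the excursion measure just described can be summarised as follows; this is only a schematic rendering of the text, with our own shorthand (the symbol μ^A_{x,y}, the interior points z_1, …, z_{n−1} of an excursion with n jumps, and the waiting times t_i attached to them are our notation):

```latex
% Schematic form of the excursion measure described above
% (the name \mu^A_{x,y} and the display are our own shorthand):
\mu^{A}_{x,y}\bigl((z_1,\dots,z_{n-1});\,\mathrm{d}t_1\cdots\mathrm{d}t_{n-1}\bigr)
  \;=\; g^{-n}\,\prod_{i=1}^{n-1} g\,e^{-g t_i}\,\mathrm{d}t_i,
\qquad z_1,\dots,z_{n-1}\notin A .
```

Each of the n jumps contributes a factor 1/g, and each of the n − 1 interior points carries an independent exponential waiting time of mean 1/g.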
One can view the continuous-time Markov chain as the limit when M → ∞ of the discrete-time Markov chain on a graph D_M, where one has added to each site x, M stationary edges from x to itself (when one renormalizes time by 1/M, the geometric number of successive jumps along these added stationary edges from x to x before jumping along another edge converges to the exponential waiting time). This approach is for instance used in [19] in order to derive the properties of the continuous-time chains and loop-soups from the properties of the discrete-time loop-soups.

Let us now consider a finite set of points x_1, …, x_n in the graph, and for a given M, let us condition on the numbers N_1, …, N_n of jumps by the loop-soup along the added stationary unoriented edges. More precisely, N_1 will denote the total number of jumps in the loop-soup along the M added stationary edges from x_1 to x_1. Note that because both endpoints of a stationary edge are the same, these N_1 jumps correspond to 2N_1 jump-endpoints, which are all at x_1. We can now apply the previous proposition to this case; it describes the distribution of how to complete and hook up these N_1 + ⋯ + N_n jumps into unoriented loops in order to recover the loops in the loop-soup that they correspond to: one has to pair all these 2N_1 + ⋯ + 2N_n endpoints. Mind that as M gets large, the mass of the trivial excursion from x_1 to x_1 with zero life-time is always 1, while the mass of (unoriented) excursions from x_1 to some x_j with at least one jump along the "non-added" edges, that stay away from {x_1, …, x_n} during their entire (positive) life-time, will be of order 1/M (unless all neighbors of x_1 are in {x_1, …, x_n}, in which case this quantity is zero), and the mass of the set of excursions from x_1 to x_j that visit at least one of the points of {x_1, …, x_n} during their positive life-time is of order O(1/M²).
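The convergence of the renormalized geometric holding times to exponential ones can be checked numerically. The sketch below (function name and parameters are ours, purely for illustration) simulates the time spent at a site of D_M before a jump along one of the g original edges, each discrete step taking time 1/M:

```python
import random

def holding_time(M, g, rng):
    """Time (in units of 1/M per step) spent at a site of the graph D_M
    before jumping along one of the g "real" edges, when M stationary
    self-edges have been added: a geometric number of steps, renormalized."""
    steps = 1
    while rng.random() < M / (M + g):  # jump along an added self-edge
        steps += 1
    return steps / M

rng = random.Random(42)
M, g, n = 10_000, 4, 20_000
mean = sum(holding_time(M, g, rng) for _ in range(n)) / n
# the empirical mean is close to 1/g, the mean of the limiting exponential law
```

With M = 10⁴ and g = 4, the empirical mean is close to 1/g = 0.25, in line with the limiting exponential waiting time of mean 1/g described in Sect. 4.2.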
It is a simple exercise, which we safely leave to the reader, to check that in the M → ∞ limit, the discrete Markovian description becomes the following:

Proposition If we consider the continuous-time Markov chain loop-soup and condition on the total occupation times l(x_1), …, l(x_n) at the n points x_1, …, x_n, then the unoriented excursions away from this set of points by the loop-soup will be distributed exactly like a Poisson point process of excursions with intensity

(1/2) ∑_{i≤j} l(x_i) l(x_j) μ^{{x_1,…,x_n}}_{x_i,x_j},

conditioned on the event that the number of excursions starting or ending at each of the n points x_1, …, x_n is even.

The particular case where the set of points {x_1, …, x_n} is the whole vertex set is again of some interest: The conditional distribution of the numbers of unoriented jumps on the edges given the occupation time field on the vertices is a collection of independent Poisson random variables with respective means l(x_i)l(x_j), conditioned on the event that for each site x, the total number of jumps on the incident edges at x is even. This is exactly the random current distribution associated with the Ising model. For some further comments on this relation between random currents, the GFF and Ising, we refer to [11].

4.3 Relation with Dynkin's Isomorphism

It should of course be noted that this decomposition is closely related to Dynkin's isomorphism (see [4, 5, 12] and the references therein), except that one here conditions on the value of the square of the GFF instead of the value of the GFF itself. The previous result implies (when one only looks at occupation times and not at the loop-soup itself) that conditionally on the value of the square of the GFF at the set of points {x_1, …, x_n}, the square of the value of the GFF at the other points is the sum of the occupation times of the conditioned Poisson point process of excursions with an independent squared GFF in the remaining (smaller) domain.
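Schematically, the last statement reads as follows (the notation is ours, not the paper's: φ denotes the GFF, F = {x_1, …, x_n}, 𝓔 the conditioned Poisson point process of excursions away from F, and ℓ_y(𝓔) the total time spent at y by 𝓔): conditionally on (φ²(x_1), …, φ²(x_n)), for every site y outside F,

```latex
% Conditional decomposition stated above (notation ours):
% conditionally on (\varphi^2(x_1),\dots,\varphi^2(x_n)), for y \notin F,
\varphi^{2}(y)\;\overset{(\mathrm{d})}{=}\;\ell_{y}(\mathcal{E})\;+\;\tilde{\varphi}^{2}(y),
\qquad y \notin F,
```

where φ̃² is an independent squared GFF in the remaining (smaller) domain, and the two summands are independent.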
If one, however, conditions the GFF at the n sites to be all equal to the same value t, then one can consider instead the graph where all these points are identified into a single point, and note that the GFF on the new graph conditioned to have value t at that point is distributed as the GFF on the initial graph conditioned to have value t at each of the n points. One can apply the previous statement to that new graph and note that the conditioning on the event that the number of excursion-extremities at each boundary site is even then disappears, because when there is just one such site, this number is anyway even (each excursion from this point to itself has two endpoints). Here it is however essential that the signs of all these values are the same (because if one identifies the points into a single point, then they will anyway correspond to the same value of the GFF, not just to the same value of its square).

In summary, conditioning on the value of the square of the GFF gives rise to the parity conditioning, but it is also possible to condition on the actual value of the GFF, and the parity conditioning then becomes irrelevant when one looks at the occupation times only. Note that Dynkin's isomorphism then follows, because in the latter case, the conditional distribution of the square of the GFF at the other points (which is therefore the square of the GFF in this smaller domain with boundary conditions given by these conditioned boundary values) will be the sum of the contribution of the loops that only visit those points (which is a squared GFF in the remaining domain) with the occupation time of the Poisson point process of excursions, while the conditioned GFF is a GFF with some prescribed boundary conditions, which can be viewed as the sum of a GFF in the complement of the set of marked points and the deterministic harmonic extension of these boundary values.
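The parity conditioning that appears throughout this section can be illustrated by a toy rejection sampler (a hypothetical illustration, not code from the paper): sample independent Poisson jump numbers on the edges, and keep the outcome only if every site carries an even number of incident jump-endpoints, as in the random current distribution mentioned above.

```python
import math
import random

def sample_even_current(edges, means, rng):
    """Rejection-sample independent Poisson jump numbers on the edges,
    conditioned on every site having an even total number of incident
    jumps (the parity conditioning discussed in the text)."""
    def poisson(mu):
        # Poisson sampling by inversion / sequential search (fine for small mu)
        x, p = 0, math.exp(-mu)
        s, u = p, rng.random()
        while u > s:
            x += 1
            p *= mu / x
            s += p
        return x
    while True:
        n = {e: poisson(means[e]) for e in edges}
        deg = {}
        for (x, y), k in n.items():
            deg[x] = deg.get(x, 0) + k
            deg[y] = deg.get(y, 0) + k
        if all(d % 2 == 0 for d in deg.values()):   # keep only "sourceless" currents
            return n

rng = random.Random(0)
edges = [('a', 'b'), ('b', 'c'), ('c', 'a')]
means = {e: 0.8 for e in edges}
current = sample_even_current(edges, means, rng)
# every site of the triangle now has an even number of incident jumps
```

The rejection loop terminates quickly here, since the all-zero configuration already satisfies the parity constraint with positive probability.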
4.4 The Oriented Case

One can follow almost word for word the same strategy to study the conditional distribution of oriented continuous-time loop-soups at α = 1 given their cumulated local times at sites. In that case, the excursions will be oriented, and the conditional distribution of the excursions away from these points will be a Poisson point process conditioned on the event that, for each site, the number of incoming excursions is equal to the number of outgoing ones. The particular case where the set of points is the whole vertex set is again interesting: The conditional distribution of the set of jumps will be independent Poisson on each oriented edge, conditioned on the fact that the number of incoming jumps at each site is equal to its number of outgoing jumps. We leave all the details and further results to the interested reader.

Note We found out that the recently posted preprint [2] by Camia and Lis describes some ideas that are similar to those of the present paper (which was prepared totally independently of [2]).

Acknowledgements The support of SNF grant SNF-155922, of the Clay Foundation, and the hospitality of the Isaac Newton Institute in Cambridge (where the present work was carried out) are gratefully acknowledged. The author is also part of the NCCR SwissMAP. I also thank Yves Le Jan for comments on the first version of this preprint.

References

1. D. Brydges, J. Fröhlich, T. Spencer, The random walk representation of classical spin systems and correlation inequalities. Commun. Math. Phys. 83, 123–150 (1982)
2. F. Camia, M. Lis, Non-backtracking loop soups and statistical mechanics on spin networks (2015). arXiv:1507.05065
3. J. Dubédat, SLE and the free field: partition functions and couplings. J. Am. Math. Soc. 22, 995–1054 (2009)
4. E.B. Dynkin, Markov processes as a tool in field theory. J. Funct. Anal. 50, 167–187 (1983)
5. E.B. Dynkin, Gaussian and non-Gaussian random fields associated with Markov processes. J. Funct. Anal. 55, 344–376 (1984)
6. G.F. Lawler, Loop-erased random walk, in Perplexing Problems in Probability: Festschrift in Honor of Harry Kesten, Progress in Probability, vol. 44 (Birkhäuser, Boston,
1999), pp. 197–217
7. G.F. Lawler, V. Limic, Random Walks: A Modern Introduction (Cambridge University Press, Cambridge, 2010)
8. G.F. Lawler, W. Werner, The Brownian loop soup. Probab. Theory Relat. Fields 128, 565–588 (2004)
9. Y. Le Jan, Markov Paths, Loops and Fields. Lecture Notes in Mathematics, vol. 2026 (Springer, Berlin, 2011)
10. T. Lupu, From loop clusters and random interlacement to the free field. Ann. Probab. 44(3), 2117–2146 (2016)
11. T. Lupu, W. Werner, A note on Ising random currents, Ising-FK, loop-soups and the Gaussian free field. Electron. Commun. Probab. 21 (2016)
12. M.B. Marcus, J. Rosen, Markov Processes, Gaussian Processes, and Local Times (Cambridge University Press, Cambridge, 2006)
13. E. Nelson, The free Markoff field. J. Funct. Anal. 12, 211–227 (1973)
14. W. Qian, W. Werner, Decomposition of two-dimensional loop-soup clusters (2015). arXiv:1509.01180
15. O. Schramm, S. Sheffield, A contour line of the continuous Gaussian free field. Probab. Theory Relat. Fields 157, 47–80 (2013)
16. S. Sheffield, W. Werner, Conformal loop ensembles: the Markovian characterization and the loop-soup construction. Ann. Math. 176, 1827–1917 (2012)
17. K. Symanzik, Euclidean quantum field theory, in Local Quantum Theory, ed. by R. Jost (Academic, New York, 1969)
18. W. Werner, SLEs as boundaries of clusters of Brownian loops. C. R. Math. Acad. Sci. Paris 337, 481–486 (2003)
19. W. Werner, Topics on the Gaussian Free Field. Lecture Notes (Springer, Berlin, 2014)
20. D.B. Wilson, Generating random spanning trees more quickly than the cover time, in Proceedings of the Twenty-Eighth Annual ACM Symposium on Theory of Computing (Association for Computing Machinery, New York, 1996), pp. 296–303

LECTURE NOTES IN MATHEMATICS

Editors in Chief: J.-M. Morel, B. Teissier

Editorial Policy

Lecture Notes aim to report new developments in all areas of mathematics and their applications – quickly, informally and at a high level. Mathematical texts analysing new developments in modelling and numerical simulation are welcome. Manuscripts should
be reasonably self-contained and rounded off. Thus they may, and often will, present not only results of the author but also related work by other people. They may be based on specialised lecture courses. Furthermore, the manuscripts should provide sufficient motivation, examples and applications. This clearly distinguishes Lecture Notes from journal articles or technical reports which normally are very concise. Articles intended for a journal but too long to be accepted by most journals usually do not have this "lecture notes" character. For similar reasons it is unusual for doctoral theses to be accepted for the Lecture Notes series, though habilitation theses may be appropriate.

Besides monographs, multi-author manuscripts resulting from SUMMER SCHOOLS or similar INTENSIVE COURSES are welcome, provided their objective was held to present an active mathematical topic to an audience at the beginning or intermediate graduate level (a list of participants should be provided).

The resulting manuscript should not be just a collection of course notes, but should require advance planning and coordination among the main lecturers. The subject matter should dictate the structure of the book. This structure should be motivated and explained in a scientific introduction, and the notation, references, index and formulation of results should be, if possible, unified by the editors. Each contribution should have an abstract and an introduction referring to the other contributions. In other words, more preparatory work must go into a multi-authored volume than simply assembling a disparate collection of papers, communicated at the event.

Manuscripts should be submitted either online at www.editorialmanager.com/lnm to Springer's mathematics editorial in Heidelberg, or electronically to one of the series editors. Authors should be aware that incomplete or insufficiently close-to-final manuscripts almost always result in longer refereeing times and nevertheless unclear referees' recommendations,
making further refereeing of a final draft necessary. The strict minimum amount of material that will be considered should include a detailed outline describing the planned contents of each chapter, a bibliography and several sample chapters. Parallel submission of a manuscript to another publisher while under consideration for LNM is not acceptable and can lead to rejection.

In general, monographs will be sent out to external referees for evaluation. A final decision to publish can be made only on the basis of the complete manuscript; however, a refereeing process leading to a preliminary decision can be based on a pre-final or incomplete manuscript.

Volume Editors of multi-author works are expected to arrange for the refereeing, to the usual scientific standards, of the individual contributions. If the resulting reports can be forwarded to the LNM Editorial Board, this is very helpful. If no reports are forwarded or if other questions remain unclear in respect of homogeneity etc., the series editors may wish to consult external referees for an overall evaluation of the volume.

Manuscripts should in general be submitted in English. Final manuscripts should contain at least 100 pages of mathematical text and should always include
– a table of contents;
– an informative introduction, with adequate motivation and perhaps some historical remarks: it should be accessible to a reader not intimately familiar with the topic treated;
– a subject index: as a rule this is genuinely helpful for the reader.
For evaluation purposes, manuscripts should be submitted as pdf files.

Careful preparation of the manuscripts will help keep production time short besides ensuring satisfactory appearance of the finished book in print and online. After acceptance of the manuscript authors will be asked to prepare the final LaTeX source files (see LaTeX templates online: https://www.springer.com/gb/authors-editors/book-authorseditors/manuscriptpreparation/5636) plus the corresponding pdf- or
zipped ps-file. The LaTeX source files are essential for producing the full-text online version of the book (see http://link.springer.com/bookseries/304 for the existing online volumes of LNM). The technical production of a Lecture Notes volume takes approximately 12 weeks. Additional instructions, if necessary, are available on request from lnm@springer.com.

Authors receive a total of 30 free copies of their volume and free access to their book on SpringerLink, but no royalties. They are entitled to a discount of 33.3 % on the price of Springer books purchased for their personal use, if ordering directly from Springer.

Commitment to publish is made by a Publishing Agreement; contributing authors of multi-author books are requested to sign a Consent to Publish form. Springer-Verlag registers the copyright for each volume. Authors are free to reuse material contained in their LNM volumes in later publications: a brief written (or e-mail) request for formal permission is sufficient.

Addresses:
Professor Jean-Michel Morel, CMLA, École Normale Supérieure de Cachan, France. E-mail: moreljeanmichel@gmail.com
Professor Bernard Teissier, Equipe Géométrie et Dynamique, Institut de Mathématiques de Jussieu – Paris Rive Gauche, Paris, France. E-mail: bernard.teissier@imj-prg.fr
Springer: Ute McCrory, Mathematics, Heidelberg, Germany. E-mail: lnm@springer.com

Contents

  • Preface

  • Contents

  • Root to Kellerer

    • 1 Introduction

      • 1.1 Problem and Basic Concepts

      • 1.2 Comparison with Kellerer's Approach

      • 1.3 Further Literature

      • 2 The Compact Set of Martingales Associated to a Peacock

        • 2.1 The Countable Case

        • 2.2 The Right-Continuous Case

        • 2.3 General Peacocks

        • 3 Root to Markov

          • 3.1 Lipschitz-Markov Kernels

          • 3.2 Compactness of Lipschitz-Markov Martingales

          • 3.3 Further Comments

          • References

          • Peacocks Parametrised by a Partially Ordered Set

            • 1 Introduction

            • 2 Definitions

            • 3 The Kellerer Theorem and Trying to Generalise It

              • 3.1 Problem 7a

              • 3.2 Problem 7b

              • 4 Proof of Theorem 2

                • 4.1 A Peacock Not Associated to a Martingale

                • 4.2 A Martingale Not Associated to a Markovian Martingale

                • 4.3 A Markovian Martingale Not Associated to a Lipschitz–Markov Martingale

                • 5 A Positive Result

    • 5.1 Disintegration of a Measure in {μ ∈ P : E(μ) = 0}

                    • 5.1.1 Diatomic Convex Order
