Differential Equations and Their Applications, Part 13


Chapter 8. Applications of FBSDEs

Differentiating (5.14) with respect to x twice and denoting v = u_xx, p = q_xx, we see that (v, p) satisfies the following (linear) BSPDE:

(5.15)  dv = -{ (1/2) x^2 σ^2 v_xx + (2xσ^2 + xr) v_x + (σ^2 + r) v + xσ p_x + (2σ - θ) p } dt - p dW(t),  (t,x) ∈ [0,T) × (0,∞);
        v(T,x) = g''(x).

Here again the well-posedness of (5.15) can be obtained by considering its equivalent form after the Euler transformation (since r and σ are independent of x!). Now we can apply Chapter 5, Corollary 6.3 to conclude that v ≥ 0 whenever g'' ≥ 0, and hence u is convex provided g is.

We can discuss more complicated situations by using the comparison theorems in Chapter 5. For example, let us assume that both r and σ are deterministic functions of (t,x), and that they are both C^2 for simplicity. Then (5.10) coincides with (5.4). Differentiating (5.4) twice and denoting v = u_xx, we see that v satisfies the following PDE:

(5.16)  0 = v_t + (1/2) x^2 σ^2 v_xx + ã x v_x + b̃ v + r_xx (x u_x - u);   v(T,x) = g''(x),  x ≥ 0,

where

ã = 2σ^2 + 2xσσ_x + r;
b̃ = σ^2 + 4xσσ_x + (xσ_x)^2 + x^2 σσ_xx + 2x r_x + r.

Now let us denote V = x u_x - u; then some computation shows that V satisfies the equation

(5.17)  0 = V_t + (1/2) x^2 σ^2 V_xx + â x V_x + (x r_x - r) V  on [0,T) × (0,∞);   V(T,x) = x g'(x) - g(x),  x ≥ 0,

for some function â depending on ã and b̃ (whence on r and σ). Therefore, applying the comparison theorems of Chapter 5 (using the Euler transformation if necessary), we can derive the following results. Assume that g is convex; then
(i) if r is convex and x g'(x) - g(x) ≥ 0, then u is convex;
(ii) if r is concave and x g'(x) - g(x) ≤ 0, then u is convex;
(iii) if r is independent of x, then u is convex.
Indeed, if x g'(x) - g(x) ≥ 0, then V ≥ 0 by Chapter 5, Corollary 6.3. This, together with the convexity of r and g, in turn shows that the solution v of (5.16) is non-negative, proving (i). Part (ii) can be argued similarly. To see (iii), note that when r is independent of x, (5.16) is homogeneous; thus the convexity of g implies that of u, thanks to Chapter 5, Corollary 6.3 again.

§ Robustness of the Black-Scholes formula

The robustness of the Black-Scholes formula concerns the following problem: suppose a practitioner's information leads him to a misspecified value of, say, the volatility σ, and he calculates the option price according to this misspecified parameter and equation (5.4), and then tries to hedge the contingent claim; what will be the consequence?

Let us first assume that the only misspecified parameter is the volatility, and denote it by σ = σ(t,x), which is C^2 in x; and assume that the interest rate is deterministic and independent of the stock price. By conclusion (iii) in the previous part we know that u is convex in x. Now let us assume that the true volatility is an {F_t}_{t≥0}-adapted process, denoted by σ̃, satisfying

(5.18)  σ̃(t) ≥ σ(t,x),  ∀(t,x), a.s.

Since in this case we have proved that u is convex, it is easy to check that (6.16) of Chapter 5 reads

(5.19)  (L̃ - L)u + (M̃ - M)q + f̃ - f = (1/2) x^2 [σ̃^2 - σ^2] u_xx ≥ 0,

where (L, M) is the pair of differential operators corresponding to the misspecified coefficients (r, σ). Thus we conclude from Chapter 5, Theorem 6.2 that ũ(t,x) ≥ u(t,x), ∀(t,x), a.s.; namely, the true price dominates the misspecified price.

Now let us assume that the inequality in (5.18) is reversed. Since both (5.4) and (5.14) are linear and homogeneous, (-ũ, -q̃) and (-u, 0) are solutions to (5.14) and (5.4) as well, with the terminal condition replaced by -g(x).
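Conclusion (iii) above is easy to sanity-check numerically in the textbook constant-coefficient case: with r independent of x and the convex payoff g(x) = (x - K)^+, the Black-Scholes price should have a non-negative second difference in x. The following minimal Python sketch does exactly that; the function name bs_call and all parameter values are illustrative assumptions, not notation from the text.

```python
import numpy as np
from scipy.stats import norm

def bs_call(t, x, K=100.0, r=0.05, sigma=0.2, T=1.0):
    """Black-Scholes price u(t, x) of a European call, payoff g(x) = (x - K)^+."""
    tau = T - t
    d1 = (np.log(x / K) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    return x * norm.cdf(d1) - K * np.exp(-r * tau) * norm.cdf(d2)

# Convexity check: the central second difference of u(0, .) should be non-negative.
x = np.linspace(50.0, 200.0, 601)
h = x[1] - x[0]
u = bs_call(0.0, x)
u_xx = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2
print("smallest second difference:", u_xx.min())   # stays >= 0 up to rounding error
```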
But in this case (5.19) becomes

(1/2) x^2 [σ̃^2 - σ^2] (-u)_xx = (1/2) x^2 [σ^2 - σ̃^2] u_xx ≥ 0,

because u is convex and σ̃^2 ≤ σ^2. Thus -ũ ≥ -u, namely ũ ≤ u.

Using a similar technique we can again discuss some more complicated situations. For example, let us allow the interest rate r to be misspecified as well, but in the form that it is convex in x, say. Assume that the payoff function h satisfies x h'(x) - h(x) ≥ 0, and that r̃ and σ̃ are the true interest rate and volatility; they are {F_t}_{t≥0}-adapted random fields satisfying r̃(t,x) ≥ r(t,x) and σ̃(t,x) ≥ σ(t,x), ∀(t,x). Then, using the notation as before, one shows that

(L̃ - L)u + f̃ - f = (1/2) x^2 [σ̃^2 - σ^2] u_xx + (r̃ - r)[x u_x - u] ≥ 0,

because u is convex and x u_x - u = V ≥ 0, thanks to the arguments in the previous part. Consequently one has ũ(t,x) ≥ u(t,x), ∀(t,x), a.s. Namely, we again derive a one-sided domination between the true and the misspecified values.

We remark that if the misspecified volatility is not a deterministic function of the stock price, the comparison may fail. We refer the interested reader to El Karoui-Jeanblanc-Picqué-Shreve [1] for an interesting counterexample.

§ An American Game Option

In this section we apply the results of Chapter 7 to study a somewhat ad hoc option pricing problem which we call the American game option. To begin with, let us consider the following FBSDE with reflection (compare to Chapter 7, (3.2)):

(6.1)  X(t) = x + ∫_0^t b(s, Θ(s)) ds + ∫_0^t σ(s, Θ(s)) dW(s);
       Y(t) = g(X(T)) + ∫_t^T h(s, Θ(s)) ds - ∫_t^T Z(s) dW(s) + ξ(T) - ξ(t);
       L(t, X(t)) ≤ Y(t) ≤ U(t, X(t)),  t ∈ [0,T],

where Θ = (X, Y, Z). Note that the forward equation does not have reflection; and we assume that m = 1 and O_2(t,x,ω) = (L(t,x,ω), U(t,x,ω)), where L and U are two random fields such that L(t,x,ω) ≤ U(t,x,ω) for all (t,x,ω) ∈ [0,T] × R^n × Ω. We assume further that both L and U are continuous functions of x for all (t,ω), and are {F_t}_{t≥0}-progressively measurable, continuous processes for all x.

In light of the results of the previous section, we can think of X in (6.1) as a price process of financial assets, and of Y as a wealth process of a (large) investor in the market. However, we should use the latter interpretation only up until the first time we have dξ < 0. In other words, no external funds are allowed to be added to the investor's wealth, although he is allowed to consume.

The American game option can be described as follows. Unlike the usual American option, where only the buyer has the right to choose the exercise time, in a game option we allow the seller to have the same right as well; namely, the seller can force the exercise if he wishes. However, in order to get a nontrivial option (i.e., to keep immediate exercise from being optimal), it is required that the payoff be higher if the seller opts to force the exercise. Of course the seller may choose not to do anything, in which case the game option becomes the usual American option.

To be more precise, let us denote by M_{t,T} the set of {F_t}_{t≥0}-stopping times taking values in [t,T], and let t ∈ [0,T) be the time when the "game" starts. Let τ ∈ M_{t,T} be the time the buyer chooses to exercise the option, and σ ∈ M_{t,T} that of the seller. If τ ≤ σ, then the seller pays L(τ, X_τ); if σ < τ, then the seller pays U(σ, X_σ). If neither exercises the option by the maturity date T, then the seller pays B = g(X_T). We define the minimal hedging price of this contract to be the infimum of initial wealth amounts Y_0 such that the seller can deliver the payoff, a.s., without having to use additional outside funds.
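On a time-discretized path these exercise rules give the seller's payoff directly. A minimal sketch, assuming the path X, the barriers L and U, and the terminal payoff g are already discretized and the running cost is omitted; all names are illustrative, not the book's notation:

```python
def game_option_payoff(sigma_idx, tau_idx, X, L, U, g, T_idx):
    """Seller's payoff under the exercise rules above, on one discrete path:
    L(tau, X_tau) if the buyer stops first (tau <= sigma, tau < T),
    U(sigma, X_sigma) if the seller stops strictly first,
    g(X_T) if both wait until maturity."""
    if tau_idx < T_idx and tau_idx <= sigma_idx:
        return L(tau_idx, X[tau_idx])
    if sigma_idx < tau_idx:
        return U(sigma_idx, X[sigma_idx])
    return g(X[T_idx])
```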
In other words, his wealth process has to follow the dynamics of Y (with dξ ≥ 0) up to the exercise time σ ∧ τ ∧ T, and at the exercise time we have to have

(6.2)  Y_{σ∧τ∧T} ≥ g(X_T) 1_{σ∧τ=T} + L(τ, X_τ) 1_{τ<T, τ≤σ} + U(σ, X_σ) 1_{σ<τ}.

Our purpose is to determine the minimal hedging price, as well as the corresponding minimal hedging process.

To solve this option pricing problem, the following stochastic game (Dynkin game) is useful. There are two players; each can choose a (stopping) time to stop the game over a given horizon [t,T]. Let σ ∈ M_{t,T} be the time that player I chooses, and τ ∈ M_{t,T} that of player II. If σ < τ, player I pays U(σ) (= U(σ, X_σ)) to player II; whereas if τ ≤ σ and τ < T, player I pays L(τ) (= L(τ, X_τ)) (yes, in both cases player I pays!). If no one stops by time T, player I pays B. There is also a running cost h(t) (= h(t, X_t, Y_t, Z_t)). In other words, the payoff player I has to pay is given by

(6.3)  R_t^B(σ, τ) := ∫_t^{σ∧τ} h(u) du + B 1_{σ∧τ=T} + L(τ) 1_{τ<T, τ≤σ} + U(σ) 1_{σ<τ},

where B ∈ L^2(Ω) is a given F_T-measurable random variable satisfying L(T) ≤ B ≤ U(T). Suppose that player II is trying to maximize the payoff, while player I attempts to minimize it. Define the upper and lower values of the game by

(6.4)  V̄(t) := essinf_{σ ∈ M_{t,T}} esssup_{τ ∈ M_{t,T}} E{ R_t^B(σ,τ) | F_t },
       V̲(t) := esssup_{τ ∈ M_{t,T}} essinf_{σ ∈ M_{t,T}} E{ R_t^B(σ,τ) | F_t },

respectively; and we say that the game has a value if V̄(t) = V̲(t) =: V(t).

The solution to the Dynkin game is given by the following theorem, which can be obtained by a line-by-line analogue of Theorem 4.1 in Cvitanić and Karatzas [2]. Here we give only the statement.

Theorem 6.1. Suppose that there exists a solution (X, Y, Z, ξ) to the FBSDER (6.1) (with O_2(t,x) = (L(t,x), U(t,x))). Then the game (6.3) with B = g(X_T), h(t) = h(t, X_t, Y_t, Z_t), and L(t,ω) = L(t, X_t(ω)), U(t,ω) = U(t, X_t(ω)) has a value V(t), given by the backward component Y of the solution to the FBSDER, i.e.,

V(t) = V̄(t) = V̲(t) = Y_t,  a.s., for all 0 ≤ t ≤ T.

Moreover, there exists a saddle point (σ̂_t, τ̂_t) ∈ M_{t,T} × M_{t,T}, given by

σ̂_t := inf{ s ∈ [t,T) : Y_s = U(s, X_s) } ∧ T,
τ̂_t := inf{ s ∈ [t,T) : Y_s = L(s, X_s) } ∧ T;

namely, we have

E{ R_t^{g(X_T)}(σ̂_t, τ) | F_t } ≤ Y_t ≤ E{ R_t^{g(X_T)}(σ, τ̂_t) | F_t },  a.s.,

for every (σ, τ) ∈ M_{t,T} × M_{t,T}. □

In what follows, when we mention the FBSDER, we mean (6.1) specified as in Theorem 6.1.

Theorem 6.2. The minimal hedging price of the American game option is greater than or equal to V̄(0), the upper value of the game (at t = 0) of Theorem 6.1. If the corresponding FBSDER has a solution (X̂, Ŷ, Ẑ, ξ̂), then the minimal hedging price is equal to Ŷ_0.

Proof. Fix the exercise times σ, τ of the seller and the buyer, respectively. If Y is the seller's hedging process, it satisfies the following dynamics for t ≤ τ ∧ σ ∧ T:

Y_t + ∫_0^t h(s, X_s, Y_s, Z_s) ds = Y_0 + ∫_0^t Z_s dW_s - ξ_t,

with ξ non-decreasing. Hence the left-hand side is a supermartingale. From this and the requirement that Y be a hedging process, we get

Y_t ≥ E{ R_t^{g(X_T)}(σ, τ) | F_t },  ∀t, a.s.,

in the notation of Theorem 6.1. Since the buyer is trying to maximize the payoff and the seller to minimize it, we get Y_t ≥ V̄(t), ∀t, a.s. Consequently, the minimal hedging price is no less than V̄(0).
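In discrete time the doubly reflected backward component of Theorem 6.1 reduces to a plain backward induction: the discounted continuation value is clipped into the band [L, U] at every step, and the first times the value touches L or U are the discrete analogues of τ̂ and σ̂. A minimal sketch on a binomial tree, assuming zero running cost (h ≡ 0), a one-step risk-neutral probability p, and vectorized callables L, U, g; all names and the tree model itself are illustrative assumptions:

```python
import numpy as np

def dynkin_game_value(X0, up, down, p, r_step, L, U, g, N):
    """Backward induction for the discrete Dynkin game value V_0 ~ Y_0:
    V_N = g(X_N); at earlier steps V_k = clip(discounted E[V_{k+1}], L_k, U_k)."""
    X = X0 * up ** np.arange(N, -1, -1) * down ** np.arange(0, N + 1)
    V = g(X)                                                  # terminal payoff B = g(X_T)
    for k in range(N - 1, -1, -1):
        X = X0 * up ** np.arange(k, -1, -1) * down ** np.arange(0, k + 1)
        cont = (p * V[:-1] + (1 - p) * V[1:]) / (1 + r_step)  # discounted continuation value
        V = np.minimum(np.maximum(cont, L(k, X)), U(k, X))    # reflect between the two barriers
    return V[0]
```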
Conversely, if the FBSDER has a solution with Ŷ as the backward component, then by Theorem 6.1 the process Ŷ is equal to the value process of the game, and by (4.4) (with t = 0) and (2.10), up until the optimal exercise time σ̂ := σ̂_0 for the seller it obeys the dynamics of a wealth process, since ξ̂_t is nondecreasing for t ≤ σ̂_0. So the seller can start with Ŷ_0, follow the dynamics of Ŷ until t = σ̂, and then exercise, if the buyer has not exercised first. In general, from the saddle-point property we know that, for any τ ∈ M_{0,T},

Ŷ_{σ̂∧τ} ≥ g(X_T) 1_{σ̂∧τ=T} + L(τ, X_τ) 1_{τ<T, τ≤σ̂} + U(σ̂, X_{σ̂}) 1_{σ̂<τ}.

This implies that the seller can deliver the required payoff if he uses σ̂ as his exercise time, no matter what the buyer's exercise time τ is. Consequently, Ŷ_0 = V̄(0) is no less than the minimal hedging price. □

Chapter 9. Numerical Methods for FBSDEs

In the previous chapter we have seen various applications of FBSDEs in theoretical and applied fields. In many cases a satisfactory numerical simulation is highly desirable. In this chapter we present a complete numerical algorithm for a fairly large class of FBSDEs, and analyze its consistency as well as its rate of convergence. We note that in the case of standard forward SDEs two types of approximations are often considered: a strong scheme, which typically converges pathwise at a rate of order 1/2, and a weak scheme, which approximates only E{f(X(T))}, with a possibly faster rate of convergence. However, as we shall see later, in our case the weak convergence is a simple consequence of the pathwise convergence, and the rate of convergence of our scheme is the same as that of the strong scheme for pure forward SDEs, which is a little surprising because an FBSDE is much more complicated than a forward SDE in nature.

§1 Formulation of the Problem

In this chapter we consider the following FBSDE: for t ∈ [0,T],

(1.1)  X(t) = x + ∫_0^t b(s, Θ(s)) ds + ∫_0^t σ(s, X(s), Y(s)) dW(s);
       Y(t) = g(X(T)) + ∫_t^T b̂(s, Θ(s)) ds - ∫_t^T Z(s) dW(s),

where Θ = (X, Y, Z). We note that in some applications (e.g., in Chapter 8, § Black's Consol Rate Conjecture), the FBSDE (1.1) takes a slightly simpler form:

(1.2)  X(t) = x + ∫_0^t b(s, X(s), Y(s)) ds + ∫_0^t σ(s, X(s), Y(s)) dW(s);
       Y(t) = g(X(T)) + ∫_t^T b̂(s, X(s), Y(s)) ds - ∫_t^T Z(s) dW(s).

That is, the coefficients b and b̂ do not depend on Z explicitly, and often in these cases only the components (X, Y) are of significant interest. In what follows we shall call (1.2) the "special case" when only the approximation of (X, Y) is considered; and we call (1.1) the "general case" if the approximation of (X, Y, Z) is required. We note that in what follows we restrict ourselves to the case where all processes involved are one-dimensional. The higher-dimensional case can be treated with the same ideas, but it is technically much more complicated. Furthermore, we shall impose the following standing assumptions:

(A1) The functions b, b̂ and σ are continuously differentiable in t and twice continuously differentiable in x, y, z. Moreover, if we denote any one of these functions generically by φ, then there exists a constant α ∈ (0,1) such that, for fixed y and z, φ(·, ·, y, z) ∈ C^{1+α, 2+α}. Furthermore, for some L > 0,

‖φ(·, ·, y, z)‖_{1+α, 2+α} ≤ L,  ∀(y,z) ∈ R^2.

(A2) The function σ satisfies

(1.3)  μ ≤ σ(t,x,y) ≤ C,  ∀(t,x,y) ∈ [0,T] × R^2,

where 0 < μ ≤ C are two constants.

(A3) The function g belongs boundedly to C^{4+α} for some α ∈ (0,1) (one may assume that α is the same as that in (A1)).
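For orientation, a strong (pathwise) scheme for a pure forward SDE is just the Euler scheme below; fixing one Brownian path and refining the grid exhibits the order-1/2 pathwise rate mentioned above. This is a generic sketch, not the book's algorithm, and the function name is ours.

```python
import numpy as np

def euler_maruyama(b, sigma, x0, T, n, rng):
    """Pathwise Euler scheme for dX = b(t, X) dt + sigma(t, X) dW on [0, T]."""
    dt = T / n
    t = np.linspace(0.0, T, n + 1)
    X = np.empty(n + 1)
    X[0] = x0
    dW = rng.normal(0.0, np.sqrt(dt), size=n)   # Brownian increments
    for k in range(n):
        X[k + 1] = X[k] + b(t[k], X[k]) * dt + sigma(t[k], X[k]) * dW[k]
    return t, X, dW

# usage: t, X, _ = euler_maruyama(lambda t, x: 0.05 * x, lambda t, x: 0.2 * x,
#                                 1.0, 1.0, 1000, np.random.default_rng(0))
```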
It is clear that the assumptions (A1)-(A3) are stronger than those in Chapter 4; therefore, applying Theorem 2.2 of Chapter 4, we see that the FBSDE (1.1) has a unique adapted solution, which can be constructed via the Four Step Scheme. That is, the adapted solution (X, Y, Z) of (1.1) can be obtained in the following way:

(1.4)  X(t) = x + ∫_0^t b̃(s, X(s)) ds + ∫_0^t σ̃(s, X(s)) dW(s),
       Y(t) = θ(t, X(t)),
       Z(t) = σ(t, X(t), θ(t, X(t))) θ_x(t, X(t)),

where

b̃(t,x) = b(t, x, θ(t,x), σ(t, x, θ(t,x)) θ_x(t,x)),
σ̃(t,x) = σ(t, x, θ(t,x));

and θ ∈ C^{1+α, 2+α}, for some 0 < α < 1, is the unique classical solution to the quasilinear parabolic PDE

(1.5)  θ_t + (1/2) σ^2(t,x,θ) θ_xx + b(t, x, θ, σ(t,x,θ) θ_x) θ_x + b̂(t, x, θ, σ(t,x,θ) θ_x) = 0,  (t,x) ∈ (0,T) × R;
       θ(T,x) = g(x),  x ∈ R.

We should point out that, by using standard techniques for gradient estimates, that is, applying parabolic Schauder interior estimates to the difference quotients repeatedly (cf. Gilbarg & Trudinger [1]), it can be shown that under the assumptions (A1)-(A3) the solution θ of the quasilinear PDE (1.5) actually belongs to the space C^{2+α, 4+α}. Consequently, there exists a constant K > 0 such that

(1.6)  ‖θ‖_∞ + ‖θ_t‖_∞ + ‖θ_tt‖_∞ + ‖θ_x‖_∞ + ‖θ_xx‖_∞ + ‖θ_xxx‖_∞ + ‖θ_xxxx‖_∞ ≤ K.

Our line of attack is now clear: we shall first find a numerical scheme for the quasilinear PDE (1.5), and then a numerical scheme for the (forward) SDE (1.4). We should point out that, although the numerical analysis of quasilinear PDEs is not new, the special form of (1.5) has not been covered by existing results. In Section 2 we shall study the numerical scheme for the quasilinear PDE (1.5) in full detail, and in Section 3 we study the (strong) numerical scheme for the forward SDE in (1.4).

§2 Numerical Approximations of the Quasilinear PDE

In this section we study the numerical approximation scheme and its convergence analysis for the quasilinear parabolic PDE (1.5). We will first carry out the discussion for the special case completely, upon which the study of the general case will be built.

§ A special case

In this case the coefficients b and b̂ are independent of Z, and we only approximate (X, Y). Note that in this case the PDE (1.5), although still quasilinear, takes a much simpler form:

(2.1)  θ_t + (1/2) σ^2(t,x,θ) θ_xx + b(t,x,θ) θ_x + b̂(t,x,θ) = 0,  t ∈ (0,T);
       θ(T,x) = g(x),  x ∈ R.

Let us first standardize the PDE (2.1). Define u(t,x) = θ(T - t, x), and for φ = σ, b and b̂, respectively, define φ̃(t,x,y) = φ(T - t, x, y), ∀(t,x,y). Then u satisfies the PDE

(2.2)  u_t - (1/2) σ̃^2(t,x,u) u_xx - b̃(t,x,u) u_x - b̂̃(t,x,u) = 0;
       u(0,x) = g(x).

To simplify notation we write σ, b and b̂ for σ̃, b̃ and b̂̃ in the rest of this section. We first determine the characteristics of the first-order nonlinear PDE

(2.3)  u_t - b(t,x,u) u_x = 0.

Elementary theory of PDEs (see, e.g., John [1]) tells us that the characteristic equation of (2.3) is

det[ a_{ij} t'(s) - δ_{ij} x'(s) ] = 0,  s ≥ 0,

where s is the parameter of the characteristic and (a_{ij}) is the coefficient matrix built from -b(t,x,u) and -1. In other words, if we take the parameter to be s = t, then the characteristic curve C is given by the ODE

(2.4)  x'(t) = b(t, x(t), u(t, x(t))).

Further, if we let τ be the arclength of C, then along C we have dτ = [1 + b^2(t,x,u(t,x))]^{1/2} dt and ∂u/∂τ = (1/φ)(u_t - b u_x), where φ = φ(t,x,u) = [1 + b^2(t,x,u)]^{1/2}. Thus, along C, equation (2.2) is simplified to

(2.5)  φ(t,x,u) ∂u/∂τ = (1/2) σ^2(t,x,u) u_xx + b̂(t,x,u);   u(0,x) = g(x).
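Once θ and θ_x are available (in practice, as numerical approximations), step (1.4) of the Four Step Scheme is a plain Euler simulation of the modified forward SDE, with Y and Z read off along the path. The sketch below assumes callables theta and theta_x approximating the PDE solution of (1.5) and its x-derivative; all names and the Euler time-stepping choice are ours.

```python
import numpy as np

def four_step_simulation(theta, theta_x, b, sigma, x0, T, n, rng):
    """Simulate (1.4): X via Euler with drift b(t, x, theta, sigma*theta_x) and
    diffusion sigma(t, x, theta); then Y(t) = theta(t, X(t)) and
    Z(t) = sigma(t, X(t), Y(t)) * theta_x(t, X(t))."""
    dt = T / n
    t = np.linspace(0.0, T, n + 1)
    X = np.empty(n + 1)
    X[0] = x0
    dW = rng.normal(0.0, np.sqrt(dt), size=n)
    for k in range(n):
        th = theta(t[k], X[k])
        z = sigma(t[k], X[k], th) * theta_x(t[k], X[k])
        X[k + 1] = X[k] + b(t[k], X[k], th, z) * dt + sigma(t[k], X[k], th) * dW[k]
    Y = np.array([theta(tk, xk) for tk, xk in zip(t, X)])
    Z = np.array([sigma(tk, xk, yk) * theta_x(tk, xk) for tk, xk, yk in zip(t, X, Y)])
    return t, X, Y, Z
```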
We shall design our numerical scheme based on (2.5).

§ Numerical scheme

Let h > 0 and Δt > 0 be fixed numbers. Let x_i = ih, i = 0, ±1, ±2, ..., and t^k = kΔt, k = 0, 1, ..., N, where t^N = T. For a function f(t,x), let f^k(·) = f(t^k, ·), and let f_i^k = f(t^k, x_i) denote the grid values of the function f. Define for each k the approximate solution w^k by the following recursive steps.

Step 0: Set w_i^0 = g(x_i), i = ..., -1, 0, 1, ...; use linear interpolation to obtain a function w^0 defined for all x ∈ R.

Suppose that w^{k-1}(x) is defined for x ∈ R; let w_i^{k-1} = w^{k-1}(x_i) and

(2.6)  b_i^k = b(t^k, x_i, w_i^{k-1});   σ_i^k = σ(t^k, x_i, w_i^{k-1});   b̂_i^k = b̂(t^k, x_i, w_i^{k-1});
       x̃_i^k = x_i - b_i^k Δt;   w̃_i^{k-1} = w^{k-1}(x̃_i^k);
       δ_x^2(w)_i^k = h^{-2} [ w_{i+1}^k - 2 w_i^k + w_{i-1}^k ].

Step k: Obtain the grid values of the k-th step approximate solution, denoted by {w_i^k}, via the following difference equation:

(2.7)  (w_i^k - w̃_i^{k-1}) / Δt = (1/2) (σ_i^k)^2 δ_x^2(w)_i^k + b̂_i^k,   -∞ < i < ∞.

Since by our assumption σ is bounded below by a positive constant and b̂ and g are bounded, there exists a unique bounded solution of (2.7) as soon as an evaluation is specified for w^{k-1}(x). Finally, we use linear interpolation to extend the grid values {w_i^k}_{i=-∞}^{∞} to all x ∈ R to obtain the k-th step approximate solution w^k(·).

Before we carry out the convergence analysis for this numerical scheme, let us point out a standard localization idea which is essential in our future discussion, both theoretically and computationally. We first recall from Chapter 4 that the (unique) classical solution of the Cauchy problem (2.2) (therefore (2.5)) is in fact the uniform limit, as R → ∞, of the solutions {u^R} of the initial-boundary value problems

(2.2)_R  u_t - (1/2) σ^2(t,x,u) u_xx - b(t,x,u) u_x - b̂(t,x,u) = 0,  |x| < R, 0 < t ≤ T;
         u(0,x) = g(x);   u(t,x) = g(x), |x| = R, 0 < t ≤ T.

It is conceivable that we can also restrict the corresponding difference equation (2.7) to -i_0 ≤ i ≤ i_0 for some i_0 < ∞. Indeed, if we denote by {w_i^{i_0,k}} the solution of the localized difference equation

(2.7)_{i_0}  (w_i^k - w̃_i^{k-1}) / Δt = (1/2) (σ_i^k)^2 δ_x^2(w)_i^k + b̂_i^k,   -i_0 < i < i_0;
             w_i^0 = g(x_i), -i_0 ≤ i ≤ i_0;   w_{±i_0}^k = g(x_{±i_0}), k = 0, 1, 2, ...,

then by (A1) and (A2) one can show that w_i^k is the uniform limit of {w_i^{i_0,k}} as i_0 → ∞, uniformly in i and k. In particular, if we fix the mesh size h > 0 and let R = i_0 h, then the quantities

(2.8)  max_i |u(t^k, x_i) - w_i^k|   and   max_{-i_0 ≤ i ≤ i_0} |u^R(t^k, x_i) - w_i^{i_0,k}|

differ only by an error that is uniform in k and can be made arbitrarily small as i_0 (or i_0 h = R) becomes sufficiently large. Consequently, as we shall see later, if for fixed h and Δt we choose R (or i_0) so large that the two quantities in (2.8) differ by O(h + Δt), then we can replace (2.2) by (2.2)_R and (2.7) by (2.7)_{i_0} without changing the desired results on the rate of convergence. On the other hand, since for the localized solutions the error |u^R(t^k, x_{±i_0}) - w_{±i_0}^{i_0,k}| = 0 for all k = 0, 1, 2, ..., the maximum absolute error |u^R(t^k, x_i) - w_i^{i_0,k}|, i = -i_0, ..., i_0, will always occur at an "interior" point of (-R, R). Such an observation will be particularly useful when a maximum-principle argument is applied (see, e.g., Theorem 2.3 below). Based on the discussion above, from now on we will use the localized versions of the solutions to (2.2) and (2.7) whenever necessary, without further specification.

To conclude this subsection we note that the approximate solutions {w^k(·)} are defined only at the times t = t^k, k = 0, 1, ..., N.
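A compact sketch of one way to implement (2.6)-(2.7) on the localized grid (2.7)_{i_0}: shift the previous layer along the characteristic (the semi-Lagrangian step x̃_i = x_i - b_i^k Δt), then solve the implicit tridiagonal system for the diffusion and source terms, with boundary values frozen at g. The coefficient callables are assumed to be vectorized over the grid, and R is assumed to be a multiple of h; all function and variable names are ours, not the book's.

```python
import numpy as np

def fbsde_pde_scheme(sigma, b, b_hat, g, T, R, h, dt):
    """One realisation of the scheme (2.6)-(2.7) on [-R, R] with w = g on |x| = R."""
    x = np.arange(-R, R + h / 2, h)
    n_steps = int(round(T / dt))
    w = g(x)                                        # step 0: w^0_i = g(x_i)
    for k in range(1, n_steps + 1):
        t_k = k * dt
        b_k = b(t_k, x, w)                          # coefficients frozen at w^{k-1}
        s_k = sigma(t_k, x, w)
        bh_k = b_hat(t_k, x, w)
        w_shift = np.interp(x - b_k * dt, x, w)     # w~^{k-1}_i = w^{k-1}(x_i - b_i^k dt)
        # implicit step: (1 + 2r_i) w_i - r_i w_{i-1} - r_i w_{i+1} = w_shift_i + dt * bh_i
        r = 0.5 * dt * s_k**2 / h**2
        A = np.diag(1.0 + 2.0 * r) + np.diag(-r[1:], -1) + np.diag(-r[:-1], 1)
        A[0, :] = 0.0; A[0, 0] = 1.0                # keep boundary rows as identity
        A[-1, :] = 0.0; A[-1, -1] = 1.0
        rhs = w_shift + dt * bh_k
        rhs[0], rhs[-1] = g(x[0]), g(x[-1])
        w = np.linalg.solve(A, rhs)
    return x, w                                     # approximation of u(T, .)
```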
An approximate solution defined on [0,T] × R is defined as follows: for given h > 0 and Δt > 0,

(2.9)  w^{h,Δt}(t,x) = Σ_{k=1}^{N} w^k(x) 1_{(t^{k-1}, t^k]}(t),  t ∈ (0,T];   w^{h,Δt}(0,x) = w^0(x).

Clearly, for each k and i, w^{h,Δt}(t^k, x_i) = w_i^k, where {w_i^k} is the solution to (2.7).

§ Error analysis

We first analyze the approximate solution {w^k(·)}. To begin with, let us introduce some notation: for each k and i, let

(2.10)  x̃_i^k := x_i - b(t^k, x_i, u_i^{k-1}) Δt,   ũ_i^{k-1} := u^{k-1}(x̃_i^k),  [...]

where u_i^{k-1} = u^{k-1}(x_i), and b(u)_i^k and σ(u)_i^k correspond to b_i^k and σ_i^k defined in (2.6), except that the values {w_i^{k-1}} are replaced by {u_i^{k-1}}; e_i^k is the error term to be estimated. We have the following lemma.

Lemma 2.1. There exists a constant C > 0, depending only on b, b̂, σ, T, and the constant K in (1.6), such that for all k = 0, ..., N and -∞ < i < ∞, |e_i^k| ≤ C(h + Δt). [...]

[...] by virtue of Theorem 2.2 and the definitions of h and Δt. This proves (3). To show (2), let n and t be fixed, and assume that t ∈ (t^k, t^{k+1}]. Then u^{(n)}(t,x) = w^k(x) is obviously Lipschitz in x, so it remains to determine the Lipschitz constant of every w^k. Let x^1 and x^2 be given. We may assume that x^1 ∈ [x_i, x_{i+1}) and x^2 ∈ [x_j, x_{j+1}), with i ≤ j. For i ≤ ℓ ≤ j - 1, [...]

[...] b_0(t,x,y,z) and b̂_0(t,x,y,z), obtained by composing b and b̂ with z ↦ -σ(t,x,y)z in the last argument. One can check that, if σ, b and b̂ satisfy (A1)-(A3), then so do the functions σ, b_0 and b̂_0. Further, if we again set u(t,x) = θ(T-t,x), ∀(t,x), then (2.25) becomes

(2.27)  u_t = (1/2) σ^2(t,x,u) u_xx + b_0(t,x,u,u_x) u_x + b̂_0(t,x,u,u_x);   u(0,x) = g(x).

We will again drop the tilde sign in the sequel. Now define v(t,x) = u_x(t,x). Using a standard "difference quotient" argument (see, e.g., Gilbarg & Trudinger [1]), [...]

[...] Let τ_1 and τ_2 be the arc-lengths along C_1 and C_2, respectively. Then dτ_1 = φ_1(t, x_1(t)) dt and dτ_2 = φ_2(t, x_2(t)) dt, where

φ_1 = [1 + b_0^2(t,x,u(t,x),v(t,x))]^{1/2};   φ_2 = [1 + B_0^2(t,x,u(t,x),v(t,x))]^{1/2}.

Thus, along C_1 and C_2, respectively, (2.28) can be simplified to

(2.30)  φ_1 ∂u/∂τ_1 = (1/2) σ^2(t,x,u) u_xx + b̂_0(t,x,u,v);   [...]

§ Numerical scheme

For any n ∈ N, let Δt = T/n. Let h > 0 be given. Let t^k = kΔt, k = 0, 1, 2, ..., and [...] extend U^0 and V^0 to all x ∈ R by linear interpolation. Next, suppose that U^{k-1} and V^{k-1} are defined such that U^{k-1}(x_i) = U_i^{k-1}, V^{k-1}(x_i) = V_i^{k-1}, and let

(2.31)  (b_0)_i^k = b_0(t^k, x_i, U_i^{k-1}, V_i^{k-1});   (B_0)_i^k = B_0(t^k, x_i, U_i^{k-1}, V_i^{k-1});   σ_i^k = σ(t^k, x_i, U_i^{k-1});
        x̃_i^k = x_i + (b_0)_i^k Δt;   x̂_i^k = x_i + (B_0)_i^k Δt,  [...]

[...] u_i^k = u(t^k, x_i) and v_i^k = v(t^k, x_i) denote the true solution of (2.29), with

x̃_i^k = x_i + b_0(t^k, x_i, u_i^{k-1}, v_i^{k-1}) Δt;   x̂_i^k = x_i + B_0(t^k, x_i, u_i^{k-1}, v_i^{k-1}) Δt.

Also, σ(u)_i^k, b_0(u,v)_i^k and B_0(u,v)_i^k are analogous to σ_i^k, (b_0)_i^k and (B_0)_i^k, except that U_i^{k-1} and V_i^{k-1} are replaced by u_i^{k-1} and v_i^{k-1}. Estimating the errors {(e_u)_i^k} and {(e_v)_i^k} in the same fashion [...]

[...] in y and z, one shows that (2.35) |(I_1)_i^k| ≤ [...] K + 1, and the truncation has a derivative bounded by some (generic) constant C > 0. Define B_0^K(t,x,y,z) and b̂_0^K(t,x,y,z) by composing B_0 and b̂_0 in their z-argument with this truncation; then B_0^K and b̂_0^K are uniformly bounded and uniformly Lipschitz in all variables [...]
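Evaluating the reconstructed approximation (2.9) just means picking the layer w^k whose left-open time interval (t^{k-1}, t^k] contains t and interpolating linearly in x. A small helper, assuming the layers produced by a scheme like (2.7) are stored in a list; the names are ours:

```python
import numpy as np

def w_h_dt(t, x, w_layers, x_grid, dt):
    """Evaluate w^{h,dt}(t, x) as in (2.9): piecewise constant in time over
    the intervals (t^{k-1}, t^k], piecewise linear in space; w_layers[k] = {w^k_i}."""
    k = 0 if t <= 0.0 else int(np.ceil(t / dt))
    k = min(k, len(w_layers) - 1)
    return np.interp(x, x_grid, w_layers[k])
```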
k 1 k 1 ' (h + At), Vk, i. (2.36) 1(4)}1 < C~{l~ I+ I~:~ I} +63 Use of the maximum principle and the. xh~(x) - h(x) 7_ O, and that § and 5 are true interest rate and volatility such that they are {~'t}t_>o-adapted ran- dom fields satisfying ~(t,x) > r(t,x), and ~(t,x) > ~(t,x),
