Discussion on Lecture 09

Dear all,
during the preparation of my presentation of Lecture 9 I stumbled over two things I wanted to share (resp. was asked to share):
1) I was wondering how asking u to be “causal”, together with the absence of a right-hand side with values on (−∞,0), enforces u≡0 on (−∞,0) in the initial value problem in the introduction.
Moreover, I was not 100% sure how to write that initial value problem as an evolutionary equation in our setting right off the bat (although the formulation of Thm. 9.4.1 later gives a pretty clear hint).
2) Is there any short reasoning why the operators (1+ε∂_{t,ν}^*)^{−1} and (∂_{t,ν}M(∂_{t,ν}))^* commute in the proof of Thm. 9.3.2, in the first line of the computation after “Thus, (9.3) implies (ii)”?
Thank you for your comments
Best regards
Boy
Marcus Waurick, 2020/01/07 11:52
Dear Boy,
thank you very much for your comments.
1) Since there is no data on (−∞,0), u should vanish there. Since we ask u to satisfy the initial value at 0, u cannot vanish identically. Next, the differential equation satisfied by u on (0,∞) is u′=0, so u is constant there. The initial value problem, written as an evolutionary problem, can now be read off from Theorem 9.4.1 in the particular case M_0=1, M_1=0, A=0. Hence,
∂_{t,ν}(u − u_0 1_{[0,∞)}) = 0
or
(∂_{t,ν})_{−1} u = δ_0 u_0.
2) One way to see this is to apply the Fourier–Laplace transformation and to convince oneself that (∂_{t,ν}M(∂_{t,ν}))^* on the Fourier–Laplace transformed side is the operator of multiplication by the function
t ↦ (M(it+ν))^* (−it+ν)  on L_2(ℝ;H).
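A finite-dimensional sanity check of the commutation (purely illustrative; the grid, the random matrix-valued M, and the values of ν and ε are my own assumptions, not from the notes): on the transformed side both operators act pointwise in t, one as a scalar multiplier and one as an operator-valued multiplier, and such multiplications commute at every point.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 8, 3            # n frequency points; operator values act on C^d (illustrative)
nu, eps = 1.0, 0.1     # hypothetical parameter values
t = np.linspace(-4.0, 4.0, n)

# scalar multiplier corresponding to (1 + eps * ∂_{t,ν}^*)^{-1}:
s = 1.0 / (1.0 + eps * (-1j * t + nu))

# operator-valued multiplier corresponding to (∂_{t,ν} M(∂_{t,ν}))^*,
# i.e. t ↦ (M(it+ν))^* (−it+ν), with M a random matrix-valued function:
M = rng.standard_normal((n, d, d)) + 1j * rng.standard_normal((n, d, d))
B = np.conj(np.transpose(M, (0, 2, 1))) * (-1j * t + nu)[:, None, None]

# both multipliers act pointwise in t, so the compositions agree:
AB = s[:, None, None] * B
BA = B * s[:, None, None]
assert np.allclose(AB, BA)
```

The point is simply that a scalar-valued multiplier commutes with any operator-valued multiplier at each frequency point.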
Does this help?
Best regards,
MM
B. W. Schultz, 2020/01/08 10:35
Dear Marcus,
yes it was, thank you very much.
Best regards,
Boy
Gabriel McCracken, 2019/12/20 13:51
Dear all,
Firstly, I'd like to ask a more general question which came up during our discussion:
In practice, when computing the extension, at some point one probably has to compute an adjoint explicitly and/or a Riesz representative of some functional. So the question is, given some point u∈H0 where the closure of C is defined, is it easier to compute the closure at u or to compute the extension evaluated at u? The question is in particular to be seen with respect to examples as in Theorem 9.3.2.
Secondly, as someone who isn't familiar with the extension construction given, I want to vent some thoughts. I'll denote the inclusions ι_0: dom(C) → H_0 and ι_1: dom(C^*) → H_1. Both are bounded, as the domains are equipped with graph norms; however, the left-inverse restrictions r_0: dom(C) ⊆ H_0 → dom(C) (and r_1 similarly) are in general not continuous, as here the domain side is equipped with the usual norm of H_0 resp. H_1. We have the following commuting diagram, with ι_0, ι_1′ bounded:
dom(C) ---C---> H_1 ---R_{H_1}^{-1}---> H_1'
  |                                       |
  | ι_0                                   | ι_1'
  v                                       v
 H_0 -------------C_{-1}-------------> dom(C^*)'
Hence, I have a natural way to extend by bounded injections in domain and codomain, but do not have a natural way to restrict back by bounded surjections. This feels like a bad thing to have, as I'd expect the operator norm of the extension to only grow, and here I don't really see how operator norms might relate.
Christian Seifert, 2019/12/21 18:47
Dear Gabriel,
Concerning extension vs. closure: typically it is rather difficult to compute closures of sums (especially their domains). Using the extensions shifts the problem to computing the extrapolations. These extrapolations of A, however, are sometimes known: if you think of differential operators, you can sometimes just shift the regularity of the objects (from H^1 and L^2 to L^2 and H^{−1}, say, in terms of Sobolev spaces; here you can use the theory of distributions and Sobolev spaces). So it is sometimes not just a matter of taste, but actually convenient.
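As a concrete illustration (my own example, not from the notes): take C = grad with dom(C) = H^1(Ω) ⊆ L^2(Ω) = H_0, so that C^* = −div with suitable boundary conditions. The canonical extension is then given by pairing against dom(C^*), i.e. it acts as the distributional gradient:

```latex
% Canonical extension of C = \operatorname{grad} (illustrative sketch):
C_{-1}\colon L^2(\Omega) \to \operatorname{dom}(C^*)',
\qquad
(C_{-1}u)(\psi) := \langle u, C^*\psi\rangle_{L^2(\Omega)}
  = -\int_\Omega u\,\operatorname{div}\psi\,\mathrm{d}x .
```

Here no closure of any operator sum has to be computed; the extrapolation is just the known weak gradient.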
Best, Christian
Gabriel McCracken, 2019/12/20 13:04
Dear all,
here are some little comments from the team Darmstadt:
1. In the context of Proposition 9.1.1, the + in f(0+) seems superfluous, at least for f ∈ dom(∂_{t,ν}), by the Sobolev embedding. We understand that this is meant with a view towards Thm 9.4.1, but maybe that's worth a comment at that point.
2. The notation H^{−1}(C) is meant with a view towards Example 9.2.4, but as someone who hasn't seen these things before (not speaking for my colleagues), I don't see the advantage over writing dom(C)′.
3. On the one hand, doing half of the proof of Prop. 9.2.2. (b) in stating (b) feels a bit clumsy, but on the other hand, it's important to state how this inclusion of operators is meant, especially where the Riesz iso comes in. (More on that topic in a comment below.) We couldn't really decide how to resolve that issue “perfectly”.
4. We had a small discussion whether it's better to write 1H or idH (cf. Thm. 9.2.6), but since a “constant 1-function on H” probably won't get any application, the confusion should be non-existent. Plus, 1H is probably better from a functional calculus perspective.
5. The position of the “widetilde” in Proposition 9.2.8 is a bit unfortunate; maybe it's better to write (H̃, ‖T^{−1}·‖_H) and “~(·) denotes the completion w.r.t. ‖T^{−1}·‖_H”. Also, we discussed whether an analogous statement holds if we replace 0∈ρ(T) by T injective, but Sebastian's more of an expert on that topic than I am.
6. On page 114 bottom and 115 top you probably want the convergence to be “strongly” instead of “stongly” :)
Best,
Gabriel
Sebastian Bechtel, 2019/12/20 18:34
To point 5: As written there, one really needs 0∈ρ(T). However, there is a rich extrapolation theory for, say, sectorial operators, which only needs that the operator is injective.
Best, Sebastian
Christian Seifert, 2019/12/21 18:39
Dear Gabriel (and Sebastian),
Thanks for the comments from Darmstadt.
to 1: You are right; we wanted to have the formulation for the ODE case in Section 9.1 in the spirit of Theorem 9.4.1 later.
to 2: There are two advantages. First, the one you mentioned: the consistency of the notation with the scale of Sobolev spaces (with integer exponents). Second, if you have a look at Exercise 9.6, you see that one can define higher-order versions, i.e. H^{−n}(C) for n∈ℕ. There, the notation is much handier.
to 3: I agree. However, one needs some explanation for the statement C^* ⊆ C^⋄, which we preferred to put into the formulation of (b).
to 4: We had the same discussion. However, we already use 1 at various places and wanted to keep the notation consistent, if possible.
to 5: The widetilde is supposed to “act” on (H, ‖T^{−1}·‖_H) (we complete the normed space, resp. the corresponding metric space). Nevertheless, the positioning could be improved.
to 6: Thanks for spotting the typo.
Best, Christian
Sebastian Bechtel, 2019/12/19 15:54, 2019/12/19 15:55
Dear all,
a short comment on the condition for initial values. You demand that the initial values are in dom(A). It is then a natural question how far away from optimal this condition is. Often, your A is a block matrix consisting of first-order differential operators. In this case dom(A) should agree with the (1/2,2)-real interpolation space between the full differential operator domain for the equation and the ground space, provided the setting is sufficiently regular.
Best,
Sebastian
Christian Seifert, 2019/12/21 18:22, 2019/12/21 18:22
Dear Sebastian,
I guess you think of rewriting the Laplacian Δ = div grad as a 2×2 block operator A, as we usually do, right?! Of course, there is then a relation between dom(A) and the domain of Δ. However, note that A then acts on two-dimensional vectors of functions, so the relation via interpolation you mention just covers one of the components.
Best, Christian
Sascha Trostorff, 2019/12/16 20:40
Dear all,
during our discussion in Kiel today, we found a small gap in the proof of Theorem 9.4.1, (i) ⇒ (ii): it is unclear why the limit at the very end of the proof of the implication (i) ⇒ (ii) should hold for ψ ∈ H^1_ν(ℝ;H) ∩ L_{2,ν}(ℝ;dom(A)). Alternatively, I suggest to first note that, by density, it suffices to prove the assertion for ψ ∈ C_c^∞(ℝ;dom(A)), and then to do the calculation for those functions ψ. Then the last limit is no problem, as ψ is bounded in dom(A) on compact sets.
Best regards
Sascha
Jürgen Voigt, 2019/12/15 00:17
Dear Marcus, dear Sascha,
I cannot refrain from a further comment on your “definition” on p.106, line -1. It just became clear to me that this is not well-defined. Let λ∈K, η∈H′. Then I can use the formula with φ=η, ψ=0 and obtain (λη):=λ∗η. On the other hand, I can use the formula with φ=0 and ψ=λη (after all, this is an element of H′), and obtain (λη):=λη.
What do you say now?
Best wishes, Jürgen
Sascha Trostorff, 2019/12/15 13:23
Dear Jürgen,
thanks for that enlightening example. So, yes, we will use a different symbol for the scalar multiplication in H′ in the final lecture notes.
Best regards
Sascha
Jürgen Voigt, 2019/12/15 15:05, 2019/12/21 18:31
EDIT
Dear all, this was my last post concerning the definition on line -1 of p.106. I suggested to the ISemTeam to discard my previous contributions concerning this topic, in order to make the forum less confusing.
End EDIT
Dear Sascha,
thanks! I have to confess that meanwhile Hendrik convinced me (and I convinced myself) that there seems to be no formal problem with your definition. My false trick (= mistake) in my above post is that I wrote
(λη)(x):=λ∗η(x) as (λη):=λ∗η. However, your intention is (correctly!) to read λ∗η(x) as λ∗(η(x)). And there is no problem with the second equation (λη):=λη; applied to x it just says that (λη)(x):=(λη)(x). All my “contradictory examples” were of this kind, I think. Sorry for all the confusing discussion!
For my defence I want to mention that from the start I had in mind that you can do a similar redefinition of the scalar multiplication in arbitrary vector spaces. However, in this context one would have to use the original scalar multiplication and define λ@x:=λ∗x. However, your definition uses the fact that the vector space is a function space, and does not use directly the scalar multiplication in the function space.
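Indeed, a quick check (my own addition, not in the original post) that λ@x := λ^*x again satisfies the scalar multiplication axioms on any complex vector space:

```latex
\lambda @ (\mu @ x) = \lambda^*(\mu^* x) = (\lambda\mu)^* x = (\lambda\mu) @ x,
\qquad 1 @ x = 1^* x = x,
\qquad \lambda @ (x+y) = \lambda^* x + \lambda^* y = \lambda @ x + \lambda @ y .
```

The resulting space is the conjugate vector space of the original one.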
Best regards, Jürgen
Sascha Trostorff, 2019/12/15 15:25
Dear Jürgen,
well, that’s true; however, you still have the already defined linear structure on operators, which, I think, indeed causes a problem, since you now have two definitions for the expression λφ. So again, we should use a different symbol.
Best regards
Sascha
Hendrik Vogt, 2019/12/16 09:37
Dear Sascha,
instead of working with this strange new scalar multiplication, why don't you work with the anti-dual space, provided with the usual linear structure? Would this lead to problems later on?
Best wishes, Hendrik
Sascha Trostorff, 2019/12/16 09:59
Dear Hendrik,
no, that would not cause any problem, I think. Indeed, I had the same idea yesterday after the discussion with Jürgen.
Best regards
Sascha
Hendrik Vogt, 2019/12/14 11:21
Dear ISem-Team,
I've got a few questions regarding Lecture 9 (which is the lecture I was most looking forward to, because I like initial values).
1. Before Proposition 9.2.2 it is said that C⋄ and the “dual operator” of C agree up to the Riesz isomorphism. Can you please say which dual operator is meant, precisely?
2. In Example 9.2.3 it is said that (∂_{t,ν})_{−1} acts as the distributional derivative “taking into account the exponential weight”. (By the way, the “f” in “(∂_{t,ν})_{−1}f” should be removed, right?) I'm not sure what is meant here. The operator (∂_{t,ν})_{−1} just acts as the distributional derivative on all functions in L_{2,ν}(ℝ), doesn't it?
3. In the proof of Theorem 9.2.6, (i)⇒(ii), I don't understand why you argue like this. Why not say
A_{−1}x + B_{−1}x = lim_{n→∞} (A_{−1}x_n + B_{−1}x_n) = lim_{n→∞} ⟨(A+B)x_n, ·⟩|_{dom(A^*+B^*)} = ⟨f, ·⟩|_{dom(A^*+B^*)}?
4. I can't really parse the sentence after Remark 9.4.2: what does the “only” refer to? Should I read “with the help of our theory only” or “only for L2,ν-right-hand sides”?
Best wishes, Hendrik
Sascha Trostorff, 2019/12/14 20:56
Dear Hendrik,
I hope I can answer your questions properly.
1. If you consider C as a bounded operator from dom(C) to H_1, the usual dual operator C′ would be a bounded operator from H_1′ to dom(C)′. If you now identify H_1′ with H_1 via the Riesz isomorphism, you end up with the definition of C^⋄.
2. Well, the operator (∂_{t,ν})_{−1} is defined on all L_{2,ν}-functions, but it is not the usual distributional derivative of functions in L_{1,loc}, since the pairing with a test function is not via the usual L_2 inner product, but with respect to the exponentially weighted inner product (see the formula before the sentence you cited).
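Spelled out (a sketch, not from the notes; φ taken real-valued to avoid conjugates, with ∂ denoting the usual distributional derivative):

```latex
% For f \in \operatorname{dom}(\partial_{t,\nu}) and
% \varphi \in C_c^\infty(\mathbb{R}), integration by parts against the
% weight e^{-2\nu t} gives
\langle \partial_{t,\nu} f, \varphi \rangle_{L_{2,\nu}}
  = \int_{\mathbb{R}} f'(t)\,\varphi(t)\,e^{-2\nu t}\,\mathrm{d}t
  = -\int_{\mathbb{R}} f(t)\,\bigl(\varphi(t)\,e^{-2\nu t}\bigr)'\,\mathrm{d}t
  = (\partial f)\bigl(e^{-2\nu\cdot}\varphi\bigr).
```

So the exponential weight enters the pairing exactly as described.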
3. Thank you very much, you are completely right.
4. The “only” refers to L_{2,ν} right-hand sides. As Theorem 9.4.1 (ii) shows, one does not need the theory of extrapolation spaces to formulate initial value problems within the framework of evolutionary equations. However, this is only possible if the initial datum belongs to dom(A). For initial values in H, one has to deal with the extrapolated solution operator as in Theorem 9.4.1 (iii).
I hope this answers your questions.
Best regards
Sascha
Jürgen Voigt, 2019/12/14 22:55
Dear Sascha,
let me just add to your answer to Hendrik's point 1 that in the lecture you should have written, more precisely, “dual operator of C: dom(C) → H_1”, and to make the “up to” more precise: one has the formula C^⋄ = C′ R_{H_1}^{−1}. (I hope I got this correct!)
And let me point out a typo: On the last line on p.107 one should have “dom(C)′” in the index at the last norm.
Best wishes, Jürgen
Sascha Trostorff, 2019/12/15 13:10
Dear Jürgen,
thanks a lot, you are completely right.
Best regards
Sascha
Hendrik Vogt, 2019/12/15 11:35
Dear Sascha,
thanks a lot for your answers!
1. It did already help a lot to know that I should consider C as a bounded operator from dom(C) to H_1. (Just from looking at the lecture, the option “the dual operator of the unbounded operator C from H_0 to H_1” would also have been possible.) I agree with Jürgen that an explicit formula is helpful, so thanks, Jürgen!
2. OK, I guess I got it now – thanks! Let me say it in other words: “somehow”, (∂_{t,ν})_{−1}f should be thought of as the distributional derivative of f, but it's a functional on dom(∂_{t,ν}), and if I plug in a test function φ ∈ C_c^∞(ℝ), then I obtain (∂_u f)(e^{−2ν·}φ), where ∂_u f is the usual distributional derivative of f. But this factor e^{−2ν·} is to be expected because it corresponds to the weighted inner product. Correct?
4. Phew, I'm still not sure if I get it. What I would understand is the sentence “The reformulation of initial value problems given in Theorem 9.4.1 is only possible if the initial value is in dom(A).” Is this what is meant? In the sentence in the lecture, I don't understand what “for L_{2,ν}-right-hand sides” refers to (which right-hand sides?), and I'm also not sure about the meaning of “with the help of our theory”. So I'd be happy if you could say whether my above interpretation of the sentence is correct.
Best wishes, Hendrik
Sascha Trostorff, 2019/12/15 13:09
Dear Hendrik,
2. Yes, that is exactly what we meant.
4. Okay, if the initial value lies in the domain of A, then (M_1+A)1_{[0,∞)}U_0 belongs to L_{2,ν}, and so a solution can already be obtained with the help of Picard's Theorem. Only for general initial values do you need the theory developed in Lecture 9.
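For completeness, the L_{2,ν}-membership is a one-line computation (a sketch, assuming ν > 0 and that M_1 is bounded, so that (M_1+A)U_0 ∈ H for U_0 ∈ dom(A)):

```latex
\bigl\| (M_1+A)\,\mathbb{1}_{[0,\infty)}U_0 \bigr\|_{L_{2,\nu}}^2
  = \int_0^\infty \bigl\| (M_1+A)U_0 \bigr\|_H^2 \, e^{-2\nu t}\,\mathrm{d}t
  = \frac{\bigl\| (M_1+A)U_0 \bigr\|_H^2}{2\nu} \;<\; \infty .
```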
Does this clarify the matter?
Best regards
Sascha
Hendrik Vogt, 2019/12/16 08:54, 2019/12/16 19:30
Dear Sascha,
2. thanks for the confirmation!
4. Ah, thanks – now, after the 30th reading or so, I think I can finally parse that sentence. Here's my attempt:
… reformulate IVPs with the help of our theory only for L2,ν-right-hand sides given …
Is this correct? If yes, then I don't think that this is a valid sentence structure! But I'm happy if I finally got it.
Best wishes, Hendrik
Sascha Trostorff, 2019/12/16 10:01
Dear Hendrik,
this is how you could/should read it.
Best regards
Sascha
Marcus Waurick, 2019/12/13 14:41
Dear Jürgen,
thank you for your comments. You are right; we have, in fact, unintentionally omitted some parentheses. The definition of the scalar multiplication of functionals should have been
λφ := (H ∋ x ↦ λ^*(φ(x))).
In this way, we circumvent the double meaning of (λφ)(x) = λ^*φ(x): we interpret the right-hand side as λ^*(φ(x)), so that there is no notational ambiguity from λφ and λ^*φ possibly meaning the same thing.
Due to the above-mentioned omission, we led you to write (λφ) := λ^*φ, which was not what we intended; we are sorry for this.
I hope that the above definition resolves the ‘catastrophe’ you suggested we had been causing.
Best regards,
Marcus
Jürgen Voigt, 2019/12/13 23:00
Dear Marcus,
sorry, but your definition at the beginning of your post still comes down to
(λφ)(x) := λ^*φ(x)
(because λ^*(φ(x)) = λ^*φ(x) by the usual conventions), and omitting arguments you are again back to (λφ) := λ^*φ, where on the r.h.s. the usual meaning is intended. As I wrote to Sascha, one simply should not use notation which has different meanings in different “worlds”, but where you cannot recognise from the notation in which world it should be interpreted.
Best wishes, Jürgen
Jürgen Voigt, 2019/12/13 12:20, 2019/12/13 12:21
Dear virtual lecturers,
concerning the definition of the linear structure of H′ on p.106, line -1:
I think one can define the linear structure as you propose, but the notation you suggest is (IMHO) a catastrophe. What you define is (leaving aside the evaluation at the elements x)
(λφ):=λ∗φ,
and addition is as usual. What would then ((λφ)) be? On the one hand, I could argue ((λφ))=(1(λφ))=(λφ)=λ∗φ, but also ((λφ))=(λ∗φ)=λφ. The problem is that parentheses do not lose their usual meaning! You simply have to use an extra symbol for the new scalar multiplication, for instance
λ@φ:=λ∗φ.
Best wishes, Jürgen
Sascha Trostorff, 2019/12/13 14:40
Dear Jürgen,
I am not sure if I can follow your argument. With the definition of the linear structure we just re-define the meaning of λφ (with or without parentheses). So I do not understand what ((λφ)) should mean (besides its natural meaning of λφ). Nevertheless, I agree that it is a good idea to use a different symbol for the scalar multiplication, since we stay with the usual linear structure on L(H_0,H_1), which leads to an inconsistency in the case H_1 = ℂ.
Best regards
Sascha
Jürgen Voigt, 2019/12/13 22:30
Dear Sascha,
I was under the impression that the parentheses on the left hand side of your definition should belong to the definition. If not, then for ψ=0 the definition reads
λφ(x):=λ∗φ(x),
which I find even stranger, and omitting the evaluation you arrive at λφ := λ^*φ. My point is that λφ is already defined, and in fact to define the new scalar multiplication you use the established definition. And then you cannot decide from the notation in which of the ‘worlds’ you live, and this is confusing.
Best wishes, Jürgen
discussion/lecture_09.txt · Last modified: 2019/10/21 17:01 by matcs
Discussion on Lecture 09
Dear all,
during the preparation of my presentation of lecture 9 I stumbled over two things I wanted to (resp. was asked to) share:
1) I was wondering how asking u to be “causal” together with the absence of a right hand side with values on (−∞,0) enforces u≡0 on (0,∞) in the initial value problem in the introduction. Moreover, I was not 100% sure how to write that initial value problem as an evolutionary equation in our setting right off the bat (although later the formulation of thm. 9.4.1. gives a pretty clear hint).
2) Is there any short reasoning, why the operators (1+ε∂∗t,ν)−1 and (∂t,νM(∂t,ν))∗ commute in the proof of thm. 9.3.2. in the first line of computation after “the Thus, (9.3.) implies (ii)”?
Thank you for your comments
Best regards
Boy
Dear Boy,
thank you very much for your comments.
1) Since there is no data on (−∞,0), u should vanish on this part. Since we ask for u to satisfy the initial value at 0, u cannot vanish identically. Next, the differential equation satisfied by u on (0,∞) is u′=0. Hence, u needs to be constant there. The initial value problem written as an evolutionary problem can now be read off by Theorem 9.4.1 in the particular case M0=1,M1=0,A=0. Hence, ∂t,ν(u−u01[0,∞))=0 or (∂t,ν)−1u=δ0u0.
2) One way to spot this is to apply the Fourier–Laplace transformation and to convince oneself that (∂t,νM(∂t,ν))∗ on the Fourier–Laplace transformed side is given by the multiplication operator of multiplying by the function t↦(M(it+ν))∗(−it+ν)on L2(R;H).
Does this help?
Best regards,
MM
Dear Marcus,
yes it was, thank you very much.
Best regards,
Boy
Dear all,
Firstly, I'd like to ask a more general question which came up during our discussion: In practice, when computing the extension, at some point one probably has to compute an adjoint explicitly and/or a Riesz representative of some functional. So the question is, given some point u∈H0 where the closure of C is defined, is it easier to compute the closure at u or to compute the extension evaluated at u? The question is in particular to be seen with respect to examples as in Theorem 9.3.2.
Secondly, as someone who isn't familiar with the extension construction given, I want to vent some thoughts. I'll denote the inclusions ι0:dom(C)→H0 and ι1:dom(C∗)→H1. Both are bounded as the domains are equipped with graph norms, however the left-inverse restrictions r0:dom(C)⊆H0→dom(C), r1 similarly, are only continuous in general, as here the left hand side is equipped with the usual norm in H0 resp. H1. We have the following commuting diagram with ι0,ι′1 bounded: dom(C)C⟶H1R−1H1⟶H′1↓ι0↓ι′1H0C−1⟶dom(C∗)′ Hence, I have a natural way to extend by bounded injections in domain and codomain, but do not have a natural way to restrict back by bounded surjections. This feels like a bad thing to have, as I'd expect the operator norm of the extension to only grow, and here I don't really see how operator norms might relate.
Dear Gabriel,
Concerning extension vs. closure: Typically it is rather difficult to compute closures of sums (especially their domains). Using the extensions shifts the problem to compute the extrapolations. These extrapolations of A, however, are sometimes known: If you think of differential operators, you can sometimes just shift the regularity of the objects (from H1 and L2 to L2 and H−1) say in terms of Sobolev spaces (here you can use some theory from the theory of distributions and Sobolev spaces). So it is sometimes not just a matter of taste, but kind of convenient.
Best, Christian
Dear all,
here are some little comments from the team Darmstadt:
1. In the context of Proposition 9.1.1. the + in f(0+) seems superfluous, even if f∈dom(∂t,ν) by Sobolev-embedding. We get that this is meant to be seen with a wide scope on Thm 9.4.1, but maybe that's worth a comment at that point.
2. The notation H−1(C) is meant with sight on Example 9.2.4, but as someone who hasn't seen these things before (not speaking for my collegues), I don't see the advantage over writing dom(C)′.
3. On the one hand, doing half of the proof of Prop. 9.2.2. (b) in stating (b) feels a bit clumsy, but on the other hand, it's important to state how this inclusion of operators is meant, especially where the Riesz iso comes in. (More on that topic in a comment below.) We couldn't really decide how to resolve that issue “perfectly”.
4. We had a small discussion whether it's better to write 1H or idH (cf. Thm. 9.2.6), but since a “constant 1-function on H” probably won't get any application, the confusion should be non-existent. Plus, 1H is probably better from a functional calculus perspective.
5. The position of “widetilde” in in Proposition 9.2.8. is a bit unfortunate, and maybe it's better to write (˜H,||T−1⋅||H) and “~(⋅) denotes the completion w.r.t. ||T−1⋅||H ”. Also, we discussed whether an anlogous statement holds if we replace 0∈ρ(T) by T injective, but Sebastian's more of an expert on that topic than I am.
6. On page 114 bottom and 115 top you probably want the convergence to be “strongly” instead of “stongly” :)
Best, Gabriel
To point 5: As written there, one would really needs 0∈ρ(T). However, there is rich extrapolation theory for, say, sectorial operators, which only needs that the operator is injective.
Best, Sebastian
Dear Gabriel (and Sebastian),
Thanks for the comments from Darmstadt.
to 1: You are right; we wanted to have the formulation for the ODE case in Section 9.1 in the spirit of Theorem 9.4.1 later.
to 2: There are two advantages. First, the one you mentioned is the conistency of the notations with the scale of Sobolev spaces (with an integer exponent). Second, if you have a look at Exercise 9.6 then you see that you can define order versions, i.e.\ H−n(C) for n∈N. There, the notation is much more handy.
to 3: I agree. However, one needs some explanation for the statement C∗⊆C⋄ which we did prefer to put into the formulation of (b).
to 4: We did have the same discussion. However, we did already use 1 at various places already and wanted to keep consistent notation, if possible.
to 5: The widetilde is supposed to “act” on (H,‖T−1⋅‖H) (we complete the normed/correspondic metric space). Nevertheless, the positioning could be improved.
to 6: Thanks for spotting the typo.
Best, Christian
Dear all,
a short comment on the condition for initial values. You demand that the initial values are in dom(A). It is then a natural question how for away from optimal this condition is. Often, your A is a block matrix that consists of first-order differential objects. In this case dom(A) should agree with the (1/2,2)-real interpolation space between full differential operator domain for the equation and ground spaces provided the setting is sufficiently regular.
Best, Sebastian
Dear Sebastian,
I guess you think of rewriting the Laplacian Δ=divgrad in a 2×2 block operator A as we usually do, right?! Of course, then there is a relation between dom(A) and the domain of Δ. However, note that A then acts on two-dimensional vectors of functions. So your mentioned relation via interpolation just covers on the the components.
Best, Christian
Dear all,
during our discussion in Kiel today, we found a small gap in the proof of Theorem 9.4.1 (i) ⇒ (ii): It is unclear, why the limit at the very end of the proof of implication (i) ⇒ (ii) should be true for ψ∈H1ν(R;H)∩L2,ν(R;dom(A)). Alternatively, I suggest to note first, that it suffices to prove the assertion for ψ∈C∞c(R;dom(A)) by density and then to do the calculation for those functions ψ. Then, the last limit is no problem, as ψ is bounded in dom(A) on compact sets.
Best regards
Sascha
Dear Marcus, dear Sascha,
I cannot refrain from a further comment on your “definition” on p.106, line -1. It just became clear to me that this is not well-defined. Let λ∈K, η∈H′. Then I can use the formula with φ=η, ψ=0 and obtain (λη):=λ∗η. On the other hand, I can use the formula with φ=0 and ψ=λη (after all, this is an element of H′), and obtain (λη):=λη.
What do you say now?
Best wishes, Jürgen
Dear Jürgen,
thanks for that enlightening example. So, yes, we will use a different symbol for the scalar multiplication in H′ in the final lecture notes.
Best regards
Sascha
EDIT Dear all, this was my last post concerning the definition on line -1 of p.106. I suggested to the ISemTeam to discard my previous contributions concerning this topic, in order to make the forum less confusing. End EDIT
Dear Sascha,
thanks! I have to confess that meanwhile Hendrik convinced me (and I convinced myself) that there seems to be no formal problem with your definition. My false trick (= mistake) in my above post is that I wrote (λη)(x):=λ∗η(x) as (λη):=λ∗η. However, your intention is (correctly!) to read λ∗η(x) as λ∗(η(x)). And there is no problem with the second equation (λη):=λη; applied to x it just says that (λη)(x):=(λη)(x). All my “contradictory examples” were of this kind, I think. Sorry for all the confusing discussion!
For my defence I want to mention that from the start I had in mind that you can do a similar redefinition of the scalar multiplication in arbitrary vector spaces. However, in this context one would have to use the original scalar multiplication and define λ@x:=λ∗x. However, your definition uses the fact that the vector space is a function space, and does not use directly the scalar multiplication in the function space.
Best regards, Jürgen
Dear Jürgen,
well, that’s true, however you still have the problem with the already defined linear structure on operators, which, I think, indeed causes a problem, since you now have two definitions for the expression λφ. So again, we should use a different symbol.
Best regards
Sascha
Dear Sascha,
instead of working with this strange new scalar multiplication, why don't you work with the anti-dual space, provided with the usual linear structure? Would this lead to problems later on?
Best wishes, Hendrik
Dear Hendrik,
no, that would not cause any problem, I think. Indeed, I had the same idea yesterday after the discussion with Jürgen.
Best regards
Sascha
Dear ISem-Team,
I've got a few question regarding Lecture 9 (which is the lecture I was most looking forward to because I like initial values
).
1. Before Proposition 9.2.2 it is said that C⋄ and the “dual operator” of C agree up to the Riesz isomorphism. Can you please say which dual operator is meant, precisely?
2. In Example 9.2.3 it is said that (∂t,ν)−1 acts as the distributional derivative “taking into account the exponential weight”. (By the way, the “f” in “(∂t,ν)−1f” should be removed, right?) I'm not sure what is meant here. The operator (∂t,ν)−1 just acts as the distributional derivative on all the functions in L2,ν(R), doesn't it?
3. In the proof of Theorem 9.2.6, (i)⇒(ii), I don't understand why you argue like this. Why not not say A−1x+B−1x=limn→∞(A−1xn+B−1xn)=limn→∞⟨(A+B)xn,⋅⟩|dom(A∗+B∗)=⟨f,⋅⟩|dom(A∗+B∗) ?
4. I can't really parse the sentence after Remark 9.4.2: what does the “only” refer to? Should I read “with the help of our theory only” or “only for L2,ν-right-hand sides”?
Best wishes, Hendrik
Dear Hendrik,
I hope I can answer your questions properly.
1. If you consider C as a bounded operator from dom(C) to H1 the usual dual operator C′ would be a bounded operator from H′1 to dom(C)′. If you now identify H′1 with H1 via the Riesz isomorphism, you end up with the definition of C⋄.
2. Well, the operator (∂t,ν)−1 is defined on all L2,ν-functions, but it is not the usual distributional derivative of functions in L1,loc, since the pairing with a test functions is not via the usual L2 inner product, but with respect to the exponentially weighted inner product (see the formula before the sentence you have cited).
3. Thank you very much, you are completely right.
4. The only refers to L2,ν right-hand sides. As Theorem 9.4.1 (ii) shows, one does not need the theory of extrapolation spaces, to formulate initial value problems within the framework of evolutionary equations. However, this is only possible, if the initial datum belongs to dom(A). For initial values in H, one has to deal with the extrapolated solution operator as in Theorem 9.4.1 (iii).
I hope this answers your questions.
Best regards
Sascha
Dear Sascha,
let me just add to your answer to Hendrik's point 1, that in the lecture you should have written more precisely “dual operator of C:dom(C)→H1”, and to make the “up to” more precise: one has the formula C⋄=C′R−1H1. (I hope I got this correct!)
And let me point out a typo: On the last line on p.107 one should have “dom(C)′” in the index at the last norm.
Best wishes, Jürgen
Dear Jürgen,
thanks a lot, you are completely right.
Best regards
Sascha
Dear Sascha,
thanks a lot for your answers!
1. It did already help a lot to know that I should consider C as a bounded operator from dom(C) to H1. (Just from looking at the lecture, the option “the dual operator of the unbounded operator C from H0 to H1 would also have been possible.) I agree with Jürgen that an explicit formula is helpful, so thanks, Jürgen!
2. OK, I guess I got it now – thanks! Let me say it in other words: “somehow”, (∂t,ν)−1f should be thought of as the distributional derivative of f, but it's a functional on dom(∂t,ν), and if I plug in a test function φ∈C∞c(R), then I obtain ∂uf(e−2ν⋅φ), where ∂uf, is the usual distributional derivative of f. But this factor e−2ν⋅ is to be expected because it corresponds to the weighted inner product. Correct?
4. Phew, I'm still not sure I get it. What I would understand is the sentence “The reformulation of initial value problems given in Theorem 9.4.1 is only possible if the initial value is in dom(A).” Is this what is meant? In the sentence in the lecture, I don't understand what is meant by “for L2,ν-right-hand sides” (which right-hand sides?), and I'm also not sure about the meaning of “with the help of our theory”. So I'd be happy if you could say whether my above interpretation of the sentence is correct.
Best wishes, Hendrik
Dear Hendrik,
2. Yes, that is exactly what we meant.
4. Okay, if the initial value lies in the domain of A, then (M1+A)1[0,∞)U0 belongs to L2,ν, and so a solution can already be obtained with the help of Picard's Theorem. Only for general initial values do you need the theory developed in Lecture 9.
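In symbols, the mechanism is the following (a sketch, assuming the IVP is written distributionally as (∂t,νM0 + M1 + A)U = δ0M0U0, as suggested by Theorem 9.4.1): substituting V := U − 1[0,∞)U0 removes the jump at 0 and yields

```latex
(\partial_{t,\nu}M_0 + M_1 + A)V = -(M_1+A)\mathbb{1}_{[0,\infty)}U_0,
```

and the right-hand side belongs to L2,ν(R;H) precisely when AU0 makes sense, i.e. when U0 ∈ dom(A); in that case Picard's Theorem applies directly.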
Does this clarify the matter?
Best regards
Sascha
Dear Sascha,
2. thanks for the confirmation!
4. Ah, thanks – now, after the 30th reading or so, I think I can finally parse that sentence. Here's my attempt: “… reformulate IVPs with the help of our theory only for L2,ν-right-hand sides given …” Is this correct? If yes, then I don't think that this is a valid sentence structure! But I'm happy if I finally got it.
Best wishes, Hendrik
Dear Hendrik,
this is how you could/should read it.
Best regards
Sascha
Dear Jürgen,
thank you for your comments. You are right, we have in fact unintentionally omitted some parentheses: the definition of the scalar multiplication of functionals should have been λϕ := (H ∋ x ↦ λ∗(ϕ(x))). In this way we circumvent the double meaning of (λϕ)(x) = λ∗ϕ(x): we interpret the right-hand side as λ∗(ϕ(x)), so that there is no notational ambiguity of λϕ and λ∗ϕ possibly meaning the same thing.
Due to the above-mentioned omission we led you to write (λϕ) := λ∗ϕ, which was not what we intended; we are sorry for this.
I hope that the above definition resolves the “catastrophe” you were suggesting we had caused.
Best regards,
Marcus
Dear Marcus,
sorry, but your definition at the beginning of your post still comes down to (λφ)(x) := λ∗φ(x) (because λ∗(φ(x)) = λ∗φ(x) by the usual conventions), and, omitting arguments, you are again back to (λφ) := λ∗φ, where on the right-hand side the usual meaning is intended. As I wrote to Sascha, one simply should not use notation which has different meanings in different “worlds” but where you cannot recognise from the notation in which world it should be interpreted.
Best wishes, Jürgen
Dear virtual lecturers,
concerning the definition of the linear structure of H′ on p.106, line -1: I think one can define the linear structure as you propose, but the notation you suggest is a catastrophe (IMHO). What you define is - leaving aside the evaluation at the elements x - (λφ):=λ∗φ, and addition is as usual. What would then ((λφ)) be? On the one hand, I could argue ((λφ))=(1(λφ))=(λφ)=λ∗φ, but also ((λφ))=(λ∗φ)=λφ. The problem is that parentheses do not lose their usual meaning! You simply have to use an extra symbol for the new scalar multiplication, for instance λ@φ:=λ∗φ.
Best wishes, Jürgen
Dear Jürgen,
I am not sure if I can follow your argument. With the definition of the linear structure we just redefine the meaning of λφ (with or without parentheses). So I do not understand what ((λφ)) should mean (besides its natural meaning of λφ). Nevertheless, I agree that it is a good idea to use a different symbol for the scalar multiplication, since we stay with the usual linear structure on L(H0,H1), which leads to an inconsistency in the case H1=C.
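To illustrate why the conjugated scalar multiplication is natural here, a small numerical sketch of my own, modelling H = C^n (the convention that the inner product is conjugate-linear in the first argument is my assumption): with λ@φ := λ∗φ, the Riesz map y ↦ ⟨y,·⟩ becomes linear rather than conjugate-linear.

```python
import numpy as np

# Model the Hilbert space H = C^n with inner product <y, x> = sum conj(y_i) x_i
# (conjugate-linear in the first argument; this convention is an assumption).
def inner(y, x):
    return np.vdot(y, x)  # np.vdot conjugates its first argument

# Riesz map: y in H is sent to the (linear) functional x -> <y, x>.
def riesz(y):
    return lambda x: inner(y, x)

# The "new" scalar multiplication on functionals: (lam @ phi)(x) := conj(lam) * phi(x).
def smul(lam, phi):
    return lambda x: np.conj(lam) * phi(x)

# With smul, the Riesz map is linear: riesz(lam * y) agrees with smul(lam, riesz(y)),
# since <lam * y, x> = conj(lam) * <y, x>.
rng = np.random.default_rng(0)
y = rng.standard_normal(3) + 1j * rng.standard_normal(3)
x = rng.standard_normal(3) + 1j * rng.standard_normal(3)
lam = 1.0 + 2.0j
assert np.isclose(riesz(lam * y)(x), smul(lam, riesz(y))(x))
# With the *usual* scalar multiplication it is only conjugate-linear:
assert np.isclose(riesz(lam * y)(x), np.conj(lam) * riesz(y)(x))
```

Of course this only models the finite-dimensional case, but the bookkeeping is the same.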
Best regards
Sascha
Dear Sascha,
I was under the impression that the parentheses on the left-hand side of your definition should belong to the definition. If not, then for ψ=0 the definition reads λφ(x) := λ∗φ(x), which I find even stranger, and, omitting the evaluation, you arrive at λφ := λ∗φ. My point is that λφ is already defined, and in fact, to define the new scalar multiplication, you use the established definition. And then you cannot decide from the notation in which of the “worlds” you live, and this is confusing.
Best wishes, Jürgen