Discussion on Lecture 05

Hendrik Vogt, 2019/12/13 14:47, 2019/12/13 17:51

Dear ISem-Team,

it seems to me that there is a gap in Lecture 5: in the proof of Lemma 5.3.4 we need to know how to compute the inverse Fourier-Laplace transform of the function $\mathbb{1}_{[-R_n,R_n]}g(\mathrm{i}\cdot+\nu)$, but nowhere in the lecture did I find a formula for this. Thus it is hard to “note” that the terms on the left-hand side of (5.10) are the indicated inverse Fourier-Laplace transforms.

Of course this gap is mostly harmless for the expert, but I think it's quite demanding for anyone not so familiar with the Fourier transform. Here's what I can gather from the lecture: the major idea is to apply formula (5.3). (EDIT: The following argument is too complicated! I overlooked the last statement of Theorem 5.1.4; see below.) This tells us that $f(t)=\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}e^{-\mathrm{i}(-t)s}\hat f(s)\,\mathrm{d}s=\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}e^{\mathrm{i}st}\hat f(s)\,\mathrm{d}s$ for a.e. $t$ if both $f$ and $\hat f$ are integrable. In other words, if $g$ and $\mathcal{F}^{-1}g$ are integrable, then $\mathcal{F}^{-1}g(t)=\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}e^{\mathrm{i}st}g(s)\,\mathrm{d}s$ (a.e. $t$). More generally, I will need the latter formula for $g\in L_1\cap L_2(\mathbb{R};H)$ (or for $g\in L_2(\mathbb{R};H)$ having compact support).

Now the operator $\mathcal{L}_\nu$ is unitary, so $(\mathcal{L}_\nu)^*=(\mathcal{L}_\nu)^{-1}=(\mathcal{F}\exp(-\nu m))^{-1}=\exp(\nu m)\mathcal{F}^{-1}$. Thus one obtains $\mathcal{L}_\nu^*g(t)=\exp(\nu t)\mathcal{F}^{-1}g(t)=\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}e^{(\nu+\mathrm{i}s)t}g(s)\,\mathrm{d}s$ (a.e. $t$) for $g$ as above. This is precisely the formula that I did not find in the lecture. Maybe I overlooked it?
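For what it's worth, this formula can be sanity-checked numerically. The following is just my own sketch, using the unitary convention $\mathcal{F}^{-1}g(t)=\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}e^{\mathrm{i}st}g(s)\,\mathrm{d}s$ and a Gaussian test function (both choices are mine, not from the lecture notes):

```python
import numpy as np

def inv_fourier_laplace(g, t, nu, S=20.0, n=40001):
    # Riemann-sum approximation of
    #   (L_nu^* g)(t) = (2*pi)^(-1/2) * Integral_R e^{(nu + i s) t} g(s) ds
    s = np.linspace(-S, S, n)
    h = s[1] - s[0]
    return np.sum(np.exp((nu + 1j * s) * t) * g(s)) * h / np.sqrt(2 * np.pi)

# Gaussian test function: F^{-1} g = g under the unitary convention,
# so (L_nu^* g)(t) should equal e^{nu t} e^{-t^2/2}.
g = lambda s: np.exp(-s ** 2 / 2)
nu, t = 0.3, 1.2
approx = inv_fourier_laplace(g, t, nu)
exact = np.exp(nu * t - t ** 2 / 2)
print(abs(approx - exact))  # tiny
```

The quadrature error is negligible here because the integrand is smooth and decays like a Gaussian, so the agreement with $e^{\nu t}\mathcal{F}^{-1}g(t)$ is essentially to machine precision.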

Best wishes, Hendrik

Marcus Waurick, 2019/12/13 15:29, 2019/12/13 15:29

Dear Hendrik,

Thank you for your remark! I agree that the argument is a bit short. In particular, I agree with the rationale to show that $\mathcal{L}_\nu^*=\exp(\nu m)\mathcal{F}^*$. What I don't understand is why you rely on formula (5.3).

Using Theorem 5.1.4, one could argue that $h:=\mathbb{1}_{[-R,R]}g(\mathrm{i}\cdot+\nu)\in L_1\cap L_2$. By the formula for $\mathcal{F}^*$ given there, we obtain that $\mathcal{F}^*h=(\mathcal{F}h)(-\cdot)$. Next, one can construct a sequence of $C_c^\infty(\mathbb{R};H)$-functions, $(h_n)_n$, with $\operatorname{spt}h_n\subseteq[-R-1,R+1]$, such that $h_n\to h$ in $L_2(\mathbb{R};H)$. By the support condition, we also obtain $h_n\to h$ in $L_1$. Thus, $\mathcal{F}h_n\to\mathcal{F}h$ in $L_2$ and uniformly. Hence, $(\mathcal{F}^*h_n)(t)=(\mathcal{F}h_n)(-t)=\frac{1}{\sqrt{2\pi}}\int_{-R-1}^{R+1}e^{\mathrm{i}st}h_n(s)\,\mathrm{d}s$. Letting $n\to\infty$, we obtain for a.e. $t\in\mathbb{R}$: $(\mathcal{F}^*h)(t)=\frac{1}{\sqrt{2\pi}}\int_{-R}^{R}e^{\mathrm{i}st}h(s)\,\mathrm{d}s$. Is this true or did I overlook something?

(I don't think that my proof is shorter, but I think it might be more adapted to the situation at hand.)

Best regards, moppi

Hendrik Vogt, 2019/12/13 17:57

Dear Moppi,

oh, thanks a lot – so I did indeed overlook something! Namely, that last statement of Theorem 5.1.4 is so short that I skipped over it while reading.

Now I claim that your proof is much shorter if you omit the approximation argument with the functions $h_n$. Honest question: why did you use approximation? Can't I just see the last formula of your post directly from Theorem 5.1.4?

Best wishes, Hendrik

Marcus Waurick, 2019/12/13 20:24

Dear Hendrik,

Well, yes. But I wanted to be absolutely clear: in Theorem 5.1.4 we only claimed something about the Schwartz space. Therefore, I was hesitant to apply a formula that has not been formally proved for $L_1$ functions in the situation of Theorem 5.1.4.

Cheers,

moppi

Hendrik Vogt, 2019/12/13 23:05

Dear Moppi,

thanks for the explanation! Now I had a closer look, and to be honest, I realised I have no idea what exactly the last assertion of Theorem 5.1.4 is. My interpretation was: on $L_1\cap L_2$, the inverse Fourier transform is given by $f\mapsto\hat f(-\cdot)$. Your interpretation appears to be that the formula is stated only on the Schwartz space. Neither from the formulation nor from the proof of the theorem is it clear what is meant, is it? In fact, that last assertion is not mentioned in the proof, right?

Best wishes, Hendrik

Marcus Waurick, 2019/12/13 23:28

Dear Hendrik,

You are quite right. Hence, to be safe, we should stick to your explanation and keep the rest for the final version of the lecture notes.

Best regards,

moppi

Maximilian Weinberg, 2019/12/04 01:41

Dear all,

I was wondering what the generator of the Riemann-Liouville fractional integral semigroup is.

Since the map $M\mapsto M(\partial_{t,\nu})$ is multiplicative, the operators defined by $M(z)=z^{-\alpha}$, namely $\partial_t^{-\alpha}$, form a semigroup in $\alpha$. Its generator (if existent) should belong to the operator defined by $M(z)=\partial_\alpha z^{-\alpha}|_{\alpha=0}=-\log(z)$, where $\log$ is the principal branch of the complex logarithm. So, let's call the hypothetical generator $-\log(\partial_t)$.

I was wondering what it looks like. Unfortunately, the most interesting property of $\log$, the multiplicativity, is not of much help, since it only implies that $\alpha\log(\partial_t)=\log((\partial_t)^\alpha)$ (where the latter is the operator induced by $M(z)=\log(z^\alpha)$). This does not help at all, since vertical lines in $\mathbb{C}$ do not get mapped onto vertical lines by $z\mapsto z^\alpha$, so the second operator cannot (easily) be associated to $\log(\partial_t)$.
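On the level of the symbols $M(z)=z^{-\alpha}$, both the semigroup law and the candidate generator are elementary scalar identities; a quick numerical check (my own sketch — it says nothing about the operators themselves, only about the symbols):

```python
import numpy as np

z = 2.0 + 1.5j                      # a point in the right half-plane C_{Re>0}
a, b = 0.3, 0.7

# semigroup law of the symbols M(z) = z^(-alpha)
assert np.isclose(z ** (-a) * z ** (-b), z ** (-(a + b)))

# candidate generator: d/d(alpha) z^(-alpha) at alpha = 0 equals -log z
h = 1e-6
deriv = (z ** (-h) - 1) / h         # forward difference quotient
assert np.isclose(deriv, -np.log(z), atol=1e-4)
print(-np.log(z))
```

Here `np.log` is the principal branch, which agrees with the branch used for $z^{-\alpha}$ on the right half-plane.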

Does any of the lecturers or fellow readers have an idea about the nature of this generator?

Best regards,

Maximilian Weinberg

Jürgen Voigt, 2019/11/28 21:14

Dear virtual lecturers,

just two quite minor remarks.

p. 57, in the Definition: The definition of $\mathrm{s_b}(M)$ is somewhat awkward, requiring that $\nu$ be such that a condition holds in which the existence of some $\nu$ is required.

Comments 5.4, line 2 of the second paragraph: “Unitary” is only applied to operators between Hilbert spaces (to my knowledge). You probably want to say `isometric isomorphism'. (Or only `isomorphism'? I didn't look up the reference!)

Best wishes, Jürgen

Marcus Waurick, 2019/11/29 09:48

Dear Jürgen,

thank you for your remarks.

You are absolutely right, the definition of $\mathrm{s_b}(M)$ should rather read $\mathrm{s_b}(M)=\inf\{\nu\in\mathbb{R}\,;\,\mathbb{C}_{\mathrm{Re}>\nu}\subseteq\mathrm{dom}(M),\ \|M\|_{\infty,\mathbb{C}_{\mathrm{Re}>\nu}}<\infty\}$. However, we hope that the version in the lecture notes was understandable. We shall fix it for the final version, anyway.

Also, we meant to say isometric isomorphism. Thanks!

Best regards,

MM

Hendrik Vogt, 2019/11/28 15:44

Dear all,

I have a comment on the proof of Theorem 5.3.5. Near the end of the proof, the space $S(\mathbb{R};H)$ is used. However, this is not the space introduced at the beginning of Lecture 3! In order to conclude “by linearity” in the proof of Theorem 5.3.5, one has to work with the “alternative space” $S(\mathbb{R};H)=\mathrm{lin}\{\mathbb{1}_{[a,b]}x\,;\,a,b\in\mathbb{R},\ a<b,\ x\in H\}$, and then one has to note that the results from Lecture 4 are true with this definition of $S(\mathbb{R};H)$. (This version I'd call the space of step functions; it is well known that it is also dense in the $L_{2,\nu}$-spaces.)

Best wishes, Hendrik

Johannes Becker, 2019/11/27 10:55, 2019/11/27 10:56

Dear ISem-Team

thank you for the lecture! With a little delay, some minor notes we gathered during our Darmstadt session:

  • On page 51, the notation $C_b$ is used, but only introduced on p. 52.
  • On p. 52, the comma after the definition of $S(\mathbb{R};H)$ is superfluous.
  • The final requirement of Remark 5.1.2, $k>\frac{1}{p}$, is correct, but seems a little weird given that we assume $p\in[1,\infty]$, so we could just require $k\geq 2$. Our guess was that this stems from a proof for the multi-dimensional case, in which case we would require $k>\frac{d}{p}$.
  • Again in Remark 5.1.2, we want to estimate the growth behaviour of our function against $\frac{1}{(1+|t|)^{pk}}$, but $|t|$ is not a polynomial. I think this should either be replaced by a fitting polynomial (a square…) or requires some explanation.
  • The explanation for why the equalities at the bottom of p. 53 hold only appears after the page break, on p. 54.
  • In the middle of p. 54 we choose a suitable positive nullsequence $(a_n)_n$. This “suitable” seems superfluous, as any positive nullsequence should do the trick, or are we overlooking anything?
  • On p. 58 in Prop. 5.3.2: Is there any reason for estimating $\|M(\partial_{t,\nu})\|$ against the supremum over the whole half-plane? After all, from my understanding the norm of the multiplication operator should only depend on the values of the material law on the line with real part $\nu$, or?
  • In Exercise 5.3, statement (v), there seems to be some quantification over $x_1$ and $x_2$ missing.

That's it for now, thank you for your replies in advance,

Johannes

Sascha Trostorff, 2019/11/27 14:18, 2019/11/27 14:53

Dear Johannes,

thank you very much for your comments.

On page 51, the notation $C_b$ is used, but only introduced on p. 52.

You are right, thanks for pointing out.

On p. 52, the comma after the definition of $S(\mathbb{R};H)$ is superfluous.

Thanks!

The final requirement of Remark 5.1.2, $k>\frac{1}{p}$, is correct, but seems a little weird given that we assume $p\in[1,\infty]$, so we could just require $k\geq 2$. Our guess was that this stems from a proof for the multi-dimensional case, in which case we would require $k>\frac{d}{p}$.

Yes, indeed we had the d-dimensional variant in mind.

Again in Remark 5.1.2, we want to estimate the growth behaviour of our function against $\frac{1}{(1+|t|)^{pk}}$, but $|t|$ is not a polynomial. I think this should either be replaced by a fitting polynomial (a square…) or requires some explanation.

I am not sure if I understand this comment correctly. The condition for being a Schwartz space function is the boundedness of the derivatives of the function multiplied by an arbitrary polynomial in $t$ or, equivalently, in $|t|$. Now, $(1+|t|)^k$ is obviously a polynomial in $|t|$, so the boundedness follows. Am I overlooking something?

The explanation for why the equalities on the bottom of p.53 hold only appears after the page break, on p.54.

Thank you, we will try to avoid this in a future version.

In the middle of p. 54 we choose a suitable positive nullsequence $(a_n)_n$. This “suitable” seems superfluous, as any positive nullsequence should do the trick, or are we overlooking anything?

EDIT: I don't understand why you can choose any nullsequence, could you provide some explanation?

On p. 58 in Prop. 5.3.2: Is there any reason for estimating $\|M(\partial_{t,\nu})\|$ against the supremum over the whole half-plane? After all, from my understanding the norm of the multiplication operator should only depend on the values of the material law on the line with real part $\nu$, or?

You are right. Of course it suffices to estimate the norm by the supremum on the imaginary axis shifted by ν.

In Exercise 5.3, statement (v), there seems to be some quantification over $x_1$ and $x_2$ missing.

Thank you. The statement should hold for each $x_1\in D_1$ and $x_2\in D_2$.

Kind regards

Sascha

Sebastian Bechtel, 2019/11/27 20:20

Dear Sascha,

EDIT: I don't understand why you can choose any nullsequence, could you provide some explanation?

I'm the person to blame for this :P And it really seems like I overlooked something. My idea was as follows:

Take $\psi_n\in C_c^\infty$ with $\psi_n\to f$ in $L_1$. Pick $n_0$ large enough such that $|\psi_{n_0}(s)-f(s)|\leq\gamma_1$, with $\gamma_1$ small, for almost every $s$. Then choose $a$ small enough such that $|(S_a\psi_{n_0})(s)-\psi_{n_0}(s)|\leq\delta_1$, with $\delta_1$ small, for all $s$. Then I wanted to write $|(S_af)(s)-f(s)|\leq|(S_a(f-\psi_l))(s)|+|(S_a(\psi_l-\psi_{n_0}))(s)|+|(S_a\psi_{n_0})(s)-\psi_{n_0}(s)|+|\psi_{n_0}(s)-f(s)|$. Using that $S_a\psi_n$ converges to $S_af$ in $L_1$ for $n\to\infty$, I wanted to exploit that this also gives pointwise convergence almost everywhere. This allows to make the first term small. I also thought I would get the second term small by a Cauchy sequence argument, but I can't assure that $n_0$ is large enough, and I don't get around this at the moment, because if I change $n_0$ it seems like I also have to change $a$, which means I eventually have to pick another pointwise convergent subsequence of $(S_a\psi_l)_l$… Maybe someone here has an idea or can show that this is just wrong ;)

The condition for being a Schwartz space function is the boundedness of the derivatives of the function multiplied by an arbitrary polynomial in $t$ or, equivalently, in $|t|$.

This is the point. It is obvious that $|(1+t)^k|\leq(1+|t|)^k$. However, as far as I see it, you need comparability of polynomials in $t$ and those in $|t|$. Let $l$ be the smallest natural number with $k/2\leq l$. Then it is not hard to show $(1+|t|)^k\leq 2^k|1+t^2|^l$, as was suggested in Johannes' post, but I think you have to do this to get around with your calculation (probably the comparability constant is not optimal; this was the quickest worst comparability that I was able to show ;)).

Best, Sebastian

Christian Seifert, 2019/11/28 08:33

Dear Sebastian,

concerning your comment on Remark 5.1.2

The condition for being a Schwartz space function is the boundedness of the derivatives of the function multiplied by an arbitrary polynomial in $t$ or, equivalently, in $|t|$.

let me just note that $\sup_{t\in\mathbb{R}}\bigl||t|^kf^{(n)}(t)\bigr|=\sup_{t\in\mathbb{R}\setminus\{0\}}\bigl|\tfrac{|t|^k}{t^k}\,t^kf^{(n)}(t)\bigr|=\sup_{t\in\mathbb{R}}\bigl|t^kf^{(n)}(t)\bigr|$. Thus, instead of monomial weights in $t$ we can equivalently use monomial weights in $|t|$. Of course, we can also work with polynomial weights (either in $t$ or $|t|$).
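The identity rests on nothing more than $\bigl||t|^k\bigr|=|t^k|$; a trivial numerical illustration on a grid, with a sample function of my own choosing:

```python
import numpy as np

t = np.linspace(-10, 10, 4001)
k = 3
f = np.exp(-t ** 2) * t                # some smooth, decaying sample function
w1 = np.abs(t) ** k * np.abs(f)        # monomial weight in |t|
w2 = np.abs(t ** k * f)                # monomial weight in t, abs taken after
assert np.allclose(w1, w2)             # the two weights agree pointwise
assert np.isclose(w1.max(), w2.max())  # hence the suprema agree
```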

Best, Christian

Marcus Waurick, 2019/11/28 16:07

Dear Sebastian,

concerning your attempt to prove the almost everywhere convergence: I don't think it really has a chance to work like this (if you stick to the abstract arguments you pointed out, anyway). My argument is a bit meta, but I think it still applies: if your line of reasoning worked, one could show that $\bigl(\int_{-R}^{R}f(t)e^{-\mathrm{i}t(\cdot)}\,\mathrm{d}t\bigr)_{R>0}$ converges almost everywhere to $f\in L_2(\mathbb{R})$ as $R\to\infty$. In fact, one approximates $f$ by smooth functions $\psi_{n_0}$ and then goes ahead and tries to work with convergence properties of $S_R:=\mathcal{F}\mathbb{1}_{[-R,R]}(m)$ as $R\to\infty$.

Since, however, as far as I know, it is unknown whether $\bigl(\int_{-R}^{R}f(t)e^{-\mathrm{i}t(\cdot)}\,\mathrm{d}t\bigr)_{R>0}$ converges almost everywhere as $R\to\infty$ for all $f\in L_2(\mathbb{R})$, the pointed-out strategy is not very likely to work as is.

Thanks for your thoughts on this!

Cheers,

MM

Sebastian Bechtel, 2019/11/28 19:31

Dear Moppy,

I assume the convergence to $f$ is a typo and you mean $(2\pi)^{1/2}\hat f$? Then I agree that this seems to be a question that should be as hard to answer as the present question. I'm not so firm with this kind of question on the Fourier transform, but I will have another look into it these days, and if I find out something more I'll let you know!

Best, Sebastian

Marcus Waurick, 2019/11/28 19:49

Dear Sebastian,

You correctly pointed out my typo. Thanks for this.

In any case the question remains and I'm curious to learn any news on that.

Cheers,

moppi

Sebastian Bechtel, 2019/11/28 22:38

Dear Moppi,

I did some research. Have a look at Carleson's theorem (https://en.wikipedia.org/wiki/Carleson%27s_theorem). It looks to me as if this, applied to $\hat f(-\xi)$ in place of $f$, would give the claim. However, since this seems to be a very deep and hard theorem, your meta argument is still convincing ;)

Best, Sebastian

Jürgen Voigt, 2019/11/29 13:11, 2019/11/29 13:18

Dear Sebastian, dear Sascha, dear Marcus,

here is a comment on Sebastian's question concerning p.54.

The operator $S_a$ can be rewritten as $S_af=f\ast\gamma_a$, where $\gamma_a=\frac{1}{a}\gamma(\frac{\cdot}{a})$; and $(\frac{1}{\sqrt{2\pi}}\gamma_a)_{a>0}$ is a `δ-family'. Starting with the function $\zeta:=\mathbb{1}_{[-\frac12,\frac12]}$ and defining $\zeta_a=\frac{1}{a}\zeta(\frac{\cdot}{a})$, $T_af:=f\ast\zeta_a$, one can see immediately that $T_af(s)\to f(s)$ ($a\to0$) for all Lebesgue points of $f$. Now, I am not well-acquainted with the Lebesgue set of $H$-valued functions, but my guess is that - as in the complex-valued case - the complement of the Lebesgue set of $f$ is a null set.
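The second δ-family is easy to experiment with numerically; here is my own sketch of $T_af=f\ast\zeta_a$ for a scalar function with a jump (test function and grid are my choices, not from the lecture notes):

```python
import numpy as np

def T(f, s, a, n=10001):
    # (T_a f)(s) = (f * zeta_a)(s) = (1/a) * Integral_{s-a/2}^{s+a/2} f(u) du,
    # approximated by the mean of f over an equidistant grid in the window
    u = np.linspace(s - a / 2, s + a / 2, n)
    return np.mean(f(u))

f = lambda u: np.sign(u) + u ** 2     # jump at 0, smooth elsewhere

# At the Lebesgue point s = 0.5: T_a f(s) -> f(0.5) = 1.25 as a -> 0.
print([T(f, 0.5, a) for a in (0.1, 0.01, 0.001)])
# At the jump s = 0, the averages converge to the mean of the one-sided limits.
print(T(f, 0.0, 0.01))
```

At the continuity point the averages approach the function value, while at the jump they settle at the midpoint of the two one-sided limits - exactly the Lebesgue-point behaviour described above.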

I didn't do the details for showing that the case of the δ-family $(\frac{1}{\sqrt{2\pi}}\gamma_a)_{a>0}$ can be reduced to the second one, but my idea is that one shows that the tail of $\gamma$ at $\pm\infty$ can be neglected, and the other part can be represented as a suitable integral of the second δ-family.

Concerning Lebesgue points I recommend §9.2 of Teschl's book “Topics in Real and Functional Analysis”. (Google `Gerald Teschl Topics in Real and Functional Analysis'!)

Best wishes, Jürgen

Fabian Gabel, 2019/11/24 23:09, 2019/11/25 15:26

Dear ISem-team,

when I tried to fill in the details in Example 5.3.3(d), I found that there could be a factor $\sqrt{2\pi}$ too much in the definition of $k$. Here is what I did (maybe you can tell me where I made a mistake):

We want to show that $M(\partial_{t,\mu})=k\ast{}=:k\ast(\cdot)$. This is equivalent to showing that

$M(\mathrm{i}m+\mu)=\mathcal{L}_\mu M(\partial_{t,\mu})\mathcal{L}_\mu^*=\mathcal{L}_\mu\,k\ast(\cdot)\,\mathcal{L}_\mu^*.$

With the use of Exercise 5.4, as suggested on p.60, I get that on the one hand

$(\mathcal{L}_\mu\,k\ast(\cdot)\,\mathcal{L}_\mu^*)(\varphi)=\mathcal{L}_\mu(k\ast\mathcal{L}_\mu^*\varphi)=\sqrt{2\pi}\,\mathcal{L}_\mu(k)\varphi$ for $\varphi\in L_{1,\mu}(\mathbb{R};H)$.

On the other hand, we have

$(M(\mathrm{i}m+\mu)\varphi)(t)=M(\mathrm{i}t+\mu)\varphi(t)=\frac{1}{\sqrt{2\pi}}\int_0^\infty e^{-(\mathrm{i}t+\mu)s}k(s)\,\mathrm{d}s\;\varphi(t)=\mathcal{L}_\mu k(t)\,\varphi(t).$

Note also that I used Exercise 5.4 for $\varphi\in L_{1,\mu}$ and not $L_{2,\mu}$ as stated (I did not attempt to prove it for $L_{2,\mu}$ functions, though). Not sure if this is intended or just a typo in the lecture notes.

Some further possible typos:

There are a few instances in which you mention Plancharel's Theorem. I have never read his name spelled that way.

p. 53, l. 10: I think that there is a minus missing in the exponential due to the antilinearity in the first component of the scalar product: $\langle e^{-\mathrm{i}st}f(s),g(t)\rangle$.

p. 60, l. -9: there should be an upright i, i.e. $g(\pm\mathrm{i}R_n+\rho)$.

Best wishes, Fabian

Sascha Trostorff, 2019/11/25 09:44

Dear Fabian,

yes, indeed, the factor $\frac{1}{\sqrt{2\pi}}$ in Example 5.3.3 (d) is wrong. The formula in Exercise 5.4 is also correct for $f\in L_{2,\nu}$, as claimed in the notes. This simply follows from your computation and approximation.

Sorry for misspelling Plancherel (I always had problems with this name =))

I do not understand what minus sign is missing on p. 53, l. 10. The minus sign cancels out due to the antilinearity in the first argument.

Thanks for spotting the typo on p.60.

Best regards

Sascha

Fabian Gabel, 2019/11/25 10:09

Dear Sascha,

thank you for your reply.

I do not understand what minus sign is missing on p. 53, l. 10. The minus sign cancels out due to the antilinearity in the first argument.

I got confused. Of course, the formula is correct as it is stated in the lecture notes.

I have a further remark, now regarding the proof of Theorem 5.3.5:

p. 62, l. 6: If I'm not mistaken, the estimate needs some squares and should read something like $\int_{|t|>1}\|g(\mathrm{i}t+\rho)\|^2\,\mathrm{d}t\leq\|M\|_{\infty,\mathbb{C}_{\mathrm{Re}>\mu}}^2\|x\|^2\,\frac{(e^{-\rho a}+e^{-\rho b})^2}{2\pi}.$ If not, then this estimate does not seem to work well with the one in l. 9.

Best wishes, Fabian

Sascha Trostorff, 2019/11/25 11:53

Dear Fabian,

thanks for reading so carefully. Yes, there are squares missing on the right hand side of the estimate.

Best regards

Sascha

Sebastian Bechtel, 2019/11/21 18:05, 2019/11/21 18:05

Dear all,

I have a question concerning material laws and the definition of the functional calculus which is also supposed to be a test to check if Markus Haase is reading the forum :P

You require that the material law $M$ is analytic on an open half-plane and bounded on closed sub-half-planes. As far as I can tell, to define the functional calculus for one $\partial_{t,\nu}$, it would also suffice to consider functions in $L_\infty(\mathbb{R};L(H))$. You could also consider functions on a half-plane which are bounded on lines parallel to the imaginary axis. Is the reason why you need analyticity and boundedness off the imaginary lines that you want consistency between the different $\partial_{t,\nu}$ when you vary $\nu$?

Also, in this case, wouldn't it be sufficient to just demand boundedness on strips of finite width, and in this sense define two abscissas of boundedness, in between which one can define consistent time derivative functional calculi?

More generally spoken: for a calculus unitarily equivalent to a multiplier calculus, is there any reason to require holomorphic functions with a 'global' bound other than consistency?

Best, Sebastian

Sascha Trostorff, 2019/11/21 18:12, 2019/11/21 18:15

Dear Sebastian,

thanks for that question. The main reason for analyticity on a complete right half-plane is the independence of the parameter $\nu$ and causality. Indeed, we will see in a future lecture that each causal operator (autonomous and bounded) arises from a material law. So there is no way to avoid analyticity on a half-plane if you want to obtain causal operators. Hope this answers your question.

Best regards

Sascha

Sascha Trostorff, 2019/11/18 19:45, 2019/11/18 20:01

Dear all,

in our discussion in Kiel today, we found two mistakes in the proof of Theorem 5.3.5. The function $g$ defined there is of course not bounded on $\mathbb{C}_{\mathrm{Re}\geq\mu}$ but just on vertical strips in this half-plane. However, everything we need in the sequel is the boundedness of $g$ on the strip $\{z\in\mathbb{C}\,;\,\mathrm{Re}\,z\in[\mu,\nu]\}$, so the proof still works. Moreover, in the estimate of the $L_2$-norm of $g(\mathrm{i}\cdot+\rho)$ there are squares missing on the right-hand side of the inequality.

Best regards

Sascha

Jürgen Voigt, 2019/11/15 14:26

Dear virtual lecturers,

I am confused by your use of the symbol $m$. In Examples 2.4.2 and 2.4.3 you use it as a symbol which transforms functions into multiplication operators. In Corollary 3.2.7 it pops up as the identity on $\mathbb{R}$. (I may have missed the definition; is it somewhere? My search algorithms do not work on the pdf files you provide!) And now in the present lecture you define $m:=m(m)$, where in the last expression the first $m$ is the function $\mathrm{id}$ and the second is the symbol used in Lecture 2. Could you explain, please?

Lemma 5.1.1, line 2: Misprint; the arrow should be an =.

Section 2, line 1: I would have expected “analogous”. (“Analogue” does exist indeed also as an adjective, but I got the impression that it is rather used as a contrast to “digital”, like in “analogue clock” or in “analogue signal”. But I may be wrong!)

Remark 5.2.1, second sentence: I fail to understand what you want to communicate here. Which of the mappings should $\mathcal{L}_\nu$ be?

And I am a little confused by Remark 5.2.2. Okay, the multiplication operator in the $H$-valued case may be analogous to the scalar-valued case. But this doesn't prove anything! So I cannot really accept your “As a consequence”; in particular, as you find it necessary to add that “the statements generalise in a straightforward way”.

Best wishes, Jürgen

Marcus Waurick, 2019/11/15 15:00

Dear Jürgen,

thank you very much for your comments. Concerning the usage of $m$: this always denotes a multiplication operator, where one substitutes the argument of the function for $m$ when applying the corresponding operator: in Corollary 3.2.7 we have $(\exp(-\nu m)f)(t):=\exp(-\nu t)f(t)$. In Examples 2.4.2 and 2.4.3 we have $(V(m)f)(\omega)=V(\omega)f(\omega)$, and likewise in the definition just before Remark 5.2.2.

Admittedly, the usage in Corollary 3.2.7 and in the definition before Remark 5.2.2 is slightly different, as the Hilbert spaces the operators are mapping from and into differ. We emphasise that it should always be clear from the context (we shall use this notation mostly for $\exp(-\nu m)$ and $M(\mathrm{i}m+\nu)$) which of the above situations is meant.

Could you specify where you found $m:=m(m)$, please?

Thanks for spotting the misprints and pointing out a potential language issue.

Concerning $\mathcal{L}_\nu$ in Remark 5.2.1: we wanted to argue that the unitary $\mathcal{L}_\nu$ mapping from $L_{2,\nu}$ to $L_2$ can be seen as a shifted variant of the Fourier transform. This, however (due to a lack of an integral representation for the Fourier transform on $L_2$), can more directly be seen for $L_{1,\nu}$-functions, as one has the integral representation for these. We would have avoided the confusion if we had stuck to $\phi\in C_c(\mathbb{R};H)$.

Concerning your remark on Remark 5.2.2: yes, you are correct, the statement `analogous to the scalar-valued case' does not prove anything. It is a hint as to how to obtain the same results for the situation highlighted in Remark 5.2.2. So, in order to avoid this language confusion, I propose to erase `As a consequence' and just say 'By Example 2.4.2, $m$ is selfadjoint.'

Best regards,

MM

Jürgen Voigt, 2019/11/16 23:28, 2019/11/17 00:44

Dear Marcus,

this is again concerning m. Your explanations are quite helpful (and dearly needed for all participants, I think). Let me try to state it again in my way, in particular explaining the context in more (full?) generality.

One has two Banach spaces $E,F$ and a set $\Omega$, the latter mostly with additional structure, and one has two function spaces $E(\Omega)\subseteq E^\Omega$, $F(\Omega)\subseteq F^\Omega$. (The latter is only semi-true, because $\Omega$ may be a measure space, and the function spaces may be quotients modulo a.e. equality.)

Next, one has a function $M:\Omega'\to L(E,F)$, and one has a mapping $n:\Omega\to\Omega'$. Then, clearly, the expression $M(n)$ has no defined meaning, because the only arguments one can insert into $M(\cdot)$ are elements of $\Omega'$. So, there is no reason why one cannot give a meaning to $M(n)$, and you define it as a mapping from a subset of $E(\Omega)$ to $F(\Omega)$, $(M(n)f)(\omega):=M(n(\omega))f(\omega)$ $(\omega\in\Omega)$, where the domain of $M(n)$ consists of those $f\in E(\Omega)$ such that the r.h.s. belongs to $F(\Omega)$.

The symbol $m$ is always the mapping $\mathrm{id}$ on $\Omega$, in this context. (Unfortunately, you mention none of these definitions in Example 2.4.2; so the symbol $V(m)$ is simply mysterious.)

Actually, up to before Theorem 5.2.3, there never occurs a function $n$ different from $m$ in the arguments of $M$ (or $V$). It is only in Proposition 5.3.2 that you need $n=\mathrm{i}m+\nu$, in this context. (In Theorem 5.2.3 one could still interpret $\mathrm{i}m+\nu$ as $\mathrm{i}m+\nu I$.)

Maybe now you can see why I said that $m$ is defined as $m(m)$ – you didn't write it in this way, and it was meant as a kind of joke on my side –: in the case where you define it, the function $m$ is the identity on $\mathbb{R}$.

I think that after Examples 2.4.2 and 2.4.3, the first time that $m$ is used again is in Corollary 3.2.7, and strictly speaking, $\exp(-\nu m)$ would have to be written correctly as $\exp(-\nu\,\cdot)(m)$.

I hope that I interpreted your explanations correctly.

Best wishes, Jürgen

Hendrik Vogt, 2019/11/17 09:00

Dear Moppi,

here's my interpretation of what you wrote: if one has a function with a free argument like $f(\cdot)$ and one writes “$m$” instead of the dot, then the operator of multiplication by this function is meant. In other words, one does not have a formal definition, but it's kind of “meta”. Is it correct like this? (Of course I'm used to you using this notation :-))

Best wishes, Hendrik

Marcus Waurick, 2019/11/17 09:24

Dear Hendrik, dear Jürgen,

Thank you very much for your comments.

Well, yes, you are both right. Let me add to this a less 'meta' interpretation (which works for all but the exp-example): in $L_2$-spaces over subsets of the real scalars, there is always a 'canonical' multiplication operator: the operator of multiplying by the argument, $m$. This operator is selfadjoint and has a trivial functional calculus. Hence, taking a function $f$, the operator of multiplying by $f$ is given by $f(m)$. This notation is then lifted to other examples as well, where 'the multiplication by the argument operator' does not immediately make sense, for example as in the examples of Lecture 2.

In the case of $M$, the meaning of $M(\mathrm{i}m+\nu)$ is that of a noncommutative version of the functional calculus for $m$.
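A finite-dimensional caricature of this (my own discretisation, not from the notes): on $\ell_2$ over finitely many grid points $t_j$, the operator $m$ of multiplication by the argument is the diagonal matrix $\operatorname{diag}(t_j)$; it is selfadjoint, and the functional calculus $f(m)=\operatorname{diag}(f(t_j))$ is exactly the operator of multiplying by $f$:

```python
import numpy as np

# "Multiplication by the argument" on l_2 over the grid points t_j:
t = np.array([-1.0, 0.5, 2.0, 3.0])
m = np.diag(t)
assert np.allclose(m, m.T)                 # m is selfadjoint

# Functional calculus of m: f(m) = diag(f(t_j)), i.e. multiplication by f.
nu = 0.3
f = lambda z: np.exp(-nu * z)              # e.g. f = exp(-nu * .)
f_of_m = np.diag(f(t))

x = np.array([1.0, 2.0, -1.0, 0.5])
assert np.allclose(f_of_m @ x, f(t) * x)   # f(m)x = f(.) * x pointwise
```

Applying $f(m)$ to a vector is the same as multiplying it pointwise by the values $f(t_j)$, which is the discrete analogue of the statement above.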

I hope these two interpretations add some more insight.

Best regards,

MM

Hendrik Vogt, 2019/11/17 13:30

Dear Moppi,

so in fact “$\mathrm{id}(m)$” is kind of overkill, because it's the same as $m$, yes?

As for the other occurrences, your answer left me more confused than before. In the “exp-example” of $\exp(-\nu m)$ I do see a functional calculus, no need for “meta”. But in Examples 2.4.2 and 2.4.3 there's no “canonical” multiplication operator $m$, is there?

Best wishes, Hendrik

P.S.: Did you see my post of yesterday (Nov. 16) concerning $S(\mathbb{R};X)$? If I'm not mistaken, then it is relevant for the later use of that space, e.g. for the definition of “uniformly Lipschitz continuous” on page 40.

Marcus Waurick, 2019/11/17 13:59

Dear Hendrik,

Yes, $\mathrm{id}(m)$ is notational overkill. The problem with the $\exp(-\nu m)$ example is that it maps from one $L_2$ into a different one. So it is not precisely a function of $m$.

If you are given any measure space $(\Omega,\Sigma,\mu)$, I don't know what it means to multiply by some $\omega\in\Omega$. Thus, even though $m$ does not make sense here, $V(m)$ does.

Concerning $S(\mathbb{R};X)$: yes, we need to take simple functions supported on compact sets only. It is also possible to choose $C_c(\mathbb{R};X)$ in the definition of uniformly Lipschitz continuous. In fact, any space $D$ with the property that, for all $a\in\mathbb{R}$ and $\nu\in\mathbb{R}$, $\{f\in D\,;\,\mathbb{1}_{(-\infty,a]}f\in D\}$ is dense in $L_{2,\nu}$ is fine for the definition of domains of uniformly Lipschitz continuous mappings.

Best regards,

moppi

Hendrik Vogt, 2019/11/17 14:15

Dear Moppi,

ah, thanks a lot! Indeed, I overlooked that $\exp(-\nu m)$ is a map between different spaces. So all this still looks rather “meta” to me!

Also, thanks for the confirmation regarding S(R;X). So Janik's question was indeed quite important.

Best wishes, Hendrik

Jürgen Voigt, 2019/11/18 13:44

Dear Hendrik, dear Marcus,

I do not understand why you say that $\mathrm{id}(m)$ is “overkill”. If I put myself into the general context as started in the examples in Lecture 2, and as I sketched in my previous post on $m$, then I simply cannot understand what $m$ would mean as an operator. By definition, $m$ is a mapping, the identity on $\Omega$ (and later on $\mathbb{K}$). If you say now that $m$ is also the operator of multiplication by $m$, then I ask why you introduced the notation $V(m)$ in Lecture 2, instead of just saying that you use the same symbol $V$ also for the maximal multiplication operator.

In fact, if you really want to use $m$ as the multiplication operator, then - combining this with the notation of Lecture 2 - you arrive at my previous suggestion $m(m)$.

I am not suggesting that you change your present notation. I just want to understand!

Maybe I am misunderstanding something fundamental, because I also do not understand why, in the Definition on p.56, you say “In particular”. I would expect that the subsequent statement is a special consequence of the previous definition, but I cannot see this.

Best wishes, Jürgen

Marcus Waurick, 2019/11/18 16:33, 2019/11/18 16:36

Dear Jürgen,

thank you for your remarks. In the situation of Lecture 2, the symbol $V$ already has the meaning of the measurable function $V:\Omega\to\mathbb{K}$. Thus, $V(m)$ emphasises the change of perspective to a multiplication operator in $L_2(\mu)$. Other authors use $M_V$ instead of $V(m)$ to denote the operator of multiplication by $V$. In certain situations it is, however, not feasible to write a longer mapping in the index of $M$, just because it cannot be read quite comfortably. Also, $M$ is the usual notation for a material law, so using $M_V$ might run into a conflict with one of the most central objects in the context of evolutionary equations.

It is, as always, a matter of taste. In the situation of $\mathrm{id}(m)$, we chose to just write $m$ since, in this particular situation, a stand-alone $m$ does make sense (as the operator of multiplication by the argument) and is consistent with the notation $V(m)$; also, just writing $m$ is more convenient.

The notation $m(m)$ in my opinion is not consistent with what we wrote in the lecture notes, since $m$ is not a measurable (or operator-valued) function on some measure space. We did not write $m$ to denote the identity. If we did so, it is an honest mistake and I have to apologise for this. A stand-alone $m$ is always an operator from some $L_2$-space into the same space. Any mapping applied to $m$ is always an operator from one $L_2$-space into another, mostly the same, $L_2$-space.

I hope this clarifies something,

Best regards,

MM

Jürgen Voigt, 2019/11/18 23:27

Dear Marcus,

now I see my mistake: I tried to think of $V(m)$ as $V\circ m$ (stupidly enough, instead of reading carefully your and Hendrik's explanations), and therefore came up with $m$ as the identity on $\Omega$. I should simply have kept to the definition $(V(m)f)(\omega):=V(\omega)f(\omega)$. (Prescription: in the expression to the left of $f$, every $m$ should be replaced by the variable, and the variable is inserted into $f$.) Consequently, $(mf)(\omega):=\omega f(\omega)$, supposing that the variable $\omega$ can act on the values of $f$ (i.e., $m=\mathrm{id}(m)$).

Thanks for your patience!

Best wishes, Jürgen

P.S.: I have a question: how does one quote something from another participant, if one wants to comment on or answer something?

Fabian Gabel, 2019/11/24 13:10, 2019/11/24 13:16

Dear Jürgen,

P.S.: I have a question: how does one quote something from another participant, if one wants to comment on or answer something?

It seems like quotes are preceded by a > symbol at the beginning of a line, see e.g. :

> This is a quote

renders to

This is a quote.

It seems however that this method of quoting does not work well with statements spanning multiple lines or containing formulae. For copying a formula, here is what I would do:

  • right click on the formula
  • choose Math Settings → Math Renderer → Plain Source

This will affect all formulas in this forum. Now you can copy and create a comment as described. In order to revert the rendering, repeat the above and change the renderer back to 'Common HTML'.

Hope this helps. I also found the Wiki Syntax quite helpful.

Best wishes, Fabian

discussion/lecture_05.txt · Last modified: 2019/10/21 17:00 by matcs