Discussion on Lecture 07
Dear all,
I have a question concerning Exercise 7.1 (c).
In Section 7.1 there aren't any further assumptions mentioned for the operator $\alpha$ except that it is in $L(L_2(\Omega))$. Is this sufficient to solve Exercise 7.1 (c)?
Where the question comes from: I have to show the existence of a $p$ that solves the first two equations of page 82. Inverting the grad operator is not convenient, especially if there aren't further assumptions on $\Omega$. Hence inverting $\alpha^*$ is the next idea, but for this you need further assumptions on $\alpha$.
Best regards
Markus
Marcus Waurick, 2019/12/13 08:16, 2019/12/13 08:17
Dear Markus,
thank you for your question! In fact you don't need any further assumptions on $\alpha$. The trick is to use the full evolutionary equation
$(\partial_{t,\nu}M_0+M_1+VAV^*)U=F$
with $U=(v,p,\omega,T,q)$. By Picard's Theorem, a right-hand side $F\in H^1_\nu(\mathbb{R};H)$, with $H$ as in Theorem 7.1.2, leads to $U\in H^1_\nu(\mathbb{R};H)\cap L_{2,\nu}(\mathbb{R};\operatorname{dom}(VAV^*))$ satisfying the evolutionary equation at hand. Looking into the condition $U\in H^1_\nu(\mathbb{R};H)\cap L_{2,\nu}(\mathbb{R};\operatorname{dom}(VAV^*))$ more closely, you will find all the necessary information asked for in part (c) of this exercise. Rearranging terms in $(\partial_{t,\nu}M_0+M_1+VAV^*)U=F$ will then provide you with a solution of the desired two equations.
The exercise is intended to somewhat perform the steps from the evolutionary equation backwards to the system one started out with.
Given suitable regularity it is possible to also start out with a sufficiently regular solution of the original system and then show that this leads to a solution of the evolutionary equation, which is unique.
Does this clarify the matter?
Best regards,
MM
Markus Borkowski, 2019/12/13 14:34
Dear Marcus,
I realized that I have overlooked some important details and so my question is kind of weird. Thanks for your reply.
Best regards
Markus
Sebastian Bechtel, 2019/12/11 16:58
Dear all,
two minor remarks:
p. 82 l. 16: definining
Proof of Lemma 7.3.1: First of all, it seems like you use that $a^{-1}$ can be bounded in operator norm by $1/c$? Maybe you could point to Proposition 6.2.2: in this proposition, invertibility of $a$ is shown together with the estimate $\Re a^{-1}\ge c/\|a\|^2$ (which you use in part (b)), and in the course of this proof also this estimate for $\|a^{-1}\|$ (though it is not explicitly stated in that proposition). Moreover, it seems to me that you've always factored out $a$ on the wrong side (you can estimate $a^{-1}b\tau_{-h}$ as you did and then write $a+b\tau_{-h}=a(1+a^{-1}b\tau_{-h})$). However, then you have to adapt part (b).
Best, Sebastian
Marcus Waurick, 2019/12/12 10:27, 2019/12/12 11:08
Dear Sebastian,
thank you for your remarks. We try to redefinine define in the future ;)
Concerning Lemma 7.3.1 you are completely right: we wrongfully omitted that $a\in L(H)$ with $\Re a\ge c$ yields $\|a^{-1}\|\le 1/c$. Thanks for pointing this out; it really should be contained in the statement of Proposition 6.2.2. The seemingly strange order of factors follows from the following computation:
$(a+b\tau_{-h})=(1+b\tau_{-h}a^{-1})a$. Hence,
$(a+b\tau_{-h})^{-1}=\bigl((1+b\tau_{-h}a^{-1})a\bigr)^{-1}=a^{-1}(1+b\tau_{-h}a^{-1})^{-1}$,
which is the term we mentioned in the lecture notes.
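For completeness, the invertibility of the factor $1+b\tau_{-h}a^{-1}$ follows from the Neumann series once $\|b\tau_{-h}a^{-1}\|<1$; a sketch (whether this smallness holds depends on the setting of the lemma, e.g. on $\|\tau_{-h}\|$ in the chosen weight $\nu$):

```latex
\|b\tau_{-h}a^{-1}\| \le \|b\|\,\|\tau_{-h}\|\,\|a^{-1}\| < 1
\;\Longrightarrow\;
(1+b\tau_{-h}a^{-1})^{-1} = \sum_{k=0}^{\infty} \bigl(-b\tau_{-h}a^{-1}\bigr)^{k},
\qquad
\bigl\|(1+b\tau_{-h}a^{-1})^{-1}\bigr\| \le \frac{1}{1-\|b\tau_{-h}a^{-1}\|}.
```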
Does this help or did I misinterpret your statement?
Best regards,
moppi
Sebastian Bechtel, 2019/12/12 20:49
Dear Moppi,
it's fine, I confused myself as I thought this is an $L_2$-norm and not an $L(L_2)$-norm (though you always use the canonical symbols for operators, like k and C :P).
And then of course also the order is not an issue.
Best, Sebastian
Janik Wilhelm, 2019/12/10 20:29, 2019/12/10 20:31
Dear virtual lecturers,
I have got a question concerning Exercises 7.5 to 7.7. Did I miss something, or are the conditions given in each exercise not sufficient to guarantee the existence of the Fourier transform? If we assume for instance $\nu_0>1$ and consider the function $k:\mathbb{R}\to\mathbb{R}$ with $k(x):=\chi_{[0,\infty)}(x)e^{(\nu_0-1)x}$, then $k$ satisfies $k\in L_{1,\nu_0}$ and $\operatorname{spt}k\subseteq\mathbb{R}_{\ge 0}$ as required, but the Fourier transform does not exist.
Best wishes,
Janik
Sascha Trostorff, 2019/12/11 09:02
Dear Janik,
yes, you are right, the Fourier transform of $k$ need not exist. With $\hat k$ we do not mean the Fourier transform, but the Laplace transform on a suitable right half-plane (compare Example 5.3.1 (d)). I hope this answers your question.
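For the example above, the Laplace transform does exist on such a half-plane: assuming the convention $\hat k(z)=\int_{\mathbb{R}}e^{-zx}k(x)\,\mathrm{d}x$ (the precise normalisation may differ in the notes), one computes

```latex
\hat k(z) = \int_0^{\infty} e^{-zx}\, e^{(\nu_0-1)x}\,\mathrm{d}x
          = \frac{1}{\,z-(\nu_0-1)\,},
\qquad \Re z > \nu_0 - 1,
```

so $\hat k$ is well-defined for $\Re z \ge \nu_0 > \nu_0-1$, even though the integral diverges on the imaginary axis (the Fourier case).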
Best regards
Sascha
Luca Calo, 2019/12/05 15:15
Dear ISem-Team,
after the discussion in our group at Darmstadt we have some minor remarks:
On pages 82 and 83 we were not sure how we can go from the variables $v,T,\omega,q$ to $p$ and $u$ to see how the original problem can be solved.
In Proposition 7.1.4 there is a little typo in the first sentence (“$H_0$ is a Hilbert space” instead of “$H_0$ is a Hilbert spaces” is correct).
In the proof of Proposition 7.1.4 a reference to Corollary 2.2.6 could be helpful (to decompose $H_0=\overline{\operatorname{ran}(N_0)}\oplus\ker(N_0)$).
Similarly, in the definition of $\partial_t^\alpha$ for $\alpha\in[0,1]$ there could be a reference to Chapter 5.
In the proof of Proposition 7.2.1 it could be helpful to mention that $x_1=P_Bx$. Then it is a little bit easier to see why the claim follows.
Best wishes, Luca
Marcus Waurick, 2019/12/06 10:44, 2019/12/06 10:45
Dear Darmstadt-group, dear Luca,
thank you very much for your comments. In particular thanks for pointing out the typos and possible additional references, as they might improve the reading.
Concerning pages 82 and 83: the rationale is the following. Assume smooth data (i.e. the right-hand sides are in $H^1_\nu$). Then you see that the variables $v,T,\omega,q$ and $p$ are also in $H^1_\nu$ by Picard's Theorem. Moreover, $v,T,\omega,q$ and $p$ are in $L_{2,\nu}(\mathbb{R};\operatorname{dom}(V^*AV))$. The equations together with the additional regularity allow you to perform the steps deriving the evolutionary equation `backwards'. For this, it also makes sense to have a look at Nathanael's question below.
Does this clarify the matter or do you mean something more particular?
Best regards,
MM
Nathanael Skrepek, 2019/12/04 12:19
Dear virtual lecturers,
I am a little bit puzzled by your modelling of the equations of poro-elasticity. In particular, when you introduce the new state variables $v,T,\omega,q$ you change the order of the operators. For instance, you can see this in the fourth equation for the new state variables:
$\partial_t C^{-1}T-\operatorname{Grad}v=0,$
which is
$\partial_t\operatorname{Grad}u-\operatorname{Grad}\partial_t u=0$
in the original variables.
I am not sure if you assume this to be true from the very beginning or if there is some justification for this. Can Exercise 6.6 help to show this?
Best regards,
Nathanael
Marcus Waurick, 2019/12/04 13:32
Dear Nathanael,
thank you for your question. The derivation of the evolutionary equations can be seen as being motivated by the equations one started out with. The problem is rather philosophical: What was first – the equation or the operator setting?
Our method of choice is as follows: Take the equations from the engineers, reformulate the equations whilst assuming that every partial derivative commutes with one another and then show well-posedness of the resulting equation by means of Picard's Theorem.
It is then a post-processing procedure (similar to the rationale, e.g., for the heat equation in Lecture 6) to show that the original equations are satisfied, provided suitable conditions on the source terms hold.
For the equation $\partial_t\operatorname{Grad}u=\operatorname{Grad}\partial_t u$ to hold one would need that $u\in H^1_\nu(\mathbb{R};\operatorname{dom}(\operatorname{Grad}))$. The question thus amounts to providing conditions on the source terms that guarantee $u\in H^1_\nu(\mathbb{R};\operatorname{dom}(\operatorname{Grad}))$. So, if $f,g$ are both in $H^1_\nu$, Picard's Theorem tells us that $(v,p,\omega,T,q)\in H^1_\nu(\mathbb{R},\cdots)\cap\operatorname{dom}(VAV^*)$. Thus, $u=\partial_t^{-1}v\in H^1_\nu(\mathbb{R},\operatorname{dom}(\operatorname{Grad}))$. Hence, the desired equality is true.
Best regards,
MM
Dear Marcus,
thank you for your quick response.
I am sure that a solution to the “new” problem will satisfy this condition, since the fourth equation ($\partial_t C^{-1}T-\operatorname{Grad}v=0$) is already forcing it to be true. In fact it is an additional condition which wasn't in the original problem.
I would just propose to mention that, because I don't see how you guarantee that the original problem doesn't have solutions that don't satisfy this additional condition.
Marcus Waurick, 2019/12/04 14:24
Dear Nathanael,
I don't see that
$\partial_t C^{-1}T-\operatorname{Grad}v=0$
is an additional condition. By definition, $C^{-1}T=\operatorname{Grad}u$. By definition, $v=\partial_t u$. So, the `additional' condition is the defining equation for $T$ once differentiated with respect to time. This, however, is considered to hold anyway. The reason for this is that partial derivatives are assumed to commute while working with the model one starts out with.
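Spelled out, using the definitions $C^{-1}T=\operatorname{Grad}u$ and $v=\partial_t u$ together with the modelling assumption that the derivatives commute:

```latex
\partial_t C^{-1} T
  = \partial_t \operatorname{Grad} u
  = \operatorname{Grad} \partial_t u
  = \operatorname{Grad} v .
```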
Does this clarify anything or did I fail to understand you correctly?
Best regards,
MM
Dear Marcus,
as you said, you assume them to commute, but this isn't mentioned anywhere before. So if I just take the original equations as they are, there is no reason why this should hold.
Anyway, I understand that you assume for the sake of modelling that you have something like Schwarz's theorem. However, when I read this I was wondering if this is a consequence of something I was missing or if this is an additional assumption/condition.
Best Regards, Nathanael
Marcus Waurick, 2019/12/04 16:10
Dear Nathanael,
ah, now I get it. Thanks. Yes, we should have mentioned that we assume that one can freely interchange differentiation operators at the stage of modelling.
Cheers,
MM
Nathanael Skrepek, 2019/12/06 10:54
Dear Marcus,
sorry to bother you again. I was thinking about this setting again and I am slowly getting a better understanding. However, I don't see that $u\in H^1_\nu(\mathbb{R};\operatorname{dom}\operatorname{Grad})$ justifies
$\partial_{t,\nu}\operatorname{Grad}u=\operatorname{Grad}\partial_{t,\nu}u.$
I guess
$H^1_\nu(\mathbb{R},\operatorname{dom}\operatorname{Grad})=\operatorname{dom}\partial_{t,\nu}\cap\{f\in L_{2,\nu}(\mathbb{R},L_2(\Omega)^d):f(t)\in\operatorname{dom}\operatorname{Grad}\text{ a.e.}\}.$
Did I miss something? Maybe Exercise 6.6 can help, but I didn't manage to succeed.
Best regards, Nathanael
Marcus Waurick, 2019/12/06 11:35
Dear Nathanael,
no bother! The definition of $u\in H^1_\nu(\mathbb{R};\operatorname{dom}(\operatorname{Grad}))$ is that $u\in L_{2,\nu}(\mathbb{R};\operatorname{dom}(\operatorname{Grad}))$ and $\partial_{t,\nu}u\in L_{2,\nu}(\mathbb{R};\operatorname{dom}(\operatorname{Grad}))$.
(Note that since we know that $v\in L_{2,\nu}(\mathbb{R};\operatorname{dom}(\operatorname{Grad}))$, we thus know that $u=\partial_{t,\nu}^{-1}v\in H^1_\nu(\mathbb{R};\operatorname{dom}(\operatorname{Grad}))$; this is the knowledge we have from the introduction of the time derivative of functions taking values in the Hilbert space $H=\operatorname{dom}(\operatorname{Grad})$.)
The operator
$T:H^1_\nu(\mathbb{R};\operatorname{dom}(\operatorname{Grad}))\to L_{2,\nu}(\mathbb{R};L_2(\Omega)^{d\times d})$
defined by $Tu=\partial_{t,\nu}\operatorname{Grad}u$ is continuous. Also, $T$ coincides with the continuous operator
$\tilde T:H^1_\nu(\mathbb{R};\operatorname{dom}(\operatorname{Grad}))\to L_{2,\nu}(\mathbb{R};L_2(\Omega)^{d\times d})$
defined by $\tilde Tu=\operatorname{Grad}\partial_{t,\nu}u$ on the total subset
$\{(t,x)\mapsto\phi(t)\psi(x);\ \phi\in C_c^\infty(\mathbb{R}),\ \psi\in\operatorname{dom}(\operatorname{Grad})\}$
of $H^1_\nu(\mathbb{R};\operatorname{dom}(\operatorname{Grad}))$.
This shows the claim.
Alternatively, you could also apply Hille's Theorem to $u=\partial_{t,\nu}^{-1}v$. In this case, you obtain the equality
$\operatorname{Grad}\partial_{t,\nu}^{-1}v=\partial_{t,\nu}^{-1}\operatorname{Grad}v.$
Multiplying both sides by $\partial_{t,\nu}$ (which is allowed, since the right-hand side is in the domain of $\partial_{t,\nu}$), we obtain
$\partial_{t,\nu}\operatorname{Grad}u=\partial_{t,\nu}\operatorname{Grad}\partial_{t,\nu}^{-1}v=\partial_{t,\nu}\partial_{t,\nu}^{-1}\operatorname{Grad}v=\operatorname{Grad}v=\operatorname{Grad}\partial_{t,\nu}u.$
I hope this helps.
Dear Marcus,
thank you for this careful treatment. Defining $\partial_{t,\nu}$ on $L_{2,\nu}(\mathbb{R},\operatorname{dom}(\operatorname{Grad}))$ seems crucial.
That is why I am concerned about whether we want to apply Picard's Theorem to the differential operator defined on $L_{2,\nu}(\mathbb{R},\operatorname{dom}(\operatorname{Grad}))$. When I take a look at Theorem 7.1.2, it looks like we rather have $\partial_{t,\nu}$ on $L_{2,\nu}(\mathbb{R},L_2(\Omega)^n)$ for some suitable $n\in\mathbb{N}$.
Best regards, Nathanael
Marcus Waurick, 2019/12/06 12:51
Dear Nathanael,
we are in fact applying Picard's Theorem in the space $L_{2,\nu}(\mathbb{R},H)$ (with $H$ given in Theorem 7.1.2).
So, for right-hand sides in $L_{2,\nu}(\mathbb{R};H)$, we only obtain solutions in $L_{2,\nu}(\mathbb{R};H)$. However, if the right-hand side is more regular in the time variable, that is, if it is contained in $H^1_\nu(\mathbb{R};H)$, we deduce from Picard's Theorem (this is the last statement in Theorem 6.2.1) that the solution is in $H^1_\nu(\mathbb{R};H)$ and in $L_{2,\nu}(\mathbb{R};\operatorname{dom}(V^*AV))$. The latter implies that $v\in L_{2,\nu}(\mathbb{R};\operatorname{dom}(\operatorname{Grad}))$, and we are in the situation I was positioning myself in in my previous post.
Does this help?
Best regards,
MM
Nathanael Skrepek, 2019/12/06 15:49
Dear Marcus,
I can follow until the point where we have
$v\in L_{2,\nu}(\mathbb{R},\operatorname{dom}(\operatorname{Grad}))\cap H^1_\nu(\mathbb{R},L_2(\Omega)^d)$. How do you end up with $\partial_{t,\nu}^{-1}v\in H^1_\nu(\mathbb{R},\operatorname{dom}(\operatorname{Grad}))$? More precisely, why is $\partial_{t,\nu}^{-1}v\in L_{2,\nu}(\mathbb{R},\operatorname{dom}(\operatorname{Grad}))$ true?
Is it because $\partial_{t,\nu}^{-1}$ on $L_{2,\nu}(\mathbb{R},L_2(\Omega)^d)$ restricted to $L_{2,\nu}(\mathbb{R},\operatorname{dom}(\operatorname{Grad}))$ equals $\partial_{t,\nu}^{-1}$ on $L_{2,\nu}(\mathbb{R},\operatorname{dom}(\operatorname{Grad}))$? This seems to be true but confuses me.
Best regards, Nathanael
Marcus Waurick, 2019/12/06 16:41
Dear Nathanael,
well, this observation of yours is true.
In Lecture 3, we have shown that if $H$ is any Hilbert space and $v\in L_{2,\nu}(\mathbb{R};H)$, then $\partial_{t,\nu}^{-1}v\in H^1_\nu(\mathbb{R};H)$. We apply this to $H$ being the domain of $\operatorname{Grad}$ (endowed with the graph norm).
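A sketch of this step, with $H=\operatorname{dom}(\operatorname{Grad})$ and assuming the boundedness of $\partial_{t,\nu}^{-1}$ on $L_{2,\nu}(\mathbb{R};H)$ (with $\|\partial_{t,\nu}^{-1}\|\le 1/\nu$, as established for the time derivative in the lecture notes):

```latex
v \in L_{2,\nu}(\mathbb{R};H)
\;\Longrightarrow\;
\partial_{t,\nu}^{-1} v \in L_{2,\nu}(\mathbb{R};H)
\quad\text{and}\quad
\partial_{t,\nu}\,\partial_{t,\nu}^{-1} v = v \in L_{2,\nu}(\mathbb{R};H),
```

so $\partial_{t,\nu}^{-1}v\in H^1_\nu(\mathbb{R};H)$ by the definition recalled above.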
Agreed?
Best regards,
MM
Christian Seifert, 2019/12/03 17:35
Dear all,
We uploaded a corrected version of Lecture 07 (we corrected small mistakes in Exercises 7.4 to 7.7).
Best wishes, Christian
Hendrik Vogt, 2019/11/27 23:33, 2019/11/27 23:45
Dear all,
I have some comments on Propositions 7.1.4 and 7.2.1.
In the proof of Proposition 7.1.4 it seems to me that lines 3-8 are not really needed: if in line 9 one chooses $x\in\overline{\operatorname{ran}(N_0)}$ (which is possible by the decomposition in line 1 of the proof), then the computation that follows still goes through as is, doesn't it? I don't see where it is used that $x$ is in $\operatorname{ran}(N_0)$ (and not just in the closure; cf. line 2 of the proof).
Proposition 7.2.1 had me baffled for a few minutes. The reason is that (unnecessarily, as far as I see) two things are combined here: the trivial identity $\langle x,Bx\rangle=\langle P_Bx,BP_Bx\rangle$ and the interesting observation that $\Re z\ge\nu$ implies $\Re z^\alpha\ge\nu^\alpha$, for $\nu>0$. So I'd say this is a proposition about complex numbers that has nothing to do with Hilbert spaces and operators. And I'd express it, equivalently, as an inequality: $\Re z^\alpha\ge(\Re z)^\alpha$, for $\Re z>0$.
As for the proof of Proposition 7.2.1, I'd do it as follows: first observe that w.l.o.g. one can assume $\Re z=1$. I put $x:=\arg z$; then $\Re z^\alpha=\cos(\alpha x)/(\cos x)^\alpha$, and we have to show that this is $\ge 1$. Equivalently, we have to show that $\ln\cos(\alpha x)\ge\alpha\ln\cos x$. This, however, follows from the fact that $\ln\circ\cos$ is concave on $(-\pi/2,\pi/2)$ (and $=0$ at $x=0$). To see the concavity, note that $(\ln\circ\cos)'=-\tan$ is decreasing.
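The inequality $\Re z^\alpha\ge(\Re z)^\alpha$ for $\Re z>0$ and $\alpha\in[0,1]$ is easy to spot-check numerically; a quick sanity check (not part of the notes), relying on the fact that Python's `**` uses the principal branch for complex bases, which matches the setting above:

```python
import random

random.seed(0)

# Spot-check Re(z^alpha) >= (Re z)^alpha for Re z > 0 and alpha in [0, 1].
# complex ** float in Python evaluates the principal branch of the power.
for _ in range(10_000):
    z = complex(random.uniform(0.01, 10.0), random.uniform(-10.0, 10.0))
    alpha = random.uniform(0.0, 1.0)
    # Small tolerance for floating-point rounding.
    assert (z ** alpha).real >= z.real ** alpha - 1e-12, (z, alpha)

print("inequality verified on all samples")
```

Of course this is no substitute for the concavity argument, but it catches sign or branch mistakes immediately.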
Best wishes, Hendrik
Christian Seifert, 2019/11/28 08:51
Dear Hendrik,
Many thanks for the comments.
Concerning the proof of Proposition 7.1.4, the lines 3-8 show closedness of the range of N0, which we actually do not need for the argument by line 2.
I agree on your comment on Proposition 7.2.1. We will adjust that for the final version.
Best wishes, Christian
discussion/lecture_07.txt · Last modified: 2019/10/21 17:00 by matcs