Tuesday, July 5, 2022

Recent Questions - Mathematics Stack Exchange



Condition for Baumslag-Solitar group to be virtually abelian

Posted: 05 Jul 2022 08:07 AM PDT

The Baumslag-Solitar group is the group given by the presentation $$ BS(p, q) = \langle a, b \mid a^{-1}b^p a = b^q \rangle $$ where $p,q \in \mathbb{Z} \setminus \{0\}$.

I have read that this group is virtually abelian (i.e. contains a finite index abelian subgroup) if and only if $|p| = |q|$, but this is given without citation. I have also been unable to find this (or an equivalent statement) in any other papers. However I have found the result that $BS(p,q)$ is residually finite if and only if $|p| = |q|$ or at least one of $|p|$ and $|q|$ is $1$.

I have been trying to extend this condition to show that in the first case $BS(p, q)$ is virtually abelian (and the converse), but have so far been unsuccessful.

I've also been trying to show that if $|p| \neq |q|$ but at least one of them is $1$, then $BS(p,q)$ is not virtually abelian. It is clear that in this case you get a semidirect product of two copies of $\mathbb{Z}$, but I've been unable to show that no finite index abelian subgroup exists.

Thanks for any hints/help!

Is there an elegant notation to indicate that what follows is a range?

Posted: 05 Jul 2022 08:05 AM PDT

I'm currently creating a table containing values for a number of variables, many of which actually are the mean values. To indicate that those are the mean values, I use the bar on the corresponding variable in the header (e.g. $\bar{x}$). Maybe this is not the standard way in maths, but it's common in my (physics) community.

Now I also have a column where I would like to give a range for the corresponding variable, but I can't think of a good way to indicate this so that people can intuitively understand it. Of course I can always just use some "arbitrary" indicator (e.g. $\tilde{x}$, or $\hat{x}$) and explain what I mean by that in the notes. In addition, I of course also need to indicate the range in the value cells themselves, e.g. $a-b$, $[a, b]$, or $a<x<b$ (of which I actually prefer the first; I know, not very mathy).

I was just wondering if there is something more intuitive.

The only other idea I could think of is to make two columns, one with $x_\text{min}$ and one with $x_\text{max}$ (or $\text{min}(x)$/$\text{max}(x)$).

Apologies if this is not the correct community. I found this post and thus thought to give it a try here.

Growth Rate of Supremum of Random Walk (with negative drift)

Posted: 05 Jul 2022 08:03 AM PDT

Let $(X_k)_{k=1}^\infty$ be a sequence of i.i.d. random variables, with $\mathbb{E}(X_i)$ finite and negative. Define $S_n := X_1 + ... + X_n$, and $M_n := \max(S_1,S_2,...,S_n)$.

It follows from the final bullet point of Sangchul Lee's answer to this question Is the expectation of the supremum of a random walk with negative drift finite? that if the $X_i$ have infinite variance, then $\mathbb{E}(M_n) \rightarrow +\infty$ as $n \rightarrow +\infty$. However, I am curious how quickly this divergence occurs.


I am not sure if there is a general answer in terms of the distribution of $X_i$, so I tried to use a simple test example. I defined the $X_i$ to be absolutely continuous with density function $f(x) := \frac{2}{(x+3)^3}$ for $x \in [-2,\infty)$.

This is the simplest example I could think of with a finite negative mean (in this case the mean is $-1$) yet infinite variance. However, with this example, I am struggling to work out how quickly $\mathbb{E}(M_n)$ diverges.

By a very rough numerical calculation, it seems as though the density function of $M_2$ decays like $O(\frac{1}{x^{2.8}})$ (maybe the true exponent is $e$?). I'm guessing that as $n \rightarrow +\infty$ the density of $M_n$ tends to a decay rate of $O(\frac{1}{x^2})$, but again I have no idea how to attack this.


Does anyone know what kind of analytic tools I can use to tackle this problem? Cheers.
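Not an analytic answer, but the growth can be probed empirically. The test density above has CDF $F(x)=1-\frac{1}{(x+3)^2}$, so inverse-transform sampling is immediate; the sketch below (sample sizes and function names are arbitrary choices of mine) estimates $\mathbb{E}(M_n)$ by simulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_X(size):
    # Inverse-transform sampling for the density f(x) = 2/(x+3)^3 on [-2, inf):
    # the CDF is F(x) = 1 - 1/(x+3)^2, so x = 1/sqrt(1-U) - 3 with U ~ Uniform(0,1)
    u = rng.random(size)
    return 1.0 / np.sqrt(1.0 - u) - 3.0

def mean_M(n, trials=5000):
    # Monte Carlo estimate of E[M_n], where M_n = max(S_1, ..., S_n)
    X = sample_X((trials, n))
    S = np.cumsum(X, axis=1)
    return S.max(axis=1).mean()

for n in [10, 100, 1000]:
    print(n, mean_M(n))
```

Fitting the printed values against candidate rates (logarithmic, polynomial) may at least suggest the right growth order before attacking it analytically.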

Find $n$ for even $f(n)$: $f(1)=1$, $f(2n)=f(n)$, $f(2n+1)=f(n)+f(n+1)$, from Concrete Mathematics

Posted: 05 Jul 2022 08:02 AM PDT

I am wondering why $f(n)$ is even exactly when $3\mid n$. I can only prove that $f(n)$ is even when $n=3\cdot 2^k$, because $f(3)$ is even.
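The parity claim is easy to test computationally, assuming the recurrence $f(2n)=f(n)$ (Stern's diatomic sequence, as in Concrete Mathematics; the variant $f(2n)=n$ would give $f(6)=3$ and break the pattern):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def f(n):
    # Stern-style recurrence: f(1)=1, f(2n)=f(n), f(2n+1)=f(n)+f(n+1)
    if n == 1:
        return 1
    if n % 2 == 0:
        return f(n // 2)
    return f(n // 2) + f(n // 2 + 1)

# check: f(n) is even exactly when 3 divides n
print(all((f(n) % 2 == 0) == (n % 3 == 0) for n in range(1, 200)))
```

The check passing for a range of $n$ is of course not a proof, but it pins down the statement to be proved.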

Not sitting two students from class eleven side by side how can you sit 10 of class eleven students and 14 of class twelve students together?

Posted: 05 Jul 2022 08:09 AM PDT

I have tried it this way: since no two students from class eleven can sit together, we can make 10 pairs of students by taking one from class eleven and one from class twelve. If we treat those 10 pairs as 10 individual students, we will have 14 individuals to arrange, counting the remaining 4 class twelve students. As a result we get $14!$ permutations.

The answer given in the text is $^{15}P_{10} \times 14!$.

Do we have to choose the 10 positions from 15 slots in total because, if we count the slot to the right of each class eleven student, there are 10 slots, plus 4 extra slots from the unpaired class twelve students, and 1 extra slot because a class twelve student can also sit to the left of the first class eleven student in the row? Is this the reason we take 10 out of 15 instead of 10 out of 14?
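The textbook count can be cross-checked by brute force on a small instance (names below are mine): arrange the $b$ class twelve students first ($b!$ ways), creating $b+1$ gaps, then place the $a$ class eleven students into distinct gaps in $^{b+1}P_a$ ways; for $a=10$, $b=14$ this gives $14!\times{}^{15}P_{10}$.

```python
from itertools import permutations
from math import factorial, perm

def brute(a, b):
    # count seatings of a distinguishable 'E' (class eleven) and b 'T'
    # (class twelve) students in a row with no two E's adjacent
    people = [('E', i) for i in range(a)] + [('T', i) for i in range(b)]
    count = 0
    for p in permutations(people):
        if all(not (p[i][0] == 'E' and p[i + 1][0] == 'E')
               for i in range(len(p) - 1)):
            count += 1
    return count

def formula(a, b):
    # b! arrangements of class twelve, then ordered choice of a of the b+1 gaps
    return factorial(b) * perm(b + 1, a)

print(brute(2, 3), formula(2, 3))  # both should agree
```

The agreement on small cases supports the gap argument behind $^{15}P_{10}\times 14!$.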

Exchanging beverages, A cooperative bargaining problem

Posted: 05 Jul 2022 07:59 AM PDT

Suppose players 1 and 2 each have a glass of beverage, A and B respectively. Player 1 is indifferent between the two beverages, while player 2 likes beverage A twice as much as his own. Can they reach an agreement on exchanging some amount of their beverages?

I'm interested in finding the Nash bargaining solution for the given situation. I can model their utilities by $U_1(a,b)=a+b$ and $U_2(a,b)=a+2b$, but I have no idea how I should represent the feasibility set as a closed subset of $\mathbb{R}^2$ and solve the problem. All of the examples I have seen before, such as splitting a dollar, have one quantity that is being negotiated, while here, there are two.
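One tentative way to set up the feasible set: let player 1 hand over $x\in[0,1]$ of A and receive $y\in[0,1]$ of B, take no trade as the disagreement point, and maximize the Nash product numerically. Note that I use $U_2(a,b)=2a+b$ below so that player 2 values A twice as much as B; with $U_2=a+2b$ as written, a trade of A toward player 2 would appear to give no mutual gains. This is only a sketch under those assumptions:

```python
import numpy as np

# Player 1 gives x of beverage A and receives y of B, so the post-trade
# holdings are (1-x, y) for player 1 and (x, 1-y) for player 2.
# Assumed utilities: U1(a,b) = a + b, U2(a,b) = 2a + b.
# Disagreement point (no trade): d1 = U1(1,0) = 1, d2 = U2(0,1) = 1.
xs = np.linspace(0, 1, 401)
ys = np.linspace(0, 1, 401)
X, Y = np.meshgrid(xs, ys)
U1 = (1 - X) + Y
U2 = 2 * X + (1 - Y)
# Nash product (U1 - d1)(U2 - d2), restricted to mutually beneficial trades
product = np.where((U1 > 1) & (U2 > 1), (U1 - 1) * (U2 - 1), 0.0)
i = np.unravel_index(product.argmax(), product.shape)
print(X[i], Y[i], product[i])  # best trade found on the grid
```

Under these assumptions the grid search lands on player 2 giving all of B for a fraction of A, which can then be verified analytically on the boundary $y=1$.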

$\alpha$ is transcendental and there exists some $\beta$ such that $f(\beta) =\alpha$. Show that $\beta$ is transcendental.

Posted: 05 Jul 2022 07:56 AM PDT

I am starting to study field theory and I encountered this question:

Suppose that $L:K$ is an extension, that $\alpha$ is an element of L which is transcendental over K, and that $f$ is a non-constant element of $K[x]$. Show that $f(\alpha)$ is transcendental over $K$. Show that, if $\beta$ is an element of L which satisfies $f(\beta) =\alpha$, then $\beta$ is transcendental over $K$.

I have already shown that $f(\alpha)$ is transcendental over $K$ but I'm having trouble showing that $\beta$ is transcendental. Any help would be appreciated.

Speed of convergence for Monte Carlo integration

Posted: 05 Jul 2022 07:56 AM PDT

I am following this Wikipedia page on naive Monte Carlo method. Here is my slightly modified "proof"/reasoning for the $\mathcal{O} (N^{-1/2})$ speed of convergence of the method. I would like to have comments on whether I have understood the reasoning correctly.

The method: Let $\Omega \subset \mathbb{R}^n $ and let $\mu$ denote the $n$-dimensional Lebesgue measure. Let $x_1, x_2, \dots, x_N$ be a sample from the uniform distribution of $\Omega$. Thus, the probability density function is

$$ \rho(x)=\begin{cases} 1/\mu\left(\Omega\right){,}&\mathrm{in}\ \Omega{,}\\ 0{,}&\mathrm{otherwise.} \end{cases}$$

Then the method $$ \int_\Omega f \mathrm{d}\mu \simeq \frac{\sum_{k=1}^N f(x_k)}{N} \mu(\Omega)$$ has $\mathcal{O} (N^{-1/2}) $ speed of convergence.

Proof: Let $X_1, X_2, \dots, X_N$ be i.i.d. random variables. The distribution is the uniform distribution on $\Omega$. Thus, $f(X_1), f(X_2), \dots, f(X_N)$ are i.i.d. random variables as well. We denote the variance of $f(X_1), f(X_2), \dots, f(X_N)$ by $\sigma^2$.

The speed of convergence, i.e., the standard deviation of the method is

$$ \sqrt{\mathrm{Var}\left(\frac{\sum_{k=1}^Nf\left({X}_k\right)}{N}\mu\left(\Omega\right)\right)} = \sqrt{\frac{\mu \left(\Omega \right)^2}{N^2}\mathrm{Var}\left(\sum _{k=1}^Nf\left({X}_k\right)\right)} \\ = \sqrt{\frac{\mu \left(\Omega \right)^2}{N^2}\sum _{k=1}^N\mathrm{Var}\left(f\left({X}_k\right)\right)} \\ = \sqrt{\frac{\mu\left(\Omega\right)^2}{N^2}N\sigma^2} =\frac{\mu \left(\Omega \right)\sigma }{\sqrt{N}}. $$ Here $\mu(\Omega) \sigma $ is constant. Thus, the standard deviation is $\mathcal{O} (N^{-1/2})$.

By saying that the method converges, we mean that for large $N$, the standard deviation is close to zero. That is, for large enough $N$, "on average" it holds that $$  \left(\frac{\sum_{k=1}^N f(X_k)}{N} \mu(\Omega)-\int_\Omega f \mathrm{d}\mu \right)^2 \approx 0,$$ since $$ \mathbb{E} \left(f(X_k)\right)=\frac{1}{\mu(\Omega)} \int_{\Omega} f \mathrm{d}\mu. $$

However, that alone does not guarantee that the error of a single run, or even of the average of several runs, is close to zero. By the law of large numbers, the average is close to the true integral with high probability.
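The argument can be illustrated numerically (the integrand and domain are my own arbitrary choices): for $f(x,y)=xy$ on $\Omega=[0,1]^2$ we have $\mu(\Omega)=1$ and $\int_\Omega f\,\mathrm{d}\mu=1/4$, and the observed error shrinks roughly like $N^{-1/2}$:

```python
import numpy as np

rng = np.random.default_rng(1)

def mc_integrate(f, n, dim=2):
    # naive Monte Carlo on Omega = [0,1]^dim, where mu(Omega) = 1
    x = rng.random((n, dim))
    return f(x).mean()

# test integrand f(x, y) = x*y over [0,1]^2; the exact integral is 1/4
f = lambda x: x[:, 0] * x[:, 1]
exact = 0.25

for n in [10**2, 10**4, 10**6]:
    runs = [abs(mc_integrate(f, n) - exact) for _ in range(20)]
    print(n, np.mean(runs))  # mean absolute error per sample size
```

Each factor of $100$ in $N$ should shrink the printed error by roughly a factor of $10$, matching the $\mathcal{O}(N^{-1/2})$ rate derived above.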

Does there exist a subset $A$ of $\mathbb{R}^n$ such that the sum over the volumes of every countable covering is infinite but $A$ has finite volume?

Posted: 05 Jul 2022 08:11 AM PDT

My measure theory material states that the Lebesgue outer measure of a set $A \subset \mathbb{R}^n$ is $\infty$ if for every countable covering $A \subset \bigcup_{k\in \mathbb{N}}C_k$, the sum of the volumes of the individual $C_k$ is infinite. Therefore, I am interested in knowing whether it is possible that a set $A \subset \mathbb{R}^n$ has a finite Lebesgue outer measure, yet the said sum is infinite for all coverings of $A$?

Expected value of a complicated function

Posted: 05 Jul 2022 07:52 AM PDT

I want a closed form/ semi closed form of the expected value of a complicated function.

The function looks like this,

$$ f(x) = \frac{\sin(A \frac{x}{2})}{\sin(\frac{x}{2})} \frac{\sin(A B\frac{x}{2})}{\sin(B \frac{x}{2})} \cos\Big[\frac{x}{2}[(A - 1) + B(M - 1)]\Big] $$

The terms $A$, $B$, $M$ are constants.

Here, $x$ comes from a Gaussian distribution:

$$ x \sim \mathcal{N}(\mu, \sigma) $$

I want to know $E[f(x)]$. How should I approach it?

In integral form, I want to solve the following integral.

$$ E[f(x)] = \int_{-\infty}^{+\infty} \frac{\sin(A \frac{x}{2})}{\sin(\frac{x}{2})} \frac{\sin(A B\frac{x}{2})}{\sin(B \frac{x}{2})} \cos\Big[\frac{x}{2}[(A - 1) + B(M - 1)]\Big] \frac{1}{\sqrt{2 \pi \sigma^2}} e^{-\frac{(x - \mu)^2}{(2\sigma^2)}} dx $$

Mathematica is not able to compute this. Can this integral be performed in the complex domain?
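For a numerical value (before attempting a contour evaluation), Gauss-Hermite quadrature in its probabilists' form (weight $e^{-t^2/2}$) fits this integral directly. The parameter values below are placeholders, $A$ and $B$ are assumed to be integers so the sine ratios stay bounded, and an even node count avoids the removable singularity at $x=0$:

```python
import numpy as np

def f(x, A, B, M):
    # the integrand from the question; for integer A, B the sine ratios
    # are Dirichlet-kernel-like and bounded, with removable zeros of sin
    return (np.sin(A * x / 2) / np.sin(x / 2)
            * np.sin(A * B * x / 2) / np.sin(B * x / 2)
            * np.cos(x / 2 * ((A - 1) + B * (M - 1))))

def expected_value(A, B, M, mu, sigma, n=200):
    # E[f(X)] for X ~ N(mu, sigma^2) via probabilists' Gauss-Hermite:
    # E[f(X)] = (1/sqrt(2 pi)) * \int f(mu + sigma t) e^{-t^2/2} dt
    nodes, weights = np.polynomial.hermite_e.hermegauss(n)
    return np.sum(weights * f(mu + sigma * nodes, A, B, M)) / np.sqrt(2 * np.pi)

print(expected_value(A=4, B=3, M=2, mu=0.0, sigma=0.1))
```

As a sanity check, for $\mu=0$ and very small $\sigma$ the result should approach the limit $f(0)=A^2$. For large $\sigma$ the nodes can approach the (removable) zeros of $\sin(x/2)$, so the quadrature would need care there.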

$\lim_{t\to 0^{\pm}}\frac{\Vert x+ty\Vert^2-\Vert x\Vert^2}{2t}=\|x\|\cdot\lim_{t\to 0^{\pm}}\frac{\|x+ty\|-\|x\|}{t}$?

Posted: 05 Jul 2022 08:08 AM PDT

Let $(X,\|\cdot\|)$ be a normed space. Define $$\rho'_{\pm}(x,y)=\lim_{t\to 0^{\pm}}\frac{\Vert x+ty\Vert^2-\Vert x\Vert^2}{2t}.$$ How can one prove the following? $$\rho'_{\pm}(x,y)=\|x\|\cdot\lim_{t\to 0^{\pm}}\frac{\|x+ty\|-\|x\|}{t}.$$

Aperiodicity of diagonally dominant Markov matrices

Posted: 05 Jul 2022 07:47 AM PDT

I would greatly appreciate any further thoughts on the following proposition:

"Given $P \in \mathbb{R}^{n\times n}$ a diagonally dominant Markov matrix; it follows that $P$ will always converge to a steady state."

I am not too familiar with the Jordan normal form, though I suspect it can shed some light on the proposition. I have only been able to prove that -1 cannot be an eigenvalue and suspect a similar device can be used to prove it cannot be complex:

$$|p_{ii} - z| \leq \sum_{\substack{j = 1\\ j\neq i}}^{n}|p_{ij}|\\ |p_{ii} + 1| \leq \sum_{\substack{j = 1\\ j\neq i}}^{n}|p_{ij}|\\ |p_{ii} + 1| \leq |p_{ii}|$$

The last inequality being absurd. I brought up the Jordan form as it can clarify the existence of $P^{\infty}$, an equivalent statement.
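Not a proof, but the proposition is easy to probe numerically. Below is one arbitrary way to construct a diagonally dominant Markov (row-stochastic) matrix: average a random stochastic matrix with the identity, which forces $p_{ii}\ge 1/2$.

```python
import numpy as np

rng = np.random.default_rng(2)

def random_dd_markov(n):
    # random row-stochastic matrix averaged with the identity,
    # so p_ii >= 1/2 >= sum of the off-diagonal entries in row i
    P = rng.random((n, n))
    P /= P.sum(axis=1, keepdims=True)
    return 0.5 * (P + np.eye(n))

P = random_dd_markov(5)
# every eigenvalue is (1 + lambda)/2 with |lambda| <= 1, so none lies near -1
print(np.sort(np.abs(np.linalg.eigvals(P))))
# the powers P^k settle down to a rank-one steady-state matrix
print(np.allclose(np.linalg.matrix_power(P, 200),
                  np.linalg.matrix_power(P, 400), atol=1e-8))
```

The averaging trick also hints at the structure of the eigenvalue argument: every eigenvalue of $\frac12(P'+I)$ has nonnegative real part, which rules out $-1$ immediately.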

How can I calculate the saturation?

Posted: 05 Jul 2022 07:37 AM PDT

We have three points $P_1=[1:0:0]$, $P_2=[0:1:0]$, $P_3=[0:0:1]$. We know that the ideal of the points has the form $I=\langle xy,yz,xz\rangle$, and squaring gives $I^2=\langle x^2y^2, x^2yz, xy^2z, x^2z^2, xyz^2, y^2z^2\rangle$.
But why does the symbolic power $I^{(2)}=\langle xyz, x^2y^2, x^2z^2, y^2z^2\rangle$ have this form? And how can I compute the saturation? Which generator is not necessary? How can I interpret the saturation and the symbolic power geometrically in this example?

Which statistical test should I use?

Posted: 05 Jul 2022 07:32 AM PDT

I have 10 participants who were asked to rate (from 1-5) 30 statements. I then asked each participant to assign each statement one of three categories.

I want to establish if there is a relationship between the rating scores and the categories, i.e. do the participants rate a specific category higher than others?

Which statistical test is appropriate for this?

My initial thoughts are ANOVA but I am not sure if this violates the independence assumption. The participants often disagreed on the categorisation of some of the statements.

Using Euclidean geometry, how to find $x$? [closed]

Posted: 05 Jul 2022 07:40 AM PDT

This question comes from a friend's exam that I'm helping to review. I've been trying hard but can't find the answer.

Using Euclidean geometry, how to find the angle $x$?

[Figure: angle inside a kite]

I've been able to work out all the angles based on x and 180, but then I got stuck.

Here's my calculation:

I named the center point as E

∠ABD = 180 - 6x

∠ABC = 180 - 3x

∠ABC = ∠ABD + ∠DBC

180 - 3x = 180 - 6x + ∠DBC

∠DBC = 3x

∠BEC = 180 - 4x

∠BDC = 180 - 5x

∠DEC = 180 - ∠ACD - ∠BDC = 180 - (x + 180 - 5x) = 4x

Would anyone be able to help me with this question?

Taylor series for the absolute value of x [closed]

Posted: 05 Jul 2022 07:36 AM PDT

Is there a Taylor series for the absolute value of x?

Such that: $|x|≈ a_0 + a_1(x-c)+a_2(x-c)^2...+a_n(x-c)^n$ (where c is a constant)

It seems possible, but I've not been able to find the series on the internet, nor have I been able to create it myself.

It would be really helpful if someone could find it or conclude that it's not possible. I want to figure this out because, if I could find such a series, I could use it to prove a theorem.
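A Taylor series cannot work: $|x|$ is not differentiable at $0$, while a convergent power series is infinitely differentiable inside its interval of convergence. What does exist, by the Weierstrass approximation theorem, is a sequence of polynomials converging to $|x|$ uniformly on any closed interval, which may serve the same purpose. A quick numerical illustration with Chebyshev least-squares fits:

```python
import numpy as np

# polynomial approximations of |x| on [-1, 1] of increasing degree
x = np.linspace(-1, 1, 1001)
errs = {}
for deg in [2, 8, 32]:
    coeffs = np.polynomial.chebyshev.chebfit(x, np.abs(x), deg)
    approx = np.polynomial.chebyshev.chebval(x, coeffs)
    errs[deg] = np.max(np.abs(approx - np.abs(x)))
    print(deg, errs[deg])  # max error shrinks as the degree grows
```

The maximum error decreases steadily with degree, but only polynomially fast (this is the classical hard case of polynomial approximation), never becoming an exact power series.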

How can I solve $\sin{\xi \Psi}-\zeta \frac{\cos{\xi \Psi}}{\cos{\Psi}}=0$ using perturbation theory?

Posted: 05 Jul 2022 08:09 AM PDT

I was trying to find the roots of a trigonometric equation, so I asked a question on a mathematics forum. Later, someone posted an answer: he basically approached it using perturbation theory and got an approximation of the roots up to order $\epsilon^1$, namely of the form $a_{0}+\epsilon\, a_{1}$. I could only get to $a_{0}$, so I would like help getting to at least $a_{1}$ or $a_{2}$. I'll write out the equation I'm trying to solve and the $a_{n}$ terms.

Trig. equation: $$\sin{\xi \Psi}-\zeta \frac{\cos{\xi \Psi}}{\cos{\Psi}}=0,$$ where $0≤\xi≤1$, $0≤\zeta≤2$ or $3$, and $0<\Psi≤ \frac{\pi}{2}.$

Perturbed equation:

If you let $\xi=1-\epsilon\ $ and $\ \Psi(\epsilon)=\sum\limits_n \epsilon^n a_n$, the previous equation reduces to this:

$$\sin{(1-\epsilon) \Psi}-\zeta \frac{\cos{(1-\epsilon) \Psi}}{\cos{\Psi}}=0.$$

$a_{0}:$ $$a_{0}= \frac{\pi}{2}\ \mathrm{or}\ \arcsin{\zeta}$$

$a_{1}:$ $$a_1=a_0 \frac{\cos^2(a_0)+\zeta\sin(a_0)}{\cos^2(a_0)+(\zeta-1)\sin(a_0)}$$
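Whatever the closed form of $a_1$, it can be cross-checked numerically: solve the original equation for small $\epsilon$ by bisection and look at the finite-difference slope $(\Psi(\epsilon)-a_0)/\epsilon$, which should approach $a_1$. The choice $\zeta=0.5$ and the bracketing interval below are arbitrary:

```python
import numpy as np

def F(psi, eps, zeta):
    # the original equation with xi = 1 - eps
    xi = 1.0 - eps
    return np.sin(xi * psi) - zeta * np.cos(xi * psi) / np.cos(psi)

def root_near(eps, zeta, lo, hi, iters=80):
    # plain bisection, assuming F changes sign on [lo, hi]
    flo = F(lo, eps, zeta)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if F(mid, eps, zeta) * flo > 0:
            lo, flo = mid, F(mid, eps, zeta)
        else:
            hi = mid
    return 0.5 * (lo + hi)

zeta = 0.5
a0 = np.arcsin(zeta)  # the arcsin branch of a_0
for eps in [1e-1, 1e-2, 1e-3]:
    psi = root_near(eps, zeta, a0 - 0.3, a0 + 0.3)
    print(eps, (psi - a0) / eps)  # finite-difference estimate of a_1
```

If the printed slopes do not settle toward the quoted $a_1$ formula, that is a sign something in the expansion needs revisiting.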

Errors in a factory

Posted: 05 Jul 2022 08:01 AM PDT

I am trying to learn probability for my next year at University.

I am really struggling with this exercise and have no idea how to start solving it.

In a warehouse there are 100 products: 60 of them are from the first factory, while the other 40 are from the second factory. 3% of the products of the first factory have errors, as do 6% of those from the second factory. What is the probability that a product randomly taken from the warehouse has errors?

Any help would be greatly appreciated.
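This is a law of total probability exercise: condition on which factory the product came from. A numeric sketch of that computation:

```python
# Law of total probability:
# P(error) = P(F1) * P(error | F1) + P(F2) * P(error | F2)
p_f1, p_f2 = 60 / 100, 40 / 100
p_err_f1, p_err_f2 = 0.03, 0.06
p_err = p_f1 * p_err_f1 + p_f2 * p_err_f2
print(p_err)  # approximately 0.042
```

The two products $0.6\cdot 0.03$ and $0.4\cdot 0.06$ are exactly the probabilities of the two disjoint ways a defective product can arise.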

Find the extrema of $f(x,y)=max(x,y)$ constrained to $\mathscr A=\{(x,y) ∈ \mathbb R^2 ∣ x^2+y^2=1\}$

Posted: 05 Jul 2022 07:59 AM PDT

I know the maximum value is $f(x,y) = 1$, attained at the points $(1,0)$ and $(0,1)$... but I can't seem to find all the minima. I've analyzed what happens at the other "vertices" of the circle, $(0,-1)$ and $(-1,0)$: $f(0,-1)=0$ and $f(-1,0)=0$. But I've come to the conclusion that the minimum occurs at a point $(a,a)$ with $a < 0$, such that $2a^2=1\rightarrow a=\frac{\sqrt 2}{2} \lor a=-\frac{\sqrt 2}{2}$.

Does the minima occurs at ($-\frac{\sqrt 2}{2}, -\frac{\sqrt 2}{2}$)?
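The conclusion can be checked by scanning the parametrization $(\cos\theta,\sin\theta)$ numerically:

```python
import numpy as np

# scan f(x, y) = max(x, y) over a fine grid of the unit circle
theta = np.linspace(0, 2 * np.pi, 100001)
x, y = np.cos(theta), np.sin(theta)
f = np.maximum(x, y)
i_min = f.argmin()
print(f.max(), f.min(), x[i_min], y[i_min])
```

Numerically, the maximum value $1$ and a minimum of $-\frac{\sqrt 2}{2}$ at the point with $x=y<0$ show up, matching the $(a,a)$ reasoning.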

Solve the equation $\sqrt{x^2-3x+18}+\sqrt{x^2+5x+2} = 2\sqrt{x^4+2x^3+5x^2+84x+36}.$

Posted: 05 Jul 2022 07:46 AM PDT

Solve the equation $$\sqrt{x^2-3x+18}+\sqrt{x^2+5x+2} = 2\sqrt{x^4+2x^3+5x^2+84x+36}.$$ I don't know where to start, but what I do know is that $$(x^2-3x+18)(x^2+5x+2) = x^4+2x^3+5x^2+84x+36.$$ Any suggestion would be appreciated.
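A numerical cross-check (not a solution method): using the stated factorization, dividing both sides by $\sqrt{(x^2-3x+18)(x^2+5x+2)}$ turns the equation into $\frac{1}{\sqrt{x^2+5x+2}}+\frac{1}{\sqrt{x^2-3x+18}}=2$, and a root can be bracketed and bisected; the bracket below was found by inspecting signs near the zero of $x^2+5x+2$:

```python
import numpy as np

def g(x):
    P = x**2 - 3*x + 18
    Q = x**2 + 5*x + 2
    # P*Q equals the quartic under the right-hand root
    return np.sqrt(P) + np.sqrt(Q) - 2.0 * np.sqrt(P * Q)

# bisection on an interval where Q >= 0 and g changes sign
lo, hi = -0.43, 0.0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if g(lo) * g(mid) > 0:
        lo = mid
    else:
        hi = mid
print(lo, g(lo))  # a numerical root and its residual
```

Having a numerical root available makes it easy to verify whatever closed-form manipulation is eventually attempted.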

Prove that every permutation $\sigma\in S_n$ may be written as an iterated product of powers of $(1,2)$ and $(1,2,3,\cdots ,n)$.

Posted: 05 Jul 2022 07:46 AM PDT

Prove that every permutation $\sigma\in S_n$ may be written as an iterated product of powers of $(1,2)$ and $(1,2,3,\cdots ,n)$. Negative powers are allowed.

Any hints?

Also, a power of $-1$ means that, in the two-row form of a permutation, the two rows are swapped (though I lack a proof of this, and request one).

Let, $\sigma= \begin{pmatrix} 1 & 2 & 3 &4 \\ 4 & 1 & 3 & 2\end{pmatrix} \implies (142)$, then $\sigma^{-1}= \begin{pmatrix} 4 & 1 & 3 &2 \\ 1 & 2 & 3 & 4\end{pmatrix} \implies (412)$

So, the power $\sigma^{-2}$ is the same as $(\sigma^2)^{-1}$?

Also, Is $(\sigma^2)^{-1} = (\sigma^{-1})^2$?

Taking the above example: $\sigma^2= \begin{pmatrix} 1 & 2 & 3 &4 \\ 2 & 4 & 3 & 1\end{pmatrix} \implies (241)$

And, $(\sigma^2)^{-1}= \begin{pmatrix} 2 & 4 & 3 &1 \\ 1 & 2 & 3 & 4\end{pmatrix} \implies (214)$

$(\sigma^{-1})^2= \begin{pmatrix} 1 & 2 & 3 &4 \\ 4 & 1 & 3 & 2\end{pmatrix} \implies (142)$

Which property of algebra (or otherwise) is needed to prove this?
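Both claims are easy to verify computationally for small $n$ (the representation below encodes a permutation of $\{0,\dots,n-1\}$ as the tuple of its images, and the names are mine):

```python
from itertools import permutations

def compose(p, q):
    # composition of permutations as image tuples: (p*q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

n = 4
t = (1, 0, 2, 3)  # the transposition (1,2), zero-indexed
c = (1, 2, 3, 0)  # the cycle (1,2,3,4), zero-indexed
# closure of {identity} under left-multiplication by t, c, c^{-1}
gen = {tuple(range(n))}
frontier = [tuple(range(n))]
while frontier:
    p = frontier.pop()
    for g in (t, c, inverse(c)):
        q = compose(g, p)
        if q not in gen:
            gen.add(q)
            frontier.append(q)
print(len(gen) == 24)  # (1,2) and (1,2,3,4) generate all of S_4

# and (sigma^2)^{-1} = (sigma^{-1})^2 for every sigma, a pure group identity
print(all(inverse(compose(s, s)) == compose(inverse(s), inverse(s))
          for s in permutations(range(n))))
```

The second identity is just $(\sigma^2)^{-1}=(\sigma\sigma)^{-1}=\sigma^{-1}\sigma^{-1}=(\sigma^{-1})^2$, an instance of $(ab)^{-1}=b^{-1}a^{-1}$ in any group.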

Watson-Nevanlinna theorem for $e^{-1/z}$

Posted: 05 Jul 2022 08:02 AM PDT

I am currently trying to understand the Watson-Nevanlinna (WN) theorem, which gives sufficient conditions for a function $f(z)$ to be equal to the Borel sum of its asymptotic expansion as $z\to0$.

The theorem is stated as follows.

Let $f(z)$ be analytic in the circle $C_R=\{z:Re(z^{-1})>R^{-1}\}$, $R>0$, and let $f(z)\sim\sum_{n=0}^\infty f_n z^n$ be its asymptotic expansion as $z\to0$. If the remainder after summing $N-1$ terms satisfies

$$ \vert R_N(z)\vert\leq A\sigma^N N! |z|^N,\quad A>0,\,{}\sigma>0, $$

uniformly in $N$ and $z\in C_R$, then:

  • $B(t)=\sum_{n=0}^\infty f_n t^n/n!$ converges in $S_\sigma=\{t:\mathrm{dist}(t,\mathbb{R}^+)<1/\sigma\}$, and
  • $f(z)$ equals the convergent integral $f(z)=\frac{1}{z}\int_0^\infty e^{-t/z}B(t)\,dt$ for any $z\in C_R$.

However, from reading some reviews on asymptotic expansions, I was under the impression that if $\sum_{n=0}^\infty f_n z^n$ is asymptotic to $f(z)$ as $z\to0$ along paths in the $Re(z)>0$ part of the complex plane (where the circle $C_R$ is contained), then it is also asymptotic to $f(z)+e^{-1/z}$, as $e^{-1/z}$ has an expansion in powers of $z$ that vanishes at all orders. Due to this, I was taking $e^{-1/z}$ as some sort of lower bound on the accuracy of recovering a function $f(z)$ from its asymptotic expansion as $z\to0$ in powers of $z$.

The WN theorem seems to suggest that, provided the conditions on $f(z)$ are satisfied, one can recover $f(z)$ exactly from the asymptotic expansion, and I am having trouble understanding that. Indeed, the function $e^{-1/z}$ itself seems to satisfy the conditions of the theorem, but its Borel transform $B(t)$ is identically zero, and so is the integral. The theorem would then ensure $e^{-1/z}=0$ in $C_R$, which is not true.

Thanks in advance!

Determining the probability function for a sum of $N$ i.i.d. geometric random variables, where $N$ is discrete with a geometric probability function [closed]

Posted: 05 Jul 2022 07:51 AM PDT

I was completing revision for an upcoming task and this question was presented. Was hoping for some insight!

The random variable $N$ is discrete, with probability function $$f_N(n)=\begin{cases} p(1-p)^{n-1}, & n = 1, 2, \dots \\ 0, & \text{otherwise,} \end{cases}$$ where $0 < p < 1$. The random variables $X_1, X_2, \dots$ are independent of $N$ and are independent and identically distributed with probability function $$f_X(k)=\begin{cases} r(1-r)^{k-1}, & k = 1, 2, \dots \\ 0, & \text{otherwise,} \end{cases}$$ where $0<r<1$. Find the probability generating function of $$S_N=\sum_{j=1}^N X_j$$ and hence determine the probability function of $S_N$.

Any help is greatly appreciated! I can derive the moment-generating function just fine but need some guidance here. Thanks!
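Hint made concrete: with $G_X(s)=\frac{rs}{1-(1-r)s}$ and $G_N(s)=\frac{ps}{1-(1-p)s}$, the pgf of a random sum is the composition $G_{S_N}(s)=G_N(G_X(s))$, which simplifies to $\frac{prs}{1-(1-pr)s}$, i.e. $S_N$ is again geometric, with parameter $pr$. A simulation check (parameters chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(3)
p, r = 0.4, 0.3
pr = p * r

# draw N ~ Geometric(p), then S = X_1 + ... + X_N with X_j ~ Geometric(r)
N = rng.geometric(p, size=100000)
S = np.array([rng.geometric(r, size=n).sum() for n in N])

# empirical P(S = k) versus the geometric pmf pr * (1 - pr)^(k-1)
for k in [1, 2, 5]:
    emp = (S == k).mean()
    print(k, emp, pr * (1 - pr) ** (k - 1))
```

The empirical frequencies tracking $pr(1-pr)^{k-1}$ is the simulated counterpart of reading the probability function off the simplified pgf.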

What are simple conditions for the adjoint of a positive, unbounded, densely defined operator on a Hilbert space to be positive?

Posted: 05 Jul 2022 07:44 AM PDT

I'm reasking this deleted question because I believe I've made some progress towards an answer, which I'm also interested in knowing.

Here's the restatement:

Suppose $\ A\ $ is a densely defined, [unbounded]$\,^{\color{red}\dagger}$, symmetric, positive operator on a Hilbert space. Is there any additional condition in terms of $\ A\ $ that forces the adjoint $\ A^*\ $ to be also positive?

$\,^{\color{red}\dagger}$ This qualification was omitted from the original statement of the question, although it did appear in its title. If $\ A\ $ is bounded, then it's easy to show that $\ A^*\ $ is an everywhere defined, bounded and positive extension of $\ A\ $, so I'm only interested in the question for $\ A\ $ unbounded.

I first need to deal with one of the comments on the original question. It is not true in general that $\ A^*=A\ $. What is true is that $\ A^*\supseteq A\ $, but it's quite possible that $\ A^*\supsetneq A\ $—i.e. that the extension is strict. The identity $\ A^*=A\ $ holds if and only if $\ A\ $ is self-adjoint.

Here's the progress I've made towards answering the question:

  • I first guessed that $\ A^*\ $ might always be positive and I tried to prove that $\ \langle A^*y,y\rangle\ge0\ $ for $\ y\in\mathscr{D}\big(A^*\big)\ $ by taking a sequence $\ x_n\ $ of members of $\ \mathscr{D}(A)\ $ which converges to $\ y\ $. I eventually proved that this guess was actually incorrect, by constructing the counterexample given in the partial answer below.
  • I'm aware that if $\ B^+\ $ is an extension of an operator $\ B\ $ then $\ B^*\ $ is an extension of $\ {B^+}^{\,*}\ $, that an unbounded, densely defined, symmetric operator has a self-adjoint extension if and only if its deficiency indices are equal, and that it has a unique self-adjoint extension if and only if its deficiency indices are both zero.
  • If this section of Wikipedia's article on extensions of symmetric operators is accurate, and I'm understanding it correctly, then a positive operator $\ A\ $ always has positive self-adjoint extensions, all of which lie between the Friedrichs extension, $\ A_\infty\ $, and the Krein-von Neumann extension, $\ A_0\ $.

Assuming I've understood the above-cited Wikipedia material correctly, it would follow from the above results that the deficiency indices of a positive, densely-defined operator are always equal, that $\ A^*\supseteq A_0\ $, and that $\ A^*\ $ is positive if and only if $\ A^*=A_0\ $, the Krein-von Neumann extension of $\ A\ $. While this might already be regarded as an answer to the question, I suspect that examining the properties of the Cayley transform of $\ A\ $ might lead to simpler conditions for guaranteeing the positivity of $\ A^*\ $.

What are the properties of this new characteristic of mathematical objects?

Posted: 05 Jul 2022 07:42 AM PDT

I will call it "hypermodulus". In simple words, hypermodulus is the exponent of the scalar part of the finite part of the logarithm of the object: $H(A)=\exp (\operatorname{scal} \operatorname{f.p.} \ln A)$.

The "finite part" is meant to mean regularization: Hadamard, Dirichlet, Ramanujan, etc.

The "scalar part" is meant to mean the coefficient of the basis vector $1$ in rings over reals (real part), the arithmetic mean of the diagonal elements in square real matrices (trace divided by the matrix rank) or arithmetic mean of components in $\mathbb R^n$.

Now, it seems hypermodulus can be defined for various objects. Let's see.

  1. For all complex numbers except zero the hypermodulus is equal to the modulus (absolute value).

  2. Zero. The Taylor series for the logarithmic function (expanded at $1$), evaluated at zero, becomes the harmonic series with opposite sign: $-\sum_{k=1}^\infty \frac1k$, which has the regularized value $-\gamma$ (there are other ways to see this as well). Thus, $H(0)=e^{-\gamma}$. We will use this result further.

  3. Zero divisors in dual numbers. $\ln (a+b\varepsilon)=\ln a+\varepsilon b/a$. So, all zero divisors have hypermodulus $H(b \varepsilon)=e^{-\gamma}$.

  4. Zero divisors in split-complex numbers. $\ln (a+bj)=\frac{1}{2} j (\log (a+b)-\log (a-b))+\frac{1}{2} (\log (a-b)+\log (a+b))$, so $H(\pm a\pm a j)=\sqrt{2a}e^{-\gamma/2}$. Particularly, the idempotents $1/2+j/2$ and $1/2-j/2$ have hypermodulus $e^{-\gamma/2}$.

  5. Other split-complex numbers. For non-zero-divisors, $H(a+bj)=\sqrt{|a^2-b^2|}$. This is always a positive real number.

  6. Infinity (extended real line). Since $\gamma=\lim_{n\to\infty}\left(\sum_{k=1}^n \frac1{k}-\int_1^n \frac 1t dt\right)$, the integral $\ln\infty=\int_1^\infty \frac 1t dt$ has regularized value of $0$. Thus, $H(\infty)=1$.

  7. Other improper elements. In $\mathbb{\overline{R}}^2$ the improper elements $(0,\pm\infty)$ and $(\pm\infty,0)$ have the hypermodulus $H=e^{-\gamma/2}$. In split-complex numbers the elements correspond to improper elements $\pm\infty\pm j\infty$.

  8. Umbral calculus (Bernoulli umbra). In umbral calculus we will use the index-lowering (evaluation) operator as the "finite part". Thus, $H(B+a)=e^{\psi(a)}$, where $B$ is the Bernoulli umbra and $\psi$ is the digamma function. Particularly, $H(B)=H(B+1)=e^{-\gamma}$ (the latter result can also be seen from here), $H(e^{B})=\frac1{\sqrt{e}}$, $H(e^{B+1})=\sqrt{e}$.

  9. Euler's umbra. $H(E+a)=\exp (-\Phi (-i,1,a+1)-\Phi (i,1,a+1))$, where $\Phi$ is Lerch transcendent. Particularly, $H(E)=e^{-\pi/2}$, $H(E+1)=1/2$.

  10. Divergent integrals. Multiplication of divergent integrals can be defined in different ways. Trivial multiplication of divergent integrals gives hypermodulus equivalent to the hypermodulus of their regularized value. On the other hand, umbral multiplication gives a system that is isomorphic to umbral calculus, with Bernoulli umbra $B=\int_{-1/2}^\infty dx$ and very interesting expressions: $H(\int_0^\infty dx)=\frac{e^{-\gamma}}4$, $H(\int_0^\infty xdx)=e^{\psi\left(\frac12+\frac{i}{2\sqrt{3}}\right)+\psi\left(\frac12-\frac{i}{2\sqrt{3}}\right)}$, $H(\sum_{k=0}^\infty k)=e^{\psi\left(\frac12+\frac{1}{2\sqrt{3}}\right)+\psi\left(\frac12-\frac{1}{2\sqrt{3}}\right)}$ and others.

  11. Formal power series. Formal power series have subsets, isomorphic to umbras with any moments, including Bernoulli umbra and Euler's umbra and many more.

  12. Update. Infinite products. Some sources claim that the regularized value of infinite products $\prod_{k=0}^\infty k$ is $\sqrt{2\pi}$ and of $\prod_p p$ (where $p$ is prime) is $4 \pi^2$. But in fact, their calculations find not the finite part, but hypermodulus. They convert the products to series using logarithm, regularize the series and then exponentiate the finite part.

So, we can see that the concept is applicable in various areas of mathematics, where the expressions for hypermodulus often involve the $e^{-\gamma}$ constant.
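Items 1 and 5 can be sanity-checked numerically, implementing the stated logarithm formulas directly (function names are mine):

```python
import numpy as np

def hypermodulus_complex(z):
    # for nonzero complex z: H(z) = exp(Re log z) = exp(log|z|) = |z|
    return np.exp(np.log(z).real)

def hypermodulus_split(a, b):
    # split-complex a + b*j with |a| != |b| (not a zero divisor):
    # scal ln(a + b*j) = (ln|a+b| + ln|a-b|)/2, so H = sqrt(|a^2 - b^2|)
    scal = 0.5 * (np.log(abs(a + b)) + np.log(abs(a - b)))
    return np.exp(scal)

print(hypermodulus_complex(3 + 4j))  # should equal |3 + 4i| = 5
print(hypermodulus_split(3.0, 2.0), np.sqrt(5.0))
```

The split-complex case reduces to $\sqrt{|a^2-b^2|}$ because the scalar part of the logarithm averages the logarithms on the two null directions.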

I wonder,

  1. What algebraic role the $e^{-\gamma}$ plays in all these fields?

  2. Can the concept of hypermodulus be used as some indicator of the algebraic role of an element of a set, such as showing whether an element is a zero divisor, nilpotent, idempotent or something else?

  3. Can it help to somehow define or generalize functions on those elements?

  4. What's in common between Bernoulli umbra and zero divisors?

Are mathematical definitions logical equivalences or material equivalences?

Posted: 05 Jul 2022 07:50 AM PDT

Let's consider representing logical equivalence using the symbol $\equiv$ and material equivalence with the symbol $\longleftrightarrow$.

I know that the formulas $P$ and $Q$ are logically equivalent if and only if the statement of their material equivalence $P \longleftrightarrow Q$ is a tautology.

My question is: When mathematicians define something, which of the equivalences is being used. For example, when defining the limit of a function $f$ we can write:

Let $I$ be an open interval containing $c$, and let $f$ be a function defined on $I$, except possibly at $c$. The limit of $f(x)$, as $x$ approaches $c$, is $L$, denoted by $$ \lim_{x\rightarrow c} f(x) = L, $$

means that given any $\epsilon>0$, there exists $\delta>0$ such that for all $x \neq c$, if $|x−c| < \delta $, then $|f(x)−L| < \epsilon$.

If we translate this to symbols, which one is correct?

$\lim_{x\rightarrow c} f(x) = L \longleftrightarrow \left(\forall \, \epsilon > 0, \exists \, \delta > 0 \; \text{s.t.} \; 0<|x - c| < \delta \longrightarrow |f(x) - L| < \epsilon\right)$

or

$\lim_{x\rightarrow c} f(x) = L \equiv \left(\forall \, \epsilon > 0, \exists \, \delta > 0 \; \text{s.t.} \; 0<|x - c| < \delta \longrightarrow |f(x) - L| < \epsilon\right)$

This question arose when reading the book Discrete Mathematics with Applications by Susanna S. Epp, because the author defines both $\equiv$ and $\longleftrightarrow$, but then uses $\iff$ (which is never defined in the book) when writing definitions.

EDIT After progressing through the book, I found out that the author does indeed define the symbol, it just happens to be the case that in my edition of the book she uses it before writing its definition. She uses the notation $P(x) \iff Q(x)$ to mean $\forall x, P(x) \longleftrightarrow Q(x)$. When writing definitions she writes them in English and then restates them symbolically (where $\iff$ might be used). Nonetheless both the answers given by @Stinking Bishop and @ryang are valid and useful.

Interval Graphs

Posted: 05 Jul 2022 07:55 AM PDT

In graph theory, an interval graph is an undirected graph formed from a set of intervals on the real line, with a vertex for each interval and an edge between vertices whose intervals intersect. It is the intersection graph of the intervals. Is it possible that the join of two interval graphs is an interval graph?

In addition,

By the definition of an interval graph, is it possible that the corona of two interval graphs is an interval graph? And likewise, is the composition of two interval graphs also an interval graph? I was trying to find a result about this but I can't find one. I am hoping for your response. Thank you very much.

Neumann Laplace eigenfunctions

Posted: 05 Jul 2022 07:40 AM PDT

Let $u_k, u_m$ be two Neumann Laplace eigenfunctions on a bounded domain $\Omega \subset \mathbb{R}^n$ with smooth boundary $\partial \Omega$, corresponding to eigenvalues $\mu_k, \mu_m$ respectively. My question is, can it also happen that their Dirichlet data also agree on $\partial \Omega$, that is, can we also have $u_k|_{\partial \Omega} = u_m|_{\partial \Omega}$ (not necessarily zero)? I am just learning these things, and it seems to me that this is too stringent a condition to be satisfied, but I cannot come up with a rigorous justification.

What would the multifunctional inverse of $F(x)=|x|$ be?

Posted: 05 Jul 2022 07:53 AM PDT

What would the multivalued inverse of $F(x)=|x|$ be, assuming $x$ ranges over the complex plane? Also, how would this usually be represented? Note that this won't be a 'true' function. (But assume a multivalued function is considered a function for simplicity's sake.)

Finding a sequence of polynomials that converges uniformly to a holomorphic function on an open set

Posted: 05 Jul 2022 08:02 AM PDT

The following is exercise 13.2 in Rudin's Real & Complex Analysis, which I'm self-studying.

Let $\Omega = \{z: |z| < 1 \text{ and } |2z - 1| > 1\}$, and suppose $f \in H(\Omega)$. Must there exist a sequence of polynomials which converges to $f$ uniformly in $\Omega$?

I have a solution, but I feel it's too simple, and the shape of $\Omega$ is suspicious, so there might be a trick somewhere.

Assume such a sequence exists. Let $0 < \epsilon < 1$, let $f(z) = 1/z$, and let $P$ be a polynomial that satisfies: $$|f(z) - P(z)| < \epsilon \ \ \forall z \in \Omega$$ Therefore $$|P(z)| < |f(z)| + \epsilon \ \ \forall z \in \Omega$$ But near the boundary of the unit disc, $|f(z)| < 2$, so $|P(z)| < 3$ near the boundary. By the continuity of $P$, we have $|P(z)| \le 3$ on the boundary. By the maximum modulus principle, $|P(z)| \le 3$ on the unit disc. But $|f(z)|$ gets arbitrarily large near $0$. Therefore $P$ cannot approximate $f$ on $\Omega$.

What gives me confidence in my argument is that it doesn't work on compact subsets of $\Omega$ (for which the existence of the polynomial sequence is guaranteed by Runge's theorem).

Is my counterexample correct?
