Sunday, June 26, 2022

Recent Questions - Mathematics Stack Exchange

Find dual linear program to modified Steiner tree problem

Posted: 26 Jun 2022 02:55 PM PDT

Given an undirected graph $G=(V, E)$, an edge-weight function $c: E \rightarrow \mathbb{Z}$, a vertex-weight function $\pi: V \rightarrow \mathbb{Z}$, and a vertex $r$, find a connected subgraph of $G$ containing $r$ such that the sum of the weights of its edges minus the sum of the weights of its vertices is minimized.

Here is a primal linear program for this problem: $$\min \sum_{e\in E} c(e)\,x_e - \sum_{v \in V} \pi (v)\, y_v$$ $$\forall_{S \subseteq V \setminus \{r\},\ S \neq \emptyset}\ \forall_{v \in S} \quad \sum_{e \in E(S, V\setminus S)} x_e \geq y_v$$ $$\forall_{e \in E} \quad x_e \geq 0 $$ $$\forall_{v \in V \setminus \{r\}} \quad y_{v} \leq 1 $$ $$y_r = 1 $$

I want to find the dual program. I know how to find the dual in simple cases, but in more complex cases (especially with sums over sets) I get completely lost.
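For reference, here is the mechanical recipe I can handle in the simple cases. For a primal in the standard form $$\min\ c^\top x \quad \text{s.t.} \quad Ax \geq b,\ x \geq 0,$$ the dual is $$\max\ b^\top y \quad \text{s.t.} \quad A^\top y \leq c,\ y \geq 0,$$ with one dual variable per primal constraint (here that would mean one variable $y_{S,v}$ per pair $(S, v)$ with $\emptyset \neq S \subseteq V \setminus \{r\}$ and $v \in S$) and one dual constraint per primal variable. What I cannot see is how to write out the dual constraints when the primal constraints are indexed by the exponentially many cuts $E(S, V\setminus S)$.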

Continued fraction expression and inequalities

Posted: 26 Jun 2022 02:54 PM PDT

Given integers $0<a<n$ such that their gcd is 1, one can write $\frac{n}{a}=a_1-\frac{1}{a_2-\frac{1}{a_3-\cdots\frac{1}{a_k}}}$. Note that $a_i\geq 2$ and all $a_i \in \mathbb{Z}$. I have the following questions:

  1. For fixed $n$ and $0<a<b<n$, if we get $\frac{n}{a}=\{a_i\}$ and $\frac{n}{b}=\{b_i\}$ (obtaining the $a_i,~b_i$ by the method above), is it true that $\sum a_i \leq \sum b_i$?

  2. If we fix $n$ and take any $a<n$ with $(n,a)=1$, writing $\frac{n}{a}=\{a_i\}$, can I say that the sum $\sum a_i$ is maximal for $a=n-1$?

I have considered some cases where the above claims seem true, but I could not prove them directly. Any suggestions or comments will be helpful. Thanks in advance!
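For small $n$ the claims can be tested numerically. Here is a short Python sketch (the function name `neg_cf` and the recursion $n/a \mapsto a/(\lceil n/a\rceil a - n)$ are my own rendering of the method above):

    from math import gcd

    def neg_cf(n, a):
        """Terms a_i of n/a = a1 - 1/(a2 - 1/(... - 1/ak)), all a_i >= 2."""
        terms = []
        while a > 0:
            q = -(-n // a)           # ceiling of n/a
            terms.append(q)
            n, a = a, q * a - n      # n/a = q - 1/(a / (q*a - n))
        return terms

    n = 11
    for a in range(1, n):
        if gcd(n, a) == 1:
            print(a, neg_cf(n, a), sum(neg_cf(n, a)))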

continuous function, derivative with removable discontinuity

Posted: 26 Jun 2022 02:57 PM PDT

Let $f(x)=\left\{\begin{array}{ll}f_1(x) & \mbox{if $x\leq 0$}\\ f_2(x) & \mbox{if $x>0$,}\end{array}\right.$ with the following assumptions:

  1. $f$ is continuous throughout its domain.
  2. $f_1$ is differentiable over $]-\infty,0[$.
  3. $f_2$ is differentiable over $]0,+\infty[$.
  4. $\lim_{x\to 0^-} (f_1)'(x)=\lim_{x\to 0^+} (f_2)'(x)=L$.

Can I conclude from this that $f'(0)$ exists and equals $L$?

What would a linear representation of t11 and t22 be in terms of e1,e2 and e3?

Posted: 26 Jun 2022 02:47 PM PDT

[image]

Please refer to the image above for the question description. Provide full working if possible, thank you very much!

Let u and v be the real and imaginary components of f defined by the function.

Posted: 26 Jun 2022 02:27 PM PDT

$f(z) = \begin{cases}\frac{\bar{z}^2}{z}, & z \ne 0 \\ 0, & z=0 \end{cases}$ where $\bar{z}$ is the conjugate of $z$.

  1. Are the Cauchy–Riemann equations satisfied at $(0,0)$?
  2. Is $f$ differentiable at $(0,0)$? Justify your answer.
  3. Is $f$ analytic at $(0,0)$? Justify your answer.
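A symbolic sanity check of part 1 is possible. The sketch below (assuming sympy is available; the choice of computing the partials at the origin as limits of difference quotients along the axes, using $f(0)=0$ from the piecewise definition, is mine) checks the Cauchy–Riemann equations at $(0,0)$:

    import sympy as sp

    x, y = sp.symbols('x y', real=True)
    z = x + sp.I * y
    f = sp.conjugate(z) ** 2 / z
    u, v = f.as_real_imag()

    # Partials at (0,0) as limits of difference quotients along the axes,
    # using f(0) = 0 from the piecewise definition.
    ux = sp.limit(u.subs(y, 0) / x, x, 0)
    uy = sp.limit(u.subs(x, 0) / y, y, 0)
    vx = sp.limit(v.subs(y, 0) / x, x, 0)
    vy = sp.limit(v.subs(x, 0) / y, y, 0)
    print(ux, uy, vx, vy, ux == vy and uy == -vx)   # 1 0 0 1 True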

Universal Property of The Simplex Category Δop

Posted: 26 Jun 2022 02:13 PM PDT

I tried formulating the universal property of the simplex category. There seem to be two different options to choose for the universal property of $\Delta^{op}$. I'm having trouble formulating one of them. I'm hoping to get some help with this, and maybe some input on which one is the better universal property to work with.

Here is my formulation of the universal property 1:

Let $S: Cat\to Set$ defined by: \begin{align*} S(C):=\{\{(C_n , \{d_i: C_n \to C_{n-1}\}_{0\leq i\leq n} , \{s_i : C_n\to C_{n+1}\}_{0\leq i\leq n})_{n\geq 0} \} : d_i, s_i \text{ satisfy the simplicial identities}\} \end{align*}

And, for any $F: C\to D$, define $F : S(C)\to S(D)$ by \begin{align*} \{(C_n , \{d_i\}_{0\leq i\leq n} , \{s_i \}_{0\leq i\leq n})_{n\geq 0} \} \mapsto \{(F(C_n) ,\{F(d_i)\}_{0\leq i\leq n} , \{F(s_i)\}_{0\leq i\leq n})_{n\geq 0} \} \end{align*}

since functoriality preserves the simplicial identities.

CLAIM (UP 1): $S$ is represented by $\Delta^{op}$, i.e., $Cat(\Delta^{op}, - ) \cong S$. Put differently, $(\Delta^{op} , \{([n], \{\delta_i\} , \{\sigma_i\})\})$ is a universal arrow from $1\in Set$ to $S$. Here we think of $\{([n], \{\delta_i\} , \{\sigma_i\})\}$ as an arrow $1\to S(\Delta^{op})$ with image $\{([n], \{\delta_i\} , \{\sigma_i\})\}$. And $\delta_i$ and $\sigma_i$ are the notation for the face and degeneracy maps, respectively.

This UP1 seems to capture the idea that to specify a functor $\Delta^{op} \to C$, it suffices to specify an appropriate $(C_n , \{d_i\}_{0\leq i\leq n} , \{s_i \}_{0\leq i\leq n})_{n\geq 0}$.

But I often hear that $\Delta^{op}$ is freely generated by the $\delta_i$'s and the $\sigma_i$'s, subject to the simplicial identities. I am wondering how best to express this idea. It would be easier if the $\delta$'s and $\sigma$'s were not subject to any identities: could this still be phrased in terms of the free category on a directed graph? There would need to be some sort of quotient happening, but I'm not sure how to express this as a free $\dashv$ forgetful type of universal property. Any help would be appreciated!

limits of 2 lebesgue integrals

Posted: 26 Jun 2022 02:50 PM PDT

Compute the following limits, with justification, where the integrals denote Lebesgue integrals:

  1. $\lim\limits_{m\to\infty} \int_0^\infty \dfrac{m\sin (y/m)}{y(1+y^2)} dy$.
  2. $\lim\limits_{m\to\infty} \int_0^1 \dfrac{1+my^2}{(1+y^2)^m}dy.$

For 1), I think it's useful to use the Taylor expansion $\sin(x) = x - \frac{x^3}{3!} + \dfrac{\sin c}{4!} x^4,$ where $c\in (0,x)$. Fix $m\ge 1$. Applying this expansion to $\sin(y/m),$ we get $\frac{y}m - \frac{(y/m)^3}{3!} + \frac{(y/m)^4}{4!}\ge \sin(y/m) \ge \frac{y}m - \frac{y^3}{3! m^3} - \frac{(y/m)^4}{4!}$. Hence, letting $g_m(y) = \frac{y}m - \frac{y^3}{3! m^3}$ and $f_m(y) = \dfrac{m\sin (y/m)}{y(1+y^2)}$, we have for all $y\ge 0$ that $h_m^-(y) := \dfrac{1-y^2/(6m^2) - y^3/(24m^3)}{1+y^2}= \dfrac{m(g_m(y) - \frac{1}{24}(y/m)^4)}{y(1+y^2)} \leq f_m(y)\leq \dfrac{m(\frac{y}m - \frac{y^3}{3! m^3} + \frac{1}{24}(y/m)^4)}{y(1+y^2)} =: h_m^{+}(y).$

Note that both $h_m^-(y)$ and $h_m^+(y)$ converge pointwise to $\frac{1}{1+y^2}$ as $m\to \infty$, which implies by the Squeeze theorem that $\dfrac{m \sin(y/m)}{y(1+y^2)}$ converges pointwise to $\frac{1}{1+y^2}$ as $m\to\infty$. Also, $|\dfrac{m\sin(y/m)}{y(1+y^2)}|\leq |\dfrac{m(\frac{y}m - \frac{y^3}{3! m^3} + \frac{1}{24}(y/m)^4)}{y(1+y^2)}|$. But I need to upper bound $f_m(y)$ by an integrable function (a function $g$ so that $\int_0^\infty |g| dy < \infty$) to use the Dominated convergence theorem.

For the second limit, for any $y > 0$, one has by L'Hôpital's rule that $\lim\limits_{m\to\infty} \dfrac{1+my^2}{(1+y^2)^m} = \lim\limits_{m\to\infty} \dfrac{my^2}{(1+y^2)^m} = 0.$ The issue is that to apply the Dominated convergence theorem, I need to find an upper bound for the integrand that's integrable.
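Before hunting for dominating functions, it may help to check numerically what the limits should be ($\pi/2$ for the first and $0$ for the second, if the pointwise limits above are the right guesses). A quick sketch, assuming scipy is available:

    import numpy as np
    from scipy.integrate import quad

    for m in [1, 10, 100, 1000]:
        I1, _ = quad(lambda y: m * np.sin(y / m) / (y * (1 + y ** 2)), 0, np.inf)
        I2, _ = quad(lambda y: (1 + m * y ** 2) / (1 + y ** 2) ** m, 0, 1)
        print(m, I1, I2)   # I1 -> pi/2 ~ 1.5708, I2 -> 0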

Which branch of math studies this problem?

Posted: 26 Jun 2022 02:23 PM PDT

If we want to perform the same arithmetic operation on two numbers (say, squaring $5$ and $7$), we can square each one separately, or we can do this:

$$a= 5*1,000,000+7 = 5,000,007$$ $$a^2=25000070000049$$

Now, ignoring the effort needed to pack $5$ and $7$ into $a$ and to extract $25$ and $49$ from the result, we managed to do both calculations in parallel, with a single calculation. Now I have some questions:

Is there a name for this trick? A branch of math that studies it?
Where is the practical limit? How many numbers can be calculated in parallel? What types of functions can benefit from it?

In general we need functions such that $$f(g(x,y))=h(f(x),f(y))$$ where we can unpack $f(x)$ and $f(y)$ from $h(f(x),f(y))$, for all $x$ and $y$ that satisfy certain restrictions.
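For what it's worth, here is the trick from the question written out in Python. I believe this is essentially what is called Kronecker substitution (evaluating a polynomial at a base large enough that the coefficients of the result do not overlap), but I am not sure:

    B = 10 ** 6        # base large enough that x*x, 2*x*y and y*y never overlap
    x, y = 5, 7
    a = x * B + y      # a = 5000007
    s = a * a          # (x*B + y)^2 = x^2 * B^2 + 2*x*y * B + y^2
    print(s)                                    # 25000070000049
    print(s // B ** 2, (s // B) % B, s % B)     # 25 (x^2), 70 (2xy), 49 (y^2)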

Continuity in topology. An example from Nakahara's book

Posted: 26 Jun 2022 02:18 PM PDT

In his book "Geometry, Topology, and Physics", M. Nakahara illustrates the topological definition of continuity with the following example:

[image]

I am having problems with this example.

Evidently, $$ f:~~ (\,-\,1/4,~0\,]\,\longrightarrow\,[\,1,~1+1/4)~~, $$ wherefrom $$ f^{-1}:~~[\,1,~1+1/4)\,\longrightarrow\, (\,-\,1/4,~0\,]~~. $$

At the same time, it is pointless to seek $f^{-1}(~(1-1/4\,,~1+1/4)~)$. Indeed, $(1-1/4\,,~1+1/4)$ is not part of $\,$Im$\,f$, and therefore $f^{-1}$ is not defined on $(1-1/4\,,~1+1/4)$. Please correct me if I am wrong.

Block systems in permutation group

Posted: 26 Jun 2022 02:46 PM PDT

I need to describe all possible block systems of the cyclic group $G=\langle(1,2,\dots,n)\rangle$. I think they look like $\{1 , d+1 ,\dots\}$, $\{2 , d+2 ,\dots\}$, $\dots$, $\{d , 2d ,\dots\}$, where $d$ is a divisor of $n$, but I don't know how to prove whether there are any other block systems.
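A quick computational check that the residue classes modulo a divisor $d$ are indeed permuted by the cycle (this only verifies my candidates; it does not rule out other block systems):

    n = 12
    sigma = lambda x: x % n + 1      # the cycle (1 2 ... n)
    for d in [k for k in range(1, n + 1) if n % k == 0]:
        blocks = {frozenset(range(r, n + 1, d)) for r in range(1, d + 1)}
        images = {frozenset(sigma(x) for x in B) for B in blocks}
        print(d, blocks == images)   # True: sigma maps blocks to blocks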

Conjugation of a permutation

Posted: 26 Jun 2022 02:40 PM PDT

Let

$$ \sigma = (1\ 2\ 4), \tau =(1 \ 2 \ 5 \ 3) \ (4\ 7\ 6) $$

where $\sigma, \tau \in S_7$. Based on e.g. this answer, I think it should be the case that:

$$ \tau \sigma \tau^{-1} = (\tau(1)\ \tau(2)\ \tau(4)) = (2 \ 5 \ 7) $$

but when I work it out, I find that:

$$ \tau \sigma \tau^{-1} = (1\ 6\ 3) $$

and:

$$ \tau^{-1} \sigma \tau = (2\ 5\ 7) $$

but:

$$ (\tau^{-1}(1)\ \tau^{-1}(2)\ \tau^{-1}(4)) = (3\ 1\ 6) $$

Where am I going wrong here?
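To make the composition convention explicit (my convention below: permutations act on the left, so $\tau\sigma$ means "apply $\sigma$ first"), here is a small Python sketch for rechecking the computation:

    def from_cycles(cycles, n=7):
        p = {i: i for i in range(1, n + 1)}
        for c in cycles:
            for i, x in enumerate(c):
                p[x] = c[(i + 1) % len(c)]
        return p

    def compose(p, q):
        """(p q)(x) = p(q(x)): apply q first, then p."""
        return {x: p[q[x]] for x in q}

    def inverse(p):
        return {v: k for k, v in p.items()}

    sigma = from_cycles([(1, 2, 4)])
    tau = from_cycles([(1, 2, 5, 3), (4, 7, 6)])
    conj = compose(tau, compose(sigma, inverse(tau)))   # tau sigma tau^{-1}
    print({x: y for x, y in conj.items() if x != y})    # {2: 5, 5: 7, 7: 2}, i.e. (2 5 7)

If this prints $(2\ 5\ 7)$, then presumably my hand computation composes in the opposite order.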

$(\frac{a}{a^2+\lambda}+\frac{b}{b^2+\lambda}+\frac{c}{c^2+\lambda}) \leq \frac{3}{\lambda +1}$

Posted: 26 Jun 2022 02:21 PM PDT

If $a+b+c=3$, $a,b,c>0$, and $\lambda \geq 1$,

prove that $\frac{a}{a^2+\lambda}+\frac{b}{b^2+\lambda}+\frac{c}{c^2+\lambda} \leq \frac{3}{\lambda +1}$.

my attempt:

using the Cauchy–Schwarz inequality twice and the AM–GM inequality:

$$\frac{3}{3}\left(\frac{a}{a^2+\lambda}+\frac{b}{b^2+\lambda}+\frac{c}{c^2+\lambda}\right)^2\leq 3\left(\left(\frac{a}{a^2+\lambda}\right)^2+\left(\frac{b}{b^2+\lambda}\right)^2+\left(\frac{c}{c^2+\lambda}\right)^2\right)\leq 3\left(\left(\frac{a}{(a+\sqrt{\lambda})^2/2}\right)^2+\left(\frac{b}{(b+\sqrt{\lambda})^2/2}\right)^2+\left(\frac{c}{(c+\sqrt{\lambda})^2/2}\right)^2\right)=12\left(\frac{a^2}{(a+\sqrt{\lambda})^4}+\frac{b^2}{(b+\sqrt{\lambda})^4}+\frac{c^2}{(c+\sqrt{\lambda})^4}\right)\leq 12\left(\frac{a^2}{\lambda^2}+\frac{b^2}{\lambda^2}+\frac{c^2}{\lambda^2}\right)=\frac{48(a^2+b^2+c^2)}{4\lambda ^2}\leq \frac{48\cdot 9}{4\lambda^2} \leq \frac{9}{(\lambda +1)^2}$$

because $f(\lambda )=\frac{9}{(\lambda+1)^2}-\frac{48\cdot 9}{4\lambda^2} \geq 0 $ for $\lambda \geq 1$,

so finally $$\frac{3}{3}\left(\frac{a}{a^2+\lambda}+\frac{b}{b^2+\lambda}+\frac{c}{c^2+\lambda}\right)^2\leq\frac{9}{(\lambda +1)^2}\ \Leftrightarrow\ \frac{a}{a^2+\lambda}+\frac{b}{b^2+\lambda}+\frac{c}{c^2+\lambda}\leq\frac{3}{\lambda +1}$$

I have just one question: is my attempt correct?
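As a sanity check of the statement itself (not of the intermediate bounds), a quick random search in Python:

    import random

    def lhs(a, b, c, lam):
        return a / (a * a + lam) + b / (b * b + lam) + c / (c * c + lam)

    random.seed(0)
    worst = -1.0
    for _ in range(10 ** 5):
        u, v = sorted(random.uniform(0, 3) for _ in range(2))
        a, b, c = u, v - u, 3 - v          # random a, b, c > 0 with a + b + c = 3
        lam = random.uniform(1, 10)
        worst = max(worst, lhs(a, b, c, lam) - 3 / (lam + 1))
    print(worst)   # should stay <= 0 if the inequality holds on the samples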

Is the complement of a quadrant a manifold with corners?

Posted: 26 Jun 2022 02:52 PM PDT

The definition of a manifold with corners is analogous to that of a smooth manifold, except that, instead of being locally diffeomorphic to $\mathbb R^n$, it is locally diffeomorphic to $[0,+\infty)^k\times \mathbb R^{n-k}$, where $0\leq k\leq n$.

The classical example is the quadrant $Q=[0,+\infty)^2\subset \mathbb R^2$, which has $(0,0)$ as corner point.

Consider $X := \mathbb R^2 \setminus \mathrm{int}(Q).$ Is $X$ a manifold with corners?

The only problematic point is the origin which clearly has to be a corner point. We would like to show that a neighbourhood of the origin in $X$ is diffeomorphic to $Q$.

The naive idea would be to consider a map $\varphi: X\to Q$ that informally closes $X$ like a book so that it becomes $Q$. This would give us an isotopy from $X$ to $Q$, but it does not seem to be a smooth map. Indeed, at some time this isotopy would also show that $X$ (and hence $Q$) is diffeomorphic to $[0,+\infty)\times \mathbb R$, which is a manifold with boundary (no corners).

partial differential problem with boundary conditions

Posted: 26 Jun 2022 02:16 PM PDT

So I have this problem and I have no idea how to solve it. Please help, or give more guidelines. Here $u=u(x,t)$.

A Robin problem for the wave equation in a quarter plane:

1. For any $\alpha \neq 0$, find a solution to this Robin problem for the wave equation in the quarter plane.

2. What are the conditions on $f(x)$ and $g(x)$ for the solution to be real?

3. Is there only one specific solution?

Guidance given:

- For Dirichlet boundary conditions we should require that $u$ be odd in $x$.

- For Neumann boundary conditions we should require that $\partial u/\partial x$ be odd in $x$ (which means $u$ is even in $x$).

- Hint: what should we require for the given Robin boundary condition?

Any solution would be good, even if you don't use the guidance.

Names for parts of relational models of modal logic

Posted: 26 Jun 2022 02:24 PM PDT

One thing that confuses me about modal logic is the exact meaning of the names Kripke frame, Kripke model, (un)pointed [[thing]], etc.

I'd like to know the standard names of different subtuples of $(P, W, R, V, w)$, defined below, so I can have a better understanding of textbooks and articles that talk about modal logic and so that I can explicitly distinguish the different possible consequence relations one might reasonably use in modal logic without making mistakes.

My question is threefold:

  1. What are the names for the different subtuples of $(P, W, R, V, w)$ in modal logic?
  2. Are there any variations in definitions that one might reasonably encounter in the wild and should pay special attention to?
  3. Is there a conventional way to refer to things that have a distinguished world?

As far as I know, relational models for modal logic have the following notion of truth. I choose as my primitive connectives $\neg, \to, \;\text{and}\; \lozenge$.

$$(P, W, R, V, w) \models A \;\; \textit{if and only if} \;\; \text{$V(w, A)$ is $1$ (i.e. $A$ holds at world $w$)}$$ $$ (P, W, R, V, w) \models \lnot \alpha \;\; \textit{if and only if} \;\; \text{it does not hold that $(P, W, R, V, w) \models \alpha$} $$ $$ (P, W, R, V, w) \models \alpha \to \beta \;\;\text{if and only if}\;\; \text{if $(P, W, R, V, w) \models \alpha$ then $(P, W, R, V, w) \models \beta$} $$ $$ (P, W, R, V, w) \models \lozenge \alpha \;\;\text{if and only if}\;\; \text{there exists a $u$ such that $wRu$ and $(P, W, R, V, u) \models \alpha$ } $$

I think the presentation above is standard. $P$ is, I think, seldom explicitly mentioned, but it does exist implicitly in standard presentations of the relational semantics of modal logic.

This is pretty straightforward, and looks a lot like the definition of truth in a first-order structure. I'm not using $\Vdash$ because I don't really know when to use it and I find systematically making an analogy with first-order logic helpful personally.
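To anchor the notation, here is a minimal executable sketch of the satisfaction relation above (the representation choices are mine: formulas as nested tuples, $V$ as a dictionary on pairs $(w, p)$):

    # Formulas: ('p',) for a primitive, ('not', f), ('imp', f, g), ('dia', f).
    def sat(W, R, V, w, phi):
        op = phi[0]
        if op == 'not':
            return not sat(W, R, V, w, phi[1])
        if op == 'imp':
            return (not sat(W, R, V, w, phi[1])) or sat(W, R, V, w, phi[2])
        if op == 'dia':
            return any(sat(W, R, V, u, phi[1]) for u in W if (w, u) in R)
        return V[(w, op)] == 1    # primitive proposition

    W = {0, 1}
    R = {(0, 1)}
    V = {(0, 'p'): 0, (1, 'p'): 1}
    print(sat(W, R, V, 0, ('dia', ('p',))))   # True: p holds at the accessible world 1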

As far as I'm aware, the following terminology is standard:

  • $P$ is my set of primitive propositions.
  • $W$ is the set of possible worlds.
  • $R \subset W \times W$ is the accessibility relation.
  • $V$ is the valuation map. There's some (inconsequential) choice in exactly what type to give it. I like making it send pairs of worlds and primitive propositions to $\{0, 1\}$.
  • $w$ is the distinguished world. I sometimes call it the start world but I think this is nonstandard.

After this things get fuzzy for me. I've picked up a few names for the different subtuples of $(P, W, R, V, w)$.

I call $(P, W, R, V, w)$ a pointed model because it has a distinguished world.

I call $(P, W, R, V)$ an unpointed model because it does not have a distinguished world.

I call $(W, R, w)$ a pointed Kripke frame.

I call $(W, R)$ an unpointed Kripke frame.

I don't think that the pointed/unpointed distinction is standard though.

Given a matrix, how to determine whether it corresponds to a given bilinear form or not?

Posted: 26 Jun 2022 02:38 PM PDT

What is the easiest way to check whether a matrix $A$ corresponds to a bilinear form $\mathcal{A}$ in some basis?

For example, let's say that in the standard basis the bilinear form $\mathcal{A}:\mathbb{C}^{3}\times\mathbb{C}^{3}\to\mathbb{C}$ is given by the matrix

\begin{equation} A_{0} = \begin{pmatrix}0&-1&-2\\-5&-8&6\\ -8&-8&19\end{pmatrix}. \end{equation}

Are there any bases of $\mathbb{C}^{3}$ in which the matrices $A_{1}$ or $A_{2}$ are its Gram matrices?

\begin{align} A_{1} & = \begin{pmatrix} 3&4&6\\5&7&10\\8&11&17\end{pmatrix} & A_{2} = \begin{pmatrix}-1&4&10\\-2&12&25\\4&-3&-24\end{pmatrix} \end{align}

As I understand it, this is the same as asking whether or not there are invertible matrices $U_{1}$ and $U_{2}$ such that $A_{1} = U_{1} A_{0} U_{1}^{\top}$ and $A_{2} = U_{2} A_{0} U_{2}^{\top}$.
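A partial numerical check is possible: a congruence $A \mapsto U A U^{\top}$ preserves the rank of $A$ and also the ranks of its symmetric and antisymmetric parts (since $(UAU^{\top})^{\top} = UA^{\top}U^{\top}$). These are only necessary conditions, but they can rule candidates out quickly. A sketch assuming numpy:

    import numpy as np

    A0 = np.array([[0, -1, -2], [-5, -8, 6], [-8, -8, 19]])
    A1 = np.array([[3, 4, 6], [5, 7, 10], [8, 11, 17]])
    A2 = np.array([[-1, 4, 10], [-2, 12, 25], [4, -3, -24]])

    for name, A in [('A0', A0), ('A1', A1), ('A2', A2)]:
        print(name,
              np.linalg.matrix_rank(A),         # invariant under congruence
              np.linalg.matrix_rank(A + A.T),   # symmetric part (factor 1/2 irrelevant)
              np.linalg.matrix_rank(A - A.T))   # antisymmetric part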

Equivalence between ellipse definitions

Posted: 26 Jun 2022 02:41 PM PDT

In a YouTube video by MathyJaphy, the creator uses a geometric/kinematic way to describe the focus-and-directrix definition of the ellipse (or, equivalently, the slice-of-a-cone definition). He even created a desmos file if anyone wants to play with it a bit.

When the moving line goes faster than the expanding circle, it draws an ellipse. I understand why this is equivalent to the conic-section definition of the ellipse (or the focus-and-directrix one), but I can't see in a straightforward way why it gives the same result as the two-foci definition, or the squashed-circle one (or, for that matter, why the resulting shape is symmetric).

Does anyone have a simple explanation why they're all the same? Ideally, it would be with as little algebra as possible, and without using Dandelin spheres (at least, without introducing a third space dimension). Any insight is appreciated.

Find the smallest positive number which when increased by 17, is exactly divisible by both 520 and 468.

Posted: 26 Jun 2022 02:55 PM PDT

I actually know the answer: it is just finding the LCM. The method I followed was to find the LCM and subtract 17 from it. But when I checked with my calculator, it showed me a decimal number. Is my method correct? Or should I go with a better method? If yes, please help me.
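For what it's worth, a two-line check of the method (assuming Python 3.9+ for `math.lcm`):

    from math import lcm

    m = lcm(520, 468)
    print(m, m - 17)   # 4680 4663: the smallest N with N + 17 divisible by both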

Is the cosine of the angle between two R.V.s an approximation to, rather than an equality with, the correlation coefficient?

Posted: 26 Jun 2022 02:57 PM PDT

I have seen on websites that given two R.V.s $X,Y$, if $$ \cos(\theta)=\frac{X\cdot Y}{\|X\|_2\|Y\|_2} $$ and $$ \rho=\frac{\text{Cov}(X,Y)}{\sqrt{\text{Var}(X)\text{Var}(Y)}} $$ then

$$ \cos(\theta)=\rho $$


This identity implies $\text{Cov}(X,Y)=X\cdot Y$. Isn't $X\cdot Y$ the maximum likelihood estimate of the covariance, missing some factors? If so, then the equation above is not an equality but rather $\approx$, becoming better as the samples get bigger.

Next is the denominator, which implies that $\text{Var}(X)= \| X\|_2 ^2$. Again, isn't the right side not an identity but rather an estimator (the MLE) of the variance of $X$? Isn't $\rho \approx \cos(\theta)$?


I have also seen the dot product (without the denominator in the first equation I've given, but more generally using inner products) being used to measure correlation in some papers, like Least Angle Regression. I am confused about the relationship between dot products and correlation. This leads me to a general question:

Is $\langle X,X\rangle = \text{Var}(X)$ in Euclidean space?
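A quick numerical illustration of the gap (assuming numpy): the cosine of the raw sample vectors differs from $\rho$, but after centering the samples the normalizing $1/n$ factors cancel in the ratio and the two agree exactly, not just asymptotically:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=1000) + 2.0          # nonzero mean to make the gap visible
    y = 0.5 * x + rng.normal(size=1000)

    cos_raw = x @ y / (np.linalg.norm(x) * np.linalg.norm(y))
    rho = np.corrcoef(x, y)[0, 1]
    xc, yc = x - x.mean(), y - y.mean()      # centered samples
    cos_centered = xc @ yc / (np.linalg.norm(xc) * np.linalg.norm(yc))
    print(cos_raw, rho, cos_centered)        # cos_centered == rho up to rounding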

What am I missing in this proof using the projection lemma (LMIs)?

Posted: 26 Jun 2022 02:35 PM PDT

I am trying to apply the projection lemma to combine a specific pair of linear matrix inequalities (LMIs) into one LMI. I also thought I had found a specific (desirable) structure in the slack variables introduced in the process; however, the resulting LMI can never be feasible. It is probably something trivial I am overlooking, but I just can't figure out which step is wrong... hopefully someone here can spot what I am missing.

I start with the following two LMIs:

$\begin{bmatrix} E^{-1}A\\ A\\ I \end{bmatrix}^\top\underbrace{\begin{bmatrix} -P & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & P \end{bmatrix}}_{\Psi}\underbrace{\begin{bmatrix} E^{-1}A\\ A\\ I \end{bmatrix}}_{\Gamma_\perp}\succ 0$ and $\begin{bmatrix} 0\\ 0\\ I \end{bmatrix}^\top\begin{bmatrix} -P & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & P \end{bmatrix}\underbrace{\begin{bmatrix} 0\\ 0\\ I \end{bmatrix}}_{\Omega_{\perp}}\succ 0$, where $E$ is non-singular. The only decision variable, for the time being, is P.

For anyone interested, the first is essentially the descent condition for a quadratic Lyapunov function (for a non-singular descriptor system $Ex_+=Ax$) and the second states that the Lyapunov function is positive definite.

Using the projection lemma, there must exist a matrix $\Xi$ such that $\Psi+\Omega^\top\Xi\Gamma+\Gamma^\top\Xi^\top\Omega\succ 0$, where $\Gamma =\begin{bmatrix} E & -I & 0\\ 0 & -I & A \end{bmatrix}$ and $\Omega = \begin{bmatrix} E & 0 & 0\\ 0 & I & 0\end{bmatrix}$ (using the fact that $E$ is full rank). Clearly, we have now merged both LMIs into a single LMI, however, in the next step, I try to impose some desirable structure on $\Xi$.

To this end, let $A=\begin{bmatrix} U & \bar{U}\end{bmatrix}\begin{bmatrix}\Sigma & 0\\ 0 & 0\end{bmatrix}\begin{bmatrix} V^\top\\ \bar{V}^\top\end{bmatrix}$ be the singular value decomposition of $A$. Then, we perform a congruence transformation with the non-singular matrix $T=\begin{bmatrix} E^{-1}U\Sigma & 0 & 0 & E^{-1}\bar{U}\Sigma\\ U\Sigma & 0 & 0 & \bar{U}\Sigma\\ V & V & \bar{V} & 0 \end{bmatrix}$, i.e., we have $T^\top \Psi T + T^\top\Omega^\top\Xi\Gamma T + T^\top \Gamma^\top\Xi^\top\Omega T\succ 0$. Writing out $\Gamma T$, we obtain $\Gamma T = \begin{bmatrix} 0 & 0 & 0 & 0\\ 0 & U\Sigma & 0 & \bar{U}\Sigma\end{bmatrix}$. Partitioning $\Xi =\begin{bmatrix} Q & R\\ S & W\end{bmatrix}$, we have

$T^\top \Psi T + T^\top\Omega^\top\begin{bmatrix} Q & R\\ S & W \end{bmatrix}\begin{bmatrix} 0 & 0 & 0 & 0\\ 0 & U\Sigma & 0 & \bar{U}\Sigma \end{bmatrix} + \begin{bmatrix} 0 & 0 & 0 & 0\\ 0 & U\Sigma & 0 & \bar{U}\Sigma \end{bmatrix}^\top\begin{bmatrix} Q & R\\ S & W \end{bmatrix}^\top\Omega T = T^\top\Psi T+T^\top\Omega^\top\begin{bmatrix} 0 & R\\ 0 & W \end{bmatrix}\begin{bmatrix} 0 & 0 & 0 & 0\\ 0 & U\Sigma & 0 & \bar{U}\Sigma \end{bmatrix} + \begin{bmatrix} 0 & 0 & 0 & 0\\ 0 & U\Sigma & 0 & \bar{U}\Sigma \end{bmatrix}^\top\begin{bmatrix} 0 & R\\ 0 & W \end{bmatrix}^\top\Omega T\succ 0$

In other words, we can always set $Q=0$ and $S=0$ and, hence, there must exist $R$ and $W$ for which $\Psi+\Omega^\top\bar{\Xi}\Gamma + \Gamma^\top\bar{\Xi}\Omega\succ 0$ with $\bar{\Xi}=\begin{bmatrix}0 & R\\ 0 & W \end{bmatrix}$.

This is where I find out that somewhere along the way one of the steps must be incorrect, because writing out $\Psi+\Omega^\top \bar{\Xi}\Gamma + \Gamma^\top\bar{\Xi}^\top\Omega\succ 0$ I have $\begin{bmatrix}-P & -E^\top R & E^\top RA\\ -R^\top E & -W-W^\top & WA\\ A^\top R^\top E & A^\top W^\top & P\end{bmatrix}\succ 0$. The problem is that this condition can never hold, because it implies both $-P\succ 0$ and $P\succ 0$ (see the first and last diagonal blocks). Note that from the condition $\Omega_\perp^\top\Psi\Omega_\perp\succ 0$ we know that $P\succ 0$ and, hence, the block with $-P$ is problematic. However, I keep looking at the steps I took to get to this condition and don't see where things are going wrong.

Apologies for the long derivation, but it would help me tons if someone can point out the step where I'm making a mistake! Many thanks to anyone taking the time to think along, much appreciated! :)

Uniform Continuity and Mass of Pushforward

Posted: 26 Jun 2022 02:13 PM PDT

Let $(X,d)$ be a metric space and $m$ be a regular Borel measure on $(X,d)$. Let $(Y,\rho)$ be another metric space and $f:X\rightarrow Y$ be a uniformly continuous function with increasing continuous modulus of continuity $\omega$.

Denote the pushforward measure $f_{\#}m(A):=m(f^{-1}[A])$ and for any radius $r\geq 0$ and any $y\in Y$ denote the metric ball about $y$ of radius $r$ by $B_Y(y,r):=\{z\in Y:\,\rho(y,z)<r\}$.

What conditions do we need on $f$ so that, for every $y\in Y$ and every $r>0$, $$ f_{\#}m(B_Y(y,r))\lesssim m(B_X(x_y,\omega^{-1}(r))), $$ where $x_y$ is a choice of an element in $f^{-1}[y]$ and where $\lesssim$ hides an absolute constant (independent of $y$ and of $r$)?

Is it necessary and sufficient for $f$ to be injective with a uniformly-continuous inverse?

Proof that plane has small inductive dimension 2?

Posted: 26 Jun 2022 02:26 PM PDT

Is there an "elementary" proof that $\mathrm{ind}\, \mathbb{R}^2 = 2$, where $\mathrm{ind}$ is the small inductive dimension?

Evidently $\mathrm{ind}\, \mathbb{R}^2 \leq 2$, since each point has arbitrarily small neighborhoods that are open disks, and the boundary of such a disk has small inductive dimension 1.

So the thing that remains to prove is that $\mathrm{ind}\, \mathbb{R}^2 \neq 1$.

I know there are not-so-elementary proofs that $\mathrm{ind}\, \mathbb{R}^n = n$, but I'm looking for an elementary proof just in the case $n = 2$.

Using integration by parts to show $\int_{\Sigma} |\nabla^N_{\Sigma} X|^2 = - \int_{\Sigma} \langle X, \Delta^N_{\Sigma} X \rangle$

Posted: 26 Jun 2022 02:45 PM PDT

I'm trying to work through a derivation of the stability operator from minimal surface theory. Suppose $\Sigma^k$ is a minimal submanifold of $\mathbb{R}^n$, and suppose $X$ is a normal vector field on $\Sigma$ with compact support vanishing on the boundary.

Part of the derivation involves the integration by parts $\int_{\Sigma} \langle \nabla_{\Sigma}^N X, \nabla_{\Sigma}^N X \rangle = - \int_{\Sigma} \langle X, \Delta^N_{\Sigma} X \rangle$, where $\nabla_{\Sigma}^N$ is the normal projection of the covariant derivative on $\Sigma$, and $\Delta^N_{\Sigma}$ is the normal Laplacian defined by $\sum_{i = 1}^k \nabla_{E_i}^N \nabla_{E_i}^N X - \nabla^N_{\left(\nabla_{E_i} E_i\right)^T} X$, with $E_i$ being an orthonormal frame for $\Sigma$. Why does this hold? I'm aware of Green's identity holding for the Laplacian and gradient of scalar functions, but the operators involved here are normal projections, and I want to know why the identity still holds in this case.

I have tried to compute $\nabla_{\Sigma}^N (\nabla_{\Sigma}^N X)$ in hopes that its inner product with $X$ is the same as that with $\Delta_{\Sigma}^N X$, up to some terms which vanish under the integral like a divergence, but I haven't gotten anywhere.

Alternative Solution to Theorem 5, Section 2.3 of Hoffman’s Linear Algebra

Posted: 26 Jun 2022 02:17 PM PDT

In this video lecture, time stamp 19:20 - 24:30, the professor shows a proof of the following claim (extension to a basis): every linearly independent list of vectors in a finite-dimensional vector space can be extended to a basis of the vector space.

I know one approach to proving this claim. Here is my attempt, but I still feel confused about my proof. I would really appreciate it if you gave some feedback.

Apparently the professor proved this claim using a different approach (from the one I know). The professor's proof: Let $\{v_1,…,v_m\}$ be linearly independent, and let $\{w_1,…,w_n\}$ be a basis of $V$. If $w_1\in \mathrm{span}(v_1,…,v_m)$, set $B_1=\{v_1,…,v_m\}$. If $w_1\notin \mathrm{span}(v_1,…,v_m)$, set $B_1=\{v_1,…,v_m,w_1\}$. If $w_j\in \mathrm{span}(B_{j-1})$, do nothing (set $B_j=B_{j-1}$). If $w_j\notin \mathrm{span}(B_{j-1})$, set $B_j=B_{j-1}\cup \{w_j\}$. At step $n$, $V=\mathrm{span}(B_n)$ and $B_n$ is linearly independent, so it is a basis.

I don't know exactly what the professor did; you can check the video (I have given the time stamp) for the complete context surrounding the proof. The proof seems extremely hand-wavy. In my proof, I constructed a basis without the help of any other (existing) basis such as $\{w_1,…,w_n\}$. Of course I proved it for a subspace, but the proof is essentially the same for any vector space, so no modification is required. Please help me complete the details of the professor's proof. Then I will have two approaches in my arsenal for proving this claim (extension to a basis).

Categorical Koszul duality

Posted: 26 Jun 2022 02:12 PM PDT

Consider a (small, idempotent-closed) dg category $C$ over $K$. I am interested in the weak dual $C^{\vee}=Fun^{ex}(C, \mathrm{Perf}\, K)$ and in when the functor $C \rightarrow (C^{\vee})^{\vee}$ is a quasi-equivalence, which is a kind of categorical Koszul duality.

This functor is an equivalence if $C$ is a dualizable object in the symmetric monoidal category of (small, idempotent-closed) dg categories over $K$, which is precisely when $C$ is smooth and proper. It is also an equivalence if $C= D^b Coh(X)$ for a proper scheme $X$, by https://arxiv.org/abs/1312.7164.

Question: What are some other examples? Is there a general criterion for when this reflexivity holds?

References would be appreciated as well. I am aware of https://arxiv.org/abs/1409.5934, but that paper seems to study only a class of underived categories, where the situation is quite different, as the authors point out.

Confused by this Corollary

Posted: 26 Jun 2022 02:40 PM PDT

I am reading Shawn Hedman's A First Course in Logic: An Introduction to Model Theory. I am confused by the proof of Corollary 1.38 on page 19.

"Corollary: If $G$ can be derived from the empty set, then $G$ is a tautology.

Proof: If $\emptyset \vdash G$, then, by Monotonicity, $\mathcal{F} \vdash G $ for every set of formulas $\mathcal{F}$. By Theorem 1.37 , $\mathcal{F} \models G$ for every set of formulas $\mathcal{F}$. It follows that $ \mathcal{A}\models G$ for any assignment $\mathcal A$ and $G$ is a tautology."

I don't understand how it follows that $G$ is a tautology using Monotonicity. My guess is we could substitute a known tautology for $\mathcal F $ that is unrelated to $G$, e.g. $ \{ C \lor \neg C \} \vdash G $. Then by Monotonicity and Theorem 1.37, $ \{ C \lor \neg C \} \models G $, which reads: for all assignments $\mathcal A $, if $\mathcal A \models C \lor \neg C $ then $ \mathcal A \models G$. And since $\mathcal A \models C \lor \neg C $ (because $C \lor \neg C$ is a tautology), by modus ponens we get $ \mathcal A \models G$. I am not sure about that and would like confirmation. Maybe I missed something obvious.

Also, I don't understand why we need to use Monotonicity in the first place (to be fair, the author never said that we do; I am just wondering). Can't we use the Soundness theorem directly? If $\emptyset \vdash G$, then by the Soundness theorem we get $\emptyset \models G$. If we can show that $\emptyset \models G$ implies $G$ is a tautology, then we are done. The statement $\emptyset \models G$ reads: for any assignment $\mathcal A $, if $\mathcal A \models \emptyset $ then $\mathcal A \models G$. Equivalently, if $\mathcal A(\emptyset)=1 $ then $\mathcal A(G)=1 $. But you can't assign the empty set. So maybe the conditional is vacuously true, because $\mathcal A(\emptyset) $ is always $0$. I am stumped by this. Maybe we can use an equivalent form of the conditional, $p \rightarrow q \equiv \neg p \lor q$.

For reference, Theorem 1.37, also known as the Soundness theorem, is the statement if $ \mathcal{F } \vdash G $ then $\mathcal{ F}\models G$, and Monotonicity is a rule for proof derivation, if $\mathcal{ F} \vdash G$ and $\mathcal{F} \subset \mathcal{F' }$ then $\mathcal{F'} \vdash G$.

Proof of Schur's Theorem for Convex Plane Curves by Guggenheimer

Posted: 26 Jun 2022 02:47 PM PDT

I'm reading Differential Geometry by Heinrich W. Guggenheimer and I have a doubt about the proof of Schur's Theorem for Convex Plane Curves on page 31. I will put the theorem and the proof here before I say what are my doubts.

$\textbf{Theorem 2-19.}$ Given two curves $f, g$ with continuous tangents and piecewise continuous curvature, both of length $L$. If $|k_f(s)| \geq |k_g(s)|$ and $k_f(s) \neq k_g(s)$, the chord subtended by $g$ is bigger than that subtended by $f$.

$\textbf{Proof.}$

We place both arcs in the lower half plane $x_2 \leq 0$ with endpoints on the $x_1$ axis, $x_{2f}(0) = x_{2f}(L) = x_{2g}(0) = x_{2g}(L) = 0$ and such that $x_{1f} (L) > x_{1f} (0)$, $x_{1g} (L) > x_{1g} (0)$. In this case, both curvatures are non-negative. The chords subtended are $d_f = x_{1f}(L) - x_{1f}(0) = \int_0^L \cos \theta_f (s) ds$, $d_g = x_{1g}(L) - x_{1g}(0) = \int_0^L \cos \theta_g (s) ds$

[image]

Let $s'$ be the value of the arc length at which the tangent to $f(s)$ is parallel to the $x_1$ axis (see the picture above); then

$$\theta_f (s) = \int_{s'}^s k_f(\sigma) d \sigma$$

and, because of the convexity of the arc,

$$- \pi \leq \theta_f(s) \leq \pi$$

The angle

$$\theta_g^* (s) = \theta_g(s) - \theta_g(s') = \int_{s'}^s k_g(\sigma) d \sigma \leq \theta_f(s)$$

Hence

$$d_f = \int_0^L \cos \theta_f (s) ds \leq \int_0^L \cos \theta_g^* (s) ds = d_g \cos \theta_g(s') \leq d_g$$

and $d_f = d_g$ only if $k_f(s) = k_g(s)$ for all $s$. $\square$

My doubts are

  1. Why is true that $\int_0^L \cos \theta_g^* (s) ds = d_g \cos \theta_g(s')$?

  2. How the picture helps me to conclude the assertion of my first doubt?

  3. What is the geometric idea of the proof?

Thanks in advance!

Flies in a cube

Posted: 26 Jun 2022 02:47 PM PDT

Two flies sit in the corners $(1, 1, 1)$ and $(n, n, n)$ of a discrete cube $M^3$, where $M=\{1,\ldots, n\}$.

After every unit of time both flies decide randomly and independently of each other whether or not to fly to some neighbouring point; here two points are neighbours if they differ in precisely one coordinate and that difference is precisely $1$.

What's more, all options are equally likely: if a fly currently sits at a point with $k$ neighbours, then with probability $\frac{1}{k+1}$ it does not move, and with the same probability it moves to any given one of the neighbours.

Now both flies live precisely $2n$ units of time. What is the probability that they will ever meet?

This problem is made up and so far I don't know the answer; maybe there is an elegant solution?
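In the absence of a closed form, a Monte Carlo sketch for small $n$ (my interpretation: "meet" means occupying the same point at the same time):

    import random

    def walk(n, steps, start):
        pos, path = list(start), [tuple(start)]
        for _ in range(steps):
            moves = [None]                    # staying put is one of the k+1 options
            for i in range(3):
                for d in (-1, 1):
                    if 1 <= pos[i] + d <= n:
                        moves.append((i, d))
            m = random.choice(moves)          # uniform over staying and all neighbours
            if m is not None:
                pos[m[0]] += m[1]
            path.append(tuple(pos))
        return path

    def meet_probability(n, trials=10 ** 4):
        hits = 0
        for _ in range(trials):
            p = walk(n, 2 * n, (1, 1, 1))
            q = walk(n, 2 * n, (n, n, n))
            hits += any(a == b for a, b in zip(p, q))
        return hits / trials

    print(meet_probability(3))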

Is there a numeral system that makes both addition and multiplication easy?

Posted: 26 Jun 2022 02:34 PM PDT

Decimal positional notation, the system for writing numbers we all use every single day, makes addition very easy by transforming it from a computation to a repeated operation on individual digits (long hand addition). A more formal description would be that if two numbers $a$ and $b$ are presented in decimal notation, long hand addition is a constant-memory, logarithmic time algorithm to obtain the decimal notation of the number $a+b$.

Multiplication (that is, long multiplication) is not so easy in decimal notation, requiring doubly logarithmic memory and logarithmic time while relying heavily on long hand addition as a "subroutine".

It is of course straightforward to invent a numeral system where the roles of the two operations are reversed: Just use the decimal notation of $\log(a)$ to denote the number $a$. Multiplication then becomes as simple as long hand addition in standard notation, because $\log(ab)=\log(a)+\log(b)$ (this is the principle behind the slide rule). Unfortunately, the price one has to pay for this convenience is that addition is suddenly very difficult.

Is there a numeral system that provides a balance between these two extremes, that is, a system for writing numbers in such a way that both addition and multiplication can be done by hand with less effort than decimal long multiplication, but perhaps greater effort than decimal long addition? (Note the focus on still being a system that is of practical use for manual computations; I doubt anyone would be able to execute the Schönhage–Strassen algorithm by hand, even if it has better asymptotic complexity than long multiplication.)

Every cyclic group with order > 2 has at least 2 distinct generators

Posted: 26 Jun 2022 02:49 PM PDT

Every cyclic group with order > 2 has at least 2 distinct generators

Here's what I've got so far:

Either the order of our group $G$ is finite or it is infinite.

Suppose it is infinite: then our group is isomorphic to $\mathbb Z$ under addition. This group has two distinct generators ($1$ and $-1$), therefore so does $G$.

Suppose it is finite: then $G$ is isomorphic to $\mathbb Z_n$ under addition modulo $n$. We know the order of $G$ is $\geqslant 3$. If the order of $G$ is odd, both $1$ and $2$ are generators of $\mathbb Z_n$ under addition modulo $n$, so $G$ has at least two generators.

If the order of $G$ is even, then both $1$ and some odd number between $1$ and $n$ are generators. How do we determine this odd number?

Does this look like I'm on the right track?
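A quick enumeration for small $n$ (using the fact that $k$ generates $\mathbb Z_n$ iff $\gcd(k, n) = 1$) to look for a pattern:

    from math import gcd

    for n in range(3, 13):
        gens = [k for k in range(1, n) if gcd(k, n) == 1]
        print(n, gens)   # note that n - 1 appears in every row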
