Sunday, August 1, 2021

Recent Questions - Mathematics Stack Exchange



Without totality: Semigroupoid, Small Category, and Groupoid - how is it possible for them not to be closed?

Posted: 01 Aug 2021 08:27 PM PDT

Semigroupoids, small categories, and groupoids have group-like structures.

But semigroupoids, small categories, and groupoids do not require totality, which I believe means they do not need to be closed.

How can it be possible that their algebraic structures are not closed, yet they still have a consistent algebra?

Evaluating an infinite sum?

Posted: 01 Aug 2021 08:10 PM PDT

How to evaluate this sum: $\sum ^{\infty }_{n=1}\tan ^{-1}\dfrac{1}{2n^{2}}$?

I've tried a Taylor series expansion and some other methods, but I couldn't figure it out.
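A quick numerical check (not a closed-form evaluation) suggests the partial sums approach $\pi/4$:

```python
import math

def partial_sum(N):
    """Partial sum of sum_{n>=1} arctan(1/(2 n^2))."""
    return sum(math.atan(1 / (2 * n * n)) for n in range(1, N + 1))

print(partial_sum(10), partial_sum(1000), partial_sum(100_000))
print(math.pi / 4)  # the partial sums appear to approach pi/4
```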

Is there another example of a complex function that has non-trivial zeros after analytic continuation?

Posted: 01 Aug 2021 08:01 PM PDT

Besides the well-known zeta function, eta function, and all such close relatives, are there simpler examples of a complex function that has non-trivial zeros after analytic continuation?

Thanks

Reducing 3 Probabilities of Events to 2

Posted: 01 Aug 2021 08:01 PM PDT

Say we have 1 event with 3 possible outcomes X1, X2, X3, and we know that the probability of that outcome is 0.6. What would be the probability of X1 or X3 as opposed to X2? Or can we combine X1 and X3 into one probability and have X2 as the second? How would someone do that?

Example of a function from $\mathbb{Z} / 4\mathbb{Z}$ to itself that is not polynomial.

Posted: 01 Aug 2021 08:13 PM PDT

I am trying to give an example of a function from $\mathbb{Z} / 4\mathbb{Z}$ to itself that is not a polynomial function. For that I thought to give a partial characterization of the polynomial functions from $\mathbb{Z} / 4\mathbb{Z}$ to itself. I took a polynomial $P\in \mathbb{Z} / 4\mathbb{Z}[X]$, $P=\sum_{i=0}^{n}a_iX^i$ with $a_i\in \mathbb{Z} / 4\mathbb{Z}$. Then, since $2^i\equiv 0 \pmod 4$ for $i\geq 2$, $$ P(2)=2a_1+a_0. $$ Now let $f$ be a function on $\mathbb{Z} / 4\mathbb{Z}$ such that $f(0)=0$ and $f(2)=1$. If $f$ were a polynomial function, there would be $P=\sum_{i=0}^{n}a_iX^i\in \mathbb{Z} / 4\mathbb{Z}[X]$ such that $f(x)=P(x)$ for all $x\in \mathbb{Z} / 4\mathbb{Z}$. But then $$ f(0)=a_0=0\qquad \text{and}\qquad f(2)=2a_1+a_0=2a_1=1, $$ so $2\in (\mathbb{Z} / 4\mathbb{Z})^{×}$. Contradiction.

Is this proof correct?
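The claim can also be checked by brute force (an illustration, not part of the proof): enumerate every polynomial function on $\mathbb{Z}/4\mathbb{Z}$ and confirm that none satisfies $f(0)=0$ and $f(2)=1$. Degree $\le 3$ suffices because $x^{k+2}$ and $x^k$ agree as functions on $\mathbb{Z}/4\mathbb{Z}$ for $k\ge 2$.

```python
from itertools import product

MOD = 4

def poly_func(coeffs):
    """Function on Z/4Z induced by the polynomial sum(a_i * X^i)."""
    return tuple(sum(a * pow(x, i, MOD) for i, a in enumerate(coeffs)) % MOD
                 for x in range(MOD))

# Degree <= 3 is enough: x^(k+2) and x^k induce the same function for k >= 2.
polynomial_functions = {poly_func(c) for c in product(range(MOD), repeat=4)}

print(len(polynomial_functions) < 4 ** MOD)  # True: not every function is polynomial
print(any(f[0] == 0 and f[2] == 1 for f in polynomial_functions))  # False
```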

Smooth map between Sobolev spaces

Posted: 01 Aug 2021 07:52 PM PDT

I know that for manifolds we can define smoothness just in terms of local coordinate representations. But for two Sobolev spaces, if we have a map between them, what do we mean by saying this mapping is smooth?

P.S. I encountered this problem when I saw the following theorem. [image of the theorem omitted]

geodesics on the $n$-sphere

Posted: 01 Aug 2021 07:56 PM PDT

I'm trying to find a coordinate-free expression for geodesics on the $n$-sphere. I know a priori that they're great circles; I'm just trying to show this myself. I'm sure there are a hundred ways to see that geodesics are great circles, but I would like to see it using the specific approach outlined here.

Let $\gamma$ be a geodesic on $S^n\subset\mathbb{R}^{n+1}$. Then $\gamma$ satisfies $(\nabla_{\dot\gamma}\dot\gamma)^\top=0$, where $\nabla$ is the Levi-Civita connection on $\mathbb{R}^{n+1}$ (with a standard abuse of notation, $\gamma$ here is technically an extension from a geodesic on $S^n$ to one on all of $\mathbb{R}^{n+1}$). But the geodesic equation on $\mathbb{R}^{n+1}$ is $\nabla_{\dot\gamma}\dot\gamma=\ddot\gamma=0$. So geodesics on $S^n$ satisfy $$(\nabla_{\dot\gamma}\dot\gamma)^\top=\ddot\gamma^\top=\ddot\gamma-\ddot\gamma^\perp=\ddot\gamma-\langle\ddot\gamma,\gamma\rangle\gamma=0,\tag{1}\label{eq1}$$

where the projection formula holds since $\gamma$ can be considered as a unit vector normal to the surface. Now suppose $\gamma(0)=x\in S^n$ and $\dot\gamma(0)=y\in T_xS^n=\mathbb{R}^n.$ Then the unique geodesic satisfying these initial conditions should be something like $$\gamma(t)=\cos(t)x+\sin(t)y.$$ This $\gamma$ clearly satisfies the initial conditions. However, $\ddot\gamma=-\gamma$, and so $$\langle\ddot\gamma,\gamma\rangle=-\langle\gamma,\gamma\rangle=-\cos^2(t)\|x\|^2-\sin^2(t)\|y\|^2=-\left(\cos^2(t)+\sin^2(t)\|y\|^2\right)$$ since $\|x\|=1$. But $y$ doesn't have norm $1$ (even if it did, this $\gamma$ wouldn't satisfy the geodesic equation, i.e. equation $(1)$), so let us instead try $$\gamma(t)=\cos(\|y\|t)x+\sin(\|y\|t)\frac{y}{\|y\|},$$ where we've multiplied the arguments of sine and cosine with $\|y\|$ so that the geodesic will satisfy the initial condition $\dot\gamma(0)=y$. But in this case, $\ddot\gamma=-\|y\|^2\gamma$, and so again, equation $(1)$ is not satisfied.

Can someone please explain what I have done wrong? Again, I am not looking for a different way (especially one in coordinates) to show that geodesics on $S^n$ are great circles.
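As a sanity check on the setup (an illustration only, assuming $x\perp y$ so that the curve stays on the sphere), the left-hand side of equation $(1)$ can be evaluated numerically for the second candidate curve:

```python
import math

# Candidate curve on S^2 in R^3: gamma(t) = cos(|y| t) x + sin(|y| t) y/|y|,
# with x a unit vector and y tangent to the sphere at x (so <x, y> = 0).
x = (1.0, 0.0, 0.0)
y = (0.0, 1.5, 2.0)                      # |y| = 2.5, deliberately not unit length
ny = math.sqrt(sum(c * c for c in y))

def gamma(t):
    return tuple(math.cos(ny * t) * a + math.sin(ny * t) * b / ny
                 for a, b in zip(x, y))

def gamma_dd(t, h=1e-4):
    """Second derivative by central differences."""
    gp, g0, gm = gamma(t + h), gamma(t), gamma(t - h)
    return tuple((p - 2 * c + m) / h**2 for p, c, m in zip(gp, g0, gm))

t = 0.7
g, gdd = gamma(t), gamma_dd(t)
inner = sum(a * b for a, b in zip(gdd, g))            # <gamma'', gamma>
residual = [a - inner * b for a, b in zip(gdd, g)]    # LHS of equation (1)
print(max(abs(r) for r in residual))                  # ~ 0 up to discretization error
```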

Prove that $\left(\Re(x^*Bx)\right)^2+\left(\Im(x^*Bx)\right)^2\leq\left(\Re(x^*T^*ATx)\right)^2+\left(\Im(x^*T^*ATx)\right)^2$

Posted: 01 Aug 2021 07:39 PM PDT

Given $A=\mathrm{diag}\{e^{i\theta_1},e^{i\theta_2},e^{i\theta_3}\}$, $B=\mathrm{diag}\{\gamma e^{i\beta},0,\gamma e^{i\alpha}\}$, such that $\pi/2\geq\beta\geq\theta_j\geq\alpha\geq-\pi/2$ for $j=1,2,3$ and $\gamma\in\mathbb{R}$, $\gamma>0$. Prove that $$\left(\Re(x^*Bx)\right)^2+\left(\Im(x^*Bx)\right)^2\leq\left(\Re(x^*T^*ATx)\right)^2+\left(\Im(x^*T^*ATx)\right)^2$$

for any unit vector $x$ and any nonsingular $T$ such that $\sigma_{\max}(T^*AT)\leq\gamma$ (where $\sigma_{\max}$ denotes the largest singular value).


Let $B=\gamma C$, where $C=\mathrm{diag}\{ e^{i\beta},0, e^{i\alpha}\}$.

\begin{aligned} \left(\Re(x^*Bx)\right)^2+\left(\Im(x^*Bx)\right)^2&\leq\left(\Re(x^*T^*ATx)\right)^2+\left(\Im(x^*T^*ATx)\right)^2\\ |x^\ast Bx|^2&\leq|x^\ast T^*ATx|^2\\ |x^\ast Bx|&\leq|x^\ast T^*ATx|\\ \gamma|x^\ast Cx|&\leq|x^\ast T^*ATx|\\ \end{aligned}

I am stuck from here.

How do I find the limit of the integral of 1/(1+x^n) using the fundamental theorem of calculus? [closed]

Posted: 01 Aug 2021 08:00 PM PDT

$\int_a^b f(x)\,dx$. Am I right to use the fundamental theorem of calculus for this?

Frictionless bead sliding along a rotating stick

Posted: 01 Aug 2021 07:44 PM PDT

Problem 6.8 in Morin's book on Classical Mechanics has this setup:

A massless stick pivots at its end in a horizontal plane with constant angular velocity $\omega$, while a frictionless bead of mass $m$ slides along it.

The goal is to compute the Lagrangian $L$ and the Hamiltonian $H=p\cdot \dot{q}-L$. ($\ast$)

The solution makes sense to me when, using polar coordinates, the Lagrangian winds up being just the kinetic energy $L=\frac12 m\dot r^2+\frac12mr^2\dot\theta^2=\frac12 m\dot r^2+\frac12mr^2\omega^2$

because $\dot\theta=\omega$

Then to evaluate $H=\frac{\partial L}{\partial \dot\theta}\dot\theta+\frac{\partial L}{\partial\dot r}\dot r - L$, I think that

$\frac{\partial L}{\partial \dot\theta}=mr^2\dot\theta$ and $\frac{\partial L}{\partial\dot r}=m\dot r$, so that the terms before $-L$ in $H$ above become

$mr^2\dot\theta^2+m\dot r^2$

and therefore I thought $H=\frac12 m\dot r^2 + \frac12mr^2\dot\theta^2=\frac12 m\dot r^2 + \frac12mr^2\omega^2$.

But according to the solution, this ought to come out to just $m\dot r^2$ (as opposed to also having $mr\dot\theta^2$) and $H$ is supposed to be $\frac12 m\dot r^2 - \frac12mr^2\omega^2$.

What it looks like to me is that, rather than continuing to use the form of $L$ with $\dot \theta$ in it to compute the conjugate momenta, $\dot\theta$ was immediately replaced with the constant $\omega$, so that the term $\frac12mr^2\omega^2$ contributed zero to $\frac{\partial L}{\partial \dot\theta}$ (being a constant with respect to $\dot\theta$).

So that is what I'm asking about: why should I believe we have license to substitute $\omega$ for $\dot\theta$ into $L$ before computing a partial derivative? I do not recall any explicit discussion of that but it may be in there: I only have the sample chapter, not the whole book. Obviously it does not yield the same answer if you substitute at the very end!

($\ast$) I guess probably I shouldn't call it the Hamiltonian because the book doesn't do that. The Hamiltonian is supposed to be a function of the coordinates and the conjugate momenta. But as I understand it, $H$ and $L$ are supposed to be related this way via the Legendre transformation.
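The two computations the question contrasts can be compared numerically (illustrative values only; a central difference stands in for $\partial L/\partial\dot\theta$):

```python
# Illustrative numbers; the central difference stands in for dL/d(thetadot).
m, r, rdot, omega = 2.0, 3.0, 0.5, 1.5

def L(rdot_, thetadot):
    return 0.5 * m * rdot_**2 + 0.5 * m * r**2 * thetadot**2

h = 1e-6
dL_dthetadot = (L(rdot, omega + h) - L(rdot, omega - h)) / (2 * h)  # = m r^2 omega

# Differentiate first, then set thetadot = omega:
H_correct = dL_dthetadot * omega + (m * rdot) * rdot - L(rdot, omega)
# Substitute thetadot = omega first: the thetadot-term of L becomes a constant,
# so it contributes nothing to the conjugate momentum.
H_substituted_first = (m * rdot) * rdot - L(rdot, omega)

print(H_correct)            # equals 0.5*m*rdot**2 + 0.5*m*r**2*omega**2
print(H_substituted_first)  # equals 0.5*m*rdot**2 - 0.5*m*r**2*omega**2
```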

This is math; it is about graphs and relations [closed]

Posted: 01 Aug 2021 08:07 PM PDT

To rent a power tool from ACE Hire, there is an initial charge of $12 and then an additional charge of $30 per hour. If C is the cost in dollars and t is the time in hours, what is the formula connecting C and t?

Solve $x^{x^5} = 5$ for $x$

Posted: 01 Aug 2021 07:38 PM PDT

A friend who tutors high school math showed me this equation. I could not solve it, but by trial and error in Python I discovered that $\sqrt[5]{5}$ is the answer. I further realized that for any integer $n$, $x^{x^n} = n$ is solved by $\sqrt[n]{n}$. How does one handle such equations by ordinary algebraic methods?
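The trial-and-error observation can be confirmed numerically (a quick sketch):

```python
# Check x = n**(1/n) against x**(x**n) = n: since x**n = n, x**(x**n) = x**n = n.
for n in range(2, 10):
    x = n ** (1 / n)
    assert abs(x ** (x ** n) - n) < 1e-9
print("verified for n = 2..9")
```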

Defining Dirac's Delta in one dimensional with two variable

Posted: 01 Aug 2021 07:42 PM PDT

I have a problem in one dimension (the x-axis) with two spatial variables $x_{1}$ and $x_{2}$, and I have a Dirac delta $\delta(x_{1}-x_{2})$. My question is: how can I properly define this $\delta$?

I thought about

$$\delta(x_{1}-x_{2}) \varphi(x) = \varphi(x_{2})$$ for all $\varphi \in C^{\infty}_{c}(\mathbb{R})$.

Is this definition mathematically precise? I don't know if this is correct (I thought maybe $\varphi(x_{1},x_{2})$ and $\delta(x_1 - x_2) \varphi(x_1, x_2) = \varphi(x_{2})$ but I'm not sure either).

Thanks.

Let $g\in R[a,b]$, $f$ a bounded function and $(x_n)$ a sequence, such that $f(x)=g(x)$ for all $x\in [a,b]$, $x\neq x_n$. Then $f\not\in R[a,b]$

Posted: 01 Aug 2021 07:49 PM PDT

I was asked to give an example where $f$ fails to be Riemann integrable.

Let $g:[a,b]\longrightarrow \mathbb{R}$ be a Riemann integrable function and $f:[a,b]\longrightarrow \mathbb{R}$ a bounded function and $(x_n)$ a sequence of points on $[a,b]$ such that $f(x)=g(x)$ for all $x\in [a,b]$, $x\neq x_n$. Show with an example that $f$ is not Riemann integrable.

This question sounds a little weird. I have been trying to find functions with these properties, but I am not sure at all. I thought of something like $f(x)=\cos x$ and its Maclaurin series ($g(x)$), but I did not get anything. Could you suggest something?

$(I-A)^k=0 \text{ implies that } \exists A^{-1} \text{ s.t. }AA^{-1}=I $

Posted: 01 Aug 2021 08:09 PM PDT

I think this proposition is right; if it is not, could you provide a counterexample? It is definitely right in the $\mathbb{R}^3$ case, however. Here is how I proved it. Is it right, and if so, is it rigorous? Even if yes, are there more ways to prove this? I'm really curious. I give two methods; I'm not sure if either is right, or maybe one of them is rigorous and the other is not. Could you please point out flaws in the proofs if the idea is right but it is not rigorously explained? I'm new to proofs, and it's summer, so I can't annoy my professors. Thanks.

$(I-A)^k=0$, so $\det\big((I-A)^k\big)=\big(\det(I-A)\big)^k=0 \Rightarrow \det(I-A)=0$. Therefore there exists a non-zero vector $x$ such that $$(I-A)x=Ix-Ax=0.\tag{2}$$

Method 1 (by finding the eigenvalues of $A$): From (2), $Ax=\lambda_{2}x \Rightarrow \lambda_2=1$. Therefore the eigenvalues of $A$ must be $1$, and therefore $A$ is invertible.

Method 2 (by contradicting that $x$ is a non-zero vector): Assume $A$ is not invertible, so $0$ is an eigenvalue of $A$: $\lambda_2=0 \Rightarrow Ax=0x=0$. Then from (2) we get $Ix-Ax=Ix=0 \Rightarrow x=\mathbf{0}$. This is a contradiction to $x\neq\mathbf{0}$, thus the assumption that $A$ is not invertible is false. Therefore $A$ is invertible and $\exists A^{-1}$ s.t. $AA^{-1}=I$.
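As a numerical illustration (separate from the two proof attempts above): if $N=I-A$ is nilpotent, the finite Neumann series $I+N+\dots+N^{k-1}$ inverts $A=I-N$ explicitly. A minimal sketch:

```python
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][t] * Y[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
N = [[0, 2, 5], [0, 0, 3], [0, 0, 0]]   # strictly upper triangular, so N^3 = 0

# A = I - N, i.e. (I - A)^3 = N^3 = 0 with k = 3.
A = [[I[i][j] - N[i][j] for j in range(3)] for i in range(3)]
N2 = matmul(N, N)
assert matmul(N2, N) == [[0] * 3 for _ in range(3)]

# Finite Neumann series: (I - N)^(-1) = I + N + N^2.
A_inv = [[I[i][j] + N[i][j] + N2[i][j] for j in range(3)] for i in range(3)]
print(matmul(A, A_inv))  # the identity matrix
```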

Lifting ${\rm SO}(D)$ to ${\rm Spin}(D)$ smoothly

Posted: 01 Aug 2021 07:52 PM PDT

I have a question about a very basic thing. Let $M$ be a smooth manifold and suppose that we have a smooth map $g:M\to {\rm SO}(D)$. Let $\rho : {\rm Spin}(D)\to {\rm SO}(D)$ be the covering map. Since $\ker \rho = \{\pm 1\}$, at each $x\in M$ the equation $$\rho(\tilde{g}(x))=g(x)\tag{1}$$

has two solutions. So choosing one solution at each $x\in M$ defines a function $\tilde{g}:M\to {\rm Spin}(D)$. Intuitively it is very clear to me that we can choose the solutions so that $\tilde{g}$ is smooth as well, but I fail to see how to prove it. This is probably something extremely basic, but I'm really missing it. So how could we argue that it is always possible to pick the solutions of (1) at each $x\in M$ such that the resulting function $\tilde{g}$ is smooth?

Problem with the indefinite integral $\int{\frac{\sqrt{{{b}^{2}}{{x}^{2}}-{{a}^{2}}}}{{{x}^{2}}}dx}$

Posted: 01 Aug 2021 07:51 PM PDT

I tried using a substitution $x=\frac{a}{b}\sec \theta $ to solve

$$\int{\frac{\sqrt{{{b}^{2}}{{x}^{2}}-{{a}^{2}}}}{{{x}^{2}}}dx}$$

Obviously, $dx=\frac{a}{b}\sec \theta \tan \theta d\theta $. In other words,

$\begin{align} \int{\frac{\sqrt{{{b}^{2}}{{x}^{2}}-{{a}^{2}}}}{{{x}^{2}}}dx} &=\int{\frac{\sqrt{{{b}^{2}}{{\left( \frac{a}{b}\sec \theta \right)}^{2}}-{{a}^{2}}}}{{{\left( \frac{a}{b}\sec \theta \right)}^{2}}}\left( \frac{a}{b}\sec \theta \tan \theta \right)d\theta } \\ &=\int{\frac{\sqrt{{{\sec }^{2}}\theta -1}}{\frac{1}{b}\sec \theta }\left( \tan \theta \right)d\theta } \\ & =b\int{\frac{{{\tan }^{2}}\theta }{\sec \theta }d\theta } \\ &=b\int{\frac{{{\sec }^{2}}\theta -1}{\sec \theta }d\theta } \\ \end{align}$

But I feel there's something wrong with my solution. Perhaps there's a more effective/better way to solve it?

Need some help showing that: $\lim_{k \to \infty} \int_{\mathbb{R}^n \backslash B(0,k)} f d\lambda ^n = 0$

Posted: 01 Aug 2021 08:21 PM PDT

Assignment: I would like to verify this attempt

Let $f: \mathbb{R}^n \rightarrow \mathbb{R}$ be a Lebesgue integrable function. Show that:

$ \lim_{k \to \infty} \int_{\mathbb{R}^n \backslash B(0,k)} f d\lambda ^n = 0$

Proof: Let $A=\mathbb{R}^n \backslash B(0,k)$, and let $S$ be a rectangle that contains $A$; extend each of the functions $f_k$, and also $f$, to $S$ by setting them to zero on $S \backslash A$.

Then these extensions have the property that $f_k$ converges to $f$ uniformly on $S$, for each $k \in \mathbb{N}$.

Now let $D_k$ be the set of points of discontinuity of the function $f_k$ (extended) and let

$G := S \backslash \cup_{k=1}^{∞} D_k.$

It is evident that all functions $f_k$ are continuous in $G$, and furthermore $f_k$ converges uniformly to $f$ in $G ⊆ S$. Then, since the uniform limit of a sequence of continuous functions in a set is continuous in that set, we have that $f$ is continuous in $G$. Therefore, the set $D$ of the points of discontinuity of f is contained in $S \backslash G = \cup_{k=1}^{∞} D_k$, which has zero measure because it is a countable union of sets of zero measure (the $D_k$ have zero measure because each $f_k$ is integrable). Then $D$ also has zero measure and so $f$ is integrable.

dual of a linear programming problem

Posted: 01 Aug 2021 08:05 PM PDT

Could anyone tell me how to write the dual of the following problem?

\begin{align} \text{maximize}~~& \sum_{i=1}^{n} v_{i} x_{i}\\ \text{subject to}~~& \sum_{i=1}^{n} x_{i} \le 1, \notag \\ & x_{i}\ge 0 \quad \forall i,\notag\\ & v_i\ge 0\quad \forall i. \end{align}
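As a sanity check on the primal (a sketch with made-up data $v$; not the requested dual derivation): the feasible region is a simplex, and an LP attains its optimum at a vertex, so the optimal value is $\max_i v_i$ — which any correct dual must match by strong duality.

```python
# Feasible region {x >= 0, sum(x) <= 1} is a simplex; an LP attains its
# optimum at a vertex (the origin or a unit vector e_i).  Data v is made up.
v = [0.3, 1.7, 0.9, 1.2]
n = len(v)
vertices = [[0.0] * n] + [[1.0 if j == i else 0.0 for j in range(n)]
                          for i in range(n)]
opt = max(sum(vi * xi for vi, xi in zip(v, x)) for x in vertices)
print(opt == max(v))  # True: the dual optimal value must equal this as well
```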

Prove that $\{x = (x_1, x_2, \dots ) \mid |x_k| \leq M \ \text{for all $k$ , for some positive number $M$}\}$ is closed.

Posted: 01 Aug 2021 07:44 PM PDT

Consider $\mathbb{R}^\omega$ (countably infinite product of $\mathbb{R}$) with the uniform metric.

Let $A$ be the set of infinite bounded sequences of $\mathbb{R}$, i.e. $$A = \{x = (x_1, x_2, \dots ) \mid |x_k| \leq M \ \text{for all $k$ , for some positive number $M$}\}$$

Consider the metric $d(x,y) = \mathop{\text{sup }}_k \{\text{min}\{|x_k-y_k|, 1\}\}$.

I want to prove $A$ is closed in the topology generated by $d$ (uniform topology).

My attempt:

We want to prove $A^c$ is open. Let $x = (x_k) \in A^c$, so $x = (x_k)$ is an unbounded sequence. Suppose there exists an $M$ such that $|x_k| > M$ for infinitely many $k$. I want to show there exists some $r>0$ such that $B_d(x,r) \subset A^c$. Take $r = \frac{M}{2}$, and claim $B_d(x, \frac{M}{2}) \subset A^c$. Let $y = (y_k) \in B_d(x, \frac{M}{2})$.

But I have no idea how to proceed further.

Actually, I want to prove $y = \{y_k\}$ is unbounded.

Please help me.

Hard and non-trivial inequality with three real variables

Posted: 01 Aug 2021 08:14 PM PDT

For all reals $a$, $b$, $c$, show that $$a^2+b^2+c^2 \geq a\sqrt[\leftroot{-1}\uproot{1}4]{\frac{b^4+c^4}{2}} + b\sqrt[\leftroot{-1}\uproot{1}4]{\frac{c^4+a^4}{2}} + c\sqrt[\leftroot{-1}\uproot{1}4]{\frac{a^4+b^4}{2}}.$$

I tried to use Hölder's inequality: $$\left(\frac{1}{2}+\frac{1}{2}+\frac{1}{2}\right)(a^2+b^2+c^2)(a^2+b^2+c^2)\Big(b^4+c^4+c^4+a^4+a^4+b^4\Big) \geq \text{RHS}^4$$ or

$$(a^2+b^2+c^2)^4 \geq (1+1+1)(a+b+c)^2 \Big(\sum_\text{cyc}{\frac{a^2(b^4+c^4)}{2}} \Big) \geq \text{RHS}^4$$

But I got stuck after that (the $\geq$ is reversed).
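Before hunting for a proof, a randomized numerical check (illustration only) suggests the inequality does hold for all reals:

```python
import random

random.seed(1)

def rhs_term(s, p, q):
    # s * ((p^4 + q^4) / 2) ^ (1/4)
    return s * ((p**4 + q**4) / 2) ** 0.25

for _ in range(100_000):
    a, b, c = (random.uniform(-10, 10) for _ in range(3))
    lhs = a * a + b * b + c * c
    rhs = rhs_term(a, b, c) + rhs_term(b, c, a) + rhs_term(c, a, b)
    assert lhs >= rhs - 1e-9
print("no counterexample found in 100,000 random trials")
```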

When can we guarantee the existence of an entire function with specified values at the integers?

Posted: 01 Aug 2021 07:54 PM PDT

Carlson's theorem is a uniqueness theorem that states (loosely) that if an entire function $f(z)$ of a complex variable takes on values $f_n$ at the integers $n \in \mathbb{N}$, then any other entire function that takes on the same values $f_n$ at the integers must behave very differently from $f(z)$ as $|z| \to \infty$.

Carlson's theorem only discusses uniqueness, but I'm wondering about the flip side, existence.

  1. Given a double-sided sequence $f_n \in \mathbb{C}$, is there always an entire interpolating function that takes on those values at the integers? What if we place restrictions on the sequence $f_n$ (e.g. if we assume that it's bounded, or falls off as $n \to \pm\infty$)?
  2. If we can guarantee the existence of such an entire interpolating function, then (assuming a sufficiently slowly growing sequence $f_n$) can we bound the asymptotic growth of the function? If so, then together with Carlson's theorem, that would guarantee that (for a sufficiently slowly growing sequence $f_n$) there is a unique slowly-growing entire interpolating function.

For question #1, one possible construction would be to take an entire cardinal function like $g(z) = \frac{\sin(\pi z)}{\pi z}$ (so that $g(m) = \delta_{m0}$ at the integers) and then form the infinite linear combination $\sum \limits_{n \in \mathbb{Z}} f_n\, g(z - n)$. I believe that $f_n$ would need to fall off faster than $1/\log(n)$ for large $n$ in order for this series to converge on the real axis, but since this choice of $g(z)$ grows quickly in the imaginary direction, $f_n$ might need to fall off much faster in order to ensure convergence off the real axis. We could modulate the basis function $g(z)$ by something like $\exp \left( -z^2 \right)$ to improve convergence in the real direction, but that would make it worse in the imaginary direction (and vice versa for $\exp \left(z^2 \right)$).

Must the poset of "automorphism group variants" be upwards-directed?

Posted: 01 Aug 2021 07:58 PM PDT

Say that two structures $\mathfrak{A},\mathfrak{B}$ are parametrically equivalent ("$\approx$") iff they have the same underlying set and each primitive relation/function of one is definable (by a single first-order formula, with parameters) in the other. The automorphism group spectrum of a structure $\mathfrak{A}$ is the set of automorphism groups of its parametric equivalents: $$\mathsf{AGS}(\mathfrak{A})=\{Aut(\mathfrak{B}): \mathfrak{B}\approx\mathfrak{A}\}.$$

In general this may be quite large. For example, the following three structures are parametrically equivalent but have quite different automorphism groups:

  • $\mathfrak{A}=(\mathbb{Q};+,<,1)$ has no nontrivial automorphisms. (Indeed every structure is parametrically equivalent to a rigid one - just add constants naming every element.)

  • $\mathfrak{B}=(\mathbb{Q};+,<)$ has some automorphisms: they're exactly the maps $x\mapsto ax$ for $a$ a positive rational.

  • The "torsor-and-betweenness version" of $\mathfrak{B}$, namely $$\mathfrak{C}=(\mathbb{Q}; (x,y,z)\mapsto x-y+z, \{(x,y,z): \vert y-x\vert+\vert z-y\vert=\vert z-x\vert\}),$$ has even more automorphisms: its automorphism group is generated by $Aut(\mathfrak{B})$, multiplication by $-1$, and addition by any fixed rational number.

There's quite a lot (to put it mildly!) known about the relationship between a structure and its automorphism group. I'm curious about the relationship between a structure and its automorphism group spectrum, which is obviously a vastly looser construction. However, at present I know very little about this even in relatively tame situations. I think the following seemingly-trivial question is a good starting point:

Is $\mathsf{AGS}(\mathfrak{A})$ always upwards-directed? That is, if $\mathfrak{A}\approx\mathfrak{B}$, must there be a $\mathfrak{C}$ with $\mathfrak{C}\approx\mathfrak{A}$ and $Aut(\mathfrak{A})\cup Aut(\mathfrak{B})\subseteq Aut(\mathfrak{C})$?

(Of course downwards-directedness is trivial, since we can just combine the structures involved in the obvious way. In fact $\mathsf{AGS}(\mathfrak{A})$ is always a lower semilattice.)

Note that the structures involved do not have to have finite languages, so we have a fair amount of control here.


EDIT: I think it may be helpful to include a bit more "flavor:" namely, $\mathsf{AGS}(\mathfrak{A})$ never (except in trivial situations) has a greatest element.

Suppose we have a relational structure $\mathfrak{A}$ with domain $A$ and primitive relations $R_i$ of arity $n_i$ ($i\in I$) and elements $a,b\in\mathfrak{A}$. Consider the new structure $\hat{\mathfrak{A}}$ defined as follows. The domain of $\hat{\mathfrak{A}}$ is $A$, and the language of $\hat{\mathfrak{A}}$ has a $2n_i$-ary relation $S_i$ for each $n_i$-ary relation $R_i$. We set $S_i^\hat{\mathfrak{A}}$ to be the set of tuples $(x_1,...,x_{2n_i})\in A^{2n_i}$ such that

  • for each $1\le k\le n_i$, either $x_{2k-1}=x_{2k}$ or $\{x_{2k-1},x_{2k}\}\subseteq\{a,b\}$, and

  • in $\mathfrak{A}$ we have $R_i([x_1,x_2],[x_3,x_4],...,[x_{2n_i-1}, x_{2n_i}])$,

where the bracket operation is defined as follows: $$[u_1,u_2]=\begin{cases} u_1 & \mbox{ if }u_1=u_2\not\in\{a,b\},\\ a & \mbox{ if }\{u_1,u_2\}\in\{\{a\},\{b\}\},\\ b & \mbox{ if }\{u_1,u_2\}=\{a,b\},\\ \mbox{[doesn'tmatter]} & \mbox{otherwise}.\\ \end{cases}$$

We have $\mathfrak{A}\approx\hat{\mathfrak{A}}$ (just name one of $a$ or $b$), but now the permutation swapping $a$ and $b$ and leaving everything else fixed is in $Aut(\hat{\mathfrak{A}})$. Teasing this out, the only way $\mathsf{AGS}(\mathfrak{A})$ can have a top element is if $\mathfrak{A}$ is parametrically equivalent to a structure where every permutation is an automorphism.

Admittedly things might get more interesting if we weaken the notion of maximality (e.g. ask whether there is $H\in\mathsf{AGS}(\mathfrak{A})$ such that for all $\mathfrak{B}\approx\mathfrak{A}$ there are $\pi_1,...,\pi_j$ such that the group generated by $H\cup\{\pi_1,...,\pi_j\}$ contains $Aut(\mathfrak{B})$) or constrain the language (e.g. forbid relations of arity $>n$ for some "small" $n$) but at least for literal maximality there is no interesting behavior possible.

Choice of $\delta$ for "brute force" proof of continuity of exponential function $e^x$

Posted: 01 Aug 2021 07:57 PM PDT

I have read several answers (example 1, example 2) about continuity of $e^x$, but most rely on Power Series definition of $e^x$, or sequential definition of a limit, or squeeze theorem.

I would like a brute-force proof that meets the following criteria:

  • Does NOT use sequential definition of limit
  • Does NOT use Squeeze Theorem
  • Uses $\epsilon-\delta$ definition of continuity directly
  • Does NOT use perturbations (e.g. $|e^{a + h} - e^a|$)
  • Uses definition of limit, starting with a $0 < |x-a| < \delta$ and ending with $|e^x - e^a| < \epsilon$
  • Is NOT based on power series definition of $e^x$
  • Is based on elementary limit definition $e^x = \lim_{n \to 0} (1+n)^{\frac{x}{n}}$

I would like to use Bernoulli's Inequality like this answer: \begin{align*} y+1 \le \ & \ e^y \le \frac{1}{1-y} \\ \to \quad \quad y \le \ & \ e^y - 1 \ \le \ \frac{y}{1-y} \\ \to \quad x-a \le \ & \ e^{x-a}-1 \ \le \ \frac{x-a}{1-(x-a)} \end{align*} except I am trying to modify that proof so it doesn't depend on Squeeze Theorem.
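The Bernoulli-type bounds quoted above, $1+y \le e^y \le \frac{1}{1-y}$ for $y<1$, can be checked numerically on a grid (illustration only):

```python
import math

# Check 1 + y <= e^y <= 1/(1 - y) on a grid of y < 1.
for i in range(-200, 100):
    y = i / 100
    ey = math.exp(y)
    assert 1 + y <= ey + 1e-12
    assert ey <= 1 / (1 - y) + 1e-12
print("bounds hold on the grid y in [-2.00, 0.99]")
```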


Proof attempt:

Let $\epsilon > 0$ and $a > 0$ arbitrary. Choose $\delta = \frac{\epsilon}{e^a}$. Then \begin{align*} & \quad 0 < |x - a| < \delta \quad \quad \quad \textrm{ (Given)}\\ &\to \quad |e^{x-a}-1| \quad < \delta \quad \quad \textrm{ (Bernoulli Inequality?)} \\ &\to \quad |e^{x-a}-1| < \frac{\epsilon}{e^a} \quad \quad \textrm{ (Substitute $\delta=\frac{\epsilon}{e^a}$)} \\ &\to \quad e^a|e^{x-a}-1| < \epsilon \quad \quad \textrm{ (Multiply both sides by $e^a$)} \\ &\to \quad |e^x-e^a| < \epsilon \quad \quad \quad \textrm{ (Distribute $e^a$ into absolute value)} \\ & \to \quad \lim_{x \to a} e^x = e^a \quad \quad \quad \textrm{ (Definition of limit)} \end{align*}


I know my proof is supposed to use the Bernoulli inequality, $$x-a \le e^{x-a}-1 \le \frac{x-a}{1-(x-a)},$$ so I tried using it (probably incorrectly) in step 2. Just because $|x-a| < \delta$ doesn't mean that $e^{x-a}-1$ (which is bigger) is also less than $\delta$; it may be bigger than $\delta$. So I am having trouble going from step 1 to step 2.

Also, Chappers' answer here suggests defining $\delta$ such that $$\max\left\{|x-a|, \left|\frac{x-a}{1-(x-a)}\right| \right\} < \frac{\epsilon}{e^a}$$ but the answer doesn't say specifically what that expression for $\delta$ would be.


Edit 7/29 ($2^{nd}$ Proof Attempt):

Some comments are suggesting I need to choose $$\delta=\max\left\{|x-a|, \left|\frac{x-a}{1-(x-a)}\right|\right\}.$$ Making this substitution, our proof becomes \begin{align*} & \quad 0 < |x - a| < \delta \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad\textrm{ (Given)}\\ & \quad 0 < |x - a| < \max\left\{|x-a|, \left|\frac{x-a}{1-(x-a)}\right| \right\} \quad \quad \quad \textrm{ (Substitution of $\delta$)}\\ &\quad \quad \quad \vdots \\ &\quad \quad \quad ? \\ &\quad \quad \quad \vdots \\ &\to \quad e^a|e^{x-a}-1| < \epsilon \quad \quad \textrm{ (Multiply both sides by $e^a$)} \\ &\to \quad |e^x-e^a| < \epsilon \quad \quad \quad \textrm{ (Distribute $e^a$ into absolute value)} \\ & \to \quad \lim_{x \to a} e^x = e^a \quad \quad \quad \textrm{ (Definition of limit)} \end{align*} I am not sure how to fill in the gaps. The left hand side needs to somehow become $e^a |e^{x-a}-1|$. The right hand side needs to become $\epsilon$. But it seems to me that by making this choice, $\delta$ is no longer a function of $\epsilon$.

If $\phi_\sigma$ is the pdf of $\mathcal N(0,\sigma^2)$ and $\psi$ is another pdf, does $(\phi_\sigma*\psi)/\phi_\sigma\to1$ for $\sigma\to\infty$?

Posted: 01 Aug 2021 08:03 PM PDT

Let $\phi_\sigma$ denote the probability density function of $\mathcal N(0,\sigma^2\cdot id)$, where $id$ is the identity matrix in $\Bbb R^{n\times n}$. If $X_\sigma$ and $Y$ are independent $\Bbb R^n$-valued random variables with $X_\sigma\sim\mathcal N(0,\sigma^2\cdot id)$, then $X_\sigma+Y$ has the density

$$ \Phi_\sigma(x)=\int_{\Bbb R^n} \phi_\sigma(x-y) P(Y\in dy), \quad x\in\Bbb R^n. $$

For the sake of simplicity, I'll suppose that $Y$ also has a density $\psi$, so $\Phi_\sigma$ is simply the convolution $\phi_\sigma*\psi$.

What I am interested in: Find conditions on the law of $Y$ under which we have

$$ \frac{\Phi_\sigma(x)}{\phi_\sigma(x)} \xrightarrow{\sigma\to\infty} 1, \quad x\in\Bbb R^n. \tag{1} $$

Any reference that treats this or similar questions is just as helpful as a hint or a solution. I'd expect that the law of $Y$ needs to have sufficiently nice tails (or just moments) for this to work...

What I have done so far: For any fixed $y\in\Bbb R^n$ and compact set $K\subset\Bbb R^n$, we have

\begin{align*} P(X_\sigma+y\in K)&=\int_{K-y}\phi_\sigma(x)dx=\int_K \phi_\sigma(x-y)dx\\ &=\int_K \phi_\sigma(x) \exp((1/2\sigma^2)(2x^\top y-|y|^2)) dx. \end{align*}

For $\sigma \to \infty$, the term $\exp((1/2\sigma^2)(2x^\top y-|y|^2))$ goes to 1 locally uniformly in $x$ and $y$, so we can infer that

$$ \frac{P(X_\sigma+y\in K)}{P(X_\sigma\in K)} \xrightarrow{\sigma\to\infty} 1 $$

locally uniformly in $y$. Here's where my argument becomes sloppy: For sufficiently large $N\in\Bbb N$ and $\sigma>0$ we should then have

\begin{align*} P(X_\sigma+Y\in K)&\approx \int_{[-N,N]} P(X_\sigma+y\in K)\psi(y)dy \\ &\approx \int_{[-N,N]} P(X_\sigma\in K)\psi(y)dy \\ &\approx P(X_\sigma\in K), \end{align*}

but here I just naively change the order of limits with respect to $N$ and $\sigma$. Even if we assume that this can be fixed and

$$ \frac{P(X_\sigma+Y\in K)}{P(X_\sigma\in K)} \xrightarrow{\sigma\to\infty} 1 $$

actually holds, is this enough to conclude that (1) is true?
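As a closed-form sanity check in the case $n=1$ with $Y\sim\mathcal N(0,1)$ (an illustration, not a proof): $\Phi_\sigma$ is then the density of $\mathcal N(0,\sigma^2+1)$, and the ratio in $(1)$ can be evaluated directly:

```python
import math

def normal_pdf(x, var):
    return math.exp(-x * x / (2 * var)) / math.sqrt(2 * math.pi * var)

x = 2.0
for sigma in (1.0, 10.0, 100.0, 1000.0):
    ratio = normal_pdf(x, sigma**2 + 1) / normal_pdf(x, sigma**2)
    print(sigma, ratio)  # the ratio tends to 1 as sigma grows
```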

A modified median

Posted: 01 Aug 2021 08:24 PM PDT

Assume we have real numbers $x_1 < x_2 < \dots < x_n$. Consider the following function which returns the average distance of a point $t$ from $x_1,\dots,x_n.$ $$ D_1(t) = \frac{1}{n}\sum_{i=1}^n|x_i - t|. $$

It is well known that $D_1(t)$ is minimized at a median of $x_1,\dots,x_n$.

I am interested in the centrality parameter that is obtained when the mean is replaced by the median above (you may assume $n$ is odd and $n \geq 3$ if it helps).

So, my question is this : Is there a simple expression for $$ \arg\min D_2(t) $$ where $$ D_2(t) = \operatorname{median}\{|x_1-t|,\dots,|x_n-t|\}? $$

I have done some numerical experiments. I think that there is always a minimum at a point of the form $\frac{x_i + x_j}{2}$ where $i \leq j$ and $\frac{x_i+x_j}{2}\leq x_{i+1}.$

This problem is combinatorial because one has to keep track of how the order of $|x_i - t|$'s change as $t$ changes.

Update: User Joe has conjectured that if $n=2k+1$ is odd then the minimum occurs at $t = \frac{x_m + x_{m+k}}{2}$, where $m = \arg\min_m (x_{m+k} - x_m)$.

The plot of the objective function for $1, 2, 3, 10, 11, 12, 20$ shows that a median may not be the minimizer. The conjecture is valid here ($k=3$): a minimum occurs at $\frac{x_1+x_4}{2}=5.5$.

An example where median is not the minimizer, conjecture holds (k=3)

Another example where the conjecture is valid. Here $k=4$ and the minimum occurs at $\frac{x_1+x_5}{2}=3.$

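The numerical experiment for the data $1,2,3,10,11,12,20$ can be reproduced with a brute-force grid search (a sketch; the grid step and range are arbitrary choices):

```python
import statistics

xs = [1, 2, 3, 10, 11, 12, 20]

def D2(t):
    """Median of the distances |x_i - t|."""
    return statistics.median(abs(x - t) for x in xs)

grid = [i / 100 for i in range(0, 2101)]   # t in [0, 21], step 0.01
best_t = min(grid, key=D2)
print(best_t, D2(best_t))   # 5.5 and 4.5: the midpoint (x_1+x_4)/2, not the median 10
```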

Line Intersect Plane Z=0

Posted: 01 Aug 2021 08:02 PM PDT

I've got a problem with finding the point where a line intersects a plane. I can't find any information about it; I can only find information about a plane that is not flat.

I have a graphical representation of my problem: http://faceting.vondsten.nl/hmm.cgi (the balls should end up on the plane but they don't, so I used 1/2 of the line to see if my line parametrization was working).

This is my situation: I have a plane with 4 points (x, y, z): (-1, 1, 0), (1, 1, 0), (1, -1, 0), (-1, -1, 0).

I have a line drawn through the 2 points:

P1 = (0.5, 5.55111512312578e-017, 0.207106781186547)

P2 = (0.5, 0.707106781186547, -0.5)

I get as far as making a parametric form, (x, y, z) + t[x - a, y - b, z - c], but when I try to use the equation x + y + z = 0, the answer won't give 0 for the z.

I don't know what or where I need to look to get my calculation right. I have a feeling that x + y + z = 0 isn't going to work.

How can I calculate the intersection with a plane that is given by 4 points? The x and y can also be changed; they are infinite.

Sorry for my typos; I'm Dutch and I did not have any higher math in school, so maybe my thinking is completely wrong.

Many thanks,

Christian

Calculation:

x = 0.5

y = 5.55111512312578e-017

z = 0.207106781186547

a = 0.5

b = 0.707106781186547

c = -0.5


Making the parametric direction:

xt = (x - a)

yt = (y - b)

zt = (z - c)


Calculation with plane (x + y + z = 0):

i = (xt + yt + zt)

l = (x + y + z)

o = (i + l)


x = (x + (xt * o)) = Answer: 0.5

y = (y + (yt * o)) = Answer: 0.207106781186548

z = (z + (zt * o)) = Answer: -7.21644966006352e-016, which is NOT zero


I know for sure that this part is wrong:

i = (xt + yt + zt)

l = (x + y + z)

o = (i + l)
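For comparison, here is a parametric line-plane computation of the kind being attempted (a sketch; it uses the plane $z = 0$, since all four corner points have $z = 0$, rather than $x + y + z = 0$):

```python
# Line through P1 and P2: P(t) = P1 + t * (P2 - P1); solve P(t).z = 0 for t.
P1 = (0.5, 5.55111512312578e-017, 0.207106781186547)
P2 = (0.5, 0.707106781186547, -0.5)

d = tuple(b - a for a, b in zip(P1, P2))   # direction vector P2 - P1
t = -P1[2] / d[2]                          # from P1.z + t * d.z = 0
point = tuple(p + t * di for p, di in zip(P1, d))
print(point)  # the z-component is 0 up to rounding
```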

Convex Hull = Boundary+Segments

Posted: 01 Aug 2021 08:24 PM PDT

If $A\subseteq\mathbb{R}^n$ is a nonempty set and $H$ is the convex hull of $A$, how can I prove that the boundary of $H$ consists only of points that lie in the boundary of $A$ and of segments joining points of the boundary of $A$?

Why is the imaginary axis plotted perpendicular to the real axis?

Posted: 01 Aug 2021 08:25 PM PDT

The negative real axis is plotted opposite the positive real axis, which makes sense, but I do not understand why the imaginary numbers are plotted perpendicular to the real axis.

Finding $\tan t$ if $t=\sum_{i=1}^{\infty}\tan^{-1}\bigl(\frac{1}{2i^2}\bigr)$

Posted: 01 Aug 2021 08:19 PM PDT

I am solving this problem.

Problem. If $$\sum_{i=1}^{\infty} \tan^{-1}\biggl(\frac{1}{2i^{2}}\biggr)= t$$ then find the value of $\tan{t}$.

My solution is like the following: I can rewrite: \begin{align*} \tan^{-1}\biggl(\frac{1}{2i^{2}}\biggr) & = \tan^{-1}\biggl[\frac{(2i+1) - (2i-1)}{1+(2i+1)\cdot (2i-1)}\biggr] \\\ &= \tan^{-1}(2i+1) - \tan^{-1}(2i-1) \end{align*}

and when I take the summation the only term which remains is $-\tan^{-1}(1)$, from which I get the value of $\tan{t}$ as $-1$. But the answer appears to be $1$. Can anyone help me with this?
