Wednesday, June 1, 2022

Recent Questions - Mathematics Stack Exchange

Set on which softmax converges uniformly

Posted: 01 Jun 2022 01:47 PM PDT

Fix a positive integer $n$, a scale $s>0$, and consider the softmax function, defined for any $x\in \mathbb{R}^n$ by $$ f_s(x) = \left(\frac{e^{sx_i}}{\sum_{j=1}^n e^{sx_j}} \right)_{i=1}^n. $$ It is known that $f_s$ converges pointwise to the $\operatorname{argmax}$ function $$ \operatorname{argmax}(x)= (I_{x_i=\max_{j=1,\dots,n}\, x_j})_{i=1}^n $$ as $s$ tends to infinity.

My question: is there an infinite (in cardinality) compact subset of $[0,1]^n$ on which $f_s$ converges uniformly to $\operatorname{argmax}$?

I'm trying to build the set by fattening the discrete set of vectors $\{(I_{i=j})_{i=1}^n\}_{j=1}^n$ just enough that it remains disconnected but is no longer discrete; however, I can't manage to show the uniform convergence. Is the conclusion actually possible?
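
(An editorial aside, not part of the original post: the short Python sketch below illustrates the pointwise convergence and why near-ties are the quantitative obstruction any uniform-convergence argument has to control; the test vectors `x_sep` and `x_tie` are made-up examples.)

```python
import numpy as np

def softmax(x, s):
    # f_s(x); subtracting the max avoids overflow and does not change the value.
    z = np.exp(s * (x - x.max()))
    return z / z.sum()

def argmax_indicator(x):
    # The limiting vector (I_{x_i = max_j x_j})_i from the question.
    return (x == x.max()).astype(float)

x_sep = np.array([0.2, 0.9, 0.1])          # maximum well separated
x_tie = np.array([0.5, 0.5 - 1e-6, 0.1])   # near-tie: needs s >> 1/gap

for s in [10, 100, 1000, 10**4, 10**7]:
    e1 = np.abs(softmax(x_sep, s) - argmax_indicator(x_sep)).max()
    e2 = np.abs(softmax(x_tie, s) - argmax_indicator(x_tie)).max()
    print(f"s={s:>8}: sep error {e1:.2e}, near-tie error {e2:.2e}")
```

The near-tie error stays close to $1/2$ until $s$ is large compared with the reciprocal of the gap between the top two coordinates, which is exactly what a candidate set must keep bounded away from zero.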

What does it mean for arithmetic truth to be definable in the language of set theory?

Posted: 01 Jun 2022 01:46 PM PDT

Tarski famously proved that arithmetic truth is not definable in the language of arithmetic, i.e., there is no predicate $T$ such that $T(|\sigma|)$ is true in the standard model of arithmetic iff $\sigma$ is true in the standard model of arithmetic. Since there is no standard model of ZF, what does it mean for an arithmetic truth predicate $T$ to exist in the language of ZF?

Distribution of ratio of 2 non-standard normal random variables

Posted: 01 Jun 2022 01:41 PM PDT

I know that the ratio of 2 standard normal RVs has a standard Cauchy distribution.

I have data from $X_1, ..., X_5$ and $Y_1, ..., Y_5$, all normal. I want the distribution of $R = \bar{X} \div \bar{Y}$.

$\bar{X}$ and $\bar{Y}$ are not standardized, so what is the distribution of their ratio?
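
(An aside, not from the original post: in general the ratio of two non-centered, non-unit-variance normals is not Cauchy, and a Monte Carlo run gives a quick feel for its shape. All parameters below are hypothetical placeholders.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters: Xbar ~ N(mu_x, sx^2/5), Ybar ~ N(mu_y, sy^2/5)
# for samples of size 5, as in the question.
mu_x, sx, mu_y, sy, n = 2.0, 1.0, 3.0, 0.5, 5

xbar = rng.normal(mu_x, sx / np.sqrt(n), size=1_000_000)
ybar = rng.normal(mu_y, sy / np.sqrt(n), size=1_000_000)
r = xbar / ybar

# Empirical quantiles of the ratio; heavy tails appear whenever
# P(Ybar near 0) is non-negligible.
print(np.percentile(r, [1, 25, 50, 75, 99]))
```

Closed-form densities for this ratio exist in the literature (the normal-ratio distribution), but the simulation already shows how far from Cauchy the non-standardized case can be.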

Find a non-zero non-maximal proper ideal in the ring of real functions

Posted: 01 Jun 2022 01:35 PM PDT

Consider the ring of real functions (not necessarily continuous) with the usual sum and product operations.
I was able to prove that every set of the form $$ M_r := \left\{f \mid f(r)=0\right\}, $$ where $r$ is a real constant, is a maximal ideal.
Now I'm asked to find some other ideal that is non-zero, proper and not maximal.
I know it must not contain any maximal ideal, and I believe this has something to do with discontinuous functions, but I couldn't come up with an example.

Spivak, Ch. 12, prob 10: Example of $f$ and $g$ such that $g$ takes on all values, and $f$, $g$, and $f \circ g$ are all differentiable everywhere?

Posted: 01 Jun 2022 01:31 PM PDT

I am interested in answering the following problem from chapter 12 "Inverse Functions" of Spivak's Calculus:

12-10. As a follow-up to problem 10-17, what additional conditions on $g$ will ensure that $f$ is differentiable?

Here is the cited problem 10-17, from chapter 10 "Differentiation"

10-17. Give examples of functions $f$ and $g$ such that $g$ takes on all values, and $f \circ g$ and $g$ are differentiable, but $f$ isn't differentiable. (The problem becomes trivial if we don't require that $g$ takes on all values; $g$ could just be a constant function, or a function that only takes on values in some interval $(a,b)$, in which case the behavior of $f$ outside of $(a,b)$ would be irrelevant).

Here is a solution to 10-17

$$g(x)=x^3$$

$$f(x)=\sqrt[3]{x}$$

Note that $f'(x)=\frac{1}{3\sqrt[3]{x^2}}$, which is undefined at $0$. Therefore, $f$ is not differentiable on the entirety of its domain.

$(f \circ g)(x)=f(g(x))=x$, which is differentiable.

Now, in the context of chapter 12, "Inverse Functions", we realize that $g(x)$ and $f(x)$ are actually inverse functions of one another.

Using a theorem about the derivative of an inverse function, we see that, since $g'(x)=3x^2$ and $g'(0)=0$, $g^{-1}(x)=f(x)$ doesn't have a derivative at $0$, as expected.

So, if we want to make $f$ differentiable, we need to choose some $g$ such that $g' \neq 0$.

This is essentially what the solution manual says, though it reasons as follows

$$f=(f \circ g) \circ g^{-1}$$

Therefore, by the Chain Rule, if $g^{-1}$ is differentiable at $a$ and $f\circ g$ is differentiable at $g^{-1}(a)$, then $f=(f \circ g) \circ g^{-1}$ is differentiable at $a$. We know that $f\circ g$ is differentiable everywhere, so the only condition we need is that $g^{-1}$ is differentiable, which happens when $g'(a) \neq 0$ for all $a$.

The question is: what is an example of $f$ and $g$ that fulfills all the required conditions: $f$ and $g$ take on all values, and $f\circ g$, $g$, and $f$ are all differentiable?

I had originally thought of $g(x)=x^3+x$, which takes on all values and is differentiable. Then, as a first attempt, I tried to compute $f=g^{-1}$. However, this results in the cubic equation

$$x=[(g^{-1})(x)]^3+(g^{-1})(x)$$

Intuitively (i.e., from the graph) it seems clear that $g^{-1}$ is both continuous and takes on all values. But I'd like to prove this if possible, though it seems like the path to finding an expression for $g^{-1}$ is very difficult, so perhaps this is a bad example of $g$ to use.
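
(An aside on this example, not from the original post: since $g'(x)=3x^2+1\ge 1$, $g$ is a strictly increasing continuous bijection of $\mathbb{R}$, so $g^{-1}$ exists, is continuous, and takes on all values; Cardano's formula for the depressed cubic even gives $g^{-1}(y)=\sqrt[3]{y/2+\sqrt{y^2/4+1/27}}+\sqrt[3]{y/2-\sqrt{y^2/4+1/27}}$. The sketch below, assuming nothing beyond the problem statement, inverts $g$ numerically by bisection and checks the inverse-function-theorem derivative.)

```python
def g(x):
    return x**3 + x

def g_inv(y, lo=-1e6, hi=1e6, tol=1e-12):
    # Bisection: g is strictly increasing (g'(x) = 3x^2 + 1 >= 1 > 0),
    # so g^{-1}(y) is the unique root of g(x) - y = 0.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Sanity checks: g(g_inv(y)) == y, and (g_inv)'(y) = 1 / g'(g_inv(y)).
for y in [-5.0, 0.0, 2.0, 10.0]:
    x = g_inv(y)
    h = 1e-6
    fd = (g_inv(y + h) - g_inv(y - h)) / (2 * h)   # finite difference
    exact = 1.0 / (3 * x**2 + 1)                   # inverse function theorem
    print(y, g(x), fd, exact)
```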

Expected value of tries to draw a red-colored marble, no marbles returned to the jar

Posted: 01 Jun 2022 01:28 PM PDT

What is the expected value of the number of tries it'll take to draw $1$ red-colored marble from a jar of $R$ red marbles and $N-R$ non-red marbles (i.e. $N$ total marbles)? The drawn marbles are not returned to the jar.

I've translated my actual problem to the marbles scenario because I thought I'd find my answer this way, but it seems searching for a statistics question that exactly matches your scenario is kind of hard. Sorry if this is a duplicate.

I'm trying to calculate something about UUIDs, so in my case $R \approx 31\times10^9$ and $N = 16^{32}$. And I just need to know the order of magnitude of the answer, which I can deduce from a general formula.
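
(An aside with a sanity check, not part of the original post: the draw count here follows a negative hypergeometric distribution, whose mean has the standard closed form $\frac{N+1}{R+1}$. A quick Monte Carlo comparison on small numbers, in Python:)

```python
import random

def expected_draws_mc(N, R, trials=200_000):
    # Simulate drawing without replacement until the first red marble.
    total = 0
    for _ in range(trials):
        jar = [1] * R + [0] * (N - R)    # 1 = red, 0 = non-red
        random.shuffle(jar)
        total += jar.index(1) + 1        # position of the first red
    return total / trials

N, R = 20, 4
print(expected_draws_mc(N, R))   # Monte Carlo estimate
print((N + 1) / (R + 1))         # closed form: (N+1)/(R+1) = 4.2
```

Taken at face value with the question's numbers, $(N+1)/(R+1)\approx 16^{32}/(31\times 10^9)\approx 1.1\times 10^{28}$ draws, which settles the order of magnitude.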

Modified Bessel function integral representation for $n$ integer

Posted: 01 Jun 2022 01:28 PM PDT

I was able to prove formula 10.32.9 of https://dlmf.nist.gov/10.32 for $\nu \not \in \mathbb{Z}$. I want to prove that this formula is still valid when $\nu=n \in \mathbb{Z}$. From the theory of Bessel functions it is known that $K_n(z)=\lim_{\nu \to n} K_{\nu}(z)$, and so I wish to prove that

$$K_n(z)=\lim_{\nu \to n} \int_{0}^{\infty} e^{-z\cosh(t)}\cosh(\nu t)dt = \int_{0}^{\infty} e^{-z\cosh(t)}\cosh(n t)dt.$$

My idea is to use the Lebesgue dominated convergence theorem to pass the limit inside the integral sign, but I'm failing to show that the integrand is bounded by an integrable function as $\nu \to n$. If possible, I need a little help with this step. Thank you for the help; every hint will be appreciated.
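
(An aside, not part of the original question: for real $z>0$ and all $\nu$ with $|\nu|\le n+1$, one candidate dominating function is $e^{-z\cosh t}\cosh((n+1)t)$, which is integrable on $(0,\infty)$ since $z\cosh t$ grows like $ze^{t}/2$. The sketch below numerically compares the integral representation with SciPy's $K_\nu$; the integrand is rewritten as a sum of exponentials so nothing overflows for large $t$.)

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

def integrand(t, n, z):
    # e^{-z cosh t} cosh(n t), written so the -z cosh t term dominates
    # and the evaluation never overflows for large t.
    return 0.5 * (np.exp(n * t - z * np.cosh(t))
                  + np.exp(-n * t - z * np.cosh(t)))

for n in [0, 1, 3]:
    for z in [0.5, 1.0, 2.0]:
        val = quad(integrand, 0, np.inf, args=(n, z))[0]
        print(n, z, val, kv(n, z))   # the two columns should agree
```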

Solving an integral equation using the Banach fixed point theorem

Posted: 01 Jun 2022 01:22 PM PDT

Prove that the integral equation $$ f(x)=\int_{0}^{x} \frac{1}{(1+s)\left(1+f(s)^{2}\right)} d s \quad \text { for all } \quad x \in[0,1] $$ has a unique solution $f$ in $\operatorname{C}([0,1])$.

So I think I can use the Banach fixed point theorem here. I am having difficulty showing that the map $Tf(x):=\int_{0}^{x} \frac{1}{(1+s)\left(1+f(s)^{2}\right)} d s$ is a contraction map.

$||Tf-Tg||=\sup_{x\in[0,1]}|\int_0^x\frac{1}{1+s}\frac{g(s)^2-f(s)^2}{(1+f(s)^2)(1+g(s)^2)} ds|\le \sup_{x\in[0,1]} (x-0)[\sup_{s\in[0,x]}\frac{1}{1+s}\frac{|g(s)^2-f(s)^2|}{(1+f(s)^2)(1+g(s)^2)}]$

Any help with how to get a constant $K<1$?
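
(A possible route, offered as a hint with the computation to be checked: $u\mapsto \frac{1}{1+u^2}$ is Lipschitz on $\mathbb{R}$ with constant $\sup_u \frac{2|u|}{(1+u^2)^2}=\frac{3\sqrt3}{8}$, so $\|Tf-Tg\|\le \left(\int_0^1\frac{ds}{1+s}\right)\frac{3\sqrt3}{8}\|f-g\| = \frac{3\sqrt 3}{8}\ln 2\,\|f-g\|\approx 0.45\,\|f-g\|$. The discretized fixed-point iteration below illustrates the geometric contraction numerically.)

```python
import numpy as np

# Fixed-point iteration for Tf(x) = int_0^x ds / ((1+s)(1+f(s)^2)) on [0,1],
# discretized on a grid with cumulative trapezoidal integration.
x = np.linspace(0, 1, 2001)

def T(f):
    integrand = 1.0 / ((1 + x) * (1 + f**2))
    out = np.zeros_like(f)
    out[1:] = np.cumsum((integrand[1:] + integrand[:-1]) / 2 * np.diff(x))
    return out

f = np.zeros_like(x)
for k in range(8):
    f_new = T(f)
    print(k, np.max(np.abs(f_new - f)))   # sup-norm change shrinks geometrically
    f = f_new
```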

What does an element in the cohomology of a sequence of $R$-modules look like?

Posted: 01 Jun 2022 01:22 PM PDT

Let us assume $I\subset \Bbb{Z}$ and $(M^i)_{i\in I}$ is a sequence of $R$-modules with associated $R$-module homomorphisms $$f^i:M^i\rightarrow M^{i+1}.$$ Then we have defined the cohomology $$H^j\left(\left(M^i\right)_{i\in I}\right):=\ker(f^j)/\operatorname{Im}(f^{j-1}).$$ I know that both $\ker(f^j)$ and $\operatorname{Im}(f^{j-1})$ are submodules, since $f^j$ and $f^{j-1}$ are $R$-module homomorphisms. But now I wonder what an element of this quotient looks like. Is it similar to the quotient of a ring by an ideal? I mean, if I take $\bar x\in H^j\left(\left(M^i\right)_{i\in I}\right)$, then $x\in \ker(f^j)$ and $\bar x=x+\operatorname{Im}(f^{j-1})$?

Thanks for your help

The minimal normal subgroup of a solvable group is abelian

Posted: 01 Jun 2022 01:39 PM PDT

The minimal normal subgroup of a solvable group is abelian.

I need to prove that this holds. However, I don't see where the solvability of the group is needed. Could you point out my mistake?

My proof goes as follows:

Let $H\triangleleft G$ be a minimal normal subgroup of $G$. Suppose by contradiction $H$ isn't abelian. Then there exist $h_{1},h_{2}\in H$ s.t. $[H,H]\ni\left[h_{1},h_{2}\right]\neq e$. Since the commutator subgroup is always normal in the group, we get that $\{e\}\neq [H,H]\triangleleft H$ in contradiction to $H$ being a minimal subgroup.

Note - our exercises tend to be imprecise, so the question itself might be faulty.

Wiener process (Mistake in solution)

Posted: 01 Jun 2022 01:18 PM PDT

My question is divided into $3$ points: Task, My solution, Mistake.

====================================================================

Task:

Calculate the probability that the random trajectory of the Wiener process takes a value greater than $1$ at time $1$ and a value less than $1$ at time $2$.

My solution:

Let $W_t$ be a Wiener process.

We need to find $Pr[W_1>1 \:\&\:W_2<1]$.

Let $X_1$ and $X_2$ be two independent events, because they do not overlap in time, where

$X_1=[W_1>1]=[W_t-W_s>1]\sim N(0,\:t-s)=[W_1-W_0>1]\sim N(0,\:1-0)=[W_1-W_0>1]\sim N(0,\:1)$

$X_2=[W_2<1]=[W_t-W_s<1]\sim N(0,\:t-s)=[W_2-W_1<1]\sim N(0,\:2-1)=[W_2-W_1<1]\sim N(0,\:1)$

Using the probability density function

$f(x)=\frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac{1}{2\sigma^2}(x-\mu)^2}$

where $-\infty<x<\infty$, $-\infty<\mu<\infty$, and $\sigma>0$, we find the probabilities of $X_1$ and $X_2$.

For $X_1$ we get:

$Pr[X_1]=Pr[W_1-W_0>1]=\int_{1}^{\infty}\frac{1}{\sqrt{2\pi}}e^{-\frac{z^2}{2}}dz\,\approx\,0.15866$

For $X_2$ we get:

$Pr[X_2]=Pr[W_2-W_1<1]=\int_{-\infty}^{1}\frac{1}{\sqrt{2\pi}}e^{-\frac{z^2}{2}}dz\,\approx\,0.84134$

Finally, to find the probability of $[W_1>1 \:\&\:W_2<1]$ we need to multiply the probability of $X_1$ by the probability of $X_2$. We get

$Pr[W_1>1 \:\&\:W_2<1]=0.15866\cdot 0.84134\,\approx\,0.13349$

Mistake:

It is not true that an event $[W_t>k]$ is the same event as an event $[W_t-W_{t-1}>k]$.

The first is an event about the value of the process at a given time; the second is an event about an increment of the process.

As a result, independence does not apply: the events $[W_{t}>k]$ and $[W_{t+1}<k]$ are not independent.

====================================================================

I would be grateful for help in resolving the problem.
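
(An aside suggesting the standard fix for the modelling step, hedged since it stops short of a full solution: write $W_2=W_1+(W_2-W_1)$ with an increment independent of $W_1$; then $\Pr[W_1>1,\ W_2<1]=\int_1^\infty \varphi(u)\,\Phi(1-u)\,du$, with $\varphi,\Phi$ the standard normal density and CDF. A Monte Carlo sketch:)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2_000_000

w1 = rng.standard_normal(n)        # W_1 ~ N(0, 1)
w2 = w1 + rng.standard_normal(n)   # W_2 = W_1 + independent N(0, 1) increment

print(np.mean((w1 > 1) & (w2 < 1)))        # joint probability, correctly coupled
print(np.mean(w1 > 1) * np.mean(w2 < 1))   # naive product: noticeably different
```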

Free R-module and linear combination

Posted: 01 Jun 2022 01:15 PM PDT

Currently I'm stuck at this theorem: let $f$ be a volume form on $V\simeq R^n$, where $V$ is a free $R$-module with $\dim V = n$; then $v =(v_1,\ldots,v_n)$ is a basis of $V$ $\Leftrightarrow$ $f(v)\ne 0$. The proof itself is obvious, but the $\Leftarrow$ direction requires $R$ to be a field. Why is that? My conjecture: it is connected with linear combinations with coefficients in $R$. Can anyone guide me through?

How can least squares be solved using orthogonal projection?

Posted: 01 Jun 2022 01:13 PM PDT

Consider the setup where I have my data in a vector $y$, some functions in $A$, and their coefficients $c$ collected in a vector $x$.


The problem was to find the function that estimates some data. I solved this via $Ax=b$: the coefficients are found by $A^+b=x$, where $A^+$ is the pseudoinverse. Thus I have a matrix $A$ to estimate $y$ with coefficients in $x$.

The above method works. My question:

How can I do this using a projection matrix? This is my attempt: to map $b$ onto $\operatorname{range}(A)$, it can be projected down using a projection matrix $P$. Thus it can be stated that $Pb=Ax$.

Then it can be stated that $A^+Pb=x$. However, this did not work, as the dimensions did not check out. So how would this correctly be done? The formula for the projection matrix I used was $P=A(A^*A)^{-1}A^*$. Also, the book I am following stated there are many ways to find the least-squares solution: QR, Cholesky, SVD, $Pb=Ax$, and the one I used, $A^+b=x$.
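
(An aside with a NumPy sketch on hypothetical data, for a tall $A$ with full column rank: the two routes agree because $P=A(A^*A)^{-1}A^*=AA^+$ and hence $A^+P=A^+$, and the dimensions do work out: $P$ is $m\times m$, $Pb\in\mathbb{R}^m$, and $A^+\in\mathbb{R}^{n\times m}$.)

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 50, 4                      # m data points, n basis functions (m > n)
A = rng.standard_normal((m, n))   # full column rank with probability 1
b = rng.standard_normal(m)        # the data vector (y in the question)

A_pinv = np.linalg.pinv(A)                 # A^+, shape (n, m)
P = A @ np.linalg.inv(A.T @ A) @ A.T       # projector onto range(A), (m, m)

x1 = A_pinv @ b                            # direct pseudoinverse solution
x2 = A_pinv @ (P @ b)                      # project first, then solve
x3 = np.linalg.lstsq(A, b, rcond=None)[0]  # library least squares

print(np.allclose(x1, x2), np.allclose(x1, x3))  # True True
print(np.allclose(A @ x2, P @ b))                # Ax solves Ax = Pb exactly
```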

Linear Optimization: Kuhn-Tucker problem with 1 constraint

Posted: 01 Jun 2022 01:12 PM PDT

The question is a Karush-Kuhn-Tucker problem with one constraint, as follows:

There are two bread manufacturers, A and B, producing bread ($x$) to sell to their customers. They each need 1 part flour and 1 part water for their dough. Flour storage is limited: 100 for A and 80 for B. Both have unlimited storage of water. They both have costs of $C=100\cdot x_{A,B}$, $x$ being the amount of bread produced. The price-sales relation for A is: $$ p_A(x_A)=300-x_A $$ And for B: $$p_B(x_B)=200-x_B$$

They both have the possibility to buy dough from a third party, whose capacity is 50 measures of flour, for an unknown price. They are both trying to maximize their profit individually and have to buy from the same third party market. The companies don't have to use up all of their flour if it's not economically feasible.

The question is: how much will A and B produce, and how much will they buy from the third party, at which price?

Answer: I can't seem to define the problem. I don't know how to solve it when I don't know the price $p$ of the flour (how do I integrate the price of the flour into the Lagrange function?). Just taking derivatives as usual doesn't give me a solution because of the extra variable... This is how far I got:

$$f_A=(300-x_A)\cdot x_A+\mu\cdot(100-x_A+y_A)-100\cdot x_A-p\cdot y_A$$ $$f_B=(200-x_B)\cdot x_B+\mu\cdot(80-x_B+y_B)-100\cdot x_B-p\cdot y_B$$
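
(An editorial aside: one way to explore the model numerically is to treat the third-party price $p$ as a parameter and solve each firm's constrained profit maximization for each fixed $p$. The SciPy sketch below does this for firm A only, transcribing the profit and flour constraint from the setup above; the price grid is hypothetical, and this is a starting point rather than the full equilibrium computation.)

```python
from scipy.optimize import minimize

# Firm A's problem for a fixed (hypothetical) third-party dough price p:
#   max  (300 - xA)*xA - 100*xA - p*yA
#   s.t. xA <= 100 + yA        (own flour storage plus bought dough)
#        xA >= 0, 0 <= yA <= 50 (third-party capacity)
def solve_A(p):
    obj = lambda v: -((300 - v[0]) * v[0] - 100 * v[0] - p * v[1])
    cons = [{"type": "ineq", "fun": lambda v: 100 + v[1] - v[0]}]
    res = minimize(obj, x0=[50.0, 0.0], bounds=[(0, None), (0, 50)],
                   constraints=cons)
    return res.x, -res.fun

for p in [10, 50, 100]:
    (xA, yA), profit = solve_A(p)
    print(p, round(xA, 2), round(yA, 2), round(profit, 2))
```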

Upper bound for $\sum_{\rho} x^{\rho}/\rho$

Posted: 01 Jun 2022 01:11 PM PDT

Let $\rho=\beta+\gamma i$ be the non-trivial zeros of the Riemann zeta function and $T>0$. What upper bounds for the sum $$\sum_{\rho} \frac{x^{\rho}}{\rho}=\lim_{T \to \infty} \sum_{|\gamma|<T} \frac{x^{\rho}}{\rho}$$ exist? Since $$\psi(x)=x-\sum_{\rho} \frac{x^{\rho}}{\rho}+O(1)$$ and the Prime Number Theorem has $\psi(x)\sim x$, I expect that $\sum_{\rho} \frac{x^{\rho}}{\rho}=O(x)$ but I do not know of any explicit upper bounds for the sum. For instance, $$\sum_{\rho} \frac{x^{\rho}}{\rho}<x\sum_{\rho} \frac{1}{\rho}=Cx,$$ for a well-known constant $C$, but I am not sure if the initial inequality holds. Is there a constant $B(b)$ such that $$\sum_{\rho} \frac{x^{\rho}}{\rho}<B(b)$$ for $x>b$?

Theorem 33.1 of Munkres’ Topology

Posted: 01 Jun 2022 01:18 PM PDT

Let $X$ be a normal space, let $A$ and $B$ be disjoint closed subsets of $X$. Let $[a,b]$ be a closed interval in the real line. Then there exists a continuous map $f:X\to [a,b]$ such that $f(x)=a$ for every $x$ in $A$, and $f(x)=b$ for every $x$ in $B$.

I have read the complete proof of this theorem. Everything in the proof depends on step $1$. Let $P= \Bbb{Q} \cap [0,1]$. In step $1$, our goal is the following: $\forall p\in P$, $\exists U_p \in \mathcal{T}_X$ such that if $q\in P$ with $p \lt q$, then $\overline{U_p} \subseteq U_q$. Since $P$ is countable, $P=\{ x_n \mid n\in \Bbb{N}\}$. Let $x_1=1$ and $x_2=0$. Define $U_1=X-B$.

$S_n$: $\forall n\in \Bbb{N}$, $\exists U_{x_n}\in \mathcal{T}_X$ such that if $p,q\in P_n=\{x_1,\ldots,x_n\}$ and $p\lt q$, then $\overline{U_p}\subseteq U_q$. I think statement $S_n$ assumes the existence of $U_{x_i}$, $\forall i\in \{1,\ldots,n-1\}$. Is my statement $S_n$ correct?

Base case: $n=2$. Since $X$ is $T_4$, $\exists U_0 \in \mathcal{N}_A$ such that $\overline{U_0}\subseteq U_1$. $P_2=\{x_1, x_2\}=\{1,0\}$. Thus $S_2$ holds. Inductive step: suppose $S_n$ is true. Then I don't satisfactorily understand the following paragraph of Munkres':

In a finite simply ordered set, every element (other than the smallest and the largest) has an immediate predecessor and an immediate successor (see Theorem 10.1). The number $0$ is the smallest element, and $1$ is the largest element, of the simply ordered set $P_{n+1}$, and $r=x_{n+1}$ is neither $0$ nor $1$. So $r$ has an immediate predecessor $p$ in $P_{n+1}$ and an immediate successor $q$ in $P_{n+1}$. The sets $U_p$ and $U_q$ are already defined, and $\overline{U_p}\subseteq U_q$ by the inductive hypothesis.

The rest of the proof is easy to follow. In step $4$, Munkres apparently shows $f:X\to \Bbb{R}$ is continuous; by Theorem 18.2(e), $f:X\to [0,1]$ is continuous. Say we were proving the continuity of $f$ in the conventional way: $f(x_0)=1$ and $(c,1] \in \mathcal{N}_{f(x_0)}$. Then what $U\in \mathcal{N}_{x_0}$ should we take such that $f(U)\subseteq (c,1]$?

Looking for a hint on the proof of this inequality

Posted: 01 Jun 2022 01:13 PM PDT

Let $ f(t) $ be a continuous function of $ t $ on $[0, T]$, and suppose that $$ \int_{t_1}^{t_2}f(\tau){\rm d}\tau<0 $$ for any $t_1<t_2$ in $[0,T]$. Then, I think, we have $ f(t)<0 $ for all $t\in [0,T]$.

Is this right and how to prove it?

Replacement holds in $L$

Posted: 01 Jun 2022 01:12 PM PDT

In Chapter 13 of Jech's Set Theory, he proves that $L$ is a model of ZF. For Replacement, he writes the following:

If a class $F$ is a function in $L$ then for every $X\in L$ there exists an $\alpha$ such that $F(X)=\{F(x):x\in X\}\subseteq L_\alpha$. Since $L_\alpha\in L$, this suffices.

(Here he is invoking a previous exercise that states that in order to have Replacement, it suffices to have the weaker assertion "If a class $F$ is a function, then $\forall X\,\exists Y \, F(X)\subseteq Y$")

To make sure I'm understanding: we first note, by (metamathematical) Replacement and the fact that $F$ is a function in $L$, that $F(X)$ is a set, and in particular a subset of $L$. By some sort of Hartogs-style argument this means it must in fact be contained in $L_\alpha$ for some $\alpha$ (although I'd appreciate some help fleshing out this argument).

The part where I'm getting stuck is understanding how this exercise applies, given that it was framed in terms of $V$. Just because $F(X)$ is a subset of some element of $L$, I don't immediately see why it has to be definable. Does it work to say that we can further pick some $\beta$ such that $X\subseteq L_\beta$ and $F(X)\subseteq L_\beta$, and then say that $F(X)$ is definable over $L_\beta$ via $F(X)=\{y\in L_\beta : \exists x\in L_\beta \,(x\in X\land y=F(x))\}$? Something feels sketchy about this but I'm not sure what.

Dividing by $b^n-1$ in base $b$ results in repeating decimals. Can this be proven with modular arithmetic?

Posted: 01 Jun 2022 01:45 PM PDT

I'm trying to do a video about mathematical discovery and proof. The idea goes something like this, with appropriate demonstrations along the way:

  • notice that $\frac19 = 0.\overline1$, $\frac29 = 0.\overline2$, etc.: any digit $d_0$ gives $\frac{d_0}9=0.\overline{d_0}$.
  • notice that $\frac{12}{99} = 0.\overline{12}$, $\frac{34}{99} = 0.\overline{34}$: any two digits $d_0$ and $d_1$ over two nines give $\frac{d_0d_1}{99}=0.\overline{d_0d_1}$.
  • same for $\frac{123}{999} = 0.\overline{123}$, $\frac{1234}{9999} = 0.\overline{1234}$, $\frac{12345}{99999} = 0.\overline{12345}$, and so on: for a number $n$ digits long, dividing by that many 9's, we always get $\frac{(d_0d_1d_2 \cdots d_n)}{(999 \cdots9)_n}=0.\overline{d_0d_1d_2 \cdots d_n}$.

Of course, the 9's aren't a coincidence; 9 is one less than the base of the number system. So it should be true that for any base, the above fractions can be modified (assuming they're legal fractions in the base). That means I should be able to prove for base $b$ (and here my notation gets a little iffy) $\frac{{d_0}_b}{b-1}=0.\overline{{d_0}}_b$, $\frac{{(d_0d_1)}_b}{b^2-1}=0.\overline{d_0d_1}_b$, and in general $\frac{{(d_0d_1d_2 \cdots d_n)}_b}{b^n-1}=0.\overline{(d_0d_1d_2 \cdots d_n)}_b$.

Mechanically you can see what's going on if you do ${12345}\div{99999}$ written out in long division (which I can't do in $\LaTeX$). Each step in the long division yields a rotated form of 12345 (i.e. 23451, 34512, etc.) and the final quotient is indeed $0.\overline{12345}$.

The in-general case is tricky since all the digits are different. You can see that $(d_0d_1d_2 \cdots d_n)_b$ divided by $b^n-1$ equals $0.\overline{(d_0d_1d_2 \cdots d_n)}_b$ if you again do long division, and you can see why the numbers rotate. But the best I can do is handwave over the $\cdots$ in the division.

I know the proofs of divisibility by 9 and, more generally, of divisibility by $b-1$ in base $b$. One uses mods and one uses induction.

I lean towards induction when I notice $\frac{\overbrace{111 \cdots 1}^{n}}{b^n-1}=\frac{b^{n-1}+b^{n-2}+ \cdots + b + 1}{b^n-1}=\frac{b^{n-1}+b^{n-2}+ \cdots + b + 1}{(b-1)(b^{n-1}+b^{n-2}+ \cdots + b + 1)}=\frac{1}{b-1}=0.\overline{1}$, but once you introduce different digits this really doesn't lead anywhere.

It's pretty clear that I don't need to introduce true decimals into the proof. If I stick to the realm of integers, I can use the scads of theorems I learned about mods in my Abstract Algebra course, but as of now nothing works out. And I've tried turning the infinite decimals into geometric sums, to no avail.

So I'm looking for help on proving $\frac{{(d_0d_1d_2 \cdots d_n)}_b}{b^n-1}=0.\overline{(d_0d_1d_2 \cdots d_n)}_b$.
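
(An aside sketching the geometric-series route the question is circling, offered as a hint: $\frac{1}{b^n-1}=\sum_{k\ge 1}b^{-nk}$, so for an $n$-digit numerator $N<b^n$ we get $\frac{N}{b^n-1}=\sum_{k\ge 1}N\,b^{-nk}$, and since $N<b^n$ each term drops one copy of $N$'s digit block into its own window of $n$ base-$b$ places with no carries, which is exactly the repeating expansion. The Python check below does the base-$b$ long division directly; the digits and base are made-up examples.)

```python
def base_b_digits(num, den, base, count):
    # First `count` base-`base` digits of num/den (0 <= num < den), by long division.
    digits = []
    r = num
    for _ in range(count):
        r *= base
        digits.append(r // den)
        r %= den
    return digits

b, block = 7, [2, 5, 3]            # the digit block 2,5,3 in base 7 (hypothetical)
n = len(block)
N = sum(d * b**(n - 1 - i) for i, d in enumerate(block))   # (253)_7 = 136

print(base_b_digits(N, b**n - 1, b, 12))   # [2,5,3, 2,5,3, 2,5,3, 2,5,3]
```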

How to compute inverse matrix with diagonal and symmetric multiplication plus a perturbation term on the diagonals?

Posted: 01 Jun 2022 01:26 PM PDT

Given a diagonal matrix $D_j \in \Bbb R^{n \times n}$ with non-zero singular values $\gamma_{1}, \dots, \gamma_{j}$, a fat matrix $X \in \Bbb R^{m \times n}$ with $m \ll n$ and $X^T X$ singular, and $\lambda > 0$, how do I compute the following matrix inverse?

$$\left( D_j X^T X D_j + \lambda I \right)^{-1}$$
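
(One standard route, offered as a suggestion since the question leaves the intended use open, is the Woodbury identity, which trades the $n\times n$ inverse for an $m\times m$ one: $$\left(D_jX^TXD_j+\lambda I\right)^{-1}=\frac{1}{\lambda}\left(I_n-D_jX^T\left(\lambda I_m+XD_j^2X^T\right)^{-1}XD_j\right).$$ A NumPy check on random placeholder data:)

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, lam = 10, 200, 0.5

X = rng.standard_normal((m, n))   # fat: m << n, so X^T X is singular
d = rng.standard_normal(n)
D = np.diag(d)                    # the diagonal matrix D_j

# Direct n x n inverse, O(n^3):
direct = np.linalg.inv(D @ X.T @ X @ D + lam * np.eye(n))

# Woodbury: only an m x m system is inverted, O(m^3) plus matrix products:
XD = X @ D
small = np.linalg.inv(lam * np.eye(m) + XD @ XD.T)   # XD @ XD.T = X D^2 X^T
woodbury = (np.eye(n) - XD.T @ small @ XD) / lam

print(np.allclose(direct, woodbury))   # True
```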

When can I apply the trapezoidal rule?

Posted: 01 Jun 2022 01:22 PM PDT

An artificial lake has the shape illustrated below, with adjacent measurements 20 feet apart. Use a suitable numerical method to estimate the surface area of the lake.


I know how to solve this problem: we assign the adjacent measurements to a parameter $x_i$ and the corresponding lake width to $f(x_i)$, with $x_0=0$, $f(x_0)=0$, $x_1=20$, $f(x_1)=30$, and so on up to $f(x_7)$, as the lake is divided into 7 sections. Then we use the trapezoidal rule, since the number of sections is odd.

Now, what I don't really understand is why the trapezoidal rule can be applied to this problem, and when is it not viable?
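
(An illustrative aside: the composite trapezoidal rule only needs function values at equally spaced stations, so it applies whenever the widths are sampled on a uniform grid; its error scales with $h^2$ and the curvature of the boundary, so it degrades when the shoreline bends sharply between stations. A sketch with hypothetical widths — only $f(x_1)=30$ is given above; the rest are placeholders:)

```python
# Composite trapezoidal rule with stations h = 20 ft apart.
# Only f(x_1) = 30 comes from the problem; the other widths are placeholders.
widths = [0, 30, 42, 50, 46, 38, 25, 0]    # f(x_0), ..., f(x_7), in feet
h = 20.0

area = h * (widths[0] / 2 + sum(widths[1:-1]) + widths[-1] / 2)
print(area, "square feet (estimate)")
```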

For a map of coordinate rings, what is the "corresponding map of algebraic sets"?

Posted: 01 Jun 2022 01:39 PM PDT

For each of the given maps of coordinate rings, describe the corresponding map of algebraic sets.

$\mathbb{C}[x,y] \to \mathbb{C}[t]$ with $x \to t, y \to t$

$\mathbb{C}[t] \to \mathbb{C}[x,y]$ with $t \to x$

$\mathbb{C}[t] \hookrightarrow \mathbb{C}[t,x,y]/\langle xy -t \rangle$

What does "corresponding map" mean? There's a suggestion "compare to the corresponding maps of Spec", which are defined by $\phi : R \to S \Rightarrow \phi^\# : \text{Spec }S \to \text{Spec }R$ by $\phi^\#(P) = \phi^{-1}(P).$ I know how to compute these, but it doesn't help me reverse engineer what the algebraic sets map might be.

Update: I believe for $\phi : R \to S,$ the correspondence is either $V(I) \to V(\phi(I))$ or contravariantly, $V(I) \to V(\phi^{-1}(I)).$ Which one is better, and could there be a different interpretation?

Directional derivative of $x \mapsto A(x):= x_1 A_1 + x_2 A_2 + \dots + x_n A_n $

Posted: 01 Jun 2022 01:20 PM PDT

Let $\mathcal{S}_{m \times m}$ denote the space of real valued symmetric $m \times m$ matrices.

Suppose $A_1, A_2 , \dots, A_n \in \mathcal{S}_{m \times m}$ are such symmetric $m \times m$-matrices.

Now consider $A: \mathbb{R}^{n} \to \mathcal{S}_{m \times m}$ given by

$$ x \mapsto A(x):= x_1 A_1 + x_2 A_2 + \dots + x_n A_n \,. $$

Note that $x_i$ is the $i$-th entry of the vector $x \in \mathbb{R}^n$ and thus a scalar. Therefore $x_i A_i$ is a scalar-matrix product.


Question: I am looking for the directional derivative

$$ A^{\prime}(x;d) $$

of $A$ in $x \in \mathbb{R}^{n}$ in the direction of $d \in \mathbb{R}^{n}$.


Apparently $A$ is linear and therefore differentiable.

Therefore I think that $A^{\prime}(x;d)$ can be expressed in terms of the derivative $A^\prime(x)$, but I have no clue what this could be... Any help is much appreciated.
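
(An aside with the natural guess and a numeric check, the matrices being random placeholders: for a linear map the difference quotient is constant in $t$, $\frac{A(x+td)-A(x)}{t}=A(d)$, so one expects $A'(x;d)=A(d)=d_1A_1+\dots+d_nA_n$, independently of $x$.)

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 3

# Random symmetric matrices A_1, ..., A_n (placeholder data for the sketch).
As = []
for _ in range(n):
    B = rng.standard_normal((m, m))
    As.append((B + B.T) / 2)

def A(x):
    return sum(xi * Ai for xi, Ai in zip(x, As))

x, d, t = rng.standard_normal(n), rng.standard_normal(n), 1e-7

fd = (A(x + t * d) - A(x)) / t     # finite-difference directional derivative
print(np.allclose(fd, A(d)))       # True: A'(x; d) = A(d) for linear A
```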

If the curves $A=\{z : |z-3|+|z+3|=8\}$ and $B=\{z : |z-3|=k\},\; k\in \mathbb R^+$, touch internally, then which of the following are correct?

Posted: 01 Jun 2022 01:18 PM PDT

Suppose the curve $B=\{Z : |Z-3|=k\},\; k\in \mathbb R^+$ touches the curve $A=\{Z : |Z-3|+|Z+3|=8\}$ internally, and another curve $C = \bigl\{Z : \bigl||Z-3|-|Z+3|\bigr|=4\bigr\}$ is given, where $Z$ denotes the complex number $x+iy$. Then

(A) $A$ and $C$ intersect orthogonally and $k=1$

(B) $k=1$

(C) $k=\dfrac{3}{2}$

(D) $B$ touches $C$ internally.

My Approach:

Curve $A$ is the ellipse $\dfrac{x^2}{16}+\dfrac{y^2}{7}=1\;$ and curve $B$ is the circle $(x-3)^2+y^2=k^2$.

Since the two curves are touching each other, I solved them together and set the discriminant equal to $0$.

Steps: $\dfrac{x^2}{16}+\dfrac{k^2-(x-3)^2}{7}=1$

$\implies\;\;7x^2+16(k^2-(x-3)^2)=112$

$\implies\;\; 9x^2-96x+(256-16k^2)=0$

Now I set the discriminant $D=0$, because the two curves are tangent to each other.

The above equation gives me $k=0$.

But after verifying with Desmos, $k=1$ is a solution, and I know how to obtain it graphically.

My Doubt:

$1.$ What is going wrong in my method?

This problem is the same as "A problem on confocal conics", but the OP's doubt there is different.

If $f\in C([a,b])$, $V(f^2;[a,b])<\infty$, then $V_2(f;[a,b]) \leq V(f^2;[a,b])+\|f\|_\infty^2$

Posted: 01 Jun 2022 01:36 PM PDT

If $f\in C([a,b])$, $V(f^2;[a,b])<\infty$, then $V_2(f;[a,b])\leq V(f^2;[a,b])+\|f\|_\infty^2$.

Note that $V(f;[a,b])$ denotes the total variation of $f$ on $[a,b]$: $$V(f;[a,b]):=\sup_{a=a_0<a_1<\cdots<a_k=b}\sum_{j=1}^k|f(a_j)-f(a_{j-1})|.$$ $V_2(f;[a,b])$ denotes the quadratic variation of $f$ on $[a,b]$: $$V_2(f;[a,b]):=\sup_{a=a_0<a_1<\cdots<a_k=b}\sum_{j=1}^k|f(a_j)-f(a_{j-1})|^2.$$

I can only prove the inequality $V_2(f;[a,b])\leq 2V(f^2;[a,b])$, by first showing $V_2(f;[a,b])\leq 2V_2(|f|;[a,b])$, thanks to the continuity of $f$, and then showing $V_2(|f|;[a,b])\leq V(f^2;[a,b])$. Both steps are consequences of the definitions. In the first step the MVT is used; in the second step I use the fact that $\sqrt x$ is $1/2$-Hölder continuous.

Since my result is not sufficient to prove the original inequality, I ask here for some help.

Any help would be appreciated!

Find the multiplicity of eigenvalue $2a$

Posted: 01 Jun 2022 01:43 PM PDT

Let $a,b$ be real numbers with $0<2a<b$. Let $K_n$ denote the commutation matrix of order $n$, and let $A$ be the $n^2\times n^2$ matrix defined by

$$A:=a[\operatorname{vec}(I_n)\operatorname{vec}(I_n)'+K_n-2\operatorname{diag}(K_n)+I_{n^2}]+b\operatorname{diag}(K_n)$$

where $\operatorname{vec}$ denotes the vectorization operator, and $\operatorname{diag}(C)$ denotes the diagonal matrix having the same diagonal as $C$. Finally, let $M$ be an $n\times n$ idempotent matrix with rank $r<n$, and let $B$ be the matrix defined by

$$B:=[M\otimes M]A[M \otimes M]$$

where $\otimes$ denotes the Kronecker product. I am trying to find the multiplicity of the eigenvalue $2a$ as a function of $M$. I would also like to show that $2a$ is the smallest nonzero eigenvalue of $B$.

After some work I found that the eigenvalues of $A$ are as follows:

  • $b+na$, with multiplicity $1$ and eigenvector $\sum_{i=1}^n (e_i\otimes e_i)$,

  • $b$, with multiplicity $n-1$ and eigenvectors $\{(e_1\otimes e_1)-(e_i\otimes e_i),\ i=2,\dots, n\}$,

  • $2a$, with multiplicity $n(n-1)/2$ and eigenvectors $\{(e_i\otimes e_j)+(e_j\otimes e_i),\ i\neq j\}$,

  • $0$, with multiplicity $n(n-1)/2$ and eigenvectors $\{(e_i\otimes e_j)-(e_j\otimes e_i),\ i \neq j\}$,

where $e_i$ denotes the $n\times 1$ vector with $1$ in position $i$ and zeros elsewhere. I believe the multiplicity of $2a$ corresponds to the dimension of $\operatorname{Col}(M\otimes M) \cap \operatorname{Col}(R)$, where $R$ denotes the $n^2\times n(n-1)/2$ matrix whose columns are $(e_i\otimes e_j)+(e_j\otimes e_i)$ for $i\neq j$, and $\operatorname{Col}$ denotes the column space.

Any ideas how to proceed? Thanks a lot for your help.
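
(An empirical aside, a probe for conjecturing rather than a proof: the script below builds $K_n$, $A$, and a random idempotent $M$ of rank $r$, then counts the eigenvalues of $B$ equal to $2a$; running it for several $(n,r)$ suggests how the multiplicity depends on $r$. All parameter values are placeholders.)

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, a, b = 5, 3, 1.0, 3.0                      # placeholders with 0 < 2a < b

# Commutation matrix K_n: K @ vec(X) = vec(X.T).
K = np.zeros((n * n, n * n))
for i in range(n):
    for j in range(n):
        K[i * n + j, j * n + i] = 1.0

v = np.eye(n).flatten()[:, None]                 # vec(I_n) as a column vector
A = a * (v @ v.T + K - 2 * np.diag(np.diag(K)) + np.eye(n * n)) + b * np.diag(np.diag(K))

# Random (generally non-symmetric) idempotent M of rank r: M = S diag(1,..,1,0,..,0) S^{-1}.
S = rng.standard_normal((n, n))
M = S @ np.diag([1.0] * r + [0.0] * (n - r)) @ np.linalg.inv(S)

MM = np.kron(M, M)
eig = np.linalg.eigvals(MM @ A @ MM)
print(np.sum(np.abs(eig - 2 * a) < 1e-8))        # empirical multiplicity of 2a
```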

Elementary proof of $\lim_{n\to\infty}n(\sqrt[n]{n}-1)=\infty$

Posted: 01 Jun 2022 01:43 PM PDT

This question is closely related to this question, but I am not happy with the answers there for several reasons which I will explain in a second.

The limit $\lim_{n\to\infty}n(\sqrt[n]{n}-1)=\infty$, where $n$ is a natural number, is easy to see by expanding the left side with the help of the exponential series. Indeed, we have $$ n(\sqrt[n]{n}-1)=\ln(n)+\frac{1}{2}\cdot\frac{1}{n}\cdot\ln(n)^2+\frac{1}{6}\cdot\frac{1}{n^2}\cdot\ln(n)^3+\cdots\geq\ln(n)\,. $$ Since $\ln(n)$ grows arbitrary large with $n$ large, the limit is proven.

I found this limit as an exercise in Analysis 1 by K. Königsberger, 5.8 Exercises, 3(b). I am using an old printing and the numbering might have changed, but it is in the very beginning of the book in a chapter about sequences.

At that stage of the book the exponential series as well as logarithms have not yet been introduced and very few means are available. For educational purposes, I am looking for a really elementary proof which uses a different bound from below which in turn goes to infinity. The book suggests that such a proof must exist but I cannot find one.

Can you please help me to find such a proof? What is available at this stage is the Bernoulli inequality and the expansion of $(1+x)^n$ for a natural number $n$ and arbitrary $x$, plus some very basic limits like $\sqrt[n]{n}\to 1$, etc., all of which can be obtained by elementary means. Thank you for your time and help!

Second end of a Catenary of known length passing through a point and intersecting two straight lines at a known distance.

Posted: 01 Jun 2022 01:34 PM PDT

I have the coordinates of one extreme point $D=(0,y_D)$ and two lines $y = m_ix + y_D$ for $i\in \{ A,B \}$ intersecting the catenary at two points $A$ and $B$, and also at $D$. I know the length $L_{DA}$ of the catenary from $D$ to $A$, $L_{AB}$ from $A$ to $B$, and the overall length $L_{DE}$ from $D$ to $E$. I want to find the coordinates of the second extreme point $E$, as per the linked picture ($L = L_{AB}$).

My understanding is that, given these conditions (one point, two intersections, and a known distance between the intersections), the solution must be unique. My approach would be to find the equation of the catenary and then locate $E$ at distance $L_{DE}$ from the starting point $D$ along the curve.

I know that there is probably no analytical solution, as this involves a transcendental function. The equations I derived read as follows:

The shape of the catenary is expressed as $$y = c + a \cosh\left(\frac{x-b}{a}\right),$$

where $a,b,c$ are unknown. Three lengths are known

$$ L_{AB} = \int_{x_A}^{x_B} \cosh\left(\frac{t-b}{a}\right)dt = a\left[\sinh\left(\frac{x_B -b}{a}\right) - \sinh\left(\frac{x_A -b}{a}\right)\right] $$ $$ L_{DA} = \ldots = a\left[\sinh\left(\frac{x_A -b}{a}\right) - \sinh\left(\frac{-b}{a}\right)\right],$$ $$ L_{DE} = \ldots = a\left[\sinh\left(\frac{x_E -b}{a}\right) - \sinh\left(\frac{-b}{a}\right)\right],$$ and the two lines intersect the catenary $$y_A = c + a \cosh\left(\frac{x_A-b}{a}\right) = m_A x_A + y_D$$ $$y_B = c + a \cosh\left(\frac{x_B-b}{a}\right) = m_B x_B + y_D$$

I have looked at similar problems, such as Finding points along a catenary curve, but I do not know how to reduce this problem to that one. It seems to me the two coincide once the coordinates $(x_A,y_A)$ or $(x_B, y_B)$ have been fixed. Is it possible to obtain a similar solution (i.e. analytically driven, but for the solution of the transcendental equation), or an algorithm to approximate the coordinates of $E$?
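
(A possible algorithmic route, offered as an aside: collect the five unknowns $a,b,c,x_A,x_B$, solve the five equations above with a root finder, then read off $E$ from the arc-length condition. The data below are made up by sampling the catenary $a=1$, $b=2$, $c=-\cosh 2$ through $D=(0,0)$ with $A$ at $x=1$ and $B$ at $x=3$, so the solver should recover those values; with real measurements, convergence will depend on a reasonable initial guess.)

```python
import numpy as np
from scipy.optimize import fsolve

# Test data generated from the catenary a=1, b=2, c=-cosh(2) through D=(0,0).
yD = 0.0
mA, mB = -2.21912, -0.73971
L_DA, L_AB, L_DE = 2.45166, 2.35040, 6.0

def equations(u):
    a, b, c, xA, xB = u
    ch = lambda x: c + a * np.cosh((x - b) / a)   # catenary height
    sh = lambda x: a * np.sinh((x - b) / a)       # arc-length antiderivative
    return [
        ch(0.0) - yD,                 # D = (0, yD) lies on the catenary
        ch(xA) - (mA * xA + yD),      # A on the catenary and on line A
        ch(xB) - (mB * xB + yD),      # B on the catenary and on line B
        sh(xA) - sh(0.0) - L_DA,      # arc length from D to A
        sh(xB) - sh(xA) - L_AB,       # arc length from A to B
    ]

a, b, c, xA, xB = fsolve(equations, x0=[1.2, 1.8, -3.5, 0.8, 3.2])

# E sits at arc length L_DE from D along the curve.
xE = b + a * np.arcsinh(L_DE / a + np.sinh(-b / a))
yE = c + a * np.cosh((xE - b) / a)
print(a, b, c, xA, xB)
print("E =", (xE, yE))   # approx (3.599, -1.187) for the test data
```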

Infinitely many discontinuities for a bijective function from $[0,1)$ to $(0,1)$

Posted: 01 Jun 2022 01:25 PM PDT

Show that any bijection from $[0,1)$ to $(0,1)$ has infinitely many discontinuities.

I have thought about this question but I have no idea. Any idea is valuable to me, thanks.

Does the symbol $\nabla^2$ have the same meaning in the Laplace equation and the Hessian matrix?

Posted: 01 Jun 2022 01:28 PM PDT

We know that the Laplace equation can be written in the form $$\nabla^2 \Phi=0,$$ where the symbol $\nabla^2 \Phi$ stands for $\sum_{i=1}^n\frac{\partial^2 \Phi}{\partial x_i^2}$.

At the same time, the Hessian Matrix is also denoted as $$H=\nabla^2f$$

My question is: what does $\nabla^2$ really mean?
