Wednesday, March 31, 2021

Recent Questions - Mathematics Stack Exchange



Prove that a zero divisor in $Z_n$ does not have a multiplicative inverse

Posted: 31 Mar 2021 09:14 PM PDT

For this question, I take $x$ and $y$ to be zero divisors in $Z_n$ with $x \neq 0$, $y \neq 0$, and $x y = 0$. Assume that $x$ has a multiplicative inverse $a$. Since $x y = 0$, we have $a x y = a \cdot 0$, i.e. $1 \cdot y = 0$ (since $a x = 1$). But $y$ should be non-zero, so we arrive at a contradiction, and thus a zero divisor in $Z_n$ does not have a multiplicative inverse. Does this proof make sense?
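
As a quick finite sanity check of the claim (a throwaway Python sketch; the modulus $n = 12$ is my arbitrary choice):

```python
n = 12  # arbitrary modulus for the check

# nonzero x with some nonzero y such that x*y = 0 (mod n)
zero_divisors = {x for x in range(1, n)
                 if any(x * y % n == 0 for y in range(1, n))}
# nonzero x with a multiplicative inverse a: x*a = 1 (mod n)
units = {x for x in range(1, n)
         if any(x * a % n == 1 for a in range(1, n))}

print(sorted(zero_divisors))   # [2, 3, 4, 6, 8, 9, 10] for n = 12
print(zero_divisors & units)   # set(): no zero divisor has an inverse
```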

An upper bound for $\{r_n\}$ which satisfies $r_n = r_{n-1} + \frac{K}{r_{n-1}}$

Posted: 31 Mar 2021 09:13 PM PDT

I have a sequence $\{r_n\}$ of positive numbers defined recursively by $$\tag{1} r_n = r_{n-1} + \frac{K}{r_{n-1}}$$ for some positive $K$. The sequence is clearly strictly increasing and tends to $+\infty$ as $n\to \infty$.

Question: Do there exist positive $K_1, K_2$, depending on $r_1$ and $K$, such that $$\tag{2} 2nK - K_1 \le r_n^2 \le 2nK + K_2$$ for all large $n$?

Squaring both sides of (1) gives $$ r_n^2 = r_{n-1}^2 + 2K + \frac{K^2}{r_{n-1}^2}. $$

If I prove (2) using induction, the induction hypothesis gives

$$ \frac{K^2}{r_{n-1}^2} \le \frac{K^2}{ 2(n-1)K - K_1},$$ which is not very helpful, since $\sum \frac{1}{n-1}$ diverges.
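
For what it's worth, a quick numerical experiment (a Python sketch with arbitrary starting values) suggests that $r_n^2 - 2nK$ grows like $\frac{K}{2}\log n$, so the lower bound in (2) is harmless but a constant upper bound $K_2$ looks doubtful:

```python
import math

K, r = 1.0, 1.0                      # arbitrary K and r_0 for the experiment
for n in range(1, 10**6 + 1):
    r += K / r                       # r_n = r_{n-1} + K / r_{n-1}
    if n in (100, 10_000, 1_000_000):
        # compare the defect r_n^2 - 2nK with (K/2) log n
        print(n, r * r - 2 * n * K, 0.5 * K * math.log(n))
```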

Wave Packet Decay Estimates

Posted: 31 Mar 2021 09:09 PM PDT

In Tao's notes on time frequency analysis, the following theorem is stated (Theorem 5.5 of Part 1 of https://www.math.ucla.edu/~tao/254a.1.01w/).

Fix $\xi_0 \in \mathbf{R}$, and positive values $\delta_1,\delta_2$ with $\delta_1 \cdot \delta_2 \geq 1$. Fix a bump function $\psi \in C_c^\infty(\mathbf{R})$ supported on $[-1,1]$. Then define the kernel

$$ K(x) = \int e^{2 \pi i \xi \cdot x} \psi((\xi - \xi_0)/\delta_2)\; d\xi $$

and consider the convolution operator $T$ defined by the formula

$$ Tf(x) = \int K(x-y) \psi(y/\delta_1) f(y)\; dy. $$

One can see this operator as the composition of a spatial cutoff by a smooth function on $[-\delta_1,\delta_1]$ followed by a smooth frequency cutoff on $[\xi_0-\delta_2,\xi_0+\delta_2]$. The result stated is that for $f \in L^2(\mathbf{R})$ and any $n > 0$,

$$ |Tf(x)| \lesssim_n \| f \|_{L^2(\mathbf{R})}\, \delta_1^{1/2} \big( \delta_1\, d(x,[\xi_0-\delta_2,\xi_0 + \delta_2]) \big)^{-n}. $$

The proof idea that Tao gives is relatively simple: assume $\delta_1 = 1$, then apply Cauchy-Schwarz to conclude that

$$ |Tf(x)| \lesssim \| f \|_{L^2(\mathbf{R})} \left( \int |K(x-y)|^2 |\psi(y)|^2\; dy \right)^{1/2}. $$

I assume the idea then is to use decay estimates for $K$ to obtain the required bound, but I'm not sure how to find these. Via this method, I was able to show that

$$ |Tf(x)| \lesssim_n \delta_1^{1/2} \delta_2^{1-n} \| f \|_{L^2(\mathbf{R})} d(x,[-\delta_1,\delta_1])^{-n}. $$

Is there a typo that makes what is meant to be proven equivalent to this bound, or am I missing something?
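
A numerical probe of the kernel's decay (my own sketch; the bump function, $\xi_0$, and $\delta_2$ are arbitrary choices) is one way to see which bound is plausible:

```python
import numpy as np

def psi(t):
    """A standard C_c^infinity bump supported on [-1, 1]."""
    out = np.zeros_like(t)
    m = np.abs(t) < 1
    out[m] = np.exp(-1.0 / (1.0 - t[m] ** 2))
    return out

xi0, d2 = 5.0, 4.0                       # arbitrary test parameters
xi = np.linspace(xi0 - d2, xi0 + d2, 4001)

def K(x):
    # K(x) = int e^{2 pi i xi x} psi((xi - xi0)/d2) d(xi), by quadrature
    return np.trapz(np.exp(2j * np.pi * xi * x) * psi((xi - xi0) / d2), xi)

for x in (0.5, 1.0, 2.0, 4.0, 8.0):
    print(x, abs(K(x)))                  # decays faster than any power of x
```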

Differences in book definitions of $\beta$-reduction leading to confusion on when an abstraction needs to be $\alpha$-converted

Posted: 31 Mar 2021 09:00 PM PDT

I've been learning lambda calculus in my free time recently, trying to learn how to make programming languages and interpreters. I am struggling a bit with inconsistencies between different texts on when to use $\alpha$-conversion to avoid capture in $\beta$-reduction.

For example, Lectures on the Curry-Howard Isomorphism makes no mention of any capture restrictions in its definition of $\beta$-reduction: $$ \label{1}\tag{1} (\lambda x.P)\, Q \rightarrow_\beta P[x := Q] $$

However, Write You A Haskell does:

The fundamental issue with using locally named binders is the problem of name capture, or how to handle the case where a substitution conflicts with the names of free variables. We need the condition in the last case to avoid the naive substitution that would fundamentally alter the meaning of the following expression when y is rewritten to x.

$\label{2}\tag{2}[y/x](\lambda x.xy) \rightarrow \lambda x.xx$

By convention we will always use a capture-avoiding substitution. Substitution will only proceed if the variable is not in the set of free variables of the expression, and if it does then a fresh variable will be created in its place.

$\label{3}\tag{3}(\lambda x.e)\,a \rightarrow [x/a]e \quad\text{if}\quad x \notin fv(a)$

This part makes sense to me: if you substitute a free variable into the body of a lambda abstraction, it can become captured. The second part, however, is what throws me off. The first part appears to deal specifically with substitution into lambda abstractions, whereas the second part, dealing with "capture-avoiding substitution," applies to any lambda expression.

So I'm not sure I understand why you would need to rename in the following case $$ \label{4}\tag{4} (\lambda x. x\ y)(\lambda y. y x) \rightarrow_\beta [x/(\lambda y.\, y x)](x\, y) $$ According to $\ref{3}$, this wouldn't be allowed because $x \in FV(\lambda y. y x)$, but I don't see the problem with it, and it seems pretty straightforward to reduce this to $y x$.

When I plug it into a lambda calculus interpreter online with steps (such as this one), it agrees and reduces it down to $y x$ with no renaming.

Renaming seems to apply only if the body of the abstraction of the $\beta$-redex is itself another lambda abstraction, such as $$ \label{5}\tag{5} (\lambda x.\lambda y. x y)(\lambda x. x y) \rightarrow_\beta [x/(\lambda x.x y)](\lambda y. x y), $$ which, according to $\ref{3}$, should also not require an alpha conversion because $x\notin FV(\lambda x.x y)$; however, when I plug that into the same lambda calculus interpreter, it does alpha-convert.

One example that I worked out that makes perfect sense to me is the following (using the other notation/definition from Lectures on the Curry-Howard Isomorphism; side note: the differing notations confuse me also): $$ \label{6}\tag{6} \begin{align*} (\lambda y.\lambda x.x y)(\lambda z.x z) &\rightarrow_\beta (\lambda x.x y)[y := (\lambda z. x z)] \\ &\rightarrow_\alpha (\lambda a. (x y)[x := a])[y := (\lambda z. x z)] && \text{if } x\in FV(\lambda z. x z) \text{ and } y\in FV(x y) \\ &\rightarrow (\lambda a. a y)[y := (\lambda z. x z)] \\ &\rightarrow \lambda a.\, a\, (\lambda z. x z) \end{align*} $$

and the condition makes sense: you need to rename $x$ if it's in the free variables of the substituted term, to avoid capture, and $y$ needs to be in the free variables of the body because you can only substitute free variables.

But then when I go back to the other examples I end up re-confusing myself on when I need to rename because it seems inconsistent.

In total, my questions are:

  1. What am I doing wrong in the examples that are confusing me using the Write You A Haskell ($\ref{3}$) definition?
  2. Am I correct that the renaming only applies when the $\beta$-redex abstraction body is itself another abstraction?
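
For concreteness, here is a minimal sketch of capture-avoiding substitution in Python (the term representation and helper names are my own); it renames only when substituting under a binder whose bound variable occurs free in the argument, which matches the intuition in question 2:

```python
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:
    param: str
    body: "Term"

@dataclass(frozen=True)
class App:
    fn: "Term"
    arg: "Term"

Term = Union[Var, Lam, App]

def free_vars(t: Term) -> set:
    if isinstance(t, Var):
        return {t.name}
    if isinstance(t, Lam):
        return free_vars(t.body) - {t.param}
    return free_vars(t.fn) | free_vars(t.arg)

def fresh(avoid: set) -> str:
    i = 0
    while f"v{i}" in avoid:
        i += 1
    return f"v{i}"

def subst(t: Term, x: str, s: Term) -> Term:
    """Capture-avoiding substitution t[x := s]."""
    if isinstance(t, Var):
        return s if t.name == x else t
    if isinstance(t, App):
        return App(subst(t.fn, x, s), subst(t.arg, x, s))
    if t.param == x:                         # x is shadowed: nothing to do
        return t
    if t.param in free_vars(s):              # only here is renaming needed
        a = fresh(free_vars(s) | free_vars(t.body) | {x})
        renamed = subst(t.body, t.param, Var(a))   # alpha-convert first
        return Lam(a, subst(renamed, x, s))
    return Lam(t.param, subst(t.body, x, s))

# Example (4): the redex body (x y) contains no binder, so the renaming
# branch is never reached and no alpha-conversion happens.
body = App(Var("x"), Var("y"))
print(subst(body, "x", Lam("y", App(Var("y"), Var("x")))))
```

Running it on example (4) gives $(\lambda y.\, y\, x)\, y$, which then reduces to $y\, x$ with no renaming, exactly as the online interpreter reports.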

How to solve this indefinite integral by hand? $\int \sqrt{1+\cos^2 x}\ dx$

Posted: 31 Mar 2021 09:06 PM PDT

I tried it with some substitutions and none of them worked... so I am wondering whether it is solvable by hand at all.
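
It is not elementary: since $1+\cos^2 x = 2 - \sin^2 x = 2\left(1 - \tfrac12\sin^2 x\right)$, the antiderivative is $\sqrt{2}\,E\!\left(x \,\middle|\, \tfrac12\right)$, an incomplete elliptic integral of the second kind. A quick numerical cross-check (Python sketch):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipeinc   # E(phi | m), incomplete, second kind

# sqrt(1 + cos^2 x) = sqrt(2) * sqrt(1 - (1/2) sin^2 x)
for x in (0.5, 1.0, 2.0):
    num, _ = quad(lambda t: np.sqrt(1 + np.cos(t) ** 2), 0, x)
    print(num, np.sqrt(2) * ellipeinc(x, 0.5))   # the two columns agree
```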

Prove that $f_n(x) = \cos^{n} x$ is not uniformly convergent on $[0, \pi]$

Posted: 31 Mar 2021 08:56 PM PDT

Prove that $f_n(x) = \cos^{n} x$ is not uniformly convergent on $[0, \pi]$.

Intuitively I can see that if $x=0$ or $x = \pi$ then $\lim_{n\to \infty} f_n (x) = 1$ but if $x \in (0,\pi)$, then $\lim_{n\to \infty} f_n(x) = 0$.

I tried to argue that $\lim_{x\to 0^+} \lim_{n\to\infty} f_n(x) = 0$ while $f_n(0) = 1$ for every $n$, so the limit function is discontinuous at $0$, which would imply that uniform convergence is impossible (a uniform limit of continuous functions is continuous).

What is the right approach?
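
One standard way to witness the failure numerically: evaluate $f_n$ at the moving point $x_n = 1/n$, where the pointwise limit is $0$ but $f_n(x_n) \to 1$, so the sup-distance cannot go to $0$ (a quick Python check):

```python
import numpy as np

# cos(1/n)^n ~ exp(-1/(2n)) -> 1, yet the pointwise limit at each
# fixed x in (0, pi/2] is 0, so sup |f_n - f| stays near 1
for n in (10, 100, 1000, 10**6):
    print(n, np.cos(1.0 / n) ** n)
```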

What are the ideals of $\mathrm{Map}(S,R)$?

Posted: 31 Mar 2021 08:53 PM PDT

Let $R$ be a commutative ring and let $S$ be a nonempty set. Let $\mathrm{Map}(S,R)$ be the set of maps $S\to R$. Then it is a ring under pointwise addition and multiplication, i.e. $$(f_1+f_2)(s)=f_1(s)+f_2(s),\qquad (f_1* f_2)(s)=f_1(s)* f_2(s).$$ Then what are the ideals of $\mathrm{Map}(S,R)$? I can see that if $\mathfrak{a}$ is an ideal of $R$, then $\mathrm{Map}(S,\mathfrak{a})$ is an ideal of $\mathrm{Map}(S,R)$, but I do not know if the reverse is true, i.e. is every ideal of $\mathrm{Map}(S,R)$ of the form $\mathrm{Map}(S,\mathfrak{a})$? Thanks for your help.
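
A brute-force check of the smallest nontrivial case (my own sketch, with $R = \mathbb{Z}_2$ and $|S| = 2$, so $\mathrm{Map}(S,R)$ has four elements) is one way to probe whether the reverse can hold:

```python
from itertools import product, chain, combinations

R = (0, 1)                                    # Z_2
elems = list(product(R, repeat=2))            # Map(S, R) with |S| = 2

def add(f, g): return tuple((a + b) % 2 for a, b in zip(f, g))
def mul(f, g): return tuple((a * b) % 2 for a, b in zip(f, g))

def is_ideal(I):
    return ((0, 0) in I
            and all(add(f, g) in I for f in I for g in I)
            and all(mul(r, f) in I for r in elems for f in I))

subsets = chain.from_iterable(combinations(elems, k)
                              for k in range(len(elems) + 1))
for I in map(set, subsets):
    if is_ideal(I):
        print(sorted(I))
# Besides Map(S, {0}) and Map(S, Z_2), this also prints e.g.
# [(0, 0), (1, 0)], which is not of the form Map(S, a).
```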

I need help solving this question

Posted: 31 Mar 2021 08:54 PM PDT

$\displaystyle \text{Var} ~\left(\sum_{i=1}^n w_iX_i\right)$

(The summation applies to both $w$ and $X$, not just $w$.)

It is given that:

$\displaystyle ~\left(\sum_{i=1}^n w_i\right) ~=~ 1.$


and each $X_i$ has expectation $\mu$ and variance $\sigma^2$.

I'm not too sure how they reached this answer:

$\displaystyle ~\sigma^2 \left(\sum_{i=1}^n (w_i)^2\right)$

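
For what it's worth, the standard derivation assumes the $X_i$ are independent (or at least uncorrelated), so the cross-covariances vanish and $\mathrm{Var}\left(\sum w_i X_i\right) = \sum w_i^2\,\mathrm{Var}(X_i) = \sigma^2 \sum w_i^2$. A Monte Carlo sanity check (Python sketch, my own parameter choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, mu, sigma = 5, 2.0, 3.0
w = rng.random(n)
w /= w.sum()                        # weights summing to 1

# assuming independent X_i with mean mu and variance sigma^2
X = rng.normal(mu, sigma, size=(200_000, n))
print((X @ w).var(), sigma**2 * (w**2).sum())   # the two should be close
```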

Does radix 1 exist?

Posted: 31 Mar 2021 08:43 PM PDT

Hypothetical: Radix 1 has one digit, namely zero (0).

Since one could write any number in any other base with an infinite number of zeros in front of it without changing it, am I right in assuming that radix 1 can represent only one value, namely zero, because any value would just be an infinite string of zeros?

Or what is the real story behind radix 1?
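
For what it's worth, the usual story is that "base 1" is taken to be the tally (bijective base-1) system, whose single digit is $1$ rather than $0$: $n$ is written as $n$ copies of the digit. A positional system whose only digit is $0$ can indeed represent only zero, as reasoned above. A toy sketch:

```python
def to_unary(n: int) -> str:
    """Tally / bijective base-1: n is written as n copies of the digit 1."""
    return "1" * n

def from_unary(s: str) -> int:
    return len(s)

print(to_unary(5), from_unary("111"))   # 11111 3
```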

Proving a commutative ring is isomorphic to Cartesian product of sets

Posted: 31 Mar 2021 08:43 PM PDT

I'm having trouble finding a starting point for this problem:

Let $R$ be a commutative ring containing elements $a, b$, both $\neq 0_R$, such that: $$ a + b = 1_R,\quad a^2 = a,\quad b^2 = b,\quad\text{and}\quad a \cdot b = 0_R. $$ Show that the ideals $Q := R \cdot a$ and $S := R \cdot b$ are rings, but not subrings of $R$, and that the ring $R$ is isomorphic to the ring $Q\times S$.

The ring $Q\times S$ carries the operations $(r,s) + (r', s') = (r+r', s+s')$ and $(r,s)\cdot(r',s') = (rr', ss')$.

I understand that if $R$ is isomorphic to $Q\times S$, then $a = (1_R, 0_R)$ and $b = (0_R, 1_R)$ satisfy all of the conditions, and by extension I see why $Q$ and $S$ are not subrings.

I just can't for the life of me determine how to use that information to prove that $R$ is isomorphic to $Q×S$ without circular reasoning.
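
One way to get unstuck is to test a candidate map on a concrete example first. In $\mathbb{Z}_6$ (my choice) the elements $a = 3$, $b = 4$ satisfy all four conditions, and the natural candidate $r \mapsto (ra, rb)$ checks out by brute force:

```python
R = range(6)                       # Z_6
a, b = 3, 4
assert (a + b) % 6 == 1 and a*a % 6 == a and b*b % 6 == b and a*b % 6 == 0

phi = {r: (r * a % 6, r * b % 6) for r in R}   # candidate r -> (ra, rb)
print(sorted(set(phi.values())))               # 6 distinct pairs: a bijection

# addition and multiplication are preserved componentwise
assert all(phi[(r + s) % 6] == ((phi[r][0] + phi[s][0]) % 6,
                                (phi[r][1] + phi[s][1]) % 6)
           for r in R for s in R)
assert all(phi[(r * s) % 6] == ((phi[r][0] * phi[s][0]) % 6,
                                (phi[r][1] * phi[s][1]) % 6)
           for r in R for s in R)
```

The multiplicative check works precisely because of idempotency: $(ra)(sa) = rs\,a^2 = (rs)a$, so only the given conditions are used and nothing circular is needed.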

Rewriting higher powers in $K[x]/(p)$

Posted: 31 Mar 2021 09:00 PM PDT

Suppose $p = \sum a_i x^i \in K[x]$ is a polynomial of degree $n$ over some field $K$. Assume WLOG that $a_n = -1$, so we can rewrite the equation $p = 0$ as

$$x^n = \sum_{i=0}^{n-1} a_i x^i$$

Working now in the ring $K[x]/(p)$, this equation allows us to rewrite higher powers $x^m$, $m \geq n$, uniquely as a linear combination of $1,x,x^2,\ldots,x^{n-1}$, i.e. a polynomial of degree at most $n-1$. Is an explicit formula known for this representation? I worked out (there might be some mistakes):

$$ x^{n+1} = \sum_{i=0}^{n-1} (a_{i-1} + a_{n-1}a_i) x^i $$

$$ x^{n+2} = \sum_{i=0}^{n-1} \big( a_{i-2} + a_{n-1} a_{i-1} + (a_{n-2} + a_{n-1}^2)a_i \big) x^i $$

but couldn't see an obvious pattern to prove inductively.


More generally, it seems that for any $p \in K[x]$ of degree $n$, $K[x]/(p)$ is canonically a $K$-vector space of dimension $n$, having the canonical basis $\{ 1,x,x^2,\ldots,x^{n-1} \}$. Here, the "multiplication by $x$" map $q \mapsto xq$ seems to be an interesting linear map - is anything known about this?
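
Yes: in the basis $\{1, x, \ldots, x^{n-1}\}$, the multiplication-by-$x$ map is exactly the companion matrix of $p$, and the coordinates of $x^m$ are given by the $m$-th power of that matrix applied to the first basis vector. A small numerical sketch (floating-point, so exact only up to rounding):

```python
import numpy as np

def companion(a):
    """Matrix of q -> x*q on K[x]/(p) in the basis 1, x, ..., x^{n-1},
    where x^n = a[0] + a[1] x + ... + a[n-1] x^{n-1}."""
    n = len(a)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)    # x * x^i = x^{i+1} for i < n-1
    C[:, -1] = a                  # x * x^{n-1} reduces via the relation
    return C

def x_power(m, a):
    """Coordinates of x^m in the basis 1, x, ..., x^{n-1}."""
    n = len(a)
    e0 = np.zeros(n)
    e0[0] = 1.0
    return np.linalg.matrix_power(companion(a), m) @ e0

a = np.array([2.0, -1.0, 3.0])    # arbitrary example: x^3 = 2 - x + 3x^2
print(x_power(4, a))              # [6., -1., 8.], matching the x^{n+1} formula
```

The characteristic polynomial of this matrix is (up to sign) $p$ itself, which is the standard dictionary between this linear map and the arithmetic of $K[x]/(p)$.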

Partial derivative of composite function of multivariate gaussian

Posted: 31 Mar 2021 09:14 PM PDT

I would like to find the partial derivative of $f(y)$ with respect to $c$, where $y$ follows the multivariate normal/Gaussian density $N(x(t),\sigma^2I_n)$, i.e.

$f(y)=(2\pi)^{-n/2}\,|\sigma^2I_n|^{-1/2}\exp\!\left[-\dfrac{1}{2}(y-x(t))^T(\sigma^2I_n)^{-1}(y-x(t))\right]$

Here, $x(t)=\dfrac{\exp(z(t))}{\int_a^p \exp(z(s))\,\text{d}s}$ and $z(t)=b(t)^Tc + b(t)^TAu$. I could do it if $x(t)$ were a direct function of $c$, but I am kind of lost at the very beginning, since $x(t)$ is a composite function of $c$, i.e. $x(t)=g(h(c))$.

N.B. The dimensions: $A$ is $p\times p$, $b(t)$ is $p\times n$, $c$ and $u$ are $p\times 1$, so $z(t)$ is $n\times 1$; and $\sigma^2I_n$ is the $n\times n$ variance-covariance matrix.

Using the chain rule $\frac{\partial f(y)}{\partial c}= \frac{\partial f(y)}{\partial x(t)} \cdot \frac{\partial x(t)}{\partial z(t)}\cdot\frac{\partial z(t)}{\partial c}$, I got:

$\frac{\partial z(t)}{\partial c}= b(t)^T$

$\frac{\partial x(t)}{\partial z(t)} = x(t) - x(t)\, \dfrac{\int_a^p \exp(z(s))\frac{\partial z(s)}{\partial z(t)}\,ds}{\int_a^p \exp(z(s))\,ds}$

$\frac{\partial f(y)}{\partial x(t)} = N(x(t),\sigma^2I_n)|\sigma^2I_n|^{-1}(y - x(t)) $

Is the chain rule the right path? If not, what else? If yes, are the partial derivatives correct? It looks too complicated; I thought the final result would be comparatively simple.
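
Yes, the chain rule is the right path; one pragmatic way to gain confidence is to check each factor against finite differences before composing them. A sketch for the innermost factor $\partial z/\partial c = b(t)^T$ (the shapes and random values are my own arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 4, 3
b = rng.standard_normal((p, n))        # b(t), p x n
A = rng.standard_normal((p, p))
c = rng.standard_normal(p)
u = rng.standard_normal(p)

z = lambda c: b.T @ c + b.T @ (A @ u)  # z(t) = b^T c + b^T A u, length n

eps = 1e-6
J = np.empty((n, p))                   # finite-difference Jacobian dz/dc
for j in range(p):
    dc = np.zeros(p)
    dc[j] = eps
    J[:, j] = (z(c + dc) - z(c - dc)) / (2 * eps)

print(np.allclose(J, b.T, atol=1e-5))  # True: dz/dc = b(t)^T
```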

Probability a given Markov chain is in two different states at two different times

Posted: 31 Mar 2021 08:46 PM PDT

Let us say that our state space $S = \{1, 2, 3, 4\}$

Now let us say our transition matrix $P$ is given by: \begin{bmatrix} 1/2 & 1/2 & 0 & 0 \\ 1/3 & 0 & 1/3 & 1/3 \\ 1/6 & 1/6 & 2/3 & 0 \\ 1/2 & 1/4 & 1/4 & 1/2 \end{bmatrix}

Given that a Markov chain $X_n$ is in state 3 at time 0 (i.e. $X_0 = 3$), what is the probability that it is in state 1 at time 4 and state 2 at time 5? To rephrase: given that $X_0 = 3$, what is the probability that $X_4 = 1$ and $X_5 = 2$ both occur?

Originally, the way I thought to go about answering this question was to take the row matrix $\pi_0$:

\begin{bmatrix} 0 & 0 &1&0 \\ \end{bmatrix}

which gives the probability distribution of the Markov chain at time 0; then take the first entry of the row matrix $\pi_0P^4$ (the probability of the chain being in state 1 at time 4) and the second entry of the row matrix $\pi_0P^5$ (the probability of the chain being in state 2 at time 5), and multiply these two entries together to get the final answer. However, this would require the two events to be independent, which, by the definition of a Markov process, I am not sure they are. Is this approach correct? If not, how would you solve this problem?
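
The events are indeed not independent in general, but the Markov property factorizes the joint probability through the intermediate state: $P(X_4 = 1, X_5 = 2 \mid X_0 = 3) = (P^4)_{3,1}\, P_{1,2}$. A sketch of the computation (note that the last row of $P$ as posted sums to $3/2$, so one of its entries presumably contains a typo; the method does not depend on which):

```python
import numpy as np

P = np.array([[1/2, 1/2, 0,   0  ],
              [1/3, 0,   1/3, 1/3],
              [1/6, 1/6, 2/3, 0  ],
              [1/2, 1/4, 1/4, 1/2]])   # as posted; last row needs correcting

# Markov property: P(X4=1, X5=2 | X0=3) = (P^4)[3,1] * P[1,2]  (1-based states)
P4 = np.linalg.matrix_power(P, 4)
print(P4[2, 0] * P[0, 1])              # 0-based indices for states 3, 1, 2
```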

Show that $[L : Q(t)]$ is dependent on the choice of inclusion map $\phi$

Posted: 31 Mar 2021 08:58 PM PDT

Let $Q(t)$ denote the field of rational functions, and let $\phi: Q(t)\to L$ denote an inclusion map into some field $L$. Give an example such that $[L : Q(t)]$ depends on the choice of $\phi$.

I'm thinking of the evaluation map $f_\pi(q)= q(\pi)$, which must be injective because $\pi$ is not algebraic, and then comparing the indices $[\mathbb{R} : f_\pi(Q(t))]$ and $[\mathbb{R} : f_e(Q(t))]$ to get a counterexample. However, $Q(t)$ is countable, so any inclusion map into $\mathbb{R}$ will have infinite index. Any suggestions for other examples?

How to describe these combinatorial sets in general and prove their cardinality

Posted: 31 Mar 2021 08:56 PM PDT

The sets I am looking at are defined as follows: for $k=1$ the set is defined as:

$\Sigma_1:=\Big\{ \frac{1}{1},\frac{1}{3}\Big\}$

whereas for $k=2$ we have

$\Sigma_{2}:=\Big\{ \frac{1}{1},\frac{2}{1},\frac{3}{1},\frac{1}{2},\frac{1}{3},\frac{1}{5},\frac{3}{5},\frac{1}{7} \Big\}$

and for $k=3$ we have

$\Sigma_{3}:= \Big\{ \frac{1}{1},\frac{2}{1},\frac{3}{1},\frac{5}{1},\frac{1}{2},\frac{3}{2},\frac{1}{3},\frac{2}{3},\frac{5}{3},\frac{1}{4},\frac{1}{5},\frac{3}{5},\frac{1}{7},\frac{3}{7},\frac{5}{7},\frac{1}{9},\frac{1}{11} \Big\}$

I struggle to find a general rule for this. However, I am quite sure that the cardinality of each $\Sigma_{k}$ equals three times the $k$-th triangular number minus one, i.e.

$\frac{3k(k+1)}{2}-1$

since from above we may observe that

$|\Sigma_1| = 2$, $|\Sigma_2|=8$, $|\Sigma_3|=17$.

Edit: $\Sigma_4$ would be defined as

$\Sigma_4 := \Big\{ \frac{1}{1},\frac{2}{1},\frac{3}{1},\frac{4}{1},\frac{5}{1},\frac{7}{1},\frac{1}{2},\frac{3}{2},\frac{1}{3},\frac{2}{3},\frac{4}{3},\frac{5}{3},\frac{1}{4},\frac{3}{4},\frac{1}{5},\frac{2}{5},\frac{3}{5},\frac{7}{5},\frac{1}{7},\frac{3}{7},\frac{5}{7},\frac{1}{9},\frac{5}{9},\frac{7}{9},\frac{1}{11},\frac{3}{11},\frac{1}{13},\frac{1}{15} \Big\}$

Is there any way of deriving a general formula for each $\Sigma_k$ and proving this cardinality? Notice that in each set $\Sigma_k$ for each $\frac{n}{m}$ you want to have $\text{gcd}(n,m)=1$.

Each set corresponds to the positive slopes found in $1$-, $2$-, and $3$-layered hexagons, divided by $\sqrt{3}$. For instance, the first set describes the positive slopes in a single regular hexagon, divided by $\sqrt{3}$; in particular, $\frac{1}{\sqrt{3}} \rightarrow \frac{1}{1}$ and $\frac{\sqrt{3}}{3} \rightarrow \frac{1}{3}$. The set $\Sigma_3$ corresponds to the positive slopes found in the given image, minus $0$ and $\infty$. For example, the points 13 and 32 in the given image define the slope $\sqrt{3}$, which corresponds to $\frac{1}{1}$, and 20 and 24 define the slope $\frac{\sqrt{3}}{5}$, which corresponds to $\frac{1}{5}$.

(Image: the layered-hexagon lattice with numbered points referred to above.)

How do I do a Fourier-style decomposition in a non-sinusoidal basis of even polynomial functions?

Posted: 31 Mar 2021 09:10 PM PDT

I've been studying $\arctan(x)/x$ approximations, using infinite series to help me adjust the coefficients of approximating formulas.

One of the approximation elements I use is the function $x^2/(1+|x|)$, which produces a graph similar to a hyperbola but should have a series with all even powers (e.g. it is symmetric about $x=0$).

When I plug this function into Mathematica, I get a series containing absolute values of odd powers. The online widget lists it as a "Parseval" series.

I can't compare the coefficients of a Parseval series with those of an even-power-only Taylor series, so Mathematica's answer is useless to me.

I believe the series produced by Mathematica is not unique, and I want to re-compute it in terms of even powers alone. The function is even, so this SHOULD be possible.

If you could guide me in doing this, I would appreciate it.

Here's what I've tried so far, and where I get stuck:

The only way I know to reformulate an infinite series into another infinite series and still be sure it converges is to use a Fourier-style decomposition in place of the Taylor expansion.

I realize that I need to replace the Fourier $\sin()$/$\cos()$ basis set with polynomial parts. Apparently this is a well-known possibility to mathematicians:

Fourier transform with non sine functions?

But, I don't have the vocabulary to locate practical tips to carry it out to completion.

However, I do know from linear algebra that I can treat polynomial terms as Fourier basis vectors; I'm going to replace each $a x^n$ term of a Taylor expansion with a 'vector' that has an average value of zero and an inner product of 1 with itself.

This is my basis set:

$x^n \rightarrow \left(1+\tfrac{1}{n}\right) \sqrt{\tfrac{n+1}{2}}\; \dfrac{x^n - 1}{n+1}$

The inner product of any two vectors is defined as usual in Fourier analysis with period $P=2$: the definite integral of the product of the vector functions over $x\in[-1,1]$.

Not surprisingly, when I test the inner product of the vector for $x^2$ with that for $x^3$, they are not orthogonal, although each is normalized.

The test verifies that polynomial terms in an infinite series are not unique, because part of each term's coefficient can be represented as a linear combination of the other terms.

So what's left for me to do is orthonormalize my basis set, and then use the orthonormal basis in a Fourier decomposition of my original function.

However, the only way I know to do the orthonormalization is the Gram-Schmidt process, and when I do that, it changes the basis set from single polynomial terms into linear combinations of them.

There are several arbitrary decisions to be made, and I'm wondering whether someone has already done this process before, and how they chose to deal with the trade-offs (and why).

I'm thinking that naively orthogonalizing the series starting with the $x^2$ vector has the disadvantage of emphasizing the ability of $x^2$ to absorb the non-orthogonal representation of higher-order terms. It would sort of change the meaning of '$x^2$'...!

What I really would like is for the Fourier decomposition onto each orthogonalized vector to produce a number proportional to the unique (or mandatory) use of the corresponding Taylor series coefficient.

E.g. the value of the $x^2$ vector's coefficient should represent the minimum, or 'above average', amount of $x^2$ required to reconstruct my decomposed function. Note: the value of the $x^2$ coefficient does not need to equal the Taylor coefficient of the $x^2$ term, but the Fourier-computed coefficient should have some logical relationship to the Taylor coefficient so that I can compare them.

I assume that I need some kind of 'average' vector representing the most common linear dependency among polynomial terms, so that my $x^2$ vector does NOT absorb more than an average amount of the non-orthogonality of the higher-order vectors; i.e. a vector representing the average linear dependency of the higher-order terms, used as the 'first' vector in the Gram-Schmidt process. At the same time, since the basis set is infinite, I can't actually construct such an 'average' vector exactly... it's an ill-defined problem.

As an experiment, I took the inner product of the $x^2$ vector with all the following vectors up to $x^{24}$, producing a table of how strongly the vectors are linearly related.

Inner product of normalized x^2 vector with itself, and higher order vectors.

[100%,.958315,.895806,.83814,.788227,.745356,.708329,.676065,.647689,.622514,.600000,.579721,.561339,.544579,.52922,.515079,.502005,.489871,.478571,.468014,.458123,.44883,.44079,.431818,.424004 ]

Conceptually, this table corresponds in linear algebra to the cosines of the 'angles' between vectors. The more non-zero an entry is, the more linearly dependent the vector is with $x^2$.

I've tried to construct an average vector by adding the non-orthogonal vectors together in proportion to the values in the list, i.e. in proportion to the cosine of the angle between vectors. I'm not sure this is a good choice, but it has the virtue of emphasizing the lower-order vectors in the average. In infinite series, convergence is determined by the error of the higher-order terms going to zero, so de-emphasizing the higher-order terms seems logical. But again, it's an arbitrary attempt.

Nonetheless, I graphed the normalized results of the averaging, and I get a graph suggesting that the process converges toward a shape that is very stable in certain places. I believe that this convergence reflects the linear dependence, i.e. the non-orthogonality, in my basis set. (Am I in error?)

But I don't quite see a practical way of extracting from my graph an analytical function that is both simple (i.e. not infinite in terms!) and still represents the most common non-orthogonality among the polynomial terms.

(Image: graph of the normalized averaged vectors.)
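
For what it's worth, Gram-Schmidt applied to $\{1, x, x^2, \ldots\}$ over $[-1,1]$ produces (up to normalization) the Legendre polynomials, so one can skip the hand orthogonalization and project directly onto them; by symmetry, the odd-order coefficients vanish for an even function. A sketch in Python:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre
from scipy.integrate import quad

f = lambda x: x**2 / (1 + abs(x))

# c_n = (2n+1)/2 * integral_{-1}^{1} f(x) P_n(x) dx
coeffs = []
for n in range(8):
    Pn = Legendre.basis(n)
    integral, _ = quad(lambda x: f(x) * Pn(x), -1, 1)
    coeffs.append((2 * n + 1) / 2 * integral)

print(np.round(coeffs, 6))   # odd-order entries are 0 since f is even
```

This gives an expansion of the even function in even polynomials only; the price is that each Legendre coefficient mixes many Taylor coefficients, which is exactly the non-uniqueness observed above.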

Basic Triangle Geometry on GRE: Length of one side of a triangle?

Posted: 31 Mar 2021 09:05 PM PDT

This practice GRE problem is giving me more trouble than it should. Why is the answer 'The relationship cannot be determined'?

My answer was that the two quantities are equal, but that is clearly wrong.

(Image: the GRE question.)

Diagonalization of a Matrix Represents a Change of Basis

Posted: 31 Mar 2021 09:09 PM PDT

In our Mathematical Physics course examination, a question gave a $2 \times 2$ matrix written in the standard basis. The examiner then changed the basis and asked us to write the matrix in terms of the new basis.

I gave it a try by finding the eigenvalues of the matrix (the one to be written in the new basis), and now my doubt is whether I should use the corresponding eigenvectors, and if I use them, what to do next.

I tried looking for answers and found a YouTube link (https://www.youtube.com/watch?v=s4c5LQ5a4ek&t=530s), but was unable to follow it.

I need to clear up this doubt, as my professor said it will play a huge role in Quantum Mechanics that diagonalization of matrices, or similarity transformations, represent a change of basis.

Edit: I know the three important properties for matrices to be similar. Being a physics student, I would appreciate every possible explanation of this.
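
As a concrete illustration of the statement (my own toy example in Python): if the new basis consists of the eigenvectors $V$ of $A$, the same operator written in that basis is $V^{-1} A V$, which is diagonal with the eigenvalues on the diagonal.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])        # arbitrary 2x2 example in the standard basis

evals, V = np.linalg.eig(A)       # columns of V are the eigenvectors
D = np.linalg.inv(V) @ A @ V      # the same operator in the eigenvector basis

print(evals)                      # the eigenvalues (here 3 and 1)
print(np.round(D, 12))            # diagonal matrix of those eigenvalues
```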

Integral inequalities of complex function

Posted: 31 Mar 2021 08:40 PM PDT

Why do these inequalities hold? When does equality hold? \begin{align} \left\vert\,{\int_{c\ \pm\ {\rm i}T}^{r\ \pm\ {\rm i}T} \frac{x^{-s}}{s}\,{\rm d}s}\,\right\vert\ & \leq\ \int_{c}^\infty \frac{x^{-\sigma}}{\sqrt{\sigma^{2} + T^{2}}}\,{\rm d}\sigma \\[4mm] \left\vert\,{\int_{c\ -\ {\rm i}T}^{r\ +\ {\rm i}T} \frac{x^{-s}}{s}\,{\rm d}s}\,\right\vert\ & \leq\ \pi x^{-c} \end{align}

for $ x > 1,\ r > c$.

Image of a region under the Möbius transformation $f(z) = \frac{z+1}{i(z-1)}$

Posted: 31 Mar 2021 09:06 PM PDT

I have a function $$f(z) = \frac{z+1}{i(z-1)}.$$ And I have a triangle enclosed by the vertices $0$, $1$ and $i$. I am trying to draw the image of the triangle under $f$ on the standard complex plane.

I know that $f(0) = i$, $f(i) = -1$, and $f(1)$ goes to infinity. But I'm having trouble drawing the actual image, especially since $f(1)$ goes to infinity.

I also know that $f$ is a Möbius transformation. Is there a trick to determining the image?
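
One trick: a Möbius transformation maps lines and circles to lines and circles, so it suffices to track a few points along each edge of the triangle and identify which line or circular arc they determine. A quick numerical look (Python sketch):

```python
import numpy as np

f = lambda z: (z + 1) / (1j * (z - 1))

t = np.linspace(0.1, 0.9, 5)              # interior edge samples, avoiding z = 1
edges = {"0 -> 1": t + 0j,
         "1 -> i": 1 + t * (1j - 1),
         "i -> 0": 1j * (1 - t)}

for name, zs in edges.items():
    print(name, np.round(f(zs), 3))
# Three image points per edge pin down each image line/arc; the two edges
# meeting at z = 1 map to unbounded arcs, since f(1) = infinity.
```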

Pick r from n with arrangement

Posted: 31 Mar 2021 09:17 PM PDT

I'm self-learning combinatorics and encountered this problem.

How many distinct $4$-digit numbers can be formed by picking digits from $1,3,3,7,7,8$?

My idea is to permute all six digits, then eliminate the duplicate $3$s and $7$s, giving ${6!\over 2!\times 2!} = 180$; then take the first $4$ digits and eliminate the arrangements of the last two, giving $\frac{180}{2!} = 90$.

I don't think this is correct, since the second elimination overcounts.

How do I approach this kind of problem?
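
Since the pool is small, a brute-force enumeration is a reliable way to test any counting approach (Python sketch):

```python
from itertools import permutations

digits = [1, 3, 3, 7, 7, 8]
# distinct ordered 4-tuples drawn from the multiset = distinct 4-digit numbers
numbers = set(permutations(digits, 4))
print(len(numbers))
```

If a pencil-and-paper count disagrees with this, the division step is the likely culprit: dividing by $2!$ assumes the two discarded digits are always distinct, but they can be equal (e.g. the two $3$s), in which case there is only one arrangement of them.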

Solving a first order quasilinear PDEs using the idea of characteristics

Posted: 31 Mar 2021 08:44 PM PDT

I was considering the following problem:

$$u_t+uu_x=1$$

Then I put this in the form $\frac{du}{dt}=1$, $\frac{dx}{dt}=u$, and deduced that the general form would be $$u=t+f\left(x+\tfrac{1}{2}t^2-ut\right)$$

for some arbitrary function $f$ that should be determined via initial condition. Up to this step I am confident that I am correct.

Problem:

So as a sanity check I wanted to make sure the $u$ I found indeed would satisfy the PDE. So I calculated its partial derivatives as follows:

$u_t=1+(t-u)f'(x+\frac{1}{2}t^2-ut)$ and $u_x=f'(x+\frac{1}{2}t^2-ut)$. Plugging everything in, I have

$$u_t+uu_x=1+tf'$$

But I was expecting it to be just $1$. Where have I gone wrong? Have I used the chain rule on the partial derivatives incorrectly? Many thanks in advance!

Edit

I see where my mistake is now: I needed to differentiate implicitly as well, since $u$ itself appears in the argument of $f$.
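
The corrected check can be automated with implicit differentiation: writing $F(x,t,u) = u - t - f\!\left(x + \tfrac12 t^2 - ut\right) = 0$ and using $u_t = -F_t/F_u$, $u_x = -F_x/F_u$, the PDE comes out identically (a sympy sketch):

```python
import sympy as sp

x, t, u = sp.symbols('x t u')
f = sp.Function('f')

F = u - t - f(x + t**2 / 2 - u * t)      # implicit relation F(x, t, u) = 0
u_t = -sp.diff(F, t) / sp.diff(F, u)
u_x = -sp.diff(F, x) / sp.diff(F, u)

print(sp.simplify(u_t + u * u_x))        # prints 1, verifying the PDE
```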

Calculate the integral $\int_0^{\infty} \frac{(1-at^2)\cos(nt)}{(1+bt^2)^3}\,dt$

Posted: 31 Mar 2021 08:45 PM PDT

How do I calculate the following definite integral of a cosine times a rational function? $$ \int_{0}^{\infty}\frac{\left(1 - at^{2}\right)\cos\left(nt\right)}{\left(1 + bt^{2}\right)^{3}}\,{\rm d}t $$

Here $a, b$ and $n$ are positive constants, greater than $1$.
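
While hunting for a closed form (residue calculus is one natural route, since the integrand is even and decays like $t^{-4}$), a numerical reference value is handy for checking any candidate answer (Python sketch, arbitrary parameter values):

```python
import numpy as np
from scipy.integrate import quad

a, b, n = 2.0, 3.0, 1.5                     # arbitrary test values
integrand = lambda t: (1 - a * t**2) * np.cos(n * t) / (1 + b * t**2) ** 3
val, err = quad(integrand, 0, np.inf, limit=200)
print(val, err)                             # reference value, error estimate
```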

Are there any known infinite series of rational terms that are just irrational (not transcendental)?

Posted: 31 Mar 2021 09:09 PM PDT

I have probably encountered hundreds of infinite series where each term is rational. In each case (as far as I can remember), the value of the infinite series was either rational or transcendental.

For example, some simple cases include: $$\begin{align} \sum_{r=1}^{\infty}\frac{1}{2^r}&=1\\ \sum_{r=1}^{\infty}\frac{1}{r^2}&=\frac{\pi^2}{6}\\ \sum_{r=1}^{\infty}\frac{1}{r^2+1}&=\frac{1}{2}(\pi\coth\pi-1). \end{align}$$ I realize it is not known that infinite series of rational terms can only be rational or transcendental; otherwise we wouldn't say that $\zeta(3)$ and other constants are irrational, we'd immediately be able to say they are transcendental. So I'm not asking that. I'm asking whether anyone knows of an infinite series of rational terms that is just irrational, not transcendental.

Thank you for your help.

Prove that $\forall r \in \Bbb Q$, if $3r-\frac12s\in\Bbb{\bar Q}$ then $s\in\Bbb{\bar Q}$

Posted: 31 Mar 2021 08:54 PM PDT

Prove that $\forall r \in \Bbb Q$, if $3r-\frac12s\in\Bbb{\bar Q}$ then $s\in\Bbb{\bar Q}$.

I'm unsure about the notation in this proof (specifically what $\Bbb{\bar Q}$ means) and how to go about it. Any help would be appreciated.

Semi-simple and simple Lie groups: $SO(n)$ for $n$ even

Posted: 31 Mar 2021 08:58 PM PDT

It is stated in the literature that $SO(n)$ for $n$ even is semi-simple or simple, and either of these means it has no non-identity abelian factor groups; but $\{I,-I\}$ is an order-2 abelian factor group contained in $SO(n)$, which contradicts this. Also, if we eliminate this one possibility, say by passing to $SO(n)/\{I,-I\}$ for even $n>1$, I would say $SO(n)$ would be simple and not semi-simple. Can anyone give an example of why these (my) statements are not true?

It is stated in the literature that $SO(n)/\{I,-I\}$, $n$ even, is simple except for $n=4$, which is stated to be only semi-simple. How can that be? In particular, in the natural $4\times 4$ matrix representation, what exactly is a possible proper normal subgroup? It can't be restricted to the space of any $2$- or $3$-dimensional submatrix, that is, where every element of the normal subgroup has a $1$ on certain diagonal entries. Could a normal subgroup be taken as a finite group? No; but I found an answer in Lie Algebras for Physicists by Ashok Das and Susumu Okubo (kupdf.net_das-amp-okubo-lie-groups-and-lie-algebras-for-physicist.pdf, pages 93-4): the subgroups $\exp(a_1J_1+a_2J_2+a_3J_3)$ for all constants $a_i$, or the same with $J_i$ replaced by $K_i$, since $[J_i,K_j]=0$ and $SO(4)$ is the direct product of the two $SO(3)$-type groups.

Now I would like a general proof of why the same type of analysis could not yield proper normal subgroups in $SO(n)$ for $n>4$. I think a similar type of analysis possibly holds for the discrete finite alternating permutation groups.

What is the proper way to exclude multiple specific numbers from a set?

Posted: 31 Mar 2021 09:09 PM PDT

I'm working on some homework, and my google-fu isn't helping me with this question. I've already solved it, but I need to put it into proper notation to get full credit. I've figured out that the range is $[-5,\infty)$, but I can't figure out the proper notation; nothing looks right to me. I feel like the answer should be $\mathbb R\setminus \{(-\infty, -5)\}$?

(The question in question: Find the largest possible subset $A$ of $\mathbb R$ that will make the following function well-defined: $A\to \mathbb R$ given by $f(x) = \sqrt{3x+15}$.) The set on which the function is well-defined runs from $-5$ to infinity, but I don't know how to say it "formally".

Can a function have a domain and codomain of binary numbers?

Posted: 31 Mar 2021 09:10 PM PDT

I saw a thread regarding functions and I have a related question.

Is it possible to have a function where the domain and the codomain are binary numbers?

For a number $x\in \{0, 1, \dots, 2^n-1\}$ we have the binary representation \begin{align} x &= x_02^0 + x_12^1 + x_22^2 +\ldots+ x_{n-1}2^{n-1} \\ &= x_0 x_1 \cdots x_{n-1} \end{align} where $x_0,\ldots,x_{n-1}\in\{0,1\}$. We also have a second binary number \begin{align} y &= y_02^0 + y_12^1 + y_22^2 +\ldots+ y_{n-1}2^{n-1} \\ &= y_0 y_1 \cdots y_{n-1} \end{align} where $y_0,\ldots,y_{n-1}\in\{0,1\}$.

Question 1:

Can we now have a function with the domain $\{x\}$ and the codomain $\{y\}$? I.e. $$ f:\{x\} \rightarrow \{y\} \tag 1 $$

Question 2:

Is the function notation in $(1)$ equivalent to the following: $$ f:\{x_02^0 + x_12^1 + x_22^2 +\ldots+ x_{n-1}2^{n-1}\} \rightarrow \{y_02^0 + y_12^1 + y_22^2 +\ldots+ y_{n-1}2^{n-1}\} \tag 2 $$ $$ f:\{x_0x_1\cdots x_{n-1}\} \rightarrow \{y_0y_1\cdots y_{n-1}\} \tag 3 $$ If so, which notation is most correct/common?
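
If it helps, the construction is unproblematic in practice; here is a toy function whose domain and codomain are $n$-bit strings, using the question's least-significant-bit-first convention (the names are my own):

```python
def to_bits(x: int, n: int):
    """Bits x_0 ... x_{n-1}, least significant first."""
    return tuple((x >> i) & 1 for i in range(n))

def from_bits(bits) -> int:
    return sum(b << i for i, b in enumerate(bits))

def f(bits):
    """A function from n-bit strings to n-bit strings: add 1 mod 2^n."""
    n = len(bits)
    return to_bits((from_bits(bits) + 1) % (1 << n), n)

print(f((1, 1, 0)))   # 3 + 1 = 4  ->  (0, 0, 1)
```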

convex hull of a lower hemicontinuous correspondence is lower hemicontinuous

Posted: 31 Mar 2021 09:01 PM PDT

Let $(X, \mathcal{T})$ and $(Y, \mathcal{T}')$ be two topological spaces.

We call a correspondence (set-valued function) $F: X \rightarrow 2^Y$ lower semicontinuous if for every $G \in \mathcal{T}'$, $\{x \in X\mid F(x) \cap G \neq \emptyset\} \in \mathcal{T}$ holds.

Consider a lower semi-continuous correspondence $F: X \rightarrow 2^Y: x \mapsto F(x)$. Show that $c(F): X \rightarrow 2^Y: x \mapsto c(F(x))$ , where $c(A)$ denotes the convex hull of $A$ in $Y$, is also lower semi-continuous.

This is a claim I came across in C.D. Aliprantis and K.C. Border's "Infinite Dimensional Analysis: A Hitchhiker's Guide". It is also a proposition in Ernest Michael's first paper on Continuous selections. However, I'm failing to prove this by just using the definitions I know.

How PageRank relates to the power iteration method

Posted: 31 Mar 2021 09:05 PM PDT

I do not see how PageRank relates to the power method. For PageRank we are looking for the stable steady-state vector of a Markov (transition) matrix, and the matrix already has an eigenvalue equal to one; so why is repeated multiplication used throughout the iterations to converge to that vector, given that the only things that change during the iterations are the components along the eigenvectors whose eigenvalues have absolute value less than $1$, which die out?

Therefore, why is this not done by factoring the Markov matrix to find the eigenvector corresponding to eigenvalue $1$, instead of the iterative multiplication approach?
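
Both routes compute the same vector; power iteration is preferred at web scale because the matrix is enormous and sparse, so repeated matrix-vector products are cheap while a full eigendecomposition is not. A small comparison (Python sketch with a toy column-stochastic matrix of my own):

```python
import numpy as np

G = np.array([[0.0, 0.0, 1.0],
              [0.5, 0.0, 0.0],
              [0.5, 1.0, 0.0]])     # toy link matrix, columns sum to 1

# power iteration: only matrix-vector products, the workhorse at scale
v = np.ones(3) / 3
for _ in range(200):
    v = G @ v
print(v)

# direct route: eigenvector for the eigenvalue 1, fine for small matrices
w, V = np.linalg.eig(G)
u = np.real(V[:, np.argmin(np.abs(w - 1))])
print(u / u.sum())                  # the same stationary vector
```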
