Sunday, July 11, 2021

Recent Questions - Mathematics Stack Exchange

To find gradient of cost function using Wirtinger calculus

Posted: 11 Jul 2021 09:37 PM PDT

I want to compute the gradient of the following cost function: $f(g,g^*) = \|f-|Ag|\|^2_2$
with respect to $g$, using Wirtinger calculus (i.e., when differentiating $f(g,g^*)$ with respect to $g$, the conjugate $g^*$ is treated as a constant, and vice versa).

Here $\|\cdot\|_{2}$ is the $L_2$ norm, $g$ is an $n\times1$ vector with complex conjugate $g^*$, $f$ is a real positive $m\times1$ vector, and $A\in L(\mathbb{R}^n,\mathbb{R}^m)$ is a linear operator mapping $n\times1$ vectors to $m\times1$ vectors, with adjoint $A^{\dagger}$ defined so that $\langle g_{1},Ag_{2}\rangle=\langle A^\dagger g_{1}, g_{2}\rangle$.

Any help regarding this will be appreciated.

Give all the nodes on a tree a positive number $v_i$ such that no two adjacent nodes share the same number; what is the minimum $\sum v_i$?

Posted: 11 Jul 2021 09:46 PM PDT

Suppose you have to give all the nodes on a tree a positive number $v_i$ such that no two adjacent nodes share the same number. What is the minimum $\sum v_i$?

I came across this problem on a programming platform. I solved it, but I have a few doubts concerning it.

  1. When I was solving this problem, I thought that $\max(v_i) = 3$ was always enough, but apparently there are cases where $\max(v_i)$ has to be $4$. Could anyone provide an example of such a tree?
  2. Is there a bound on $\max(v_i)$? If yes, why?

If this is a well-known problem (especially on a tree), please provide me with some reference links, because I can't find any. Thanks in advance.

Example of $\max(v_i)$ being $3$:

(image of an example tree omitted)
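For what it's worth, the problem exactly as stated can be brute-forced on small trees; the helper below (`min_label_sum`, my own sketch, not from the platform) enumerates all labelings with values up to 4. Note that with only the adjacent-distinct constraint a tree is bipartite, so two labels always suffice; the platform problem presumably carried an additional constraint that forces $\max(v_i)$ up to $3$ or $4$.

```python
from itertools import product

def min_label_sum(n, edges, max_label=4):
    """Brute-force the minimum sum of positive labels on an n-node tree
    such that adjacent nodes get distinct labels."""
    best = None
    for labels in product(range(1, max_label + 1), repeat=n):
        if all(labels[u] != labels[v] for u, v in edges):
            s = sum(labels)
            if best is None or s < best:
                best = s
    return best

# Path on 3 nodes: optimal labeling is 1-2-1, sum 4.
print(min_label_sum(3, [(0, 1), (1, 2)]))                  # 4
# Star with centre 0 and four leaves: 2 at the centre, 1 on leaves, sum 6.
print(min_label_sum(5, [(0, 1), (0, 2), (0, 3), (0, 4)]))  # 6
```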

How to match a table to a contour diagram.

Posted: 11 Jul 2021 09:15 PM PDT

(picture of the tables and the contour diagrams omitted)

I need to know how to match contour diagrams to tables. I'm really at a loss for how to do this; it wasn't discussed in my class, and there's nothing about it in my textbook, so I feel like I'm missing something. I know two of my answers are right (I tried guessing), but I don't know which ones. If someone could explain the steps to solving this, I would greatly appreciate it.

Let $f:\mathbb{R}\to\mathbb{R}$ be a twice continuously differentiable function with $f(0) = f'(0) = 0$. If $|f''(x)| \le 1$, prove that $|f(x)| \le 1/2$ for $x \in [-1, 1]$.

Posted: 11 Jul 2021 09:05 PM PDT

Let $f : \mathbb{R} \to \mathbb{R}$ be a twice continuously differentiable function and suppose $f(0) = f'(0) = 0$. If $|f''(x)| \le 1$ for all $x \in \mathbb{R}$, then prove that $|f(x)| \le 1/2$ for all $x \in [-1, 1]$.
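A numerical sanity check (not a proof): Taylor's theorem with integral remainder gives $f(x)=\int_0^x (x-t)f''(t)\,dt$, so $|f(x)|\le x^2/2$, and the extremal candidate $f(x)=x^2/2$ attains the claimed bound:

```python
# f(x) = x^2 / 2 satisfies f(0) = f'(0) = 0 and |f''| = 1 everywhere,
# and |f| on [-1, 1] peaks at exactly 1/2, matching the claimed bound.
xs = [i / 1000 - 1 for i in range(2001)]        # grid on [-1, 1]
max_abs = max(abs(x * x / 2) for x in xs)
print(max_abs)  # 0.5
```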

Given that n and m are integers, if $n^2 + 1 = 2m$, prove that m is the sum of the squares of 2 non-negative integers.

Posted: 11 Jul 2021 09:44 PM PDT

I have absolutely no idea how to approach the question posed in the title, and would like a hint towards the answer. I tried multiple approaches, such as multiplying both sides by 2, or subtracting 2 from both sides to make the LHS a difference of squares, or adding $2n$ to both sides to make the LHS a perfect square. I had no idea how to progress by doing any of these methods, though.
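Not an answer, just a hint-sized numerical experiment (the identity tested here is my own guess): $n^2+1=2m$ forces $n$ to be odd, and for odd $n$ one can check whether $m = \left(\frac{n+1}{2}\right)^2 + \left(\frac{n-1}{2}\right)^2$:

```python
# If n^2 + 1 = 2m then n must be odd (since n^2 + 1 is even).  Check
# numerically that m then splits as a sum of two squares of
# non-negative integers.
for n in range(1, 200, 2):                 # odd n only
    m = (n * n + 1) // 2
    a, b = (n + 1) // 2, (n - 1) // 2
    assert m == a * a + b * b
print("identity holds for all odd n < 200")
```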

Geodesics on oblique helicoid using isometry

Posted: 11 Jul 2021 09:02 PM PDT

I am trying to find the nontrivial geodesics on an oblique (non-minimal) helicoid surface apart from the rulings. Instead of using the usual geodesic equations, and also because I was interested in seeing the correspondence between geodesics through an isometry in this case, I looked at using the hyperboloid of one sheet (of revolution), which is isometric to the oblique helicoid. By the hyperboloid I mean the surface given by the equation $$\dfrac{x^2+y^2}{a^2}-\dfrac{z^2}{b^2} = 1$$

The parametrisation of the oblique helicoid considered is this:

\begin{equation} M(u,v) = \bigg\lbrace\dfrac{ub}{\Delta}\cos{\frac{v}{b}}, \dfrac{ub}{\Delta}\sin{\frac{v}{b}}, \dfrac{au}{\Delta}+v\bigg\rbrace \end{equation}

where $\Delta = \sqrt{a^2+b^2}$. The central circle on the hyperboloid is a geodesic which corresponds to the straight axis of the oblique helicoid. But since the hyperboloid is also a surface of revolution, its meridians, which are hyperbolas, are geodesics as well. I am trying to see what the images of these geodesics are on the oblique helicoid, but haven't had any luck. Any hints or suggestions are welcome. Thanks in advance.

About random variables and expected value

Posted: 11 Jul 2021 09:38 PM PDT

Let $X$ and $Y$ be two independent random variables defined on $(\Omega, \mathcal G, P)$. If $M \in \mathbb B(\mathbb R)$ prove that $$ \int_{\{Y \in M\}} X dP = E(X) P_Y(M), $$ where $P_Y$ is the probability measure induced by $Y$ on $(\mathbb R, \mathbb B(\mathbb R))$.

So far I tried to work with the RHS: $E(X)P_Y(M) = (\int_{\Omega}X \,dP)\, P_Y(M) = \int_{\Omega}P_Y(M)\, X \,dP = \int_{\Omega}P(\{ Y \in M\})\, X \,dP$, but I don't know what else to do. Also, I don't see how the fact that $X$ and $Y$ are independent can help.
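A toy discrete model may make the role of independence visible; this sketch (the laws of $X$ and $Y$ below are made up) checks the identity $E[X\,\mathbf{1}_{\{Y\in M\}}] = E(X)\,P_Y(M)$ exactly with rational arithmetic:

```python
from itertools import product
from fractions import Fraction

# Toy discrete model: X and Y independent, each with an explicit law.
# Check  E[X * 1_{Y in M}] == E[X] * P(Y in M)  exactly with fractions.
px = {1: Fraction(1, 4), 2: Fraction(3, 4)}          # law of X
py = {0: Fraction(1, 3), 5: Fraction(2, 3)}          # law of Y
M = {5}

lhs = sum(x * p * q for (x, p), (y, q) in product(px.items(), py.items())
          if y in M)
ex = sum(x * p for x, p in px.items())
pym = sum(q for y, q in py.items() if y in M)
print(lhs == ex * pym)  # True: independence factorizes the joint law
```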

Does $a \equiv b \equiv 5 \pmod {12}$ imply $A \equiv 3 \pmod 4$, under the given conditions?

Posted: 11 Jul 2021 09:24 PM PDT

Let $a$ be prime, and let $b$ be a positive integer.

Suppose that I know that $a \equiv b \equiv 1 \pmod 4$ holds in general.

Additionally, assume that I know that the following biconditionals hold: $$(A \equiv 1 \pmod 4) \iff (a \equiv b \pmod 8)$$ $$(A \equiv 3 \pmod 4) \iff (a \equiv b + 4 \pmod 8)$$

If I know that $a \equiv b \equiv 5 \pmod {12}$, then what can I conclude about $A$?

MY ATTEMPT

Since $a \equiv b \equiv 1 \pmod 4$ holds in general, then $4 \mid (a - b)$.

Suppose to the contrary that $A \equiv 1 \pmod 4$. This means that $a \equiv b \pmod 8$, which is equivalent to $8 \mid (a - b)$.

But we know that $a \equiv b \equiv 5 \pmod {12}$. In particular, $4 \mid 12 \mid (a - b)$. This contradicts (?) $8 \mid (a - b)$. Hence, we conclude that $$a \equiv b \equiv 5 \pmod {12} \implies A \equiv 3 \pmod 4.$$

QUESTION

Is this argument logically sound? If not, how can it be mended so as to produce a valid proof?
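One way to probe the argument (assuming the only constraints on $a, b$ are the stated congruences) is to search for pairs with $a \equiv b \equiv 5 \pmod{12}$ and $8 \mid (a-b)$; if such pairs exist, then $12 \mid (a-b)$ does not contradict $8 \mid (a-b)$:

```python
# Search for a prime a and positive integer b with
# a ≡ b ≡ 5 (mod 12) and 8 | (a - b): such a pair shows that
# 12 | (a - b) is compatible with 8 | (a - b).
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

pairs = [(a, b) for a in range(5, 200, 12) if is_prime(a)
         for b in range(5, 200, 12) if a != b and (a - b) % 8 == 0]
print(pairs[0])  # (5, 29)
```

The search immediately finds $(a,b) = (5,29)$, with $29 - 5 = 24$ divisible by both $8$ and $12$, so the claimed contradiction does not follow from the congruences alone.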

If I "randomly" come up with a curve, how do I obtain its parametrization?

Posted: 11 Jul 2021 09:11 PM PDT

Sorry if this question is too cumbersome; I am still a bit lost on the basics of differential geometry. Given the curve $(\cos t,\sin 3t)$, how do I obtain its arc-length parametrization? I've been thinking about using the relation $\tilde\gamma(t)=\gamma(\phi(t))$ and comparing it with the arc-length parametrization. I'm a bit confused.

If A is the 4 by 4 matrix of ones, find the eigenvalues and the determinant of A−I

Posted: 11 Jul 2021 08:45 PM PDT

So I want to find the eigenvalues and eigenvectors of the matrix with all entries equal to $1$:

\begin{bmatrix}1&1&1&1\\1&1&1&1\\1&1&1&1\\1&1&1&1\end{bmatrix}

Row reduction leaves only one independent row:

\begin{bmatrix}1&1&1&1\\0&0&0&0\\0&0&0&0\\0&0&0&0\end{bmatrix} Now, let's assume $\lambda = 1$.

$A-\lambda I$ would give me

\begin{bmatrix}0&1&1&1\\0&-1&0&0\\0&0&-1&0\\0&0&0&-1\end{bmatrix} with eigenvalues $-1,-1,-1$ and $3$.

Ultimately, $\lambda_1=3$ and $\lambda_2=-1$.

But this is something I have assumed; how can I find the eigenvalues and eigenvectors by a systematic method, and what steps should I take from here? Thanks in advance.
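One systematic check that avoids guessing (a sketch; the trial vectors are standard choices for the all-ones matrix): apply $A - I$ to $(1,1,1,1)$ and to difference vectors like $(1,-1,0,0)$.

```python
# A is the 4x4 all-ones matrix; check eigenvectors of A - I directly.
def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]

A_minus_I = [[1 - (i == j) for j in range(4)] for i in range(4)]

ones = [1, 1, 1, 1]
diff = [1, -1, 0, 0]
print(matvec(A_minus_I, ones))   # [3, 3, 3, 3]  -> eigenvalue 3
print(matvec(A_minus_I, diff))   # [-1, 1, 0, 0] -> eigenvalue -1
```

Since $(1,1,1,1)$ is scaled by $3$ and three independent difference vectors are each scaled by $-1$, the eigenvalues of $A-I$ are $3,-1,-1,-1$ and $\det(A-I) = 3\cdot(-1)^3 = -3$.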

Is the series $\sum_{i=k}^\infty\frac{\sin\left(\frac{x}{i}\right)}{i}$ bounded?

Posted: 11 Jul 2021 09:05 PM PDT

Comparison with $\frac{1}{i^2}$ shows that the series $$\sum_{i=k}^\infty\frac{\sin\left(\frac{x}{i}\right)}{i}$$ is convergent. However, the function of $x$ thus obtained appears to be bounded, with the bound approaching zero in $k$. I have no idea how to prove this; every method I know gives me no better a bound than the obvious $\frac{\pi^2x}{6}$. My suspicion is that prior to reaching $\left|\frac{x}{i}\right|<1$, the values of $\frac{x}{i}$ modulo $2\pi$ have an asymptotic distribution which helps, similarly to other identities with alternating signs bounded by $\frac{1}{i}$.

If $X^{\star} \times Y^{\star}$ is a KC space, then $X\times Y$ is a $k$-space

Posted: 11 Jul 2021 09:35 PM PDT

The following theorem is found in the article "ON KC AND k-SPACES", A. García Maynez, 15 No. 1 (1975), 33-50.

Theorem 3.5: Let $X, Y$ be topological spaces. If $X^{\star} \times Y^{\star}$ is KC then $X\times Y$ is a $k$-space

Definitions:

  1. $X$ is a $KC$ space if every compact $K\subset X$ is closed.
  2. Let $A⊂X$. Then $A$ is $k$-closed if for every compact $K⊂X$, the set $A∩K$ is closed in $K$.
  3. $X$ is a $k$-space if every $k$-closed set of $X$ is a closed set in $X$.
  4. Let $C\subset X$. Then $C$ is compactly closed if for every compact and closed $K\subset X$, $C\cap K$ is compact.

The article states that Theorem 3.5 is a consequence of Theorem 3.4:

Theorem 3.4: Let $X,Y$ be non-compact spaces and let $Y^{\star}=Y\cup \{ \infty \}$ be the one-point compactification of $Y$. Assume $Y$ is a $KC$ space. Then a set $C\subset X\times Y$ is compactly closed in $X\times Y$ if and only if $C\cup (X\times \{ \infty \})$ is compactly closed in $X\times Y^{\star}$.

I fail to understand why Theorem 3.5 is a consequence of Theorem 3.4. Could someone explain it to me in more detail?

Fourier Series of f(x) = 1 (exponential version)

Posted: 11 Jul 2021 09:12 PM PDT

I'm trying to find the Fourier series of $f(x) = 1$ on $0 \leq x \leq \pi$ (with $f(x) = 0$ on $-\pi \leq x < 0$) using the exponential definition of the coefficients. I'm running into a divide-by-zero problem.

So I know that $c_n = \frac{1}{2\pi} \int_{-\pi}^{\pi} f(x) e^{-inx} \,dx$. Here $f(x) = 1$ on $[0,\pi]$, and the antiderivative of $e^{-inx}$ is $\frac{-e^{-inx}}{in}$. Integrating, I get $\frac{1}{2\pi} \left(\frac{-e^{-i\pi n}}{in} + \frac{1}{in}\right) = \frac{1}{2\pi} \cdot \frac{1-(-1)^{n}}{in}$. So this seems like it should be the formula for the $c_n$?

But I don't think it is. Because we have to take the sum $\sum_{n=-\infty}^{\infty} c_n e^{inx}$. But $c_0$ = $\infty$. Also my understanding is that for Fourier Series, $\sum_{n=-\infty}^{\infty} c_n e^{inx}$ = f(x) for all x. But Wolfram Alpha shows that's not the case here. So what am I doing wrong?
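A numeric cross-check (a sketch using midpoint Riemann sums, with step counts chosen arbitrarily) suggests where the issue lies: the formula with $in$ in the denominator only applies for $n \ne 0$, while $c_0 = \frac{1}{2\pi}\int_0^\pi 1\,dx = \frac12$, not $\infty$:

```python
import cmath, math

def c(n, steps=4000):
    """Midpoint-rule approximation of the n-th exponential Fourier
    coefficient of f = 1 on [0, pi], f = 0 on [-pi, 0)."""
    h = math.pi / steps
    return sum(cmath.exp(-1j * n * (k + 0.5) * h) * h
               for k in range(steps)) / (2 * math.pi)

print(abs(c(0) - 0.5) < 1e-9)          # True: c_0 = 1/2, not infinity
x = math.pi / 2
s = sum(c(n) * cmath.exp(1j * n * x) for n in range(-30, 31))
print(abs(s - 1) < 0.05)               # True: partial sum near f(pi/2) = 1
```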

If I have an odd/even complex-valued holomorphic function, can I say anything about the parity of its real/imaginary parts as $\mathbb{R}^2$ functions?

Posted: 11 Jul 2021 08:31 PM PDT

Let $f(z)=f(x_1,x_2)=f(x_1+i x_2)=u(x_1,x_2)+iv(x_1,x_2)$ be a holomorphic function, then by Cauchy-Riemann: $\frac{\partial u}{\partial x_1}(x_1, x_2)=\frac{\partial v}{\partial x_2}(x_1, x_2), \quad \frac{\partial u}{\partial x_2}(x_1, x_2)=-\frac{\partial v}{\partial x_1}(x_1, x_2).$

If I'm not wrong it means that if $u$ is odd in $x_i$, $v$ must be even in $x_i$, right?

My question is: suppose $$f(-z)=-f(z),$$ then it must be true that $u(-x_1,-x_2)=-u(x_1,x_2),v(-x_1,-x_2)=-v(x_1,x_2)$. But,

can I say anything about $u(-x_1,x_2),v(-x_1,x_2),u(x_1,-x_2),v(x_1,-x_2)$?

EDIT: From Ted Shifrin's comment I realize that there should be a relation with the power series. Let's say $$f(z)=\sum_{i=-\infty}^\infty c_i z^i.$$ As $$u=(f+\bar{f})/2=\sum_{i=-\infty}^\infty \frac{c_i z^i + \bar{c}_i \bar{z}^i}{2},$$ then if the $c_i$ are real, $u(x_1,-x_2)=u(x_1,x_2)$.

Is requiring the $c_i$ to be real a sufficient condition?

What should be the condition for $u(-x_1,x_2)=-u(x_1,x_2)$?
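These parity questions can be probed numerically on a concrete example; here $f=\sin$, which is odd and has real Taylor coefficients, and $\sin(x+iy)=\sin x\cosh y + i\cos x \sinh y$:

```python
import cmath

# f = sin is odd with real Taylor coefficients.  Write
# f(x + iy) = u(x, y) + i v(x, y) and test the conjectured symmetries.
def u(x, y):
    return cmath.sin(complex(x, y)).real

def v(x, y):
    return cmath.sin(complex(x, y)).imag

x, y = 0.7, 1.3
print(abs(u(x, -y) - u(x, y)) < 1e-12)    # real coefficients: u even in y
print(abs(u(-x, y) + u(x, y)) < 1e-12)    # f odd + real coeffs: u odd in x
print(abs(v(x, -y) + v(x, y)) < 1e-12)    # v odd in y
```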

On an example of an idempotent which is not a projection

Posted: 11 Jul 2021 09:32 PM PDT

I am working through a solution of the following exercise in Conway's functional analysis:

Let $\mathscr{H}$ be the 2D real Hilbert space $\mathbb{R}^2$, $\mathscr{M}=\{(x,0):x\in\mathbb{R}\}\subseteq\mathscr{H}$, and $\mathscr{N}=\{(x,x\tan{\theta}):x\in\mathbb{R}\}\subseteq\mathscr{H}$, where $0<\theta<\frac{\pi}{2}$ is fixed. Find a formula for the idempotent $E_\theta$ with range $\mathscr{M}$ and kernel $\mathscr{N}$. Also show that $\|E_\theta\|=(\sin{\theta})^{-1}$.

I am very confused by this question. I will write my attempt; can you please help me see why I can't get the solution?

My questions:

  1. $E_\theta$ is an operator that maps every point of $\mathbb{R}^2$ to the x-axis, so intuitively $E_\theta^2=E_\theta$. But how does one write an explicit formula for $E_\theta$? Isn't it $E_\theta(x,y)=(x,0)$, and if so, doesn't it fail to depend on $\theta$?
  2. $\ker(E_\theta)$ is the set of points such that $E_\theta(x,y)=(0,0)$, so it should be the y-axis; why then is it $\{(x,x\tan\theta): x \in \mathbb{R}\}$?
  3. How is $\|E_\theta\|=(\sin \theta)^{-1}$? Calculating $\|E_\theta\| = \sup_{\|v\|=1}\|E_\theta v\|$ requires knowing two things first: is the sup taken over the unit circle in $\mathbb{R}^2$, and what is the explicit formula for $E_\theta$, so that I know what the sup is taken over?
  4. This exercise is designed to give an example of an idempotent which is not a projection. But if $\ker E$ is the y-axis and $\operatorname{ran} E$ is the x-axis, then $\ker E = (\operatorname{ran} E)^{\perp}$ does hold, so how does it fail?
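It may help to test a candidate formula numerically. Decomposing $(x,y)=m+n$ with $m\in\mathscr{M}$, $n\in\mathscr{N}$ suggests $E_\theta(x,y) = (x - y\cot\theta,\,0)$, which does depend on $\theta$ and whose kernel is the line $y = x\tan\theta$ rather than the $y$-axis (the value of $\theta$ below is arbitrary):

```python
import math

theta = 0.6  # any angle in (0, pi/2)

def E(p):
    """Candidate idempotent with range the x-axis and kernel
    the line y = x*tan(theta): E(x, y) = (x - y/tan(theta), 0)."""
    x, y = p
    return (x - y / math.tan(theta), 0.0)

# E is idempotent and kills the kernel line.
assert E(E((2.0, 3.0))) == E((2.0, 3.0))
assert E((1.0, math.tan(theta))) == (0.0, 0.0)

# Operator norm over the unit circle matches 1/sin(theta).
norm = max(math.hypot(*E((math.cos(2 * math.pi * t / 1000),
                          math.sin(2 * math.pi * t / 1000))))
           for t in range(1000))
print(abs(norm - 1 / math.sin(theta)) < 1e-3)   # True
```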

False proofs that there are finitely many primes

Posted: 11 Jul 2021 08:50 PM PDT

There are numerous proofs of the infinitude of primes. I'm interested in FALSE proofs that there are finitely many primes. I looked online, but the only proof I found is this: https://jeremykun.com/2011/07/05/there-are-finitely-many-primes/ (Essentially, it falsely claims that there is a bijection between the square-free naturals and the power set of the set of primes). Does anyone know other nice fake proofs?

Combination of Addition and Multiplication of 1 and 0 done 13^12 times to never reach above 3^3^3^3^3

Posted: 11 Jul 2021 09:40 PM PDT

I have been doing this question for some time.

If the question is asking us to find the combinations of adding and multiplying 1 and 0, why can't it be $1+1+1+1\ldots$, but it can be $1+1=2$ or just $1$?

Below is the question:

Start with 0 and 1. Then form all combinations of adding and multiplying two of the numbers (with repetition) to get 0, 1, and 2. Repeat this step 13¹² more times. Show that the smallest natural number never reached is greater than ⁵3 (3 raised to itself 5 times).

Any tips for how to get to the answer or what this question is asking us would be welcome! Thanks.

Showing a ring is isomorphic to a localization of $R$ at $S$

Posted: 11 Jul 2021 08:41 PM PDT

Let $\mathbb{K}$ be a field. The domain of $f = \frac{g}h\in \mathbb{K}(x), $ where $g,h\in \mathbb{K}[x]$ is defined to be the set $Dom(f) := \{c\in \mathbb{K} : h(c) \neq 0\}.$ Let $S(c) := \{f\in \mathbb{K}(x) : c\in Dom(f)\}$ for $c\in \mathbb{K}.$ Show that for every $c\in \mathbb{K},$ $\mathbb{K}[x]_{(x-c)}\cong S(c),$ where $(x-c)$ is the smallest ideal of $\mathbb{K}[x]$ containing $x-c$ and $\mathbb{K}[x]_{(x-c)}$ is the localization of $\mathbb{K}[x]$ with respect to $(x-c)$ and its elements can be thought of as fractions with numerators in $\mathbb{K}[x]$ and denominators in $\mathbb{K}[x]\backslash (x-c).$

I know that localizations are unique in the sense that if $R$ is a commutative ring, $S$ is a multiplicatively closed subset of $R$, and $Q$ is a commutative ring so that there exists an injective homomorphism $\phi : R\to Q$ so that

  1. $\phi(x)$ is a unit of $Q$ for every $x\in S$ and every element of $Q$ is of the form $\phi(x)\phi(y)^{-1}$ for $x\in R, y\in S$
  2. If $\psi : R\to T$ is a morphism so that $\psi(x)$ is a unit of $T$ for all $x\in S,$ there exists a morphism $\tilde{\psi} : Q\to T$ so that $\tilde{\psi}\circ \phi = \psi,$

then $Q$ is isomorphic to the localization of $R$ at $S.$ So I think one way to prove part $3$ is to show these two conditions are satisfied for $S(c).$

Alternatively, is there a more "direct" approach to this problem? I was thinking of defining the map $\phi : \mathbb{K}[x]_{(x-c)}\to S(c), \phi(\frac{a}b) = \frac{a}b$. However, it seems that showing that this map is an isomorphism would also show that $S(c) = \mathbb{K}[x]_{(x-c)}.$ I know $S(c)$ is a subring of $\mathbb{K}(x)$ and hence is commutative. Let $R = \mathbb{K}[x]$ and $S = \mathbb{K}[x]\backslash (x-c).$ Define $\phi : R\to S(c)$ by $\phi(x) = \frac{x}1;$ observe that this map is clearly injective. For $f=g/h\in S(c)$ we have $h(c)\neq 0,$ so $h\not\in (x-c),$ i.e. $h\in S,$ and hence $f = \phi(g)\phi(h)^{-1}.$ Then define $\tilde{\psi} : S(c)\to T$ by $\tilde{\psi}(\frac{a}b) = \psi(a)\psi(b)^{-1},$ and one can verify that $\tilde{\psi}$ has the required properties.

Vector-space, Linear algebra, Span of a vector space

Posted: 11 Jul 2021 09:44 PM PDT

Question: Will the set of all linear combinations of a basis of a vector space give the span of that vector space?

This is what I have understood from the meaning of the span of a vector space:

Example: Say we have a vector space $V$, and it has two bases of dimension 3, as follows: $$\{a,b,c\} \quad ; \quad \{d,e,f\} $$ Then the span of $$V=\{ x_1(a)+x_2(b)+x_3(c); x_i \in \mathbb{R}\} \cup \{x_1(d)+x_2(e)+x_3(f);x_i \in \mathbb{R}\}$$

That is, the span of $V$ is the set of all linear combinations that can be formed from all the bases of $V$.

Question: Am I correct in understanding what a span of a vector space is?

Proving an inequality given $\frac{a}{b+c+1}+\frac{b}{c+a+1}+\frac{c}{a+b+1}\le 1$

Posted: 11 Jul 2021 09:30 PM PDT

Given that $a,b,c > 0$ are real numbers such that $$\frac{a}{b+c+1}+\frac{b}{c+a+1}+\frac{c}{a+b+1}\le 1,$$ prove that $$\frac{1}{b+c+1}+\frac{1}{c+a+1}+\frac{1}{a+b+1}\ge 1.$$


I first rewrote $$\frac{1}{a+b+1} = 1 - \frac{a+b}{a+b+1},$$ so the second inequality can be rewritten as $$\frac{b+c}{b+c+1} + \frac{c+a}{c+a+1} + \frac{a+b}{a+b+1} \le 2.$$ Cauchy-Schwarz gives us $$\sum \frac{a+b}{a+b+1} \geq \frac{(\sum \sqrt{a+b})^2}{\sum a+ b+ 1}.$$ That can be rewritten as $$\frac{2(a+b+c) + 2\sum \sqrt{(a+b)(a+c)}}{2(a+b+c) + 3},$$ which is greater than or equal to $$\frac{2(a+b+c) + 2 \sum(a + \sqrt{bc})}{2(a+b+c) + 3} = \frac{4(a+b+c) + 2 \sum \sqrt{bc}}{2(a+b+c) + 3} \geq 2,$$ which is the opposite of what I want. Additionally, I'm unsure of how to proceed from here.

What can be said about the coefficients of linear transformations which transform the real axis into the imaginary axis?

Posted: 11 Jul 2021 09:12 PM PDT

Question: What can be said about the coefficients of linear transformations which transform the real axis into the imaginary axis?

Thoughts: If I wanted to transform the real axis into itself, then I can show that I can do this using the cross ratio $(w,w_1,w_2,w_3)=(z,1,0,-1)$, and it turns out that the coefficients are real. So, if I wanted to transform the real axis into the imaginary axis, could I set up a cross ratio as $(z,1,0,-1)=(w,i,0,-i)$? But I don't have anything that tells me, for instance, that $1$ necessarily goes to $i$. So I am sort of stuck. I would like to do this using the cross ratio, but maybe that is the wrong approach? Any help is greatly appreciated! Thank you.

Convex subsets of $\mathbb{R^2}$

Posted: 11 Jul 2021 08:56 PM PDT

I have some troubles with proving that $A$ and $B$ are convex.

$A = \left\{(x,y)\in\mathbb{R}^2\,:\, y > \frac{1}{|x|}, x<0\right\} \quad \mbox{and}\quad B = \left\{(x,y)\in\mathbb{R}^2\,:\, y > \frac{1}{x}, x>0\right\}$.

The definition of convex is: a set $A$ is convex iff for all $a,b\in A$ and $\alpha\in[0,1]$, $\alpha a+(1−\alpha)b\in A$.

My attempt for $A$ (I think $B$ should be the same idea):

Let $a=(a_1,a_2),b=(b_1,b_2)\in A$, and $\alpha \in (0,1)$; it's clear that

$$\alpha a_1+(1-\alpha)b_1<0.$$

On the other hand, we know that

$$b_2> \frac{1}{|b_1|} \quad \mbox{and} \quad a_2> \frac{1}{|a_1|} $$

How can I arrange the inequalities? Any hint?
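A numerical spot-check of convexity (not a proof; the two sample points are arbitrary):

```python
# Spot-check: convex combinations of two points of A stay in A.
# A = {(x, y): y > 1/|x|, x < 0}, i.e. y > -1/x for x < 0.
def in_A(x, y):
    return x < 0 and y > 1 / abs(x)

a, b = (-1.0, 2.0), (-3.0, 0.5)
assert in_A(*a) and in_A(*b)
for k in range(101):
    t = k / 100
    x = t * a[0] + (1 - t) * b[0]
    y = t * a[1] + (1 - t) * b[1]
    assert in_A(x, y)
print("all sampled convex combinations of a and b lie in A")
```

This is consistent with the fact that $A$ is the strict epigraph of $x \mapsto -1/x$, which is convex on $x<0$.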

Show a matrix with the "even" rows of the binomial coefficients is invertible

Posted: 11 Jul 2021 09:43 PM PDT

I have a $n \times n$ matrix $\mathbf{M}$ defined as, \begin{equation}(\mathbf{M})_{ij} = \begin{cases} {2i \choose j} \quad 1 \le j \le 2i, 1 \le i \le n \\ 0 \quad \mathrm{otherwise} \end{cases} \end{equation}

I want to prove that this matrix is invertible, i.e., that the determinant is non-zero. I tried various approaches by induction, and some tricks from this question on Pascal matrices, but I was not able to reach a resolution.

What would be a good way to approach this?
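The claim can at least be verified exactly for small $n$ with rational arithmetic; this sketch suggests, but of course does not prove, invertibility in general:

```python
from fractions import Fraction
from math import comb

def det(M):
    """Exact determinant via Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    n, sign, d = len(M), 1, Fraction(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c]), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            sign = -sign
        d *= M[c][c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for j in range(c, n):
                M[r][j] -= f * M[c][j]
    return sign * d

for n in range(1, 8):
    M = [[comb(2 * i, j) if j <= 2 * i else 0
          for j in range(1, n + 1)] for i in range(1, n + 1)]
    assert det(M) != 0
print("determinant nonzero for n = 1..7")
```

For $n=1,2,3$ the determinants come out as $2, 8, 64$, which hints at a power-of-two pattern that might be worth trying to prove.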

2 standard 52-card decks: how many sequences have exactly 12 adjacent pairs of identical cards?

Posted: 11 Jul 2021 09:45 PM PDT

I am doing recreational math as a pastime. I like to do math exams and such and also to come up with math-puzzles on my own.

Recently I was doing the following MIT math exam:

https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-042j-mathematics-for-computer-science-fall-2010/exams/MIT6_042JF10_final_2004.pdf

It was super fun! :-)

Anyway, here is problem 7:

(picture of problem 7 of the exam omitted)

That problem got me thinking about the problem I described below.


Take 2 standard 52 card decks.

Then you can take those cards and form sequences.

There are a total of $104! / 2^{52}$ possible different sequences.

Of all of those, only $52!$ have all $52$ pairs of identical cards placed right next to one another, for example (Ad, Ad, Ah, Ah, As, As, ...).

How many sequences are there with exactly 12 adjacent pairs?

Can anyone help me out?
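A brute-force count for a miniature analogue (two copies each of only $d=3$ distinct cards; my own sketch) may help in guessing the inclusion-exclusion pattern:

```python
from itertools import permutations

def count_exact_pairs(d):
    """For 2 copies each of d distinct cards, count the distinct
    sequences by their number of adjacent identical pairs."""
    deck = [card for card in range(d) for _ in range(2)]
    seen = set(permutations(deck))        # distinct sequences
    counts = {}
    for s in seen:
        p = sum(1 for i in range(len(s) - 1) if s[i] == s[i + 1])
        counts[p] = counts.get(p, 0) + 1
    return counts

c = count_exact_pairs(3)
print(sum(c.values()))  # 90 = 6! / 2^3 distinct sequences
print(c[3])             # 6 = 3! sequences with all pairs adjacent
```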

Sum of terms in a reduced residue system

Posted: 11 Jul 2021 09:23 PM PDT

Problem: Prove that if $r_1, r_2, ... r_{\phi(m)}$ is a reduced residue system modulo m, and m is odd, then $r_1 + r_2 + ... + r_{\phi(m)} \equiv 0\mod m$.

I know that $r_1 + r_{\phi(m)} \equiv 0$ because $r_1 = 1$ and $r_{\phi(m)}=m-1$.

My intuition tells me that $r_2 + r_{\phi(m)-1}$ is also congruent to $0$, and so on. I can also see that, since $m$ is odd, the reduced residue system has an even number of elements, so each $r_i$ has a partner, namely $r_{\phi(m)-i+1}$.

How do I show this rigorously?
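The pairing idea can be checked computationally for small odd $m$; if $\gcd(r,m)=1$ then $\gcd(m-r,m)=1$, so $r \mapsto m-r$ pairs up the reduced residues (and $m$ odd rules out the fixed point $r = m/2$):

```python
from math import gcd

for m in range(3, 100, 2):                       # odd moduli
    residues = [r for r in range(1, m) if gcd(r, m) == 1]
    assert len(residues) % 2 == 0                # phi(m) is even for m > 2
    assert sum(residues) % m == 0                # the claimed congruence
    # the pairing r <-> m - r that drives the proof:
    assert all((m - r) in residues for r in residues)
print("sum of reduced residues ≡ 0 (mod m) for all odd m < 100")
```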

Is $\lim_{n \to \infty} n((1-\frac{1}{n})^n - \frac{1}{e}) = 0$?

Posted: 11 Jul 2021 09:28 PM PDT

Intuitively I think this statement is true, but I am unable to prove it. Can someone help me? If possible, I would like a bound on $\vert(1-\frac{1}{n})^n - \frac{1}{\mathrm{e}}\vert$ (or even $\vert{(1+\frac{1}{n})^n - \mathrm{e}}\vert$), because I think this is a good thing to know in general.
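Numerically the answer appears to be no: since $n\ln(1-\frac1n) = -1 - \frac{1}{2n} - O(n^{-2})$, we get $(1-\frac1n)^n = e^{-1}\big(1 - \frac{1}{2n} + O(n^{-2})\big)$, so $n\big((1-\frac1n)^n - \frac1e\big) \to -\frac{1}{2e} \approx -0.1839$. A quick check:

```python
import math

def g(n):
    # use log1p for accuracy: (1 - 1/n)^n = exp(n * log(1 - 1/n))
    return n * (math.exp(n * math.log1p(-1 / n)) - math.exp(-1))

for n in (10, 1000, 10**6):
    print(n, g(n))   # appears to approach -1/(2e) ≈ -0.18394, not 0
```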

Minimizing the expectation of the loss function

Posted: 11 Jul 2021 09:03 PM PDT

So I was reading Elements of Statistical Learning and found the following in the Statistical Decision Theory part, and did not understand it.

The expected (squared) prediction error. By conditioning on $X$, we can write EPE as $$EPE(f) = E_X E_{Y|X} ([Y − f(X)]^2\mid X) \qquad (2.11)$$ and we see that it suffices to minimize EPE pointwise: $$f(x) = \operatorname{arg\,min}_c E_{Y|X} ([Y − c]^2\mid X = x)$$


Can someone explain to me exactly what happened here, with proper mathematical formulae and some intuition as well? Is it correct to say that conditioning on $X$ amounts to treating $x$ as constant in some sense? If possible, please explain using a density and the definition of expectation.
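The pointwise minimization can be made concrete with a small discrete joint distribution (the one below is made up): for each $x$, the constant $c$ minimizing $E_{Y|X}([Y-c]^2 \mid X=x)$ is the conditional mean $E[Y \mid X=x]$, and conditioning on $X = x$ does indeed amount to fixing $x$ and averaging only over $Y$:

```python
from fractions import Fraction

# Made-up joint law P(X = x, Y = y) on four points.
joint = {(0, 1): Fraction(1, 4), (0, 3): Fraction(1, 4),
         (1, 2): Fraction(1, 8), (1, 6): Fraction(3, 8)}

def cond_epe(x, c):
    """E[(Y - c)^2 | X = x] for a candidate constant prediction c."""
    px = sum(p for (xx, _), p in joint.items() if xx == x)
    return sum(p * (y - c) ** 2 for (xx, y), p in joint.items()
               if xx == x) / px

for x in (0, 1):
    px = sum(p for (xx, _), p in joint.items() if xx == x)
    mean = sum(p * y for (xx, y), p in joint.items() if xx == x) / px
    # the conditional mean beats nearby candidate constants
    assert all(cond_epe(x, mean) <= cond_epe(x, mean + d)
               for d in (Fraction(-1), Fraction(1, 2), Fraction(2)))
print("conditional mean minimizes the conditional squared error")
```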

What exactly is a basis in linear algebra?

Posted: 11 Jul 2021 09:34 PM PDT

I have a brief understanding of bases. But I don't know if it is right or not. So, I just need someone to correct me if it's not.

When we look for the basis of the image of a matrix, we simply remove all the redundant vectors from the matrix, and keep the linearly independent column vectors. When we look for the basis of the kernel of a matrix, we remove all the redundant column vectors from the kernel, and keep the linearly independent column vectors.

Therefore, a basis is just a combination of all the linearly independent vectors.

By the way, is basis just the plural form of base?

Let me know if I am right.

Reduced Residue Systems: $r_1+r_2+...+r_{\phi(m)} \equiv0 \pmod m$

Posted: 11 Jul 2021 09:23 PM PDT

Prove that if $r_1,r_2,...,r_{\phi(m)}$ is a reduced residue system modulo $m$, and $m$ is odd, then $r_1+r_2+...+r_{\phi(m)} \equiv0 \pmod m$.
I am really confused as to where to even begin this problem or how to prove it.

Bounded and finite - examples of distinction

Posted: 11 Jul 2021 09:19 PM PDT

What is the distinction between a bounded function and a finite function? Is there an example of a function that satisfies only one of the two definitions below?

Definition. If $|f(x)|<+\infty\ \forall x \in E$, we say $f$ is finite.

Definition. If there exists a finite number $M$ such that $|f(x)| \leq M\ \forall x \in E$, we say $f$ is bounded.

Thank you.
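A standard separating example: $f(x)=1/x$ on $E=(0,1)$ is finite at every point but bounded by no single $M$, while $g(x)=\sin x$ is bounded (hence also finite). A quick illustration:

```python
import math

# f(x) = 1/x on E = (0, 1): finite at every sampled point, yet its
# values exceed any proposed bound M as x approaches 0.
samples = [10 ** (-k) for k in range(1, 10)]
assert all(math.isfinite(1 / x) for x in samples)       # finite everywhere
assert any(1 / x > 10 ** 6 for x in samples)            # no bound M works

# g(x) = sin(x) is bounded by M = 1 on all of R.
assert all(abs(math.sin(x)) <= 1 for x in samples)
print("1/x is finite but unbounded on (0,1); sin is bounded")
```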
