Saturday, April 10, 2021

Recent Questions - Mathematics Stack Exchange



$l^2:=\{x=(x_1,x_2,\dots,x_n,\dots)|x_1,x_2,\dots,x_n,\dots\in\mathbb{R},\sum_{k=1}^\infty x_k^2<\infty\}$ is separable.

Posted: 10 Apr 2021 08:24 PM PDT

I am reading a famous book by Kolmogorov and Fomin (4th Edition, translated from Russian to Japanese).

Fact:
Let $l^2:=\{x=(x_1,x_2,\dots,x_n,\dots)|x_1,x_2,\dots,x_n,\dots\in\mathbb{R},\sum_{k=1}^\infty x_k^2<\infty\}$.
Let $\rho(x,y):=\sqrt{\sum_{k=1}^\infty (y_k-x_k)^2}$ for any $x,y\in l^2$.
Then $(l^2,\rho)$ is a metric space.

The authors wrote the following proposition holds without a proof:

Proposition:
Let $A:=\{x=(x_1,x_2,\dots,x_n,\dots)|x_1,x_2,\dots,x_n,\dots\in\mathbb{Q}, x_i = 0 \text{ for all }i\geq n_0 \text{ for some }n_0\in\{1,2,\dots\}\}$.
Then $\overline{A}=l^2$.
Since $A$ is countable, $l^2$ is separable.

Is my proof of this proposition ok?

Proof:
Let $x\in l^2$.
Let $S:=\sum_{k=1}^\infty x_k^2$.
For any positive real number $\epsilon$, there is $N_0\in\{1,2,\dots\}$ such that $S - \sum_{k=1}^{N_0} x_k^2<\frac{\epsilon}{2}$.
So $x_{N_0+1}^2+x_{N_0+2}^2+\dots<\frac{\epsilon}{2}$.
For each $k\in\{1,2,\dots,N_0\}$, there exists a rational number $y_k$ such that $|y_k-x_k|<\sqrt{\frac{\epsilon}{2N_0}}$.
So, $(y_1-x_1)^2+\dots+(y_{N_0}-x_{N_0})^2<\frac{\epsilon}{2}$.
So, $(y_1-x_1)^2+\dots+(y_{N_0}-x_{N_0})^2+(0-x_{N_0+1})^2+(0-x_{N_0+2})^2+\dots<\frac{\epsilon}{2}+\frac{\epsilon}{2}=\epsilon$.
Let $y:=(y_1,y_2,\dots,y_{N_0},0,0,\dots)$. Then $y\in A$ and $\rho(x,y)<\sqrt{\epsilon}$.
Since $\epsilon$ was arbitrary, $x\in \overline{A}$.
So, $l^2=\overline{A}$.
Since $A$ is countable, $l^2$ is separable.
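A numeric sketch of the approximation step (my own illustration, not from the book): take $x_k = \sqrt{2}/k$, which lies in $l^2$ since $\sum 2/k^2 = \pi^2/3 < \infty$, truncate the tail, and round each kept coordinate to a nearby rational. The squared distance stays below any prescribed $\epsilon$.

```python
import math
from fractions import Fraction

def finite_rational_approx(x, n0, max_den=10**6):
    """Truncate the sequence x at n0 terms and round each kept coordinate
    to a nearby rational -- the element y of A constructed in the proof."""
    return [Fraction(x(k)).limit_denominator(max_den) for k in range(1, n0 + 1)]

# example: x_k = sqrt(2)/k, an element of l^2
x = lambda k: math.sqrt(2) / k

eps = 0.01
n0 = 401                      # tail sum_{k>n0} 2/k^2 < 2/n0 < eps/2
y = finite_rational_approx(x, n0)

# squared l^2 distance: rounding error on the head plus a bound on the tail
head = sum((float(yk) - x(k)) ** 2 for k, yk in enumerate(y, start=1))
tail_bound = 2 / n0
assert head + tail_bound < eps
```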

The following proposition is true since $A\subset B$ and $B$ is also countable.
Is my proof of the following proposition ok?

Proposition
Let $B:=\{x=(x_1,x_2,\dots,x_n,\dots)|x_1,x_2,\dots,x_n,\dots\in\mathbb{Q}\}$.
Then $\overline{B}=l^2$.
Since $B$ is countable, $l^2$ is separable.

Proof:
Let $x\in l^2$.
Let $\epsilon$ be any positive real number.
For each $k\in\{1,2,\dots\}$, there is a rational number $y_k$ such that $|y_k-x_k|<\sqrt{\frac{\epsilon}{2^k}}$.
Then, $\sum_{k=1}^\infty (y_k-x_k)^2 < \sum_{k=1}^\infty \frac{\epsilon}{2^k}=\epsilon$.
So, $x\in\overline{B}$.
So, $l^2=\overline{B}$.
Since $B$ is countable, $l^2$ is separable.

Does this Taylor series converge?

Posted: 10 Apr 2021 08:23 PM PDT

We have that there exists $b>0$ such that $|f^{(n)}(x)| \leq \frac{1}{b}$ for all $x \in \mathbb{R}$ and all integers $n \geq 0$. If a Taylor series is constructed for $f$ centered at $x=a$, does this Taylor series converge?

We can construct this Taylor series using the formula $\sum_{n=0}^{\infty}\frac{f^{(n)}(a)}{n!}(x-a)^n$. I am trying to think of a counterexample. Let $b=1$; then the sine function satisfies all of the conditions above. However, the Taylor series of the sine function does converge. If I choose any other value of $b$, I can scale the sine function (for instance, to $0.5\sin x$, $0.0001\sin x$, and so on), and it seems the resulting Taylor series always converges. Does this mean that the statement is true?
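For what it's worth, the uniform bound forces convergence via the Lagrange remainder, $|R_n(x)| \le \frac{1}{b}\frac{|x-a|^{n+1}}{(n+1)!} \to 0$ for every fixed $x$. A quick numeric sketch (with $f = \sin$, $b = 1$, $a = 0$, my own choices):

```python
import math

def remainder_bound(b, x, a, n):
    """Lagrange remainder bound when |f^(n)| <= 1/b for every n."""
    return (1 / b) * abs(x - a) ** (n + 1) / math.factorial(n + 1)

# the bound tends to 0 for any fixed x, forcing the Taylor series to converge
x, a, b = 3.0, 0.0, 1.0
bounds = [remainder_bound(b, x, a, n) for n in range(40)]
assert bounds[-1] < 1e-15

# sanity check with f = sin: partial sums approach sin(x)
partial = sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
              for k in range(20))
assert abs(partial - math.sin(x)) < 1e-10
```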

Practice exercise | Trees | Graph theory

Posted: 10 Apr 2021 08:16 PM PDT

Let $ G $ be a tree with 14 vertices of degree 1, and the degree of each nonterminal vertex is 4 or 5. Find the number of vertices of degree 4 and degree 5.

My attempt, summarized, is the following: let $x$ be the number of vertices of degree 4 and $y$ the number of vertices of degree 5. Note that there are $x + y + 14$ vertices, hence $x + y + 13$ edges, since $G$ is a tree. Applying the handshaking lemma (the sum of the degrees over all vertices is twice the number of edges) gives $4x + 5y + 1\cdot 14 = 2(x + y + 13) = 2x + 2y + 26$. But solving this yields $x = - \frac{3}{2} y + 6$. I don't know if this is okay or what I'm doing wrong.
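The derived relation can be cross-checked by brute force: rearranged, it reads $2x + 3y = 12$, and only nonnegative integer solutions can count vertices (a quick sketch, not part of the original post; one still has to check which solutions are realized by an actual tree).

```python
# enumerate nonnegative integer solutions of 2x + 3y = 12,
# the Diophantine form of the handshaking computation above
solutions = [(x, y) for x in range(13) for y in range(13)
             if 2 * x + 3 * y == 12]
print(solutions)  # [(0, 4), (3, 2), (6, 0)]
```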

Conformal transformation on Riemannian manifold with boundary

Posted: 10 Apr 2021 08:01 PM PDT

Consider a conformal transformation on an $n$-dimensional ($n>2$) Riemannian manifold with boundary. If the transformation restricts to the identity on a portion of the boundary, can one conclude that it is actually the identity map?

Winning strategy of a unique game

Posted: 10 Apr 2021 08:00 PM PDT

First let's state the game:
Game: Matin and Amo met each other in a gym and decided to play a game. The gym has a sufficiently long horizontal metal shaft and exactly one weight of each of the radii $1, 2,\dots, 100$ units. They take turns alternately, starting with Amo, picking up a weight and sliding it onto the shaft from either the left or the right end. After all weights have been inserted, Matin stands at the left end of the shaft and Amo at the right; whoever can see more weights wins, and if they see equal numbers of weights it is a draw. Seeing a weight means that no weight with a bigger radius lies between the observer and that weight. Given that both are intelligent mathematicians, determine the result of this game.

Here are my results:

  • Amo has a draw strategy:
    If Matin starts by inserting the weight with radius 100, Amo inserts 99 from his end; after this move Amo just needs to keep inserting the largest remaining weight from his end. In this manner every weight he inserts is visible to him and not to Matin, so Amo sees 51 weights and Matin sees 50. But if Matin starts with a weight of radius less than 100, Amo puts the weight with radius 100 near Matin's end and continues as in the other case.
  • Playing 99 while 100 has not yet been played causes a loss:
    This is actually quite easy: the rival inserts 100 from the opposite end of the shaft. The rest is similar to the case above once 100 is inserted.
  • Playing 98 before 99 and 100 gives the rival a draw strategy:
    Again the other player does the same as above; it is not hard to see that it works.
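A small helper for experimenting with candidate strategies (my own sketch): under the problem's definition of seeing, the number of weights visible from an end is exactly the number of strict running maxima scanning from that end.

```python
def visible_from_left(radii):
    """Count the weights visible from the left end of the shaft:
    a weight is visible iff it exceeds every weight before it,
    i.e. visibility counts the strict running maxima."""
    count, best = 0, 0
    for r in radii:
        if r > best:
            count, best = count + 1, r
    return count

def visible_from_right(radii):
    return visible_from_left(radii[::-1])

# sanity checks on a 5-weight toy game
assert visible_from_left([2, 1, 4, 3, 5]) == 3   # sees 2, 4, 5
assert visible_from_right([2, 1, 4, 3, 5]) == 1  # sees only 5
# once 100 sits at Matin's (left) end, Matin sees exactly one weight
assert visible_from_left([100] + list(range(1, 100))) == 1
```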

What is a compact 2-dimensional surface without boundary?

Posted: 10 Apr 2021 08:08 PM PDT

When reading the description of the 2-dimensional Poincaré conjecture on Wikipedia, it mentions "a compact 2-dimensional surface without boundary". People told me that a 2-sphere is an example of such a surface. My question is: which topological space are we talking about? Is it $\mathbb R^3$ or the 2-sphere itself?

If the whole topological space is $\mathbb R^3$, then the 2-sphere is clearly bounded. It is closed because $\mathbb R^3 \setminus \{(x, y, z) \mid x^2 + y^2 + z^2 = 1\}$ is open. But in this case the 2-sphere equals its own topological boundary, because its interior in $\mathbb R^3$ is empty and its closure is the 2-sphere itself.

If the whole topological space is the 2-sphere itself, then it is also bounded. It is closed because the empty set is open. Hence it is compact. It is also without a boundary.

So looks like we are using the 2-sphere as the whole topological space. Is my understanding correct?

To summarize my question, what will be the whole topological space in the context of the 2-dimensional Poincare conjecture?

What do the different integral symbols mean, and how do they affect the calculation?

Posted: 10 Apr 2021 07:48 PM PDT

I see a lot of variations on the integral symbol, but I haven't quite grasped their effects on the computation of integrals. I'm sure there are many more variations, but these are a few of the symbols I've come across that I'm uncomfortable with:

$$ 1.\oint_V\\ 2.\oint_S\\ 3.\oint\\ 4.\int_S\\ 5.\int d^3x $$

Thanks in advance

Marginal distribution on unit sphere surface.

Posted: 10 Apr 2021 07:45 PM PDT

The random vector $X=(X_1, X_2, X_3)$ follows the uniform distribution on the unit sphere surface $S_3 := \{(x_1, x_2, x_3) \in\mathbb R^3 : \|(x_1, x_2, x_3)\| = 1\}$.

(a) Find the pdf of $(X_1, X_2)$.

(b) Find the pdf of $X_1$.

My attempt: the pdf of $X$ is $\frac{1}{4\pi}I(-1 \le X_1 \le 1)\,I(-\sqrt{1-X_1^2} \le X_2 \le \sqrt{1-X_1^2})\,I(X_3 = \pm\sqrt{1-X_1^2-X_2^2})$. Thus the pdf of $(X_1, X_2)$ is $\int_{x_3=\pm\sqrt{1-x_1^2-x_2^2}} \frac{1}{4\pi}I(-1 \le x_1 \le 1)\,I(-\sqrt{1-x_1^2} \le x_2 \le \sqrt{1-x_1^2})\,dx_3 = ??$

I'm stuck here and can't proceed any further. Any help would be appreciated.
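Not a derivation, but the expected answer can be sanity-checked by simulation: $X_1$ is in fact uniform on $[-1,1]$ (Archimedes' hat-box theorem), so $\mathbb{E}[X_1]=0$ and $\operatorname{Var}(X_1)=1/3$. A Monte Carlo sketch:

```python
import math
import random

random.seed(0)

def sphere_point():
    """Uniform point on the unit sphere via normalized Gaussians."""
    while True:
        g = [random.gauss(0, 1) for _ in range(3)]
        norm = math.sqrt(sum(t * t for t in g))
        if norm > 1e-12:
            return [t / norm for t in g]

samples = [sphere_point() for _ in range(200_000)]
x1 = [p[0] for p in samples]

# Archimedes: X_1 is uniform on [-1, 1], so mean 0 and variance 1/3
mean = sum(x1) / len(x1)
var = sum(t * t for t in x1) / len(x1)
assert abs(mean) < 0.01
assert abs(var - 1 / 3) < 0.01
```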

Why do we define the modulus of a complex number as we do?

Posted: 10 Apr 2021 07:48 PM PDT

For a complex number $z = a+bi$, we say that its modulus is: $$|z|=\sqrt{a^2+b^2}$$

When we draw complex numbers in the Argand diagram, intuitively, this makes sense. But if we used a different projection for the diagram (i.e. a different metric for distance) then it wouldn't necessarily. Of course, complex numbers can also be written as:

$$z = re^{i\theta} = r(\cos\theta +i\sin\theta)$$

so an equivalent question is: if this is how we write complex numbers, why do we define

$$|e^{i\theta}| = |\cos\theta + i\sin\theta| = 1$$

for all values of $\theta$, rather than just $\theta = n\pi$.

The answer may simply be that it is convenient to work with this definition. But is there a deeper reason? Are there any problems for which it is convenient to define things differently? And what would be the consequences if we did things differently?

Tricky cost of food

Posted: 10 Apr 2021 07:58 PM PDT

Juggie loves food. He loved everything his mother cooked and his favorite foods of all were pizzas and chocolates. There was one food he did not like and that was carrots. Juggie's mother told him that he could buy as many pizzas as he liked. But he must buy twice the number of chocolates and more carrots than chocolates and pizza put together. Each pizza cost Rs. 100. The cost of a kg of carrots is half of the pizza and four small chocolates are worth the price of one orange. How much did Juggie spend on food if the number of the pizza, chocolates, and oranges were bought in integers?

The cost of a chocolate is not given; therefore I am very confused by this question.

Is $\lim_{n\to\infty} \sqrt[n]{\sin^2 (n\alpha)} \le 1$? Why?

Posted: 10 Apr 2021 07:58 PM PDT

In reviewing some old notes of mine, I find the following worked example:

Let $0<\alpha<1$. Does $\sum_{n=1}^{\infty} \alpha^n \sin^2 (n\alpha)$ converge?

Use the root test: $\lim_{n\to\infty} \sqrt[n]{\alpha^n \sin^2 (n\alpha)} = \lim_{n\to\infty} \alpha \sqrt[n]{\sin^2 (n\alpha)} \color{red}{\le \alpha} < 1$ so the series converges.

My question concerns the part highlighted in red. The solution seems to be saying that $\lim_{n\to\infty} \sqrt[n]{\sin^2 (n\alpha)} \le 1$. I suppose it's obvious that if the limit exists at all, it must be less than or equal to 1, but I don't see any obvious reason why the limit should exist (and indeed my instinct is telling me it shouldn't, except for carefully-chosen special values of $\alpha$). Any help?
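One numeric observation (my own sketch): the series is dominated termwise by the geometric series $\sum \alpha^n$, and each $n$-th root is at most $\alpha$ because $\sin^2 \le 1$, so the step in red only needs a limsup (or plain comparison), not an actual limit.

```python
import math

alpha = 0.7

# comparison: each term alpha^n * sin^2(n*alpha) <= alpha^n, and the
# geometric series sums to alpha / (1 - alpha), so the partial sums
# are bounded regardless of whether the red limit exists
partial = sum(alpha ** n * math.sin(n * alpha) ** 2 for n in range(1, 200))
assert partial <= alpha / (1 - alpha)

# every n-th root is <= alpha, which is all the root test needs via limsup
roots = [(alpha ** n * math.sin(n * alpha) ** 2) ** (1 / n)
         for n in (50, 100, 200)]
assert all(r <= alpha for r in roots)
```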

How many distinct isomorphism types are there of an unlabeled two vertex graph? With labels?

Posted: 10 Apr 2021 08:25 PM PDT

How many isomorphism classes are there of unlabeled 2-vertex graphs, and how many of labeled 2-vertex graphs? Loops are allowed.

I know this is trivial, but I suspect there are 4 unlabeled types: no loops, one edge; no edges, two loops; one loop, one edge; one loop, no edge. Would labeling halve this?

Probability and Limits of Cumulative Distribution function

Posted: 10 Apr 2021 08:18 PM PDT

Let $X \colon \Omega \to \mathbb{R}$ be a random variable defined over some probability space $(\Omega, \mathcal{F}, \mathbb{P})$. The cumulative distribution function of $X$ is the function $F_X \colon \mathbb{R} \to [0, 1]$ with $F_X(x) = \mathbb{P}(X \leq x)$.

I want to show that $\lim_{k \rightarrow \infty}F(k)=1$, where $k$ takes only integer values.

I would appreciate it if someone would get me started on this.

Fitting when variables have different variances

Posted: 10 Apr 2021 08:07 PM PDT

I am trying to fit a function $y = f(x_1,x_2,x_3)$. The function is non-linear and variables $x_1$,$x_2$, and $x_3$ have different variances. In such a case how do I weigh different variables while performing a least-squares fit? Any examples would help.

For example, below is the covariance matrix at one location (1400 points):

$$\begin{pmatrix} 0.0039 & -0.0001 & -0.0054 \\ -0.0001 & 0.0037 & 0.0003 \\ -0.0054 & 0.0003 & 0.8037 \end{pmatrix}$$

At another location, the covariance matrix is below (200 points):

$$\begin{pmatrix} 0.0460 & 0.0000 & 0.0030 \\ 0.0000 & 0.0408 & 0.0003 \\ 0.0030 & 0.0003 & 0.2943 \end{pmatrix}$$

Thanks.

Generalizing the geometric interpretation of dot product to simple $k$-vectors

Posted: 10 Apr 2021 08:13 PM PDT

For $u, v \in \mathbb R^n$, the dot product $u \cdot v$ can be interpreted geometrically as follows:

  1. Its magnitude is the product of the lengths of $u$ and $\operatorname{proj}_{u} v$.
  2. Its sign is $+1$ if $u$ and $\operatorname{proj}_{u} v$ point in the same direction, and $-1$ if they point in opposite directions.

Is there a similar geometric interpretation for the dot product of two simple $k$-vectors? Recall that the (standard) dot product in $\Lambda^k(\mathbb R^n)$ is given by $$(u_1 \wedge \cdots \wedge u_k) \cdot (v_1 \wedge \cdots \wedge v_k) = \det(u_i \cdot v_j)_{i,j=1}^k.$$

Let $P(u_1, \ldots, u_k)$ be the (oriented) $k$-dimensional parallelogram generated by $u_1, \ldots, u_k$. Then it seems to me that $(u_1 \wedge \cdots \wedge u_k) \cdot (v_1 \wedge \cdots \wedge v_k)$ can be interpreted geometrically as follows:

  1. Its magnitude is the product of the $k$-dimensional volumes of $P(u_1, \ldots, u_k)$ and $\operatorname{proj}_{\operatorname{span}(u_1, \ldots, u_k)} P(v_1, \ldots, v_k)$.
  2. Its sign is $+1$ if $P(u_1, \ldots, u_k)$ and $\operatorname{proj}_{\operatorname{span}(u_1, \ldots, u_k)} P(v_1, \ldots, v_k)$ have the same orientation, and $-1$ if they have opposite orientations.

(To prove this, first note that it is true when $u_1, \ldots, u_k$ are orthonormal. Then use Gram-Schmidt to reduce to the orthonormal case.)

But after searching on the internet and in various textbooks, I cannot find this geometric description anywhere. (I did see it briefly mentioned in the question in Interpreting the determinant of matrices of dot products, but it was not addressed in the answer.) That is surprising to me because it seems like a natural generalization of the $k=1$ geometric interpretation, which (I believe) is commonly taught when introducing the dot product in $\mathbb R^n$. That leads me to wonder if the interpretation above for $k > 1$ is incorrect or somehow not useful, or if there is a "better" geometric interpretation.

Calculate the integral $\int_{\gamma} \operatorname{Im} z \, dz$, where $\gamma$ is the polyline with vertices at the points $z_0 = 0$, $z_1 = i$, $z_2 = 2+i$

Posted: 10 Apr 2021 08:11 PM PDT

Calculate the integral $\int_{\gamma} \operatorname{Im} z \, dz$, where $\gamma$ is the polyline with vertices at the points $z_0 = 0$, $z_1 = i$, $z_2 = 2+i$.

How can I do this? I have no idea about solving integrals along a polyline.

Projectile motion up a plane.

Posted: 10 Apr 2021 07:52 PM PDT

I'm stuck on the following simple projectile motion problem up a plane. Below is the question:

A plane is inclined at an angle $\alpha$ to the horizontal. A particle is projected up the plane with speed $u$ at an angle $\beta$ to the plane. The plane of projection is vertical and contains the line of greatest slope. When the particle is at its maximum perpendicular height above the plane, it is $\frac{3}{5}$ of the range up the plane. Show that $\tan\alpha \tan\beta = \frac{2}{7}$.

Here is what I did.

I set up a coordinate system parallel and perpendicular to the plane, with $\vec x$ up the plane and $\vec y$ perpendicular to it. Thus I can express the initial velocity $u$ in terms of this coordinate system.

$$\vec u = u \cos\beta\vec x + u\sin\beta\vec y$$

Therefore

$$\vec u_x = u \cos\beta $$ $$\vec u_y = u \sin\beta $$

The acceleration due to gravity, in terms of $\vec x$ and $\vec y$ is given by:

$$ \vec a = -g\sin\alpha \vec x - g\cos\alpha \vec y$$

Therefore

$$\vec a_x = -g\sin\alpha $$ $$\vec a_y = -g\cos\alpha $$

For maximum perpendicular height ($h$) I used $$(\vec v_y)^2 = (\vec u_y)^2 + 2\vec a_y h$$ Since $\vec v_y = 0 $ for maximum height, I can find $h$.

$$ 0 = u^2\sin^2\beta + 2(-g\cos\alpha)h$$ $$ h = \frac{u^2\sin^2\beta}{2g\cos\alpha}$$

To find the range, I first found the time of flight. For the time of flight $\vec s_y = 0$. Using the formula:

$$ \vec s_y = \vec u_yt + \frac{1}{2} \vec a_y t^2$$ $$ 0 = u\sin \beta t + \frac{1}{2}(-g\cos \alpha) t^2 $$ $$ t(u\sin \beta - \frac{1}{2}g\cos \alpha t) = 0 $$

$$t = 0 , t = \frac{2u\sin \beta}{g\cos \alpha}$$

Time of flight

$$ t = \frac{2u\sin \beta}{g\cos \alpha} $$

To find the range I used the formula

$$ \vec s_x = \vec u_xt + \frac{1}{2} \vec a_x t^2$$ $$ \vec s_x = u \cos\beta \biggl(\frac{2u\sin \beta}{g\cos \alpha}\biggl) + \frac{1}{2} (-g\sin \alpha)\biggl(\frac{2u\sin \beta}{g\cos \alpha}\biggl)^2$$ $$ \vec s_x = \frac{2u^2\sin \beta \cos \beta}{g\cos \alpha} - \frac{2u^2 \sin \alpha \sin^2\beta}{g\cos^2 \alpha} $$

And this is where I get stuck. I can't seem to derive $\tan\alpha \tan\beta = \frac{2}{7}$.
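A numeric consistency check of the setup (my own sketch): choosing $\beta = 45°$ and $\alpha = \arctan(2/7)$, so that $\tan\alpha\tan\beta = 2/7$, the position at the time of maximum perpendicular height should come out to exactly $3/5$ of the range; the remaining step is imposing $s_x(t_{\text{top}}) = \frac{3}{5}\,s_x(t_{\text{flight}})$ and simplifying.

```python
import math

# pick beta = 45 degrees and alpha with tan(alpha) * tan(beta) = 2/7,
# then verify the 3/5-of-the-range condition numerically
u, g = 10.0, 9.8
beta = math.pi / 4
alpha = math.atan(2 / 7)

t_top = u * math.sin(beta) / (g * math.cos(alpha))  # perpendicular v_y = 0
t_flight = 2 * t_top                                # perpendicular s_y = 0

def s_x(t):
    """Distance up the plane at time t, from the asker's own formulas."""
    return u * math.cos(beta) * t - 0.5 * g * math.sin(alpha) * t ** 2

assert abs(s_x(t_top) / s_x(t_flight) - 3 / 5) < 1e-9
```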

If $u = f(x,y)$, $x = r\cos\theta$, and $y = r\sin\theta$, verify $u_{xx} + u_{yy} = u_{rr} + \frac{1}{r} u_r + \frac{1}{r^2} u_{\theta\theta}$ [closed]

Posted: 10 Apr 2021 08:25 PM PDT

If $u = f(x,y)$, $x = r\cos\theta$, and $y = r\sin\theta$, verify that $u_{xx} + u_{yy} = u_{rr} + \frac{1}{r} u_r + \frac{1}{r^2} u_{\theta\theta}$.

Here $u_{xx}$ denotes the second partial derivative of $u$ with respect to $x$.

I have deduced $x^2 + y^2 = r^2$, and that $u$ is a function of $x$ and $y$, which are in turn functions of $r$ and $\theta$.

I was able to find $\partial u/\partial r$, but only in terms of $\partial u/\partial x$ and $\partial u/\partial y$, since $u$ is not explicitly given. But when taking $\partial^2 u/\partial r^2$, how do I use that equation and solve the question further?

Solving a question using characteristic function where X is discrete and Y is continuous

Posted: 10 Apr 2021 07:52 PM PDT

Let $X \sim \text{Binom}(5,0.5)$ and $Y \sim U(0,1)$ be independent. We need to find $P(X+Y \geq 2)$ using characteristic functions. This is as far as I could get:

$C_X(t)=\frac{(e^{it}+1)^5}{2^5}$

$C_Y(t)=\frac{e^{it}-1}{it}$

Taking $Z=X+Y$, and since $X$ and $Y$ are independent,

$C_Z(t)=C_X(t)C_Y(t)= \frac{(e^{it}+1)^5(e^{it}-1)}{2^5\cdot it}$

After this, what should I do? Integration does not seem feasible.
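As a cross-check (not the requested characteristic-function route): since $Y < 1$ almost surely, $X + Y \ge 2$ happens exactly when $X \ge 2$, giving $P(X \ge 2) = 1 - \frac{1+5}{2^5} = \frac{13}{16}$. A quick simulation agrees:

```python
import random

random.seed(1)

exact = 13 / 16  # P(X >= 2) for X ~ Binom(5, 0.5), since Y < 1 a.s.

trials = 200_000
hits = 0
for _ in range(trials):
    x = sum(random.random() < 0.5 for _ in range(5))  # Binom(5, 0.5)
    y = random.random()                               # U(0, 1)
    hits += (x + y >= 2)

assert abs(hits / trials - exact) < 0.005
```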

Expected number of distinct integers found with variable stopping time

Posted: 10 Apr 2021 08:22 PM PDT

This is a simplified/special case version of a previous question.

Consider the integers $\{1,\dots, N\}$ for some positive integer $N$. Suppose that each $i \in \{1, \dots, N\}$ has an associated probability $p_i$. We also fix an integer threshold $1 \leq n < N$.

We sample independently and repeatedly from $\{1,\dots, N\}$. For each $i \in \{1,\dots, N\} $ we sample the integer $i$ with probability $p_i$. We sample repeatedly until we have found $x$ distinct integers and then stop.

I would like to know how to compute the expected number of distinct integers less than or equal to the threshold $n$ that have been sampled.

If we knew that we would take exactly $s$ samples, we could compute the expected number of distinct integers less than or equal to $n$ that have been sampled. This is

$$n-\sum_{i=1}^n(1-p_i)^s.$$

However in my problem the number of samples is itself a random variable that depends on parameter $x$ and the different probabilities $p_i$.
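The general case is harder, but the setup is easy to prototype. In the exchangeable uniform case $p_i = 1/N$, the stopped sample of $x$ distinct values is a uniform random $x$-subset, so the answer is $nx/N$ by symmetry; a simulation with hypothetical sizes $N=20$, $n=8$, $x=12$ (my own choices, not from the post) confirms this:

```python
import random

random.seed(2)

N, n, x = 20, 8, 12  # hypothetical sizes for illustration

def run():
    """Sample with replacement until x distinct integers have appeared;
    return how many of the distinct ones are among the n smallest."""
    seen = set()
    while len(seen) < x:
        seen.add(random.randrange(N))  # uniform special case p_i = 1/N
    return sum(1 for v in seen if v < n)

trials = 20_000
est = sum(run() for _ in range(trials)) / trials

# by symmetry the stopped set is a uniform x-subset, so the mean is n*x/N
assert abs(est - n * x / N) < 0.05
```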

Proving uniqueness for a system of nonlinear equations

Posted: 10 Apr 2021 08:21 PM PDT

How do I prove that this system has a unique solution? $$(1-x)e^y = -e^{-x}, \qquad y(1-x)e^x = e^{-y}$$

I just don't even know where to start, should I try using limits?

Semi-simple rings and fields.

Posted: 10 Apr 2021 08:14 PM PDT

I want to show that:

$R$ is semi-simple iff $R$ is isomorphic (as a ring) to a direct product of a finite number of fields.

Definition: $R$ is a semi-simple ring if it is a direct sum of simple ideals.

My attempt:

$\Leftarrow$

Assume that $R$ is isomorphic to a direct product of a finite number of fields; I want to show that $R$ is semi-simple. I was able to prove that any field is a semi-simple ring, so $R$ is isomorphic to a direct product of a finite number of semi-simple rings. But how can I prove that a direct product of a finite number of semi-simple rings is semi-simple? Could anyone help me with this step, please?

$\Rightarrow$

Assume that $R$ is a semisimple ring and we want to show that $R$ is (ring) isomorphic to a direct product of a finite number of fields.

Since $R$ is a semisimple ring, $R$ is the direct sum of a finite number of simple modules (i.e., minimal ideals): $R = \bigoplus_{i=1}^n L_i$ (I know that a direct sum is the same as a direct product when there are finitely many factors). Now we want to show that $L_i$ is a field for every $i$. It suffices to show that every non-zero element of $L_i$ has an inverse, but I do not know how to show this; any help with this will be greatly appreciated!

Also, I do not know how to show the ring isomorphism, could anyone help me in this, please?

Do elements of (strictly) totally ordered sets have unique neighbours?

Posted: 10 Apr 2021 08:11 PM PDT

The real numbers form a totally ordered set, which according to Wikipedia is also strictly totally ordered, i.e.

  • transitive: $a < b, b < c \Rightarrow a < c$
  • semiconnex: if $a \neq b$ then $a < b$ or $b < a$
  • antireflexive: $a < a$ is false
  • asymmetric: if $a < b$ is true then $b < a$ is false

Following a discussion under the answer to my last question, I wonder whether, in a strictly totally ordered set such as $\mathbb{R}$, every element $x$ has a unique neighbor: the element which is larger (smaller) than $x$, and smaller (larger) than every other element $c$ such that $c > x$ ($c < x$).

Since these relationships can be determined for all $ x ∈\mathbb{R}$, I suspect the problem lies in having to determine an infinite number of relations.

If this were true, then a smallest positive real number would exist.

Clarification about the proof of the Chinese Remainder Theorem

Posted: 10 Apr 2021 08:11 PM PDT

Near the end, the proof states that no prime that divides $m_i$ for $i = 1, 2,\ldots, k-1$ can divide $m_k$. But why does this imply that $\gcd{(m_1m_2\cdots m_{k-1},m_k)=1}$? I've been thinking about this for a long time, and I really can't understand it.


When is the trace multiplicative?

Posted: 10 Apr 2021 07:56 PM PDT

Let $A,B\in M_n(\mathbb{R})$ be real (or complex) square matrices. Generally speaking,

$${\rm trace}(AB)\neq {\rm trace}(A){\rm trace}(B)$$

There are a lot of 'easy' examples where this doesn't hold. What is not obvious to me is whether there is any chance of it working for something other than $A=B=0$, the zero matrix. Even if you work out the details for $n=2$ the algebraic condition is not very insightful.

Question: What are sufficient or necessary conditions on $A$ and $B$ under which the trace is multiplicative, i.e., ${\rm trace}(AB)= {\rm trace}(A){\rm trace}(B)$?

Edit: This post from 2014 asked whether the trace is multiplicative or not. I am well aware that it is not, but a distinct question is whether there are conditions on $A$ and $B$ under which the trace becomes multiplicative.
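There are certainly nonzero examples: any rank-one idempotent $A = B = P$ works, since $\operatorname{trace}(P^2) = \operatorname{trace}(P) = 1$. A minimal sketch (my own illustration):

```python
def matmul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace(A):
    return A[0][0] + A[1][1]

# A = B = diag(1, 0), a rank-one idempotent: trace(AB) = 1 = trace(A)*trace(B)
A = [[1, 0], [0, 0]]
assert trace(matmul(A, A)) == trace(A) * trace(A) == 1

# while e.g. the identity fails: trace(I*I) = 2 but trace(I)**2 = 4
I = [[1, 0], [0, 1]]
assert trace(matmul(I, I)) != trace(I) * trace(I)
```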

Can the Chinese Remainder Theorem extend to an infinite number of moduli?

Posted: 10 Apr 2021 08:14 PM PDT

I've been trying to find info on this and have come up lacking. The CRT says that a system of congruences with pairwise coprime moduli always has a unique answer (modulo the product of the original moduli). And the generalizations I've seen say you can use residues $\{a_1, a_2,\dots, a_n\}$ and moduli $\{m_1, m_2,\dots, m_n\}$ to find a unique $X$ mod $M$ (with $M = m_1 m_2 \cdots m_n$), so long as the moduli are pairwise coprime.

Can this be extended to an infinite set of residues and moduli? It seems to me you could choose $n$ to be as large as you desire, but I feel like it's unclear. Thoughts? It feels vaguely like Euclid's proof of the infinitude of primes, but "feels" isn't really well-defined...

Inverse Matrix of a sum

Posted: 10 Apr 2021 07:59 PM PDT

Let $\mathbf{A}$ a $n\times n$ symmetric positive definite matrix and $\alpha$ a positive constant. I want to simplify the following expression: $$\left(\mathbf{A} + \alpha \, \mathbf{I}\right)^{-1},$$ where $\mathbf{I}$ is the identity matrix of order $n$.

I looked in the Matrix Cookbook in order to find an identity, but I found only more general formulas like the Woodbury identity. Do you know how to find a simpler identity for this easy case?

Thanks a lot!

Edit: my goal is to obtain an easy expression in terms of products of $\mathbf{I}$, $\mathbf{A}$, $\mathbf{A}^{-1}$, $\alpha$ and $\alpha^{-1}$, without computing other inversions or decompositions. I can use the same term multiple times, and I don't have to use all of them (I do not want to compute redundant operations).
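The edit rules out decompositions, so this may not qualify, but if one eigendecomposition of $\mathbf{A}$ is acceptable the shifted inverse is immediate: with $\mathbf{A} = Q\,\mathrm{diag}(\lambda)\,Q^T$ one has $(\mathbf{A}+\alpha\mathbf{I})^{-1} = Q\,\mathrm{diag}\!\big(\tfrac{1}{\lambda_i+\alpha}\big)\,Q^T$, with no further inversion. A sketch with numpy:

```python
import numpy as np

rng = np.random.default_rng(0)

n, alpha = 5, 0.3
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)  # symmetric positive definite test matrix

# spectral shortcut: shift the eigenvalues instead of inverting the shift
lam, Q = np.linalg.eigh(A)
inv_shift = (Q * (1.0 / (lam + alpha))) @ Q.T  # Q diag(1/(lam+alpha)) Q^T

assert np.allclose(inv_shift, np.linalg.inv(A + alpha * np.eye(n)))
```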

Find the sup-norm

Posted: 10 Apr 2021 08:05 PM PDT

Find the sup-norm, $\|f\|_{\sup}$, if

$$f(x)=\begin{cases} 0, &x \in\mathbb{Q}\\ -x^2, &x\not\in\mathbb{Q} \end{cases}$$

As I look at the graph of $-x^2$, I know it's a decreasing function (for $x>0$). I know the sup-norm is infinity, but I'm not sure why that is.

How to find this integral $I=\int_{-\infty}^{+\infty}\frac{x^3\sin{x}}{x^4+x^2+1}dx$

Posted: 10 Apr 2021 08:12 PM PDT

Find this integral $$I=\int_{-\infty}^{+\infty}\dfrac{x^3\sin{x}}{x^4+x^2+1}dx$$

my idea: $$I=2\int_{0}^{+\infty}\dfrac{x^3\sin{x}}{x^4+x^2+1}dx$$

Because $$\dfrac{x^3\sin{x}}{x^4+x^2+1}\approx\dfrac{\sin{x}}{x},\quad x\to\infty,$$ the integral $$I=\int_{-\infty}^{+\infty}\dfrac{x^3\sin{x}}{x^4+x^2+1}dx$$ converges.

But then I can't proceed. Thank you.
