Thursday, April 1, 2021

Recent Questions - Mathematics Stack Exchange



For all x, x is not an element of a set A\B. What does the figure look like?

Posted: 01 Apr 2021 09:03 PM PDT

For all x, x is not an element of a set A\B. What does the figure look like?

I mean, when x is an element of a set A\B,

the figure looks like Fig.1.8 from that webpage.

The shaded area shows the set A−B.

But "for all x, x is not an element of A\B" is equivalent to "if x is in set A, then x is in set B."

It seems that set A is a subset of set B.

Is that correct?

Discontinuity of the sine function at irrational numbers

Posted: 01 Apr 2021 08:55 PM PDT

How can we say that the sine function is discontinuous at irrational numbers, given that the sine function is defined for all real numbers?

Is a family of non-empty subsets of a bounded metric space compact when the space is compact?

Posted: 01 Apr 2021 08:54 PM PDT

The statement to which the answer is presumably yes is contained in Exercise 8 on p. 25 of the Dover edition of Gamelin & Greene's Topology. That statement, with the appropriate metric for the subset space, has one more predicate: the subsets are closed. The hints on p. 200 for a proof do not explicitly use the closedness of the subsets, and I have not been able to see where it is implied. Can someone suggest where I have overlooked it? Sometimes, alas, I do not see what is right in front of me!

Series $A=\displaystyle \sum_{n=1}^{\infty}\left(n^{\frac{1}{n^2+1}}-1\right)$

Posted: 01 Apr 2021 08:53 PM PDT

Consider the convergence of the series $A=\displaystyle \sum_{n=1}^{\infty}\left(n^{\frac{1}{n^2+1}}-1\right)$.

I have an idea:

\begin{align*} a_n&=n^{\frac{1}{n^2+1}}-1 \\&=e^{\frac{\ln n}{n^2+1}}-1 \sim b_n= \frac{\ln n}{n^2+1} \text{ as } n \to \infty \end{align*} I want to consider the convergence of the series $B=\displaystyle \sum_{n=1}^{\infty}b_n$. I have trouble here.
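To sanity-check the asymptotic equivalence numerically, here is a short script (my own illustration, not part of the question) comparing $a_n$ with $b_n$:

```python
import math

def a(n):
    # a_n = n^(1/(n^2+1)) - 1, computed via expm1 for accuracy near zero
    return math.expm1(math.log(n) / (n * n + 1))

def b(n):
    return math.log(n) / (n * n + 1)

# The ratio a_n / b_n should approach 1, reflecting a_n ~ b_n
for n in (10, 100, 1000):
    print(n, a(n) / b(n))
```

Note also that $b_n < \ln n / n^2$, and $\sum \ln n / n^2$ converges (e.g. $\ln n < \sqrt{n}$ eventually, so $b_n < n^{-3/2}$), which is one way to settle $B$.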

Re-metrizing a compact metrizable space to make a homeomorphism Lipschitz

Posted: 01 Apr 2021 09:03 PM PDT

Let $(X, \rho)$ be a compact metric space, and $T : X \to X$ a homeomorphism of $X$ to itself. I know that this $T$ can fail to be Lipschitz, e.g. $X = [0, 1], Tx = \sqrt{x}$ with the standard metric $\rho(x, y) = |x - y|$ (I found this in another post on here, but I closed the tab before I could link that post here). What's less obvious to me, however, is whether I could give $[0, 1]$ another metric $\rho'$ compatible with the standard topology on $[0, 1]$ with respect to which $\sqrt{\cdot}$ is Lipschitz.

More generally, given a compact metrizable space $X$ and a homeomorphism $T : X \to X$ of $X$ to itself, is there a way to look at the topological properties of $T$ and discern whether $T$ could be made Lipschitz by endowing $X$ with an appropriate metric? I know there are several metrical properties which can be dependent upon which metric is being used, e.g. boundedness and Cauchy-ness, but I don't know to what extent Lipschitzness is visible in the topology.

Combination of integers that satisfy a condition

Posted: 01 Apr 2021 08:46 PM PDT

Let $n$ and $k$ be positive integers such that $n \ge k(k+1)/2$. What is the number of integer solutions $(x_1,x_2,\dots,x_k)$ with $x_1\ge 1, x_2\ge 2,\dots, x_k\ge k$ satisfying $x_1+x_2+\cdots+x_k=n$?

I can't understand why the answer is not infinite, since the RHS of the inequality is a sum of $k$ natural numbers and the minimum of the LHS equals that. For example, if we set every $x_i$ equal to its lower bound and keep increasing one of them, we get infinitely many solutions.
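For what it's worth, the count is finite once $n$ is fixed; a brute-force check (my own illustration) for small $n, k$ agrees with the stars-and-bars substitution $y_i = x_i - i \ge 0$:

```python
from itertools import product
from math import comb

def count_solutions(n, k):
    # Count tuples (x_1,...,x_k) with x_i >= i and x_1 + ... + x_k = n
    ranges = [range(i, n + 1) for i in range(1, k + 1)]
    return sum(1 for xs in product(*ranges) if sum(xs) == n)

n, k = 10, 3
brute = count_solutions(n, k)
# Substituting y_i = x_i - i >= 0 gives y_1 + ... + y_k = n - k(k+1)/2,
# which has C(n - k(k+1)/2 + k - 1, k - 1) solutions by stars and bars
formula = comb(n - k * (k + 1) // 2 + k - 1, k - 1)
print(brute, formula)  # both 15
```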

Where did the + C go - Indefinite integration?

Posted: 01 Apr 2021 09:00 PM PDT

After performing indefinite integration on both sides, an equation becomes:

$$\ln(|v|) + C = \frac{-t}{RC} + C $$

This then simplifies to:

$$\ln(|v|) = \frac{-t}{RC} + C $$

Where did the $+ C$ go on the LHS and why?

Solving congruence relation

Posted: 01 Apr 2021 08:44 PM PDT

I am trying to solve the below problem from Artin's algebra:

Solve the congruence $2x \equiv 5$ (a) modulo $9$ and (b) modulo $6$.

Here is my attempt.

(a) If $2x \equiv 5 \pmod 9$, then $2x \equiv 14 \pmod 9$, so $2x - 14 \equiv 0 \pmod 9$, and hence $2(x - 7) \equiv 0 \pmod 9$. Since $2$ is invertible modulo $9$ (as $\gcd(2,9)=1$), we have $x - 7 \equiv 0 \pmod 9$, so $x \equiv 7 \pmod 9$.

Working modulo $6$, I didn't have a good strategy other than plugging in $x = 0,1,2, \ldots, 5$ and deduced that none of them satisfy the congruence, so there is no solution.

Is there a better way to do this problem (part (b) in particular)?
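A brute-force check over the residues (an illustration of my own, not from Artin) confirms both parts:

```python
# Enumerate residues to solve 2x = 5 (mod m) for m = 9 and m = 6
def solve(m):
    return [x for x in range(m) if (2 * x) % m == 5 % m]

print(solve(9))  # [7]  (2*7 = 14, and 14 mod 9 = 5)
print(solve(6))  # []   (2x mod 6 is always even, but 5 is odd)
```

More generally, $ax \equiv b \pmod m$ is solvable iff $\gcd(a,m) \mid b$; here $\gcd(2,6)=2 \nmid 5$, which settles part (b) without enumeration.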

Expressing $\frac{13}{17}$ as an infinite series...

Posted: 01 Apr 2021 09:04 PM PDT

I am trying to find a way to express $\frac{13}{17}$ as an infinite series so that I can convert $\frac{13}{17}$ to its base-$3$ counterpart. Could someone please show how such an infinite series can be developed? How does one even start solving this problem?
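One standard approach is long division in base $3$: repeatedly multiply the fractional remainder by $3$ and take the integer part as the next digit, so that $\frac{13}{17} = \sum_{k\ge 1} d_k\, 3^{-k}$. A sketch (my own illustration):

```python
from fractions import Fraction

def base3_digits(frac, ndigits):
    # Digits after the radix point of frac in base 3
    digits = []
    r = frac
    for _ in range(ndigits):
        r *= 3
        d = int(r)          # integer part is the next digit
        digits.append(d)
        r -= d              # keep the fractional remainder
    return digits

print(base3_digits(Fraction(13, 17), 8))  # [2, 0, 2, 1, 2, 2, 1, 1]
```

Since $17$ is coprime to $3$, the expansion is purely periodic; its period divides the multiplicative order of $3$ modulo $17$, which is $16$.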

Integral of a delta function involving trigonometry

Posted: 01 Apr 2021 08:30 PM PDT

I'm self-studying the book "Kolmogorov Spectra of Turbulence" and got confused by a calculation on page 87.

Is there an efficient way to evaluate the following integral? $$\int \int \delta(k-k_1\cos\theta_1-k_2\cos\theta_2) \, \delta(k_1\sin\theta_1\sin\varphi_1+k_2\sin\theta_2\sin\varphi_2)\, \delta(k_1\sin\theta_1\cos\varphi_1+k_2\sin\theta_2\cos\varphi_2) d\cos\theta_1 \, d\varphi_1 \, d\cos\theta_2 \, d\varphi_2$$ This is essentially the angular part of $\int \int d\vec{k}_1\, d\vec{k}_2\, \delta(\vec{k}-\vec{k}_1-\vec{k}_2)$.

I found the discussion by @md2perpe at Integrating Dirac delta functions with trigonometric arguments very helpful. But the direct application involves complicated products of square roots of fractions of sines and cosines. The book seems to have a more efficient way to simplify it to $$=\int \int \delta(k-k_1\cos\theta_1-k_2\cos\theta_2) \, \delta(k_1\sin\theta_1+k_2\sin\theta_2) \frac{\sin\theta_1\, d\theta_1 d\theta_2}{2k_2} =\frac{1}{2k k_1 k_2}$$

Any idea how the book did it? Thanks in advance!

Determine convergence of $\frac{2n^3 + 7}{n^4 \sin^2 n}$

Posted: 01 Apr 2021 08:35 PM PDT

Discuss the convergence of the series: $$\sum_{n=1}^\infty \frac{2n^3+7}{n^4 \sin^2 n}$$

My approach: Let, $b_n=\frac{2n^3+7}{n^4\sin^2 n}$ $$\sin^2n=\frac{1-\cos 2n}{2}$$ Let,$a_n=\frac{1}{n^4}$

Using limit comparison test, $$\lim_{n \to \infty}\frac{a_n}{b_n}=\lim_{n \to \infty}\frac{\frac{1}{n^4}}{\frac{2n^3+7}{n^4\sin^2 n}}=\lim_{n \to \infty}\frac{\sin^2n}{2n^3+7}=\lim_{n \to \infty}\frac{1-\cos2n}{2n^3+7}=\lim_{n \to \infty}\frac{1}{2n^3+7}-\frac{\cos2n}{2n^3+7}$$ $$\lim_{n \to \infty}\frac{1}{2n^3+7}-\frac{\cos2n}{2n^3+7}=0-\lim_{n \to \infty}\frac{\cos 2n}{2n^3+7}$$

This next step is what I'm very unsure of. Although the limit of $\cos x$ as $x$ tends to infinity isn't defined, $\cos x$ never exceeds $+1$ and never drops below $-1$. So $2n^3+7$ becomes the deciding factor again, and that whole limit turns out to be zero.

Therefore, $$\lim_{n \to \infty}\frac{a_n}{b_n}=0$$

According to the limit comparison test, if the limit equals zero and $\sum a_n$ converges, then $\sum b_n$ also converges. We know that $\sum \frac{1}{n^4}$ is convergent, therefore $\sum b_n$ is also convergent.

So, this was my approach, and I'm not very sure of my method or my answer. If anyone has a better way to solve this question (which I'm quite sure exists), kindly let me know. Thanks in advance!

Rank of the elliptic curve $y^2 = x^3-226x$?

Posted: 01 Apr 2021 09:02 PM PDT

Consider the elliptic curve $E: y^2 = x^3-226x$. We want to find the rank of this elliptic curve using the fact that $N^2 = b_1M^4 + aM^2e^2 + b_2e^4$ has a solution for some $b_1, b_2, N, M, e \in \mathbb Z, e > 0, b_1b_2 = b,$ if and only if $b_1$ is in $\text{Im}(\alpha_E)$.

Let $E : y^2 = x^3-226x.$ Then $b = -2 · 113 \implies \text{Im}(\alpha_E) \subset\left\{\pm 1, \pm 2, \pm 113, \pm 2 · 113\right\} $. We need only consider $b_1 = -1, 2$.

My question is: why does it suffice to consider $b_1 = -1, 2$?

Automatically $1$ and $b$ are in the image so $b = -2 \cdot 113$ is in the image. So that if $-1$ is in the image, then so is $\pm 2 \cdot 113$, and likewise $\pm 1$ is in the image. Next, if $2$ is in the image then by multiplication with $-1$ we have $\pm 2$ is in the image. But then how is $\pm 113$ in the image?

Likewise, considering the corresponding isogenous curve

Let $E' : y^2 = x^3 + 4 \cdot 226x.$ Then $b' = 2^3 · 113 \implies \text{Im}(\alpha_{E'}) \subset\left\{\pm 1, \pm 2, \pm 113, \pm 2 · 113\right\} $. We need only consider $b_1 = -1, 2$.

And the same question here again: why does that suffice?

Surjections, Bijections, and Injections

Posted: 01 Apr 2021 08:20 PM PDT

Let a and b be real numbers. Consider a function $f : \mathbb{R} → \mathbb{R}$ given by the formula $f(x) = ax + b$.

(a) Under what conditions on $a$ and $b$ is $f$ a bijection from $\mathbb{R}$ to $\mathbb{R}$? (For instance, what if $a = 0$?)

(b) Under what conditions on $a$ and $b$ is the restriction $f|\mathbb{Z}$ a bijection from $\mathbb{Z}$ to $\mathbb{Z}$?

(c) Under what conditions on $a$ and $b$ is the restriction $f|\mathbb{N}$ a bijection from $\mathbb{N}$ to $\mathbb{N}$?

My answers:

(a) The function $f$ is a bijection if and only if $a \neq 0$.

(b) The necessary and sufficient condition is that $a = \pm1$.

(c) a = 0 (I'm not sure)

Am I correct? If not, please give me some explanation. Also, what is a restriction, and how can I solve the problem? Thank you so much!
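As a small finite experiment (my own sketch; checking a finite window cannot prove bijectivity on all of $\mathbb{Z}$, but it illustrates the failure modes):

```python
def image(a, b, lo=-50, hi=50):
    # Values a*x + b takes as x ranges over a window of integers
    return {a * x + b for x in range(lo, hi)}

# a = 2: only values congruent to b mod 2 are hit, so f|Z is not surjective
assert 1 not in image(2, 0)

# a = 1, b = 3: the image is a shifted copy of Z, so every integer
# in a safe interior window is hit
assert all(n in image(1, 3) for n in range(-40, 40))

# a = 1/2 would not even map Z into Z, e.g. f(1) = 3.5
print((0.5 * 1 + 3).is_integer())  # False
```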

Alternating series - determine if it converges absolutely, conditionally or diverges using alternating p-series test

Posted: 01 Apr 2021 08:32 PM PDT

I am trying to determine to whether the series

$\sum _{n=0}^{\infty }\:\left(-1\right)^n\frac{1}{\sqrt{n}\left(ln\left(n\right)^{2021}\right)}$

is conditionally convergent, absolutely convergent, or divergent. So far, using $p$-series testing, I believe that $\frac{1}{\sqrt{n}\,(\ln n)^{2021}}$ is decreasing, with $p=1/2$. I was wondering whether we can argue that this series is conditionally convergent or not.

formula for approximating the gradient of a curve that is stable at different ratios of y:x

Posted: 01 Apr 2021 08:56 PM PDT

I need to calculate the gradient of a curve using a y intercept and changes in the value of x that will give comparable results at different ratios of x and y.

I've tried $$ \text{gradient} = \frac{y_{[0]} - y_{[1]}}{x} $$

Where $y_{[0]}$ is the value of $y$ at the current index, or "now", and $ y_{[1]} $ is the value of $ y $ one unit of $ x $ ago.

This gives an approximation of the gradient, but is highly sensitive to changes in the ratio of $y:x$.

I've also tried $$ \text{gradient} = \frac{\log_{10}(y_{[0]}) - \log_{10}(y_{[1]})}{\log_{10} x} $$

which reduces the magnitude of the issue (by a factor of 10, actually), but doesn't really solve it.

Operator raised to a power with variable coefficient

Posted: 01 Apr 2021 08:42 PM PDT

I have a series which involves an expression of the form

$\displaystyle \frac{\partial^{n} a(x)}{\partial x^n} + c(x)\frac{\partial^{n+1} b(x)}{\partial x^{n+1}}$

Is there a way to factor out the $\frac{\partial^n }{\partial x^n}$ operator? I believe I can write the expression above as

$\displaystyle \frac{\partial^{n} a(x)}{\partial x^n} + c(x)\frac{\partial^n }{\partial x^n}\frac{\partial b(x)}{\partial x}$

Since the coefficient $c$ is a function of $x$, I'm pretty sure factoring out the operator any further is not an option (at least not in a nice way). For clarity, I'm wondering if the following can be achieved

$\displaystyle \frac{\partial^{n} }{\partial x^n}\left(a(x) + c(x)\frac{\partial b(x)}{\partial x}\right) $

The motivation of this question is to better understand how powers work with operators, and not just when an operator coefficient is constant. This brings me to the second part of my question. If I define

$\displaystyle f^n = c(x)^n\frac{\partial^n}{\partial x^n}$

then based on that definition,

$\displaystyle f^{n+1} = c(x)^{n+1}\frac{\partial^{n+1}}{\partial x^{n+1}}$

however, the rules of powers (e.g. $g^{n+1} = g^ng$) don't seem to apply when $f$ is defined with an operator. Specifically, if $f^{n}f$ should be interpreted as a composition, then the differential operator in $f^n$ would have to be applied to $c(x)$, I believe. However, I'm not sure how to proceed, because I feel like different rules are in play here. Specifically, $f$ is not commutative, so I'm wondering if I'm entering a different area of mathematics.

Limit $\lim_{x\to 0}\frac{\sqrt[3]{a x+b} - 2}{x}$ equals $\frac{5}{12}$

Posted: 01 Apr 2021 08:43 PM PDT

A friend of mine asked this question to me. It seems it's from Stewart.

Find the values of $a$ and $b$ such that $\lim_{x\to 0}\frac{\sqrt[3]{a x+b} - 2}{x} = \frac{5}{12}$

This is what I tried:

For $b$:

$$\frac{\sqrt[3]{a x+b} - 2}{x} = \frac{5}{12} $$

$$\sqrt[3]{a x+b} - 2 = x\frac{5}{12} $$

$$\underset{x\to 0}{\text{lim}} \sqrt[3]{a x+b} - 2 = \underset{x\to 0}{\text{lim}} x\frac{5}{12} $$

$$\sqrt[3]{b} - 2 = 0 $$

$$\sqrt[3]{b} = 2 $$

$$ b = 8$$

For $a$:

$$a x+8 = (x\frac{5}{12} + 2)^3$$

$$a x+8 = \frac{125 x^3}{1728}+\frac{25 x^2}{24}+5 x+8$$

$$a x = \frac{125 x^3}{1728}+\frac{25 x^2}{24}+5 x$$

$$a = \frac{125 x^2}{1728}+\frac{25 x}{24}+5$$

$$\underset{x\to 0}{\text{lim}} a =\underset{x\to 0}{\text{lim}} \frac{125 x^2}{1728}+\frac{25 x}{24}+5$$

$$a = 5 $$

But the limit $$\underset{x\to 0}{\text{lim}}\frac{\sqrt[3]{5x+8} - 2}{x}$$ doesn't go to $\frac {5}{12}$.

Can someone help?
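For what it's worth, a quick numerical check (my own, not part of the question) suggests that with $a=5$, $b=8$ the limit does come out to $5/12 \approx 0.4167$, so the algebra above may well be fine and the final check worth revisiting:

```python
def g(x, a=5.0, b=8.0):
    # (cbrt(a*x + b) - 2) / x for small x
    return ((a * x + b) ** (1.0 / 3.0) - 2.0) / x

for x in (1e-3, 1e-5, 1e-7):
    print(x, g(x))   # values approach 5/12

print(5 / 12)
```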

Techniques for solving coupled second order differential equations

Posted: 01 Apr 2021 08:58 PM PDT

I'm trying to solve a system of coupled second-order differential equations. I have never done that before, and I'm not sure where to begin.

The equations are:

$$\ddot{x} = -\omega\dot{y} - \frac{k}{m}\dot{x}$$ $$\ddot{y} = \omega\dot{x} - \frac{k}{m}\dot{y}$$

I'm wondering what method I should use to solve this, and whether I can use the same method for all coupled systems.

I know x should be: $$x(t)= -\tau V_{0x} \cos \beta e^{\frac{-t}{\tau}} \cos(\omega t + \beta) + x_0 + \tau \cos^2\beta v_{0x}$$

However, I couldn't find how to proceed to get this solution.

I have those values at $t=0$: $$ v_x(t=0) = v_{0x} , v_y(t=0) = 0, x(t=0) = x_0, y(t=0) = y_0. $$

Finally, to lighten notation: $$\tau = \frac{m}{k}, \omega=\frac{qB}{m}, \tan(\beta) = \omega \tau$$

Any help would be really appreciated.
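One standard trick for this particular pair (a sketch, not necessarily the intended method): combine the two unknowns into a single complex variable, which decouples the system into a first-order equation for $\dot{z}$.

```latex
% Let z = x + i y. Adding i times the second equation to the first:
\ddot{z} = \ddot{x} + i\ddot{y}
         = \left(-\omega\dot{y} - \tfrac{k}{m}\dot{x}\right)
           + i\left(\omega\dot{x} - \tfrac{k}{m}\dot{y}\right)
         = \left(i\omega - \tfrac{1}{\tau}\right)\dot{z},
\qquad \tau = \tfrac{m}{k}.
% This is first order in \dot{z}:
\dot{z}(t) = \dot{z}(0)\, e^{(i\omega - 1/\tau)\,t},
% and one more integration, together with the initial conditions,
% gives z(t); the real part recovers x(t) and the imaginary part y(t).
```

Expanding $e^{(i\omega-1/\tau)t}$ into modulus and phase is where the damped-oscillation form $e^{-t/\tau}\cos(\omega t + \beta)$ quoted above comes from.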

Cycle index for $S_2\times S_4$

Posted: 01 Apr 2021 08:25 PM PDT

I am trying to determine the cycle index polynomial of $S_2\times S_4$, for the purpose of finding colourings. This is what I have tried:

I computed the polynomials for $S_2$ and $S_4$:

$$Z_{S_4}(t_1,\dots ,t_4)=\frac{1}{4!}\left(6t_4^1+8t_3^1t_1^1+3t_2^2+6t_2^1t_1^2+t_1^4\right)$$

$$Z_{S_2}(s_1,s_2)=\frac{1}{2!}\left(s_1^2+s_2^1\right)$$ where $t_i^k$ indicates the presence of $k$ cycles of length $i$ in some cycle class. From what I can see, 'multiplying' should be done using an operation $\cdot$ likewise: $$t_i^n \cdot s_j^m = w_{\text{lcm}(i,j)}^{mn\cdot\text{gcd(i,j)}}$$ where $w_k$ are variables for $Z_{S_2\times S_4}$, and if there are several variables as a product (e.g. $t_3^1 t_1^1$) then multiplying by something else, we do pairs separately and multiply, for instance: $$t_3^1t_1^1\cdot s_1^2=(t_3^1 \cdot s_1^2)(t_1^1\cdot s_1^2)=w_3^2w_1^2$$ Applying this to the product of both polynomials I get:

$$Z_{S_2}(s_1,s_2)\cdot Z_{S_4}(t_1,\dots,t_4)=\boxed{\frac{1}{48}\left(12w_4^2+8w_3^2w_1+13w_2^4+6w_2^2w_1^4+w_1^6+8w_2^1w_6^1\right)}$$

Unfortunately this doesn't appear to be right. For instance setting $w_i=2$ (a $2$-coloring of a $2\times 4$ board up to vertical/horizontal permutations) does not yield an integer.

Would love it if someone could point me in the right direction. I am trying to teach myself, and I think I've misunderstood how this is meant to work.
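As a sanity check, one can bypass the cycle index entirely and apply Burnside's lemma by brute force to the $48$ induced permutations of the $8$ cells (my own check; any correct cycle index must reproduce this number at $w_i=2$):

```python
from itertools import permutations

def cycle_count(perm):
    # Number of cycles of a permutation given as a tuple: i -> perm[i]
    seen, cycles = set(), 0
    for i in range(len(perm)):
        if i not in seen:
            cycles += 1
            j = i
            while j not in seen:
                seen.add(j)
                j = perm[j]
    return cycles

# Cells of the 2x4 board; S2 x S4 permutes rows and columns
cells = [(r, c) for r in range(2) for c in range(4)]
index = {cell: i for i, cell in enumerate(cells)}

total, group_size = 0, 0
for rho in permutations(range(2)):        # row permutations (S2)
    for tau in permutations(range(4)):    # column permutations (S4)
        perm = tuple(index[(rho[r], tau[c])] for (r, c) in cells)
        total += 2 ** cycle_count(perm)   # Burnside: 2-colourings fixed by (rho, tau)
        group_size += 1

num_colorings = total // group_size
print(num_colorings)  # 22
```

Since the boxed polynomial gives a non-integer at $w_i=2$, comparing it term by term against this count is one way to locate the error.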

Integrate a 2-form on Kaehler manifold

Posted: 01 Apr 2021 08:59 PM PDT

Let $X$ be a Kaehler 3-fold, with associated Kaehler form $\omega$ and metric $g_{i\bar{j}}$, $$ \omega = \omega_{i\bar{j}} \, dz^{i} \wedge d\bar{z}^j = \frac{i}{2} g_{i\bar{j}} \, dz^{i} \wedge d\bar{z}^j \,. $$ Suppose there is also a holomorphic 2-form $B = B_{i \bar{j}} \, dz^i \wedge d\bar{z}^j$. I want to calculate the integral $$ \int_X B \wedge \omega \wedge \omega $$ and show that it yields, up to a constant, $$ \int_X \sqrt{\mathrm{det}(g)} \, g^{i \bar{j}}B_{i\bar{j}} \, dz^1\wedge d\bar{z}^1 \wedge dz^2\wedge d\bar{z}^2 \wedge dz^3\wedge d\bar{z}^3 $$ (I asked a similar question about $\int_X \omega^3$ here).

Expanding the wedge products, I arrive at $$ \epsilon^{ikm} \epsilon^{\bar{j}\bar{l}\bar{n}} \, B_{i\bar{j}} \, \omega_{k\bar{l}} \, \omega_{m \bar{n}} \, dz^1\wedge d\bar{z}^1 \wedge dz^2\wedge d\bar{z}^2 \wedge dz^3\wedge d\bar{z}^3 $$ but have become stuck here.

I also think that the relation $$ B_{i \bar{j}} = g_{i \bar{b}} g_{a \bar{j}} B^{a \bar{b}} $$ will be necessary to get three factors of the metric in order to get $\mathrm{det}(g)$, but the square root is throwing me off as well.

Show that $\dim \operatorname{im}(f) + \dim \ker(f) = \dim V$

Posted: 01 Apr 2021 08:42 PM PDT

Let $V,W$ be two finite dimensional vector spaces and $f$ is linear mapping from $V$ to $W$. Show that $\dim \operatorname{im}(f) + \dim \ker(f) = \dim V.$


Here is what I did. Please kindly give me a recommendation or help me to show this. Thanks in advance!

Let $(e_1,e_2, \dots, e_n)$ be a basis for $V$ and $(e_1,e_2, \dots, e_k)$ be a basis for $\ker(f),k<n$.

We can get $$\dim V=n,\quad \dim \ker(f)=k$$ Then we need to prove that $\dim \operatorname{im}(f)=n-k$

Since $(e_1,e_2, \dots,e_n)$ is a basis for $V$, $(e_1,e_2, \dots,e_n)$ spans $V$.

Let $x\in V$. There exist $\lambda_1,\lambda_2,\dots,\lambda_n\in\mathbb{K}$ such that $$x=\lambda_1e_1+\lambda_2e_2+\cdots+\lambda_ne_n$$ Since $f$ is a linear mapping, $$f(x)=f(\lambda_1e_1+\lambda_2e_2+\cdots+\lambda_ne_n)$$ $$f(x)=\lambda_1f(e_1)+\lambda_2f(e_2)+\cdots+\lambda_nf(e_n)$$ Recalling that $\operatorname{im}(f)=\{f(v)\in W \mid v\in V\}$, we get $$\operatorname{im}(f)=\operatorname{span}\{f(e_1),f(e_2),\dots,f(e_n)\}$$

Since $(e_1,e_2,\dots,e_n)$ is linearly independent, suppose $\alpha_1,\alpha_2,\dots,\alpha_n\in\mathbb{K},\ k<n$, satisfy $$\alpha_1e_1+\alpha_2e_2+\cdots+\alpha_ke_k+\alpha_{k+1}e_{k+1}+\cdots+\alpha_ne_n=0$$ $$f(\alpha_1e_1+\alpha_2e_2+\cdots+\alpha_ke_k+\alpha_{k+1}e_{k+1}+\cdots+\alpha_ne_n)=f(0)$$ $$\alpha_1f(e_1)+\alpha_2f(e_2)+\cdots+\alpha_kf(e_k)+\alpha_{k+1}f(e_{k+1})+\cdots+\alpha_nf(e_n)=0$$ Since $\ker(f)=\operatorname{span}\{e_1,e_2,\dots,e_k\}\Longrightarrow f(e_1)=f(e_2)=\dots=f(e_k)=0$, we get $$\alpha_{k+1}f(e_{k+1})+\cdots+\alpha_nf(e_n)=0$$ So $\{f(e_{k+1}),\dots,f(e_n)\}$ is linearly independent.

We can get $\{f(e_{k+1}),\dots,f(e_n)\}$ is a basis for $\operatorname{im}(f)\Longrightarrow \dim \operatorname{im}(f)=n-k$.

Explicit integration of Kahler form to get volume

Posted: 01 Apr 2021 08:59 PM PDT

Consider a compact Kaehler manifold $X$ of dimension $d=3$. The Kähler form in local coordinates $z^i,\bar{z}^i$ is $$ \omega = \omega_{i \bar{j}}dz^i \wedge d\bar{z}^j, $$ with $\omega_{i \bar{j}} = \frac{i}{2} g_{i \bar{j}}$, where $g_{i \bar{j}}$ is Hermitian.

I want to explicitly show that the volume given by $\int_X \omega^3 $ yields $$ A \int_X \sqrt{\mathrm{det}(g_{i\bar{j}}) } dz^1\wedge dz^2 \wedge dz^3 \wedge d\bar{z}^1 \wedge d\bar{z}^2 \wedge d\bar{z}^3, $$ where $A$ is some overall constant.

The closest I arrive at my intended result is: $$ \int_X \epsilon^{i k m} \epsilon^{\bar{j} \bar{l} \bar{n} } ~ \omega_{i \bar{j}} \omega_{k \bar{l}} \omega_{m \bar{n} } ~ dz^1\wedge dz^2 \wedge dz^3 \wedge d\bar{z}^1 \wedge d\bar{z}^2 \wedge d\bar{z}^3, $$ using the antisymmetry of the wedge product and the identity $$ dz^i \wedge dz^k \wedge dz^m = \epsilon^{i k m} dz^1\wedge dz^2 \wedge dz^3 $$

and its complex conjugate.

I'm almost convinced that I can identify $ \epsilon^{i k m} \epsilon^{\bar{j} \bar{l} \bar{n} } ~ \omega_{i \bar{j}}\omega_{k \bar{l}} \omega_{m \bar{n} } = \mathrm{det} (\omega_{i \bar{j}})$, but not 100% sure.
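For that last identification, the Levi-Civita expansion of a $3\times 3$ determinant (a standard identity, stated here in the index form matching the contraction above) pins down the constant:

```latex
% For any 3x3 array M_{i\bar{j}},
\epsilon^{ikm}\,\epsilon^{\bar{j}\bar{l}\bar{n}}\,
M_{i\bar{j}}\, M_{k\bar{l}}\, M_{m\bar{n}}
\;=\; 3!\,\det\!\left(M_{i\bar{j}}\right),
% so the contraction above equals 3! det(omega), not det(omega) itself, and
\det\!\left(\omega_{i\bar{j}}\right)
= \left(\tfrac{i}{2}\right)^{3}\det\!\left(g_{i\bar{j}}\right).
```

The factor of $3!$ and the $(i/2)^3$ together make up the overall constant $A$.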

(FWIW, I have seen a version of this where the $dz^i , d\bar{z}^j$'s are turned into real forms $dx^i, dy^j$, but I want to avoid going down that route for the moment and express the result in complex coordinates.)

If $A^3 = O$, what condition do the entries of $A$ satisfy?

Posted: 01 Apr 2021 08:50 PM PDT

$$A = \begin{bmatrix} a & b \\ c & d \\ \end{bmatrix}$$

When $A^3 = O$ (zero matrix) is satisfied, what condition do the four real numbers $a, b, c, d$ meet?


I cubed $A$:

$A^3 = \begin{bmatrix} a(a^2+bc)+bc(a+d) & b(a^2+bc) + bd(a+d) \\ ac(a+d)+c(bc+d^2) & bc(a+d)+d(bc+d^2)\end{bmatrix}$

I don't know how to proceed from here.

Post script

Thank you for the replies, I might have found the answers:

$$ \begin{eqnarray} (A^3)_{11} \cdot d - (A^3)_{12} \cdot c = 0 \\ (ad-bc)(a^2+bc) = 0 \\ ad-bc = 0 \end{eqnarray} $$

But why $ a^2 + bc \neq 0$?
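For what it's worth, here is a numerical spot-check (my own sketch) of the Cayley-Hamilton route: if $A^3=O$ then both eigenvalues of $A$ vanish, so $\operatorname{tr} A = a+d = 0$ and $\det A = ad-bc = 0$, and then $A^2 = A^2 - (a+d)A + (ad-bc)I = O$ already. Note that these two conditions also force $a^2+bc = a^2+ad = a(a+d) = 0$, so dividing by $a^2+bc$ is indeed delicate:

```python
def matmul(X, Y):
    # 2x2 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# A sample matrix with trace 0 and determinant 0
A = [[1, 1], [-1, -1]]
A2 = matmul(A, A)
A3 = matmul(A2, A)

print(A2)  # [[0, 0], [0, 0]] -- A^2 is already the zero matrix
print(A3)  # [[0, 0], [0, 0]]
# Here a^2 + bc = 1 + (1)(-1) = 0, so the factor a^2 + bc can vanish
```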

Does sign-magnitude ordering admit infinite descending chains in any discretely ordered ring?

Posted: 01 Apr 2021 08:29 PM PDT

I have a conjecture:

Let $R$ be an ordered ring, i.e. a structure $(S, 0, 1, +, ·, <)$.

Let $|x| = x$ if $0 \leq x$ and $|x| = -x$ if $x < 0$.

Define $x \preccurlyeq y$ to hold if either $|x| < |y|$ or both $|x| = |y|$ and $y < x$. Observe that $\preccurlyeq$ is an ordering, though not congruent with $+$ or $·$.

Assume $1$ is the smallest positive number.

Then $\preccurlyeq$ admits of no infinite descending chains.

Is this conjecture true?

Note: if $0 < q < 1$ then $0 = 0q < qq < 1q = q$ i.e. $0 < q^2 < q$, and all $q^{2^n}$ are distinct by transitivity and irreflexivity, and form an infinite descending chain. So I need 1 to be the smallest positive number.

But then if $n < q < n+1$ where $n = 1 + \ldots + 1$, then $0 < q - n < 1$ which we assumed to not be true. If $k < q < k+1 \leq 0$ then $-0 = 0 \leq -(k+1) < -q < -k$. So there cannot be any numbers "between the integers". Can there be numbers "beyond the integers"?

Otherwise, for any $x$ I think I can let $m$ be the smallest $1 + \ldots + 1$ such that $|x| \leq m$, and then there are only finitely many non-negative values smaller than $m$, and thus only finitely many values $\preccurlyeq$-smaller than $x$, and my conjecture holds.

I observe with curiosity that one can prove the Archimedean property for the reals using the least upper bound property. I'm not sure what to make of that, but it smells relevant.

Abundant products of iterations of Euler's totient function

Posted: 01 Apr 2021 08:55 PM PDT

Let $a_0(n) = n$ and $a_{i+1}(n) = \varphi(a_i(n))$ for $i\geq 0$, where $\varphi(n)$ is Euler's totient function (the number of positive integers less than or equal to $n$ and coprime with $n$). Denote $f(n) = \prod_{k=0}^{\infty}a_k(n)$.

Determine all positive integers $n$ such that the sum of positive divisors of $f(n)$ is strictly greater than $2f(n)$.
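A brute-force search over small $n$ (my own sketch, not part of the problem; note the infinite product is effectively finite, since the iterates reach $1$ and trailing factors of $1$ change nothing) can build intuition:

```python
def phi(n):
    # Euler's totient via trial factorization
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

def f(n):
    # Product of the totient iterates n, phi(n), phi(phi(n)), ..., down to 1
    prod = 1
    while n > 1:
        prod *= n
        n = phi(n)
    return prod

def sigma(n):
    # Sum of positive divisors (naive)
    return sum(d for d in range(1, n + 1) if n % d == 0)

abundant = [n for n in range(2, 30) if sigma(f(n)) > 2 * f(n)]
print(abundant)
```

For example, $f(6) = 6\cdot 2\cdot 1 = 12$ with $\sigma(12)=28 > 24$, so $n=6$ qualifies, while $f(4)=8$ with $\sigma(8)=15 < 16$ does not.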

infinitesimal strain tensor in cylindrical coordinates

Posted: 01 Apr 2021 08:36 PM PDT

How can I obtain the formulas below for the infinitesimal strain in cylindrical coordinates using matrix calculation, given the first formula? I find it hard to study them because I still don't know how to derive them.

$$ \epsilon_{ij}=\frac{1}{2}\left(u\otimes\nabla+\nabla\otimes u\right)\\ \,\\ \begin{align} u\otimes\nabla &=\begin{bmatrix}u_r\\u_{\vartheta}\\u_z\end{bmatrix}\begin{bmatrix}\dfrac{\partial}{\partial r}&\dfrac{1}{r}\dfrac{\partial}{\partial\vartheta}&\dfrac{\partial}{\partial z}\end{bmatrix}\\\\ &=\begin{bmatrix}\dfrac{\partial U_r}{\partial r}&\dfrac{1}{r}\dfrac{\partial U_r}{\partial\vartheta}&\dfrac{\partial U_r}{\partial z}\\\\\dfrac{\partial U_{\vartheta}}{\partial r}&\dfrac{1}{r}\dfrac{\partial U_{\vartheta}}{\partial\vartheta}&\dfrac{\partial U_{\vartheta}}{\partial z}\\\\\dfrac{\partial U_z}{\partial r}&\dfrac{1}{r}\dfrac{\partial U_z}{\partial\vartheta}&\dfrac{\partial U_z}{\partial z}\end{bmatrix}\end{align} $$

Above I show my attempt at deriving the first part of the tensor, but I don't know how to derive the second part.

\begin{align} \varepsilon_{ij} &= \frac{1}{2} (U_{i,j} + U_{j,i})\\ \varepsilon_{rr} & = \cfrac{\partial u_r}{\partial r} \\ \varepsilon_{\theta\theta} & = \cfrac{1}{r}\left(\cfrac{\partial u_\theta}{\partial \theta} + u_r\right) \\ \varepsilon_{zz} & = \cfrac{\partial u_z}{\partial z} \\ \varepsilon_{r\theta} & = \cfrac{1}{2}\left(\cfrac{1}{r}\cfrac{\partial u_r}{\partial \theta} + \cfrac{\partial u_\theta}{\partial r}- \cfrac{u_\theta}{r}\right) \\ \varepsilon_{\theta z} & = \cfrac{1}{2}\left(\cfrac{\partial u_\theta}{\partial z} + \cfrac{1}{r}\cfrac{\partial u_z}{\partial \theta}\right) \\ \varepsilon_{zr} & = \cfrac{1}{2}\left(\cfrac{\partial u_r}{\partial z} + \cfrac{\partial u_z}{\partial r}\right) \end{align}
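A point that may help (a standard fact about curvilinear coordinates, sketched here): the extra $u_r/r$ and $-u_\theta/r$ terms come from the $\vartheta$-dependence of the unit basis vectors, which must be differentiated along with the components.

```latex
% In cylindrical coordinates the unit vectors depend on \vartheta:
\frac{\partial \hat{e}_r}{\partial \vartheta} = \hat{e}_\vartheta,
\qquad
\frac{\partial \hat{e}_\vartheta}{\partial \vartheta} = -\hat{e}_r .
% Hence, with u = u_r \hat{e}_r + u_\vartheta \hat{e}_\vartheta + u_z \hat{e}_z,
\frac{1}{r}\frac{\partial u}{\partial \vartheta}
 = \frac{1}{r}\left(\frac{\partial u_r}{\partial \vartheta} - u_\vartheta\right)\hat{e}_r
 + \frac{1}{r}\left(\frac{\partial u_\vartheta}{\partial \vartheta} + u_r\right)\hat{e}_\vartheta
 + \frac{1}{r}\frac{\partial u_z}{\partial \vartheta}\,\hat{e}_z ,
% producing the u_r/r term in \varepsilon_{\theta\theta}
% and the -u_\theta/r term in \varepsilon_{r\theta}.
```

The naive component-wise product of the column $u$ with the row of derivative operators misses exactly these terms.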

$LU$ factorization to solve a system of equations

Posted: 01 Apr 2021 09:01 PM PDT

I am having trouble solving the following exercise:

Suppose $A\in \mathbb R^{n\times n}$ has an $LU$ factorization. Show how $Ax=b$ can be solved without storing the multipliers by computing the $LU$ factorization of the $n-\text{by}-(n+1)$ matrix $[A\ \ b]$

The problem I have is that, so far, this kind of exercise is solved with Gaussian elimination, where multipliers are used and the algorithm has a step that specifically stores them (the $\tau_i=\frac{v_i}{v_k}$ used in Gaussian elimination). So I don't see how the system can be solved without storing the multipliers, since the Gaussian transformations used for this type of problem involve multipliers in their construction.

I also have doubts about why the factorization of the augmented matrix can be obtained at all: what I know is that a matrix has an LU factorization only when it is non-singular, so what would be the criterion for deciding whether $[A\ \ b]$, which is not square, has an $LU$ factorization? (Using the lu command in MATLAB, I have experimented with some systems; for example, for a $2\times 2$ system it does return an $LU$ factorization of the augmented matrix.) I think I don't fully understand how the $LU$ factorization works.

I am using the Golub matrix computations book to study this topic and it is one of the proposed exercises. Well, here I have read about the benefits of using LU in these cases

Any help you can give me I appreciate a lot in advance, thank you.
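Here is one way to read the exercise (my own sketch, not Golub & Van Loan's solution): run Gaussian elimination on the augmented matrix $[A\ b]$, using each multiplier once and discarding it immediately; the last column of the reduced matrix is then $y = L^{-1}b$, and back substitution on $Ux = y$ gives $x$.

```python
def solve_via_augmented_lu(A, b):
    # Forward elimination on [A | b]; assumes the LU factorization of A
    # exists (no pivoting). Multipliers are computed, used, and dropped.
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
    for k in range(n - 1):
        for i in range(k + 1, n):
            t = M[i][k] / M[k][k]   # multiplier: never stored past this row update
            for j in range(k, n + 1):
                M[i][j] -= t * M[k][j]
    # M now holds [U | y] with y = L^{-1} b; back-substitute U x = y
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = s / M[i][i]
    return x

print(solve_via_augmented_lu([[2.0, 1.0], [1.0, 3.0]], [3.0, 4.0]))  # [1.0, 1.0]
```

The point is that the same row operations that reduce $A$ to $U$ are applied to $b$ at the same time, so $L$ never needs to be kept.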

Weak and strong orders

Posted: 01 Apr 2021 09:04 PM PDT

Did I understand it right that $R=\{\langle X,Y\rangle \mid X\subseteq Y\}$ is a weak order and $\{\langle X,Y\rangle \mid X\subset Y\}$ is a strict order (not sure about the correct English term)?

If $G$ is a group, $H \leq G$, and $G=\mathbb{Z}_{20}$, what is $H=\langle 4\rangle$?

Posted: 01 Apr 2021 08:19 PM PDT

If $G=\mathbb{Z}_{20}$ and $H=\langle 4\rangle$, what is the set $G$ composed of? What is $H$, and what are the elements of $H$ in $G$?

Also, how will I be able to graph this?

It is part of my homework, which I'd rather do myself, but I can't find simple examples on the internet.
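In case it helps to experiment without giving the answer away, the subgroup generated by an element of $\mathbb{Z}_n$ can be listed directly (my own illustration):

```python
def cyclic_subgroup(g, n):
    # Subgroup of Z_n generated by g: all multiples of g modulo n
    h, elems = 0, []
    while True:
        elems.append(h)
        h = (h + g) % n
        if h == 0:
            break
    return elems

print(cyclic_subgroup(4, 20))  # [0, 4, 8, 12, 16]
```

In general $|\langle g\rangle| = n/\gcd(g,n)$; here $20/\gcd(4,20) = 5$.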
