Saturday, November 13, 2021

Recent Questions - Mathematics Stack Exchange


Show that field extensions of $\mathbb{F}_{81}$ are normal

Posted: 13 Nov 2021 04:01 PM PST

Let $\mathbb{F}_{81}$ be the finite field with $81$ elements, $L=\operatorname{Quot}(\mathbb{F}_{81}[T])$ and $K=\mathbb{F}_{81}(T^{10})\subseteq L$. Show that the extension $L/K$ is normal. Hint: What does the factorization of $X^n-a$ look like?

My attempt:

I know that $\mathbb{F}_{p^n}^{\times}$ is cyclic. If we look at $\mathbb{F}_{p^n}$ and a factor $k$ of $n$, I can find subgroups $U$ with order $p^k-1$ so that \begin{align*} X^{p^k}-X=X\cdot \prod_{u\in U} (X-u) \end{align*} In my case, I have $\mathbb{F}_{3^4}$ and $k=1,2,4$

Could that be of any relevance here? I also think that the Frobenius automorphism might be useful. My main problem is that I cannot imagine $L$ and $K$.

I think $x\in K$ can be written as $x=\frac{a_nT^{10n}+...+a_1T^{10}+a_0}{b_mT^{10m}+...+b_1T^{10}+b_0}$ and $y\in L$ can be written as $y=\frac{a_rT^{r}+...+a_1T+a_0}{b_sT^{s}+...+b_1T+b_0}$

I'd like to have some ideas/tips/... that push me in the right direction.
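
A possible direction, not from the original post and only a sketch: write $a=T^{10}\in K$, so that $L=K(T)$ where $T$ is a root of $X^{10}-a\in K[X]$. Since $|\mathbb{F}_{81}^{\times}|=80$ and $10\mid 80$, the field $\mathbb{F}_{81}$ already contains a primitive $10$th root of unity $\zeta$, so over $L$ $$X^{10}-T^{10}=\prod_{i=0}^{9}\bigl(X-\zeta^{i}T\bigr).$$ Hence $L$ is a splitting field of $X^{10}-a$ over $K$, which is one standard criterion for $L/K$ being normal.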

Is this injective (one-to-one): $ \mathbb{Z} \to \mathbb{Z^+};\; g(x) = \lvert x\rvert + 1 $?

Posted: 13 Nov 2021 03:56 PM PST

I am having trouble figuring out the domain and codomain for this question. Even though the domain is all integers, positive or negative, would the codomain only allow positive integers? So instead of a V-shaped diagram, I would get a simple sloped line increasing in the positive $x$-direction.
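
A quick check, not part of the original post: injectivity fails as soon as two different inputs share an output, and here $$g(-1)=\lvert -1\rvert + 1 = 2 = \lvert 1\rvert + 1 = g(1),$$ so restricting the codomain to $\mathbb{Z}^{+}$ does not change which inputs collide.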

Density of $\{ v \in C^0[0,1]: v(0)=0\}$ in $C^0[0,1]$

Posted: 13 Nov 2021 04:06 PM PST

Let $V = \{ v \in C^0[0,1]: v(0)=0\}$. I need to show this set is dense in $C^0[0,1]$.

I was thinking of selecting a sequence of functions $\{ f_n \} \subset V$ such that $f_n(0)=0$, so they're in $V$, and such that as $n$ increases I am always closer to $f(0)$, where $f$ is the function I want to approximate.

I came up with $$f_n(x)= \begin{cases} f\left(\frac{x}{n}\right)x, & x \in I_n \\ f(x), & x \notin I_n \end{cases}$$

with $I_n =[0,\frac{1}{n})$. Now, as $n \rightarrow \infty$, $I_n \rightarrow \{ 0 \}$ and $f(\frac{x}{n}) x \rightarrow f(0) \cdot 0 =0$, which is not $f(0)$... how can I solve this issue?


Okay, here's what I did: I need to show that $$||f_n -f||_{L^2(0,1)} \rightarrow 0$$ As the sequence and the original function $f$ agree except on $I_n$, I need to compute that difference on $I_n$ only.

Then I have:

$$\int_0^{1/n}\Bigl(f\bigl(\tfrac{x}{n}\bigr) - f(x)\Bigr)^2dx \overset{nx = t}{=} \int_0^1 \Bigl(f\bigl(\frac{t}{n^2}\bigr) \frac{t}{n^2} - f\bigl(\frac{t}{n} \bigr) \Bigr)^2 \frac{dt}{n} $$ Taking the limit, that integral goes to $0$. Is this correct?
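
A side note, not from the original post, assuming the intended norm really is the $L^2$ norm used above: a slightly different sequence (call it $g_n$ to avoid clashing with the $f_n$ above) sidesteps the mismatch at $0$, namely $g_n(x)=f(x)\min(nx,1)\in V$, for which $$\|g_n-f\|_{L^2(0,1)}^2=\int_0^{1/n}f(x)^2\,(1-nx)^2\,dx\le \frac{\|f\|_\infty^2}{n}\xrightarrow[n\to\infty]{}0,$$ so every $f\in C^0[0,1]$ is an $L^2$-limit of elements of $V$. (In the sup norm $V$ is closed, so the choice of norm matters here.)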

Discrete Mathematics Division Algorithm proof

Posted: 13 Nov 2021 04:06 PM PST

I'm not quite sure how to do this problem; if anyone can give a step-by-step solution to help me understand it, I would appreciate it a lot.

Let $a$ and $b$ be positive integers with $b > a$, and suppose that the division algorithm yields $b = a \cdot q + r$, with $0 \leq r < a$ (note: it's a zero).

Prove that $$\mathrm{lcm}(a, b) − \mathrm{lcm}(a, r) = \frac{a^2 \cdot q}{\gcd(a, b)}.$$
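
A quick numerical sanity check, not part of the original post: take $a=4$, $b=10$, so $q=2$ and $r=2$. Then $$\mathrm{lcm}(4,10)-\mathrm{lcm}(4,2)=20-4=16=\frac{4^2\cdot 2}{\gcd(4,10)},$$ which at least confirms the identity in one case before attempting a proof.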

Terms in a generated substructure

Posted: 13 Nov 2021 03:40 PM PST

Let $\underline{B}$ be a $\tau$-structure and $G\subseteq B$, and let $\underline{A}=\langle G\rangle_B$ denote the smallest substructure of $\underline{B}$ generated by $G$.

Show that for each $a\in A$, there exist a $\tau$-term $t(x_1,\ldots,x_n)$ and $g_1,\ldots,g_n\in G$ such that $t^{\underline{B}}(g_1,\ldots,g_n)=a$.

My attempt: Let $$C=\{a\in A\mid \exists \tau\text{-term }t(x_1,\ldots,x_n) \text{ and } g_1,\ldots,g_n\in G :t^{\underline{B}}(g_1,\ldots,g_n)=a \}.$$ My idea is to show that $\underline{C}$ is a substructure of $B$ and $G\subseteq C$. Then I can directly obtain $\underline{A}\subseteq\underline{C}$.

To show that $G\subseteq C$, let $g\in G$ and for a variable $x_1$, let $t=x_1$, then $t^{\underline{B}}(g)=g$ and hence $g\in C$.

Is this correct? Moreover, how do I show that $\underline{C}$ is a substructure?
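
A sketch of the missing closure step, not from the original post: for an $n$-ary function symbol $f\in\tau$ and $a_1,\ldots,a_n\in C$ witnessed by terms $t_1,\ldots,t_n$ (over tuples from $G$ that can be merged into one common tuple $\bar g$), the composite term $f(t_1,\ldots,t_n)$ witnesses $$f^{\underline{B}}(a_1,\ldots,a_n)=f^{\underline{B}}\bigl(t_1^{\underline{B}}(\bar g),\ldots,t_n^{\underline{B}}(\bar g)\bigr)=\bigl(f(t_1,\ldots,t_n)\bigr)^{\underline{B}}(\bar g)\in C,$$ and constant symbols are $0$-ary terms, so $C$ is closed under all operations of $\underline{B}$ and therefore carries a substructure $\underline{C}$.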

Let $C_n$ be a cyclic group of order $n$. Find all automorphisms of $C_n$.

Posted: 13 Nov 2021 03:43 PM PST

Let $C_n$ be a cyclic group of order $n$, I want to find all isomorphism from $C_n$ to $C_n$.

If $f: C_n\rightarrow C_n$ is an isomorphism, then for all $x\in C_n$ there exists a generator $g$ such that $f(x)=f(g^m)=f(g)^m$ for some $m\in \mathbb{N}$.

Now, because $f(g)\in C_n$, in order to generate the whole group it must be true that $\text{gcd}(n,m)=1$, so there can be $\phi(n)$ automorphisms. More specifically, the automorphisms are $f_1(x)=x$, ..., $f_{\phi(n)}(x)=x^{n-1}$.

Now I want to show that the set of those automorphisms forms a cyclic group of order $\phi(n)$ under composition.

$f_1(x)=x$ is the identity element, and the inverse of $f_i(x)$ is (I don't know).

I also don't know how to prove that $\text{Aut}(C_n)$ is cyclic; I don't see why some $\langle f^k\rangle$, $k\in \mathbb{N}$, generates all the other isomorphisms.
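
A possible direction, not from the original post: writing $f_m(x)=x^m$ with $\gcd(m,n)=1$, composition works out to $$(f_m\circ f_k)(x)=(x^k)^m=x^{mk}=f_{mk\bmod n}(x),$$ so $m\mapsto f_m$ gives an isomorphism $(\mathbb{Z}/n\mathbb{Z})^{\times}\to \operatorname{Aut}(C_n)$, and the inverse of $f_m$ is $f_{m'}$ where $mm'\equiv 1 \pmod n$. Note that $(\mathbb{Z}/n\mathbb{Z})^{\times}$, and hence $\operatorname{Aut}(C_n)$, is cyclic only for certain $n$ (namely $n=1,2,4,p^k,2p^k$ with $p$ an odd prime), so the cyclicity claim needs that restriction.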

Is $I^\times = I$? That is, is the adjoint of the identity operator the identity operator?

Posted: 13 Nov 2021 03:36 PM PST

Fundamentally this would make sense, but I have not found anything that explicitly states if this is true or not.
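
A minimal check, not from the original post and assuming the adjoint of a bounded operator on a Hilbert space is meant: for all $x,y$, $$\langle Ix, y\rangle=\langle x, y\rangle=\langle x, Iy\rangle,$$ and since the adjoint is uniquely determined by this identity, the adjoint of the identity is again the identity. (For the Banach-space transpose the analogous computation with functionals gives the identity on the dual space.)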

I need help approaching this limit of a limit.

Posted: 13 Nov 2021 04:05 PM PST

As I was trying to study today, I came across this limit that I have no idea how to solve.

$$\lim_{n \to \infty} ( \lim_{x \to 0} (1+\sin^2(x)+\sin^2(2x)+\cdots+\sin^2(nx))^\frac{1}{n^3\cdot x^2})$$

My first thought was the squeeze theorem:

If:
$ 0 \le \sin^2(x) \le 1$

Then:
$ 1 \le 1+\sin^2(x)+\sin^2(2x)+\cdots+\sin^2(nx) \le n+1$. Raising it to the power of $\frac{1}{n^3\cdot x^2}$, then

$$\large 1 \le \bigl(1+\sin^2(x)+\sin^2(2x)+\cdots+\sin^2(nx)\bigr)^\frac{1}{n^3\cdot x^2} \le (n+1)^\frac{1}{n^3\cdot x^2}$$

But I got stuck solving the limit for $(n+1)^\frac{1}{n^3\cdot x^2}$.

Thanks!
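
A possible route, not from the original post: the inner limit is a $1^{\infty}$ form, and since $\sin^2(kx)\sim k^2x^2$ as $x\to 0$, $$\lim_{x \to 0}\bigl(1+\sin^2 x+\cdots+\sin^2(nx)\bigr)^{\frac{1}{n^3x^2}}=\exp\Bigl(\lim_{x\to 0}\frac{\sin^2 x+\cdots+\sin^2(nx)}{n^3x^2}\Bigr)=\exp\Bigl(\frac{1^2+\cdots+n^2}{n^3}\Bigr)=\exp\Bigl(\frac{(n+1)(2n+1)}{6n^2}\Bigr),$$ and letting $n\to\infty$ then gives $e^{1/3}$. (The squeeze bound above is too lossy: the upper bound $(n+1)^{1/(n^3x^2)}$ blows up as $x\to 0$ because it ignores that the base also tends to $1$.)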

Prove for every three integers $a$, $b$ and $c$ that an even number of the integers $a + b$, $a + c$ and $b + c$ are odd.

Posted: 13 Nov 2021 03:22 PM PST

Is my proof correct? I'm asking because it differs from the solution in my book. Thanks in advance.

Lemma 1 given: "If the sum of two integers is even then they have the same parity"

Proof by contradiction.
Assume, for contradiction, that an odd number of the integers $a+b$, $a+c$, $b+c$ is odd.

Case 1: exactly one of the sums is odd.
WLOG let $a + b$ be odd. Then by Lemma 1, $a$ and $b$ have different parity. WLOG let $a$ be even. Since $a$ is even, $c$ is also even; since $c$ is even, $b$ is also even, which is a contradiction.

Case 2: all three sums are odd. WLOG let $a$ be even. Since $a$ is even, $c$ is odd; since $c$ is odd, $b$ is even, which results in both $a$ and $b$ being even, which is a contradiction.

$\blacksquare$

Prove $\sum _{i=1}^{n} a_{i}^{2} \geqslant 1/n$ if $\sum _{i=1}^{n} a_{i} =1$

Posted: 13 Nov 2021 03:33 PM PST

Prove that if $\sum _{i=1}^{n} a_{i} =1$, then $\sum _{i=1}^{n} a_{i}^{2} \geqslant 1/n$

Thanks!
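
A standard route, not from the original post: by the Cauchy-Schwarz inequality, $$1=\Bigl(\sum_{i=1}^{n} a_i\cdot 1\Bigr)^{2}\le \Bigl(\sum_{i=1}^{n} a_i^{2}\Bigr)\Bigl(\sum_{i=1}^{n} 1^{2}\Bigr)=n\sum_{i=1}^{n} a_i^{2},$$ which rearranges to $\sum_{i=1}^{n} a_i^{2}\ge 1/n$, with equality exactly when every $a_i=1/n$.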

Primal solution exists but dual solution does not using linprog?

Posted: 13 Nov 2021 03:47 PM PST

I am solving the dedication problem in Optimization Methods in Finance by Gérard Cornuéjols, p. 36, using Matlab.

$x_{j}:$ amount of bonds $j$ in the portfolio, for $j=1, \ldots, 10$

$s_{t}$: surplus cash in year $t$, for $t=1, \ldots, 6$. Liabilities needed to be covered in years 1 through 6: $$ \begin{array}{ccccccc} \hline \text { Year } & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline \text { Required } & 100 & 200 & 800 & 100 & 800 & 1200 \\ \hline \end{array} $$

$$ \begin{array}{cccccccc} \hline \text{Bond} & \text{Year } 1 & \text{Year } 2 & \text{Year } 3 & \text{Year } 4 & \text{Year } 5 & \text{Year } 6 & \text{Price} \\ \hline 1 & 10 & 10 & 10 & 10 & 10 & 110 & 109 \\ 2 & 7 & 7 & 7 & 7 & 7 & 107 & 94.8 \\ 3 & 8 & 8 & 8 & 8 & 8 & 108 & 99.5 \\ 4 & 6 & 6 & 6 & 6 & 106 & & 93.1 \\ 5 & 7 & 7 & 7 & 7 & 107 & & 97.2 \\ 6 & 5 & 5 & 5 & 105 & & & 92.9 \\ 7 & 10 & 10 & 110 & & & & 110 \\ 8 & 8 & 8 & 108 & & & & 104 \\ 9 & 7 & 107 & & & & & 102 \\ 10 & 100 & & & & & & 95.2 \\ \hline \end{array} $$

The primal is

Objective $$ \min 109 x_{1}+94.8 x_{2}+\cdots+102 x_{9}+95.2 x_{10} $$ Constraints: $$ 10 x_{1}+7 x_{2}+8x_{3}+6x_{4}+7x_{5}+5x_{6}+10x_{7}+8x_{8}+7 x_{9}+100 x_{10} \quad-s_{1}=100 $$ $$ 10 x_{1}+7 x_{2}+8x_{3}+6x_{4}+7x_{5}+5x_{6}+10x_{7}+8x_{8}+107 x_{9} \quad+s_{1}-s_{2}=200 $$ $$10x_{1}+7x_{2}+8x_{3}+6x_{4}+7x_{5}+5x_{6}+110x_{7}+108x_{8}+s_{2}-s_{3}=800$$ $$10x_{1}+7x_{2}+8x_{3}+6x_{4}+7x_{5}+105x_{6}+s_{3}-s_{4}=100$$ $$10x_{1}+7x_{2}+8x_{3}+106x_{4}+107x_{5}+s_{4}-s_{5}=800$$ $$110x_{1}+107x_{2}+108x_{3}+s_{5}-s_{6}=1200$$ $x_{i}>0$, and the dual is

Objective: $$ \max 100y_{1}+200y_{2}+\cdots+800y_{5}+1200y_{6} $$ Constraints: $$ 10y_{1} + 10y_{2} + 10y_{3} + 10y_{4} + 10y_{5} + 110y_{6} = 109$$ $$ 7y_{1} +7y_{2}+7y_{3}+7y_{4}+7y_{5}+107y_{6}=94.8$$ $$8y_{1}+8y_{2}+8y_{3}+8y_{4}+8y_{5}+108y_{6}=99.5$$ $$6y_{1}+6y_{2}+6y_{3}+6y_{4}+106y_{5}=93.1$$ $$7y_{1}+7y_{2}+7y_{3}+7y_{4}+107y_{5}=97.2$$ $$5y_{1}+5y_{2}+5y_{3}+105y_{4}=92.9$$ $$10y_{1}+10y_{2}+110y_{3}=110$$ $$8y_{1}+8y_{2}+108y_{3}=104$$ $$7y_{1}+107y_{2}=102$$ $$100y_{1}=95.2$$ $$-y_{1}+y_{2}=0$$ $$-y_{2}+y_{3}=0$$ $$-y_{3}+y_{4}=0$$ $$-y_{4}+y_{5}=0$$ $$-y_{5}+y_{6}=0$$ $$-y_{6}=0$$ $$y_{i}\ \text{free}$$ Yet linprog finds no solution to the dual, while it does solve the primal.

Is my dual formulation wrong? How can I find the solution to the dual?
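
Two hedged remarks, not from the original post. First, with nonnegative primal variables ($x_j$ and $s_t$), standard LP duality gives inequality constraints in the dual ($10y_1+\cdots+110y_6\le 109$, etc., and $-y_t+y_{t+1}\le 0$), not equalities; dual equality constraints would correspond to free primal variables, so the equality-constrained dual written above is over-constrained and can easily be infeasible. Second, the dual values can also be read off the primal solve; below is a sketch in Python/SciPy (instead of Matlab), assuming the HiGHS-based scipy.optimize.linprog, whose result exposes the equality-constraint duals as res.eqlin.marginals (check its sign convention against the textbook):

    import numpy as np
    from scipy.optimize import linprog

    # Bond cash flows: row = year (1..6), column = bond (1..10), values from the table above
    F = np.array([
        [10,   7,   8,   6,   7,   5,  10,   8,   7, 100],
        [10,   7,   8,   6,   7,   5,  10,   8, 107,   0],
        [10,   7,   8,   6,   7,   5, 110, 108,   0,   0],
        [10,   7,   8,   6,   7, 105,   0,   0,   0,   0],
        [10,   7,   8, 106, 107,   0,   0,   0,   0,   0],
        [110, 107, 108,   0,   0,   0,   0,   0,   0,   0],
    ], dtype=float)
    prices = np.array([109, 94.8, 99.5, 93.1, 97.2, 92.9, 110, 104, 102, 95.2])
    liabilities = np.array([100, 200, 800, 100, 800, 1200], dtype=float)

    # Surplus bookkeeping: ... + s_{t-1} - s_t = L_t  (with s_0 = 0)
    S = -np.eye(6) + np.diag(np.ones(5), k=-1)

    A_eq = np.hstack([F, S])                    # variables: [x_1..x_10, s_1..s_6]
    c = np.concatenate([prices, np.zeros(6)])   # surplus cash carries no cost here

    res = linprog(c, A_eq=A_eq, b_eq=liabilities, bounds=(0, None), method="highs")
    print("primal objective:", res.fun)
    print("dual prices y_1..y_6:", res.eqlin.marginals)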

What is the Galois group of this extension?

Posted: 13 Nov 2021 03:36 PM PST

Let $u$ be the real root of $x^3-x+1$; I am trying to calculate the Galois group of $\mathbb{Z}/3\mathbb{Z}(u)/\mathbb{Z}/3\mathbb{Z}$. Let's start with the degree: as $x^3-x+1$ is monic and irreducible over $\mathbb{Z}/3\mathbb{Z}$ (just checking that $0$, $1$ and $2$ are not roots of the polynomial), and $u$ is a root of the polynomial, $x^3-x+1$ is the minimal polynomial of $u$ over $\mathbb{Z}/3\mathbb{Z}$. So $[\mathbb{Z}/3\mathbb{Z}(u):\mathbb{Z}/3\mathbb{Z}]=3$ (the degree of the extension), and we can choose $\{1,u,u^{2}\}$ as a basis for the extension.

Then we proceed to calculate the Galois group, i.e., $G=Aut_{\mathbb{Z}/3\mathbb{Z}}\mathbb{Z}/3\mathbb{Z}(u)$ (the group of $\mathbb{Z}/3\mathbb{Z}$-automorphisms of $\mathbb{Z}/3\mathbb{Z}(u)$). Every automorphism is determined by the image of $u$, and if $\varphi \in G$, then $\varphi(u)$ is a root of the minimal polynomial of $u$ ($x^3-x+1$); of course, some minor proofs have to be given in order to justify these assumptions. The only root of the polynomial that is in $\mathbb{Z}/3\mathbb{Z}(u)$ is $u$, as the other $2$ are complex roots, so the only possible automorphism is the identity, which sends $u$ to $u$. So $G=\{Id\}$, and $G'$ (i.e., ''what the automorphisms in $G$ fix in $\mathbb{Z}/3\mathbb{Z}(u)$'') is $\mathbb{Z}/3\mathbb{Z}(u)$; and as $G'$ is different from $\mathbb{Z}/3\mathbb{Z}$, the extension is not a Galois extension.

Is this sketch of my way of thinking through the exercise ok? And if I replace $\mathbb{Z}/3\mathbb{Z}(u)$ with any $\mathbb{Z}/n\mathbb{Z}(u)$, with $n$ prime, will the exercise still have analogous results? Thanks in advance for your time!
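
A side note, not from the original post, that may help when checking the reasoning: $\mathbb{Z}/3\mathbb{Z}(u)$ is the finite field $\mathbb{F}_{27}$, and its elements are not complex numbers, so "complex roots" is not the relevant notion here. The Frobenius map $\varphi(x)=x^3$ is a $\mathbb{Z}/3\mathbb{Z}$-automorphism of $\mathbb{F}_{27}$, and over $\mathbb{F}_{27}$ $$x^3-x+1=(x-u)(x-u^3)(x-u^9),$$ so all three roots already lie in $\mathbb{Z}/3\mathbb{Z}(u)$ and the Galois group is cyclic of order $3$, generated by Frobenius. The same pattern holds for $\mathbb{F}_p(u)$ with $u$ a root of an irreducible polynomial of degree $d$: the extension is $\mathbb{F}_{p^d}/\mathbb{F}_p$, which is Galois with cyclic group generated by $x\mapsto x^p$.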

Locality of Connection

Posted: 13 Nov 2021 03:55 PM PST

This is a lemma in Lee's Riemannian Geometry:

Lemma 4.1: Suppose $\nabla$ is a connection in a smooth vector bundle $E \rightarrow M$. For every $X \in \mathfrak X(M)$, $Y \in \Gamma(E)$, and $p \in M$, the covariant derivative $\nabla_X Y|_p$ depends only on the values of $X$ and $Y$ in an arbitrarily small neighborhood of $p$.

First, consider $Y$. Replacing $Y$ by $Y - \tilde Y$ shows that it suffices to prove $\nabla_X Y|p = 0$ if $Y$ vanishes on a neighborhood of $p$. Suppose $Y$ is a smooth section of $E$ that is identically zero on a neighborhood $U$ of $p$. Choose a bump function $\varphi \in C^\infty(M)$ with support in $U$ such that $\varphi(p) = 1$. The hypothesis that $Y$ vanishes on $U$ implies that $\varphi Y \equiv 0$ on all of $M$, so for every $X \in \mathfrak X(M)$, we have $\nabla_X(\varphi Y) = \nabla_X (0 \cdot \varphi Y) = 0$. Thus, the product rule gives $$0 = \nabla_X (\varphi Y) = (X \varphi) Y + \varphi(\nabla_X Y).$$ Now, $Y \equiv 0$ on the support of $\varphi$, so the first term on RHS is identically 0. Evaluating equation above at $p$ shows that $\nabla_X Y |_p = 0$. The argument for $X$ is similar. $\blacksquare$

In the proof above, what's the point of taking a bump function? If I know that $Y \equiv 0$ on $U$, which is a neighborhood of $p$, then isn't $\nabla_X Y|_p = 0$ for $p \in U$ trivially true?

If the variance of a random variable $X$ exists, show $E(X^2)\geq [E(X)]^2$.

Posted: 13 Nov 2021 03:35 PM PST

Is the following proof valid?


Let's try a proof by contradiction.

Suppose $E(X^2)<[E(X)]^2$. Since $[E(X)]^2\geq 0$, then $E(X^2)<0$.

But in the discrete case, if $p(x)$ is the p.m.f. of $X$ and $\mathscr{A}$ is the space of $X$, then $$ E(X^2)=\sum_{x\in \mathscr{A}}x^2p(x) $$ We know for all $x\in \mathscr{A}$ that $p(x)\geq 0$ and $x^2\geq 0$, and thus $x^2p(x)\geq 0$. We must have then $$ E(X^2)=\sum_{x\in \mathscr{A}}x^2p(x)\geq 0 $$ which contradicts the conclusion that $E(X^2)<0$. It must be true, then, that $E(X^2)\geq [E(X)]^2$.

A similar proof can be given for the continuous case. Did I prove this correctly?
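
A side note, not from the original post: the step from $E(X^2)<[E(X)]^2$ and $[E(X)]^2\ge 0$ to $E(X^2)<0$ does not follow in general, so it may be worth comparing with the direct route $$0\le E\bigl[(X-E(X))^2\bigr]=E(X^2)-2E(X)\,E(X)+[E(X)]^2=E(X^2)-[E(X)]^2,$$ which gives $E(X^2)\ge[E(X)]^2$ in one line whenever the variance exists.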

Lie derivative different cases

Posted: 13 Nov 2021 03:36 PM PST

Consider an arbitrary $(2,0)$-tensor field $T^{ij}$, and let $X$ be some vector field. Furthermore, consider the Lie derivative $L_XT^{ij}$. In some cases, the Lie derivative is defined by $$(L_XT)^{ij}=X^l(D_lT^{ij})-(D_lX^i)T^{lj}-(D_lX^j)T^{il} ,$$ where $D_l$ is the covariant derivative; and in other cases the Lie derivative is defined by $$(L_XT)^{ij}=X^l (\partial_l T^{ij})-(\partial_lX^i)T^{lj}-(\partial_lX^j)T^{il} .$$ Why are there different definitions of the Lie derivative, and how can I distinguish between the two cases?
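
A worked check, not from the original post: when $D$ is torsion-free ($\Gamma^k_{lm}=\Gamma^k_{ml}$), the two expressions agree, because their difference is $$\bigl(X^l\Gamma^i_{lm}T^{mj}-\Gamma^i_{lm}X^mT^{lj}\bigr)+\bigl(X^l\Gamma^j_{lm}T^{im}-\Gamma^j_{lm}X^mT^{il}\bigr),$$ and each bracket vanishes after relabelling the dummy indices $l\leftrightarrow m$ in its second term and using the symmetry of $\Gamma$. So the partial-derivative expression can be taken as the definition (it needs no connection at all), and the covariant-derivative form is an equivalent rewriting whenever a torsion-free (e.g. Levi-Civita) connection is available.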

For what $\sigma , \tau \in \mathbb{C}$ is $f_{\sigma}f_{\tau}=f_{\sigma + \tau}$, and why?

Posted: 13 Nov 2021 03:55 PM PST

For $\sigma \in \mathbb{C}$, define

\begin{align*} f_{\sigma}:\mathbb{C} \setminus (-\infty, 0] \rightarrow \mathbb{C}, \quad z\mapsto e^{\sigma \ln(z)}. \end{align*}

For what $\sigma , \tau \in \mathbb{C}$ is $f_{\sigma}f_{\tau}=f_{\sigma + \tau}$, and why?

I think it means: when is $z^{\tau}z^{\sigma}=z^{\tau+\sigma}$? Is that right?
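
A possible observation, not from the original post and assuming $f_{\sigma}f_{\tau}$ denotes the pointwise product: for every $z\in\mathbb{C}\setminus(-\infty,0]$, $$f_{\sigma}(z)f_{\tau}(z)=e^{\sigma\ln z}\,e^{\tau\ln z}=e^{(\sigma+\tau)\ln z}=f_{\sigma+\tau}(z),$$ because $e^{w_1}e^{w_2}=e^{w_1+w_2}$ holds for all complex $w_1,w_2$; with that reading the identity holds for all $\sigma,\tau$. The situation is different if the composition $f_{\sigma}\circ f_{\tau}$ is meant, since $\ln\bigl(e^{\tau\ln z}\bigr)$ need not equal $\tau\ln z$ for the principal branch.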

Prove $\vdash\forall x(\alpha\rightarrow\beta)\rightarrow(\alpha\rightarrow\forall x\beta)$ in Enderton's system

Posted: 13 Nov 2021 03:26 PM PST

$\vdash\forall x(\alpha\rightarrow\beta)\rightarrow(\alpha\rightarrow\forall x\beta)$

According to Enderton p.121, it suffices to show that:

$\{\forall x(\alpha\rightarrow\beta),\alpha\}\vdash\beta$

For then, Enderton suggests, we can derive the former via generalization and deduction theorems.

But I don't understand this. On the one hand, from the deduction theorem we can derive:

$\{\forall x(\alpha\rightarrow\beta)\}\vdash\alpha\rightarrow\beta$

But an application of the generalization theorem to the RHS of this won't yield our target theorem.

On the other hand, we cannot apply the generalization theorem to $\{\forall x(\alpha\rightarrow\beta),\alpha\}\vdash\beta$ to get:

$\{\forall x(\alpha\rightarrow\beta),\alpha\}\vdash\forall x\beta$

Because we don't know if $x$ occurs free in $\alpha$.

Any and all help is much appreciated!
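
A sketch of the intended route, not from the original post and assuming the usual side condition that $x$ does not occur free in $\alpha$ (without it the formula need not be valid): starting from $\{\forall x(\alpha\rightarrow\beta),\alpha\}\vdash\beta$, the generalization theorem does apply, because $x$ is not free in $\forall x(\alpha\rightarrow\beta)$ (it is bound there) and not free in $\alpha$ (by the side condition), giving $$\{\forall x(\alpha\rightarrow\beta),\alpha\}\vdash\forall x\beta;$$ then one application of the deduction theorem yields $\{\forall x(\alpha\rightarrow\beta)\}\vdash\alpha\rightarrow\forall x\beta$ and a second yields $\vdash\forall x(\alpha\rightarrow\beta)\rightarrow(\alpha\rightarrow\forall x\beta)$.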

Hartshorne Exercise III.2.7(a): sheaf cohomology of constant sheaf $\Bbb Z$ on $S^1$ in the usual topology

Posted: 13 Nov 2021 03:55 PM PST

I am trying to solve this exercise:

Let $S^1$ be the circle (with its usual topology), and let $\mathbb Z$ be the constant sheaf $\mathbb Z$.

(a) Show that $H^1(S^1,\mathbb Z)\simeq \mathbb Z$, using our definition of cohomology.

I have tried to construct an injective resolution: as in Proposition 2.2, let $I^0=\prod_{x\in S^1}j_*(\mathbb Q)$. But then I don't know how to calculate its stalk, so I have difficulties building $I^1$... If I just use its discontinuous sections to build a flasque resolution, I also can't calculate the stalk...

Could you provide some help or give a complete answer? Using Cech cohomology is also accepted. Thanks!
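
Since Čech cohomology is acceptable, here is a sketch, not from the original post: cover $S^1$ by two open arcs $U,V$ whose intersection $U\cap V$ has two connected components. For the constant sheaf $\mathbb Z$, the Čech complex of this cover is $$0\to \mathbb Z(U)\oplus\mathbb Z(V)\cong\mathbb Z^2 \xrightarrow{\;(a,b)\,\mapsto\,(b-a,\;b-a)\;} \mathbb Z(U\cap V)\cong\mathbb Z^2\to 0,$$ whose kernel is the diagonal $\cong\mathbb Z$ and whose cokernel is $\mathbb Z^2/\{(c,c)\}\cong\mathbb Z$, giving $\check H^0=\mathbb Z$ and $\check H^1\cong\mathbb Z$. One still has to argue that this Čech computation agrees with the derived-functor cohomology of the definition for this particular cover.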

Trigonometric terms for floor function $Q_k(n)$

Posted: 13 Nov 2021 04:01 PM PST

I'm working on some problems in number theory and somehow I managed to find a more general formula for some problems. However, I needed to define a function $$Q_k(n) = \left\lfloor \frac{n}{k} \right\rfloor$$ (as I'm interested in the quotient of $n/k$, I named it $Q$).

$\phi(n, k) = F(Q_k(n),Q_{k-1}(n), \ldots, Q_{2}(n), k)$. Here, $F$ is another complicated summation function which can be simplified (using simple algebra).

  • My difficulty:

I wanted a trigonometric expression for $Q_k(n)$

Ex. I could only find $$Q_2(n) = \frac n{2} - \frac {\left[1-(-1)^n\right]}{4} = \color{green}{\frac n{2} - \frac {1-\cos(n\pi)}{4}} = \frac n{2} - \frac {\sin^2{(\frac {n\pi}{2})}}{2}$$

Similarly, I'm looking for $Q_3(n), Q_4(n) \text{, ..... }Q_k(n)$

If possible, the green form is the one I want (with no powers of trigonometric functions, so that later I can perform the summation).
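
A possible construction, not from the original post: since $Q_k(n)=\frac{n-(n\bmod k)}{k}$ and $n\bmod k$ is periodic in $n$ with period $k$, one can write $n\bmod k$ as a finite trigonometric interpolation through the values $0,1,\ldots,k-1$ and then divide by $k$. For $k=3$, interpolating gives $n\bmod 3=1-\cos\frac{2\pi n}{3}-\frac{1}{\sqrt3}\sin\frac{2\pi n}{3}$, hence $$Q_3(n)=\frac n3-\frac13+\frac13\cos\frac{2\pi n}{3}+\frac{1}{3\sqrt3}\sin\frac{2\pi n}{3},$$ which, like the green $Q_2$ formula, uses no powers of trigonometric functions. The same interpolation (a discrete Fourier expansion in the frequencies $\frac{2\pi j n}{k}$, $j=1,\ldots,k-1$) works for every $k$, at the cost of more terms.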

vector rotations in $\{1,i,j\}$ space

Posted: 13 Nov 2021 03:55 PM PST

I was wondering why Hamilton chose the purely imaginary quaternions as his 3D vectors (calling them the "vector quaternions"). Out of curiosity, I tried figuring out how to rotate vectors in a $\{1,i,j\}$ vector space instead. Eventually I noticed the following:

In $\{1,i,j\}$ space, for any two unit vectors $a$ and $b$ that are perpendicular to each other, if $b$ is on the $\{i,j\}$ plane then the following generalization of Euler's formula holds true:

$e^{\theta ab}a = ae^{\theta ba} = e^{\frac{\theta}{2} ab}ae^{\frac{\theta}{2} ba} = a\cos(\theta) + b\sin(\theta)$

This allows one to rotate the $\{a,b\}$ plane an angle $\theta$ around the origin. Note that if $a=1$, then it's just the regular Euler's formula with $b$ as the unit imaginary number. If $a\neq\pm{1}$, then $ab$ has a $k$ component. If $a$ is also on the $\{i,j\}$ plane, then the half-angle "sandwich" is exactly the same as for purely imaginary "vector quaternions" because $ab$ would in this case be $\pm{k}$ and $ba=\mp{k}$, and $e^{\theta k}$ and $e^{-\theta k}$ are conjugate.

Here's my question: Is it safe to say that you could take the axis of any $\{a,b\}$ plane and rotate an arbitrary vector $\vec{v}$ in $\{1,i,j\}$ space around the same axis via $e^{\frac{\theta}{2} ab}\vec{v}e^{\frac{\theta}{2} ba}$? I know it's true if $ab=\pm{k}$ or if $\vec{v}$ is on the $\{a,b\}$ plane, and everything else I've tried seems to check out, but I'd like to know if it's true in general.
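
One way to stress-test the claim numerically, not from the original post: the sketch below (plain Python/NumPy; the helper names qmul and qexp are mine, and quaternions are stored as arrays $[w,x,y,z]$) checks the stated Euler-type identity for random admissible $a$, $b$ and $\theta$. It is only an experiment, not a proof, and the general sandwich question for an arbitrary $\vec{v}$ would still need a separate argument.

    import numpy as np

    def qmul(p, q):
        # Hamilton product of quaternions [w, x, y, z]
        w1, x1, y1, z1 = p
        w2, x2, y2, z2 = q
        return np.array([
            w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2,
        ])

    def qexp(q):
        # exp of a quaternion: e^w (cos|v| + (v/|v|) sin|v|), v = vector part
        w, v = q[0], q[1:]
        nv = np.linalg.norm(v)
        if nv < 1e-15:
            return np.array([np.exp(w), 0.0, 0.0, 0.0])
        return np.exp(w) * np.concatenate([[np.cos(nv)], np.sin(nv) * v / nv])

    rng = np.random.default_rng(0)
    for _ in range(1000):
        # b: unit vector in the {i, j} plane;  a: unit vector in span{1, i, j} perpendicular to b
        b = np.array([0.0, *rng.normal(size=2), 0.0]); b /= np.linalg.norm(b)
        a = np.array([*rng.normal(size=3), 0.0])
        a -= np.dot(a, b) * b; a /= np.linalg.norm(a)
        theta = rng.uniform(0.0, 2.0 * np.pi)
        ab, ba = qmul(a, b), qmul(b, a)
        target = np.cos(theta) * a + np.sin(theta) * b
        lhs1 = qmul(qexp(theta * ab), a)
        lhs2 = qmul(a, qexp(theta * ba))
        lhs3 = qmul(qmul(qexp(theta / 2 * ab), a), qexp(theta / 2 * ba))
        assert np.allclose(lhs1, target) and np.allclose(lhs2, target) and np.allclose(lhs3, target)
    print("generalized Euler identity held on all random samples")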

Why does the specific shape of the hyperbola keep the same difference of d1 and d2 no matter the point? [closed]

Posted: 13 Nov 2021 04:01 PM PST

Why does the shape of a hyperbola always keep the difference d1 - d2 = 2a? I know it's the definition, but how does it work? The ellipse makes sense to me. d1+d2 goes farther at the vertex because it's a straight line, and goes the least distance at the co-vertex because the distances are at their greatest angle. But I don't understand why the hyperbola is shaped the way it is, why subtracting one distance from another and having it stay constant makes that shape. I'm learning about it in school and just want to completely understand.
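
A compact derivation, not from the original post, that may help connect the definition to the shape: with foci at $(\pm c,0)$ and the condition $$\sqrt{(x+c)^2+y^2}-\sqrt{(x-c)^2+y^2}=2a,$$ isolating one square root and squaring twice removes both radicals and leaves $(c^2-a^2)x^2-a^2y^2=a^2(c^2-a^2)$, i.e. $\frac{x^2}{a^2}-\frac{y^2}{b^2}=1$ with $b^2=c^2-a^2$. So a constant difference of distances forces exactly that curve, in the same way that a constant sum of distances forces the ellipse.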

Help with finding the equation of line tangent to the semi circle with the equation: $y=\sqrt{1-x^2}$

Posted: 13 Nov 2021 04:05 PM PST


Ok so I need to show that the equation of the tangent above to the semi-circle with the equation $y=\sqrt{1-x^2}$ is $y=-\frac{1}{\sqrt{3}}x + \frac{2}{\sqrt{3}}$.

What we know from the question:

The tangent intersects the $x$-axis at $(2,0)$, (assume that you do not know any other points of intersection)

What I tried to do so far:

Since it's a tangent, I decided to differentiate the semi-circle function using the chain rule, getting:

$\frac{dy}{dx}$ = $\frac{-x}{\sqrt{1-x^2}}$

Then using the equation of a line formula:

$y=mx+c$

--> $y=\frac{-x}{\sqrt{1-x^2}}x +c$

Substituting the point in:

-->$0=\frac{-2}{\sqrt{1-2^2}}(2) +c$

Now clearly something is wrong because I will get the square root of a negative, which does not make any sense. Need help from here.

*Note: I know that there are other ways to solve this, but I would prefer if calculus was used to solve this question.
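
A sketch of the calculus route, not from the original post: the slope formula $\frac{dy}{dx}=\frac{-x}{\sqrt{1-x^2}}$ has to be evaluated at the (unknown) point of tangency $\bigl(a,\sqrt{1-a^2}\bigr)$, not at $x=2$, which is only where the tangent meets the $x$-axis. Writing the tangent line through that point and forcing it through $(2,0)$ gives $$0-\sqrt{1-a^2}=\frac{-a}{\sqrt{1-a^2}}(2-a)\;\Longrightarrow\;-(1-a^2)=-2a+a^2\;\Longrightarrow\;a=\tfrac12,$$ so the slope is $\frac{-1/2}{\sqrt{3}/2}=-\frac{1}{\sqrt3}$, and passing $y=-\frac{1}{\sqrt3}x+c$ through $(2,0)$ gives $c=\frac{2}{\sqrt3}$, as required.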

Trouble applying Hahn-Banach theorem in exercise

Posted: 13 Nov 2021 03:23 PM PST

The exercise is the following: Let $X$ be a vector space over the real numbers and $p:X \to \mathbb{R}$ a sublinear functional. Show that if $x_0\in X$ then there exists a linear functional $F:X\to \mathbb{R}$ such that $F(x_0)=p(x_0)$ and $F(x) \leq p(x)$ for all $x\in X$.

I know this has the Hahn-Banach Theorem written all over it, and that $F(x_0)=p(x_0)$ holds if $x_0=0$. I just don't find a way to do it for $x_0\neq 0$. Is there any path to do that?

Edit: The Hahn Banach Theorem that I'm using states the following: Let $M_0$ be a subspace of a Vector Space $X$ over $\mathbb{R}$, $p: X \to \mathbb{R}$ a sublinear functional and $f_0$ a linear functional defined only on $M_0$. Then there exists a linear functional $F: X\to \mathbb{R}$ such that:

1)$F(x)=f_0(x)$, for all $x\in M_0$
2) $F(x)\leq p(x)$, for all $x\in X$
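
A possible path, not from the original post: apply the stated theorem with $M_0=\operatorname{span}\{x_0\}$ and $f_0(tx_0)=t\,p(x_0)$, which is linear on $M_0$ and satisfies $f_0(x_0)=p(x_0)$. It is dominated by $p$ on $M_0$: for $t\ge 0$, positive homogeneity gives $f_0(tx_0)=t\,p(x_0)=p(tx_0)$, while for $t=-s<0$, $$f_0(-sx_0)=-s\,p(x_0)\le s\,p(-x_0)=p(-sx_0),$$ using $0=p(0)\le p(sx_0)+p(-sx_0)$ and positive homogeneity again. The extension $F$ provided by the theorem then satisfies $F(x_0)=f_0(x_0)=p(x_0)$ and $F\le p$ on all of $X$.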

Set of set of pairings to cover all combinations but are disjoint for each subset

Posted: 13 Nov 2021 03:50 PM PST

I am not familiar with the nomenclature for these specific problems, so maybe I am banging my head against something very obvious, but I have been unable to find a constructive procedure or a general counterexample.

And let me apologize for a horrendous title. I could not come up with anything better. Suggestions are welcome.

I have $S = \lbrace 1, 2, \ldots n\rbrace$. I want to build the following:

$ T_1 \ldots T_k$

where

$T_i = \lbrace (a^{i}_{1}, b^{i}_{1}), \ldots, (a^{i}_{m}, b^{i}_{m})\rbrace$

$ a^{i}_{t} \neq b^{i}_{s} \quad \forall t, s$

$ a^{i}_{t} \neq a^{i}_{s} \quad \forall t \neq s$

$ b^{i}_{t} \neq b^{i}_{s} \quad \forall t \neq s$

satisfying the following:

$ \bigcup\limits_{i=1}^{k} T_i = \lbrace (a, b) \in S \times S \mid a < b \rbrace$

Optimally, $ m = \frac{n}{2}$ and also $T_i \cap T_j = \emptyset \quad \forall i \neq j$

Does that set of $T_i$ always exist? I was expecting to find an easy constructive way to find it, but I have been struggling. If what I call optimal is impossible in certain cases, what is the best (minimal number of $T_i$)?

Written in a less LaTeX-y fashion: I want to find a way to cover all combinations of elements of S in a way that for each "step" I have a set of pairs that are disjoint, and when the last step is done, all the combinations have been picked once. My motivation comes from an algorithmic parallelism point of view, but I think that the mathematical foundation is properly set up.


Edit: Some examples

An almost degenerate $n = 3$:

$T_1 = \lbrace(1,2)\rbrace$

$T_2 = \lbrace(2,3)\rbrace$

$T_3 = \lbrace(1,3)\rbrace$

An example of "optimal" for $n = 4$:

$T_1 = \lbrace(1, 2), (3, 4)\rbrace$

$T_2 = \lbrace(1, 4), (2, 3)\rbrace$

$T_3 = \lbrace(1, 3), (2, 4)\rbrace$
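
A possible connection, not from the original post: what is being asked for looks like a 1-factorization of the complete graph $K_n$, i.e. a round-robin tournament schedule, which exists with $k=n-1$ rounds of $n/2$ disjoint pairs whenever $n$ is even (for odd $n$, adding a dummy element that "sits out" each round gives $n$ rounds, as in the $n=3$ example above). A sketch of the classical circle method in Python (the function name is mine):

    def round_robin(n):
        """1-factorization of K_n via the circle method (n even):
        returns n-1 rounds, each n/2 disjoint pairs, covering every pair exactly once."""
        assert n % 2 == 0, "for odd n, add a dummy element and drop its pair in each round"
        players = list(range(1, n + 1))
        rounds = []
        for _ in range(n - 1):
            pairs = [tuple(sorted((players[i], players[n - 1 - i]))) for i in range(n // 2)]
            rounds.append(pairs)
            # keep players[0] fixed and rotate the rest by one position
            players = [players[0]] + [players[-1]] + players[1:-1]
        return rounds

    # For n = 4 this reproduces the "optimal" example above, up to the order of the rounds:
    # [[(1, 4), (2, 3)], [(1, 3), (2, 4)], [(1, 2), (3, 4)]]
    print(round_robin(4))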

Normal disjunctive and conjunctive form from a truth table

Posted: 13 Nov 2021 04:05 PM PST

Let's say that we get a table with zeros and ones. We need to get it into disjunctive normal form or conjunctive normal form. We also have discrete variables $x_1,\ldots,x_n$ that are either $1$ or $0$. How do you determine where to put a negation and where not to put it?

for instance: we have a row: $$p = 0, q= 1, r = 0, \quad \text{table row result = 1}$$

Should I write this as: $$...\vee(\neg p \wedge q \wedge \neg r) \vee ...$$ or $$...\wedge(\neg p \vee q \vee \neg r) \wedge ...$$

What is the correct way? And what if the table row result were zero?

Or the other way around with the negations? So my question is: how do we know where the negations go?
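
A summary of the usual recipe, not from the original post: for the DNF, take one conjunction (minterm) per row whose result is $1$, negating exactly the variables that are $0$ in that row, and join these minterms with $\vee$; for the CNF, take one disjunction (maxterm) per row whose result is $0$, negating exactly the variables that are $1$ in that row, and join these maxterms with $\wedge$. So the row $p=0$, $q=1$, $r=0$ with result $1$ contributes the DNF term $(\neg p \wedge q \wedge \neg r)$ and nothing to the CNF; if that same row instead had result $0$, it would contribute the CNF clause $(p \vee \neg q \vee r)$ and nothing to the DNF.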

Prove that $\int_1^2 g(x^3 - 3x)\,\mathrm dx=\int_0^1 g(x^3 - 3x)\,\mathrm dx$

Posted: 13 Nov 2021 03:21 PM PST

Let $g:[-2, 2]\to\mathbb{R}$ be an even continuous function. Prove that $$\int_1^2g(x^3-3x)\,\mathrm dx=\int_0^1g(x^3-3x)\,\mathrm dx.$$

I found the following solution online, but I'm not sure why $$\int_0^1f(t)\,\mathrm d[x(t)]=\int_0^1 g(x^3-3x)\,\mathrm dx.$$ Also, why does $$\int_0^1g(t)\,\mathrm d[x(t)]=\int_{t=-1}^0 g(t)\,\mathrm d[y(t)]+\int_{t=-1}^0 g(t)\,\mathrm d[z(t)]\,?$$ Is it because $x(t) = -z(t)-y(t)$ and does $$\int_a^b\,\mathrm d(x(t))=-\int_a^b\,\mathrm d[y(t)]-\int_a^b\,\mathrm d[z(t)]\,?$$

For each $t\in [-1,0]$, the equation $x^3-3x=t$ has three roots $x(t)\in[0,1]$, $y(t)\in[1,\sqrt3]$, $z(t)\in [-2,-\sqrt3]$.

By Vieta's Theorem, $x(t)+y(t)+z(t)=0$ $\forall$ $t$. Also, $x^3-3x$ is differentiable on $\mathbb{R},$ increasing in $(-2,-\sqrt{3}), $ decreasing in $[0,1],$ and increasing in $[1,\sqrt{3}],$ so by the inverse function theorem, $x,y,z$ are all differentiable for $t\in (-1,0).$ We then have

\begin{align*} \int_0^1 g(x^3-3x)\,\mathrm dx&=\int_0^{-1}g(t)\,\mathrm d[x(t)]\\ &= \int_{-1}^0 g(t)\,\mathrm d[y(t)]+\int_{-1}^0g(t)\,\mathrm d[z(t)]\\ &=\int_1^{\sqrt3} g(y^3 -3y)\,\mathrm dy+\int_{-2}^{-\sqrt3}g(z^3- 3z)\,\mathrm dz\\ &=\int_1^{\sqrt3} g(y^3-3y)\,\mathrm dy + \int_{\sqrt3}^2g(z^3 - 3z)\,\mathrm dz&\text{(as g is even)}\\ &=\int_1^2 g(x^3-3x)\,\mathrm dx \end{align*}
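
Two clarifying remarks, not from the original post. The notation $\mathrm d[x(t)]$ just means $x'(t)\,\mathrm dt$, so $\int g(t)\,\mathrm d[x(t)]$ is the substitution $t=x^3-3x$, run in reverse along the branch $x(t)$, applied to $\int g(x^3-3x)\,\mathrm dx$. And since $x(t)+y(t)+z(t)=0$ identically in $t$, differentiating gives $x'(t)=-y'(t)-z'(t)$, hence $$\int g(t)\,\mathrm d[x(t)]=\int g(t)\,x'(t)\,\mathrm dt=-\int g(t)\,y'(t)\,\mathrm dt-\int g(t)\,z'(t)\,\mathrm dt,$$ which, after the sign change from swapping the limits of the first integral, is exactly the second equality being asked about.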

Antiderivative to $\int\frac{1}{(\cos x+\sin x)^2} \ dx$

Posted: 13 Nov 2021 03:52 PM PST

I have tried the following:

$\int\frac{1}{(\cos x+\sin x)^2} \ dx \ = \int \frac{\sec^2x}{(\tan x+1)^2}\ dx \ $.

After using the substitution $t=\tan x$, I got the solution: $- \frac{1}{\tan x+1} + C$.

Wolfram alpha gives the solution: $\frac{\sin x}{\sin x+\cos x}$.

At the same time $\frac{\sin x}{\sin x+\cos x} \neq - \frac{1}{\tan x+1}$.

So I'm a bit confused.
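
A reconciling note, not from the original post: both answers are valid antiderivatives, because they differ by a constant: $$\frac{\sin x}{\sin x+\cos x}-\Bigl(-\frac{1}{\tan x+1}\Bigr)=\frac{\sin x}{\sin x+\cos x}+\frac{\cos x}{\sin x+\cos x}=1,$$ and an indefinite integral is only determined up to an additive constant, so Wolfram Alpha has simply absorbed a different constant.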

Energy method for one dimensional wave equation with Robin boundary condition

Posted: 13 Nov 2021 03:25 PM PST

Show that the initial-boundary value problem

\begin{align} & {{u}_{tt}}={{u}_{xx}}\text{ }(x,t)\in \left( 0,l \right)\times \left( 0,T \right),\text{ }T,l>0 \\ & u\left( x,0 \right)=0,\text{ }x\in \left[ 0,l \right] \\ & {{u}_{x}}\left( 0,t \right)-u\left( 0,t \right)=0,\text{ }{{u}_{x}}\left( l,t \right)+u\left( l,t \right)=0,\text{ }t\in \left[ 0,T \right]\\ \end{align}

has zero solution only.

My attempt 2:

Previously I tried separation of variables but got stuck at the end. Inspired by BCLC, I am trying the energy method this time.

Set

$$E\left( t \right)=\frac{1}{2}\int_{0}^{L}{\left( u_{x}^{2}\left( x,t \right)+u_{t}^{2}\left( x,t \right) \right)dx}.$$

The equation ${{u}_{tt}}={{u}_{xx}}$ and the Robin b.c. gives

$$\begin{align} & \frac{dE}{dt}=\int_{0}^{L}{\left( {{u}_{x}}{{u}_{xt}}+{{u}_{t}}{{u}_{tt}} \right)dx} \\ & \text{ }=\int_{0}^{L}{\left( -{{u}_{t}}{{u}_{xx}}+{{u}_{t}}{{u}_{tt}} \right)dx}+\left. {{u}_{t}}{{u}_{x}} \right|_{0}^{L} \\ & \text{ }={{u}_{t}}\left( l,t \right){{u}_{x}}\left( l,t \right)-{{u}_{t}}\left( 0,t \right){{u}_{x}}\left( 0,t \right) \\ & \text{ }=-{{u}_{t}}\left( l,t \right)u\left( l,t \right)-{{u}_{t}}\left( 0,t \right)u\left( 0,t \right)\le 0\text{ }\left( ? \right) \\ \end{align}$$

Therefore, $E\left( t \right)\le E\left( 0 \right)$ for all $t\ge 0$. Since $E\left( t \right)\ge 0$ and $E\left( 0 \right)=0$ $(?)$, we obtain $E\left( t \right)=0$ for all $t\ge 0$, thus $E\equiv 0$ and hence $u\equiv 0$.

Is the proof correct?
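
A side note, not from the original post, on the two question marks: with the given Robin signs, $u_x(l,t)=-u(l,t)$ and $u_x(0,t)=u(0,t)$, so the boundary terms give $$\frac{dE}{dt}=-u_t(l,t)\,u(l,t)-u_t(0,t)\,u(0,t)=-\frac12\frac{d}{dt}\bigl[u(l,t)^2+u(0,t)^2\bigr],$$ which is not obviously $\le 0$ on its own; instead, the modified energy $\tilde E(t)=E(t)+\tfrac12\bigl[u(0,t)^2+u(l,t)^2\bigr]$ is conserved. If, as seems intended, the initial data also satisfy $u_t(x,0)=0$, then $\tilde E(0)=0$, hence $\tilde E\equiv 0$, forcing $u_x\equiv u_t\equiv 0$ and therefore $u\equiv 0$ by the initial condition.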

Find bounding box dimensions around rotated object

Posted: 13 Nov 2021 04:02 PM PST

Consider the following rectangle with dimensions 320 by 130.

After rotating the rectangle 10 degrees clockwise from the center (x: 160, y: 65), it looks like this.


My question is: How do I determine the bounding box dimensions?

I'm talking about the dimensions of the axis-aligned box needed to surround the rotated rectangle.

The answer is 346 by 232 but I only found that out because of the program I am using to make this image.

I've also done an example with programming such as:

rotate(10)
width = x1 - x3 + x2
height = y2 - y1 + y4

But, I'd like to solve this without programming. Where should I start with this?
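
A sketch of the usual approach, not from the original post: rotate the four corners about the centre and take the extent of the results; for an exact $w\times h$ rectangle this collapses to the closed form $w\lvert\cos\theta\rvert+h\lvert\sin\theta\rvert$ by $w\lvert\sin\theta\rvert+h\lvert\cos\theta\rvert$, so no programming is needed in the end. (For $320\times 130$ at $10^\circ$ this gives roughly $338\times 184$; the quoted $346\times 232$ happens to match a $20^\circ$ rotation, so it may be worth checking what angle the tool really applied, or whether it is bounding more than just the rectangle.) A small Python check of the corner method:

    import math

    def rotated_bbox(w, h, degrees):
        """Axis-aligned bounding box of a w-by-h rectangle rotated about its centre."""
        theta = math.radians(degrees)
        cos_t, sin_t = math.cos(theta), math.sin(theta)
        # corners relative to the centre, then rotated
        corners = [(-w/2, -h/2), (w/2, -h/2), (w/2, h/2), (-w/2, h/2)]
        rotated = [(x*cos_t - y*sin_t, x*sin_t + y*cos_t) for x, y in corners]
        xs, ys = [p[0] for p in rotated], [p[1] for p in rotated]
        return max(xs) - min(xs), max(ys) - min(ys)

    print(rotated_bbox(320, 130, 10))   # about (337.7, 183.6)
    print(rotated_bbox(320, 130, 20))   # about (345.2, 231.6)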

Best way to plot a 4 dimensional meshgrid

Posted: 13 Nov 2021 04:06 PM PST

I have $4$ variables $X$, $Y$, $Z$ and $C$, and I want to plot these on a graph. Usually I would just plot the surface $X$, $Y$, $Z$ and then use color to represent the $4$th dimension, as shown below:

4 dimensional plot 1

However, my $X$, $Y$, and $Z$ co-ordinates make up a $3$-dimensional meshgrid, so when I do the $4$ dimensional plot it is hard to see what is going on, as shown below:

4 dimensional plot 2

$X$, $Y$ and $Z$ represent spatial dimensions and $C$ represents a value that depends on its place in $3$-dimensional space. I need $X$, $Y$, and $Z$ to be shown in all places because these are the independent variables. In this simplified version of my function, $C=X+Y+Z$. I want to be able to pick any $3$ numbers for $X$, $Y$, and $Z$, and then look at my graph, and be able to get a good idea of what $C$ is. You can sort of do this with this current graph but it is hard to use.

What I want to know is: Is there a better way to plot this information? For example, is there a different co-ordinate system I could use that would be better? Or is there a way I could represent the 3 spatial dimensions so they look like a curved surface, but still include every point?

To reiterate that last question: Is there a way to represent every point in $3$ dimensions within $0 \leq X,Y,Z \leq 10$, all on one surface?

Thanks!
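
A sketch of two common options, not from the original post, using Python/matplotlib (the variable names are illustrative): make the full 3-D scatter translucent so the interior is visible, and complement it with 2-D slices at fixed $Z$ (or $X$, or $Y$), which are much easier to read exact values from; isosurfaces of constant $C$ are a further option in tools that support them.

    import numpy as np
    import matplotlib.pyplot as plt

    pts = np.linspace(0, 10, 11)
    X, Y, Z = np.meshgrid(pts, pts, pts)
    C = X + Y + Z                        # the simplified example from the question

    fig = plt.figure(figsize=(10, 4))

    # Option 1: translucent 3-D scatter coloured by C
    ax = fig.add_subplot(1, 2, 1, projection="3d")
    sc = ax.scatter(X.ravel(), Y.ravel(), Z.ravel(), c=C.ravel(),
                    cmap="viridis", alpha=0.3, s=15)
    fig.colorbar(sc, ax=ax, label="C")

    # Option 2: a 2-D slice at a fixed Z value, easy to read C(X, Y) from
    k = 5                                # index of the Z-slice
    ax2 = fig.add_subplot(1, 2, 2)
    im = ax2.pcolormesh(X[:, :, k], Y[:, :, k], C[:, :, k], cmap="viridis")
    ax2.set_title(f"C on the slice Z = {pts[k]:.1f}")
    ax2.set_xlabel("X"); ax2.set_ylabel("Y")
    fig.colorbar(im, ax=ax2, label="C")

    plt.tight_layout()
    plt.show()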
