Friday, December 3, 2021

Recent Questions - Mathematics Stack Exchange


Smoothness of parametric curves with vanishing derivatives.

Posted: 03 Dec 2021 04:32 AM PST

Let $c$ be a curve given by $x=f(t)$ and $y=g(t)$ with $t\in \mathbb{R}$. Let's assume $f$ and $g$ are $C^1$. I am trying to learn how to analyze the smoothness of $c$ at points where $f'(t_0) = 0$ and $g'(t_0) = 0$.

For example, if we have $x=t^3$ and $y=t^2$, then $t_0=0$. In this case, $\dfrac{dx}{dy} = \dfrac{f'(t)}{g'(t)}=\dfrac{3t^2}{2t}=\dfrac{3t}{2}$, which goes to $0$ as $t\to 0$. My notes say the curve is not smooth because $dy/dt$ changes sign at $t=0$, and I can't see precisely why.

Here is another example: if $x=t \sin t$ and $y=t^3$, then $f'(t) = \sin t + t \cos t$ and $g'(t) = 3t^2$. Both vanish at $t=0$. Now we have $$\lim _{t \rightarrow 0} \frac{d y}{d x}=\lim _{t \rightarrow 0} \frac{3 t^{2}}{\sin t+t \cos t}=\lim _{t \rightarrow 0} \frac{6 t}{2 \cos t-t \sin t}=0$$

but again, since $dx/dt$ changes sign at $t=0$, the curve is not smooth at $t=0$, and I still don't see the relation.

Also, I noticed that when my notes compute $dx/dy$, they check whether $y'(t)$ changes sign, and when they compute $dy/dx$, they check whether $x'(t)$ changes sign.

What I imagine is this: suppose we are computing the derivative with respect to $y$, i.e. $dx/dy$. Then we study $dy/dt$ to see if it suddenly changes sign. That would imply $dx/dy$ is not continuous where $dy/dt$ changed sign, since we are differentiating with respect to $y$. But I am failing to make this precise in mathematical terms, and I am not at all sure it is correct.
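One concrete way to see what the sign change does, sketched numerically for the first example $x=t^3$, $y=t^2$ (the function names here are mine): the unit tangent vector $(f'(t),g'(t))/\lVert (f'(t),g'(t))\rVert$ flips direction as $t$ passes through $0$, which is exactly the failure of smoothness (a cusp), even though the ratio $f'/g' = 3t/2$ stays continuous.

```python
import math

def unit_tangent(t):
    # velocity of the curve (t^3, t^2): (f'(t), g'(t)) = (3t^2, 2t)
    vx, vy = 3 * t * t, 2 * t
    n = math.hypot(vx, vy)
    return (vx / n, vy / n)

# just before t = 0 the curve moves almost straight down;
# just after, almost straight up: the tangent direction jumps
before = unit_tangent(-1e-6)   # close to (0, -1)
after = unit_tangent(1e-6)     # close to (0, 1)
```

The jump in the unit tangent is driven by the sign change of $g'(t)=2t$ while $f'(t)=3t^2$ stays nonnegative.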

If $f$ is continuous on $[a,b]$ and for any $x_1 \lt x_2 \in [a,b]: f(x_1) \neq f(x_2)$, then $f$ is strictly increasing or decreasing on $[a,b]$.

Posted: 03 Dec 2021 04:32 AM PST

I want to prove the following visually intuitive claim:

If $f$ is continuous on $[a,b]$ and for any $x_1 \lt x_2 \in [a,b]: f(x_1) \neq f(x_2)$, then $f$ is strictly increasing or decreasing on $[a,b]$.

There is a logical conclusion I made marked by a $\color{red}{(\dagger)}$ that I think is correct, but I just wanted to make sure.


Suppose $x_1$, $x_2$, and $x_3$ are three arbitrary elements of $[a,b]$ such that $x_1 \lt x_2 \lt x_3$. To avoid contradicting the assumption, we must have either $f(x_1) \lt f(x_2)$ or $f(x_1) \gt f(x_2)$. WLOG, assume the first.

Similarly, to avoid contradicting the assumption, we must have either $f(x_2) \lt f(x_3)$ or $f(x_2) \gt f(x_3)$. Assume the latter: we will show that this leads to a contradiction.

Finally, either $f(x_1) \lt f(x_3)$ or $f(x_1) \gt f(x_3)$. WLOG assume the former.

So we have the following:

$f(x_1) \lt f(x_2)$, $f(x_2) \gt f(x_3)$, and $f(x_1) \lt f(x_3)$.

Note, then, the following inequality: $f(x_1) \lt f(x_3) \lt f(x_3)+\frac{|f(x_2)-f(x_3)|}{2} \lt f(x_2)$

Because $f$ is continuous, by the Intermediate Value Theorem there exists an $x' \in [x_2,x_3]$ such that $f(x')=f(x_3)+\frac{|f(x_2)-f(x_3)|}{2}$. Similarly, there is an $x^* \in [x_1,x_2]$ such that $f(x^*)= f(x_3)+\frac{|f(x_2)-f(x_3)|}{2}$. Because $x_1 \lt x_2 \lt x_3$, clearly $x' \neq x^*$ (neither can equal $x_2$, since $f(x_2)$ exceeds this common value), but $x',x^* \in [a,b]$ and $f(x')=f(x^*)$. A contradiction.

In conclusion, for any $x_1 \lt x_2 \lt x_3 \in [a,b]$, we have that $f(x_1) \lt f(x_2) \lt f(x_3)$ or $f(x_1) \gt f(x_2) \gt f(x_3)$.

From this, we can derive that for any $x_1 \lt x_2 \in [a,b]$, we have $f(x_1) \lt f(x_2)$ OR $f(x_1) \gt f(x_2)$ $\color{red}{(\dagger)}$. This is equivalent to saying that $f$ is strictly increasing or $f$ is strictly decreasing on $[a,b]$ $\quad \square$.

How to determine the sets-problem

Posted: 03 Dec 2021 04:30 AM PST

Let $M$ be the set of functions from $\{0,1,2\}$ to $\{0,1,2\}$. For example, $h$ defined by [image] is an element of $M$. We define the partial order [image] on $M$ via [image] if and only if

[image] for all $i = 0,1,2$.

a) Is there any element in $M$ that is larger than all the others?

b) Determine all $g \in M$ that satisfy [image], where $h$ is the example above.

Can anybody who understands this help? It is from my university exam; the problem is that I don't understand it properly and don't know how to think about it.

Find derivative of rotation

Posted: 03 Dec 2021 04:16 AM PST

I have an object with a certain rotation and angular velocity $(\alpha, \beta, \gamma)$ in 3D space, using extrinsic Euler rotations (x–y–z). I also have angular accelerations in the global coordinate system.

When I numerically integrate this in Python, using the solve_ivp() function, the angular accelerations are interpreted as local angular accelerations. This results, for example, in exactly the wrong orientation when one of the axes is rotated 180 deg.

My solution would be to convert all rotations to quaternions, multiply the derivative by dt, and 'quaternion add' this to the original angular velocity to get the new angular velocity.

There is, however, one problem with this: I do not know the time step, as it varies, and the function requires me to return the derivatives with respect to time.

So the question is: how can I determine the local angular accelerations when I have the global rotation?
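A minimal pure-Python sketch of the conversion the question is after, assuming extrinsic x–y–z Euler angles (so the world-frame matrix is $R = R_z(\gamma)R_y(\beta)R_x(\alpha)$): a global angular vector is mapped into the body frame by $R^{\mathsf T}$. The function names and the 180° test case are mine, not from the original setup.

```python
import math

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(b):
    c, s = math.cos(b), math.sin(b)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(g):
    c, s = math.cos(g), math.sin(g)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(A, v):
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

def global_to_local(alpha, beta, gamma, w_global):
    # extrinsic x-y-z: R = Rz(gamma) @ Ry(beta) @ Rx(alpha)
    R = matmul(rot_z(gamma), matmul(rot_y(beta), rot_x(alpha)))
    # R maps body coordinates to world coordinates; its transpose
    # (= inverse, since R is orthogonal) maps a world vector to the body frame
    Rt = [[R[j][i] for j in range(3)] for i in range(3)]
    return matvec(Rt, w_global)

# body rotated 180 deg about z: a global x-vector becomes -x in the body frame,
# matching the "exactly wrong orientation" symptom described above
w_local = global_to_local(0.0, 0.0, math.pi, [1.0, 0.0, 0.0])
```

The same transpose applies to angular accelerations, since they transform as vectors between the two frames.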

Every incomplete inner product space has a maximal but incomplete orthonormal system

Posted: 03 Dec 2021 04:14 AM PST

A functional analysis text book contained the following exercise.

Let $V$ be an incomplete inner product space. Show there exists a maximal orthonormal system that is not complete. Hint: Use a closed hyperspace $F$ which has trivial orthogonal complement.

Here maximal refers to the inability to add more elements and complete refers to the linear span being dense.

I managed to prove the hint as follows.

Let $H$ be a completion of $V$, so $V$ is a dense subspace of $H$, which is complete. Let $x\in H\setminus V$. Then $x^{\perp\perp}=\langle x\rangle$ in $H$, so if we let $F=x^\perp\cap V$, then since $F$ is dense in $x^\perp$ we find $F$ has trivial orthogonal complement in $V$.

I tried to use this to solve the exercise as follows.

If $F$ has a complete orthonormal system, then it is maximal in $V$, as the orthogonal complement of $F$ is trivial. Since $x$ can be approximated in $V$ but not in $F$, it cannot be a complete orthonormal system in $V$.

So I have two questions. The main question is how do I finish the proof? What if $F$ does not have a complete orthonormal system? I found this which shows that separable inner product spaces always have a complete orthonormal system, but inseparable inner product spaces may not. So at least I solved the exercise for separable inner product spaces, but the inseparable case remains unsolved.

The other question I have is less important, but the book has not mentioned the completion of normed or inner product spaces, so my solution for the hint feels like cheating. I wonder if someone can come up with a solution that does not require the use of a completion.

Determine the points on the line formed by $([1]_5, [2]_5)$ and find out how many lines there are in $\Bbb Z_5^2$.

Posted: 03 Dec 2021 04:11 AM PST

Denote by $\Bbb Z_5^2$ the direct product $\Bbb Z_5 \times \Bbb Z_5$. This is a vector space with coefficients from $\Bbb Z_5$. Determine the points on the line formed by $([1]_5, [2]_5)$ and find out how many lines there are in $\Bbb Z_5^2$.

The points on the line spanned by $([1]_5, [2]_5)$ seem to be of the form $$r(t)=([1]_5, [2]_5)+([1]_5, [2]_5)t$$ where $t \in \Bbb Z_5$. How can I start to figure out how many lines there are in total? It seems that I have to figure out how many different points $([x]_5,[y]_5)$ I can consider. Or can I just keep shifting the line formed by $([1]_5,[2]_5)$ until I get back to it?
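A brute-force check of the count, assuming "line" means a one-dimensional subspace (a line through the origin); this is my reading of the problem, and the course may also intend affine lines:

```python
p = 5
nonzero = [(x, y) for x in range(p) for y in range(p) if (x, y) != (0, 0)]

# each line through the origin is the set of scalar multiples of some vector
lines = {frozenset(((t * x) % p, (t * y) % p) for t in range(p))
         for (x, y) in nonzero}

count = len(lines)
```

Each of the $24$ nonzero vectors spans a line containing $4$ nonzero vectors, so there are $24/4 = 6$ lines through the origin; if affine lines (translates) are also counted, each of the $6$ directions has $5$ parallel lines, giving $30$.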

Simple proof about prime filters.

Posted: 03 Dec 2021 04:24 AM PST

Exercise. Let $F$ be a proper filter on a set $X$. Show that $F$ is an ultrafilter ($\Leftrightarrow$ prime) iff whenever a subset of $X$ intersects every set of $F$, that subset is in $F$.

UPDATE.

My attempt. $(\Rightarrow)$ Suppose $F$ is an ultrafilter. Then, we have that $A,B \in F \Rightarrow A \cap B \in F$ and $A \in F, A \subseteq B \Rightarrow B \in F$ by definition of a filter. Let $S \subset X$ s.t. $S \cap A \neq \emptyset$, $\forall A \in F$. This guarantees that there is another filter $F'$ s.t. $F \subseteq F'$ and $S \in F'$, but $F$ is maximal and thus $F=F'$, meaning $S \in F$, proving what's wanted.

$(\Leftarrow)$ Suppose now that the second condition is verified for every subset $S$ of $X$. By the same reasoning as before, we can guarantee that there exists a filter $F'$ s.t. $F \subseteq F'$ and $S \in F'$. If we show that $F=F'$, we have that $F$ is maximal, and that's what we want. Let's assume that $F \neq F'$, i.e., that there is a set $Z$ such that $Z \in F'$ and $Z \notin F$. Then we have $(X\backslash Z)\cup Z = X \in F$ (since $F$ is a filter). But this is equivalent to saying that $X\backslash Z \in F \vee Z \in F$; since $Z \notin F$, we get $X \backslash Z \in F \Rightarrow X\backslash Z \in F'\Rightarrow Z \notin F'$, which is a contradiction. Thus $F=F'$, and so $F$ is maximal.

Is this formulation right? I have some issues understanding these basic concepts about filters, and some of my proofs go wrong because of it. Thanks for all the help in advance.

Exists $V\subseteq X$ such that $U\cap V$ is connected if $X$ is simply connected

Posted: 03 Dec 2021 04:00 AM PST

I'm trying to prove that all closed (holomorphic) $1$-forms on a Riemann surface are exact by using Zorn's lemma. That is, I consider the set $T$ of pairs $(U,f)$ where $f$ is a $0$-form on $U$ such that $df=w$. Now I consider the relation $\leq$ on $T$ where $(U,f)\leq(V,g)$ if $U\subseteq V$ and the restriction of $g$ to $U$ is $f$. I now need the following topological theorem (which I hope is true) to extend these forms from the maximal element.

The theorem:

If $M$ is a simply connected Riemann surface, $U_\alpha$ (the charts on $M$) an open cover of $M$, and $U\subset M$ an open connected set not equal to $M$, then there exists an open set $V$ such that $V\not\subseteq U$, $U\cap V$ is connected (and non-empty), and $V\subseteq U_\alpha$ for some $\alpha$.

What I'm then able to do is extend $f$ to $U\cup V$, which is a larger set than $U$.

My intuition tells me that this should be true. Consider, for example, the unit disk $D$ in $\mathbb{R}^2$ versus $D\setminus\{(0,0)\}$, with $U$ the set $D\setminus[-1,0]$: you can add such a $V$ in the first case, but not in the second.

If it is needed for the proof then we can of course assume that $M$ has all the nice properties of manifolds, for example that path-connectedness is equivalent to connectedness etc.

Solving for a function $B(i,k)$

Posted: 03 Dec 2021 03:58 AM PST

I am trying to solve for a function $B(i,k)$ of degree $n$ that satisfies the following: $$P(i,j)=\sum_{k=1}^nB(i,k)(k-\alpha)^{n-j}\prod_{r=1}^\gamma(k-\lambda_r),$$ where $P(i,j)=1$ if $j=i$ and $P(i,j)=0$ if $j\neq i$, with $\lambda_r\neq\alpha$, $\gamma,n\in \mathbb{N}$, and $\lambda_r,\alpha\not\in \mathbb{N}$.
I have read that I can make use of Lagrange polynomials to solve this problem, but I haven't fully understood how they can be applied.
I approached the problem by abstracting some parts and then attempting to find the link with the Lagrange polynomials, still with no success. $$P(i,j)=\sum_{k=1}^nB(i,k)H(j,k)^nC(k)$$

Radical Function

Posted: 03 Dec 2021 03:51 AM PST

The time it takes for a planet to make one revolution around the Sun is the planet's period. The period $f(d)$ (in years) of a planet whose average distance from the Sun is $d$ million kilometers is modeled by the equation $f(d)=0.0005443\sqrt{d^3}$.

a. What is the period of Neptune, whose average distance from the Sun is 4498 million kilometers?

b. Use a graphing calculator table to find Earth's average distance from the Sun.

c. Suppose in the future we colonize Mars, whose average distance from the Sun is 228 million kilometers. What is the period of Mars? If a person is 20 years old in "Earth years," how old is the person in "Mars years"?
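A quick sketch evaluating the model (variable names are mine):

```python
import math

def period_years(d_million_km):
    # f(d) = 0.0005443 * sqrt(d^3), d in millions of km, result in years
    return 0.0005443 * math.sqrt(d_million_km ** 3)

neptune = period_years(4498)   # roughly 164 years, close to Neptune's actual ~165
mars = period_years(228)       # roughly 1.87 years
mars_age = 20 / mars           # 20 Earth years expressed in Mars years
```

Part (b) asks for the inverse problem: finding the $d$ for which $f(d)=1$, which a calculator table (or solving $d = (1/0.0005443)^{2/3}$) locates near 150 million kilometers.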

Integrating surface integrals in polar coordinates.

Posted: 03 Dec 2021 04:17 AM PST

The question I have is to find the double integral of $z$ over $S$, where $S$ is the hemispherical surface given by $x^2+y^2+z^2=1$ with $z \geq 0$.

I drew this out, found it to be the top half of a sphere, and found the magnitude of the cross product of the tangent vectors to be $\frac{1}{z}$.

So my integrand became $1$.

Using polar coordinates, I found the projection of the sphere onto the $xy$-plane to be the unit disk, and got $0 \leq \theta \leq 2 \pi$ and $0 \leq r \leq 1$.

However, when I integrate this I get $2 \pi$, which I know is wrong, as the disk of radius one has area $\pi$.

I can't figure out where my polar coordinates integral is wrong so any help would be greatly appreciated.

(apologies for the bad formatting, I'm relatively new and still figuring out the basics. :))
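For reference, the likely culprit is the polar area element, which carries a Jacobian factor of $r$; with it, the integral over the unit disk comes out as expected:

$$\iint_D 1 \, dA = \int_0^{2\pi}\int_0^1 r \, dr \, d\theta = 2\pi \cdot \frac{1}{2} = \pi.$$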

Is the following set a subspace of $\Bbb R^3$

Posted: 03 Dec 2021 04:01 AM PST

I have a set of vectors $S$ which are of the form: $$S = \{ \left(\begin{matrix} a,&b,&c \end{matrix}\right) \mid a, b,c \in \Bbb R \}. $$

The set is simply what is shown above. Nothing else. I feel like it should be simple to show but I think I'm missing something.

So if I had the given vectors $x$ and $y$: $$x = { \left(\begin{matrix} a_1,&b_1,&c_1 \end{matrix}\right) } $$ $$y = { \left(\begin{matrix} a_2,&b_2,&c_2 \end{matrix}\right) } $$

How would I show that $S$ is closed under addition and scalar multiplication?
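Since $S$ consists of all real triples, the closure checks reduce to the fact that $\mathbb{R}$ is closed under addition and multiplication; schematically,

$$x+y = \left(\begin{matrix} a_1+a_2,&b_1+b_2,&c_1+c_2 \end{matrix}\right) \in S, \qquad \lambda x = \left(\begin{matrix} \lambda a_1,&\lambda b_1,&\lambda c_1 \end{matrix}\right) \in S \quad (\lambda \in \mathbb{R}),$$

because each component is again a real number and $S$ contains every real triple.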

How to solve this using Laplace Transform?

Posted: 03 Dec 2021 03:43 AM PST

My teacher set this question for an exam. I know that I can solve the first two terms using Laplace transform tables. But the part that has the integral left me in doubt, because it is an integral equation.

And this is the original post on the web-based learning platform: original professor's post.

From my search, the integral appears to be an integral equation. But despite that, when I searched in mathematical physics textbooks, the integral equation doesn't fit anything I have found.

I found it strange that "z" is used instead of "y". Nevertheless, I rewrote the expression as below. [image]

This is my attempt: [image]

[image]

Differential Equation to Model Temperature of Water

Posted: 03 Dec 2021 04:15 AM PST

Question

The water in a hot-water tank cools at a rate which is proportional to $T − T_0$, where $T$ is the temperature of the water in degrees Celsius at time $t$ minutes and $T_0$ is the temperature of the surrounding air in degrees Celsius. When $T = 60$, the water is cooling at $1$ degree Celsius per minute. When switched on, the heater supplies sufficient heat to raise the water temperature by $2$ degrees Celsius each minute (neglecting heat loss by cooling). If $T = 20$ when the heater is switched on and $T_0 = 20$, find the differential equation for $\frac{dT}{dt}$ (where both heating and cooling are taking place).

So the temperature is leaving the water at a rate of $\frac{dT_{out}}{dt}=k(T-T_0)$ for some constant $k$. We can find $k$ by letting $T=60$, $T_0=20$ and $\frac{dT_{out}}{dt}=-1$. I assumed here that the air temperature stays constant at $20$, as I think that is what the last sentence of the question implies, but it is hard to tell. From this I found that $\frac{dT_{out}}{dt}=\frac{20-T}{40}$.

It was given that $\frac{dT_{in}}{dt}=2$, so the overall rate of change must be: $$\frac{dT}{dt}=\frac{dT_{in}}{dt}-\frac{dT_{out}}{dt}$$ $$\frac{dT}{dt}=\frac{60+T}{40}$$

However, the answer should be $\frac{dT}{dt}=\frac{100-T}{40}$. Please let me know where I went wrong. Thanks.
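For comparison, one way the signs can be made to work out (treating cooling as a negative contribution that is added, rather than subtracting a rate that is already negative) is:

$$\frac{dT}{dt} = \underbrace{2}_{\text{heater}} - \underbrace{\frac{T-20}{40}}_{\text{cooling}} = \frac{80 - (T-20)}{40} = \frac{100-T}{40}.$$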

Existence of uniform Parseval frame

Posted: 03 Dec 2021 04:17 AM PST

Let $H$ be a Hilbert space (real or complex) of dimension $n$, and let $N>n$. Does there exist a uniform Parseval frame $F$ such that $|F| =N$? If it exists, how can I construct one? Thanks in advance.

Find the standard equation of the circle with center on x + y = 4 and 5x + 2y + 1 = 0 and having a radius 4.

Posted: 03 Dec 2021 04:31 AM PST

Is the x + y supposed to have a square on it? This is heavily confusing me, since I have only ever used x + y with squares on them.

Prove an integral to be $o(h)$

Posted: 03 Dec 2021 03:48 AM PST

I've got an expression (if not wrong; see my last post for the original question) $$\int_{y=-x}^x e^{\mu y-\frac{1}{2}\mu^2 h}\frac{1}{\sqrt{2\pi h}} (e^{-\frac{(y-2x)^2}{2h}} +e^{-\frac{(y+2x)^2}{2h}}-e^{-\frac{(y-4x)^2}{2h}}-e^{-\frac{(y+4x)^2}{2h}})\,dy$$ and am now asked to prove that it is $o(h)$ as $h\to 0$, but I have no idea how to proceed.

Any suggestion will be appreciated.

Loomis and Sternberg Problem 1.58

Posted: 03 Dec 2021 04:27 AM PST

[image]

Null Space of $D$:

If $f \in W$ s.t. $f(x)=c \in \mathbb{R}$, then $Df \in V$ is such that $Df(x)=0$ for any $x$ which is the additive identity of $V$ so $f \in N(D)$.

If $Df=f_0^V$, the additive identity of $V$, then $Df=Dg$ where $g(x)=\int_a^x f_0^V(t) \ dt =\int_a^x 0 \ dt = c_1 \in \mathbb{R}$. Since $Df=Dg$, $f=g+h$ where $h$ is some constant function so $f$ is also some constant function.

Thus $N(D)=\{f:f(x) = c\} \subset W$. $\square$

$D$ is surjective:

Take any $f \in V$. Let $F \in W$ be $F:x \mapsto \int_a^x f(t) \ dt$. Then $DF=f$ so $D$ is surjective. $\square$

$T$ is injective:

$T(f_0^V) = F$ where $F:x \mapsto \int_a^x f_0^V \ dt = 0$ which is the additive identity of $\ W$.

Assume $T(g)=F$ is the additive identity of $W$. Then $F(x)=\int_a^x g(t) \ dt = 0$ for all $x$, so $DF(x)=0=g(x)$, i.e. $g$ is the additive identity of $V$.

Therefore $N(T)=\{f_0^V\}$ and $T$ is injective. $\square$

Not sure how to find the range of $T$. Would really appreciate some hints. Thanks!

What is the gradient of a matrix product AB?

Posted: 03 Dec 2021 04:16 AM PST

The sixth page of http://cs231n.stanford.edu/vecDerivs.pdf says that the derivative $\partial Y_{i,j} / \partial X_{i,k}$ is given by the entries of $W$, but https://www.deeplearningbook.org/contents/mlp.html says on page 212 that the gradient is $GB^T$. I am sure there is a gap in my understanding somewhere. I understand how we get the first derivative, but then what is $GB^T$? Is that also the derivative? (Sorry if I'm using the words gradient and derivative interchangeably; I'm still trying to get a grasp on this subject.)
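A small pure-Python sanity check of how the two statements fit together, under the common convention $Y = XW$ with a scalar loss $L$ and upstream gradient $G = \partial L/\partial Y$ (here $L=\sum_{ij} Y_{ij}$, so $G$ is all ones; all names are mine): the backprop rule $\partial L/\partial X = GW^{\mathsf T}$ agrees with finite differences, even though each raw Jacobian entry $\partial Y_{i,j}/\partial X_{i,k}$ is an entry of $W$.

```python
def matmul(A, B):
    # plain-Python matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

def loss(X, W):
    # scalar loss L = sum of all entries of Y = X @ W
    return sum(sum(row) for row in matmul(X, W))

X = [[1.0, 2.0], [3.0, 4.0]]
W = [[0.5, -1.0], [2.0, 0.25]]

# upstream gradient G = dL/dY is all ones for L = sum(Y)
G = [[1.0, 1.0], [1.0, 1.0]]
analytic = matmul(G, transpose(W))   # the "G B^T" backprop formula

# finite-difference check of dL/dX
eps = 1e-6
numeric = [[0.0] * 2 for _ in range(2)]
for i in range(2):
    for j in range(2):
        Xp = [row[:] for row in X]
        Xp[i][j] += eps
        numeric[i][j] = (loss(Xp, W) - loss(X, W)) / eps
```

The CS231n note describes the (sparse, four-index) Jacobian; the Deep Learning book's $GW^{\mathsf T}$ is that Jacobian already contracted against the upstream gradient $G$, which is what backpropagation actually computes.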

Formula for second Chern class on product of line and vector bundle

Posted: 03 Dec 2021 04:08 AM PST

If possible I would like someone to prove or suggest a place to see the proof of this relation:

$$c_2(V \otimes L)=c_2(V)+(r−1)c_1(V)c_1(L)+ {r \choose 2} c_1(L)^2$$

Here $L$ is the line bundle and $r$ is the rank of the vector bundle $V$. The reference that is mentioned in the link is not freely available...

What I need in reality is only to express this relation for the $r=2$ case, that is, to show that $c_2(V \otimes L)=c_2(V)+c_1(V)c_1(L)+ c_1(L)^2$.

I know the relation $c_1(V \otimes L)=rc_1(L) + c_1(V)$, that the total Chern class satisfies $c(E)= 1 + c_1(E) +\dots +c_n(E)$, and that $c(V \otimes L)=\prod_j (1 + c_1(L_j') + c_1(L))$ if we assume that $V= \oplus_{j=1}^r L_j'$. However, these relations do not seem to suffice to prove what I want. Thanks in advance!
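For the $r=2$ case, the listed relations do suffice, at least formally via the splitting principle: writing $a_j = c_1(L_j')$ and $\ell = c_1(L)$ (my notation), the degree-$2$ part of $c(V\otimes L)=(1+a_1+\ell)(1+a_2+\ell)$ is

$$c_2(V\otimes L) = (a_1+\ell)(a_2+\ell) = a_1a_2 + (a_1+a_2)\ell + \ell^2 = c_2(V)+c_1(V)c_1(L)+c_1(L)^2,$$

using $c_1(V)=a_1+a_2$ and $c_2(V)=a_1a_2$.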

Deleted comments to post "Aproximating differential equation in maple"

Posted: 03 Dec 2021 04:11 AM PST

Yesterday I asked a question in the post named Aproximating differential equation in maple. I received beneficial feedback and help but the help comments are not there anymore. Were they deleted or something else. Could someone help?

Thanks.

Is the multiplier algebra $M(K(H)\otimes C(X))$ isomorphic to $M(K(H))\otimes C(X)$?

Posted: 03 Dec 2021 03:43 AM PST

In the survey article by Maes and Van Daele in the beginning of section 5, after Def. 5.1, the following claim is made (I will use different notation here, but wanted to give the reference nonetheless):

Let $H$ be a Hilbert space, $X$ a compact Hausdorff space, and $K(H)$ the compact operators on $H$. Given any $C^\ast$-algebra $A$, we denote its multiplier algebra by $M(A)$. Then we can identify elements in $M(K(H)\otimes C(X)\otimes C(X))$ with strictly continuous functions from $X\times X$ to $B(H)$.

I have two questions regarding this:

  1. The way that I understand this is that we identify $C(X)\otimes C(X)$ with $C(X\times X)$ (which I'm fine with) and then, somehow, get that $M(K(H) \otimes C(X\times X))\simeq M(K(H))\otimes C(X\times X)$. Why does that hold? Is it true in general that for any $C^\ast$-algebra $A$, we have $M(A\otimes C(X))\simeq M(A)\otimes C(X)$? Or is this something more specific about the compact operators? Am I correct in assuming that this is the intended route and we conclude the claim by identifying $M(K(H))$ with $B(H)$ (and $M(K(H))\otimes C(X)$ with the continuous functions from $X$ to $M(K(H))$, but that is something that I at least know how to do)?
  2. I have read in several places that the multiplier algebra of $K(H)$ is $B(H)$ equipped with the $\sigma$-strong$^\ast$ topology. What is a good reference for that? From my understanding, the multiplier algebra of a $C^\ast$-algebra $A$ should again be a $C^\ast$-algebra with respect to the norm $$\lVert \cdot\rVert_{M(A)} \colon = \sup_{a\in A} \max\{l_a(\cdot), r_a(\cdot)\},$$ where $l_a(x)\colon= \lVert xa\rVert/\lVert a\rVert$ and $r_a(x)\colon=\lVert ax\rVert/\lVert a\rVert$. So what exactly do people mean when they say that we can identify $B(H)$ with the multiplier algebra of $K(H)$? They certainly cannot be isomorphic as $C^\ast$-algebras, as that would imply equality of the two norms by uniqueness of norms that turn an algebra into a $C^\ast$-algebra. What properties does this identification of $M(K(H))$ and $B(H)$ preserve? In what sense can they be identified?

I'd be extraordinarily happy for any good reference on the topic that allows me to digest and understand what is going on in the Maes and Van Daele article. Thanks in advance!

2D Convolution using Matrix-vector multiplication

Posted: 03 Dec 2021 04:27 AM PST

I was reading a paper about filter application. In one part, the convolution is stated as a matrix–vector multiplication using a convolution matrix. I have found some definitions of the convolution matrix; however, I am confused about 2D array convolution, and specifically about the dimensions of the arrays mentioned in the paper.

Consider the input 2D matrix $i$ with a size of $K\times L$, the 2D filter matrix $f$ with a size of $M\times N$, and the output 2D matrix $o$; the problem is stated as:

$ o = i * f $

for which $*$ denotes convolution. In the mentioned paper, this problem is rewritten as a matrix–vector multiplication using a 2D convolution matrix $I$ built from the input matrix $i$:

$ o = If $

Here, $f$ is the vectorized form of the filter matrix, with a size of $MN$. The dimension of this 2D convolution matrix is stated as $KL\times MN$. I want to know how to form this convolution matrix so that it has these dimensions.
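A sketch of one consistent reading of the $KL\times MN$ dimension: zero-padded "same" convolution, where row $xL+y$ of $I$ holds the input samples $i[x-m,\,y-n]$ that multiply each filter entry. The paper may use a different padding or ordering convention; all names and the small test data here are mine.

```python
K, L = 3, 3                      # input size
M, N = 2, 2                      # filter size
i = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
f = [[1, 0], [0, -1]]

def conv2d_same(i, f):
    # o[x][y] = sum_{m,n} f[m][n] * i[x-m][y-n], zero padding outside i
    o = [[0] * L for _ in range(K)]
    for x in range(K):
        for y in range(L):
            for m in range(M):
                for n in range(N):
                    if 0 <= x - m < K and 0 <= y - n < L:
                        o[x][y] += f[m][n] * i[x - m][y - n]
    return o

# convolution matrix: row index x*L + y (one row per output pixel),
# column index m*N + n (one column per filter entry)
I = [[0] * (M * N) for _ in range(K * L)]
for x in range(K):
    for y in range(L):
        for m in range(M):
            for n in range(N):
                if 0 <= x - m < K and 0 <= y - n < L:
                    I[x * L + y][m * N + n] = i[x - m][y - n]

f_vec = [f[m][n] for m in range(M) for n in range(N)]      # length MN
o_vec = [sum(I[r][c] * f_vec[c] for c in range(M * N))     # length KL
         for r in range(K * L)]
```

Under this convention, $I$ is exactly $KL\times MN$ and $If$ reproduces the direct 2D convolution flattened row by row.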

Equation of hyperboloid of one sheet resulting from rotating a (skew) line about an axis

Posted: 03 Dec 2021 03:42 AM PST

Suppose I have a line in $3D$ given in parametric form as $ L(t) = P_0 + t V $, with $t$ being the parameter, and I rotate it about an axis passing through the origin whose direction is specified by the unit vector $A$. The vector equation of the surface resulting from rotating $L(t)$ about $A$ is found as follows:

$X(t, \theta) = R_A(\theta) L(t) $

where $R_A(\theta)$ is the rotation matrix about the axis $A$ by an angle $\theta$, and is given by the Rodrigues formula as

$R_A(\theta) = A A^T + (I - A A^T) \cos \theta + S_A \sin \theta $

with the skew-symmetric matrix $S_A$ being the matrix representation of the linear transformation $P \mapsto A \times P$ (the cross product of $A$ with a vector $P$).

I want to show that the surface specified by all points $X(t,\theta)$ is indeed a hyperboloid of one sheet (under certain conditions imposed on $P_0$, $V$ and $A$), and I also want to derive the algebraic equation of this hyperboloid, which is of the form

$ (X - X_0)^T Q (X - X_0) = 1 $

Any pointers on these two questions are highly appreciated.
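As a quick numeric sanity check of the claim (a concrete instance, not the general derivation): take the skew line $L(t)=(1,t,t)$ and rotate it about the $z$-axis. Every swept point then satisfies $x^2+y^2-z^2=1$, the standard one-sheeted hyperboloid, which matches the expected form with $X_0=0$ and diagonal $Q$.

```python
import math

def swept_point(t, theta):
    # rotate the line point L(t) = (1, t, t) about the z-axis by theta
    x0, y0, z0 = 1.0, t, t
    c, s = math.cos(theta), math.sin(theta)
    return (c * x0 - s * y0, s * x0 + c * y0, z0)

# check the quadric x^2 + y^2 - z^2 = 1 on a grid of (t, theta):
# x^2 + y^2 = 1 + t^2 (rotation preserves distance to the axis), z^2 = t^2
residuals = [abs(x * x + y * y - z * z - 1.0)
             for t in (-2.0, -0.5, 0.0, 1.0, 3.0)
             for theta in (0.0, 0.7, 2.0, 3.14, 5.5)
             for (x, y, z) in (swept_point(t, theta),)]
```

The general case reduces to this one after an affine change of coordinates aligning $A$ with the $z$-axis, provided the line is skew to the axis (neither parallel nor intersecting).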

Proof for property of proportionality used in deriving physical laws like law of gravitation and coulombs law.

Posted: 03 Dec 2021 04:08 AM PST

$$\text{F} \propto m_1m_2$$ $$\text{F} \propto \frac{1}{r^2}$$

Therefore $$\text{F} \propto \frac{m_1m_2}{r^2}$$

In physics, one quantity ($F$) is directly proportional to each of two other quantities ($m_1m_2$ and $\displaystyle \frac{1}{r^2}$), and one then concludes that the first quantity is directly proportional to the product of the other two: $$\text{F} \propto \frac{m_1m_2}{r^2}$$

This property intuitively makes sense but I have never seen a rigorous proof of it or ever seen it being formally written in a textbook.

Can someone please prove it rigorously?

(The law of gravitation is simply the most popular application of the problem I have asked. I do not want to know the inner workings of the law of gravitation. I only want to know the how proportionality can be combined)

(How does one combine proportionality? I know this question has been answered in the following link, but I do not understand the explanation so can someone please provide another proof explained differently)
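One standard way to make the combination rigorous (a sketch of the usual separation-of-variables argument): the two proportionalities, each with the other variables held fixed, mean $F = g(r)\,m_1m_2$ for some function $g$ and $F = h(m_1,m_2)/r^2$ for some function $h$. Equating the two expressions,

$$g(r)\,r^2 = \frac{h(m_1,m_2)}{m_1 m_2},$$

and since the left side depends only on $r$ while the right side depends only on $m_1, m_2$, both must equal a single constant $G$, giving $F = G\,\dfrac{m_1m_2}{r^2}$.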

Number of triangles $\Delta ABC$ with $\angle{ACB} = 30^o$ and $AC=9\sqrt{3}$ and $AB=9$?

Posted: 03 Dec 2021 03:48 AM PST

I came across the following question just now,

A triangle $\Delta ABC$ is drawn such that $\angle{ACB} = 30^o$ and side length $AC = 9\sqrt{3}$.

If side length $AB = 9$, how many possible triangles can $ABC$ exist as?

Here is a diagram for reference:

enter image description here

Here is what I did:

  • I used the Law of Sines to find angle $\angle ABC$

$\to \frac{9}{\sin(30^o)} = \frac{9\sqrt{3}}{\sin(\angle ABC)}$

$$\to \angle ABC = 60^o$$

So, therefore, $\Delta ABC$ can only exist as one triangle, with angles $30^o$, $60^o$ and $90^o$.

But the answer says $2$ triangles are possible. So my question is: what is the second possible triangle?
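The second triangle comes from the ambiguous (SSA) case of the Law of Sines: $\sin\theta = \sin(180^o - \theta)$, so $\sin(\angle ABC)=\frac{\sqrt 3}{2}$ also admits $\angle ABC = 120^o$, giving angles $30^o, 120^o, 30^o$. A quick numeric sketch (names mine):

```python
import math

AB, AC, angle_C = 9.0, 9.0 * math.sqrt(3), 30.0

# Law of Sines: sin(B)/AC = sin(C)/AB
sin_B = AC * math.sin(math.radians(angle_C)) / AB   # = sqrt(3)/2
B1 = math.degrees(math.asin(sin_B))                 # acute solution
B2 = 180.0 - B1                                     # obtuse (supplementary) solution

# both are valid triangles, since angle_C + B < 180 in each case
```

Taking $\arcsin$ alone silently discards the obtuse solution, which is why only one triangle appeared in the original calculation.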

There is no expansive homeomorphism on $S^1$

Posted: 03 Dec 2021 04:06 AM PST

I want to prove that there is no expansive homeomorphism on $S^1$.

$f: X \to X $ is expansive if \begin{align} \exists\varepsilon>0 \text{ such that } \sup_{n\in \mathbb{N} \cup \{0\}} d(f^n(x),f^n(y))\leq \varepsilon \Rightarrow x=y. \end{align}

This is how I solved this question:

Suppose not: if $f$ is an expansive homeomorphism on $S^1$, then so is $f^2$. Let $f^2=g$; then $g^n$ converges to $1$ as $n \to \infty$. Take two points $x$ and $y$ very close to $1$, for example in $[1-\varepsilon, 1)$, so that $d(g^n(x),g^n(y))<\varepsilon$ for all $n$. This contradicts expansivity.

I identify $S^1$ with $[0,1]$ in my proof.

Are things ok with my proof?

Is there a trace operator for periodic functions?

Posted: 03 Dec 2021 04:33 AM PST

I know that for a smooth domain $\Omega$ we can build a trace operator $\gamma : H^s(\Omega) \to \prod_{0\leq j \leq s}H^{s-j-\frac{1}{2}}(\partial \Omega)$. In particular it has a right inverse which implies that $\gamma$ is surjective. Moreover, one can characterize $H^s_0(\Omega)$ as being the kernel of $\gamma$.

Now I am wondering if a similar result exists for periodic functions. So if I define $\Omega = (-\pi,\pi)^m$ and $H^s_{per}(\Omega)$ to be the usual Sobolev space for periodic functions, can we build a surjective operator $\gamma_{per}$, such that $H^s_{0,per}$ (= periodic functions in $H^s_0(\Omega)$) can be identified as the kernel of $\gamma_{per}$ ?

The point is that $\Omega$ is no longer smooth, but only Lipschitz. However, I was hoping that since we restrict to periodic functions, there might be a way to obtain a similar result anyway.

It feels like the natural way to do the proof would be to work with the coefficients of Fourier series, turning the problem into finding an operator on coefficient spaces. I just could not find the right inverse in that manner.

Edit :

There is a construction of a right inverse on the half-plane in ''Strongly Elliptic Systems and Boundary Integral Equations'' by William McLean (page 101, chapter on the trace operator). Now since the boundary of a cube looks locally like a half-plane (not at the vertices, but we can decompose the boundary into pieces to avoid this), I was hoping to obtain a right inverse in that manner.

Probability of getting consecutive heads in $100$ coin tosses if $P(H) = 0.04$ and $P(T) = 0.96$

Posted: 03 Dec 2021 03:56 AM PST

If there is an uneven coin where heads show up $4\%$ of the time and tails show up $96\%$ of the time, what is the probability of seeing consecutive heads in $100$ coin tosses?
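One exact approach is the standard recurrence for "no two consecutive heads" (a sketch; names mine): if $a_n$ is the probability that $n$ tosses contain no HH, then conditioning on the first toss (T, or H followed by a forced T) gives $a_n = p_T\,a_{n-1} + p_H p_T\,a_{n-2}$ with $a_0 = a_1 = 1$, and the answer is $1 - a_{100}$.

```python
def prob_no_hh(n, ph=0.04):
    # a_n = pt*a_{n-1} + ph*pt*a_{n-2};  a_0 = a_1 = 1
    pt = 1.0 - ph
    prev2, prev1 = 1.0, 1.0
    for _ in range(2, n + 1):
        prev2, prev1 = prev1, pt * prev1 + ph * pt * prev2
    return prev1

answer = 1.0 - prob_no_hh(100)   # probability of at least one HH in 100 tosses
```

As a rough cross-check, there are $99$ adjacent pairs, each HH with probability $0.04^2 = 0.0016$, so a Poisson-style estimate $1-e^{-99\cdot 0.0016}\approx 0.15$ is in the same ballpark.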

Converting Complex numbers into Cartesian Form

Posted: 03 Dec 2021 04:03 AM PST

I am trying to convert the following complex number into Cartesian form:

$$ \sqrt{8}\left(\cos \frac{\pi}{4} + i \sin\frac{\pi}{4}\right) $$

So far I have tried doing both:

So far I have tried computing both $\sqrt{8} (\pi/4)\cos (\pi/4)$ and $\sqrt{8}(\pi/4)\sin (\pi/4)$, which returns $(0.5\pi, 0.5\pi)$, but the tutorial I am following says this is the incorrect answer. Does anyone know how to correctly convert complex numbers in polar form to Cartesian form?
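A quick numeric check of the conversion: the Cartesian form is $r\cos\theta + i\,r\sin\theta$ (the angle $\pi/4$ is the argument of the trig functions, not an extra multiplicative factor), which Python's standard library can confirm:

```python
import cmath
import math

r, theta = math.sqrt(8), math.pi / 4
z = complex(r * math.cos(theta), r * math.sin(theta))

# same conversion via the standard-library helper
z2 = cmath.rect(r, theta)

# both give 2 + 2i, since sqrt(8) * cos(pi/4) = sqrt(8) * sqrt(2)/2 = 2
```

The stray factor of $\pi/4$ in the attempt above is what produced $(0.5\pi, 0.5\pi)$ instead of $(2, 2)$.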
