Wednesday, March 24, 2021

Recent Questions - Mathematics Stack Exchange


Eigenspace Notation

Posted: 24 Mar 2021 08:38 PM PDT

I am trying to solve a problem asking about what I believe to be specific Eigenvalues in a Vector Space.

However, I am confused by the notation used and hence have little idea what the question is actually asking.

Note: I am not looking for an answer to the problem, merely clarification with the notation.

The question was written as follows:

Let $W \in M_{nn}(\Bbb{R})$ and fix $\lambda \in \Bbb{R}$.

Let $E_{\lambda}(W) = \{x \in \Bbb{R}^n \mid Wx = \lambda x\}$.

Let $$W = \begin{bmatrix}0&y\\y&0 \end{bmatrix}$$

Find $E_1(W)$.

Now, where I get confused by the notation is the meaning of $E_1(W)$.

I believe this is asking me to find the first eigenvalue of $W$ in the set $E_{\lambda}(W)$?

Again, I am not asking for a solution, just a clarification on the notation.

Thank you.
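For what it's worth, the defining condition $Wx = \lambda x$ is easy to check numerically. Below is a minimal pure-Python sketch using a hypothetical diagonal matrix (deliberately not the $W$ from the question, so as not to give the answer away):

```python
def matvec(W, x):
    # multiply matrix W (list of rows) by vector x
    return [sum(W[i][j] * x[j] for j in range(len(x))) for i in range(len(W))]

def in_eigenspace(W, lam, x, tol=1e-12):
    # x belongs to E_lam(W) exactly when W x equals lam * x
    Wx = matvec(W, x)
    return all(abs(Wx[i] - lam * x[i]) <= tol for i in range(len(x)))

W = [[2.0, 0.0], [0.0, 3.0]]              # hypothetical example matrix
print(in_eigenspace(W, 2.0, [1.0, 0.0]))  # True: W maps [1,0] to 2*[1,0]
print(in_eigenspace(W, 2.0, [0.0, 1.0]))  # False: W maps [0,1] to 3*[0,1]
```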

Hilbert's Nullstellensatz for analytic functions

Posted: 24 Mar 2021 08:37 PM PDT

In Demailly's book Complex Analytic and Differential Geometry, p. 97, he states that $I_{V(F), 0} = \sqrt{F}$, where $F$ is an ideal of the ring of holomorphic functions in an open subset of $\mathbb C^{n}$ containing $0$, $V(F)$ is the set of common zeros of the elements of $F$, and $\sqrt{F}$ is the radical of $F$. What confuses me is that the statement seems to presuppose $0 \in V(F)$, which does not seem to be correct in general: one can simply take the ideal generated by a holomorphic function which does not vanish at $0$.

If the series $a_n$ is convergent, is the series $(-1)^n \sqrt{a_n}$ convergent?

Posted: 24 Mar 2021 08:35 PM PDT

Given that the series $a_n$ converges (where $a_n$ is positive), I have to figure out whether the series $(-1)^n \sqrt{a_n}$ is convergent. I would like to know if my proof is correct:

First, I am looking at the series $\sqrt{a_n}$. Since it is always true that $\sqrt{a_n} \leq a_n$, by Basic Comparison Test the series $\sqrt{a_n}$ is convergent, and this implies that the series $(-1)^n \sqrt{a_n}$ is also convergent by absolute convergence test. Does this make sense?

If every point of a topological space $X$ has a closed neighbourhood that is $T_2$ (as subspace), then $X$ is $T_2$.

Posted: 24 Mar 2021 08:34 PM PDT

I have tried the problem in the following way-

Let $x,y$ be two distinct points in $X$, and let $x\in A$ and $y\in B$, where $A,B$ are closed neighbourhoods of $x$ and $y$ respectively, each $T_2$ as a subspace. Let $U_1$ and $U_2$ be open sets such that $x\in U_1\subset A$ and $y\in U_2\subset B$.

Case 1: If $y\notin A$, then take $V=A^c$ and $U=U_1$. Then $U\cap V=\emptyset $ and $y\in V$ and $x\in U$.

Case 2: If $x\notin B$, then take $U=B^c$ and $V=U_2$. Then $U\cap V=\emptyset$ and $x\in U$, $y\in V$.

But I'm stuck with the case $x\in B$ and $y\in A$ i.e. $x,y\in A\cap B$.

Can anyone help me in this regard? Thanks for your help in advance.

Complex Analysis Limit using taylor series

Posted: 24 Mar 2021 08:33 PM PDT

I am supposed to use Taylor and/or Laurent series expansions to find the limits at the removable singularities of the expression below, but I am severely struggling. I think the correct way to solve the problem is to take the log of the entire thing and somehow get rid of the pesky $\frac{1}{z}$ term, but I am not too sure how to proceed.

$\lim_{z \to 0} \frac{1}{z} [(1+z)^{(1/z)}-e] $

Prove if x > 0, $1+\frac{1}{2}x-\frac{1}{8}x^2 \le \sqrt{1+x}$

Posted: 24 Mar 2021 08:33 PM PDT

Prove if x > 0, $1+\frac{1}{2}x-\frac{1}{8}x^2 \le \sqrt{1+x}$.

I found online that one person suggested using Taylor's Theorem to expand the right-hand side and applying Bernoulli's inequality.

So, if $x_0 = 0$, then $\sqrt{1+x} = 1+\frac{1}{2}x-\frac{1}{4(2!)}x^2+R$, where $R$ is larger than $0$. This makes sense to me, but I'm trying to find another way to prove the inequality, like the Mean Value Theorem argument for the inequality $\sqrt{1+x} \le 1+\frac{1}{2}x$.

I see that the $1+\frac{1}{2}x$ part is the same, but I have trouble working the $\frac{1}{8}x^2$ term into the argument.
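As a quick numerical sanity check (not a proof), the claimed inequality can be spot-checked at a few positive values of $x$:

```python
import math

# Spot-check 1 + x/2 - x^2/8 <= sqrt(1+x) at sample points x > 0.
for x in [0.1, 0.5, 1.0, 3.0, 10.0, 100.0]:
    lhs = 1 + x / 2 - x ** 2 / 8
    rhs = math.sqrt(1 + x)
    assert lhs <= rhs  # holds at every sampled point
```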

Fano planes probability

Posted: 24 Mar 2021 08:29 PM PDT

I have to find the threshold probability for containing a Fano plane. Since it's a 3-hypergraph with 7 vertices and 4 hyperedges, I was following the same approach as in:

https://math.stackexchange.com/a/4073729/874698

I have a small doubt here: since the Fano plane has a particular shape, thus, when calculating the variance, I believe we should get only the following cases

(i) no two Fano planes share a hyperedge

(ii) two Fano planes share one hyperedge

(iii) two Fano planes share all four hyperedges, i.e. they are the same.

Essentially, I think it is not possible for two Fano planes to share only two (or three) hyperedges. Is this correct? Or can the shape be distorted so as to keep only the essential structure? Thanks in advance.

Proof $\vec{r} \cdot (\vec{r})'=0\iff ||\vec{r}||=K,$ with $K \in \mathbb{R}$.

Posted: 24 Mar 2021 08:25 PM PDT

I need to prove the following property:

Let $r: \mathbb{R}\to \mathbb{R}^n$ be a smooth vector-valued function. Then

$\vec{r} \cdot (\vec{r})'=0\iff ||\vec{r}||=K,$ with $K \in \mathbb{R}$.

I could prove the $\Leftarrow$ direction, but I could not do the $\Rightarrow$ direction.

Any hint is appreciated.

Why $\int_{C} \operatorname{det}\left(d N_{\phi}\right)\left|\phi_{u} \times \phi_{v}\right|dudv=\int_{C}\left|\psi_{u} \times \psi_{v}\right|dudv$?

Posted: 24 Mar 2021 08:25 PM PDT

I was trying to understand the solution to the following problem.

Let $S \subset \mathbb{R}^{3}$ be a compact, connected, nonempty surface whose curvature $K$ is everywhere positive. Prove that $\int_{S} K d A \geq 4 \pi .$ (Hint: use the Gauss map $N: S \rightarrow \mathbb{S}^{2}$ to compare this to an integral over a sphere.)

Proof: Let $\phi: U \rightarrow S$ be a chart at $p,$ and shrink $U,$ if necessary, so that $N$ restricts to a diffeomorphism of $\phi(U) \subset S$ onto its image. Then $\psi=N \circ \phi: U \rightarrow \mathbb{S}^{2}$ is a chart for $\mathbb{S}^{2}$ at $N(p)$, and we recognize that $K(\phi(u, v))=\operatorname{det} d N_{\phi(u, v)}$, so from $\psi=N \circ \phi$ we get $$ \int_{\phi(C)} K d A=\int_{C} \operatorname{det}\left(d N_{\phi}\right)\left|\phi_{u} \times \phi_{v}\right| d u d v=\int_{C}\left|\psi_{u} \times \psi_{v}\right| d u d v=\operatorname{area}(\psi(C)) $$ for any compact set $C \subset U$. If we cover $S$ by such compact sets $\phi(C)$, then some of them may overlap, but their images under $N$ cover all of $\mathbb{S}^{2}$ and so we conclude that $$ \int_{S} K d A \geq \operatorname{area}\left(\mathbb{S}^{2}\right)=4 \pi. $$

The question is: why do we have $\int_{C} \operatorname{det}\left(d N_{\phi}\right)\left|\phi_{u} \times \phi_{v}\right| d u d v=\int_{C}\left|\psi_{u} \times \psi_{v}\right| d u d v$? This is the only bit that I don't understand.

Calculate Image of general coordinate vector based on basis?

Posted: 24 Mar 2021 08:22 PM PDT

Question

How would someone solve any question of this format? Why are there three spaces for the answer?

How would I solve a Quadratic Function with one x intercept, the y intercept, and k

Posted: 24 Mar 2021 08:19 PM PDT

I am given the following information: the $y$-intercept is 0.4 m above the ground, the maximum height reached is 15 m, and the total horizontal distance travelled is 42 m (the $x$-intercept). How would I derive a quadratic function from this?

$|f(x)| \leq \sqrt{\int_0 ^1 |f'|^2}$ Solution verification

Posted: 24 Mar 2021 08:35 PM PDT

Note

After finishing my proof, I found a dupe here. Apologies if this needs to be closed/removed. I thought I'd submit it anyway just to see if my proof looks ok, and to get feedback on the $f(0)$ vanishing part, which isn't really addressed in the linked question.

Question

Looking for solution verification:

This is from Calculus by Michael Spivak, Chapter 14, problem 22.

14-22. Suppose that $f'$ is integrable on $[0,1]$ and $f(0) = 0$. Prove that for all $x$ in [0,1] we have $$|f(x)| \leq \sqrt{\int_0 ^1 |f'|^2}.$$ Show that the hypothesis $f(0) = 0$ is needed. Hint: Problem 13-39.

The hint refers to the Cauchy-Schwarz inequality: $$ \left(\int_a ^b fg\right)^2 \leq \left(\int_a^b f^2\right)\left(\int_a^b g^2\right)$$

From the Second FTC we have for $x$ in $[0,1]$ $$\int_0^x f' = f(x) - f(0) = f(x)$$ Squaring both sides $$[f(x)]^2 = \left(\int_0^x f'\right)^2$$ Applying Cauchy-Schwarz to the above expression (using functions $f'$ and $g = 1$), we have $$[f(x)]^2 =\left(\int_0^x f'\right)^2 \leq \left(\int_0^x f'^2\right) \left(\int_0^x 1^2\right)$$ $$[f(x)]^2 \leq x\int_0^x f'^2$$ Here, $0 \leq x \leq 1$, and the integrand $f'^2$ is nonnegative, so we have $$x\int_0^x f'^2 \leq 1\int_0^1 f'^2$$

Thus $$[f(x)]^2 \leq \int_0^1 f'^2$$

Taking the positive square root yields the desired inequality

$$|f(x)| \leq \sqrt{\int_0^1 |f'|^2}$$

The $f(0) = 0$ part

As for showing that the hypothesis $f(0) = 0$ is needed, I'm not sure what to do.

If we couldn't remove $f(0)$ the final inequality would instead look like $$|f(x) - f(0)| \leq \sqrt{\int_0^1 |f'|^2}$$ Is this sufficient? Am I missing something important?
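The chain of inequalities above can be spot-checked numerically for a concrete choice of $f$; here is a sketch using $f(x)=x^2$ (my own example, not from Spivak) and a midpoint-rule quadrature:

```python
import math

# Sample f(x) = x^2 on [0,1]: f(0) = 0, f'(x) = 2x, and the exact value of
# the integral of (f')^2 over [0,1] is 4/3.
f = lambda x: x * x
df = lambda x: 2 * x

n = 100_000
h = 1.0 / n
integral = sum(df((i + 0.5) * h) ** 2 * h for i in range(n))  # midpoint rule
bound = math.sqrt(integral)                                   # about 1.1547

# the bound dominates |f(x)| at every grid point of [0,1]
assert all(abs(f(i * h)) <= bound for i in range(n + 1))
```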

Show that for all $x> 0$, $P(Y\ge x|F_{0})=\min(1,X_{0}/x)$

Posted: 24 Mar 2021 08:15 PM PDT

Let $X$ be a nonnegative martingale such that $X_{n}$ converges to $0$ almost surely, and let $Y =\sup X_{n}$. Show that for all $x> 0$, $P(Y\ge x|F_{0})=\min(1,X_{0}/x)$.

To prove a sufficient and necessary condition of a quadratic form to equal to zero

Posted: 24 Mar 2021 08:26 PM PDT

The question is the following:

Let $X$ be a vector space and $B: X \times X \rightarrow K$ a positive semi-definite form. Show that $q_B(y)=0 \Longleftrightarrow B(x,y)=0$ for all $x \in X$, where $q_B(y)$ is the quadratic form $B(y,y)$.

Is it because $q_B(x) = \sum_{i,j=1}^n a_{ij}x_i \bar{x_j}$? Is this a similar type of proof to proving linear independence?

Solving recurrence using EGF

Posted: 24 Mar 2021 08:40 PM PDT

How do I solve the recurrence relation $a_n = na_{n - 1}+ (n - 1)!$ using an exponential generating function?

Any help would be highly appreciated. Thanks.
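For a numeric cross-check, dividing the recurrence by $n!$ suggests the candidate closed form $a_n = n!\,(a_0 + H_n)$, where $H_n$ is the $n$th harmonic number. This is my own guess, not from the question, but it agrees with the recurrence exactly for the first several terms:

```python
from fractions import Fraction
from math import factorial

# Exact arithmetic check of a_n = n a_{n-1} + (n-1)! against n!*(a_0 + H_n),
# with a hypothetical initial value a_0 = 1.
a0 = Fraction(1)
a, H = a0, Fraction(0)
for n in range(1, 12):
    a = n * a + factorial(n - 1)   # the recurrence
    H += Fraction(1, n)            # harmonic number H_n
    assert a == factorial(n) * (a0 + H)
```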

Finding the maximum using inequalities

Posted: 24 Mar 2021 08:21 PM PDT

Given that $(a+1)(b+1)(c+1)(d+1)=81$ and $a,b,c,d>0$, find the maximum of $abcd$.

I know the answer should occur when $a=b=c=d$ but how do I get there using inequalities? This feels like AM-GM.
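Assuming the maximum is $abcd = 16$, attained at $a=b=c=d=2$ (which is what the symmetric guess gives), a crude random search over the constraint surface never exceeds it. A numeric sanity check, not a proof:

```python
import random

# Sample (a, b, c) at random, solve the constraint for d, and track the best
# product found; it should stay at or below 16.
random.seed(0)
best = 0.0
for _ in range(200_000):
    a, b, c = (random.uniform(0.01, 8.0) for _ in range(3))
    d = 81.0 / ((a + 1) * (b + 1) * (c + 1)) - 1.0
    if d > 0:
        best = max(best, a * b * c * d)
assert best <= 16.0 + 1e-9
```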

How do I show $f$ is one-one on $\mathbb{R}^2$?

Posted: 24 Mar 2021 08:12 PM PDT

Let $f$ be a differentiable map (but not necessarily continuous differentiable) with component functions $f_1$ and $f_2$, that is $f(x_1,x_2)=(f_1(x_1,x_2),f_2(x_1,x_2))$ for all $(x_1,x_2) \in \mathbb{R}^2$. For all $(x_1,x_2) \in \mathbb{R}^2$, one has

$|\frac{\partial f_1}{\partial x_1}-2|+ |\frac{\partial f_1}{\partial x_2}|+|\frac{\partial f_2}{\partial x_1}|+|\frac{\partial f_2}{\partial x_2}-2| \leq \frac{1}{2}$.

Prove that $f$ is one-one on $\mathbb{R}^2$.

I cannot use the inverse function theorem to prove this. Can anyone suggest a direction for solving this question?

Prove that a subspace contains the span

Posted: 24 Mar 2021 08:13 PM PDT

Let vectors $v, w \in \mathbb{F}^n$. If $U$ is a subspace in $\mathbb{F}^n$ and contains $v, w$, then $U$ contains $\operatorname{Span} \{v,w\}.$

--

My attempt: $U$ contains the vectors $v, w$. Then $v+w \in U$, and $av \in U$, $bw \in U$ for any $a, b \in \mathbb{F}$. Since the span consists of all $av+bw$, $U$ contains the span. I feel like I am missing a step here, but don't know what it is. So, can someone please point it out?

How do I find the equation of this parabola?

Posted: 24 Mar 2021 08:18 PM PDT

A manufacturer is designing a flashlight. For the flashlight to emit a focused beam, the bulb needs to be on the central axis of the parabolic reflector, 3 centimeters from the vertex. Write an equation that models the parabola formed when a cross section is taken through the reflector's central axis. Assume that the vertex of the parabola is at the origin in the xy coordinate plane and the parabola can open in any direction.

Why can the $i$th term be represented in quadratic form when every difference of two adjacent elements of the sequence is linear?

Posted: 24 Mar 2021 08:11 PM PDT

Let $0<a_i$ for $1\le i\le n$, let $\alpha,\beta$ be constants independent of $i$, and suppose

$$a_{i+1}-a_i=\alpha \cdot i+\beta$$

The textbook claims that $a_i$ can be represented in quadratic form ($bi^{2}+ci+d$).

Why is this claim justified?

Here is my thinking:

$$a_{i+1}=\alpha\cdot i+\beta+a_i$$

$$a_2=\alpha\cdot(1)+\beta+a_1 \text{ (linear form)}$$

Since the 2nd term has linear form, the 3rd one will also be linear, and the same holds for any $i$.

Hence every term takes a linear form. However, the textbook states that the $i$th term is a 2nd-degree polynomial.
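The point the textbook is making can be checked directly: summing the linear differences telescopes to $a_i = a_1 + \alpha\,\tfrac{(i-1)i}{2} + \beta(i-1)$, which is quadratic in $i$. A sketch with hypothetical constants:

```python
# Build the sequence from the recurrence a_{i+1} - a_i = alpha*i + beta and
# compare with the telescoped quadratic formula. Constants are hypothetical.
alpha, beta, a1 = 3, 5, 7
a = [a1]
for i in range(1, 20):
    a.append(a[-1] + alpha * i + beta)

# a_i = a_1 + alpha*(i-1)*i/2 + beta*(i-1): sum of the first i-1 differences
for i in range(1, 21):
    assert a[i - 1] == a1 + alpha * (i - 1) * i // 2 + beta * (i - 1)
```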

How do you find $B^c$ when $B$ is the solution set of $\left|x^2-3x+\sqrt{x^2+2x-3}+3-|-x+x^2+3|\right|+3=-x$?

Posted: 24 Mar 2021 08:38 PM PDT

The problem is as follows:

Assuming $B$ is the solution set of the equation from below:

$$\left|x^2-3x+\sqrt{x^2+2x-3}+3-|-x+x^2+3|\right|+3=-x$$

Find $B^c$.

The given choices in my book are as follows:

$\begin{array}{ll} 1.&\varnothing\\ 2.&\mathbb{R}\\ 3.&[2,+\infty\rangle\\ 4.&[2,3]\\ \end{array}$

I've found this problem in my precalculus workbook but I have no idea how to solve it.

What I'm about to post below is the official solution from my book, but it is somewhat unclear to me, because it does not give any further details on how the vertical bars are eliminated. I would really appreciate it if someone could explain the rationale behind this problem and, more importantly, the meaning of the $C$ in $B^c$.

$\left|x^2-3x+\sqrt{x^2+2x-3}+3-|-x+x^2+3|\right|+3=-x$

Notice:

In $|-x+x^2+3|$, the discriminant satisfies $\Delta<0$ (with positive leading coefficient), so $x^2-x+3>0$ for every $x$ and these absolute-value bars can be dropped.

Therefore:

$\left|x^2-3x+\sqrt{x^2+2x-3}+3-x^2+x-3\right|+3=-x$

$\left|\sqrt{x^2+2x-3}-2x\right|=-x-3$

and

$-x-3\geq 0$

$\left|\sqrt{x^2+2x-3}-2x\right|=-x-3$

$\sqrt{x^2+2x-3}-2x=-x-3$ and $x\leq -3$

$\sqrt{x^2+2x-3}=x-3$ and $x\leq -3$

Therefore:

The solution set is $\varnothing$

hence the $B=\varnothing$ and $B^c=\mathbb{R}$

Therefore the answer is choice 2.

That's where the official solution ends. But as I'm presenting it here, I'm clueless about how it arrived at that conclusion. There might be some typo in the question regarding what $C$ is, but I wonder what it could be so that the answer is all reals.

Can someone help me with this? It would really help me a lot if the answer included a foreword or an explanation of when the vertical bars can be erased or skipped from the analysis; I think that latter part is the most important for my understanding, because it is what I'm lacking the most.

Could someone address these doubts? Please try not to skip any steps, and explain each line if you can, because I'm lost here.

Regarding the notation, which I have been asked about before: the bracket $[$ means the left endpoint of the interval is included, $]$ means the right endpoint is included, and $\rangle$ means the right endpoint is excluded.
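As a numeric sanity check of the book's conclusion $B=\varnothing$ (my own sketch, sampling the domain $x^2+2x-3\ge 0$, i.e. $x\le -3$ or $x\ge 1$):

```python
import math

def lhs(x):
    # left-hand side of the equation; only called where the radicand is >= 0
    s = math.sqrt(x * x + 2 * x - 3)
    return abs(x * x - 3 * x + s + 3 - abs(-x + x * x + 3)) + 3

# sample x in [-50, 50]; no sampled point in the domain satisfies lhs(x) = -x
for i in range(-5000, 5001):
    x = i / 100.0
    if x * x + 2 * x - 3 >= 0:
        assert lhs(x) != -x
```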

How to prove Riemann sum wrt. any point will give same result (left, right, middle, etc.)

Posted: 24 Mar 2021 08:22 PM PDT

Suppose $f(x)$ is a differentiable function and I want to find the Riemann sum between points $a$ and $b$. I'll divide the part of the function from $a$ to $b$ into $n$ pieces, so the width of each piece will be $\Delta x=\frac{b-a}{n}$. Let the points that demarcate each part be $x_0,x_1,...,x_n$, with $x_0=a$, $x_n=b$, and $x_i=x_{i-1} + \Delta x$. The area of each (rectangular) piece will be $\Delta x\, f(x_i^*)$, where $x_i^*$ is a point between $x_i$ and $x_{i+1}$.

I want to prove that $x_i^*$ can be any point between $x_i$ and $x_{i+1}$, and the Riemann sum (i.e. $\sum_{i=0}^n f(x_i^*)\Delta x$) as $n \to \infty$ will give the same result.

Will this kind of proof work?
$x_i\le x_i^*\le x_{i+1}$
$\Rightarrow x_i\le x_i^*\le x_i+\Delta x$
as $n\to \infty$, $\Delta x\to 0$
$\Rightarrow \lim_{\Delta x\to 0} x_i\le \lim_{\Delta x\to 0} x_i^*\le \lim_{\Delta x\to 0} x_i+\Delta x$
$\therefore x_i \le x_i^*\le x_i$
Hence $x_i^*$ can be any point between $x_i$ and $x_{i+1}$
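The claim being proved can at least be observed numerically: left, midpoint, and right sums for a sample integrand all approach the same value. A minimal sketch (with $f(x)=x^2$ as a hypothetical example):

```python
def riemann_sum(f, a, b, n, offset):
    # offset = 0.0 (left), 0.5 (midpoint), 1.0 (right) picks x_i^* in each piece
    dx = (b - a) / n
    return sum(f(a + (i + offset) * dx) for i in range(n)) * dx

f = lambda x: x * x  # exact integral over [0,1] is 1/3
for offset in (0.0, 0.5, 1.0):
    assert abs(riemann_sum(f, 0.0, 1.0, 100_000, offset) - 1 / 3) < 1e-4
```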

Prob. 10, Chap. 2, in Royden's REAL ANALYSIS: If, for some $\alpha>0$, $|a-b|\geq\alpha$ for all $a\in A$ and $b\in B$, then ...

Posted: 24 Mar 2021 08:24 PM PDT

Here is Prob. 10, Chap. 2, in the book Real Analysis by H.L. Royden and P.M. Fitzpatrick, 4th edition:

Let $A$ and $B$ be bounded sets for which there is an $\alpha > 0$ such that $\lvert a-b \rvert \geq \alpha$ for all $a \in A$ and $b \in B$. Prove that $m^*(A \cup B) = m^*(A) + m^*(B)$.

Of course, we have $$ m^*(A \cup B) \leq m^*(A) + m^*(B). \tag{1} $$

How to prove the equality? In particular, how to prove the inequality $m^*(A) + m^*(B) \leq m^*(A \cup B)$?

For each $a \in A$, we have $B \cap (a-\alpha, a+\alpha ) = \emptyset$, and for each $b \in B$, we have $A \cap (b-\alpha, b+\alpha) = \emptyset$. Thus of course we also have $A \cap B = \emptyset$.

How to make use of these facts in arriving at our desired equality?

How does one check for cardinality of transformations?

Posted: 24 Mar 2021 08:24 PM PDT

Suppose $F$ is a finite field with $p^n$ elements (such that $p$ is a prime.) Let $V$ be a $k$-dimensional vector space over $F$. Then count the cardinality of the following:

(a) The number of linear transformations $T \colon V → V $.

(b) The number of invertible linear transformations $T \colon V → V $.

(c) The number of linear transformations $T : V → V$ with determinant $1$.

My approach(s):

a. I have tried to make a $k \times k$ matrix, and I'm aware that each entry could take $p^n$ values. I suppose the total number of possible transformation matrices with such dimensions would be $(p^n)^{k^2}$.

b. For the first row we can choose any of the $(p^n)^k$ possibilities except the row where all the entries are $0$.

For the second row, we can have $(p^n)^k$ values minus those that are scalar multiples of the first row (how many such scalars exist? I'm unsure of this). For the third row, we will have $(p^n)^k$ minus the number of linear combinations of rows 1 and 2. I do not know how many such vectors there are. Is it $2k$? That seems wrong to me; kindly correct me on that.

c. Perhaps take all matrices with determinant $= k$, and the number with determinant $1$ is $|a_k| / k$?
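For reference, the standard counts are $(p^n)^{k^2}$ for (a), $\prod_{i=0}^{k-1}\big((p^n)^k - (p^n)^i\big)$ for (b), and the (b) count divided by $p^n - 1$ for (c). Here is a brute-force check of those formulas in the tiny case $p^n = 2$, $k = 2$ (my own sanity check, not part of the question):

```python
from itertools import product

# Enumerate all 2x2 matrices over F_2 and count by determinant.
q, k = 2, 2
total = invertible = det1 = 0
for a, b, c, d in product(range(q), repeat=4):
    total += 1
    det = (a * d - b * c) % q
    if det != 0:
        invertible += 1
    if det == 1:
        det1 += 1

assert total == q ** (k * k)                  # q^(k^2) matrices in all
assert invertible == (q**k - 1) * (q**k - q)  # |GL_2(F_2)| = 3 * 2 = 6
assert det1 == invertible // (q - 1)          # |SL| = |GL| / (q - 1)
```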

$X$ is complete iff $\sum_{n=1}^\infty \|x_n\| < \infty \implies \sum_{n=1}^\infty x_n$ converges (Carothers, Theorem $7.12$)

Posted: 24 Mar 2021 08:38 PM PDT

A normed vector space $X$ is complete iff $\sum_{n=1}^\infty \|x_n\| < \infty \implies \sum_{n=1}^\infty x_n$ converges.

I have some questions to ask about the second part of the proof, i.e. $[\Leftarrow]$ direction:

  1. The author says, "As always, it is enough to find a subsequence of $(x_n)$ that converges". What do they mean by "As always"? Where else is this technique employed and why is it so obvious? Suppose I do find a subsequence $(x_{n_k})$ that converges. That does not imply that $(x_n)$ converges as well. $\color{blue}{\text{(Resolved).}}$
  2. The author asks us to choose a subsequence $(x_{n_k})$ such that $$\|x_{n_{k+1}} - x_{n_k}\|< 2^{-k}, \text{ for all }k$$ What allows us to do this? Given a Cauchy sequence $(x_n)$, I know that $$\forall \epsilon > 0 \exists N\in\Bbb N \forall m,n\ge N (\|x_m-x_n\| < \epsilon)$$ I could put $\epsilon = 2^{-k}$ and find the corresponding $N$, but that doesn't give me a subsequence! $\color{red}{\text{(Unresolved).}}$

Attached is the proof for reference:

[image of the proof omitted]

Thanks a lot!

Lower semibounded implies symmetric

Posted: 24 Mar 2021 08:22 PM PDT

A quadratic form is a map $q: Q(q) \times Q(q) \rightarrow \mathbb{C}$, where $Q(q)$ is a dense linear subset of the Hilbert space $H$. If $q(\phi,\psi)=\overline{q(\psi,\phi)}$, then we say $q$ is symmetric. If $q(\phi,\phi)\geq -M||\phi||^2$ for some $M$, we say $q$ is semibounded.

Claim: if $q$ is semibounded, then it is automatically symmetric, provided $H$ is complex.

Could anyone give me a hint for the proof?

Proving pullback of projection to quotient manifold is always injective on de Rham cohomology

Posted: 24 Mar 2021 08:11 PM PDT

I found the following result on Wikipedia:

We can also find explicit generators for the de Rham cohomology of the torus directly using differential forms. Given a quotient manifold ${\displaystyle \pi :X\to X/G}$ and a differential form ${\displaystyle \omega \in \Omega ^{k}(X)}$ we can say that $\omega $ is ${\displaystyle G}$-invariant if given any diffeomorphism induced by ${\displaystyle G}$, ${\displaystyle \cdot g:X\to X}$ we have ${\displaystyle (\cdot g)^{*}(\omega )=\omega }$. In particular, the pullback of any form on ${\displaystyle X/G}$ is ${\displaystyle G}$-invariant. Also, the pullback is an injective morphism.

which states that the pullback $\pi^*: H^k_{\mathrm{dR}}(X/G) \to H^k_{\mathrm{dR}}(X)$ is injective. I was able to prove this, but only in the case where the group $G$ is finite (it's somewhat of an "averaging" argument). But I can't see how to prove it in the general case (the averaging argument breaks down, since a sum of infinitely many terms is not defined). I'd really appreciate some help on this!

Minimal embedding of the Grassmannian into Projective space (or a "weighted Grassmannian" into Euclidean space)

Posted: 24 Mar 2021 08:33 PM PDT

Let $Grass(r,k)$ be the set of all $r$-dimensional subspaces of $\Bbb R^k$.

It is well known that $Grass(r,k)$ embeds isometrically as a projective variety into the projectivization of the r'th power of the exterior algebra $\Lambda^r(\Bbb R^k)$; this is the Plücker embedding.

Another way to think of this is that it embeds "weighted, signed subspaces" into Euclidean space, and I will often equivocate between these two representations.

This is nice and simple to work with, because you can just take the wedge product of a set of vectors to get a representation of the subspace that they span.

The problem is that this representation is extremely space-inefficient. We are taking an $r(k-r)$-dimensional manifold and putting it into a space of dimension $\binom{k}{r}$. This is much larger than the minimum embedding dimensions guaranteed by results like the Whitney and Nash embedding theorems, which can be thought of as guaranteeing a minimum dimension for a "weighted Grassmannian" to be embedded in Euclidean space.

My question:

What is a better isometric embedding for the Grassmannian as a projective variety into projective real space?


I am mostly interested in representing "weighted, signed subspaces" - in the exterior algebra, if you can take the wedge product of a set of vectors, the $\ell_2$ norm of this multivector represents the volume of the parallelotope generated by the vectors, which is useful. But, I think this is equivalent to just finding a better embedding of the projective Grassmannian into projective real space and then making the homogeneous coordinates non-homogeneous.


There is a paper on Cremona linearizations which says that there should be a birational mapping on the $n$th exterior power that maps the embedded Grassmannian to a linear subspace. So, for example, if you map $Grass(2,4)$ using this method, the Pfaffian will map to a linear subspace, so that you can quotient by it to get a space with 5 homogeneous coordinates, theoretically equal to the dimension of the embedded Grassmannian as a projective variety.

I really don't know much about Cremona transformations but tried to follow the paper, which gives an explicit construction. It seems to work only for those elements of $\Lambda^r(\Bbb R^k)$ whose first coordinate is nonzero, and I don't see how to make it work otherwise. It would be nice to have something that works on any element of $\Lambda^r(\Bbb R^k)$, expressed in homogeneous coordinates. I'm also not sure whether this embedding is isometric.

Show that $ \lim\limits_{x\to 0} f(x) =\lim\limits_{x\to 0} f(x \lambda)$, in the sense that if one limit exists then so does the other. [closed]

Posted: 24 Mar 2021 08:29 PM PDT

Show that $ \lim\limits_{x\to 0} f(x) =\lim\limits_{x\to 0} f(x \lambda)$, in the sense that if one limit exists then so does the other.

Formula to project a vector onto a plane

Posted: 24 Mar 2021 08:15 PM PDT

I have a reference plane formed by 3 points in $\mathbb{R}^3$ – $A, B$ and $C$. I have a 4th point, $D$. I would like to project the vector $\vec{BD}$ onto the reference plane, as well as project the vector $\vec{BD}$ onto the plane orthogonal to the reference plane along $\vec{AB}$. Ultimately, I need the angle between $\vec{AB}$ and $\vec{BD}$ both when the vectors are projected onto the reference plane and when projected onto the orthogonal plane. I have completed tutorials on projecting a vector onto a line in $\mathbb{R}^2$ but haven't figured out how to translate that to $\mathbb{R}^3$...

Please note the diagram only shows the reference plane as parallel to the $xy$ plane for the sake of convenience. In my examples, the reference plane could be at any orientation. I am using 3D coordinate data from an electromagnetic motion tracker, and the reference plane will be constantly moving. I understand the cross product of the two vectors $\vec{AB} \times \vec{BC}$ results in the normal vector to their plane. I have 2 different methods to calculate that, but am a little lost once I get to this point.

I have seen both unit vector notation and column vector notation, but am confused by moving between the different styles. It would be most helpful if you could tell me the formal name of the notation/equations you use. I know the scalar equation of a plane through point $(a,b,c)$ with normal $\hat{n} = [n_1, n_2, n_3]$ is: $$ n_1(x-a) + n_2(y-b) +n_3(z-c) = 0 $$ and the standard linear equation definition is: $$ Ax + By + Cz = D $$ but I could use some tips on when the equation is $=D$ and when it is $=0$, as well as any additional equations for a plane and the circumstances in which the different forms are appropriate. I hope I've made sense here. Thanks for any help you can provide.
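The projection onto the plane is "subtract the normal component": $\mathrm{proj}(\vec{BD}) = \vec{BD} - (\vec{BD}\cdot\hat n)\,\hat n$, where $\hat n$ is the unit normal from the cross product. A minimal pure-Python sketch with hypothetical coordinates:

```python
import math

def sub(u, v): return [u[i] - v[i] for i in range(3)]
def dot(u, v): return sum(u[i] * v[i] for i in range(3))
def scale(u, s): return [x * s for x in u]
def cross(u, v):
    return [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]

A, B, C, D = [0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 1]  # hypothetical points

n = cross(sub(B, A), sub(C, B))                  # normal to the plane ABC
nhat = scale(n, 1 / math.sqrt(dot(n, n)))        # unit normal
BD = sub(D, B)
proj = sub(BD, scale(nhat, dot(BD, nhat)))       # BD minus its normal component
print(proj)  # lies in the plane, since dot(proj, n) == 0
```

With the projected vector in hand, the angle between $\vec{AB}$ and the projection follows from the usual $\cos\theta = \frac{u\cdot v}{\lVert u\rVert\,\lVert v\rVert}$ formula.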
