Saturday, April 30, 2022

Recent Questions - Mathematics Stack Exchange



On double dual of C* algebra

Posted: 30 Apr 2022 11:51 AM PDT

Can anyone provide examples of the double dual of a $C^{*}$-algebra, other than $K(\mathcal{H})$ and the commutative cases? Thanks in advance!

A deduction from showing that all non-trivial $k$-fold products $x_1 \dots x_k$ are zero.

Posted: 30 Apr 2022 11:50 AM PDT

Here is the question whose solution I was able to work out:

Suppose that $X$ has a covering, $X = U_1 \cup U_2 \cup \dots \cup U_k$ by contractible open sets $U_i.$ Show that all non-trivial $k$-fold products $x_1 \dots x_k$ in $H^* X$ are zero.

And here is the question I want to solve:

Show that an atlas for $\mathbb RP^n$ must have at least $n + 1$ charts.

My thoughts:

I feel that if we assume there is an atlas for $\mathbb RP^n$ with only $n$ charts, then $\mathbb RP^n$ has a covering $\mathbb RP^n = U_1 \cup U_2 \cup \dots \cup U_n$ by open sets $U_i$ (I am not sure, though, whether every chart domain is contractible), and then by the above problem all non-trivial $n$-fold products $x_1 \dots x_n$ in $H^* \mathbb RP^n$ would be zero. But this is not the case, as we know that the cohomology ring of $\mathbb RP^n$ is non-trivial.

The atlas definition in the question is: a set of charts whose domains cover the manifold. And by a chart of an $n$-dimensional manifold $M$, we mean a diffeomorphism (or a homeomorphism, for a topological manifold) $h: U \to V,$ where $U \subset M$ and $V \subset \mathbb R^n,$ with $V$ diffeomorphic (resp. homeomorphic) to $\mathbb R^n.$

Any corrections to my thoughts and providing me with the correct solution will be greatly appreciated!

Solving Trigonometry Diagrams

Posted: 30 Apr 2022 11:48 AM PDT

Find non-zero $a$ and $b$ such that $\lim[a^n(u_n - 1)] = b$ where $u_1 = 0$ and $4u_{n + 1} = u_n + \sqrt{6u_n + 3}, \forall n \in \mathbb Z^+$.

Posted: 30 Apr 2022 11:42 AM PDT

Consider sequence $(u_n)$, defined as $\left\{ \begin{aligned} u_1 &= 0\\ u_{n + 1} &= \dfrac{u_n + \sqrt{6u_n + 3}}{4}, \forall n \in \mathbb Z^+ \end{aligned} \right.$. Knowing that $a$ and $b$ are two real numbers not equal to zero such that $\lim[a^n(u_n - 1)] = b$, calculate the value of $a + b$.

[For context, this question is taken from an exam whose format consists of 50 multiple-choice questions with a time limit of 90 minutes. Calculators are the only electronic device allowed in the testing room. (You know those scientific calculators sold at stationery stores and sometimes bookstores? They are the goods.) I need a solution that works within these constraints. Thanks for your cooperation, as always. (Do I need to sound this professional?)

By the way, if the wording of the problem sounds rough, sorry for that. I'm not an expert at translating documents.]

Why was this question there? WHY WAS THIS IN MY MID-SEMESTER EXAM? (>_<)

I don't need answers, I need to be angered. (This is a joke, please take it with a grain of salt. I do actually need answers to this problem.)

Here are my observations.

According to WolframAlpha, the general term of the sequence above is $$u_n = \left(1 - \dfrac{1}{2^{n - 1}}\right)\left(1 - \dfrac{2 - \sqrt 3}{2^{n - 1}}\right), \forall n \in \mathbb Z^+$$

If we define sequences $(v_n)$ and $(w_n)$ as $v_n = 1 - \dfrac{1}{2^{n - 1}}, \forall n \in \mathbb Z^+$ and $w_n = 1 - \dfrac{2 - \sqrt 3}{2^{n - 1}}, \forall n \in \mathbb Z^+$ respectively, then it can be obtained that $u_n = v_nw_n, \forall n \in \mathbb Z^+$ and $$\left\{ \begin{aligned} v_1 &= 0\\ v_{n + 1} &= \dfrac{v_n + 1}{2}, \forall n \in \mathbb Z^+ \end{aligned} \right. \text{ and } \left\{ \begin{aligned} w_1 &= -1 + \sqrt 3\\ w_{n + 1} &= \dfrac{w_n + 1}{2}, \forall n \in \mathbb Z^+ \end{aligned} \right.$$

That means $$\begin{aligned} v_{n + 1}w_{n + 1} = \dfrac{v_nw_n + \sqrt{6v_nw_n + 3}}{4} &\iff \dfrac{(v_n + 1)(w_n + 1)}{4} = \dfrac{v_nw_n + \sqrt{6v_nw_n + 3}}{4}\\ &\iff v_n + w_n + 1 = \sqrt{6v_nw_n + 3}\\ &\iff w_n = (2v_n - 1) + \sqrt 3(1 - v_n), \forall n \in \mathbb Z^+ \end{aligned}$$

Anyhow, the sequence $(u_n)$ itself is strictly increasing and bounded. More specifically, we have $u_n \in [0, 1), \forall n \in \mathbb Z^+$. The same goes for the sequences $(v_n)$ and $(w_n)$.

Actually, $v_{n + 1} = \dfrac{v_n + 1}{2}$ and $w_{n + 1} = \dfrac{w_n + 1}{2}$, those look familiar, hmmm~

Of course, I forgot. If we let $v_n = \cos a_n, \forall n \in \mathbb Z^+$ and $w_n = \cos b_n, \forall n \in \mathbb Z^+$, then it can be obtained that $$\left\{ \begin{aligned} a_1 &= \dfrac{\pi}{2}\\ \cos(a_{n + 1}) &= \cos^2\dfrac{a_n}{2}, \forall n \in \mathbb Z^+ \end{aligned} \right. \text{ and } \left\{ \begin{aligned} b_1 &= \arccos(-1 + \sqrt 3)\\ \cos(b_{n + 1}) &= \cos^2\dfrac{b_n}{2}, \forall n \in \mathbb Z^+ \end{aligned} \right.$$

Nevermind, that didn't work as well as I had thought.

What was I doing this entire time, you might be wondering? Well, I'm trying to find the general term of the sequence $(u_n)$ without the need for a laptop, since you can't take one into the testing room.

Anyhow, for the second part of the problem, first let $\lim u_n = m$; then we have that $m = \dfrac{m + \sqrt{6m + 3}}{4} \iff m = 1$. Again, the same goes for the sequences $(v_n)$ and $(w_n)$. Furthermore, $$\begin{aligned} \left\{ \begin{aligned} 2^n(v_n - 1) &= -2\\ 2^n(w_n - 1) &= 2\sqrt 3 - 4 \end{aligned} \right. &\iff \left\{ \begin{aligned} 2^n(v_n + w_n - 2) &= 2\sqrt{3} - 6\\ 4^n(v_n - 1)(w_n - 1) &= 8 - 4\sqrt{3} \end{aligned} \right.\\ &\implies 4^n\left[u_n - \left(\dfrac{2\sqrt{3} - 6}{2^n} + 2\right) + 1\right] = 8 - 4\sqrt{3}\\ &\iff 4^n(u_n - 1) = (8 - 4\sqrt{3}) - 2^n(6 - 2\sqrt{3})\\ &\iff 2^n(u_n - 1) = \dfrac{(8 - 4\sqrt{3})}{2^n} - (6 - 2\sqrt{3}), \forall n \in \mathbb Z^+\\ &\implies \lim[2^n(u_n - 1)] = 2\sqrt{3} - 6 \end{aligned}$$

In conclusion, $a + b = 2 + (2\sqrt{3} - 6) = 2\sqrt{3} - 4$.
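For what it's worth, both the quoted closed form and the limit can be sanity-checked numerically in a few lines of Python (a verification sketch only, not the by-hand derivation the exam requires):

```python
from math import sqrt

def u_closed(n):
    # Closed form quoted from WolframAlpha in the question
    return (1 - 1 / 2**(n - 1)) * (1 - (2 - sqrt(3)) / 2**(n - 1))

# Recurrence: u_1 = 0, u_{n+1} = (u_n + sqrt(6 u_n + 3)) / 4
u = 0.0
for n in range(1, 25):
    assert abs(u - u_closed(n)) < 1e-9   # recurrence matches closed form
    u = (u + sqrt(6 * u + 3)) / 4

# The limit of 2^n (u_n - 1) should be 2*sqrt(3) - 6
assert abs(2**20 * (u_closed(20) - 1) - (2 * sqrt(3) - 6)) < 1e-4
```

This is consistent with $a = 2$ and $b = 2\sqrt 3 - 6$.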

My question is more focused on the first part of the problem: how to find the general term of the sequence $(u_n)$. As always, thanks for reading (and even more if you could help~)

Question about graph minors and average degree

Posted: 30 Apr 2022 11:42 AM PDT

Let's define the following two properties first. A graph $G$ is called dense if it has no minor with a higher average degree. A $k$-chromatic graph $G$ is called saturated if every minor of $G$ is $k$-colorable (or, assuming Hadwiger's conjecture, if $G$ does not have $K_{k+1}$ as a minor).

I was trying to come up with properties and examples of dense graphs, and had the conjecture that all dense graphs are also saturated. If someone could come up with a counterexample or a proof, or any other interesting property of dense graphs, that would be great.

Ordinary least squares. Gauss Markov theorem

Posted: 30 Apr 2022 11:35 AM PDT

Let $$Y=X\beta+\varepsilon,\;\;\;\;\; Y,\varepsilon\in\mathbb{R}^n,\;X\in\mathbb{R}^{n\times p},\;\beta\in\mathbb{R}^p$$ be the standard linear regression model and $\widehat{\beta}$ the OLS estimate. Is the following true, or must additional conditions be imposed?

Generalized Gauss Markov theorem
The OLS estimate $\widehat{\theta}=a^T\widehat{\beta}$ is the best linear unbiased estimate of the parameter $\theta=a^T\beta$, i.e. for any unbiased linear estimate $b^TY$ we have $$\operatorname{var}\left[\widehat{\theta}\right]\leq \operatorname{var}\left[b^TY\right].$$

On the countable sum theorem

Posted: 30 Apr 2022 11:34 AM PDT

[Image of the theorem statement and its proof, omitted here.]

I am trying to understand the above theorem. I get most of the proof, but I have trouble understanding why the functions $g_i$ were introduced. Why can't we just define $g:X\rightarrow S^n$ by $g(x)=f_i(x)$ for $x\in B_i\cup \bar{V}_i$, instead of $g(x)=g_i(x)$ for $x\in V_i$? What is the need for the $g_i$ functions?

Source: A. R. Pears, Dimension theory of general spaces.

Sign of the determinant of the Jacobian matrix is preserved in connected domains

Posted: 30 Apr 2022 11:32 AM PDT

Suppose we have a continuously differentiable map between subsets of Euclidean spaces such that the domain is connected and the Jacobian determinant never vanishes. Why is the sign of the Jacobian determinant preserved on this domain?

Note: I am asking this question because this argument is commonly used in problems involving orientability. An example is taking an atlas with only two charts whose domains have connected intersection...

Proving the contrapositive of a statement after proving its converse (If $n$ is prime then $2^n-1$ is also prime)

Posted: 30 Apr 2022 11:53 AM PDT

I have been given the following problem:

Let n > 1 be a positive integer. Let P be the following statement:

If n is a prime number then $2^n - 1$ is a prime number

Write down the converse of the statement P. Is it always true? Justify your answer.

I have solved this part by using a proof by contradiction, in the same way that is shown in this question. However, this problem is followed up with the following:

Write down the contrapositive of the statement P.

Is it always true? Justify your answer.

From my understanding the contrapositive of P is:

If $2^n-1$ is not prime, then n is not prime

What I'm struggling with is how to go about proving this. Could I just use the same method as the previous part of the question (again, see here)? Or would I have to get the contrapositive of the contrapositive (resulting back in P) and prove that in some way?

Convergence of a series induced from given two series

Posted: 30 Apr 2022 11:30 AM PDT

I could not get any counter example so I am asking this question.

Given $\frac{1}{n}\sum_{k=0}^{n}ka_k\to 0$ and $\frac{1}{n}\sum_{k=0}^{n}kb_k\to 0$ as $n\to \infty$, where $a_k,b_k \in (0,1)$.

Is it true that $\sum_{k=0}^n(a_k-b_k)\to 0$?

Thanks in advance!

When $M$ is a payoff matrix, what exactly do $\max\limits_{i} \min\limits_{j} M_{ij}$ and $\min\limits_{j} \max\limits_{i} M_{ij}$ mean?

Posted: 30 Apr 2022 11:27 AM PDT

I am reading a book on randomized algorithms leading up to Yao's Technique and I stumbled on the following min/max notation for some payoff matrix $M$:

$$\color{green}{(1)}: \max\limits_{i} \min\limits_{j} M_{ij} $$ $$\color{blue}{(2)}: \min\limits_{j} \max\limits_{i} M_{ij} $$

So what I am looking for is for someone to perhaps phrase this more eloquently than I have.

The way that I understand $\color{green}{(1)}$ and $\color{blue}{(2)}$ is that $i$ is the strategy/index the row player R picks and $j$ is the index/strategy that the column player C picks. Then $\color{green}{(1)}$ means that we let $i$ vary over all of the rows, find the minimum of each row, and then take the maximum of all of those minimums, and vice versa for $\color{blue}{(2)}$. So each is eventually just a single value.
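That reading is right, and it can be illustrated concretely (with a made-up payoff matrix of my own):

```python
# Toy 3x3 payoff matrix: rows = R's strategies, columns = C's strategies
M = [
    [3, 1, 4],
    [1, 5, 9],
    [2, 6, 5],
]

maximin = max(min(row) for row in M)                       # (1): row minima 1,1,2 -> max
minimax = min(max(row[j] for row in M) for j in range(3))  # (2): col maxima 3,6,9 -> min
print(maximin, minimax)  # 2 3
```

Here $\max_i\min_j M_{ij} = 2 \le 3 = \min_j\max_i M_{ij}$, matching the general inequality between the two quantities.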

Sequential Optimization of an Inverse Matrix Norm

Posted: 30 Apr 2022 11:46 AM PDT

I have a matrix of the following form:

$H_{t} = \sum_{i=1}^{t}x_{i}x_{i}^{T} + \lambda I$, where $x_{i} \in X \subset \mathbb{R}^{d}$, $X$ being a compact set, $\lambda > 0$, and $I$ is the identity matrix in $d$ dimensions.

Now I am sequentially solving an optimization problem of the form:

$x_{t}^{*} =\underset{x_t \in X}{\operatorname{argmin}} a^{T}H_{t}^{-1}a$, where $a \in X$ is a fixed non-zero vector.

I want to show that the objective $a^{T}H_{t}^{-1}a$ goes to zero as t goes to infinity. How can I show that?
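A numerical illustration of one special case (my own choice of $x_t = a$ at every step, not the general argmin policy): then $H_t = \lambda I + t\,aa^T$, and the Sherman-Morrison formula gives $a^TH_t^{-1}a = \lVert a\rVert^2/(\lambda + t\lVert a\rVert^2) \to 0$ like $1/t$. Whether the objective vanishes in general depends on whether the chosen $x_t$ keep adding mass in the direction of $a$.

```python
import numpy as np

# Special case: every x_t equals a, so H_t = lam*I + t * a a^T and
# Sherman-Morrison gives a^T H_t^{-1} a = ||a||^2 / (lam + t ||a||^2).
rng = np.random.default_rng(0)
d, lam = 5, 0.5
a = rng.normal(size=d)

H = lam * np.eye(d)
vals = []
for t in range(1, 201):
    H += np.outer(a, a)
    vals.append(a @ np.linalg.solve(H, a))

closed_form = (a @ a) / (lam + 200 * (a @ a))
assert abs(vals[-1] - closed_form) < 1e-10      # matches the closed form
assert all(u > v for u, v in zip(vals, vals[1:]))  # strictly decreasing
```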

There are uncountably many subgroups of $\text{Gal}( \bar{\mathbb{Q}}/\mathbb{Q})$ of index $2$.

Posted: 30 Apr 2022 11:30 AM PDT

I am reading infinite Galois theory, and the motivation for introducing the Krull topology seems to be that this is the way in which we can solve the problem that the Galois correspondence fails. This is the counterexample given:

If $G=\text{Gal}( \bar{\mathbb{Q}}/\mathbb{Q})$ there are uncountably many subgroups of $G$ with index $2$, while the number of subfields of $\bar{\mathbb{Q}}$ of degree $2$ over $\mathbb{Q}$ is countable, thus there cannot be a bijection.

I understand why the number of subfields of $\bar{\mathbb{Q}}$ of degree $2$ over $\mathbb{Q}$ is countable (these are of the form $\mathbb{Q}(\alpha)$ with $\alpha$ a root of an irreducible polynomial of degree $2$, and there are countably many possible $\alpha$) but I don't get why there are uncountably many subgroups of $G$ with index $2$.

Any hints or help will be appreciated.

Minimizing sum of squared vector elements with given conditions

Posted: 30 Apr 2022 11:29 AM PDT

Let ${a\in\mathbb{R}^n}$, ${b\in\mathbb{R}}$. Solve

$$\min \sum_{i=1}^nx_i^2$$

subject to the conditions $a^Tx\geq b$, $x\geq 0$.
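This problem admits a closed-form KKT solution when $b>0$: stationarity gives $2x = \lambda a + \nu$, and complementary slackness forces $x_i > 0$ only where $a_i > 0$, yielding $x^* = b\,a^+/\lVert a^+\rVert^2$ with $a^+ = \max(a,0)$ (and $x^* = 0$ when $b \le 0$, since $0$ is then feasible). A numerical check of this sketch:

```python
import numpy as np

def min_norm_solution(a, b):
    # KKT sketch: coordinates with a_i <= 0 are forced to zero,
    # leaving x* = b * a_plus / ||a_plus||^2 on the boundary a^T x = b.
    a_plus = np.maximum(a, 0.0)
    if b <= 0:
        return np.zeros_like(a)   # x = 0 is already feasible and optimal
    return b * a_plus / (a_plus @ a_plus)

a, b = np.array([2.0, -1.0, 3.0]), 4.0
x_star = min_norm_solution(a, b)
assert a @ x_star >= b - 1e-12 and (x_star >= 0).all()   # feasible

# No random feasible point should have a smaller norm
rng = np.random.default_rng(1)
for _ in range(1000):
    y = rng.uniform(0, 1, 3)
    s = a @ y
    if s > 0:
        y *= b / s                 # scale onto the boundary a^T y = b
        assert y @ y >= x_star @ x_star - 1e-9
```

(The optimality of $x^*$ also follows from Cauchy-Schwarz: for feasible $y \ge 0$, $b \le a^Ty \le (a^+)^Ty \le \lVert a^+\rVert\,\lVert y\rVert$.)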

How do I draw the image of the following complex region under the power map?

Posted: 30 Apr 2022 11:47 AM PDT

Let me denote $R=\left\{0<\operatorname{Arg}(z)<\frac{\pi}{6}\right\}$. I want to understand what $f(R)$ looks like if $f(z)=z^i$.

My idea was that I first compute the image for an arbitrary $z\in R$ and then try to draw it.

So let $z\in R$; then $z=re^{i\theta}$ where $r\in (0,\infty)$ and $\theta \in \left(0,\frac{\pi}{6}\right)$. Then $$f(z)=f(re^{i\theta})=e^{i(\log(r)+i\theta)}=\frac{1}{e^\theta}e^{i\log(r)}$$ where $\theta\in \left(0, \frac{\pi}{6}\right)$ and $\log(r)\in (-\infty,\infty)$.

So now our new radius is $\frac{1}{e^\theta}$ and the new argument is $\log(r)$.

But this somehow seems a bit strange to me, is this really correct? And if yes how do I draw it now from here?

My idea is that the new radius, let's denote it $r'$, satisfies $\frac{1}{e^{\pi/6}}<r'<\frac{1}{e^0}=1$, but then the argument ranges over all of $(-\infty,\infty)$?
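The computation itself checks out numerically (note that $r\in(0,\infty)$ makes $\log r$ range over all of $(-\infty,\infty)$, so the argument of the image point really is unrestricted):

```python
import cmath, math

# Check: for z = r e^{i theta}, z^i = e^{-theta} e^{i log r} (principal branch),
# so |z^i| depends only on theta and arg(z^i) only on r.
for r in (0.5, 1.0, 3.0):
    for theta in (0.1, math.pi / 12, math.pi / 6 - 1e-3):
        z = r * cmath.exp(1j * theta)
        w = z ** 1j
        assert abs(abs(w) - math.exp(-theta)) < 1e-12
        assert abs(cmath.phase(w) - math.log(r)) < 1e-12  # |log r| < pi here
```

Since the modulus sweeps through $(e^{-\pi/6}, 1)$ and the argument through all of $\mathbb R$, the picture one would draw is the open annulus $e^{-\pi/6} < |w| < 1$, with each circle of that annulus traced out infinitely often as $r$ runs over $(0,\infty)$.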

Count of 3 digit numbers having digits in non-increasing order

Posted: 30 Apr 2022 11:50 AM PDT

What is the count of 3-digit positive numbers such that all the digits(from left to right) are in non-increasing order of value?

For example:

  1. 633 is counted, as its digits from left to right are in non-increasing order, i.e. 6 >= 3 >= 3.

  2. 323 is not counted, as its digits are not in non-increasing order.

I have tried this question by creating cases for the middle digit, from 0 to 9, but couldn't get the correct answer. The correct answer for this question is 219. It would be really helpful if someone could let me know how to solve this.
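One clean way to reach 219: a non-increasing digit string is determined by the multiset of its digits, and there are $\binom{10+3-1}{3}=\binom{12}{3}=220$ size-3 multisets from $\{0,\dots,9\}$; only the all-zero multiset fails to sort into a valid 3-digit number, leaving 219. A brute-force check agrees:

```python
# Count 3-digit numbers whose digits are non-increasing left to right
count = sum(
    1
    for n in range(100, 1000)
    if str(n)[0] >= str(n)[1] >= str(n)[2]   # character order matches digit order
)
print(count)  # 219
```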

Determinant of the linear transformation $S: \mathbb{K}^{n\times n} \to \mathbb{K}^{n\times n}$ defined by $S(X) = X^t$

Posted: 30 Apr 2022 11:26 AM PDT

$\mathbb{K}$ is a field and $n \geq 1$.

Let $S: \mathbb{K}^{n\times n} \to \mathbb{K}^{n\times n}$ be the linear transformation defined by $S(X) = X^t$.

What is the determinant of $S$?

I know that the domain has the $n^2$ basis vectors $E_{ij}$, where every entry is $0$ except the $(i,j)$ entry, which is $1$.

But I have no idea how to calculate the determinant of $S$. I tried to calculate the "matrix" of $S$, but with no success.
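A useful observation: on the basis $\{E_{ij}\}$, transposition acts as a permutation — it fixes the $n$ diagonal matrices $E_{ii}$ and swaps the $\binom{n}{2}$ off-diagonal pairs $E_{ij}\leftrightarrow E_{ji}$, so $\det S = (-1)^{n(n-1)/2}$. A small numerical check (the basis-ordering helper below is my own construction):

```python
import numpy as np
from itertools import product

def matrix_of_transpose(n):
    # n^2 x n^2 matrix of S in the basis E_11, E_12, ..., E_nn,
    # where E_ij gets the flat index i*n + j; S sends E_ij to E_ji.
    S = np.zeros((n * n, n * n))
    for i, j in product(range(n), repeat=2):
        S[j * n + i, i * n + j] = 1.0
    return S

for n in range(1, 6):
    det = round(np.linalg.det(matrix_of_transpose(n)))
    assert det == (-1) ** (n * (n - 1) // 2)
```

The determinant of a permutation matrix is the sign of the permutation, and each swapped pair contributes one transposition, hence the exponent $n(n-1)/2$.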

Gram-Schmidt process to construct an orthonormal basis in a finite-dimensional vector space with an indefinite scalar product.

Posted: 30 Apr 2022 11:11 AM PDT

I'm stuck on this exercise because of the indefinite scalar product. I know the process for the definite one.

The first thing I'm asked to do is to check GS is still valid for indefinite scalar product. I have to check this:

a) Check that if the subspace $W_k$ is non-degenerate for all $k = 1, \dots, m$, then the Gram-Schmidt procedure works.

What I have tried is to use the definition of a non-degenerate subspace: if $x \in W_k$ satisfies $g(x,y)=0$ for all $y\in W_k$, then $x=0$.

But I don't know how to proceed. I have constructed $x_1$ and $x_2$ with the GS process, and my intuition says that the definition of non-degeneracy implies that the projectors are well defined. The problem comes with a hint.

Hint. Let $S = (v_1, \dots, v_m) \subset V$ be a linearly independent set which spans a non-degenerate subspace $W = L(S)$, and put $W_k := L(v_1, \dots, v_k)$ for every $k = 1, \dots, m$.

Prove $o(x^2) = o(x^2 + o(x^3))$ when $x \to 0$

Posted: 30 Apr 2022 11:47 AM PDT

Let's define $o(g(x))$ as usually:

$$ f(x) = o(g(x)) \ \text{as} \ x \to a \iff \lim_{x \to a} \frac{f(x)}{g(x)}=0, $$ where it is assumed that $g(x) \ne 0$ for all $x \ne a$.

How to prove that $o(x^2) = o(x^2 + o(x^3)) \space \text{when} \space x \to 0$?

Could I do it like this:

$$ \text{Let} \space f(x) = o(x^2 + o(x^3)) \implies \\ 0 = \lim_{x \to 0} \frac{f(x)}{x^2 + o(x^3)} = \lim_{x \to 0} \frac{f(x)}{x^2 + x^3 \frac{o(x^3)}{x^3}} = \lim_{x \to 0} \frac{f(x)}{x^2 + 0} = \lim_{x \to 0} \frac{f(x)}{x^2} \implies f(x) = o(x^2) $$

My main question here is whether multiplying and dividing $o(x^3)$ by $x^3$ like this is allowed. Is there a better way to do it?

Thanks!

A fair die is rolled five times. What is the probability that the largest number rolled is 5?

Posted: 30 Apr 2022 11:29 AM PDT

I was confused about how to approach this problem. It is from the 2021 sample paper for the MMA entrance exam of the ISI master's programme.
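A standard approach: the largest roll is $5$ exactly when every roll is $\le 5$ but not every roll is $\le 4$, so the probability is $(5^5-4^5)/6^5$. A quick enumeration confirms this:

```python
from fractions import Fraction
from itertools import product

p = Fraction(5**5 - 4**5, 6**5)
print(p)  # 2101/7776

# Brute force over all 6^5 equally likely outcomes
count = sum(1 for rolls in product(range(1, 7), repeat=5) if max(rolls) == 5)
assert Fraction(count, 6**5) == p
```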

Randomly arrange $k$ elements 'a' and $n-k$ elements 'b' into groups of size $g\leq k$. What is the probability of getting a group with all 'a'?

Posted: 30 Apr 2022 11:44 AM PDT

I have a set of $n$ elements, where $k$ are of type $a$ and $n-k$ are of type $b$. I want to randomly group all elements into groups of size $g \leq k$. I'm trying to compute the probability that at least one of the groups contains all $a$.

Example:

I have this set of $n=10$ elements, with $k=4$:

$S = \{a,a,a,a,b,b,b,b,b,b\}$

Then I randomly group all elements into groups of size $g=2$:

$G = \{(a,b), (b,b), (a,b), (a,b), (a, b)\}$

I would like to know the probability of getting at least one $(a,a) \in G$, if the grouping process is random.

I know that the number of possible groups of size $g$ is $\binom{n}{g}$, and the number of possible all-$a$ groups $(a,\dots,a)$ is $\binom{k}{g}$. But I'm not sure how to continue from here.
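For the worked example ($n=10$, $k=4$, $g=2$) the complement can be counted directly: with the groups fixed, the $a$'s occupy a uniform 4-subset of the 10 slots, and no group gets two $a$'s in $\binom{5}{4}2^4 = 80$ of the $\binom{10}{4} = 210$ placements, so the answer should be $1 - 80/210 = 13/21$. A Monte Carlo sketch (assuming, as in the example, that $g$ divides $n$) agrees:

```python
import random

def prob_all_a_group(n, k, g, trials=200_000, seed=1):
    # Shuffle the multiset and cut it into consecutive groups of size g
    random.seed(seed)
    items = ['a'] * k + ['b'] * (n - k)
    hits = 0
    for _ in range(trials):
        random.shuffle(items)
        hits += any(all(x == 'a' for x in items[i:i + g])
                    for i in range(0, n, g))
    return hits / trials

est = prob_all_a_group(10, 4, 2)
assert abs(est - 13 / 21) < 0.01   # 13/21 ~ 0.619
```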

A system of congruences

Posted: 30 Apr 2022 11:10 AM PDT

Suppose $x,y$ are integers in $[T,2T]$ and $U,V$ are different primes in $[2T^2,4T^2]$ then consider the congruences $$U^2(x+y)^2=(aU^2+bUV+cV^2)\bmod (U-V)$$ $$UV(x+y)^2=(aU^2+bUV+cV^2)\bmod (U-V)$$ $$V^2(x+y)^2=(aU^2+bUV+cV^2)\bmod (U-V)$$

$$U^2(x+y)^2=(aU^2-bUV+cV^2)\bmod (U+V)$$ $$-UV(x+y)^2=(aU^2-bUV+cV^2)\bmod (U+V)$$ $$V^2(x+y)^2=(aU^2-bUV+cV^2)\bmod (U+V)$$ where $a,b,c\in[T^2,4T^2]$, $(aU^2+bUV+cV^2)\leq256T^6$ and $a+b+c=(x+y)^2$.

Is $(aU^2+bUV+cV^2)=(Ux+Vy)^2$?

It is trivial that $$(aU^2+bUV+cV^2)=(Ux+Vy)^2+k(U-V)$$ holds for some $k=O(T^4)$ from the first three, and $$(aU^2-bUV+cV^2)=(Ux-Vy)^2+l(U+V)$$ holds for some $l=O(T^4)$ from the last three.

The question is how to show $k=l=0$.

Calculating divisor of function on elliptic curve

Posted: 30 Apr 2022 11:26 AM PDT

I read Pairings for Beginners by Craig Costello.
In the example 3.1.1 at 37-th page we consider $ E/F_{103} : y^2 = x^3 + 20x + 20$, with points $ P = (26, 20), Q = (63, 78), R = (59, 95), T = (77, 84)$ all on $E$. The author defines function $f = \frac{6y + 71x^2 + 91x + 91}{x^2 + 70x + 11} $ and states that it's divisor is $(f) = (P) + (Q) - (R) - (T)$. To check that there is no zero/pole at $\mathcal{O} = (0 : 1 : 0)$ we look at $f$ in projective space $f = \frac{6YZ + 71X^2 + 91XZ + 91Z^2}{X^2 + 70XZ + 11Z^2}$. At infinity both the numerator and the denominator are zero and the book says that they cancels out. But I see that if we substitute $Y=1$ and $X=Z \to 0$ the numerator will tend to zero linearly while the denominator quadraticly. And that should give us first order pole at $\mathcal{O}$.
My version of $f$ is $f' =\frac{y+4x+82}{y+75x+12}$ and it clearly evaluates to $1$ at $\mathcal{O}$. Do I calculating divisors wrong?
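The finite part of the claimed divisor is easy to verify directly (this only checks that the numerator vanishes at $P,Q$ and the denominator at $R,T$; vanishing orders at $\mathcal O$ still need a uniformizer):

```python
p = 103
on_curve = lambda x, y: (y * y - (x**3 + 20 * x + 20)) % p == 0
num = lambda x, y: (6 * y + 71 * x * x + 91 * x + 91) % p   # numerator of f
den = lambda x, y: (x * x + 70 * x + 11) % p                # denominator of f

P, Q, R, T = (26, 20), (63, 78), (59, 95), (77, 84)
assert all(on_curve(x, y) for x, y in (P, Q, R, T))
assert num(*P) == 0 and num(*Q) == 0     # zeros of f at P and Q
assert den(*R) == 0 and den(*T) == 0     # poles of f at R and T
```

As for $\mathcal O$: I believe the subtlety is that $X = Z$ is not a path along $E$. On the curve (set $Y=1$, so $Z = X^3 + 20XZ^2 + 20Z^3$), $Z$ vanishes to order $3$ in the local parameter while $X$ vanishes to order $1$, so both the numerator (dominated by $71X^2$) and the denominator (dominated by $X^2$) vanish to order $2$ and genuinely cancel.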

How to solve the system of nonlinear equations in N-R method or other numerical methods?

Posted: 30 Apr 2022 11:46 AM PDT

Consider the system of infinite series \begin{align} &F=x+\frac{y^{3^2}}{3}+\frac{x^{3^5}}{3^2}+\frac{y^{3^7}}{3^3}+\frac{x^{3^{10}}}{3^4}+\frac{y^{3^{12}}}{3^5}+\cdots=0 \\ &G=y+\frac{x^{3^3}}{3}+\frac{y^{3^5}}{3^2}+\frac{x^{3^{8}}}{3^3}+\frac{y^{3^{10}}}{3^4}+\frac{y^{3^{13}}}{3^5}+\cdots=0. \end{align}

I want to approximate its zeros using the Newton-Raphson process or any other numerical method.

Since the above system consists of infinite series, we have no technique available unless we truncate the system. I am considering the following truncated system \begin{align} &F(x,y)=x+\frac{y^{3^2}}{3}+\frac{x^{3^5}}{3^2}+\frac{y^{3^7}}{3^3}+\frac{x^{3^{10}}}{3^4}=0 \\ &G(x,y)=y+\frac{x^{3^3}}{3}+\frac{y^{3^5}}{3^2}+\frac{x^{3^{8}}}{3^3}+\frac{y^{3^{10}}}{3^4}=0. \end{align}

This one I am going to try at first.


Now, in my previous post, I asked about solving the following system \begin{align} &f(x,y)=x+\frac{y^{2^2}}{2}+\frac{x^{2^5}}{2^2}+\frac{y^{2^7}}{2^3}=0 \\ &g(x,y)=y+\frac{x^{2^3}}{2}+\frac{y^{2^5}}{2^2}+\frac{x^{2^8}}{2^3}=0. \end{align} The Sage code, given by the user @dan_fulea, was

    IR = RealField(150)
    var('x,y')
    f = x + y^4/2 + x^32/4 + y^128/8
    g = y + x^4/2 + y^32/4 + x^256/8

    a, b = IR(-1), IR(-1)
    J = matrix(2, 2, [diff(f, x), diff(f, y), diff(g, x), diff(g, y)])
    v = vector(IR, 2, [IR(a), IR(b)])

    for k in range(9):
        a, b = v[0], v[1]
        fv, gv = f.subs({x : a, y : b}), g.subs({x : a, y : b})
        print(f'x{k} = {a}\ny{k} = {b}')
        print(f'f(x{k}, y{k}) = {fv}\ng(x{k}, y{k}) = {gv}\n')
        v = v - J.subs({x : a, y : b}).inverse() * vector(IR, 2, [fv, gv])

Using the initial guess $(-1,-1)$, the N-R process converges at $8$th iteration.


Unfortunately, the same code seems to be inefficient for solving the above truncated system $$F(x,y)=0=G(x,y).$$ It seems this system involves much higher degrees, and that is why the above code is inefficient here. It needs modification.

Is there more efficient computer code to solve the above truncated system by the N-R method or other numerical methods?
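For what it's worth, here is a plain-Python Newton sketch on the truncated system with a hand-coded Jacobian (exponents $3^2, 3^3, 3^5, 3^7, 3^8, 3^{10}$ as above). Near the origin the huge-degree terms are negligible, so the Jacobian is essentially the identity and the iteration simply falls into the trivial root $(0,0)$; locating non-trivial roots would need other starting points and, given the degrees involved, much higher working precision (e.g. `mpmath`):

```python
def F(x, y):  return x + y**9 / 3 + x**243 / 9 + y**2187 / 27 + x**59049 / 81
def G(x, y):  return y + x**27 / 3 + y**243 / 9 + x**6561 / 27 + y**59049 / 81
def Fx(x, y): return 1 + 243 * x**242 / 9 + 59049 * x**59048 / 81
def Fy(x, y): return 3 * y**8 + 2187 * y**2186 / 27
def Gx(x, y): return 9 * x**26 + 6561 * x**6560 / 27
def Gy(x, y): return 1 + 243 * y**242 / 9 + 59049 * y**59048 / 81

x, y = 0.2, -0.2
for _ in range(20):
    f, g = F(x, y), G(x, y)
    jxx, jxy, jyx, jyy = Fx(x, y), Fy(x, y), Gx(x, y), Gy(x, y)
    det = jxx * jyy - jxy * jyx
    # Newton step: (x, y) -= J^{-1} (f, g)
    x, y = x - (jyy * f - jxy * g) / det, y - (jxx * g - jyx * f) / det

assert abs(F(x, y)) < 1e-12 and abs(G(x, y)) < 1e-12
```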

Thanks

Locate a pinhole camera using a fiducial marker

Posted: 30 Apr 2022 11:55 AM PDT

[Figure of the camera, tag, and image plane setup, omitted here.]

Note: The superscript notation used refers to the frame of reference. There are three frames of reference:

  1. $w$, the world frame (in Euclidean 2-space),
  2. $c$, the camera frame (in Euclidean 2-space), and
  3. $i$, the image frame (in pixels).

Suppose we have:

  • a 1-dimensional "tag", or "fiducial marker", $t$, defined by its boundaries, $t:=((0,-1)^w,(0,1)^w)$, extending between $(0,-1)^w$ and $(0,1)^w$,
  • a "camera" at $c^w∈ℝ^2$, focal length $f^i>0$, image plane with width $w^i>0$ (whose center, $d^i:=w^i/2$, is $f^i$ units from $c$).

The optical axis is the line that includes the points $c$, $d$. Let's define $q$ as the point where the optical axis intersects the $y$ axis.

Define $b^i$ and $c^i$ as the tag's boundaries projected onto the image plane; that is, they are scalars drawn from $[0,w]$ which give some distance along the image plane, such that:

  • $b^i$ is the point intersecting the image plane of the line from $(0,1)^w$ to $c^w$ and
  • $c^i$ is the point intersecting the image plane of the line from $(0,-1)^w$ to $c^w$.

Let's assume the tag is in view of the camera, that is, assume that $b^i,c^i∈[0,w]$.

Finally, let $R$ be a $2 \times 2$ rotation matrix such that the line with points $R[0,1]$, $R[0,-1]$ is perpendicular to the optical axis.

Note: $f^i$, $w^i$, $b^i$, $c^i$, and $d^i$ are expressed in pixels, not necessarily in the same unit scale as the x-y plane of the world frame and camera frame.

Problem: Given $f^i>0$, $w^i>0$, $b^i∈[0,w]$, $c^i∈[0,w]$, and $R∈ℝ^{2 \times 2}$, find:

  1. Camera position $c^w∈ℝ^2$ and
  2. A function mapping any point in world space into the camera image plane: $$Ω:ℝ^2 → [0,w] : p^w → p^i$$

Bonus points:

  1. Generalize to any tag position $t$.
  2. Generalize to $n$ dimensions.
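Part 2 (the map $\Omega$) is just the 2-D pinhole rule once axis conventions are fixed. A sketch under my own assumptions (the camera looks along its local $+y$ axis, the columns of $R$ are the camera axes expressed in world coordinates, and the usual perspective division applies):

```python
import math

def omega(p_w, c_w, R, f, d):
    # world -> camera: p_c = R^T (p_w - c_w)
    dx, dy = p_w[0] - c_w[0], p_w[1] - c_w[1]
    lateral = R[0][0] * dx + R[1][0] * dy   # camera x: offset across the view
    depth = R[0][1] * dx + R[1][1] * dy     # camera y: along the optical axis
    return d + f * lateral / depth          # pinhole projection, in pixels

# Sanity check: a point straight down the optical axis hits the image center d
theta = 0.3
R = [[math.cos(theta), -math.sin(theta)], [math.sin(theta), math.cos(theta)]]
c = (2.0, -1.0)
ahead = (c[0] + 5 * R[0][1], c[1] + 5 * R[1][1])   # 5 units along the axis
assert abs(omega(ahead, c, R, f=500.0, d=320.0) - 320.0) < 1e-9
```

With $\Omega$ in hand, part 1 reduces to inverting two such equations for the known world points $(0,\pm 1)^w$ given their measured pixel coordinates $b^i$ and $c^i$.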

Properties of the time integral of Ito process

Posted: 30 Apr 2022 11:19 AM PDT

Consider the Ito process $X_t$ defined by

$$ dX_t = a(t,X_t) dt + b(t,X_t) dW_t $$

where $W_t$ is the standard continuous-time Wiener process. Let's define the process $Y_t$ to be some integral of $X_t$, namely

$$ Y_t =\int_0^t f(t , s , X_s) ds $$

I was wondering if someone could recommend a reference that deals with time integrals of this kind, or even of the simpler kind:

$$ Y_t =\int_0^t X_s ds \qquad \Rightarrow \qquad dY_t =X_t dt $$

Unfortunately, I haven't seen any treatment of the properties of $Y_t$ in the better-known texts on stochastic analysis. There is also this question on Math Overflow, but references there are not useful. In particular:

Is $Y_t$ an Ito process? I would say no, but the pair $(X_t,Y_t)$ probably is, because we can consider it as a particular two-dimensional Ito process $$ (dX_t,dY_t) = a(t,X_t,Y_t) dt + b(t,X_t,Y_t) dW_t $$ where now $a$ and $W_t$ are 2-dimensional vectors and $b$ is a $2\times 2$ matrix. Are there particular cases (i.e. choices of $a$ and $b$) for which $Y_t$ alone is still an Ito process? I mean, given $a$ and $b$, can we find $A$ and $B$ such that $$ dY_t = A(t,Y_t) dt + B(t,Y_t) dW_t \, \, \, ? $$

Finally, is $Y_t$ even Markovian? Maybe this property can shed light on why we can write down a Fokker-Planck equation when we consider the pair $(X_t,Y_t)$ but not when we consider $Y_t$ alone? Any reference that is specific on "integrated processes" is appreciated (e.g. how to find statistical properties like the autocovariance of $Y_t$).

Note: for the special case of the time integral of an Ornstein-Uhlenbeck process, see this MO question and "Time integral of an Ornstein-Uhlenbeck process". Regarding the definition of Ito process, see the Wikipedia link above or this, this and this interesting questions.

Edit (after the useful comments of @KurtG.): consider $Y_t =\int_0^t f(t , s , X_s) ds$; by applying Ito's lemma we may find the expression for $dY_t$. At this point we can start to restrict the generic form of $f$ in order to obtain something of the form $dY_t=A\,dt+B\,dW$ (see e.g. this question for an application of Ito's lemma to a similar case). However, it is not clear to me how to apply Ito's lemma to this kind of "integral function". Do we need some "extension" of Ito's lemma to differentiate $Y_t$?
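On the two-dimensional viewpoint, a minimal Euler-Maruyama sketch of the pair $(X_t, Y_t)$, with a toy choice of coefficients that is purely my assumption ($a = -X$, $b = 1$, i.e. an OU-type $X$):

```python
import numpy as np

# Simulate (dX, dY) = (-X dt + dW, X dt) jointly, as in the 2-D system above
rng = np.random.default_rng(42)
n_paths, n_steps, dt = 20_000, 500, 0.01

X = np.zeros(n_paths)
Y = np.zeros(n_paths)
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
    X, Y = X - X * dt + dW, Y + X * dt   # Y update uses the pre-update X

assert abs(Y.mean()) < 0.1   # E[Y_t] = 0 by symmetry
assert Y.std() > 0.5         # Y_t is random even though dY_t has no dW term
```

The last line illustrates the tension in the question: $dY_t$ carries no explicit $dW$ term, yet $Y_t$ is genuinely random because its drift depends on $X_t$, which is also why $Y_t$ alone fails to be Markovian while the pair $(X_t,Y_t)$ is.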

Is there a clean non-contrived theorem that can only be proven by contradiction?

Posted: 30 Apr 2022 11:18 AM PDT

I know (see "Can every proof by contradiction also be shown without contradiction?") that there are some theorems that can be proven by contradiction (relying on the law of the excluded middle: for any proposition $A$, the axiom "either $A$ or not $A$ must hold" is available), but cannot be proven without that axiom schema.

I'm looking for the simplest, cleanest theorem that can't be proven without resorting to proof by contradiction (but has been proven by contradiction).

I'm looking for some theorem that might occur to somebody whose purpose in life was not just to demonstrate that proof by contradiction is sometimes necessary.

I'd be particularly impressed if somebody explains why they know that this theorem can't be proven with just "intuitionist" logic (which does not use the law of the excluded middle).

Interesting implicit surfaces in $\mathbb{R}^3$

Posted: 30 Apr 2022 11:14 AM PDT

I have just written a small program in C++ and OpenGL to plot implicit surfaces in $\mathbb{R}^3$ for a Graphical Computing class, and now I'm in need of more interesting surfaces to implement!

Some that I've implemented are:

  • Basic surfaces like spheres and cylinders;
  • Nordstrand's Weird Surface;
  • Klein Quartic;
  • Goursat's Surface;
  • Heart Surface;

So, my question is, what are other interesting implicit surfaces in $\mathbb{R}^3?$

P.S.: I know this is kind of vague, but anything you find interesting will be of use. (:

P.P.S: Turn this into a community wiki, if need be.
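One suggestion in the same spirit: the gyroid level set $\sin x\cos y+\sin y\cos z+\sin z\cos x=0$, the standard trigonometric approximation to Schoen's triply periodic minimal surface, which renders beautifully. A quick sign-sampling check, the kind of per-cell test a polygonizer performs:

```python
import math

def gyroid(x, y, z):
    return (math.sin(x) * math.cos(y)
            + math.sin(y) * math.cos(z)
            + math.sin(z) * math.cos(x))

# The zero level set crosses this sample box: values of both signs appear
vals = [gyroid(0.7 * i, 0.7 * j, 0.7 * k)
        for i in range(9) for j in range(9) for k in range(9)]
assert min(vals) < 0 < max(vals)
```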
