Monday, May 16, 2022

Recent Questions - Mathematics Stack Exchange

What is the difference between $\mathbb C_X$-modules and sheaves of $\mathbb C$-vector spaces?

Posted: 16 May 2022 04:17 AM PDT

In the literature I have encountered the notions of the "category $\mathbb C_X\text{-}Mod$ of $\mathbb C_X$-modules" and the "category $Sh_X(\mathbb C)$ of sheaves of $\mathbb C$-vector spaces". Does anyone know whether both names describe the same category, or do they denote different categories? What is the difference between the two?

An object of the category of $\mathbb C_X$-modules is described as follows. Let $\mathbb C_X$ be the sheaf of locally constant $\mathbb C$-valued functions. A $\mathbb C_X$-module $M$ is a sheaf on $X$ such that for every open subset $U$ in $X$, $M(U)$ is a $\mathbb C_X(U)$-module and for every open inclusion $V\subseteq U$, $res^U_V(am)=res^U_V(a)res^U_V(m)$ for all sections $a\in\mathbb C_X(U)$, $m\in M(U)$, where $res^U_V$ denotes the restriction map from $U$ to $V$ of both sheaves $\mathbb C_X$ and $M$.

An object of the category of sheaves of $\mathbb C$-vector spaces $Sh_X(\mathbb C)$ is a sheaf $N$ on $X$ such that for every open subset $U$ in $X$, $N(U)$ is a $\mathbb C$-vector space.

Show that if $P\left(x_{1}\right)=o_{x_{1}}\left(\left|x_{1}\right|^{k}\right)$ then $P\left(x_{1}\right)=0$

Posted: 16 May 2022 04:12 AM PDT

Question:

Let $P\left(x_{1}\right)$ be a polynomial of degree $k$ in one variable.

Show that if $P\left(x_{1}\right)=o_{x_{1}}\left(\left|x_{1}\right|^{k}\right)$ then $P\left(x_{1}\right)=0$.


My Take:

We know that for any $\epsilon>0$ there exists a $\delta>0$ such that for every $\left|x_{1}\right|\leq\delta$ it holds that: $$ \left|P\left(x_{1}\right)\right|=\left|a_{k}x_{1}^{k}+a_{k-1}x_{1}^{k-1}+\ldots+a_{0}\right|\leq\epsilon\left|x_{1}\right|^{k} $$ $$ \frac{\left|a_{k}x_{1}^{k}+a_{k-1}x_{1}^{k-1}+\ldots+a_{0}\right|}{\left|x_{1}\right|^{k}}\leq\epsilon $$ $$ \left|a_{k}+\frac{a_{k-1}}{x_{1}}+\ldots+\frac{a_{0}}{x_{1}^{k}}\right|\leq\epsilon $$

and that's the best I've got...

Can I move on with this?
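
As a quick numerical sanity check of that last display (a sketch in Python; the sample polynomial and exponent are assumptions chosen only for illustration), the quotient $|P(x_1)|/|x_1|^k$ does not stay below a fixed $\epsilon$ near $0$ once a lower-order coefficient is nonzero:

    # Sample: P(x) = x^2 + 0.1*x with k = 2 (an arbitrary nonzero polynomial).
    def P(x):
        return x**2 + 0.1 * x

    for x in [0.1, 0.01, 0.001, 0.0001]:
        print(x, abs(P(x)) / abs(x) ** 2)   # grows like 0.1 / |x|, so P is not o(|x|^2)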

Proving that there are infinitely many composites in an arithmetic progression

Posted: 16 May 2022 04:24 AM PDT

The question says

Consider $S=\{a, a+d, a+2d, \ldots\}$ where $a$ and $d$ are positive integers. Show that there are infinitely many composite numbers in $S$.

The only argument I could think of was that primes aren't equally spaced, so the rest of the numbers must be composite, and there are infinitely many of them. Is this a legitimate solution, or is there a more elegant way to prove this?

Would appreciate your help.

Thanks in advance
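
One standard way to make this precise (a hedged sketch, using only that $a,d\ge 1$): the term $b:=a+d\ge 2$ lies in $S$, and for every $m\ge 1$ $$b\,(1+md)=a+\bigl(1+(a+d)m\bigr)d$$ again lies in $S$ and is a product of two factors $\ge 2$, hence composite; this already gives infinitely many composite terms.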

Can the set $X$ be seen as an index set?

Posted: 16 May 2022 04:09 AM PDT

Let $X$ be a set, and the power set of $X$ be $\mathcal P(X).$

For $\mathcal B \subset \mathcal P(X),$ does \begin{align} &\quad \ \left\{ U\subset X \mid \exists \Lambda : \mathrm{index \ set}, \exists \{U_\lambda\}_{\lambda \in \Lambda}\subset \mathcal B\ \ ;\ U=\cup_{\lambda\in \Lambda} U_\lambda\right\}\\ &=\{ U\subset X \mid \forall x\in U, \exists B\in \mathcal B \ ; \ x\in B \subset U \} \end{align} hold?

I expect it holds. I proved LHS$\subset $RHS, but I'm not sure my proof of LHS$\supset $RHS is correct.


My proof.

Let $U\in $ RHS.

Then, $\forall x\in U, \exists B_x \in \mathcal B \ ; \ x\in B_x \subset U $ holds.

Each $B_x$ belongs to $\mathcal B$ i.e., $\{ B_x \}_{x\in X}\subset \mathcal B$, and $U=\cup_{x\in X} B_x$ holds, so $U\in $LHS.


I'm not sure this is correct, since I don't know whether we can see $X$ as an index set.

$X$ is defined as a set, but can we see $X$ as an index set?

Computational geometry

Posted: 16 May 2022 04:06 AM PDT

Hi everyone, can you please clarify one proof for me? Thank you.

In the paper "Fractional Planks" by Aharoni et al. (https://www.semanticscholar.org/paper/Fractional-Planks-Aharoni-Holzman/cf92dd1e8061f8b363865fcfc08048f83d5ad008), Theorem 3.7 (page 590), I don't understand this part: "denoting by $D_j$ the open plank $Pl_j(-1/n,1/n)$ ($1\le j\le n$), the set $K$ is not contained in the union of the planks $D_j$". Why can $K$ not be contained in this union? The total width of the planks is $2$ and the minimal width of $K$ is also $2$, so they should cover $K$. Why don't they? Thanks for your opinions.

Better understanding of weak convergence of measures

Posted: 16 May 2022 04:04 AM PDT

I am trying to better understand weak convergence of measures by reading Billingsley's 'Convergence of Probability Measures'. Theorem 3.1 connects weak convergence of distributions (or rather convergence in distribution) with the metric of the underlying space, which does not seem intuitive to me, and I have trouble understanding the proof.

Let $S$ be a metric space, with metric denoted by $\rho(\cdot,\cdot)$, equipped with the Borel $\sigma$-algebra.

Theorem 3.1

Suppose that $(X_n,Y_n)$ are random elements of $S \times S$. If $X_n \Rightarrow X$ and $\rho(X_n,Y_n) \Rightarrow 0$, then $Y_n \Rightarrow X$.

The proof applies Portmanteau's Lemma twice, which makes sense to me. But it starts by letting $F$ be a closed Borel set, and for $F_\epsilon:= \{x\in S:\rho(x,F) \le \epsilon\}$ the inequality $$P(Y_n\in F) \leq P(\rho(X_n,Y_n)\ge\epsilon) + P(X_n\in F_\epsilon)$$ appears. However, I do not see why this inequality holds. Could somebody help me out?
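
One way to see that inequality (a hedged sketch): if $Y_n \in F$ and $\rho(X_n,Y_n) < \epsilon$, then $\rho(X_n,F) \le \rho(X_n,Y_n) < \epsilon$, so $X_n \in F_\epsilon$. Hence $$\{Y_n\in F\} \subseteq \{\rho(X_n,Y_n)\ge\epsilon\} \cup \{X_n\in F_\epsilon\},$$ and the stated inequality follows from the subadditivity of $P$.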

Linear Equation in 2 variables (Concept)

Posted: 16 May 2022 04:15 AM PDT

$$ 6x + 8y = 1 \tag{1}$$ $$ 26x + 48y=5 \tag{2}$$

We can solve this by multiplying (1) by 6: $$36x+48y=6 \tag{3}$$ $$26x+48y=5 \tag{4}$$ Subtracting (4) from (3) gives $$x=0.1$$

Now substituting this into (1) gives me $y$ too. But what exactly happened here? I'm trying to understand this differently: plotting (1) and (2) on a graph gives both $x$ and $y$, yet plotting (3) and (4) makes no difference. But couldn't $y$ in (3) and (4) take any value?
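
For what it's worth, a small numerical sketch (Python with numpy, using the equations above) showing that (1)–(2) and (3)–(4) pick out the same intersection point, since scaling an equation does not change its solution set:

    import numpy as np

    # Equations (1)-(2) and (3)-(4) as matrix systems A x = b.
    A12 = np.array([[6.0, 8.0], [26.0, 48.0]])
    b12 = np.array([1.0, 5.0])
    A34 = np.array([[36.0, 48.0], [26.0, 48.0]])   # (3) is just 6 * (1)
    b34 = np.array([6.0, 5.0])

    print(np.linalg.solve(A12, b12))   # -> [0.1  0.05]
    print(np.linalg.solve(A34, b34))   # same point: scaling (1) by 6 keeps the same line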

Calculating difference between two means of normal distribution

Posted: 16 May 2022 03:40 AM PDT

Let's consider two normal probability distributions: $P = \mathcal N(\mu_p, \sigma_p^2)$ and $Q = \mathcal N(\mu_q, \sigma_q^2)$

Then we know that $\hat \mu_X \sim (\mu_p, \frac 1 m \sigma_p^2)$ and $\hat \mu_Y \sim (\mu_q, \frac 1 n \sigma_q^2)$. I'm reading a paper in which they say that:

$$E_{XY}[\|\hat \mu_X - \hat \mu_Y\|^2] = \|\mu_p - \mu_q\|^2 + \frac 1 m \sigma_p^2 + \frac 1 n\sigma_q^2$$

And I don't know why. My calculations are the following:

$$E_{XY}[\|\hat \mu_X - \hat \mu_Y\|^2] = E_{XY}[\hat \mu_X^2 - 2\hat \mu_{X}\hat \mu_Y + \hat \mu_Y^2] = E_{XY}[\hat \mu_X^2] + E_{XY}[\hat \mu_Y^2] - 2 \mu_p\mu_q = $$

$$= \textrm{Var}[\hat \mu_X^2] - [E\hat \mu_X]^2 + \textrm{Var}[\hat \mu_Y^2] - [E\hat \mu_Y]^2- 2 \mu_p \mu_q $$

$$= \frac 1 m \sigma_p^2 - \mu_p^2 + \frac 1 n \sigma_q^2 - \mu_q^2 - 2 \mu_q \mu_q = $$

$$=-\|\mu_q + \mu_p\|^2 + \frac 1 n \sigma_p^2 + \frac 1 m \sigma_q^2$$

Could you please tell me where the problem in my justification is?
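
A quick Monte Carlo sanity check of the paper's identity (a sketch in Python; the concrete parameter values are assumptions chosen only for illustration):

    import numpy as np

    # Empirically estimate E[(mu_hat_X - mu_hat_Y)^2] and compare it with
    # (mu_p - mu_q)^2 + sigma_p^2/m + sigma_q^2/n.
    rng = np.random.default_rng(0)
    mu_p, mu_q, s_p, s_q, m, n, reps = 1.0, 3.0, 2.0, 1.5, 20, 30, 100_000

    mx = rng.normal(mu_p, s_p, size=(reps, m)).mean(axis=1)   # samples of mu_hat_X
    my = rng.normal(mu_q, s_q, size=(reps, n)).mean(axis=1)   # samples of mu_hat_Y
    print(np.mean((mx - my) ** 2))                        # empirical value
    print((mu_p - mu_q) ** 2 + s_p**2 / m + s_q**2 / n)   # paper's claimed value (4.275 here)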

$\mathbb{Z}_p$ points and $\mathbb{Q}_p$ points on a variety

Posted: 16 May 2022 03:52 AM PDT

Let $f_1,...,f_s$ be a set of equations over $\mathbb{Z}[X_1,...,X_n]$. We want to study the local solutions. Assume they have $\mathbb{Q}_p$ solutions for all $p$. I think it is true that for all but finitely many $p$, a solution can actually be found in $\mathbb Z _p$. Basically, I want to claim that if $X(\mathbb Q_p)$ is non-empty for all $p$, then the set of adelic points $X(\mathbf A_{\mathbb Q})$ is non-empty as well. This is true for projective varieties (which need not be irreducible or reduced). What about affine or just general varieties?

I think the way to prevent $\mathbb{Z}_p$ solutions while still having $\mathbb Q_p$ solutions is something like $pX-1$. So we can take large primes so that the coefficients in the equations are all units. When there is only one variable, it is apparent that the solution must be in $\mathbb Z_p$. When there are more variables, it becomes a bit more complicated. Say $X+Y=0$: we could have $\mathbb{Q}_p$ solutions such as $(1/p,-1/p)$. However, we still have $\mathbb{Z}_p$ solutions like $(1,-1)$. Here, I don't know how to argue nicely.

What exactly is argument of a function?

Posted: 16 May 2022 03:39 AM PDT

I saw the meaning of "argument" in the context of mathematics on Wikipedia. It said that "In mathematics, an argument of a function is a value provided to obtain the function's result. It is also called an independent variable." The first sentence says an argument is a value provided to obtain the function's result; if this is the definition of argument, then the values we substitute for $x$ should be the arguments, not $x$ itself. But then the next sentence, "It is also called an independent variable," says that $x$ is the argument. Further on, they say that the value we input in place of $x$ is the argument value, in the line "A function of two or more variables is considered to have a domain consisting of ordered pairs or tuples of argument values." The definition they gave for argument doesn't seem to mean that $x$ is the argument, but rather that the values we substitute for $x$ are the arguments. I am confused and need assistance. I apologize if this question is not supposed to be asked here, but on English Language Learners Stack Exchange.

Prove that $\frac{2022}{n} + 4n$ is a perfect square iff $\frac{2022}{n} - 8n$ is a perfect square

Posted: 16 May 2022 04:19 AM PDT

Prove that $\frac{2022}{n} + 4n$ is a perfect square iff $\frac{2022}{n} - 8n$ is a perfect square

My solution was to substitute all the positive divisors of $2022$ into the two expressions and observe that each is a perfect square only when $n=6$, which confirms the above relation. However, I don't think the problem setters were looking for this indirect proof, so I wonder if there is another way to solve the problem.
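
A sketch of the brute-force check described above (Python; it simply runs over the positive divisors of $2022$):

    from math import isqrt

    def is_square(x):
        return x >= 0 and isqrt(x) ** 2 == x

    divisors = [d for d in range(1, 2023) if 2022 % d == 0]   # 1, 2, 3, 6, 337, ...
    for n in divisors:
        a, b = 2022 // n + 4 * n, 2022 // n - 8 * n
        print(n, is_square(a), is_square(b))   # both are squares exactly when n = 6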

How can I solve this inequality?

Posted: 16 May 2022 03:37 AM PDT

I was reading through some articles related to the theory of inequalities and ran into the following inequality, which I cannot see why it holds. It says:

$$ \frac{4}{(t+1)^2}<\frac{1}{t}, \quad \forall t > 1 $$
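
A hedged sketch of one way to see it: for $t>1$ both sides are positive, so the claim is equivalent to $$4t<(t+1)^2 \iff 0<t^2-2t+1=(t-1)^2,$$ which holds for every $t\neq 1$, in particular for all $t>1$.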

Computing a limit of an integral?

Posted: 16 May 2022 04:12 AM PDT

Suppose we have the following functions:

$$\bar{h_1}=\beta_A-\delta_A-\alpha-\delta_I+\frac{R'}{R}+\frac{S'}{S} $$

$$\bar{h_2}=-\epsilon\beta_A -\nu-\gamma-\mu+ c(\gamma+\mu)+c\frac{I'}{I}+c\frac{R'}{R}+\frac{S'}{S}+\frac{A'}{A} $$

$$\bar{h_3}=-\epsilon\beta_I -\nu-\delta_I-\mu+ \frac{\delta_I+\mu}{c}+\frac{A'}{A}+\frac{R'}{R}+\frac{I'}{c I}$$

$$\bar{h_4}=\beta_A-\delta_I+\frac{S'}{S}+\frac{R'}{R}+\frac{A'}{A} $$

where $c$ is a constant such that $\frac{\delta_I+\mu}{\epsilon \beta_I+\nu+\delta_I+\mu}<c<1$

Then we wish to compute:

$$\lim_{t\rightarrow\infty} \frac{1}{t}\int_0^t\bar{h}_i(s)\,ds = \bar{H}_i<0\qquad i=1,\dots,4$$

The variables $S, A, I$ and $R$ are bounded by $S+A+I+R=1$.

I was reading a paper that computes these limits but I didn't fully understand their methodology.

EDIT:

They have the limits:

$$\bar{H_1}=\beta_A-\delta_A-\alpha-\delta_I$$ $$\bar{H_2}=-\epsilon\beta_A-\nu-\gamma-\mu+c(\gamma+\mu)$$ $$\bar{H_3}=-\epsilon\beta_I-\nu-\mu-\delta_I+\frac{\delta_I+\mu}{c}$$ $$\bar{H_4}=\beta_A-\delta_I$$
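
A hedged sketch of why the derivative terms drop out in the limit (assuming, as seems implicit, that each of $S,A,I,R$ stays bounded and bounded away from $0$): for example $$\frac1t\int_0^t\frac{S'(s)}{S(s)}\,ds=\frac{\ln S(t)-\ln S(0)}{t}\longrightarrow 0\qquad(t\to\infty),$$ and the same argument applies to the $A'/A$, $I'/I$ and $R'/R$ terms (with their constant prefactors), which would leave exactly the constant parts $\bar H_i$ listed above.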

Given the metric $ d(x_n,y_n)$ below, is convergence of a sequence equivalent to pointwise convergence?

Posted: 16 May 2022 04:21 AM PDT

Let $X$ be the set of all complex sequences and let $ d: X \times X \rightarrow \mathbb{R} $ be defined by $$ d\left(\left(x_{n}\right)_{n \in \mathbb{N}},\left(y_{n}\right)_{n \in \mathbb{N}}\right)=\sum \limits_{n=1}^{\infty} 2^{-n} \frac{\left|x_{n}-y_{n}\right|}{1+\left|x_{n}-y_{n}\right|} $$ for all sequences $ \left(x_{n}\right)_{n \in \mathbb{N}},\left(y_{n}\right)_{n \in \mathbb{N}} \in X $.

The first task was to show that the given $d$ is indeed a metric. This could be handled with the help of other questions in this forum.

But the second question really makes me despair:

(ii) Show that a sequence $ \left(\left(x_{n}^{(k)}\right)_{n \in \mathbb{N}}\right)_{k \in \mathbb{N}} $ of elements in $ X $ converges (with respect to $d$) to a sequence $ \left(x_{n}\right)_{n \in \mathbb{N}} \in X $ if and only if for all $ n \in \mathbb{N} $ $$ \lim \limits_{k \rightarrow \infty} x_{n}^{(k)}=x_{n} . $$

I find it difficult to establish a proof for both directions:

For "-->" I can't see a right way to show that.

But for "<--", that you get from pointwise convergence the convergence of $ \left(\left(x_{n}^{(k)}\right)_{n \in \mathbb{N}}\right)_{k \in \mathbb{N}} $ to x with consideration of the metric d, I found there too: Convergence with respect to $d_1(x,y)=\sum^\infty_{i=1}\frac{1}{2^i}\frac{|x_i-y_i|}{1+|x_i-y_i|}$ is equivalent to pointwise convergence?

I would never have thought of such an idea myself.

Overall, I'm still trying to build up a proof for the direction "$\Rightarrow$", but I'm missing the key idea for it.

Any help would be greatly appreciated.
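
A hedged sketch for the direction "$\Rightarrow$": for each fixed $n$, $$2^{-n}\,\frac{|x_n^{(k)}-x_n|}{1+|x_n^{(k)}-x_n|}\le d\big((x_m^{(k)})_{m\in\mathbb N},(x_m)_{m\in\mathbb N}\big)\xrightarrow{k\to\infty}0,$$ and since $u\mapsto\frac{u}{1+u}$ is increasing and tends to $0$ only when $u\to0$, this forces $|x_n^{(k)}-x_n|\to0$ for every fixed $n$.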

Norm inequalities for sums of two basic elementary operators

Posted: 16 May 2022 03:58 AM PDT

I'm working on this paper. What I'm interested in is this theorem:

Theorem 3.2

where $M_{A,B}(X)=AXB$

I don't know why we can find those two sequences $X_n$ and $x_n$; also, I'm finding it difficult to justify the highlighted lines. Can someone help?

Is the Lagrangian canonical momentum the fiber derivative of the Lagrangian?

Posted: 16 May 2022 03:50 AM PDT

Consider a Lagrangian $L:TQ\to \mathbb{R}$, for some smooth manifold $Q$. As explained in this answer, one can define its fiber derivative $\mathbf FL:TQ\to T^* Q$ as $$\mathbf FL(v) \equiv \mathrm d(L|_{T_x Q})(v,\bullet), \qquad v\in T_x Q.$$ More precisely, given $L|_{T_x Q}:T_x Q\to \mathbb{R}$, we're considering the differential $\mathrm d(L|_{T_x Q}):T_v(T_x Q)\to T_{L(v)}\mathbb{R}$, and exploiting the isomorphism $T_v(T_x Q)\simeq T_x Q\times T_x Q$, we identify vertical displacements $(v,\delta v)\in T_v (T_x Q)$ with pairs of tangent vectors in $T_x Q$, and thus define $\mathbf FL(v)$ as the mapping $\delta v\mapsto \mathrm d(L|_{T_x Q})(v,\delta v)\in\mathbb{R}$.

So, as far as I can tell, formally speaking the elements in the image of $\mathbf FL$ are not functionals in the same tangent spaces that make up its domain (albeit the two things are isomorphic to one another, hence why we can identify them).

Now, people often also write the Lagrangian canonical momentum as the derivative of $L$ with respect to its velocities: $p_i = \frac{\partial L}{\partial \dot q^i}$. What I'm trying to understand is whether such expressions should really be taken as a shorthand for the fiber derivative itself. That seems to make sense: $\mathbf FL$ sends each $v\in TQ$ to a covector, the momentum is often said to be a covector, and the expression for $\mathbf FL$ in local coordinates also directly involves $\frac{\partial L}{\partial v^k}$.

From these observations, it would seem that I could in general define the Lagrangian canonical momentum associated to each $L:TQ\to \mathbb{R}$ and $v\in TQ$ as $p\equiv (\mathbf FL)(v)$. However, I've never seen this stated explicitly, so I'm wondering if there's some subtleties I'm not taking into account.

$f$ is discontinuous. Find topological conditions such that there exists a continuous $g$ s.t. $f(x)\geq0\iff g(x)\geq 0$.

Posted: 16 May 2022 04:19 AM PDT

Let $x\in X$ where $X$ is a topological space. It seems like:

Claim 1. For every real-valued function $f$ on $X$, the following conditions are equivalent:

  1. there exists a continuous real function $g$ on $X$ s.t. $$f(x)\geq0 \iff g(x)\geq0.$$
  2. The set $\{x\in X|f(x)\geq0\}$ is closed.

Is any additional topological assumption needed on $X$? Is second countable T2 topology enough to let Claim 1 hold?

For example, if $X$ is further assumed to be a metric space, then Claim 1 clearly holds by constructing $g(x)=d(x,A)$ where $f(A)< 0$; the distance function is continuous.

Determine closed forms for generating functions

Posted: 16 May 2022 04:07 AM PDT

I'm studying generating functions for my study of number theory and I'm trying to solve some exercises. I need to find closed forms for the generating functions of the following sequences:

a) $a_n=n^3$ for all $n\geq0$.

b) $a_n=2^n$ if $n$ is odd, $a_n=2^n+3^{\frac{n}{2}}$ if $n$ is divisible by $4$, and $a_n=2^n-3^{\frac{n}{2}}$ if $n$ is even, but not divisible by $4$.

I know how to define and set up the ordinary, exponential, and Poisson generating functions, but I'm not sure how to get a closed-form expression.
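
For part (a), a small sketch of how one might check a candidate closed form against the first few terms (Python with sympy; the candidate $\frac{x(1+4x+x^2)}{(1-x)^4}$ is an assumption to be verified, not a given):

    import sympy as sp

    x = sp.symbols('x')
    candidate = x * (1 + 4*x + x**2) / (1 - x) ** 4
    # The series coefficients should read 0, 1, 8, 27, 64, 125, ... if the candidate is right.
    print(sp.series(candidate, x, 0, 8))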

Find the number of elements in $\{0,1\}^n$ with no more than three $1$'s or three $0$'s in a row

Posted: 16 May 2022 03:44 AM PDT

I'm trying to find a general formula for the number of elements $s_n$ in $\{0,1\}^n$ with no more than three $1$'s or three $0$'s in a row, where $n\geq1$.

I calculated $s_n$ for small values of $n$ but could not really come up with a formula to prove by induction.

I also approached the problem combinatorially by considering two groups, one of three $0$'s and one of three $1$'s, and arranging them together with other arbitrary elements totalling $n$ elements in all, but since we want to find the number of elements with no more than three $0$'s or $1$'s in a row, and not exactly three, this approach does not work either.
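
A sketch of the small-case computation mentioned above (Python brute force; "no more than three in a row" is read as "no run of four or more equal symbols"):

    from itertools import product

    def s(n):
        # Count binary strings of length n containing neither '0000' nor '1111'.
        return sum(1 for w in product('01', repeat=n)
                   if '0000' not in ''.join(w) and '1111' not in ''.join(w))

    for n in range(1, 12):
        print(n, s(n))   # 2, 4, 8, 14, 26, 48, ...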

Question about critical point

Posted: 16 May 2022 04:22 AM PDT

Assume $M$ is a manifold and $f$ is a smooth function on it, so the differential $df$ is a map $df:M\rightarrow T^*M$. Assume $x$ is a critical point; then it is on the zero section. Moreover, prove that $x$ is a nondegenerate critical point if and only if $df(V)$ is transverse to the zero section at $x$. I wonder how to prove this.

Regarding the proof of Bolzano's theorem (Czes Kosniowski)

Posted: 16 May 2022 03:52 AM PDT

I am trying to understand Lemma 10.1 (IVT) of "A First Course in Algebraic Topology" by C. Kosniowski.

The lemma states,

If $f: I \rightarrow \mathbb R$ continuous with $f(0)f(1) \leq0$, then $\exists \hspace{2mm} t\in I: f(t)=0$.

The proof assumes $f(t)\not=0, \hspace{2mm}\forall t\in I\hspace{2mm}$ for a contradiction. It then constructs $g:I \rightarrow S^0$ as $g(t)=f(t)/\vert f(t) \vert$.

Apparently, $g$ is continuous because $f(0)f(1)<0$, but I don't see why the strict negativity of $f(0)f(1)$ ensures the continuity of $g$. Once I know that $g$ is continuous, I am done.

Can anyone help me with this?

How can other unknowns be calculated in polynomial division given there are no remainders?

Posted: 16 May 2022 04:15 AM PDT

I've been practicing polynomial long division for the last week and have built some competence/confidence around the algorithm for performing the operation, but this is stumping me:

Given that $P(x) = (x^3-2x^2-x+2)/(x-k)$ has three values of $k$ for which the quotient has no remainder, what are the possible $k$ values?

I'm not sure where to begin. The idea of the modulus function comes to mind, but I'm not sure that's the way either. I'm posting in the hope someone could enlighten me with the general strategy for this.
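
A hedged sketch of one general strategy: dividing the cubic $x^3-2x^2-x+2$ by $x-k$ leaves a remainder equal to the cubic's value at $k$ (remainder theorem), so the remainder vanishes exactly when $k$ is a root of the cubic. Checked with sympy:

    import sympy as sp

    x, k = sp.symbols('x k')
    cubic = x**3 - 2*x**2 - x + 2
    # The k values with zero remainder are the roots of the cubic.
    print(sp.solve(cubic.subs(x, k), k))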

Structure of group of symmetries of a napkin-ring.

Posted: 16 May 2022 04:14 AM PDT

Trying to get a better understanding of dicyclic groups, I wonder what the structure of symmetries of a napkin-ring is.

Suppose you paint a pattern on the ring that has $n$-fold rotational symmetry and also up-down and left-right symmetry, like $n$ periods of a sine wave that runs around the ring. Then the rotational symmetry is generated by an element $a$ of order $n$, where $n=26$ in the following depictions using the uppercase letters $\text{A⋅⋅⋅Z}$: $\def\pp#1{{\style{display: inline-block; transform:scale(+1,+1)}{\text{#1}}}}$ $\def\np#1{{\style{display: inline-block; transform:scale(-1,+1)}{\text{#1}}}}$ $\def\pn#1{{\style{display: inline-block; transform:scale(+1,-1)}{\text{#1}}}}$ $\def\nn#1{{\style{display: inline-block; transform:scale(-1,-1)}{\text{#1}}}}$

Then $a^k$ generates the 26 pure rotations, and $\{a^k\}\simeq \Bbb Z/n\Bbb Z$:

$$\pp{ABC···XYZ}\\ \pp{BCD···YZA}\\ \vdots\\ \pp{ZAB···WXY}\tag{00}$$

Adding an element $x$ of order 2 turns the group into the dihedral group $D_{2n}$, where $x$ in the depiction turns the ring upside down; the elements $a^kx$ are

$$\nn{ABC···XYZ}\\ \nn{BCD···YZA}\\ \vdots\\ \nn{ZAB···WXY}\tag{10}$$

We can add yet another element $y$ of order 2, like the action of a vertical mirror. In two dimensions and with a flat $n$-gon, we'd have $y=a^mx$ for some $m$, but in 3D with the ring this is not the case, and we get $2n$ more symmetries: $a^ky$ are the mirror images of $a^k$:

$$\np{ABC···XYZ}\\ \np{BCD···YZA}\\ \vdots\\ \np{ZAB···WXY}\tag{01}$$

and $a^kxy$ are the mirror images of $a^kx$ resp. upside-down reflections of $a^k$:

$$\pn{ABC···XYZ}\\ \pn{BCD···YZA}\\ \vdots\\ \pn{ZAB···WXY}\tag{11}$$

So what's the group structure of these symmetries $S$? As far as I understand, we have

$$ \langle a,x\rangle \simeq \langle a,y\rangle \simeq \langle a,xy\rangle \simeq \langle a,yx\rangle \simeq D_{2n} \tag 2 $$

and that $S \not\simeq D_{4n}$ and $S \not\simeq Q_{4n}$ for $n$ not too small, and where $Q_{4n}$ denotes the dicyclic group of order $4n$. That $S\not\simeq Q_{4n}$ I infer from this post that states that dicyclic groups are "not easy to visualize". And for $Q_{4n}$ we'd need an element $z\neq a^k$ of order 4, which we don't have.

How to offset a rotation contained in a unit quaternion rotation from the origin of a rigid object

Posted: 16 May 2022 04:16 AM PDT

I'm using Unity3D for a project. The way it handles storing transformations is with a 3-vector / unit-quaternion / 3-vector "sandwich" (the 1st vector for position, the quaternion for rotation, and the 2nd vector for uniform axis scale). When you add a rotation to a transform, it rotates the object around the origin of the mesh (usually the center of the object). My question is: is there a way to rotate an object around an "offset" from the object's center mathematically? In short, can I move the rotation itself to a new position and still have it influence the object? (See the figure for clarity.)
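
A hedged sketch of the underlying math, written as plain Python rather than Unity's API: rotating a point $p$ about an arbitrary pivot $c$ with a unit quaternion $q$ is translate, rotate, translate back, i.e. $p' = c + q\,(p-c)\,q^{-1}$.

    import numpy as np

    def quat_mul(a, b):
        # Hamilton product of quaternions given as (w, x, y, z).
        w1, x1, y1, z1 = a
        w2, x2, y2, z2 = b
        return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                         w1*x2 + x1*w2 + y1*z2 - z1*y2,
                         w1*y2 - x1*z2 + y1*w2 + z1*x2,
                         w1*z2 + x1*y2 - y1*x2 + z1*w2])

    def rotate_about(p, c, q):
        v = np.concatenate(([0.0], p - c))              # pure quaternion for p - c
        q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])  # inverse of a unit quaternion
        return c + quat_mul(quat_mul(q, v), q_conj)[1:]

    # Example: 90 degrees about the z-axis, pivoting around c = (1, 0, 0).
    q = np.array([np.cos(np.pi/4), 0.0, 0.0, np.sin(np.pi/4)])
    print(rotate_about(np.array([2.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]), q))  # ~ [1. 1. 0.]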

Why is the multiplicative group of a central division algebra anisotropic?

Posted: 16 May 2022 04:11 AM PDT

Let $k$ be any field. Suppose $D$ is a central division algebra over $k$ of degree $n^2$, then we can understand its multiplicative group $D^{\times}$ as an algebraic group (defined over $k$). I wonder how to show $D^{\times}$ is anisotropic (modulo the centre $G_{m}$). In other words, the maximal $k$-split torus is the centre $G_{m}$.

According to Milne's Algebraic Groups book, we can argue by looking at the conjugation representation on $D$ of a maximal $k$-split torus. But I was stuck at the last implication: how can one see $S \subset Z(G)$? In my understanding, $se_{i}s^{-1}e_{i}^{-1} \in k$ only implies it should be an $n$-th root of unity, by looking at its determinant after base change to $M_{n}(\bar{k})$. I am not sure why it has to be $1$.

Milne

Is the component of a vector along another vector also a vector?

Posted: 16 May 2022 03:58 AM PDT

I will summarize what I've learned from a couple of Wikipedia articles below:

Vector projection:

The vector projection of a vector $\vec{a}$ onto a vector $\vec{b}$ (also known as the vector component or vector resolution of $\vec{a}$ in the direction of $\vec{b}$) is defined as the following:

$$\text{proj}_\vec{b}\vec{a}=\frac{\vec{a}\cdot \vec{b}}{|\vec{b}|}\hat b$$

, where $|\vec{b}|$ is the length of $\vec{b}$, $\hat{b}$ is the unit vector in the direction of $\vec{b}$, and the operator $\cdot$ denotes a dot product.

Scalar projection:

The scalar projection (also known as scalar component) of a vector $\vec{a}$ onto a vector $\vec{b}$ is given by the following:

$$s=|\vec{a}|\cos\theta$$

, where $|\vec{a}|$ is the length of $\vec{a}$, and $\theta$ is the angle between $\vec{a}$ and $\vec{b}$.

My question:

  1. Evidently, a vector projection/component is a vector and a scalar projection/component is a scalar. In this top answer, @RonGordon presents the component of $\vec{a}$ along $\vec{b}$ as a scalar. Is it a convention to assume that "component" means the scalar component/projection, just as @RonGordon did, unless it is explicitly specified otherwise (by writing "vector projection")? In other words, if I find "vector projection" written anywhere, it will be sufficiently clear what the author means, and similarly for "scalar projection". However, if I find only "component" written somewhere, how should I interpret it?
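
For concreteness, a small numerical sketch of the two definitions quoted above (Python with numpy; the sample vectors are chosen only for illustration):

    import numpy as np

    a = np.array([3.0, 4.0, 0.0])
    b = np.array([1.0, 0.0, 0.0])

    scalar_proj = np.dot(a, b) / np.linalg.norm(b)      # a scalar: |a| cos(theta)
    vector_proj = scalar_proj * b / np.linalg.norm(b)   # a vector pointing along b
    print(scalar_proj, vector_proj)                     # 3.0 and [3. 0. 0.]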

Fubini theorem and time integrals of stochastic processes

Posted: 16 May 2022 03:59 AM PDT

Let $(X_t)$ be a real continuous stochastic process such that $E[ \int_0^1 (\frac{X_t}{1-t})^2 dt ] <\infty$. Let $f_t$ be the PDF of $(X_t)$.

Since $Y_t= \frac{X_t}{1-t}$ is $L^2$ in time and space, and the measure of $\Omega \times [0,1]$ is finite, $(Y_t)$ is also $L^1$ in time and space, and hence,

$$I:=E\left[ \int_0^1 \frac{X_t}{1-t} dt \right] <\infty.$$

I would like to know if the following integral is equal to $I$:

$$\int_\mathbb R x \left(\int_0^1 \frac{f_t(x)}{1-t} dt\right) d x,$$

and if so, is $\int_0^1 \frac{f_t(x)}{1-t} dt$ the PDF of the random variable $\int_0^1 \frac{X_t}{1-t} dt$ ? Is the function $x \rightarrow x\int_0^1 \frac{f_t(x)}{1-t} dt$ guaranteed to be integrable in $x$ ?

I obtained this integral by following these steps:

$$I \rightarrow \int_0^1 \frac{E[X_t]}{1-t} dt \rightarrow \int_0^1 \frac{ \int_\mathbb R x f_t(x) dx}{1-t} dt \rightarrow \int_\mathbb R x \left(\int_0^1 \frac{f_t(x)}{1-t} dt\right) d x $$

I used Fubini two times, and I am not sure that knowing $I < \infty$ is enough to use Fubini in these ways.

Finding the righthand system from the Laplace transform $G(s) = \frac{1}{1- e^{-sT}}$

Posted: 16 May 2022 03:46 AM PDT

I want to find the impulse response $g(t) \in \mathbb{C}$ whose two-sided Laplace transform is: $$\mathcal{L}\{ g(t)\} = G(s)=\frac{1}{1-e^{-sT}}$$

I tried to find $g(t)$ by finding the inverse impulse response: if I define $f(t)$ to be the inverse impulse response, then, as in LTI systems, the convolution $ (g\ast f)(t) = \delta(t) $. Thus, in the Laplace domain, according to the convolution theorem, $ G(s)\cdot F(s) = 1$ $$ \Rightarrow F(s) = 1-e^{-sT} $$ Then, choosing the right half-plane of the Laplace domain as the ROC and matching with known transforms, $$ f(t) = \mathcal{L^{-1}}\{ F(s)\} = \delta(t) - \delta(t-T) $$ So going back to the first equation: $$ \begin{align} \delta(t) = (g\ast f)(t) &= \int_{-\infty}^{\infty}(\delta(\tau)-\delta(\tau-T))g(t-\tau)d\tau \\ &= \int_{-\infty}^{\infty}\delta(\tau)g(t-\tau)d\tau-\int_{-\infty}^{\infty}\delta(\tau-T)g(t-\tau)d\tau \\ &= g(t) - g(t-T) \end{align} $$ To conclude, I reached a dead end, getting stuck with this functional equation: $$ g(t) - g(t-T) = \delta(t) $$ where $T$ is a constant.

Is my move of splitting the integral above wrong, since $\delta(t) - \delta(t-T)$ is a generalized function? Thanks in advance.
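
A hedged sketch of one resolution: for $\operatorname{Re}(s)>0$ one can expand $$G(s)=\frac{1}{1-e^{-sT}}=\sum_{n=0}^{\infty}e^{-snT},$$ which term by term is the transform of the impulse train $g(t)=\sum_{n=0}^{\infty}\delta(t-nT)$; this $g$ does satisfy $g(t)-g(t-T)=\delta(t)$ in the distributional sense, since all the shifted impulses cancel in pairs.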

Solve a Partial Differential Equation without boundary conditions

Posted: 16 May 2022 04:04 AM PDT

I have to solve this PDE, which doesn't have any boundary condition:

$ u_y + uu_x=1 $

$ u=0$ in $y=x^2$

I already found that

$u(s)=s$

$x(s)= \frac{s^2}{2} + x_0$

$y(s)= s+ x_0^2$

With the conditions $x(0)=x_0,$ $y(0)=x_0^2,$ $u(0)=0$

The problem now is that I can't find a way to transform the function $u(s)$ to a function $u(x,y)$.

Does someone have an idea? Thank you.
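
A hedged sketch of the elimination step: from $u=s$ and $x=\frac{s^2}{2}+x_0$ one gets $x_0=x-\frac{u^2}{2}$, and substituting into $y=s+x_0^2$ gives the implicit relation $$y=u+\Big(x-\frac{u^2}{2}\Big)^2,$$ which determines $u(x,y)$ implicitly (at least locally), with $u=0$ on $y=x^2$ recovered at $s=0$.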

nth root of a continued fraction with Raney transducers

Posted: 16 May 2022 03:37 AM PDT

There are some algorithms for doing basic arithmetic using regular continued fraction expansions. These algorithms are mainly due to Gosper (1972) and Raney (1973). Both approaches use (bi)homographic functions (Raney's approach was extended to bihomographic functions by Liardet and Stambul in 1998), but Raney's uses a binary encoding of continued fractions and transducers from automata theory. This makes Raney's approach attractive (for me).

It seems to me one is able to compute $n^{th}$ roots of some reals - i.e. $\sqrt[n]{a}$ with $a\in \mathbb{R}$ - following Gosper's way. Do you know if one can do so with Raney's transducers (see On continued fractions and finite automata)?
