Thursday, June 2, 2022

Recent Questions - Mathematics Stack Exchange



Does the $\epsilon$-$\delta$ definition of a limit ensure that the left-hand limit and right-hand limit are the same?

Posted: 02 Jun 2022 06:36 AM PDT

I know that the left-hand and right-hand limits must be the same if the limit exists. But the $x$ in the definition is an element of the domain of the function $f$, so here's the question: if a function $f$ is defined only for $x \ge 0$, can we say that the limit of $f$ at $x = 0$ exists? It is trivial for the right-hand limit, but I would like to know whether we can call this right-hand limit "the limit" as $x$ goes to $0$.

According to the epsilon-delta definition it seems to hold, since $x$ cannot be smaller than $0$, and thus the right-hand limit would become the limit itself.

Every new year Jack gives $320 ....

Posted: 02 Jun 2022 06:34 AM PDT

Every new year Jack gives \$320 to be shared between his nieces in the ratio of their ages. This year the nieces are aged 3 and 7. Show that in five years' time the younger niece will receive \$32 more than she does this year.

My attempt: $3+7 = 10$, $320/10 = 32$, $3\cdot 32=96$, $7\cdot 32=224$.

5 years: $8:12$

Difference = 4

$4:32$, $1:8$, $8\cdot 8=64$, $12\cdot 8=96$

However, by this calculation she receives \$32 less in the next five years.
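For what it's worth, the arithmetic can be redone with exact fractions (my own check, not part of the question): in five years the ages are 8 and 12, so the ratio $8:12$ has $20$ parts and each part is worth $320/20 = 16$.

```python
from fractions import Fraction

def shares(total, ages):
    """Split `total` between people in the ratio of their `ages`."""
    parts = sum(ages)
    return [Fraction(total * a, parts) for a in ages]

now = shares(320, [3, 7])       # this year:  [96, 224]
later = shares(320, [8, 12])    # in 5 years: [128, 192]
print(later[0] - now[0])        # younger niece's gain: 32
```

The younger niece goes from 96 to 128 dollars, i.e. \$32 more, which suggests the "$1:8$" step above (8 per part instead of 16) is where the attempt diverges.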

Computing the range of a linear transformation over the space of polynomials

Posted: 02 Jun 2022 06:31 AM PDT

The original problem can be found here. The linear transformation is defined as: $$T: P_4 \rightarrow P_5 : T(p(x)) = (x-2)p(x) $$

The spanning set is: $$\{ x-2,\, x^2-2x,\, x^3-2x^2,\, x^4-2x^3,\, x^5-2x^4,\, x^6-2x^5\},$$ which is also a little confusing to me, since it implies that the standard basis for $P_4$ is $$ \{ 1, x, x^2, x^3, x^4, x^5 \}, $$ even though $P_4$ is the space of all polynomials of degree 4. Moving on, the linked page says that the range of the linear transformation is spanned by: $$ \{ -\frac{1}{32}x^5+1,\, -\frac{1}{16}x^5+x,\, -\frac{1}{8}x^5+x^2,\, -\frac{1}{4}x^5+x^3,\, -\frac{1}{2}x^5+x^4 \} $$

Attempting to solve this problem myself, I took the spanning set and put it into matrix form as: $$ \begin{bmatrix} 1&0&0&0&0&0&-2 \\ -2&1&0&0&0&0&0 \\ 0&-2&1&0&0&0&0 \\ 0&0&-2&1&0&0&0 \\ 0&0&0&-2&1&0&0 \\ 0&0&0&0&-2&1&0 \\ \end{bmatrix} $$ I proceeded to transpose, row reduce, and take the rows as columns to create a basis for the range. However, instead of getting the range above I just got the identity matrix of size 6. The transpose-row reduce-remove zero rows strategy was suggested by the book and has worked thus far. How can I compute a basis for the range of a linear transformation over a space of polynomials? Is the standard basis for $P_4$ given by the textbook correct? I thought that the set had to have a term of highest order 4.
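The row-reduction step can be sanity-checked with exact rational arithmetic. Below is a small sketch of my own (not the book's procedure): it row-reduces the coordinate vectors of the spanning polynomials $(x-2)x^k$, $k=0,\dots,5$, with respect to $\{1, x, \dots, x^6\}$; the nonzero rows of the reduced matrix form a basis for their span, i.e. for the range.

```python
from fractions import Fraction

def rref(rows):
    """Row-reduce a matrix (list of rows) exactly over the rationals."""
    m = [[Fraction(v) for v in r] for r in rows]
    pivot_row = 0
    for col in range(len(m[0])):
        # find a row at or below pivot_row with a nonzero entry in this column
        pr = next((r for r in range(pivot_row, len(m)) if m[r][col] != 0), None)
        if pr is None:
            continue
        m[pivot_row], m[pr] = m[pr], m[pivot_row]
        m[pivot_row] = [v / m[pivot_row][col] for v in m[pivot_row]]
        for r in range(len(m)):
            if r != pivot_row and m[r][col] != 0:
                m[r] = [a - m[r][col] * b for a, b in zip(m[r], m[pivot_row])]
        pivot_row += 1
    return [r for r in m if any(r)]

# coordinates of (x-2)*x^k for k = 0..5 in the basis {1, x, ..., x^6}
span = [[0]*k + [-2, 1] + [0]*(5-k) for k in range(6)]
basis = rref(span)
print(len(basis))   # 6: the six spanning polynomials are linearly independent
```

Rank 6 is consistent with getting "the identity matrix of size 6" from the book's method: both say the six images are independent, so any six independent combinations of them (such as the linked page's set, up to how $P_4$ is being indexed) span the same range.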

Is there notation for a set as a generated series?

Posted: 02 Jun 2022 06:22 AM PDT

For summation of a series I can write

$$ x_1 + x_2 + \ldots + x_N $$

but this much more compact notation is typically used:

$$ \sum_{i=1}^N x_i $$

On the other hand I often see sets expressed as

$$ X = \{x_1, x_2, \ldots, x_N\} $$

but is there a more compact representation like with summation that's commonly used?

Meaning and examples of Grothendieck condition AB4*

Posted: 02 Jun 2022 06:20 AM PDT

I am trying to understand Grothendieck's AB conditions (https://stacks.math.columbia.edu/tag/079A). In particular, I need to understand AB4(*). First of all, what is meant in an arbitrary abelian category by a product or coproduct of exact sequences? What are the morphisms in the (co)product sequence?

Anyway, what are some examples of AB4* categories? In particular, for a thesis I am writing I need to know whether cochain complexes form an AB4* category.

Proof that a relation defined via prime numbers is reflexive, symmetric and transitive

Posted: 02 Jun 2022 06:29 AM PDT

I have found the following problem and try to solve it by myself, but I have some doubts about it. The problem is the following:

A binary relation $P$ is defined on $\mathbb{Z}$ as follows: for all $m,n\in \mathbb{Z}$, $mPn\Leftrightarrow \exists$ a prime number $p$ such that $p\mid m$ and $p\mid n$.

Reflexive:

$mPm\Longleftrightarrow \exists p$ such that $p\mid m$ and $p\mid m$, so clearly the reflexive property holds.

(However, the solution book states that this property does not hold: the author considers $m=1$, and no prime divides $1$.)

Symmetric:

$mPn\Longleftrightarrow \exists p$ such that $p|m\wedge p|n$

$nPm\Longleftrightarrow \exists p$ such that $p|n\wedge p|m$

Because of the commutative law this symmetric property also holds.

For the transitive property I have seen that it could be fairly straightforward also:

If I have $mPn,nPq$, then I would like to prove that $mPq$

so I take a prime $p$ such that $p\mid m$ and $p\mid n$, because $mPn$. Also, I have that $p\mid n$ and $p\mid q$, because $nPq$, and so I can say that $p\mid m$ and $p\mid q$, proving that $P$ is transitive.

However, the solution book states that the transitive property does not hold, giving a counterexample with the values $m=2$, $n=6$ and $q=9$. We have $mPn$ because the prime number $2$ divides both $2$ and $6$, and $nPq$ holds when I choose $3$ as a prime that divides both $6$ and $9$. However, transitivity fails because the only prime dividing $2$ is $2$, and $2$ does not divide $9$.

The proof by counterexample seems right, but I would like to know if there is some way to see why transitivity fails without using the counterexample. The proof that I found and stated above seemed right, but the counterexample proves it wrong.

Thanks for your help.
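The flaw in the attempted proof is the quantifier: the prime witnessing $mPn$ need not be the same prime witnessing $nPq$. The book's counterexample can be checked mechanically; this is a quick sketch of mine (the `limit` bound is an arbitrary choice for the toy check, not part of the problem):

```python
def related(m, n, limit=100):
    """mPn iff some prime p divides both m and n (primes checked up to `limit`)."""
    def is_prime(p):
        return p > 1 and all(p % d for d in range(2, int(p**0.5) + 1))
    return any(is_prime(p) and m % p == 0 and n % p == 0
               for p in range(2, limit))

# the book's counterexample: m = 2, n = 6, q = 9
assert related(2, 6)        # they share the prime 2
assert related(6, 9)        # they share the prime 3
assert not related(2, 9)    # no common prime, so P is not transitive
```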

Venn diagram to a simplified boolean equation

Posted: 02 Jun 2022 06:12 AM PDT

I have a template Venn diagram ([image 1: three labelled circles in a universal set]):
It has three circles $a$, $b$, $c$, and a universal set $\mu$:
Circle $a = A+z+x$
Circle $b = B+z+y$
Circle $c = C+y+x$

Now, parts of this Venn diagram are shaded blue ([image 2: shaded regions]):
The shaded parts are $B+z+x+\mu$, with respect to image 1.
So the logical equation (boolean expression) would be $a'bc'+abc'+ab'c+a'b'c'$.

But we can combine some regions to simplify the expression; e.g. $B$ and $z$ together give $bc'$,
so the expression can be simplified to $bc'+ab'c+a'b'c'$.

We can also take derived gates like XOR, NAND, NOR into account. Again it depends on how we interpret the diagram; e.g. one may think of finding the white (unshaded) part and inverting it with a NOT gate.

I would like to subtract $o$ and $y$ after finding $\mu+z+o+y+B+x$, so
$\mu+z+o+y+B+x = \overline{cb} \cdot((\overline{a \oplus c})+c)$,
so now it has four gates:

  • a NAND gate, as in $\overline{cb}$, and an XNOR gate, as in $\overline{a \oplus c}$
  • an AND gate and an OR gate, represented by $\cdot$ and $+$

Going the other way round, we find that the lowest number of gates required for this diagram is three; one such example is $a\oplus(b \oplus \overline{a\cdot c})$, which uses 3 gates (XOR, XOR, NAND). But remember, this is not the only solution with the lowest gate count (3); there are many others.

So, is there any way to find the shortest expression (fewest gates) by some method of interpreting the Venn diagram?
That is, is there a specific way to interpret any Venn diagram and read off such an expression, like "first find $a$..., add $b$..., $c$..., then..."?
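Candidate simplifications can at least be verified mechanically. The sketch below is my own (not from the question): it checks that two boolean expressions for the shaded regions agree on all $2^3$ assignments. Finding a guaranteed-minimal gate count would need an exhaustive search (or Quine-McCluskey) on top of a check like this.

```python
from itertools import product

def original(a, b, c):
    # shaded regions read off the diagram: a'bc' + abc' + ab'c + a'b'c'
    return ((not a and b and not c) or (a and b and not c)
            or (a and not b and c) or (not a and not b and not c))

def simplified(a, b, c):
    # the hand-simplified form: bc' + ab'c + a'b'c'
    return (b and not c) or (a and not b and c) or (not a and not b and not c)

# brute-force equivalence over all 8 truth assignments
assert all(original(*v) == simplified(*v)
           for v in product([False, True], repeat=3))
```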

Meaning of a conditional probability

Posted: 02 Jun 2022 06:05 AM PDT

I am really not sure about the meaning of this conditional probability of $a$:

\begin{equation} P = \langle a|(\cos(x) > \cos_r(x))\rangle \end{equation}

where $\cos(x)$ ranges over the domain of the absolute value of the cosine ($[0,1]$), discretized into $N$ bins, and $\cos_r(x)$ is the value of the cosine measured each time. I need to create a PDF by computing $\cos_r(x)$ for 3000 time steps.

Topologically equivalent vs holomorphically equivalent line bundles

Posted: 02 Jun 2022 06:03 AM PDT

I know there are indeed examples showing that two line bundles over a projective manifold can be topologically equivalent but not holomorphically equivalent. But I find that I can give a "proof" that, for bundles over a projective manifold, topological equivalence implies holomorphic equivalence, and I can't figure out what is wrong with my proof.

Suppose our projective manifold is $M$. First, for any pair of topologically equivalent line bundles $L_1$ and $L_2$, suppose their curvatures are $\Omega_1$ and $\Omega_2$; we know that $\Omega_1=\Omega_2+d\alpha$ for some $\alpha\in \Omega^1(M)$.

Secondly, we know the Kähler identity: $d\alpha=\partial\bar{\partial}f$ for some $f\in C^{\infty}(M)$.

Now consider the complex gauge transformation $g=\exp{\frac{f}{2i}}$; at the level of curvature we have $g^*\Omega_1=\Omega_1+\partial\bar{\partial}f$, so we find a complex gauge transformation (holomorphic isomorphism) from $L_1$ to $L_2$.

Do I misunderstand something? Thank you for your answer.

Probability that at least one of the balls drawn will be green

Posted: 02 Jun 2022 06:30 AM PDT

There is a bag containing 17 balls of four different colors: red, green, blue and white. There are at least 2 balls of every color, and the green balls are the most numerous. If 11 balls are drawn at random such that at least two of them have the same color and the rest have different colors, what is the probability that there is at least one green ball among those 11?

I tried to find the distribution pattern of the balls: since there are at least 2 balls of each color, that accounts for 8 balls, leaving 9 to distribute. Hence $$r+g+w+b=9$$

Now I tried finding different solution sets by fixing values of $g$, and I understand that the number of green balls should be at least 5. But it still isn't working, as the calculations are getting messier. Is there some other point that I am missing? Thank you.

Analysing Geometric Brownian Motion

Posted: 02 Jun 2022 06:29 AM PDT

For uni I'm doing an exercise on Brownian motion, specifically geometric Brownian motion. For this part of the exercise I'm required to compute certain values: the sample mean, the sample variance, and the quadratic variation for m = 10, 100, 1000. The output I get is posted below, but I'm not quite happy with the results. The distribution of the endpoints is a sort of shifted normal distribution, so the variance and mean being off a little seems reasonable, but the thing I'm worried about is the quadratic variation, as this should be equal to T. T is set to 5, but the QV is nothing close to that. I can't see what I'm doing wrong; can anyone help me? Any help is appreciated!

import yfinance as yf
import numpy as np
import math
from matplotlib import pyplot as plt
import scipy.stats as stats


class BMG:  # Geometric Brownian Motion

    def __init__(self, drift, variance_term, m, T):
        self.dt = T/m
        self.Z = np.random.standard_normal(m)
        self.Y = np.zeros(m+1)
        for i in range(1, m+1):
            self.Y[i] = math.e**(variance_term*math.sqrt(self.dt)*self.Z[i-1]) + drift*self.dt
        self.arr = np.ones(m+1)
        for i in range(m):
            self.arr[i+1] = self.arr[i]*self.Y[i+1]
        self.end = self.arr[-1]


fig, axs = plt.subplots(2)

N = 500                 # Number of BM simulations
drift = -0.1
variance_term = 0.25
m = 500                 # Number of time-steps
T = 5                   # Time in years

ends = np.zeros(N)

for i in range(N):
    bmg = BMG(drift, variance_term, m, T)
    ends[i] = bmg.end
    axs[0].plot(np.arange(0, m+1), bmg.arr)            # Plotting BM trajectories
axs[1].hist(ends, density=True, bins=int(N/5))         # Plotting distribution of end points

plt.show()

# Sample Mean
sample_mu = 0
for i in range(N):
    sample_mu += ends[i]
sample_mu = sample_mu/N
print("sample mean:  " + str(sample_mu) + "  analytic mean:  " + str(bmg.arr[0]*math.e**(drift*T)))

# Sample Variance
sample_var = 0
for i in range(N):
    sample_var += (ends[i] - sample_mu)**2
sample_var = sample_var/(N-1)
print("sample variance:  " + str(sample_var) + "  analytic variance:  " + str((bmg.arr[0]**2) * (math.e**(2*drift*T)) * ((math.e**(variance_term**2))-1)))

# Quadratic variation
steps = [10, 100, 1000]
for n in steps:
    bmg1 = BMG(-0.1, 0.25, n, T)
    QV = 0
    for i in range(n):
        step = bmg1.arr[i+1] - bmg1.arr[i]
        QV += step**2
    print("for m = " + str(n) + ":   QV = " + str(QV))

Output:

sample mean:  0.6942405094555087  analytic mean:  0.6065306597126334
sample variance:  0.16157387382343052  analytic variance:  0.023726185505356632
for m = 10:   QV = 0.12398361878757773
for m = 100:   QV = 0.4954051008996326
for m = 1000:   QV = 0.10524633607091849
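One point worth separating out (my own sanity check, not part of the posted code): the "quadratic variation equals $T$" property belongs to the driving standard Brownian motion $W$; the quadratic variation of the GBM path itself is $\int_0^T \sigma^2 S_t^2\,\mathrm{d}t$, which is path-dependent and need not be anywhere near $T$. A minimal sketch verifying the first property:

```python
import numpy as np

# For a *standard* Brownian motion W on [0, T], the quadratic variation
# sum_i (W_{t_{i+1}} - W_{t_i})^2 converges to T as the partition is refined.
rng = np.random.default_rng(0)
T, m = 5.0, 100_000
dW = np.sqrt(T / m) * rng.standard_normal(m)   # increments of W
QV = np.sum(dW**2)
print(QV)   # close to T = 5
```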


Show that the map is a group homomorphism

Posted: 02 Jun 2022 06:36 AM PDT

I am struggling a bit with the concept of showing that a map is a group homomorphism and that it is an isomorphism.

So for example we have two groups $(H, *)$ and $(K, \circ)$

For example if we had $H \times K= \{(h, k): h ∈ H\text{ and }k ∈ K\}$, with the binary operation $\star$: $$(h_1, k_1) \star (h_2, k_2)= (h_1*h_2, k_1\circ k_2)$$

The subset $\{(h, 1): h ∈ H \}$ is a subgroup of $H \times K$.

How would you show that the map $\phi:H \longrightarrow H \times K$, $\phi(h) = (h, 1)$, is a group homomorphism, and that it is an isomorphism onto this subgroup, using the homomorphism theorem?

My attempt was to show that the map $\phi:H\longrightarrow H\times K$, $\phi(h)=(h,1)$, is a group homomorphism onto this subgroup:

$$\phi(g*h)=(g*h,1)=(g*h,1\circ 1)=(g,1)\star(h,1)=\phi(g)\star\phi(h)$$
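To build intuition, the homomorphism property can be brute-force checked for a small concrete choice of groups. The example below is mine, not from the question: $H=\mathbb{Z}_4$ and $K=\mathbb{Z}_6$ under addition, with the identity of $K$ written as $0$.

```python
from itertools import product

H, K = range(4), range(6)

def phi(h):
    # the embedding h -> (h, identity of K)
    return (h, 0)

def star(p, q):
    # componentwise operation on H x K
    return ((p[0] + q[0]) % 4, (p[1] + q[1]) % 6)

# phi(g * h) == phi(g) ⋆ phi(h) for all g, h in H
assert all(phi((g + h) % 4) == star(phi(g), phi(h))
           for g, h in product(H, repeat=2))
```

Since $\phi$ is visibly injective, this kind of check also makes it plausible that $\phi$ is an isomorphism onto its image, which is what the homomorphism theorem then confirms in general.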

Improper integral $\int_0^{2020} \frac{\sqrt[3]{x}\arctan{x^\frac{3}{2}}}{\ln(1+x^2)\sin(\sqrt{x})}dx$

Posted: 02 Jun 2022 06:03 AM PDT

I have to investigate the convergence of the improper integral $$\int_0^{2020} \frac{\sqrt[3]{x}\arctan{(x^\frac{3}{2}})}{\ln(1+x^2)\sin(\sqrt{x})}dx.$$ The singularities are at $x = k^2\pi^2$ for $k\in\mathbb{N}_0$ (the zeros of $\sin\sqrt{x}$). The integral can be written as a sum of integrals whose bounds are these singularities. I found that the last integral in the sum diverges, and therefore the whole integral diverges. Am I right? Is this explanation satisfactory? Any help is welcome. Thanks in advance.

Length formula for Lipschitz curve, i.e. ${\rm Length}\ \gamma = \int_a^b|\gamma '(t)| dt $

Posted: 02 Jun 2022 06:10 AM PDT

I want to prove the following problem, but I think I can only complete half of it: the part ${\rm Length}\ \gamma \geq \int_a^b|\gamma '(t)|\, dt$ remains to be proved.

Problem : If $\gamma :[a,b]\rightarrow \mathbb{R}^3$ is $L$-Lipschitz map with $\gamma(0)=(0,0,0)$, then we define ${\rm Length}\ \gamma =\sup_{P} \bigg\{\sum_{i=1}^N |\gamma(t_i)-\gamma(t_{i-1} )| \bigg\}$ over all partition $P=\{t_i\}_{i=0}^N$. Then prove that $$ {\rm Length}\ \gamma =\int_a^b|\gamma '(t)| dt $$

(This is an exercise in the book, What is differential geometry : curves and surfaces - Petrunin and Barrera)

Proof :

Step 1 : If $\gamma$ is $L$-Lipschitz, then each component function $x,\ y,\ z$ of $\gamma$ is $L$-Lipschitz:

\begin{align*}L|s-t|&\geq |\gamma (s)-\gamma (t)| \\&= \sqrt{ (x(s)-x(t))^2 + (y(s)-y(t))^2 + (z(s)-z(t))^2} \\& \geq | x(s)-x(t) | \end{align*}

Step 2 : By the Rademacher theorem, $\gamma$ is differentiable almost everywhere, so we have measurable functions $x',\ y',\ z'$ with a bound $M$.

By the Lusin theorem on $[a,b]$, there are continuous functions $X_\epsilon, \ Y_\epsilon,\ Z_\epsilon$ with the bound $M$ such that $x'$ and $X_\epsilon$ coincide outside some set of measure $< T$, and likewise for $Y_\epsilon$ and $Z_\epsilon$.

Step 3 : We define $\alpha = (f,g,h)(t)$ where $$ f(t)=\int_a^t X_\epsilon (s)ds,\ g(t)=\int_a^t Y_\epsilon (s)ds,\ h(t)=\int_a^t Z_\epsilon (s)ds $$

Hence $\alpha$ is of class $C^1$, i.e. has continuous first derivatives, so that ${\rm Length}\ \alpha = \int_a^b |\alpha'(t)|\ dt$ by an earlier result in the same book (I think I have already proved it using the fundamental theorem of calculus, but my argument is still messy).

Furthermore, by Lusin Theorem, $$ \bigg| \int_a^b |\alpha'(t)| dt -\int_a^b|\gamma'(t)|dt\bigg|<2\sqrt{3}M T $$

By Rademacher Theorem, $$ |x(t)-f(t)| =\bigg|\int_a^t (x'(s)- X_\epsilon (s)) ds \bigg|<2MT $$

Hence $$|\alpha (t)-\gamma(t)| \leq 2\sqrt{3} MT $$

$$\bigg||\gamma (t_i)-\gamma (t_{i+1}) | - |\alpha (t_i)- \alpha (t_{i+1} )| \bigg| < 4\sqrt{3} MT $$

Assume that given $\epsilon>0$, there is a partition $\{ t_i\}_{i=0}^N$ for $\gamma$ s.t. $$\bigg|{\rm Length}\ \gamma - \sum_{i=1}^N\ |\gamma (t_i)-\gamma (t_{i+1}) | \bigg|<\epsilon$$

If $4\sqrt{3}MNT<\epsilon$, then $$ \bigg|{\rm Length}\ \gamma -\sum_i\ |\alpha (t_i)- \alpha (t_{i+1} )|\bigg| < 2\epsilon $$

so that \begin{align*}{\rm Length}\ \gamma &\leq\sum_{i=1}^N \ \bigg|\alpha (t_i)- \alpha (t_{i-1} )\bigg| + 2\epsilon \\&\leq \int_a^b|\alpha'(t)|dt +2\epsilon \\&\leq \int_a^b|\gamma'(t)|dt +3\epsilon \end{align*}

How could we prove the remaining inequality?

Prove that${n \choose k} \frac{1}{n^k} \le 2^{1-k}$, concluding that: $2 \le e \le 3$

Posted: 02 Jun 2022 06:28 AM PDT

Prove that $${n \choose k} \frac{1}{n^k} \le 2^{1-k}$$ for all $1 \le k \le n$, by using $$e := \lim_{n \rightarrow \infty} \Big(1+\frac{1}{n}\Big)^n.$$

Conclude that: $2 \le e \le 3$.

My idea is the following:

${n\choose k}=\frac{n!}{k!(n-k)!}$; by using it, the claim becomes $$\frac{n!}{k!(n-k)!}\cdot\frac{1}{n^k} \le 2^{1-k}.$$ I'm not sure that this step makes any sense, since the exercise seems to be much more difficult now.

My second idea is that:

$${n\choose k}= {n\choose n-k}=\frac{n!}{{(n-k)}!(n-{(n-k)})!} = \frac{n!}{(n-k)!\,k!}$$ using the substitution:

$$\frac{n!}{(n-k)!\,k!}\cdot\frac{1}{n^k} \le 2^{1-k},$$ which again makes the problem more complicated rather than simpler. I would appreciate your help a lot.
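One route I would try (an assumption on my part, not necessarily the exercise's intended path) is the chain $\binom{n}{k}\frac{1}{n^k} \le \frac{1}{k!} \le \frac{1}{2^{k-1}}$, which follows from $\binom{n}{k} \le \frac{n^k}{k!}$ and $k! \ge 2^{k-1}$. The inequality itself can at least be spot-checked numerically:

```python
from math import comb

# Spot-check C(n,k)/n^k <= 2^(1-k) for 1 <= k <= n <= 40.
# (C(n,k)/n^k <= 1/k! and k! >= 2^(k-1) is where the bound comes from.)
for n in range(1, 41):
    for k in range(1, n + 1):
        assert comb(n, k) / n**k <= 2.0**(1 - k) + 1e-15
print("bound holds for all tested n, k")
```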

Matrix group acting on a vector space by left multiplication: find the size of orbits

Posted: 02 Jun 2022 06:04 AM PDT

I have been doing some studying for abstract algebra and I am a bit confused about finding the size/length of orbits for matrix groups. Most examples in books don't seem to use matrices with integer entries.

If the matrix is :\begin{pmatrix}a&b\\ c&d\end{pmatrix} with $a, b, c, d$ being elements of $\Bbb Z_4$.

For example: \begin{pmatrix}1&2\\ 0&1\end{pmatrix}

and the matrix group acts on the vector space $\Bbb Z_4 \times\Bbb Z_4$ by left multiplication \begin{pmatrix}e\\ f\end{pmatrix} where $e,f$ are elements of $\Bbb Z_4$.

For example vector: \begin{pmatrix}1\\ 2\end{pmatrix}

How do you work out the size/ length of the orbit for one of these vectors?

I know how to generate all the vectors, so that part of the working out I am ok with.
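For the cyclic subgroup generated by a single matrix, the orbit can be computed by iterating the generator until the orbit closes up. This is a sketch of that idea using the example matrix and vector above (for a group with several generators one would close under all of them, and orbit-stabilizer then relates the orbit size to the group order):

```python
import numpy as np

M = np.array([[1, 2], [0, 1]])   # generator, entries in Z_4
w = (1, 2)                       # starting vector

orbit = set()
while w not in orbit:            # M is invertible mod 4, so the trajectory is a cycle
    orbit.add(w)
    w = tuple(int(x) % 4 for x in M @ w)

print(sorted(orbit), len(orbit))   # → [(1, 2)] 1
```

Here $M\binom{1}{2} = \binom{5}{2} \equiv \binom{1}{2} \pmod 4$, so this particular vector is fixed and its orbit has size 1; other starting vectors give larger orbits.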

Why does the following MATLAB function for the Gauss-Seidel method output a 3-component vector instead of a 2-component one?

Posted: 02 Jun 2022 06:02 AM PDT

I have implemented the Gauss-Seidel method in MATLAB, with the code at the bottom, and I am applying it to a simple problem $Ax=b$:

$\begin {bmatrix} 1 & 2\\ 3 & 4\end{bmatrix}x=\begin {bmatrix} 3 \\ 7\end{bmatrix}$ whose solution is $\begin {bmatrix} 1 \\ 1\end{bmatrix}$

When I call Gauss_Seidel_mat(A,b,[0;0],10^-3,2000)

MATLAB yields a completely incorrect result and an astronomical value for one of the variables:

ans =

   1.0e+307 *

    8.5597
      -Inf
  1. I know the convergence of the method is only guaranteed for strictly diagonally dominant or for symmetric positive-definite matrices, so is that the reason for this result? Does this mean that every time I want to use the G-S method I should implement a check that the matrix is either strictly diagonally dominant or symmetric positive-definite, and not use the method otherwise?

  2. What is the meaning of the * in the MATLAB output for x, and why is it outputting 3 values instead of 2? The solution should have only two components. I noticed that the 3 values and the * first appear if I call the function with kmax >= 17:

>> Gauss_Seidel_mat(A,b,[0;0],10^-3,16)

ans =

  876.7878
 -655.8408

>> Gauss_Seidel_mat(A,b,[0;0],10^-3,17)

ans =

   1.0e+03 *

    1.3147
   -0.9843

Code:

function [x,k,ier] = Gauss_Seidel_mat(A,b,x,tol,kmax)
% Gauss-Seidel
D = tril(A);
C = A-D;
for k = 1:kmax
    y = x;
    x = D\(b-C*y);
    if norm(x-y,inf) <= tol*norm(x,inf)
        ier = 0;
        return
    end
end
ier = 1;
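Regarding question 1: a quick way to see that divergence is expected here (a sketch in Python of the standard criterion, not a fix for the MATLAB code) is to look at the spectral radius of the Gauss-Seidel iteration matrix $-D^{-1}C$; the iteration $x_{k+1}=D^{-1}(b-Cx_k)$ converges for every starting vector iff this radius is below 1.

```python
import numpy as np

# Mirror the MATLAB splitting D = tril(A), C = A - D for this A.
A = np.array([[1.0, 2.0], [3.0, 4.0]])
D = np.tril(A)
C = A - D
G = -np.linalg.solve(D, C)            # iteration matrix -D^{-1} C
rho = max(abs(np.linalg.eigvals(G)))
print(rho)                            # 1.5 here, so the iteration diverges
```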

Find the limit $\lim_{n \to \infty} \frac{a_1 + a_2 + a_3 + ... + a_n}{\ln(n)}$ where $a_{n} = \int_{1}^{e} \ln^{n}(x)dx$

Posted: 02 Jun 2022 06:25 AM PDT

First I tried to solve this problem by noticing that: $$a_1 + a_2 + a_3 + ... + a_n = \sum_{k=1}^{n}a_{k}=\sum_{k=1}^{n}\int_{1}^{e}\ln^{k}(x)dx = \int_{1}^{e}\sum_{k=1}^{n}\ln^{k}(x)dx \\= \int_{1}^{e}\frac{\ln(x)-\ln^{n+1}(x)}{1-\ln(x)}dx,$$ but this didn't really help.

Then I arrived at the equivalent form $a_n = \int_{0}^{1}x^{n}e^{x}dx$, which yields: $$a_1 + a_2 + a_3 + ... + a_n = \int_0^1 \frac{x-x^{n+1}}{1-x}e^xdx.$$

My last attempt was to introduce a new function $F(t) = \int_0^1x^ne^{tx}dx$.

$$F^{\prime}(t)= \frac{d}{dt}\int_0^1x^ne^{tx}dx =\int_0^1 \frac{\partial}{\partial t}x^ne^{tx}dx=\int_0^1 x^n e^{tx} xdx = \int_0^1x^{n+1}e^{tx}dx.$$

By doing IBP and simplifying I got this differential equation: $$ F^{\prime}(t)= \frac{e^t}{t}-\frac{n+1}{t}F(t)$$ which I don't know how to solve.
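The equivalent form $a_n=\int_0^1 x^n e^x\,dx$ (from substituting $x=e^u$ in the original integral) can be double-checked numerically. The sketch below is mine: it compares both integrals with a simple midpoint rule, and also checks $a_3 = 6-2e$, which follows from integrating by parts three times.

```python
import math

def midpoint(f, a, b, n=50_000):
    """Composite midpoint rule for a definite integral."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

n = 3
lhs = midpoint(lambda x: math.log(x)**n, 1.0, math.e)     # integral_1^e ln^n(x) dx
rhs = midpoint(lambda x: x**n * math.exp(x), 0.0, 1.0)    # integral_0^1 x^n e^x dx
assert abs(lhs - rhs) < 1e-6
assert abs(lhs - (6 - 2*math.e)) < 1e-6   # exact value of a_3 by parts
```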

Can the Householder transformation be used in $\operatorname{GF}(2^8)$?

Posted: 02 Jun 2022 06:08 AM PDT

I have a linear system of equations over $\operatorname{GF}(2^8)$ which I currently solve with Gaussian elimination. However, for several reasons my current implementation now requires an alternative approach to solving the linear system of equations.

My first alternative approach was the Householder transformation. There, I noticed that calculating the norm of the currently observed column is necessary. In my first, naive implementation I used the Euclidean norm for this. However, I realized that this yields invalid divisions by zero, so that I fail to eliminate the linear system of equations through Householder matrices.

An answer given in "How to find orthogonal vectors in GF(2)" stated that the Gram-Schmidt orthogonalization method is not possible in $\operatorname{GF}(2)$. For this reason I am wondering whether the Householder transformation is likewise impossible in $\operatorname{GF}(2^8)$ because of the missing norm for vectors.

Thank you in advance for your efforts.
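The obstruction mentioned in that answer can be made concrete with a toy computation (mine, over $\operatorname{GF}(2)$ rather than $\operatorname{GF}(2^8)$, but both fields have characteristic 2): a nonzero vector can be orthogonal to itself, so the "norm" that a Householder reflection would normalize by can be zero.

```python
def dot_gf2(u, v):
    """Dot product over GF(2)."""
    return sum(a & b for a, b in zip(u, v)) % 2

v = (1, 1, 0)
assert v != (0, 0, 0)
assert dot_gf2(v, v) == 0   # nonzero, yet self-orthogonal: division by "norm" fails
```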

How to solve $x^y = ax-b$

Posted: 02 Jun 2022 06:18 AM PDT

I have encountered this equation:

$$x^y = ax-b$$

I know how to find $y$ as a function of $x$:

$$y\ln(x) = \ln(ax-b)$$

$$y = \frac{\ln(ax-b)}{\ln(x)}$$

or

$$y = \log_x(ax-b)$$

But the problem is that I need to find $x$ as a function of $y$, and I don't know how.
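I would not expect an elementary closed form for $x(y)$ in general; numerically, though, $x$ can be recovered for given $y, a, b$ by root-finding. A bisection sketch, with example values ($y=2$, $a=3$, $b=1$) of my own choosing:

```python
def solve_x(y, a, b, lo, hi, tol=1e-10):
    """Find x in [lo, hi] with x**y = a*x - b, assuming f changes sign there."""
    f = lambda x: x**y - (a*x - b)
    assert f(lo) * f(hi) < 0, "need a sign change on [lo, hi]"
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

x = solve_x(2, 3, 1, 2.0, 3.0)
# x^2 = 3x - 1 has the root (3 + sqrt(5))/2 in [2, 3]
assert abs(x - (3 + 5**0.5) / 2) < 1e-8
```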

Problems on Discrete Probability [closed]

Posted: 02 Jun 2022 06:07 AM PDT

  1. Eight people were seated in a row, where every possible seating is equiprobable. Find the probability of each of the following events separately:
     (a) two particular persons A and B were sitting next to each other;
     (b) there were five men and they were sitting next to each other;
     (c) there were four married couples and each couple was sitting together.
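For part (a) the standard count gives $\frac{2\cdot 7!}{8!} = \frac{1}{4}$, and since $8! = 40320$ is small, this can be verified by exhaustive enumeration. The sketch below is mine; parts (b) and (c) can be checked the same way with different adjacency conditions.

```python
from fractions import Fraction
from itertools import permutations

n = 8
favourable = total = 0
for seating in permutations(range(n)):        # person 0 plays A, person 1 plays B
    total += 1
    if abs(seating.index(0) - seating.index(1)) == 1:
        favourable += 1

print(Fraction(favourable, total))   # → 1/4
```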

Vector valued definite volume integral

Posted: 02 Jun 2022 06:18 AM PDT

In the context of a seminar lecture on some physico-chemical subject (magnetic shielding by electrons on atomic scale) I wanted to present a simple analytical example for the solution of an equation that is usually solved numerically. For that I have chosen a simple but still relevant case. Finally the thing boils down to the solution of this integral:

$$ \vec{\sigma}(\mathbf{r}_0) = \int_{\mathbb{R}^3} \mathbf{r}\times (\vec{e}_z \times \mathbf{r})\, \frac{\exp(-2|\mathbf{r}+\mathbf{r}_0|)}{r^3}\,\mathrm{d}\mathbf{r} \tag{1} $$

here $\vec{\sigma},\mathbf{r}, \vec{e}_z$ and $\mathbf{r}_0\in\mathbb{R}^3$ in particular the "position" vector $\mathbf{r}=\begin{pmatrix}x\\ y\\ z\end{pmatrix}$, $\mathbf{r}_0=\begin{pmatrix}x_0\\ y_0\\ z_0\end{pmatrix}$ and $\vec{e}_z=\begin{pmatrix} 0\\ 0\\ 1\end{pmatrix}$. $r:=\sqrt{\mathbf{r}\cdot\mathbf{r}}$ (i.e. the norm of $\mathbf{r}$). Accordingly for the resulting function we have $\vec{\sigma}: \mathbb{R}^3\to\mathbb{R}^3$. I use the integration symbol $\int_{\mathbb{R}^3} \mathrm{d}\mathbf{r}$ for the volume integral $\int_{-\infty}^\infty\int_{-\infty}^\infty\int_{-\infty}^\infty \mathrm{d}x\mathrm{d}y\mathrm{d}z$, which is sometimes also denoted $\int_{-\infty}^\infty\int_{-\infty}^\infty\int_{-\infty}^\infty \mathrm{d}V$

I have tried various strategies, namely all 4 combinations of A) with and without a change of variable ($\mathbf{r}':=\mathbf{r}+\mathbf{r}_0$) and B) using cartesian or spherical coordinates ($\int_{-\infty}^\infty\int_{-\infty}^\infty\int_{-\infty}^\infty \cdot\ \mathrm{d}x\,\mathrm{d}y\,\mathrm{d}z=\int_{0}^\pi\int_{0}^{2\pi}\int_{0}^\infty \cdot\ r^2\sin\theta\,\mathrm{d}r\,\mathrm{d}\phi\,\mathrm{d}\theta$), but in each case I finally obtain expressions which I fail to solve. For example, in cartesian coordinates and without the transformation I finally obtain $$ \int_{\mathbb{R}^3} \begin{pmatrix}-xz\\-yz\\x^2+y^2\end{pmatrix} \frac{\exp(-2\sqrt{x^2 +y^2 +z^2 + 2(xx_0 + yy_0 + zz_0) + r_0^2})}{(x^2+y^2+z^2)^\frac{3}{2}}\,\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}z \tag{2}$$

I believe that an analytical (symbolic) integration should be feasible, since by some physical arguments one can "guess" a solution that fulfills certain constraints one would expect the solution to fulfill. This solution is

$$ \vec{\sigma}^{guess}(\mathbf{r}_0)=\begin{pmatrix}0\\0\\1\end{pmatrix}\frac{1+2r_0}{\pi}e^{-2r_0}$$

However, knowing this didn't really help me to solve $(1)$, except for finding some symmetry arguments for why the $x$ and $y$ components are $=0$ (for example, this can be seen from $(2)$: the integrands of the $x,y$ components are odd, while the $z$ component is even).

Who can help me to solve $(1)$ "systematically"?

(Note: This question is an extension/generalization of this one I have asked previously.)

Does $ \sum_{k=1}^{n} \frac{(n-k)^k}{k!} $ have a closed-form expression in terms of $n \in \mathbb{N}$?

Posted: 02 Jun 2022 06:04 AM PDT

Does $ \sum_{k=1}^{n} \frac{(n-k)^k}{k!} $ have a closed-form expression in terms of $n \in \mathbb{N}$? It seems to grow a bit faster than $e^{0.5n}$, but there's clearly more to it, and I don't know how to look this up.

I know that:

$ \sum_{k=1}^{\infty} \frac{z^k}{k!} = e^z - 1$. I'm looking for a similar closed-form expression that eliminates the $k$.

$ \sum_{k=1}^{n} \frac{n^k}{k!} = e^n \frac{\Gamma(n+1,n)}{\Gamma(n+1)} - 1$, where the $\Gamma$ ratio is close to 1/2.

$ \sum_{k=1}^{\infty} \frac{(n-k)^k}{k!} $ diverges. (It makes no sense for my purpose anyway once $n-k$ goes negative, since odd exponents then flip the sign.)

Thanks!
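For experimenting with the growth, the sum is cheap to evaluate exactly in rational arithmetic. A small sketch of mine (no closed form claimed):

```python
from fractions import Fraction
from math import factorial

def S(n):
    """Compute sum_{k=1}^{n} (n-k)^k / k! exactly."""
    return sum(Fraction((n - k)**k, factorial(k)) for k in range(1, n + 1))

# first few values, computed exactly
print([S(n) for n in range(1, 5)])   # → [Fraction(0, 1), Fraction(1, 1), Fraction(5, 2), Fraction(31, 6)]
```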

On the properties of sum-of-squares polynomials

Posted: 02 Jun 2022 06:34 AM PDT

Def 1: If a multivariate polynomial $f$ can be written as a finite sum of squared polynomials (i.e., $f(x)=\sum_{i = 1}^n g_i(x)^2$), then $f$ is SOS.

Def 2: If an $n$-variate polynomial $f$ is nonnegative on $\Bbb R^n$, then $f$ is positive semidefinite (psd).

We know all SOS polynomials are psd. However, it is not true that all psd polynomials are SOS; for example, the Motzkin polynomial:

$$f(x,y)= 1 + x^2 y^4 + x^4 y^2 - 3 x^2 y^2 $$

Now, my question is as follows. Suppose $f(x_1, \dots ,x_n)$ is sos and $\min\limits_{x \in \Bbb R^n} f = r $. Is $f(x)-\alpha$ SOS for all $0 \le \alpha <r$? If not, would you give me a counterexample?

Prove: if $k_1$ and $k_2$ are positive semidefinite kernels, then $\min\{k_1, k_2\}$ and $\max\{k_1, k_2\}$ are psd too.

Posted: 02 Jun 2022 06:35 AM PDT

I can prove that for $\mathbb{R}^+$ the function $\min(x,y)$ is a positive semidefinite kernel, but I am stuck proving the following statement.

Suppose $k_1(x,y)$ and $k_2(x,y)$ are positive semidefinite kernels from $\chi \times \chi \rightarrow \mathbb{R}$.

Prove that both $k_{min}$ and $k_{max}$ are positive semidefinite kernels too:

  • $k_{min}(x,y) = \min\{k_1(x,y), k_2(x,y)\}$
  • $k_{max}(x,y) = \max\{k_1(x,y), k_2(x,y)\}$

Why is $L(x) = x^{o(1)}$ for any slowly varying function $L$?

Posted: 02 Jun 2022 06:19 AM PDT

Question

I've heard that for any slowly varying function $L$, it's true that $L(x) = x^{o(1)}$. But why is this the case?

Note: Here I'm using the standard (Karamata) definition of slow variation, namely a measurable function $L : (0,\infty) \to (0,\infty)$ is called slowly varying if $$ \frac{L(\gamma x)}{L(x)} \underset{x \to \infty}{\longrightarrow} 1 \quad \text{for all $\gamma > 0$.} $$

Thoughts/Attempt

From the Karamata representation of $L$, we have $$ L(x) = \exp\left\{ \eta(x) + \int_B^x \frac{\varepsilon(t)}{t} dt \right\}, \quad \text{for all $x > B$,} $$ for some $B>0$ and some bounded measurable functions $\eta$ and $\varepsilon$ with $\lim_{x \to \infty} \varepsilon(x) = 0$ and $\lim_{x \to \infty} \eta(x) \in \mathbb R$.

So, $$ \begin{aligned} && L(x) &= x^{o(1)} \\ &\Leftrightarrow & \log L(x) &= \eta(x) + \int_B^x \frac{\varepsilon(t)}{t} dt = o(1) \cdot \log x \\ &\Leftrightarrow & \underbrace{\frac{\eta(x)}{\log x} }_{\to 0} + \frac{1}{\log x} \int_B^x \frac{\varepsilon(t)}{t} dt &= o(1) \\ &\Leftrightarrow & \frac{1}{\log x} \int_{\log B}^{\log x} \varepsilon\big(\text e^u \big) du &= o(1) \qquad \text{($u := \log t$)}. \end{aligned} $$ The term on the left-hand side of the last line is clearly $O(1)$. But why is it $o(1)$ -- i.e. why does it have to converge to $0$?

Are coherent modules hopfian?

Posted: 02 Jun 2022 06:15 AM PDT

As is well known, a noetherian module over an arbitrary ring is hopfian.

Are coherent modules also hopfian?

Find the asymptotic upper bound

Posted: 02 Jun 2022 06:04 AM PDT

I need to find the asymptotic upper bounds in $O$ notation for $T(N)$ in two recurrences. Assuming that $T(N)$ is constant for sufficiently small $N$, I need to make the bounds as tight as possible.

$T(N) = T(N-3) + 3 \log N$

$T(N) = 2T(N/4) + \sqrt{N}$

I know I should use the master theorem, but I can't apply it correctly to save my life. Can someone walk me through this?
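For the second recurrence the master theorem applies directly: $a=2$, $b=4$, and $N^{\log_4 2}=\sqrt{N}$ matches $f(N)=\sqrt{N}$, so case 2 gives $T(N)=\Theta(\sqrt{N}\log N)$. (The first recurrence, $T(N)=T(N-3)+3\log N$, is not in master-theorem form; unrolling it gives a sum of roughly $N/3$ logarithms, hence $O(N\log N)$.) Assuming the base case $T(1)=1$ (my choice, since the problem only says $T$ is constant for small $N$), the case-2 bound can even be checked exactly on powers of 4, where $T(4^k)=2^k(k+1)$:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(N):
    # T(N) = 2 T(N/4) + sqrt(N), with the assumed base case T(1) = 1
    if N <= 1:
        return 1
    return 2 * T(N // 4) + int(N**0.5)

# exact closed form on powers of 4: T(4^k) = 2^k * (k + 1) = sqrt(N)(log_4 N + 1)
for k in range(1, 10):
    assert T(4**k) == 2**k * (k + 1)
```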

How many digits does the integer zero have?

Posted: 02 Jun 2022 06:31 AM PDT

Should zero be classified as having no digits, or 1 digit?
