Sunday, May 22, 2022

Recent Questions - Mathematics Stack Exchange

On the eigenvalue problem

Posted: 22 May 2022 06:37 AM PDT

A standard approach to the eigenvalue problem is as follows. We are looking for solutions $\omega$ of the following:

$\Omega\vec v=\omega\vec v \tag{1},$

where $\vec v$ is some nonzero element of our vector space. We conclude that if there exists a solution, it must then satisfy

$$(\Omega-\omega\hat 1)\vec v=\vec 0, \tag{2}$$

in which $\hat 1$ is the identity operator. Next, we see that if such a solution exists, then the operator $(\Omega-\omega\hat 1)$ must not be invertible, a statement which is equivalent to $\det(\Omega-\omega\hat 1)=0$. The determinant is a polynomial in $\omega$ of the form

$$\det(\Omega-\omega\hat 1)=\sum_{n=0}^N c_n\omega^n. \tag{3}$$

Finally, we conclude that the eigenvalues of $\Omega$ must be the solutions of the "characteristic equation" obtained by setting this polynomial equal to zero.

My problem is this: we know that if a solution of $(2)$ exists, the operator in brackets must be singular. However, when determining the eigenvalues we seem to rely on the converse: if $(\Omega-\omega\hat 1)$ is singular (i.e. $\omega$ solves $\det(\Omega-\omega\hat 1)=0$), then $\omega$ is a solution of $(1)$. I don't see why this must be true.
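A quick numerical illustration of the procedure described above (a minimal sketch of my own; the matrix is an arbitrary example, not part of the question) is to compare the roots of the characteristic polynomial with the eigenvalues Matlab computes directly:

Omega = [2 1 0; 1 3 1; 0 1 4];   % arbitrary example matrix
c = poly(Omega);                 % coefficients of det(w*I - Omega), descending powers of w
disp(sort(roots(c)));            % roots of the characteristic polynomial ...
disp(sort(eig(Omega)));          % ... agree with the eigenvalues of Omega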

Proving Tensor identities using Penrose diagrammatic notation (Page-322, Penrose, Road to Reality)

Posted: 22 May 2022 06:35 AM PDT

I want to show that for an antisymmetric tensor $S^{ab}$ with two upper indices, $$Q^{bcd}=S^{a[b} \nabla_a S^{cd]}=0.$$

I'm trying to figure out how to prove this equality using the Penrose diagrammatic notation. I've tried to make a diagram of it below:

[diagram from the question not included]

Could someone explain to me how to proceed from here?

Fractional powers of the Laplacian

Posted: 22 May 2022 06:33 AM PDT

I have recently started to study fractional powers of the Laplacian. In the books I've read, fractional powers are defined only for $$-\Delta =-\sum_{j=1}^n\frac{\partial^2}{\partial x_j^2}.$$ My question is: why are fractional powers defined for $-\Delta$ and not for $\Delta$? Is this somehow related to the spectrum of $-\Delta$ being positive?

Thank you in advance.
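For reference (my own addition, not from the books mentioned): one common definition uses the Fourier multiplier
$$(-\Delta)^{s}u = \mathcal{F}^{-1}\!\big(|\xi|^{2s}\,\hat{u}(\xi)\big), \qquad 0<s<1,$$
which makes sense as written because the symbol of $-\Delta$ is $|\xi|^{2}\ge 0$, so its fractional powers $|\xi|^{2s}$ are real and well defined.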

Linear Algebra SVD

Posted: 22 May 2022 06:33 AM PDT

Can you please help me with proving this? [the problem statement was given as an image, not included here] Thanks in advance.

Show: $\text{lcm}(a,a+p)=\text{lcm}(b,b+p), p \;\text{prime}\implies a=b$

Posted: 22 May 2022 06:40 AM PDT

(Romanian Mathematical Olympiad). Let $a,b$ be positive integers such that there exists a prime $p$ with the property $\operatorname{lcm}(a,a+p)=\operatorname{lcm}(b,b+p)$. Prove that $a=b$.

What I could do: WLOG $p \mid a$ and $p \nmid b$, so $a=pk$ for some $k \in \mathbb{Z}$. Then $\operatorname{lcm}(pk,pk+p)=\operatorname{lcm}(pk,p(k+1))=pk(k+1)$ and $\operatorname{lcm}(b,b+p)=b(b+p)$.

We want to prove that $pk(k+1)=b(b+p)$ is impossible, but I don't know how.
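Not a proof, but a quick brute-force sanity check of the statement over a small range (a sketch of my own; the bounds 200 and 50 are arbitrary) prints nothing if no counterexample exists in that range:

for p = primes(50)
    for a = 1:200
        for b = 1:200
            if a ~= b && lcm(a, a+p) == lcm(b, b+p)
                fprintf('counterexample: a=%d, b=%d, p=%d\n', a, b, p);
            end
        end
    end
end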

Shorthand set builder notation?

Posted: 22 May 2022 06:20 AM PDT

I was reading some papers and they had the following notation:

$$ \big\{k\big\}_{k=0}^n $$

I assume this implies that

$$ \big\{k\big\}_{k=0}^n = \{k \in \mathbb{WHAT} : 0 \leq k \leq n\} $$

What is $k$ an element of? I assume integers, but it might as well be real or complex. I propose a better, less ambiguous notation:

$$ \big\{k \in \mathbb{SOMETHING}\big\}_{k=0}^n $$

Would this work better or does this mean something completely different?

Is $f(x) = 1$ for $0 \le x \le 1$ and $f(x) = 0$ for $1 < x \le 2$ Riemann integrable?

Posted: 22 May 2022 06:26 AM PDT

Let $$f(x) = \begin{cases}1 , & 0 ≤ x ≤ 1 \\ 0, & 1 < x ≤ 2.\end{cases}$$

I know that it is Riemann integrable, but don't know how to prove it.

I need to show that for every $\epsilon > 0$ there is a partition $P$ with $U(f, P) - L(f, P) < \epsilon$.

I have chosen $P = \{0, 1, 2\}$.

What I have found is:

$L(f, P) = 0 (1-0) + 1 (2-1) = 1$

$U(f, P) = 1 (1-0) + 2 (2-1) = 3$

$U(f, P) - L(f, P) = 3-1 =2$

So do I have to choose an $\epsilon > 2$?

Or am I all wrong here?
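One way to see what a finer partition buys (a sketch of my own, not from the course): take $P_\delta=\{0,\,1,\,1+\delta,\,2\}$ with small $\delta>0$. On $[0,1]$ both $\sup$ and $\inf$ of $f$ are $1$; on $[1,1+\delta]$ the $\sup$ is $1$ (attained at $x=1$) and the $\inf$ is $0$; on $[1+\delta,2]$ both are $0$. Hence
$$U(f,P_\delta)-L(f,P_\delta)=\delta,$$
which can be made smaller than any prescribed $\epsilon>0$.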

Solve $a^{3}+(a+1)^{3}+\ldots+(a+6)^{3}=b^{4}+(b+1)^{4}$

Posted: 22 May 2022 06:14 AM PDT

I tried working mod $6$, using the fact that $6$ consecutive integers have residues $0,1,2,\dots,5$, and then checked the residue of $b^{4}+(b+1)^{4}$ mod $6$, but wasn't able to get any satisfactory result. Please provide some hints.

Creating non-intersecting linear Diophantine equations

Posted: 22 May 2022 06:13 AM PDT

I have a problem involving linear Diophantine equations which I know should be fairly simple, but I've forgotten the math. Let $r(t)$ and $w(t)$ be two linear, partial Diophantine equations: $$ r(t) = k_r(t-c) + m_r\quad c\leq t< n_r+c\\ w(t) = k_wt+m_w\quad 0\leq t<n_w. $$ $k_r, k_w, m_r, m_w$ are known integer constants. The idea is to find the smallest $c$ satisfying $$ \forall t_w \in [0,n_w), \forall t_r \in [c, n_r+c): t_r \leq t_w \implies r(t_r) \neq w(t_w). $$ For example, suppose: $$ r(t) = 2(t-c)+10\quad c \leq t < 6 + c\\ w(t)=5t\quad0 \leq t < 7 $$ then $c=3$. Because if $c=0$ then $r(0) = w(2) = 10$, if $c=1$ then $r(1) = w(2) = 10$, if $c=2$ then $r(2)=w(2)=10$. In this case $c \leq 7$ so there are only a few possibilities to check. But in the general case the number of possibilities might be much larger.
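For experimentation, a direct brute-force search for the smallest $c$ could look like the following (a sketch of my own; the constants reproduce the example above, and cmax is an arbitrary search bound):

kr = 2; mr = 10; nr = 6;      % r(t) = kr*(t - c) + mr,  c <= t < nr + c
kw = 5; mw = 0;  nw = 7;      % w(t) = kw*t + mw,        0 <= t < nw
cmax = 100;                   % arbitrary upper bound for the search
for c = 0:cmax
    ok = true;
    for tw = 0:(nw-1)
        for tr = c:(nr+c-1)
            if tr <= tw && kr*(tr-c) + mr == kw*tw + mw
                ok = false;
            end
        end
    end
    if ok
        fprintf('smallest c = %d\n', c);   % prints c = 3 for this example
        break;
    end
end

Of course this does not address the general case where the ranges are large; it only makes the condition concrete.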

Perfect maps and local compactness

Posted: 22 May 2022 06:06 AM PDT

Suppose $f:X \rightarrow Y$ is a perfect map between topological spaces $X$ and $Y$, i.e. $f$ is continuous, surjective and closed. Also suppose that the fibers of $f$ are compact. If $Y$ is locally compact, will $X$ be as well?

My definition for locally compact is the following: A space $X$ is locally compact if for each $x \in X$ there exists a compact subset $C$ that contains a neighborhood of $x$.

Initially I thought this would be the case, however, I can't really seem to find a good proof. This led me to believe that it would only work for Hausdorff spaces, but I'm not sure if that's true either...

Find $2^{(3^{5782})} \pmod4$ [duplicate]

Posted: 22 May 2022 06:03 AM PDT

How to find $2^{(3^{5782})} \pmod 4$?

Is it correct to say $2^{3^{5782}} \equiv 2^{2t+1} \equiv 4^t \cdot 2 \equiv 2 \pmod 4$?

Line Integral. Vector field. Parametrization

Posted: 22 May 2022 06:12 AM PDT

• Let's say we want to find the circles $C$ in the plane that make the line integral $\int_C y^2\,dx + x^2\,dy$ equal to zero.

My attempt:

The vector field given is $\mathbf{F}(x,y)=(y^2,x^2)$. The first thing I try to do is find a parametrization of the path, but I am not sure whether I should use the parametrization of a general circle centered at $(a,b)$ with radius $R$:

$\mathbf{r}(t)=(a+R \cdot \cos(t),b+R \cdot\sin(t))$ where $0 \leq t \le 2\pi$

$\mathbf{r'}(t)=(-R\sin(t),R\cos(t))$

Being that the case:

$\mathbf{F}(\mathbf{r}(t))=\mathbf{F}(x(t),y(t))=((b+R \sin(t))^2,(a+R\cos(t))^2)$, obtaining:

$\mathbf{F}(\mathbf{r}(t))\cdot\mathbf{r'}(t)= ((b+R \cdot \sin(t))^2,(a+R \cdot\cos(t))^2) \cdot (-R\sin(t),R\cos(t)) $

Finally, we must impose: $\int_{0}^{2\pi}\mathbf{F}(\mathbf{r}(t))\cdot\mathbf{r'}(t) dt =0.$

However, we are trying to determine three parameters ($a$, $b$ and $R$) from a single condition. Perhaps after computing the integral some unknowns drop out, but even so the computation is hard.

Any suggestions are welcome.
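One way to experiment numerically (a sketch of my own; the values of $a$, $b$, $R$ are arbitrary) is to evaluate the line integral for a concrete circle with Matlab's integral:

a = 1; b = 2; R = 3;                                  % arbitrary test circle
x  = @(t) a + R*cos(t);   y  = @(t) b + R*sin(t);
dx = @(t) -R*sin(t);      dy = @(t)  R*cos(t);
integrand = @(t) y(t).^2 .* dx(t) + x(t).^2 .* dy(t);
I = integral(integrand, 0, 2*pi)                      % Green's theorem gives 2*pi*R^2*(a - b) here, a useful check

Comparing a few choices of $(a,b,R)$ against the computed value is a quick check on the parametrization.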

A is noetherian and every prime ideal of A is maximal then...

Posted: 22 May 2022 06:07 AM PDT

This proof was part of my lecture notes in Commutative algebra and I am having trouble following it. So, I am asking the question here.

Statement: If A is noetherian and every prime ideal of A is maximal then A is artinian.

Proof: Since $A$ is noetherian, the zero ideal has a primary decomposition $(0)=q_1 \cap \dots \cap q_r$. (Can you please explain why?) Let $\sqrt{q_i}=M_i$, so that $M_i^{n_i} \subseteq q_i$ for some $n_i$. Then $M_1^{n_1}\cdots M_r^{n_r} \subseteq M_1^{n_1} \cap \dots \cap M_r^{n_r}$ (can you please tell me how to deduce this?) $\subseteq q_1 \cap \dots \cap q_r = (0)$, hence $A$ is artinian, as $A$ is given to be noetherian. (How does $A$ being noetherian prove that $A$ is artinian?)

I shall be really thankful for the help provided!

Finding a subset of the powerset whose union is a subset

Posted: 22 May 2022 06:02 AM PDT

Let $P_3(S)$ denote the power set of $S$ with all elements of size greater than $3$ removed. Given a subset $E$ of $S$, how many subsets $R$ of $P_3(S)$ are there such that the union of the members of $R$ equals $E$?

Follow-up: Can you enumerate the selections, if the number is greater than $0$?

Notation: function returning the element of a partition containing $x$

Posted: 22 May 2022 06:34 AM PDT

Suppose $Y=\{y_1,\ldots,y_m\}$ partitions the set $X=\{x_1,\ldots,x_n\}$. I would like to define a function $y: X \to Y$ which returns $y \in Y$ if and only if $x \in y$. Is there a way to write this more formally?

I came up with

$$y(x) = \{y \in Y :y \ni x\}$$

or

$$y(x) = \{y:y \in Y \ \land \ x \in y \}$$

but I am not sure this makes sense.

How to find a fractal with a predetermined Hausdorff dimension?

Posted: 22 May 2022 06:11 AM PDT

For many patterns that display self-similarity, the Hausdorff dimension can be found. Sometimes the dimension can only be computed approximately - as is the case with the Feigenbaum attractor - but often a closed form in terms of known mathematical constants can be obtained - for instance, the Hausdorff dimension of the Cantor set is $\log_{3}(2)$.

I am interested in the inverse problem: suppose we consider a mathematical constant like $\pi^{-1}$ or $\gamma e$. Can we always find and describe a fractal whose Hausdorff dimension is equal to this preset number?
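For what it's worth (my own remark, using the standard similarity-dimension formula): a self-similar set made of $N$ copies of itself scaled by a factor $r$ and satisfying the open set condition has Hausdorff dimension
$$\dim_H=\frac{\log N}{\log(1/r)},$$
so for any target $d\in(0,1)$ a Cantor-like set with $N=2$ pieces and ratio $r=2^{-1/d}$ has dimension exactly $d$.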

How to relate spectral radius and Cauchy interlace theorem in a separate equation

Posted: 22 May 2022 06:38 AM PDT

Consider a matrix $A_{n\times n}$. Also consider another matrix $M_{(n-1)\times(n-1)}$ which is obtained by deleting the $i^{th}$ row and column of $A$. I have the following equation in $M$:

\begin{equation} R_{i}= m_{ii} + \mathbf{m_{i:-i}}^{T}(I-M)^{-1}\mathbf{m_{-i:i}} \end{equation}

where $\mathbf{m_{i:-i}}=(m_{i1},m_{i2},\ldots,m_{i(i-1)},m_{i(i+1)},\ldots,m_{in})^T$ and $\mathbf{m_{-i:i}}=(m_{1i},m_{2i},\ldots,m_{(i-1)i},m_{(i+1)i},\ldots,m_{ni})^T$ and $I_{(n-1)\times (n-1)}$ is an identity matrix.

I intend to show the hypothesis: If $\rho(A)>1$, then $R_{i}>1$ where $\rho(A)$ denotes the spectral radius (the largest eigenvalue) of $A$.

I was thinking of approaching this demonstration using the fact that $M$ is the principal sub-matrix of $A$ and hence according to Cauchy Interlacing Theorem, the eigenvalues of $M$ interlace those of $A$ leading to the fact that $\rho(A)\geq\rho(M)$. I am at a loss on how I can connect the spectral radius of $A$ and the above equation for $R_{i}$ with the Cauchy Interlace theorem and prove the above hypothesis. I would appreciate any ideas on how to do that or if there is another way I can prove the hypothesis.

Numerical solution of $2D$ wave equation using Fourier transform and finite differences

Posted: 22 May 2022 06:22 AM PDT

This is the $2$-dimensional wave equation

$$ u_{tt} = u_{xx} + u_{yy} $$

with initial condition $u(x,y,0)=f(x,y)$ and $u_{t}(x,y,0) = 0$.

The inverse Fourier transform used is

$$ u(x,y,t) = \iint \hat{u}(\omega_{x}, \omega_{y}, t)\, e^{2\pi i \omega_{x} x}\, e^{2\pi i \omega_{y} y}\, d\omega_{x}\, d\omega_{y} $$

this is also the version that is used by Matlab's FFT. Applying this to wave equation we get

$$ \iint \hat{u}_{tt}\, e^{2\pi i \omega_{x} x}\, e^{2\pi i \omega_{y} y}\, d\omega_{x}\, d\omega_{y} = -(2 \pi)^{2} (\omega_{x}^{2} + \omega_{y}^{2}) \iint \hat{u}\, e^{2\pi i \omega_{x} x}\, e^{2\pi i \omega_{y} y}\, d\omega_{x}\, d\omega_{y} $$ so the wave equation in frequency space is:

$$ \hat{u}_{tt} = -(2\pi)^{2} (\omega_{x}^{2} + \omega_{y}^{2}) \hat{u} $$

Although this gives exact solution very easy, I tried to solve it numerically using Matlab.

The numerical method I used is finite difference:

$$ \hat{u}_{tt} \approx \frac{\hat{u}_{t}(t) - \hat{u}_{t}(t-\Delta t)}{ \Delta t} $$ $$ \approx \frac{\hat{u}(t) - 2 \hat{u}(t- \Delta t) + \hat{u}(t - 2\Delta t)}{ (\Delta t)^{2}} $$

And for the initial condition: $\hat{u}(\Delta t) = \hat{u}(0)$.

I did this successfully in the 1D case, $\hat{u}_{tt}(\omega,t) = -(2\pi \omega)^{2}\hat{u}(\omega,t)$, using a similar formulation. But something goes wrong in the 2D case: the solution does not blow up, but its behavior does not match the exact solution (with a Gaussian as the initial condition, the wave should split with its height equal to half of the initial height; instead it decays drastically). See animation below:

[animation from the question not included]


Matlab code

clear; tic;

dx = 0.005; dy = 0.005;
xmax = 1; xmin = -1;
ymax = 1; ymin = -1;
periodx = xmax - xmin; periody = ymax - ymin;
x = [(xmin):dx:(xmax)];
y = [(ymin):dy:(ymax)];
nx = length(x);
ny = length(y);
multiplierx = (1)/periodx;
multipliery = (1)/periody;
[X, Y] = meshgrid(x, y);

% frequency grids matching Matlab's fft ordering
if (mod(nx,2) == 0)
    w_right = [0:1:((nx/2)-1)];
    w_left  = [(-nx/2):1:-1];
    w = [w_right, w_left];
    wx = multiplierx*w;
else
    w_right = [0:1:(((nx-1)/2))];
    w_left  = [(-(nx-1)/2):1:-1];
    w = [w_right, w_left];
    wx = multiplierx*w;
end

if (mod(ny,2) == 0)
    w_right = [0:1:((ny/2)-1)];
    w_left  = [(-ny/2):1:-1];
    w = [w_right, w_left];
    wy = multipliery*w;
else
    w_right = [0:1:(((ny-1)/2))];
    w_left  = [(-(ny-1)/2):1:-1];
    w = [w_right, w_left];
    wy = multipliery*w;
end

[Wx, Wy] = meshgrid(wx, wy);

f = @(x,y) 0.5*exp(-(x.^2 + y.^2)/0.1);   % Gaussian initial condition
Z = f(X,Y);

dt = 0.005;
t = [0:dt:2]; nt = length(t);

u(:,:,1) = f(X, Y);
u(:,:,2) = f(X, Y);

uhat(:,:,1) = fft2(u(:,:,1));
uhat(:,:,2) = fft2(u(:,:,2));

surfl(u(:,:,1));
zlim([-1,1]);
view([0, 0]);
colormap(winter);
xlabel('x'); ylabel('y');
shading interp;

% time stepping in frequency space
for i = [3:1:nt]
    uhat(:,:,i) = (uhat(:,:,i-1) + (uhat(:,:,i-1) - uhat(:,:,i-2)))./(1 + (((dt*2*pi)^2)*(Wx.^2 + Wy.^2)));
    u(:,:,i) = real(ifft2(uhat(:,:,i)));
    uhat(:,:,i) = fft2(u(:,:,i));
    disp(i);
end
toc;

% playback of the computed solution
for i = [1:1:nt]
    surfl(u(:,:,i));
    zlim([-1,1]);
    view([0, 0]);
    colormap(winter);
    shading interp;
    pause(0.01)
end

Prove that $\sum_{n=0}^\infty \frac{1}{(2n+1)^2}=\frac{\pi^2}{8}$

Posted: 22 May 2022 06:16 AM PDT

I am asked to prove that $$\sum_{n=0}^\infty \frac{1}{(2n+1)^2}=\frac{\pi^2}{8}.$$ However, I am asked to prove it using the fact that $$\frac{\pi}{2}\tan\left(\frac{\pi}{2}z\right)=\sum_{m \text{ odd}}\left(\frac{1}{m-z}-\frac{1}{m+z}\right),$$ where $z\in \mathbb{C}$, which is something I proved in a previous exercise.

My first thought was using the fact that $$\frac{1}{m-z}-\frac{1}{m+z}=\frac{2z}{m^2-z^2}$$ and therefore $$\sum_{m \text{ odd}}\left(\frac{1}{m-z}-\frac{1}{m+z}\right)=\sum_{m \text{ odd}}\frac{2z}{m^2-z^2}=\sum_{n=0}^\infty \frac{2z}{(2n+1)^2-z^2}.$$ This last series is similar to the one I am aiming at, but I don't know how to transform it into the one that I want. Can someone help me?
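One route that seems to fit (a sketch of my own, modulo justifying the limit under the sum): divide both sides by $2z$ and let $z\to 0$. Since $\tan\left(\frac{\pi z}{2}\right)\sim\frac{\pi z}{2}$ as $z\to 0$,
$$\frac{1}{2z}\cdot\frac{\pi}{2}\tan\left(\frac{\pi z}{2}\right)\;\longrightarrow\;\frac{\pi^2}{8}, \qquad \sum_{n=0}^\infty\frac{1}{(2n+1)^2-z^2}\;\longrightarrow\;\sum_{n=0}^\infty\frac{1}{(2n+1)^2}.$$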

Three consecutive powerful integers do not exist

Posted: 22 May 2022 06:22 AM PDT

I (not a professional mathematician) ended up with the following, on which I would like your comments, because it is an overly simple solution. I know this certainly should not be easy, and that is why I'm here.

Two consecutive powerful numbers may be written as: $(A^2+n_1)(B^3+m_1)-A^2B^3=1$. (1)

Two powerful numbers whose difference is $2$ may be written as: $(A^2+n_2)(B^3+m_2)-A^2B^3=2$. (2)

I suppose $A^2B^3,(A^2+n_1)(B^3+m_1)$ and $(A^2+n_2)(B^3+m_2)$ should then represent three consecutive powerful numbers.

Beckon, Edward (2019), "On Consecutive Triples of Powerful Numbers", Rose-Hulman Undergraduate Mathematics Journal, Vol. 20: Iss. 2, Article 3, states, as I understood it (please see Beckon for details and further info):

"Three consecutive numbers are of one of the following forms: (36k+7, 36k+8, 36k+9), (36k+27, 36k+28, 36k+29) or(36k-1, 36k, 36k+1)."

We get from equation (2): $A^2m_2+B^3n_2+m_2n_2=2$, and after division by 2, we get

$A^2M+B^3N=1-2MN$ (3), where $M=m_2/2$ and $N=n_2/2$.

Note that $m_2$ and $n_2$ must be even because A, B, $(A^2+n_2)$ and $(B^3+m_2)$ are odd. We can now formulate: M and N do not have the same parity.

We can notice that the following identities hold for every $k$:

$\frac{(36k+7)(36k+9)+1}{(36k+8)^2}=1$ (4)

$\frac{(36k+27)(36k+29)+1}{(36k+28)^2}=1$ (5) and

$\frac{(36k-1)(36k+1)+1}{(36k)^2}=1$ (6).

Now examine equation (4), the first of the three alternatives. We can substitute $36k+7=A^2B^3$ and $36k+9=(A^2+2N)(B^3+2M)$ into equation (4), which results in the following:

$A^2B^3(A^2+2N)(B^3+2M)+1=16(9k+2)^2$ (7)

We can solve the equation (7) for M and N:

$M=\frac{63-A^4B^6-2A^2B^6N+18(72k^2+32k)}{2A^2B^3(A^2+2N)}$ => $M=\frac{\frac{63-A^4B^6}{2}-A^2B^6N+18(36k^2+16k)}{A^2B^3(A^2+2N)}$ (8)

and

$N=\frac{63-A^4B^6-2A^4B^3M+18(72k^2+32k)}{2A^2B^3(B^3+2M)}$ => $N=\frac{\frac{63-A^4B^6}{2}-A^4B^3M+18(36k^2+16k)}{A^2B^3(B^3+2M)}$. (9)

The equations (8) and (9) should be examined together. Based on equation (8), $M$ may or may not be divisible by two, depending on the term $\frac{63-A^4B^6}{2}$ and on $N$. Because the same term also appears in equation (9), $N$ is in the same situation. If the term $\frac{63-A^4B^6}{2}$ is divisible by two, the divisibility of $M$ depends on $N$ in such a way that both must be divisible by two if one of them is. If $M$ is not divisible by two, then $N$ is not divisible by two either. Both possibilities thus contradict the assumption that $M$ and $N$ do not have the same parity.

The two other cases, (5) and (6), go the same way (to keep this short I don't write them out here). Note also that the condition $AB(A^2+2N)=0$ (a vanishing denominator) is not possible, because an even number cannot be an odd number (and neither $A$ nor $B$ is $0$).

What is above seems far too simple (Erdős could not solve this problem). OK, the question is: can this super tough problem be solved this easily? Or can you point out where the calculations go wrong? Thank you for reading.

Inequality involving Stirling numbers of the first kind

Posted: 22 May 2022 06:18 AM PDT

For every $n\geq1$, there is some $f(n)$ such that for the Stirling numbers of the first kind we have:

$$c(n,0)<c(n,1)<\dots<c(n,f(n)-1)\leq c(n,f(n))>c(n,f(n)+1)>\dots>c(n,n).$$

Moreover, $f(n)=f(n-1)$ or $f(n)=f(n-1)+1$.
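Not a proof, but the claim is easy to inspect numerically (a sketch of my own, using the standard recurrence $c(n,k)=c(n-1,k-1)+(n-1)\,c(n-1,k)$):

N = 12;
c = zeros(N+1, N+1);               % c(i,j) stores c(n = i-1, k = j-1)
c(1,1) = 1;                        % c(0,0) = 1
for n = 1:N
    for k = 1:n
        c(n+1, k+1) = c(n, k) + (n-1)*c(n, k+1);
    end
end
for n = 1:N
    [~, idx] = max(c(n+1, :));
    fprintf('n = %2d, peak at k = %d\n', n, idx-1);   % the f(n) of the question
end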

Minimize trace of quadratic inverse + LASSO

Posted: 22 May 2022 06:36 AM PDT

Given a symmetric positive definite matrix $S \in \mathbb R^{d\times d}$ and $\lambda > 0$, I would like to find

$$X^\star := \underset{{X\in\mathbb R^{d\times d}}}{\operatorname{argmin}} \operatorname{tr}\left(X^{-T}SX^{-1}\right) + \lambda \|X\|_1.$$

where

$$\|X\|_1 := \sum_{i=1}^d\sum_{j=1}^d\left\vert X_{ij}\right\vert$$

Has anyone seen this kind of objective function? In particular, it has proven to be quite tricky as it seems to be locally convex.

Is there a better way to find the "outline" path for a set of points in $3D$ space?

Posted: 22 May 2022 06:22 AM PDT

I am trying to create a $3D$ model. In $3D$ model file formats, a face is defined by ordering the vertices of that face clockwise as viewed from the center of the object. I have a script that generates the $xyz$ coordinates associated with each face, however I do not know the order they should be in. Here is an image of the problems caused by incorrect vertex ordering.

What is the best way to determine an "outline" order of a set of points approximately set on a plane, but oriented arbitrarily in $3D$ space? There will be anywhere from $4$ up to a dozen vertices.

I've been trying to develop a way to use the angle of each point with respect to a created center point to order them radially, but I'm having too much trouble accommodating different orientations. What also seems like it would work is finding the shortest closed loop path that goes through all points, but I can't figure out how to do this without simply brute forcing all possible paths, which has a factorial time complexity and will be impractical for the larger data sets I'll be using.
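Here is a sketch of the radial-ordering idea that handles arbitrary orientations (my own, not verified against your model format; it assumes the points of a face are roughly coplanar): fit a plane with an SVD, project onto an in-plane basis, then sort by angle around the centroid.

P = [0 0 0; 1 0 0.02; 1 1 -0.01; 0 1 0.01; 0.5 1.5 0];   % example m-by-3 vertex list (arbitrary)
c = mean(P, 1);
[~, ~, V] = svd(P - c, 'econ');    % columns of V are the principal directions of the point cloud
e1 = V(:,1); e2 = V(:,2);          % in-plane basis; V(:,3) is the approximate face normal
ang = atan2((P - c)*e2, (P - c)*e1);
[~, order] = sort(ang);            % counterclockwise about V(:,3); reverse it if you need clockwise
disp(order');

This is $O(m\log m)$ per face and avoids the factorial-time shortest-loop search, at the cost of assuming each face is star-shaped as seen from its centroid.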

Making sure if it is Cauchy

Posted: 22 May 2022 06:18 AM PDT

In my real analysis exam I had a problem in which I proved that, given any positive number $a<1$, if $|x_{n+1} - x_n| < a^n$ for all natural numbers $n$, then $(x_n)$ is a Cauchy sequence.

This was solved successfully, but the follow-up question is: if $|x_{n+1} - x_n| < \frac{1}{n}$, does that mean $(x_n)$ is Cauchy? My answer was yes, because I thought I could write this in the form of the first statement, but now I am confused about what I answered, since $1/n$ is itself a sequence in $n$, so maybe the conclusion is not necessarily true... Can you please provide me with the correct answer to this question?
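A natural candidate to test (my own suggestion, not from the exam): the partial sums of the harmonic series, $x_n=\sum_{k=1}^{n}\frac{1}{k}$, satisfy $|x_{n+1}-x_n|=\frac{1}{n+1}<\frac{1}{n}$, yet $(x_n)$ is unbounded and therefore not Cauchy.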

RSA: Factorising $n$ using square difference

Posted: 22 May 2022 06:22 AM PDT

I am working on the following exercise:

I have to show that if for the two primes $p$ and $q$ used in RSA to compute $n$ holds:

$$p < q \le (1+\epsilon)\sqrt{n},$$

one has to test at most $\frac{\epsilon^2\sqrt{n}}{2}$ values to find integers $s$ and $t$ such that $n = s^2-t^2$.

I do not know how I should start with this exercise, but I guess I need some algorithm that finds such a form for $n$. I think the straightforward idea would be to start at $\sqrt{n}$ and try out the numbers from there, but I guess I need a more systematic approach.

Unfortunately I am unable to find one. Could you help me?
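For reference, the search the exercise seems to have in mind is Fermat's factorization method (a minimal sketch of my own; the sample n is arbitrary, and with doubles this is only reliable for small n):

n = 5959;                                    % arbitrary small example (= 59 * 101)
s = ceil(sqrt(n));                           % start just above sqrt(n)
steps = 0;
while sqrt(s^2 - n) ~= floor(sqrt(s^2 - n))  % stop when s^2 - n is a perfect square
    s = s + 1;
    steps = steps + 1;
end
t = sqrt(s^2 - n);
fprintf('s = %d, t = %d, p = %d, q = %d, values tested = %d\n', s, t, s-t, s+t, steps+1);

The exercise is then about bounding how many values of $s$ this loop can visit before $s^2-n$ becomes a perfect square, given $p<q\le(1+\epsilon)\sqrt{n}$.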

Asymptotic behaviours from Fourier transforms

Posted: 22 May 2022 06:07 AM PDT

I have completely forgotten how one derives the asymptotic behavior in frequency space, given the asymptotic behavior of the function in real space (e.g. time). As an example, it is often said that when $f(t)\sim t^\alpha$ for $t\to\infty$, then $\hat f(\omega)\sim\omega^{-\alpha-1}$ for $\omega\to 0$. Aside from dimensional analysis, how do you derive this result a bit more rigorously?

A user's guide to Penrose graphical notation?

Posted: 22 May 2022 06:37 AM PDT

Penrose graphical notation seems to be a convenient way to do calculations involving tensors/ multilinear functions. However the wiki page does not actually tell us how to use the notation.

The several references, especially ones with Penrose as author, must be good places to start. But it is now the summer holiday and I am away from my school library. So I am wondering whether someone here has a nice introduction to "the use" of Penrose graphical notation.

Thanks!

Proof of $\det(\textbf{ST})=\det(\textbf{S})\det(\textbf{T})$ in Penrose graphical notation

Posted: 22 May 2022 06:37 AM PDT

For two matrices $\textbf{S}$ and $\textbf{T}$, a proof of $\det(\textbf{ST})=\det(\textbf{S})\det(\textbf{T})$ is given below in the diagrammatic tensor notation.

Here $\det$ denotes the determinant.

Why can the antisymmetrizing bar be inserted in the middle because "there is already antisymmetry in the index lines"?

For an introduction to the notation, you can refer to Figures 12.17 and 12.18 below.
