Wednesday, July 6, 2022

Recent Questions - Mathematics Stack Exchange



How can I prove n^2+9 and n+7 are coprime, if n is a multiple of 58?

Posted: 06 Jul 2022 09:35 PM PDT

I have been working on this for quite a while, but I can't find a solution using the Euclidean algorithm, contrapositives, or other approaches. Is there someone who can help me out here? Thanks!
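
A quick sketch (not part of the original post), reading the "$n=7$" in the title as $n+7$, which looks like a typo: since $n^2+9=(n+7)(n-7)+58$, the gcd of $n^2+9$ and $n+7$ divides $58=2\cdot 29$, and for $n$ a multiple of $58$ the number $n+7$ is odd and $\equiv 7 \pmod{29}$. An empirical check:

```python
from math import gcd

# Key identity: n^2 + 9 = (n + 7)(n - 7) + 58, so gcd(n^2 + 9, n + 7) divides 58.
# If n is a multiple of 58, then n + 7 is odd and n + 7 ≡ 7 (mod 29),
# so n + 7 shares no factor with 58 = 2 * 29, forcing the gcd to be 1.

def check(limit=1000):
    """Empirically confirm coprimality for the first `limit` multiples of 58."""
    return all(gcd(n * n + 9, n + 7) == 1 for n in (58 * k for k in range(1, limit + 1)))

print(check())  # True
```

For contrast, the gcd can be as large as 29 without the hypothesis on $n$: at $n=22$ we get $n+7=29$ and $n^2+9=493=17\cdot 29$.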

Corestriction of smooth map

Posted: 06 Jul 2022 09:15 PM PDT

In Lee's Introduction to Smooth Manifolds, Example 5.28 shows that the corestriction of a smooth map need not be smooth.

Let $S\subset\Bbb R^2$ be the figure eight (immersed) submanifold, with the topology and smooth structure induced by the immersion $\beta:(-\pi,\pi)\to \Bbb R^2$, $t\mapsto (\sin 2t,\sin t)$. Define a smooth map $G:\Bbb R\to\Bbb R^2$ by $$G(t) = (\sin 2t,\sin t).$$ It's easy to check that the image of $G$ lies in $S$. However, as a map from $\Bbb R$ to $S$, $G$ is not even continuous, because $\beta^{-1}\circ G$ is not continuous at $t = \pi$.

The fact that $\beta^{-1}\circ G$ is not continuous at $t =\pi$ is easy to see. But to conclude from this that $G$ is not continuous, I think one thing that needs to be ensured is the continuity of $\beta^{-1}:S\to (-\pi,\pi)$. But if $\beta^{-1}$ were continuous, then $S\cong(-\pi,\pi)$, which is false, since the LHS is compact whereas the RHS is not. Or does the discontinuity of $\beta^{-1}\circ G$ imply the discontinuity of $\beta\circ\beta^{-1}\circ G =G$? Why is $G$ not continuous?

How do I solve a quadratic Bézier curve

Posted: 06 Jul 2022 09:14 PM PDT

I'm looking at specific cases of quadratic Bézier curves defined by three points: p1 = [0,0], p2 = [i,m], p3 = [d,m], where i, d, and m are positive real numbers and i < d.

How can I derive the equations to solve for x and y individually? I understand that not all Bézier curves are aligned to the x and y axes, and so might not follow the usual pattern of a quadratic equation or formula, but intuitively it should be possible to write y = f(x) and x = g(y), because we're only concerned with one root and we know which direction it grows.
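
A sketch under the stated assumptions (p1=(0,0), p2=(i,m), p3=(d,m), all positive, i < d): the standard quadratic Bézier parameterization gives $x(t)=(d-2i)t^2+2it$ and $y(t)=m(2t-t^2)$ for $t\in[0,1]$; $x(t)$ is monotone on $[0,1]$ here, so $t$ can be recovered from $x$ with the quadratic formula and then $y=f(x)$ follows:

```python
from math import sqrt

# Quadratic Bézier with p1=(0,0), p2=(i,m), p3=(d,m):
#   x(t) = (d - 2i) t^2 + 2 i t,   y(t) = m (2t - t^2),   t in [0, 1].
# For y = f(x): invert x(t) for t (quadratic formula), then evaluate y(t).

def bezier_point(t, i, d, m):
    return ((d - 2*i) * t*t + 2*i*t, m * (2*t - t*t))

def t_from_x(x, i, d):
    a = d - 2*i
    if abs(a) < 1e-12:          # degenerate case: x(t) = 2 i t is linear in t
        return x / (2*i)
    # the root of a t^2 + 2 i t - x = 0 lying in [0, 1]
    return (-2*i + sqrt(4*i*i + 4*a*x)) / (2*a)

def y_of_x(x, i, d, m):
    t = t_from_x(x, i, d)
    return m * (2*t - t*t)
```

For example, with i=1, d=3, m=2 and t=0.5, the curve passes through (1.25, 1.5), and `y_of_x(1.25, 1, 3, 2)` recovers 1.5. The same quadratic-formula approach applied to $y(t)$ gives x = g(y).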

Matrix that is conjugate to its own inverse in $GL(4,\mathbb{Z})$

Posted: 06 Jul 2022 09:27 PM PDT

I am looking for examples of matrices that are conjugate to their own inverses in $GL(4,\mathbb{Z})$, i.e. $A=B^{-1}A^{-1}B$ for $A,B\in GL(4,\mathbb{Z}),\ A\neq I,\ B\neq I$.

I couldn't find one either by hand or with Sage (it runs for a very long time and never gives an output).

I wonder if such matrices exist. If they do, how can I find an example? I kind of want to see what they look like and be able to find a lot of examples...

Thanks in advance!

Edit: Also, I hope it is not in $GL(2,\mathbb{Z})\times GL(2,\mathbb{Z})$ or $GL(3,\mathbb{Z})$.
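
One candidate worth checking (an editorial sketch, not from the original post): take $A$ to be the companion matrix of the cyclotomic polynomial $x^4+x^3+x^2+x+1$, so $A$ acts as multiplication by a primitive 5th root of unity $\zeta$ on $\mathbb{Z}[\zeta]$, and let $B$ be the matrix of the Galois automorphism $\zeta\mapsto\zeta^{-1}$ in the basis $1,\zeta,\zeta^2,\zeta^3$. Since $\sigma(\zeta x)=\zeta^{-1}\sigma(x)$, we get $BA=A^{-1}B$, and the characteristic polynomial of $A$ is irreducible over $\mathbb{Q}$, so $A$ does not decompose into $2+2$ or $3+1$ blocks:

```python
import numpy as np

# A = companion matrix of x^4 + x^3 + x^2 + x + 1 (so A^5 = I, det A = 1);
# B = matrix of the Galois automorphism zeta -> zeta^{-1} of Z[zeta] in the
# basis 1, zeta, zeta^2, zeta^3 (det B = 1).  Then B A B^{-1} = A^{-1}.
A = np.array([[0, 0, 0, -1],
              [1, 0, 0, -1],
              [0, 1, 0, -1],
              [0, 0, 1, -1]])
B = np.array([[1, -1, 0, 0],
              [0, -1, 0, 0],
              [0, -1, 0, 1],
              [0, -1, 1, 0]])

A_inv = np.linalg.matrix_power(A, 4)        # A^5 = I, so A^{-1} = A^4
assert round(np.linalg.det(A)) == 1 and round(np.linalg.det(B)) == 1
assert np.array_equal(B @ A, A_inv @ B)     # i.e. A = B^{-1} A^{-1} B
print("A is conjugate to its inverse in GL(4, Z)")
```

The verification is over the integers throughout, so this really is a conjugacy in $GL(4,\mathbb{Z})$, with $A\neq I$ and $B\neq I$.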

Orthogonal operators on gradient and Hessian

Posted: 06 Jul 2022 09:09 PM PDT

I have a question about an equality on page 16 of Alvarez's famous work Axioms and fundamental equations of image processing. I will try to make my question self-contained. Please point out anything that is unclear.

Domain and definition:

$x\in \mathbb{R}^N$, function $f:\mathbb{R}^N \rightarrow \mathbb{R}$, $T_{t}$ is an operator on $f$ such that $T_tf: \mathbb{R}^N \rightarrow \mathbb{R}$ is also a function on $\mathbb{R}^N$

The author defines [Isometry invariance] as below

$T_{t}(R \cdot f)=R \cdot T_{t}(f)$ for all $f, t \geqq 0$ and for all transforms $R$ defined by $(R \cdot f)(x)=f(R x)$ where $R$ is an orthogonal transform of $\mathbb{R}^{N}$.

Suppose $\frac{T_tf-f}{t}$ admits a limit as $t\rightarrow 0$, and that the limit satisfies a second-order PDE, i.e. $$\lim_{t\rightarrow 0}\frac{T_tf-f}{t}=F(H,p)$$

where $F$ is some function on $\mathbb{R}^{N\times N}\times \mathbb{R}^{N}$. $H=\nabla^2 f$ is the Hessian matrix, $p=\nabla f$ is the gradient.

Problem: The author states that, if we have [Isometry invariance], then $$F(R^T H R, R^T p)= F(H,p)$$ for any orthogonal matrix R.

My attempt: I omit the limit notation $\lim_{t\rightarrow 0}$ for simplicity \begin{align} F(H,p)&= \frac{T_tf-f}{t} \\ &= \frac{T_t(R^T\cdot R \cdot f)-R^T\cdot R \cdot f}{t} \qquad \text{(Orthogonal)} \\ &= \frac{R^T T_t(R \cdot f)-R^T \cdot R \cdot f}{t} \qquad \text{(Isometry invariance)} \\ &= R^T \cdot \frac{T_t(Rf)-Rf}{t} \\ &= R^T F(\nabla^2 (Rf) ,\nabla (Rf)) \qquad \text{(Definition)}\\ &= F(R^T H R, R^T p) \qquad \text{(Last equality, WHY?)}\\ \end{align}

I got stuck here. Specifically, in the second term, I don't know why $$R^T \nabla (Rf) = R^T p=R^T \nabla f$$

Any suggestions are welcome.
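
A small numerical check of the chain rule underlying the step in question (an editorial sketch; the convention assumed is that matrices act on column vectors): if $(R\cdot f)(x):=f(Rx)$ with $R$ orthogonal, then $\nabla(R\cdot f)(x)=R^T\nabla f(Rx)$, and similarly $\nabla^2(R\cdot f)(x)=R^T H(Rx)\,R$ for the Hessian.

```python
import numpy as np

# Check numerically that grad(f ∘ R)(x) = R^T grad f(Rx) for orthogonal R.
rng = np.random.default_rng(0)
N = 3
R, _ = np.linalg.qr(rng.standard_normal((N, N)))   # a random orthogonal matrix

a = rng.standard_normal(N)
f = lambda x: np.sin(a @ x) + 0.5 * (x @ x) ** 2   # an arbitrary smooth test function

def grad(g, x, h=1e-5):
    """Central-difference gradient."""
    return np.array([(g(x + h*e) - g(x - h*e)) / (2*h) for e in np.eye(N)])

x = rng.standard_normal(N)
lhs = grad(lambda y: f(R @ y), x)        # gradient of the rotated function
rhs = R.T @ grad(f, R @ x)               # R^T times the gradient at the rotated point
print(np.allclose(lhs, rhs, atol=1e-6))  # True
```

So in the asker's notation $\nabla(R\cdot f)=R^T p$ evaluated at the rotated point, which is exactly the substitution used in the last equality; the Hessian identity $\nabla^2(R\cdot f)=R^THR$ follows by differentiating once more.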

Show that $\lim_{n \to \infty} \sum_{k = 0}^\infty (-1)^k \frac{n}{n + k} = \frac{1}{2}$.

Posted: 06 Jul 2022 09:36 PM PDT

Show that $$ \lim_{n \to \infty} \sum_{k = 0}^\infty (-1)^k \frac{n}{n + k} = \frac{1}{2}. $$

Progress: I rewrote the inside as

$$ S_n = \sum_{k = 0}^\infty (-1)^k \frac{n}{n + k} = \sum_{k = 0}^\infty (-1)^k \left( 1 - \frac{k}{n + k} \right),$$

at which point I sought to find the (ordinary) generating function

$$A(x) = \sum_{k = 0}^\infty (-1)^k \left( 1 - \frac{k}{n + k} \right) x^k = \frac{1}{1 + x} - \sum_{k = 0}^\infty (-1)^k \left( \frac{k}{n + k} \right) x^k , $$

because then $S_n = A(1)$. Unfortunately, I have been trying to evaluate the last sum using the Taylor series expansion of $\ln \left( 1 + x \right)$ at $x = 0$, but I am stuck. While it is true that

$$\ln \left( 1 + x \right) = \sum_{k = 1}^\infty (-1)^{k + 1} \left( \frac{1}{k} \right) x^k $$

(for $x \in (-1, 1)$; note this is irrelevant if we regard all generating functions as formal), I can't figure out how to transform this into $\sum_{k = 1}^\infty (-1)^{k + 1} \left( \frac{1}{n + k} \right) x^k$. The motivation is that once I have that sum in explicit form, I can differentiate it w.r.t. $x$ to find the desired sum. I guess my question is also: "How do you shift the coefficients of an ordinary generating function backwards?", though I am eager to see other (possibly more elegant) solutions as well.

Note: One attempt I tried at shifting the coefficients was dividing the coefficients by some power $x^k$, but this introduces negative coefficients, whereas I need those terms with negative coefficients to disappear.
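
A numerical sanity check of the claimed limit (not a proof, and not from the original post): pairing consecutive terms turns the alternating series into an absolutely convergent one, $S_n=\sum_{j\ge0}\left[\frac{n}{n+2j}-\frac{n}{n+2j+1}\right]=\sum_{j\ge0}\frac{n}{(n+2j)(n+2j+1)}$, which is easy to evaluate:

```python
# Pair consecutive terms of the alternating series so it converges absolutely:
#   S_n = sum_{j>=0} n / ((n+2j)(n+2j+1)),
# then truncate the paired sum at a large number of terms.

def S(n, terms=1_000_000):
    return sum(n / ((n + 2*j) * (n + 2*j + 1)) for j in range(terms))

print(S(200))  # ≈ 0.501, already close to 1/2
```

The value drifts toward $1/2$ as $n$ grows, consistent with the integral representation $S_n=n\int_0^1 \frac{t^{n-1}}{1+t}\,dt$.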

Six members of a family go to the cinema and all want to be seated on the same row next to each other. The row contains 20 seats.

Posted: 06 Jul 2022 09:20 PM PDT

Could someone help me with this?

I couldn't fit the whole question in the title.

Six members of a family go to the cinema and all want to be seated on the same row next to each other. This row contains 20 seats. Find the number of ways they can arrange themselves.

I've scratched the surface of the question by identifying $6!$ as the number of ways in which they can arrange themselves. But I'm not sure how to account for the requirement that they sit next to each other.
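
A sketch of the standard block argument (added editorially): treat the six family members as one block of 6 adjacent seats; the block has $20-6+1=15$ possible starting positions, and the six people can be permuted within it in $6!$ ways.

```python
from math import factorial

# Block of 6 adjacent seats in a row of 20: 20 - 6 + 1 = 15 starting positions,
# times 6! internal orderings of the family members.
positions = 20 - 6 + 1          # 15
arrangements = positions * factorial(6)
print(arrangements)             # 10800
```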

What is eigenvalue decay?

Posted: 06 Jul 2022 08:41 PM PDT

I've come across the term "eigenvalue decay" in several machine learning papers recently, but I do not know what it means. I also haven't managed to find a definition online.

We are interested in the exact structure of the eigenvalues of the kernel matrix. In particular, we want to explain the experimental findings that the eigenvalues decay as quickly as their asymptotic counterparts.

This relates to kernel-based methods.
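
A quick illustration of what the term usually refers to (an editorial sketch; the RBF kernel, the 50 grid points, and the lengthscale 0.2 are arbitrary choices, not from any particular paper): the sorted eigenvalues of a kernel (Gram) matrix built from a smooth kernel shrink extremely fast.

```python
import numpy as np

# Gram matrix of an RBF kernel on 50 evenly spaced points; its sorted
# eigenvalues decay rapidly -- this decay profile is what "eigenvalue decay"
# typically refers to in the kernel-methods literature.
x = np.linspace(0, 1, 50)
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * 0.2 ** 2))
evals = np.sort(np.linalg.eigvalsh(K))[::-1]
print(evals[:5] / evals[0])   # successive ratios drop off very quickly
```

"Eigenvalues decay as quickly as their asymptotic counterparts" then means the finite-matrix spectrum matches the known asymptotic rate (e.g. polynomial or exponential in the index) of the integral operator associated with the kernel.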

Compact subgroups of $p$-adic fields and the groups $GL_n$ over them

Posted: 06 Jul 2022 08:40 PM PDT

I am trying to fill in some details in a collection of materials I'm currently studying. I have a pretty poor understanding of the topology of $p$-adic fields and the general linear group over these fields.

To start with, fix a $p$-adic field $F$ (and to be honest I'm not sure if this has to be of characteristic zero for the facts I'm concerned about). I was told that the subgroup $G_j=\{x\in F:|x|_F\leqslant q^{-j}\}$ is compact, where $|\cdot|_F$ is the normalized absolute value on $F$ and $q$ the size of the residue field $\mathfrak o_F/\mathfrak p$. I read in Neukirch that the valuation rings of local fields (which seem to be the $p$-adic fields I'm referring to, i.e., local non-archimedean fields, possibly of finite characteristic) are already compact. So is it true that the sets above are compact simply because they are closed subsets of a compact set?

When dealing with the locally profinite group $GL_n(F)$ where $F$ is a $p$-adic field, we have a neighborhood basis around the identity, defined by $$K_0=\{g\in GL_n(F):g\in M_n(\mathfrak o_F),|\det g|_F=1\},\quad K_j=GL_n(\mathfrak o_F)\cap ( 1+\pi^jM_n(\mathfrak o_F))$$ where $\pi$ is a prime element. It seems to me that their compactness follows from that of $M_n(\mathfrak o_F)$. I'm not sure why this is true; do we simply regard $M_n(\mathfrak o_F)$ as $\prod_i^{n^2}\mathfrak o_F$ which is simply a product of compact sets (since we give $M_n(F)$ the product topology) and thus compact?

Thanks in advance. Any help is appreciated.

Also, I sometimes see people write $K_0=GL_n(\mathfrak o_F)$. But in this case where does the condition $|\det g|_F=1$ go? (I understand that asking multiple questions in a single thread is unacceptable, but I'm not sure if this seemingly minor point should be asked as a separate question; I apologize in advance if I should do the latter.)

Integration over complex functions in the complex plane

Posted: 06 Jul 2022 08:29 PM PDT

As usual, suppose that $U$ is a simply connected open subset of $\mathbb{C}$, $f : U \to \mathbb{C}$ is an analytic function, and $\gamma$ is a positively-oriented contour in $U$. Cauchy's theorem on integrals then states: \begin{equation}\label{one}\tag{1} \oint_{\gamma}f(z)\, \mathrm{d}z=0. \end{equation} Now let $A$ be the area enclosed by the contour $\gamma$. Consider the integral of $f(z)$ taken over every point in $A$, i.e., $$\int_A f(z) \, \mathrm{d}A.$$ How does the above integral relate to the integral given in \eqref{one}?
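
One concrete relation for the special case where $\gamma$ is a circle of radius $r$ about $z_0$ (added editorially, and only for this special case): the areal mean-value property of analytic functions gives $\int_A f\,\mathrm{d}A=\pi r^2 f(z_0)$, so the area integral is generally nonzero even though the contour integral \eqref{one} vanishes. A numerical check with $f=\exp$ on the unit disk:

```python
import cmath, math

# Midpoint rule in polar coordinates over the unit disk: the area integral of
# an analytic f over the disk of radius r about z0 equals pi * r^2 * f(z0).
f, z0, r = cmath.exp, 0.0, 1.0
n_r, n_t = 400, 400
total = 0.0 + 0.0j
for p in range(n_r):
    rho = (p + 0.5) * r / n_r
    for q in range(n_t):
        theta = (q + 0.5) * 2 * math.pi / n_t
        total += f(z0 + rho * cmath.exp(1j * theta)) * rho   # f * (area element / drho dtheta)
total *= (r / n_r) * (2 * math.pi / n_t)
print(abs(total - math.pi * r**2 * f(z0)))  # ≈ 0
```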

When are eigenvalues of $A+A^T$ all positive?

Posted: 06 Jul 2022 09:02 PM PDT

Suppose $A$ is a square real-valued matrix; what restrictions on $A$ guarantee that all eigenvalues of $A+A^T$ are positive?
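
A sketch of the standard characterization (added editorially): $A+A^T$ is twice the symmetric part of $A$, and its eigenvalues are all positive exactly when $x^TAx>0$ for every $x\neq0$, since the skew-symmetric part contributes nothing to $x^TAx$. One sufficient recipe:

```python
import numpy as np

# Build A = P + S with P symmetric positive definite and S skew-symmetric;
# then A + A^T = 2P, whose eigenvalues are all positive.
rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
P = M @ M.T + 4 * np.eye(4)        # symmetric positive definite
S = rng.standard_normal((4, 4))
S = S - S.T                        # skew-symmetric
A = P + S
print(np.linalg.eigvalsh(A + A.T).min() > 0)  # True
```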

Help deriving the Beta function as a ratio of Gamma functions

Posted: 06 Jul 2022 09:14 PM PDT

It comes directly from the wikipedia page on the Beta function.

$$ \begin{aligned} \Gamma(x) \Gamma(y) &= \int_{u=0}^\infty e^{-u}u^{x-1} du \;\; \cdot \;\; \int_{v=0}^\infty e^{-v}v^{y-1} dv \\ &= \int_{v=0}^\infty \int_{u=0}^\infty e^{-u - v}u^{x-1} v^{y-1} dudv \\ \end{aligned} $$

Letting $u = zt$ and $v = z(1-t)$ means that $du = z\,dt$ and $dv = (1-t)\,dz$. I get the following after the substitution:

$$ \begin{aligned} &= \int_{z=0}^\infty \int_{t=0}^1 e^{-z}(zt)^{x-1} (z(1-t))^{y-1} z (1 - t) \; dtdz \\ \end{aligned} $$

But the Wikipdedia page has it as...

$$ = \int_{z=0}^\infty\int_{t=0}^1 e^{-z} (zt)^{x-1}(z(1-t))^{y-1}z\,dt \,dz $$

So I must have gone wrong somewhere, but I cannot see my mistake. Can you find it? If I take the Wikipedia page as correct, I can get the rest of the way there, but I do not see why I am getting an extra $(1-t)$ after the substitution.
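
The likely culprit (an editorial note): $du$ and $dv$ cannot be computed independently in a two-variable substitution; the correct area element is $du\,dv=\left|\det\frac{\partial(u,v)}{\partial(z,t)}\right|dz\,dt$, and that Jacobian determinant is $-z$, so the substitution contributes a single factor $z$, not $z(1-t)$. A symbolic check with sympy:

```python
import sympy as sp

# Jacobian of the substitution u = z t, v = z (1 - t):
# du dv = |det d(u,v)/d(z,t)| dz dt, and the determinant is -z.
z, t = sp.symbols('z t', positive=True)
u, v = z * t, z * (1 - t)
J = sp.Matrix([[sp.diff(u, z), sp.diff(u, t)],
               [sp.diff(v, z), sp.diff(v, t)]])
print(sp.simplify(J.det()))  # -z, so |det| = z
```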

Find the area between the graphs of two functions involving exponentials

Posted: 06 Jul 2022 08:59 PM PDT

Let the function $f:\Bbb{R} \to \Bbb{R}$ and $g:\Bbb{R} \to \Bbb{R}$ be defined by $$f(x)=e^{x-1}-e^{-|x-1|}$$ and $$g(x)=\frac{1}{2}(e^{x-1}+e^{1-x}).$$ Then find the area of the region in the first quadrant bounded by the curves $y=f(x)$,$y=g(x)$ and $x=0$.

My attempt: The graphs of $y=f(x)$ and $y=g(x)$ intersect at the point where $x=1+\ln \sqrt{3}$. So the region in the first quadrant between these two graphs splits into two parts: one from $0$ to $1$ and one from $1$ to $1+\ln \sqrt{3}$.

My doubt: Is there a shorter method for this problem? The work required is quite long, as one has to identify four parts of the region.
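
A quick numerical confirmation of the intersection point (added editorially): for $x>1$ the equation $f(x)=g(x)$ reduces to $e^{2(x-1)}=3$, i.e. $x=1+\ln\sqrt3$.

```python
import math

# Verify that f and g meet at x = 1 + ln(sqrt(3)).
f = lambda x: math.exp(x - 1) - math.exp(-abs(x - 1))
g = lambda x: 0.5 * (math.exp(x - 1) + math.exp(1 - x))
x_star = 1 + math.log(math.sqrt(3))
print(abs(f(x_star) - g(x_star)))  # ≈ 0 (both equal 2/sqrt(3))
```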

How to solve this equation in order to find surface area of a solid of revolution

Posted: 06 Jul 2022 09:21 PM PDT

Problem: Find the surface area of the solid formed by revolving $y = \frac{1}{5}x^5 + \frac{1}{12x^3}$ about the $y$-axis for $x$ between $1$ and $2$.

Question: How do I solve this equation for $x$?

I'd like to use the formula $$ 2\pi\int_{y_1}^{y_2} x(y)\sqrt{1 + \left(\frac{dx}{dy}\right)^2} \, dy $$ but I can't figure out how to solve for $x$ in order to use it.

Note that $y(x)$ is one-to-one on the $x$-interval, so I know an inverse exists there. I also know that I could implicitly differentiate to find $\frac{dx}{dy}$ if need be; nevertheless, the formula still requires me to know $x(y)$.

Could this be a typo and what was meant is to revolve around the $x$-axis instead? In this case, I have found the solution without a problem.
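
One observation (added editorially, under the problem as stated): you never need $x(y)$ explicitly, because the surface-area integral can be rewritten with $x$ as the variable, $S=2\pi\int_{x_1}^{x_2} x\sqrt{1+(dy/dx)^2}\,dx$. Here $y'=x^4-\frac{1}{4x^4}$ and $1+(y')^2=\left(x^4+\frac{1}{4x^4}\right)^2$ is a perfect square, so the integral is elementary; a numerical check against the closed form $\frac{339\pi}{16}$:

```python
import math

# S = 2*pi * ∫_1^2 x * sqrt(1 + (dy/dx)^2) dx, with y' = x^4 - 1/(4 x^4).
# Since 1 + y'^2 = (x^4 + 1/(4 x^4))^2, the integrand is 2*pi*(x^5 + 1/(4 x^3)).
def integrand(x):
    dydx = x**4 - 1 / (4 * x**4)
    return 2 * math.pi * x * math.sqrt(1 + dydx**2)

# Simpson's rule on [1, 2]
n = 1000
h = (2 - 1) / n
s = integrand(1) + integrand(2)
for k in range(1, n):
    s += integrand(1 + k*h) * (4 if k % 2 else 2)
S = s * h / 3
print(S, 339 * math.pi / 16)  # both ≈ 66.56
```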

If $a > 0$, $b > 0$ and $b^{3} - b^{2} = a^{3} - a^{2}$, where $a\neq b$, then prove that $a + b < 4/3$.

Posted: 06 Jul 2022 09:07 PM PDT

If $a > 0$, $b > 0$ and $b^{3} - b^{2} = a^{3} - a^{2}$, where $a\neq b$, then prove that $a + b < 4/3$.

Now what I thought is to manipulate given result somehow to get something in the form of $a + b$: \begin{align*} b^{3} - b^{2} = a^{3} - a^{2} & \Longleftrightarrow b^{3} - a^{3} = b^{2} - a^{2}\\\\ & \Longleftrightarrow (b-a)(a^{2} + ab + b^{2}) = (b+a)(b-a)\\\\ & \Longleftrightarrow a^{2} + ab + b^{2} = b + a \end{align*}

but what next?
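
A numerical exploration of the claimed bound (added editorially; it is not a proof): with $s=a+b$ and $p=ab$, the relation $a^2+ab+b^2=a+b$ becomes $s^2-p=s$, i.e. $p=s^2-s$, and distinct real $a,b$ require $s^2-4p>0$. Scanning $b$ and solving the quadratic $a^2+(b-1)a+(b^2-b)=0$ for $a$:

```python
import math

# Scan over b, solve a^2 + (b - 1) a + (b^2 - b) = 0 for a, and record the
# largest admissible a + b with a > 0 and a != b.
best = 0.0
for k in range(1, 200000):
    b = k / 150000                        # b ranging over (0, 4/3)
    disc = (b - 1)**2 - 4 * (b**2 - b)    # discriminant of the quadratic in a
    if disc <= 0:
        continue
    for a in ((1 - b + math.sqrt(disc)) / 2, (1 - b - math.sqrt(disc)) / 2):
        if a > 0 and abs(a - b) > 1e-12:
            best = max(best, a + b)
print(best)  # approaches, but stays below, 4/3
```

The scan suggests the route to the proof: $s^2-4p>0$ with $p=s^2-s$ gives $s(4-3s)>0$, hence $s<4/3$.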

Can global sections on a scheme $X$ over a field $k$ be treated as scheme morphisms from $X$ to the affine line over $k$?

Posted: 06 Jul 2022 09:27 PM PDT

Let $X$ be a scheme over a field $k$. Let $R$ be the $k$-algebra of global sections of $X$'s structure sheaf.

Is there any way to view scheme morphisms $X\rightarrow \mathrm{Spec}(k[t])$ as elements of $R$? Or are these two notions unrelated?

(Apologies if this question is ill-formed; I have little background in schemes.)

Finding angles between planes in 4D space.

Posted: 06 Jul 2022 09:11 PM PDT

Suppose I have two intersecting planes in a four dimensional Euclidean space. It seems to me that there are two angles between these planes. If the two planes intersect in a line then one of those angles is zero. If the two angles are non-zero then the planes intersect in a point. If one plane is the WX plane and the other the YZ plane then the two angles are both 90 degrees.

The best I can do is this. Each plane is defined by two orthonormal basis vectors. Project a basis vector onto the other plane. Then the angle is the arccos of the length of the projection. Do this with each basis vector to get the two angles. However I believe that the result depends on the choice of basis vectors.

What I need could be to find a unit vector in one plane such that its projection onto the other plane is of maximum length. I don't know how to do that. I suppose I could find an expression for the length of the projection, take the derivative, and find where it is zero. Is there an easier way?
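
The basis-independent notion here is the pair of principal angles between the two 2-planes (an editorial sketch): orthonormalize each spanning set, and the singular values of $Q_1^TQ_2$ are the cosines of the two angles, with no dependence on the chosen bases.

```python
import numpy as np

# Principal angles between two planes in R^4 via the SVD of Q1^T Q2.
def principal_angles(U, V):
    Q1, _ = np.linalg.qr(U)      # columns span the first plane
    Q2, _ = np.linalg.qr(V)
    s = np.clip(np.linalg.svd(Q1.T @ Q2, compute_uv=False), -1.0, 1.0)
    return np.degrees(np.arccos(s))

wx = np.array([[1., 0.], [0., 1.], [0., 0.], [0., 0.]])   # WX-plane
yz = np.array([[0., 0.], [0., 0.], [1., 0.], [0., 1.]])   # YZ-plane
print(principal_angles(wx, yz))  # [90. 90.]
```

Planes intersecting in a line give angles (0°, θ), matching the observation in the question; this maximizing/minimizing over unit vectors is exactly what the SVD computes.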

Given that $T(-1,2,-1) = (-7,-3)$, $T(0,-1,1) = (4,3)$ and $T(-2,3,0) = (-7,-3)$, determine $T(-4,8,-1)$.

Posted: 06 Jul 2022 09:10 PM PDT

Let $T:\textbf{R}^{3}\to\textbf{R}^{2}$ be a linear transformation. Given that $T(-1,2,-1) = (-7,-3)$, $T(0,-1,1) = (4,3)$ and $T(-2,3,0) = (-7,-3)$, determine $T(-4,8,-1)$.

(a) $(−42,−18)$

(b) $(−7,−3)$

(c) $(10,3)$

(d) $(−29,−15)$

(e) $(−19,−12)$
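
A sketch of the linearity computation (added editorially): express $(-4,8,-1)$ in terms of the three given vectors and apply $T$ coefficient-wise.

```python
import numpy as np

# If (-4, 8, -1) = c1 v1 + c2 v2 + c3 v3, then by linearity
# T(-4, 8, -1) = c1 T(v1) + c2 T(v2) + c3 T(v3).
V = np.array([[-1,  0, -2],
              [ 2, -1,  3],
              [-1,  1,  0]], dtype=float)    # columns are v1, v2, v3
c = np.linalg.solve(V, np.array([-4., 8., -1.]))   # c = (-2, -3, 3)
TV = np.array([[-7, 4, -7],
               [-3, 3, -3]], dtype=float)    # columns are T(v1), T(v2), T(v3)
print(TV @ c)  # [-19. -12.]  ->  option (e)
```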

How do you find the sum of a set of fractions where the denominator is increasing linearly?

Posted: 06 Jul 2022 08:38 PM PDT

Is there a quick way to find the sum of fractions whose denominators increase linearly, either over a finite range or infinitely?

For example:

1/5 + 1/6 + 1/7 .....

or:

1/5 + 1/7 + 1/9 .... 1/101

generating function in discrete mathematics

Posted: 06 Jul 2022 08:47 PM PDT

Exercise from mock exam

I really need help with questions b and c of this specific exercise; does anyone have an idea?

Can we apply the dominated convergence theorem here?

Posted: 06 Jul 2022 09:21 PM PDT

Let

  • $(E,\mathcal E)$ be a measurable space
  • $\mathcal E_b:=\{f:E\to\mathbb R\mid f\text{ is bounded and }\mathcal E\text{-measurable}\}$
  • $(\kappa_t)_{t\ge0}$ be a Markov semigroup on $E$
  • $f\in\mathcal E_b$ with $$(\kappa_hf-f)(y)\xrightarrow{h\to0+}0\;\;\;\text{for all }y\in E\tag1$$

Are we able to show that $$g(t):=(\kappa_tf)(x)\;\;\;\text{for }t\ge0$$ is continuous?

Let $t\ge0$. Noting that $(\kappa_t)_{t\ge0}$ is a contraction semigroup on $\mathcal E_b$, we see that $$\left|(\kappa_th-f)(y)\right|\le2\|f\|_\infty\;\;\;\text{for all }y\in E\tag2$$ and hence $$(\kappa_{t+h}f-\kappa_tf)(x)=(\kappa_t(\kappa_hf-f))(x)=\int\kappa_t(x,{\rm d}y)(\kappa_hf-f)(y)\xrightarrow{h\to0+}0\tag3$$ by Lebesgue's dominated convergence theorem. Thus, $g$ is right-continuous at $t$.

Now we may write $$(\kappa_{t-h}f-\kappa_tf)(x)=(\kappa_{t-h}(f-\kappa_hf))(x)=\int\kappa_{t-h}(x,{\rm d}y)(f-\kappa_hf)(y)\tag4$$ for all $h\in[0,t]$, but this time the measure $\kappa_{t-h}(x,\cdot)$ on the right-hand side also depends on $h$, which prevents a simple application of Lebesgue's dominated convergence theorem.

Can we fix this?

Maybe $\kappa_{t-h}$ tends to $\kappa_t$ in a suitable sense ...

Why must the point z = 0 be on the branch cut of every branch of the multiple-valued function $F(z) = z^{1/2}$

Posted: 06 Jul 2022 08:37 PM PDT

In my complex analysis textbook, the author writes: "It is not a coincidence that the point $z = 0$ is on the branch cut for $f_1, f_2,$ and $f_3$. The point $z = 0$ must be on the branch cut of every branch of the multiple-valued function $F(z) = z^{1/2}$."

Why must the point $z = 0$ be on the branch cut of every branch of the multiple-valued function $F(z) = z^{1/2}$?

Does it have something to do with the square root of $0$ being undefined in the complex plane? Why is the square root of $0$ undefined?

My textbook claims it is undefined.
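
An editorial note with a small demonstration: $z=0$ is a branch point of $z^{1/2}$ (the value $0^{1/2}=0$ is perfectly well defined; what fails at $0$ is single-valued analyticity). Following a continuous square root once around a small circle about $0$ returns to minus the starting value, so no continuous single-valued branch exists on any punctured neighborhood of $0$, and every branch cut must touch $0$.

```python
import numpy as np

# Track a continuous square root along the circle |z| = 0.1.  After one full
# loop the value flips sign: w(2*pi) = -w(0).  This monodromy is why z = 0
# must lie on every branch cut of z^(1/2).
theta = np.linspace(0, 2 * np.pi, 1001)
w = np.sqrt(0.1) * np.exp(1j * theta / 2)   # continuous branch of sqrt(z) along the loop
print(w[0], w[-1])                           # w[-1] == -w[0]
```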

Finding a dominant root $\alpha$ for a semisimple, irreducible Lie algebra $\mathfrak{g}$.

Posted: 06 Jul 2022 09:01 PM PDT

Suppose $V$ is an irreducible $\mathfrak{g}$-module with highest weight $\lambda$, where $\mathfrak{g}$ is a semisimple Lie algebra and $V$ is a complex vector space. My question goes as follows:


Then does there exist a dominant root $\alpha$ for $\mathfrak{g}$, s.th. $(\lambda, \alpha^{\lor})=1, (w_0 \lambda, \alpha^{\lor})=-1, (\mu, \alpha^{\lor})=0$ for all weights $\mu$ with $w_0 \lambda < \mu < \lambda$? Here $w_0$ is the longest element in the Weyl group $\mathfrak{W}$ , and $<$ the usual ordering.


Well, in particular, if this were true, then according to Bourbaki it would imply that $\lambda$ is minuscule. My main problem is that I have no clue how I should actually proceed in proving this, except that I probably have to use the irreducibility and semisimplicity somehow. So any help would be appreciated very much.

Are the determinants of these matrices always negative under these conditions?

Posted: 06 Jul 2022 08:54 PM PDT

Suppose $A,B \in M_{n}(\Bbb{R})$ such that $A = \left[C_{1}\middle|\frac{I}{0\dots0}\right], B= \left[C_{2}\middle|\frac{I}{0\dots0}\right]$ , where $A$ and $B$ have different first columns (represented as $C_{1}, C_{2}$).

Let $\lambda_{i}, i=1, \ldots, n$ denote the eigenvalues of $AB^2$. Then we have the condition that $|\lambda_{i}|<1 \, \forall i$.

Let $\beta_{i}, i=1, \ldots, n$ denote the eigenvalues of $A^2B$. Then we have the condition that $|\beta_{i}|<1 \, \forall i$.

Then I have an intuition that $\det(AB+A+I) < 0$ and $\det(BA+B+I)<0$ cannot hold simultaneously; that is, the two determinants cannot both be negative. I am not sure how to prove it.

The approach I am thinking of is: if $\det(AB + A + I) < 0$, then the matrix $AB+A+I$ must have an odd number of negative real eigenvalues (say $n_{A}$). Similarly, if $\det(BA + B + I) < 0$, then the matrix $BA+B+I$ must have an odd number of negative real eigenvalues (say $n_{B}$). That means $n_{A}n_{B}$ is an odd number, and if the intuition is true then $n_{A}n_{B}$ cannot be odd; it will always be even.

Since the absolute values of the eigenvalues of $AB^2,$ and $A^2B$ are less than one, that means we have $|\det(AB^2)| < 1$ and $|\det(A^2B)|<1$.

Prove that a matrix that is the product of a reflection and an orthogonal matrix is invertible

Posted: 06 Jul 2022 08:53 PM PDT

Hello: I need help with this problem:

Let $V=(V,b)$ be a finite-dimensional vector space equipped with a symmetric, positive-definite bilinear form $b$, and let $\{e_1,\dots,e_n\}$ be an orthonormal basis for the subspace $\ker(P_A)$ ($P_A$ is defined below).

For a matrix $A \in \mathrm{O}(V)$, let $\mathrm{O}_*(V)$ be the subset of $\mathrm{O}(V)$ consisting of those $A$ for which the matrix $P_A:=\frac{A-JAJ}{2}$ is invertible, where $J$ is a complex structure (a matrix such that $J^2=-I$).

Let $n=\mathrm{dim}\,\mathrm{ker}(P_A)$. For every $j \in \{1,\dots,n\}$ we define the reflections $r_j$ by $r_j(e_j)=-Je_j$, $r_j(Je_j)=-e_j$, and $r_j(v)=v$ for any $v \in V$ such that $b(v,e_j)=b(v,Je_j)=0$. Finally, let $$R:=r_1r_2\dots r_n \in \mathrm{O}(V).$$

I need to prove that $$RA \in \mathrm{SO}_*(V),$$ where similarly as $\mathrm{O}(V)$: $\mathrm{SO}_*(V)$ is the subset of $\mathrm{SO}(V)$ such that $P_B:=\frac{B-JBJ}{2}$ is invertible.

I already proved that $RA \in \mathrm{SO}(V)$; the only thing I haven't been able to figure out is how to prove that $\frac{1}{2}(RA-JRAJ)$ is invertible, since $n$ can be even or odd.

Also, $P_{r_j}$ is not invertible since $\det(r_j)=-1$.

What is a good, optimized approach to dealing with the product of reflections $$R=r_1r_2\cdots r_n?$$

Any help will be greatly appreciated. Thanks :).

Find the value of $\frac{1}{r_1} - \frac{1}{r}$ in the figure below

Posted: 06 Jul 2022 09:11 PM PDT

In a right triangle $ABC$ ($\angle A=90°$) with inradius $r$, cevian $AD$ is drawn in such a way that the inradii of $ABD$ and $ADC$ are both equal to $r_1$. If $AD=2$, calculate $\frac{1}{r_1}-\frac{1}{r}$. (Answer: $0.5$.)

My progress:


$\triangle CED \sim \triangle CAB \\ \frac{CE}{AC}=\frac{DE}{AB}=\frac{CD}{BC}\\ \triangle BDL \sim \triangle BCA\\ \frac{DL}{AC}=\frac{BD}{BC}=\frac{LB}{AB}\\ CE = CI\\ BK = BL$

but I still haven't found the necessary relationship to finalize

Green's function for Laplace equation in the exterior of disks

Posted: 06 Jul 2022 09:37 PM PDT

For two non-overlapping disks in $\mathbb{R}^2$, namely $D_1(0)$ and $D_1(a)$ for some $a=(a_1,a_2)$, how can we find the Green's function for the Dirichlet Laplace equation $\Delta u=0$ in the domain $\mathbb{R}^2 \setminus (\overline{D_1(0)} \cup \overline{D_1(a)})$? For a single disk, namely the domain $\mathbb{R}^2 \setminus \overline{D_1(0)}$, I think the Green's function reads $$ G(x,y) = \dfrac{1}{2\pi} \left[ -\log|x-y| + \log\left|x|y|-\dfrac{y}{|y|}\right| \right]$$ for $x,y \in \mathbb{R}^2$.
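
A quick sanity check of the single-disk formula as stated above (added editorially): a Dirichlet Green's function must vanish when $x$ lies on the boundary circle $|x|=1$, and the proposed $G$ does, since $\left|x|y|-y/|y|\right|=|x-y|$ whenever $|x|=1$.

```python
import numpy as np

# Evaluate the proposed single-disk Green's function for x on the unit circle
# and a fixed y outside the disk; the values should all be (numerically) zero.
def G(x, y):
    ynorm = np.linalg.norm(y)
    return (-np.log(np.linalg.norm(x - y))
            + np.log(np.linalg.norm(x * ynorm - y / ynorm))) / (2 * np.pi)

y = np.array([2.3, -0.7])                       # a point outside the unit disk
worst = max(abs(G(np.array([np.cos(t), np.sin(t)]), y))
            for t in np.linspace(0, 2 * np.pi, 200))
print(worst)  # ≈ 1e-16: G vanishes on the boundary, as required
```

For two disks no closed form this simple is available; the usual routes are iterated reflections across the two circles (an image series) or bipolar coordinates.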

Derivative of a distance function

Posted: 06 Jul 2022 09:08 PM PDT

I have a question about a derivative of a distance function.

Let $D$ be a bounded and connected open subset of $\mathbb{R}^{d}$ with Lipschitz boundary. We define the following distance function $F$ on $\mathbb{R}^{d}$: \begin{align} F(x)=d(x,\partial D)\left(=\inf_{y \in \partial D}|x-y| \right) \end{align} Since this function is Lipschitz continuous, it is differentiable almost everywhere (Rademacher's theorem).

Question

I want to know the value of $\left|\nabla F \right|$.

Since the Lipschitz constant of $F$ is $1$, we can deduce $\left| \nabla F \right| \le 1$ a.e. Can we show \begin{align} \left|\nabla F \right| \ge 1 \text{ a.e., or }\left| \nabla F \right| > 0 \text{ a.e.?} \end{align}
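
A numerical illustration for the model case $D$ = unit disk in $\mathbb{R}^2$ (added editorially): there $F(x)=|1-|x||$, which is differentiable away from the origin and the boundary circle, with $|\nabla F|=1$ wherever it is differentiable.

```python
import math

# F(x) = d(x, boundary of unit disk) = |1 - |x||; away from the origin and
# the circle |x| = 1 its gradient is -x/|x| (inside), so |grad F| = 1.
def F(x, y):
    return abs(1 - math.hypot(x, y))

def grad_norm(x, y, h=1e-6):
    gx = (F(x + h, y) - F(x - h, y)) / (2 * h)
    gy = (F(x, y + h) - F(x, y - h)) / (2 * h)
    return math.hypot(gx, gy)

print(grad_norm(0.3, 0.4))   # ≈ 1.0
print(grad_norm(-0.5, 0.6))  # ≈ 1.0
```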

Trying to understand significance of monoid as a one object category

Posted: 06 Jul 2022 08:37 PM PDT

So I generally understand the idea of a monoid from set theory, but I'm trying to understand better the mapping to category theory.

http://en.wikipedia.org/wiki/Monoid#Relation_to_category_theory

I think there are a couple of things that are tripping me up, and I think it'd be instructive to have a concrete example, so let's do the monoid of integers. So the associative operator is +, and the zero is 0.

As a category, it has one object: what is that object? And according to the wikipedia link, we have morphisms for every element... so we have a category whose object is Int, and the morphisms are 0, 1, 2, 3... (important question: can an object have multiple distinct morphisms to itself? I didn't think that was allowed, as I thought that morphisms were completely determined by the domain and codomain). So now we need our associative operator, ideally such that the map named 1 composed with the map named 2 is the same as the map named 3. I think this is where there is a breakdown in how I am thinking about these. How do we think about that last property?

Thank you very much for any help you have.

Update: the answer given gets me 95% of the way there, but there is one lingering question: The one thing I'm not sure about is how do we formalize equivalence? Like, we can definitely say that (a * b) * c = a * (b * c) (by the definition of a category), but how do we "define" 1 * 2 = 3 and think about it in a rigorous way when they are all morphisms with the same domain and codomain. Another way of asking this is if we have a morphism called 2, and a morphism called 3, and a morphism called 7, how do we know that 2*3=7?
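
A concrete sketch of the construction (an editorial illustration; the token `"*"` for the single object is an arbitrary choice): the one object carries no data, every integer is a morphism from it to itself, and composition is the monoid operation. In general a category may have many distinct morphisms sharing the same domain and codomain: here hom(*, *) is all of Z.

```python
# The monoid (Z, +) as a one-object category: the object is a formal token,
# each integer n is a morphism * -> *, composition is addition, and the
# identity morphism is the monoid unit 0.
OBJ = "*"

def dom(n): return OBJ
def cod(n): return OBJ

def compose(g, f):
    """Composition g ∘ f of two morphisms is the monoid operation."""
    return g + f

identity = 0

# "1 composed with 2 equals 3" is a fact about the composition rule, not
# about the morphisms' (identical) domains and codomains:
print(compose(1, 2))            # 3
print(compose(7, identity))     # 7
```

So equations like 1 ∘ 2 = 3 are not derived from the category axioms; they are part of the *data* of the category (its composition table), and the axioms only require that data to be associative and unital.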

“Unique” doesn't have a unique meaning

Posted: 06 Jul 2022 09:15 PM PDT

When using words like "unique" and "any", particularly in technical communication, I sometimes find myself deliberating over which definition and tenor is the most natural, or which alternative phrasing might be clearer even if less succinct or accessible.

Does "Every boy has a unique shirt" mean that

  • no two boys have the same shirt,

or does it mean that

  • no two shirts belong to the same boy?

I suppose the former; if so, then does the latter mean "Every shirt belongs to a unique boy"?
