Saturday, September 4, 2021

Recent Questions - Mathematics Stack Exchange


At what value of `a` does the sum of the series equal 1?

Posted: 04 Sep 2021 08:09 PM PDT

The series is $$\sum_{i=1}^{\infty}\frac{i^2}{a^i}=1.$$
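The closed form needed here (a sketch, not from the question) is the standard generating function $\sum_{i=1}^{\infty} i^2x^i=\frac{x(1+x)}{(1-x)^3}$ for $|x|<1$. Setting $x=\frac{1}{a}$ (assuming $|a|>1$ so the series converges) and clearing denominators, the condition becomes $$a(a+1)=(a-1)^3 \iff a^3-4a^2+2a-1=0,$$ whose unique real root ($\approx 3.51$) is the required value.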

Problem with finding a substitution in equation

Posted: 04 Sep 2021 08:05 PM PDT

My work on the mechanics of fluids has led me to the following function, which is an expression for Eta in terms of x and r; the rest of the symbols are fixed constants.

[image: my equation]

I intuitively came to the conclusion that the above equation must be independent of x. In other words, I should be able to find an Eta such that, when I substitute it into the equation, the equation becomes independent of x, but I have not been able to find one in practice. I tried Bessel and Legendre functions and got no answer. My goal is to find the function that eliminates the x-dependence of the equation. I am waiting for your ideas, and I appreciate your efforts.

Finding a suitable sample size from unknown population in order to confidently assume correlation

Posted: 04 Sep 2021 08:01 PM PDT

Let's say I have two bags containing many balls; the balls can be any colour (including all being the same colour). The number of balls in each bag is unknown (and may be infinite). I randomly select 1 ball from each bag and find both are blue. I do this again, and find that they are both blue again. Intuitively, after doing this hundreds of times and finding they are both blue every time, I begin to very strongly suspect that both bags contain only blue balls. If I do this 1 million times and the result is all blue, I can with very high confidence assume that the two bags contain only blue balls. However, it may be that I'm just very, very lucky and the one-million-and-first draw will give me a red ball; I just haven't seen it yet.

So what is the number of times I must conduct this experiment to assume with high confidence (> 95%) that both bags contain only blue balls?

A real world example would be if I have a theory that states: "gravity always attracts". Now it is obviously impossible for me to test every combination of every material in existence to check if gravity always attracts, but given enough tests, we reasonably assume that "gravity always attracts" to be a fact.
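One common frequentist sketch for the sample-size question (an assumption I'm adding, not something from the question: you must first fix a smallest fraction `p_min` of non-blue balls you care to detect, since no finite number of draws rules out an arbitrarily rare red ball) is to require that an undetected fraction `p_min` would survive $n$ all-blue draws with probability below $1-\text{confidence}$:

```python
import math

def trials_needed(p_min, confidence=0.95):
    """Smallest n such that (1 - p_min)**n <= 1 - confidence.

    If a fraction p_min of a bag's balls were non-blue, the chance of
    seeing n blue draws in a row is (1 - p_min)**n; we require that
    chance to fall below 1 - confidence.
    """
    return math.ceil(math.log(1 - confidence) / math.log(1 - p_min))

# e.g. to be 95% confident (per bag) that non-blue balls are rarer than 5%:
n = trials_needed(0.05)  # -> 59
```

This never certifies "only blue balls"; it only bounds how common non-blue balls can plausibly be, which is also the spirit of the "gravity always attracts" example below.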

Prove $\int_0^{\infty}\frac{\ln(1+x^2)}{x^2(1+x^2)}dx=\pi\ln\big(\frac e 2\big)$

Posted: 04 Sep 2021 08:07 PM PDT

I was having trouble with the question:

Prove that $$I:=\int_0^{\infty}\frac{\ln(1+x^2)}{x^2(1+x^2)}dx=\pi\ln\big(\frac e 2\big)$$

My Attempt

Performing partial fractions: $$I=\int_0^{\infty}\frac{\ln(1+x^2)}{x^2(1+x^2)}dx=\int_0^{\infty}\frac{\ln(1+x^2)}{x^2}dx-\int_0^{\infty}\frac{\ln(1+x^2)}{1+x^2}dx$$ First integral: $$\int_0^{\infty}\frac{\ln(1+x^2)}{x^2}dx=-\Bigg[\frac{\ln(x^2+1)}{x}\Bigg]_0^{\infty}+\int_0^{\infty}\frac{2}{x^2+1}dx=\pi$$ Second integral: $$\int_0^{\infty}\frac{\ln(1+x^2)}{1+x^2}dx$$ How do you solve this integral? Thank you for your time.
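A standard route for the remaining piece (a sketch, not from the question) is the substitution $x=\tan\theta$ together with the classical value $\int_0^{\pi/2}\ln\cos\theta\,d\theta=-\frac{\pi}{2}\ln 2$: $$\int_0^{\infty}\frac{\ln(1+x^2)}{1+x^2}dx=\int_0^{\pi/2}\ln(\sec^2\theta)\,d\theta=-2\int_0^{\pi/2}\ln\cos\theta\,d\theta=\pi\ln 2,$$ so $I=\pi-\pi\ln 2=\pi\ln\big(\frac e 2\big)$, matching the claimed value.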

Find f(x) in the following condition

Posted: 04 Sep 2021 07:40 PM PDT

$$f(f(x))=x\int_0^xf(t)dt$$

I tried differentiating both sides using the chain and Leibniz rules and ended up with $$f'(f(x))\cdot f'(x)=\int_0^xf(t)dt + xf(x)$$ Now I don't know how to solve further from this. Any help would be appreciated.
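One exploratory route (a sketch I'm adding, not from the question) is a power-law ansatz $f(x)=cx^{\alpha}$ with $c,\alpha>0$: then $$f(f(x))=c^{1+\alpha}x^{\alpha^2},\qquad x\int_0^x ct^{\alpha}\,dt=\frac{c}{\alpha+1}\,x^{\alpha+2},$$ and matching exponents and coefficients gives $\alpha^2=\alpha+2\Rightarrow\alpha=2$, then $c^3=\frac{c}{3}\Rightarrow c=\frac{1}{\sqrt 3}$. So $f(x)=\frac{x^2}{\sqrt 3}$ satisfies the equation, though this says nothing about uniqueness.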

Is there a way to convert a 2d bezier curve into 1d (with time as another dimension)?

Posted: 04 Sep 2021 07:35 PM PDT

I was trying to do some curve fitting for 2d points, and I immediately found this piece of code here: https://github.com/erich666/GraphicsGems/blob/master/gems/FitCurves.c

But after some research I think the algorithm does not work for me, since my target curve is a 1D Bezier curve with a time dimension, basically: $$\mathrm{P_{mine}} = (t, \mathrm{Bezier}(t, P_0, P_1, P_2, P_3))$$ while the code produces a 2D Bezier curve given a series of 2D samples: $$\mathrm{P_{Gems}}=(\mathrm{Bezier}({u, x_0, x_1, x_2, x_3}), \mathrm{Bezier}({u, y_0, y_1, y_2, y_3}))$$

They don't appear to be equivalent.

So either there's an easy way to convert $\mathrm{P_{Gems}}$ into $\mathrm{P_{mine}}$, or I'll have to find another algorithm. To convert $P_{Gems}$ to $P_{mine}$ seems to involve solving this equation for $u$, so that I can represent y with it: $$x = (1-u)^3x_0 + 3u(1-u)^2x_1+3u^2(1-u)x_2+u^3x_3$$ But it looks so hopelessly complicated, working out an algorithm to fit my curves seems easier.

Do I get anything wrong here?
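For what it's worth, the conversion need not be done in closed form: if the fitted $x$-control points are monotone in $u$ (a reasonable assumption when $x$ plays the role of time), then $x(u)=x$ has a unique root in $[0,1]$ that can be found numerically. A hedged sketch (function names are mine, not from Graphics Gems):

```python
def cubic_bezier(u, p0, p1, p2, p3):
    """Evaluate a 1D cubic Bezier at parameter u in [0, 1]."""
    v = 1.0 - u
    return v**3 * p0 + 3 * u * v**2 * p1 + 3 * u**2 * v * p2 + u**3 * p3

def y_at_time(t, xs, ys, tol=1e-12):
    """Given x control points xs (assumed increasing in u) and y control
    points ys, solve Bezier(u; xs) = t for u by bisection, then return
    the matching y value -- i.e. convert P_Gems into P_mine pointwise."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if cubic_bezier(mid, *xs) < t:
            lo = mid
        else:
            hi = mid
    u = 0.5 * (lo + hi)
    return cubic_bezier(u, *ys)

# with evenly spaced xs, x(u) = u exactly, so this recovers y(t) directly
y = y_at_time(0.5, (0.0, 1/3, 2/3, 1.0), (0.0, 0.0, 1.0, 1.0))
```

Note the result is $y$ as a function of $t$ but no longer a single cubic Bezier in $t$; if an exact Bezier-in-$t$ is required, refitting (or a dedicated 1D fitting algorithm) is still needed.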

How to determine the range of values of t for which the sinusoid $x(t)=Ae^{-at} \cos{\omega t}$ has negative second derivative?

Posted: 04 Sep 2021 07:29 PM PDT

Considering the damped sinusoid $$x(t)=Ae^{-at} \cos{\omega t}$$

I would like to obtain the exact ranges of t where the second derivative is negative.

$$x'(t)=-aAe^{-at}\cos{\omega t} - Ae^{-at} \omega \sin {\omega t}$$

Note that this first derivative is zero when $\tan{\omega t}=\frac{-a}{\omega}\implies t=\frac{\tan^{-1}\left(\frac{-a}{\omega}\right)+k\pi}{\omega},k\in \mathbb{Z}$

Note also that if we consider $t$ a function of $k$ we can calculate the difference in t between two successive critical points: $t(k+1)-t(k)=\frac{\pi}{\omega}$. Because we have a sinusoid, this is the distance in t between a max and a min or vice versa.

Now I calculate when the second derivative is zero

$$x''(t)=a^2Ae^{-at}\cos{\omega t}+aAe^{-at} \omega \sin{\omega t} +aAe^{-at} \omega \sin{\omega t} - Ae^{-at} \omega^2 \cos{\omega t}$$ $$x''(t)=Ae^{-at}(\cos{\omega t}(a^2-\omega^2)+2a\omega\sin(\omega t))$$ $$x''(t)<0 \implies \cos{\omega t}(a^2-\omega^2)+2a\omega\sin(\omega t) < 0$$ $$-\tan{\omega t}>\frac{a^2-\omega^2}{2a\omega}$$ $$\tan{\omega t}<\frac{\omega^2-a^2}{2a\omega}$$ $$t<\frac{\tan^{-1}\big({\frac{\omega^2-a^2}{2a\omega}}\big)}{\omega}$$

I am not sure how to interpret this result.

If we take the specific case $a=0.25$ and $\omega=1$, so that $x(t)=e^{-0.25t}\cos t$, we have: [image: sinusoid, first derivative, and second derivative]

The condition we derived for $x''(t)<0$ is $t<\tan^{-1}\big(\frac{1-0.25^2}{2\cdot 0.25\cdot 1}\big)\approx 1.08$.

From the graph and intuition, it seems that there is an inflection point every time the graph of $x(t)$ crosses $x=0$. From this point of view, the sign of the second derivative changes every $\frac{\pi}{\omega}$ length of t, or in our example every length $\pi$ in t.

Considering the result obtained above, namely $$t<\frac{\tan^{-1}\big({\frac{\omega^2-a^2}{2a\omega}}\big)}{\omega}$$

How do we square this with the observation that the sign of the second derivative is half the time positive and half the time negative? The result that $t$ is smaller than a certain value seems to imply otherwise, namely that the second derivative spends much more time being negative than positive.
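A numerical cross-check of the intuition (a sketch I'm adding; it samples $x''$ and refines its sign changes by bisection) confirms that consecutive zeros of the second derivative are spaced exactly $\pi/\omega$ apart, so the sign does alternate on intervals of equal length:

```python
import math

a, w, A = 0.25, 1.0, 1.0

def xpp(t):
    # x''(t) = A e^{-a t} ((a^2 - w^2) cos(w t) + 2 a w sin(w t))
    return A * math.exp(-a * t) * ((a * a - w * w) * math.cos(w * t)
                                   + 2 * a * w * math.sin(w * t))

# locate the sign changes of x'' on [0, 20] by dense sampling + bisection
zeros = []
ts = [i * 1e-3 for i in range(20000)]
for t0, t1 in zip(ts, ts[1:]):
    if xpp(t0) * xpp(t1) < 0:
        lo, hi = t0, t1
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            if xpp(lo) * xpp(mid) <= 0:
                hi = mid
            else:
                lo = mid
        zeros.append(0.5 * (lo + hi))

gaps = [t2 - t1 for t1, t2 in zip(zeros, zeros[1:])]
# first zero is at arctan((w^2 - a^2) / (2 a w)) / w ~ 1.08,
# and every later zero is exactly pi / w further along
```

The single inequality in the derivation only captures one branch of $\tan$: dividing by $\cos\omega t$ flips the inequality whenever $\cos\omega t<0$, which is why the condition $t<1.08$ cannot describe all the alternating intervals.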

Confusion about independence of CDFs

Posted: 04 Sep 2021 07:18 PM PDT

I am confused about the solution given here to part c of the question posed below. I understand the algebra steps, but I do not understand how we can conclude at the end that $A_t$ and $R_t$ are therefore independent. I thought that to show independence we would need to show that $P(A_t \leq x, R_t \leq y) = P(A_t \leq x)\cdot P(R_t\leq y)$. I do not see how the result derived, that $P(A_t > x, R_t > y) = P(A_t > x)\cdot P(R_t > y)$, implies independence. I believe I am missing some useful identity in probability.

[image: the question]

And the solution for part c is given as follows:

[image: the solution to part c]

Graded algebra associated to finite groups

Posted: 04 Sep 2021 07:17 PM PDT

I would like to see many examples of graded algebras that arise from, or are associated with, a finite group. Please give me some references.

Thanks in advance

Difficult limit: $ \lim_{n \rightarrow \infty} n^{3/2} \int _0^1 \frac{x^2}{(x^2+1)^n}dx $

Posted: 04 Sep 2021 08:10 PM PDT

I found this question online

Find the limit:$$\lim_{n \rightarrow \infty} n^{3/2} \int _0^1 \frac{x^2}{(x^2+1)^n}dx $$

I've been told that I need to use the gamma function, by writing $n^{3/2}=n\sqrt{n}$ and making the substitution $u=x\sqrt{n}\rightarrow du=\sqrt{n}\, dx$:

$$\lim_{n \rightarrow \infty} n^{3/2} \int _0^1 \frac{x^2}{(x^2+1)^n}dx = \lim_{n \rightarrow \infty} \int _0^{1} \frac{n x^2\sqrt{n}}{(x^2+1)^n}dx = \lim_{n \rightarrow \infty} \int _0^{\sqrt{n}} \frac{u^2}{(\frac{u^2}{n}+1)^n}du $$

How does one proceed from here to get the gamma function?
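One standard way to finish (a sketch; the interchange of limit and integral needs a dominated-convergence justification): since $\big(1+\frac{u^2}{n}\big)^{-n}\to e^{-u^2}$ pointwise, $$\lim_{n \rightarrow \infty} \int _0^{\sqrt{n}} \frac{u^2}{(\frac{u^2}{n}+1)^n}\,du=\int_0^{\infty}u^2e^{-u^2}\,du=\frac{1}{2}\Gamma\!\left(\frac{3}{2}\right)=\frac{\sqrt{\pi}}{4},$$ where the last step is the substitution $s=u^2$, which turns the integral into $\frac{1}{2}\int_0^{\infty}s^{1/2}e^{-s}\,ds$.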

Solving this differential equation $y'=\frac{x+y+4}{x-y-6}$

Posted: 04 Sep 2021 07:20 PM PDT

If $\frac{dy}{dx}=\frac{x+y+4}{x-y-6}$, then I set $x=z+1$ and $y=w-5$ to get $\frac{dw}{dz}=\frac{z+w}{z-w}$; from here it's been solved in the text (George Simmons, Differential Equations). My concern is whether I made a mistake in using $\frac{dy}{dx}=\frac{dw}{dz}$. My reason is $\frac{dy}{dx}=\frac{dy}{dw}\frac{dw}{dz}\frac{dz}{dx}$. The answer I get doesn't have $(x+y+4)$ or $(x-y-6)$ anywhere, unlike the solutions I find through Wolfram Alpha. My solution is $\tan^{-1}\big(\frac{y+5}{x-1}\big)=\log\sqrt{(x-1)^2+(y+5)^2}+c$.

Challenge - Lucas number infinite sum

Posted: 04 Sep 2021 07:41 PM PDT

The challenge: Find the exact value of

$$\sum_{n=0}^{\infty}\frac{1}{L_{n}L_{n+2}}$$

Where $L_{n}$ is the $n$-th Lucas number - that is, $L_{0}=2,L_{1}=1$ and $L_{n}=L_{n-1}+L_{n-2}$.
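Before hunting for a closed form, a quick numerical check is cheap (a sketch, using the indexing $L_0=2, L_1=1$ above); Lucas numbers grow geometrically, so the partial sums converge very fast:

```python
def lucas_partial_sum(terms):
    """Sum of 1 / (L_n * L_{n+2}) for n = 0 .. terms-1, with L_0 = 2, L_1 = 1."""
    L = [2, 1]
    while len(L) < terms + 2:
        L.append(L[-1] + L[-2])
    return sum(1.0 / (L[n] * L[n + 2]) for n in range(terms))

s = lucas_partial_sum(40)  # 40 terms already exhaust double precision
```

The value the partial sums home in on suggests looking for a telescoping decomposition; note that $L_{n+2}-L_n=L_{n+1}$.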

Does linearity require a field, ring, or group?

Posted: 04 Sep 2021 07:23 PM PDT

Given any set $S$, a function $f : S^n \to S^m$ is linear iff for all $a, b \in S$ and $x, y \in S^n$, $f(ax+by) = af(x) + bf(y)$. This presupposes, of course, that addition and scalar multiplication in $S$ are defined. What requirements do we impose on these operations? Must they constitute a field? A ring? Since different definitions of addition and multiplication may be valid and result in fields, are there functions which are linear under any field definition, or are we always talking implicitly about a specific, fixed field?

Thinking about it further, the question as posed may be trivial, because we've only defined linearity but haven't given it any significance. Does the definition below tighten things?

Given a set $F$ of functions from $S^n \to S^m$, $L$ is the subset of $F$ such that:

  1. $f, g \in F \implies f \circ g \in F$
  2. There is a one-to-one correspondence between elements of $L$ and matrices, a matrix defined as $n \cdot m$ values from $S$.

What restrictions on addition and multiplication are necessary for that definition to work? To allow standard matrix multiplication to implement the functions? To allow composition to be implemented as matrix multiplication?

Improper integral $\int_0^{\infty} \frac{1}{(x+1)(\pi^2+\ln^2(x))}dx$

Posted: 04 Sep 2021 08:02 PM PDT

I am having trouble solving

$$\int_0^{\infty} \frac{1}{(x+1)(\pi^2+\ln^2(x))}dx$$

Let $\ln(x)=u$; then $dx=e^u\,du$, and as $x$ runs over $(0,\infty)$, $u$ runs over $(-\infty,\infty)$.

$$\Rightarrow \int_0^{\infty} \frac{dx}{(x+1)(\pi^2+\ln^2(x))}=\int_{-\infty}^{\infty} \frac{e^u+1-1}{(e^u+1)(\pi^2+u^2)}du=\int_{-\infty}^{\infty} \frac{du}{\pi^2+u^2}-\int_{-\infty}^{\infty} \frac{du}{(e^u+1)(\pi^2+u^2)}$$

$$=1-\int_{-\infty}^{\infty} \frac{du}{(e^u+1)(\pi^2+u^2)}$$

How do you proceed from here? I put it into an integral calculator and it stated "Antiderivative or integral could not be found." How do you solve this ?
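One way forward (a sketch I'm adding; it uses the fact that after the substitution $u=\ln x$ the integral naturally runs over all of $\mathbb{R}$, not just $[0,\infty)$): the reflection $u\mapsto-u$ satisfies $\frac{1}{e^u+1}+\frac{1}{e^{-u}+1}=1$, so averaging the integral with its reflection gives $$\int_{-\infty}^{\infty} \frac{du}{(e^u+1)(\pi^2+u^2)}=\frac{1}{2}\int_{-\infty}^{\infty} \frac{du}{\pi^2+u^2}=\frac{1}{2},$$ which pins down the value of the original integral as $1-\frac{1}{2}=\frac{1}{2}$.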

Prove that $\sqrt{n - 1}\leq u_n \leq \sqrt n$, and find $\lim_{n\rightarrow \infty }\frac{u_n}{\sqrt n}$

Posted: 04 Sep 2021 07:20 PM PDT

Given a sequence $(u_n)$ defined by the recursive formula $$\left\{\begin{matrix} u_1=1 \\ u_{n+1}=\frac{u_n^2+2n}{3u_n}, n \in \mathbb{N}^* \end{matrix}\right.$$ prove that $\sqrt{n - 1}\leq u_n \leq \sqrt n$, and find $\lim_{n\rightarrow \infty }\frac{u_n}{\sqrt n}$.

The only clue that I have is that the first problem is solved by induction.
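A numerical sanity check of both claims (a sketch; it just iterates the recursion) can guide the induction before writing it up:

```python
import math

def u_seq(n_max):
    """Return [u_1, ..., u_{n_max}] for u_1 = 1, u_{n+1} = (u_n^2 + 2n) / (3 u_n)."""
    u = [1.0]
    for n in range(1, n_max):
        u.append((u[-1] ** 2 + 2 * n) / (3 * u[-1]))
    return u

us = u_seq(200)
# check sqrt(n - 1) <= u_n <= sqrt(n) term by term ...
bounds_hold = all(math.sqrt(n - 1) <= u <= math.sqrt(n)
                  for n, u in enumerate(us, start=1))
# ... and watch u_n / sqrt(n) settle toward its limit
ratio = us[-1] / math.sqrt(len(us))
```

Once the squeeze $\sqrt{n-1}\le u_n\le\sqrt n$ is proved, the limit follows immediately, since $\sqrt{(n-1)/n}\to 1$.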

Inequality max eigenvalue times max diagonal element.

Posted: 04 Sep 2021 07:59 PM PDT

Given an $n \times n$ matrix $A$ with eigenvalues $\lambda_1, \cdots, \lambda_n \in \mathbb{R}$, and a real diagonal matrix $B=\text{diag}(b_1,\cdots,b_n)$:

Now compute the eigenvalues of $C=AB$ (or $C=BA$). If the eigenvalues are $c_1,\cdots,c_n$, can we conclude that $|c|_{\max} \le |\lambda|_{\max} \cdot |b|_{\max}$?

$(p, \theta)$ values of lines intersecting the line segment from $(-l/2,0)$ to $(l/2, 0)$

Posted: 04 Sep 2021 07:21 PM PDT

I'm trying to argue that any line in $\mathbb{R}^2$ that intersects the line segment $L$ along the $x$-axis from $- \frac{l}{2}$ to $\frac{l}{2}$, for some $l >0$, must make an angle with the $x$-axis in the range $\theta \in [0, 2 \pi]$ and have distance from the origin, say $p$, in the range $[0, |\cos(\theta)|\, l/2 ]$.

The first part is easy since any line segment through the origin that makes angle $\theta \in [0, 2 \pi]$ with the x-axis will intersect $L$.

To show the range of $p$: it seems to me that a vertical line at $x=l/2$ has distance $l/2$ from the origin and intersects $L$ with $\theta =0$, while a vertical line at $x=0$ intersects $L$ and has distance $0$ from the origin. I am not sure why the max value for $p$ should be $l/2$. Any insights appreciated.

The context of this question, for reference, is in an integral in Do Carmo's Differential Geometry of Curves and Surfaces at the bottom of page 44 as part of an argument for the Cauchy-Crofton formula.

$\{y \mid \limsup\limits_{i \rightarrow \infty}d(x_i,y) \leq 1\}$ is a Borel set

Posted: 04 Sep 2021 07:22 PM PDT

Can I write the set $\{y \mid \limsup\limits_{i \rightarrow \infty}d(x_i,y) \leq 1\}=\bigcup\limits_{N=1}^{\infty}\bigcap\limits_{n \geq N}\{y \mid d(y,x_n) \leq 1\}$? I am trying to write the set as some combination of unions and intersections of open or closed sets in order to show the set is Borel. I tried to write the set as $\{y \mid \inf\{\sup\{d(y,x_k) \mid k \geq i\}\}\leq 1\}$, but am unsure whether this set is obviously closed. If $\limsup\limits_{i \rightarrow \infty}d(x_i,y) \leq 1$, then there would be an integer $m$ such that $d(x_i,y) \leq 1$ for all $i \geq m$, so would the answer be $\bigcap\limits_{i=m}^{\infty}\bigcup\limits_{k=i}^{\infty}\{y \mid d(x_k,y) \leq 1\}$?

If none of my answers is correct, could someone help me try to show $\{y \mid \limsup\limits_{i \rightarrow \infty}d(x_i,y) \leq 1\}$ is a Borel set?

how to solve inequalities like $0.9<(1+x)^{1/2} - \frac x 2 <1.1$

Posted: 04 Sep 2021 08:06 PM PDT

$0.9 < (1+x)^{1/2} - \frac x 2 <1.1$

The first part comes under the square root but the second part is just $x/2$, so I don't know how to go about finding the bound on $|x|$ from this.
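A numerical look before the algebra (a sketch I'm adding) helps: writing $g(x)=\sqrt{1+x}-\frac x2$, the derivative $g'(x)=\frac{1}{2\sqrt{1+x}}-\frac12$ vanishes only at $x=0$, where $g(0)=1$ is the maximum, so the upper bound $1.1$ is never active and only $g(x)>0.9$ matters. Bisection then brackets the two endpoints:

```python
import math

def g(x):
    return math.sqrt(1 + x) - x / 2

def solve_for(target, lo, hi, iters=80):
    """Bisection for g(x) = target on [lo, hi], where g is monotone."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if (g(mid) - target) * (g(lo) - target) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# g increases on [-1, 0] and decreases on [0, inf), with max g(0) = 1 < 1.1
left = solve_for(0.9, -1.0, 0.0)   # matches (0.4 - sqrt(3.2)) / 2
right = solve_for(0.9, 0.0, 4.0)   # matches (0.4 + sqrt(3.2)) / 2
```

For an exact answer, squaring $\sqrt{1+x}=0.9+\frac x2$ (valid where both sides are nonnegative) gives the quadratic $x^2-0.4x-0.76=0$, confirming the endpoints $\frac{0.4\pm\sqrt{3.2}}{2}$.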

Help with a PCA procedure.

Posted: 04 Sep 2021 08:03 PM PDT

I'm studying THIS paper which builds an index from a set of observed variables using Principal Component Analysis (PCA).

However, the procedure described in Section 3.1 by formulas (6)-(8) confused me.

In particular, suppose the following model:

$$Y_i=\beta_1 A_i+\beta_2 B_i+\beta_3 C_i+u_i, \ i\in\{1,\dotsc,n\}$$ where explanatory vectors $A,B,C$ are observed, $u$ is an error term and vectors $Y,\beta$ are unknown.

The authors proposed to calculate the correlation matrix: $$R_3=X'X$$ where $X$ is the standardized version of the design matrix $[A,B,C]_{n\times 3}$. With the help of PCA, we can obtain the eigenvalues $\lambda_1\geq\dotsc\geq \lambda_k$ associated to the eigenvectors $\phi_1,\dotsc,\phi_k$. We can then calculate the principal components $P_1,\dotsc,P_k$. For simplicity, suppose $k=3$. The authors proposed an estimator for $Y$ given by $$Y_i=\frac{\sum_{j,k=1}^3 \lambda_j P_{ki}}{\sum_{j=1}^3 \lambda_j },$$ where $P_k=X\lambda_j$.

Question: Is there any error with this formula?

Comments

The notation $P_{ki}$ is more likely to mean the $i$th entry of the $k$th principal component. Also, $\sum_{j,k=1}^3 :=\sum_{j=1}^3\sum_{k=1}^3$ commonly. But if I accept this notation for the summation, an inconsistency arises once $P_k=X\lambda_j$. On the other hand, as $P_k$ is a principal component, I was expecting that $P_k=X\phi_k$.

Did they mean $$Y_i=\frac{\sum_{j=1}^3 \lambda_j(X\phi_{j})_i}{\sum_{j=1}^3 \lambda_j }=\frac{\sum_{j=1}^3 \lambda_j \sum_{k=1}^3 X_{i,k}\phi_{k,j}}{\sum_{j=1}^3 \lambda_j }=\frac{\sum_{j,k=1}^3 \lambda_j X_{i,k}\phi_{k,j}}{\sum_{j=1}^3 \lambda_j }$$ where $\phi_{i,j}$ denotes the $i$th entry of the $j$th eigenvector and $X_{i,j}$ denotes the $(i,j)$th entry of the matrix $X$?

Thanks in advance.

How to prove that the sum of all integers from $n$ to $n+k$ is given by $\ \frac{1}{2}(n+k)(k-n+1)$?

Posted: 04 Sep 2021 08:05 PM PDT

Prove that the sum of all integers from $n$ to $n+k$ is given by

$$\frac{1}{2}(n+k)(k-n+1)$$

Several methods were employed to tackle this problem, such as:

  • Partial sums
  • Binomial coefficient expansion
  • Arithmetic series

None of the methods seems to work...

$A$ upper triangular $n\times n$ over $\mathbb{R}$. Show that $I-A$ is invertible and express inverse of $I-A$ as a function of $A$.

Posted: 04 Sep 2021 07:37 PM PDT

Question: Let $A$ be a strictly upper triangular $n\times n$ matrix with real entries. Show that $I-A$ is invertible and express the inverse of $I-A$ as a function of $A$.

Attempt (and thoughts): $A$ is strictly upper triangular, so the diagonal is all $0$, everything below the diagonal is all $0$, and then we have entries in the upper part, say, $a_{1,2}$, as the entry in row $1$ column $2$. If we consider $I-A$, then $I-A$ has $1$ in every entry of the diagonal, only $0$ below the diagonal, and the negation of each entry of $A$ in the upper part. So $\det(I-A)=1^n=1$, and $I-A$ is invertible. Consider $X=I+A+A^2+\dots +A^{n-1}$. Then, I want to show that $(I-A)X=X(I-A)=I$. I can see why this "might" work, since we would have $I$, then add all the powers of $A$, then subtract all the powers of $A$... but are there any unexpected issues I should be careful of here? Thank you much!
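A quick numerical confirmation (a sketch using NumPy; the key point is that a strictly upper triangular $n\times n$ matrix is nilpotent, $A^n=0$, so the geometric series for $(I-A)^{-1}$ truncates):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
# random strictly upper triangular matrix (zero diagonal): A is nilpotent, A^n = 0
A = np.triu(rng.normal(size=(n, n)), k=1)
I = np.eye(n)

# candidate inverse: X = I + A + A^2 + ... + A^(n-1)
X = sum(np.linalg.matrix_power(A, k) for k in range(n))

assert np.allclose(np.linalg.matrix_power(A, n), 0)  # nilpotency
assert np.allclose((I - A) @ X, I)                   # telescoping: I - A^n = I
assert np.allclose(X @ (I - A), I)
```

The telescoping is exact: $(I-A)X = I - A^n$, and $A^n=0$ because each multiplication by $A$ pushes the nonzero entries at least one diagonal higher.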

Is there a surjective homomorphism from $(\Bbb{Q},+)$ to $(\Bbb{Z},+)$?

Posted: 04 Sep 2021 07:43 PM PDT

Consider the groups $G = (\Bbb{Q},+)$ and $H = (\Bbb{Z},+)$. Is there a surjective homomorphism from $G$ to $H$? If not, how can I prove there isn't?

I considered a homomorphism that rounds up or down, but I saw that these operations are not "friendly" with addition.

Parallel extension of a vector $z\in T_p M$ along coordinate lines

Posted: 04 Sep 2021 07:53 PM PDT

Here is something Professor Lee did in his book on Riemannian manifolds:

Given a Riemannian $2$-manifold $M$, here is one way to attempt to construct a parallel extension of a vector $z\in T_p M$: working in any smooth local coordinates $(x^1,x^2)$ centered at $p$, first parallel transport $z$ along the $x^1$-axis, and then parallel transport the resulting vectors along the coordinate lines parallel to the $x^2$-axis (Fig. 7.1). The result is a vector field $Z$ that, by construction, is parallel along every $x^2$-coordinate line and along the $x^1$-axis. The question is whether this vector field is parallel along $x^1$-coordinate lines other than the $x^1$-axis, or in other words, whether $\nabla_{\partial_1}Z\equiv 0$. Observe that $\nabla_{\partial_1}Z$ vanishes when $x^2=0$. If we could show that $$\nabla_{\partial_2}\nabla_{\partial_1}Z=0,\tag{7.1}$$ then it would follow that $\nabla_{\partial_1}Z\equiv 0$, because the zero vector field is the unique parallel transport of zero along the $x^2$-curves. If we knew that $$\nabla_{\partial_2}\nabla_{\partial_1}Z=\nabla_{\partial_1}\nabla_{\partial_2}Z,\tag{7.2}$$ then (7.1) would follow immediately, because $\nabla_{\partial_2} Z=0$ everywhere by construction.

I'm not certain about the terminology he used here, such as parallel extensions, the $x^1$-axis, and coordinate lines, which is where my question comes from. In an earlier chapter, Lee said that a smooth vector field on $M$ is called parallel if it is parallel along every smooth curve in $M$. So I assume that Lee is trying to extend a vector $z\in T_p M$ to a parallel vector field on a neighborhood of $p$. Then I thought a bit about what tools I have so far. It occurs to me that I can first consider the parallel transport of $z$ along the curve $$t\in x^1(U)\mapsto\phi^{-1}(t,0)\in M,$$ where the pair $(U,\phi)$ denotes the local coordinates $(x^1,x^2)$ centered at $p$ that Lee picked. I guess that the curve just defined might be what Lee calls the $x^1$-axis. For more information about coordinate curves/lines, one can consult Coordinate system - Wikipedia. Alright, I've got a parallel vector field along a curve. What's next? I don't know what Lee means when he says, "... and then parallel transport the resulting vectors along the coordinate lines parallel to the $x^2$-axis ...". Has anyone looked deeply into this context who would like to share their understanding with me? Thank you.

Cutting a polygon into 2 or 3 smaller, rationally-scaled copies of itself?

Posted: 04 Sep 2021 08:06 PM PDT

I've noticed that many 2D geometric figures can be tiled using four smaller copies of themselves. For example, here's how to subdivide a rectangle, equilateral triangle, and right triomino into four smaller copies:

a rectangle subdivided into four similar rectangles, an equilateral triangle subdivided into four smaller equilateral triangles, and a right triomino subdivided into four smaller right triominoes

Each smaller figure here is scaled down by a factor of $\frac{1}{2}$ in width and height, dropping its area by a factor of four, which is why there are four smaller figures in each.

You can also tile some 2D figures with nine smaller copies, each $\frac{1}{3}$ of the original size, or sixteen smaller copies, each $\frac{1}{4}$ of the original size, as shown here:

a rectangle subdivided into nine self-similar smaller copies; an equilateral triangle subdivided into nine-self-similar copies; a right triomino subdivided into sixteen self-similar copies

By mixing and matching sizes, we can get other numbers of figures in the subdivisions. For example, here's a $2 \times 1$ rectangle subdivided into five rectangles of the same aspect ratio, an equilateral triangle subdivided into eleven equilateral triangles, and a right triomino tiled by thirty-eight right triominoes:

[image: a $2 \times 1$ rectangle subdivided into five rectangles of the same aspect ratio, an equilateral triangle subdivided into eleven equilateral triangles, and a right triomino tiled by thirty-eight right triominoes]

I've been looking for a shape that can tile itself with exactly two or three smaller copies. I know this is possible if we allow the smaller copies to be scaled down by arbitrary amounts, but I haven't been able to find a shape that can tile itself with two or three copies of itself when those smaller copies are scaled down by rational amounts (e.g. by a factor of $\frac{1}{2}$ or $\frac{3}{5}$).

My Question

My question is the following:

Is there a 2D polygon that can be tiled with two or three smaller copies of itself such that each smaller copy's dimensions are a rational multiple of the original size?

If we drop the restriction about the smaller figures having their dimensions scaled by a rational multiple, we can do this pretty easily. For example, a rectangle of aspect ratio $\sqrt{2} : 1$ can tile itself with two smaller copies, and a rectangle of aspect ratio $\sqrt{3} : 1$ can tile itself with three smaller copies:

[image: a $\sqrt{2}:1$ rectangle cut into two self-similar copies, and a $\sqrt{3}:1$ rectangle cut into three self-similar copies]

However, in these figures, the two smaller copies are scaled down by a factor of $\frac{\sqrt{2}}{2}$ and $\frac{\sqrt{3}}{3}$, respectively, which aren't rational numbers.

If we move away from classical polygons and allow for fractals, then we can do this with a Sierpinski triangle, which can be tiled by three smaller copies of itself. However, it's a fractal, not a polygon.

What I've Tried

If we scale down a 2D figure by a factor of $\frac{a}{b}$, then its area drops to an $\frac{a^2}{b^2}$ fraction of its original area. This led me to explore writing $1$ as a sum of squares of rational numbers, such as $1 = \frac{4}{9} + \frac{4}{9} + \frac{1}{9}$ or $1 = \frac{9}{25} + \frac{16}{25}$. This gives several possible values for how to scale down the smaller copies of the polygon, but it doesn't give a strategy for choosing the shapes of the reduced-size polygons so that the smaller pieces perfectly tile the original.
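The bookkeeping in the previous paragraph is easy to automate. Here is a hedged brute-force search (names mine) for all ways to write $1$ as a sum of two or three squares of positive rationals with small denominators, which at least enumerates the candidate scale factors:

```python
from fractions import Fraction
from itertools import product

def rational_square_sums(parts, max_den=12):
    """All nondecreasing tuples of `parts` positive rationals (denominators
    up to max_den) whose squares sum to exactly 1 -- candidate scale factors."""
    cands = sorted({Fraction(p, q)
                    for q in range(1, max_den + 1)
                    for p in range(1, q + 1)})
    found = set()
    for combo in product(cands, repeat=parts):
        if list(combo) == sorted(combo) and sum(c * c for c in combo) == 1:
            found.add(combo)
    return sorted(found)

pairs = rational_square_sums(2)    # includes (3/5, 4/5): the 9/25 + 16/25 split
triples = rational_square_sums(3)  # includes (1/3, 2/3, 2/3): the 1/9 + 4/9 + 4/9 split
```

Of course, a valid area split is necessary but nowhere near sufficient: the hard part is realizing any of these candidate factors geometrically.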

I've looked into other problems like squaring the square and other similar tiling problems. However, none of the figures I've found so far allow for a figure to be tiled with two or three copies of itself.

I've also tried drawing a bunch of figures on paper and seeing what happens, but none of them are panning out.

Is this even possible in the first place?

Thanks!

A Combinatorics Question Motivated by Game Theory Experiments

Posted: 04 Sep 2021 07:51 PM PDT

Recently I was looking at experiments on learning in games. A methodological problem with these is that players have to play the same game repeatedly, which can create new strategic features that affect many solution concepts. The general way this works is that a player's action could affect other players' future behaviour in some way intended to "punish" that player for deviating from some equilibrium. Even if the relevant players are never scheduled to play the same game again, they can still try to punish through changing their behaviour in a way that will change other players' behaviour in the next game, which will change other players' behaviour in the game after that, and so on, to punish the player who deviated. The typical solution is to restrict to games where this will not happen. An alternative solution, in the case of correlated equilibria, would be simply to make this kind of roundabout punishment impossible: make it so that no player in any game interacts with a player who will interact with a player who will interact with a player ... who will interact with the first player in a later game.

Formally (for symmetric games), we have a set of $n$ players. We want them to play $t$ rounds of an $m$ player game ($m|n$). We choose a partition of the $n$ players into sets of $m$ for each round out of $t$. Let a path from player $i$ to player $j$ be a finite sequence of players ${(k_a)}_{a=0}^b$ such that $k_0=i$, $k_b=j$ and there is a round $r$ such that for $a$ from $0$ to $b-1$, $k_{a+1}$ is in the same set of the partition for round $r+a+1$ as $k_a$. We want it so that the only paths from $i$ to $i$ are sequences consisting only of terms $i$. My question is how small can we make $n$ for given $m$ and $t$? A more restricted problem considers the case where the games are not symmetric at all, so each player is of a certain type out of $m$ types, with one of each type playing each game. In this case we have a partition of $n$ into $m$ sets, and we must have a member of each set in this partition in each set of the partition for each round. In either case, we can do at least as well as $m^t$ but this is completely impractical.

How to find a Bernoulli scheme which is isomorphic to a simple Markov Chain?

Posted: 04 Sep 2021 07:46 PM PDT

According to Wikipedia, by the Ornstein isomorphism theorem every aperiodic and irreducible Markov chain is isomorphic to a Bernoulli scheme; the isomorphism generally requires a complicated recoding.

The question is: for a given simple Markov chain, how do we find the Bernoulli scheme which is isomorphic to this Markov chain?

For example, take the Markov chain to be a one-dimensional binomial random walk whose next step size $S_i$ is the current distance from the origin:

$S_i = \begin{cases} +L, & \text{with probability } 1/2, \\ -L, & \text{with probability } 1/2, \end{cases}$

where $L$ is the current distance away from origin.

For this Markov chain, what is the Bernoulli scheme which is isomorphic to it? How does one construct it?

Where could I find a discussion about "minimal sets" of axioms for ZF(C) set theory?

Posted: 04 Sep 2021 07:55 PM PDT

I know ZF is not finitely axiomatizable, so a "minimal set of axioms for ZF" is actually a minimal set of metaxioms (or axiom schemata) that quantify (in natural language) over well-formed formulas of first-order logic (with equality), like separation or replacement.

EDIT: I want to read a discussion about "minimal sets" of axioms for ZF(C) set theory, drawn from the usual named axioms. By a "minimal list" I mean a non-redundant axiomatization. As an example of a list, in the usual ZFC formulations the "minimal" axioms would be (1) extensionality, (2) union, (3) pairing, (4) infinity, (5) replacement, (6) choice; separation and power set follow from these, and the empty set comes via separation. Another list is Bourbaki's. A professor of mine told me: "He [Bourbaki] has a very nice formulation that uses (1) extensionality, (2) pairing, (3) power set, (4) infinity, (5) separation and union (in a single axiom, but I count two axioms for my purposes). It is well known that he did not use the axiom of choice (AC), because using Hilbert's epsilon (his famous 'tau') allows him to demonstrate AC".

My purpose is didactic. I am seeking lists like these, and discussions of why one might be preferred over another.

Proving that if $\gcd(m, n) = 1$, and if $d | mn$, then there exist unique numbers $a$ and $b$ such that $a|m$, $b|n$, and $d = ab$.

Posted: 04 Sep 2021 08:07 PM PDT

What do I know?

If $d \mid mn$, there exists an integer $k$ such that $dk = mn$.

I also know that because $\gcd(m, n) = 1$, there exist integers $x$ and $y$ such that $mx + ny = 1$.

I am having trouble proving the statement because I don't know how to start. Am I missing a key insight?
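A small experiment (a sketch, not a proof) can reveal the natural candidates for $a$ and $b$; trying $a=\gcd(d,m)$ and $b=\gcd(d,n)$ on a concrete coprime pair suggests exactly where the Bezout identity should enter:

```python
from math import gcd

def split_divisor(d, m, n):
    """Candidate factorization of d | m*n when gcd(m, n) = 1:
    a = gcd(d, m) divides m, b = gcd(d, n) divides n."""
    return gcd(d, m), gcd(d, n)

# exhaustive check on a small coprime pair
m, n = 12, 35
assert gcd(m, n) == 1
for d in range(1, m * n + 1):
    if (m * n) % d == 0:
        a, b = split_divisor(d, m, n)
        assert m % a == 0 and n % b == 0 and a * b == d
```

Proving $ab=d$ in general, and the uniqueness of the pair, is where $mx+ny=1$ comes in.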

Intuition for an open mapping

Posted: 04 Sep 2021 07:55 PM PDT

What is an intuitive picture of an open mapping?

The definition of an open mapping (a function which maps open sets to open sets) is simple sounding, but it's really not as easy to picture as the simple language would suggest. When I think of fields, for example, I immediately think of the rational numbers, the real numbers, the complex numbers, the integers modulo a prime, etc. When I think of continuous functions, I can picture common examples like polynomials, the absolute value function, etc., or nastier "artificial" examples like the Weierstrass function. What are the functions I should think of, nasty and nice, when I think of open mappings?
