Tuesday, November 30, 2021

Recent Questions - Mathematics Stack Exchange


Does $A$ have infinite order in $G= \langle A,B \ |\ B A B^{-1} = A^2 \rangle $?

Posted: 30 Nov 2021 03:17 AM PST

I have a group (arising from the fundamental group of a manifold) $$G= \langle A,B \ |\ B A B^{-1} = A^2\rangle $$ and

I would like to show that $A$ is an element of infinite order inside $G$.

Notice that in the abelianization $Ab(G)\simeq \mathbb{Z}$, the image of $A$ is the identity. Nevertheless, $A$ is a non-trivial element in $G$, indeed we can find a representation $G\to SL(2;\mathbb C)$ that sends $A$ to the diagonal matrix $ diag (e^{i2\pi/3},e^{-i 2 \pi/3})$.
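The claimed representation is easy to verify numerically. Below is a sketch (the choice of image for $B$ is an assumption not stated in the question; a rotation by $\pi/2$ works because conjugating by it swaps the diagonal entries of $A$, and $A^{-1}=A^2$ for this order-3 matrix):

```python
import numpy as np

w = np.exp(2j * np.pi / 3)
A = np.diag([w, np.conj(w)])        # diag(e^{2pi i/3}, e^{-2pi i/3}), det = 1
B = np.array([[0.0, 1.0],
              [-1.0, 0.0]])         # hypothetical image of B in SL(2,C);
                                    # conjugation by it swaps diagonal entries

# The relation B A B^{-1} = A^2 holds since swapping the diagonal entries
# of A inverts it, and A^{-1} = A^2 because A has order 3.
assert np.allclose(B @ A @ np.linalg.inv(B), A @ A)
assert not np.allclose(A, np.eye(2))   # the image of A is non-trivial
```

Of course this only certifies that $A$ is non-trivial (its image has order $3$), not that it has infinite order.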

SVM Kernel Trick for Inseparable Example

Posted: 30 Nov 2021 03:17 AM PST

I am confused about the numerical methodology we use in the kernel trick for a linearly inseparable SVM. Let's assume I have the following data, which can be separated by 2 lines but is not linearly separable. I want to use SVM to solve this, so I have 2 options: I either need to use a soft margin, accept some amount of error and stay in this dimension, or use the kernel trick to map the data to a higher dimension so that I can define a new hyperplane that separates it.

I have 2 questions.

  • How do I know which mapping to choose? The examples I found always considered situations where one class sits in the middle and the other class surrounds it, so that a circular mapping works. But how do we decide how to map for an arbitrary distribution? If we consider the given example, do I just need to come up with a function that maps the +'s lower and the -'s higher? Can I just map $(x,y) \to (x, y, x^2, y^2, xy)$, or any random mapping that happens to work?
  • What is my hyperplane function after the mapping? My intuition says it is the kernel polynomial, so that the data lying on the hyperplane is exactly where it equals $0$?

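Regarding the second bullet: after an explicit map $\phi$, the decision surface is $\{p : w\cdot\phi(p)+b=0\}$, a hyperplane in feature space that pulls back to a quadric curve in the original plane. The kernel trick never forms $\phi$ explicitly; it only needs the inner products $\phi(u)\cdot\phi(v)$. A minimal sketch (numpy only; the particular $\phi$ below is the one realizing the degree-2 polynomial kernel, just one of many valid choices, and it matches the $(x,y)\to(x,y,x^2,y^2,xy)$ map in the question up to coefficient scaling):

```python
import numpy as np

def poly_kernel(u, v):
    # degree-2 polynomial kernel: k(u, v) = (u . v + 1)^2
    return (np.dot(u, v) + 1) ** 2

def phi(p):
    # explicit feature map realizing the same inner product
    x, y = p
    return np.array([x * x, y * y,
                     np.sqrt(2) * x * y,
                     np.sqrt(2) * x, np.sqrt(2) * y,
                     1.0])

u = np.array([1.0, 2.0])
v = np.array([-0.5, 3.0])
# the kernel computes phi(u) . phi(v) without ever constructing phi
assert np.isclose(poly_kernel(u, v), phi(u) @ phi(v))
```

So you rarely "decide how to map" by hand; you pick a kernel family (polynomial, RBF, ...) and let the implied feature space do the work.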

Solution to a Lyapunov-like equation

Posted: 30 Nov 2021 03:12 AM PST

Given the matrices $A\in\mathbb{C}^{2N\times 2N}$, $B\in\mathbb{C}^{2N\times 3N}$ is it possible to solve the following matrix equation for $H\in\mathbb{C}^{2N\times 3N}$

$$BH^* + H B^* = A$$

where the $(\cdot)^*$ denotes complex conjugate transpose. $A$ itself is taken from $A = CX + XC^*$, where $X\in\mathbb{C}^{2N\times 2N}$ and $C\in\mathbb{C}^{2N\times 2N}$.

How to determine a basis for the image space

Posted: 30 Nov 2021 03:16 AM PST

The linear transformation $L$ is defined in terms of a matrix $A$; both the definition and the matrix are given in attached images (not reproduced here).

And I got this question:

Determine a basis $B$ for the image space $\operatorname{Im}(L)$ of $L$. Explain why $B$ is a basis for $\operatorname{Im}(L)$.

Should I use Gaussian elimination to determine the basis, or something else?

How does the geometric Frobenius act on the stalk?

Posted: 30 Nov 2021 03:00 AM PST

How does the geometric Frobenius act on the stalk? The first image is from http://virtualmath1.stanford.edu/~conrad/Weil2seminar/Notes/L19.pdf, page 9. The second image is from Kiehl-Weissauer, page 7.

How to calculate the offset distance of triangle

Posted: 30 Nov 2021 03:18 AM PST

I need to find the offset distance of 2 points.

I start with my basic shape

The dashed lines are help lines

base shape

Now I will offset the line and need to find points X and Y.

offset

Since I only know the offset and nothing else I wonder how to continue from here

What is the probability that an element is in a subset of a set, given a set of subsets?

Posted: 30 Nov 2021 02:52 AM PST

Given a set of natural numbers,

$$A = \{1,...,n\}$$

Let $\mathcal{S}$ be a set of $k<n$ elements sampled uniformly at random without replacement from $A \setminus \{ \pi \}$, for some $\pi \in A$. From the resulting ordered set $\mathcal{S} \cup \{\pi \}$, the chance of correctly guessing which element corresponds to $\pi$ is

$$\frac{1}{k+1}$$

For example, with $n=10$, $k=3$ and $\pi=7$, a possible resulting subset is $$M_7= \{3,5,7,9\}$$ If you do not know $\pi$, there is a $1/4$ chance of correctly guessing $\pi=7$. If we build another set for a different $\pi$,

$$M_2= \{2,3,7,9\}$$

would the overall chance of correctly guessing the $\pi$ from each $M_{\pi}$ decrease as more subsets are presented? The overall lower bound of guessing is bounded by $1/n$. Is it possible to achieve this bound as more $M_{\pi}$ are issued?

The subsets $M_{\pi}$ are independent of each other, so one would first have to guess the correct set out of $l$ subsets and then the correct element from the $k+1$ elements in that subset.

\begin{equation} P(\pi = a|M_a) = \frac{1}{k+1} \cdot \frac{1}{l} \end{equation}
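A quick Monte Carlo run agrees with this two-stage formula. The simulation below is deliberately abstract: it tracks only whether each stage's uniform guess is correct, which is all the formula depends on (the concrete set contents don't matter under the uniformity assumption):

```python
import random

random.seed(1)
k, l, trials = 3, 5, 200_000
hits = 0
for _ in range(trials):
    # stage 1: guess which of the l subsets is M_pi
    subset_ok = (random.randrange(l) == 0)
    # stage 2: guess which of the k+1 elements of that subset is pi
    element_ok = (random.randrange(k + 1) == 0)
    hits += subset_ok and element_ok

estimate = hits / trials
assert abs(estimate - 1 / (l * (k + 1))) < 0.005   # 1/(l(k+1)) = 0.05
```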

The probability as a function of the number of subsets with $n=3200$ is plotted in the Figure. A large $k$ converges faster to $1/n$ but increases the storage space.

Convergence plot

Goal: for each element $i \in A$, issue a subset $M_i$. Given the ensemble of subsets, the difficulty of correctly guessing $i$ from $M_i$ grows (the probability falls from $1/|M_i|$ toward $1/n$) as more subsets are considered, reducing storage space. I was wondering if my thoughts and conclusion are correct.

Probability density function of a random variable

Posted: 30 Nov 2021 02:59 AM PST

Find the constant $k$ and the mean of the random variable whose probability density function is given in the attached image (not reproduced here).

Convergence of $\sum_{n=2}^\infty \frac{x^n}{\log(n)}(1+ \frac{1}{n})^{\log(n)}$ if $x>-1$

Posted: 30 Nov 2021 02:50 AM PST

So I'm trying to study whether or not the following series converges, depending on a free parameter $x$. The series is:

(I) $\sum_{n=2}^\infty \frac{x^n}{\log(n)}(1+ \frac{1}{n})^{\log(n)}$

where $x$ is a real number such that $x>-1$.


Let's begin by noticing that $1 \le (1+ \frac{1}{n})^{\log(n)} \le (1+ \frac{1}{n})^n \to e $ as $n \to \infty$.

So if we assume $x \ge 1$ and use the $n$th-term divergence test, we have to study $\lim_{n \to \infty} \frac{x^n}{\log(n)}(1+ \frac{1}{n})^{\log(n)}$,

which, by the observation above and the limit laws, behaves like $k\lim_{n \to \infty} \frac{x^n}{\log(n)}$

where k is some real number $1 \le k \le e$

Now notice that if $x>1$ the limit above clearly diverges, so the terms do not tend to $0$; and if $x=1$, although the terms do tend to $0$, the series $\sum \frac{1}{\log(n)}$ still diverges by comparison with the harmonic series (since $\log(n) < n$).

Indeed we have that (I) is divergent for $x \ge 1$

Now assume $-1 < x < 1$. This time we may try the root test, so we have to evaluate the limit $$\lim_{n \to \infty} \biggl|(\frac{x^n}{\log(n)})^{\frac{1}{n}} (1+ \frac{1}{n})^{\frac{\log(n)}{n}}\biggr|$$

notice $\lim_{n \to \infty} (1+ \frac{1}{n})^{\frac{\log(n)}{n}} \to 1$

and $\lim_{n \to \infty} ({\log(n)})^{\frac{1}{n}} \to 1$

Thus, applying the limit laws again, the above limit equals $|x|$. Since by hypothesis $-1 < x < 1$, the root test shows the series converges for $|x|<1$; combined with the first part, (I) converges for $-1<x<1$ and diverges for $x \ge 1$.

Am I correct? Thanks for the help.

Relation between magnitude of diagonal and non-diagonal entries when given information about the inverse

Posted: 30 Nov 2021 02:43 AM PST

Think of $\Sigma$ as a covariance matrix, or any positive semidefinite matrix. Let $A(\lambda)$ be a $n \times n$ positive semidefinite matrix with $\lambda > 0$ and the following specifications to its inverse:

$(A(\lambda))^{-1}_{jk}=\begin{cases}\Sigma_{jk}+\lambda, & j=k\\ \Sigma_{jk}+\lambda \operatorname{sgn}(A(\lambda)_{jk}), & j \neq k \text{ and }A(\lambda)_{jk} \neq 0\\ \lvert (A(\lambda))^{-1}_{jk}-\Sigma_{jk}\rvert \leq \lambda, & j\neq k \text{ and }A(\lambda)_{jk} = 0 \end{cases}\; \; \; \; \; \; \; \; \; (*)$

Note that the dependency of $A(\lambda)$ is such that as $\lambda $ gets larger the matrix becomes sparser.

It is stated that by the definition of the inverse $(*)$ the diagonal entries of $A(\lambda)$ are much larger relative to its nonzero off-diagonal entries.

It is not clear to me why this would be the case when we are only given the specifications of the inverse $(A(\lambda))^{-1}$. Any ideas?

Link to the paper: https://jmlr.org/papers/volume17/16-013/16-013.pdf (Top of page $7$)

Optimization of a non-smooth function

Posted: 30 Nov 2021 03:19 AM PST

Can anyone help me to solve the following optimization problem?

$$\min_{x}~f(x)=\max\limits_{1\leq i\leq K}x_i+\frac{1}{2}\|x\|^2$$ where $x\in \mathbb{R}^n$ and $K\in\{1,\dots,n\}$.

Is it possible to find a closed-form solution for the optimal $x$? Thank you very much.
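A closed form does appear to exist. Writing the optimality condition $0\in\partial f(x)$ as $-x \in \partial\bigl(\max_{1\le i\le K}x_i\bigr)$, the candidate $x_i^\star=-1/K$ for $i\le K$ and $x_i^\star=0$ otherwise satisfies it (the subdifferential of the max at a point where all $K$ coordinates tie is the probability simplex on those coordinates), giving the optimal value $-1/(2K)$. A numerical sanity check of this candidate, a sketch rather than a proof:

```python
import numpy as np

np.random.seed(0)
n, K = 6, 3

def f(x):
    return np.max(x[:K]) + 0.5 * np.dot(x, x)

# candidate from the optimality condition x = -lambda, lambda in the
# simplex supported on the (tied) argmax coordinates
x_star = np.zeros(n)
x_star[:K] = -1.0 / K
assert np.isclose(f(x_star), -1.0 / (2 * K))

# f is strictly convex, so every perturbation should strictly increase it
for _ in range(1000):
    d = np.random.randn(n) * 0.1
    assert f(x_star + d) > f(x_star)
```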

How can I compute the following limit without using L'Hospital's rule?

Posted: 30 Nov 2021 03:14 AM PST

Let $f(x) = x^2(x − \sin x)$ and $g(x) = (e^x − 1)(\cos 2x − 1)^2$. Find $\lim_{x\to 0^+} f(x)/g(x)$ without using L'Hospital's rule.
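The standard L'Hospital-free route is leading-order comparison: $x-\sin x \sim x^3/6$, $e^x-1 \sim x$ and $\cos 2x - 1 \sim -2x^2$ near $0$, so $f(x)\sim x^5/6$ and $g(x)\sim 4x^5$, suggesting the limit $1/24$. The snippet below only double-checks that value with sympy:

```python
from sympy import symbols, sin, cos, exp, limit, Rational

x = symbols('x')
f = x**2 * (x - sin(x))                 # ~ x^5/6 near 0
g = (exp(x) - 1) * (cos(2*x) - 1)**2    # ~ x * 4x^4 = 4x^5 near 0

L = limit(f / g, x, 0, '+')
assert L == Rational(1, 24)
```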

Possible class equations for a group of order 8

Posted: 30 Nov 2021 03:10 AM PST

The exercise is stated in the title. I have found that the only two possibilities are $1+1+2+2+2$ and $1+1+4+2$, but why does the second not work? The text has not introduced Sylow's theorems, so I would like to avoid using them if possible.

How do I prove this statement about winding numbers and continuous maps?

Posted: 30 Nov 2021 03:13 AM PST

I have the following problem:

Let $D$ be a disk with boundary circle $C$ and let $f:D\rightarrow \mathbb{R}^2$ be a continuous map. Suppose $P\in \mathbb{R}^2\setminus f(C)$ and the winding number of the restriction $f|_C$ of $f$ to $C$ around $P$ is not zero. Show that there is a point $Q\in D$ such that $f(Q)=P$.

I wanted to prove this by contradiction. Let $f:D\rightarrow \mathbb{R}^2$ be a continuous map and $C\subset D$ the boundary circle. Then we can restrict $f$ to $$f|_C:C\rightarrow \mathbb{R}^2$$ which is also continuous. Let $P$ be as above, such that the winding number of $f|_C$ is nonzero, i.e. $W(f|_C,P)=W(\gamma,P)\neq 0$, where $$\gamma:[a,b]\rightarrow \mathbb{R}^2$$ is a continuous loop given by $$\gamma= f|_C\circ\phi$$ with $\phi:[a,b]\rightarrow C$ a parametrization of $C$. Now assume also that for all $Q\in D$, $f(Q)\neq P$. Since $C\subset D$, we also have $f|_C(Q)\neq P$ for all $Q\in C$. Since $\gamma$ is continuous and $[a,b]$ is compact, $\gamma([a,b])$ is compact, and hence $f|_C(C)$ is compact as well.

Now I want to lead this to a contradiction, but I somehow don't see how to proceed. Could someone help me and show me how I can go further?

Thanks a lot

Show that there exist finitely many closed sets $C_i$ whose intersection is contained in $U$.

Posted: 30 Nov 2021 02:57 AM PST

Let $(X, \mathcal{O})$ be a compact topological space, $U \subseteq X$ open in $X$, and

$C=\{C_i \subset X \mid C_i \text{ closed in } X \ \forall i \in I\}$

be such that

$\bigcap_{i \in I}C_i \subset U.$

Show that there exist finitely many closed sets $C_i$ whose intersection is contained in $U$.

My thought is: suppose $\bigcap_{i \in I}C_i = \emptyset $. Then $X= \bigcup_{i \in I}(X \setminus C_i)$, and hence by compactness of $X$ there exist $C_1,\dots,C_n$

Find the minimum of $\frac{\sin x}{\cos y}+\frac {\cos x}{\sin y}+\frac{\sin y}{\cos x}+\frac{\cos y}{\sin x}$

Posted: 30 Nov 2021 03:12 AM PST

Let $0<x,y<\frac {\pi}{2}$ such that $\sin (x+y)=\frac 23$, then find the minimum of

$$\frac{\sin x}{\cos y}+\frac {\cos x}{\sin y}+\frac{\sin y}{\cos x}+\frac{\cos y}{\sin x}$$

A) $\frac 23$

B) $\frac 43$

C) $\frac 89$

D) $\frac {16}{9}$

E) $\frac{32}{27}$


My attempts:

I think that the all possible answers are wrong. Because, by Am-Gm inequality we have

$$\frac{\sin x}{\cos y}+\frac{\cos y}{\sin x}+\frac {\cos x}{\sin y}+\frac{\sin y}{\cos x}≥2+2=4.$$

But Wolfram Alpha gives a different result: the global minimum doesn't exist. However, the local minimum must be $6$.

But, still the problem is not solved. Because Wolfram's graph shows that the minimum can be less than $6$.

I also tried

Let $$\sin x=a,\cos y=b,\cos x=c,\sin y=d$$ with

$$ab+cd=\frac 23≥2\sqrt{abcd}\implies abcd≤\frac 19\\ a^2+c^2=b^2+d^2=1 $$

then I need

$$\min \left(\frac ab+\frac ba+\frac cd+\frac dc\right)$$

But, I can't do anything from here. Finally, I attach the graph drawn by WA.

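A direct numerical scan of the constraint set (a sketch, not a proof) supports the suspicion that all the listed options are too small: the constrained minimum appears to be $6$, attained at $x=y$, where each pair is $\tan x+\cot x = \frac{2}{\sin 2x} = \frac{2}{\sin(x+y)} = 3$:

```python
import numpy as np

theta = np.arcsin(2 / 3)
best = np.inf
for s in (theta, np.pi - theta):        # the two possible values of x + y
    x = np.linspace(1e-4, np.pi / 2 - 1e-4, 20_001)
    y = s - x
    ok = (y > 1e-4) & (y < np.pi / 2 - 1e-4)   # keep 0 < x, y < pi/2
    x, y = x[ok], y[ok]
    val = (np.sin(x) / np.cos(y) + np.cos(x) / np.sin(y)
           + np.sin(y) / np.cos(x) + np.cos(y) / np.sin(x))
    best = min(best, val.min())

# AM-GM gives the strict bound 4; the constraint pushes the minimum up to about 6
assert 4.0 < best <= 6.01
```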

Why do we work on the Borel sigma algebra and not on the Lebesgue sigma algebra?

Posted: 30 Nov 2021 02:56 AM PST

In most measure theory textbooks one derives the Lebesgue and Borel-Lebesgue measures from Caratheodory's extension to outer measures by first proving that the set of $\lambda^*$-measurable sets is a sigma algebra (the Lebesgue sigma algebra) and that $\lambda^*$ restricted to that sigma algebra is a measure, the Lebesgue measure. This measure space is even complete. However, one then still continues to show that $\lambda^*$ restricted to the sigma algebra generated by the ring (on which the pre-measure was defined that was used to obtain $\lambda^*$ via Caratheodory's extension) is also a measure, the Borel-Lebesgue measure, and that the generated sigma algebra is the Borel sigma algebra.

So my question is: why is the Borel sigma algebra "better" than the Lebesgue sigma algebra? Most of the time, textbooks continue to work only on the Borel sigma algebra, even though the Lebesgue sigma algebra is its completion and has some other favorable properties. That is, I am missing an argument in all the lecture notes and textbooks for why we continue to work on the Borel sigma algebra after having shown that the Lebesgue sigma algebra is larger (after all, we do the whole extension from a pre-measure to an outer measure, and then restrict back to a measure, precisely because we want a larger collection than just the ring on which we initially defined the pre-measure).

$\det A(t)=1$, $A(0)=E$, show $\operatorname{tr} A'(0)=0$.

Posted: 30 Nov 2021 02:52 AM PST

For an $n\times n$ continuously differentiable matrix $A(t)$:

$\det A(t)=1$, $A(0)=E$, show $\operatorname{tr} A'(0)=0$.

All I know so far are facts like $(e^{at})'=ae^{at}$.
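One standard route, sketched here assuming Jacobi's formula for the derivative of a determinant, is:

$$\frac{d}{dt}\det A(t) = \det A(t)\,\operatorname{tr}\!\left(A(t)^{-1}A'(t)\right)$$

Since $\det A(t)\equiv 1$, the left-hand side vanishes identically; evaluating at $t=0$ with $A(0)=E$ gives $0=\operatorname{tr} A'(0)$. This is the matrix analogue of the fact you quoted: for $A(t)=e^{tM}$ one has $\det A(t)=e^{t\operatorname{tr}M}$, and differentiating at $t=0$ recovers the same statement.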

To which a set of functions from $A$ to $B$ belongs?

Posted: 30 Nov 2021 02:40 AM PST

To understand the set-theoretic definition of functions, I tried to find a set that contains the set of all functions from $A$ to $B$, for (possibly empty) sets $A$ and $B$.

$ \newcommand{\eqv}{\Leftrightarrow} \newcommand{\imply}{\Rightarrow} \newcommand{\powset}{\mathcal{P}} $ My approach (possibly informal):
Because a function from $A$ to $B$ is a particular binary relation, i.e., a subset of $A \times B$,

$$ f \in A \to B \imply f \subseteq A \times B \imply f \in \powset(A \times B) $$ where $A \to B$ denotes the set of all functions from $A$ to $B$. Therefore, $$ \forall f: f \in A \to B \imply f \in \powset(A \times B) $$ i.e., $A \to B \subseteq \powset(A\times B)$.

Finally, we have $$ A \to B \in \powset(\powset(A\times B)) $$

Is this correct? What is the domain of discourse of $f$ here? (concerning $\forall f$)
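The containment $A \to B \subseteq \mathcal{P}(A\times B)$ can also be checked by brute force on a small example, identifying each function with its set of pairs (a sketch; the sets $A$, $B$ below are arbitrary illustrative choices):

```python
from itertools import product, chain, combinations

A = {1, 2}
B = {'a', 'b', 'c'}
pairs = list(product(A, B))           # the set A x B

def subsets(xs):
    # all subsets of xs, i.e. the powerset P(A x B)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def is_function(rel):
    # a relation is a function iff each element of A appears
    # exactly once as a first coordinate
    firsts = [p[0] for p in rel]
    return sorted(firsts) == sorted(A)

funcs = [set(r) for r in subsets(pairs) if is_function(r)]
assert len(funcs) == len(B) ** len(A)   # 3^2 = 9 functions from A to B
```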

Does this basis generate the discrete topology?

Posted: 30 Nov 2021 03:03 AM PST

In Topology by Munkres, Lemma 13.2, on page 78, the following is written:

Let $X$ be a topological space. Suppose that $\tilde{C}$ is a collection of open sets of $X$ such that for each open set $U$ of $X$ and each $x$ in $U$, there is an element $C$ of $\tilde{C}$ such that $x\in C \subset U$. Then $\tilde{C}$ is a basis for $\textbf{the}$ topology of $X$.

Two paragraphs later, the following text is written:

Let $\tau$ be the collection of open sets of $X$; we must show that the topology $\tau'$ generated by $\tilde{C}$ equals the topology $\tau$.

Here too, I am confused as to what is meant by "Let $\tau$ be the collection of open sets of $X$". Is it the set of all open subsets of $X$? This is the central point of my confusion. $\tau$ is being compared with $\tau'$. According to the text, "$\tau$ is the collection of open sets of $X$". Is $\tau$ the discrete topology? If not, what is the nature of $\tau$?

I have attached a screenshot of this page of the book below (image not reproduced here).

Gradient of $\sum_{i,j=1}^n A_{ij}x_i^TB_iC{B_j}^Tx_j$

Posted: 30 Nov 2021 02:54 AM PST

Suppose I have symmetric matrices $A$ and $C$, and matrices $B_i$ of appropriate sizes. Then for vectors $x_i$ (of possibly different dimensions) I define the function

$$f(x_1,\dots,x_n)=\sum_{i,j=1}^n A_{ij}x_i^TB_iC{B_j}^Tx_j$$

and I would like to calculate its derivative or gradient. The reason is that I would like to find a solution of $\nabla f =0$.

Using the symmetry of $A$ I've ended up with

$$ \nabla f = 2\left[\begin{array}{@{}c@{}} \sum_{i=1}^nA_{1i}B_1C{B_i}^Tx_i \\ \sum_{i=1}^nA_{2i}B_2C{B_i}^Tx_i \\ \vdots \\ \sum_{i=1}^nA_{ni}B_nC{B_i}^Tx_i \end{array} \right]$$

Is this correct?
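The formula matches a central finite-difference check. The sketch below assumes, for simplicity, that all $x_i$ share one dimension $d$ and all $B_i$ are $d\times p$ (the question allows different dimensions; all names and sizes here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, p = 3, 4, 5                     # n vectors x_i in R^d, B_i is d x p, C is p x p

A = rng.standard_normal((n, n)); A = A + A.T      # symmetric coefficient matrix
C = rng.standard_normal((p, p)); C = C + C.T      # symmetric
B = rng.standard_normal((n, d, p))
x = rng.standard_normal((n, d))

def f(x):
    return sum(A[i, j] * x[i] @ B[i] @ C @ B[j].T @ x[j]
               for i in range(n) for j in range(n))

def grad(x):
    # claimed gradient: block k equals 2 * sum_i A[k,i] B_k C B_i^T x_i
    return np.array([2 * sum(A[k, i] * B[k] @ C @ B[i].T @ x[i] for i in range(n))
                     for k in range(n)])

# central finite differences, coordinate by coordinate
eps = 1e-6
g_num = np.zeros_like(x)
for k in range(n):
    for a in range(d):
        e = np.zeros_like(x); e[k, a] = eps
        g_num[k, a] = (f(x + e) - f(x - e)) / (2 * eps)

assert np.allclose(grad(x), g_num, atol=1e-5)
```

The factor of 2 comes from merging the $\sum_j$ term with the transposed $\sum_i$ term using the symmetry of $A$ and $C$, which is exactly what the check confirms.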

Reciprocal binomial coefficient polynomial evaluation

Posted: 30 Nov 2021 02:49 AM PST

The conventional binomial coefficients can be obtained via

$$ f(x, n) = (1+x)^n = \sum_{i=0}^n { n \choose i} x^i $$

And the function $f$ can be evaluated very efficiently.

I'm interested in evaluating a function $g(x, n)$ very similar to it:

$$ g(x, n) := \sum_{i=0}^n \frac{1}{{ n \choose i}} x^i $$

With some fixed $n$, how can I convert $g(x, n)$ into a form so it can be evaluated efficiently?
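Unless a closed form is specifically required, note that a single evaluation of $g(x,n)$ is already $O(n)$ arithmetic operations if the coefficients are updated incrementally via $\binom{n}{i+1}=\binom{n}{i}\cdot\frac{n-i}{i+1}$, just as for the ordinary binomial sum. A sketch in exact arithmetic:

```python
from fractions import Fraction

def g(x, n):
    # evaluate sum_{i=0}^n x^i / C(n, i) in O(n) operations,
    # updating the binomial coefficient incrementally
    c = 1                      # C(n, 0)
    total = Fraction(0)
    xp = Fraction(1)           # x^i
    for i in range(n + 1):
        total += xp / c
        xp *= x
        if i < n:
            c = c * (n - i) // (i + 1)   # C(n, i+1)
    return total

assert g(1, 2) == Fraction(5, 2)   # 1 + 1/2 + 1
assert g(2, 3) == 11               # 1 + 2/3 + 4/3 + 8
```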

Cauchy-Schwarz for sums of products of matrices

Posted: 30 Nov 2021 02:49 AM PST

The usual Cauchy-Schwarz inequality states that, for real sequences $a_i,b_i$ $$ \Big|\sum_{i=1}^n a_i b_i \Big| \leq \Big(\sum_{i=1}^n a_i^2\Big)^{1/2}\Big(\sum_{i=1}^n b_i^2\Big)^{1/2}. $$

My question is whether the same holds for matrices. More precisely, let $A_i,B_i$ be sequences of $m\times m$ matrices. Does it hold that $$ \Big\|\sum_{i=1}^n A_i B_i \Big\| \leq \Big\| \sum_{i=1}^n A_i^*A_i\Big\|^{1/2} \Big\| \sum_{i=1}^n B_i^*B_i\Big\|^{1/2}, $$ where $\|\cdot\|$ is the operator norm?

Upper bound of number of triangles in an edge-colored graph, given that none of the triangles are rainbow.

Posted: 30 Nov 2021 03:18 AM PST

Let $G$ be a simple edge-coloured graph with $m$ edges. Assume furthermore that it does not contain a rainbow triangle (i.e., a triangle in which each edge has a different color), and that each color class in $G$ has size at most $k$. Let $t(G)$ be the number of triangles contained in $G$. Prove: $$ t(G) \leq \frac{1}{2} m(k-1) $$

Since the triangles are not rainbow, each one of them has a dominant color, i.e., at least two of the three edges in the triangle have that color. This also gives a partition of the triangles based on dominant colors, so I'm thinking of upper-bounding the number of triangles $t_i$ dominant in color $i$, summing the bounds across all colors, and hopefully getting a bound on $t(G)$.

Now any $i$-dominant triangle uses at least $2$ of the $m_i$ edges having color $i$. And we know $m_i \leq k$ for all $i$. So we have: $$t_i \leq {m_i \choose 2} = \frac{1}{2}m_i(m_i - 1) \leq \frac{1}{2}m_i(k-1)$$

We note that the $m_i$'s sum to $m$, so summing the $t_i$'s across all color classes gives: $$t(G) = \sum t_i \leq \frac{1}{2} m (k-1)$$

Note: The following is my first attempt, which was wrong. Also there was a typo in the question that I was not aware of.

I let $[r]$ be the color classes, and single out a color $i$. The classes $[r] \setminus \{i\}$ have at least $k(r-1)$ edges in total, so $i$ has at most $m - k(r-1)$ edges. Therefore: $$t_i \leq \frac{m_i}{2} \leq \frac{1}{2} (m - k(r-1)),$$

where $m_i$ is the number of edges colored $i$, and equality of the first $\leq$ is achieved when each of the $i$-dominant triangles contains $2$ edges colored $i$.

Summing up the above bound across all colors, we get: $$t(G) \leq \frac{1}{2}r(m - k(r-1))$$

While I can upper bound $r$ with $\lfloor \frac{m}{k} \rfloor$, I'm stuck at lower bounding it to make the above work. How should I proceed?

Benders Decomposition Convergence

Posted: 30 Nov 2021 03:05 AM PST

I have found James Murphy's "Benders, Nested Benders and Stochastic Programming: An Intuitive Introduction" (http://www.optimization-online.org/DB_FILE/2013/12/4157.pdf) to be quite helpful in deciphering exactly how and why optimality cuts perform the magic that they (seem to) do. The intuition he provides shows how the optimality cuts identify the constraint(s) from the subproblem that must be included in the master problem (expressed using only the non-complicating variables $x$ and the approximation function $\alpha(x)$) in order to reconstruct enough of $\alpha$ that the bounds converge (i.e., the solution found in the master matches what's found in the subproblem).

My question centers on Murphy's section on termination criteria (where the original problem is a minimization). Murphy says "we can test this by seeing whether the value of our approximation $\theta$ is already equal to (or greater than) the value of the currently active constraint on it, i.e. the optimality cut."

Question: when or how would we ever get a master problem value strictly greater than the subproblem value? Is he referring to numerical issues causing this? It makes no sense to me that we'd be iterating along and then find a solution like this... Is this what would happen if we found the optimal solution and added an optimality cut for it as well?

How do I compute this integral on the set A?

Posted: 30 Nov 2021 02:50 AM PST

I have the following problem:

Compute the integral $\int_{\partial A} \langle F,\nu \rangle dS$ where $\nu$ is the normal vector and $$F(x,y,z)=(2x-3y+z,\,\,\,x-y-z,\,\,\,-x+y+2z)$$ and $$A=\{(x,y,z): |x-y|\leq 1, |y-z|\leq 1, |x+z|\leq 1\}$$

Since we have the topic "Gaussian integral formula" (i.e., the divergence theorem), I think that maybe one can compute it with that formula, but to do so $A$ needs to be compact and have a smooth boundary, and I don't know how to check this. Otherwise I need to compute it directly, but then I struggle to work with the set $A$. They gave us a hint to use a coordinate transformation, but I don't see where.

Could someone please help me?

With the comments from below I tried to solve it again. While solving it I noticed there were some thinking errors on my side, which are hopefully all corrected in the following image; maybe someone can have a look at the image and tell me if this works or if I have more errors.

Thank you very much.

My computations

Best books to self study number theory and especially diophantine equations?

Posted: 30 Nov 2021 03:05 AM PST

I'm an undergraduate math student. I've already passed linear algebra and abstract algebra, and I recently passed a course in elementary number theory that I really enjoyed (our sources were the elementary number theory texts by Daniel Flath and by Rosen).

Now I want to self-study more books this semester, and I'm searching for good books with exercises. My main interest is in solving Diophantine equations and their applications.

Thanks for your suggestions, and sorry for my bad English :)

Prove the Cantor set has no interior.

Posted: 30 Nov 2021 02:58 AM PST

This is not intended to ask for the proof, but just for you to please check my proof. Thank you.

Definition of the Cantor set $C$:

$C = C_0 \cap C_1 \cap \cdots \cap C_i \cap \cdots$

where

\begin{align} C_0 &= [0,1] \\ C_1 &= [0,1/3] \cup [2/3, 1] \\ C_2 &= \left[0, \frac{1}{3^2}\right] \cup \left[\frac{2}{3^2}, \frac{3}{3^2}\right] \cup \left[\frac{6}{3^2}, \frac{7}{3^2}\right] \cup \left[\frac{8}{3^2}, \frac{9}{3^2}\right] \\ &\vdots \end{align}

Prove that $C$ has no interior points.


We aim to show that there doesn't exist a ball $B$, with radius $r >0$, that is entirely contained in $C$.

proof

Let $x \in C_n$ be an endpoint of one of the intervals of $C_n$; then $x \in C$. Note that, by the construction of $C$, the set $C_n$ consists of intervals of equal length $\frac{1}{3^n}$. So, for any given $r > 0$, choose the integer $n$ so that

$$n = \left\lceil\log_3\frac{1}{r}\right\rceil + 1 \implies \frac{1}{3^n}< r$$

Now, the ball $B_r(x)$ has a radius larger than the length of any interval in $C_n$ and it contains $x$. Since $x$ is an endpoint of an interval in $C_n$, that interval's other endpoint $y$ must satisfy

$$\left|y - x\right| = \frac{1}{3^n} < r$$

Therefore, by the arbitrariness of $r$, the Cantor set has no isolated points. But furthermore, $[x,y] \subset B_r(x)$, and we construct $C_{n+1}$ by deleting the middle third of the interval $[x,y] \subset C_n$, which means that there exist points in $B_r(x)$ that are not in $C$. Again, by the arbitrariness of $r$, we conclude $C$ has no interior points.
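As a small aside, the choice of $n$ can be sanity-checked numerically; the $+1$ in the ceiling expression guards both against $1/r$ being an exact power of $3$ and against floating-point rounding in the logarithm:

```python
from math import ceil, log

for r in (0.5, 1 / 3, 0.01, 1e-6):
    n = ceil(log(1 / r, 3)) + 1
    assert 1 / 3**n < r          # the interval length 3^{-n} fits inside the radius
```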


Alternate proof inspired by copper.hat and Berci

proof

The measure of each $C_n$ is

$$mC_n = (\text{number of intervals})\cdot(\text{interval length}) = 2^n\left(\frac{1}{3^n}\right) = \left(\frac{2}{3}\right)^n \to 0 \quad \text{as} \quad n \to \infty$$

Since each set $C_n$ is a subset of $C_{n-1} \cap C_{n-2} \cap \cdots \cap C_1$, we have

$$m(C_1 \cap C_2 \cap \cdots \cap C_n) = mC_n$$

and therefore $mC = 0$.

Now if $C$ contains any open set of the form $(a,b)$, then $mC \geq m(a,b) = b-a > 0$. Since $mC = 0$, $C$ must not contain an open set, which implies it can't contain an open ball, which implies $C$ has no interior points.

Given length of one side and its median and another median in a triangle. Find area of the triangle

Posted: 30 Nov 2021 03:01 AM PST

Given the triangle $ABC$ with medians $AM=7.5$ and $BN=12$ and side $AC=6$, find the area of the triangle.

NOTE: Solution must not use any special formula or trigonometric functions.

My attempt: $NM$ is half of $AB$ in length, and the triangles $ANO$ and $MOB$ have the same area, where $O$ is the intersection of the medians.
