Thursday, June 23, 2022

Recent Questions - Mathematics Stack Exchange



Please help me find the solution

Posted: 23 Jun 2022 02:09 PM PDT

Successor topology

Posted: 23 Jun 2022 01:59 PM PDT

Let $X = \mathbb{N}$ and let the topology equipped be given via

$$\tau=\{U \subset \mathbb{N} : (2n-1) \in U \Rightarrow 2n \in U\}$$

Show $(X,\tau)$ is locally compact but not compact.

For locally compact, I need to have a compact set containing an open set containing any given $x \in \mathbb{N}$.

So does this split into cases where $x \in \mathbb{N}$ is even or odd? If $x$ is even, can we take my open neighborhood to be $\{x\}$, and if $x$ is odd, then can $\{x,x+1\}$ be my $U$? And can my compact set containing it be $\{x,x+1,x+2,x+3\}$?

And for not compact do I take an arbitrary open covering, say

$$\mathbb{N} \subseteq \bigcup_{\alpha \in A} U_\alpha$$

where $A$ is an arbitrary indexing set, and show it has no finite subcovering? Could I do this by contradiction? Suppose there exists a finite subset $B \subset A$ such that $$\mathbb{N} \subset \bigcup_{\beta \in B} U_\beta$$

Am I on the right path?

How to solve this type of equation? (if it is solvable)

Posted: 23 Jun 2022 02:10 PM PDT

$$\frac{2x-8}{\:\sqrt{2x^2-16x+34}}+\frac{2x-3}{\sqrt{2x^2-6x+5}}=0$$

Is it possible to solve this equation? If yes, then how?

Generate a function f() which maps a known set to another known set

Posted: 23 Jun 2022 01:59 PM PDT

I have a set S of random positive integers that is of known size n. Is there a way to generate a hashing function which perfectly maps each value of S uniquely onto the set P, where P consists of the values 0 to n-1?

For example:

Given:

S = { 345, 69205, 0, 8237, 512 }

P = { 0, 1, 2, 3, 4 }

a valid function f could be

f(345) = 2, f(69205) = 4, f(0) = 0, f(8237) = 3, f(512) = 1

Note that the ordering of the mapping need not match the ordering of the elements of set S, that is, 345 need not map to 0. While the given example function f is valid, it would not be the only valid mapping.
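
Since any bijection onto {0, …, n−1} qualifies, a lookup table built by enumeration already does the job; the sketch below (plain Python, function name my own) shows that. A closed-form hash that avoids storing a table is the harder problem usually called minimal perfect hashing.

```python
def perfect_map(s):
    """Build a bijection from the n distinct values in s onto {0, ..., n-1}.

    Sorting only makes the result deterministic; any enumeration order
    yields an equally valid mapping, matching the note that 345 need
    not map to 0.
    """
    return {value: index for index, value in enumerate(sorted(s))}

f = perfect_map({345, 69205, 0, 8237, 512})
print(f)  # {0: 0, 345: 1, 512: 2, 8237: 3, 69205: 4}
```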

How to calculate probability of business failure given certain factors - Bayesian probability?

Posted: 23 Jun 2022 01:58 PM PDT

How do I calculate the probability of failure within the first five years of a business given the following conditions:

  • clear route (or not) to revenues within 6 months
  • experience of the industry (or not)
  • more than one founder (or not)

Assume:

  1. 55% chance of failure for all new businesses in the first five years of operation
  2. 75% chance of failure if no route to revenues in first 6 months
  3. 80% chance of failure if founder has no experience of the industry
  4. 80% chance of failure if only one founder

So for example, say a business has a route to revenues but the founder has no experience of the industry and there is only one founder, what is the likelihood of failure in the first five years?

Why (0,1) does not have a maximum? [duplicate]

Posted: 23 Jun 2022 02:04 PM PDT

The least upper bound of $(0,1)$ is $1$ and, from what I learned, the interval does not have a maximum.
What is wrong with the statement that the maximum of this interval is $0.\overline{9}$?

Examples of cases where it is easier to define the negation of a property than the property itself

Posted: 23 Jun 2022 01:44 PM PDT

Are there cases in mathematics where it is easier to define the negation of a property $P$ than property $P$ itself? I would like several answers giving examples of such cases. Basically, I am looking for examples in math texts and papers where the author first defines the negation of a property, then negates the definition to get the property itself.

Explicit Solution of Quadratic Opt. Problem

Posted: 23 Jun 2022 01:41 PM PDT

I have the following optimization problem, and I am unsure whether I've got it correct:

$$ \text{min } x^Tx \\ \text{so that } a + c^Tx \le 0 $$

I have introduced a slack variable $s$ to make it conceptually (for me) simpler:

$$ \text{min } x^Tx \\ \text{so that } a + c^Tx - s^2 = 0 $$

which leads to the Lagrangian $L = x^Tx + \lambda(a + c^Tx - s^2)$, from which it follows that $2x = - \lambda c$, and from $a + c^Tx - s^2 = 0 \iff a + \lambda c^Tc - s^2 = 0$ that if $a \ge 0$ then $\lambda = 0$, else $\lambda = -\frac{a}{\langle c,c\rangle}$ (since we want to minimize $x^Tx = \lambda^2(c^Tc)$).
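
As a numerical sanity check of the KKT algebra (my own sketch, worth comparing sign conventions against the above): the origin is already feasible whenever $a \le 0$, and otherwise the constraint is active and the minimizer is $x = -\frac{a}{c^Tc}c$.

```python
def min_norm_halfspace(a, c):
    """Minimizer of x^T x subject to a + c^T x <= 0 (KKT closed form)."""
    if a <= 0:
        return [0.0] * len(c)              # origin feasible: lambda = 0
    cc = sum(ci * ci for ci in c)          # c^T c
    return [-(a / cc) * ci for ci in c]    # active case: a + c^T x = 0

x = min_norm_halfspace(2.0, [1.0, 1.0])
print(x)  # [-1.0, -1.0]; check: a + c^T x = 2 + (-1 - 1) = 0
```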

Do we have a translation from intuitionistic logic to classical logic which can translate all formulas with their precise meaning?(not only theorems)

Posted: 23 Jun 2022 01:39 PM PDT

I want to know if there is a translation from intuitionistic propositional logic formulas to classical propositional logic formulas satisfying the properties I'm looking for.

Actually, the first part of my question is about the existence of a mapping between two different model theories, for example between Heyting algebras and Boolean algebras. As you will see below, my main question depends on an answer to this one. I am speaking about a mapping which preserves structure, in a sense which I don't know how to express formally. (I guess it would be similar to a mapping from groups to rings: they come from two different categories!) So what I'm asking here is: do we have such mappings? Did somebody ever try to construct such a thing, or find a reason to do so?

After that, in the main part of my question, I want to know if we have a function which maps every formula of IPL to a formula of CPL while preserving its value completely with respect to two model theories for IPL and CPL which correspond in the sense described above.

More formally I am talking about a function $g$ and two model theories $\mathbb{M_\mathsf{I}}$ and $\mathbb{M_\mathsf{C}}$ which are respectively theories for IPL and CPL where $g$ is: $$g:\mathbb{M_\mathsf{I}}\rightarrow\mathbb{M_\mathsf{C}}$$

Be aware that $g$ has a property that I don't know how to express formally. Intuitively, I can say that it is a correspondence and that it preserves structure.

Can we have a function $f$ that takes a formula $\phi$ in IPL and maps it to a formula in CPL such that $$f(\phi)=\psi \Leftrightarrow {\Large \forall} \mathcal{M_\mathsf{I}} \in \mathbb{M_\mathsf{I}} : (\mathcal{M_\mathsf{I}} \models \phi \Rightarrow g(\mathcal{M_\mathsf{I}}) \models \psi) $$?

I have to say that my question comes from an intuitive thought that maybe we can translate a formula like ($\phi \lor \neg \phi$) from IPL to CPL and expect to get another formula, with a possibly different shape, which means exactly what ($\phi \lor \neg \phi$) means in IPL!

Perceptron linearly separable but not linearly separable through the origin

Posted: 23 Jun 2022 01:32 PM PDT

Can anyone explain the solution to this problem?

"Provide two points, (x0, x1) and (y0, y1) in two dimensions that are linearly separable but not linearly separable through the origin. Enter a Python list with two entries of the form [[x0, x1], label] where label is 1 or -1. (So each entry represents a point with 2 dimensions and its label)"

One of the solutions is "There are many possible answers for this question. In the provided solution, [[[1, 1], 1], [[2, 2], -1]], the points are linearly separable, but not through the origin".

But I want to know other answers as well.
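
A quick way to see why examples of this shape work (my own sketch, not from the course): whenever one point is a positive scalar multiple of the other but the labels differ, separation through the origin is impossible, since $w \cdot (2,2) = 2\, w\cdot(1,1)$ always has the same sign as $w\cdot(1,1)$; a bias term removes the obstruction.

```python
import math

data = [([1, 1], 1), ([2, 2], -1)]  # the provided solution's points

def separates(w, b, points):
    """True if sign(w.x + b) strictly matches every label."""
    return all(lab * (w[0] * x[0] + w[1] * x[1] + b) > 0 for x, lab in points)

# With a bias, e.g. w = (-1, -1), b = 3, separation works:
print(separates([-1, -1], 3, data))  # True

# Through the origin (b = 0) it cannot: w.(2,2) = 2 * w.(1,1), so both
# activations share a sign and cannot match labels +1 and -1.
grid = [(math.cos(t / 100), math.sin(t / 100)) for t in range(628)]
print(any(separates(w, 0, data) for w in grid))  # False
```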

Equivalence between bi-invariant metrics on Lie groups and Symmetric spaces

Posted: 23 Jun 2022 01:29 PM PDT

Let $G$ be a connected Lie group with Lie algebra $\mathfrak{g}$ and $K$ a closed Lie subgroup of $G$ with Lie algebra $\mathfrak{s}$. Then $G/K$ is a homogeneous space. Equip $G$ with a left-invariant Riemannian metric $Q_G$ on $G$ and let $Q_{G/K}$ be the unique Riemannian metric on $G/K$ so that $\pi: G \to G/K$ is a Riemannian submersion (that is, so that $G/K$ is a Riemannian homogeneous space). Let $\mathfrak{h}$ be the orthogonal complement of $\mathfrak{s}$ with respect to $Q_G$, so that $\mathfrak{g} = \mathfrak{s} \oplus \mathfrak{h}$.

  1. If $Q_G$ happens to be bi-invariant (so that $Q_{G/K}$ is $G$-invariant), then is $G/K$ necessarily symmetric?
  2. Conversely, if $G/K$ happens to be symmetric, then is $Q_G$ necessarily bi-invariant?

For $1.$, I know that if $Q_{G}$ is bi-invariant, then $G$ is a symmetric space. In particular, for all $p \in G$, the map $s_p: G \to G$ given by $s_p(g) = pg^{-1} p$ for all $g \in G$ is an involutive isometry fixing $p$. So, if $G/K$ were a symmetric space, a reasonable guess for the involutive isometry fixing $[p] \in G/K$ would be the map $s'_{[p]}: G/K \to G/K$ defined implicitly by $\pi \circ s_p = s'_{[p]} \circ \pi$. This can be written explicitly as $s'_{[p]}([g]) = [pg^{-1} p]$ for all $[g] \in G/K$. However, I don't see any reason to believe that such a map is well-defined when $K$ is not a normal subgroup. Another potential strategy would be to show that the Cartan decomposition holds: $[\mathfrak{h}, \mathfrak{h}] \subset \mathfrak{s}, \ [\mathfrak{h}, \mathfrak{s}] \subset \mathfrak{h}, \ [\mathfrak{s}, \mathfrak{s}] \subset \mathfrak{s}$.

I believe $2.$ is false: we could take $K = \{e\}$, so that $G/K \cong G$, and this post seems to suggest that there do exist Lie groups with only left-invariant metrics that are nevertheless symmetric spaces. However, the answer there avoids taking a firm stand either way, and just proposes a strategy that would yield a counterexample if one existed.

How is the formula for determining the adjoint of a matrix derived?

Posted: 23 Jun 2022 01:20 PM PDT

I'm learning how to get the adjoint of a matrix using cofactors and the transpose, but how is this formula proven? When I try to google the proof for this, all I get is blog posts on how to get the adjoint of a matrix but not the proof of the formula itself.

Monomial order in Macaulay2

Posted: 23 Jun 2022 01:26 PM PDT

When I define the ring to work in in Macaulay2, I would like to change the order of the variables for building Gröbner bases of ideals with respect to this new order. For instance, if I set R = QQ[x_1..x_5], then how may I define the lexicographic monomial order induced by x_3 < x_2 < x_1 < x_5 < x_4?

Is my average of averages correct in this particular case?

Posted: 23 Jun 2022 01:56 PM PDT

I have read many times that averaging averages leads to errors, but here is my problem:

I'm trying to make a game simulation where a player delves into a dungeon fighting monsters, one monster per level. The player can resurrect 2 times, restarting the dungeon on the first floor.

I want to know on average how deep my player will get per game (so a group of 3 delves).

Let's imagine that I run the simulation for a total of 100 games. I could calculate a global average (the total monsters killed / 3 / 100), but that seems wrong to me because I lose this "group of 3 delves per game" aspect.

Instinctively, I want to take an average of averages: calculate the average level reached per game, then average that over the 100 games.

Am I wrong ?
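
A small simulation may settle the worry (a sketch with made-up depth values): when every game contains exactly 3 delves, the average of the per-game averages equals the global per-delve average, so nothing is lost either way; the two only diverge when group sizes vary, or when a per-game total is compared against a per-delve mean.

```python
import random

random.seed(1)
# 100 games, each a group of exactly 3 delve depths (placeholder values)
games = [[random.randint(1, 20) for _ in range(3)] for _ in range(100)]

global_avg = sum(sum(g) for g in games) / (3 * 100)   # per-delve depth
avg_of_avgs = sum(sum(g) / 3 for g in games) / 100    # mean of per-game means
print(abs(global_avg - avg_of_avgs) < 1e-9)           # True: equal group sizes
```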

Show that a random variable uniformly distributed over the unit square is independent

Posted: 23 Jun 2022 01:24 PM PDT

Let $X=(U,V)$ be a random variable which is uniformly distributed on $[0,1]^2$. I want to show that $U$ is independent of $V$.

Let $A,B\in B([0,1])$ where $B([0,1])$ denotes the Borel Sigma algebra.

We have,

$$\begin{aligned}P(U\in A, V\in B) &= P((U,V)\in A\times B) \\&=1_{[0,1]^2}(A\times B) \\ &=1_{[0,1]}(A) \cdot 1_{[0,1]}(B) \\ &=P(U\in A)P(V\in B) \end{aligned}$$

Is my solution correct? What could I have done better?

Question regarding the proof of Levelt's theorem

Posted: 23 Jun 2022 01:59 PM PDT

I have a question, regarding the proof of theorem 3.5 (Levelt) from the following article about the monodromy of hypergeometric functions. My question is the following:

In the theorem it is stated that any hypergeometric group with the same parameters is conjugate in $GL(n,\mathbb{C})$ to the initial one. This is what I don't immediately understand.

In the proof the author wants to show the uniqueness of the matrices $A$, $B$. He does this by considering $W = \ker(B-A)$, which is $(n-1)$-dimensional, and $V=W \cap A^{-1}W \cap ... \cap A^{-(n-2)}W$. He proves for $v \in V$ that $A^i v$ $(i=0,...,n-1)$ is a basis for $\mathbb{C}^n$ and $A^iv = B^iv$ on $W$ for all $i=0,...,n-1$. But how does one see that the matrices $A$, $B$ have the form stated in the theorem with respect to this basis, the uniqueness of $H=\langle A,B\rangle$, and that all those groups with the same parameters are conjugate to $H$?

Lemma showing we can approximate the Hausdorff s-dimensional measure doesn't make any sense

Posted: 23 Jun 2022 01:22 PM PDT

I have been reading (and re-reading, and re-re-reading) Lemma 1.7 from The Geometry of Fractal Sets by K. J. Falconer, and I don't understand what it says. It is prefaced with the following remark:

The next lemma states that any attempt to estimate the Hausdorff measure of a set using a cover of sufficiently small sets gives an answer not much smaller than the actual Hausdorff measure.

The statement of the lemma is as follows.

Let $E$ be $\mathscr{H}^s$-measurable with $\mathscr{H}^s(E)<\infty$, and let $\epsilon > 0$. Then there exists $\rho>0$ (dependent only on $E$ and $\epsilon$), such that for any collection of Borel sets $\{U_i\}_{i=1}^\infty$ with $0<|U_i|\leq \rho$ we have $$\mathscr{H}^s(E\cap (\cup_i U_i))<\sum_i|U_i|^s+\epsilon.$$

I really don't understand what's going on here. A few questions come to mind:

  1. Don't we already know by definition of $\mathscr{H}^s$ that it is the limiting case of the estimate of the s-measure of a set by small enough sets?
  2. By monotonicity, we have $\mathscr{H}^s(\cup_i U_i)\geq\mathscr{H}^s(E\cap (\cup_i U_i))$, and $\mathscr{H}^s$ is just the limit as $\delta \to 0$ of the infimum over all covers of a set with each element of the cover having diameter no more than $\delta$. So, shouldn't it be immediate that $\sum_i|U_i|^s\geq\mathscr{H}^s(\cup_i U_i)\geq\mathscr{H}^s(E\cap (\cup_i U_i))$? If not, what am I missing?
  3. Why do the sets need to be Borel? Is this a necessary condition?

I don't understand the proof at all, but I'm hoping that it will be easier once I can really figure out what the lemma is saying. Any and all help will be appreciated.

Let $F:M\to M$ is a smooth map where $M$ is compact connected smooth manifold and $F\circ F=F$ then show that $F(M)$ is a submanifold of $M$.

Posted: 23 Jun 2022 01:53 PM PDT

Given here that composing $F$ with $F$ gives $F$, I must show that $F(M)$ is a submanifold of $M$. I was thinking that $F(M)$ should be the inverse image of some regular value of some smooth function, which needs to be constructed using this property of $F$, but I am unable to proceed in this direction. Any hints on how to proceed?

Probability that two diagnostic test results are positive

Posted: 23 Jun 2022 01:51 PM PDT

Suppose Fred has a rare disease that only $1\%$ of the population gets affected with and he tested positive in the first diagnosis test. He decides to get tested for the second time. The new test is independent of the first test (given his disease status) and has the same sensitivity and specificity.

Let $D$ be the event that Fred has disease, $T_1$ be the event that his first result is positive and $T_2$ be the event that his second result is positive.

We are given $P(T | D) = 0.95 = P(T^c | D^c)$ (the accuracy of the test) and $P(D) = 0.01$.

I would like to know if I am interpreting each of the following correctly:

  1. $P(T_1 \cap T_2 | D)$ : Probability that the results are positive in both the tests given Fred has the disease.

  2. $P(T_1 \cap T_2)$ : Probability that the results are positive in both the tests taking into account both the cases - when Fred has the disease and when not.

    Does it mean $P(T_1 \cap T_2) = P(T_1 \cap T_2 | D) P(D) + P(T_1 \cap T_2 | D^c) P(D^c)$?

  3. Is $P(T_1 \cap T_2 | D) = P(T_1 | D) P(T_2 | D)$?

    Is $P(T_1 \cap T_2) = P(T_1)P(T_2)$?

    I think the former holds while the latter does not. But I don't know why; an explanation using (conditional) independence would be helpful.
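
Plugging the stated numbers in may make the distinction concrete (my own check, not part of the question): conditional independence makes the identity in 3's first line hold, while the unconditional product fails because the two results are linked through the common disease status.

```python
p_d, sens, spec = 0.01, 0.95, 0.95

# P(T1 and T2) by the law of total probability, using conditional independence
p_both = sens**2 * p_d + (1 - spec)**2 * (1 - p_d)
# Marginal P(T1) (= P(T2) by symmetry)
p_t1 = sens * p_d + (1 - spec) * (1 - p_d)

print(round(p_both, 6))     # ~0.0115
print(round(p_t1 ** 2, 6))  # ~0.003481: T1, T2 are NOT unconditionally independent
```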

Is every closed interval domain, continuous rational function equal to a polynomial?

Posted: 23 Jun 2022 01:43 PM PDT

Given any two nonzero polynomials with real coefficients, $\mathbf Pn(x)$ and non-constant $\mathbf Pd(x)$, form the rational function $\mathbf f(x)$ = $\mathbf Pn(x)$ / $\mathbf Pd(x)$.
Restrict its domain from $\mathbb R$ to a closed interval $\mathbb I$ such that $\mathbf f(x)$ is continuous over $\mathbb I$.

  1. Is there, for every such function $\mathbf f(x)$, a polynomial $\mathbf P(x)$ with real coefficients such that for any $\mathbf x$ $\in$ $\mathbb I$, $\mathbf f(x)$ = $\mathbf P(x)$?
  2. Is there, for no non-trivial such function $\mathbf f(x)$, a polynomial $\mathbf P(x)$ with real coefficients such that for any $\mathbf x$ $\in$ $\mathbb I$, $\mathbf f(x)$ = $\mathbf P(x)$?

Context: I'm working on interpolation/regression curves; that is, I'm trying to identify families of curves with or without certain features. Here I'm wondering if there's any loss of generality from working with polynomials vs. rational functions when we consider only finite intervals and continuous curves: no trivial line of reasoning comes to mind.

What is $\frac{1}{\aleph_{\text{0}}}$?

Posted: 23 Jun 2022 01:46 PM PDT

If you have a uniform distribution over the set of natural numbers, what is the probability of any of the numbers being picked? Can you think of it in terms of $\frac{1}{\aleph_{\text{0}}}$?

How to find a statistical function from this model?

Posted: 23 Jun 2022 01:51 PM PDT

We denote the set of parameters by $\quad \theta = (q_{k},\Sigma^{(k)} )_{k \in [K]}$, where $\Sigma^{(k)}$ is an SPD matrix (symmetric positive definite matrix).

We also have $\quad \mathbb{P}[z_{u}=k]=q_{k}\quad$ with $\quad q_{k}\geq0 \quad$ and $\sum_{k \in [K]} q_{k}=1$, where $z_{u}$ is a random variable (the type of $u\in [P]$) which takes values in $[K]$.
For $u \in [P]$, $\ q_{k}\cdot P\ $ is the number of $u$ such that $z_{u}=k$, and $\mathbb{P}_{\theta}$ (resp. $\mathbb{E}_{\theta}$) denotes the probability (resp. expectation) under the parameters $\theta$.

Given the information below:

  1. $(N_{uj}\mid z_{u}=k,(\alpha_{kj})_{kj}) \quad \sim \quad \mathrm{Poisson}(\exp(\alpha_{kj}))$
  2. $(N_{uj}\mid z_{u}=k,(\alpha_{kj})_{kj})_{j\in [J]} \quad$ are independent, the $K$ vectors (of dimension $J$) $\quad (\alpha_{kj}:j\in [J])_{k\in[K]}\quad$ are independent, and for all $k \in[K]$, $\quad (\alpha_{kj})_{j \in [J]}\quad \sim \quad\mathcal{N}(0,\Sigma^{(k)})$
  3. $\mathbb{P}[z_{u}=k]=q_{k}\quad$
  4. $N_{uj} \geq 0 \quad \forall u \in [P] \quad \text{and} \quad \forall j \in [J]$

We wish to find the following function:
$$R_{\theta}((c_{j})_{j=1}^{J})=\frac{1}{P} \sum_{u_{0} \in[P]} \mathbb{P}_{\theta}\Big(\sum_{j \in [J]} N_{u_{0}j}>0 \,\Big|\, \sum_{u \in [P]}N_{uj}=c_{j}\quad \forall j \in [J]\Big)$$

I tried to calculate it, but the result does not depend on $\theta$. Can someone tell me how I can calculate this from the given information?

Prove that $\Gamma_{abc}=\frac{1}{2}\left(\partial_bg_{ac} + \partial_cg_{ab}-\partial_ag_{bc}\right)$

Posted: 23 Jun 2022 01:43 PM PDT

I am tasked with the following problem

Use the equation $$\nabla_ag_{bc}=\partial_ag_{bc}-\Gamma_{cba}-\Gamma_{bca}=0\tag{1}$$ where $$\Gamma_{abc}=g_{ad}\Gamma^d_{bc}\tag{A}$$ and the (no torsion) condition $$\Gamma_{abc}=\Gamma_{acb}\tag{i}$$ to show that $$\Gamma_{abc}=\frac{1}{2}\left(\partial_bg_{ac} + \partial_cg_{ab}-\partial_ag_{bc}\right)\tag{B}$$

To begin, I will cycle the indices in eqn. $(1)$ so that,

$$\nabla_bg _{ca}=\partial_bg_{ca}-\Gamma_{bac}-\Gamma_{cab}=0\tag{2}$$ and cycle again to obtain $$\nabla_c g_{ab}=\partial_cg_{ab}-\Gamma_{acb}-\Gamma_{abc}=0\tag{3}$$ Now, subtracting eqns. $(2)$ and $(3)$ from eqn. $(1)$ yields $$\partial_a g_{bc}-\partial_b g_{ca}-\partial_c g_{ab}-\Gamma_{cba}-\Gamma_{bca}+\Gamma_{bac}+\Gamma_{cab} + \Gamma_{acb}+\Gamma_{abc}=0$$ and rearranging gives $$\partial_a g_{bc}-\partial_b g_{ca}-\partial_c g_{ab}=\Gamma_{cba}+\Gamma_{bca}-\Gamma_{bac}-\Gamma_{cab} - \Gamma_{acb}-\Gamma_{abc}$$ Also, cycling the indices in $(\mathrm{i})$ (no torsion condition), leads to two more equations along with the original equation: $$\Gamma_{abc}=\Gamma_{acb}\tag{i}$$ $$\Gamma_{bca}=\Gamma_{cba}\tag{j}$$ $$\Gamma_{cab}=\Gamma_{bac}\tag{k}$$

Now substituting $(\mathrm{i})$, $(\mathrm{j})$ and $(\mathrm{k})$ into the expression above,

$$\partial_a g_{bc}-\partial_b g_{ca}-\partial_c g_{ab}=\Gamma_{bca}+\Gamma_{bca}-\Gamma_{cab}-\Gamma_{cab} - \Gamma_{abc}-\Gamma_{abc}$$ simplifying and rearranging gives $$\partial_a g_{bc}-\partial_b g_{ca}-\partial_c g_{ab}=2\Gamma_{bca}-2\Gamma_{cab} - 2\Gamma_{abc}$$ Rearranging, $$\Gamma_{bca} - \Gamma_{cab} - \Gamma_{abc} = \frac12 \left(\partial_c g_{ab} +\partial_b g_{ca}-\partial_a g_{bc}\right)$$

But this is not the same equation as $(\mathrm{B})$.


Now I will typeset the author's solution:

$$\nabla_ag_{bc}=\partial_ag_{bc}-\Gamma_{cba}-\Gamma_{bca}$$ Cycling the indices: $$\nabla_bg _{ca}=\partial_bg_{ca}-\Gamma_{\color{red}{acb}}-\Gamma_{abc}\tag{4}$$ $$\nabla_c g_{ab}=\partial_cg_{ab}-\Gamma_{\color{red}{bac}}-\Gamma_{cab}\tag{5}$$ Subtracting these two equations from the original equation gives $$\partial_a g_{bc}-\partial_b g_{ca}-\partial_c g_{ab}$$ $$=\Gamma_{cba}+\Gamma_{bca}-\Gamma_{acb}-\Gamma_{cab}-\Gamma_{bac}-\Gamma_{abc}\stackrel{\color{red}{?}}{=}-2\Gamma_{abc},$$ using the zero torsion condition. Hence the result.

  1. I can't understand the author's solution, for two reasons; I marked both in red in the quote above. Here is the first: I think the index orders in eqns $(4)$ and $(5)$ are incorrect; I think the correct expressions should be $(2)$ and $(3)$.

  2. Moreover, I would like to see what justifies the final equality in the author's solution quote, which can only be the case if the first four connection terms sum to zero.
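
(A note for comparison: the six terms do collapse if the letters are substituted into the fixed index pattern of eqn $(1)$ rather than rotated. Metric compatibility then reads $$\partial_a g_{bc}=\Gamma_{cba}+\Gamma_{bca},\qquad \partial_b g_{ac}=\Gamma_{cab}+\Gamma_{acb},\qquad \partial_c g_{ab}=\Gamma_{bac}+\Gamma_{abc},$$ so $$\partial_b g_{ac}+\partial_c g_{ab}-\partial_a g_{bc}=\Gamma_{cab}+\Gamma_{acb}+\Gamma_{bac}+\Gamma_{abc}-\Gamma_{cba}-\Gamma_{bca}.$$ Since the no-torsion condition is symmetry in the last two indices only, $\Gamma_{cab}=\Gamma_{cba}$ and $\Gamma_{bac}=\Gamma_{bca}$, so four terms cancel in pairs and the right-hand side reduces to $\Gamma_{acb}+\Gamma_{abc}=2\Gamma_{abc}$, which is exactly $(\mathrm{B})$.)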


Remark:

I have typed the words "cycling the indices" a lot in this question, but I didn't make it completely clear what I was doing. I am not swapping two of the three indices; instead, I am literally rotating the order of all $3$ indices. For example, starting with, say, $bac$, I can obtain from it $acb\to cba$; then one more rotation leads back to the starting point, $bac$. It is like going around a clock diagram in the counter-clockwise sense (the clockwise sense works equally well):

Cyclic permutations

Above the 'clockwise sense' is depicted by the upper version of the diagram and the 'counter-clockwise sense' is the lower part of the diagram.


One thing that puzzles me more than anything else is that answers and comments indicate that the solution is wrong (or has typos), so I have taken a screenshot of the author's question and solution for the sake of completeness:

Here is the question:

[screenshot of the question]

and here is the corresponding solution:

Levi-Civita connection solution

$\triangle ABC$ has circumcenter $O$; $BO$ and $CO$ meet $AC$ and $AB$ at $D$ and $E$. If $\angle A=\angle EDA=\angle BDE$, show they are $50^\circ$

Posted: 23 Jun 2022 01:44 PM PDT

There is surely a purely geometrical solution to this problem, but none has been forthcoming so far! Trigonometry confirms the result; however, the quest is to solve this geometrically, almost certainly involving a clever construction leading to an equilateral triangle and hence exposing the angle values. So, a possible nod to "Adventitious Angles".

$O$ is the circumcenter of $\triangle ABC$. $BO$ cuts $AC$ at $D$, and $CO$ cuts $AB$ at $E$, as in the figure. Angles $EAD$, $EDA$, and $BDE$ are equal. Prove that their value is $50^\circ$

Teaser image here

You can quickly work out all the angles in terms of $x$ and $y = \angle OBC$ and discover that $x+y=90$. Also that triangle COD is isosceles, and further that it is sufficient to prove triangle BCE isosceles to get the answer.

This may help ... repeat "may"! Define point F on AD such that CE = CF. Then since triangle COD is isosceles, then (among other things) EF is parallel to OD. Angle BCF will ultimately be shown to be 60 degrees, so we need to show that triangle BFC is equilateral. Worth exploring a bit; I'm convinced the key is the equilateral triangle because this forces another relationship between x and y.

UPDATE:- I have made a bit of progress via the construction of the line BF, as described above. Since triangle COD is isosceles and CF=CE by construction, then triangle CEF is also isosceles and is similar to triangle COD. The following can then be proved:-

  1. Angle EFD = 180-2x
  2. Angle FED = x
  3. FE=FD
  4. FD=EO
  5. triangle EFA is congruent to triangle EDO
  6. triangle EFA is similar to triangle ABD

Now construct the line FO and drop a perpendicular from O to point G on BC. Then the following can be proved:

  1. FO bisects angle EOD into x + x and also bisects angle FB into (180-3x) + (180- 3x)
  2. The points F, O and G are collinear
  3. BG = GC
  4. Triangles CFG and GFB are congruent
  5. CF = FB
  6. Triangle CFB is isosceles with CF = FB

BUT .... we still need to prove that triangle CFB is equilateral, which is very elusive. There are two distinct ideas here… SO ... any help is greatly appreciated! Thanks for your interest.

The flow of the negative gradient of a Morse-Bott function on a compact manifold converges to a critical point?

Posted: 23 Jun 2022 01:45 PM PDT

Let $(M, g)$ be a compact Riemannian manifold and $f: M \rightarrow \mathbb{R}$ be a Morse-Bott function, i.e. the set of critical points of $f$, $Crit(f)$, has connected components which are smooth manifolds whose tangent spaces are $T_x Crit(f) = \ker \nabla^2_x f$,

(where $\nabla^2_x f: T_x M \rightarrow T_xM$ is the linear operator obtained via $g$ from the hessian $f_{**,x} : T_x M \times T_x M \rightarrow \mathbb{R}$ defined as $f_{**,x}(v, w) = v(W(f))$ for $W \in \Gamma(TM)$ any extension of $w$ (this is well defined and symmetric at critical points) )

Let $\nabla f \in \Gamma(TM)$ be defined by $g(\nabla f, w) = w(f)$ and consider the flow of $-\nabla f$ denoted $\phi_t(y)$. I am trying to see why for any $y \in M$ it happens that $\lim\limits_{t \rightarrow \infty} \phi_t(y) \in Crit(f)$.

My attempt:

Since $M$ is compact, $\phi_t(y)$ is defined for all $t \in \mathbb{R}$. If the set $A_y:= \{ \phi_t(y) : t \in \mathbb{R} \}$ were closed, then, since $M$ is compact, this set would also be compact, and by Weierstrass $f$ would have to attain its minimum on it. Since moving along the flowlines of the negative gradient can only decrease $f$, this means that the minimum is attained at $x:= \lim\limits_{t \rightarrow \infty} \phi_t(y)$, so $x$ would be a critical point of $f|_{A_y}$. But even ignoring the fact that I don't know why $A_y$ is necessarily closed, I don't see why, if $x$ is a critical point of $f|_{A_y}$, it is a critical point of $f$ as well.

I am thinking that this attempt is not enough, as it doesn't use at all the fact that $f$ is a Morse-Bott function. But I don't see how to use this fact. I also know that $\nabla_x^2 f (v) = \nabla_V \nabla f$ for $x \in Crit(f)$ and $v \in T_xM$ and $V$ a vector field extending $v$, where $\nabla_V (\cdot)$ on the RHS is the Levi-Civita connection of $g$, but I can't see how to use this either.

Exponential generating function of the falling factorial

Posted: 23 Jun 2022 01:32 PM PDT

Let $\alpha$ be a real number. Define the sequence $(a_n)_n$ by $a_0=1$ and $a_n=\alpha(\alpha-1)\cdots(\alpha - (n-1))$ for $n\geq 1$. Find the exponential generating function of this sequence.

We have that $a_n=(\alpha-(n-1))a_{n-1}$ for $n\geq1$, so \begin{align*} A(x)&=\sum_{n\geq 0}a_n\frac{x^n}{n!}=a_0+\sum_{n\geq 1}a_n\frac{x^n}{n!}\\ &=1+\sum_{n\geq 1}(\alpha-n+1)a_{n-1}\frac{x^n}{n!}\\ &=1+\sum_{n\geq 0}(\alpha-n)a_{n}\frac{x^{n+1}}{(n+1)!}=1+\alpha\int_0^xA(t)\,dt-\sum_{n\geq 0}na_{n}\frac{x^{n+1}}{(n+1)!}\end{align*} I'm stuck here. I tried to write the last sum as an integral and then solve a differential equation for $A(x)$, but it didn't work.

Should I search for another recurrence relation that $a_n$ satisfies?
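
(One possible route around the stray sum, using only the given recurrence in the form $a_{n+1}=(\alpha-n)a_n$: differentiate instead of integrating. Then $$A'(x)=\sum_{n\ge 0}a_{n+1}\frac{x^n}{n!}=\sum_{n\ge 0}(\alpha-n)a_n\frac{x^n}{n!}=\alpha A(x)-xA'(x),$$ so $(1+x)A'(x)=\alpha A(x)$ with $A(0)=1$, which gives $A(x)=(1+x)^{\alpha}$, consistent with $a_n/n!=\binom{\alpha}{n}$.)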

Is the product of path connected spaces also path connected in a topology other than the product topology?

Posted: 23 Jun 2022 01:40 PM PDT

In Munkres - Topology, in section 24, question 8a), we are asked "Is the product of path connected spaces necessarily path connected?" Here is my proof:

In the product topology, yes: for any two points $\mathbf{x}$ and $\mathbf{y}$ in $\prod_{\alpha\in J}X_{\alpha}$, where $J$ is an index set and $\{X_{\alpha}\}_{\alpha\in J}$ is a family of path connected spaces, for every $x_{\alpha}$ and $y_{\alpha}$ in $X_{\alpha}$ there is a path connecting them; call it $f_{\alpha}'$. Scale all these paths so they map from $[0,1]$ to $X_{\alpha}$, and call the scaled paths $f_{\alpha}$. Then we can define a path from $\mathbf{x}$ to $\mathbf{y}$ by $f:[0,1]\longrightarrow\prod X_{\alpha}$ with $f(t)=(f_{\alpha}(t))_{\alpha\in J}$. This function is guaranteed to be continuous (in the product topology) by Theorem 19.6 (Munkres - Topology).

I think my proof is okay, but wouldn't mind it being looked over. However, I am unsure if perhaps I could have done a proof without relying on the topology being the product topology. I know that, for example, the uniform and box topologies on $\mathbb{R}^{\omega}$ are not even connected, so they can't be path connected. What about other topologies? What can we say in general about the product of connected (path connected or not) spaces under any particular topology?

Thanks for reading.

Proof that a function is bijective if and only if it is both surjective and injective

Posted: 23 Jun 2022 02:00 PM PDT

I am given the usual definitions for surjectivity and injectivity, but I am introduced with an alternative formulation of bijectivity:

Suppose $X$ and $Y$ are sets and $f:X\rightarrow Y$ a mapping. This mapping is said to be bijective if $\exists g:Y\rightarrow X$ such that $\forall x\in X,\ y\in Y$, $f(g(y))=y$ and $g(f(x))=x$.

I have to prove that, in this sense, bijectivity is equivalent to simultaneous injectivity and surjectivity. Starting from bijectivity, I found it quite easy to prove the other two conditions. The other way around, however, poses some difficulty. My proof:

Suppose $f:X\rightarrow Y$ is surjective and injective. I define the function $g:Y\rightarrow X$ by $g(y)=x\Longleftrightarrow f(x)=y$. Surjectivity of $f$ implies $\forall y\in Y\ \exists x\in X$ such that $f(x)=y$, thus $f(g(y))=y$. Now suppose $x,z\in X$ and $f(x)=f(z)$. Along with the definition of $g$ this implies $g(f(z))=x$. From injectivity follows $x=z$, thus $g(f(x))=x$.

First of all, I have not initially shown that $g$ is well-defined. I am not sure how to actually prove this, or if it is even necessary here. However, using surjectivity of $f$, it is easy to see that $g$ maps every value of $Y$. Can I conclude from this that the function is well-defined, or is there more to say on the matter? I suppose I would also have to show that $g$ cannot take on two different values of $x$ for the same $y$. Also, supposing I have proven that $g$ is indeed well-defined, is my proof as presented above correct? Thank you for your help!

EDIT:

From a discussion in the comments of an answer, I have come to realise that perhaps I have misused the term "well-defined". Since I don't really understand the formal definition, I will re-state a part of my question as follows: can I directly use $g$ as defined above, or do I have to prove that it is "okay" to use it? I'm really not sure how to say this anymore... intuitively, I would say that "okay" means that the definition itself does not produce any inconsistencies. If it is necessary to prove something about it prior to using it, what is it?
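
On a finite example, the construction of $g$ can be made completely concrete (a toy sketch; the sets and values are mine): injectivity of $f$ is exactly what makes the inverted dictionary single-valued, and surjectivity is what makes it total on $Y$.

```python
X, Y = ['a', 'b', 'c'], [1, 2, 3]
f = {'a': 2, 'b': 1, 'c': 3}          # injective and surjective onto Y

# g(y) = x  <=>  f(x) = y : single-valued because no y repeats among f's values
g = {y: x for x, y in f.items()}

print(all(g[f[x]] == x for x in X))   # True: g(f(x)) = x (uses injectivity)
print(all(f[g[y]] == y for y in Y))   # True: f(g(y)) = y (uses surjectivity)
```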

When do the two cars meet

Posted: 23 Jun 2022 01:58 PM PDT

At 10:30 am, car $A$ starts from point $A$ towards point $B$ at a speed of $65$ km/hr; at the same time, another car leaves point $B$ towards point $A$ at a speed of $70$ km/hr. The total distance between the two points is $810$ km. At what time do these two cars meet?

This is what I have tried,

$t_1 = \frac{810}{65} = 12.46 \, $hr
$t_2 = \frac{810}{70} = 11.57\,$ hr
$t_1-t_2 = 0.89 \,$ hr
$t_1-t_2 = 0.89\cdot 60 = 53.4 \,$ minutes

but this can't be the answer, because how could these two cars meet after $53.4$ minutes?
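
The quantity the distance should be divided by is the closing speed, not each car's speed separately; a minimal check of that standard relative-speed approach:

```python
from datetime import datetime, timedelta

distance = 810            # km between points A and B
closing_speed = 65 + 70   # km/hr, since the cars drive towards each other

hours = distance / closing_speed                 # hours until they meet
meet = datetime(2022, 6, 23, 10, 30) + timedelta(hours=hours)
print(hours, meet.strftime('%H:%M'))             # 6.0 16:30
```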

What is the difference between these two kernel definitions?

Posted: 23 Jun 2022 01:38 PM PDT

I am reading my draft and the document by David Haussler about Convolution Kernels on Discrete Structures, UCSC-CRL-99-10.

My draft

[image omitted]

and the other document

[images omitted]

The terminology seems to differ. The other document is from a computer science department, so I cannot trust it 100% at the moment. They call one instance of $\Phi$ the kernel, $K$; I call the kernel $\sigma$. It seems that one can also take a series of "kernels" and call only one of them the kernel.

The term "convolution kernel" caught my eye. I think the kernel of the Wigner-Ville distribution is one of them. Is it?

Why are they taking a series of "kernels"? My interpretation may be false. What is the difference between the two kernel definitions?
