Monday, March 22, 2021

Recent Questions - Mathematics Stack Exchange

Prove that dim(W\cap V^{\perp})=1 (Linear Algebra)

Posted: 22 Mar 2021 09:17 PM PDT

Let $V$, $W$ be vector subspaces of $\mathbb{R}^n$ such that $$V\leq W\hspace{1cm}\textrm{and}\hspace{1cm}\textrm{dim }W=1+\textrm{dim }V.$$ How can I prove that $$\textrm{dim }(W\cap V^{\perp})=1,$$ where $V^{\perp}=\{x \in \mathbb{R}^n: \langle x,v \rangle=0\hspace{0.5cm}\textrm{for all }v \in V\}$ and $\langle x,v \rangle$ is the usual inner product in $\mathbb{R}^n$?
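
Not a proof, but a quick numerical sanity check of the statement (a sketch assuming numpy/scipy, using random subspaces and the identity $\dim(W\cap V^{\perp})=\dim W+\dim V^{\perp}-\dim(W+V^{\perp})$; dim below is just an illustrative helper):

import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)
n, k = 7, 3                                        # dim V = 3, dim W = 4 (arbitrary choices)
V = rng.standard_normal((n, k))                    # columns form a basis of V
W = np.hstack([V, rng.standard_normal((n, 1))])    # V <= W with dim W = dim V + 1
Vperp = null_space(V.T)                            # orthonormal basis of V-perp

def dim(*bases):
    # dimension of the span of all the given basis columns
    return np.linalg.matrix_rank(np.hstack(bases))

print(dim(W) + dim(Vperp) - dim(W, Vperp))         # prints 1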

Condition for a rigid body in equilibrium on a horizontal surface

Posted: 22 Mar 2021 09:16 PM PDT

The following question is from a Further Mathematics past paper, and it has been bothering me immensely; I have spent hours trying to make sense of it. The question is in two parts.

Part 1:

An object consists of a uniform solid circular cone, of vertical height 4r and radius 3r, and a uniform solid cylinder, of height 4r and radius 3r. The circular base of the cone and one of the circular faces of the cylinder are joined together so that they coincide. The cone and the cylinder are made of the same material.

Find the distance of the centre of mass of the object from the end of the cylinder that is not attached to the cone.

I first roughly sketched the object as such

sketch

Then finding the $\bar{x}$ of each part separately:

For the cone I used the standard result of the centre of mass being $\frac{1}{4}r$ away from the base:

$$\bar{x}_{Cone}=\frac{1}{4}\cdot 4r + 4r=5r$$

For the cylinder I derived $\bar{x}$ via integration:

$$y=3r$$

$$\bar{x}_{cylinder}=\frac{\int_0^{4r} xdV}{\int_0^{4r}dV}$$

$$\because dV=y^2 \pi dx $$

$$\implies \bar{x}_{cylinder}=\frac{\int_0^{4r} 9r^2 \pi x dx}{\int_0^{4r}9r^2 \pi dx}$$

$$\implies \bar{x}_{cylinder}=\frac{9r^2 \pi \int_0^{4r} x dx}{9r^2 \pi\int_0^{4r} 1 dx}$$

$$\implies \bar{x}_{cylinder}=\frac{\frac{1}{2}\left[x^2 \right]^{4r}_0}{\left[x \right]^{4r}_0}$$

$$\therefore \bar{x}_{cylinder}=2r $$

Then, by taking the weighted average of the two parts, I arrived at the correct value for the distance of the centre of gravity from the base of the cylinder, which was

$$\bar{x}=\frac{11}{4}r$$
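
For completeness, the weighted-average step (using volumes as proxies for mass, since both parts are made of the same material) works out, if I follow the working correctly, as
$$\bar{x}=\frac{V_{\text{cyl}}\cdot 2r+V_{\text{cone}}\cdot 5r}{V_{\text{cyl}}+V_{\text{cone}}}=\frac{36\pi r^3\cdot 2r+12\pi r^3\cdot 5r}{48\pi r^3}=\frac{132r}{48}=\frac{11}{4}r,$$
with $V_{\text{cyl}}=\pi(3r)^2\cdot 4r=36\pi r^3$ and $V_{\text{cone}}=\frac{1}{3}\pi(3r)^2\cdot 4r=12\pi r^3$.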

However, the next part has had me at a complete loss for the better part of the day. It states:

Show that the object can rest in equilibrium with the curved surface of the cone in contact with a horizontal surface.

I tried coming up with a rough sketch for this also

sketch 2

However, I do not understand how to tackle this question at all. All I know is that, for a suspended body in equilibrium, the vertical line through the centre of mass must pass through the point of suspension; but clearly this does not give any insight into how to answer this.

The condition for this question given in the marking scheme is that

markscheme

Can someone explain what this means and why this is the case?

Finding Radon-Nikodym derivative given integral

Posted: 22 Mar 2021 09:18 PM PDT

Let $\lambda$ be the Gaussian measure on $\mathbb{R}$ defined by $$\lambda(E):=\int_{E}\frac{1}{\sqrt{2\pi}}e^{-x^2/2}dx.$$ Let $\operatorname{m}$ be Lebesgue measure on $\mathbb{R}$. What is the Radon-Nikodym derivative of $\operatorname{m}$ with respect to $\lambda$?

I've already shown that $\operatorname{m}$ is absolutely continuous with respect to $\lambda$, written $\operatorname{m}\ll\lambda$. As $f:=\frac{1}{\sqrt{2\pi}}e^{-x^2/2}$ is non-negative and both $\operatorname{m}$ and $\lambda$ are $\sigma$-finite measures on $\mathbb{R}$, I take it the Radon-Nikodym derivative of $\lambda$ with respect to $\operatorname{m}$ is

$$\frac{d\lambda}{d\operatorname{m}} = \frac{1}{\sqrt{2\pi}}e^{-x^2/2}\tag{1}.$$

Is this correct, and if so how do I then get the Radon-Nikodym derivative of $\operatorname{m}$ with respect to $\lambda$?
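
As a numerical sanity check of the reciprocal relationship one might guess between the two derivatives (a sketch assuming scipy; the interval $[a,b]$ is an arbitrary test choice):

import numpy as np
from scipy.integrate import quad

f = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)   # density of lambda w.r.t. m, as in (1)
g = lambda x: 1.0 / f(x)                                # candidate density of m w.r.t. lambda
a, b = -1.3, 2.7
# integrating g over [a, b] against lambda amounts to integrating g*f against m
val, _ = quad(lambda x: g(x) * f(x), a, b)
print(val, b - a)                                       # both approximately 4.0

If the reciprocal density is indeed the right candidate, the two printed numbers agree, since $\int_E \frac{1}{f}\,d\lambda=\int_E \frac{1}{f}\,f\,d\operatorname{m}=\operatorname{m}(E)$.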

Artin's proof of the generalized associative law

Posted: 22 Mar 2021 09:13 PM PDT

I am trying to understand Artin's proof of the generalized associative law. I've included a screenshot below.

[screenshot of Artin's proof of the generalized associative law]

Here are the questions I have on his proof:

  1. I am not completely sure what his base cases establish: they're part of the given definition. I assume what he means is that the proposition is 'we can define a unique product of $n$ elements for any $n$' and it's obvious that we can do that when $n=1$ (just the element itself) and when $n=2$ (because the law of composition is a well-defined function).

  2. The line "it does satisfy (iii) when $n=2$ confuses me because if $n=2$, I can only write $(a_1 a_2)$ without any kind of parentheses or any choice of $I$. The only thing I can say for certain is that (iii) definitely holds when $n=3$ because of the associative law.

  3. I assume that when he assumes the product is defined for a product of $r$ elements where $r \leq n-1$, he is assuming that $n \geq 2$, because the base cases covered up through $n=2$. Is this right?

  4. In the inductive step, I don't understand why he is allowed to "define" the product of $n$ elements. By the inductive hypothesis, $[a_1 \cdots a_{n-1}]$ and $[a_n]$ are already defined, but I'm trying to prove that the product of $n$ elements is well-defined, i.e., that the notation $[a_1 \cdots a_n]$ is valid. Why is he allowed to define it in this way?

  5. I don't understand his point on uniqueness: "if a product satisfying (iii) exists, then this formula gives the product because it is (iii) when $I=n-1$. So if it exists, the product is unique." I get that this is indeed (iii) with $I=n-1$ (though that still seems artificial to me, because he defined it that way), but why does this imply uniqueness? Doesn't this require that every line of the equation chain generates a unique product?

$\int_{0}^{1} \frac{x^{2n+2}}{x^2-2x+2} dx$

Posted: 22 Mar 2021 09:12 PM PDT

$$\int_{0}^{1} \frac{x^{2n+2}}{x^2-2x+2}\,dx \tag{1}$$

This integral came up as a result of another integral I was trying to evaluate. Just for recreational purposes. Is there some closed form for this, even in terms of special functions? In general, is it possible to evaluate integrals of the form:

$$\int_{0}^{1} \frac{P(x,x^n)}{Q(x)}\,dx \tag{2}$$

with $\deg(P) > \deg(Q)$? For example:

$$\int_{0}^{1} \frac{x^n}{1+x^2}\,dx \tag{3}$$

can be written in terms of the digamma function. If I'm not mistaken, $(1)$ does not follow from $(3)$, because if you try to complete the square in the denominator of $(1)$, you end up having to perform a $u$-substitution with $u = x-1$ and then a $(u+1)^{2n+2}$ appears in the numerator. This ruins things. Can someone give some help?
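
Not an answer, but small cases of $(1)$ can be probed symbolically to look for a pattern (a sketch assuming sympy):

import sympy as sp

x = sp.symbols('x')
for n in range(4):
    # the integrand of (1) for a specific n; sympy integrates rational functions exactly
    val = sp.integrate(x**(2*n + 2) / (x**2 - 2*x + 2), (x, 0, 1))
    print(n, sp.simplify(val))

The small cases come out in terms of $\log 2$ and $\pi$ (from the $\arctan$ part of the partial fractions), which at least suggests what a closed form in $n$ would have to involve.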

non-constant harmonic function on simply connected domain does not achieve maximum in the interior of the domain

Posted: 22 Mar 2021 09:06 PM PDT

Let $u$ be a harmonic function on a simply connected domain $\Omega$. Prove that $u$ does not achieve maximum in $\Omega$.

What I have shown is that for any $u$ harmonic function on $\Omega$, $u(z)=\log|f(z)|$ for a function $f$ holomorphic in $\Omega$. I did this by finding the harmonic conjugate $v(z)$ of $u$ and let $f(z)= e^{u(z)+iv(z)}$.

Assume that $u$ achieves a maximum in $\Omega$; then $\log|f(z)|$ achieves a maximum in $\Omega$, and since $\log$ is an increasing function, $|f|$ achieves a maximum in $\Omega$. Then I can use the Maximum Modulus Principle + Identity Principle to show that $f$ is constant and hence $u$ is constant. However, I am unable to proceed here. Either the question is missing the assumption that $u$ is non-constant, or my approach is wrong.

I am unsure where to start with this question and how to go about finding the answer?

Posted: 22 Mar 2021 09:05 PM PDT

How can I understand the 2nd solution of this equation?

Posted: 22 Mar 2021 09:05 PM PDT

This is a follow-up question to the trig-equation problem I posted earlier. I'm trying to solve these two equations: $$ \cos\left(\frac{\gamma'}{2}\right) = g_0\qquad e^{i\beta'}\sin\left(\frac{\gamma'}{2}\right) = g_1 $$ where $g_0$ is real and $g_1$ is complex. From the earlier discussions, we can find that $$ \gamma'=2\cos^{-1}(g_0),\qquad e^{i\beta'}=\pm\frac{g_1}{\sqrt{1-g_0^2}},\qquad\beta'=-i\log\frac{g_1}{\sqrt{1-g_0^2}}+k\pi,\ \text{where}\ k\in\mathbb{Z}$$ However, I'm still having trouble understanding the solution when $k$ is an odd number, say when it equals $1$. I'm trying to verify the solution using Mathematica:
Given that

g_0 = 0.8974187611347583, g_1= -0.33540402107889095+0.28660723962849705*I

We can find $\gamma' = 0.913825694123124$, $\beta_1'= 2.434484141132717$, $\beta_2'=5.5760767947225105$ (they differ by $\pi$).

However, when I tried to use Mathematica to verify the solutions, only $\beta_1'$ and $\gamma'$ work, by which I mean $e^{i\beta'}\sin\left(\frac{\gamma'}{2}\right)-g_1=0$.

When I plug $\beta_2', \gamma'$ into the equation, it returns 0.670808 - 0.573214 I. Why doesn't the second answer work? Thanks a lot for the help!

PS: This is what I tried: Exp[I*5.5760767947225105]* Sin[0.913825694123124/2] - (-0.33540402107889095 + 0.28660723962849705*I) // Simplify
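
For what it's worth, the value Mathematica returns in the odd-$k$ case is exactly $-2g_1$: since $\gamma'/2=\cos^{-1}(g_0)\in[0,\pi]$ we have $\sin(\gamma'/2)=\sqrt{1-g_0^2}$, and $e^{i\beta_2'}=e^{i(\beta_1'+\pi)}=-e^{i\beta_1'}$, so $e^{i\beta_2'}\sin(\gamma'/2)=-g_1$ and the tested expression evaluates to $-g_1-g_1=-2g_1\approx 0.670808-0.573214\,i$, matching the printed output. So the odd-$k$ values of $\beta'$ flip the sign of the left-hand side and do not satisfy the original (unsquared) equation.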

Definable well-ordered class relations

Posted: 22 Mar 2021 08:56 PM PDT

Suppose we a working with an axiomatic set theory such as $\bf ZFC$. Formulae with 2 free variables define class relations. "Small" class relations can be turned into sets of ordered pairs, the others are proper classes. Some formulae define relations that are well-orders. A "small" such relation can be turned into a well-ordered set that is order-isomorphic to some ordinal $\alpha\in\bf Ord$. Proper class relations also be can well-orders, e.g. a formula $\phi(x,y)$ saying that $x$ and $y$ are both ordinals and $x\in y$ defines the usual well-order on the class of ordinals $\bf Ord$.

Now, we can also write a formula that well-orders all ordinals except $\bf 0$ in the usual way, but considers $\bf 0$ as larger than all of them. Apparently, it gives us an order type that is "longer" than $\bf Ord$, and can be denoted as $\bf Ord+\bf1$. In a similar way, formulae having even "longer" order types such as $\bf Ord+\bf Ord$, etc. can be written. These order types are not sets existing in the universe, but rather, informal equivalence classes of formulae defining well-ordered class relations under (provable) order isomorphism. But, still, we can think of them as very long "ordinals" that occur when we count beyond all set-sized ordinals and even beyond the proper-class-sized "ordinal" $\bf Ord$. I'm interested in how far those "ordinals" can extend, and whether there is some systematic or formal way to study them, or develop ordinal arithmetic on them. Perhaps, a different axiomatic theory, rather than $\bf ZFC$, would be more suitable for that?

Can I simulate bubble shuffle to generate permutations in better than O(n^2) time?

Posted: 22 Mar 2021 09:14 PM PDT

bubbleShuffle(arr, p) is identical to bubbleSort except that, instead of comparing adjacent elements, we swap each adjacent pair with probability p. This generates biased permutations; e.g., for most values of p, elements don't move far from their original positions.

I would like to be able to simulate bubbleShuffle in better than O(n^2) time, i.e., to get a permutation where elements shift in a manner similar or identical to what bubbleShuffle produces.

import random

def bubbleShuffle(arr, p):
    n = len(arr)
    # Traverse through all array elements
    for i in range(n):
        # Last i elements are already fixed
        for j in range(0, n - i - 1):
            # Traverse the array from 0 to n-i-1; swap with probability p
            if random.random() < p:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]

e.g., based on about a million runs, average values for [0, 1, ..., 19] after being bubble shuffled with p = 0.1, 0.5, and 0.9 are:

p = 0.1: [1.16, 1.49, 2.13, 2.96, 3.91, 4.89, 5.90, 6.88, 7.89, 8.89, 9.89, 10.89, 11.89, 12.89, 13.89, 14.89, 15.89, 16.89, 17.89, 18.89]
p = 0.5: [4.01, 4.01, 4.14, 4.40, 4.78, 5.28, 5.89, 6.59, 7.38, 8.23, 9.13, 10.07, 11.04, 12.01, 13.01, 14.01, 15.00, 16.00, 17.00, 18.00]
p = 0.9: [10.39, 10.06, 9.76, 9.49, 9.27, 9.08, 8.92, 8.82, 8.75, 8.73, 8.75, 8.83, 8.94, 9.11, 9.33, 9.60, 9.92, 10.30, 10.73, 11.22]

What I've tried: It's easy to run this for smallish values of n and get the average original index landing in each position, so for any scheme, it's easy to test it based on that.
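
To make that testing idea concrete, here is a minimal sketch of such a harness (assuming Python and the bubbleShuffle function above; average_positions is just an illustrative name):

def average_positions(shuffle, n=20, p=0.5, runs=100_000):
    # average original value landing in each position, as in the tables above
    totals = [0] * n
    for _ in range(runs):
        arr = list(range(n))
        shuffle(arr, p)            # any candidate scheme with the same (arr, p) signature fits here
        for pos, value in enumerate(arr):
            totals[pos] += value
    return [t / runs for t in totals]

# e.g. compare a candidate fast scheme's averages against bubbleShuffle's:
print(average_positions(bubbleShuffle, p=0.1, runs=10_000))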

The last index only gets changed once, so we can simulate it accurately as being swapped with the element log_base_p(1 - random.random()) positions to the left. The same is true for prior indices, but this fails to be a good simulation because it doesn't reflect the way earlier elements of the array have already moved around.

For small values of n, we can solve for an array of p's to use at each position that will mimic the average original positions. The last value is always the input p, the next-to-last is somewhat larger, and so on. This works reasonably well, but doesn't solve the problem since it uses bubbleShuffle.

Can anyone suggest either a way to improve my current approach, or an alternative approach?

Proving the convergence of $\log(n+2)/\log(n+1)$

Posted: 22 Mar 2021 09:16 PM PDT

I am currently trying to prove that $\log(n+2)/\log(n+1)$ converges to $1$ using analysis tools.

It suffices to show that $\forall \varepsilon >0$, $\exists N$ such that $$\left|\frac{\log(n+2)}{\log(n+1)}-1\right|<\varepsilon$$ whenever $n\geq N$.

I tried using the integral definition of the log, along with $1=\log(n+1)/\log(n+1)$: $$ \left| \frac{\int^{n+2}_1\frac{du}{u}-\int^{n+1}_{1}\frac{du}{u}}{\int^{n+1}_1\frac{du}{u}}\right| $$ which I suppose simplifies to $$ \left| \frac{\int^{n+2}_{n+1}\frac{du}{u}}{\int^{n+1}_1\frac{du}{u}}\right| $$ but I'm not sure where to go from here, or even whether this is the right direction. Or is proving that the sequence is Cauchy a better approach?
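
One possible way to continue from here (a sketch, not necessarily the intended route): on $[n+1,n+2]$ we have $\frac{1}{u}\le\frac{1}{n+1}$, so
$$\left|\frac{\log(n+2)}{\log(n+1)}-1\right|=\frac{\int_{n+1}^{n+2}\frac{du}{u}}{\int_{1}^{n+1}\frac{du}{u}}\le\frac{1}{(n+1)\log(n+1)},$$
and the right-hand side can be made smaller than any $\varepsilon>0$ by taking $n\geq N$ with, say, $N\ge e^{1/\varepsilon}$, so no Cauchy argument seems necessary.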

Why does $1+\left(\frac34+\frac1x+\frac1{4(x+1)}\right)^2=\left(1+\frac1x+\frac{x}{4(x+1)}\right)^2$?

Posted: 22 Mar 2021 09:05 PM PDT

I am calculating arc length, and the solution to a homework problem I have contains an algebraic step that I cannot figure out. $$1+\left(\frac34+\frac1x+\frac1{4(x+1)}\right)^2=\left(1+\frac1x+\frac{x}{4(x+1)}\right)^2$$ I entered the equation into Wolfram Alpha, so I know it is true, but neither I nor my teacher can figure out why this is the case. Any help would be appreciated, thank you.
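
A quick symbolic confirmation, independent of Wolfram Alpha (a sketch assuming sympy):

import sympy as sp

x = sp.symbols('x', positive=True)
lhs = 1 + (sp.Rational(3, 4) + 1/x + 1/(4*(x + 1)))**2
rhs = (1 + 1/x + x/(4*(x + 1)))**2
print(sp.simplify(lhs - rhs))   # prints 0, confirming the identity

As for the "why": identities of this shape typically come from the algebraic fact that $1+(A-B)^2=(A+B)^2$ whenever $4AB=1$. Here $A=\frac{x+1}{x}$ and $B=\frac{x}{4(x+1)}$ satisfy $4AB=1$, and one can check that $A-B$ and $A+B$ are exactly the two bracketed expressions.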

How to find nonhomogeneous solutions of the following PDE: $b^2d_{,xx}+d-1=0$?

Posted: 22 Mar 2021 09:05 PM PDT

I have a question about the solutions for the following PDE. $$ b^2d_{,xx}+d-1=0, \text{where}\quad x,b\in \mathbb{R} \quad\text{and} \quad b>0 $$

It is easy to verify that $d=1$ is a homogeneous solution and that $d(x)=1-\sin \left(\frac{|x|}{b}\right)$ is also a solution, which is nonhomogeneous. The problem is that when I use COMSOL Multiphysics to solve this equation, I can only get the homogeneous solution and by no means the nonhomogeneous one. I can see that the nonhomogeneous solution is not smooth at $x=0$ (its derivative jumps there), which may be the reason why I can't get it. Is there any way to get this solution numerically? Thanks!
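
As a sanity check on the candidate solution away from the kink (a sketch assuming sympy; only the $x>0$ branch of $|x|$ is checked):

import sympy as sp

x, b = sp.symbols('x b', positive=True)
d = 1 - sp.sin(x / b)                                  # the x > 0 branch of 1 - sin(|x|/b)
print(sp.simplify(b**2 * sp.diff(d, x, 2) + d - 1))    # prints 0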

equation transform operation using exponential

Posted: 22 Mar 2021 08:51 PM PDT

I have a very simple question to ask. The answer may be middle or high school math, but since I have been working for more than 10 years, I have forgotten all of my middle and high school math. The question is: if $a=b$, is $2^{a} = 2^{b}$? Is this equation correct?

Thanks

John

Inequalities with multiple variables : Find out the range of another variable

Posted: 22 Mar 2021 08:53 PM PDT

Given that $-1 \le v \le 1$, $-2 \le u \le -0.5$, and $-2 \le z \le -0.5$, and that $w=\frac{vz}{u}$, what is the range of values for $w$?

I solved this question by trial and error, i.e. by plugging values of $v$, $u$, and $z$ into the equation $w=\frac{vz}{u}$ to find the maximum and minimum of the function. But it feels like a tedious method and not a reliable one.

Can someone please help me with a better solution or approach for this question?
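
For what it's worth, the trial-and-error can at least be automated with a coarse grid (a sketch assuming numpy; a grid only samples the range, it does not prove the bounds):

import numpy as np

v = np.linspace(-1, 1, 101)
u = np.linspace(-2, -0.5, 101)
z = np.linspace(-2, -0.5, 101)
V, U, Z = np.meshgrid(v, u, z, indexing='ij')
W = V * Z / U
print(W.min(), W.max())   # -4.0 and 4.0 on this grid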

How to find $AM$ which is a distance between two points in a mountain when looked through a binoculars?

Posted: 22 Mar 2021 08:50 PM PDT

The problem is as follows:

The figure below shows a cartoon which represents the view of a pair of mountains when looked at through binoculars. Assume that, according to the inner ruler of the binoculars, you can spot $BM=MN=4\,cm$ and $NC=6\,cm$, and the inner goniometer indicates that $\angle ANC=\angle AMB=90^\circ$ and $\angle BAM=\angle MAC$. Also it is known that $A,\,M,\,N$ are collinear. With this information find the actual distance between $A$ and $M$, assuming the scale is $\frac{1}{1000}$. Take into account that $\triangle ABM$ and $\triangle ANC$ are triangles.

Sketch of the problem

The choices given in my book are as follows:

$\begin{array}{ll} 1.&\textrm{40 m}\\ 2.&\textrm{50 m}\\ 3.&\textrm{60 m}\\ 4.&\textrm{80 m}\\ \end{array}$

I don't know exactly what to do in this problem. Can someone help me here? I'm not sure whether I should use the given relations in a right triangle or something else. It would be nice if someone could help me with how to use the given drawing to assign the angles and find the requested length $AM$. Should congruence be used here, or would it be similarity of triangles?

Help please! I'm lost.
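
For what it's worth, one possible reading of the figure (which I cannot see) uses only the two right triangles and the equal angles at $A$:
$$\tan\angle BAM=\frac{BM}{AM}=\frac{4}{AM},\qquad \tan\angle NAC=\frac{NC}{AN}=\frac{6}{AM+MN}=\frac{6}{AM+4}.$$
If $\angle BAM=\angle MAC=\angle NAC$, the two tangents are equal, so $4(AM+4)=6\,AM$, giving $AM=8\,cm$ on the ruler, which the $\frac{1}{1000}$ scale turns into $80\,\textrm{m}$.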

Conditional Probability for mutually exclusive events - What is the probability $P(X_j = 1 \ \vert \ \text{Only one of } X_i=1)$

Posted: 22 Mar 2021 08:50 PM PDT

Suppose I have $n$ random variables $X_1, X_2, X_3, ..., X_n$ such that $X_i =\begin{cases} 1 & p_i \\[5pt] 0 & 1-p_i \end{cases}$.

I would like to find the probability that $$P(X_j = 1 \ \vert \ \text{Only one of } X_i=1)$$

This is an abstraction of a likelihood function$^1$ I am trying to find, but you can also think of it a bit like a lottery with only one prize, where everyone has different chances of winning.

My source tells me it's $$\frac{p_j}{\sum_{i=1}^n p_i} \tag{1}$$

but I can't quite understand how they arrived at this.

I feel like it should be more like $$\frac{\frac{p_j}{1-p_j}}{\sum_{i=1}^n \frac{p_i}{1-p_i}} \tag{2}$$

however the result is quite a famous one (Cox's Partial Likelihood function) so I'm pretty sure the answer should be $(1)$.

Is there some interpretation of the fractions $\frac{p_{j}}{1-p_j}$? Are they some sort of rescaled probabilities?

The actual context uses conditional densities $f_{X_i \vert X_i \ge x} (x)$, not probabilities. Might that fix the matter?

1 Cox regression, partial likelihood function, Equation $(12)$
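
Under the extra assumption that the $X_i$ are independent Bernoulli$(p_i)$ (which the post does not state), a quick Monte Carlo makes it easy to see which of $(1)$ and $(2)$ matches that particular model (a sketch assuming numpy; the probabilities are arbitrary test values):

import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.1, 0.3, 0.5, 0.7])           # arbitrary example probabilities
j = 2                                        # index whose conditional probability we estimate
X = rng.random((1_000_000, p.size)) < p      # independent Bernoulli(p_i) samples
only_one = X.sum(axis=1) == 1                # the conditioning event: exactly one X_i = 1
estimate = X[only_one, j].mean()

odds = p / (1 - p)
print(estimate)                              # empirical conditional probability
print(p[j] / p.sum())                        # formula (1)
print(odds[j] / odds.sum())                  # formula (2)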

I do not understand this particular step in a proof using the Well Ordering Principle

Posted: 22 Mar 2021 09:01 PM PDT

Below is a proof using the Well Ordering Principle. I get lost starting at $(13)$...

$$ \begin{aligned} P(c) &:=c^{3} \leq 3^{c} \\ & \equiv c^{3} \leq 3(c-1)^{3} \\ & \equiv c \leq \sqrt[3]{3} \times(c-1) \end{aligned} $$

I don't understand how we get $c^{3} \leq 3(c-1)^{3}$ from $c^{3} \leq 3^{c}$. Why does the right-hand side of the inequality change like that? Image of proof

Please give me a proof/explanation for this statement ...

Posted: 22 Mar 2021 08:50 PM PDT

If each element of the first row of a determinant consists of an algebraic sum of $p$ elements, the second row consists of an algebraic sum of $q$ elements, the third row consists of an algebraic sum of $r$ elements, and so on, then the given determinant is equal to the sum of $p \times q \times r \times \cdots$ other determinants, in each of which the elements consist of single terms.

Is this saying that, if I take a $2\times2$ determinant, then the value of the determinant will be $(p\times q)+(p\times q)$? Will each column be multiplied and added?
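
If I am reading the statement correctly, the smallest concrete case is a $2\times 2$ determinant with $p=q=2$: writing each row as a sum of two row vectors and using linearity of the determinant in each row,
$$\begin{vmatrix} a_1+a_2 & b_1+b_2\\ c_1+c_2 & d_1+d_2\end{vmatrix} = \begin{vmatrix} a_1 & b_1\\ c_1 & d_1\end{vmatrix} + \begin{vmatrix} a_1 & b_1\\ c_2 & d_2\end{vmatrix} + \begin{vmatrix} a_2 & b_2\\ c_1 & d_1\end{vmatrix} + \begin{vmatrix} a_2 & b_2\\ c_2 & d_2\end{vmatrix},$$
that is, a sum of $p\times q=4$ determinants, each of whose entries is a single term.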

What's the maximum and minimum of $\sin Z$ and $\cos Z$ where $Z$ is a complex number?

Posted: 22 Mar 2021 09:05 PM PDT

I see that a lot of people try solving something like $\sin Z=2$, which breaks the maximum of $\sin x$. Now I wonder whether there is a maximum and minimum of $\sin Z$ and $\cos Z$ if $Z$ is a complex number.
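
For a rough sense of scale (a standard identity, not a complete answer): along the imaginary axis
$$\sin(iy)=i\sinh y,\qquad \cos(iy)=\cosh y,$$
so $|\sin Z|$ and $|\cos Z|$ are already unbounded there, growing like $e^{|y|}/2$.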

Probability of drawing at least J cards out of I cards of interest from a deck of N cards within D draws with replacement

Posted: 22 Mar 2021 09:16 PM PDT

Given a regular deck of 52 playing cards, if you draw from the deck 10 times with replacement, what is the probability that you have drawn at least 3 different aces (any of the 4 aces) at least once each? And how does this generalize to N playing cards, D draws, I cards that we are interested in, and J of those cards of interest that we must each draw at least once?

I came across a simpler version of this problem while working on a hobby project trying to figure out some statistics related to a Pokémon-themed Discord bot gacha game, but I was intrigued by this variant after I found a few unanswered Math StackExchange posts about it (1, 2). Unfortunately those posts didn't get any attention, so I figured I would make a post with the additional information that I found.

Formalization

I was able to formalize the problem as $f(n_1, n_2, k_1, k_2, p_1)$, where:

  • $n_1$ is the number of cards drawn
  • $n_2$ is the number of cards that we are interested in
  • $k_1$ is the minimum number of times we want to draw each card we are interested in
  • $k_2$ is the minimum number of cards of interest that we want to draw at least $k_1$ times each
  • $p_1$ is the probability of drawing a specific card

My attempts so far

Binomial distribution of binomial distributions

I made some initial attempts to solve it by modeling a binomial distribution where the events were obtaining at least 1 of a card of interest in $n_1$ draws (each event itself being a binomial probability). But I validated that against some simulation code that I wrote, and the formulation came up with different results, which I think is because the events are not independent (drawing one particular card in a set of draws affects the probability of drawing at least one of another particular card in the same set of draws).

I think that approach should give an upper bound on the answer for the problem. It models drawing at least $k_1$ copies of a card we are interested in by using a binomial distribution. It then (incorrectly) assumes that we can model that for each card we are interested in drawing as a binomial distribution (not true, since those probabilities are not independent).

But I think the lack of independence is always destructive in this case (drawing a particular card prevents you from drawing any other card in that same draw), so the actual probability should be upper-bounded by this formula. But I'm not 100% sure about that.

$$ p_2 = \sum_{k=k_1}^{n_1} \binom{n_1}{k} \, p_1^{k} \, (1 - p_1)^{n_1 - k} $$ $$ \sum_{k=k_2}^{n_2} \binom{n_2}{k} \, p_2^{k} \, (1 - p_2)^{n_2 - k} $$
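
A minimal scipy transcription of the two displayed sums (binom.sf is the upper-tail "survival function"; two_stage_upper_bound is just an illustrative name, and the same independence caveat as above applies):

from scipy.stats import binom

def two_stage_upper_bound(n1, n2, k1, k2, p1):
    # P(one particular card of interest appears at least k1 times in n1 draws)
    p2 = binom.sf(k1 - 1, n1, p1)
    # treat the n2 cards of interest as if independent (the known flaw discussed above)
    return binom.sf(k2 - 1, n2, p2)

print(two_stage_upper_bound(10, 4, 1, 3, 1/52))   # ~0.0191, matching the calculate output below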

Multinomial distribution

After looking around on the internet for similar problems or approaches that might work I learned about multinomial distributions. I think you can try modelling the card drawing with replacement as a multinomial distribution and then sum the results of the probability distribution function with the different input sets that make up all the possible winning sets of draws, where there is probably a trick to group those sets into categories of equal probability.

This is my current best guess as to a framework for a solution that might work, but I haven't worked out the details on it and tested it against the simulation yet.

Simpler variant

Before I came across this variant of the problem, in my hobby project I came across the simpler case where $k_2 = n_2$ and $k_1 = 1$, the "collect all of the Pokémon at least once" problem. Unfortunately I haven't solved this question yet either.

Starting with solving this simpler variant might be useful in finding a solution for the more general case. But if we do find a solution for the general case, it should work for this specific case as well.

Simulation code

I posted the simulation code I wrote on a GitHub gist at the link below. You can run it by providing $n_1$, $n_2$, $k_1$, $k_2$, and the number of total cards in the deck. You can also optionally provide an RNG seed and a number of iterations (number of draws of $n_1$ cards to simulate).

$ python sim.py simulate 10 4 1 3 52 --seed 42
Namespace(command='simulate', iterations=100000, k_1=1, k_2=3, n_1=10, n_2=4, seed=42, total=52)
100%|████████████████████████████████| 100000/100000 [00:01<00:00, 95673.52it/s]
estimated_probability = 0.01499

The script also includes a mode to run the "binomial distribution of binomial distributions" approach that I think upper bounds the result.

$ python mathoverflow_answer.py calculate 10 4 1 3 52
p_1 = 0.019230769230769232
p_exactly = 0.16147234355824405
p_2 = 0.17649104785295508
result = 0.019079344697167273

Internet research

While working on this problem I did some searching online to try to find similar problems or potential approaches for solutions. Here are some links I found that seemed potentially relevant.

  • Math StackExchange posts with the same question that went unanswered. (1, 2)
  • Resources related to cumulative distribution functions of multinomial distributions.
    • A page from the book "Chemical Process Dynamics and Controls", which includes a section 2.2 that gives a cumulative distribution function for multinomial distributions. (3)
    • A 1981 paper on a representation for a multinomial cumulative distribution function, which I don't really understand yet. (4)

Conclusion

Above are my thoughts on and current attempts at solving this problem. My current best lead is to try something with multinomial distributions, but I haven't been able to solidify and test that approach yet.

Does anyone have thoughts on how to go about solving this problem? Does this approach seem reasonable? Are there better approaches that yield a solution to this problem?

Does the matrix product $X^T X$ have a special meaning?

Posted: 22 Mar 2021 08:54 PM PDT

I have come across this specific matrix product several times, lately in the context of stochastic linear models, where it is an integral part of the Least Squares Estimator (LSE). Often in linear algebra there is some beautiful intuition hidden behind recurring formulae, and since I don't see the one behind this one, I'm asking for help.

Is there a geometric interpretation or special meaning for the matrix product

$$X^T X$$

for a matrix $X \in \mathbb{R}^{n \times p}$ with $p \leq n$?
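
One concrete handle, for what it's worth: $X^T X$ is the Gram matrix of the columns of $X$, i.e. its $(i,j)$ entry is the inner product of columns $i$ and $j$, so it records the squared lengths of, and angles between, the column vectors. A small numpy illustration (not an answer in itself):

import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 3))                        # n = 6, p = 3
G = X.T @ X                                            # p x p

print(np.allclose(G[1, 2], X[:, 1] @ X[:, 2]))         # entry (1,2) = <column 1, column 2> -> True
print(np.allclose(np.diag(G), (X**2).sum(axis=0)))     # diagonal = squared column norms    -> True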

If $\sqrt{x} = -1$, then what is $x$?

Posted: 22 Mar 2021 09:17 PM PDT

If $\sqrt{x} = -1$, then what is $x$?

Solve the question for a 10th-standard student, without the use of higher-level mathematics; you can use complex number concepts.

Existence for the vectors with the same length and isotropic position.

Posted: 22 Mar 2021 09:17 PM PDT

Suppose that $v_1,\ldots,v_m\in\mathbb{R}^n$ satisfying $$\sum_{i=1}^m v_iv_i^T=I_n$$ and $$\|v_i\|^2=\frac{n}{m}$$ for each $i$.

Do the vectors $\{v_i\}_{1\leq i\leq m}$ always exist? If not, what's the condition for the existence?

Note that $[v_1,\ldots,v_m]$ can be viewed as a submatrix obtained by extracting $n$ rows from an orthogonal matrix, and each column has the same length.
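
Not an answer to the general existence question, but one small family where both conditions hold is $n=2$, $m\ge 3$, with $v_i=\sqrt{2/m}\,\bigl(\cos(2\pi i/m),\sin(2\pi i/m)\bigr)$; a quick numpy check of that case (a sketch, with $m=7$ chosen arbitrarily):

import numpy as np

n, m = 2, 7
theta = 2 * np.pi * np.arange(m) / m
V = np.sqrt(2 / m) * np.column_stack([np.cos(theta), np.sin(theta)])   # rows are the v_i

print(np.allclose(V.T @ V, np.eye(n)))            # sum_i v_i v_i^T = I_n     -> True
print(np.allclose((V**2).sum(axis=1), n / m))     # ||v_i||^2 = n/m for all i -> True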

Finding the sum of possible values

Posted: 22 Mar 2021 08:55 PM PDT

$14$ students have taken part in a math olympiad. Every problem in that olympiad is solved by exactly $7$ students. Furthermore, the first $13$ students have solved $10$ problems each. If the $14$th student has solved exactly $k$ problems, then what is the sum of all possible values of $k$?

I have managed to obtain the Diophantine equation $130+k=7x$, where $x$ is the total number of questions on the question paper, but I have been stuck determining the sum of the possible values of $k$.

What are the mechanics of the union-intersection along a sequence of sets?

Posted: 22 Mar 2021 08:51 PM PDT

This question came about after I got stuck trying to understand this answer.

I don't understand why, as $n\to \infty$, for a countable family of sets $(A_n)_{n\geq 0}$ the operation

$$\bigcap_{n\geq 0} \bigcup_{k \geq n} A_k $$

depends on whether the sequence is monotone or not.

It is likely that I don't understand the symbols, or that I am just too dumb. Let's see... Should I read this as a composition of operations? If so, when I start from the inside, i.e.

$$\bigcup_{k \geq n} A_k$$

I necessarily end up with all the elements of all the sets combined into a single set. At that point this entire set, containing all possible elements of any of the sets $A_k$, is intersected with all of the sets $A_k$, looking for elements that are in all sets. Is that it? If this is correct, then why does the monotonicity of the sequence matter? Can I get an example?


Post-mortem:

For a finite collection of sets:

$$ \begin{align} A_0&=\{a,b\}\\ A_1&=\{a,x\}\\ A_2&=\{a,x,y\}\\ A_3&=\{a,c,d,y\}\\[3ex] \bigcap_{n\geq 0} \bigcup_{k \geq n} A_k &=\{a,b,x,y,c,d\}\cap\{a,x,y,c,d\}\cap\{a,x,y,c,d\}\\ &=\{a,x,y,c,d\}\\[3ex] \bigcup_{n\geq 0} \bigcap_{k \geq n} A_k &=\{a\}\cup\{a\}\cup\{a,y\}=\{a,y\} \end{align} $$

In pseudo-code, $\bigcap_{n\geq 0} \bigcup_{k \geq n} A_k$ is:

for $n$ in $0$ to $\infty:$

$\quad B_n \leftarrow A_n \cup A_{n+1} \cup A_{n+2} \cup \cdots$

$B_0 \cap B_1 \cap B_2 \cap \cdots$

The pictorial intuition is that, for an element $\color{blue}\spadesuit$ to be in $\bigcap_{n\geq 0} \bigcup_{k \geq n} A_k$, it has to keep on appearing: no matter how far we go out, it shows up in some later set, so it is part of every $B_n$ and hence of the intersection:

$$\{\cdots\},\cdots,\{\cdots\},\{\cdots\},\{\cdots,\color{blue}\spadesuit,\cdots\},\{\cdots\},\{\color{blue}\spadesuit,\cdots\},\cdots,\{\cdots\},\{\cdots,\color{blue}\spadesuit,\cdots\},\cdots$$

On the other hand, $\bigcup_{n\geq 0} \bigcap_{k \geq n} A_k $ is more demanding, requiring that $\color{blue}\spadesuit$ appears consistently after a certain point:

$$\{\cdots\},\cdots,\{\cdots\},\{\cdots\},\{\cdots,\color{blue}\spadesuit,\cdots\},\{\cdots,\color{blue}\spadesuit\},\{\color{blue}\spadesuit,\cdots\},\cdots,\{\color{blue}\spadesuit,\cdots\},\{\cdots,\color{blue}\spadesuit,\cdots\},\cdots$$
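
To make the "keeps reappearing" versus "appears from some point on" distinction concrete, here is a small finite-truncation experiment (a sketch assuming Python; a finite truncation only approximates the two limits, so the tails are kept much longer than the range of $n$):

# two toy families: an alternating one and a monotone (eventually constant) one
N, n_max = 200, 50
A = [{0} if k % 2 == 0 else {1} for k in range(N)]      # alternates forever
C = [set(range(min(k, 10) + 1)) for k in range(N)]      # increases, then stays at {0,...,10}

def tail_union(sets, n):        return set.union(*sets[n:])
def tail_intersection(sets, n): return set.intersection(*sets[n:])

for fam, name in [(A, "alternating"), (C, "monotone")]:
    limsup = set.intersection(*[tail_union(fam, n) for n in range(n_max)])
    liminf = set.union(*[tail_intersection(fam, n) for n in range(n_max)])
    print(name, limsup, liminf)
# alternating: limsup = {0, 1} but liminf = set()  -- 0 and 1 keep reappearing, neither stays for good
# monotone:    limsup = liminf = {0, ..., 10}      -- for a monotone family the two coincide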

An ellipse inside a heptagon

Posted: 22 Mar 2021 08:51 PM PDT

Can you provide a proof for the following claim:

Claim. Construct a convex heptagon circumscribed about an ellipse. Intersection points of its non-adjacent longer diagonals lie on a common ellipse.

[figure: convex heptagon circumscribed about an ellipse, with the diagonal intersection points]

GeoGebra applet that demonstrates this claim can be found here.

I have noticed that the principal diagonals of the hexagon defined by the points $G$, $FG \cap ED$, $D$, $C$, $B$, $A$ concur at the point $P$. The same is true for the points $Q,R,S,M,N,O$ as well. So I guess we should apply Brianchon's theorem somehow.

$\sum_{n=1}^{\infty} \frac{n^2}{(1+n^3)^p}$, find values of p

Posted: 22 Mar 2021 09:00 PM PDT

I have the sum $\sum_{n=1}^{\infty} \frac{n^2}{(1+n^3)^p}$ with $p>1$. The task is to use the integral test to determine for which values of $p$ the partial sum $s_5$ estimates the true sum $s$ to within a $10^{-6}$ error. So this is a question about error bounds. Since we are using $s_5$, the error formula is $\text{error} \leq \int_{5}^{\infty} \frac{x^2}{(1+x^3)^p}\,dx$. So I compute the integral and solve for $p$? Since I need a maximum error of $10^{-6}$, I set $\int_{5}^{\infty} \frac{x^2}{(1+x^3)^p}\,dx \le 10^{-6}$, and finally I get $p \approx 3.246$.
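
For reference, the integral in the error bound can be done in closed form (substituting $u=1+x^3$, $du=3x^2\,dx$):
$$\int_{5}^{\infty}\frac{x^2}{(1+x^3)^p}\,dx=\left[\frac{(1+x^3)^{1-p}}{3(1-p)}\right]_{5}^{\infty}=\frac{126^{\,1-p}}{3(p-1)}\qquad(p>1),$$
so the condition to solve is $\dfrac{126^{\,1-p}}{3(p-1)}\le 10^{-6}$.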

Identity for the sum of products of Sinc functions

Posted: 22 Mar 2021 09:03 PM PDT

The sinc function is defined as follows: $$\mathrm{sinc}(t) = \begin{cases} \frac{\sin(\pi t)}{ \pi t} & \mathrm{if} \quad t \neq 0, \\ 1 & \mathrm{otherwise.} \end{cases}$$ I want to show the following identity: $$\sum_{n=-\infty}^\infty \mathrm{sinc}(2Bt-n)\, \mathrm{sinc}(2Bs-n) = \mathrm{sinc}(2B(t-s)),$$ where $B > 0$. I have tried the Poisson summation formula and a Fourier series approach, but they didn't work.
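
Not a proof, but the identity is cheap to check numerically by truncating the sum (a sketch assuming numpy, whose np.sinc is exactly the normalized sinc defined above; $B$, $t$, $s$ are arbitrary test values):

import numpy as np

B, t, s = 1.5, 0.37, -0.82
n = np.arange(-20000, 20001)
lhs = np.sum(np.sinc(2 * B * t - n) * np.sinc(2 * B * s - n))
rhs = np.sinc(2 * B * (t - s))
print(lhs, rhs)    # agree to several decimal places; the truncation error decays like 1/N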
