Friday, April 22, 2022

Recent Questions - Mathematics Stack Exchange



What is the next row of numbers?

Posted: 22 Apr 2022 11:32 AM PDT

28 84 112 38 114 152 48 144 192

Correlation function in the Langevin equation

Posted: 22 Apr 2022 11:31 AM PDT

So the Langevin equation of Brownian motion is a stochastic differential equation defined as $$m {d \textbf{v} \over{dt} } = - \lambda \textbf{v} + \eta(t)$$

where the noise function $\eta(t)$ has correlation function $\langle \eta_i(t) \eta_j(t') \rangle=2 \lambda k_B T \delta_{ij} \delta(t - t')$.

I have two questions: how does one calculate the correlation function, and where exactly do the constants preceding the delta functions originate? I understand that the delta functions ensure that there is no correlation at different times, but I don't see why those particular constants appear in front.

Thanks

Show the boundedness of this Identity Operator

Posted: 22 Apr 2022 11:30 AM PDT

Consider the identity operator $$I: C^1[0,1] \to AC[0,1]$$ ($AC[0,1]$ denotes the space of absolutely continuous functions on $[0,1]$). I need help showing that this mapping is a bounded linear operator where the norm on $C^1[0,1]$ is defined by $$||f||_{C^1} = \max \{ |f(t)| : 0 \leq t \leq 1 \}$$ and the norm on $AC[0,1]$ is defined by $$||f||_{AC} = \int_0^1 |f| + \int_0^1 |f'(t)|$$

Is there a closed expression for the limit of the following function?

Posted: 22 Apr 2022 11:26 AM PDT

It seems to me that the incomplete Beta function is related here, but I am interested only in the limit of the following function when $\beta \to \infty$: \begin{equation} f(x)=\displaystyle\frac{\Gamma(\beta +1)}{\Gamma(\alpha +\beta +2)}\sum_{j=0}^{+\infty}\displaystyle\frac{\Gamma(\alpha +\beta +j +2)}{\Gamma(\beta+j +2)}x^j. \end{equation}

where $\alpha, \beta\in \mathbb{R}$, with $\alpha>-1$ and $x\in [0,1)$. The function $f$ is analytic on $[0, 1)$. Thanks a lot in advance.
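
Not an answer, but a quick numerical experiment (my own sketch, using `math.lgamma` for stability) suggests that for fixed $x$ the whole expression tends to $0$ like $1/\beta$, with $\beta f(x) \to \frac{1}{1-x}$: the prefactor $\Gamma(\beta+1)/\Gamma(\alpha+\beta+2)$ decays one power of $\beta$ faster than the summand ratios grow. Worth checking analytically:

```python
import math

def f(x, alpha, beta, terms=500):
    """Partial sum of the series, computed via log-gamma for numerical stability."""
    pref = math.lgamma(beta + 1) - math.lgamma(alpha + beta + 2)
    total = 0.0
    for j in range(terms):
        logc = pref + math.lgamma(alpha + beta + j + 2) - math.lgamma(beta + j + 2)
        total += math.exp(logc) * x ** j
    return total

alpha, x = 0.5, 0.5
for beta in (10.0, 100.0, 1000.0, 10000.0):
    # f shrinks like 1/beta; beta*f approaches 1/(1-x) = 2 for these parameters
    print(beta, f(x, alpha, beta), beta * f(x, alpha, beta))
```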

Maximum norm over 'spherical hull' of vectors

Posted: 22 Apr 2022 11:25 AM PDT

Consider $k$ vectors in $\mathbb{R}^n$, $v_1,...,v_k$.

Let us define the set $$S_v = \{ x \in \mathbb{R}^n| x = \sum_{i=1}^k \lambda_i v_i, \lambda_i \geq 0, \sum_i \lambda_i^2 = 1\},$$

that is, it is something similar to the convex hull of the vectors only that we want the vector of weights $\lambda = (\lambda_1,\lambda_2,...,\lambda_k)$ to be of unit $l_2$ rather than $l_1$ norm.

Is there an explicit formula for $$\underset{x \in S_v}{\arg \max } \vert \vert x \vert \vert_2?$$

Is there a name for the set $S_v$? Would something like "spherical hull" be adequate?

Probability of consignment acceptance

Posted: 22 Apr 2022 11:11 AM PDT

When a consignment of pens arrives at the retailer's, ten of them are tested. The whole batch is returned to the wholesaler if more than two of those selected are found to be faulty. What is the probability that the consignment will be accepted if 5% of the pens are faulty?

Our instructor told us this would be solved using the binomial distribution, which requires that the trials be independent. But if you are testing a batch of pens, you would test without replacement, which makes successive trials dependent.

So help me understand why this is solved using the binomial distribution.
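
A quick sanity check on the instructor's model: the binomial distribution is the with-replacement (or infinite-batch) approximation, and it is very close to the exact without-replacement answer whenever the batch is much larger than the sample. A sketch, assuming a hypothetical batch of 1000 pens of which 50 (5%) are faulty:

```python
from math import comb

def binomial_accept(n=10, p=0.05, max_faulty=2):
    """P(at most max_faulty faulty in n tests), treating trials as independent."""
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(max_faulty + 1))

def hypergeometric_accept(N=1000, K=50, n=10, max_faulty=2):
    """Exact P(at most max_faulty faulty) when sampling n pens without
    replacement from a batch of N pens containing K faulty ones."""
    return sum(comb(K, k) * comb(N - K, n - k) for k in range(max_faulty + 1)) / comb(N, n)

print(binomial_accept())        # ≈ 0.9885
print(hypergeometric_accept())  # very close, since N >> n
```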

Area under curve in two different ways

Posted: 22 Apr 2022 11:10 AM PDT

I am currently trying to verify the computation of the following integral using two different substitutions.

The area below $f$ on the unit circle, with $$f(x,y) = \frac{x^2 + y^2}{4} + \frac{xy}{2} $$

The calculation follows the basic formula:

$$ \int_C f(x,y) \, ds = \int_a^b f(g(t), h(t)) \sqrt{g'(t)^2 + h'(t)^2} \, dt$$ for suitable $g, h, a \text{ and } b $

The easy parametrization uses $x = 2 \sin(t)$ and $y = 2 \cos(t)$ for $t \in [0, 2 \pi] $

Plugging everything in the integral immediately simplifies to:

$$ \int_0^{2 \pi} \big(1+ \sin(2t)\big) \cdot 2 \, dt = 4 \pi$$

However, one can parameterize a circle differently:

Using $x = \frac{1-t^2}{1+t^2}$ and $y = \frac{2t}{1+t^2}$ with $t \in (-\infty, +\infty) $

One may then create the following abomination:

$$p(t) = \frac{(\frac{1-t^2}{1+t^2})^2 + (\frac{2t}{1+t^2})^2}{4} + \frac{\frac{1-t^2}{1+t^2} \frac{2t}{1+t^2}}{2} = \frac{(-1 - 2 t + t^2)^2}{4 (1 + t^2)^2} $$

with $g'(t) = \frac{-4t}{(1 + t^2)^2} $ and $h'(t) = \frac{2(1 - t^2)}{(1 + t^2)^2}$

Fortunately: $\sqrt{g'(t)^2 + h'(t)^2}$ simplifies to: $\sqrt{\frac{4}{(1+t^2)^2}} = \frac{2}{1+t^2}$

Putting now these terms together we get:

$$\frac{(-1 - 2 t + t^2)^2}{4 (1 + t^2)^2} \cdot \frac{2}{1+t^2} = \frac{(-1 - 2 t + t^2)^2}{2 (1 + t^2)^3}$$

This can then be integrated: $$\int_{-\infty}^{+\infty} \frac{(-1 - 2 t + t^2)^2}{2 (1 + t^2)^3} \, dt = \frac{\pi}{2}$$ which is pretty close. However, this raises the question: where did the calculation go wrong?
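
A numerical sanity check (my own sketch, a crude midpoint rule) points at where to look: the trigonometric parametrization $x=2\sin t,\ y=2\cos t$ traces the circle of radius $2$, while the rational parametrization satisfies $x^2+y^2=1$ and traces the unit circle, so the two computations are line integrals over two different curves:

```python
import math

def f(x, y):
    return (x ** 2 + y ** 2) / 4 + x * y / 2

def line_integral(r, n=20000):
    """Midpoint-rule line integral of f over the circle of radius r."""
    total = 0.0
    for i in range(n):
        t = 2 * math.pi * (i + 0.5) / n
        total += f(r * math.sin(t), r * math.cos(t)) * r  # ds = r dt
    return total * 2 * math.pi / n

print(line_integral(2.0))  # ≈ 4π: the radius-2 circle
print(line_integral(1.0))  # ≈ π/2: the unit circle, matching the rational parametrization
```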

Intuition behind computing net values

Posted: 22 Apr 2022 11:11 AM PDT

Suppose that we have to compute a net value, i.e. $$ A=X-Y $$ where $Y\leq X.$ Here, when does it become relevant which units of $Y$ are being removed from $X?$ For instance, suppose we compute net profits as revenues-costs: $$ \Pi=R-C $$ It does not seem relevant which units of $C$ we take away from $R,$ because money is fungible. But, whenever we compute a difference such as this, is fungibility always implied?

Is $\arctan(2)/\pi$ known to be transcendental, or is it an open problem?

Posted: 22 Apr 2022 11:18 AM PDT

Is $$\arctan(2)/\pi$$ known to be transcendental or not, or is it an open problem?

I cannot find anything relevant with a Google search.

Frequency response at $\omega = \frac{4\pi}{9} $ for designing a two-pole BPF with the given specs

Posted: 22 Apr 2022 11:05 AM PDT

[Image: my attempt at evaluating $H(\frac{4\pi}{9})$, following the same steps I did for $H(\frac{\pi}{2})$]

As shown above, this is an example I found on frequency-domain analysis of LTI systems, and I am unable to obtain the equation shown in the latter half of the example. Any clue on how to go about evaluating $H(\frac{4\pi}{9})$ would be greatly appreciated.

Example of an algebra that is a Banach space but not a Banach algebra

Posted: 22 Apr 2022 11:29 AM PDT

I'm looking for an example of a space $\mathbb{A}$ such that:

  • $\mathbb{A}$ is an algebra;
  • $\mathbb{A}$ is equipped with a norm that makes it a Banach space;
  • $\mathbb{A}$ is not a Banach algebra, i.e., the norm is not submultiplicative.

I haven't been able to think of any examples, but I believe there must be a space satisfying this.

And if I further assume that $\mathbb{A}$ has a unit, is it still possible to find an example?

What is a prefix set?

Posted: 22 Apr 2022 11:17 AM PDT

I am trying to understand the following definition of a prefix set: "A prefix set is a language $A \subseteq \Sigma^*$ such that no element of $A$ is a prefix of any other element of $A$."

I came across a similar phrase in Li-Vitányi's book, "no element in the set is a proper prefix of another element in the set".

Can someone please elaborate on this? Perhaps an example would help me understand it. I couldn't find an explanation on Wikipedia or on Math.SE.

A finite axiomatization of the universal theory of rings in the signature $(+,*,0,1)$

Posted: 22 Apr 2022 11:31 AM PDT

Let $T$ be the theory of rings in the signature $(+,*,0,1)$, and let $T_\forall$ be the $\forall$-theory of rings in the signature $(+,*,0,1)$. My question is, is $T_\forall$ finitely axiomatizable? And if so, can someone exhibit a finite set of axioms?

Does ZFC prove existence of proper class many $H_\kappa$ sets with $|H_\kappa|=\kappa$?

Posted: 22 Apr 2022 11:11 AM PDT

Let $H_\kappa$ be the set of all sets hereditarily of size strictly smaller than $\kappa$, where $\kappa$ is a cardinal.

Is it a theorem of $\sf ZFC$ that we have a proper class of sets $H_\kappa$ such that $|H_\kappa|=\kappa$?

Find the solution of the given integral problem

Posted: 22 Apr 2022 11:28 AM PDT

$$\int_0^2 \int_0^{\sqrt{2x-x^2}} x \,dy\,dx $$

Could you please find the solution to this?
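
Reading the inner variable as $y$ (since the inner limit depends on $x$), the integral reduces to $\int_0^2 x\sqrt{2x-x^2}\,dx$; a quick midpoint-rule check (my own sketch) gives a value consistent with $\pi/2$:

```python
import math

# ∫_0^2 x·√(2x − x²) dx by the midpoint rule; the region of integration is
# the upper half of the disk (x−1)² + y² ≤ 1.
n = 100000
total = 0.0
for i in range(n):
    x = 2 * (i + 0.5) / n
    total += x * math.sqrt(max(2 * x - x * x, 0.0))
total *= 2 / n
print(total)  # ≈ π/2 ≈ 1.5708
```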

Union of n-ary cartesian powers of a set

Posted: 22 Apr 2022 11:14 AM PDT

Given a set $S$, is there a name for the following:

$$f(S,N) = \bigcup\limits_{k=0}^{N}S^{k}$$

For example for $S=\{x, y, z\}$ and $N=2$ the result would be: $$\{(), (x), (y), (z), (x, x), (x,y), (x, z), (y, x), (y, y), (y, z), (z, x), (z,y), (z,z)\}$$

Is there any way to relate it to a notion/terminology in mathematics?
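
For what it's worth, the construction is the set of words over $S$ of length at most $N$, i.e. a length-truncated version of the Kleene star $S^*$ (the free monoid on $S$); it is easy to generate:

```python
from itertools import product

def words_up_to(S, N):
    """Union of the Cartesian powers S^0, ..., S^N as a set of tuples
    (a length-truncated Kleene star over S)."""
    return {t for k in range(N + 1) for t in product(S, repeat=k)}

W = words_up_to({"x", "y", "z"}, 2)
print(len(W))  # 1 + 3 + 9 = 13, matching the example above
```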

Proof that the module of sections of a vector bundle and the module of sections of the pullback bundle are isomorphic

Posted: 22 Apr 2022 11:07 AM PDT

Let $\pi: E \rightarrow M$ be a vector bundle and let $f: N \rightarrow M $ be a continuous map. Let $f^{*} E$ be the pullback bundle.

Let $\Gamma(E)$ be the module of sections of the vector bundle $E$ and $\Gamma(f^*E)$ the module of sections of the pullback bundle.

Now, in the book Elements of Noncommutative Geometry, on page 62, they have:

Proposition 2.12. There is an isomorphism of $C(N)$-modules, $$ \Gamma\left(N, f^{*} E\right) \simeq \Gamma(M, E) \otimes_{C(M)} C(N) $$

Proof. We proceed by interpreting geometrically the right hand side. We can regard simple tensors like $s \otimes g$ as functions $N \rightarrow E: y \mapsto s(f(y)) g(y)$ of a particular type, to wit, as sections of $E \rightarrow M$ along the map $f: N \rightarrow M$. A section along a map $f: N \rightarrow M$ is defined as a continuous map $\sigma: N \rightarrow E$ such that $\pi \circ \sigma=f$. For instance, a vector field $X$ over $N$ defines a vector field along $f$, namely $T f \circ X$. Now, there is a one-to-one correspondence, in fact an isomorphism of $C(N)$-modules, between sections $\sigma$ of $E \rightarrow M$ along the map $f: N \rightarrow M$ and sections of the pullback bundle $f^{*} E \rightarrow N$ given by $y \mapsto \tilde{f}^{-1} \sigma(y)$; recall that $\tilde{f}$ is an isomorphism on each fibre.

I am not understanding the proof. How does the isomorphism on the fibres imply that the theorem is true?

Error In Wade's Analysis (continuous function preserves compactness)

Posted: 22 Apr 2022 11:18 AM PDT

I would like to verify that the following proof is incorrect before raising it in another setting. This is a proof from Wade's "An Introduction to Analysis"; I do not take issue with the theorem itself, just this particular proof of it. I will first copy the proof as stated before addressing the issue of its correctness.

Theorem: If $ H $ is compact in $\mathbb{R}^n$ and $\mathbf{f}: H \to \mathbb{R}^m$ is continuous on $H$, then $\mathbf{f} \left(H \right)$ is compact in $\mathbb{R}^m$

Proof:

Suppose $ \left\{ V_\alpha\right\}_{\alpha \in A \;} $ is an open covering of $\mathbf{f}\left(H \right)$. Then $\left\{ \mathbf{f}^{-1} \left( V_\alpha \right)\right\}_{\alpha \in A}$ covers $H$ and, by continuity, its members are relatively open in $H$ .

Thus for each $\alpha$ there exists an open set $O_\alpha$ such that $\mathbf{f}^{-1}\left( V_\alpha \right) = H \cap O_\alpha .\;$ Note that $\left\{ O_\alpha \right\}_{\alpha \in A}$ is an open covering of $H$ and since $H$ is compact, there exists a finite subcovering $\left\{ O_{\alpha_j} \right\}^N_{j=1}$. We conclude that $$ (*) \quad \quad \quad\mathbf{f}\left( H \right) \subseteq \mathbf{f} \left( \bigcup_{j=1}^N {O_{\alpha_j} \cap H}\right) = \bigcup_{j=1}^N {\mathbf{f} \left( \mathbf{f}^{-1} \left( V_{\alpha_j}\right) \right)} = \bigcup_{j=1}^N V_{\alpha_j}$$ In particular, $\left\{ V_{\alpha_j}\right\}^N_{j=1}$ is a finite subcovering of $\mathbf{f}\left( H \right)$ and thus $\mathbf{f}\left( H \right)$ is compact.


The Issue: The $\subseteq$ symbol in the line marked above by a $(*)$ is actually an equality. The author notes this in his errata, but this introduces a further problem. We now have all equalities, so that $$\mathbf{f}(H) = \bigcup_{j=1}^N V_{\alpha_j} \quad \text{i.e. compact set = union of open sets}$$

That is, the set we are claiming is compact we have also shown to be open, and this will not be true in general.

It must therefore be the case that one of the other equalities is actually a $\subseteq$. It is the rightmost one. Recall that the $V_{\alpha_j}$ are an open covering of the image $\mathbf{f}(H)$ and will in general intersect its complement. That is, some of the $V_{\alpha_j}$ will contain points outside the range of $\mathbf{f}$; for those sets, $\mathbf{f} (\mathbf{f}^{-1}(V_{\alpha_j})) \neq V_{\alpha_j}$. The confusion arises because we are taking preimages of sets that do not lie completely within the function's image. The correct sequence is as follows:

$$\mathbf{f}\left( H \right) = \mathbf{f} \left( \bigcup_{j=1}^N {O_{\alpha_j} \cap H}\right) = \bigcup_{j=1}^N {\mathbf{f} \left( \mathbf{f}^{-1} \left( V_{\alpha_j}\right) \right)} = \bigcup_{j=1}^N \left( V_{\alpha_j} \cap \text{Im}(\mathbf{f}) \right) \subseteq \bigcup_{j=1}^N V_{\alpha_j}$$

Can anyone validate this set proof: If $A ⊆ \overline B$ for some sets A and B, does it imply that $B ⊆ \overline A$?

Posted: 22 Apr 2022 11:16 AM PDT

By the definition of the complement of a set, $\forall a \in A, a \notin B$

Similarly, $\forall b \in B, b \notin A$

This implies that $B$ must contain only elements that are not contained in $A$.

So, $B\subseteq \overline A$

Prove the inequality $f(x)>u(x)$

Posted: 22 Apr 2022 11:26 AM PDT

If $f(x)=\big(\frac{e^x}{x^2}\big)+\ln x+\frac1x$ and $u(x)=3$ then show that

$f(x)>u(x) , \forall x\in(0,+\infty)$.
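
Not a proof, but a numeric scan (my own sketch) is consistent with the claim: the minimum of $f$ on $(0,+\infty)$ appears to be about $3.01$, attained near $x \approx 1.8$, so the inequality holds with little room to spare:

```python
import math

def f(x):
    return math.exp(x) / x ** 2 + math.log(x) + 1 / x

# crude grid scan on (0, 10]; f blows up at both ends, so the minimum is interior
best_x, best_f = min(((x, f(x)) for x in (i / 1000 for i in range(1, 10001))),
                     key=lambda pair: pair[1])
print(best_x, best_f)  # minimum ≈ 3.01 near x ≈ 1.8, safely above 3
```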

What is $P(X+Y>0 \mid X>0)$ given that $X,Y$ two different normal?

Posted: 22 Apr 2022 11:29 AM PDT

$X$ follows $N(0,\sigma_x^2)$ and $Y$ follows $N(0,\sigma_y^2)$, and $X$ and $Y$ are independent. What is $P(X+Y>0 \mid X>0)$? If $\sigma_x^2=\sigma_y^2$ we can use symmetry and easily get the answer $3/4$, but what if $\sigma_x^2 \neq \sigma_y^2$?
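
A standard reduction (my own derivation, so worth double-checking): standardizing to i.i.d. normals turns the event into an angular sector of the plane, which gives the candidate closed form $P(X+Y>0 \mid X>0) = 1 - \frac{1}{\pi}\arctan(\sigma_y/\sigma_x)$, reducing to $3/4$ when $\sigma_x=\sigma_y$. A Monte Carlo sketch agrees:

```python
import math
import random

def mc(sx, sy, n=200000, seed=1):
    """Monte Carlo estimate of P(X+Y>0 | X>0) for independent centered normals."""
    rng = random.Random(seed)
    hit = tot = 0
    for _ in range(n):
        x = rng.gauss(0, sx)
        if x > 0:
            tot += 1
            if x + rng.gauss(0, sy) > 0:
                hit += 1
    return hit / tot

sx, sy = 1.0, 2.0
closed_form = 1 - math.atan2(sy, sx) / math.pi  # candidate formula (my derivation)
print(mc(sx, sy), closed_form)  # both ≈ 0.648
```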

Rings of regular functions, affine and quasi affine [duplicate]

Posted: 22 Apr 2022 11:10 AM PDT

Edit: I don't know how you can think that linking something which doesn't answer my question would satisfy me. By the way, I don't know what a sheaf is; maybe you could ask before taking such decisions. So I would like an answer, or another link (which I wasn't able to find) that is useful for my two questions and my attempt; otherwise I guess I'll have to ask this again and again until something satisfies me.

For $V\subseteq \mathbb A^n$ an affine variety set $$\Gamma (V)=\{f:V\to K\mid \exists F\in K[x_1,\dots,x_n]\text{ such that }f(c)=F(c)\ \forall c\in V\}=\{f:V\to K\mid f\text{ is regular}\}$$ $$K(V)=Frac(\Gamma(V)),\text{ }O_p(V)=\{f\in K(V)\mid f \text{ defined at } p\in V\}=\{f\in K(V)\mid \exists a,b\in \Gamma(V)\text{ with }f=\frac{a}{b},\text{ }b(p)\neq 0\}$$ and if $U$ is quasi affine $$O(U)=\{f:U\to K\mid f\text{ is regular}\}$$$$=\{f:U\to K\mid \forall p\in U,\ \exists U_p\subseteq U \text{ open and } g,h\in K[x_1,\dots,x_n] \text{ with } f_{|U_p}=\frac{g_{|U_p}}{h_{|U_p}} \text{ and } h(q)\neq 0\ \forall q\in U_p\}$$ So we have two definitions of being regular: one in the affine case and one in the quasi affine case.

First, I don't understand why $O(V)=\Gamma(V)$ if $V$ is affine. One direction is clear; for the other we start from a local property and somehow want to show something global. A proof I have says $O(V) \subseteq O_p(V)$ and uses $\Gamma(V)=\bigcap_{p\in V}O_p(V)$. But at this point it is not clear why an element of $O(V)$ should be a quotient on the whole of $V$.

My attempt: take $p,p'\in V$ and consider $U_p,U_{p'}\subseteq V$. We have $U_p\cap U_{p'}\neq \emptyset $ and $\frac{g_p}{h_p}=\frac{g_{p'}}{h_{p'}}$ there. Since $U_p\cap U_{p'}\subseteq U_p,U_{p'}$ is irreducible and dense, and these quotients are continuous, we get that $\frac{g_p}{h_p}=\frac{g_{p'}}{h_{p'}}$ on $U_p\cup U_{p'}$. We also get that $h_p(p')=h_{p'}(p')\neq 0$. So in fact $f=\frac{g_p}{h_p}$; since we must have $h_p(v)\neq 0$ for all $v\in V$, we get $h_p\in K$, so $f$ is regular in the sense of the affine definition.

And I'm not sure how to think about $O(V)$. Can all functions in it be seen as quotients of polynomials? If not, are there some nice cases where we can just say "take $f\in O(V)$; since $V$ is ... we can write $f=\frac{a}{b}$ for $a,b\in K[x_1,\dots, x_n]$"?

Maximum value of sum of sines [closed]

Posted: 22 Apr 2022 11:12 AM PDT

$$\begin{array}{ll} \underset{a,b,c,d,e}{\text{maximize}} & \sin (a) + \sin (b) + \sin (c) + \sin (d) + \sin (e) \\ \text{subject to} & a + b + c + d + e = 2\pi \end{array}$$

How do I prove that the objective is maximized only when $a = b = c = d = e = \frac{2\pi}{5}$?

Expected number of flips vs probability in a Markov Chain

Posted: 22 Apr 2022 11:09 AM PDT

The problem is the following:

(a) We keep flipping coins until we see the sequence HTHH. Find the expected number of flips.
(b) Alice and Bob play the following game. They keep flipping coins until either the sequence HHTH or HTHH occurs. Alice wins if the sequence HHTH occurs first, and Bob wins if the sequence HTHH occurs first. Find the probability that Alice wins.

For part (a) I set a transition matrix with states {0, 1, 2, 3, 4} corresponding to the "progress" made in achieving the sequence HTHH. Going from 0 to 0 means getting a Tails, going from 0 to 1 means getting Heads, from 1 to 1 means getting Heads again, from 1 to 2 means getting Tails (after getting Heads), going from 2 to 0 is getting Tails (after getting Tails) and so on.

$\begin{bmatrix} 0.5 & 0.5 & 0 & 0 & 0 \\ 0 & 0.5 & 0.5 & 0 & 0 \\ 0.5 & 0 & 0 & 0.5 & 0 \\ 0 & 0 & 0.5 & 0 & 0.5 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix}$

The expected number of tosses would be the first entry of the solution to $\overrightarrow{v}=R\overrightarrow{v}+\begin{pmatrix} 1 \\ 1 \\ 1 \\ 1 \end{pmatrix}$, if I'm not mistaken, which is 18.
Now for part (b) I don't know how to set up the matrices for Alice and Bob (if that is even the correct approach). I would greatly appreciate any advice on how to proceed, thank you!
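
One workable setup for part (b) (a sketch, not necessarily the intended one): since both patterns have length 4, it suffices to track only the last three flips, giving a 15-state chain (the empty history plus all H/T strings of length 1–3). Value iteration on that chain reproduces 18 for part (a) and yields Alice's win probability:

```python
from itertools import product

ALICE, BOB = "HHTH", "HTHH"
# states: the last up-to-3 flips; a length-4 pattern can only complete
# once we know the previous 3 flips plus the new one
STATES = [""] + ["".join(p) for n in (1, 2, 3) for p in product("HT", repeat=n)]

def step(s, c):
    """Flip c in state s; return 'A'/'B' on absorption, else the next state."""
    w = s + c
    if len(w) == 4:
        if w == ALICE:
            return "A"
        if w == BOB:
            return "B"
        w = w[-3:]
    return w

def alice_win_probability(iters=2000):
    v = {s: 0.0 for s in STATES}
    for _ in range(iters):
        v = {s: sum(0.5 * (1.0 if step(s, c) == "A" else
                           0.0 if step(s, c) == "B" else v[step(s, c)])
                    for c in "HT")
             for s in STATES}
    return v[""]

def expected_flips(pattern, iters=5000):
    """Expected flips until `pattern` first appears (only that pattern absorbs)."""
    e = {s: 0.0 for s in STATES}
    for _ in range(iters):
        ne = {}
        for s in STATES:
            t = 1.0
            for c in "HT":
                w = s + c
                if len(w) == 4:
                    if w == pattern:
                        continue  # absorbed: no further flips needed
                    w = w[-3:]
                t += 0.5 * e[w]
            ne[s] = t
        e = ne
    return e[""]

print(expected_flips("HTHH"))   # 18, matching part (a)
print(alice_win_probability())  # ≈ 0.6
```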

How can we stay confident when replacing the population standard deviation by its estimate?

Posted: 22 Apr 2022 11:20 AM PDT

So imagine we take $n$ random samples from a Bernoulli trial. Thus our random sample is composed of binary random variables $X_1, X_2, ..., X_n$. By the central limit theorem we know that the distribution of $Z=\frac{\overline{X}-p}{\sigma/\sqrt{n}}$, where $\overline{X}=\frac{X_1+X_2+...+X_n}{n}$, approximates a standard normal pdf when $n$ is big enough. So the probability that $Z$ lies between $-1.96$ and $1.96$ is:

$$P(-1.96\le Z\le 1.96)=P(-1.96\le \frac{\overline{X}-p}{\sigma/\sqrt{n}} \le 1.96) = 0.95$$

We also know that the standard deviation of our Binary Random Variable is $\sigma=\sqrt{p(1-p)}$. Thus:

$$P(-1.96\le \frac{\overline{X}-p}{\sqrt\frac{p(1-p)}{n}} \le 1.96) = 0.95$$

The book I'm using just replaces $p$ by its estimate $\overline{X}$ without further explanation. Why can we do that? Thus:

$$P(-1.96\le \frac{\overline{X}-p}{\sqrt\frac{\overline{X}(1-\overline{X})}{n}} \le 1.96) = 0.95$$

So transforming a little we have that:

$$P(\overline{X} -1.96\sqrt\frac{\overline{X}(1-\overline{X})}{n} \le p \le \overline{X} + 1.96 \sqrt\frac{\overline{X}(1-\overline{X})}{n}) = 0.95$$

How can we still say that this holds with 95% confidence? What justifies replacing $p$ by $\overline{X}$? I mean, the 95% confidence interval is exact when we use the population standard deviation and not some estimate. Using an unbiased estimator of the population standard deviation only tells us that we are going to have a 95% confidence interval in the long run.

UPDATE Suppose $n=30$; then $\frac{\sigma}{\sqrt{n}}=\sqrt{\frac{\overline{X}(1-\overline{X})}{n}}$ can take these values as a function of $p$:

Thus, we have three cases for $p$.

  1. $p=0.5$
  2. $p \lt0.5$
  3. $p \gt0.5$

Let's study case 2. Imagine we have $p=0.25$; then our estimate $\overline{x}$ will typically fall around 0.25, and we have two cases for $\overline{x}$:

  1. $\overline{x} \gt p$
  2. $\overline{x} \lt p$

When $\overline{x} \gt p$, our estimated interval will be overestimated. Thus, we are going to have a probability higher than 95% that our estimated interval contains $p$.

And, when $\overline{x} \lt p$, our estimated interval will be underestimated. Thus, we are going to have a probability lower than 95% that our estimated interval contains $p$.

Therefore, I think this would equilibrate in the long run, so that our expected coverage would be 95%. But how can we prove that?
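
One way to probe the "equilibrates in the long run" intuition is to estimate the actual coverage by simulation (a sketch, not a proof): fix $p$ and $n$, generate many samples, and count how often the plug-in (Wald) interval contains $p$. For $n=30$, $p=0.25$ the coverage comes out close to, but a bit below, 95%, which is the usual price of replacing $p$ by $\overline{X}$ at moderate $n$:

```python
import math
import random

def wald_coverage(p, n, trials=20000, seed=7):
    """Fraction of simulated samples whose plug-in interval
    xbar ± 1.96·sqrt(xbar(1-xbar)/n) contains the true p."""
    rng = random.Random(seed)
    covered = 0
    for _ in range(trials):
        xbar = sum(rng.random() < p for _ in range(n)) / n
        half = 1.96 * math.sqrt(xbar * (1 - xbar) / n)
        if xbar - half <= p <= xbar + half:
            covered += 1
    return covered / trials

print(wald_coverage(0.25, 30))  # close to, but generally below, 0.95
```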

Existence of Gaussian process

Posted: 22 Apr 2022 11:32 AM PDT

Let $\Omega=C([0,1],\mathbb{R})$, $(\Pi_t)_{t\in [0,1]}$ the canonical process with $\Pi_t(\omega)=\omega_t$, $\mathcal{F}=\sigma(\Pi)$ and $\mathbb{F}$ the filtration generated by $\Pi$. Let $F:[0,1] \rightarrow \mathbb{R}$ be a continuous nondecreasing function.

I am currently reading a paper in which the following assertion is used without any reference.

There exists a unique probability measure $\mu$ on $(\Omega,\mathcal{F})$ such that $\Pi$ is a centered Gaussian process on $(\Omega,\mathcal{F},\mathbb{F},\mu)$ with $\mathrm{Cov}[\Pi_s,\Pi_t]=F(\min(s,t))$.

I cannot find this claim anywhere. I would be very grateful for a reference for this assertion.

Convolution inequality $\|f\star g\|_p \le \|f\|_1 \|g\|_p$ on locally compact group

Posted: 22 Apr 2022 11:02 AM PDT

Consider the following fragment from Folland's book "A course in abstract harmonic analysis". Here $G$ is a locally compact Hausdorff group and the $L^p$-spaces are considered w.r.t. a left Haar measure on $G$.

[Image: statement of Proposition 2.40 and its proof, from Folland]

The proof of proposition 2.40 applies Minkowski's integral inequality, however the proofs of Minkowski's integral inequality I know make crucial use of the fact that the measure spaces involved are $\sigma$-finite. However, $G$ with the left Haar measure is $\sigma$-finite if and only if $G$ is $\sigma$-compact, which is not an assumption here.

How can I solve this technicality? Perhaps I can make use of the fact that if $f \in L^1(G)$, then $f$ vanishes outside a $\sigma$-finite set and similarly for $g \in L^p(G)$ with $p < \infty$. Can someone fill in the details? And shouldn't the case $p=\infty$ be dealt with on its own?

Given a set of points X, locate the ball of maximum radius whose interior contains none of the points in X

Posted: 22 Apr 2022 11:10 AM PDT

Suppose $d$ is a metric space. Let $X$ be a finite set of points and let $B_R(x_0)$ be some ball such that $X \subset B_R(x_0)$. Find the maximum radius $r$ such that there exists a ball $B_r(p)$ whose centre is in $B_R(x_0)$ but whose interior contains no points in $X$.

Informally, where is the most isolated point in the vicinity of $X$? For example, if $X$ are three out of the four vertices on a square, then $p$ is the remaining vertex and $r$ is the width of the square. If $X$ is a finite representative sample of the Earth's coastlines, then $r = 1400\,\mathrm{NM}$ and $p$ is the oceanic pole of inaccessibility.

If it makes it easier, I'm thinking of the Euclidean metric in $\mathbb{R}^2$ with the centre of the bounding ball at the centre of $X$.

I think the one-dimensional case is easy enough. Find the point at which the Hausdorff distance $d_H(X, X)$ occurs. The ball's radius will be half the Hausdorff distance and its centre will be to the right or left as required. For example, if $X = \{1, 2, 4\}$ then $p = 3$ and $r = 1$. However, this would locate the centre of the square rather than the fourth vertex in the example above.

(This is motivated by a 'real' problem of reverse geocoding addresses, so (1) ideally, I could derive an algorithmic solution, (2) the exact radius is not important, just a decent lower bound, and (3) questions of existence and uniqueness are not important to the problem -- I'm pretty sure yes and no are the answers though.)
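
On the algorithmic side (1): in the Euclidean plane the objective $p \mapsto \min_i d(p, x_i)$ is exactly what largest-empty-circle algorithms maximize, and the optimum is known to occur at a vertex of the Voronoi diagram of $X$ or on the boundary of the allowed region. Even a brute-force grid scan gives a decent lower bound; a sketch on the three-corners-of-a-square example:

```python
import math

def largest_empty_ball(X, lo, hi, steps=200):
    """Grid-search the point of the box [lo, hi]^2 maximizing the distance
    to the nearest site in X (brute force; Voronoi-based methods are exact)."""
    best_p, best_r = None, -1.0
    for i in range(steps + 1):
        for j in range(steps + 1):
            p = (lo + (hi - lo) * i / steps, lo + (hi - lo) * j / steps)
            r = min(math.dist(p, x) for x in X)
            if r > best_r:
                best_p, best_r = p, r
    return best_p, best_r

# three corners of the unit square: the most isolated point is the fourth corner
X = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
p, r = largest_empty_ball(X, 0.0, 1.0)
print(p, r)  # → (1.0, 1.0) with r = 1.0, the side length
```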

Average complexity of linear search with weighted probability

Posted: 22 Apr 2022 11:03 AM PDT

Question:

Consider the list $L[0:n]$ where $n = 2k – 1$. Calculate the average complexity $A(n)$ of Linear Search, where the following conditions all hold simultaneously: the probability that search element $X$ occurs in the list is $.7$; given that $X$ occurs in the list it is twice as likely to occur in the first half $L[0:k – 1]$ as the second half $L[k, n – 1]$; and for any given half, $X$ is equally likely to occur in any position of that half

I came up with an intuitive answer that I think was very close to correct, but it was slightly faulty, as it did not account for the probability of the element not being in the list. I then saw this question ("Involving probability, distinct integers and linear search algorithms"), which made a lot of sense to me, and therefore I tried to extend its answer to the following, on which I'd like feedback. Thanks!

For the probability of the element existing in the first half to be twice as likely as the second half, we have to divide the entire probability up into thirds: $\frac{.7}{3} = .2\overline{3}$. This means the first half has probability $.2\overline{3}\times 2 = .4\overline{6}$, whereas the second half has probability of just $.2\overline{3}$.

Let's assume the basic operation of linear search is a $O(1)$ comparison.

  • AVG number of basic operations incurred if element $\in$ first half = $\frac{n}{4}(.4667)$

  • AVG number of basic operations incurred if element $\in$ second half = $\frac{n}{4}(.2334)$

  • AVG number of basic operations incurred if element $\notin$ array = $.3n$

Therefore the average complexity:

$$\frac{n}{4}(.4667) + \frac{n}{4}(.2334) + .3n$$

which simplifies to:

$$\require{cancel}\cancel{.4667n + .2334n + 1.2n}$$

$$\frac{.4667n + .2334n + 1.2n}{4}$$

Thus:

$$\cancel{A(n) \approx1.9n}$$

$$A(n) \approx \frac{1.9n}{4} \approx .475n$$

Is this logic correct?


Edit

I think the above answer, and the current "answer" to this question are both wrong. Some of my new work indicating a possibly correct answer:

First let's define some things:

  • Let the desired value to find be named $x$
  • Let $p$ be the probability that $x$ is in the list
  • Assume the list has a uniform probability distribution, i.e. the probability that any given element equals $x$ is $\frac{1}{n}$

The definition of average complexity with input of size $n$ is:

$$A(n) = E[\tau] = E[\tau ~|~ x \in L]*p + E[\tau ~|~ x \notin L]*(1 - p)$$

where

$$E[\tau ~|~ x \in L]*p = p\sum_{i = B(n)}^{W(n)}{\frac{i}{n}}$$

and

$$E[\tau ~|~ x \notin L]*(1 - p) = n(1 - p)$$

The list in our example of course does not have a uniform distribution but we can use the fact that each half has a uniform distribution to our advantage. Let's consider the average complexity of linear search in the first half. $p = .4667$ and the length of the half is $\frac{n}{2}$

$$A\Big(\frac{n}{2}\Big) = E[\tau] = .4667\sum_{i = 1}^{i = \frac{n}{2}}{\frac{i}{\frac{n}{2}}} = \frac{.9334}{n}\Big(\frac{n^2}{8} + \frac{n}{4}\Big) = .116675n + .2334$$

Let's consider the second half

$$A\Big(\frac{n}{2}\Big) = .2334\sum_{i = \frac{n}{2}}^{i = n}{\frac{i}{\frac{n}{2}}} + \frac{.2334n}{2} = \frac{.4667}{n}\Big(\frac{3n(n+2)}{8}\Big) + \frac{.2334n}{2} = .1750125n + .350025 + \frac{.2334n}{2}$$

And of course the $E[\tau]$ when $x \notin L = .3n$

Making the entire complexity:

$$A(n) = .116675n + .2334 + .1750125n + .350025 + \frac{.2334n}{2} + .3n$$

Simplifying to

$$.7084n + .5834$$
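
For comparison, here is a direct computation of the expectation under one plain reading of the model (taking $n=2k$ for simplicity: the element is uniform within whichever half it falls in, a hit at position $i$ costs $i$ comparisons, and a miss costs $n$); exact fractions give $A(n) \approx 0.592n$, which differs from both attempts above and may help in checking the algebra:

```python
from fractions import Fraction as F

def avg_complexity(n):
    """E[comparisons]: P(hit) = 7/10 split 2:1 between the halves,
    uniform position within a half, and a miss scans all n entries."""
    k = n // 2
    p_first, p_second, p_miss = F(7, 10) * F(2, 3), F(7, 10) * F(1, 3), F(3, 10)
    mean_first = F(1 + k, 2)       # average of 1..k comparisons
    mean_second = F(k + 1 + n, 2)  # average of k+1..n comparisons
    return p_first * mean_first + p_second * mean_second + p_miss * n

A = avg_complexity(1000)
print(A, float(A) / 1000)  # 35521/60 ≈ 592.02, i.e. A(n) ≈ 0.592n
```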
