Sunday, December 5, 2021

Recent Questions - Mathematics Stack Exchange


Maximum non axis aligned inner box inside a polytope

Posted: 05 Dec 2021 10:32 PM PST

I am looking for a general method in $n$ dimensions to find the maximum inner box inside a polytope when the box is not required to be axis-aligned. Finding the maximum axis-aligned box is straightforward; a method can be found in this paper [Ref].

As an example in 2D, I am looking for the "blue" box, not the "red" one, in the following figure.

One idea is to rotate the polytope and then find the maximum axis-aligned box. But this approach does not scale efficiently to $n$ dimensions, so I am looking for a better solution.

Wavelet energy at each scale $j$

Posted: 05 Dec 2021 10:32 PM PST

By the wavelet representation, we know that,

for $f \in L_2[0,1]$, we can write $f = \sum_{j=0}^{\infty} \sum_{k=0}^{2^j-1} \theta_{jk} \psi_{jk}(x)$, where $\theta_{jk} = \langle f, \psi_{jk}\rangle$. Here, $j$ is the scale parameter.

What I want to argue/prove is that the energy at each scale $j$ decays exponentially fast; formally: \begin{align*} \sum_{k=0}^{2^j-1} |\theta_{jk}|^2 \le \text{const} \times 2^{-j} ~~~~ \text{ or } ~~~\text{const} \times 2^{-j/2} \end{align*} I don't know whether this statement is true, but I believe it is, because the scale-$(j-1)$ coefficients approximate the function at a coarser scale (and thus capture more of the energy), while what remains is fine detail approximated at the finer scale $j$.

Any thought on this?

System of Equations with 6 variables using Implicit Function Theorem

Posted: 05 Dec 2021 10:32 PM PST

Use the implicit function theorem to discuss the solvability of the system $$3x + 2y + z^2 + u + v^2 =0 \\ 4x+ 3y+ z + u^2 + v + w + 2 = 0\\ x + z + u^2 + w + 2 = 0\\$$ for $u , v, w$ in terms of $x, y, z$ near $x = y = z = 0$ and $u = v= 0$ and $w = 2$.

In approaching this question, I'm quite stuck: the questions on the implicit function theorem I usually see deal with three variables, but this one deals with six. Should I be looking at a matrix that has six columns? If so, how would I continue to solve this system using the implicit function theorem?

How does the differential equation change to account for this half life problem?

Posted: 05 Dec 2021 10:27 PM PST

I am currently studying the textbook Ordinary Differential Equations by Morris Tenenbaum and Harry Pollard. The section LESSON 1. How Differential Equations Originate has the following two exercises:

  1. The radium in a piece of lead decomposes at a rate which is proportional to the amount present. If 10 percent of the radium decomposes in 200 years, what percent of the original amount of radium will be present in a piece of lead after 1000 years?

  2. Assume that the half life of the radium in a piece of lead is 1600 years. How much radium will be lost in 100 years?

The answer to 1. is said to be 59.05 percent, and the answer to 2. is said to be 4.2 percent.

The differential equation for 1. was $\dfrac{dx}{dt} = -kx$, where $x$ is the amount of radium. Half life is defined to be the time taken for the amount of substance to fall to half its original value. So how does the differential equation change to account for this half life problem?
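For what it's worth, both book answers can be sanity-checked numerically. The sketch below (my own check, not the textbook's method) uses the solution $x(t) = x_0 e^{-kt}$ of $\frac{dx}{dt} = -kx$; the half-life condition fixes $k$ via $e^{-1600k} = \tfrac12$, exactly as the 10-percent condition fixes it via $e^{-200k} = 0.9$:

```python
import math

# Sketch: both exercises use the same model dx/dt = -k x,
# whose solution is x(t) = x0 * exp(-k t).

# Exercise 1: 10% decomposes in 200 years, so x(200)/x0 = 0.9.
k1 = -math.log(0.9) / 200
remaining_1000 = math.exp(-k1 * 1000)   # = 0.9**5
print(round(100 * remaining_1000, 2))   # 59.05 percent remains

# Exercise 2: half life 1600 years means x(1600)/x0 = 1/2.
k2 = math.log(2) / 1600
lost_100 = 1 - math.exp(-k2 * 100)      # = 1 - 2**(-1/16)
print(round(100 * lost_100, 1))         # 4.2 percent lost
```

So the differential equation itself does not change; only the data used to determine $k$ does.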

Hottest and coldest points on a plate

Posted: 05 Dec 2021 10:24 PM PST

I am asking this question because my professor tells me my procedure is not enough to prove my answer. I disagree. That being said, consider the following:

The temperature of a rectangular plate bounded by the lines $$x = ±1,y = ±1$$ is given by $$T =2x^2−3y^2−2x+10$$ Find the hottest and coldest points of the plate.

My solution was very simple: $$t_x=4x-2=0$$$$\Rightarrow x=\frac12$$

$$t_y=-6y=0$$$$\Rightarrow y=0$$ Now, since $t_{xx}>0$, $x=\frac12$ is a minimum, and since $t_{yy}<0$, $y=0$ is a maximum. Therefore, $P_1(\frac12,0)$ will be neither a maximum nor a minimum, so I only had to check the correct combinations. For example, since $x=\frac12$ is a minimum, $y$ has to be as large as possible (in absolute value) if we want to find the minimum; likewise, at $y=0$, $x$ has to be as far out as possible to maximize the temperature. Intuitively, these points have to occur at the boundary (because of the shape of the graph of $T$). Therefore, $P_2(\frac12,±1)$ is the coldest point, and $P_3(-1,0)$ is the hottest.

My question is: is my procedure enough to prove that those points are the hottest and coldest? In other words, does it work in general for quadratic functions?

I know this graph has no saddle points because it lacks an $xy$ term.
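Not a proof, but a brute-force grid evaluation (a sketch of my own, with made-up helper names) agrees with the candidate points:

```python
# Brute-force check of the hottest/coldest points on the plate
# T(x, y) = 2x^2 - 3y^2 - 2x + 10 over [-1, 1] x [-1, 1].
def T(x, y):
    return 2*x**2 - 3*y**2 - 2*x + 10

n = 400
pts = [(-1 + 2*i/n, -1 + 2*j/n) for i in range(n + 1) for j in range(n + 1)]
hottest = max(pts, key=lambda p: T(*p))
coldest = min(pts, key=lambda p: T(*p))
print(hottest, T(*hottest))   # (-1.0, 0.0) -> 14.0
print(coldest, T(*coldest))   # (0.5, -1.0) -> 6.5 (tied with (0.5, 1.0))
```

The grid contains the candidate points exactly, so the check confirms the values $T(-1,0)=14$ and $T(\tfrac12,\pm1)=6.5$ without settling the question of what counts as a proof.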

Is $(\mathbb{Z},d)$ compact?

Posted: 05 Dec 2021 10:32 PM PST

Consider the following metric on $\mathbb{Z}$. We fix a prime number $p$. If $x \neq y$, then $d(x, y)= \frac{1}{p^n}$, where $n$ is the largest integer such that $p^n$ divides $y-x$ and $0$ when $x = y$. Is $(\mathbb{Z}, d)$ compact?

Approach: I can't prove that it is compact, but here $0 \leq d(x,y) \leq 1$ and there are many pairs whose distance is $1$ [$d(x,y)=1$]. This reminds me of the discrete metric, so I feel $(\mathbb{Z},d)$ might not be compact. We can consider the open cover $\{B(x,\frac{1}{2})\}_{x \in \mathbb{Z}}$, i.e., $\mathbb{Z} \subset \bigcup_{x\in \mathbb{Z}} B(x,\frac{1}{2})$, and if $\mathbb{Z}$ were compact we could find a finite subcover, say $\mathbb{Z} \subset B(a_1,\frac{1}{2}) \cup \dots \cup B(a_n, \frac{1}{2})$, but I can't find an element of $\mathbb{Z}$ that lies outside the union. Please help; I can't proceed further.
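To get a feel for the metric, here is a small computational sketch (taking $p = 2$; the helper below is mine, not from the question). It shows that the partial sums of $\sum_k p^k$ get closer and closer together in this metric, which may suggest what kind of sequences are worth examining:

```python
# Sketch of the metric for p = 2: d(x, y) = 2**(-n) where 2**n || (y - x).
def d(x, y, p=2):
    if x == y:
        return 0.0
    n, diff = 0, abs(y - x)
    while diff % p == 0:
        diff //= p
        n += 1
    return p ** (-n)

# Partial sums s_k = 1 + p + ... + p**(k-1):
s = [sum(2**i for i in range(k)) for k in range(1, 8)]
print([d(s[k], s[k + 1]) for k in range(6)])  # successive distances shrink geometrically
```

Note also that $d(x,y) < 1$ exactly when $x \equiv y \pmod p$, so the balls $B(x,\frac12)$ are the residue classes mod $p$; that is worth keeping in mind when testing the proposed cover.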

Solving System of Equations using Implicit Function Theorem

Posted: 05 Dec 2021 10:17 PM PST

Define a new set of coordinates $u, v, w$ in terms of $x, y, z$: $$u = x + xyz,\\ v = y + x\\ w = 2x + z + 3z^2 $$

Can the system be solved for $x, y,$ and $z$ in terms of $u, v, $and $w$ near $\begin{pmatrix} 0 \\ 0 \\ 0\\ \end{pmatrix}$. Justify your answer.

To my understanding, this question should be solved using the implicit function theorem. However, the past questions I have done have involved concrete numbers for $x, y, z$. In this case, how would I proceed with $u, v, w$ as arbitrary coordinates, and should I substitute the column vector as $x, y, z$ or as $u, v, w$?
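One hedged way to see which matrix is relevant: the theorem here turns on whether the $3\times 3$ Jacobian $\partial(u,v,w)/\partial(x,y,z)$ is invertible at the origin. A quick finite-difference sketch (my own check, not a required method):

```python
# Numerical check: is the Jacobian d(u,v,w)/d(x,y,z) invertible at (0,0,0)?
def F(x, y, z):
    return (x + x*y*z, y + x, 2*x + z + 3*z**2)

h = 1e-6
# J[i][j] = partial of F_i with respect to variable j, by forward differences
J = [[(F(*[h if k == j else 0.0 for k in range(3)])[i] - F(0, 0, 0)[i]) / h
      for j in range(3)] for i in range(3)]

# determinant of the 3x3 Jacobian
det = (J[0][0]*(J[1][1]*J[2][2] - J[1][2]*J[2][1])
     - J[0][1]*(J[1][0]*J[2][2] - J[1][2]*J[2][0])
     + J[0][2]*(J[1][0]*J[2][1] - J[1][1]*J[2][0]))
print(det)  # close to 1, hence nonzero
```

Since the determinant is nonzero, the map $(x,y,z)\mapsto(u,v,w)$ is locally invertible near the origin, which is the conclusion the exercise is after.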

How can we show that a conditional expectation equals $X_n$ almost surely

Posted: 05 Dec 2021 10:12 PM PST

Consider a sequence of i.i.d. random variables $\left\{Z_i\right\}$ where $\mathbb{P}\left(Z_i = 0\right) = \mathbb{P}\left(Z_i = 1\right) = \frac{1}{2}$. Using this sequence, define a new sequence of random variables $\left\{X_n\right\}$ as follows: $$X_0 = 0 \\ X_1 = 2Z_1 − 1\\ X_n =X_{n−1} +(1+Z_1 + \cdots +Z_{n−1})(2Z_n -1)\ \ \forall \ \ n \geq 2$$

Show that $\mathbb{E}\left[X_{n+1} \mid X_0, X_1, \cdots, X_n \right] = X_n$ almost surely for all $n$.

In this problem, how can we show that this conditional expectation equals $X_n$ almost surely by applying iterated expectation? I have tried the iterated-expectation approach but am unable to proceed; can somebody show me how to do this?
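While a proof must condition on $X_0,\ldots,X_n$, a Monte Carlo sketch (mine, and only a sanity check, not a proof) can at least confirm the unconditional consequence $\mathbb{E}[X_{n+1}-X_n]=0$, which holds because $2Z_{n+1}-1$ has mean zero and is independent of the multiplier $1+Z_1+\cdots+Z_n$:

```python
import random

# Monte Carlo sanity check (not a proof): the martingale property
# E[X_{n+1} | X_0, ..., X_n] = X_n implies E[X_{n+1} - X_n] = 0.
random.seed(0)

def increment_mean(n, trials=200000):
    total = 0.0
    for _ in range(trials):
        Z = [random.randint(0, 1) for _ in range(n + 1)]   # Z_1, ..., Z_{n+1}
        # X_{n+1} - X_n = (1 + Z_1 + ... + Z_n) * (2 Z_{n+1} - 1)
        total += (1 + sum(Z[:n])) * (2 * Z[n] - 1)
    return total / trials

m = increment_mean(5)
print(m)  # close to 0
```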

Relationship between independence and consistency: Why does Con$(\mathsf{ZFC})\implies\text{Con}(\mathsf{ZFC+\phi})$ imply $\phi$ can't be disproved

Posted: 05 Dec 2021 10:17 PM PST

Forgive me, because I'm sure this question has been asked before, given how central it is to set theory/logic, but I cannot find it on Stack Exchange. If someone finds that this is a duplicate, please take this question down. My question is about how exactly independence proofs work. I get the general idea, but one thing evades me when it's explained. Here, in general, is how it goes. Suppose I have a formula $\phi$ which I wish to prove is independent of a theory $T$. Why must we show that if $T$ is consistent, then $T+\phi$ is consistent, and separately, that if $T$ is consistent, then $T+\neg\phi$ is consistent?

The specific disconnect is that I'm not seeing why showing the consistency of $T+\phi$ implies $\neg\phi$ cannot be proven in $T$. I know that consistency means you cannot prove a contradiction from the theory, but I'm not sure how this implies that the negation cannot be proven. There are surely other ways to prove $\neg\phi$ than a proof by contradiction.

Expected number of iterations until all the cars can no longer move

Posted: 05 Dec 2021 10:08 PM PST

I am studying probability questions and I would like to revive this question for more attention.

Assume we have an array of length $2n$, where the first $n$ slots are occupied by cars. Each round, we flip $n$ fair coins, where coin $X_i$ corresponds to car $i$; if coin $X_i$ comes up heads, car $i$ moves one slot to the right, provided that slot is free.

We are interested in the expected number of rounds until all the cars have moved to their final positions at the end of the array.

The previous question states that this is asymptotically $O(n \log n)$, but I am unable to see why. Thank you!
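A simulation sketch may help build intuition. Note one assumption of mine, since the question does not fix the update order: within a round, cars are scanned right to left, so each car moves at most one slot per round, and a car may move into a slot vacated earlier in the same round:

```python
import random

# Simulation sketch of the car process (cars scanned right-to-left per round).
random.seed(1)

def rounds_until_done(n):
    pos = list(range(n))              # car i starts in slot i; slots are 0..2n-1
    rounds = 0
    while pos != list(range(n, 2 * n)):
        rounds += 1
        occupied = set(pos)
        for i in reversed(range(n)):  # rightmost car first
            if (random.random() < 0.5 and pos[i] + 1 < 2 * n
                    and pos[i] + 1 not in occupied):
                occupied.discard(pos[i])
                pos[i] += 1
                occupied.add(pos[i])
    return rounds

for n in (4, 8, 16, 32):
    print(n, rounds_until_done(n))
```

On these sizes the round counts grow somewhat faster than linearly in $n$, which is consistent with (though of course does not prove) the claimed $O(n\log n)$ bound; note also that car $0$ must travel $n$ slots, so at least $n$ rounds are always needed.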

Why is a monoidal category with only one object a monoid in the category of monoids?

Posted: 05 Dec 2021 10:07 PM PST

I'm reading Chapter 4 in Steve Awodey's Category Theory and it mentions

A discrete monoidal category, that is, one with a discrete underlying category, is obviously just a regular monoid (in $Sets$), while a monoidal category with only one object is a monoidal monoid, and thus exactly a commutative monoid, by the foregoing remark 4.7.

The remark says

Note that we did not really need the full group structure in this argument. Indeed, the same result holds for monoids in the category of monoids: these are exactly the commutative monoids.

Maybe it's obvious to a category theory expert that a monoidal monoid and a monoid in the category of monoids have the same meaning, but I can't make the connection. My understanding is that the category formed by the morphisms of the monoidal monoid is indeed a monoid $(M,\mathrm{id}, \circ)$, and because it is a monoidal category $(M,\otimes,I)$ we can define the homomorphisms $\mu = \otimes$ and $\eta = I$ to make it a monoid in the category of monoids. Is that right?

And I was also wondering why the same does not apply to a discrete monoidal category. I am grateful for any explanations.

Second Derivative of $\frac{2x}{71^2} - \frac{x}{95}$

Posted: 05 Dec 2021 10:16 PM PST

I am completely stuck on finding the second derivative of the following function.

$f(x) = \frac{2x}{71^2} - \frac{x}{95}$

How do I find the center of mass of a region given by a function using integration

Posted: 05 Dec 2021 09:53 PM PST

So I'm currently working on a project where I want to find the center of mass of a given region. To be exact, I want to find the center of mass (using moments and all of that) of the region in the picture below.

It's a circle with a radius of 3 that is cut off, as you can see in the picture, at x=5.3. How do I find the center of mass of such a region? Do I have to cut it into separate functions, or do I have to use another equation? Please help.

Showing a specific $p$-form on a star-shaped domain is the exterior derivative of a specific $(p-1)$-form.

Posted: 05 Dec 2021 09:56 PM PST

The following is a problem from Chapter 17 of Lee's Introduction to Smooth Manifolds:

Suppose $U\subseteq \mathbb{R}^n$ is open and star shaped with respect to $0$, and $\omega=\sum'\omega_Idx^I$ is a closed $p$-form on $U$. Show either directly or by tracing through the proof of the Poincaré Lemma, that the $(p-1)$ form $\eta$ given by $$\eta = \sum_I'\sum_{q=1}^p(-1)^{q-1}\left(\int_0^1t^{p-1}\omega_I(tx)dt\right)x^{i_q}dx^{i_1}\wedge...\wedge \widehat{dx^{i_q}}\wedge...\wedge dx^{i_p}$$ satisfies $d\eta =\omega$.

It says to either show it directly, or by tracing through the proof of the Poincaré Lemma. Well, the proof of the Poincaré Lemma in the book simply states that $H(x,t) = c+t(x-c)$ is a homotopy that contracts a domain that's star-shaped with respect to $c$. So I don't see how that helps. So I attempted to show it directly, but I am getting very lost. Here's what I have so far:

I know $$d\left(\left(\int_0^1t^{p-1}\omega_I(tx)dt\right)x^{i_q}dx^{i_1}\wedge...\wedge \widehat{dx^{i_q}}\wedge...\wedge dx^{i_p}\right) \\ = \sum_{j=1}^p\frac{\partial}{\partial x^j}\left(\int_0^1t^{p-1}\omega_I(tx)\,dt\, x^{i_q}\right) dx^j\wedge (dx^{i_1}\wedge...\wedge \widehat{dx^{i_q}}\wedge...\wedge dx^{i_p})\\ =\left(\int_0^1t^{p-1}\omega_I(tx)dt \right)dx^I $$ since $\frac{\partial x^{i_q}}{\partial x^j}\neq 0$ only when $j=i_q$, so we get $$d\eta =\sum_I'\sum_{q=1}^p(-1)^{q-1}\int_0^1 t^{p-1}\omega_I(tx)\,dt\, dx^I$$ Then I don't know how to go any further.

If $\omega$ were a smooth closed 1-form, I might know how to proceed since there is a Theorem in the book (11.49) which helps with that. The second part of the problem is that it coincides with the potential given in that theorem. Could someone either a) help me finish showing this directly, or b) help me see how the proof of the Poincaré Lemma helps with this problem?

Induction Approach for Coefficent/Exponent Problem

Posted: 05 Dec 2021 10:23 PM PST

There is a problem I'm working on that diverges quite a bit from the ramp-up problems I had with induction. Anyway, here it is:

You have $(mx+ny)^g$, where $g$ is an element of the natural numbers $\Bbb N$.

What is the coefficient of $x^h y^{g-h}$, where $h \in \Bbb N$ and $0 \le h \le g$?

I'm struggling even at the base case, and I'm stuck on the approach you'd use to solve this. Can this be approached some way other than induction as well?

Base Case

  • Just going with $h=1, g=2$, you get $m^2x^2+2mnxy+n^2y^2$. There the coefficient is $2$, so an initial suspicion would be that the coefficient is related to the exponent $g$. If you carry that forward to $h=2, g=3$, the coefficient is $3$, and so on. When $g$ and $h$ are the same value, the coefficient seems to be the evaluated value of $y$, because the exponent on $y$ is $0$ and so that factor is $1$. Does the base case need to be split up?

Inductive Hypothesis

  • This will naturally depend on what the base case is. I would think I'd want to prove this for $k+1$, but I struggle with what that should look like, or whether it would be split into cases.
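Before setting up the induction, it may help to automate the base-case experiments. The sketch below (my own helper names) brute-forces the coefficient of $x^h y^{g-h}$ by repeatedly multiplying out $(mx+ny)$, and compares it against the closed form $\binom{g}{h} m^h n^{g-h}$ that such experiments suggest. The update `new[i+1] += c*m; new[i] += c*n` is Pascal's rule in disguise, and it is exactly the structure an inductive step from $g$ to $g+1$ would follow:

```python
from math import comb

# Brute-force the coefficient of x^h y^(g-h) in (m x + n y)^g, then compare
# with the conjectured closed form comb(g, h) * m**h * n**(g-h).
def coeff(m, n, g, h):
    coeffs = [1]                    # (m x + n y)^0 = 1; coeffs[i] multiplies x^i y^(deg-i)
    for _ in range(g):
        new = [0] * (len(coeffs) + 1)
        for i, c in enumerate(coeffs):
            new[i + 1] += c * m     # pick the m*x term from this factor
            new[i] += c * n         # pick the n*y term from this factor
        coeffs = new
    return coeffs[h]

for g in range(6):
    for h in range(g + 1):
        assert coeff(3, 5, g, h) == comb(g, h) * 3**h * 5**(g - h)
print(coeff(1, 1, 2, 1))  # the middle coefficient of (x + y)^2 is 2
```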

What is Double Counting?

Posted: 05 Dec 2021 10:14 PM PST

Can someone please explain what double counting is?

I have no idea what it is, and Google search yields results that are too complicated for me to understand at this point in time.

If there's some simple way and example to understand the principle, and apply it to problems, I'd appreciate it a lot. Thanks.

Question on the continuity of a certain kind of evaluation map

Posted: 05 Dec 2021 09:54 PM PST

I ran across a function in the proof of the Arzelà–Ascoli theorem and got stuck proving its continuity:

Let $(C(X, Y), \rho)$ be the set of continuous functions from $X$ to $Y$, where $X$ is a topological space, $(Y, d)$ is a metric space, and $\rho$ is the sup metric corresponding to $d$. Define the function $\phi_x: C(X, Y) \to Y$ by $\phi_x(f) = f(x)$.

Why is this function continuous?

How to find the tangent length of a tear drop of a given perimeter

Posted: 05 Dec 2021 10:22 PM PST

I came across this problem while looking for a solution to a different drafting question, and thought it would be an interesting challenge when I had time to dig deeper into parameterization of AutoCAD drawings.

The basic problem, based on the image below, is to determine the length |BC| so that the perimeter totals 30'. I can set |BC| up as a function so that if, in the future, I changed the perimeter to 20', |BC| would update automatically.


I was able to go the other way quite easily, determining the length of the red line with a given |BC|, but I could not figure out how to pull |BC| back out of the trig functions I used.

  1. r = |AC| = 56/2 = 28"
  2. |BC|/|AC|=Tan(<BAC)
  3. <BAC=ATAN(|BC|/|AC|)
  4. Arc Length CAD = 2 * pi * r * (360-2*<BAC)/360

Putting it all together I got the following

  5. Perimeter L = 2*|BC| + Arc Length CAD, i.e. L = 2*|BC| + 2 * pi * r * (360-2*<BAC)/360

sub 3 -> 5

  6. L = 2*|BC| + 2 * pi * r * (360-2*(ATAN(|BC|/|AC|)))/360

So if I know L=360"=30', how do I solve for |BC|? I could not figure out how to pull |BC| out from the ATAN function.

Is there a way to do this without using a trig function? I'm not even sure if my approach is a good one.

And just in case it is not clear:

  • the arc is tangent to BC and BD at their intersect/end points C and D.
  • |BC| = |BD|

Through trial and error I can figure out that |BC| is roughly 130.08129, but that does not help me remember my math courses from a couple of decades ago.
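Since |BC| appears both linearly and inside ATAN, the equation is transcendental, and in general there is no closed form to "pull it out"; a numeric root-finder is the standard workaround. A bisection sketch (my own, using the perimeter formula above, and relying on the perimeter being increasing in |BC|) reproduces the trial-and-error value:

```python
import math

# Solve L(b) = 2*b + 2*pi*r*(360 - 2*atan(b/r) [deg])/360 = 360" by bisection,
# with r = 28". The perimeter is strictly increasing in b, so bisection works.
r, L_target = 28.0, 360.0

def perimeter(b):
    theta_deg = math.degrees(math.atan(b / r))        # angle BAC in degrees
    arc = 2 * math.pi * r * (360 - 2 * theta_deg) / 360
    return 2 * b + arc

lo, hi = 0.0, 200.0
for _ in range(60):
    mid = (lo + hi) / 2
    if perimeter(mid) < L_target:
        lo = mid
    else:
        hi = mid
print(round(lo, 5))  # roughly 130.081, matching the trial-and-error value
```

AutoCAD's parametric constraints do not expose a root-finder, so one practical route is to precompute |BC| this way and feed the number into the drawing.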

Please suggest better tagging if appropriate.

A list of supported AutoCAD math functions can be found here

Poisson process has probability with other distribution

Posted: 05 Dec 2021 10:14 PM PST

A bus platform is now empty of passengers. Passengers arrive according to a Poisson process with rate $\lambda$. Upon arrival the passenger will choose either bus A or bus B and wait in the corresponding line. We denote by $N_1(t)$ the number of passengers that arrive by time t and choose bus A, and by $N_2(t)$ the number of passengers that arrive by time t and choose bus B.

Find $\mathbb P\{N_1(t) = k_1,N_2(t) = k_2\}$ for $k_1, k_2 ≥ 0$ under the following conditions


(i) Let $P \sim \text{Uniform}(0, 1)$, independent of the arrival process of the passengers. Given $P = p \in (0, 1)$, each passenger chooses bus A with probability $p$ and bus B with probability $1 − p$, independently of the arrival process and of the choices of the other passengers.


(ii) Let $T_1 > 0$ be the departure time of bus A and $T_2 > 0$ be the departure time of B. If the arrival time of a passenger is $t : t < \max(T_1, T_2)$, he will choose bus A with probability $$ p(t):=\frac{\max(\frac{1}{T_1-t},0)}{\max(\frac{1}{T_1 - t},0) + \max(\frac{1}{T_2 - t},0)} \in[0,1]$$ and bus B with probability $1 − p(t)$, independently of the arrival process and of the choices of the other passengers.


I have no idea how to handle this type of question. Is it $\text{Poisson}(\lambda \cdot t \cdot A)$, where $A$ is the PDF of the distribution?
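One useful preliminary is the Poisson splitting (thinning) property: with a fixed choice probability $p$, $N_1(t)$ and $N_2(t)$ are independent Poisson random variables with means $\lambda t p$ and $\lambda t (1-p)$. Parts (i) and (ii) then layer conditioning on top of this building block. A simulation sketch (mine) for the fixed-$p$ case:

```python
import random

# Simulation sketch of Poisson thinning with a FIXED p: each arrival of a
# rate-lam Poisson process independently chooses bus A with probability p.
random.seed(2)

def simulate(lam, t, p):
    n1 = n2 = 0
    clock = random.expovariate(lam)       # inter-arrival times are Exp(lam)
    while clock < t:
        if random.random() < p:
            n1 += 1
        else:
            n2 += 1
        clock += random.expovariate(lam)
    return n1, n2

trials = 20000
lam, t, p = 2.0, 3.0, 0.3
mean1 = sum(simulate(lam, t, p)[0] for _ in range(trials)) / trials
print(mean1)  # close to lam * t * p = 1.8
```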

Proving $a_n = a_i + (n - i)d$ by induction

Posted: 05 Dec 2021 10:07 PM PST

The formula for the $n$th term of an arithmetic sequence is $$a_n = a_1 + (n - 1)d \tag{1}\label{1}.$$ However, I want to prove that this can be generalized to $$a_n =a_i + (n - i)d. \label{2}\tag{2}$$

Using \eqref{1}, I can get \eqref{2} directly: \begin{align*} a_n &= a_1 + (n - 1)d \\ a_n &= a_1 + (n - i + i - 1)d \\ a_n &= a_1 + (i - 1)d + (n - i)d \\ a_n &= a_i + (n - i)d \end{align*}

How can we prove this by induction? I know that direct proofs are generally preferable, but since this is a well-behaved sequence, a proof by induction should also be achievable.

Suppose that a random variable $X$ is distributed according to a gamma distribution with parameters $\alpha = 6$ and $\beta = 2$. Find these values.

Posted: 05 Dec 2021 10:25 PM PST

Suppose that a random variable $X$ is distributed according to a gamma distribution with parameters $\alpha = 6$ and $\beta = 2$, i.e., $X \sim \text{Gamma}(6, 2)$.

A.) Compute the mean and variance of $X$.

$E(X) = \alpha \beta = 6\times2 = 12$

$\text{Var}(X) = \alpha \beta^{2} = 6\times2^2 = 24$

B.) Find $E(X^4)$

I believe I found the correct values for the mean and variance but I am having trouble calculating $E(X^4)$. I know $E(X^2) = \text{Var}(X) + E^2(X)$ but I have no idea what to do for $E(X^4)$. Any help would be much appreciated.
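The mean and variance above look right, and a Monte Carlo sketch (mine; note that Python's `random.gammavariate(alpha, beta)` uses the same shape/scale convention as the question) agrees. For part B, one standard route, if the course has covered it, is the general moment formula $E(X^k) = \beta^k\,\Gamma(\alpha+k)/\Gamma(\alpha) = \beta^k\,\alpha(\alpha+1)\cdots(\alpha+k-1)$ for integer $k$:

```python
import random

# Monte Carlo check for X ~ Gamma(alpha=6, beta=2):
# gammavariate takes (shape, scale), so E(X) = alpha * beta here.
random.seed(3)
alpha, beta = 6, 2
xs = [random.gammavariate(alpha, beta) for _ in range(200000)]

mean = sum(xs) / len(xs)
var = sum((x - mean) ** 2 for x in xs) / len(xs)
m4 = sum(x ** 4 for x in xs) / len(xs)
print(round(mean, 1), round(var, 1))  # close to 12, 24
print(m4)  # compare with beta**4 * 6*7*8*9 = 48384
```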

Showing that a vector field is conservative without finding a potential function

Posted: 05 Dec 2021 10:37 PM PST

I am really new to this topic and was tasked to show that $$G (x,y,z)=\langle z^3−3y^2e^{3x},−2ye^{3x}+2\cos z,4+3xz^2−2y\sin z\rangle$$ is conservative without finding a potential function. How do I pull this off? I have no idea where to start with this one. Any help would be much appreciated.
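Since $G$ is smooth on all of $\mathbb{R}^3$, which is simply connected, the standard criterion is that $G$ is conservative if and only if $\nabla\times G = \mathbf{0}$. The three curl components can be checked by hand; the sketch below (my own, not a required method) merely spot-checks them with finite differences:

```python
import math, random

# Finite-difference spot check that curl G = 0 on R^3 (simply connected),
# which is equivalent to G being conservative for a smooth field.
def G(x, y, z):
    return (z**3 - 3*y**2*math.exp(3*x),
            -2*y*math.exp(3*x) + 2*math.cos(z),
            4 + 3*x*z**2 - 2*y*math.sin(z))

def curl(x, y, z, h=1e-5):
    def D(i, j):   # central-difference partial of component i w.r.t. variable j
        v = [x, y, z]
        v[j] += h
        plus = G(*v)[i]
        v[j] -= 2 * h
        minus = G(*v)[i]
        return (plus - minus) / (2 * h)
    return (D(2, 1) - D(1, 2), D(0, 2) - D(2, 0), D(1, 0) - D(0, 1))

random.seed(4)
for _ in range(5):
    p = [random.uniform(-1, 1) for _ in range(3)]
    assert all(abs(c) < 1e-4 for c in curl(*p))
print("curl vanishes at sampled points")
```

For the written solution, the hand computation of the three components (e.g. $\partial_y G_3 - \partial_z G_2 = -2\sin z + 2\sin z = 0$) is what the grader will want.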

What does it mean to apply a vector field to a scalar function?

Posted: 05 Dec 2021 09:54 PM PST

In a textbook I am reading, the author writes:

Be careful not to confuse the notations $fX$ and $Xf$, the first is a smooth vector field, the second is a real valued function.

If we go down into a specific coordinate representation of $X$, it is a function that eats points in a smooth manifold and spits out a vector, so it would look like this:

$X(p) = (X^1(p), \cdots, X^n(p))$

Similarly we would get:

$fX(p) = f(p)X(p) = f(p)(X^1(p), \cdots, X^n(p))$

I am having a hard time understanding what $Xf$ means. $f$ spits out scalars, so its output cannot be an input of $X$.

The book describes the action of a vector field on a scalar function as follows:

$$(Xf)(p) = X_p f$$

This is not very clear to me; at face value, naively reading the notation, it looks like scalar multiplication of a vector, which is just $fX$, but we know that is not what it is supposed to represent.

How to approach this discrete graph question about trees

Posted: 05 Dec 2021 10:31 PM PST

A tree contains exactly one vertex of degree $d$, for each $d\in\{3, 9, 10, 11, 12\}$.

Every other vertex has degree $1$ or $2$. How many vertices have degree $1$?

I've only tried manually drawing this tree and trying to figure it out that way; however, this makes the drawing far too big to complete. I'm sure there are more efficient methods of finding the solution.

Could someone please point me in the right direction!

Diophantine vs. existential definability

Posted: 05 Dec 2021 10:39 PM PST

We consider the language $\{+,\cdot,0,1\}$ of rings.

I want to understand how terminology is used in the work by J. Koenigsmann, Defining $\mathbb Z$ in $\mathbb Q$, Annals of Mathematics 183 (2016), 73-93.

Koenigsmann uses the words diophantine and existential. Following Wikipedia for diophantine and the text book on Model Theory by Chang and Keisler for existential sentence, I hope that the definitions below are correct. Let $R$ be a ring and $S \subseteq R$ be a subset.

  • $S$ is called diophantine in $R$ if there exists a polynomial $f(T,Z_1,\ldots,Z_n)$ with integer coefficients such that, for all $t \in R$, $t \in S$ if and only if $\exists z_1,\ldots, z_n \ f(t,z_1,\ldots,z_n) = 0$.

  • $S$ is called existential (or existentially definable) in $R$ if there exists a quantifier-free formula $\varphi(T,Z_1,\ldots,Z_n)$ such that, for all $t \in R$, $t \in S$ if and only if $\exists z_1,\ldots, z_n \ \varphi(t,z_1,\ldots,z_n)$.

Now I have several questions:

  1. Is my definition of existential standard or does one rather use positive sentences $\varphi$?

  2. Concerning 1.: Why could it possibly be more sensible to use positive sentences?

  3. It seems to me (in particular in view of the first page of the paper and the two paragraphs before Proposition 23) that Koenigsmann is using the terminologies existentially definable and diophantine as synonyms. Is this the case?

  4. Concerning 3.: Why would one do this?

  5. Are the two definitions equivalent if $R = \mathbb Q$?

  6. I think Hilbert's tenth problem over $\mathbb Q$ could only be solved by showing that $\mathbb Z$ is diophantine in $\mathbb Q$; an existential definition in the sense above would not suffice. Is this true?

Thank you very much in advance!

Class of rings between fields and euclidean domains (improvement of euclideanity) [closed]

Posted: 05 Dec 2021 10:28 PM PST

I am new to algebraic number theory, arithmetic in domains and I want an answer to the following question (that might be naive/unusual):

According to Wikipedia, there is no class specified between fields and euclidean domains (see the chain of class inclusions). Going to the right of the chain, the rings gain more arithmetic properties. I feel that there must be something there to further distinguish between/classify euclidean domains (below are $3$ ideas). Are these ideas reasonable, or is there a "better" class (or property) that "fills this gap", i.e. stronger than euclideanity but weaker than being a field?

(For example, Schreier domains are between integrally closed domains and GCD-domains.)

I have only found these $3$ examples/ideas (inspired by some old Math SE questions), which are made in order to improve the arithmetic properties, but I don't know if these are "official", known classes, that might follow the standard pattern of class inclusions:

$1)$ $Q_1$ shows when the quotient and remainder are uniquely determined. Let's call these U-Euclidean domains.

$2)$ $Q_2$ is a stronger case of U-Euclidean domains, because the euclidean function is a degree function, which gives uniqueness in the Euclidean division. In other words, $K$ and $K[X]$, where $K$ is a field, are U-Euclidean domains.

$3)$ $Q_3$ reveals that any euclidean function must take at least $3$ values for a non-field euclidean domain. I am also aware of this. To be specific, during the construction of the minimal norm, in that partition via $R_n$, $R_0$ is the set of units and $R_1$ is the set of all universal side divisors. If the domain is not a field, it follows from $Q_3$ that $R_1$ and $R_2$ are nonempty. An example where this partition is finite might be a solution to fill the gap. In any field $K$, $K=\{0\} \cup R_0$, which is indeed a finite partition. Finiteness might bring a domain closer to fields; in the case of the integers, even the Motzkin minimal function won't give a finite partition. Therefore, one can investigate the following situation (EDIT: I added the answer, so there is no need to answer it):

Q: Is there an example of a non-field euclidean domain such that $\exists k\geq 3$ with $R_i=\emptyset\ \forall i\geq k$?

A: NO. There is no non-field euclidean domain $R$ with the "finitude" described in $3)$. It is known that any euclidean function defined on $R$ is unbounded, including the minimal euclidean function $N$ described above ($N(x) \geq L(x)$, where $L$ is the factorization-length function). Due to this, $R_i \neq \emptyset\ \forall i\geq 0$. In fact, $Q_3$ can be improved by stating that any euclidean function must take infinitely many values (again, in the non-field case).

Minimum pair of integer sets with distinct sum of pairs

Posted: 05 Dec 2021 09:55 PM PST

Let $A, B$ be two sets of positive integers. Define $A\bigoplus B=\{a+b \mid a\in A, b\in B\}$, the set of all distinct sums of pairs in the Cartesian product of $A$ and $B$. If all such $mn$ sums are distinct, we want to know how compact the sets $A$ and $B$ can be. Formally:

Let $f(m, n)=\min\{\max(A\bigoplus B)\}$ over all $A, B\subset \mathbb{Z}^+$ with $|A|=m$, $|B|=n$, $|A\bigoplus B| = |A| \cdot |B|$. What do we know about $f(m, n)$?

Exact or asymptotic bounds would both help.

I can come up with a trivial bound $f(m, n) = O(2^{m+n})$, since any partition of $\{2^i \mid 0\leq i \leq m+n-1\}$ satisfies the above conditions. But given the similarity of this problem to Golomb rulers and Sidon sequences, I would conjecture a bound quadratic in $m$ and $n$.
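For very small $m, n$, the function can be brute-forced, which at least gives data points to test conjectures against. This is my own sketch; the `bound` parameter caps the search, and it is large enough for these cases because the reported values are provably optimal by counting (the $mn$ distinct sums are all at least $2$, so $\max(A\oplus B) \ge mn + 1$):

```python
from itertools import combinations

# Brute-force f(m, n): search all A, B inside {1, ..., bound} and minimize
# max(A (+) B) subject to all m*n pairwise sums being distinct.
def f(m, n, bound=12):
    best = None
    for A in combinations(range(1, bound + 1), m):
        for B in combinations(range(1, bound + 1), n):
            sums = {a + b for a in A for b in B}
            if len(sums) == m * n:          # all sums distinct
                cand = max(sums)
                if best is None or cand < best:
                    best = cand
    return best

print(f(2, 2), f(2, 3), f(3, 3))  # 5 7 10
```

Interestingly, these tiny cases meet the counting lower bound $mn+1$ exactly (e.g. $A=\{1,2,3\}$, $B=\{1,4,7\}$ gives the sums $2,\ldots,10$), though that cannot continue for large $m, n$.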

Complex number equation $x^2+x+(1-i)=0$

Posted: 05 Dec 2021 10:00 PM PST

Is there any easy way to solve the following equation? $$x^2+x+(1-i)=0$$ I have tried to write the discriminant $-3+4i$ in trigonometric form, but I do not think there is any nice way to get an angle $\alpha$ such that $\cos(\alpha)=-3/5$ and $\sin(\alpha)=4/5$...
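One shortcut that avoids trigonometric form: look for a square root of the discriminant with integer parts, i.e. solve $(a+bi)^2 = -3+4i$, which gives $a^2-b^2=-3$ and $2ab=4$, hence $a+bi = \pm(1+2i)$. A quick numerical sketch (mine) confirms the resulting roots:

```python
import cmath

# Quadratic formula for x^2 + x + (1 - i) = 0; the discriminant is -3 + 4i
# and cmath.sqrt returns its principal square root, 1 + 2i.
a, b, c = 1, 1, 1 - 1j
disc = b * b - 4 * a * c
r1 = (-b + cmath.sqrt(disc)) / (2 * a)
r2 = (-b - cmath.sqrt(disc)) / (2 * a)
print(r1, r2)  # i and -1 - i (up to rounding)
for r in (r1, r2):
    assert abs(r * r + r + (1 - 1j)) < 1e-12
```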

Calculate $ E(X) $ where $f(x,y) = \frac{1}{y}e^{-(y+\frac{x}{y})}$

Posted: 05 Dec 2021 10:02 PM PST

I am trying to solve the following probability problem:

Please calculate $ E(X) $ where the joint density function of x and y is:

$$f(x,y) = \frac{1}{y}e^{-(y+\frac{x}{y})} \text{ for } x,y > 0$$

It is well known that the expected value of a continuous random variable is:

$$ E(X) = \int_{-\infty}^\infty{xf_X(x)dx} $$

That means that we have to find $f_X(x)$ first by doing:

$$ f_X(x)=\int_{0}^\infty{\frac{1}{y}e^{-(y+\frac{x}{y})}}dy $$

Note: The zero lower bound is due to the given inequality $ y > 0$.

However, this integral turned out to be a handful (I tried substitution and integration by parts, to no avail). I then tried WolframAlpha, but it gave me an answer which I did not understand (link).

Am I even supposed to evaluate this integral in the first place? I do not know much about the Bessel function that appears in WolframAlpha's answer. Furthermore, I am only taking an introductory probability course, so the integral should not be this complex (we were not taught Bessel functions anyway).

Could someone please advise me on how I could calculate the value of $E(X)$?
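Before fighting the integral, it may help to notice (and numerically sanity-check) that the density factors as $e^{-y}\cdot\frac1y e^{-x/y}$, i.e. $Y \sim \text{Exp}(1)$ and, conditionally on $Y=y$, $X$ is exponential with mean $y$; that suggests computing $E(X)$ via iterated expectation, $E(X) = E[E(X\mid Y)]$, with no Bessel functions required. A simulation sketch (mine):

```python
import random

# Simulation sketch: sample Y ~ Exp(1), then X | Y = y ~ Exp(mean y),
# matching f(x, y) = e^{-y} * (1/y) e^{-x/y}, and average the X values.
random.seed(5)
N = 400000
total = 0.0
for _ in range(N):
    y = random.expovariate(1.0)        # marginal density e^{-y}
    x = random.expovariate(1.0 / y)    # conditional density (1/y) e^{-x/y}
    total += x
est = total / N
print(est)  # estimate of E(X)
```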

Functional analysis proof of Ramanujan's Master Theorem

Posted: 05 Dec 2021 10:22 PM PST

According to mathworld, Ramanujan's master theorem is the statement that if $$f(z) = \sum_{k=0}^{\infty} \frac{\phi(k) (-z)^k}{k!}$$ for some function (analytic or integrable) $\phi$, then $$\int_0^{\infty} x^{n-1} f(x) \, \mathrm{d}x = \Gamma(n) \phi(-n).$$

As written it is obviously false as the values of an (analytic or integrable) function $\phi$ at natural numbers do not determine its values anywhere else. However it turns out that $$\int_0^{\infty} x^{s-1} f(x) \, \mathrm{d}x = \Gamma(s) \phi(-s)$$ for arbitrary $s$ under growth conditions on $\phi$.

Recently I came across an elementary "proof": if $T$ denotes the shift operator $T\phi(s) := \phi(s+1),$ then we can write $$f(z) = \sum_{k=0}^{\infty} \frac{(-z)^kT^k \phi(0)}{k!} = e^{-zT}\phi (0)$$ such that $$\int_0^{\infty} x^{n-1} f(x) \, \mathrm{d}x = \int_0^{\infty} x^{n-1} e^{-xT} \phi(0) \, \mathrm{d}x = \Gamma(n) T^{-n}\phi(0) = \Gamma(n) \phi(-n),$$ by plugging $T$ into the Gamma integral $$\int_0^{\infty} x^{n-1} e^{-xs} \, \mathrm{d}x = \Gamma(n) s^{-n}.$$ I am curious whether this argument can be made rigorous with functional analysis on an appropriate function space (which necessarily would have to have some growth conditions).
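As a numerical spot check of the rigorous version (my own sketch, not a proof): take $\phi(k) = a^k$, so $f(z) = e^{-az}$ and the claim reduces to the Gamma integral itself, $\int_0^\infty x^{s-1}e^{-ax}\,dx = \Gamma(s)\,a^{-s} = \Gamma(s)\,\phi(-s)$, even at non-integer $s$:

```python
import math

# Spot check of the identity for phi(k) = a**k, where f(z) = exp(-a z):
# integral_0^inf x^{s-1} e^{-a x} dx should equal Gamma(s) * a**(-s).
a, s = 2.0, 2.5

def integrand(x):
    return x ** (s - 1) * math.exp(-a * x)

# composite Simpson's rule on [0, 40]; the tail beyond 40 is negligible
n, hi = 200000, 40.0
h = hi / n
total = integrand(0) + integrand(hi)
total += 4 * sum(integrand((2 * i - 1) * h) for i in range(1, n // 2 + 1))
total += 2 * sum(integrand(2 * i * h) for i in range(1, n // 2))
integral = total * h / 3

print(integral, math.gamma(s) * a ** (-s))  # the two values agree
```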
