Sunday, March 21, 2021

Recent Questions - Mathematics Stack Exchange


Why don't we define "the derivative" of a univariate function as the average of its directional derivatives?

Posted: 21 Mar 2021 10:05 PM PDT

I just noticed an interesting thing when looking at the derivative of the absolute value function $f(x) = |x|$. As is well-known, this function is differentiable (in the classical limiting sense) at all points except $x=0$. Specifically, the directional derivatives of the function for the downward and upward limits are given respectively by:

$$\begin{align} f_\downarrow'(x) &\equiv \lim_{\Delta \downarrow 0} \frac{f(x+\Delta)-f(x)}{\Delta} = \begin{cases} -1 & \text{if } x < 0, \\[6pt] 1 & \text{if } x \geqslant 0, \\[6pt] \end{cases} \\[16pt] f_\uparrow'(x) &\equiv \lim_{\Delta \uparrow 0} \frac{f(x+\Delta)-f(x)}{\Delta} = \begin{cases} -1 & \text{if } x \leqslant 0, \\[6pt] 1 & \text{if } x > 0. \\[6pt] \end{cases} \\[16pt] \end{align}$$

At the point $x=0$ we have $f_\downarrow'(x) = 1 \neq -1 = f_\uparrow'(x)$ so the derivative (in its classical sense) does not exist at this point. However, I have observed that in some applied mathematical fields, it is usual to use the following Radon-Nikodym derivative as "the derivative" of the function:

$$f'(x) = \text{sgn}(x) = \begin{cases} -1 & \text{if } x < 0, \\[6pt] 0 & \text{if } x = 0, \\[6pt] 1 & \text{if } x > 0. \\[6pt] \end{cases}$$

That is all very well and good, but I just noticed that this Radon-Nikodym derivative is the average of the directional derivatives. This leads me to wonder whether it would be useful to define "the derivative" of a univariate function as the average of its directional derivatives as a general rule. The obvious advantage of doing this is that this form of derivative would exist in a broader class of cases than in the classical definition (e.g., in this case), and it would correspond to the one we use in at least the case shown. Moreover, it is one valid Radon-Nikodym derivative of the function, so it seems that it would give us a concept of "the derivative" that is more specific than the Radon-Nikodym but more general than the classical limiting concept. I imagine that the drawback would be that it might be misleading to use this kind of Radon-Nikodym derivative in some applications, particularly where we use derivatives to extrapolate a direction for iterative purposes. Nevertheless, I thought I would ask some of you esteemed experts for specifics of why this method of defining the derivative is not used.


Question: Why don't we define "the derivative" of a univariate function as the average of its directional derivatives? Are there any fields/literature where this definition has been used?
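As a side note on the proposal: the average of the two one-sided difference quotients is exactly the symmetric difference quotient $(f(x+h)-f(x-h))/(2h)$, which for $f(x)=|x|$ at $x=0$ gives $0=\operatorname{sgn}(0)$, matching the Radon-Nikodym choice above. A minimal numerical sketch (helper names are mine):

```python
# Sketch: the average of the one-sided difference quotients of f at x
# equals the symmetric difference quotient (f(x+h) - f(x-h)) / (2h).

def one_sided_quotients(f, x, h=1e-8):
    """Right and left difference quotients of f at x (step h is an assumption)."""
    right = (f(x + h) - f(x)) / h
    left = (f(x) - f(x - h)) / h
    return right, left

def symmetric_quotient(f, x, h=1e-8):
    """Symmetric difference quotient of f at x."""
    return (f(x + h) - f(x - h)) / (2 * h)

right, left = one_sided_quotients(abs, 0.0)
average = (right + left) / 2   # 0.0, while neither one-sided quotient vanishes
```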

Approximation of Radon measure by smooth function

Posted: 21 Mar 2021 10:00 PM PDT

I would like to know how the following result is proved. Let $\Omega$ be a compact set in $\mathbb{R}^N$ and let $u \in C(\Omega)^*$. Then there exists a sequence $(u_n)_n \subset C_0^\infty(\Omega)$ such that $$\|u_n\|_{L^1(\Omega)} \leq C \ \ (C \text{ independent of } n) \quad \text{and} \quad u_n \rightarrow u \ \text{*-weakly in } C(\overline{\Omega})^*.$$ I would appreciate it if you could tell me the proof.

Do all rationals (Q) form a perfect set if the metric space is Q?

Posted: 21 Mar 2021 10:01 PM PDT

The following question came to my mind while reading the definition of a perfect set from Walter Rudin's Principles of Mathematical Analysis, 3rd Edition, Page 32, Section 2.18(h):

  • If the metric space is the set of all rational numbers $\mathbb Q$ (instead of $\mathbb R^1$ -- the set of all real numbers), then does $\mathbb Q$ constitute a perfect set?

In my understanding, the answer is "yes". That's because $\mathbb Q$ is closed (since $\mathbb Q$ contains each of its limit points when the underlying metric space is $\mathbb Q$) and because every point of $\mathbb Q$ is a limit point of $\mathbb Q$ (rationals lie arbitrarily close to one another). Still, I am not entirely sure and could not find confirmation elsewhere, hence this query.

Would appreciate help. Thanks.

How is the following relation transitive?

Posted: 21 Mar 2021 09:52 PM PDT

My book states: Let $A$ be a set and let $\omega$ be a subset of $P(A)$ (the power set). Suppose that the elements of $\omega$ are pairwise disjoint. Then the relation $\sim \omega$ associated with $\omega$ is transitive. I have tried taking example sets and drawing diagrams, but I just can't understand the theorem. Could anyone please explain it with a simple example?

Edit: here is some more context. Let $A$ be a set and $\omega$ be any subset of $P(A)$. If $a_1$ and $a_2$ are elements of $A$, then $a_1$ is related to $a_2$ if there exists an element $S \in \omega$ that contains both $a_1$ and $a_2$. The relation $\sim \omega$ is called the relation on $A$ associated with $\omega$.

unit of closure of involution subalgebra

Posted: 21 Mar 2021 09:42 PM PDT

Let $\mathbb{H}$ be a Hilbert space and $A$ an involution subalgebra of $B(\mathbb{H})$. Let $K=[A(\mathbb{H})]\subset \mathbb{H}$, where $[A(\mathbb{H})]$ is the closure of the set $\{a\xi: a\in A, \xi \in \mathbb{H}\}$, and let $e$ be the orthogonal projection operator from $\mathbb{H}$ onto $K$. A known fact is that $\forall a\in A$, $ae=ea=a$. Now let $\overline{A}$ be the closure of $A$ with respect to any one of the six topologies (wo, so, *-so, $\sigma$-wo, $\sigma$-so, $\sigma$-*-so). My question: why does $ea=ae=a$ hold for all $a\in \overline{A}$?

This problem comes from the proof of the von Neumann double commutant theorem (the general case, i.e. the subalgebra may be degenerate). My attempt is as follows:

Because $\overline{A}^{wo}=\overline{A}^{so}=\overline{A}^{*\text{-so}}$ and $\overline{A}^{\sigma\text{-wo}}=\overline{A}^{\sigma\text{-so}}=\overline{A}^{\sigma\text{-}*\text{-so}}$, it suffices to check the $so$ and $\sigma$-$so$ cases. First, for the $so$ topology, let $a=\lim_i a_i\in \overline{A}^{so}$. To check $ea=a=\lim_i a_i=\lim_i ea_i$, we only need to check $\lim_i e(a-a_i)=0$.

We know that $a=\lim_i a_i$ means that for all $\{\xi_k\}\subset\mathbb{H}$, $$\sum_k \|(a_i-a)\xi_k\|^2 \rightarrow 0,$$ and then $$\sum_k \|e(a_i- a)\xi_k\|^2 \le \sum_k \|e\|^2\|(a_i- a)\xi_k\|^2\le \sum_k \|(a_i-a)\xi_k\|^2 \rightarrow 0.$$ This means $\lim_i e(a-a_i)=0$. For the $\sigma$-$so$ topology, the proof is similar.

I want to know whether my proof is right or not. In the proof of the von Neumann double commutant theorem, this result is stated in just ONE sentence, so I suspect my argument is overly complicated!

Thanks in advance.

How can I take the complex logarithm of this equation?

Posted: 21 Mar 2021 09:54 PM PDT

I have $$ e^{i\beta}=\pm\frac{g_1}{\sqrt{1-g_0^2}} $$ where $g_0$ is a real value and $g_1$ is complex. I'm trying to solve for $\beta$ in the equation, but I'm not quite sure how I can do that. I tried to take the logarithm so that $$ \beta =-i \ln \left(\pm\frac{g_1}{\sqrt{1-g_0^2}}\right) $$ However, the answer looks like $$ \beta'=-i\log\frac{g_1}{\sqrt{1-g_0^2}}+k\pi,\ \text{where}\ k\in\mathbb{Z} $$ I'm wondering: where does the $\pm$ sign go? Why is there a $+k\pi$ term? Is that related to the Riemann surface? Thanks :)

If $\vec a\in\mathbb{R}^3$ is non-zero, then $\left \{ \vec e_1\times\vec a,\vec e_2\times\vec a,\vec e_3\times\vec a \right \}$ is linearly dependent

Posted: 21 Mar 2021 09:47 PM PDT

Proof: write $$\vec a=(\alpha,\,\beta,\,\gamma)=\alpha \vec e_1+\beta \vec e_2+\gamma \vec e_3;$$ then we have $$\alpha\, \vec e_1\times\vec a+\beta\, \vec e_2\times\vec a+\gamma\, \vec e_3\times\vec a=(\alpha \vec e_1+\beta \vec e_2+\gamma \vec e_3)\times\vec a=\vec a\times \vec a=\vec 0.$$ Since $(\alpha,\,\beta,\,\gamma)=\vec a\neq\vec 0$, the set $\{\vec e_1\times \vec a,\, \vec e_2\times \vec a ,\, \vec e_3\times \vec a\}$ is linearly dependent. Now we know the proposition is true, but look at the contrapositive: if the set $\{\vec e_1\times \vec a,\, \vec e_2\times \vec a ,\, \vec e_3\times \vec a\}$ is linearly independent, then $\vec a=\vec 0$, which means $\vec e_i \times\vec a=\vec 0$, so the set is linearly dependent. Contradiction!
What's wrong with that?

Defining equivariant maps of bimodules

Posted: 21 Mar 2021 09:44 PM PDT

In this paper of Shulman, at page 4, he describes the (pseudo-)double category of rings and bimodules. This is supposed to be a double category with rings as objects, ring homomorphisms as tight 1-morphisms, bimodules as loose 1-morphisms, and the 2-morphisms are "equivariant maps of bimodules". I do not understand the last bit (in scare quotes).

For instance, if we have $M$, a $R-S$ bimodule and $N$, a $R'-S'$ bimodule, what are the "equivariant" maps between $M$ and $N$? As far as I know, there are no relations/morphisms between $R,S,R',S'$.

Update: I am guessing that the data of an "equivariant map" $\psi: {_RM}_S \to {_{R'}M'}_{S'} $ is a triple $(\alpha:R \to R', \beta:S \to S', \psi:M \to M')$ where $\alpha, \beta$ are ring homomorphisms and $\psi$ is a map of bimodules with the induced action via the ring homomorphisms. Is this right?

Cyclic subgroups of Torus

Posted: 21 Mar 2021 10:06 PM PDT

Consider a torus $T_2:= S^1 \times S^1$. Fix integers $a,b$ such that $\gcd(a,b)=1$. We define the circle $S^1(a,b) \subset T_2$ via the following embedding:

\begin{align} S^1 &\hookrightarrow S^1 \times S^1 \\ t &\mapsto (t^a,t^b) \end{align}

Further we define the finite cyclic group $\mathbb{Z}_n(a,b)$ to be the standard $\mathbb{Z}_n \subset S^1(a,b)$ i.e the cyclic group defined via the embedding

\begin{align} \mathbb{Z}_n &\hookrightarrow S^1 \times S^1 \\ \zeta &\mapsto (\zeta^a,\zeta^b) \end{align}

where $\zeta$ is a primitive $\text{n}^{\text{th}}$ root of unity.

Given $\mathbb{Z}_n \subset S^1(a,b)$, under what conditions on $a^\prime, b^\prime$ is $\mathbb{Z}_n \subset S^1(a^\prime, b^\prime) \cap S^1(a,b)$?

I have tried working out several examples with different values for $n,a,b$, and for those values I do get that if $a^\prime \equiv a \pmod{n}$ and $b^\prime \equiv b \pmod{n}$ then $\mathbb{Z}_n \subset S^1(a^\prime, b^\prime) \cap S^1(a,b)$.

Is $a^\prime \equiv a \pmod{n}$ and $b^\prime \equiv b \pmod{n}$ a necessary and sufficient condition for $\mathbb{Z}_n \subset S^1(a^\prime, b^\prime) \cap S^1(a,b)$?

If this is indeed true then any help in proving it would be appreciated.

Polynomials with specific roots

Posted: 21 Mar 2021 09:40 PM PDT

Consider the set $$S = \{F \mid F \in \mathbb{Z}^{+}[x], \partial F = k \implies [x^i]F\neq 0\text{ } \forall \text{ } 0\leq i < k\}$$

For all real $\theta$, does there always exist a polynomial $H \in \mathbb{C}[x]$ such that $$(x^2 - 2x\cos\theta + 1)H \in S?$$

Note that here $\partial F$ is the degree of the polynomial. I found that for a transcendental $\cos \theta$ no such polynomial exists. I'm very unsure whether such a polynomial exists for all real $\theta$. Or is it possible to find all real $\theta$ which satisfy the condition?

Any help or hint would be highly appreciated. Thanks.

Is the set of points ${i/n}$, $n=1,2,3,...$ in the complex plane a closed set?

Posted: 21 Mar 2021 09:30 PM PDT

A set is said to be closed if it contains all of its boundary points, where a boundary point of a set in the complex plane is any point such that every neighborhood of the boundary point contains at least one point in the set and at least one point not in the set. At first glance, I assumed that the set $S$ defined by $S=\{ i/n\,\vert \,n\in\mathbb{Z}^+\}$ is closed, since each point $i/n$ is a boundary point of the set and is an element of the set. However, the more I think about it, the more I feel that the complex number $z=0$ is also a boundary point of the set, since any neighborhood of $0$ must contain a complex number that is arbitrarily close to $0$ on the positive imaginary axis (and therefore in $S$) and $0$ itself (which is not in $S$). Since this boundary point $z=0$ is not contained in the set, $S$ does not contain all of its boundary points, and is therefore not closed. I know that this is not a rigorous proof of my claim, but is this reasoning correct?

Rudin's RCA complex part exercise problem recommendation

Posted: 21 Mar 2021 09:34 PM PDT

Does anyone have a recommended list of exercise problems in the complex part of Big Rudin (RCA)?

I have been self-teaching grad-level complex analysis with this book and it would be helpful to have some selected problems in each chapter to work on.

Thanks in advance.

If $A + B$ is invertible, do we know if $A$ and $B$ are invertible separately?

Posted: 21 Mar 2021 09:43 PM PDT

If $A$ and $B$ are $n\times n$ matrices and we know that $A + B$ is invertible, can we assume that $A$ and $B$ are invertible by themselves?
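For reference while thinking about this: the answer is "no" in general, and a minimal numeric sketch shows it (the matrices $A = I$, $B = 0$ are my illustrative choice; the $2\times 2$ determinant is written out to stay dependency-free):

```python
# Sketch: invertibility of A + B does not transfer to A or B separately.
# Take A = I and B = 0: A + B = I is invertible, yet B is not.

def det2(m):
    """Determinant of a 2x2 matrix given as nested lists."""
    (a, b), (c, d) = m
    return a * d - b * c

A = [[1, 0], [0, 1]]   # identity: invertible
B = [[0, 0], [0, 0]]   # zero matrix: not invertible
S = [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]  # A + B

sum_invertible = det2(S) != 0   # True
b_invertible = det2(B) != 0     # False
```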

Set generated by irrational number in quotient group $\mathbb{R}/\mathbb{Z}$

Posted: 21 Mar 2021 10:06 PM PDT

I'm thinking of the set $S_r = \{nr \mod 1|n\in \mathbb{Z^+}\}$ for a fixed irrational number $r\in[0,1)$.

I just found that, from an algebraic perspective, $S_r$ is the set generated by $r$ in $\mathbb{R}/\mathbb{Z}$ (or, if you prefer, in the unit circle group).

There are many good questions about the set of rational numbers in $[0,1)$ (it even forms a group), but I cannot find any good properties of $S_r$ for irrational $r$.

I noticed the following facts:

  1. $S_r$ is dense.
  2. $q \notin S_r$ where $q$ is a rational number.
  3. If $r$ is an algebraic number, no transcendental number in $[0,1)$ is in $S_r$.
  4. If $r$ is a transcendental number, no algebraic number in $[0,1)$ is in $S_r$.

But these facts still do not account for all numbers. Is there a name for $S_r$, or known keywords to search for?

Power-Law PDFs and Expectations

Posted: 21 Mar 2021 10:07 PM PDT

A random variable $X$ follows the distribution: $$f_X(x)=\begin{cases} Cx^2 & -1\le x\le 2,\\ 0 & \text{otherwise}, \end{cases}$$

and $Y=X^2$. Calculate

  1. $C$
  2. $P(X\ge0)$
  3. $E[Y]$
  4. $V(Y)$

I know that I need to use the power-law form of the PDF, but as soon as this turns into integrals and derivatives I am lost and don't understand how these equations simplify. This is my first time working with derivatives, and without each step explained I'm lost immediately.

Any help would be appreciated. Does it have to do with the equation below? $$\int_0^1(a+1)x^a\,dx=x^{a+1}\Big|_0^1=1$$

  1. I am at a loss for how to find $C$ (I assume this denotes the normalizing constant).
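Since the integrals here are ordinary polynomial integrals, a numerical sketch can check any hand computation (the `integrate` helper is my own, a midpoint Riemann sum; the closed forms in the comments are my own working, not from the problem source):

```python
# Numerical sketch of the four quantities for f_X(x) = C x^2 on [-1, 2].
# Hand computation gives C = 1/3, P(X>=0) = 8/9, E[Y] = 11/5, V(Y) = 228/175;
# the code below only verifies these numerically.

def integrate(g, a, b, n=200_000):
    """Midpoint-rule approximation of the integral of g over [a, b]."""
    h = (b - a) / n
    return h * sum(g(a + (i + 0.5) * h) for i in range(n))

C = 1 / integrate(lambda x: x**2, -1, 2)        # normalization: C * integral = 1

def pdf(x):
    return C * x**2

p_nonneg = integrate(pdf, 0, 2)                  # P(X >= 0)
e_y = integrate(lambda x: x**2 * pdf(x), -1, 2)  # E[Y] = E[X^2]
e_y2 = integrate(lambda x: x**4 * pdf(x), -1, 2) # E[Y^2] = E[X^4]
v_y = e_y2 - e_y**2                              # V(Y) = E[Y^2] - E[Y]^2
```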

$\{(x,f(x)): x\in [a,b]\}$ is compact in $\mathbb R^2 \implies f:[a,b]\to\mathbb R$ is continuous

Posted: 21 Mar 2021 09:57 PM PDT

Given $f: [a,b]\to\mathbb R$, define $G: [a,b]\to\mathbb R^2$ by $G(x) = (x,f(x))$ for all $x\in [a,b]$. Show that the following are equivalent:

  1. $f$ is continuous.
  2. $G$ is continuous.
  3. The graph of $f$ is a compact subset of $\mathbb R^2$.

I have shown $1\implies 2$ and $2\implies 3$. Proving $3\implies 1$ would help me complete the argument full circle. Perhaps there are other ways too, such as showing $3\implies 2$ and $2\implies 1$ instead. Please let me know if the following makes sense (point out errors if any), and help me complete the proof.

$[1\implies 2]$: I used the sequential definition of continuity to show that if $x_n\to x$ and $f$ is continuous, then $f(x_n)\to f(x)$, and hence $G(x_n)\to G(x)$. Does this need more argument?

$[2\implies 3]:$ $[a,b]$ is compact by the Heine-Borel theorem, and the continuous image of a compact set is compact. Hence, $G([a,b])$, i.e. the graph of $f$ is compact in $\mathbb R^2$.


I found the following for $[3\implies 1]$, and I have some questions:

Let $G([a,b])$ be compact and $x_n\rightarrow x$ be a convergent sequence in $[a,b]$. We show that $f(x_n)$ converges to $f(x)$. Since the graph is compact, $f(x_n)$ has a convergent subsequence,

Q1. $G([a,b])$ is compact, so $(x_n,f(x_n))$ has a convergent subsequence. How does this tell us that $f(x_n)$ has a convergent subsequence? I feel we need to add some details here.

i.e. $f(x_{n_j})\rightarrow y$. That is $(x_{n_j}, f(x_{n_j}))\rightarrow (x, y)$. The graph is closed. That is, the limit of every convergent sequence in $G([a,b])$ is again in $G([a,b])$. Therefore $(x,y)\in G([a,b])$, i.e. $y=f(x)$. Since this is true for every convergent subsequence, we showed that $f(x)$ is the only limit point of $f(x_n)$, i.e. $f(x_n)$ converges to $f(x)$.

Q2. How is the last sentence concluded? We choose an arbitrary convergent subsequence $f(x_{n_j}) \to y$, and found $y = f(x)$. What now?


My attempt for $[3\implies 1]$:

Suppose $\Gamma = \{(x,f(x)): x\in [a,b]\}$ is compact in $\mathbb R^2$. To show that $f$ is continuous, it is enough to show that for every closed set $S \subset \mathbb R$, $f^{-1}(S)$ is closed in $[a,b]$. Denote the two projection maps on $\mathbb R^2$ by $$\phi_1(x,y) = x, \qquad \phi_2(x,y) = y;$$ both $\phi_1,\phi_2$ are continuous. Suppose $S\subset\mathbb R$ is an arbitrary closed set in $\mathbb R$, so that $$f^{-1}(S) = \{x\in [a,b]: f(x) \in S\}.$$ We can write $$f^{-1}(S) = \phi_1(\Gamma \cap \phi_2^{-1}(S)).$$ The above expression seems good intuitively, but I need help establishing it by showing the $\subseteq$ and $\supseteq$ inclusions. Next, since $\phi_2$ is continuous and $S\subset \mathbb R$ is closed, $\phi_2^{-1}(S) \subset \mathbb R^2$ is closed. Then $\Gamma \cap \phi_2^{-1}(S)$ is a closed subset of the compact set $\Gamma$, so $\Gamma \cap \phi_2^{-1}(S)$ is compact. Since $\phi_1$ is continuous, $f^{-1}(S) = \phi_1(\Gamma \cap \phi_2^{-1}(S))$ is compact. By Heine-Borel, $f^{-1}(S)$ is closed. Done!


Please help me fill in the gaps in my attempt, and let me know if you've any other ideas to solve the problem, such as $[3\implies 2]$ and $[2\implies 1]$. I'd also like to understand the proof posted above my attempt. Thank you!

Solve for $x:\quad 7^{x}- 6\equiv 0\pmod{17}$

Posted: 21 Mar 2021 10:04 PM PDT

Solve for $x:\quad 7^{x}- 6\equiv 0\pmod{17}$
Wolfram Alpha returned the answer as $x\equiv 13\pmod{16}$, i.e. $x=16n+13$ where $n\geq 0.$
But I am not able to proceed.
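One way to at least verify the answer: since $7$ has order $16$ modulo $17$ (note $7^8 \equiv 16 \equiv -1$), only the exponents $0$ through $15$ need testing, and solutions repeat with period $16$. A brute-force sketch (function name mine):

```python
# Brute-force discrete logarithm: smallest x >= 0 with base**x == target (mod m).
# For m = 17 the multiplicative order of any element divides 16, so testing
# exponents 0..15 is exhaustive.

def solve_discrete_log(base, target, mod):
    """Return the smallest x >= 0 with base**x congruent to target mod `mod`, else None."""
    power = 1 % mod
    for x in range(mod - 1):          # exponents 0 .. mod-2 suffice for a prime modulus
        if power == target % mod:
            return x
        power = (power * base) % mod
    return None

x = solve_discrete_log(7, 6, 17)      # 13, agreeing with Wolfram Alpha
```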

How do we differentiate $\log_{10}A$, $\log A$, $\ln A$?

Posted: 21 Mar 2021 09:51 PM PDT

How do we differentiate $\log_{10}A$, $\log A$, $\ln A$? I always thought $\log A = \log_{10}A$, but got stuck understanding one of the logarithm problems here recently. Now I am aware that quite a few folks interpret $\log A = \ln A$. I am just wondering how the majority of the users on this site use and distinguish these logarithm notations, to avoid confusion.

Prove $(\cos(2\pi t), \sin(2\pi t))$ is injective for $0\leq t \lt 1$

Posted: 21 Mar 2021 09:55 PM PDT

Prove $(\cos(2\pi t), \sin(2\pi t))$ is injective for $0\leq t \lt 1$

Intuitively it makes sense - this traces out the unit circle - but how do we show this rigorously from the definitions?

Let $A, B$ be some sets such that $A\setminus B$ is infinite while $B$ countable or finite. Prove that $A\setminus B\sim A$

Posted: 21 Mar 2021 09:56 PM PDT

The problem is the following :
Let $A, B$ be some sets such that $A\setminus B$ is infinite while $B$ countable or finite. Prove that $A\setminus B\sim A.$

I got stuck in this problem by trying the following:
First, since $A\setminus B\subseteq A,$ by a lemma I can assume that there is an injection $f: A\setminus B\to A.$ Thus, I now need to find an injection $A\to A\setminus B,$ so that I can use the Schröder–Bernstein theorem, which states:
If there is an injection $f: A\to B,$ and another injection $g: B\to A,$ then there is a bijection from $A$ to $B.$
As $A\setminus B$ is infinite, there must exist a countable set $C$ such that $C\subset A\setminus B.$
Now I may write $A= (A\setminus B)\cup C\cup(A\cap B).$ Since $A\cap B\subseteq B,$ and $B$ is countable or finite, $A\cap B$ is countable or finite as well, which means there exists an injection $A\cap B\to\mathbb{N}.$
Since $C$ is countable, there then exists an injection $l: A\cap B\to C.$
That is, I found an injection $A\cap B\to C,$ but how can I now find an injection $A\to A\setminus B$ in order to finish the proof via the theorem?

Find the value of $T=\mathop {\lim }\limits_{n \to \infty } {\left( {1+ \frac{{1+\frac{1}{2}+ \frac{1}{3}+ \cdots +\frac{1}{n}}}{{{n^2}}}} \right)^n}$

Posted: 21 Mar 2021 10:06 PM PDT

I am trying to evaluate

$$T = \mathop {\lim }\limits_{n \to \infty } {\left( {1 + \frac{{1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{n}}}{{{n^2}}}} \right)^n}.$$

My solution is as follows:

$$T = \mathop {\lim }\limits_{n \to \infty } {\left( {1 + \frac{{1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{n}}}{{{n^2}}}} \right)^n} \Rightarrow T = {e^{\mathop {\lim }\limits_{n \to \infty } n\left( {1 + \frac{{1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{n}}}{{{n^2}}} - 1} \right)}} = {e^{\mathop {\lim }\limits_{n \to \infty } \left( {\frac{{1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{n}}}{n}} \right)}}$$

$$T = {e^{\mathop {\lim }\limits_{n \to \infty } \left( {\frac{1}{n} + \frac{1}{{2n}} + \frac{1}{{3n}} + \cdots + \frac{1}{{{n^2}}}} \right)}} = {e^{\left( {0 + 0 + \cdots + 0} \right)}} = {e^0} = 1$$

The solution is correct, but I presume my step $\mathop {\lim }\limits_{n \to \infty } \left( {\frac{{1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{n}}}{n}} \right) = \mathop {\lim }\limits_{n \to \infty } \left( {\frac{1}{n} + \frac{1}{{2n}} + \frac{1}{{3n}} + \cdots + \frac{1}{{{n^2}}}} \right) = 0$ is not properly justified, since the number of terms grows with $n$.

Is there any general method?
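As a numerical sanity check of the limit: the bound $H_n \le 1 + \ln n$ gives exponent $n \cdot H_n/n^2 = H_n/n \to 0$, so the limit is $e^0 = 1$ without any term-by-term argument. A small sketch (helper names mine):

```python
# Evaluate (1 + H_n / n^2)^n for growing n, where H_n is the n-th harmonic
# number.  The values should decrease toward the claimed limit 1, since the
# exponent behaves like H_n / n ~ ln(n) / n -> 0.

def term(n):
    h = sum(1.0 / k for k in range(1, n + 1))  # harmonic number H_n
    return (1.0 + h / n**2) ** n

values = [term(n) for n in (10, 1_000, 100_000)]
```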

Why is $-\frac{8}{17}$ taken instead of $\frac{8}{17}$ when trying to find $\cos x$ from $\tan x = \frac{15}{8}$ and $x \in [\pi, \dfrac{3\pi}{2}]$?

Posted: 21 Mar 2021 09:39 PM PDT

$\sec^2 (x) = \tan^2 (x) + 1$

$\sec^2 (x) = \left(\dfrac{15}{8}\right)^2 + 1$

$\sec (x) = \pm \sqrt{\dfrac{289}{64}}$

$\sec (x) = - \dfrac{17}{8}$ because $x \in \left[\pi, \dfrac{3\pi}{2}\right]$ <--- This is where I don't understand why we take the negative.

Given a parabolic shape with maximum height $OC=8m$ and maximum width $AB=20m$. If $M$ is the middle of $OB$, then what is the height $MK$, from $M$?

Posted: 21 Mar 2021 09:58 PM PDT

Given a parabolic shape with maximum height $OC=8m$ and maximum width $AB=20m$. If $M$ is the middle of $OB$, then what is the height $MK$, from $M$?


I attempted to solve the question as follows:

$OB=10m$

Hence, $OM=5m$

$OC=8m$

I state that $O$ is point $(0,0)$ on the Cartesian coordinate system.

$\implies M(5, 0), C(0, 8), B(10, 0), A(-10, 0)$. I then attempted to find the coordinates of $K$, but I don't know how to. Obviously, $\triangle AKB$ is isosceles, but I couldn't see anything else further than that. Could you please explain to me how to solve this question?
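If one is willing to assume the arch is the parabola through $A(-10,0)$, $B(10,0)$ and $C(0,8)$ (the natural reading here, though an assumption), the height above any $x$ can be read off directly; a tiny sketch:

```python
# Parabola with vertex C(0, 8) and roots x = +/-10:  y = 8 * (1 - (x/10)^2).
# K is the point on the arch above M(5, 0), so MK = arch(5).

def arch(x):
    return 8 * (1 - (x / 10) ** 2)

MK = arch(5)   # height above the midpoint of OB
```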

Maxima of a given set of integers

Posted: 21 Mar 2021 09:39 PM PDT

If $a,b,c$ and $d$ are four different positive integers selected from $1$ to $25$, then what is the highest possible value of the following?$$\frac{a+b+c+d}{a+b+c-d}$$


I assumed $a+b+c = x$ and then:

$$\frac{a+b+c+d}{a+b+c-d} = \frac{x+d}{x-d} = \frac{1+d/x}{1-d/x}.$$

Assuming $d/x$ to be $r$, this becomes $\frac{1+r}{1-r}$.

From here I am kind of lost and couldn't figure out a way to proceed; using derivatives didn't help much either. Can someone please help me with this?

Update : If I take $d = 25$ in order to maximize the numerator and minimize the denominator then we'll get $\frac{x+25}{x-25}$.

Thanks in advance !
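Since the search space is tiny ($\binom{25}{4}$ value choices times $4$ choices of $d$), a brute-force sketch can confirm any candidate maximum (names mine; `Fraction` avoids float comparison issues, and the case $a+b+c=d$ is skipped to avoid division by zero):

```python
# Exhaustive search over four distinct integers from 1..25, trying each
# member of the quadruple as d.  The maximum 51 is attained at d = 25 with
# a + b + c = 26 (e.g. {1, 2, 23}), where the denominator equals 1.
from itertools import combinations
from fractions import Fraction

best = None
for combo in combinations(range(1, 26), 4):
    for d in combo:
        rest = sum(combo) - d          # a + b + c
        if rest == d:
            continue                   # denominator would be zero
        value = Fraction(rest + d, rest - d)
        if best is None or value > best:
            best = value
```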

How to calculate the probability of dice outcomes of a D$30$?

Posted: 21 Mar 2021 09:30 PM PDT

A D$30$ die is rolled three times; will the difference between the maximum and minimum value be greater than $15$?

So I guess this is the same as just picking three random numbers between $1$ and $30$, and looking at the biggest and smallest number to see if the difference is greater than $15$. But what I am wondering is: how do I calculate the different probabilities here?
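Since there are only $30^3 = 27000$ equally likely ordered outcomes, an exhaustive sketch settles the numerical part directly (names mine):

```python
# Count ordered triples of rolls whose range (max - min) exceeds 15.
from itertools import product
from fractions import Fraction

favorable = sum(
    1 for rolls in product(range(1, 31), repeat=3)
    if max(rolls) - min(rolls) > 15
)
prob = Fraction(favorable, 30**3)   # exact probability as a fraction
```

The same count follows by hand from "range exactly $w$" contributing $6w$ triples for each of the $30-w$ placements, summed over $w = 16, \dots, 29$.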

Connected Component in a Group

Posted: 21 Mar 2021 09:37 PM PDT

Let $A$ be a unital $C^*$- algebra. Let $U = \{ u \in A : u^*u=uu^*=1\}$ be the unitary group of $A$.

Let $U'= \{ e^{ia_1}e^{ia_2} \cdots e^{ia_n} : a_k = a^*_k \in A, \text{for } 1\leq k \leq n \}$.

Show that $U'$ is both open and closed in $U$ so that $U'$ is the connected component of the identity in $U$.

If $A$ is commutative, show that $U' = \{ e^{ia} : a = a ^* \in A\}$.

I thought of using the theorems below from Banach Algebra Techniques in Operator Theory by Douglas. For Prop. 2.9, I got stuck showing that $U'$ is the connected component that contains the identity.

Also, to apply theorem 2.14, we want the set of self-adjoint elements in $A$ to be a Banach algebra.

Any help or suggestions will be appreciated.

Thank you!

[screenshots of Prop. 2.9 and Theorem 2.14 omitted]

Formula to calculate logarithmic curve in the context of IT security

Posted: 21 Mar 2021 10:01 PM PDT

Background

I'm actually a dev, but I think this question fits here as it's about maths. I'm implementing a site-wide throttle on too many failed requests as protection against distributed brute-force attacks.

The question I am stuck with is, after how many failed login requests should I start to throttle?
Now one reasonable way is, as mentioned here, "using a running average of your site's bad-login frequency as the basis for an upper limit". If the site has an average of $100$ failed logins, $300$ (buffer added) might be a good threshold.

Now I don't have a running average, and I don't want someone having to actively increase the upper limit as the user base grows. I want a dynamic formula that calculates this limit based on the number of active users.

The difficulty is that if there are only a few users, they should have a much higher user-to-threshold ratio than, say, $100{,}000$ users. For example, with $50$ users the limit could be set at $50\%$ of the total user count, i.e. allowing $25$ failed login requests site-wide in a given timespan. But this ratio should decrease as the user base grows: for 100k users the threshold should be more like $1\%$, since $1000$ failed login requests in the same hour is a lot (probably not accurate at all, I am not a security expert; the numbers are only examples to illustrate).

The Question

I was wondering, is there any mathematical formula that could achieve this in a neat way?

This is a chart of what I think the formula should be calculating approximately:

[chart omitted]

Here is what I have now (I know it's terrible, any suggestion will be better I'm sure):

    $threshold = 1;

    if ($activeUsers <= 50) {
        // Global limit is the same as the total of each user's individual limit
        $threshold *= $activeUsers; // If user limit is 4, global threshold will be 4 * user amount
    } elseif ($activeUsers <= 200) {
        // Global requests allow each user to make half of the individual limit
        // simultaneously over the last defined timespan
        $threshold = $threshold * $activeUsers / 2;
    } elseif ($activeUsers <= 600) {
        $threshold = $threshold * $activeUsers / 2.5;
    } elseif ($activeUsers <= 1000) {
        $threshold = $threshold * $activeUsers / 3.5;
    } else { // More than 1000
        $threshold = $threshold * $activeUsers / 5;
    }
    return $threshold;
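One possible smooth replacement, purely as a sketch: fit a power law $\text{ratio}(u) = C\,u^{k-1}$ to the two anchor points suggested in the question ($50$ users at a $50\%$ ratio, $100{,}000$ users at $1\%$; both figures are the question's illustrative guesses, not security advice). Python rather than PHP, for brevity:

```python
import math

# Power-law threshold fitted to two assumed anchor points:
#   ratio(50) = 0.50  and  ratio(100_000) = 0.01.
# With ratio(u) = C * u**(k - 1), the threshold C * u**k grows with the user
# base while the per-user allowance shrinks smoothly, replacing the step
# function in the snippet above.
U1, R1 = 50, 0.50
U2, R2 = 100_000, 0.01

k = 1 + math.log(R2 / R1) / math.log(U2 / U1)   # exponent from the two anchors
C = R1 * U1 ** (1 - k)                          # scale from the first anchor

def threshold(active_users):
    """Site-wide failed-login threshold for a given active-user count."""
    return max(1, round(C * active_users ** k))
```

The two anchors (and the floor of 1) are the only tunables; any other pair of anchor points yields a different exponent.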

Expectation of a betting game involving generalized Catalan numbers

Posted: 21 Mar 2021 09:44 PM PDT

I've recently become interested in a type of biased random walk. In the original context, the simplest problem I cannot answer is as follows:

Suppose you enter a simple but lousy betting game. The price to enter is \$3, and a fair coin is flipped. If it lands heads you win \$4, otherwise you win nothing. So, each game either increases your wealth by \$1 or decreases it by \$3. If you begin with just \$3 and repeatedly enter the game until you can no longer pay the fee, what is the expected number of games you will play?

Actually, I'd like to be able to answer this question for any game where the cost is $m$, the prize is $m+1$, and we have any given amount of starting wealth $w$. This similar question+answer shows that the expectation diverges when $m=1$ for any starting wealth.

Another way to phrase the question would be as a random walk. If you consider our position on the number line to be our total wealth after paying the fee, the case $m=w=1$ is a random walk from the origin that ends whenever the position becomes negative. The general case $m$ is a sort of random walk, but really it's more of a "stumble", since we either advance one step or reverse $m$ steps. I am interested in the expected length of these $m$-stumbles. Here is what I know so far:

The expected number of games, $g_{m,w}$, satisfies an easy recurrence relating your starting wealth:

$$ g_{m,w} = 1 + \frac{1}{2}g_{m,w+1} + \frac{1}{2}g_{m,w-m}$$

Which I find has generating function:

$$\sum_{w \ge 0} g_{m,w}z^w = \left(\alpha_m - \frac{2z}{1-z}\right)\left(\frac{1}{1 - 2z + z^{m+1}}\right)z^{m}$$

where $\alpha_m = g_{m,m}$ represents the average length of an $m$-stumble from the origin. The $g$ terms are related to the Fibonacci n-step numbers by $A_{w}\alpha_m - 2B_{w}$ where A is a partial sum of the first $w$ n-step numbers $\sum_{k \le w}F_{k}^{(n)}$, and B is a double convolution, i.e. the sum of the first $w-1$ partial sums $\sum\sum_{k < w}F^{(n)}_{k}$. The easiest way to find $g_{m,w}$ given some $g_{m,s}$ that I know of is to use the recurrence directly.

I have found the following double sum formula for general $\alpha_m$:

$$\alpha_m = \sum_{n = 0}^{\infty} \sum_{k=1}^{m} k\binom{mn + n + k}{n} / 2^{mn + n + k}$$

which more-or-less follows from the definition of the Fuss-Catalan numbers. Each term represents the contribution to the expectation of games ending at flip number $(m+1)n+k$. The inner sum does not include the term $k=0$ because it is not possible to lose after flip numbers divisible by $m+1$. (After flip $n$ your total wealth must be congruent to $n+m \mod{(m+1)}$, since it increases by exactly $1$ or $-m \equiv +1 \mod{(m+1)}$ each flip, and it begins at $w = m \equiv -1 \mod{(m+1)}$, so if you haven't already lost your wealth must be at least $m$ meaning you are guaranteed to have enough to pay the fee for these games.)

I don't know how to evaluate this sum, but I do know at least one value and have good guesses for others. We know from the linked question $\alpha_1 = \infty$, but for $m > 1$, $\alpha_m$ converges. The one value I know is:

$$\alpha_2 = \sum_{n=0}^{\infty} \binom{3n+1}{n}/2^{3n+1} + 2\sum_{n=0}^{\infty}\binom{3n+2}{n}/2^{3n+2} = 1 + \sqrt{5}$$

This is the value Mathematica gives me. I don't know how it arrives at this value, but it appears correct and matches my simulations. Unfortunately, Mathematica chokes on $\alpha_3$ and higher. I guessed that $\alpha_m$ is the root of an $m$-th order polynomial, so $\alpha_2 = 1 + \sqrt{5} = (x^2 - 2x - 4)_{+}$, meaning $\alpha_2$ is the unique positive real root of $x^2 - 2x - 4 = 0$.
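As an independent numerical check of the $\alpha_2$ value (the terms decay like $(27/32)^n$, so a few hundred terms suffice for double precision; this only verifies the closed form, it does not derive it):

```python
# Sum the alpha_2 series term by term and compare against 1 + sqrt(5).
# Python's arbitrary-precision integers keep the binomials exact; only the
# final division rounds to a float.
import math

total = sum(
    math.comb(3 * n + 1, n) / 2 ** (3 * n + 1)
    + 2 * math.comb(3 * n + 2, n) / 2 ** (3 * n + 2)
    for n in range(400)
)
```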

A value for $\alpha_3$ would answer the initial problem statement in this question. I find the following guesses match the value of my sum with very high accuracy, to at least 100 digits:

$$ \begin{align} \alpha_3 &= (x^3 - 4x - 4)_{+} \\ \alpha_4 &= (3x^4 + 4x^3 - 8x^2 - 24x - 16)_{+} \\ \alpha_5 &= (2x^5 + 5x^4 - 20x^2 - 32x - 16)_{+} \end{align} $$

but even knowing these values I have no idea how to prove them, even for the apparently simple cases of $\alpha_2$ and $\alpha_3$. Does anyone know how to find $\alpha_3$ or general $\alpha_m$?

maximum volume?

Posted: 21 Mar 2021 09:59 PM PDT

[image of the gutter omitted]

I know I have to derive the formula for the volume $V=A_{\text{base}}\cdot h$, with $h=600\,\text{cm}$, set its derivative equal to zero, and the result would be the maximum, but I don't know what parameters to take to get the area of the triangle. I hope you can help me, thank you!

Determining the number of colourings of regions of a pentagon by Burnside's Lemma

Posted: 21 Mar 2021 10:02 PM PDT

I have the question:
The symmetry group of a regular pentagon has 10 elements. Use 4 colours to colour the 5 regions of the pentagon below. Determine the total number of distinct colourings of the pentagon using Burnside's lemma/counting theorem. (Two colourings are considered to be the same if there is a symmetry of the regular pentagon that maps one colouring to the other.)

[image of the pentagon omitted]

Is this how I would go about a question like this?
My solution.

I'll refer to each $|\operatorname{fix}(g)|$ by a particular letter. First, I have '$A$', the identity: any one of the 4 colours could be chosen for each of the 5 regions. This gives $4^5$.

Then I have the rotations around the centre point, '$B$'. There are 4 rotations ($\frac {1}{5}, \frac 25, \frac 35, \frac 45$ of a full turn), each forcing all 5 regions to share a single one of the 4 colours. Therefore I have $4 \times 4$ possible choices.

Finally, '$C$': I have the reflections whose axis 'cuts' through an edge. There are 5 such axes. Each reflection causes 2 'pairs' of opposite regions to swap/transpose, and the transposed regions must share their partner's colour. So I have the region on the axis, which can be one of 4 colours, and each of the two pairs, which can also be one of the 4 colours. Therefore $C = 5 \times 4^3$.

To find the number of colourings, the equation is: $$ \begin{align} \text{#Colourings} &= \tfrac {1}{10}[A + B + C] \\ &= \tfrac {1}{10}[4^5 + (4 \times 4) + (5\times 4^3)] \\ &= \tfrac {1}{10}[1360] \\ &= 136. \end{align} $$

Is this the correct method to go about a problem like this/have I missed anything?
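One way to double-check the count, assuming the 5 regions are the 5 sectors permuted cyclically by rotations (my reading of the figure): enumerate all $4^5$ colourings and apply Burnside's lemma directly over the 10 symmetries of $D_5$.

```python
# Burnside check: #colourings = (sum over g of |fix(g)|) / |G| for the
# dihedral group D5 acting on 5 sectors labelled 0..4.
from itertools import product

def rotation(k):
    """Cyclic shift of the 5 sectors by k positions."""
    return tuple((i + k) % 5 for i in range(5))

def reflection(k):
    """Reflection i -> (k - i) mod 5; each fixes one sector and swaps two pairs."""
    return tuple((k - i) % 5 for i in range(5))

group = [rotation(k) for k in range(5)] + [reflection(k) for k in range(5)]

fixed_total = 0
for perm in group:
    fixed_total += sum(
        1 for c in product(range(4), repeat=5)       # all 4^5 colourings
        if all(c[i] == c[perm[i]] for i in range(5))  # fixed by this symmetry
    )

num_colourings = fixed_total // len(group)
```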
