Recent Questions - Mathematics Stack Exchange
- Distributivity of dot product in a scalar multiplier
- Best notation for partial derivatives
- Sets having the same cardinality
- $u_{xx}+u_{yy}+e^{-u}(u_x^2+u_y^2)=0$
- What are non-numerical variables such as "Heads" and "Tails" officially called?
- Derivative of Unit Speed Parameterization of a Curve
- Do arbitrary metrics have particular convex or concave properties
- Why do the vector fields commute but the flows not commute in this example
- When will $f\simeq g$ imply $X\bigsqcup_f Y\simeq X\bigsqcup_g Y$?
- Dini number in Lebesgue differentiation proof
- Are $\mathbb{Z}[\frac{1 + \sqrt{5}}{2}]$ and $\mathbb{Z}[\frac{1 + \sqrt{3}}{2}]$ integral extensions over $\mathbb{Z}$?
- Is this proof about sequences correct?
- Finding pdf, E(Y) and sd for continuous random variable
- When do we say that a sequence of real-valued functions does not converge pointwise?
- Calculating CAGR without fractional re-investments
- $ \int \frac{x^3}{\sqrt{x^2+x}}\, dx$
- Does this set have cardinality $\beth_\omega$? And if so, how does it require the Axiom of Replacement to construct?
- Derive the equation of motion for test masses
- Proof of Reverse of Heisenberg Commutation Relationship
- I want to learn maths at master's level by the age of 20. I am currently 17 and doing a bachelor's. How much should I study?
- Cumulative Distribution Function of $S_{N_{t}}$ where $S_{N_{t}}$ is the time of the last arrival in $[0, t]$
- The Hilbert function and polynomial of $S = k[x_1, x_2, x_3, x_4]$ and $I = (x_1x_3, x_1x_4, x_2x_4)$ step clarification.
- If $xy = ax + by$, prove the following: $x^ny^n = \sum_{k=1}^{n} {{2n-1-k} \choose {n-1}}(a^nb^{n-k}x^k + a^{n-k}b^ny^k),n>0$
- Double layer potential in 1d?
- Finding $f$ s.t. the sequence of functions $f_n(x)=f (x − a_n )$ is not a.e. convergent to $f$
- For any $n-1$ non-zero elements of $\mathbb Z/n\mathbb Z$, we can make zero using $+,-$ if and only if $n$ is prime
- Solving system of polynomial matrix equations over $\Bbb Z_2$
- How to finish this argument showing preimage of maximal ideal is maximal under surjective map. [duplicate]
- Vector Field on the Real Projective Plane
- Correction of -0.5 in percentile formula
Distributivity of dot product in a scalar multiplier Posted: 18 Apr 2021 07:50 PM PDT I landed on this expression while solving a problem: $$(\vec a\times\vec b)(\vec a \cdot \vec c)-(\vec a\times\vec c)(\vec a \cdot \vec b)$$ To simplify this, I thought of factoring the $\vec a$ out, and it seemed okay to do so, since dot products are distributive. But I don't know what happens to the $\vec c$ and $\vec b$ it was dotted with when it's taken out. Together, each pair of dotted vectors formed a scalar multiplier for each vector term, but now that I've taken the $\vec a$ out, I have no idea how to treat the other two. Is taking the $\vec a$ out wrong here? If so, why?
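A numeric sanity check may help here (my own sketch, not part of the original question; the vectors are arbitrary test values). By the BAC-CAB identity, the posted expression equals $\vec a\times(\vec a\times(\vec b\times\vec c))$, so $\vec a$ can be factored out of the cross products, while the dot products stay behind as scalar coefficients:

```python
# Check that (a·c)(a×b) − (a·b)(a×c) equals a×(a×(b×c)):
# "factoring a out" works for the cross product only; the dot
# products remain as scalar coefficients on b and c.

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def scale(s, u):
    return tuple(s * ui for ui in u)

def sub(u, v):
    return tuple(ui - vi for ui, vi in zip(u, v))

a, b, c = (1.0, 2.0, 3.0), (-2.0, 0.5, 4.0), (3.0, -1.0, 2.0)

lhs = sub(scale(dot(a, c), cross(a, b)), scale(dot(a, b), cross(a, c)))
rhs = cross(a, cross(a, cross(b, c)))   # a×(a×(b×c))

assert all(abs(l - r) < 1e-9 for l, r in zip(lhs, rhs))
```

Here `lhs` is the posted expression and `rhs` is the factored form; the assertion passes for any choice of vectors.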
Best notation for partial derivatives Posted: 18 Apr 2021 07:49 PM PDT I know this one is a soft question so I will tag it as such. I'm aware of the following ways in which derivatives are represented: let's say we have a function $$f=f(x,y)$$ then the derivative with respect to $x$, for example, is $$\frac{\partial f}{\partial x}$$ but often it will be written as $$\partial_xf,\,f_x$$ and in several other forms. The reason I ask is that when you have equations with large numbers of derivatives it can be tedious to type or write them all out fully; e.g. $$\frac{\partial^2T}{\partial r^2}+\frac1r\frac{\partial T}{\partial r}+\frac{1}{r^2}\frac{\partial^2T}{\partial\theta^2}=0$$ can be nicely abbreviated to $$\partial^2_rT+\frac{\partial_rT}{r}+\frac{\partial^2_\theta T}{r^2}=0$$ or maybe $$T_{rr}+\frac1rT_r+\frac1{r^2}T_{\theta\theta}=0$$ Does anyone have an opinion on which shorthand is the least ambiguous or most generally accepted? If you have any other interesting notation, I'd like to see it. I am also aware of common operator notation like $\Delta,\nabla$.
Sets having the same cardinality Posted: 18 Apr 2021 07:46 PM PDT I am asked to think of an example of two sets X and Y with the same cardinality, together with a function from X to Y that is one-to-one but not onto. I am so confused about this one because I thought there has to be a one-to-one correspondence between X and Y for their cardinalities to be the same. How is it possible for there to be just a one-to-one function? Can anyone please explain? Thank you!
$u_{xx}+u_{yy}+e^{-u}(u_x^2+u_y^2)=0$ Posted: 18 Apr 2021 07:41 PM PDT Find the general solution (involving two arbitrary functions) of the following PDE by making the change of dependent variable $z = e^u$: $$u_{xx}+u_{yy}+e^{-u}(u_x^2+u_y^2)=0$$ [Hint: The general solution will involve complex quantities.] I don't understand the hint given. Are complex numbers involved here?
What are non-numerical variables such as "Heads" and "Tails" officially called? Posted: 18 Apr 2021 07:42 PM PDT Is there a mathematical term for non-numerical variables such as Heads/Tails (referring to a coin flip), Rock/Paper/Scissors, or Up/Down? For example, I want to say that my set $\mathcal{S}$ is filled with ______, where _______ are things like Heads, Tails / Left, Right / Up, Down, etc. One such set is $\mathcal{S} = \{$Head, Tail$\}$, another set is $\mathcal{S} = \{$Left, Right$\}$, and yet another is $\mathcal{S} = \{$Happy, Sad, Mad$\}$. Note that I am not assuming a probability context. Non-numerical variables appear all over math, such as in game theory (Confess/Betray). What should I use as the proper mathematical terminology in the above blanks: States? Strings? Tokens? Literals? Symbols? Alphabets? Words?
Derivative of Unit Speed Parameterization of a Curve Posted: 18 Apr 2021 07:42 PM PDT I've been working on this question for about 3 hours now. Part (a) asks to show that the derivative of the unit speed parameterization function is perpendicular to its second derivative. As far as I understand, this is only necessarily true in the case of a circle, but my instructor told me it works in the case of any curve. As for part (b), I'm confused about the notation since the notation used on the assignment is different from any parameterization notation I've seen elsewhere. Is the "unit speed parameterization" a single equation? Any and all help is much appreciated.
Do arbitrary metrics have particular convex or concave properties Posted: 18 Apr 2021 07:32 PM PDT Given an arbitrary metric $d$, I want to define a concave continuous function in terms of $d$. For example, if $d$ is the Euclidean metric then it is convex, so $-d$ is concave. Ideally there would be some function which transforms any metric into a concave continuous function, but I don't think that's very likely. Instead, I wonder if it is possible to classify the kinds of metrics and give a function for each kind of metric, in the following manner: suppose every metric is either concave or convex; then either $d$ or $-d$ is a concave continuous function. So is there some classification of metrics in terms of (quasi/strict) convexity and concavity? Alternatively, let me know if you have good reason to believe that it is impossible to construct a concave continuous function from an arbitrary metric.
Why do the vector fields commute but the flows not commute in this example Posted: 18 Apr 2021 07:31 PM PDT I was working on Problem 9-19 in Lee's Smooth Manifolds, which is stated as follows:
I solved the ODEs and obtained the solutions: $$\theta_t\circ \psi_s = (p_1+t,p_2+s,\arctan(\frac{s+p_2}{p_1})+p_3-\arctan(\frac{p_2}{p_1}))\\\psi_s\circ\theta_t = (p_1+t,p_2+s,-\arctan(\frac{t+p_1}{p_2}) +p_3 + \arctan(\frac{p_1}{p_2}))$$ These are obviously not equal, yet both are defined on all of $\Bbb{R}^3\setminus \{z\}$. Doesn't this contradict Theorem 9.44, which says that vector fields commute if and only if their flows commute, assuming I haven't made a mistake in the computation? Is my computation correct? It is quite involved. Why is it not consistent with the theorem?
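For what it's worth, the two formulas can be compared numerically (a sketch of my own; the point $p$ and parameters $t, s$ are arbitrary test values, chosen with $p_1, p_2 \neq 0$ so the arctangents are defined):

```python
import math

# Evaluate the two composite flows exactly as written in the post.
def theta_after_psi(p, t, s):
    p1, p2, p3 = p
    return (p1 + t, p2 + s,
            math.atan((s + p2) / p1) + p3 - math.atan(p2 / p1))

def psi_after_theta(p, t, s):
    p1, p2, p3 = p
    return (p1 + t, p2 + s,
            -math.atan((t + p1) / p2) + p3 + math.atan(p1 / p2))

p, t, s = (1.0, 1.0, 0.0), 0.5, 0.5
x1 = theta_after_psi(p, t, s)
x2 = psi_after_theta(p, t, s)
# First two coordinates agree; the third generally differs.
assert x1[:2] == x2[:2]
assert abs(x1[2] - x2[2]) > 1e-6
```

At this sample point the third coordinates come out near $0.197$ and $-0.197$, so the discrepancy is not a one-off slip; the tension with Theorem 9.44 must then be resolved in the hypotheses (e.g. in the domains on which the flows and their compositions are actually defined).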
When will $f\simeq g$ imply $X\bigsqcup_f Y\simeq X\bigsqcup_g Y$? Posted: 18 Apr 2021 07:35 PM PDT Let $X,Y$ be two topological spaces and $f,g:X\rightarrow Y$ be two homotopic continuous maps between them. My question is: when are the adjunction spaces $X\bigsqcup_f Y$ and $X\bigsqcup_g Y$ homotopy equivalent? Intuitively, $f,g$ give the way of attaching, and if $f\simeq g$, then the attaching determined by $f$ can be transformed continuously into the attaching determined by $g$, and hence the resulting adjunction spaces should be homotopy equivalent. But I suspect this result is not true in general; otherwise it would be proved explicitly in standard textbooks of algebraic topology. So my question is: when is the above true? By asking this, I'm not trying to pursue the most general result; I want some special cases and a justification of why my intuition above fails in general. Thanks. Aside: the reason I came up with this question is that I want to find the relation between homotopy of continuous maps and homotopy equivalence of spaces; more explicitly, I want to know whether there is any sense in which homotopic maps induce homotopy-equivalent spaces.
Dini number in Lebesgue differentiation proof Posted: 18 Apr 2021 07:26 PM PDT In Stein's Real Analysis, I'm trying to prove that if $F$ is of bounded variation, then it is differentiable almost everywhere. To do this, the proof uses Dini numbers.
I'm having trouble showing that $D^-(F)(x) \le D_+(F)(x)$. \begin{align*} D^+(-F)(-x) &= \limsup_{\substack{h \to 0 \\ h > 0}} \Delta_h (-F)(-x) \\ &= \limsup_{\substack{h \to 0 \\ h > 0}} \frac{-F(-x+h) + F(-x)}{h} \\ &= \limsup_{\substack{h \to 0 \\ h < 0}} \frac{-F(-x-h) + F(-x)}{-h} \\ &= \limsup_{\substack{h \to 0 \\ h < 0}} \frac{F(-x-h) - F(-x)}{h} \end{align*} How do I turn the last term into $D^-(F)(x)$?
Are $\mathbb{Z}[\frac{1 + \sqrt{5}}{2}]$ and $\mathbb{Z}[\frac{1 + \sqrt{3}}{2}]$ integral extensions over $\mathbb{Z}$? Posted: 18 Apr 2021 07:24 PM PDT I am currently reading Atiyah's book on commutative algebra, and someone gave me this question that I can't quite figure out:
Is this proof about sequences correct? Posted: 18 Apr 2021 07:37 PM PDT Let $(u_n)$ and $(v_n)$ be two real sequences with limits $L$ and $M$ respectively. If $x_n=\max(u_n,v_n)$ and $y_n=\min(u_n,v_n)$, prove that the sequences $x_n$ and $y_n$ converge to $\max(L,M)$ and $\min(L,M)$ respectively. My attempt: It is given that $u_n$ and $v_n$ converge to $L$ and $M$ respectively, so $u_n+v_n \to L+M$. Now, taking limits, $x_n=\max(u_n,v_n)=\frac12(u_n+v_n+|u_n-v_n|)\to\frac12(L+M+|L-M|)=\max(L,M)$. Similarly, $y_n=\min(u_n,v_n)=\frac12(u_n+v_n-|u_n-v_n|)\to\frac12(L+M-|L-M|)=\min(L,M)$. Is this correct?
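The identity $\max(a,b)=\frac12(a+b+|a-b|)$ used in the attempt can be sanity-checked numerically (an illustrative sketch; the sequences $u_n = L + (-1)^n/n$ and $v_n = M + 1/n$ are my own toy example):

```python
# Numeric illustration: u_n -> L, v_n -> M, and max(u_n, v_n) -> max(L, M),
# because max(a, b) = (a + b + |a - b|) / 2 is continuous in (a, b).
L_lim, M_lim = 2.0, 3.0
u = [L_lim + (-1)**n / n for n in range(1, 10001)]
v = [M_lim + 1.0 / n for n in range(1, 10001)]
x = [max(a, b) for a, b in zip(u, v)]
y = [min(a, b) for a, b in zip(u, v)]
assert abs(x[-1] - max(L_lim, M_lim)) < 1e-3
assert abs(y[-1] - min(L_lim, M_lim)) < 1e-3
```

The check illustrates why the argument works: once the identity is in place, convergence follows from continuity of $t \mapsto |t|$ and the algebra-of-limits rules.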
Finding pdf, E(Y) and sd for continuous random variable Posted: 18 Apr 2021 07:43 PM PDT $W \sim$ Uniform$(0,7)$ and $S = 5W^{3/2} + 5$, with $S$ ranging over $(0,100)$. The pdf of $W$ is $f(w) = \frac{1}{b-a} = \frac{1}{7}$ for $0 < w < 7$. I do not know whether to integrate from $0$ to $100$ or from $0$ to $7$ to get the pdf. Find the pdf, $E(S)$, and the sd of $S$.
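A Monte Carlo sanity check is possible here (my own sketch, assuming the intended model is $W \sim$ Uniform$(0,7)$ and $S = 5W^{3/2}+5$; under that model $S$ actually ranges over $(5,\,5\cdot 7^{3/2}+5)\approx(5,97.6)$, and integrating over $w$ from $0$ to $7$ gives $E[S]=5\cdot\frac{2}{5}\cdot 7^{3/2}+5\approx 42.04$):

```python
import random

# Monte Carlo estimate of E[S] for S = 5*W**1.5 + 5, W ~ Uniform(0, 7).
# Exact value: E[S] = 5 * (1/7) * integral_0^7 w^{3/2} dw + 5
#                   = 5 * (2/5) * 7**1.5 + 5.
random.seed(0)
n = 200_000
samples = [5 * random.uniform(0, 7) ** 1.5 + 5 for _ in range(n)]
mean = sum(samples) / n
exact = 5 * (2 / 5) * 7 ** 1.5 + 5
assert abs(mean - exact) < 0.5
```

This also answers the integration-range question indirectly: expectations of functions of $W$ are integrals in $w$ over $(0,7)$ against $f(w)=1/7$, while the pdf of $S$ itself lives on the range of $S$.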
When do we say that a sequence of real-valued functions does not converge pointwise? Posted: 18 Apr 2021 07:12 PM PDT I am not able to think much on this. All I can think of is that maybe at some point in the given interval the limit function may tend to infinity.
Calculating CAGR without fractional re-investments Posted: 18 Apr 2021 07:16 PM PDT The typical CAGR formula assumes that upon receipt of initial profits, all profits can be reinvested at the same rate of return. This is useful for people investing in the stock market, where they can buy fractions of a share; however, what if an investment required fixed amounts of capital for reinvestment? For example, if someone bought and sold a home for a profit of 10%, they could not then go buy 1.1 homes, even though they would now have 110% of their original starting capital. Assuming a consistent growth rate and cost of investment, they would need to repeat this action 10 times before they could buy two homes at once. Then, of course, they would only need to repeat the process 5 times on the two homes before moving up to three homes at a time ... we all get the point. Does anyone know of a formula for this? Edit: I am aware my example is imperfect; obviously, in the real world, real estate investors use the extra capital to buy nicer homes. It was the best hypothetical I could come up with to get the point across. Another effective example would be trading in the stock market without fractional shares.
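There may be no simple closed form in general, but the process described above is easy to simulate (a sketch; the price, rate, and horizon are hypothetical illustration values). While capital supports only one unit, growth is simple interest rather than compound:

```python
# Simulation of reinvestment in whole units only.
def grow_whole_units(capital, unit_price, rate, periods):
    for _ in range(periods):
        units = int(capital // unit_price)    # can only deploy whole units
        capital += units * unit_price * rate  # profit only on deployed capital
    return capital

cap = grow_whole_units(capital=100.0, unit_price=100.0, rate=0.10, periods=10)
# With only one unit affordable for the first 10 periods, growth is
# simple interest: 100 + 10 * 10 = 200.
assert abs(cap - 200.0) < 1e-9
```

With these numbers the first doubling takes 10 periods, so the effective compound rate over that stretch is $2^{1/10}-1\approx 7.2\%$, below the nominal 10%; as capital grows, the idle fraction shrinks and the effective rate creeps back toward the fractional-reinvestment CAGR.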
$ \int \frac{x^3}{\sqrt{x^2+x}}\, dx$ Posted: 18 Apr 2021 07:30 PM PDT I'm trying to solve this irrational integral $$ \int \frac{x^3}{\sqrt{x^2+x}}\, dx$$ doing the substitution $$ x= \frac{t^2}{1-2 t}$$ according to the rule. So the integral becomes: $$ \int \frac{-2t^6}{(1-2t)^4}\, dt= \int (-\frac{1}{8}t^2-\frac{1}{4}t-\frac{5}{16}+\frac{1}{16}\frac{-80t^3+90t^2-36t+5}{(1-2t)^4})\, dt=\int (-\frac{1}{8}t^2-\frac{1}{4}t-\frac{5}{16}+\frac{1}{16}(\frac{10}{1-2t}-\frac{15}{2} \frac{1}{(1-2t)^2}+\frac{3}{(1-2t)^3}-\frac{1}{2} \frac{1}{(1-2t)^4}))\, dt=-\frac{1}{24}t^3-\frac{1}{8}t^2-\frac{5}{16}t-\frac{5}{16}\cdot \ln|1-2t| -\frac{15}{64}\frac{1}{1-2t}+\frac{3}{64} \frac{1}{(1-2t)^2}-\frac{1}{16 \cdot 12} \frac{1}{(1-2t)^3}+ C $$ with $t=-x+ \sqrt{x^2+x}$. The final result according to my book is instead $(\frac{1}{3}x^2-\frac{5}{12}x+\frac{15}{24})\sqrt{x^2+x}-\frac{5}{16}\ln( x+\frac{1}{2}+ \sqrt{x^2+x})$. Trying to obtain the same solution by substituting $t$ back into my formulas, I get completely lost in the calculation... I don't understand the difference in the complexity of the two solutions... Can someone show me where I'm making mistakes?
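One way to narrow down the mistake is to spot-check the book's result numerically (my own sketch): if a central finite difference of the book's $F$ matches the integrand at several points, the book's antiderivative is correct, and any discrepancy with the $t$-substitution form is either a constant of integration or an algebra slip in the back-substitution.

```python
import math

# Compare a central finite difference of the book's F
# against the integrand x^3 / sqrt(x^2 + x) at a few points.
def F(x):
    s = math.sqrt(x * x + x)
    return (x * x / 3 - 5 * x / 12 + 15 / 24) * s \
        - (5 / 16) * math.log(x + 0.5 + s)

def integrand(x):
    return x ** 3 / math.sqrt(x * x + x)

h = 1e-6
for x in (0.5, 1.0, 2.0, 5.0):
    assert abs((F(x + h) - F(x - h)) / (2 * h) - integrand(x)) < 1e-4
```

Since this check passes, the book's expression is a genuine antiderivative, and the two answers can differ at most by a constant once the $t$-form is fully simplified.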
Does this set have cardinality $\beth_\omega$? And if so, how does it require the Axiom of Replacement to construct? Posted: 18 Apr 2021 07:39 PM PDT I was reading this Wikipedia article: https://en.wikipedia.org/wiki/Von_Neumann_universe, and it mentions that the Axiom of Replacement is required to go outside of $V_{\omega+\omega}$, one of the levels of the von Neumann hierarchy. If I'm correct, this means that the Axiom of Replacement would be required to construct a set of cardinality $\beth_\omega$, since such sets would only exist in higher levels of the hierarchy. But now let the set $N_0 = \mathbb N$, and let $N_{i+1} = P(N_i)$, where $P(N_i)$ is the power set of $N_i$. Now let the set $A$ be the union of all $N_i$ for each $i \in \mathbb N$. It seems to me that $A$ has cardinality $\beth_\omega$. Certainly, $N_i$ has cardinality $\beth_i$, and $A$ cannot have cardinality $\beth_m$ for any natural number $m$, because $A$ contains $N_{m+1}$, which has a strictly larger cardinality. So the cardinality of $A$ must be higher than $\beth_m$ for all $m\in \mathbb N$. Further, sets of cardinality $\beth_{\omega + 1}$ or higher can be easily constructed by taking $P(A)$ and so on. If $A$ does not have cardinality $\beth_\omega$, then how so? And if it does, where in this construction is the Axiom of Replacement invoked? $N_0$ (or something analogous) exists by the Axiom of Infinity. Then all other $N_i$ exist by the Axiom of Power Set. Admittedly, there is then some subtlety in applying the Axiom of Union, since sets must be in a larger set together before a union can be taken. One might try to construct $A$, or at least a similar set, by repeatedly using the Axiom of Pairing and the Axiom of Union, but this doesn't seem to work, admittedly. But I'm still unsure how exactly adding the Axiom of Replacement solves this problem. If we want to invoke the axioms of ZFC explicitly in the construction of $A$, where and how does the Axiom of Replacement come into play?
Derive the equation of motion for test masses Posted: 18 Apr 2021 07:37 PM PDT I am currently on the topic of Newtonian gravity, which is described as a field theory by means of the Poisson equation $$\Delta \phi = 4\pi G\rho\,.$$ As an assignment I have: derive the equation of motion for test masses. (1 point, short task) I understand the individual words, but I don't understand their combination in this sentence. What should I derive, and from what? Should I differentiate with respect to the mass, i.e. $\frac{d}{dm}$? And why? What is the point of that? Where should this lead me? This task confuses me.
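A hedged reading (my interpretation, not from the original assignment): "derive" here likely means "obtain from the theory", not "differentiate". For a test mass $m$ moving in the potential $\phi$, Newton's second law with the gravitational force $\vec F = -m\nabla\phi$ gives

```latex
m\,\ddot{\vec{x}} = -m\,\nabla\phi(\vec{x})
\qquad\Longrightarrow\qquad
\ddot{\vec{x}} = -\nabla\phi(\vec{x}),
```

so the trajectory is independent of $m$: test masses probe the field $\phi$, which is in turn sourced by $\rho$ through the Poisson equation.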
Proof of Reverse of Heisenberg Commutation Relationship Posted: 18 Apr 2021 07:30 PM PDT I'm trying to prove that if, for $P,Q,Z\in M(n,\mathbb{C})$, it holds that $\exp(\sigma P)\exp(\tau Q)=\exp(\hbar \sigma\tau Z)\exp(\tau Q)\exp(\sigma P)$ for all $\sigma, \tau\in \mathbb{R}$, then $[P,Q]=Z$, $[P,Z]=0$, and $[Q,Z]=0$. To prove $[P,Q]=Z$, one can just differentiate with respect to $\sigma$ and then $\tau$, but as far as I can tell, differentiating does not yield the other two as easily.
I want to learn maths at master's level by the age of 20. I am currently 17 and doing a bachelor's. How much should I study? Posted: 18 Apr 2021 07:18 PM PDT What should I begin with, and what should I learn? Also, can you tell me the best textbooks in which I can find problems that explain concepts and exercise critical as well as creative thinking? Please also tell me about the best YouTube channels out there that explain maths at a master's level, so I can pursue a PhD when I am a little older than I am now. Finally, can you please tell me how I should apply to prestigious universities and what their requirements are?
Cumulative Distribution Function of $S_{N_t}$ where $S_{N_t}$ is the time of the last arrival in $[0, t]$ Posted: 18 Apr 2021 07:12 PM PDT I am confused on this problem. My professor gave this as the solution: $S_{N_t}$ is the time of the last arrival in $[0, t]$. For $0 < x \leq t$, $P(S_{N_t} \leq x) = \sum_{k=0}^{\infty} P(S_{N_t} \leq x \mid N_t=k)P(N_t=k) = \sum_{k=0}^{\infty} P(S_{N_t} \leq x \mid N_t=k) \, \frac{e^{- \lambda t}(\lambda t)^k}{k!}$. Let $M=\max(S_1, S_2, \dots, S_k)$ where the $S_i$ are i.i.d. with $S_i \sim$ Uniform$[0,t]$ for $i = 1,2,\dots,k$. So, $P(S_{N_t} \leq x) = \sum_{k=0}^{\infty} P(M \leq x)\frac{e^{- \lambda t}(\lambda t)^k}{k!} = \sum_{k=0}^{\infty} \left(\frac{x}{t}\right)^k \frac{e^{- \lambda t}(\lambda t)^k}{k!} = e^{- \lambda t} \sum_{k=0}^{\infty} \frac{(\lambda x)^k}{k!} = e^{- \lambda t}e^{\lambda x} = e^{\lambda(x-t)}$. If $N_t = 0$, then $S_{N_t} = S_0 =0$. This occurs with probability $P(N_t = 0) = e^{- \lambda t}$. Therefore, the cdf of $S_{N_t}$ is: $P(S_{N_t} \leq x) = \begin{cases} 0 & x < 0 \\ e^{\lambda (x-t)} & 0\leq x\leq t \\ 1 & x \geq t \end{cases}$ I don't really understand the part of creating the variable $M$ as the maximum of $k$ i.i.d. random variables in order to solve the problem. Any help would be greatly appreciated, thank you!
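The step with $M$ can be checked by simulation (my own sketch; the values of $\lambda$, $t$, $x$ are arbitrary test inputs). Conditional on $N_t = k$, the $k$ arrival times are distributed like $k$ i.i.d. Uniform$[0,t]$ points, so the last arrival behaves like their maximum $M$, and $P(M \leq x) = (x/t)^k$:

```python
import math
import random

# Monte Carlo check that P(last arrival in [0, t] <= x) = exp(lam*(x - t)).
random.seed(1)
lam, t, x = 1.5, 2.0, 1.2
trials = 200_000
hits = 0
for _ in range(trials):
    s, last = 0.0, 0.0
    while True:                      # Poisson process via exponential gaps
        s += random.expovariate(lam)
        if s > t:
            break
        last = s
    if last <= x:                    # last = 0 covers the N_t = 0 case
        hits += 1
assert abs(hits / trials - math.exp(lam * (x - t))) < 0.01
```

The empirical frequency agrees with $e^{\lambda(x-t)}$, which supports both the conditioning argument and the uniform-maximum step.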
The Hilbert function and polynomial of $S = k[x_1, x_2, x_3, x_4]$ and $I = (x_1x_3, x_1x_4, x_2x_4)$ step clarification. Posted: 18 Apr 2021 07:42 PM PDT My professor, based on pp. 320-321 of Eisenbud, wrote the following: Let $I = (m_1, \dots, m_l)$ be a minimal set of monomial generators, let $I' = (m_1, \dots, m_{l-1}) \subsetneq I,$ and let $d = \operatorname{deg}m_l.$ $$S\mu \xrightarrow{\varphi} S/I' \rightarrow S/I \rightarrow 0$$ $$\mu \mapsto m_l + I^{'}$$ Then we have: $$\operatorname{ker}{\varphi} = (I': m_l) = J = (m_1/\operatorname{gcd}(m_1, m_l), \dots , m_{l-1}/\operatorname{gcd}(m_{l - 1}, m_l))$$ Therefore, $\operatorname{im}(\varphi) \cong (S/J)\mu$ and we have the s.e.s.: $$0 \rightarrow (S/J)\mu \xrightarrow{\tilde{\varphi}} S/I' \rightarrow S/I \rightarrow 0 $$ Regard $S\mu$ as a graded $S$-module with $\operatorname{deg}\mu = d.$ Then the s.e.s. is graded, so for each $i \in \mathbb Z$ we get a s.e.s. of vector spaces $$0 \rightarrow (S/J)_{i - d} \rightarrow (S/I')_{i} \rightarrow (S/I)_i \rightarrow 0 $$ which implies that $H_{S/I}(i) + H_{S/J}(i -d) = H_{S/I'}(i)$, and this gives us an algorithm to find $H_{S/I}(i).$ Now, here is her solution to the question I mentioned above: 1- Put $m_l = x_2x_4$ and $I' = (x_1x_3, x_1x_4)$; then $J = (I': x_2x_4) = (x_1x_3/\operatorname{gcd}(x_1x_3, x_2x_4), x_1x_4/\operatorname{gcd}(x_1x_4, x_2x_4)) = (x_1x_3, x_1) = (x_1)$. And $S/J = k[x_2, x_3, x_4],$ so $H_{S/J}(d) = \binom{3 + d -1}{d} = \frac{(d + 1)(d + 2)}{2}$ where $d = \operatorname{deg} m_l,$ and $H_{S/J}(i - 2) = \frac{(i-1)i}{2}$ for $i \geq 2.$ 2- Put $m_{l}^{'}= x_1x_4$ and $I^{''} = (x_1x_3)$; then $\operatorname{deg}m_{l}^{'} = 2$ and $J' = (I'': x_1x_4) = (x_1x_3/\operatorname{gcd}(x_1x_3, x_1x_4)) = (x_1x_3/x_1) = (x_3)$. And $S/J' = k[x_1, x_2, x_4],$ so $H_{S/J'}(d) = \binom{3 + d -1}{d} = \frac{(1 + d)(2 + d)}{2}$ where $d = \operatorname{deg} m_l^{'},$ and $H_{S/J'}(i - 2) = \frac{(i - 1)i}{2}$ for $i \geq 2.$ So $$H_{S/I^{'}}(i) + H_{S/J^{'}}(i - 2) = H_{S/I^{''}}(i).$$ Since $H_{S/J^{'}}(i - 2) = \frac{(i - 1)i}{2}$ for $i \geq 2,$ it remains to find $H_{S/I^{''}}(i).$ Now, since $S/I^{''} = \frac{k[x_1, x_3]}{(x_1 x_3)}[x_2, x_4],$ a polynomial ring over the indeterminates $x_2, x_4$ whose monomials are $x_1^a x_{2}^b x_4^c$ and $x_3^ax_2^b x_4^c,$ we get $$S/I^{''} = k[x_1, x_2, x_4] + k[x_3, x_2, x_4].$$ But then my professor wrote $H_{S/I^{''}}(d) = 2 \frac{(d + 1)(d + 2)}{2} - (d + 1) = (d + 1)^2$. Also, I know that $H_{S/I}(i)$ should be called the Hilbert function in degree $i$, but what should be called the Hilbert polynomial, and in which variable is it a polynomial? Could anyone clarify this to me, please?
If $xy = ax + by$, prove the following: $x^ny^n = \sum_{k=1}^{n} {{2n-1-k} \choose {n-1}}(a^nb^{n-k}x^k + a^{n-k}b^ny^k)$, $n>0$ Posted: 18 Apr 2021 07:36 PM PDT If $xy = ax + by$, prove the following: $$x^ny^n = \sum_{k=1}^{n} {{2n-1-k} \choose {n-1}}(a^nb^{n-k}x^k + a^{n-k}b^ny^k) = S_n$$ for all $n>0$. We'll use induction on $n$ to prove this. My approach is to use this formula: $$ \frac{k}{r} {{r} \choose {k}} = {{r-1} \choose {k-1}}$$ I'd like to show: $$S_{n} = xy \cdot S_{n-1}.$$ Or: $$\sum_{k=1}^{n} {{2n-1-k} \choose {n-1}}(a^{n}b^{n-k}x^k + a^{n-k}b^{n}y^k) = (ax+by)\sum_{k=1}^{n-1} {{2n-3-k} \choose {n-2}}(a^{n-1}b^{n-k-1}x^k + a^{n-1-k}b^{n-1}y^k)$$ We have: $$ (ax+by)\sum_{k=1}^{n-1} {{2n-3-k} \choose {n-2}}(a^{n-1}b^{n-k-1}x^k + a^{n-1-k}b^{n-1}y^k) = \sum_{k=1}^{n-1} {{2n-3-k} \choose {n-2}}(a^{n}b^{n-k-1}x^{k+1} + a^{n-k}b^{n-1}xy^k + a^{n-1}b^{n-k}x^ky + a^{n-1-k}b^ny^{k+1}) = \sum_{k=2}^{n} {{2n-2-k} \choose {n-2}}(\pmb{a^{n}b^{n-k}x^{k}} + a^{n-k+1}b^{n-1}xy^{k-1} + a^{n-1}b^{n-k+1}x^{k-1}y + \pmb{a^{n-k}b^ny^{k}}) = \sum_{k=2}^{n} \frac{n-1}{2n-1-k} {{2n-1-k} \choose {n-1}} [...] $$ Now we can almost extract the intended term ($S^{'}_{n}$): $$ \sum_{k=2}^{n} {{2n-1-k} \choose {n-1}} (a^{n}b^{n-k}x^{k} + a^{n-k}b^ny^{k}) + \sum_{k=2}^{n} {{2n-1-k} \choose {n-1}} (a^{n-k+1}b^{n-1}xy^{k-1} + a^{n-1}b^{n-k+1}x^{k-1}y) + \sum_{k=2}^{n} (\frac{n-1}{2n-1-k}-1) {{2n-1-k} \choose {n-1}} [...] $$ Further derivation is possible, but it does not seem very promising.
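Before fighting the induction, it may reassure you to verify the identity exactly for small $n$ (a sketch of my own; $a$, $b$, $x$ are arbitrary rationals, and $y = ax/(x-b)$ enforces the hypothesis $xy = ax + by$):

```python
from fractions import Fraction
from math import comb

# Exact rational check of x^n y^n = sum_{k=1}^n C(2n-1-k, n-1)
#                                    * (a^n b^{n-k} x^k + a^{n-k} b^n y^k).
a, b, x = Fraction(2), Fraction(3), Fraction(5)
y = a * x / (x - b)              # solves xy = ax + by for y
assert x * y == a * x + b * y

for n in range(1, 7):
    S_n = sum(comb(2*n - 1 - k, n - 1) *
              (a**n * b**(n - k) * x**k + a**(n - k) * b**n * y**k)
              for k in range(1, n + 1))
    assert S_n == x**n * y**n
```

Exact `Fraction` arithmetic rules out floating-point coincidences, so a passing check for several $n$ is decent evidence that the algebra, not the statement, is where the induction is going astray.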
Double layer potential in 1d? Posted: 18 Apr 2021 07:19 PM PDT I would like to illustrate the double layer potential idea with a simple 1d example, but I seem to run into a situation where the resulting integral equation is singular. The problem is $u''(x) = 0$ on $[0,1]$, subject to $u(0) = a$, $u(1) = b$. A free-space Green's function for this problem is given by $G_0(x,y) = \frac{1}{2}|x-y|$. This satisfies four desirable properties of the free-space Green's function:
An associated dipole can be expressed as \begin{equation} \lim_{\varepsilon \to 0}\frac{1}{2}\frac{|x-\frac{\varepsilon}{2}| - |x+\frac{\varepsilon}{2}|}{\varepsilon} = \frac{1}{2} - H(x) \equiv -\frac{\partial G_0(x,y)}{\partial y} \end{equation} Expressing the solution as a double layer potential, I get \begin{equation} u(x) = \mu(y)\left(-\frac{\partial G_0(x,y)}{\partial y}\right) \bigg\rvert_{y=0}^{y=1} = \mu(1)\left(\frac{1}{2} - H(x-1)\right) - \mu(0)\left(\frac{1}{2} - H(x)\right) \end{equation} where $H(x)$ is the Heaviside function. To get an integral equation, I evaluate the above at the endpoints $x = 0^+$ and $x = 1^+$, where "+" indicates taking a limit as $x$ approaches the boundary point from within the interval $[0,1]$. The resulting integral equation is given by \begin{eqnarray} u(0^+) & = & \frac{\mu(0)}{2} + \frac{\mu(1)}{2} = a \\ u(1^+) & = & \frac{\mu(1)}{2} + \frac{\mu(0)}{2} = b \end{eqnarray} which is clearly singular, and can only be solved if $a = b$. My question is, where did I go wrong? Or, if the above is correct, is there an explanation for why the 1d double layer potential doesn't exist for $a \ne b$? I have considered the following ideas:
\begin{equation} u(x) = -\frac{\mu(0)}{2} + \mu(0) H(x) + \frac{\mu(1) - \mu(0)}{2} x - \mu(1)H(x-1) \end{equation} with $\mu(0) = 2a$ and $\mu(1) = 2b$. This satisfies necessary double-layer jump conditions, but the dipole representation is not obviously the derivative of a free-space Green's function.
Finding $f$ s.t. the sequence of functions $f_n(x)=f (x − a_n )$ is not a.e. convergent to $f$ Posted: 18 Apr 2021 07:47 PM PDT The following is an exercise from Bruckner's Real Analysis (the book titled "Real Analysis", not the "Elementary" one):
I don't understand how "$f_n(x)=f (x − a_n )$ is not a.e. convergent to $f$" can happen at all: the functions $f(x − a_n)$ are 'moving' to become $f(x)$ as $n \to \infty$, and it seems we could only fail to have a.e. convergence to $f$ if we considered some different $f$ that is not the a.e. limit of $f_n$. So what is the use of fat Cantor sets here?
For any $n-1$ non-zero elements of $\mathbb Z/n\mathbb Z$, we can make zero using $+,-$ if and only if $n$ is prime Posted: 18 Apr 2021 07:50 PM PDT Inspired by Just using $+$, $-$, $\times$ using any given n-1 integers, can we make a number divisible by n?, I wanted first to try to answer a simpler version of the problem, which considers only two operations instead of three. Let $n>2$ and suppose parentheses are not allowed. Then, there are equivalent ways to ask this:
Alternatively, we can also ask to partition an $n-1$ element (multi)set $S$ into two subsets $S_+$ and $S_-$, such that the difference between the sum of the elements in $S_+$ and the sum of the elements in $S_-$ is a multiple of $n$ (is equal to $0$ modulo $n$). For example, if $n=3$ then there are only $3$ (multi)sets we need to consider: $$ \begin{array}{} 1-1=0\\ 1+2=0\\ 2-2=0\\ \end{array} $$ which are all solvable (we can make a $0$ in $\mathbb Z/n\mathbb Z$). In general, there are $\binom{2n-3}{n-1}$ (multi)sets to consider for a given $n$. My conjecture is that every such (multi)set is solvable if and only if $n$ is a prime number. If $n$ is not prime, then it is not hard to see that this cannot be done for all (multi)sets. If $n$ is even, take all $n-1$ elements equal to $1$ to build an unsolvable (multi)set. If $n$ is odd, take $n-2$ elements equal to a prime factor of $n$ and the last element equal to $1$ to build an unsolvable (multi)set. It remains to show that if $n$ is prime, then all such (multi)sets are solvable. I have confirmed this for $n=3, 5, 7, 11, 13$ using a naive brute-force search. Can we prove this conjecture? Or can we find a prime that does not work?
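The brute-force search mentioned above can be sketched as follows (my own implementation; it folds the set of reachable residues over the multiset, so each multiset costs about $O(n^2)$ rather than $2^{n-1}$):

```python
from itertools import combinations_with_replacement

# For one multiset, fold the set of residues reachable by signed sums;
# the multiset is "solvable" iff 0 is reachable.
def solvable(ms, n):
    reachable = {0}
    for v in ms:
        reachable = ({(s + v) % n for s in reachable} |
                     {(s - v) % n for s in reachable})
    return 0 in reachable

def all_solvable(n):
    return all(solvable(ms, n)
               for ms in combinations_with_replacement(range(1, n), n - 1))

# Primes: every multiset is solvable (matches the n = 3, 5, 7 checks above).
assert all_solvable(3) and all_solvable(5) and all_solvable(7)
# Composites: the constructions in the post give unsolvable multisets.
assert not all_solvable(4) and not all_solvable(6)
```

Note that the fold allows a leading minus sign; that is harmless, because negating every sign in a zero signed sum yields another zero signed sum mod $n$.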
Solving system of polynomial matrix equations over $\Bbb Z_2$ Posted: 18 Apr 2021 07:47 PM PDT Let $A, B, C, D$ be $4 \times 4$ matrices over $\mathbb{Z}_2$. Suppose they satisfy \begin{equation} \begin{cases} A^2+ BC+ BCA+ ABC+ A= I_4\\ AB+ BCB+ ABD=0_{4 \times 4} \\ CA+ DCA+ CBC= 0_{4 \times 4} \\ DCB+ CBD= I_4 \end{cases}\,. \end{equation} Does anyone have an idea on how to find matrices $A, B, C, D$, either theoretically or numerically? I'm thinking that a solution should exist since there are four equations and four unknowns. Hints would suffice. Thank you so much. Update: I tried to run the first code in SAS, but it gave me the following results: What could I be doing wrong?
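Since the search space ($2^{64}$ quadruples of bit-matrices) rules out exhaustive enumeration, a candidate-checking helper is a reasonable numerical starting point (my own sketch; plain random guessing is very unlikely to succeed, and a structured approach such as Gröbner bases over $\mathrm{GF}(2)$ would be the systematic route):

```python
import random

# Verify the four matrix equations mod 2 for given A, B, C, D;
# then try random candidates (no guarantee of finding a solution).
N = 4
I = [[int(i == j) for j in range(N)] for i in range(N)]
Z = [[0] * N for _ in range(N)]

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(N)) % 2
             for j in range(N)] for i in range(N)]

def add(*Ms):
    return [[sum(M[i][j] for M in Ms) % 2 for j in range(N)]
            for i in range(N)]

def satisfies(A, B, C, D):
    BC = mul(B, C)
    e1 = add(mul(A, A), BC, mul(BC, A), mul(A, BC), A)
    e2 = add(mul(A, B), mul(B, mul(C, B)), mul(mul(A, B), D))
    e3 = add(mul(C, A), mul(D, mul(C, A)), mul(C, mul(B, C)))
    e4 = add(mul(mul(D, C), B), mul(mul(C, B), D))
    return e1 == I and e2 == Z and e3 == Z and e4 == I

def rand_matrix(rng):
    return [[rng.randint(0, 1) for _ in range(N)] for _ in range(N)]

rng = random.Random(0)
found = any(satisfies(rand_matrix(rng), rand_matrix(rng),
                      rand_matrix(rng), rand_matrix(rng))
            for _ in range(2000))
# e.g. A = B = C = D = I fails: the second equation gives I + I + I = I != 0.
assert not satisfies(I, I, I, I)
```

A computer algebra system could instead treat the 64 matrix entries as Boolean unknowns and compute a Gröbner basis of the 64 resulting polynomial equations over $\mathrm{GF}(2)$, which either exhibits a solution or proves there is none.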
How to finish this argument showing preimage of maximal ideal is maximal under surjective map. [duplicate] Posted: 18 Apr 2021 07:21 PM PDT Let $f: R \to S$ be a surjective ring homomorphism. Let $M \subset S$ be maximal, and let $f^{-1}(M) \subset I$ for some ideal $I \subset R$. Then $M=f(f^{-1}(M)) \subset f(I)$ since $f$ is surjective. Since $M$ is maximal in $S$, either $f(I)=M$ or $f(I)=S$. If $f(I)=M$, then $I \subset f^{-1}(f(I)) =f^{-1}(M)$, hence $I=f^{-1}(M)$. Now, I'm having trouble showing that if $f(I)=S$, then $I=R$.
Vector Field on the Real Projective Plane Posted: 18 Apr 2021 07:37 PM PDT An exercise (8-12) in Lee's Introduction to Smooth Manifolds involves showing that if $F : \mathbb{R}^2 \to \mathbb{RP}^2$ is given by $F(x,y) = [x,y,1]$, then there is a vector field on $\mathbb{RP}^2$ that is $F$-related to the vector field $X = x\partial/\partial y - y\partial/\partial x$ on $\mathbb{R}^2$. I solved this problem as follows: We begin by letting $U_1,U_2,U_3 \subset \mathbb{RP}^2$ be the open subsets on which the first, second, and third coordinates, respectively, are nonzero, and let $(u_i,v_i) : U_i \to \mathbb{R}^2$ be the usual coordinate systems for each $i = 1,2,3$. We then define a smooth vector field $Y_i$ in coordinates on each $U_i$ as follows: \begin{align*} Y_1 &= (u_1^2 + 1)\frac{\partial}{\partial u_1} + u_1v_1\frac{\partial}{\partial v_1} \\ Y_2 &= -(u_2^2 + 1)\frac{\partial}{\partial u_2} - u_2v_2\frac{\partial}{\partial v_2} \\ Y_3 &= -v_3\frac{\partial}{\partial u_3} + u_3\frac{\partial}{\partial v_3}. \end{align*} It's then a straightforward computation with Jacobians to show that these three vector fields agree on intersections, and so they extend to a smooth global vector field $Y$ on $\mathbb{RP}^2$. One more computation shows that $Y$ is $F$-related to $X$. (I might have made a computational error here, but that's beside the point.) Despite having a formula for the vector field $Y$, I still have no intuitive grasp of what it actually looks like. $\mathbb{RP}^2$ is already a pretty abstract object, and how to imagine vector fields on it is a mystery to me; the above coordinate representations don't shed that much light on its structure. Is there a coordinate-independent way to define $Y$? I'm thinking maybe we can define a visualizable vector field on $\mathbb{R}^3 \setminus \{0\}$ that descends through the quotient map $q : \mathbb{R}^3 \setminus \{0\} \to \mathbb{RP}^2$, but I don't know how the details would work out.
Correction of -0.5 in percentile formula Posted: 18 Apr 2021 07:41 PM PDT My question is about how to calculate the percentiles of a list of numbers. I found on the Internet the formula: $$p_i=100\cdot\frac{i-0.5}{N}$$ Nevertheless, I don't understand the reason for the $-0.5$. For example, if I have the following ranked list of numbers: $$1, 2, 4, 5, 100$$ in my opinion, $100$ should be the 100th percentile and not $$p_5=100\cdot\frac{5-0.5}{5} = 90\%$$ I am assuming that all the numbers have the same probability. I am having the same problem with another formula that is commonly used in this type of calculation: $$p=100\cdot\frac{i}{n+1}$$ I found these formulas on the following website: http://www.itl.nist.gov/div898/handbook/prc/section2/prc262.htm Thanks for your help!
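The $-0.5$ implements a midpoint convention: the $i$-th of $N$ ordered values is treated as the midpoint of the $i$-th of $N$ equal-probability bands, so no observation sits at exactly $0\%$ or $100\%$. A worked sketch on the list from the question:

```python
# Percentile positions for the ranked list under both conventions.
data = [1, 2, 4, 5, 100]
N = len(data)
midpoint = [100 * (i - 0.5) / N for i in range(1, N + 1)]   # 100*(i-0.5)/N
plotting = [100 * i / (N + 1) for i in range(1, N + 1)]     # 100*i/(n+1)
assert midpoint == [10.0, 30.0, 50.0, 70.0, 90.0]
# Under either convention the largest sample value is not assigned the
# 100th percentile, which would amount to claiming no future observation
# could ever exceed it.
assert max(midpoint) < 100 and max(plotting) < 100
```

Both formulas are "plotting positions" for estimating the population distribution from a sample; they differ only in how they spread the $N$ points over $(0\%, 100\%)$.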