Recent Questions - Mathematics Stack Exchange
- Try to calculate $\int \frac{1}{\sqrt{x^2+1}^3} dx$ without using trigonometry.
- limit of sequence involving arcsin
- Find the sum of the given angles
- Auslander transpose of a cyclic module of finite length over a local ring
- How to prove $\mathrm{Span}[\vec x,\vec y] = \mathrm{Span}[\vec x+\vec y,\ \vec x-\vec y]$?
- How to compute the dot product of two multivectors?
- Find RQ and SQ in an isosceles triangle with centroid Q?
- For the Logistic model, why is the objective function unbounded below if two sets are linearly separated?
- Is this statement linked with the following result on linear forms?
- Finding Cauchy-sequence that converges to $1/\sqrt{x}$
- How to solve this differential equation? $u'=\frac{u^a}{u^b+u^c}$
- Differential Equations Power Series problem
- Find the number of ordered arrays $(x_1,\dots,x_{100})$ that satisfy the following conditions: $2017\mid x_1+\dots+x_{100}$; $2017\mid x_1^2+x_2^2+\dots+x_{100}^2$
- Let $a$ be an element of a group and $|a| = 100$. Find $|a^{98}|$ and $|a^{70}|$.
- Proving two propositions are equivalent.
- Proof that $42y + 77z =50$ has no solutions for $y,z \in \mathbf{Z}$
- Induction and Union of Sets
- can't linearly independent vectors form a subspace?
- Context-Free Grammar for strings of $z^n y x^m w^n$
- Petersen Graph Non-Hamiltonian Proof Explanation
- $O$ is intersection of diagonals of the square $ABCD$. If $M$ and $N$ are midpoints of $OB$ and $CD$ respectively, then $\angle ANM=?$
- Showing that $\sup\{t\leq1:W_t=1\}$ is not a stopping time
- Is there something in logic that behaves more like "traditional" implication?
- Antiderivation which preserves exact forms
- Factor $a^2+3b^3$
- Defining a Galois Field based on primitive element versus polynomial?
- Alternate Form of Vector Plane Equation
- Radon-Transform after scaling a function
- Why is long-term relative frequency used to predict the probability of the event occurring in a single trial?
- Is the viscosity solution of Laplacian harmonic?
Try to calculate $\int \frac{1}{\sqrt{x^2+1}^3} dx$ without using trigonometry. Posted: 17 Oct 2021 07:52 PM PDT The answer is $\frac{x}{\sqrt{x^2+1}}$, but I don't know how to do this.
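One way to at least verify the claimed antiderivative (a check, not a derivation) is to differentiate it with the quotient rule:
$$\frac{d}{dx}\,\frac{x}{\sqrt{x^2+1}} = \frac{\sqrt{x^2+1} - x\cdot\frac{x}{\sqrt{x^2+1}}}{x^2+1} = \frac{(x^2+1)-x^2}{(x^2+1)^{3/2}} = \frac{1}{(x^2+1)^{3/2}}.$$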
limit of sequence involving arcsin Posted: 17 Oct 2021 07:49 PM PDT
I think the answer is $\frac{a}{y},$ but I'm not sure which limit properties to use. Would the Squeeze theorem be useful, for instance? Intuitively, as $n\to\infty, \frac{a}{2^n y_n}\to 0,$ so $\arcsin (\frac{a}{2^n y_n} )\approx \frac{a}{2^ny_n}.$ I'm not sure if I can really apply the product property of limits; i.e. $\lim\limits_{n\to\infty} a_n b_n = \lim\limits_{n\to\infty} a_n\cdot \lim\limits_{n\to\infty} b_n$ provided the RHS value is finite. However, I'm not really sure how to formalize this using the definition of limits. Let $\epsilon > 0.$ I want to show that there exists $N\in\mathbb{N}$ so that for $n\ge N, |2^n \arcsin (\frac{a}{2^n y_n}) -L| < \epsilon,$ where $L$ is the limit of the sequence.
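One hedged way to make the heuristic rigorous, assuming (as the guess $\frac{a}{y}$ suggests) that $a>0$ and $y_n\to y>0$, is the elementary bound $t \le \arcsin t \le \frac{t}{\sqrt{1-t^2}}$ for $0 \le t < 1$, applied with $t = \frac{a}{2^n y_n}$:
$$\frac{a}{y_n} \;\le\; 2^n \arcsin\!\Big(\frac{a}{2^n y_n}\Big) \;\le\; \frac{a}{y_n}\cdot\frac{1}{\sqrt{1-\big(\frac{a}{2^n y_n}\big)^2}}.$$
Both outer expressions tend to $\frac{a}{y}$, so the squeeze theorem gives the limit without any $\epsilon$-$N$ bookkeeping; the case $a\le 0$ is handled symmetrically.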
Find the sum of the given angles Posted: 17 Oct 2021 07:48 PM PDT
Auslander transpose of a cyclic module of finite length over a local ring Posted: 17 Oct 2021 07:46 PM PDT Let $(R,\mathfrak m,k)$ be a Noetherian local ring. For a finitely generated $R$-module $M$, the Auslander transpose with respect to the minimal free presentation is defined as follows: take a presentation $F_1\xrightarrow{f}F_0\to M\to 0$, where $\operatorname{Im}(f)\subseteq \mathfrak m F_0$ and $\ker(f)\subseteq \mathfrak m F_1$. Dualizing, we have $0\to M^*\to F_0^*\xrightarrow {f^*} F_1^*$, and we define $\text{Tr}_R M:=\text{coker } f^*$. My question is: if $M$ has finite length, i.e. if $M_P=0$ for every non-maximal prime ideal $P$ of $R$, then does $\text{Tr}_R M$ also have finite length? I am mainly interested in the case where $M=R/I$ is a cyclic module. I know that $\text{Tr }(-)$ commutes stably with localization (https://arxiv.org/abs/math/9809121, page 6, Remark 5), hence $\text{Tr}_R M$ will be locally free at non-maximal prime ideals, but I cannot figure out if it will be zero ... Please help.
How to prove $\mathrm{Span}[\vec x,\vec y] = \mathrm{Span}[\vec x+\vec y,\ \vec x-\vec y]$? Posted: 17 Oct 2021 07:43 PM PDT Let $\vec{x},\vec{y} \in \mathbb{R}^n$. Prove that $\mathrm{Span}[\vec{x}, \vec{y}] = \mathrm{Span}[\vec{x}+\vec{y}, \vec{x} - \vec{y}]$. How should I go about solving this question? I have read some notes but still don't get it. I'd appreciate any tips/guidance. Thank you in advance!
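A standard route (a sketch, not necessarily the intended one) is double inclusion: exhibit each span's generators inside the other span.
$$\vec x+\vec y,\ \vec x-\vec y \in \mathrm{Span}[\vec x,\vec y], \qquad \vec x = \tfrac12\big((\vec x+\vec y)+(\vec x-\vec y)\big), \quad \vec y = \tfrac12\big((\vec x+\vec y)-(\vec x-\vec y)\big).$$
Since a span is closed under linear combinations, each side therefore contains the other, and the two spans are equal.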
How to compute the dot product of two multivectors? Posted: 17 Oct 2021 07:33 PM PDT In Grassmann algebra there are objects called multivectors. I know how to compute the Grassmann product, but it is not clear to me how the dot product should be computed. How to compute the dot product of two arbitrary multivectors?
Find RQ and SQ in an isosceles triangle with centroid Q? Posted: 17 Oct 2021 07:32 PM PDT Given: isosceles triangle $RST$ with $RS = RT = 17$ and $ST = 16$; medians $RZ$, $TX$, and $SY$ meet at centroid $Q$. Find: $RQ$ and $SQ$.
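A quick coordinate check (my own setup, not from the question): put the base $ST$ on the x-axis with its midpoint at the origin, so the apex $R$ sits on the y-axis, and read off the centroid as the average of the vertices.

```python
# Coordinate sketch: ST on the x-axis, apex R above its midpoint.
from math import dist, sqrt

S, T = (-8.0, 0.0), (8.0, 0.0)            # ST = 16
R = (0.0, sqrt(17**2 - 8**2))             # RS = RT = 17 forces height 15
Q = ((R[0] + S[0] + T[0]) / 3, (R[1] + S[1] + T[1]) / 3)  # centroid

print("RQ =", dist(R, Q))                 # 10.0 (two thirds of the median RZ = 15)
print("SQ =", dist(S, Q))                 # sqrt(89), approximately 9.434
```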
For the Logistic model, why is the objective function unbounded below if two sets are linearly separated? Posted: 17 Oct 2021 07:30 PM PDT I am reading Approximate linear discrimination via logistic modeling in Section 8.6.1 of B & V's Convex Optimization book. On Page 428, $$ \operatorname{minimize} \ -l(a, b) \tag{8.27} $$ with variables $a$, $b$, where $l$ is the log-likelihood function $$ \begin{aligned} &l(a, b)=\sum_{i=1}^{N}\left(a^{T} x_{i}-b\right)-\sum_{i=1}^{N} \log \left(1+\exp \left(a^{T} x_{i}-b\right)\right)-\sum_{i=1}^{M} \log \left(1+\exp \left(a^{T} y_{i}-b\right)\right) \end{aligned} $$ It says that if the two sets can be linearly separated, i.e., if there exist $a$, $b$ with $a^T x_i > b$ and $a^T y_i < b$, then the optimization problem (8.27) is unbounded below. Why is it unbounded below for this case?
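A sketch of the scaling computation usually invoked here (my paraphrase under the stated strict inequalities, not the book's text): replace $(a,b)$ by $(ta,tb)$ with $t>0$ and rewrite each term,
$$l(ta,tb) = -\sum_{i=1}^{N}\log\Big(1+e^{-t(a^Tx_i-b)}\Big) - \sum_{i=1}^{M}\log\Big(1+e^{t(a^Ty_i-b)}\Big) \xrightarrow{\ t\to\infty\ } 0,$$
since $a^Tx_i-b>0$ and $a^Ty_i-b<0$ make every exponent tend to $-\infty$. Because $l<0$ at every finite $(a,b)$, the objective $-l$ gets arbitrarily close to its infimum only along a ray going to infinity, so minimizing sequences diverge and no finite minimizer exists; this is the degeneracy caused by separability.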
Is this statement linked with the following result on linear forms? Posted: 17 Oct 2021 07:23 PM PDT Here is the result on linear forms: let us consider $\psi,\psi_1,\dots,\psi_p$, $\ p+1$ linear forms on $E$, a $\mathbb{K}$-vector space of finite or infinite dimension. Then $\bigcap\limits_{i=1}^p \ker(\psi_i)\subset \ker(\psi)$ iff $\psi \in \text{span}\{\psi_1,\dots,\psi_p\}$. Now here is the statement:
Thanks in advance!
Finding Cauchy-sequence that converges to $1/\sqrt{x}$ Posted: 17 Oct 2021 07:48 PM PDT I am struggling with a homework example and I hope you could give me a hint on it. The task is as follows:
Now, as I understand it, in order to show that $f \in L^1(0,1)$, one uses completeness of the space $\big( C^{\infty}([0,1]) \, , \; || \cdot ||_{L^1(0,1)} \big)$. So, if one finds a Cauchy-sequence in that space that converges to $f$, then $f$ must be in $\big( C^{\infty}([0,1]) \, , \; || \cdot ||_{L^1(0,1)} \big)$ as well, which means that the norm of $f$ is finite and therefore $f \in L^1(0,1)$. Is this interpretation correct? If so, I am struggling to find such a sequence. Finding a sequence that is an element of $C^{\infty}([0,1])$ and that converges to $f$ is not that hard, e.g. $$f_n = \sqrt{\frac{1}{x + \tfrac{1}{n}}}$$ However, I am having difficulty finding a sequence that is Cauchy. Are there any hints or tips you could give me on finding such a Cauchy-sequence?
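For what it's worth, a hedged computation suggesting the displayed sequence is already Cauchy: for $m \ge n$ we have $f_m \ge f_n$ pointwise, and $\int_0^1 (x+\varepsilon)^{-1/2}\,dx = 2\sqrt{1+\varepsilon}-2\sqrt{\varepsilon}$, so
$$\|f_m - f_n\|_{L^1} = 2\Big(\sqrt{1+\tfrac1m}-\sqrt{\tfrac1m}\Big) - 2\Big(\sqrt{1+\tfrac1n}-\sqrt{\tfrac1n}\Big) \le \frac{2}{\sqrt n} - \frac{2}{\sqrt m} \le \frac{2}{\sqrt n},$$
where the first inequality uses $\sqrt{1+\frac1m}\le\sqrt{1+\frac1n}$ for $m\ge n$. The bound tends to $0$, so the sequence is Cauchy in the $L^1$ norm.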
How to solve this differential equation? $u'=\frac{u^a}{u^b+u^c}$ Posted: 17 Oct 2021 07:19 PM PDT Let $a,b,c\in\mathbb{N}$. The differential equation is $$y'+1=\frac{(x+y)^a}{(x+y)^b+(x+y)^c}.$$ With the change of variable $u=y+x$ it is not difficult to see that the equation is equivalent to $$u'=\frac{u^a}{u^b+u^c}$$ which is in turn equivalent to $$(u^{b-a}+u^{c-a})u'=1$$ so integrating both sides we have $$\int u^{b-a}\,du + \int u^{c-a}\,du=x+C$$ and we can then consider the cases $b-a=-1$, $c-a=-1$, both $c-a=b-a=-1$, and both different from $-1$, but in some cases I believe it is not possible to obtain a simple expression of $u$ in terms of $x$. How should one proceed?
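A numerical sanity check of the separation step (my own sketch, for sample exponents $a=1,b=2,c=3$, chosen so that neither $b-a$ nor $c-a$ equals $-1$): along any trajectory, the implicit antiderivative minus $x$ should stay constant.

```python
# Integrate u' = u^a / (u^b + u^c) and verify F(u(x)) - x is constant,
# where F(u) = u^(b-a+1)/(b-a+1) + u^(c-a+1)/(c-a+1).
import numpy as np
from scipy.integrate import solve_ivp

a, b, c = 1, 2, 3                       # sample exponents (assumption, not from the question)

def rhs(x, u):
    return u**a / (u**b + u**c)

sol = solve_ivp(rhs, (0.0, 5.0), [1.0], dense_output=True, rtol=1e-10, atol=1e-12)

xs = np.linspace(0.0, 5.0, 6)
us = sol.sol(xs)[0]
F = us**(b - a + 1) / (b - a + 1) + us**(c - a + 1) / (c - a + 1)
print(F - xs)                           # should be (nearly) constant, here 5/6
```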
Differential Equations Power Series problem Posted: 17 Oct 2021 07:18 PM PDT The given equation is: $$x^2y''+xy'+\left(x-5\right)y=0$$ Using the method of Frobenius, where: $$y = \sum_{n=0}^\infty A_nx^{n+r}$$ $$y' = \sum_{n=0}^\infty A_n(n+r)x^{n+r-1}$$ $$y'' = \sum_{n=0}^\infty A_n(n+r)(n+r-1)x^{n+r-2}$$ I substituted the $y$, $y'$, $y''$ into the original equation: $$\sum_{n=0}^\infty A_n(n+r)(n+r-1)x^{n+r} + \sum_{n=0}^\infty A_n(n+r)x^{n+r} + \sum_{n=0}^\infty A_nx^{n+r+1} - 5\sum_{n=0}^\infty A_nx^{n+r}=0$$ After I re-indexed the third summation, I simplified it to: $$A_2 x^{r+1} +\sum_{n=0}^\infty (A_n(n+r)^2-5A_n+A_{n+1})x^{n+r} = 0 $$ Is this correct so far? If so, what do I need to do to find the series solution? Thank you
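For comparison, here is one standard way the bookkeeping is often organized (a sketch in my own notation, worth checking against the line above): re-indexing the third sum as $\sum_{n\ge1}A_{n-1}x^{n+r}$ and separating the $n=0$ term gives
$$A_0\big(r^2-5\big)x^r + \sum_{n=1}^{\infty}\Big[A_n\big((n+r)^2-5\big) + A_{n-1}\Big]x^{n+r} = 0,$$
so the indicial equation $r^2=5$ yields $r=\pm\sqrt5$, and the recurrence $A_n = -A_{n-1}/\big((n+r)^2-5\big)$ then generates the coefficients of each series solution.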
Find the number of ordered arrays $(x_1,\dots,x_{100})$ that satisfy the following conditions Posted: 17 Oct 2021 07:47 PM PDT Find the number of ordered arrays $(x_1,\dots,x_{100})$ that satisfy the following conditions: $1)$ $2017\mid x_1+\dots+x_{100}$ $2)$ $2017\mid x_1^2+x_2^2+\dots+x_{100}^2$ $3)$ $x_1,\dots,x_{100}\in\{1,2,\dots,2017\}$ Here's all I did: let $\omega = e^{2 \pi i / 2017}$. Note that $$\sum_{0\le a,b\le 2016} \omega^{a(x_1+x_2+\dots+x_{100})+b(x_1^2+x_2^2+\dots+x_{100}^2)} = 2017^2$$ if $(x_1,x_2,\dots,x_{100})$ satisfies the conditions, and $$\sum_{0\le a,b\le 2016} \omega^{a(x_1+x_2+\dots+x_{100})+b(x_1^2+x_2^2+\dots+x_{100}^2)} = 0$$ if $(x_1,x_2,\dots,x_{100})$ doesn't satisfy the conditions. Notation: $X$ is the set of all tuples $(x_1, x_2,\dots,x_{100})$, and $Y$ is the set of all tuples $(x_1, x_2,\dots,x_{100})$ satisfying the conditions. $$\Rightarrow |Y| = \frac{1}{2017^2} \sum_{0\le a,b\le 2016} \sum_{(x_1,x_2,\dots,x_{100})\in X} \omega^{a(x_1+x_2+\dots+x_{100})+b(x_1^2+x_2^2+\dots+x_{100}^2)} $$ $$\Rightarrow |Y| = \frac{1}{2017^2} \sum_{0\le a,b\le 2016} \left(\sum_{1 \le x \le 2017} \omega^{ax^2+bx} \right)^{100}$$ Let $$G(a,b) = \sum_{1 \le x \le 2017} \omega^{ax^2+bx}. $$ Case $1$: $a=0, b=0 \Rightarrow G(0,0) = 2017$. Case $2$: $a=0, b>0 \Rightarrow G(0,b) =\sum_{1 \le x \le 2017} \omega^{bx} =0$. Case $3$: $a>0, b=0 \Rightarrow G(a,0) =\sum_{1 \le x \le 2017} \omega^{ax^2} \Rightarrow |G(a,0)|^2 = (-1)^{\frac{p-1}{2}}\cdot 2017 \Rightarrow |G(a,0)|^{100} = 2017^{50}$ (with $p=2017$). Case $4$: $a>0, b>0$. In this case I'm very stuck and don't know how to solve it! I look forward to getting help from everyone. Thanks very much!
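A brute-force check on a scaled-down analogue (my own choice of $p=5$ in place of $2017$ and $n=4$ in place of $100$, purely for testing): any closed form derived from the Gauss-sum approach can be compared against this count.

```python
# Count tuples in {1,...,p}^n whose sum and sum of squares are both
# divisible by p, by direct enumeration (5**4 = 625 tuples here).
from itertools import product

p, n = 5, 4
count = sum(
    1
    for xs in product(range(1, p + 1), repeat=n)
    if sum(xs) % p == 0 and sum(x * x for x in xs) % p == 0
)
print(count)   # compare with the value a candidate closed form predicts
```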
Let $a$ be an element of a group and $|a| = 100$. Find $|a^{98}|$ and $|a^{70}|$. Posted: 17 Oct 2021 07:47 PM PDT
I used a theorem for this. The theorem states: let $a$ be an element of order $n$ in a group and let $k$ be a positive integer. Then $\langle a^k\rangle = \langle a^{\gcd(n, k)}\rangle$ and $|a^k| = n/\gcd(n, k)$. However, I'm not sure if my approach is correct.
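The theorem gives $|a^{98}| = 100/\gcd(100,98) = 50$ and $|a^{70}| = 100/\gcd(100,70) = 10$, which a quick numerical check confirms in the cyclic group $\mathbb Z_{100}$ (written additively, so $a^k$ corresponds to $k \bmod 100$):

```python
# Verify |a^k| = n/gcd(n, k) against a direct order computation in Z_100.
from math import gcd

n = 100
for k in (98, 70):
    # order of k in Z_n: smallest m >= 1 with m*k divisible by n
    m = next(m for m in range(1, n + 1) if (m * k) % n == 0)
    print(k, m, n // gcd(n, k))   # the two computations should agree
```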
Proving two propositions are equivalent. Posted: 17 Oct 2021 07:10 PM PDT Let $X \subset \mathbb R$, $K$ be a compact subspace of $X$ and $f: K\to \mathbb R$. And let $\alpha>0.$ Consider these two propositions. $$(A) \quad \exists C> 0\ ;\ x,y\in K \Rightarrow |f(x)-f(y)|\leqq C|x-y|^\alpha $$ $$(B) \quad \forall a\in K,\exists C>0, \exists \delta>0 \ ; \ |x|<\delta \Rightarrow |f(a+x)-f(a)|\leqq C|x|^\alpha $$ I want to prove propositions $(A)$ and $(B)$ are equivalent. I did $(A) \Rightarrow (B)$, but I'm having difficulty with $(B) \Rightarrow (A)$. Suppose $(B)$ holds and $(A)$ doesn't hold. Then, for all $n\in \mathbb N,$ there exist $x_n, y_n \in K$ s.t. $$|f(x_n)-f(y_n)|>n|x_n-y_n|^\alpha \cdots (\ast)$$ Since $K$ is compact, I can assume $\{x_n\}$ and $\{y_n\}$ converge to $x,y\in K$ respectively. If $x\neq y,$ then letting $n\to \infty$ in $(\ast)$, I get $|f(x)-f(y)|=\infty$ $(\because f \mathrm{\ is \ continuous\ by\ } (B))$. Can I deduce a contradiction from here? And if $x=y$, I don't know how I can deduce a contradiction. Thanks for your help.
Proof that $42y + 77z = 50$ has no solutions for $y,z \in \mathbf{Z}$ Posted: 17 Oct 2021 07:12 PM PDT I am to prove that the equation $42y + 77z = 50$ has no solutions for $y,z \in \mathbf{Z}$. I know that I can prove this by contradiction, assuming both $y$ and $z$ are integers. But after this, what should I do? How can I show a contradiction? I thought of taking cases for when $y$ is even/odd and when $z$ is even/odd, but I don't know if that will lead anywhere. Could someone please point me in the right direction?
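One standard route (a hint sketch, possibly not the intended method): reduce modulo a common divisor of the coefficients,
$$7 \mid 42y + 77z \quad\text{for all } y,z\in\mathbf Z, \qquad\text{but}\qquad 50 \equiv 1 \pmod 7,$$
so assuming a solution exists forces $7 \mid 50$, a contradiction. Equivalently, $\gcd(42,77)=7$ does not divide $50$, so no integer solutions exist by the linear Diophantine criterion.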
Induction and Union of Sets Posted: 17 Oct 2021 07:15 PM PDT I'm trying to prove the following: "Suppose that one has proven the proposition that if $A \subseteq B$ and $C \subseteq D$, then $A \cup C \subseteq B \cup D$. Prove for any integer $n \geq 2$ that if $A_1, A_2,\dots,A_n$ and $B_1, B_2,\dots,B_n$ are sets that satisfy $A_j \subseteq B_j$ for $j = 1, 2, \dots, n$, then $$\bigcup_{j=1}^n A_j\subseteq \bigcup_{j=1}^n B_j."$$ I'm not sure if what I came up with makes sense logically, and would appreciate some feedback. Proof: Define P(n): $\bigcup_{j=1}^n A_j\subseteq \bigcup_{j=1}^n B_j$. Base case $(n=2)$: $\bigcup_{j=1}^2 A_j \subseteq \bigcup_{j=1}^2 B_j = A_1 \bigcup A_2 \subseteq B_1 \bigcup B_2$. So $P(2)$ holds. Inductive step: Assume $P(k)$ holds for some $k \geq 2$. So $$\bigcup_{j=1}^k A_j \subseteq \bigcup_{j=1}^k B_j = A_1 \bigcup A_2 \bigcup ... \bigcup A_k \subseteq B_1 \bigcup B_2 \bigcup ...\bigcup B_k.$$ Notice, $$\bigcup_{j=1}^{k+1} A_j \subseteq \bigcup_{j=1}^{k+1} B_j = A_1 \bigcup A_2 \bigcup ... \bigcup A_k \bigcup A_{k+1} \subseteq B_1 \bigcup B_2 \bigcup ... \bigcup B_k \bigcup B_{k+1}.$$ So $P(k+1)$ holds and by induction $P(n)$ holds for all $n \geq 2$.
can't linearly independent vectors form a subspace? Posted: 17 Oct 2021 07:21 PM PDT Since linearly independent systems are not collinear with the origin, does that mean a set of linearly independent vectors can't form a subspace? I'm not sure if my question makes sense.
Context-Free Grammar for strings of $z^n y x^m w^n$ Posted: 17 Oct 2021 07:42 PM PDT I am trying to make a context-free grammar that generates all the strings in the language $\{z^n y x^m w^n : m,n \ge 0\}$. Right now for my rules I have: S->yX S->y X->e X->xX S->zXSw However, it seems to generate some incorrect strings, for example: S->zXSw->zxXSw->zxxXSw->zxxSw->zxxzXSw->zxxzyw Does anyone know how I could fix these rules to construct the correct context-free grammar? Any help or tips would be appreciated, thank you!
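One candidate repair (a hedged sketch, not the only possibility): the rule S->zXSw lets an X, and hence x's, appear before the inner S; keeping the x-block attached to the y-part avoids that. The expander below derives all strings of bounded depth from the candidate rules S -> zSw | yX and X -> xX | ε, and checks each against the target language.

```python
# Enumerate derivations of the candidate grammar and verify every derived
# string has the shape z^n y x^m w^n with matching z/w counts.
import re

RULES = {"S": ["zSw", "yX"], "X": ["xX", ""]}
TARGET = re.compile(r"^(z*)y(x*)(w*)$")

def strings(sym="S", depth=6):
    if sym not in RULES:          # terminal symbol
        return {sym}
    if depth == 0:
        return set()
    out = set()
    for rhs in RULES[sym]:
        partial = {""}
        for ch in rhs:            # concatenate the languages of each symbol
            partial = {p + s for p in partial for s in strings(ch, depth - 1)}
        out |= partial
    return out

for s in sorted(strings(), key=len):
    m = TARGET.match(s)
    assert m and len(m.group(1)) == len(m.group(3)), s
print("all derived strings lie in z^n y x^m w^n")
```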
Petersen Graph Non-Hamiltonian Proof Explanation Posted: 17 Oct 2021 07:26 PM PDT I'm working on graph theory and I'm trying to find a generalised, elegant proof for non-Hamiltonian graphs. I stumbled onto this proof from D. West, which is simple, but I'm having trouble understanding how it works. From Wolfram: If there is a 10-cycle C, then the graph consists of C plus five chords. If each chord joins vertices opposite on C, then there is a 4-cycle. Hence some chord e joins vertices at distance 4 along C. Now no chord incident to a vertex opposite an endpoint of e on C can be added without creating a cycle with at most four vertices. Therefore, the Petersen graph is nonhamiltonian. In fact, it is also the smallest hypohamiltonian graph. In the following illustration, my interpretation of the above proof is that by connecting opposite vertices in graph C, you are creating 4-cycles, which connected together make a Hamiltonian cycle, thus showing that in order for the graph to be Hamiltonian, you must have more edges. Therefore, the Petersen graph must be non-Hamiltonian. Furthermore, this can be generalised to any graph as: any graph that does not form any 4-cycles with chords is non-Hamiltonian, and by adding chords to create 4-cycles, they will become Hamiltonian. Is this interpretation correct? If not, can someone please explain this proof in detail?
$O$ is intersection of diagonals of the square $ABCD$. If $M$ and $N$ are midpoints of $OB$ and $CD$ respectively, then $\angle ANM=?$ Posted: 17 Oct 2021 07:30 PM PDT $O$ is the intersection of the diagonals of the square $ABCD$. If $M$ and $N$ are midpoints of the segments $OB$ and $CD$ respectively, find the value of $\angle ANM$. Here is my approach: assuming the side length of the square is $a$, we have $\tan(\angle AND)=2$, and I draw a perpendicular segment from $M$ to $NC$, calling the intersection point $H$; then $\tan (\angle MNH)=\dfrac{\frac34a}{\frac a4}=3$ ($MH$ can be found by Thales' Theorem in $\triangle BDC$). Hence $$\angle ANM=180^{\circ}-(\tan^{-1}2+\tan^{-1}3)=45^{\circ}$$ I'm looking for other approaches to solve this problem, if possible. Intuitively, if I drag the point $N$ to $D$ and $M$ to $O$ (the angle is clearly $45^{\circ}$ here), then by moving $N$ from $D$ to $C$ and $M$ from $O$ to $B$ with constant speed, I think the angle remains $45^{\circ}$. But I don't know how to prove it.
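A coordinate verification of the $45^{\circ}$ answer (my own sketch, using a unit square with $D$ at the origin):

```python
# Place the square, compute M and N, and measure angle ANM at vertex N.
import numpy as np

A, B, C, D = map(np.array, [(0, 1), (1, 1), (1, 0), (0, 0)])
O = (A + C) / 2                 # center of the square
M = (O + B) / 2                 # midpoint of OB
N = (C + D) / 2                 # midpoint of CD

u, v = A - N, M - N
cos_angle = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
print(np.degrees(np.arccos(cos_angle)))   # 45.0
```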
Showing that $\sup\{t\leq1:W_t=1\}$ is not a stopping time Posted: 17 Oct 2021 07:32 PM PDT I'm considering the last hitting time $\tau=\sup\{t\leq1:W_t=1\}$ (taking the supremum of the empty set to be zero), and want to show that it is not a stopping time. My strategy is to show that $\mathbb{E}(W_\tau)\neq\mathbb{E}(W_0)=0$ and conclude that by the optional stopping theorem and that $\{W_t\}$ is a martingale, $\tau$ fails to be a stopping time, but 1) how do I calculate $\mathbb{E}(W_\tau)$, and 2) is this a sufficient argument, or could there be other potential reasons why the two expectations are unequal?
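For question 1), a hedged sketch: by path continuity, $W_\tau = 1$ on the event that the path reaches level $1$ by time $1$ (the last hitting time is attained), and $W_\tau = W_0 = 0$ otherwise, so the reflection principle gives
$$\mathbb{E}(W_\tau) = \mathbb{P}\Big(\max_{0\le t\le 1} W_t \ge 1\Big) = 2\,\mathbb{P}(W_1 \ge 1) > 0 = \mathbb{E}(W_0).$$
For question 2), since $\tau \le 1$ is bounded, the optional stopping theorem would force $\mathbb{E}(W_\tau)=\mathbb{E}(W_0)$ if $\tau$ were a stopping time, so the strict inequality alone rules that out.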
Is there something in logic that behaves more like "traditional" implication? Posted: 17 Oct 2021 07:16 PM PDT In logic, implication refers to the following truth table:

P | Q | P → Q
T | T |   T
T | F |   F
F | T |   T
F | F |   T

Initially, we can't know if an implication is true or false if all combinations of $P$ and $Q$ are possible. Let's imagine we've ruled out the possibility that $Q$ is false when $P$ is true. From the truth table, we can see that the implication is true. This matches my real-world intuition. But let's imagine a different scenario: we've ruled out the possibility for $Q$ to be true when $P$ is true. My intuition would say that the implication is now most definitely false. However, according to the truth table, we haven't learnt anything about the implication. It is still either true or false. Clearly there is a discrepancy between my "intuitive" implication and the version of implication defined in logic. I will call my implication "traditional implication". Traditional implication would be defined as true if the logical implication is a tautology, otherwise false. (It can only be a tautology if we've ruled out the second row, i.e. $P \land \lnot Q$ is impossible.) From what I know, traditional implication is what's used to test the validity of an argument, as opposed to logical implication. Am I right in saying this? (I'm pretty sure this is correct.) Is there any such operation in logic that represents traditional implication? Is there any symbol for it? Is there a clean way of writing down this type of implication? Or do I have to resort to using the following, which is quite messy, and possibly incorrect? $$\forall p, q (p \implies q)$$ Edit: I just came across tautological implication and tautological consequence. Hopefully this is related to what I'm talking about, which I'm reading up on now.
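The two scenarios can be made concrete by enumeration (a sketch of my own, implementing the "tautology over the rows not ruled out" reading of the question's "traditional implication"):

```python
# Check whether "P -> Q" holds in every truth-table row that survives a
# given restriction; this is the tautology test the question describes.
rows = [(P, Q) for P in (True, False) for Q in (True, False)]

def forced_true(possible):
    # material implication holds in a row iff not (P and not Q)
    return all((not P) or Q for P, Q in possible)

scenario_1 = [(P, Q) for P, Q in rows if not (P and not Q)]  # rule out P, not-Q
scenario_2 = [(P, Q) for P, Q in rows if not (P and Q)]      # rule out P, Q

print(forced_true(scenario_1))   # True: the implication is forced
print(forced_true(scenario_2))   # False: a row with P and not-Q survives
```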
Antiderivation which preserves exact forms Posted: 17 Oct 2021 07:51 PM PDT It's easily shown that the wedge product descends to cohomology. So for a smooth manifold $M$, given a closed 1-form $\alpha \in \Omega^1(M)$, one can define the derivation of degree 1: $$ \Gamma : \Omega^k(M) \to \Omega^{k+1}(M) \qquad \Gamma(\omega) = \omega \wedge \alpha $$ which preserves exact forms, in the sense that $d\omega = 0 \implies d\Gamma(\omega) = 0$. However, I cannot seem to find any way to define an anti-derivation of degree $-1$ which preserves exactness in a similar manner. A tempting construction is to use the interior derivative $\iota_X : \Omega^k(M) \to \Omega^{k-1}(M)$ for some suitable $X$. If we assume $M$ is endowed with a Riemannian metric, then we could raise the index of a closed 1-form $\alpha$ to produce the vector field $X$. Using the Cartan homotopy formula, one can check that, given the definition $$ \Gamma' : \Omega^k(M) \to \Omega^{k-1}(M) \qquad \Gamma'(\omega) = \iota_X (\omega) $$ the exterior derivative of $\Gamma'(\omega)$ is given by (where $L_X$ is the Lie derivative) $$ d(\Gamma'(\omega)) = d ( \iota_X (\omega)) = L_X(\omega) - \iota_X \underbrace{d\omega }_{=0} = L_X(\omega) $$ so $\Gamma'(\omega)$ is closed $\iff L_X(\omega) = 0$. This seems to be related to the idea of a Killing vector field; however, requiring $L_X (\omega) = 0$ for all $\omega$ most likely makes $X$ trivial (I haven't proven this but it seems true). Alternatively, we could have $X$ depend on $\omega$ in a predefined way, such that $L_{X_\omega} \omega = 0$. But it's not clear to me in general how one would construct such an $X_\omega$. Is there any other construction for $\Gamma'$ that has this property? Or is there some reason why going "down" the chain of forms is easier than going "up"?
Factor $a^2+3b^3$ Posted: 17 Oct 2021 07:17 PM PDT Using cube roots, factor the expression $$a^2+3b^3.$$ We have $$a^2+3b^3=\left(\sqrt[3]{a^2}\right)^3+\left(\sqrt[3]{3b^3}\right)^3 \tag{1}$$ $$=\left(\sqrt[3]{a^2}+\sqrt[3]{3b^3}\right)\left(\sqrt[3]{a^4}-\sqrt[3]{3a^2b^3}+\sqrt[3]{9b^6}\right)\tag{2}$$ $$=\left(\sqrt[3]{a^2}+b\sqrt[3]{3}\right)\left(a\sqrt[3]{a}-b\sqrt[3]{3a^2}+b^2\sqrt[3]{9}\right)\tag{3}$$ Am I right?
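A numeric spot-check of the final factorization (my own sketch, for sample positive values of $a$ and $b$ where the real cube roots are unambiguous):

```python
# Evaluate both sides of (3) for a = 2, b = 3 and print the difference.
from sympy import Rational, cbrt

a, b = Rational(2), Rational(3)          # sample positive values (assumption)
lhs = a**2 + 3 * b**3
rhs = (cbrt(a**2) + b * cbrt(3)) * (a * cbrt(a) - b * cbrt(3 * a**2) + b**2 * cbrt(9))
print((lhs - rhs).evalf(30))             # expect 0 to working precision
```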
Defining a Galois Field based on primitive element versus polynomial? Posted: 17 Oct 2021 07:37 PM PDT Normally I see $\mathrm{GF}(\cdot)$ defined in terms of a polynomial in $x$. For example, $\mathrm{GF}(2^4)$ defined by the reducing polynomial $x^4 + x + 1$, where there are $\varphi(2^4-1) = 8$ primitive elements: $\alpha \in \{x,\ x+1,\ x^2,\ x^2+1,\ x^3+1,\ x^3+x+1,\ x^3+x^2+1,\ x^3+x^2+x\}$. I sometimes see an alternative definition based on a primitive element, such as: let $\alpha$ be a primitive element of $\mathrm{GF}(2^4)$ such that $\alpha^4 + \alpha + 1 = 0$. This is essentially the same as above for $\alpha \in \{x,\ x+1,\ x^2,\ x^2+1\}$, but it fails for $\alpha \in \{x^3+1,\ x^3+x+1,\ x^3+x^2+1,\ x^3+x^2+x\}$, and $\mathrm{GF}(2^4)$ should have 8 primitive elements. What I was unaware of is defining $\alpha$ to be the sum of a primitive element plus the reducing polynomial, such as $\alpha = x + (x^4 + x + 1) = x^4 + 1$. In this case, a primitive element $= \alpha \bmod (x^4 + x + 1)$. With this approach, $\alpha$ is not an element of $\mathrm{GF}(2^4)$. For example, $x^4+1$ is not an element of $\mathrm{GF}(2^4)$. Most of the textbooks I've seen for error correction codes define $\alpha$ as a polynomial in $x$, as shown in the first paragraph above.
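The count of 8 primitive elements is easy to confirm computationally (a sketch of my own, representing field elements as 4-bit integers with multiplication reduced modulo $x^4+x+1$):

```python
# Multiply in GF(2^4) with reducing polynomial x^4 + x + 1 (0b10011) and
# list the elements of multiplicative order 15, i.e. the primitive ones.
def gf_mul(a, b, poly=0b10011, nbits=4):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a >> nbits:           # degree overflow: reduce by the polynomial
            a ^= poly
    return r

def order(e):
    x, n = e, 1
    while x != 1:
        x = gf_mul(x, e)
        n += 1
    return n

prim = [e for e in range(2, 16) if order(e) == 15]
print(len(prim), [f"{e:04b}" for e in prim])   # 8 elements, matching phi(15)
```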
Alternate Form of Vector Plane Equation Posted: 17 Oct 2021 07:45 PM PDT Using the standard vector plane equation form $\vec r\cdot \vec n = \vec a\cdot \vec n$, where $\vec n$ is the normal vector to the plane and $\vec a$ is the position vector of an arbitrary point on the plane, and using the fact that $\vec n = (\vec b-\vec a) \times (\vec c-\vec a)$, the plane equation becomes $$\vec r \cdot (\vec b \times \vec c + \vec a \times \vec b + \vec c \times \vec a) = \vec a \cdot (\vec b \times \vec c).$$ I'm assuming there is no way to make $\vec r$ the subject in this equation, but is there some way we can infer that $\vec r$ must be composed of scalar multiples of $\vec a, \vec b$ and $\vec c$ from this equality?
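One observation that may help (a sketch, assuming $\vec a, \vec b, \vec c$ are position vectors of three non-collinear points of the plane): every point of the plane is an affine combination of the three,
$$\vec r = \vec a + s(\vec b - \vec a) + t(\vec c - \vec a) = (1-s-t)\,\vec a + s\,\vec b + t\,\vec c, \qquad s,t\in\mathbb R,$$
so $\vec r$ is indeed built from scalar multiples of $\vec a,\vec b,\vec c$, with the scalars constrained to sum to $1$.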
Radon-Transform after scaling a function Posted: 17 Oct 2021 07:25 PM PDT Say I have the Radon transform of $f:\mathbb R^2 \to \mathbb R^+$ and denote it by $\mathcal Rf$. I now read that the Radon transform of $f_a(x,y):=a^2f(ax,ay)$ is given by $$\mathcal Rf_a(t,\theta)=a\mathcal Rf(at,\theta), \quad t\in \mathbb R,\ \theta \in [0,\pi].$$ How come? If we set $x\cos\theta + y\sin\theta = t$, then from $ax\cos\theta + ay\sin\theta = t \iff x\cos\theta+y\sin\theta = \frac{t}{a}$, wouldn't we get $$\mathcal Rf_a(t,\theta)=a^2\mathcal Rf\Big(\frac{t}{a},\theta\Big),$$ or what property that I could make use of am I missing?
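A hedged sketch with the line parametrization $\mathcal Rf(t,\theta) = \int_{\mathbb R} f(t\cos\theta - s\sin\theta,\ t\sin\theta + s\cos\theta)\,ds$ (other conventions differ only in notation): the substitution $s' = as$ rescales the arc-length variable along the line as well, which is where the remaining factor of $a$ goes.
$$\mathcal Rf_a(t,\theta) = \int_{\mathbb R} a^2 f\big(a(t\cos\theta - s\sin\theta),\ a(t\sin\theta + s\cos\theta)\big)\,ds = a\int_{\mathbb R} f\big(at\cos\theta - s'\sin\theta,\ at\sin\theta + s'\cos\theta\big)\,ds' = a\,\mathcal Rf(at,\theta).$$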
Why is long-term relative frequency used to predict the probability of the event occurring in a single trial? Posted: 17 Oct 2021 07:44 PM PDT I am a beginner statistics student learning probability from a frequentist perspective. I am confused about the application of a 'probability' to the real world. Probability is the relative frequency of an event, taken over a very large (theoretically, infinite) number of trials. My question: how is this useful when trying to predict the outcome of the random process in a single trial? For example, having a die land on one has a probability of $\frac 16$. That's fine if we want to predict the chance of landing a one over a large number of rolls, but it's not accurate for predicting the outcome in a single roll. So my question is: what is the rationale of using long-term relative frequency for the chance of a single repetition of the experiment to predict an event? Is it just the 'best guess' we have? I know logically it may just be the 'best guess' we have, but intuitively it's not clicking for me. An intuitive explanation would help, with as little mathematical language as possible! Edit: To clarify, this is my confusion. The relative frequency of landing on a one in (for example) 6 rolls of the die is NOT equal to 1/6, nor does it have to come near 1/6. It only comes close to 1/6 after a large number of trials. So my question is: in one trial/roll of the die, why is 1/6 the best prediction we have for estimating the probability of landing a 1? Is it because the long-term relative frequency is about the best guess we have for the outcome of any single trial? Is there more to this than it just being the best guess?
Is the viscosity solution of Laplacian harmonic? Posted: 17 Oct 2021 07:43 PM PDT
Here is my try: I want to use the mean value property to prove that $u$ is harmonic, but I cannot use the information about $\Delta \phi(x_0)$ or $\Delta \psi(x_0)$. Can you give me some hints or references?