Recent Questions - Mathematics Stack Exchange
- Conditional convergence of $ \sum_{n=1}^{\infty} \left(1-\cos\frac{\pi}{\sqrt{n}}\right) \cos n\pi $
- How to compute $(\Omega+\partial \bar{\partial} \varphi)^{m}=(\exp \{F\}) \Omega^{m}$, where $\Omega$ is the Kähler form of a Kähler manifold $M$
- How do I compute the following trigonometric integral?
- Number of max intersections and regions lines and circle can form
- Arc Tangent to Circle Given Start Point and Direction
- (co)limits of models of a Lawvere theory
- On the existence of a special Hardy inequality
- Is "Probability Theory" an Inseparable Aspect of Machine Learning?
- Number of matrices with determinant $1 \bmod n$ satisfying the given constraints
- Sum of independent continuous and discrete random variables
- Calculating area trapped between 2 functions
- Optimality condition of proximal gradient
- What is the terminology for the center of a symmetric object?
- How to quickly tell when a polynomial is positive?
- Cosine of a measurable set has smaller measure
- How can I prove the lower bound theorem?
- Odd Perfect Numbers
- Is there an 'inner product wrt a matrix' version of gradient descent?
- Show that norm bounds are exponents
- Known results about minimizing $\ell_\infty$ norm
- Quadrilateral $ABCD$ satisfies $2\overline{AB}=\overline{AC}$, $\overline{BC}=\sqrt{3}$, $\overline{BD}=\overline{DC}$ and $\angle BAC=60^\circ$
- Prove that $\int_0^\frac{\pi}{4}\frac{\cos (n-2)x}{\cos^nx}dx=\frac{1}{n-1}2^\frac{n-1}{2}\sin\frac{(n-1)\pi}{4}$
- What's the measure of the segment HN in the figure below?
- Triangle-geometry problem
- How to get the derivative of the 2-norm of a vector with respect to a matrix?
- The notion of event, experiment and sample space
- Induced module and surjective morphism
- Help proving the Weierstrass Approximation Theorem using Fejer's Theorem
- Are S.I. Euler and Verlet the same thing?
- Show that this hyperplane is parallel to the null space of this linear functional.
Conditional convergence of $ \sum_{n=1}^{\infty} \left(1-\cos\frac{\pi}{\sqrt{n}}\right) \cos n\pi $ Posted: 28 Mar 2022 12:21 AM PDT Is this series conditionally convergent? $$ \sum_{n=1}^{\infty} \left(1-\cos\frac{\pi}{\sqrt{n}}\right) \cos n\pi $$
Posted: 28 Mar 2022 12:20 AM PDT I'm a beginner in Kähler geometry; I'm learning Yau's proof of the Calabi conjecture and ran into a problem. Let $M$ be a compact Kähler manifold with Kähler metric $\sum_{i, j} g_{i \bar{j}} d z^{i} \otimes d \bar{z}^{j}$. Then $\operatorname{det}\left[g_{i \bar{j}}+\left(\partial^{2} \varphi / \partial z^{i} \partial \bar{z}^{j}\right)\right] \operatorname{det}\left(g_{i \bar{j}}\right)^{-1}=\exp \{F\}$ is equivalent to $(\Omega+\partial \bar{\partial} \varphi)^{m}=(\exp \{F\}) \Omega^{m}$. I have trouble with this computation. Write $\Omega = \frac{\sqrt{-1}}{2} \sum_{i, j} g_{i \bar{j}} d z^{i} \wedge d \bar{z}^{j}$. For the RHS, $\Omega^{m}=m !\left(\frac{\sqrt{-1}}{2}\right)^{m} \operatorname{det}\left(g_{i \bar{j}}\right) \bigwedge_{j=1}^{m}\left(d z^{j} \wedge d \bar{z}^{j}\right) = m!\operatorname{vol}$. But I have trouble computing the LHS: $\partial \bar{\partial} \varphi=\sum_{i , j} \frac{\partial^{2} \varphi}{\partial z^{i} \partial \bar{z}^{j}} d z^{i} \wedge d \bar{z}^{j}$, and I fail to expand $(\Omega+\partial \bar{\partial} \varphi)^{m}$, since $\Omega$ carries a factor of $\sqrt{-1}$ but $\partial \bar{\partial}\varphi$ does not.
How do I compute the following trigonometric integral? Posted: 28 Mar 2022 12:14 AM PDT
I somehow don't see where to start. I know that I should be able to write $2-\cos(t)-\sin(t)$ in the form $(z-a)(z-b)$, but I don't see how. I thought about using the identities $$\cos(\theta)=\frac{e^{i \theta}+e^{-i\theta}}{2},\qquad \sin(\theta)=\frac{e^{i \theta}-e^{-i\theta}}{2i}.$$ Could someone help me?
Number of max intersections and regions lines and circle can form Posted: 28 Mar 2022 12:12 AM PDT Given $A$ circles and $B$ straight lines, find the maximum possible number of intersection points. Then repeat the question, but for the maximum number of distinct regions created. What I did: every circle intersects a line in at most two points, every circle intersects another circle in at most two points, and two lines intersect in at most one point, giving $2\binom{A}{2} + \binom{B}{2} + 2\binom{A}{1}\binom{B}{1}$. But I can't seem to show that this many intersections is actually achievable in some configuration. For the regions, I think it works similarly: a circle adds new regions, a line adds new regions, and each circle-line intersection adds more, but how do I account for every possible combination?
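Taking the question's own bound $2\binom{A}{2}+\binom{B}{2}+2AB$ as given (it is not proved here), the count can be sketched as a one-liner:

```python
from math import comb

def max_intersections(A, B):
    # pairwise circle-circle: at most 2 points; line-line: 1; circle-line: 2
    return 2 * comb(A, 2) + comb(B, 2) + 2 * A * B
```

For example, one circle and one line meet in at most 2 points, and two circles plus two lines give at most 11.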
Arc Tangent to Circle Given Start Point and Direction Posted: 28 Mar 2022 12:08 AM PDT Is there a way to calculate the radius of an arc tangent to two circles with radii $R_1$ and $R_2$ and centres $(X_1, Y_1)$ and $(X_2, Y_2)$ respectively, given a starting point on one of the circles at $(X_3, Y_3)$? Thanks, Ling.
(co)limits of models of a Lawvere theory Posted: 28 Mar 2022 12:04 AM PDT Somehow, I got the impression from this answer that, for a Lawvere theory $L,$ the category $\text{Mod}(L,\text{Set})$ of models of a Lawvere theory has all small limits and colimits. Assuming this is true, my question is, how do I find such limits and colimits? As I understand it, $\text{Mod}(L,\text{Set})$ is just the full subcategory of $[L,\text{Set}],$ on product-preserving functors. Here $[L,\text{Set}]$ is the category of functors from $L$ to $\text{Set}.$ Given a functor $\mathcal{C} \overset{F}{\rightarrow} \text{Mod}(L,\text{Set}),$ can I find the (co)limit of $F$ just by calculating the (co)limit of $\mathcal{C} \overset{F}{\rightarrow} \text{Mod}(L,\text{Set}) \hookrightarrow [L,\text{Set}]$ (in other words, is it okay to pretend I'm working in the larger category $[L,\text{Set}],$ where I know how to find the (co)limits), or is there some other method I should use?
On the existence of a special Hardy inequality Posted: 27 Mar 2022 11:56 PM PDT It is well-known that the following Hardy inequality holds: \begin{equation} \frac{(N-2)^2}{4}\int_{\mathbb{R}^N}\frac{u^2}{|x|^2}\,dx\leq\int_{\mathbb{R}^N}|\nabla u|^2\,dx,\quad\quad\forall\,u\in\mathcal{D}^{1,2}(\mathbb{R}^N). \end{equation} My question: does a reverse-type inequality hold? Equivalently, is the following true for some constant $C>0$: \begin{equation} \int_{\mathbb{R}^N}|\nabla u|^2\,dx\leq C\int_{\mathbb{R}^N}\frac{u^2}{|x|^2}\,dx,\quad\quad\forall\,u\in\mathcal{D}^{1,2}(\mathbb{R}^N). \end{equation} I guess this is false, but I can't construct a counter-example. Any hints will be appreciated!
Is "Probability Theory" an Inseparable Aspect of Machine Learning? Posted: 28 Mar 2022 12:21 AM PDT I have always had the following question about Probability and Machine Learning.
Now, for instance, consider a more sophisticated model such as a Neural Network. In a Neural Network, we want to find the optimal parameters (i.e. neuron weights) of the model; this is done by optimizing the Loss Function associated with the network (based on the observed data). I was thinking about this for a while: unlike the earlier two examples, the error of the Neural Network does not seem to be required to follow some Probability Distribution, but the general idea still seems to apply: given a specific input (i.e. combination of covariates), we would like to identify the response that has the highest probability "conditional" on this observed input. Thus, my question: do the loss functions of Neural Networks have an inherent Probability component? Can we say that the predictions made by some specific Neural Network model are "probabilistically the most likely response" given some observed inputs? Or does the notion of Probability have no real relevance here? Thanks! Note: I was thinking that maybe all this (i.e. Probability and Machine Learning Loss Functions) comes together through Empirical Risk Minimization (https://en.wikipedia.org/wiki/Empirical_risk_minimization). I have heard people say that Machine Learning models are trying to "learn" a high-dimensional Joint Probability Distribution corresponding to the observed data: would it be a stretch to say that the prediction made by the Neural Network model for a specific set of inputs is in fact the "probabilistically most likely prediction" for that set of inputs? Note: on a more abstract level, can we say that Machine Learning algorithms are trying to minimize the "Risk of Misclassification" (http://www.cs.cmu.edu/~aarti/Class/10704_Fall16/lec11.pdf)? Can we also tie this to PAC Theory? (https://en.wikipedia.org/wiki/Probably_approximately_correct_learning)
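One concrete way to see the connection: under a Gaussian noise assumption, minimizing squared error is exactly maximum-likelihood estimation, since the two objectives differ only by an affine transform. A minimal sketch (the data and the constant-predictor model here are made up for illustration):

```python
import math

def mse(c, ys):
    # mean squared error of the constant predictor c
    return sum((y - c) ** 2 for y in ys) / len(ys)

def gaussian_nll(c, ys, s=1.0):
    # negative log-likelihood of ys under y ~ N(c, s^2)
    return sum(0.5 * math.log(2 * math.pi * s * s) + (y - c) ** 2 / (2 * s * s)
               for y in ys)

# Both objectives share the same minimizer: the sample mean.
ys = [1.0, 2.0, 4.0]
best = sum(ys) / len(ys)
```

The same story pairs cross-entropy loss with a categorical likelihood, which is one precise sense in which a loss function carries a probability model.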
Number of matrices with determinant $1 \bmod n$ satisfying the given constraints. Posted: 27 Mar 2022 11:43 PM PDT Show that, for every integer $n \geq 2$, the number of $2 \times 2$ matrices with integer entries in $\{0,1,2, \ldots, n-1\}$ and determinant congruent to $1 \pmod n$ is $$ n^{3} \cdot \prod_{q \text { prime }, q \mid n}\left(1-\frac{1}{q^{2}}\right). $$ I considered some small cases, say $n = 5$: the determinant is $1 \bmod 5$ exactly when $ad-bc-1$ is a multiple of $5$, and $(a,d,b,c)=(3,2,0,0)$ satisfies this, for instance. It looks like only certain combinations make $ad$ exceed $bc$ by exactly $1$ modulo $n$, but how do I count them?
Sum of independent continuous and discrete random variables Posted: 27 Mar 2022 11:37 PM PDT I am trying to understand how to build the distribution of $Z = X + Y$, the sum of an independent continuous and discrete random variable. I was wondering if anyone would be willing to provide an example of this. I have seen the equation $F_Z(z) = \sum_x F_Y(z-x)\, P_X(x)$, but I can't seem to make sense of how the $z$'s and $x$'s are taken as input into $F_Y$. Specifically I am working with a Bernoulli and a Gaussian, but an example with any continuous and discrete random variables would be appreciated. Thanks.
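A concrete instance of that formula with $X \sim \mathrm{Bernoulli}(p)$ and $Y \sim \mathcal{N}(0,1)$ (the parameter values are illustrative): conditioning on the discrete value $x$ shifts the continuous CDF by $x$, and the sum weights each shifted CDF by $P(X=x)$.

```python
import math

def phi(t):
    # standard normal CDF
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def F_Z(z, p=0.3):
    # Z = X + Y, X ~ Bernoulli(p), Y ~ N(0,1), independent:
    # F_Z(z) = sum_x F_Y(z - x) P(X = x) = (1-p) Phi(z - 0) + p Phi(z - 1)
    return (1.0 - p) * phi(z) + p * phi(z - 1.0)
```

The result is a mixture of two shifted Gaussians; differentiating gives the density $(1-p)\varphi(z) + p\,\varphi(z-1)$.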
Calculating area trapped between 2 functions Posted: 27 Mar 2022 11:31 PM PDT I want to calculate the area trapped between $f(x)$ and $g(x)$ for $$ 0\leq x\leq\frac{\pi}{4}, $$ $$ g(x)=\frac{8x}{\pi}+\frac{\sqrt{2}}{2}-1,\qquad f(x)=\sin(2x). $$ I'm given a clue to remember that $$ \sin\left(\frac{\pi}{4}\right)=\frac{\sqrt{2}}{2}. $$ What I tried: I compared $f(x)$ and $g(x)$ without success, and I tried defining a new function $h(x)=g(x)-f(x)$ to find the intersection points, but got stuck there as well.
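A numeric sanity check (not the requested closed form): the hint $\sin(\pi/4)=\frac{\sqrt{2}}{2}$ points at the crossing $x=\pi/8$, since $g(\pi/8)=\frac{\sqrt{2}}{2}=\sin(\pi/4)=f(\pi/8)$; one can verify that $f\geq g$ on $[0,\pi/8]$ and $g\geq f$ on $[\pi/8,\pi/4]$, so the area splits into two integrals.

```python
import math

def f(x):
    return math.sin(2 * x)

def g(x):
    return 8 * x / math.pi + math.sqrt(2) / 2 - 1

def simpson(func, a, b, n=2000):
    # composite Simpson rule (n must be even)
    h = (b - a) / n
    s = func(a) + func(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * func(a + i * h)
    return s * h / 3

crossing = math.pi / 8
area = (simpson(lambda x: f(x) - g(x), 0.0, crossing)
        + simpson(lambda x: g(x) - f(x), crossing, math.pi / 4))
```

The exact value then follows by integrating each piece by hand with the antiderivative $-\frac{\cos 2x}{2} - \frac{4x^2}{\pi} - \left(\frac{\sqrt{2}}{2}-1\right)x$ of $f-g$.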
Optimality condition of proximal gradient Posted: 27 Mar 2022 11:40 PM PDT Given a convex $L$-smooth function $g: \mathbb{R}^d \rightarrow \mathbb{R}$ and a differentiable convex function $h: \mathbb{R}^d \rightarrow \mathbb{R}$, I want to find the optimality condition of the following algorithm: $$x_k = \arg \min_{x \in \mathbb{R}^d} \frac{L}{2} \|x - x_{k-1}\|^2 + \nabla g(x_{k-1})^T(x - x_{k-1}) + h(x)$$ Differentiating and setting the gradient to zero, I get the condition $$x_k = x_{k-1} - \frac{1}{L}[\nabla g(x_{k-1}) + \nabla h(x_k)].$$ My question: this condition is implicit, since $\nabla h(x_k)$ appears on the right-hand side. Is there a way to get an explicit expression for $x_k$?
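In general there is no explicit form unless $h$ admits a closed-form proximal map; for a merely smooth $h$, one option is to solve the implicit condition numerically. A sketch via fixed-point iteration, which contracts when $\nabla h$ is Lipschitz with constant below $L$ (the scalar test functions below are my own toy choices, not from the question):

```python
def prox_step(x_prev, grad_g, grad_h, L, iters=200):
    # solve x = x_prev - (1/L) * (grad_g(x_prev) + grad_h(x))
    # by fixed-point iteration (scalar case for simplicity)
    x = x_prev
    for _ in range(iters):
        x = x_prev - (grad_g(x_prev) + grad_h(x)) / L
    return x
```

For example, with $g(x)=h(x)=x^2$ and $L=4$, starting at $x_{k-1}=1$ the subproblem minimizer is $x_k = 1/3$ (set $4(x-1) + 2 + 2x = 0$), and the iteration converges to it.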
What is the terminology for the center of a symmetric object? Posted: 27 Mar 2022 11:53 PM PDT |
How to quickly tell when a polynomial is positive? Posted: 27 Mar 2022 11:34 PM PDT I saw it written that: $$\frac{1}{2} (x^2 + x^3 - x - x^4)$$ was negative for $x > 0$. How do you figure this out by hand? |
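One by-hand route is factoring: expanding shows $x^2+x^3-x-x^4 = -x(x+1)(x-1)^2$, which makes the sign immediate, since each of $x$, $x+1$, and $(x-1)^2$ is nonnegative for $x > 0$ (so the expression is $\leq 0$ there, vanishing at $x=1$). A quick numeric check of that factorization:

```python
def p(x):
    return 0.5 * (x**2 + x**3 - x - x**4)

def factored(x):
    # claimed factorization: p(x) = -x (x + 1) (x - 1)^2 / 2,
    # hence p(x) <= 0 for all x >= 0
    return -0.5 * x * (x + 1) * (x - 1) ** 2
```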
Cosine of a measurable set has smaller measure Posted: 28 Mar 2022 12:07 AM PDT Let $A$ be a measurable subset of $[0,1]$ such that $|A| >0$, i.e., $A$ has positive measure. Prove that the set $\cos(A) = \{\cos x: x \in A\}$ has strictly smaller measure than $A$. The cosine graph tells us that the function is one-to-one on $[0,1]$, and clearly the measure of $[\cos 1,1]$ is smaller than $1$. But other than that I have no idea how to even begin. Any help would be appreciated.
How can I prove the lower bound theorem? Posted: 28 Mar 2022 12:02 AM PDT The proof of the upper bound theorem that I've seen goes like this:
The problem is that they say that the same logic can follow for the lower bound theorem, but I can't find that anywhere. The book I'm using says the exact same thing as well but doesn't supply the proof for the lower bound version. So my question is: What is the proof for the lower bound version? |
Posted: 27 Mar 2022 11:46 PM PDT This seems a bit silly since it's kind of obvious, but I can't find it written anywhere unless it's behind a paywall. If some number $n \in \mathbb{Z}^{+}$ is an odd perfect number, it would have to have the following property: $\sigma_1(n) = 1 + \sum_{d\mid n,\, d>1} d$, where every divisor $d$ is odd (and, since $\sigma_1(n) = 2n$ is even while each divisor is odd, $n$ must have an odd number of divisors $d$ such that $1 < d \leq n$). Is it just that this is obvious and relatively useless, or am I making some obvious mistake?
Is there an 'inner product wrt a matrix' version of gradient descent? Posted: 27 Mar 2022 11:40 PM PDT Gradient descent generally starts with a first order Taylor approximation motivation. If we have a function $f:\mathbb{R}^p\rightarrow\mathbb{R}^p$, and we start at a point $x\in \mathbb{R}^p$, then we can look at the first order Taylor approximation \begin{align} f(x+\Delta x)\approx f(x)+\langle\nabla f(x),\Delta x \rangle_{l^2} \end{align} We want to have the update $\Delta x$ to point in the same direction as $-\nabla f(x)$ in order to minimize $\langle\nabla f(x),\Delta x \rangle_{l^2}$. However could we use a different inner product? For instance let's say we have an SPD matrix $A\in \mathbb{R}^{p\times p}$ and we use the inner product $\langle x,y\rangle_A=x^T A y$. Then we could Taylor approximate \begin{align} f(x+\Delta x)\approx f(x)+\langle \nabla f(x),\Delta x\rangle_A \end{align} We would then have gradient descent updates \begin{align} x_{n+1}=x_n-\eta A\nabla f(x) \end{align} where $\eta$ is the learning rate. Is this type of gradient descent an actual procedure? If so, what is it called? If not, what is 'wrong' with it? I'm asking this because this paper 'seems' to be doing an infinite dimensional/functional version of this procedure. |
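Yes: this family is usually called preconditioned gradient descent (a variable-metric method), and with $A = (\nabla^2 f)^{-1}$ it recovers Newton's method. One caveat on the derivation above: the steepest-descent direction with respect to $\langle\cdot,\cdot\rangle_A$ is $-A^{-1}\nabla f(x)$ rather than $-A\nabla f(x)$, so which matrix appears depends on the convention. A toy sketch on a diagonal quadratic (my own made-up example, not from the linked paper):

```python
def preconditioned_gd(grad, P, x0, eta=0.1, steps=500):
    # x <- x - eta * P @ grad(x), with P a symmetric positive-definite
    # preconditioner (plain lists, dense matrix P)
    x = list(x0)
    for _ in range(steps):
        g = grad(x)
        x = [x[i] - eta * sum(P[i][j] * g[j] for j in range(len(x)))
             for i in range(len(x))]
    return x
```

With $P$ the inverse Hessian of a quadratic, every coordinate contracts at the same rate regardless of conditioning, which is the point of preconditioning.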
Show that norm bounds are exponents Posted: 27 Mar 2022 11:26 PM PDT Let $f$ be a continuous function on $S^2$. Consider $g\in C^{\infty}(R)$, such that $g(x)=1$ for $|x|\leq 1$ and for $|x|\geq 2$. Let $h(x)=g(x)-g(2x)$. The notation $proj_k$ denotes the orthogonal projection operator onto the space of spherical harmonics of degree $k$. Define $$K_0f(x)=\sum_{j=0}^1g(j)\, \operatorname{proj_j}f(x)$$ and $$ K_jf(x)=\sum_{k=2^{j-1}}^{2^{j+1}}h(2^{-j}k)\,\operatorname{proj_k}f(x), \quad j=1,2, \ldots. $$ Show that $\|K_jf\|_2\leq C_1e^{-c_2j}$ and $\|K_jf^2\|_2\leq C_3e^{-c_4j}$ |
Known results about minimizing $\ell_\infty$ norm Posted: 27 Mar 2022 11:45 PM PDT I am wondering about known algorithms for the following optimization problem, particularly in the case where $A \in \mathbb{R}^{d \times n}$ has $n \gg d$ (so the associated linear system is underdetermined): $$\begin{aligned} &\min&& \lVert x \rVert_{\infty} \\ &\operatorname{s.t. } && A x=b \\ &&& x \geq 0. \end{aligned}$$ I know a more general version of this problem is discussed in this paper, but I am more interested in whether specialized results are known for the $\ell_\infty$ case. I am mainly interested in an exact algorithm (or one with polylog dependence on error terms).
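For what it's worth, the problem becomes a linear program under the standard epigraph reformulation, so exact LP machinery applies directly; a sketch:

```latex
% introduce t >= ||x||_inf; since x >= 0, this is just x <= t * 1
\begin{aligned}
&\min_{x,\,t} && t \\
&\text{s.t.} && Ax = b, \\
& && 0 \le x_i \le t, \quad i = 1, \dots, n.
\end{aligned}
```

At an optimum $t^* = \lVert x^* \rVert_\infty$, since otherwise $t$ could be decreased; any exact LP solver then gives an exact algorithm for this instance.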
Posted: 27 Mar 2022 11:30 PM PDT Quadrilateral $ABCD$, inscribed in a circle, satisfies $2\overline{AB}=\overline{AC}$, $\overline{BC}=\sqrt{3}$, $\overline{BD}=\overline{DC}$ and $\angle BAC=60^\circ$. I've never been good at Euclidean geometry questions like this... really, what strategies could I employ to begin an analysis of the situation? I'm working through a text on Euclidean geometry and this is a question from it. The specific questions being asked about this scenario are the following: 1). The radius of the circumscribed circle 2). $\overline{AC}$ 3). $\angle BDC$ 4). The area of $\Delta BDC$ 5). $\overrightarrow{CA} \cdot \overrightarrow{DC}$ Now, the fact that $\overline{BD}=\overline{DC}$ struck me as odd: the length of a diagonal is equal to the length of one of the sides? I want to say that this implies that the quadrilateral we are dealing with must not be convex. Since $\angle BAC=60^\circ$, we can use the law of cosines to deduce that $\overline{AC}=2$ and so $\overline{AB}=1$. My analysis sort of hits a road block here. I have not used the fact that the quadrilateral is inscribed in a circle. For a quadrilateral inscribed in a circle, it is well known that:
However, I have not been able to make use of these facts. Can anyone help me out here?? Thanks in advance! |
Posted: 27 Mar 2022 11:41 PM PDT
For $n=2$, it is OK. For general $n$, it seems impossible by integration by parts. Any other method? When I calculate the integral $I_n=\int_0^\frac{\pi}{4}\frac{\cos nx}{\cos^nx}dx$, I find $I_n/2^{n-1}-I_{n-1}/2^{n-2}=-J_n/2^{n-1}$, where $J_n=\int_0^\frac{\pi}{4}\frac{\cos (n-2)x}{\cos^nx}dx$. So we need to find $J_n$, as the problem states. The proof of $I_n/2^{n-1}-I_{n-1}/2^{n-2}=-J_n/2^{n-1}$ is as follows. \begin{align} I_n/2^{n-1}-I_{n-1}/2^{n-2} &=\frac{1}{2^{n-1}} \int_0^\frac{\pi}{4}\left(\frac{\cos nx}{\cos^nx}-\frac{2\cos(n-1)x}{\cos^{n-1}x}\right)dx\\ &=\frac{1}{2^{n-1}}\int_0^\frac{\pi}{4}\frac{\cos[(n-1)x+x]-2\cos(n-1)x\cos x}{\cos^nx}dx\\ &=-\frac{1}{2^{n-1}}\int_0^\frac{\pi}{4}\frac{\cos(n-2)x}{\cos^nx}dx =-J_n/2^{n-1} \end{align}
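A numeric check of the target identity $\int_0^{\pi/4}\frac{\cos(n-2)x}{\cos^n x}\,dx = \frac{1}{n-1}2^{\frac{n-1}{2}}\sin\frac{(n-1)\pi}{4}$ via quadrature (evidence only, not a proof):

```python
import math

def J(n, m=4000):
    # J_n = integral of cos((n-2)x) / cos^n(x) over [0, pi/4],
    # composite Simpson rule with m (even) subintervals
    a, b = 0.0, math.pi / 4
    h = (b - a) / m
    func = lambda x: math.cos((n - 2) * x) / math.cos(x) ** n
    s = func(a) + func(b)
    for i in range(1, m):
        s += (4 if i % 2 else 2) * func(a + i * h)
    return s * h / 3

def closed_form(n):
    # claimed value: 2^{(n-1)/2} sin((n-1) pi / 4) / (n-1)
    return 2 ** ((n - 1) / 2) * math.sin((n - 1) * math.pi / 4) / (n - 1)
```

As a hand-check, $J_2 = \int_0^{\pi/4}\sec^2 x\,dx = 1$ and $J_4 = 2\int\sec^2 - \int\sec^4 = 2 - \frac{4}{3} = \frac{2}{3}$, both matching the closed form.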
What's the measure of the segment HN in the figure below? Posted: 27 Mar 2022 11:59 PM PDT
Posted: 27 Mar 2022 11:42 PM PDT Here is the question: $\cos(A-B) = \frac{7}{8}$, $\cos(C) = ?$ By the Law of Cosines, I get: $AB^2 = 41-40\cos(C)$ I also tried to expand $\cos(A-B)$ by the compound angle formula, getting:
Which by the Law of Sines becomes:
That's where I have been able to get so far. One thing though that has been bothering me is whether $AB =3$. I am tempted to go down that way because of the Pythagorean triple $3^2 + 4^2 = 5^2$. However, they have not specified that $\angle{A} = \frac{\pi}{2}$, so I am worried about wrongly assuming it. Any assistance would be greatly appreciated. |
How to get the derivative of the 2-norm of a vector with respect to a matrix? Posted: 28 Mar 2022 12:16 AM PDT For $$ f(\textbf{X})=\|\textbf{aX}\|^2_2, $$ it is easy to obtain $$ \frac{\text{d}f}{\text{d}\textbf{X}}=\textbf{a}^T\textbf{a}^*\textbf{X}^*, $$ where $\textbf{X}\in\mathbb{C}^{l\times l}$ and $\textbf{a}\in\mathbb{C}^{1\times l}$. But for $$ f(\textbf{X})=\|\textbf{aX}\|_2, $$ how do I get the derivative?
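Accepting the stated formula for the squared norm and keeping the same (Wirtinger-style) convention, the unsquared case follows from the chain rule applied to $f = \sqrt{f^2}$:

```latex
f(\mathbf{X}) = \sqrt{\|\mathbf{aX}\|_2^2}
\quad\Longrightarrow\quad
\frac{\mathrm{d}f}{\mathrm{d}\mathbf{X}}
  = \frac{1}{2\,\|\mathbf{aX}\|_2}\,
    \frac{\mathrm{d}\,\|\mathbf{aX}\|_2^2}{\mathrm{d}\mathbf{X}}
  = \frac{\mathbf{a}^T \mathbf{a}^* \mathbf{X}^*}{2\,\|\mathbf{aX}\|_2},
\qquad \mathbf{aX} \neq \mathbf{0}.
```

Note the derivative is undefined at $\mathbf{aX} = \mathbf{0}$, where the norm is not differentiable.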
The notion of event, experiment and sample space Posted: 27 Mar 2022 11:35 PM PDT I'm learning the basics of probability theory and trying to map the probability concepts I've learnt to the following example from my book, Introduction to Algorithms (CLRS):
I'm confused about what the author here means by an experiment and by an event. What's the sample space in this problem? My thoughts: the experiment here is actually "choose a coin, flip it twice", meaning that a possible outcome of this experiment is, for example, "biased coin, two heads" or "fair coin, head, tail" etc. Therefore "biased coin, two heads" and "fair coin, head, tail" are elementary events. Consequently our sample space is something like $$ S = \{ \text{(F, TT), (F, HT), (F, TH), (F, HH), (B, HH)} \} $$ Events should then be subsets of $S$, but the author's events are things like "choose a coin" or "flip coin first time", which definitely can't be subsets of $S$. Where am I wrong, and what exactly does the author mean by his experiment and events, and how do they relate to the sample space concept?
Induced module and surjective morphism Posted: 27 Mar 2022 11:35 PM PDT I am trying to solve the following question: let $G$ be a finite group and $H$ a subgroup of $G$. Let $A$ be a $G$-module. Show that $\pi: I^{H}_{G}(A)\to A$ defined by $\pi(f)=\sum_{g\in G/H}g\cdot f(g^{-1})$ is a surjective morphism. Here, $I^{H}_{G}(A)$ is the module induced from $H$ to $G$; it is $\operatorname{Hom}_{H}(\mathbb{Z}[G],A)$. I tried to understand the image of $\pi$, but I could only conclude that it is invariant under $H$. I don't have any other ideas.
Help proving the Weierstrass Approximation Theorem using Fejer's Theorem Posted: 28 Mar 2022 12:05 AM PDT I found a series of steps designed to give a constructive proof of WAT using Fejer's Theorem. For clarity, I'm using the following statement of WAT:
First, I want to produce a continuous function $\overset{\sim}{f}: [a-1, b+1]\to\Bbb R$ so that $\overset{\sim}{f}(x) = f(x)\ \forall x\in [a,b]$ and $\overset{\sim}{f}(a-1)=\overset{\sim}{f}(b+1)$. Second, I want to produce a continuous real function $g$ so that $g(x)=\overset{\sim}{f}(x)$ on $[a-1, b+1]$ and $g$ is $(2+b-a)$-periodic. Third, I'll define $F(y)= g((2+b-a)\frac{y}{2\pi})$. Fourth, I want to produce a trigonometric polynomial $T$ with $|T(y)-F(y)|<\frac{\varepsilon}2$. (I know that this is immediate from Fejer's Theorem.) Fifth, I want to produce an actual polynomial $P$ so that $|P(y)-F(y)|<\varepsilon$ for any $y\in \left[\frac{2\pi(a-1)}{2+b-a}, \frac{2\pi(b+1)}{2+b-a}\right]$. Last, I want to show that $\left|P\left(\frac{2\pi x}{2+b-a}\right)-f(x)\right|<\varepsilon$ for any $x\in [a,b]$.
Am I on the right track? If so, I'd appreciate seeing how all the details are worked out. I'd also really appreciate it if responses could follow the steps directly, so that I can understand the process better. |
Are S.I. Euler and Verlet the same thing? Posted: 28 Mar 2022 12:10 AM PDT Verlet as given by Wikipedia:
S.I. Euler as given by Wikipedia: $v_{n+1} = v_n + g(t_n, x_n) \, \Delta t\\ x_{n+1} = x_n + f(t_n, v_{n+1}) \, \Delta t$ Starting with S.I. Euler (and simplifying the notation): $$ x_{n+1} = x_n + hv_{n+1} \implies v_n = \frac{x_n - x_{n-1}}{h}\\ x_{n+1} = x_n + hv_{n+1} = x_n + h(v_n + ha_n) = x_n + h\left(\frac{x_n - x_{n-1}}{h} + ha_n\right) = 2x_n - x_{n-1} + h^2a_n $$ OK so they're the same thing, cool. That matches my tests (some simple projectile stuff + wind resistance + weird acceleration functions) where Verlet and SI Euler perform to within floating point error. However the Verlet page says, "The global error of all Euler methods is of order one, whereas the global error of [Verlet] is, similar to the midpoint method, of order two." It also says, "The global error in position, in contrast, is $O(\Delta t^{2})$ and the global error in velocity is $O(\Delta t^{2})$." The SI Euler page says "The semi-implicit Euler is a first-order integrator, just as the standard Euler method." How can they have different orders when they are the same method and seem to produce identical results? |
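The algebra above can be checked directly; a minimal sketch (pure Python, with a harmonic-oscillator acceleration as a made-up test case). Seeded with the same first step, the two methods generate identical position sequences up to floating-point noise:

```python
def verlet(x0, v0, acc, h, steps):
    # position Verlet: x_{n+1} = 2 x_n - x_{n-1} + h^2 a(x_n)
    xs = [x0, x0 + h * (v0 + h * acc(x0))]  # first step seeded as in SI Euler
    for _ in range(steps - 1):
        xs.append(2 * xs[-1] - xs[-2] + h * h * acc(xs[-1]))
    return xs

def semi_implicit_euler(x0, v0, acc, h, steps):
    xs, x, v = [x0], x0, v0
    for _ in range(steps):
        v = v + h * acc(x)   # kick
        x = x + h * v        # drift with the *updated* velocity
        xs.append(x)
    return xs
```

One common resolution of the apparent order mismatch: the position sequences coincide (and are second-order accurate), while SI Euler's stored velocity is a first-order-accurate quantity offset by roughly half a step, which is what the first-order label on the SI Euler page refers to.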
Show that this hyperplane is parallel to the null space of this linear functional. Posted: 28 Mar 2022 12:02 AM PDT This is an exercise of my homework of "Functional Analysis".
|