Recent Questions - Mathematics Stack Exchange
- Parameter Estimation: Maximum Likelihood
- What is meant when a universal quadratic form is ternary?
- How can we evaluate the integral $\int_{0}^{\infty} \frac{d x}{1+x+x^{2}+\ldots+x^{n}}$, where $n\geq 2$
- If $A_1$, $A_2$, $A_3$, $\ldots$ are sets such $A_n\subseteq A_{n+1}$ for $n\geq1$, use induction to show that $A_1 \subseteq A_n$ for all $n$.
- using the Fredholm alternative to find a solvability condition for a PDE
- Fourier Series of x|cosx|
- Removing unnecessary edges from an updated cycle
- Why require $R$ to be a field to obtain that $R[\alpha]/(2\alpha^2)$ is either an exterior algebra or a polynomial algebra?
- What time did Train B arrive in New York?
- What is the domain of z=sqrt(xy) (exam question) [closed]
- At what point is the line $y = mx + b$ tangent to $y = \sin (mx + b)$?
- A possible easy proof of Borel determinacy?
- Convergence of $\sum_{\alpha\in \Bbb N^n} \frac{z^\alpha}{\xi(\alpha)}$ on $\operatorname{int} U$ where $U \subset\Bbb C^n$ is bounded
- Why does the least squares method work? Is it related to the Jacobian matrix?
- Is every axiom of ZFC independent of the rest?
- If Alice and Bob sequentially select from arbitrary vectors, can Alice guarantee a not-lesser resultant?
- In 3-dimensional space, what is the set of all points 12 units from the origin?
- Prove that the integral $\int_1^\infty f(x)\, dx$ and series $\sum f(n)$ both converge or both diverge?
- inferior limit of a sequence intuition
- Pick a random number uniformly in $[0,1]$. Is this the axiom of countable choice?
- Limit $\lim_{n\to \infty} n\cdot \sin(\frac{n}{n+1}\pi) = \pi$
- If $\int_A f d \mu = \int_A g d\mu$ for continuous positive $f,g$ and a Radon measure $\mu$, is $f=g$ a.e.?
- If sequence $a_n>0$ is decreasing to $0$ and $\sum_{n=1}^\infty a_n=\infty$, prove $\sum_{n=1}^\infty\frac{a_n}{e^{S_n}}<\infty$ [closed]
- Assistance in proving or disproving the existence of a certain matrix
- If $\lim_{\alpha \to \infty}\alpha P[X > \alpha] = 0$ then $E[X] < \infty$?
- Counting total number of local maxima and minima of a function
- Formulating an optimisation problem into a mixed-integer problem
Parameter Estimation: Maximum Likelihood Posted: 05 Feb 2022 11:53 PM PST Given a dataset $\{x_1, x_2, \ldots, x_N\}$ of $N$ nonnegative real-valued numbers, we will derive the parameter estimate of $\theta$ for the uniform$[0, \theta]$ distribution. Given a single datapoint $x_1 = 3$, how do I plot the likelihood of $x_1$ as a function of $\theta$: $P(x_1 \mid \theta)$?
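The likelihood here is just the Uniform$[0,\theta]$ density viewed as a function of $\theta$: zero for $\theta < x_1$ and $1/\theta$ afterwards. A minimal sketch (the function and grid names are mine):

```python
import numpy as np

def uniform_likelihood(theta, x1=3.0):
    """Likelihood of one observation x1 under Uniform[0, theta]:
    the density is 1/theta on [0, theta], so viewed as a function of
    theta it is 0 for theta < x1 and 1/theta for theta >= x1."""
    theta = np.asarray(theta, dtype=float)
    return np.where(theta >= x1, 1.0 / theta, 0.0)

thetas = np.linspace(0.1, 10, 100)
L = uniform_likelihood(thetas)
# import matplotlib.pyplot as plt; plt.plot(thetas, L)  # to see the curve:
# zero until theta = 3, a jump to 1/3, then a 1/theta decay -- so the
# maximum-likelihood estimate is theta-hat = x1 = 3.
```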
What is meant when a universal quadratic form is ternary? Posted: 05 Feb 2022 11:43 PM PST I am reading about quadratic forms from the class notes of a senior, and I am not able to understand what this terminology means: a quadratic form is universal over $K$ if it represents all totally positive algebraic integers $\alpha \in {O_K}^{+}$. But what is meant when a universal form is "ternary"? Kindly tell.
Posted: 05 Feb 2022 11:35 PM PST We are going to investigate the integral $$ I_{n}=\int_{0}^{\infty} \frac{d x}{1+x+x^{2}+\ldots+x^{n}} \text {, where } n \geqslant 2. $$ Let's start with the simpler cases. $$ \begin{aligned} I_{2} &=\int_{0}^{\infty} \frac{d x}{1+x+x^{2}} \\ &=\int_{0}^{\infty} \frac{d x}{\left(x+\frac{1}{2}\right)^{2}+\left(\frac{\sqrt{3}}{2}\right)^{2}} \\ &=\frac{2}{\sqrt{3}}\left[\tan ^{-1}\left(\frac{2 x+1}{\sqrt{3}}\right)\right]_{0}^{\infty} \\ &=\frac{2}{\sqrt{3}}\left(\frac{\pi}{2}-\frac{\pi}{6}\right) \\ &=\frac{2 \pi}{3 \sqrt{3}} \end{aligned} $$ and $$ \begin{aligned} I_{3} &=\int_{0}^{\infty} \frac{1}{(1+x)\left(1+x^{2}\right)} d x \\ &=\frac{1}{2} \int_{0}^{\infty}\left(\frac{1}{x+1}+\frac{1-x}{x^{2}+1}\right) d x \\ &=\frac{1}{2}\left[\ln (x+1)+\tan ^{-1} x-\frac{1}{2} \ln \left(x^{2}+1\right)\right]_{0}^{\infty} \\ &=\frac{1}{4}\left[\ln \frac{(x+1)^{2}}{x^{2}+1}\right]_{0}^{\infty}+ \left[\frac{1}{2} \tan ^{-1} x\right]_{0}^{\infty} \\ &=\frac{\pi}{4} \end{aligned} $$ But when $n\geq 4$, the integrals are difficult. My question: Is there any elementary method to evaluate it? Your suggestions and solutions are warmly welcome.
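The two closed forms above are easy to sanity-check numerically; a small sketch (the helper name `I` is mine):

```python
import numpy as np
from scipy.integrate import quad

def I(n):
    """Numerically evaluate I_n = integral over [0, inf) of dx / (1 + x + ... + x^n)."""
    integrand = lambda x: 1.0 / sum(x**k for k in range(n + 1))
    value, _ = quad(integrand, 0, np.inf)
    return value

# Sanity checks against the closed forms derived above:
# I_2 = 2*pi/(3*sqrt(3)) ~ 0.8061, I_3 = pi/4 ~ 0.7854.
```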
Posted: 05 Feb 2022 11:46 PM PST Suppose $A_1$, $A_2$, $A_3$, $\ldots$ are sets with the property that $A_n \subseteq A_{n+1}$ for every $n \geq 1$. Use induction to show that $A_1 \subseteq A_n$ for all $n$.
using the Fredholm alternative to find a solvability condition for a PDE Posted: 05 Feb 2022 11:28 PM PST I'm trying to understand how to use the Fredholm alternative to solve this system of PDEs:
Posted: 05 Feb 2022 11:25 PM PST Find the Fourier series of $f(x) = x\,|\cos x|$ on the range $(-\pi, \pi)$.
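Since $f(x) = x\,|\cos x|$ is odd on $(-\pi,\pi)$, the cosine coefficients vanish and only the sine coefficients $b_n = \frac{2}{\pi}\int_0^\pi x\,|\cos x|\sin(nx)\,dx$ remain. A numeric sketch (the helper name is mine; the kink of $|\cos x|$ at $\pi/2$ is passed to the integrator as a breakpoint). Splitting the $n=1$ integral at $\pi/2$ by hand gives $b_1 = \frac{2}{\pi}(\frac{\pi}{8} + \frac{3\pi}{8}) = 1$ exactly, a handy check:

```python
import numpy as np
from scipy.integrate import quad

def b(n):
    """Sine coefficient b_n = (2/pi) * integral over [0, pi] of x*|cos x|*sin(nx) dx.
    f(x) = x|cos x| is odd, so all cosine coefficients are zero."""
    val, _ = quad(lambda x: x * np.abs(np.cos(x)) * np.sin(n * x),
                  0, np.pi, points=[np.pi / 2])
    return 2 / np.pi * val

# Partial sums sum_{n<=N} b(n) sin(n x) then approximate f on (-pi, pi).
```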
Removing unnecessary edges from an updated cycle Posted: 05 Feb 2022 11:17 PM PST Let $G=(V,E)$ be undirected, and let $s,t∈V$ and $C⊆E$ be a cycle that contains $s$ and $t$. As a dynamic algorithm, during each phase, the given input is a set of edges $F⊆E$ which are to be removed from the cycle. After removing these edges, I want to get rid of redundant edges. By redundant, I mean edges that can now no longer be part of this cycle. For example, $$ C=\left\{ (s,v_1),(v_1,v_2),(v_2,t),(t,v_3),(v_3,s),(v_1,v_4),(v_4,v_5), (v_5,v_1) \right\}$$ So the cycle's full path is: $s\rightarrow v_1 \rightarrow v_4 \rightarrow v_5 \rightarrow v_1 \rightarrow v_2 \rightarrow t \rightarrow v_3 \rightarrow s$. If the removed edges are $\left\{ (v_4,v_5) \right\}$ then the cycle path is $s\rightarrow v_1 \rightarrow v_2 \rightarrow t \rightarrow v_3 \rightarrow s$. And then the edges that are no longer necessary are $(v_1,v_4)$ and $(v_5,v_1)$, despite not being removed. Basically, my question is how to identify these edges. If the cycle given is simple, then we can simply return an empty set, as there is no longer any smaller cycle. For a non-simple cycle, I'd like to find the minimal set of unnecessary edges (unnecessary as I showed in the example before, meaning edges that are no longer usable for the cycle), such that the remaining edges form an $s,t$-cycle (the remaining edges must contain $s$ and $t$). I tried locating subsets of $C$ that have been hit by $F$ and thus cannot be used, but it seems every rule I try to make has a counterexample. Running an $s,t$-flow algorithm seems a bit imprecise as well, as it might find some sub-cycle which is smaller than the one I want (I want to maximize the cycle out of the remaining edges).
Posted: 05 Feb 2022 11:16 PM PST The following is on page 227 of Hatcher's Algebraic Topology:
My question is: Why do we need $R$ to be a field to get the statement in bold? I understand that when $\alpha$ is odd-dimensional then either $\alpha^2=0$, giving $R[\alpha]/(2\alpha^2)=\Lambda_R[\alpha]$, or $2=0$, leading to $R[\alpha]/(2\alpha^2)=R[\alpha]$. Where should I use the 'being a field' condition? According to the definition of exterior/polynomial algebra in this book, it only requires $R$ to be a (commutative, unital) ring.
What time did Train B arrive in New York? Posted: 05 Feb 2022 11:13 PM PST Question: Train A leaves New York for Boston at 3 PM and travels at the constant speed of 100 mph. An hour later, it passes Train B, which is making the trip from Boston to New York at a constant speed. Train B left Boston at 3:50 PM. The combined travel time of the two trains is 2 hours. Train B arrived in New York before Train A arrived in Boston. What time did Train B arrive in New York? Other: I have tried diagramming the problem. I have determined that train B does 100 miles in less time than train A does (distance(NY, Boston) − 100) miles. Is there even enough information?
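One way to set it up, assuming "passes" means the trains meet at 4:00 PM, when Train A is 100 miles out of New York (that reading, and all names below, are mine):

```python
import numpy as np

# Let d be the NY-Boston distance.  Train B has covered d - 100 miles in
# the 10 minutes since 3:50 PM, so v_B = 6(d - 100), and the condition
# "combined travel time is 2 hours" reads
#     d/100 + d/(6(d - 100)) = 2,
# which clears to the quadratic 6d^2 - 1700d + 120000 = 0.
candidates = []
for d in np.roots([6, -1700, 120000]).real:
    t_A = d / 100                # A's travel time in hours (left 3:00 PM)
    t_B = d / (6 * (d - 100))    # B's travel time in hours (left 3:50 PM)
    arrival_A = 3.0 + t_A        # clock hours past noon
    arrival_B = 3.0 + 50 / 60 + t_B
    if arrival_B < arrival_A:    # the "B arrives first" condition picks the root
        candidates.append((d, arrival_B))
# Under these assumptions d = 150 survives, giving t_B = 30 minutes,
# i.e. a 4:20 PM arrival for Train B (the other root has B arriving last).
```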
What is the domain of z=sqrt(xy) (exam question) [closed] Posted: 05 Feb 2022 11:01 PM PST So I had an interesting final exam question that I was not able to solve. It goes like this: what is the domain of $z=\sqrt{xy}$?
At what point is the line $y = mx + b$ tangent to $y = \sin (mx + b)$? Posted: 05 Feb 2022 11:52 PM PST I was playing around on Desmos to investigate the visual relationship between $y = mx + b$ and $y = \frac{x + m}{b}$. I noticed that the effect of $m$ and $b$ on these two equations was opposite: scaling $m$ increases the slope of $y = mx + b$ while shifting $y = \frac{x + m}{b}$ from left to right, while scaling $b$ does the inverse. Out of curiosity, I applied the sine function to each of these equations and discovered that they respond similarly: scaling $m$ and $b$ contracts or shifts the two waves. I also noticed that the lines were tangent to their respective sine waves, shifting and sloping to, on a visual level, remain tangent to the same location on the wave (e.g., decreasing $b$ causes both the line $y = mx + b$ and its corresponding sine wave $y = \sin (mx + b)$ to visually move together, with the line appearing to "move" with the same wavefront). I'm wondering how to calculate exactly where the line $y = mx + b$ is tangent to $y = \sin (mx + b)$.
A possible easy proof of Borel determinacy? Posted: 05 Feb 2022 10:57 PM PST I was initially trying to prove determinacy of $\mathbf{\Sigma}^0_2$ games, but surprisingly the proof I came up with seems to generalise all the way to Borel determinacy. Obviously this sounds a little too good to be true, but it still appears correct to me after double-checking many times. Perhaps someone can help me spot mistakes which I might have overlooked? The proof: Lemma. Suppose all $\mathbf{\Pi}^0_\alpha$ games are determined. Let $G(T,A)$ be such a game, and suppose $II$ has a winning strategy. If $W_{II}$ is $II$'s canonical quasistrategy, and $s = \langle x_1,\ldots,x_{2k}\rangle$ is a branch such that $\langle x_1,\ldots,x_{2k-1}\rangle\in W_{II}$ but $s\notin W_{II}$, then $I$ has a winning strategy in the game $G(T_s,A\cap N_s)$. Proof. The subgame $G(T_s,A\cap N_s)$ is also $\mathbf{\Pi}^0_\alpha$ and hence determined. So if $I$ doesn't have a winning strategy, then $II$ does. But then this implies that $W_{II}$ can be extended to a larger (winning) quasistrategy containing the winning strategy from the branch $s$, and we have $s\in W_{II}$, which is a contradiction. $\square$ Let $G=G(T,A)$ be a $\mathbf{\Sigma}^0_\alpha$ game, where $A=\bigcup_n A_n$ and each $A_n$ is $\mathbf{\Pi}^0_{\gamma_n}$ with $\gamma_n<\alpha$. By induction, the games $G_n=G(T,A_n)$ are determined. If $I$ has a winning strategy for at least one of the $G_n$'s, then this is also a winning strategy for $G$. Otherwise, $II$ has a canonical quasistrategy $(W_{II})_n$ for each $G_n$. Let $W$ be the intersection of these quasistrategies (when viewed as sets of branches). Case 1. If $W$ contains a strategy, then this is automatically a winning strategy for $II$ in $G$. Case 2. Otherwise, $W$ contains what I call conflicting branches: a branch $s=\langle x_1,\ldots,x_{2k-1}\rangle$ is conflicting if
If $s$ is conflicting, then regardless of $II$'s next move $x_{2k}$, by the lemma $I$ always has a winning strategy in some $G_n$ (hence $G$) starting from $s\frown x_{2k}$. Furthermore, $I$ has a strategy that always leads to some conflicting branch (otherwise $II$'s counter-strategy that avoids all conflicting branches is itself a substrategy of $W$, which is a contradiction). Therefore, $I$ has a winning strategy in $G$ overall. Lastly, determinacy of $\mathbf{\Sigma}^0_\alpha$ games implies determinacy of $\mathbf{\Pi}^0_\alpha$ games, completing the inductive step.
Posted: 05 Feb 2022 11:15 PM PST Problem. Let $U\subset \Bbb C^n$ be bounded. For $\alpha\in \Bbb N^n$ and $z\in \Bbb C^n$, let $z^\alpha:= z_1^{\alpha_1} z_2^{\alpha_2} \ldots z_n^{\alpha_n} \in \Bbb C$. Show that $$\sum_{\alpha\in \Bbb N^n} \frac{z^\alpha}{\xi(\alpha)}$$ converges for every $z\in \operatorname{int} U$, where $\xi(\alpha) := \sup_{z\in U} |z^\alpha|$. The convergence of power series in several complex variables is defined in the following way:
My work. Firstly, since $\operatorname{int} U$ is open, I think for every $z\in \operatorname{int} U$, we must have $|z^\alpha| < \xi(\alpha)$. However, I haven't been able to prove this yet. Also, if we are able to uniformly bound (in $z$) the summands by $q^{|\alpha|}$ for some $0 < q < 1$, we should be able to repeat the strategy in this answer. Please give me a hint or two on how to proceed. Thanks!
Why does the least squares method work? Is it related to the Jacobian matrix? Posted: 05 Feb 2022 10:48 PM PST There are two questions:
In the least-squares method, to approximate $Y_{m \times 1}$ with input $X_{m \times n}$ and parameters $\theta_{n \times 1}$, the goal is to minimize the value $J$, where $J = ({X}{\theta} - Y)^T({X}{\theta} - Y)$. Something comes to my mind when I see this, which is the Taylor expansion, $f({\textbf {x}}) = f(\textbf{x}_0) + (\textbf{x} - \textbf{x}_0)^T \nabla f(\textbf{x}_0) + \frac{1}{2}(\textbf{x} - \textbf{x}_0)^T \textbf{H}_f({\textbf x_0}) (\textbf{x} - \textbf{x}_0) + \text{(third-order terms and higher)}$. The second-order term is very similar to what $J$ looks like. At first I thought it was the Jacobian, but it is more like part of the $f$ in the expression above. In my mind, if you can suppress the value of the second-order term, then the second-order and higher terms are suppressed, so that $f({\textbf {x}}) = f(\textbf{x}_0) + (\textbf{x} - \textbf{x}_0)^T \nabla f(\textbf{x}_0)$ is a good approximation. But I can't prove this rigorously. Is there any good way to prove it? Or is this idea reasonable? Then, since the second-order part of $f$ can be written as $\frac{1}{2}(\textbf{x} - \textbf{x}_0)^T \textbf{H}_f({\textbf x_0}) (\textbf{x} - \textbf{x}_0)$, I found this, http://www.math.iit.edu/~fass/477577_Chapter_5.pdf which says you can solve least-squares problems using the Cholesky decomposition. My imagination continues with applying the Cholesky decomposition to the Hessian matrix, for example, $\textbf{H}_f({\textbf x_0}) = \textbf{K}^T\textbf{K}$, and then $(\textbf{x} - \textbf{x}_0)^T \textbf{K}^T \textbf{K}(\textbf{x} - \textbf{x}_0)$ is even more similar to $J = ({X}{\theta} - Y)^T({X}{\theta} - Y)$. Without some rigorous reasoning, I am writing ${X}{\theta} - Y = \textbf{K}\textbf{x} - \textbf{K}\textbf{x}_0$. It seems so well matched because the first terms on both sides are based on the variable and the second terms are constants/references.
It is just an analogy, and similarity does not necessarily mean identity, but I would like a clear picture of these terms; explaining why they are similar would be very helpful. Also, in some resources, the Jacobian $\textbf{J}$ is defined as a derivative matrix, and the best affine approximation $\textbf{T}(\textbf {x})$ of a differentiable $\textbf{F}: \mathbb{R}^n \rightarrow \mathbb{R}^m$ at a point $\textbf {p}$ is $\textbf{T}(\textbf {x}) = \textbf{F}(\textbf {p}) + \textbf{J}(\textbf {p})(\textbf {x}-\textbf {p})$. The least squares method finds the best projection from a higher-dimensional space to a lower-dimensional one. Is this related to the least squares method?
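On the concrete least-squares side, the connection can be made precise: minimizing $J$ gives $\nabla J = 2X^T(X\theta - Y) = 0$, the normal equations $X^TX\theta = X^TY$, and the matrix in the quadratic term is the Hessian $X^TX$ (not the Jacobian), symmetric positive definite when $X$ has full column rank — which is exactly why Cholesky applies. A sketch with made-up data (all names are mine):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 50, 3
X = rng.normal(size=(m, n))
theta_true = np.array([1.0, -2.0, 0.5])
Y = X @ theta_true + 0.01 * rng.normal(size=m)

# Normal equations X^T X theta = X^T Y; the Hessian of J is 2 X^T X,
# which is symmetric positive definite here, so Cholesky factorization
# A = L L^T reduces the solve to two triangular systems.
A, b = X.T @ X, X.T @ Y
L = np.linalg.cholesky(A)
theta = np.linalg.solve(L.T, np.linalg.solve(L, b))
```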
Is every axiom of ZFC independent of the rest? Posted: 05 Feb 2022 10:41 PM PST I've read that AC has this property, but I have not found whether the rest have it as well.
Posted: 05 Feb 2022 11:12 PM PST A couple of weeks ago, I was idly reading some of the problems of the All Soviet Union Math Competitions, when I came across a fascinating problem from the 1992 edition, specifically problem 22 (which I've modified slightly to make more timely):
My first thought was that Alice can always avoid losing by sequentially picking the remaining vector that gives the greatest running magnitude of the resultant vector, but that fails on (for example) the set of vectors composed of $(2,0)$ and $2021$ $(-1,0)$'s. Then my next thought was just to sequentially pick the remaining vector with the greatest magnitude, but that fails on the same set of vectors as above. Then I thought something linear algebra-related might work, but this was supposed to be a contest for high-schoolers, so that would be inappropriate as a solution even if one were to exist. Nothing else has come to me, and I'm burning my brain out trying to come up with new strategies. Any help anyone could give in finding a strategy would be greatly appreciated.
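For experimenting with candidate strategies on small instances, a brute-force minimax check can help (exponential, so strictly a testing toy, not a proof; all names are mine):

```python
from functools import lru_cache
from math import hypot

def alice_can_guarantee(vectors):
    """Exhaustive minimax: players alternately take vectors, Alice first;
    Alice 'wins' if her resultant's magnitude is >= Bob's.  Only usable
    for tiny instances -- a tool for testing strategies, not a proof."""
    vectors = tuple(map(tuple, vectors))

    @lru_cache(maxsize=None)
    def play(remaining, ax, ay, bx, by, alices_turn):
        if not remaining:
            return hypot(ax, ay) >= hypot(bx, by)
        results = []
        for i in remaining:
            rest = tuple(j for j in remaining if j != i)
            vx, vy = vectors[i]
            if alices_turn:
                results.append(play(rest, ax + vx, ay + vy, bx, by, False))
            else:
                results.append(play(rest, ax, ay, bx + vx, by + vy, True))
        # Alice needs one good move; Bob gets to pick his best reply.
        return any(results) if alices_turn else all(results)

    return play(tuple(range(len(vectors))), 0.0, 0.0, 0.0, 0.0, True)
```

For instance, on the small analogue of the counterexample above, $\{(2,0), (-1,0), (-1,0)\}$, the check reports that Alice can still force a tie.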
In 3-dimensional space, what is the set of all points 12 units from the origin? Posted: 05 Feb 2022 11:14 PM PST I came across this question in an ACT practice paper, Form 0057B. The only search result I can find is an explanation on a website called crackacc, but it has become a broken link. The options are: a circle, a sphere, a line, a cylinder, and 2 parallel lines. My answer was a cylinder, but I got it wrong. I have been trying to visualize what the resulting shape would be but cannot come to a solution.
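The points at distance 12 from the origin satisfy $x^2+y^2+z^2=12^2$, which is a sphere of radius 12. A quick numeric illustration (names mine): scale random directions to length 12 and check the defining equation.

```python
import numpy as np

rng = np.random.default_rng(1)
v = rng.normal(size=(1000, 3))
# Normalize each direction and scale to length 12: every resulting point
# lies exactly 12 units from the origin, i.e. on the sphere x^2+y^2+z^2=144.
pts = 12 * v / np.linalg.norm(v, axis=1, keepdims=True)
```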
Posted: 05 Feb 2022 10:56 PM PST If $f$ is monotonic decreasing for all $x \ge 1$ and $\lim \limits_{x \to +\infty}f(x) =0$, prove that the integral $\int_1^\infty f(x)\,dx$ and the series $\sum f(n)$ both converge or both diverge. I tried this exercise (10.24 in Tom Apostol's Calculus) as follows. Let $S_n=\sum_{k=1}^n f(k)$ and $T_n=\int_1^n f(x)\,dx$. Since $f$ is monotonic decreasing we get $$ S_n-S_1\le T_n \le \int_1^a f(t)\,dt \le T_{n+1} \le S_n, \quad \text{where } a \in [n,n+1]. $$ As $a \to +\infty$, $n \to +\infty$, so $\int_1^{\infty} f(x)\,dx$ converges or diverges together with $T_n$, which in turn converges or diverges together with $S_n$. So we have proved it. I am not sure whether what I have tried is correct. Any help will be appreciated.
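The sandwich above can be watched numerically on a concrete decreasing $f$, say $f(x)=1/x^2$, where $T_n = \int_1^n x^{-2}\,dx = 1 - 1/n$ has a closed form (names mine):

```python
# Check S_n - f(1) <= T_n <= T_{n+1} <= S_n for f(x) = 1/x^2, where
# T_n = 1 - 1/n exactly.  These are the outer links of the chain above.
f = lambda x: 1.0 / x**2
for n in range(2, 200):
    S = sum(f(k) for k in range(1, n + 1))
    T_n, T_next = 1 - 1 / n, 1 - 1 / (n + 1)
    assert S - f(1) <= T_n <= T_next <= S
```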
inferior limit of a sequence intuition Posted: 05 Feb 2022 10:40 PM PST With an integral you have the idea of the area below a curve; what is the corresponding idea for the inferior limit and superior limit of a sequence? I mean, what is the picture? Thank you so much.
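One concrete picture: for $a_n = (-1)^n(1+1/n)$, the tail suprema $\sup_{k\ge n} a_k$ form a decreasing sequence converging to $\limsup a_n = 1$, while the tail infima increase to $\liminf a_n = -1$ — two envelopes squeezing the oscillating sequence. A numeric sketch (names mine):

```python
import numpy as np

n = np.arange(1, 2001)
a = (-1.0) ** n * (1 + 1 / n)

# sup over the tail {a_k : k >= n} can only shrink as n grows, and
# inf over the tail can only grow; their limits are limsup and liminf.
tail_sups = np.array([a[i:].max() for i in range(100)])
tail_infs = np.array([a[i:].min() for i in range(100)])
```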
Pick a random number uniformly in $[0,1]$. Is this the axiom of countable choice? Posted: 05 Feb 2022 11:49 PM PST I have two questions:
The motivation for these questions is that, using the binary representation of the numbers in $[0,1]$, there is "practically" an equivalence between the interval $[0,1]$ and the countable family of sets $\{0,1\}^\omega$.
Limit $\lim_{n\to \infty} n\cdot \sin(\frac{n}{n+1}\pi) = \pi$ Posted: 05 Feb 2022 10:42 PM PST Problem: $\lim_{n\to \infty} n\cdot \sin(\frac{n}{n+1}\pi) = \pi$ So far I have: I think that $\lim_{n\to \infty}\frac{ \sin(\frac{n}{n+1}\pi)}{\frac{n}{n+1}\pi} = 1$, but I'm not sure. It would be similar to $\lim_{x\to 0}\frac{\sin(x)}{x}=1$, but that doesn't make a load of sense to me after trying to finish it.
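The missing step is the identity $\sin\frac{n\pi}{n+1} = \sin\bigl(\pi - \frac{n\pi}{n+1}\bigr) = \sin\frac{\pi}{n+1}$, after which the standard $\sin x \sim x$ limit applies since the argument now tends to $0$. A quick numeric check:

```python
import math

# sin(n*pi/(n+1)) = sin(pi - n*pi/(n+1)) = sin(pi/(n+1)), so
# n*sin(n*pi/(n+1)) = n*sin(pi/(n+1)) ~ n*pi/(n+1) -> pi.
for n in [10, 100, 1000, 10000]:
    direct = n * math.sin(n / (n + 1) * math.pi)
    via_identity = n * math.sin(math.pi / (n + 1))
```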
Posted: 05 Feb 2022 11:50 PM PST Let $X$ be a locally compact Hausdorff space and $\mu$ a Radon measure on $X$. If $f,g: X \to (0, \infty)$ are continuous functions such that $$\int_A f d \mu = \int_A g d\mu$$ for all Borel subsets $A$, can we deduce that $f=g$ almost everywhere? (Note that in general $f(x)\ne g(x)$ for a point $x \in X$ is possible, even if $f,g$ are continuous, because a Radon measure may assign zero measure to an open subset of $X$!). My attempt: It is easy enough to see that $f\chi_U = g \chi_U$ $\mu$-almost everywhere for any compact subset $U \subseteq X$. Indeed, by continuity and compactness $f$ is bounded on $U$, and compact subsets have finite measure. The same can be said about precompact open sets. However, $X$ may not be $\sigma$-finite, so I don't see how to deduce that $f=g$ a.e.
Posted: 05 Feb 2022 10:43 PM PST If $a_n>0$ for all $n\geq 1$, $a_n$ is decreasing to $0$, and $\sum_{n=1}^\infty a_n=\infty$, is it always true that $\sum_{n=1}^\infty\frac{a_n}{e^{S_n}}<\infty$, where $S_n=\sum_{k=1}^na_k$ is the partial sum? If yes, how can it be proved; if no, is there a counterexample? I have tried $a_n=\frac{1}{n}$; in that case $e^{S_n}\sim n$ when $n$ is large, so $\sum_{n=1}^\infty \frac{a_n}{e^{S_n}} \sim \sum_{n=1}^\infty \frac{1}{n^2}<\infty$. I also tried $a_n=\frac{1}{n\ln n}$; in this case $e^{S_n}\sim \ln n$ when $n$ is large, so $\sum_{n=1}^\infty \frac{a_n}{e^{S_n}} \sim \sum_{n=1}^\infty \frac{1}{n(\ln n)^2}<\infty$ by the Cauchy condensation test.
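One observation (mine, so worth double-checking): since $e^{-S_n} \le e^{-t}$ for $t \in [S_{n-1}, S_n]$ and $S_n - S_{n-1} = a_n$, each term satisfies $a_n e^{-S_n} \le \int_{S_{n-1}}^{S_n} e^{-t}\,dt$, so the partial sums telescope below $\int_0^\infty e^{-t}\,dt = 1$. A numeric check for the harmonic case:

```python
import math

# Each term a_n * exp(-S_n) is at most the area under e^{-t} over
# [S_{n-1}, S_n], so the running total should stay below 1.
S, total = 0.0, 0.0
for n in range(1, 200000):
    a = 1.0 / n
    S += a
    total += a * math.exp(-S)
# total stabilizes well below the telescoping bound of 1.
```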
Assistance in proving or disproving the existence of a certain matrix Posted: 05 Feb 2022 11:53 PM PST Given a matrix of $12$ non-negative integer elements (of which at most one may be $0$), arranged for convenience as $3$ rows and $4$ columns, I would like to prove or disprove the existence of a matrix with a special property. Denote: $$ \begin{pmatrix} a_1 & a_2 & a_3 & a_4 \\ a_5 & a_6 & a_7 & a_8 \\ a_9 & a_{10} & a_{11} & a_{12} \end{pmatrix}$$ I'm looking for a matrix such that, if we denote $s=a_1 +... + a_{12}$, we have:
Additionally, I want these to be the only possible ways one can reach (precisely) $\frac{s}{3}$ by using $4$ elements, so matrices like: $$ \begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 \end{pmatrix}$$ don't work for me. Before tackling this problem, I tackled a similar one in which I required another division to give me $\frac{s}{3}$, that is (additionally to equations $1$ to $6$):
For this, I found many solutions. Some examples are: $$ \begin{pmatrix} 8 & 5 & 19 & 25 \\ 41 & 13 & 1 & 2 \\ 3 & 9 & 16 & 29 \end{pmatrix}$$ And many more. I did it by first solving the linear equations (which left me with $6$ free variables). Then, I checked all possible entries for these free variables (limiting them to $30$, to keep the numbers from getting too large). The other $6$ numbers were calculated accordingly. The following general solution was used: It took some time, but I was able to come up with many solutions which worked as fine examples. Yet now, if I wish to check my newer version of the problem (with only $2$ possible ways of choosing $4$ elements whose sum is $\frac{s}{3}$), I will have $4$ bound variables and $8$ free ones, which is starting to be a bit too much for such a naive program as the one I wrote. Here come my questions:
(Just a small remark to emphasize that all numbers are to be non-negative integers; at most one of them can be $0$.)
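Whatever search generates candidate matrices, checking the "exactly these quadruples hit $\frac{s}{3}$" condition is cheap, since there are only $\binom{12}{4}=495$ four-element subsets; a brute-force verifier sketch (the function name is mine):

```python
from itertools import combinations

def count_third_quadruples(entries):
    """Count how many 4-element subsets of the 12 entries sum to s/3.
    With only C(12,4) = 495 subsets this is cheap, so it can filter
    candidates produced by any search over the free variables."""
    s = sum(entries)
    if s % 3:
        return 0  # s/3 isn't an integer, so no quadruple can hit it
    target = s // 3
    return sum(1 for idx in combinations(range(12), 4)
               if sum(entries[i] for i in idx) == target)
```

A candidate is then kept only when this count equals the number of quadruples the constraints are supposed to allow.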
If $\lim_{\alpha \to \infty}\alpha P[X > \alpha] = 0$ then $E[X] < \infty$? Posted: 05 Feb 2022 11:07 PM PST Let $X$ be a positive random variable. Suppose that $\lim_{\alpha \to \infty}\alpha P[X > \alpha] = 0$. Does this imply that $X$ has finite expectation, that is, $E[X] < \infty$? I know that if $E[X] < \infty$ then $\lim_{\alpha \to \infty}\alpha P[X > \alpha] = 0$ (for any positive random variable; see: Expected value as integral of survival function), so I was wondering if the converse is true. I have also tried to think of a counterexample but unfortunately I have not been successful. I would really appreciate any hints or suggestions on this problem.
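A standard candidate counterexample to the converse (my suggestion, not from the post) is a tail of the form $1/(\alpha\ln\alpha)$: then $\alpha P[X>\alpha] = 1/\ln\alpha \to 0$, yet $E[X] = \int_0^\infty P[X>\alpha]\,d\alpha$ diverges like $\ln\ln M$. A numeric sketch:

```python
import numpy as np
from scipy.integrate import quad

# Candidate: survival function P[X > a] = 1/(a ln a) for a >= e and 1
# below (so X has an atom at e).  Then a*P[X > a] = 1/ln(a) -> 0, while
# the partial expectation integral grows like e + ln(ln(M)).
surv = lambda a: 1.0 / (a * np.log(a)) if a >= np.e else 1.0

def partial_expectation(M):
    val, _ = quad(surv, 0, M, points=[np.e], limit=200)
    return val
```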
Counting total number of local maxima and minima of a function Posted: 05 Feb 2022 11:05 PM PST Find the total number of local maxima and local minima for the function $$ f(x) = \begin{cases} (2+x)^{3} &\text{if}\, -3 \lt x \le -1 \\ x^{2/3} &\text{if}\, -1 \lt x \lt 2 \end{cases} $$ My attempt: I differentiated the function on the two intervals and obtained the following: $$ f'(x) = \begin{cases} 3(2+x)^{2} &\text{if}\, -3 \lt x \lt -1 \\ \frac{2}{3} x^{-1/3} &\text{if}\, -1 \lt x \lt 2 \end{cases} $$ How do I obtain the maxima and minima points from here? Any help will be appreciated.
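A quick way to double-check a count like this is a dense sign-scan of the discrete differences (names mine; $x^{2/3}$ is read as $(x^2)^{1/3}$, real-valued for negative $x$):

```python
import numpy as np

def f(x):
    # Piecewise definition from the problem; np.cbrt(x**2) is the
    # real-valued reading of x^(2/3).
    return np.where(x <= -1, (2 + x) ** 3, np.cbrt(x ** 2))

xs = np.linspace(-3 + 1e-6, 2 - 1e-6, 200001)
d = np.sign(np.diff(f(xs)))
d = d[d != 0]                                # drop flat steps (e.g. near x = -2)
turns = d[:-1] != d[1:]
n_max = int(np.sum((d[:-1] == 1) & turns))   # rise -> fall
n_min = int(np.sum((d[:-1] == -1) & turns))  # fall -> rise
# Expect a local max at x = -1 (f increases up to -1, then x^(2/3)
# decreases on (-1, 0)) and a local min at x = 0; x = -2 has f' = 0
# but no sign change, so it is not an extremum.
```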
Formulating an optimisation problem into a mixed-integer problem Posted: 05 Feb 2022 10:40 PM PST