Recent Questions - Mathematics Stack Exchange
- Are there any more journals that accept poetry?
- How to find a dominating function of $\frac{1}{x^2+\ln x}$ and show $\int _1^{\infty } \, \frac{dx}{x^2+ \ln x}$ converges?
- Find a pair of products of four groups that have the same order but are not isomorphic.
- Expanding $\prod_{i=1}^n (a_i+b_i)$
- Proving and intuiting why all two-variable linear equations are on a line.
- How do I prove that cos(pi/5) = phi/2? [duplicate]
- Notation of the conjugate operator in functional analysis.
- Density of sum of two independent random variables
- Are these two methods for obtaining the midpoint equivalent?
- Is the statement valid?
- Problem 24, Chapter 4 of Introduction to Analytic Number Theory by Apostol:
- In what sense is a Donut isomorphic to a Mug?
- Let $V,W$ be two vector spaces, $T \in L(V,W)$. If $\{v_i\}_{i=1}^p$ are linearly independent in $V$, is $\{T(v_i)\}_{i=1}^p$ so in $W$?
- If $\{v_1+v_2, v_2+v_3, v_1+v_3\}$ are linearly independent then $\{v_1, v_2, v_3\}$ are linearly independent
- calculate the limit of a function
- If $G$ is triangle-free and $\delta(G)$ is large, then $G$ is a bipartite graph
- Does $\frac{d v(x,y)}{d |x-y|}$ denote the change in $v$ with respect to a change in the difference between $x$ and $y$?
- Ordered pair of natural numbers such that the geometric mean is exactly $8$ less than the arithmetic mean.
- Define $g(a) := \liminf_{x \to a} f(x)$ for all $a \in X$. Then $g$ is lower semi-continuous
- Why can't columns of a generalized modal matrix for the same Jordan block be interchanged?
- Getting the area between the $y$-axis, $f(x)$, and $f(2-x)$ when only their difference is given?
- What conditions allow one rational sequence to bound another? [closed]
- Efficiently evaluate $\underset{X \gets \mathcal{N}(\mu,\sigma^2)}{\mathbb{E}}\left[\left(1 - s + s \cdot e^X \right)^{\sqrt{-1} \cdot t}\right]$
- Evaluating $\int_0^1\frac{\ln^2(1+x)+2\ln(x)\ln(1+x^2)}{1+x^2}dx$
- Explain the mechanism behind the "1" in the Present value calculation of money
- Frog-Jumping Out Of Well Word Problem
- Area of a cyclic quadrilateral.
- Obtaining positive eigenvalues of the matrix $A$?
- What does a wedge or caret mean in a matrix context?
- Are all eigenvectors, of any matrix, always orthogonal?
Are there any more journals that accept poetry? Posted: 21 Jul 2022 10:17 PM PDT I came across the Mathematical Intelligencer, which is a journal that accepts poetry and humor articles. I have a poem on maths which I sent to MI. They may or may not accept it; in case they do not, are there any other such journals to which I can send my poem?
How to find a dominating function of $\frac{1}{x^2+\ln x}$ and show $\int _1^{\infty } \, \frac{dx}{x^2+ \ln x}$ converges? Posted: 21 Jul 2022 10:17 PM PDT I want to find a dominating function of $$\frac{1}{x^2 + \ln x}$$ and use it to show the convergence of the integral.
Find a pair of products of four groups that have the same order but are not isomorphic. Posted: 21 Jul 2022 10:14 PM PDT Find two products of four groups (from the given choices) that have equal order but are not isomorphic. The goal is to show that merely having the same order does not determine isomorphism. Choices to form the two equal-order products of four groups: $C_1, C_2, C_4, C_8, C_{16}, Q_8.$ Note: Write $C_1$ to fill out any unneeded factors. Let the first product of four groups be denoted by $G= a\times b\times c\times d$, and the second by $H= e\times f\times g\times h.$ The two products need to have equal order while failing to be isomorphic. So $G= C_{16}\times C_1\times C_1\times C_1$ and $H= C_8\times C_8\times C_1\times C_1$ will not help. A possible choice is: $G= C_{8}\times C_1\times C_1\times C_1$ and $H= Q_8\times C_1\times C_1\times C_1.$ Another choice: $G= C_{16}\times C_1\times C_1\times C_1$ and $H= Q_8\times C_8\times C_1\times C_1.$ But the answer key hints that only one choice is possible for determining $G,H$.
Expanding $\prod_{i=1}^n (a_i+b_i)$ Posted: 21 Jul 2022 10:13 PM PDT I am trying to find an expansion of $$\prod_{i=1}^n (a_i+b_i)$$ Any help would be appreciated. Thanks |
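The expansion runs over all $2^n$ ways of choosing $a_i$ or $b_i$ from each factor: $\prod_{i=1}^n (a_i+b_i) = \sum_{S\subseteq\{1,\dots,n\}} \prod_{i\in S} a_i \prod_{i\notin S} b_i$. A small Python sketch verifying this identity numerically (the function names are illustrative):

```python
from itertools import product

def expand_product(a, b):
    """Expand prod_i (a[i] + b[i]) as a sum over all 2^n choices:
    for each factor pick either a[i] or b[i], multiply, then sum."""
    n = len(a)
    total = 0
    for picks in product([0, 1], repeat=n):  # 0 -> take a[i], 1 -> take b[i]
        term = 1
        for i, p in enumerate(picks):
            term *= a[i] if p == 0 else b[i]
        total += term
    return total

# Sanity check against direct multiplication for sample numbers.
a, b = [2, 3, 5], [7, 11, 13]
direct = 1
for ai, bi in zip(a, b):
    direct *= ai + bi
print(expand_product(a, b), direct)  # both 9*14*18 = 2268
```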
Proving and intuiting why all two-variable linear equations are on a line. Posted: 21 Jul 2022 09:46 PM PDT I'm just starting Linear Algebra, and one of the first points made is that for any two-variable linear equation, all its solutions lie on one line. I'm trying to intuit that and prove it; i.e., that for $Ax + By + C = 0$, every solution is on the same line, and more explicitly that every point on the line is a solution and every other point is not a solution.
How do I prove that $\cos(\pi/5) = \varphi/2$? [duplicate] Posted: 21 Jul 2022 09:41 PM PDT This is linked to the golden triangle. I find it quite fascinating that $\cos(\pi/5) = \varphi/2$, but how do I start to find an exact value of $\cos(\pi/5)$ if all I know about the golden triangle is that the ratio $a/b = \varphi$?
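Not a proof, but a quick numerical sanity check of the identity (here $\varphi = (1+\sqrt 5)/2$):

```python
import math

phi = (1 + math.sqrt(5)) / 2  # golden ratio
print(math.cos(math.pi / 5), phi / 2)  # both ~0.8090169943749475
assert abs(math.cos(math.pi / 5) - phi / 2) < 1e-12
```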
Notation of the conjugate operator in functional analysis. Posted: 21 Jul 2022 09:29 PM PDT I'm confused about the definition of the conjugate operator in functional analysis. Definition Let $X,Y$ be normed vector spaces. For $A\in \mathcal L(X,Y)$, define $A':Y^\ast \to X^\ast$ by $\langle x, A' g\rangle=\langle Ax, g\rangle$ for $x\in X, g\in Y^\ast.$ $A'$ is called the conjugate operator. $X,Y$ are normed spaces, so an inner product is not necessarily defined on them; hence $\langle \cdot, \cdot \rangle$ is not an inner product. Does $\langle \cdot, \cdot \rangle$ mean the dual pairing? If so, $\langle x, A' g\rangle=(A'g)(x)$ and $\langle Ax, g\rangle=g(Ax)$, right?
Density of sum of two independent random variables Posted: 21 Jul 2022 10:00 PM PDT I am trying to solve the following problem
Since $X$ and $Y$ are independent, the density of their sum is given by the convolution of their densities. $$ f_{X+Y}(x)=f_{X} * f_{Y}(x)=\int_{-\infty}^{\infty} f_{X}(\tau) f_{Y}(x-\tau)d\tau=\int_{0}^\infty 2e^{-2 \tau}4 (x-\tau) e^{-2 (x-\tau)}d\tau $$ $$=\int_{0}^\infty 8 (x-\tau) e^{-2x}d\tau$$ However this integral diverges. What have I done wrong? |
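Assuming (as the integrand in the attempt suggests) that $f_X$ is the Exp(2) density $2e^{-2x}$ and $f_Y$ is the Gamma(2,2) density $4xe^{-2x}$, a numerical convolution can sanity-check a candidate answer. Note that both densities vanish on the negatives, which constrains the effective range of integration. A minimal sketch:

```python
import math

def f_X(t):  # assumed Exp(rate=2) density
    return 2 * math.exp(-2 * t) if t >= 0 else 0.0

def f_Y(t):  # assumed Gamma(shape=2, rate=2) density
    return 4 * t * math.exp(-2 * t) if t >= 0 else 0.0

def f_sum(x, steps=20000):
    """Midpoint-rule approximation of the convolution integral at x.
    Because both densities vanish for negative arguments, f_Y(x - t)
    kills every t > x, so the effective range is [0, x], not [0, inf)."""
    h = x / steps
    return sum(f_X(t) * f_Y(x - t) * h for t in (h * (k + 0.5) for k in range(steps)))

# The sum of Exp(2) and Gamma(2,2) is Gamma(3,2), with density 4 x^2 e^{-2x}.
x = 1.0
print(f_sum(x), 4 * x**2 * math.exp(-2 * x))  # both ~0.5413
```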
Are these two methods for obtaining the midpoint equivalent? Posted: 21 Jul 2022 09:17 PM PDT I was working on a computer science problem on the LeetCode website. The solution required that the midpoint be calculated between two numbers, let's call them This is how I calculated it: Are these two functions equivalent? I tried to use my math skills, but they seem to have deteriorated over the decades. This is what I came up with so far, but I just need some insight.
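The original code snippets were not preserved in this digest, so for concreteness assume the two methods in question are the usual pair `(lo + hi) // 2` and `lo + (hi - lo) // 2` (the names `lo`, `hi`, `mid_a`, `mid_b` are hypothetical). A quick check:

```python
def mid_a(lo, hi):
    return (lo + hi) // 2

def mid_b(lo, hi):
    return lo + (hi - lo) // 2

# Algebraically lo + (hi - lo)/2 = (2*lo + hi - lo)/2 = (lo + hi)/2,
# so with exact integer arithmetic the two agree whenever hi >= lo.
for lo in range(-50, 50):
    for hi in range(lo, lo + 50):
        assert mid_a(lo, hi) == mid_b(lo, hi)
print("equal on all tested pairs")
```

In fixed-width integer languages such as C or Java, the second form is usually preferred because `lo + hi` can overflow while `hi - lo` cannot (given `lo <= hi`); in Python's arbitrary-precision integers the two agree exactly.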
Is the statement valid? Posted: 21 Jul 2022 10:16 PM PDT
The above two paragraphs are from an answer on this forum (link to answer). I see flaws in the answer, but I was confused since it was getting upvotes. My question is: how can you take two points next to each other? Doesn't that violate the density property of the real numbers? For context, the answer was written to explain what derivatives are and what they mean. I just feel the statements used are mathematically invalid; if they are, the answer should be removed, as it would mislead many. Let me know your thoughts.
Problem 24, Chapter 4 of Introduction to Analytic Number Theory by Apostol: Posted: 21 Jul 2022 09:40 PM PDT Let $A(x)$ be defined for all $x >0$ and assume that $$T(x) = \sum_{n\le x} A(x/n) = ax\log x + bx + o\left(\frac{x}{\log x}\right),~~~~ \text{as}~~x \rightarrow \infty$$ where $a$ and $b$ are constants. Prove that: $$A(x)\log x + \sum_{n\le x} A(x/n) \Lambda (n) = 2ax\log x + o(x\log x), ~~~~ \text{as}~~x \rightarrow \infty$$ (Here $\Lambda(n)$ is the Mangoldt function). Verify that Selberg's formula of Theorem 4.18 is a special case. I am having difficulty in seeing how Selberg's formula $$\psi(x) \log x + \sum_{n\le x} \psi(x/n) \Lambda (n) = 2x\log x + O(x)$$ follows from the first part of the problem: Using $A(x) = \psi(x)$, $a=1$ and $b=-1$ of course satisfy the assumptions of the problem but Selberg's formula gives an estimate of $O(x)$ whereas the result of this problem gives an error of $o(x\ln(x))$. Does $o(x\ln(x))$ imply $O(x)$? (I don't think so!) Selberg's formula uses the stronger estimate of $O(\log x)$ for the sum $\sum_{n \le x} \psi(x/n)( = x\log x -x +O(\log x))$, so I am not sure how the weaker assumption of $o(x/\log x)$ leads to the stronger estimate of $O(x)$. |
In what sense is a Donut isomorphic to a Mug? Posted: 21 Jul 2022 09:58 PM PDT I am studying a bit of topology and I am utterly confused when trying to figure out in what sense people say things like "a donut is isomorphic to a mug". What topology (open-set definition) are we putting on the donut? Is it the topology induced from the ambient space? If so, does it mean we necessarily need an ambient space to talk about things like "topological invariants"?
Let $V,W$ be two vector spaces, $T \in L(V,W)$. If $\{v_i\}_{i=1}^p$ are linearly independent in $V$, is $\{T(v_i)\}_{i=1}^p$ so in $W$? Posted: 21 Jul 2022 09:20 PM PDT Question: Suppose $T : V \to W$ is one-to-one and linear. Show that linearly independent vectors in $V$ have linearly independent images under $T$; that is, if $$\{v_1,v_2,...,v_n\}$$ is linearly independent in $V$, then $$\{T(v_1),T(v_2),··· ,T(v_n)\}$$ is linearly independent in $W$. I know that since $T$ is one-to-one, the kernel of $T$ contains only the zero vector. I am supposed to prove this by contradiction. Let's assume that $$\{T(v_1),T(v_2),··· ,T(v_n)\}$$ is linearly dependent. Then there exist scalars $c_1, c_2, ... , c_n$, not all $0$, such that $$c_1T(v_1) + c_2T(v_2) + ... + c_nT(v_n) = 0.$$ Am I on the right track so far? How do I get to the point where I can reach my contradiction? Or is there a better way of proving the question?
If $\{v_1+v_2, v_2+v_3, v_1+v_3\}$ are linearly independent then $\{v_1, v_2, v_3\}$ are linearly independent Posted: 21 Jul 2022 10:02 PM PDT Problem. Prove that
What I tried: First, every element $\begin{pmatrix}m\\n\\p \end{pmatrix} \in \mathbb{R}^3$ can be written uniquely in terms of $A = \biggl\{\begin{pmatrix}1\\0\\1 \end{pmatrix},\begin{pmatrix}1\\1\\0 \end{pmatrix},\begin{pmatrix}0\\1\\1 \end{pmatrix}\biggr\}$ because $A$ is a basis of $\mathbb{R}^3$, so we can let $\begin{cases} m=a+b \\ n=b+c \\ p=a+c \end{cases}$. So, from $$ \begin{align} (\star) \implies (a+b)v_1 + (b+c)v_2+(a+c)v_3=0 \\ \iff av_1+bv_1+bv_2+cv_2+av_3+cv_3=0 \\ \iff a(v_1+v_3)+b(v_1+v_2)+c(v_2+v_3)=0 \\ \implies a=b=c=0 \implies m=n=p=0 \end{align}$$ $\implies \{v_1, v_2, v_3\}$ are linearly independent. Please correct me if I am wrong. Thanks!
calculate the limit of a function Posted: 21 Jul 2022 09:17 PM PDT I want to calculate the limit of this function as $x\to\infty$: $\lim_{x\to\infty}\left(\frac{c+\sqrt{x}}{-c+\sqrt{x}}\right)^x\exp(-2c\sqrt{x})$, where $c$ is a constant. Numerically, I plotted a graph of this function, and I think the answer is $1$. But theoretically, I have no idea how to proceed.
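One way to see the limit (a sketch, not a full proof): expanding $x\ln\frac{\sqrt{x}+c}{\sqrt{x}-c} = 2c\sqrt{x} + \frac{2c^3}{3\sqrt{x}} + O(x^{-3/2})$, the exponent minus $2c\sqrt{x}$ tends to $0$, suggesting the limit is $e^0 = 1$. A numerical check in Python, using the exp/log form to avoid overflow from the huge exponent $x$:

```python
import math

def f(x, c=1.0):
    # ((c + sqrt(x)) / (-c + sqrt(x)))^x * exp(-2 c sqrt(x)),
    # computed as exp(x * log(ratio) - 2 c sqrt(x)) for stability
    s = math.sqrt(x)
    return math.exp(x * math.log((c + s) / (s - c)) - 2 * c * s)

for x in [1e4, 1e6, 1e8]:
    print(x, f(x))  # tends to 1; the gap behaves like 2c^3 / (3 sqrt(x))
```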
If $G$ is triangle-free and $\delta(G)$ is large, then $G$ is a bipartite graph Posted: 21 Jul 2022 09:39 PM PDT
First of all, this result is vacuously true for $n=1,3,5$. For $n=2$ and $n=4$, it is trivially true since the desired graphs are $K_2$ and $K_{2,2}$. We assume that $n\geq 6$ and $V(G)=\{v_1,\dots,v_n\}$. Since $\deg(v_1)>2n/5$, take $v_2\in N(v_1)$ and consider $N(v_2)$ as well. It is easy to see that $N(v_1)$ and $N(v_2)$ are disjoint and independent sets. Moreover, $|V(G)\setminus (N(v_1)\sqcup N(v_2))|<n/5$.
Question 1. What if $|N(w)\cap N(v_1)|=k>0$ and $|N(w)\cap N(v_2)|=\ell>0$? I sketched a picture and I see that $G$ is not bipartite in this case. I was wondering how to prove it rigorously. Question 2. I was wondering whether it is possible to solve the problem with this approach. It seems a bit difficult to me, but I have not checked the details since I got stuck on Question 1.
Posted: 21 Jul 2022 09:31 PM PDT I have a function $v(x,y)$, where $x,y \in \mathbb{R}$. I would like to turn the following statement into math: "the function $v(x,y)$ is increasing at a decreasing rate as the difference between $x$ and $y$ increases." Is there a simple way to express this mathematically, such as $\frac{d v(x,y)}{d |x-y|} >0$, etc.?
Ordered pair of natural numbers such that the geometric mean is exactly $8$ less than the arithmetic mean. Posted: 21 Jul 2022 09:27 PM PDT
$\sqrt{ab}= \frac{a+b}{2}-8$ Now $a$ and $b$ are natural numbers and the difference between their square roots is a natural number, so what can we conclude from this? I am not able to proceed from here. Please help!
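A brute-force search can suggest the pattern before proving it. Squaring $\sqrt{ab} = \frac{a+b}{2} - 8$ (valid only when the right side is nonnegative) gives the integer condition $4ab = (a+b-16)^2$; a sketch:

```python
def is_solution(a, b):
    # sqrt(ab) = (a+b)/2 - 8 requires (a+b)/2 - 8 >= 0;
    # squaring then gives the integer condition 4ab = (a + b - 16)^2.
    return a + b >= 16 and 4 * a * b == (a + b - 16) ** 2

sols = [(a, b) for a in range(1, 200) for b in range(1, 200) if is_solution(a, b)]
print(sols[:6])  # [(1, 25), (4, 36), (9, 49), (16, 64), (25, 1), (25, 81)]
```

The pairs found are all of the form $(k^2, (k+4)^2)$ and their reverses, consistent with the observation that $|\sqrt a - \sqrt b| = 4$, since $(\sqrt a - \sqrt b)^2 = a + b - 2\sqrt{ab} = 16$.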
Define $g(a) := \liminf_{x \to a} f(x)$ for all $a \in X$. Then $g$ is lower semi-continuous Posted: 21 Jul 2022 09:16 PM PDT Let $X$ be a topological space. Then I'm trying to prove below result about a function $g$ derived from $f$.
My attempt: Clearly, $g \le f$. Fix $a \in X$. We want to prove $$ g(a) \le \liminf_{x \to a} g(x). $$
Let $(x_d)_{d\in D}$ be a net such that $x_d \to a$ and $g(x_d) \to \alpha$. By our Lemma, it suffices to show $\alpha \ge g(a)$. For each $(V, n) \in \mathcal N_a \times \mathbb N$,
We endow $\mathcal N_a \times \mathbb N$ with a partial order $\le$ defined by $$ (V_1, n_1) \le (V_2, n_2) \iff (V_1 \supset V_2) \wedge (n_1 \le n_2). $$ Then $(\mathcal N_a \times \mathbb N, \le)$ is a directed set. By construction, $y_{V, n} \to a$ and $f(y_{V,n}) \to \alpha$. By this result, $\liminf_{x \to a} f(x)$ is the smallest cluster point among all convergent nets $(f(x_t))$ such that $x_t \to a$. This implies $\alpha \ge \liminf_{x \to a} f(x)$. This completes the proof.
Why can't columns of a generalized modal matrix for the same Jordan block be interchanged? Posted: 21 Jul 2022 10:09 PM PDT Suppose we have the matrix $$M=\begin{pmatrix} 1 & 0 & 1\\ 1 & 1 & 0\\ 0 & 0 & 1 \end{pmatrix}$$ Then we can find a Jordan normal form $J$ and generalized modal matrix $P$ such that $M=PJP^{-1}$. These are $$P=\begin{pmatrix} 0 & 1 & 0\\ 1 & 0 & 0\\ 0 & 0 & 1 \end{pmatrix} \textrm{ and } J=\begin{pmatrix} 1 & 1 & 0\\ 0 & 1 & 1\\ 0 & 0 & 1 \end{pmatrix}$$ I understand that one can swap the order of Jordan blocks. My question is, why must the generalized eigenvectors of a Jordan chain in the modal matrix appear in order of increasing rank? In other words why couldn't we swap the position of two generalized eigenvectors corresponding to the same Jordan block so that $$P=\begin{pmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{pmatrix}$$ would also be a valid modal matrix for this problem. I have some intuitive understanding as to why this is wrong, but I'm interested in a more formal explanation/proof. |
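A concrete check, in plain Python with no libraries, that the given $P$ works while the proposed swap does not. Since the permutation $P$ swaps the first two coordinates, it is its own inverse, so $PJP^{-1} = PJP$; for the identity matrix the conjugation does nothing, and we just get $J \ne M$. The underlying reason is that $MP = PJ$ forces column $k$ of $P$ to satisfy $Mp_k = p_k + p_{k-1}$, so the chain order of the columns matters.

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

M = [[1, 0, 1], [1, 1, 0], [0, 0, 1]]
J = [[1, 1, 0], [0, 1, 1], [0, 0, 1]]
P = [[0, 1, 0], [1, 0, 0], [0, 0, 1]]   # a swap, so P^{-1} = P
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]   # the proposed "swapped" modal matrix

print(matmul(P, matmul(J, P)) == M)  # True:  P J P^{-1} = M
print(matmul(I, matmul(J, I)) == M)  # False: I J I^{-1} = J != M
```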
Posted: 21 Jul 2022 10:03 PM PDT
My approach:
And I'm stuck. I can't see how I can get more info about $f(x)$ using these conditions and get areas out of it. |
What conditions allow one rational sequence to bound another? [closed] Posted: 21 Jul 2022 10:15 PM PDT Let $(a_k)$ and $(b_k)$ be infinite sequences of rational numbers with the following properties.
Then:
For example, if $a_k$ is $1/k!$ if $k\ge 2$ and $k$ is even, and 0 otherwise (these are the coefficients of the Taylor series for $\cosh(x)-1$), then $(b_k)$ can be built as $1/2^{(k-2)/2+1}$ if $k\ge 2$ and $k$ is even, and 0 otherwise. For a different sequence $(a_k)$, a different sequence $(b_k)$ has to be built, and so on (e.g., different $(b_k)$ sequences for the Taylor coefficients of $\exp(x/4)/2$, $\sinh(x)/2$, and $\cosh(x)/2$), thus making it hard to determine whether a given sequence will work. Motivation: In my scenario:
Then by sampling $X$ from the distribution $(b_k)$ and flipping a coin with probability of heads $\frac{a_X}{b_X} \lambda^X$, we can thus "flip" a new coin whose probability of heads is $f(\lambda)$ — without estimating the probability $\lambda$ directly. But this only works if, among other things, $a_k$ is bounded above by $b_k$ for every $k$ (in other words, the sequence $(a_k)$ can be "tucked" under the probabilities of the discrete distribution, represented by $(b_k)$), and this condition is not always easy to verify. |
Posted: 21 Jul 2022 10:14 PM PDT Let $\mu, \sigma, s, t \in \mathbb{R}$ with $\sigma>0$ and $0 \le s \le 1$. Define $$a_{\mu, \sigma, s, t} := \underset{X \gets \mathcal{N}(\mu,\sigma^2)}{\mathbb{E}}\left[\left(1 - s + s \cdot e^X \right)^{i \cdot t}\right]$$ $$= \frac{1}{\sqrt{2\pi \sigma^2}}\int_{-\infty}^\infty \exp\left(\frac{-(x-\mu)^2}{2\sigma^2} + i \cdot t \cdot \log\left(1-s+s\cdot e^x\right)\right) \mathrm{d}x,$$ where $i^2=-1$. I would like to be able to efficiently compute the value of $a_{\mu, \sigma, s, t}$ numerically. A closed-form solution seems too good to be true. But I'm hoping for something like a rapidly converging series. To be a bit more precise, I want a procedure (i.e., implementable on a computer) that, given inputs $\mu,\sigma,s,t,\varepsilon\in\mathbb{R}$, computes $\tilde{a}_{\mu,\sigma,s,t}$ with $|\tilde{a}_{\mu,\sigma,s,t}-a_{\mu,\sigma,s,t}|\le\varepsilon$. The running time of the procedure (assuming basic arithmetic operations take unit time) should be polynomial in $\log(1/\varepsilon)$. An explicit series with exponentially decaying terms would suffice for this. Why do I need such fast runtime? The quantity of interest is exponentially small $|a_{\mu,\sigma,s,t}|\approx\exp(-s^2t^2\sigma^2/2)$, so $\varepsilon$ needs to be exponentially small too (otherwise $\tilde{a}_{\mu,\sigma,s,t}=0$ provides a trivial estimate), but I still want polynomial runtime. Firstly, $a_{\mu, \sigma, s, t}$ is well defined. It's the expectation of a bounded and continuous function of a Gaussian. In particular, there is an obvious Monte Carlo algorithm for computing this value: Sample $X \gets \mathcal{N}(\mu,\sigma^2)$ and compute $\left(1 - s + s \cdot e^X \right)^{i \cdot t}$; repeat this procedure and average the results. But to get accuracy $\varepsilon$, this algorithm would require roughly $1/\varepsilon^2$ repetitions. I want an algorithm that runs in something like $\log(1/\varepsilon)$ time. 
Numerical integration methods are a bit faster (roughly $1/\varepsilon$ steps), but that's still not as rapid as I'd like. There are a few easy special cases:
As a rough approximation $\log(1-s+s\cdot e^x) \approx sx$, whence $a_{\mu,\sigma,s,t} \approx \mathbb{E}[e^{i t s X}] = e^{its\mu - t^2s^2\sigma^2/2}$. One thing I tried is a binomial series expansion: $$\left(1 - s + s \cdot e^x \right)^{i \cdot t} = \sum_{k=0}^\infty {it \choose k} \cdot (1-s)^{it-k} \cdot s^k \cdot e^{k \cdot x}.$$ But this series only converges if $x<\log(1/s-1)$. In particular, when I try to evaluate this series with a Gaussian $X$, the terms grow exponentially, as $\mathbb{E}[e^{k \cdot X}] = e^{\mu k + k^2 \sigma^2 / 2}$. Here's another approach: Define $$f(x) := \left(1-s+s\cdot e^x\right)^{i \cdot t} = \exp(i \cdot t \cdot \log(1-s+s\cdot e^x)).$$ Then $a_{\mu,\sigma,s,t} = \mathbb{E}[f(X)]$ for $X \gets \mathcal{N}(\mu,\sigma^2)$. Now let $$\hat{f}(\xi) = \int_{-\infty}^\infty f(x) \cdot e^{-2\pi i \xi x} \mathrm{d}x$$ be the Fourier transform of $f$. By Parseval's, $$a_{\mu,\sigma,s,t} = \mathbb{E}[f(X)] = \int_{-\infty}^\infty \hat{f}(\xi) \cdot \hat{X}(\xi) \mathrm{d}\xi,$$ where $\hat{X}(\xi) = \mathbb{E}[e^{-2\pi i \xi X}] = e^{-2\pi \mu \xi i -2\pi^2 \sigma^2 \xi^2}$ is the Fourier-Stieltjes transform of $X \gets \mathcal{N}(\mu,\sigma^2)$. Now this would be progress if $\hat{f}$ is easier to work with than $f$. Unfortunately, $\hat{f}$ is not even well-defined because $f$ is not an integrable function. Nevertheless, I do feel like this approach might be salvageable. Any suggestions for how to approach this would be greatly appreciated! |
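As a baseline for validating any faster scheme, the naive Monte Carlo estimator mentioned above can be sketched as follows. It is compared against the rough approximation $e^{its\mu - t^2s^2\sigma^2/2}$ derived earlier; close agreement is only expected for small $\sigma$, and the sample size here is illustrative.

```python
import cmath
import math
import random

def f(x, s, t):
    # (1 - s + s e^x)^(i t) via the principal log; the base is always
    # positive for s in [0, 1], so math.log is safe.
    return cmath.exp(1j * t * math.log(1 - s + s * math.exp(x)))

def mc_estimate(mu, sigma, s, t, n=100_000, seed=0):
    rng = random.Random(seed)
    return sum(f(rng.gauss(mu, sigma), s, t) for _ in range(n)) / n

mu, sigma, s, t = 0.0, 0.1, 0.5, 1.0
est = mc_estimate(mu, sigma, s, t)
approx = cmath.exp(1j * t * s * mu - t**2 * s**2 * sigma**2 / 2)
print(abs(est - approx))  # small for small sigma
```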
Evaluating $\int_0^1\frac{\ln^2(1+x)+2\ln(x)\ln(1+x^2)}{1+x^2}dx$ Posted: 21 Jul 2022 09:57 PM PDT How to show that $$\int_0^1\frac{\ln^2(1+x)+2\ln(x)\ln(1+x^2)}{1+x^2}dx=\frac{5\pi^3}{64}+\frac{\pi}{16}\ln^2(2)-4\,\text{G}\ln(2)$$ without breaking up the integrand, since we already know: $$\int_0^1\frac{\ln^2(1+x)}{1+x^2}dx=4\,\Im\operatorname{Li}_3(1+i)-\frac{7\pi^3}{64}-\frac{3\pi}{16}\ln^2(2)-2\,\text{G}\ln(2)$$ and $$\int_0^1\frac{\ln(x)\ln(1+x^2)}{1+x^2}dx=-2\,\Im\operatorname{Li_3}(1+i)+\frac{3\pi^3}{32}+\frac{\pi}8\ln^2(2)-\text{G}\ln(2).$$ We can see that the imaginary parts cancel out, leaving only a real value, and this fact led me to pose this question. I tried integration by parts and substituting $x\to (1-x)/(1+x)$, but to no avail. The two integrals are given in (here) and (here) respectively.
Explain the mechanism behind the "1" in the present value calculation of money Posted: 21 Jul 2022 09:39 PM PDT Explain the mechanism behind the "1" in the present value calculation of money. The formula is $PV = A/(1+\text{discount rate})^{\text{no. of years}}$. It would be helpful if someone could explain the $1$, and if there is a simple math rule, law, convention, or algebraic expression behind it, please expatiate on that as well. Imagine yourself as explaining to a grade schooler when you do it (unfortunately, my math skills are that bad). Thank you for your time.
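The "1" keeps the principal: each year money grows by a factor of $(1 + \text{rate})$, where the $1$ carries the original amount forward and the rate adds the interest, so discounting divides by that factor once per year. A tiny sketch of the formula:

```python
def present_value(amount, rate, years):
    # Each year, money grows by (1 + rate): the "1" preserves the principal,
    # the "rate" adds the interest. Discounting reverses that growth,
    # dividing by (1 + rate) once per year.
    return amount / (1 + rate) ** years

# 105 received one year from now, at a 5% discount rate, is worth ~100 today,
# because 100 invested today at 5% grows to 100 * (1 + 0.05) = 105.
print(present_value(105, 0.05, 1))  # about 100
```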
Frog-Jumping Out Of Well Word Problem Posted: 21 Jul 2022 10:04 PM PDT A frog is jumping out of a well 30 ft. deep. Each day he jumps 3 feet and slips two feet back. How many days does it take the frog to jump up to 30 ft. (out of the well)? I did this problem long-hand and got 28 days, an answer others got as well. I am wondering if there is some series (geometric series, etc.) that solves it. I tried to use a "closed form" geometric series, but that does not seem to work. Any other series solutions besides long-hand? Thanks.
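The day-by-day process is easy to simulate; a minimal sketch. Rather than a series, the usual closed form is $\lceil (d - u)/(u - s) \rceil + 1$ for depth $d$, jump $u$, slip $s$: after the final jump the frog is out before it can slip, so only the first $d - u$ feet are covered at the net rate $u - s$. For these numbers, $\lceil 27/1 \rceil + 1 = 28$.

```python
import math

def days_to_escape(depth=30, up=3, slip=2):
    height, day = 0, 0
    while True:
        day += 1
        height += up          # daytime jump
        if height >= depth:   # out of the well before slipping back
            return day
        height -= slip        # overnight slip

print(days_to_escape())  # 28
print(math.ceil((30 - 3) / (3 - 2)) + 1)  # 28, the closed form
```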
Area of a cyclic quadrilateral. Posted: 21 Jul 2022 09:33 PM PDT Question: The distance of $SR$ from $PQ$ is 7 cm, arc $SR$ is 48 cm, and arc $SP \cong$ arc $QR$. Find the area of quadrilateral $SRQP$ ($P,Q,R,S$ are taken in order and $O$ is the centre). What we (my friends and I) tried: In the construction, $OM\perp PQ$. By the radian arc formula: $$48=r\cdot 2\theta\dfrac{ \pi}{180}$$ $$\theta=\dfrac{4320}{\pi r}$$ In $\triangle OMR$: Approach 2: Let $MN$ be $x$, $\angle SOR=\theta$, $\angle ROQ=\angle SOP=\phi$, and $\phi=\dfrac{180-\theta}{2}$. And $$\frac \theta {360}[2\pi(7+x)]= 48$$ Two equations and two variables, so it might be solvable (but I was not able to do so). How to solve this question? Thanks! As per the comments, it should be solved by numerical approximation.
Obtaining positive eigenvalues of the matrix $A$? Posted: 21 Jul 2022 09:46 PM PDT Let us consider the matrix $A$, which has three parameters $R, C_1, C_3$. This is from the Ikeda map in real form. It is defined as $$x \rightarrow R+(x \cos(\tau)-y \sin(\tau))$$ $$y \rightarrow x\sin(\tau)+y\cos(\tau)$$ The Jacobian matrix is given by: \begin{equation*} A = \begin{bmatrix} \cos(\tau) + x \frac{\partial}{\partial x} \cos \tau - y\frac{\partial}{\partial x}\sin\tau & x\frac{\partial}{\partial y} \cos \tau - \sin \tau - y \frac{\partial}{\partial y} \sin \tau\\ \sin\tau + x \frac{\partial}{\partial x} \sin \tau + y \frac{\partial}{\partial x} \cos \tau & x \frac{\partial}{\partial y}\sin\tau + \cos \tau + y\frac{\partial}{\partial y}\cos \tau \end{bmatrix} \end{equation*} where $$\tau = C_{1} - \frac{C_{3}} {1+x^2+y^2}$$ $x,y$ are solutions of the nonlinear equations \begin{equation} R+x \cos \tau - y \sin \tau = x\\ x\sin \tau + y \cos \tau = y \end{equation} After calculating the determinant of the matrix $A$, we get $\det(A)=1$ (so the product of the eigenvalues is $1$), using $$\frac{\partial \tau}{\partial x} = \frac{2C_{3}x}{(1+x^2+y^2)^2}$$ $$\frac{\partial \tau}{\partial y} = \frac{2C_{3}y}{(1+x^2 + y^2)^2}$$ I am wondering for which values of $R,C_{1},C_{3}$ I can obtain positive eigenvalues, as after trying many values I get complex or negative eigenvalues. I am also wondering whether the above matrix can have any positive eigenvalues at all. Any sharp, hawk-eyed observations on this? EDIT - If $R=0$, then $x=0,y=0$ satisfies the nonlinear equations, and the trace of the matrix at $(0,0)$ is $2\cos \tau$; for the eigenvalues to be real and positive we need $\cos \tau > 1$, which is not possible, so we can eliminate the $R=0$ case. Now I am wondering whether, for $R \neq 0$, the Jacobian matrix can have positive eigenvalues.
What does a wedge or caret mean in a matrix context? Posted: 21 Jul 2022 09:42 PM PDT While reading about coordinate transformations, I came across this: $\Omega^{\gamma}_{\beta,\alpha}=[\omega^{\gamma}_{\beta,\alpha}\wedge]$ What does the caret (or wedge) mean? In the book it looks more like a caret than a wedge. Taken from Groves, Paul D., Principles of GNSS, Inertial, and Multisensor Integrated Navigation Systems, 2nd ed., p. 45. Thank you in advance.
Are all eigenvectors, of any matrix, always orthogonal? Posted: 21 Jul 2022 09:35 PM PDT I have a very simple question that can be stated without proof. Are all eigenvectors, of any matrix, always orthogonal? I am trying to understand principal components, and it is crucial for me to see the basis of eigenvectors.
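A small counterexample shows the answer is no in general: eigenvectors of a non-symmetric matrix need not be orthogonal. Orthogonality is guaranteed for symmetric (more generally, normal) matrices, which is why PCA works with orthogonal principal components: covariance matrices are symmetric. A sketch, with eigenpairs computable by hand:

```python
# A is upper triangular and not symmetric; its eigenpairs can be read off:
#   A @ [1, 0] = 1 * [1, 0]   (eigenvalue 1)
#   A @ [1, 1] = 2 * [1, 1]   (eigenvalue 2)
A = [[1, 1], [0, 2]]
v1, v2 = [1, 0], [1, 1]

def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

# Verify the eigenpairs, then test orthogonality.
assert matvec(A, v1) == [1 * x for x in v1]
assert matvec(A, v2) == [2 * x for x in v2]

dot = sum(a * b for a, b in zip(v1, v2))
print(dot)  # 1, not 0: these eigenvectors are not orthogonal
```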