Sunday, October 24, 2021

Recent Questions - Mathematics Stack Exchange



Complete direct sum

Posted: 24 Oct 2021 08:16 PM PDT

If $R$ is a ring and $A$ and $B$ are $R$-modules, then I'm familiar with the definitions of the external and internal direct sum. But I wonder what the definition of a complete direct sum is. Is it different from the two types of sums above?

Thanks in advance.

Use Euler's theorem to get 3^3^3^3^3^3... (mod 100)

Posted: 24 Oct 2021 08:15 PM PDT

I'm supposed to use Euler's theorem (applied iteratively) to compute $3^{3^{3^{\cdots}}} \pmod{100}$, where there are 2021 threes.

I've tried two approaches. First, $3^{27} = 27^9 = 27\cdot 27^8 = 27\cdot(729)^4 \equiv 27\cdot 29^4$, and I got to the point where I found $3^{3^3}\equiv 87$. But I'm not using Euler's theorem here, nor can I see how to extend this to the case where there are 2021 threes.

Another approach is writing down the totient functions: $\phi(100) = 40$, $\phi(40) = 16$, $\phi(16) = 8$, $\phi(8) = 4$, $\phi(4) = 2$, $\phi(2) = 1$.

In any case, I'm stuck and not sure where I can proceed from here. I'd appreciate any help.
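For what it's worth, the totient chain in the second approach can be turned directly into a computation. A minimal sketch in Python (the exponent reduction mod $\phi(m)$ is valid here because $3$ is coprime to every modulus in the chain $100, 40, 16, 8, 4, 2, 1$):

```python
def phi(m):
    """Euler's totient via trial factorization (fine for small m)."""
    result, p = m, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

def tower_mod(height, m):
    """3^3^...^3 (height threes) mod m, reducing exponents mod phi(m).
    Recursion depth is bounded by the totient chain, not by height."""
    if m == 1:
        return 0
    if height == 1:
        return 3 % m
    return pow(3, tower_mod(height - 1, phi(m)), m)

print(tower_mod(2021, 100))  # prints 87
```

Note that the value stabilizes quickly: already $3^{3^3} \equiv 87 \pmod{100}$, the same residue the first approach found.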

How can we take the derivative of $UV$ w.r.t. a vector $\mathbf{x}$ using the chain rule and $d(UV)=dU\cdot V + U\cdot dV$?

Posted: 24 Oct 2021 08:15 PM PDT

There is an example about this question on Page 564 in Chapter 11 of B&V's Convex Optimization book. It presents the gradient and Hessian matrix of the following log barrier function,

$$\tag{11.5} \phi(x)=-\sum_{i=1}^m \log(-f_i(x)) $$ $$ \nabla \phi(x)=\sum_{i=1}^m\frac{1}{-f_i(x)} \nabla f_i(x)\\ \nabla^2 \phi(x)=\sum_{i=1}^m\frac{1}{f_i(x)^2} \nabla f_i(x)\nabla f_i(x)^T+\sum_{i=1}^m\frac{1}{-f_i(x)} \nabla^2 f_i(x) $$

I could not figure out the first term of the Hessian. According to the chain rule and the product rule, $U=\frac{1}{-f_i(x)}$ and $V=\nabla f_i(x)$, and $dU=\frac{1}{f_i(x)^2}\nabla f_i(x)$. Hence, the first term of the Hessian is supposed to be $dU\cdot V=\frac{1}{f_i(x)^2}\nabla f_i(x)\cdot \nabla f_i(x)$, but this does not satisfy the dimensionality requirement of a matrix product, since $\nabla f_i(x)$ is a vector. I know the quoted form of the first term of the Hessian meets the dimensionality requirement and is dimensionally consistent with the second term. Can anybody give me some guidance on calculating derivatives using both the chain rule and the product rule simultaneously in cases like this? I would appreciate any instruction.
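For reference, one way the bookkeeping can be organized (a sketch, not the book's own derivation): for a scalar field $u:\mathbb{R}^n\to\mathbb{R}$ and a vector field $v:\mathbb{R}^n\to\mathbb{R}^n$, the product rule gives a matrix-valued derivative

$$ D\big(u(x)\,v(x)\big) = v(x)\,\nabla u(x)^T + u(x)\,Dv(x), $$

where the "scalar times vector" term $dU\cdot V$ becomes the outer product $v\,\nabla u^T$. With $u=\frac{1}{-f_i(x)}$ and $v=\nabla f_i(x)$ we have $\nabla u = \frac{1}{f_i(x)^2}\nabla f_i(x)$ and $Dv = \nabla^2 f_i(x)$, so

$$ D\left(\frac{\nabla f_i(x)}{-f_i(x)}\right) = \frac{1}{f_i(x)^2}\,\nabla f_i(x)\,\nabla f_i(x)^T + \frac{1}{-f_i(x)}\,\nabla^2 f_i(x), $$

which matches both terms of the quoted Hessian.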

Are quaternions algebraic numbers?

Posted: 24 Oct 2021 08:13 PM PDT

As the title states: are all quaternions algebraic numbers, i.e. can they be derived from (inverse) exponentiation of rational numbers? In my understanding this is what makes numbers like the imaginary unit algebraic; does this apply to the units j and k as well?

This question may seem extremely simple, but please understand that I don't have a background in formal mathematics and don't even fully understand what quaternions are; I was just wondering about the above and was not able to find the question stated precisely as such.

Set notation inquiry

Posted: 24 Oct 2021 08:12 PM PDT

I have a continuous interval $L$ in the positive real numbers, parametrized by $t$, and I have a function $f(t)$. I am trying to use set notation to describe the set of subintervals of $L$ over which $f(t) \le f^*$ and over which a minimum of at least $f^* - 1$ occurs.

So far, I have the following:

$$\text{Set} = \{[t_a, t_b] \subseteq L \mid f(t_a) = f(t_b) = f^* \text{ and } (\text{I do not know what to put here})\}$$

Question about convergent sequences in a metric space

Posted: 24 Oct 2021 08:10 PM PDT

I am confused about the following question:

Suppose I have some metric space $(X, d)$, and I have a convergent sequence $(p_{n})_{n \in \mathbb{N}}$ that converges to some $p \in X$.

I have been asked to prove that $(\{p_{n}\})^{\prime} \subset \{p\}$.

I do not quite understand how this is possible since isn't $\{p\}$ just one element? Furthermore, isn't $p$ a limit point, so how could it be a proper subset? Does anyone have an idea of what this could mean?
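A remark that may resolve the confusion: here $\subset$ almost certainly means $\subseteq$, and $(\{p_n\})'$ denotes the derived set (set of limit points) of the range of the sequence, which can be empty:

$$ (\{p_{n}\})^{\prime} \subseteq \{p\} \quad\text{allows both}\quad (\{p_{n}\})^{\prime} = \{p\} \quad\text{and}\quad (\{p_{n}\})^{\prime} = \emptyset. $$

For example, the constant sequence $p_n = p$ has range $\{p\}$, a finite set with no limit points, so its derived set is $\emptyset$.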

Prove that $ P(T>n)=P\left(X_{n}>Y_{n}\right)-P\left(X_{n}<Y_{n}\right) $

Posted: 24 Oct 2021 08:08 PM PDT

I am having a very hard time proving the below statement. I keep getting the wrong result so I feel like I am using the wrong probability identities. I would appreciate any help!

Let $\left(X_{n}\right)_{n \geq 0}$ and $\left(Y_{n}\right)_{n \geq 0}$ be two independent simple symmetric random walks with $X_{0}=x$ and $Y_{0}=y$, where $x, y$ have the same parity and $x>y$.

Define $T=\inf \left\{n \geq 1: X_{n}=Y_{n}\right\}$. Prove that $$ P(T>n)=P\left(X_{n}>Y_{n}\right)-P\left(X_{n}<Y_{n}\right) $$

This is what I did:

$$ P(T>n)=1-P(T \leq n) $$

By Theorem 29, since $X_{n}$ and $Y_{n}$ are simple symmetric random walks, we have:

$$ P_{0}\left(T_{a} \leq n\right)=2 P_{0}\left(X_{n} \geq a\right)-P_{0}\left(X_{n}=a\right) $$

Hence:

$$ \begin{aligned} P(T>n)&=1-2 P\left(X_{n} \geq Y_{n}\right)+P\left(X_{n}=Y_{n}\right)\\ &=1-2 P\left(X_{n}>Y_{n}\right)-2 P\left(X_{n}=Y_{n}\right)+P\left(X_{n}=Y_{n}\right)\\ &=1-2 P\left(X_{n}>Y_{n}\right)-P\left(X_{n}=Y_{n}\right) \end{aligned} $$

$$ 1-P\left(X_{n}=Y_{n}\right)=P\left(X_{n}>Y_{n}\right)+P\left(X_{n}<Y_{n}\right) $$

$$ \begin{aligned} P(T>n) &=P\left(X_{n}>Y_{n}\right)+P\left(X_{n}<Y_{n}\right)-2 P\left(X_{n}>Y_{n}\right) \\ &=P\left(X_{n}<Y_{n}\right)-P\left(X_{n}>Y_{n}\right) \end{aligned} $$

Solving y'+x^3y=0 using power series

Posted: 24 Oct 2021 08:08 PM PDT

I keep running into indexing problems. How does one solve this using power series?
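For context, the index shift involved is the following (a sketch, not a full solution): substituting $y=\sum_{n\ge 0} a_n x^n$ gives

$$ y' + x^3 y = \sum_{n\ge 0}(n+1)a_{n+1}x^{n} + \sum_{n\ge 3} a_{n-3}x^{n} = 0, $$

so $a_1=a_2=a_3=0$ and $(n+1)a_{n+1}=-a_{n-3}$ for $n\ge 3$, i.e. $a_{n+4}=-\frac{a_n}{n+4}$. Only the coefficients $a_{4k}$ survive, with $a_{4k}=a_0\frac{(-1/4)^k}{k!}$, which reproduces the closed form $y=a_0 e^{-x^4/4}$.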

Surjectivity of compact operator after perturbation

Posted: 24 Oct 2021 08:13 PM PDT

Assume $T$ is a compact operator on a Hilbert space $H$ and $k>0$. If we can show that $I+k^2T$ is injective, does that imply it is surjective as well?

Why is this limit calculation wrong?

Posted: 24 Oct 2021 08:17 PM PDT

$$ \lim _{x \rightarrow 0}\left(\frac{1}{\sin ^{2} x}-\frac{\cos ^{2} x}{x^{2}}\right) $$ $$ =\lim_{x\rightarrow 0}\left(\frac{x^2-\sin^2 x\cos^2 x}{x^2\sin^2x}\right) $$ $$ =\lim_{x\rightarrow 0}\left( \frac{\frac{x^2}{\sin^2x} - \cos^2x}{x^2} \right) $$ $$ =\lim_{x\rightarrow 0}\left( \frac{1-\cos^2x}{x^2} \right)=1 $$

But the correct answer is $4/3$.
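For what it's worth, the suspect step is replacing $x^2/\sin^2 x$ by its limit $1$ inside the numerator before the outer limit is taken: this discards exactly the fourth-order terms that survive division by $x^2\sin^2 x \sim x^4$. A Taylor expansion shows what is lost:

$$ x^2-\sin^2 x\cos^2 x = x^2-\tfrac14\sin^2 2x = x^2-\tfrac14\left(4x^2-\tfrac{16x^4}{3}+O(x^6)\right) = \tfrac{4x^4}{3}+O(x^6), $$

so

$$ \lim_{x\to 0}\frac{x^2-\sin^2 x\cos^2 x}{x^2\sin^2 x} = \lim_{x\to 0}\frac{\tfrac43 x^4+O(x^6)}{x^4+O(x^6)} = \frac43. $$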

Zero tensors in the tensor product of two vector spaces?

Posted: 24 Oct 2021 08:03 PM PDT

If $V$ and $W$ are two finite-dimensional vector spaces over a field $k$, then we know that for $x\otimes y \in V\otimes W$ we have $x\otimes y =0$ if and only if $x=0$ or $y=0$. But if one of the spaces is infinite-dimensional, is this property still true?

Inequality with truncated vectors

Posted: 24 Oct 2021 07:58 PM PDT

I found the following inequality in a paper, stated without proof. It is supposed to be simple, but I have yet to find a way to confirm it. I would greatly appreciate some help or intuitive insight into this problem.

Let $H$ be a Hilbert space and introduce the function $\theta\colon H \rightarrow H$ defined by

$\theta(h)= \begin{cases} h & \text{if } \vert \vert h \vert \vert\leq 1\\ \frac{h}{\vert \vert h \vert \vert} & \text{if } \vert \vert h \vert \vert>1. \end{cases} $

Then, for all $h_1,h_2\in H$ it holds that $\vert \vert\theta(h_1+h_2)-\theta(h_1)-\theta(h_2)\vert \vert \leq 2\Big(\theta(\vert \vert h_1 \vert \vert)^2+\theta(\vert \vert h_2 \vert \vert)^2\Big).$

Can the $t$ parameter in Hoeffding bound be a variable dependent on $n$?

Posted: 24 Oct 2021 07:58 PM PDT

Can the $t$ parameter in the snippet of the proof of the Hoeffding bound (from Wikipedia; see image below) be a positive variable dependent on $n$? For example, can $t = \log{(n+1)}$ where $n \in \mathbb{N}$? I believe it can, but would like confirmation.

[Image: snippet of the proof of the Hoeffding bound, taken from Wikipedia]

How to prove this upper bound of generalized Mangoldt function?

Posted: 24 Oct 2021 07:55 PM PDT

Define $\Lambda_k(n,x)=\sum\limits_{d|n}\mu(d)\left(\ln\frac xd\right)^k$, where $n$ is a positive integer and $x$ is a positive real number.

Let $n=p_1^{a_1}...p_r^{a_r}$, prove that $\Lambda_k(n,x)\leq r!\binom kr(\ln x)^{k-r}(\ln p_1)...(\ln p_r)$.

Remark. This is Eq. (1.49) of Analytic Number Theory by Iwaniec and Kowalski.

An optimization problem involving a matrix and an element-wise activation function

Posted: 24 Oct 2021 07:51 PM PDT

Let $W \in \mathbb{R}^{n\times m}$, $x, \delta \in \mathbb{R}^m$, $y \in \mathbb{R}^n$ (so that the inner product below is defined), $p \ge 1$, and let $\sigma$ be the element-wise function $\sigma(x) = \max\{0, x\}$. What is the optimal value of \begin{equation*} \begin{aligned} \min_{\delta} \ \sigma(W(x+\delta))^T y \ \ \ \ \text{s.t.} \ \Vert \delta \Vert_{p} \le \epsilon, \end{aligned} \end{equation*} where $\delta$ is the variable? (If it is difficult for general $p$, you can choose a specific one.)

Determine whether $T:P_2\to P_2$ defined by $T(a+bx+cx^2)=a+b(x+1)+b(x+1)^2$ is linear, and prove it.

Posted: 24 Oct 2021 07:52 PM PDT

Determine whether $T:P_2\to P_2$ defined by $T(a+bx+cx^2)=a+b(x+1)+b(x+1)^2$ is linear, and prove it.

So I know I can either use the definition of a linear transformation or turn it into a matrix transformation, which is automatically linear. I don't know how to prove it by the definition, though. Would I just define $u=d+ex+fx^2$ and $v=g+hx+kx^2$, let $m$ be a scalar, and then plug in to show that $T(u+v)=T(u)+T(v)$ and $T(mu)=mT(u)$?
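One way to sanity-check the definition-based approach numerically (a sketch; it takes the formula exactly as written, where $c$ does not appear on the right-hand side): expanding, $T(a+bx+cx^2) = (a+2b) + 3bx + bx^2$, which is linear in the coefficient vector $(a,b,c)$.

```python
def T(p):
    """T(a + b x + c x^2) = a + b(x+1) + b(x+1)^2, on coefficient triples.
    Expanding: (a + 2b) + 3b x + b x^2 (note: c is unused, as written)."""
    a, b, c = p
    return (a + 2 * b, 3 * b, b)

def add(p, q):
    return tuple(x + y for x, y in zip(p, q))

def scale(m, p):
    return tuple(m * x for x in p)

u, v, m = (1.0, 2.0, 3.0), (-4.0, 0.5, 7.0), 2.5
assert T(add(u, v)) == add(T(u), T(v))   # additivity
assert T(scale(m, u)) == scale(m, T(u))  # homogeneity
```

A numerical check on sample inputs is not a proof, of course, but the expanded coefficient form makes the definition-based proof a routine computation.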

Function of a gamma-distributed random variable

Posted: 24 Oct 2021 08:14 PM PDT

Let $Y=aX^2+X$ where $X\sim\Gamma(\alpha, \theta)$

I'm assuming that I need to find the pdf of $Y$ in order to calculate its mean and variance.

If so, any suggestions as to how to find the pdf?
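For what it's worth, the pdf of $Y$ is not needed for the mean and variance: the raw moments $E[X^k]=\theta^k\,\alpha(\alpha+1)\cdots(\alpha+k-1)$ of the gamma distribution (shape–scale parametrization assumed) suffice, since $E[Y]=aE[X^2]+E[X]$ and $E[Y^2]=a^2E[X^4]+2aE[X^3]+E[X^2]$. A sketch:

```python
from math import gamma as gamma_fn

def gamma_moment(k, alpha, theta):
    """k-th raw moment of Gamma(alpha, theta): theta^k * Gamma(alpha+k)/Gamma(alpha)."""
    return theta ** k * gamma_fn(alpha + k) / gamma_fn(alpha)

def mean_var_Y(a, alpha, theta):
    """Mean and variance of Y = a X^2 + X for X ~ Gamma(alpha, theta)."""
    m1, m2, m3, m4 = (gamma_moment(k, alpha, theta) for k in (1, 2, 3, 4))
    ey = a * m2 + m1
    ey2 = a * a * m4 + 2 * a * m3 + m2
    return ey, ey2 - ey ** 2

# worked example with alpha = 2, theta = 3, a = 0.5
print(mean_var_Y(0.5, 2, 3))  # (33.0, 2043.0)
```

The pdf of $Y$ itself would only be needed for quantities that are not polynomial moments.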

Equivalence of mathematical induction and strong induction

Posted: 24 Oct 2021 07:56 PM PDT

In the book Discrete Mathematics and Its Applications, 8e, Kenneth Rosen it is mentioned that:

... mathematical induction and strong induction are equivalent. That is, each can be shown to be a valid proof technique assuming that the other is valid.

I interpret this to mean that neither proof technique can be used where the other could not be used. That being the case, why bother about strong induction at all?

$\int_0^{2\pi}\cos^n\theta\,d\theta$ by integration in the complex plane

Posted: 24 Oct 2021 08:17 PM PDT

I need help with this problem:

Use integration in the complex plane to calculate the integral $$\int_0^{2\pi}\cos^n\theta\,d\theta$$ I thought to set $z= e^{i\theta}$ and rewrite the integral like this: $$\oint_{C_1}\left (\dfrac{z+z^{-1}}{2}\right )^n\dfrac{dz}{iz}=\dfrac{1}{i2^n}\oint_{C_1}\dfrac{(z+z^{-1})^n}{z}dz$$ Then, with the integral in the form $$\oint_{C_1}\dfrac{(z+z^{-1})^n}{(z-0)}dz$$ I would apply the Cauchy integral formula, but that isn't possible because the function $(z+z^{-1})^n$ is not defined at $0$.

What can I do?
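A hint that may help: the Cauchy integral formula is not needed; since $0$ is an isolated singularity, the residue theorem applies, and the residue is the coefficient of $z^{-1}$ in the binomial (Laurent) expansion

$$ \frac{(z+z^{-1})^n}{z} = \sum_{j=0}^{n}\binom{n}{j} z^{\,n-2j-1}, $$

which is $\binom{n}{n/2}$ when $n$ is even (take $j=n/2$) and $0$ when $n$ is odd. This gives

$$ \int_0^{2\pi}\cos^n\theta\,d\theta = \frac{1}{i2^n}\cdot 2\pi i\cdot\operatorname{Res}_{z=0} = \begin{cases} \dfrac{2\pi}{2^n}\dbinom{n}{n/2}, & n \text{ even},\\ 0, & n \text{ odd}, \end{cases} $$

consistent with $\int_0^{2\pi}\cos^2\theta\,d\theta=\pi$.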

Logarithmic differentiation with Riccati's differential equation

Posted: 24 Oct 2021 07:59 PM PDT

The problem in this post arises in the study of Riccati's differential equation.

$$ R_{C} \left( y_{1},y_{2},y_{3},y_{4} \right) = \frac{ \left( y_{1}-y_{3} \right) \left( y_{2}-y_{4} \right) }{ \left( y_{1}-y_{4} \right) \left( y_{2}-y_{3} \right) } \tag{1} $$

where $y_{1}, y_{2}, y_{3}, y_{4}$ are particular solutions of the following Riccati differential equation:

$$ \frac{dy}{dx} =f\left(x\right)y^{2}+ g \left( x \right) y + h \left( x \right) \tag{2} $$

Here $f(x)$, $g(x)$, and $h(x)$ are all known (given) functions, and

$$ R_{C}= R_{C} \left( y_{1},y_{2},y_{3},y_{4} \right) $$

The book says that the below equation can be obtained using logarithmic differentiation(s).

$$ \frac{R_{C}'}{R_{C}} =\frac{y_{1}'-y_{3}'}{y_{1}-y_{3}}+\frac{y_{2}'-y_{4}'}{y_{2}-y_{4}}-\frac{y_{1}'-y_{4}'}{y_{1}-y_{4}} -\frac{y_{2}'-y_{3}'}{y_{2}-y_{3}} \tag{3} $$

How can this be obtained?

The final goal is to show that a general solution of the Riccati ODE can be obtained once at least three particular solutions of it are known.

What I currently know about logarithmic differentiation is the example below (this function is separate from eq. (1)).

$$ y:= x^x $$

$$ \ln\left( y \right) = \ln\left( x^x \right) $$

$$ \ln\left( y \right) = x\ln\left( x \right) $$

$$ \frac{d}{dx} \left( \ln\left( y \right) \right) = \frac{d}{dx} \left( x \ln\left( x \right) \right) $$

$$ \frac{1}{ y } \frac{dy}{dx} = \ln\left(x\right) + x \cdot x^{-1}$$

$$ \frac{1}{ y } \frac{dy}{dx} = \ln\left(x\right) + 1$$

$$ \frac{dy}{dx} = y \left( \ln\left( x \right) +1 \right) $$

$$ \frac{dy}{dx} = x^x \left( \ln\left( x \right) +1 \right) $$
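For what it's worth, the same technique applied to eq. (1) gives eq. (3) directly (a sketch, assuming none of the differences vanish on the interval considered): taking logarithms,

$$ \ln R_C = \ln(y_1-y_3)+\ln(y_2-y_4)-\ln(y_1-y_4)-\ln(y_2-y_3), $$

and differentiating with respect to $x$ using $(\ln u)'=u'/u$ term by term,

$$ \frac{R_C'}{R_C} = \frac{y_1'-y_3'}{y_1-y_3}+\frac{y_2'-y_4'}{y_2-y_4}-\frac{y_1'-y_4'}{y_1-y_4}-\frac{y_2'-y_3'}{y_2-y_3}. $$

(Strictly one should use $\ln|\cdot|$ to avoid sign issues; the quotient $R_C'/R_C$ comes out the same either way.)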

Ratios of integers in Bezout's Identity

Posted: 24 Oct 2021 07:52 PM PDT

Bezout's Identity is a classic of elementary number theory: let $m,n\in\mathbb{N}^+$ with $\gcd(m,n)=1$. Then there are $a,b\in\mathbb{Z}$ with $$ am+bn=1 $$ Without loss of generality we can assume $m>n$, take $a,b\in\mathbb{N}$ with $a<b$, and rewrite this as $$ |am-bn|=1 $$ My question is: can the minimal values of $a$ and $b$ be predicted by looking at the ratio $m/n$? My guess is that $b/a$ is in some sense the best approximation to $m/n$, maybe with the lowest possible value of $a$, or maybe $a$ is chosen such that $|m/n-b/a| \le a^{-p}$ for some power $p$.

Ex: $m=47,n=21$. Now $47/21 \approx 2.2381$: clearly $9/4=2.25$ is the 'closest' to this starting with denominator equal to $1$ and increasing, and indeed we have $a=4, b=9$ with $|4\cdot 47-9\cdot 21|=1$.

Ex: $m=101,n=58$. $101/58\approx 1.7414$; $a=4$ and $b=7$ with $b/a=1.75$ almost works, giving a RHS of $2$, but the actual pair is $a=27,b=47$ with $b/a\approx 1.7407$.

In summary: if we know $m/n$, can we use it to find $b/a$, and if so, what is the 'threshold' or degree of accuracy?
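The guess about best approximations can be made precise: in both examples $b/a$ is the penultimate continued-fraction convergent of $m/n$, which the Euclidean algorithm produces directly. A sketch:

```python
def penultimate_convergent(m, n):
    """For coprime m > n > 1, return (a, b) with |a*m - b*n| = 1,
    where b/a is the second-to-last continued-fraction convergent of m/n."""
    convergents = []
    h1, h2 = 1, 0  # convergent numerators h_{-1}, h_{-2}
    k1, k2 = 0, 1  # convergent denominators k_{-1}, k_{-2}
    a, b = m, n
    while b:
        q, r = divmod(a, b)       # continued-fraction digit q
        h1, h2 = q * h1 + h2, h1  # standard convergent recurrence
        k1, k2 = q * k1 + k2, k1
        convergents.append((h1, k1))
        a, b = b, r
    # convergents[-1] == (m, n); the one before it gives the identity
    b_num, a_den = convergents[-2]
    return a_den, b_num

print(penultimate_convergent(47, 21))   # (4, 9)
print(penultimate_convergent(101, 58))  # (27, 47)
```

For $47/21$ the convergents are $2/1,\ 9/4,\ 47/21$, giving $(a,b)=(4,9)$; for $101/58$ they end $\ldots,\ 47/27,\ 101/58$, giving $(a,b)=(27,47)$ — matching both examples above. The classical bound for convergents, $|m/n - b/a| < 1/a^2$, quantifies the "degree of accuracy".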

Is there a Horn formula which is equivalent to $(p \lor q)$?

Posted: 24 Oct 2021 07:55 PM PDT

Is there a Horn formula which is equivalent to $(p \lor q)$?

Hi I have to answer the following question:

Given any formula $\phi$, is it possible to find a Horn formula equivalent to $\phi$?

I know that a formula is a Horn formula if all clauses in its conjunctive normal form are Horn clauses.

But in this case $\phi$ is already in its conjunctive normal form, and $\phi$ has 2 positive literals.

So this would be an example proving that what I have been asked is not possible, right?

Thanks in advance

UPDATE

I just thought of something; let me know if I'm right.

So a Horn Clause has the form:

a) $(p_1 ∧...∧ p_n) → q$

b) $ \lnot p_1 \lor \lnot p_2 \lor \cdots \lor \lnot p_n $

Since $(p \lor q)$ is not equivalent to $(p \land q) → s$, and

$(p \lor q)$ is not equivalent to $(\lnot p \lor \lnot q)$

Therefore $(p \lor q)$ can't be equivalent to any Horn formula.

Is this reasoning right?
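The reasoning above only rules out single Horn clauses; the standard semantic criterion rules out all Horn formulas at once: the models of a Horn formula are closed under componentwise AND (intersection), while $p\lor q$ has models $(1,0)$ and $(0,1)$ whose meet $(0,0)$ is not a model. A small check (the closure criterion itself is a known theorem; the code merely exhibits the witness):

```python
def or_pq(p, q):
    """The formula p v q as a Boolean function."""
    return p or q

m1, m2 = (True, False), (False, True)
meet = tuple(x and y for x, y in zip(m1, m2))  # componentwise AND

assert or_pq(*m1) and or_pq(*m2)  # both assignments are models of p v q
assert not or_pq(*meet)           # ...but their meet (False, False) is not
# Hence p v q cannot be equivalent to any Horn formula.
```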

I need help with a Riemann–Stieltjes integral.

Posted: 24 Oct 2021 07:59 PM PDT

Let $\alpha:[a,b]\rightarrow \mathbb{R}$ be a monotonically increasing function. Show that $\alpha \in \textit{R}(\alpha)$ if and only if $\alpha$ is continuous on $[a,b]$.

My progress is as follows:

Let's suppose $\alpha \in \textit{R}(\alpha)$, so for every $\epsilon > 0$ there exists a partition $P$ such that:

$$U(\alpha,P,\alpha)-L(\alpha,P,\alpha)<\epsilon$$

I hope you can help me

What is the formula for the expected duration of a 15-sided dice game?

Posted: 24 Oct 2021 08:12 PM PDT

There are $n$ participants. Each participant has a 15-sided die. What is the formula for the expected number of rolls required for a 1 to be rolled in the game?
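If "number of rolls" means rounds (all $n$ participants roll once per round), the round count is geometric with success probability $1-(14/15)^n$, so the expectation is $1/(1-(14/15)^n)$; multiply by $n$ if every individual die roll is counted. A sketch with a Monte Carlo cross-check (the interpretation of "rolls" is an assumption):

```python
import random

def expected_rounds(n):
    """Expected rounds until at least one of n fair 15-sided dice shows a 1."""
    return 1 / (1 - (14 / 15) ** n)

def simulate_rounds(n, trials=20000, seed=0):
    """Monte Carlo estimate of the same quantity."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        rounds = 0
        while True:
            rounds += 1
            if any(rng.randint(1, 15) == 1 for _ in range(n)):
                break
        total += rounds
    return total / trials

print(expected_rounds(1))  # 15, as expected for a single die
print(expected_rounds(3))  # about 5.35
print(simulate_rounds(3))  # Monte Carlo estimate, close to the above
```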

Power of 1 in complex domain

Posted: 24 Oct 2021 08:21 PM PDT

Recently, when studying complex numbers, I encountered something peculiar.
I wanted to say that $1^n$ is still $1$; in other words, $1^n=(\cos 0+i\sin 0)^n=\cos 0+i\sin 0=1$. This looks good, but writing $1^n=(\cos 2\pi+i\sin 2\pi)^n=\cos 2n\pi+i\sin 2n\pi$, when $n$ is not an integer we get $1^n\ne 1$.
Why?
Also this freaked me out totally:
$(e^{2\pi i})^{1.5}=1$
$e^{3\pi i}=-1$
according to Wolfram Alpha. Why?
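The numerical behavior can be reproduced with Python's `cmath`, which (like Wolfram Alpha) uses the principal branch of the logarithm: `z ** w` is computed as `exp(w * log z)` with `Im(log z)` in $(-\pi,\pi]$, which is why the identity $(e^a)^b=e^{ab}$ is not preserved for complex exponents:

```python
import cmath

one = cmath.exp(2j * cmath.pi)  # numerically 1 (up to rounding)
lhs = one ** 1.5                # principal branch: exp(1.5 * log 1) = 1
rhs = cmath.exp(3j * cmath.pi)  # = -1 exactly (up to rounding)

assert abs(lhs - 1) < 1e-9  # (e^{2 pi i})^{1.5} evaluates to 1
assert abs(rhs + 1) < 1e-9  # e^{3 pi i} evaluates to -1
```

The two results differ because `one ** 1.5` forgets the $2\pi i$ in the exponent: the principal logarithm of $1$ is $0$, not $2\pi i$.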

Lifting of principal G-bundles

Posted: 24 Oct 2021 07:57 PM PDT

Let $k$ be an algebraically closed field. We assume all schemes we consider are $k$-schemes.

Let $P \rightarrow X$ be a principal $G$-bundle, where $G$ is an algebraic group. We assume $X$ is affine $(X := \text{Spec}(A))$.

Let $X \rightarrow X' := \text{Spec}(A')$ be a thickening. Note that thickening means $X \rightarrow X'$ is a closed immersion and its ideal sheaf is nilpotent.

Then, do we have a lifting of the principal bundle $P$? I.e., do we have the following cartesian diagram $\require{AMScd}$ \begin{CD} P @>>> P'\\ @VVV @VVV\\ X @>>> X' \end{CD}

where $P' \rightarrow X'$ is also a principal $G$-bundle?

Edit : I originally came across this problem in the following question.

Let $Y$ be a $k$-scheme with a $G$-action and $P \rightarrow Y$ be a $G$-equivariant morphism ($P$ is defined as above).

Then, classify the liftings of $P \rightarrow Y$ to $P' \rightarrow Y$ such that they are also $G$-equivariant, given that a lifting $P' \rightarrow X'$ of the principal $G$-bundle $P \rightarrow X$ with respect to $X \rightarrow X'$ exists.

Asymptotic power series expansion of $\int_0^\infty\frac{x^\nu J_\nu(x\alpha)}{e^x-1}{\rm d}x$ around $\alpha=1$ and $\alpha<1$

Posted: 24 Oct 2021 08:21 PM PDT

What is the power series expansion $f(\alpha)$ as $\alpha\to 1$ and also in the limit $\alpha\ll 1$, where

$$f(\alpha)=\int_0^\infty\frac{x^\nu J_\nu(x\alpha)}{e^x-1}{\rm d}x$$

At $\alpha=1$ the integrand is analytic, so we should have a convergent power series. The power series expansion for the case $\alpha\gg1$ is already answered here. How does this differ from the expansion at $\alpha=1$ and in the region $\alpha<1$?

I see that the numerical plot of the integral for $\nu=2$ is the following: [image]

Thus something very interesting is happening at the point $\alpha=1$.

Edit: If it is difficult to find an asymptotic expansion at $\alpha=1$, then can we at least prove that $f(\alpha)$ has a maximum at that point?

Circumscribed sphere about a tetrahedron

Posted: 24 Oct 2021 07:56 PM PDT

$ABCD$ is a tetrahedron such that $AB=2\sqrt{3}$, $BC=\sqrt{7}$, $AC=1$ and $AD=BD$. Plane $ABD$ is perpendicular to plane $ABC$. Find the shortest distance between lines $BD$ and $AC$ if the radius of the circumscribed sphere about the tetrahedron is equal to $2\sqrt{2}$.

Several questions on the mini-max theorem for self-adjoint operators

Posted: 24 Oct 2021 08:20 PM PDT

I am reading the proof of the mini-max theorem for bounded self-adjoint operators, following "Unbounded Self-adjoint Operators on Hilbert Space" by Konrad Schmüdgen, and it seems a total mess.

Given a self-adjoint operator $A$ on a Hilbert space $\mathcal{H}$ that is bounded below, we define the following sequence:

$$\mu_n(A) := \sup_{\mathcal{D} \in \mathcal{F}_{n-1}}\inf_{x \in \mathcal{D}(A), \|x\| = 1, x\perp \mathcal D}\langle Ax,x\rangle,$$ where $\mathcal{F}_{n-1}$ is the set of all finite dimensional subspaces of $\mathcal{H}$ with dimension at most $n-1$.

If $\sigma_{ess}(A) = \emptyset,$ where $\sigma_{ess}(A)$ denotes the essential spectrum of $A$, then we define $\{\lambda_n(A)\}$ as the sequence of isolated eigenvalues of $A$ with finite multiplicity (counting the multiplicity).

If $\sigma_{ess}(A) \neq \emptyset,$ one defines $$\alpha := \inf \{\lambda : \lambda \in \sigma_{ess}(A)\}$$ and define the same sequence $\lambda_n$ for the isolated eigenvalues with finite multiplicity on the bottom of $\sigma_{ess}(A)$. The rest of the elements of the sequence are $\alpha$. If there are no eigenvalues on the bottom of $\sigma_{ess}(A)$ we make $\lambda_n(A) := \alpha, ~\forall n.$

The mini-max principle states that:

$$\mu_n(A) = \lambda_n(A) = \inf \{\lambda : \dim E_A((-\infty,\lambda))\mathcal H \geq n\},$$ where $E_A$ is the unique spectral measure that represents $A$.

We denote $E_A((-\infty,\lambda))\mathcal{H} := \mathcal E_{\lambda}$ and $d(\lambda) := \dim \mathcal{E}_{\lambda}.$

Now one starts the proof:

It is easy to see that the equality follows for $n=1$ assuming an intermediate lemma that proves that:

$$\mu_n = \inf\{\lambda : d(\lambda) \geq n\}.$$

The idea is to proceed by induction: we assume the statement holds for $1,2,3,\ldots,n-1$ and try to prove it holds for $n$. Here I paste the proof:

[image of the proof from the book]

Questions:

1) Why is $\mu_n$ isolated? This does not need to be true at all. The Proposition the author refers to just says that the support of the spectral measure $E_A$ coincides with the spectrum of $A$, and that the points of the spectrum are the points where the identity resolution is not continuous.

2) It seems to me that by the induction hypothesis $\mu_n \geq \lambda_n$; otherwise $\mu_n = \mu_{j}$ for some $j \leq n-1.$ Is this right? (Note that the book does not even mention where it uses the induction hypothesis.)

3) Here is the most concerning point: it says that if one assumes there is $\lambda \in (\lambda_n,\mu_n)$, then $d(\lambda) \geq n.$ Why? This does not seem to be true given what we know about $\mu_n.$

Thank you.

Determining third vertex of the right angled isosceles triangle

Posted: 24 Oct 2021 08:05 PM PDT

If $A(9,-9)$ and $B(1,-3)$ are vertices of a right-angled isosceles triangle, what is the third vertex? Here I am stuck on which side the given points determine: is $AB$ one of the equal sides, or the hypotenuse? If we consider both cases, how many solutions are there?
