Friday, June 17, 2022

Recent Questions - Mathematics Stack Exchange



Why do we assume the discriminant to be greater than or equal to zero while calculating the range of a function?

Posted: 17 Jun 2022 12:27 PM PDT

Since I don't have enough reputation to comment, I am asking this question again: Why D≥0 while finding the range of rational functions.

I cannot understand why we can assume that the quadratic has real roots and then say $D \ge 0$. The answer states that $b^2 - 4ac = (b + 2ax)^2$, and since this is a squared term it is nonnegative for all real $x$. But that is the question we began with: how do we know that $x$ is real? Plugging a non-real value of $x$ into the relation for $D$ does not satisfy it. Please help me understand. Thank you.
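To illustrate the logic with an example of my own (not from the linked question): $y$ lies in the range exactly when the quadratic in $x$ obtained by cross-multiplying has a real root, and that is where $D \ge 0$ enters. For $y = x/(x^2+1)$ the quadratic is $yx^2 - x + y = 0$, so $D = 1-4y^2 \ge 0$ gives the range $[-1/2, 1/2]$, and a dense numeric sample agrees:

```python
# Example (my own, illustrative): range of f(x) = x / (x^2 + 1).
# y is attained iff y*x^2 - x + y = 0 has a REAL root x,
# i.e. D = 1 - 4*y^2 >= 0, so the range is [-1/2, 1/2].

def f(x):
    return x / (x * x + 1)

# dense sample over a wide real interval
samples = [f(i / 1000) for i in range((-100000), 100001)]
print(max(samples), min(samples))  # 0.5 -0.5
```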

Has the sequence of prime numbers been discovered?

Posted: 17 Jun 2022 12:26 PM PDT

I have been working on prime numbers for four years, and I recently found a recursive sequence that obtains prime numbers from smaller prime numbers. I was wondering whether any articles have been written about recursive (or non-recursive) sequences for obtaining prime numbers, or any attempts to do so. Please share any information you have; I want to know whether my work is valuable.

Bounded sequence from coercive function.

Posted: 17 Jun 2022 12:24 PM PDT

I am working through this thread, but I don't understand the following assertion in the first solution given by the user:

The next step is to extract a convergent subsequence from $x_n$; this is possible because the sequence $x_n$ is bounded (thanks to the coercivity of $f$)

I understand the case $\inf<\infty$: if $x_{n}$ were unbounded, then coercivity would give $$ \infty=\limsup_{n\to \infty}f(x_{n})\leq m. $$ But I don't understand the case $m=\infty$.

Can $m=\infty$ happen? Thanks!

finding the solution for a specific integral

Posted: 17 Jun 2022 12:27 PM PDT

I got confused; I haven't done calculus in a while... So I have this:
$$x(a) = x(0)+\int_0^a x(t) \,dt$$ also
$$x(t+dt)=x(t)+0.01\,x(t)\,dt$$ where $dt$ in this case is, for example, 1 year,
so
$$x(1)=x(0)+0.01\,x(0), \qquad x(2)=x(1)+0.01\,x(1), \qquad \text{etc.}$$

How do I obtain $x(a)$? Assume I know $a$ (for example $a=10$) and I also know $x(0)$ (for example $x(0)=1000$).
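For what it's worth, the recurrence already determines the answer in closed form. A sketch, assuming the growth rate 0.01 and the example values $x(0)=1000$, $a=10$ from the question: compounding yearly gives $x(a)=x(0)\cdot 1.01^a$, while the continuous-time version $x'(t)=0.01\,x(t)$ gives $x(a)=x(0)\,e^{0.01a}$:

```python
# Sketch (assumed example values from the question: x(0) = 1000, a = 10).
# The recurrence x(t+1) = x(t) + 0.01*x(t) compounds discretely, so
# x(a) = x(0) * 1.01**a; the continuous version x'(t) = 0.01*x(t)
# gives x(a) = x(0) * exp(0.01*a).

import math

x0, a = 1000.0, 10

# discrete recurrence, step by step
x = x0
for _ in range(a):
    x = x + 0.01 * x
print(x)                        # ~1104.62
print(x0 * 1.01 ** a)           # same value, closed form
print(x0 * math.exp(0.01 * a))  # ~1105.17, continuous-time limit
```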

How many primes congruent to n modulo 3 can be between n and 2n

Posted: 17 Jun 2022 12:20 PM PDT

How many primes congruent to $n$ modulo $3$ can lie between $n$ and $2n$?

If enough can lie between them, we can pigeonhole Goldbach's conjecture. Hence the tags.
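Not an answer to the asymptotic question, but a quick empirical count (my own sketch) of the primes $p$ with $n < p < 2n$ and $p \equiv n \pmod 3$, for a few small $n$:

```python
# Empirical count (illustrative only): primes p with n < p < 2n
# and p congruent to n modulo 3.

def is_prime(m):
    if m < 2:
        return False
    i = 2
    while i * i <= m:
        if m % i == 0:
            return False
        i += 1
    return True

def count(n):
    return sum(1 for p in range(n + 1, 2 * n)
               if is_prime(p) and p % 3 == n % 3)

for n in [10, 50, 100, 500]:
    print(n, count(n))
```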

power series convergence interval

Posted: 17 Jun 2022 12:28 PM PDT

This is a multiple-choice question.

Which of the following is true?

  1. any power series converges on the non-empty open interval $(c-R,c+R)$
  2. any power series converges on the interval $\{x:|x-c|<R\}$

where $R$ is the radius of convergence and $c$ is its center, as usual.

I think the two options say the same thing.

Proving the inequality $\int_{t_0}^{t_1}p(t)\exp(t_0-t)\,dt\leq \int_{t_0}^{t_1}p(t)\exp(t-t_1)\,dt$

Posted: 17 Jun 2022 12:14 PM PDT

Let $p$ be an increasing, continuous function on the interval $[t_0,t_1]$. I wish to show the inequality $$ \int_{t_0}^{t_1}p(t)\exp (t_0-t)\,dt\leq \int_{t_0}^{t_1}p(t)\exp(t-t_1)\,dt. $$ I have checked this inequality in the simple cases $p(t)=\sqrt{t}$, $p(t)=t$, and $p(t)=t^2$, and it should be a simple inequality to prove, but somehow it is giving me more trouble than it should.
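A quick numerical sanity check of my own (not a proof), assuming the intended inequality is $\int p(t)e^{t_0-t}\,dt \le \int p(t)e^{t-t_1}\,dt$ for increasing $p$. Both weights have the same total integral, and the second concentrates its mass where $p$ is larger. Midpoint rule with $p(t)=\sqrt t$ on $[0,1]$:

```python
# Numerical check (not a proof), p(t) = sqrt(t) on [t0, t1] = [0, 1]:
# compare the two weighted integrals by the midpoint rule.

import math

t0, t1, n = 0.0, 1.0, 100000
h = (t1 - t0) / n
lhs = rhs = 0.0
for i in range(n):
    t = t0 + (i + 0.5) * h
    p = math.sqrt(t)
    lhs += p * math.exp(t0 - t) * h   # weight e^{t0 - t}
    rhs += p * math.exp(t - t1) * h   # weight e^{t - t1}
print(lhs, rhs)  # lhs < rhs
```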

Costas array formed by primitive roots

Posted: 17 Jun 2022 12:08 PM PDT

Prove or disprove: let $p$ be a prime and $a$ a primitive root modulo $p$. If one defines a $(p-1)\times (p-1)$ matrix $M$ by $M_{ij} = 1$ if $a^i\equiv j \pmod p$ and $0$ otherwise, then is $M$ necessarily a Costas array (where the points are the locations $(i,j)$ of the nonzero entries)?

I think the claim is true. Since $a$ is a primitive root, clearly $a, a^2, \cdots, a^{p-1}$ are all distinct modulo $p$, so every row has exactly one 1 and every column has exactly one 1. But I wasn't able to derive a contradiction after assuming there exist $(i_1, j_1) \neq (i_2, j_2)$ and $(c_1, d_1)\neq (c_2, d_2)$ with $\{(c_1, d_1), (c_2, d_2)\} \neq \{(i_1, j_1), (i_2, j_2)\}$ such that $\dfrac{j_2 - j_1}{i_2 - i_1} = \dfrac{d_2 - d_1}{c_2 - c_1}.$ Clearly $i_1\neq i_2$, $j_1\neq j_2,$ etc. Suppose WLOG that $i_1 < i_2$ and $c_1 < c_2$; there are $p-2$ possibilities for each of the differences $i_2 - i_1$ and $c_2 - c_1$. It is clearly not enough to use only the fact that there is exactly one entry in each row and column: one can construct a $4\times 4$ permutation-matrix counterexample. What properties of primitive roots might be useful for tackling this problem?
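As a data point, here is a small search of my own that tests the claim for every primitive root of a few small primes; as far as I know, the construction in the question is the (shifted) exponential Welch construction:

```python
# Empirical check (my own sketch): for small primes p and each primitive
# root a mod p, test whether the points (i, a^i mod p), i = 1..p-1,
# have the Costas property: within each row separation s, all column
# displacements are distinct.

def costas(perm):
    m = len(perm)
    seen = set()
    for s in range(1, m):              # row separation
        for i in range(m - s):
            d = perm[i + s] - perm[i]  # column displacement
            if (s, d) in seen:
                return False
            seen.add((s, d))
    return True

def primitive_roots(p):
    for a in range(2, p):
        if len({pow(a, i, p) for i in range(1, p)}) == p - 1:
            yield a

for p in [5, 7, 11, 13]:
    for a in primitive_roots(p):
        perm = [pow(a, i, p) for i in range(1, p)]
        print(p, a, costas(perm))
```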

Problem on local compactness

Posted: 17 Jun 2022 12:06 PM PDT

I'm working on a problem from Kelley's book.

If $X$ is a Hausdorff space and $Y$ is a dense locally compact subspace, then $Y$ is open.

I noted that if $Y$ is closed, then since it is dense, it follows that $Y=\bar{Y}=X$. But I don't have any idea how to get a contradiction. Any help would be appreciated!

Reconstruct Quadric from Projections on Plane Conics

Posted: 17 Jun 2022 11:56 AM PDT

I have two projection images of a non-degenerate quadric representing an ellipsoid in 3d-space. My goal is to reconstruct the quadric given the parametrized projected conic outlines. I am trying to recreate the approach described in this paper.

It is stated that a quadric $Q$ can be reconstructed from two view-conics $C_{1}, C_{2}$ up to one free parameter. The resulting family of quadrics can be written as:

$Q^{\ast}(\lambda)=Q_{1}^{\ast}+\lambda Q_{2}^{\ast}$

An additional view or point correspondences would define the reconstruction unambiguously.

I am having a hard time visualizing the geometric one-parameter family. When I look at visualisations of the problem, I often believe there should be a unique solution. However, I can also see specific edge cases where a whole family of quadrics would be the solution. Can we define further constraints on the problem to make it solvable from two views? For example:

  • orthogonality of projection angles
  • angle between the principal axes of the quadric and the projection directions
  • quadric with maximal volume (as constraint not optimization afterwards)

Any thoughts and comments would be appreciated! I visualized the geometry in this interactive plot.

Abelianization of $PGL(n,F)$

Posted: 17 Jun 2022 11:55 AM PDT

For any field $F$, it is known that the abelianization of $GL(n,F)$ is $F^{*}$, except for $GL(2,\mathbb{F}_{2})$. What is the abelianization of $PGL(n,F)$? It seems to me there are non-trivial elements in some cases, but I cannot find a good reference on it.

Help with the graph of a unit step function

Posted: 17 Jun 2022 12:16 PM PDT

If I have $$f(x) = \sum_{n=1}^{\infty} \frac{1}{n^{2}}I(x - \frac{1}{n})$$
where $I$ is the unit step function, what does the graph of $f$ look like and where is it discontinuous?


My thinking: If $x \leq 0$ then $f(x)$ evaluates to $0$, since the unit step function has a negative input for every $n$. For $x > 1$ we always get $\frac{\pi^{2}}{6}$, and for everything in between we get increasingly smaller values of $f(x)$ as $x$ decreases, but I can't tell whether it's continuous or what it's really supposed to look like. Any help is appreciated. Thanks!
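A direct numeric evaluation (my own sketch, taking $I(t)=1$ for $t\ge 0$ and $0$ otherwise, and truncating the series) shows the behaviour: $f$ jumps by $1/n^2$ at each $x=1/n$, is flat in between, is $0$ for $x\le 0$, and reaches $\pi^2/6$ at $x=1$:

```python
# Numeric sketch (my own): f(x) = sum over n of I(x - 1/n)/n^2 collects
# exactly the terms with 1/n <= x, so f jumps by 1/n^2 at each x = 1/n.

import math

def f(x, terms=100000):
    # truncated series; I(t) = 1 for t >= 0, else 0
    return sum(1 / n**2 for n in range(1, terms + 1) if x - 1 / n >= 0)

for x in [-0.5, 0.09, 0.11, 0.26, 0.5, 0.51, 1.0]:
    print(x, f(x))
# f is 0 for x <= 0 and approaches pi^2/6 = 1.6449... as x reaches 1
```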

Topological spaces whose quasi-components are connected

Posted: 17 Jun 2022 11:50 AM PDT

Let ${\mathcal X}$ be the category of topological spaces whose quasi-components are connected (and all continuous functions between them). I know that compact Hausdorff spaces are in ${\mathcal X}$. I also know that locally connected spaces are in ${\mathcal X}$. What else is known about ${\mathcal X}$? (I could not find any additional information on the web.) For example: Is it complete? Is it (co)reflective as a subcategory of topological spaces? ... Thanks.

Image of Sum of Orthogonal Projections

Posted: 17 Jun 2022 12:02 PM PDT

Let $V$ be a finite-dimensional real vector space of dimension $d$ such that $V = \sum_{i=1}^n V_i$, where the sum is not necessarily direct and each $V_i$ is a linear subspace. In other words, $V$ is spanned by the linear subspaces $\{V_i\}_{i=1}^n$.

Show that $$\text{Im}\left(\sum_{i=1}^n\text{proj}_{(V_i)}\right) = V$$ where $\text{proj}_{(V_i)}$ is the orthogonal projection operator on $V_i$ with respect to the Euclidean inner product.

I speculate that induction or a dimension count would get us the result. Also, note that the statement is equivalent to the kernel of the same operator being zero, by self-adjointness of orthogonal projections.
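A numerical experiment (my own numpy sketch, not a proof) supports the claim: for random subspaces of $\mathbb{R}^5$ that together span the space, the sum of the orthogonal projections has full rank, so its image is all of $V$:

```python
# Numeric illustration (my own sketch): subspaces V_i of R^d that span R^d,
# and the rank of the sum of the orthogonal projections onto them.

import numpy as np

rng = np.random.default_rng(0)
d = 5
# three random subspaces of dimensions 2, 2, 3 (generically they span R^5)
bases = [rng.standard_normal((d, k)) for k in (2, 2, 3)]
assert np.linalg.matrix_rank(np.hstack(bases)) == d  # they do span

# B @ pinv(B) is the orthogonal projection onto col(B)
P = sum(B @ np.linalg.pinv(B) for B in bases)
print(np.linalg.matrix_rank(P))  # 5, i.e. Im(sum of projections) = R^5
```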

GAP Program Efficiency for Number of Orbits

Posted: 17 Jun 2022 12:01 PM PDT

Let $S$ be a nonabelian finite simple group and $p$ a prime divisor of $|S|$.

I'm interested in finding the number of $\textrm{Aut}(S)$-orbits acting on the set $\textrm{Cl}_{p'}(S)$, the set of all $p$-regular conjugacy classes in $S$. We denote this as $n(\textrm{Aut}(S), \textrm{Cl}_{p'}(S))$.

I have written my own script which seems to be giving me the desired results (I've been testing it against Table 1 found here.) However, when I try to run the script on the entirety of the groups in Table 1, I quite quickly run out of memory.

As I am new to $\texttt{GAP}$ I'm sure my script is rather inefficient and could be improved upon. Any improvements would be appreciated.

################################################################################
#
# P_regl(G, p, a)
#
# Returns list of elements in G with p-regularity level a
#
P_regl := function(G, p, a)
    local g, q, S;
    S := [];
    for g in G do
        if Order(g) mod p^a = 0 then
            q := Order(g) / p^a;
            if Gcd(q, p) = 1 then
                Add(S, g);
            fi;
        fi;
    od;
    return S;
end;

################################################################################
#
# bound(S, p)
#
# Returns number of Aut(S)-orbits acting on Cl_p'(S)
#
bound := function(S, p)
    return Size(OrbitsDomain(AutomorphismGroup(S), P_regl(S, p, 0)));
end;

Matrix equation ... solvable?

Posted: 17 Jun 2022 12:17 PM PDT

In my condensed-matter research I've arrived at an equation like this:

$$A_{k'k} - X_{k'k} + \sum_q f(\epsilon_q)\Big[ \frac{X_{k'q}A_{qk}}{\epsilon_q-\epsilon_{k'}} - \frac{A_{k'q}X_{qk}}{\epsilon_k-\epsilon_q} \Big] = 0 $$

Here, $A_{k'k}$ is known (well, not explicitly) and $X_{k'k}$ is to be found (or rather its relation to $A_{k'k}$). The $\epsilon_k$ are numbers (eigenenergies of a Hamiltonian, infinitely many of them); again, consider them known. $f(\epsilon_q)$ is a function (actually $1/2$ minus the Fermi-Dirac distribution function). Obviously, the equation must hold for all $k$, $k'$.

I suspect it's not possible to find $X_{k'k}$ in terms of the other quantities, but I don't know. Can anyone please share your knowledge?

(P.S. This is just one critical step of a long calculation, but if you can solve it and this ever reaches a journal I can make you co-author on Phys. Rev. B or something.)

Functional Equation $G(x^2) = \frac{1}{2} (G(x) + G(-x))$

Posted: 17 Jun 2022 12:04 PM PDT

I am working on this problem to find all functions $f:\mathbb{N} \rightarrow \mathbb{R}$ such that $f(x) = f(2x)$. Specifically, I am trying to work out the generating function for this problem. I found the following relationship for the generating function $G(x^2) = \frac{1}{2} (G(x) + G(-x))$.

$$ G(x) = \sum_{k=1}^{\infty} f(k) x^k $$ $$ G(x)= \sum_{k=1}^{\infty} f(2k)x^{2k} + f(2k-1)x^{2k-1} $$ $$ G(x)= \sum_{k=1}^{\infty} f(k)x^{2k} + \sum_{k=1}^{\infty} f(2k-1)x^{2k-1} $$ $$ G(x)= G(x^2) + \frac{G(x)-G(-x)}{2} $$ $$ G(x^2) = \frac{1}{2} (G(x) + G(-x)) $$

I tried to find a way to solve this functional equation online but could not find anything. What is $G(x)$, and what are the coefficients of its power series?

Edit

An example of a function that satisfies $f(x) = f(2x)$ is $$ f(x) = \sin(2 \pi \log_2(x)) $$
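A quick check of my own that this example works for real $x > 0$: doubling $x$ adds $1$ to $\log_2 x$, and $\sin$ has period $2\pi$:

```python
# Check (my own) that f(x) = sin(2*pi*log2(x)) satisfies f(2x) = f(x):
# log2(2x) = log2(x) + 1, and sin is 2*pi-periodic.

import math

def f(x):
    return math.sin(2 * math.pi * math.log2(x))

for x in [0.3, 1.0, 2.7, 10.0]:
    print(x, f(x), f(2 * x))  # the last two columns agree
```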

Is my proof for any finite-dimensional vector space being isomorphic to its double dual correct?

Posted: 17 Jun 2022 12:19 PM PDT

Learning about dual vector spaces I began thinking about how the product of a row vector and a column vector acts like a dot product and that furthermore, the row vector is almost acting like a functional that takes in the column vector and outputs a real number. This made me think of the following way to prove that all vector spaces are isomorphic to their double dual:

First, recall that the set of all column vectors with $n$ entries forms a $n$-dimensional vector space. Furthermore, every vector space of dimension $n$ is isomorphic to the vector space of column vectors with $n$ entries, $\mathscr C_n$. This can be shown through a simple change in basis. Therefore to show that any finite-dimensional vector space $V \cong V^{**}$, it suffices to show that $\mathscr C_n \cong \mathscr C_n^{**}$.

Consider the map $\phi : \mathscr C_n \rightarrow \mathscr C_n^*$ defined such that $$\forall \mathbf v,\mathbf x \in \mathscr C_n : \phi(\mathbf v)(\mathbf x) = \mathbf v^T\mathbf x$$ If we can show that this map is (1) one-to-one, (2) onto, and (3) linear, then it follows that the image of any basis of $\mathscr C_n$ under $\phi$ is a basis of $\mathscr C_n^*$, and so $\phi$ is an isomorphism.

Denote the vector space of all row vectors with n entries as $\mathscr R_n$. Then

$$\forall \mathbf f \in \mathscr R_n : \exists! \mathbf v \in \mathscr C_n : \mathbf f = \mathbf v^T \implies \forall \omega \in \mathscr C_n^* : \exists! \mathbf v \in \mathscr C_n : \phi(\mathbf v) = \omega$$ So $\phi$ is (1) one-to-one and (2) onto.

Let $\mathbb F$ denote the underlying field for $\mathscr C_n, \mathscr R_n,$ and consequently $\mathscr C_n^*$. Now, note that $\forall \mathbf v,\mathbf w,\mathbf x \in \mathscr C_n, c \in \mathbb F:$

$$\phi(c\mathbf v)(\mathbf x) = (c\mathbf v)^T\mathbf x = c(\mathbf v^T\mathbf x) = c\phi(\mathbf v)(\mathbf x)\\\phi(\mathbf v+\mathbf w)(\mathbf x)=(\mathbf v+\mathbf w)^T\mathbf x=(\mathbf v^T+\mathbf w^T)\mathbf x=\mathbf v^T\mathbf x+\mathbf w^T\mathbf x=\phi(\mathbf v)(\mathbf x)+\phi(\mathbf w)(\mathbf x)$$ And so $\phi$ is (3) linear. Therefore $\phi$ is an isomorphism and so $\mathscr C_n \cong \mathscr C_n^*$, which implies that any finite-dimensional vector space $V$ is isomorphic to $V^*$.

I know of the much simpler canonical isomorphism between $V$ and $V^{**}$; I just want to test my knowledge and see whether this proof is also valid. Any feedback would be appreciated!

Clarification needed for chapter 9 example 15 of Gallian's text 10th edition on the notion of "pulling back" vs "the pullback of..."

Posted: 17 Jun 2022 12:09 PM PDT

We assume the following two properties, for both elements and subgroups of a group under homomorphisms; these properties and the example below (Example 15) are from Gallian's Contemporary Abstract Algebra, 9th edition.

$(1)$ Let $\phi$ be a homomorphism from a group $G$ to a group $\bar{G}$ and let $g$ be an element of $G.$ Then
If $\phi(g)=g'$, then $\phi^{-1}(g')=\{x\in G|\phi(x)=g'\}=gKer\; \phi.\\$

$(2)$ Let $\phi$ be a homomorphism from a group $G$ to a group $\bar{G}$. Then
If $\bar{K}$ is a subgroup of $\bar{G}$, then $\phi^{-1}(\bar{K})=\{k\in G| \phi(k)\in \bar{K}\}$ is a subgroup of $G$.

Both properties $(1)$ and $(2)$ are respectively called the inverse image of $g'$ (or the pullback of $g'$) and the inverse image of $\bar{K}$ (or the pullback of $\bar{K}$)

The following example, which is Example 15 in the aforementioned text, is supposed to demonstrate how one can find a subgroup of a group $G$ by "pulling back" a subgroup of a factor group of $G$.

Example 15: Let $H$ be a normal subgroup of a group $G$ and let $\bar{K}$ be a subgroup of the factor group $G/H$. Then the set $K$ consisting of the union of all elements in the cosets of $H$ in $\bar{K}$ is a subgroup of $G$. To verify that $K$ is a subgroup of $G$ let $a$ and $b$ belong to $K$ ($K$ is nonempty because it contains $H$). Since $a$ and $b$ are in $K$ the cosets $aH$ and $bH$ are in $\bar{K}$ and $aH(bH)^{-1}=aHb^{-1}=ab^{-1}\; H$ is also a coset in $\bar{K}$. Thus $ab^{-1}$ belongs to $K$. Note that when $G$ is finite $|K|=|\bar{K}||H|.$

I don't understand how, in Example 15 above, $K$ is considered to be obtained by "pulling back" a subgroup of the factor group $G/H$, and whether that is related to the notion of the pullback described in properties (1) and (2) above. Does "pulling back" in the example mean the same as taking the pullback of $\bar{K}$? If so, can someone give some clarification with a more detailed mathematical explanation, please? Thank you in advance.

Given points A(0,0) and B(10, 0) and a distance d = 15 find the shortest arc between the two.

Posted: 17 Jun 2022 12:25 PM PDT

I'm trying to find an arc between two points that traces a path between them of some set distance. I'm not a math major, so pardon me if I make some inaccurate statements below. I'm trying to figure out the movement of a robot.

I have point A(0,0) and point B(10, 0). The shortest path would simply be a straight line of distance 10. However, I know the given distance travelled is 15. Therefore, I assume there exists a way to determine the shortest arc between the two points (in the upper half-plane). Perhaps the word arc is misleading: it can be any path from point A to point B as long as the path has the given length 15. Is there a way to find an equation for the family of possible paths?

Any ideas would be most appreciated. Thank you.
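One concrete way to pin down a single path (my own sketch, not from the question) is to take the circular arc through A and B: with chord $c=10$ and arc length $L=15$, the half-angle $t$ of the arc satisfies $\sin(t)/t = c/L$, and then the radius is $R = c/(2\sin t)$. A bisection solve:

```python
# Sketch (my own approach): circular arc through A and B with chord c = 10
# and arc length L = 15.  The half-angle t solves sin(t)/t = c/L, which is
# monotone decreasing on (0, pi), so bisection works.

import math

c, L = 10.0, 15.0

def g(t):
    return math.sin(t) / t - c / L  # root gives the half-angle

lo, hi = 1e-9, math.pi - 1e-9
for _ in range(200):
    mid = (lo + hi) / 2
    if g(mid) > 0:
        lo = mid      # root is to the right
    else:
        hi = mid
t = (lo + hi) / 2
R = c / (2 * math.sin(t))
print(t, R)  # half-angle (radians) and circle radius
```

With $t$ and $R$ in hand, chord and arc length are recovered as $2R\sin t = c$ and $2Rt = L$, which is a convenient self-check.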

Polynomial Pigeon hole principle problem

Posted: 17 Jun 2022 12:15 PM PDT

Find all triples of positive integers $a<b<c$ for which $\frac{1}{a} +\frac{1}{b} + \frac{1}{c} = 1$ holds.

How should I proceed? I am self-learning combinatorics, and I have to solve it using the pigeonhole principle. This is the first exercise question from the book 'A Walk Through Combinatorics'.
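For reference, a brute-force search of my own (it does not use the pigeonhole principle the book asks for) confirms there is exactly one such triple:

```python
# Brute force (my own check): with a < b < c, 1/a forces a = 2 (since
# 3/a > 1 requires a < 3), then b < 4, so the search ranges below suffice.

from fractions import Fraction

solutions = [(a, b, c)
             for a in range(2, 4)
             for b in range(a + 1, 20)
             for c in range(b + 1, 100)
             if Fraction(1, a) + Fraction(1, b) + Fraction(1, c) == 1]
print(solutions)  # [(2, 3, 6)]
```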

Runge-Kutta method $4$ without initial function $y'$ using Octave

Posted: 17 Jun 2022 11:49 AM PDT

I'd like to solve this system using RK4, on octave:

$y''(x)+\sin(x)y(x)=0$

$y(0)= α$

$y'(0)= β$

I know we first write $z(x) = y'(x)$, so that $z'(x)=y''(x)=-\sin(x)y(x)$ and $z(0)=\beta$. However, for RK4 we need $y'$ as a function of the other values, in order to compute: $$ K_{11}=h\,y'(x_n,y_n,z_n), \\ K_{21}=h\,y'(x_n+h/2,\; y_n+K_{11}/2,\; z_n+K_{12}/2), \\ \text{etc.} $$ (with $K_{12} = h\,z'(x_n,y_n,z_n)$, etc., and $h$ the step)

and then calculate: $$ y_{n+1}= y_n+ \tfrac{1}{6}(K_1+2K_2+2K_3+K_4). $$

How can I find $y'$ as a function of $x$, $y$ and $z$? We need it for the approximation, but it is not given explicitly by the main equation.
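The resolution is that after the substitution, $y'$ is simply $z$: the first-order system is $y'=z$, $z'=-\sin(x)\,y$, and RK4 is applied to that pair. Below is a sketch of one possible implementation in Python (my own; the same structure ports directly to Octave), with assumed initial values $\alpha$ and $\beta$:

```python
# RK4 for the system  y' = z,  z' = -sin(x) * y  (so "y'" IS just z).
# alpha, beta are the assumed initial values y(0), y'(0).

import math

def rk4(alpha, beta, h, steps):
    def f(x, y, z):
        # right-hand side of the first-order system
        return z, -math.sin(x) * y
    x, y, z = 0.0, alpha, beta
    for _ in range(steps):
        k1y, k1z = f(x, y, z)
        k2y, k2z = f(x + h/2, y + h*k1y/2, z + h*k1z/2)
        k3y, k3z = f(x + h/2, y + h*k2y/2, z + h*k2z/2)
        k4y, k4z = f(x + h,   y + h*k3y,   z + h*k3z)
        y += h * (k1y + 2*k2y + 2*k3y + k4y) / 6
        z += h * (k1z + 2*k2z + 2*k3z + k4z) / 6
        x += h
    return y, z

print(rk4(1.0, 0.0, 0.01, 100))  # (y(1), y'(1)) for alpha=1, beta=0
```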

Count pairwise coprime triples such that the maximum number of the triple is not greater than N

Posted: 17 Jun 2022 12:20 PM PDT

Given $N$, you are to count the number of pairwise coprime triples $(a,b,c)$ satisfying $1\leq a,b,c\leq N$.

For example N=3, valid triples are (1,1,1),(1,1,2),(1,2,1),(2,1,1),(1,1,3),(1,3,1),(3,1,1),(1,2,3),(1,3,2),(2,1,3),(2,3,1),(3,2,1),(3,1,2)

Hence answer for N=3 is 13.

Source: own problem. I found the OEIS sequence A256390, but Reyna & Heyman's work seems too overwhelming for me to understand. If someone could explain it in simple terms with an example, that would be great; an alternate solution is also welcome.

I am not sure I even understand the OEIS formula. I tried to implement it, but I get 25 while the correct answer is 13.

import math

N = 3
mu = [0, 1, -1, -1]  # mu[1], mu[2], mu[3]; mu[0] unused
s = 0
for a in range(1, N + 1):
    for b in range(1, N + 1):
        for c in range(1, N + 1):
            v = (
                mu[a]
                * mu[b]
                * mu[c]
                * int(N / math.gcd(a, b))
                * int(N / math.gcd(b, c))
                * int(N / math.gcd(c, a))
            )
            # print(v)
            s += v
print("s=", s)
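For comparison, here is my own hedged reading of the inclusion-exclusion identity. Expanding each $[\gcd=1]$ indicator via the Möbius function attaches one index to each of the three pairs, so each variable must be divisible by two of the indices; that calls for $\operatorname{lcm}$, not $\gcd$. A brute-force count agrees for small $N$:

```python
# Sketch (my own derivation, checked against brute force):
#   count(N) = sum over d1, d2, d3 of mu(d1)*mu(d2)*mu(d3)
#              * (N // lcm(d1, d2)) * (N // lcm(d2, d3)) * (N // lcm(d3, d1))
# where d1 | gcd(a,b), d2 | gcd(b,c), d3 | gcd(c,a).

from math import gcd, lcm

def mobius(n):
    mu, k = 1, 2
    while k * k <= n:
        if n % k == 0:
            n //= k
            if n % k == 0:
                return 0      # square factor
            mu = -mu
        k += 1
    return -mu if n > 1 else mu

def by_formula(N):
    return sum(mobius(d1) * mobius(d2) * mobius(d3)
               * (N // lcm(d1, d2)) * (N // lcm(d2, d3)) * (N // lcm(d3, d1))
               for d1 in range(1, N + 1)
               for d2 in range(1, N + 1)
               for d3 in range(1, N + 1))

def brute(N):
    return sum(1 for a in range(1, N + 1) for b in range(1, N + 1)
               for c in range(1, N + 1)
               if gcd(a, b) == gcd(b, c) == gcd(a, c) == 1)

print([(N, by_formula(N), brute(N)) for N in (3, 5, 10)])  # N=3 gives 13
```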

Vector spaces isomorphic to its dual

Posted: 17 Jun 2022 12:24 PM PDT

Let $X$ be a finite-dimensional normed vector space. Is there an isometry $L:X\to X^*$? I know there is an isomorphism. I guess it's not true for all spaces, but I wonder under which conditions it is true. More precisely, I'm looking for conditions on a Banach space $E$ such that each finite-dimensional subspace of $E$ satisfies the property.

Let $N \unlhd G$. Then $G/N$ is nilpotent of class $c\in\Bbb{N}$ iff $c$ is the smallest natural number such that $\gamma_c(G) \subset N$

Posted: 17 Jun 2022 12:02 PM PDT

I think I have established the following proposition:

Theorem: Let $G$ be a group and $N \unlhd G$. Then, $G/N$ is nilpotent of class $c \in \mathbb{N}$ if, and only if, $c$ is the smallest natural number such that $\gamma_c(G) \subset N$

Note: $\gamma_0(G) = G$, $\gamma_1(G) = [G, G] = G'$ and so on

Proof

$\implies$) Suppose $G/N$ is nilpotent of class $c$. Then $c$ is the smallest natural number such that $\gamma_c(G/N) = 1$. But then $1 = \gamma_c(G)N/N$, which implies $\gamma_c(G)N \subset N$, meaning $\gamma_c(G) \subset N$. The minimality of $c$ comes from a similar argument with $\gamma_{c-1}(G/N)$

$\impliedby$) Now suppose $c$ is the smallest natural number such that $\gamma_c(G) \subset N$. This means that $\gamma_c(G)N \subset N$, which, in turn, implies $\gamma_c(G)N/N = 1$. Therefore, $\gamma_c(G/N) = 1$. Again, we can conclude that $c$ is the class of $G/N$ by using a similar argument with $c-1$. $\square$

Is the above correct?

While I don't really see any mistakes, this seems like a quite straightforward generalization of the famous "$G/N$ is abelian iff $G' \subset N$", and yet I have never seen this result before. This gave me some uncertainty as to its validity…

Thanks in advance!

properties of projection matrix

Posted: 17 Jun 2022 12:08 PM PDT

For any $n \times p$ matrix $B \in \mathbb{R}^{n \times p}$ such that $B^{\top} B$ is invertible, define the projection matrix $P_B$ as $$ P_B = B (B^{\top} B)^{-1} B^{\top} \in \mathbb{R}^{n \times n}. $$ Now consider three matrices as follows $$ B \in \mathbb{R}^{n \times p_1}, \qquad C \in \mathbb{R}^{n \times p_2}, \qquad D \in \mathbb{R}^{n \times p_3}. $$ What I want to do is find the relationship between $P_{[B, C, D]}$ with $[B, C, D] \in \mathbb{R}^{n \times (p_1 + p_2 + p_3)}$ and $P_{[B, D]}$ with $[B, D] \in \mathbb{R}^{n \times (p_1 + p_3)}$. I wonder whether there exists some simple expression for the difference $$ P_{[B, C, D]} - P_{[B, D]}? $$ Here the assumption we have is that the inverses appearing in $P_{[B, C, D]}$ and $P_{[B, D]}$ exist. Could anyone help me? Thanks in advance.

PS: I found a formula for the inverse of a 3x3 block matrix (see here) and tried several times. I wonder whether there is another approach that solves this problem more easily.
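A numerical experiment (my own numpy sketch, under the stated invertibility assumptions) suggests a clean answer: the difference is itself an orthogonal projection, onto the columns of $C$ after the $[B,D]$ component has been projected out, i.e. $P_{[B,C,D]} - P_{[B,D]} = P_M$ with $M=(I-P_{[B,D]})C$:

```python
# Numeric experiment (my own sketch): check that
#   P_[B,C,D] - P_[B,D]  ==  P_M  with  M = (I - P_[B,D]) C,
# assuming the relevant Gram matrices are invertible.

import numpy as np

def proj(B):
    return B @ np.linalg.inv(B.T @ B) @ B.T

rng = np.random.default_rng(1)
n, p1, p2, p3 = 10, 2, 3, 2
B, C, D = (rng.standard_normal((n, p)) for p in (p1, p2, p3))

P_all = proj(np.hstack([B, C, D]))
P_bd = proj(np.hstack([B, D]))
M = (np.eye(n) - P_bd) @ C          # residual of C after removing [B, D]
print(np.allclose(P_all - P_bd, proj(M)))  # True
```

This is the block Gram-Schmidt decomposition: $\operatorname{col}([B,D])$ and $\operatorname{col}(M)$ are orthogonal and together span $\operatorname{col}([B,C,D])$, so the projections add.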

How to prove the series $\frac{1}{1^2}+\frac{1}{2^2}+\ldots+\frac{1}{n^2}+\ldots=\frac{\pi^2}{6}$ using a spiral of right triangles?

Posted: 17 Jun 2022 12:06 PM PDT

I saw the formula given below in a YouTube video on the Mathologer channel, and then I tried to find a new method to prove it:

$$\sum_{n=1}^\infty \frac1{n^2} = \frac{\pi^2}6$$

I tried to prove it geometrically, like this Riemann hypothesis.

Our attempt:

(1) First I tried to convert it into inverse trigonometric form, but that did not help much.

(2) In my second attempt I rotated a length of $1/2$ from $1$, then rotated a length of $1/3$ from the remaining $1/2$, but that did not help either.

(3) In my third attempt I tried to use coordinate geometry, but that made things more complex.

My question: How can one prove that the sum of $1/n^2$, as $n$ tends to infinity, equals $\pi^2/6$ using a spiral of right triangles?

EDIT

NOTE: Since the length of the last line segment tends to $\sqrt{\pi^2/6}$ but is never exactly equal to it, it is probably not possible to solve this by pure geometry; we understand that some theory of limits must be needed to prove it. So we will also accept a solution that uses both concepts, i.e. geometry with a slight use of calculus.

Show that $\Bbb S^n$ contains an affine independent set with $n+2$ points

Posted: 17 Jun 2022 12:17 PM PDT

Show that $\Bbb S^n$ contains an affinely independent set with $n+2$ points. (Hint: for every $k\ge 0$, Euclidean space $\Bbb R^n$ contains $k$ points in general position.)

I couldn't find any intuitive explanation of points being in general position. Apparently, in $\Bbb R^2$ the points $\{a_1,a_2,a_3\}$ are in general position if they are not collinear? So is the idea of the proof that in, say, $\Bbb S^1$ you cannot pick three points lying on the same line, etc.?

How to read an equivalence of open sentences

Posted: 17 Jun 2022 12:14 PM PDT

It is well-known that $(x+1)(x-1)=0 \iff x=-1~\text{or}~x=1$. Does this mean that the open sentences $(x+1)(x-1)=0$ and $x=-1~\text{or}~x=1$ are equivalent because they have the same truth set $\{-1,1\}$?

Efficient computation of Cholesky decomposition during tridiagonal matrix inverse

Posted: 17 Jun 2022 12:02 PM PDT

I have a symmetric, block tridiagonal matrix $A$. I am interested in computing the Cholesky decomposition of $A^{-1}$ (that is, I want to compute $R$, where $A^{-1}=RR^T$). I know how to compute the blocks of the inverse efficiently using an iterative algorithm. However, is there an efficient algorithm for computing the Cholesky factors $R$ directly (rather than first computing the inverse, and then performing the Cholesky decomposition)?
