Sunday, October 31, 2021

Recent Questions - Mathematics Stack Exchange


How to calculate gcd?

Posted: 31 Oct 2021 07:56 PM PDT

Note: in this question, $(a,b)$ means $\gcd(a,b)$.

Given $(a,b)=2$ and, in addition, $5a+7b=2$.

I need to calculate: $(7a+14b, 14a+21b)$

I did the following:

$$(7a+14b, 14a+21b) = (7(a+2b), 7(2a+3b)) = 7(a+2b, 2a+3b) = (4-3a, 6-a)$$

but I'm stuck here...
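For a quick sanity check of what the answer should be, here is a small brute-force sketch (not part of the original post; it assumes $a,b$ range over integers):

    from math import gcd

    # Enumerate integer pairs with gcd(a, b) = 2 and 5a + 7b = 2,
    # then compute gcd(7a + 14b, 14a + 21b) for each pair.
    values = set()
    for a in range(-100, 101):
        if (2 - 5 * a) % 7 == 0:        # 5a + 7b = 2  =>  b = (2 - 5a) / 7
            b = (2 - 5 * a) // 7
            if gcd(a, b) == 2:
                values.add(gcd(7 * a + 14 * b, 14 * a + 21 * b))
    print(values)  # every sampled pair gives the same gcd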

What are the purpose and consequences of eigenvalues in graph theory?

Posted: 31 Oct 2021 07:55 PM PDT

I am trying to understand eigenvalues and their repercussions in graph theory. I have read that eigenvalues help describe certain parameters of a graph which provide information about its general structure. I know that we can take the adjacency matrix of a given graph and obtain its eigenvalues, but I don't understand their consequences and what they tell us about the graph. Any help would be appreciated.
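As a concrete starting point, here is a rough sketch (not from the original post; assumes NumPy) that computes the adjacency spectrum of the 4-cycle. Two classical facts it illustrates: for a $k$-regular graph the largest adjacency eigenvalue is $k$, and a graph is bipartite exactly when its spectrum is symmetric about $0$.

    import numpy as np

    # Adjacency matrix of the 4-cycle C4 (2-regular and bipartite)
    A = np.array([
        [0, 1, 0, 1],
        [1, 0, 1, 0],
        [0, 1, 0, 1],
        [1, 0, 1, 0],
    ])
    eigs = np.linalg.eigvalsh(A)  # symmetric matrix => real eigenvalues
    print(np.round(eigs, 6))      # [-2, 0, 0, 2]: symmetric about 0 (bipartite),
                                  # largest eigenvalue 2 = the common degree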

Is $\Omega$ the limit of all increasing sequences of sets $A_n$?

Posted: 31 Oct 2021 07:53 PM PDT

Let $\Omega$ be some set, and let $\mathcal{F}$ consist of all subsets of $\Omega$. Define $\mu(A)=0$ if $A$ is finite and $\mu(A)=\infty$ if $A$ is infinite. Is $\Omega$ the limit of an increasing sequence of sets $A_n$ with $\mu(A_n)=0$, even though $\mu(\Omega)=\infty$?

A solution to this question for countably infinite $\Omega$ is given here: Show that $\Omega$ is the limit of an increasing sequence of sets $A_n$.

My question

  1. Is this always true, or only when $\Omega$ is countably infinite?
  2. When $\Omega$ is countably infinite, is $\Omega$ the limit of just one increasing sequence of sets $A_n$ with $\mu(A_n)=0$ (but $\mu(\Omega)=\infty$), or of every such sequence?
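For reference, the construction in the linked answer for countably infinite $\Omega = \{x_1, x_2, x_3, \dots\}$ is presumably the standard one: take $$A_n = \{x_1, \dots, x_n\},$$ so that $A_n \uparrow \Omega$ and $\mu(A_n) = 0$ for every $n$ (each $A_n$ is finite), while $\mu(\Omega) = \infty$. This settles the existence of one such sequence; question 2 asks about all such sequences.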

Multiplication of functions

Posted: 31 Oct 2021 07:50 PM PDT

I've been multiplying functions for ages, but I never really stopped and considered why I can multiply functions and have it "just work." Here's an example:

Say $f(1) = 9, g(1) = 7$. Then $(f(1)-g(1))^2 = (9-7)^2=2^2=4$.

We could also do $(f-g)^2 = f^2-2fg+g^2$, and so $f^2(1)-2f(1)g(1)+g^2(1)=81-126+49=4$.

Was this ever explained in grade school??

Edit 1: Wait, actually it makes sense. I think the larger question is why $(x+y)^2 = x^2+2xy+y^2$.
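Here is a tiny illustration of the pointwise definitions at work (a sketch; the particular f and g are hypothetical choices with $f(1)=9$ and $g(1)=7$):

    # Pointwise operations on functions: (f - g)(x) := f(x) - g(x), etc.
    f = lambda x: x + 8
    g = lambda x: 7 * x

    sq_of_diff = lambda x: (f(x) - g(x)) ** 2
    expanded   = lambda x: f(x) ** 2 - 2 * f(x) * g(x) + g(x) ** 2
    print(sq_of_diff(1), expanded(1))  # 4 4, the same number computed two ways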

What is the volume of the region that is within a distance r outside of the surface of an n-dimensional hypersphere of radius R?

Posted: 31 Oct 2021 07:44 PM PDT

Suppose you have an n-dimensional hypersphere of radius R living within a d-dimensional space. Imagine the region in d-dimensional space consisting of all points that are a distance r outside the surface of that hypersphere. What is the volume of this region for any n, d, R and r?

So take the special case of d=3 dimensions to begin with. For n=1 we have two hemispheres on either end of a flat line (the 1-d sphere). Their combined volume is just $2\cdot\frac{1}{2}\cdot\frac{4}{3}\pi r^{3}$. For n=2 the region desired is the outer half of a donut. To find this volume I understand we can use the second centroid theorem of Pappus-Guldin, using the centroid of the semicircle of radius r, which is $\frac{4r}{3\pi}$. Then we find the volume of the desired region by taking the area of the semicircle of radius r and 'sweeping' it around the circumference of the circle of radius $R + \frac{4r}{3\pi}$. This gives the formula $2\pi\left(R+\frac{4r}{3\pi}\right)\cdot\frac{1}{2}\pi r^{2}$.

For n=3 you could obviously find the desired volume by taking the difference in volume of the sphere of radius R+r and the sphere of radius R, but I want a method that can generalise to a hypersphere of any dimension n in a space of any dimension d. Can the approach using the Pappus-Guldin theorem be generalised? Alternatively, I'm thinking a solution using calculus must be possible, but I don't know how to find it. So is there a neat formula for this?
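(Not part of the original post: a Monte Carlo sketch that numerically checks the Pappus-Guldin formula above for the $n=2$, $d=3$ case, taking the region to be the points within distance $r$ of the circle of radius $R$ that lie on its outside.)

    import numpy as np

    # Monte Carlo check of 2*pi*(R + 4r/(3*pi)) * (pi*r^2/2), the outer
    # half-torus around the circle of radius R in the xy-plane.
    rng = np.random.default_rng(0)
    R, r, N = 2.0, 0.5, 1_000_000

    xy = rng.uniform(-(R + r), R + r, size=(N, 2))
    z = rng.uniform(-r, r, N)
    rho = np.hypot(xy[:, 0], xy[:, 1])   # cylindrical radius
    dist = np.hypot(rho - R, z)          # distance to the circle
    inside = (dist <= r) & (rho >= R)    # within r of the circle, outside it

    box_vol = (2 * (R + r)) ** 2 * (2 * r)
    print(inside.mean() * box_vol)                                   # ~5.46
    print(2 * np.pi * (R + 4 * r / (3 * np.pi)) * np.pi * r**2 / 2)  # 5.4594...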

Thanks :)

Jordan canonical form of $T: \mathbb{Q}(\alpha) \rightarrow \mathbb{Q}(\alpha)$ where $\alpha \in \mathbb{C}$ and $T$ is multiplication by $\alpha$

Posted: 31 Oct 2021 07:39 PM PDT

Suppose $p \in \mathbb{Q}[x]$ is irreducible and $\alpha \in \mathbb{C}$ is a root of $p$. Then consider the linear transformation $T : \mathbb{Q}(\alpha) \rightarrow \mathbb{Q}(\alpha)$, $x \mapsto \alpha x$.

What is the Jordan canonical form of $T$, considering $\mathbb{Q}(\alpha)$ to be a $\mathbb{Q}$ vector space?

Suppose $\deg(p) = n$ and $p = x^n + a_{n-1}x^{n-1} + \cdots + a_1x + a_0$. Then $\mathbb{Q}(\alpha)$ has dimension $n$ with basis $\{1, \alpha, \alpha^2, \cdots, \alpha^{n-1}\}$. Moreover, the transformation $T$ obviously acts on the basis elements as $\alpha^i \mapsto \alpha^{i+1}$ for $i < n-1$, but $\alpha^{n-1} \mapsto \alpha^n = -a_{n-1}\alpha^{n-1} - \cdots - a_1\alpha - a_0$ in $\mathbb{Q}(\alpha)$.

Thus $T$ is represented by the companion matrix $C_p$.

Not sure what the next steps would be; I'm confusing myself a bit here. It is well known that the companion matrix $C_p$'s characteristic and minimal polynomials are $p$ itself. Moreover, we know that $\alpha$ is thus an eigenvalue, which means that $\bar{\alpha}$ is as well. Thus $(x-\alpha)(x-\bar{\alpha}) \mid p$. One cannot conclude at this point that since $p$ is irreducible, $(x-\alpha)(x-\bar{\alpha}) = p$, since it is possible that $(x- \alpha)(x-\bar{\alpha}) \in \mathbb{R}[x]\setminus \mathbb{Q}[x]$.

Even if $p = (x-\alpha)(x-\bar{\alpha})$ after all, I'm still a bit confused. Now that we've expanded our scope to $\mathbb{C}$, it's pretty clear that $\alpha$ is the only eigenvalue, so would the JCF simply be $\alpha I$? I'm getting a bit confused with the JCF since we traditionally start with an algebraically closed field and $\mathbb{Q}$ isn't one. I believe the intent is to proceed as if $C_p \in \mathbb{C}^{n\times n}$.

Would appreciate some help.
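One way to experiment (a sketch, using the hypothetical irreducible cubic $p = x^3 - 2$): in characteristic $0$ an irreducible polynomial is separable, so $C_p$ has $n$ distinct eigenvalues over $\mathbb{C}$, and its JCF over $\mathbb{C}$ is the diagonal matrix whose entries are all $n$ roots of $p$, rather than $\alpha I$.

    import sympy as sp

    x = sp.symbols('x')
    p = sp.Poly(x**3 - 2, x)                 # hypothetical irreducible cubic
    n = p.degree()
    C = sp.Matrix(n, n, lambda i, j: 1 if i == j + 1 else 0)
    for i, c in enumerate(reversed(p.all_coeffs()[1:])):
        C[i, n - 1] = -c                     # last column: -a_0, ..., -a_{n-1}
    print(C.charpoly(x).as_expr())           # x**3 - 2, as expected
    print(C.eigenvals())                     # 3 distinct roots, multiplicity 1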

The $E_k$ in the definition of a simple function on p. 61 of Royden's Real Analysis (4th edition)

Posted: 31 Oct 2021 07:50 PM PDT

I want to prove that the sum of 2 simple functions is a simple function and to do so I want to use the following facts:

$$\chi_{A_i}=\sum_{j}\chi_{A_i\cap B_j} \text{ and } \chi_{B_j}=\sum_{i}\chi_{A_i\cap B_j}.$$

But say I want to prove the first fact: upon fixing $i$, I need $A_i$ to be a subset of $E$ and $E = \cup_{j}B_j$.

Here is the definition of a simple function as stated in Royden:

If $\varphi$ is simple, has domain $E$ and takes the distinct values $c_1, \dots, c_n,$ then $$\varphi = \sum_{k=1}^{n} c_k \chi_{E_k} \text{ on } E, \text{ where } E_k = \{x \in E \mid \varphi(x) = c_k \}.$$

My question is:

Are we considering the $E_k$'s in the definition of a simple function to be a partition of $E$? Could someone clarify this for me, please?

T a diagonalizable linear operator on V $\implies$ By the Complex Spectral Theorem, T is normal

Posted: 31 Oct 2021 07:56 PM PDT

Determine whether the following statement is true.

Let V be a finite-dimensional $\mathbb{C}$-vector space with inner product and T a diagonalizable linear operator on V. Then there is a basis of eigenvectors of T for V. Applying the Gram-Schmidt orthogonalization process to this basis and then normalizing, an orthonormal basis of eigenvectors for V is obtained. By the Complex Spectral Theorem, T is normal.

Attempt:

T is a diagonalizable linear operator on V if and only if there is a basis of V consisting of n eigenvectors of T, in which case the diagonal entries of its matrix are the eigenvalues for those eigenvectors. Let $\beta$ be that basis.

Applying Gram-Schmidt to $\beta$ and then normalizing, ${\beta}'$ is an orthonormal basis of V.

Since T is diagonalizable, $[T]_{{\beta}'}$ is diagonalizable.

There exists an orthonormal basis ${\beta}'$ of V such that $[T]_{{\beta}'}$ is diagonalizable if and only if T is normal.

Therefore, the statement is true.

Am I wrong?

Prove $\lim_{x\to \infty}\frac{\sin x}{x^2} = 0$.

Posted: 31 Oct 2021 07:36 PM PDT

Prove, using the definition of a limit, that $\lim_{x\to \infty}\frac{\sin x}{x^2} = 0$.

Proof:

Let $\epsilon > 0$. Note that $\left|\frac{\sin x}{x^2}\right| \leq \frac 1 {x^2}$ for $x\ne 0$. Let $M= \frac 1{\sqrt{\epsilon}} > 0$. Then $x> M$ implies that $\left|\frac{\sin x}{x^2}\right| \leq \frac 1 {x^2} < \frac{1}{M^2} = \epsilon$.

Am I allowed to give an $M$ the way I did?

$U_{1} \subseteq U_{2}^{\perp}$ if and only if $P_{W}=P_{U_{1}}+P_{U_{2}}$

Posted: 31 Oct 2021 07:38 PM PDT

Let $V$ be a finite dimensional vector $\mathbb{C}$-space with inner product. Consider two subspaces $U_{1}$ and $U_{2}$ of $V$. Define $W=\left \{ u_1 +u_{2}:u_{1} \in U_{1}, u_{2} \in U_{2}\right \}$. Prove $U_{1} \subseteq U_{2}^{\perp}$ if and only if $P_{W}=P_{U_{1}}+P_{U_{2}}$.

$\bullet \Rightarrow$ Let $U_{1} \subseteq U_{2}^{\perp}$. Then $\left \langle u_{1},u_{2} \right \rangle=0$ for all $u_1 \in U_1$ and $u_2 \in U_2$, so $U_{1}$ and $U_{2}$ are orthogonal. This implies there exists an orthonormal basis of $U_{1}$ that can be extended to an orthonormal basis of $W$ and then of $V$. Let $x \in V$. Then $x=\sum_{i=1}^{k}c_{i}u_{i}+\sum_{i=k+1}^{m}c_{i}u_{i}+\sum_{i=m+1}^{n}c_{i}u_{i}$ and $P_{W}x=\sum_{i=1}^{k}c_{i}u_{i}+\sum_{i=k+1}^{m}c_{i}u_{i}=P_{U_{1}}x+P_{U_{2}}x$.

Therefore, $P_{W}=P_{U_{1}}+P_{U_{2}}$.

$\bullet \Leftarrow$ Let $P_{W}=P_{U_{1}}+P_{U_{2}}$ and $x \in U_{1}$. $P_{W}x=P_{U_{1}}x+P_{U_{2}}x\implies x=x+P_{U_{2}}x\implies P_{U_{2}}x=0\implies x\in U_{2}^{\perp}$.

Therefore, $U_{1} \subseteq U_{2}^{\perp}$.

$\bullet\therefore$ $U_{1} \subseteq U_{2}^{\perp}$ if and only if $P_{W}=P_{U_{1}}+P_{U_{2}}$.
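A quick numeric illustration of the forward direction (a sketch, real case for simplicity; the complex case would use conjugate transposes):

    import numpy as np

    rng = np.random.default_rng(1)

    def proj(B):
        # orthogonal projector onto the column space of B
        Q, _ = np.linalg.qr(B)
        return Q @ Q.T

    n = 6
    M, _ = np.linalg.qr(rng.normal(size=(n, n)))  # orthonormal columns
    U1, U2 = M[:, :2], M[:, 2:5]                  # U1 and U2 orthogonal
    PW = proj(np.hstack([U1, U2]))                # W = U1 + U2
    print(np.allclose(PW, proj(U1) + proj(U2)))   # True when U1 is orthogonal to U2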

Question on prop 0.18 by Hatcher

Posted: 31 Oct 2021 07:29 PM PDT

Here is Prop 0.18 from Hatcher: if $(X_1,A)$ is a CW pair and we have attaching maps $f,g:A\rightarrow X_0$ that are homotopic, then $X_0\sqcup_f X_1$ is homotopy equivalent to $X_0\sqcup_g X_1$ relative to $X_0$.

Hatcher says if $F:A\times I\rightarrow X_0$ is a homotopy from $f$ to $g$, consider the space $X_0\sqcup_F(X_1\times I)$; this contains both $X_0\sqcup_f X_1$ and $X_0\sqcup_g X_1$ as subspaces. In the online errata, Hatcher explains that this follows from the fact that if $q:X\rightarrow Y$ is any quotient map and $A\subset X$ is a closed saturated set, then $q|_A$ is a quotient map.

My question: $X_0\sqcup_f X_1$ is not saturated. I think instead Hatcher's observation gives that $X_0\sqcup_F(X_1\times\{0\}\cup A\times I)$ is a subspace. Can anyone please explain this to me? Thank you!

Is ℒ{t·γ(t-1)} = ℒ{(t-1)·γ(t-1) + γ(t-1)}? I'm confused whether these two are the same, or whether I can even write the second one like this.

Posted: 31 Oct 2021 07:21 PM PDT

The step function γ(t-1) is 0 for all values less than 1. Thus, my reasoning is that multiplying (t-1) by γ(t-1) and adding γ(t-1) should be the same as the original ℒ{t·γ(t-1)}. In addition, I was wondering if there is any property for transforms of the form ℒ{f(t)·γ(t-a)}. If so, how can we use it? I know that ℒ{f(t-a)·γ(t-a)} = e^(-as)F(s), but I do not think this will be helpful for transforms of the form ℒ{f(t)·γ(t-a)}. I just started learning Laplace transforms a day ago and my professor has not provided us with anything in this regard.
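A quick symbolic check of the proposed identity (a sketch, assuming SymPy's laplace_transform handles the shifted step function):

    import sympy as sp

    t, s = sp.symbols('t s', positive=True)
    step = sp.Heaviside  # the unit step, written as γ in the question

    L1 = sp.laplace_transform(t * step(t - 1), t, s, noconds=True)
    L2 = sp.laplace_transform((t - 1) * step(t - 1) + step(t - 1),
                              t, s, noconds=True)
    print(sp.simplify(L1 - L2))  # 0: both transforms agree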

Find polynomials with rational coefficients whose roots are $\sin^2\frac{\pi}{4m}, \sin^2\frac{3\pi}{4m}, \sin^2\frac{5\pi}{4m}, \ldots, \sin^2\frac{(2m-1)\pi}{4m}$

Posted: 31 Oct 2021 07:18 PM PDT

I am stuck on this problem: find polynomials with rational coefficients whose roots are $\sin^2\frac{\pi}{4m}, \sin^2\frac{3\pi}{4m}, \sin^2\frac{5\pi}{4m}, \ldots, \sin^2\frac{(2m-1)\pi}{4m}$.

I know that $$\cos 2mx = (1-\sin^2 x)^m - \binom{2m}{2}(1-\sin^2 x)^{m-1}\sin^2 x + \binom{2m}{4}(1-\sin^2 x)^{m-2}\sin^4 x - \cdots$$

and we can use that to get the roots; however, I'm having trouble getting there.
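A way to guess the answer for small m before proving it (a sketch; it expands the product over the desired roots and recovers rational coefficients numerically):

    import sympy as sp

    x = sp.symbols('x')
    m = 3
    roots = [sp.sin((2*k - 1) * sp.pi / (4*m))**2 for k in range(1, m + 1)]
    p = sp.expand(sp.prod(x - r for r in roots))
    coeffs = sp.Poly(p, x).all_coeffs()
    # recover exact rationals from high-precision numerics
    print([sp.nsimplify(c.evalf(30), rational=True) for c in coeffs])
    # m = 3 gives [1, -3/2, 9/16, -1/32]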

Finite group with sequence of index 2 subgroups, such that the smallest one has odd order

Posted: 31 Oct 2021 07:17 PM PDT

This is a homework question, so please only provide me with some small hints. For a finite group $G$, suppose that $G_k \subset \dots \subset G_0 = G$ is a chain in which each $G_i$ has index 2 in $G_{i-1}$. The claim is that if $G_k$ has odd order then it is normal in $G$.

By the index assumption each $G_{i}$ is normal in the group above it. Thus I would like to transitively conclude $G_k$ is also normal in $G$. For this I would need to show that $G_k$ is characteristic in $G_{k-1}$, and then I could proceed by induction. Does this seem like a good first step? Can you give me a hint as to how to proceed?

Lemma & Proof of Independent indicator random variables

Posted: 31 Oct 2021 07:16 PM PDT

What is a sufficient lemma and proof I can provide for the following question?

[The problem statement was an image, which is not included in this digest.]

How do I go about modifying forward Euler MATLAB code for a differential equation initial value problem?

Posted: 31 Oct 2021 07:21 PM PDT

Euler Code:

    function [x,y]=euler_forward(f,xinit,yinit,xfinal,n)
    % Euler approximation for ODE initial value problem
    % Euler forward method
    % Calculation of h from xinit, xfinal, and n
    h=(xfinal-xinit)/n;
    % Initialization of x and y as row vectors
    x=[xinit zeros(1,n)];
    y=[yinit zeros(1,n)];
    % Calculation of x and y
    for i=1:n
        x(i+1)=x(i)+h;
        y(i+1)=y(i)+h*f(x(i),y(i));
    end
    end

Supporting file, to be modified (note: I am confused about what I need to add):

    A=50;
    f=@(x,y) A*y-2;
    % Calculate exact solution
    g=@(x) exp(A*x); % this should be a function of x and A (since A is a parameter)
    xe=0:0.05:1;
    ye=g(xe);
    % Call functions
    [x1,y1]=euler_forward(f,0,1+2/50,0.1,2);
    % Plot
    plot(xe,ye,'k-',x1,y1,'k-.')
    xlabel('x')
    ylabel('y')
    legend('Analytical','Forward')
    Absolute_Error = abs(ye(end)-y1(end))

Integrating wrt a measure that is a collection of probability measures

Posted: 31 Oct 2021 07:10 PM PDT

Let $\mu$ be a probability measure defined on $\mathcal{A} \times \mathbb{R}$, where $\mathcal{A}$ is some measurable set and $\mathbb{R}$ is the set of real numbers. We have that $\mu(a, \cdot)$ is a probability measure, so that $\int_{\mathbb{R}} \mathrm{d}\mu(a, x) = 1$ for all $a \in \mathcal{A}$.

Now, let $f$ be a measurable function on which I am currently not imposing any constraints. I would like to have some sort of result where $$ \int_{(a,x) \in \mathcal{A} \times \mathbb{R}} f(a) \mathrm{d} \mu(a,x) = \int_{a \in \mathcal{A}} f(a) \mathrm{d} a. $$ I was wondering what kind of assumptions we need to conclude something like this equality. And under what conditions can we extend it to $f$ being a finite measure, so that the same expression equals something like $\int_{a \in \mathcal{A}}\mathrm{d} f(a)$?

Show that a certain set of $n \times n$ matrices is a subspace of vector space $\mathbb{R}^{n\times n}$

Posted: 31 Oct 2021 07:20 PM PDT

How do I prove (or disprove) that the set

$$\left\{X \in \mathbb{R}^{n \times n} \mid A X+X B+X^{T} C+D X^{T}=0\right\}$$

is a subspace of $\mathbb{R}^{n \times n}\,$?

I only know that the set contains the zero matrix, but I don't know how to proceed further.

Any help appreciated.
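A hint in the direction the problem presumably intends: the defining condition is linear in $X$. Writing $$L(X) = AX + XB + X^{T}C + DX^{T},$$ linearity of matrix multiplication and of transposition gives $L(X+Y) = L(X) + L(Y)$ and $L(\lambda X) = \lambda L(X)$, so the set is exactly $\ker L$, and the kernel of a linear map is a subspace.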

How to determine regions of positive and negative concavity of $h(g)= \frac{(g-a)(g-b)}{(g+a)(g+b)}$?

Posted: 31 Oct 2021 07:24 PM PDT

We may assume that $a<0$, $b<0$, and $|b|>|a|$.

How to check where the concavity of the function $h(g)= \frac{(g-a)(g-b)}{(g+a)(g+b)}$ is positive or negative?

$h''(g)=\frac{4 \, {\left(a^{3} b + 2 \, a^{2} b^{2} + a b^{3} - {\left(a + b\right)} g^{3} + 3 \, {\left(a^{2} b + a b^{2}\right)} g\right)}}{a^{3} b^{3} + 3 \, {\left(a + b\right)} g^{5} + g^{6} + 3 \, {\left(a^{2} + 3 \, a b + b^{2}\right)} g^{4} + {\left(a^{3} + 9 \, a^{2} b + 9 \, a b^{2} + b^{3}\right)} g^{3} + 3 \, {\left(a^{3} b + 3 \, a^{2} b^{2} + a b^{3}\right)} g^{2} + 3 \, {\left(a^{3} b^{2} + a^{2} b^{3}\right)} g}$

The second derivative seems unhelpful.

There are asymptotes at $g=-a$ and $g=-b$.
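One possible way forward: the numerator displayed above factors as $-4(a+b)\left(g^{3} - 3abg - ab(a+b)\right)$, and the denominator is $\left((g+a)(g+b)\right)^{3}$, so the sign of $h''$ is governed by a cubic in $g$ together with the poles at $g=-a$ and $g=-b$. A SymPy sketch (not from the original post) to verify and explore:

    import sympy as sp

    g, a, b = sp.symbols('g a b', real=True)
    h = (g - a) * (g - b) / ((g + a) * (g + b))
    num, den = sp.fraction(sp.together(sp.diff(h, g, 2)))
    print(sp.factor(num))  # sign changes come from a cubic in g
    print(sp.factor(den))  # poles at g = -a and g = -b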

Question about Real and Complex primes (Algebraic number theory)

Posted: 31 Oct 2021 07:07 PM PDT

I posted the same question a while ago, but there was no answer.

I'm reading Jürgen Neukirch, Algebraic Number Theory, p. 183.

[Two images of the relevant passage, including the underlined statement, are not included in this digest.]

Now I can't understand the underlined statement. Why does whether $\mathfrak{p}$ is a real or complex embedding depend on whether $K_{\mathfrak{p}}$ is isomorphic to $\mathbb{R}$ or to $\mathbb{C}$? (By Ostrowski's theorem, $K_{\mathfrak{p}}$ is isomorphic to $\mathbb{R}$ or $\mathbb{C}$.)

A text message plan costs $2$ per month plus $0.12$ per text. Find the monthly cost for $x$ text messages.

Posted: 31 Oct 2021 07:17 PM PDT

A text message plan costs $2$ per month, plus $0.12$ per text. Find the monthly cost for $x$ text messages.
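Presumably the intended model is a fixed monthly charge plus a per-text charge, i.e. a linear cost function: $$C(x) = 2 + 0.12\,x \text{ dollars per month}.$$ For example, $x = 50$ texts would cost $2 + 0.12 \cdot 50 = 8$ dollars.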

Prove that the number of prime factors of an integer $n$ greater than 1 is at most $\log_2(n)$.

Posted: 31 Oct 2021 07:47 PM PDT

Prove that the number of prime factors of an integer $n$ greater than 1 is at most $\log_2(n)$. For example, $28 = 2 \times 2 \times 7$ has 3 prime factors, and $3 < \log_2 28 \approx 4.80735$.
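A quick empirical check of the claimed bound (a sketch; it counts prime factors with multiplicity, as in the $28 = 2 \times 2 \times 7$ example, and powers of $2$ attain the bound exactly):

    import math
    from sympy import factorint

    def omega(n):
        # number of prime factors of n, counted with multiplicity
        return sum(factorint(n).values())

    for n in (28, 97, 360, 1024):
        print(n, omega(n), round(math.log2(n), 5))  # omega(n) <= log2(n)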

Signed Borel Measures and Functions of Bounded Variation

Posted: 31 Oct 2021 07:51 PM PDT

Let $\nu$ be a finite signed Borel measure on the closed interval $[a,b]$. We can define a function $F_\nu : [a,b] \to \mathbb{R}$ by $$F_\nu(x) = \nu([a,x]).$$ It can be shown that $F_\nu$ has bounded variation and is right-continuous. An exercise in my class notes asks to prove that $|\nu|([a,b]) = V(F_\nu, [a,b])$, where the right-hand side denotes the total variation of $F_\nu$. Immediately after this, the notes state the following theorem:

"The map $\nu \mapsto F_\nu$ is a bijection from the set of finite signed Borel measures on $[a,b]$ to the set of functions of bounded variation that take value $0$ at $a$ and that are right-continuous."

My problem is that neither of these statements are true. Both break when we consider the measure $\delta$ given by $$\delta(A) = \begin{cases} 1, &\quad a \in A \\ 0, &\quad a \notin A \end{cases}.$$

In this case, $F_\delta$ is identically $1$, and so $V(F_\delta,[a,b]) = 0$ while $|\delta|([a,b]) = \delta([a,b]) = 1$. Moreover, $F_\delta$ does not take the value $0$ at $a$, so the second statement is false too.

How do we fix this? Can we just limit ourselves to the measures $\nu$ for which $\nu(\{a\}) = 0$? Or is there a change we can make so that the results still apply to all signed measures?

$\beta \in \mathbb{Q}(\alpha)$ such that $\mathbb{Q}(\alpha) = \mathbb{Q}(\beta)$ and $\beta^3 \in \mathbb{Q}$ does not exist.

Posted: 31 Oct 2021 07:09 PM PDT

I'm stuck on proving the following.

Let $f \in \mathbb{Q}[X]$ be an irreducible cubic polynomial with three real roots, and let $f(\alpha)=0$. Then there does not exist $\beta \in \mathbb{Q}(\alpha)$ such that $\mathbb{Q}(\alpha) = \mathbb{Q}(\beta)$ and $\beta^3 \in \mathbb{Q}$.

I'll write down what I noticed.

  • $1, \alpha, \alpha^2$ is a basis of $\mathbb{Q}(\alpha)$
  • If $g$ is the minimal polynomial of $\beta$ over $\mathbb{Q}$, then $\mathrm{deg}(g) = 3$.
  • If $x, y, z \in \mathbb{R}$ are the roots of $f$ ($\alpha$ is one of them), then $xyz,\ x+y+z,\ xy+xz+yz \in \mathbb{Q}$

I would appreciate a hint or a solution.

Thank you!

How do I prove that this linear transformation is bijective?

Posted: 31 Oct 2021 07:48 PM PDT

Let $V$ be a finite-dimensional inner product space (either real or complex) and $V^*=\mathrm{Hom}(V,k)$ be the dual space. Let $v\in V$. Define a mapping $L_v$ from $V$ to the ground field $k$ by $L_v(u)=\langle u,v\rangle$. Note that $L_v$ is linear.

Define $$\begin{align}L:V&\to V^*\\v&\mapsto L_v\end{align}$$

Now, $L$ is linear if $V$ is a real inner product space and anti-linear if $V$ is a complex inner product space, i.e., $L(av+bw)=aL(v)+bL(w)$ in the real case, and $L(av+bw)=\bar aL(v)+\bar bL(w)$ in the complex case.

I would like to prove that $L$ is bijective.

Suppose $L(v)=L(w)$. Then, for all $u\in V$, $$\begin{align}\langle u,v\rangle&=\langle u,w\rangle\\\langle u,v\rangle-\langle u,w\rangle&=0\\\langle u,v-w\rangle&=0\end{align}$$ The only vector that gives zero inner product with all vectors is the zero vector. Therefore, $v=w$, and $L$ is injective. It seems to me that surjectivity holds by construction. However, in the problem, there is a caution to "take care to account for the not-quite linearity of $L$ in the complex case". Am I missing something?

Heun's method for non-homogeneous LTI

Posted: 31 Oct 2021 07:26 PM PDT

Heun's method for a homogeneous system is simple enough:

$$\bar{x}_{k+1} = x_k + hf(x_k)$$ $$x_{k+1} = x_k + \frac{h}{2}\left( f(x_k) + f(\bar{x}_{k+1}) \right)$$

I have a non-homogeneous LTI system:

$$\dot{x} = Ax + Bu$$

where $A=\begin{bmatrix} 0 & 1 \\ -3730.2 & -26.15 \end{bmatrix}$, $B = \begin{bmatrix} 0 \\ 26.04 \end{bmatrix}$, $u = 0.5\sin{3t}+0.4$

With Forward Euler's method, I have the following discretized state space:

$$A_d = I + hA$$ $$B_d = hB$$

With Heun's method, I have the following discretized state space:

$$A_d = I + hA + \frac{h^2}{2}A^2$$ $$B_d = hB + \frac{h^2}{2}AB$$

It's a simple enough system that I can get an exact analytical solution. When I compare the three, I notice that Heun's method always does worse than forward Euler after the initial transient dies out. For reference, $h=0.005$ and the initial conditions are all zero.

Is there a mistake with my derivation of Heun's for this LTI?
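One thing worth checking (a suggestion, not a confirmed diagnosis): the derivation of $B_d = hB + \frac{h^2}{2}AB$ holds the input fixed at $u_k$ in both stages, whereas Heun's method applied directly to $\dot{x} = Ax + Bu(t)$ evaluates $u$ at $t_{k+1}$ in the corrector, $x_{k+1} = x_k + \frac{h}{2}\left(Ax_k + Bu_k + A\bar{x}_{k+1} + Bu_{k+1}\right)$. A sketch of the direct form for comparison:

    import numpy as np

    A = np.array([[0.0, 1.0], [-3730.2, -26.15]])
    B = np.array([[0.0], [26.04]])
    u = lambda t: 0.5 * np.sin(3 * t) + 0.4
    f = lambda x, t: A @ x + B * u(t)

    h, N = 0.005, 1000
    x = np.zeros((2, 1))
    for k in range(N):
        t = k * h
        xbar = x + h * f(x, t)                        # predictor (Euler)
        x = x + 0.5 * h * (f(x, t) + f(xbar, t + h))  # corrector: u at t + h
    print(x.ravel())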

[A plot comparing the analytical solution with the forward Euler and Heun discretizations is not included in this digest.]

Proof of the equivalence of validity of formula and its closure in a structure (Shoenfield's book)

Posted: 31 Oct 2021 07:50 PM PDT

Shoenfield's Mathematical Logic contains the following theorem and its corollary:

Closure Theorem. If ${\bf A}'$ is the closure of ${\bf A}$, then $\vdash {\bf A}'$ iff $\vdash {\bf A}$.

Corollary. If ${\bf A}'$ is the closure of ${\bf A}$, then ${\bf A}$ is valid in a structure ${\cal A}$ iff ${\bf A}'$ is valid in ${\cal A}$.

The proof of the corollary is presented as follows: Suppose that ${\bf A}$ is valid in ${\cal A}$. If $T$ has ${\bf A}$ as its only nonlogical axiom, then ${\cal A}$ is a model of $T$. By the closure theorem, $\vdash_T {\bf A}'$; so ${\bf A}'$ is valid in ${\cal A}$ by the validity theorem. The converse is proved similarly. $\square$

I can see intuitively that this is true, since the definition of validity in a structure is essentially asserting that the universally quantified version of ${\bf A}$ is valid (i.e. every instance of ${\bf A}$ is valid). Now, I don't completely understand why the case when ${\bf A}$ is the only nonlogical axiom of $T$ is sufficient for the proof. Note that at this point in the book, the Completeness Theorem is not yet proved and thus we can only infer that if a formula is a theorem, then it is valid (called the Validity Theorem in the book), but not the converse. Say that you had a theory $T$ with multiple nonlogical axioms or where ${\bf A}$ is not a theorem: how does the proof cover this case? Is it because the structure is applied to the language, rather than the theory itself?

How to prove the sum of squared degrees $d^2_{1}+d^2_{2}+\cdots+d^2_{95}\le 300000$ in a graph

Posted: 31 Oct 2021 07:21 PM PDT

Let $G$ be a simple graph with $95$ vertices,$2021$ edges,and vertex degrees $d_{1},d_{2},\cdots,d_{95}$,show that $$d^2_{1}+d^2_{2}+\cdots+d^2_{95}\le 300000$$

My attempt: using Euler's theorem (the handshaking lemma), $$d_{1}+d_{2}+\cdots+d_{95}=2\cdot 2021=4042,$$ and note $d_{i}\in \mathbb{N}^{+}$. By the nature of the inequality it seems an adjustment (smoothing) argument should work, but then how do I prove $$d^2_{1}+d^2_{2}+\cdots+d^2_{95}\le 300000?$$

Proving the coefficients $a_n = O(1/n)$ and $b_n = O(1/n)$ in a Fourier series

Posted: 31 Oct 2021 07:20 PM PDT

This question was asked in my real analysis quiz and I was unable to solve it.

So, I am asking for help here.

Question: If $f(x) \sim a_0 /2 + \sum_{n=1}^{\infty} \left(a_n \cos(nx) + b_n \sin (nx)\right)$ and if $f$ is of bounded variation on $[0,2\pi]$, show that $a_n = O(1/|n|)$ and $b_n = O(1/|n|)$.

Using the condition of bounded variation, I wrote $f = g-h$ where $g$ and $h$ are increasing on $[0,2 \pi]$. But I am not able to use any result about Fourier series to proceed.

Do you mind giving me some hints on how to prove it?

Thanks for your time.

When we have the power set $2^S$, does the 2 actually mean anything?

Posted: 31 Oct 2021 07:44 PM PDT

I have seen that most math books refer to the power set as $2^S$, usually in a cursory manner and without much detail. I was wondering if the 2 means anything, because I normally just interpret it as a cardinality thing: if $S$ has two elements, then the power set has $2^2 = 4$ elements. Thanks!
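For what it's worth, the usual explanation: $2$ stands for the two-element set $\{0,1\}$, and $2^S$ is the set of all functions $S \to \{0,1\}$. Each such function is the indicator of exactly one subset of $S$, which gives both the bijection with subsets and the cardinality $|2^S| = 2^{|S|}$. A tiny enumeration sketch (not from the original post):

    from itertools import product

    S = ['a', 'b']
    # each function S -> {0, 1} is the indicator of exactly one subset of S
    for bits in product((0, 1), repeat=len(S)):
        f = dict(zip(S, bits))
        print(f, '->', {x for x in S if f[x]})  # 2**len(S) = 4 lines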
