Tuesday, May 31, 2022

Recent Questions - Mathematics Stack Exchange

Integral of a trigonometric function with an argument of linear function

Posted: 31 May 2022 03:48 AM PDT

I would like to know how to do the following integral:

$$\int\dfrac{\sin^2(y)}{\sin(x-y)}dy.$$

Thanks for any help.

compact but not b-compact set?

Posted: 31 May 2022 03:48 AM PDT

$B$ is a b-open set if $B\subset \operatorname{Cl}(\operatorname{Int}B) \cup \operatorname{Int}(\operatorname{Cl}B)$.

I am looking for a set in some topology that is compact but not b-compact. Open sets are b-open, so b-compactness (every cover by b-open sets has a finite subcover) implies compactness.

I guess in metric spaces both definitions coincide, so I need to look either at some space that is not even $T_1$, or... to be honest, nothing comes to mind.

I might not be correct about what implies what; I'm just not a topology person, so please bear with me if this is trivial.

How do I prove that $f : \mathbb{R}\setminus [2,4] \to \mathbb{R},\ x \mapsto 1/(x-3)$ is Lipschitz continuous?

Posted: 31 May 2022 03:45 AM PDT

It sounds simple enough, but how do I work with the interval? Do I have to apply a case distinction for points above $[2,4]$ and points below $[2,4]$?
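
For reference, one possible estimate, sketched: for $x,y \notin [2,4]$ one has $|x-3|\ge 1$ and $|y-3|\ge 1$, so

$$|f(x)-f(y)|=\left|\frac{1}{x-3}-\frac{1}{y-3}\right|=\frac{|x-y|}{|x-3|\,|y-3|}\le|x-y|,$$

which gives the Lipschitz constant $1$ with no case distinction needed.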

Finding eigenvectors and eigenvalues of a matrix $\begin{pmatrix}0 & D^{(2)} \\ D^{(N-2)} & 0 \end{pmatrix}$

Posted: 31 May 2022 03:44 AM PDT

Recently encountered the following matrix:

$A = \begin{pmatrix} 0^{(2 \times (N-2))} & D^{(2 \times 2)} \\ D^{((N-2) \times (N-2))} & 0^{((N-2) \times 2)} \end{pmatrix}$,

where $0^{(k \times l)}$ is the $k \times l$ matrix of all zeros and $D^{(k \times l)}$ is diagonal. So $A$ consists of only two non-trivial diagonal blocks; however, they are not on the diagonal.

Is there any simple way to diagonalize that?

What I also know is that $A^{m}$ is diagonal, where $m=N$ if $N$ is odd and $m=N/2$ if $N$ is even. I believe that there is an analytical formula for this, as I ran some symbolic tests.
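
One way to see the structure, sketched: $A$ maps the $j$-th standard basis vector to a multiple of the basis vector with index $j+2 \bmod N$, i.e. it is a weighted cyclic shift by $2$. The permutation $j \mapsto j+2 \bmod N$ has order $N$ for odd $N$ and $N/2$ for even $N$, which matches the stated power $m$. A quick numerical experiment (the diagonal entries below are chosen arbitrarily for illustration):

```python
import numpy as np

N = 7
A = np.zeros((N, N))
A[:2, N-2:] = np.diag([2.0, 3.0])                # D^(2x2), top-right block
A[2:, :N-2] = np.diag(np.arange(1.0, N - 1.0))   # D^((N-2)x(N-2)), bottom-left block

m = N if N % 2 else N // 2                       # the stated power
Am = np.linalg.matrix_power(A, m)
print(np.allclose(Am, np.diag(np.diag(Am))))     # True: A**m is diagonal

w, V = np.linalg.eig(A)                          # numerical eigenpairs
print(np.allclose(A @ V, V * w))                 # True: A V = V diag(w)
```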

Flux of f through surface $S$.

Posted: 31 May 2022 03:39 AM PDT

At class, I have been proposed the following exercise:

Consider the vector field $\mathbf f(x,y,z) = (y-z,\, z-x,\, x-y)$ and $b > a > 0$. Calculate the flux of $\mathbf f$ through the surface $S = \{x^2+y^2+z^2 = 2ax,\ x^2+y^2 \leq 2bx,\ z \geq 0\}$, oriented upward, using Stokes' Theorem appropriately.

My first idea was to use Gauss' Theorem. However, I am specifically asked to use Stokes' Theorem to calculate the flux.

My question is: Which application of Stokes' Theorem should I use then?

Perhaps it is not immediate.

Any help would be greatly appreciated.

A question about evaluating surface integrals

Posted: 31 May 2022 03:39 AM PDT

Vector calculus is relatively new to me, so I have a little trouble understanding double integrals intuitively. The question is essentially this:

Calculate the surface integral of $\vec v=x^2 \hat j $ over the triangle formed by the points $(1,0,0),(0,2,0)$ and $(0,0,1)$.

The plane is $2x+y+2z=2$, and I am aware of the algorithm to solve this problem (projecting the infinitesimal area element onto the $xy$ plane, taking its dot product with the given vector field, etc.).
The double integral that we are supposed to end up with is $\int_0^1 \int_0^{2-2x}\frac{x^2}{2}\,dy\,dx$.
I am having trouble understanding the limits that have been substituted. I understand that a projection of the area element onto the $xy$ plane means that $z=0$, so the limits will be a result of that.
But I don't see why the same procedure can't be done on a plane parallel to the $xy$ plane, for instance $z=3$. I do not think anything in the beginning would change; even the projected area of the infinitesimal element would remain the same, since both $z=0$ and $z=3$ have $\hat k$ as their unit normal. But how can one explain the limits in this case?
Hope I've been able to explain my problem properly! Thanks in advance.
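
For reference, a sketch of where those limits come from, using the projection method described above: the upward unit normal of the plane $2x+y+2z=2$ is $\hat n = \frac13(2,1,2)$, and projecting the area element onto the $xy$ plane gives $dS = \frac{dx\,dy}{|\hat n\cdot\hat k|} = \frac32\,dx\,dy$. Hence

$$\iint_S \vec v\cdot\hat n\,dS = \iint_T x^2\,\hat\jmath\cdot\tfrac13(2,1,2)\,\cdot\frac32\,dx\,dy = \int_0^1\!\int_0^{2-2x} \frac{x^2}{2}\,dy\,dx,$$

where $T$ is the shadow of the triangle in the $xy$ plane, bounded by $x=0$, $y=0$ and $2x+y=2$. The limits describe that shadow region, and it is the same region whether one projects onto $z=0$ or onto $z=3$.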

Details of the Realization of a stochastic process

Posted: 31 May 2022 03:39 AM PDT

A well-known example of a strict-sense stationary random process is along the lines of $X_t = \sin(2\pi f t + \theta)$, where $\theta$ is some random variable, usually $\theta\sim \mathrm{Uniform}(0,2\pi)$. I am slightly confused about what a realization of this process looks like. So we have

$X_t(\omega) = \sin(2\pi f t + \theta(\omega))$, and it all comes down to what exactly such an $\omega$ looks like. In this example I assume $\omega$ is a scalar and is "drawn" before the start of the random process, such that $\theta(\omega)$ is constant with respect to $t$. The randomness then stems from not knowing beforehand which $\omega$ will be drawn. However, after the random process has "started", $\theta(\omega)$ is just a constant phase. Do I understand this correctly? An alternative would be to have $\omega = \omega_t$, i.e. a new $\omega$ for every $t$.
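
For reference, a small simulation sketch of the first interpretation ($f=1$ and the seed are chosen arbitrarily): $\theta$ is drawn once per realization and then held fixed along $t$.

```python
import numpy as np

rng = np.random.default_rng(0)
f = 1.0
t = np.linspace(0.0, 2.0, 500)

for _ in range(3):                         # three independent realizations
    theta = rng.uniform(0.0, 2.0 * np.pi)  # one draw of omega gives theta(omega)
    x = np.sin(2.0 * np.pi * f * t + theta)
    print(theta, x[:3])                    # the same theta is used for every t
```

Drawing a fresh $\omega_t$ for every $t$ would instead produce an i.i.d. sequence of values, not a sinusoidal path.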

Check if graph is multigraph given degree sequence

Posted: 31 May 2022 03:44 AM PDT

I am given some degree sequences of graphs, and my question is: what is the method to determine which of them are sequences corresponding to multigraphs? (A criterion is sketched in the note after the list.)

  1. 1,2,2,3
  2. 0,1,2,3
  3. 2,2,2,2
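
A criterion worth checking, sketched (for loopless multigraphs): a sequence $d_1\ge d_2\ge\dots\ge d_n$ of nonnegative integers is the degree sequence of a multigraph if and only if

$$\sum_{i=1}^n d_i \ \text{is even} \qquad\text{and}\qquad d_1 \le \sum_{i=2}^n d_i.$$

For example, $3,2,1,0$ satisfies both conditions ($3\le 2+1+0$), so it is realizable as a multigraph, even though no simple graph has this degree sequence (a vertex of degree $3$ would have to be adjacent to the isolated vertex).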

What is this curve's name and what are its characteristics?

Posted: 31 May 2022 03:37 AM PDT

I am looking for the name and the characteristics of the curve below: $$a(\cos x \cos y)^2 + b(\cos x \cos y) + c = 0.$$

It is close to the Lamé curve, but the equations are different. Please advise.

Volterra nature of kernel and resolvent means we need to prove continuity on compact intervals

Posted: 31 May 2022 03:33 AM PDT

Let $I = [a,b)$ be an interval with $-\infty < a < b \leq \infty$. We consider Volterra kernels $K$ on $I$, $K: I^{2} \to \mathbb R$; in particular, this means that $$ K(t,s)=0$$ for $(t,s)\in I^2$ with $s > t$.

In order to prove that the resolvent $R$ is continuous on the entire interval $I$, I was told that it suffices to prove that the continuous Volterra kernel $K$ has a continuous resolvent on every compact subinterval of $I$. Why is this true in the particular case of Volterra kernels? (An observation is sketched after the definition below.)

NOTE: $R: I^{2}\to \mathbb R$ is the resolvent of $K$ on $I$ if we have $$ K - K\star R = R = K - R \star K,\; \text{ on }I^{2}$$

where $$ (K\star R)(t,s):=\int _{I}K(t,u)R(u,s)du.$$
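
For reference, one observation, sketched (it uses the standard fact that the resolvent of a Volterra kernel is itself a Volterra kernel): the star product really only integrates over $[s,t]$,

$$ (K\star R)(t,s)=\int_{I}K(t,u)R(u,s)\,du=\int_{s}^{t}K(t,u)R(u,s)\,du, $$

since $K(t,u)=0$ for $u>t$ and $R(u,s)=0$ for $u<s$. Hence the values of $R$ on a compact square $[a,c]^{2}$ with $c<b$ are determined by the values of $K$ on $[a,c]^{2}$ alone, and every point of $I^{2}$ lies in such a square; continuity on every compact square therefore gives continuity on all of $I^{2}$.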

Finding sum of coefficients of permutations of function compositions

Posted: 31 May 2022 03:24 AM PDT

I have a pair of functions: $$ \begin{align} g(x) &= 3 x \\ h(x) &= -4 x ^ 3 \end{align} $$ I want to compose $g$ and $h$, applying them $N_g$ and $N_h$ times, respectively. A few examples for $N_g = 3$ and $N_h = 2$:

$$ \begin{align} F_0(x) &= g(g(g(h(h(x))))) \\ F_1(x) &= g(g(h(g(h(x))))) \\ F_2(x) &= g(g(h(h(g(x))))) \\ &\ \ \vdots \end{align} $$

In this way, I'm permuting the order of function applications. The end result I'm after is the value of:

$$ S(x) = \sum_{i=1}^{{N_g + N_h \choose N_g}} F_{i-1}(x) $$

That is, the sum over all permutations of those function applications. The result will look like $S(x) = c\, x^{3^{N_h}}$; I just don't know how to efficiently find $c$.

What have I tried? Well, this is where I'm stuck. I see how, if $h(x)$ were linear, I could solve this: it'd just be ${N_g + N_h \choose N_g} (-4)^{N_h}3^{N_g}$. But that cubic term is really giving me trouble.
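
A brute-force sketch for experimenting with small cases (assuming sympy; the helper name `S` below is not from the original post):

```python
from itertools import permutations
import sympy as sp

x = sp.symbols('x')
g = lambda e: 3 * e          # g(x) = 3x
h = lambda e: -4 * e**3      # h(x) = -4x^3

def S(Ng, Nh):
    """Sum F_i(x) over all distinct orderings of Ng g's and Nh h's."""
    total = sp.Integer(0)
    for order in set(permutations('g' * Ng + 'h' * Nh)):
        e = x
        for name in reversed(order):     # innermost application first
            e = g(e) if name == 'g' else h(e)
        total += sp.expand(e)
    return sp.expand(total)

print(S(3, 2))   # a single monomial c * x**9, exposing c for Ng=3, Nh=2
```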


For context, this problem stems from an attempt to calculate $\sin(\theta)$ for large floating-point exponentials (I'm a programmer, not a mathematician). I figure that I can use the identity:

$$ \sin(3\theta) = 3 \sin(\theta) - 4 \sin ^ 3 (\theta) $$

I can convert the expression for $\theta$ to a power of $3$, compute a specific base case value, and apply the recurrence relation:

$$ R_{n+1} = 3 R_n - 4 R_n ^ 3 $$

This works, up to a point. The problem is that I still run into hardware-related rounding errors. I'm seeking to reorder the computation to minimize these errors. It doesn't seem like this is a recurrence relation with a closed-form solution, so I've settled for expanding the relation and instead finding a closed-form expression for the coefficients of each term (this question).

Even if this approach is doomed / won't work due to other accuracy problems or performance issues, I'd still like to learn how to solve a problem like this.
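
For reference, a minimal sketch of the tripling recurrence described above (the numbers are arbitrary):

```python
import math

theta = 12345.678            # an arbitrary large argument
k = 30                       # number of triplings
r = math.sin(theta / 3**k)   # base case at a tiny angle
for _ in range(k):
    r = 3*r - 4*r**3         # sin(3a) = 3 sin(a) - 4 sin(a)**3
print(r, math.sin(theta))    # the two drift apart as rounding errors accumulate
```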

rank 1 partial isometry in a matrix algebra

Posted: 31 May 2022 03:18 AM PDT

Is there any general form of a rank 1 partial isometry in a matrix algebra $M_n(\mathbb{C})$? Are they similar to matrix units?

Meaningful upper bound on $\sum_{i=1}^n v_i^T \Big(\sum_{i=1}^n v_i v_i^T\Big)^{-1} v_i$

Posted: 31 May 2022 03:13 AM PDT

Let $v_1, \dots, v_n \in \mathbb{R}^d$ with $n \ge d$. Assume that the matrix $$ A = \sum_{i=1}^n v_i v_i^T $$ is invertible.

Is it possible to simplify the expression $$\sum_{i=1}^n v_i^T \Big(\sum_{i=1}^n v_i v_i^T\Big)^{-1} v_i?$$

Or obtain a simple upper bound in terms of $v_i$'s.

For the case $d=1$ it is straightforward to see that the expression simplifies and is always equal to $1$. I wonder what tricks there are for understanding the behavior of this function.
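
A remark that settles the simplification, sketched: the expression is a trace,

$$\sum_{i=1}^n v_i^T A^{-1} v_i = \sum_{i=1}^n \operatorname{tr}\left(A^{-1} v_i v_i^T\right) = \operatorname{tr}\left(A^{-1}\sum_{i=1}^n v_i v_i^T\right) = \operatorname{tr}(I_d) = d,$$

so whenever $A$ is invertible the value is exactly $d$, matching the $d=1$ case.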

Finding Jordan Canonical Form and its transformation matrix

Posted: 31 May 2022 03:36 AM PDT

I'm a student working on my homework.
My $4\times 4$ matrix: $$ \begin{bmatrix} 0 & 4 & -1 & -1 \\ -1 & 4 & 0 & -1 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 2 \end{bmatrix} $$
The characteristic polynomial is $(2-\lambda)^4$, so the only eigenvalue is $\lambda=2$, with multiplicity 4, and
\begin{align*} \operatorname{am}_A(2) &= 4, & \operatorname{gm}_A(2) &= 2. \end{align*}

After all, I know that the Jordan form is a $4\times 4$ matrix with $\lambda=2$ on the main diagonal, and there are 2 Jordan blocks. So it looks like the matrix below, but instead of each $x$ it should have either 0 or 1; here is the first issue. $$ \begin{pmatrix} 2 & x & x & 0 \\ 0 & 2 & x & x \\ 0 & 0 & 2 & x \\ 0 & 0 & 0 & 2 \end{pmatrix} $$

Also, I need to find a transition matrix $P$ such that $J=P^{-1}AP$. As far as I understand, it's a $4\times 4$ matrix whose columns are (generalized) eigenvectors of the initial matrix $A$. So 2 of the columns of $P$ are $u_1 = (-1, 0, 0, 0)^{T}$ and $u_2 = (0, 0, 1, 0)^{T}$. How do I find the other 2 columns?

PS: I'm not asking for an exact answer (I have Wolfram for that), but trying to understand how to do it on my own.
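
One way to check a hand computation, sketched (assuming sympy; its `jordan_form` returns the pair $(P, J)$):

```python
import sympy as sp

A = sp.Matrix([[ 0, 4, -1, -1],
               [-1, 4,  0, -1],
               [ 0, 0,  2,  0],
               [ 0, 0,  0,  2]])

P, J = A.jordan_form()        # A = P * J * P**(-1)
print(J)                      # shows the sizes of the two Jordan blocks
print(P.inv() * A * P == J)   # True: P conjugates A to J
```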

Which points in this topology on R will be closed or open?

Posted: 31 May 2022 03:23 AM PDT

$\tau$ is generated by the sets $(a,b)\cap\mathbb{Q}$, where $a,b\in\mathbb{Q}$.

I checked a lot of references, but I didn't find anything on this question about points.

I understand what open and closed sets are, and what limit points are. But specifically in this example, I can't understand what status the points have.

As I understand it, in this case the points will be open, because for any point there is a neighbourhood of rational points?

Finding Fourier series of $f(x)+c$ given that of $f(x)$

Posted: 31 May 2022 03:19 AM PDT

So, I have the function $f(x)$ over the interval $[-\pi,\pi]$ defined as follows.

$$f(x)=\begin{cases}1+2x/\pi , -\pi\le x\le 0 \\ 1-2x/\pi , 0< x\le \pi\end{cases}$$

The thing is, computing the Fourier coefficients for it directly is highly tedious, whereas computing them for $f(x)-1$ removes the need to calculate $b_{n}$, as the function becomes even. So, in general, is there any way to arrive at the Fourier series expansion of $f(x)+c$ given the series expansion of $f(x)$?
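
A general fact that may help, sketched: adding a constant shifts only the constant term. If

$$f(x) \sim \frac{a_0}{2} + \sum_{n=1}^\infty \left(a_n\cos nx + b_n \sin nx\right),$$

then

$$f(x)+c \sim \frac{a_0 + 2c}{2} + \sum_{n=1}^\infty \left(a_n\cos nx + b_n \sin nx\right),$$

since $a_0 = \frac{1}{\pi}\int_{-\pi}^{\pi} f$ picks up an extra $2c$, while the other coefficients integrate the constant $c$ against $\cos nx$ or $\sin nx$, which gives $0$.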

Should I integrate constant with x or t while solving integral using substitution?

Posted: 31 May 2022 03:44 AM PDT

I am a high school student and found a question in the NCERT Class 12 book.

The question is:

$$\int\frac{1}{1-\tan(x)}dx$$

After simplifying the question, I got:

$$\frac{1}{2}\int\left(\frac{\cos(x)+\sin(x)}{\cos(x)-\sin(x)}+1\right)dx$$

I then used substitution method.

$$\cos(x)-\sin(x)=t$$

$$dx=\frac{-1}{\cos(x)+\sin(x)}\cdot dt$$

After solving, I got

$$\frac{1}{2}(\cos(x)-\sin(x))-\frac{1}{2}\ln(|\cos(x)-\sin(x)|)+C$$

But the answer in the book is different:

$$\frac{x}{2}-\frac{1}{2}\ln(|\cos(x)-\sin(x)|)+C$$

I think that the textbook answer is different because they integrated the 1 with respect to $x$, whereas I integrated it with respect to $t$. What's the right way to integrate?
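
For reference, a sketch of where the discrepancy comes from: after the split, only the first term should be converted by the substitution; the constant term is still paired with $dx$, not $dt$:

$$\frac{1}{2}\int\left(\frac{\cos x+\sin x}{\cos x-\sin x}+1\right)dx=\frac{1}{2}\int\frac{\cos x+\sin x}{\cos x-\sin x}\,dx+\frac{1}{2}\int 1\,dx=-\frac{1}{2}\ln|\cos x-\sin x|+\frac{x}{2}+C.$$

Replacing $\frac12\int 1\,dx$ by $\frac12\int 1\,dt$ silently changes the integrand, because $dt=-(\cos x+\sin x)\,dx\neq dx$; so the textbook's $\frac{x}{2}$ is the right term.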

I don't understand this little incongruity about the error function of the Lagrange interpolation polynomial.

Posted: 31 May 2022 03:28 AM PDT

Given $f \in C^{n+1}([a,b])$ and a set of $n+1$ points in $[a,b]$, and given the Lagrange interpolation polynomial $P$, the error function is $f - P = \frac{f^{(n+1)}(\eta_x)}{(n+1)!}w_S(x)$, where $w_S(x) = \prod_i (x-x_i)$, the $x_i$ are the nodes of interpolation, and $\frac{f^{(n+1)}(\eta_x)}{(n+1)!}$ is just a constant, so the overall error function is a polynomial.

This all means that $f = P + \frac{f^{(n+1)}(\eta_x)}{(n+1)!}w_S(x)$.

The right-hand side is a polynomial, so $f$ is a polynomial, but this is not always true. What's the problem with this?
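
(A hint, for what it is worth: the intermediate point $\eta_x$ in the remainder depends on $x$, so $\frac{f^{(n+1)}(\eta_x)}{(n+1)!}$ is a function of $x$ rather than a constant, and the right-hand side is in general not a polynomial.)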

Computing the optimal price from an auction with uniform distributed values

Posted: 31 May 2022 03:13 AM PDT

If I sell an item with value uniformly distributed between $0$ and $500$, and I value this item at $200$, then what is the ideal price to try to sell the item for? Note that once I set a price, I cannot change it. And there is only one potential buyer.

The two ideas that I have are that the ideal offer should be one of two possibilities.

  1. As it is uniformly distributed between $0$ and $500$, this implies that the expected offer ought to be $250$.

  2. As I value the item at $200$, this implies that I will not sell the item for less than $200$, and perhaps it would be better to model the value as being uniformly distributed between $200$ and $500$. This would give the expected value $350$ (which I believe could also be the optimal offer).

I was wondering if either of these approaches is correct, and if not, what is the best way to tackle this type of problem.
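
For reference, a sketch of one standard model (assuming the buyer purchases exactly when their value $V\sim U(0,500)$ is at least the posted price $p$): the expected gain over keeping the item is

$$\mathbb{E}[\text{profit}]=(p-200)\,\Pr(V\ge p)=(p-200)\cdot\frac{500-p}{500},$$

which is maximized where the derivative $\frac{700-2p}{500}$ vanishes, i.e. at $p=350$. So the second guess lands on the right number, though not for the stated reason.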

Odds ratio of the intersections of 3 events with dependence

Posted: 31 May 2022 03:13 AM PDT

Let's consider 3 events A,B and C. On a Venn diagram representing these three events, the different intersections are noted as follows:

$$abc = P(A\cap B\cap C)\;;\; ab = P(A\cap B \cap \bar C)\;;\; ac = P(A\cap \bar B \cap C)\;;\; a = P(A\cap \bar B \cap \bar C)$$

(Venn diagram of the three events)

If the three events A, B and C are mutually independent, then:

$$ OR = \frac{a/ac}{ab/abc} = 1$$

I know that if events B and C are not independent, then the odds ratio defined above will no longer be equal to one.

Now here's my question: what happens to this odds ratio when events B and C are independent, but events A and B (or A and C) are not?

Thank you in advance for your clarifications on this subject!

Can I "scale" Chernoff Bounds in this inequality?

Posted: 31 May 2022 03:48 AM PDT

I have $X = \sum_{i=1}^n x_i$, where the $x_i$ are independent random variables such that $x_i \sim \mathrm{Bern}(1/n)$.
Therefore $X \sim \mathrm{Binom}(n, 1/n)$, with $E[X]= n \cdot \tfrac1n = 1$.
I want to find an upper bound, via Chernoff bounds, for the quantity
$P[X > (1+ \epsilon)np]$, where $p$ is a constant in $[0, 1]$.

The Chernoff bound tells me:
$P[X > (1+\epsilon)\mu ] \leq e^{-\epsilon^2 \mu/3}$;
in our case $\mu = 1$.

Can I somehow scale this and, instead of $(1 + \epsilon)$, use $(1 + \epsilon)np$, in order to apply Chernoff?
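
One rewriting that may help, sketched (most textbook statements of this bound assume the deviation parameter is positive and at most $1$): match the thresholds,

$$(1+\epsilon)np = (1+\delta)\mu \quad\Longleftrightarrow\quad \delta = (1+\epsilon)np - 1,$$

so with $\mu=1$,

$$P[X > (1+\epsilon)np] = P[X > (1+\delta)\mu] \le e^{-\delta^2/3},$$

valid whenever $\delta$ lies in the range required by the version of the Chernoff bound being used; in particular one needs $(1+\epsilon)np > 1$ for this to be a genuine upper deviation.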

Why do we have $\|J\|_{\infty}=\sup_{\|v\|=1}\langle v, Jv\rangle ? $

Posted: 31 May 2022 03:32 AM PDT

Based on one previous question: Why is the function $\|\mathbf{J}\|_{\infty}$ $1$-Lipschitz w.r.t to the Euclidean norm?.

For a real symmetric matrix $J$, let $\|J\|_{\infty}$ be the spectral radius of $J$. Why do we have $$ \|J\|_{\infty}=\sup_{\|v\|=1}\langle v, Jv\rangle ? $$ (Here I think that the sup is taken over $\|v\|_2=1$?)

I checked the definition of the spectral radius of a matrix, which is $\rho(J):=\max_{1\le i\le n}|\lambda_i|$, with eigenvalues $\lambda_i$; for symmetric $J$ this equals $\|J\|_2$.

And the right-hand side of the first display looks like the operator norm of the matrix $J$?

Here I get $$ \|J\|_2^2=\sup_{\|x\|_2=1}\|Jx\|_2^2=\sup_{\|x\|_2=1}\langle Jx, Jx\rangle=\sup_{\|x\|_2=1}\langle x, J^TJx\rangle, $$ but this is not equal to $\sup_{\|x\|_2=1}\langle x, Jx\rangle$?
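
A sketch of the standard resolution: diagonalize the symmetric matrix $J = Q\Lambda Q^T$ with $Q$ orthogonal; writing $w = Q^T v$,

$$\langle v, Jv\rangle = \sum_i \lambda_i w_i^2, \qquad \|w\|_2 = \|v\|_2 = 1,$$

so $\sup_{\|v\|_2=1}\langle v,Jv\rangle = \lambda_{\max}(J)$. This equals the spectral radius $\max_i|\lambda_i|$ only when $\lambda_{\max} = \max_i |\lambda_i|$ (for instance when $J$ is positive semidefinite); in general one needs $\sup_{\|v\|_2=1}|\langle v,Jv\rangle|$.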

Finding $\cos\theta$ given $\sin\theta$.

Posted: 31 May 2022 03:26 AM PDT

We know that, $\sin(75^\circ)=\sin(30^\circ+45^\circ)=\sin45^\circ.\cos30^\circ+\sin30^\circ.\cos45^\circ=\frac{1}{\sqrt{2}}.\frac{\sqrt{3}}{2}+\frac{1}{2}.\frac{1}{\sqrt{2}}=\frac{\sqrt{3}+1}{2\sqrt{2}}$

And, $\cos(75^\circ)=\cos(30^\circ+45^\circ)=\cos45^\circ.\cos30^\circ+\sin30^\circ.\sin45^\circ=\frac{1}{\sqrt{2}}.\frac{\sqrt{3}}{2}-\frac{1}{2}.\frac{1}{\sqrt{2}}=\frac{\sqrt{3}-1}{2\sqrt{2}}$

Now, $\cos\theta=\sqrt{1-\sin^2\theta}$

So, $\cos(75^\circ)=\sqrt{1-\sin^2(75^\circ)}=\sqrt{1-(\frac{\sqrt{3}+1}{2\sqrt{2}})^2}=\sqrt{1-\frac{(\sqrt{3}+1)^2}{8}}=\sqrt{\frac{8-(\sqrt{3}+1)^2}{8}}=\sqrt{\frac{8-(4+2\sqrt{3})}{8}}=\sqrt{\frac{4-2\sqrt{3}}{8}}=\sqrt{\frac{2-\sqrt{3}}{4}}$

Thus, $\frac{\sqrt{3}-1}{2\sqrt{2}}=\sqrt{\frac{2-\sqrt{3}}{4}}$

Now, how do we simplify the RHS to obtain the LHS? I have checked that the LHS and RHS both have the value $0.2588190451$, but I am unable to show this by simplification.
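
A denesting sketch: write the radicand with denominator $2$ so that it becomes a perfect square,

$$2-\sqrt3 = \frac{4-2\sqrt3}{2} = \frac{3 - 2\sqrt3 + 1}{2} = \frac{(\sqrt3-1)^2}{2},$$

hence

$$\sqrt{\frac{2-\sqrt3}{4}} = \frac{\sqrt3-1}{2\sqrt2}.$$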

Limit of an oscillatory integral where the dominated convergence theorem does not apply

Posted: 31 May 2022 03:42 AM PDT

Fix $\xi>0$. Let $f:\mathbb R \to \mathbb C$ be not an $L^1$ function, but locally integrable, such that furthermore $$I = \int_{-\infty}^\infty f(x) e^{ix\xi}dx$$ exists as an improper Riemann integral. For $\alpha>0$, define a triangle-shaped cutoff function $h_\alpha(x) = h(x/\alpha)$, where $$h(0)=1, \quad h(x)=0 \textrm{ for }|x|>1, \quad h \textrm{ is linear on $[-1,0]$ and on $[0,1]$}.$$ Hence, as $\alpha\to\infty$, $h_\alpha$ converges pointwise to the constant function $1$.

I hope that the following holds: $$\lim_{\alpha\to\infty} \int_{-\infty}^\infty e^{ix\xi} f(x) h_\alpha(x) dx \stackrel != I.$$ Sadly, since $f\notin L^1$, the dominated convergence theorem cannot be used.

Question: Is there any general theory that tells me under what condition the above limit is equal to $I$? (Maybe the theory of oscillatory integral helps, but I am not an expert in that field. Actually, I am a physicist.) Any reference or theory that is possibly related to my question is welcome.

Remark: One example for $f$ is $f(x) = 1/x^\beta$ with $0<\beta<1$. In this case, direct calculation shows that the limit is equal to $I$.
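
One observation that may help, sketched (assuming the improper integral exists in the symmetric sense $S(r) = \int_{-r}^{r} f(x)e^{ix\xi}\,dx \to I$): since $h_\alpha(x) = \frac1\alpha\int_{|x|}^{\alpha}dr$ for $|x|\le\alpha$, Fubini on the compact region $\{|x|\le r\le \alpha\}$ gives

$$\int_{-\infty}^{\infty} f(x)e^{ix\xi}h_\alpha(x)\,dx = \frac1\alpha\int_0^\alpha S(r)\,dr,$$

i.e. the triangular cutoff produces the Cesàro average of the partial integrals, and averages of a convergent function converge to the same limit $I$.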

Average distance of 2 points on distinct lines

Posted: 31 May 2022 03:27 AM PDT

First, I need to find the function that gives the average chord length between $(t,0)$ and $(x,1)$.

Then I can integrate this function to get the average distance of 2 points on opposite sides of a unit square

The distance between $(t,0)$ and $(x,1)$ is $\sqrt{(t-x)^2 +1}$

Thus, the average chord length between $(t,0)$ and $(x,1)$ is defined as

$$C(t) = \int_0^{1} (\sqrt{(t-x)^2 +1} )dx $$

My questions are: is this work correct? I integrated $C(t)$ from $0$ to $1$ and got approximately $1.076$.
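
A closed-form check, sketched under the setup above: substituting $u=t-x$, whose density on $[-1,1]$ is the triangular $1-|u|$,

$$\int_0^1\!\!\int_0^1 \sqrt{(t-x)^2+1}\,dx\,dt = 2\int_0^1 (1-u)\sqrt{1+u^2}\,du = \sqrt2 + \ln(1+\sqrt2) - \frac{4\sqrt2-2}{3} \approx 1.0766,$$

which agrees with the numerical value reported.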

How to evaluate $\int_{0}^{1}\int_{0}^{1}\frac{(x+y+xy)\log{(x+y+xy)}}{x+y}dxdy$?

Posted: 31 May 2022 03:13 AM PDT

I am trying to evaluate this integral: $$\int_{0}^{1}\int_{0}^{1}\frac{(x+y+xy)\log{(x+y+xy)}}{x+y}\,dx\,dy.$$ It looks like this integral is symmetric in the variables $x$ and $y$, but I can't find a way to exploit that. I tried to use this: $$\int_{0}^{1}\int_{0}^{1}\frac{(x+y+xy)\log{(x+y+xy)}}{x+y}\,dx\,dy\\=\int_{0}^{1}\int_{0}^{1}2x\frac{(x+xy+x^2y)\log{(x+xy+x^2y)}}{x(1+y)}\,dx\,dy,$$ but still can't separate $x$ and $y$. I really need some advice here; thank you.

EDIT: After trying more, I ended up with this: $$\int_{0}^{1}\int_{0}^{1}\frac{xy\log{(x+y+xy)}}{x+y}\,dx\,dy.$$ I can't get any further; any advice? Thank you.
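
One identity that might be worth trying, for what it is worth: $x+y+xy=(1+x)(1+y)-1$, so the substitution $u=1+x$, $v=1+y$ turns the logarithm into $\log(uv-1)$ on the square $[1,2]^2$, which at least makes the symmetry explicit.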

Why is there a 1/2 coefficient for the curvature formula in the Wikipedia Curvature Form article?

Posted: 31 May 2022 03:27 AM PDT

The formula for the principal curvature form in the Wikipedia article Curvature Form, when applied to tangent vectors $X, Y$ to the principal bundle $P$, reads $$\Omega(X, Y)=d\omega(X, Y)+\frac12[\omega(X), \omega(Y)]$$ where $\omega$ is the connection form with values in the Lie algebra of the structure group.

It only depends on the Lie bracket of the Lie algebra, not on any conventional definition of the wedge product of forms.

Having learned that there is no $\frac12$ coefficient, I was about to edit the article when I saw a caveat from the authors: "do not remove the $\frac12$ coefficient". They did motivate its presence by a correct calculation made from the definition given in the Wikipedia article Lie Algebra-Valued Forms, where there is a $\frac1{(p+q)!}$ coefficient in front of the definition of the wedge product of two Lie-algebra-valued forms.

This definition is different from the one I saw in Michor's book Natural Operations in Differential Geometry, Ch. 19, and in other textbooks, where the coefficient is $\frac1{p!q!}$. This explains the difference in the formula for the curvature.

What is right? It seems to me that there should be no such coefficient $\frac12$, and that I should edit the article on Lie Algebra-Valued Forms. Am I right?

Edit 2

Well, the only explanation that makes sense to me is the following: the authors of the article are using an "old" convention for the definition of the exterior derivative of a one-form, following the one given in the Kobayashi & Nomizu book they referred to in a note.

In this book, the convention used puts a $\frac1{p+1}$ factor in front of the definition of the exterior derivative of a $p$-form, compared to the "modern" definition.

So, for the connection form, we have a $\frac12$ coefficient in the formula

$$d\omega(X,Y)=\frac12\big(X.\omega(Y)-Y.\omega(X)-\omega([X,Y])\big).$$

With this convention, it is easy to see that the formula for the curvature given in the Wikipedia article is right, albeit giving a curvature that is half of the modern definition.

Having two competing definitions for such an important and basic operator is a source of confusion and a waste of time, IMHO. I will definitely go with the "modern" approach.

Thus, I just edited the Wikipedia article with a reference to the choice of convention made for the exterior derivative definition, following my answer. It seems to me the safest course of action, since getting rid of the $\frac12$ coefficient would have implied changes in other articles.

Therefore, my remaining questions are now: 1) is my reasoning right? 2) is there any rationale behind the choice of the Kobayashi convention for the exterior derivative versus the modern one, or is it just a matter of convention? In any case, the choice of the Kobayashi convention in some Wikipedia articles (like Curvature Form, Lie Algebra-Valued Forms, etc.) is confusing when it is not clearly specified.

determinant of a matrix with binomial coefficient entries

Posted: 31 May 2022 03:30 AM PDT

I am trying to prove a statement which boils down to showing that the determinant of a specific matrix is nonzero. I use the convention that $\binom{n}{k} = 0$ if $k > n$ or $k < 0$. Let $k,l$ be natural numbers such that $k \le l$. Then the $n\times n$ matrix $A$ is defined to have the entries

$a_{ij} = \binom{l}{k +i - j}$. So it looks like $A = \left( \begin{array}{cccccc} \binom{l}{k} & \cdots & \binom{l}{0} & 0 & \cdots & 0 \\ \vdots & \ddots & & \ddots & \ddots & \vdots \\ \binom{l}{l} & & \ddots & & \ddots & 0 \\ 0& \ddots && \ddots & & \binom{l}{0}\\ \vdots&\ddots&\ddots&&\ddots&\vdots\\ 0&\cdots&0& \binom{l}{l} & \cdots & \binom{l}{k} \end{array}\right)$. Clearly the cases $k = l$ and $k = 0$ are trivial, since $A$ is then triangular. My first idea was to use the formula $\binom{r}{s} = \binom{r-1}{s-1} + \binom{r-1}{s}$ and add columns/rows to each other. But that does not work out that well...

So if anyone has any ideas, or if this matrix is known to be invertible, I would be very thankful.
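
For gathering evidence, a small exact-arithmetic sketch (assuming sympy; small ranges only):

```python
import sympy as sp

def det_A(n, k, l):
    # a_ij = binom(l, k + i - j); sympy's binomial vanishes outside 0 <= k+i-j <= l
    A = sp.Matrix(n, n, lambda i, j: sp.binomial(l, k + i - j))
    return A.det()

for n in range(2, 6):
    for l in range(1, 6):
        for k in range(0, l + 1):
            assert det_A(n, k, l) != 0   # no counterexample in this range
print("determinant nonzero for all tested (n, k, l)")
```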

Is $f(x,y) = f(\mathbf{x})$ abuse of notation?

Posted: 31 May 2022 03:44 AM PDT

A scalar function $f(x,y)$ is often written as $f(\mathbf{x})$, where $\mathbf{x} = (x,y)$, but as far as I know, there is a difference between the scalar function inputs $(x,y)$ and the vector input $(x,y) = x\imath+y\jmath$. As I see it, $f(\mathbf{x}) = f((x,y)) = f(x\imath+y\jmath) \neq f(x,y)$. Am I wrong, or is there a simple bijection between the two concepts?

Is it simply shorthand for $f': \mathbf{x} \mapsto f''(\imath\cdot\mathbf{x},\jmath\cdot\mathbf{x})$, s.t. $f'(\mathbf{x})= f''(x,y)$?

If it's of any relevance, I'm reading about scalar fields, and this definition came up:

$\displaystyle\dfrac{\partial f}{\partial x}(x,y) = \lim_{h\to 0} \dfrac{f(x+h, y)-f(x,y)}{h} \overset{\color{green}{?}}{=} \lim_{h\to 0}\dfrac{f(\mathbf{x}+h\imath)-f(\mathbf{x})}{h} = \dfrac{\partial f}{\partial\imath}(\mathbf{x})$

While it looks nice, I'm just curious if it's correct. However, I don't see how $f$ can be differentiated with respect to both $\imath$ (a vector) and $x$ (a scalar), unless there are actually two different functions $f'$ and $f''$...

The complement of a finite union of rectangles has only finitely many components

Posted: 31 May 2022 03:31 AM PDT

Problem. Assume that $V_i\subset \mathbb R^2$, $i=1,\ldots,n$, are open rectangles, and their sides are parallel to the axes. Show that $\mathbb R^2\setminus \bigcup_{i=1}^n V_i$ possesses finitely many connected components.

One idea is the following: Let $\mathcal A$ be the algebra of subsets of $\mathbb R^2$ generated by the rectangles $\{V_1,\ldots,V_n\}$. Find elementary generalised rectangles, i.e., sets of the form $I\times J$, where $I$ and $J$ are intervals (finite or infinite; open, closed, or half-open), such that every element of the algebra is a union of such elementary generalised rectangles. Then $\mathbb R^2\setminus \bigcup_{i=1}^n V_i$ will be a union of such sets, and so will be each component. Show that if a point of an elementary rectangle lies in a component, then so does the whole rectangle. Hence the components are elements of the algebra. There are a lot of details to take care of, though.
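
A concrete variant of the same idea, sketched: let $x_1<\dots<x_p$ be the $x$-coordinates of the vertical sides and $y_1<\dots<y_q$ the $y$-coordinates of the horizontal sides ($p\le 2n$, $q\le 2n$). These coordinates split $\mathbb R$ into at most $2p+1$ (respectively $2q+1$) points and open intervals, so $\mathbb R^2$ splits into at most $(2p+1)(2q+1)$ generalised rectangles; each of them is connected and lies entirely inside or entirely outside $\bigcup_{i=1}^n V_i$, because the sides of every $V_i$ lie on the grid. The complement is therefore a union of at most $(2p+1)(2q+1)$ connected sets, and hence has finitely many components.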
