Wednesday, April 7, 2021

Recent Questions - Mathematics Stack Exchange


Ordinary Generating function for $f(n)$

Posted: 07 Apr 2021 09:10 PM PDT

Let $f(n)$ be the number of $n\times n$ matrices $M$ of $0$'s and $1$'s such that every row and column of $M$ has three $1$'s. What is the ordinary generating function for $f(n)$? The most explicit formula known at present for $f(n)$ can be found on page $2$ of the book Enumerative Combinatorics by R. P. Stanley, which is

$$f(n) = \dfrac{n!^2}{6^n}\sum_{\alpha+\beta+\gamma=n} \dfrac{(-1)^\beta(\beta + 3\gamma)!\,2^{\alpha}3^{\beta}}{\alpha!\,\beta!\,\gamma!^2\,6^{\gamma}}$$

For example, we have $f(0) = 1$, $f(1) = 0$, $f(2) = 0$ and $f(3) = 1$. I have tried this problem for a considerable amount of time. I tried to extract some sort of ordinary generating function from the formula above, but it seems to get more complicated as we perform the coefficient extraction.

I am wondering whether there is a nice generating function for $f(n)$. I am looking for a neat, detailed answer; your help would be highly appreciated. Thanks in advance.

Out of curiosity: Can the formula for $f(n)$ be simplified some more?
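For anyone who wants to experiment with the formula, here is a quick sanity check (a sketch, assuming the sum runs over all $\alpha+\beta+\gamma=n$, as in Stanley, and using exact rational arithmetic):

```python
from fractions import Fraction
from math import factorial

def f(n):
    """Number of n x n 0-1 matrices with three 1's in every row and column,
    via Stanley's formula, summing over alpha + beta + gamma = n."""
    total = Fraction(0)
    for alpha in range(n + 1):
        for beta in range(n + 1 - alpha):
            gamma = n - alpha - beta
            total += Fraction((-1) ** beta * factorial(beta + 3 * gamma)
                              * 2 ** alpha * 3 ** beta,
                              factorial(alpha) * factorial(beta)
                              * factorial(gamma) ** 2 * 6 ** gamma)
    return int(Fraction(factorial(n) ** 2, 6 ** n) * total)

print([f(n) for n in range(5)])  # [1, 0, 0, 1, 24]
```

The first four values match those quoted above, and $f(4)=24$ agrees with the complement argument (each row and column then has exactly one $0$, so such matrices correspond to $4\times4$ permutation matrices).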

Hausdorff dimension of a Julia set

Posted: 07 Apr 2021 09:08 PM PDT

Can anyone show me a proof that the Hausdorff dimension of a Julia set is always positive? We have to estimate the lower bound by using the backward orbit.

Find the matrix exponential

Posted: 07 Apr 2021 09:07 PM PDT

$M = \begin{bmatrix} 1 & 1 & 0 & 0 & 0\\ 0 & 1 & 1 & 0 & 0\\ 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & -1 & 1\\ 0 & 0 & 0 & 0 & 1\\ \end{bmatrix}$

I'm trying to find the exponential of this matrix, $e^M$.

Wolfram tells me the answer is $$\begin{bmatrix} e & e & e/2 & 0 & 0\\ 0 & e & e & 0 & 0\\ 0 & 0 & e & 0 & 0\\ 0 & 0 & 0 & e^{-1} & (e-e^{-1})/2\\ 0 & 0 & 0 & 0 & e\\ \end{bmatrix}$$

But I don't follow how to get here. I tried finding the eigenvectors of $M$ and got $(1,0,0,0,0)$ and $(0,0,0,1,0)$ but I don't know where to go from here.
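One way to see it without a computer algebra system: the matrix is block triangular. The upper $3\times 3$ block is $I+N$ with $N$ nilpotent, so $e^{I+N}=e\,(I+N+N^2/2)$, which gives the $e,\ e,\ e/2$ pattern; the lower $2\times 2$ block $B=\begin{bmatrix}-1&1\\0&1\end{bmatrix}$ satisfies $B^2=I$, so $e^B=\cosh(1)I+\sinh(1)B=\begin{bmatrix}e^{-1}&\sinh 1\\0&e\end{bmatrix}$. A numerical sketch (assuming NumPy) summing the Taylor series confirms this:

```python
import numpy as np

M = np.array([[1, 1, 0, 0, 0],
              [0, 1, 1, 0, 0],
              [0, 0, 1, 0, 0],
              [0, 0, 0, -1, 1],
              [0, 0, 0, 0, 1]], dtype=float)

# Plain Taylor series e^M = sum_k M^k / k!  (fine here: ||M|| is small)
expM = np.zeros_like(M)
term = np.eye(5)
for k in range(1, 30):
    expM += term
    term = term @ M / k

# Upper block: exponential of the 3x3 Jordan block with eigenvalue 1
assert np.allclose(expM[:3, :3], np.e * np.array([[1, 1, 0.5],
                                                  [0, 1, 1],
                                                  [0, 0, 1]]))
# Lower block: cosh(1) I + sinh(1) B
assert np.allclose(expM[3:, 3:], np.array([[np.exp(-1), np.sinh(1)],
                                           [0, np.e]]))
print(expM.round(4))
```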

A sufficient condition of vector-valued function invertible

Posted: 07 Apr 2021 09:06 PM PDT

Suppose that $f:\mathbb{R}^n\to\mathbb{R}^n$ is a $C^1$ mapping, and $$ \lVert f\left( x \right) -f\left( y \right) \rVert \ge \lVert x-y \rVert . $$ Show that $f$ is invertible, and that its inverse is also $C^1$.

Obviously, $f$ is an injection. Besides, I think the condition (the inequality) tells us that the range of $f$ is "expanding", so it seems that $f$ is also a surjection; in fact, I don't know how to show this explicitly. Is it possible to apply the inverse mapping theorem to this problem?

How do I show that $W$ is a subspace of $\mathbb{R}^{2}$?

Posted: 07 Apr 2021 09:08 PM PDT

Given that $W = \{(x, 0)\}$ is the set of points in the plane with ordinate $0$, show that $W$ is a subspace of $\mathbb{R}^{2}$.

MY ATTEMPT

Let $W = \{(x, 0) \mid x\in\mathbb{R}\}$.

Then $(0,0)\in W$, which implies that $W\neq\varnothing$.

It is clear that $W\subseteq\mathbb{R}^{2}$.

Moreover, $u = (x_{1},0) + (x_{2},0) = (x_{1}+x_{2},0)\in W$ for every $x_{1},x_{2}\in\mathbb{R}$.

Can someone tell me what I should do after that?

Set of orthonormal functions

Posted: 07 Apr 2021 09:03 PM PDT

In my analysis class, we talked about the set of integrable functions in $L^2(\mathbb{R})$ with inner product $\langle f, g \rangle = \int_{\mathbb{R}} f(t)g(t)\, dt$. I was interested to know if there is a way to create a set of functions $\{m_k\} \subset L^2(\mathbb{R})$ that is orthogonal with respect to this inner product, with each function of norm $1$. Is there an easy way to do so? I have a hunch the set is not necessarily compact; is there a way to prove this?
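One classical example of such a system (a sketch, not from the course in question) is the family of Hermite functions $\psi_n(x)=H_n(x)e^{-x^2/2}/\sqrt{2^n n!\sqrt\pi}$, which is orthonormal (indeed a complete orthonormal basis) in $L^2(\mathbb{R})$. A quick numerical check, assuming NumPy:

```python
import numpy as np
from math import factorial, pi, sqrt

def hermite_fn(n, x):
    """Hermite function psi_n: orthonormal in L^2(R)."""
    c = np.zeros(n + 1); c[n] = 1.0           # coefficients selecting H_n
    Hn = np.polynomial.hermite.hermval(x, c)  # physicists' Hermite H_n
    return Hn * np.exp(-x**2 / 2) / sqrt(2.0**n * factorial(n) * sqrt(pi))

x = np.linspace(-12, 12, 40001)
dx = x[1] - x[0]
inner = lambda f, g: np.sum(f * g) * dx       # crude quadrature

psi = [hermite_fn(n, x) for n in range(4)]
gram = np.array([[inner(psi[i], psi[j]) for j in range(4)] for i in range(4)])
print(gram.round(6))  # approximately the 4x4 identity matrix
```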

Is there a mistake in this proof that a union of finitely many nowhere dense sets is nowhere dense?

Posted: 07 Apr 2021 09:00 PM PDT

[image: the proof in question]

I think there is a mistake near the end: the union of the interiors is a subset of the interior of the union, and not the other way around. Is there still a way to do a proof by contradiction along similar lines, even though this proof is incorrect?

Query in the proof of $\overline{C_c(X)}=L^p(\mu)$ of Rudin's Real & Complex Analysis (Theorem 3.14)

Posted: 07 Apr 2021 08:58 PM PDT

I am reading the proof of the Theorem 3.14 from Rudin's Real & Complex Analysis book which says

For $1\le p<\infty$, $C_c(X)$ is dense in $L^p(\mu)$, where $X$ is a locally compact Hausdorff space.

In the proof, he applies Lusin's Theorem (Theorem 2.24) to a simple function $s$. But to apply that theorem to $s$, we need a set $A$ with finite measure (i.e. $\mu(A)<\infty$) such that $s$ is zero outside $A$. Now write $s=\sum\limits_{i=1}^n \alpha_i \chi_{A_i}$, where $\alpha_i\in\Bbb{C}\setminus\{0\}$ and the $A_i$'s are disjoint measurable sets. Then $s$ is zero outside $\bigsqcup\limits_{i=1}^n A_i$, but its measure $\mu(\bigsqcup\limits_{i=1}^n A_i)=\sum\limits_{i=1}^n \mu(A_i)$ may be $\infty$. How, then, can we apply Lusin's Theorem here? Can anyone clarify my doubt? Thanks for your help in advance.

Permutation/combination question: How many ways can Mr. and Mrs. Rain take a photo if they can't stand side by side?

Posted: 07 Apr 2021 09:09 PM PDT

Mr. and Mrs. Rain and their 8 dogs are lining up for a photo. In how many ways can Mr. and Mrs. Rain take this photo if they can't stand side by side? I have been having trouble with this question because I don't know whether I should use a permutation formula or a combination formula.
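If all $10$ individuals are treated as distinct, the standard complement count is $10! - 2\cdot 9!$: all arrangements, minus those in which the couple stands together (glue them into one "person", in either of $2$ orders, and arrange $9$ units). A brute-force sketch in Python validates the formula on small cases:

```python
from itertools import permutations
from math import factorial

def count_non_adjacent(n):
    """Arrangements of n distinct people in a row where two marked
    people (labelled 0 and 1) are NOT side by side: n! - 2*(n-1)!."""
    return factorial(n) - 2 * factorial(n - 1)

def brute_force(n):
    return sum(1 for p in permutations(range(n))
               if abs(p.index(0) - p.index(1)) != 1)

for n in range(2, 7):
    assert brute_force(n) == count_non_adjacent(n)

print(count_non_adjacent(10))  # Mr. and Mrs. Rain plus 8 dogs: 2903040
```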

$\int_{0}^{1}\frac{\sin(x^p)}{x}dx$ values of convergence

Posted: 07 Apr 2021 08:58 PM PDT

Consider the integral $\int_{0}^{1}\frac{\sin(x^p)}{x}dx$ for $p>0$. Then for which values of $p$, does the integral converge?

So we know that at $x=0$ the integrand is undefined, so I should take $\lim_{t \to 0^+}$. After that, I evaluated the integral $\lim_{t \to 0^+}\int_{t}^{1}\frac{\sin(x^p)}{x}\,dx$. My answer is that the integral converges for every $p>0$. Is this right?
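Yes: near $0$ the integrand behaves like $x^{p-1}$, which is integrable for every $p>0$. A numerical sketch (assuming NumPy) shows $\int_t^1$ stabilizing as $t\to 0^+$:

```python
import numpy as np

def tail_integral(p, t, n=200000):
    """Trapezoid approximation of the integral of sin(x^p)/x over [t, 1]."""
    x = np.linspace(t, 1.0, n)
    y = np.sin(x**p) / x
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

for p in (0.5, 1.0, 2.0):
    vals = [tail_integral(p, t) for t in (1e-2, 1e-3, 1e-4)]
    print(p, vals)  # values settle down as t -> 0+
```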

Describe the subset of $\mathbb{P}^2$ corresponding to affine lines in $\mathbb{A}^2\cong U_0\cong\mathbb{P}^2\setminus V(x)$

Posted: 07 Apr 2021 09:09 PM PDT

Given this definition: A projective line in $\mathbb{P}^2(\mathbb{C})$ is defined by $ax+by+cz=0$ where $0\neq(a,b,c)\in\mathbb{C}^3$. I was asked to "Describe the subset of $\mathbb{P}^2$ corresponding to affine lines in $\mathbb{A}^2\cong U_0\cong\mathbb{P}^2\setminus V(x)$"

I'm not sure I understand what it is asking, or the notation; would anyone be able to help?

Riesz-Markov-Kakutani theorem with absolutely continuous measure?

Posted: 07 Apr 2021 08:48 PM PDT

Suppose that $X$ is a locally compact Hausdorff space and also that $(X,\mathcal F, P)$ is a probability space. Let $\psi$ be any positive linear functional on the space $C_c(X)$ of continuous, compactly supported functions $f$ on $X$.

The Riesz-Markov-Kakutani Theorem implies that there exists a unique regular Borel measure $\mu$ such that for all $f \in C_c(X)$, $$ \psi(f) = \int f(x) d\mu. $$

Are there conditions that guarantee $\mu \ll P$, so that---by the Radon-Nikodym theorem---we can write $$ \psi(f) = \int f(x) \frac{d\mu}{dP} dP \quad ? $$

How do I compute the Wronskian after finding the solution of an ODE that satisfies 2 initial conditions?

Posted: 07 Apr 2021 09:05 PM PDT

In the question, I was first given the following differential equation and was asked to compute the solution.

$$\frac {\mathrm{d}^{2}x}{\mathrm{d}t^2} + 2\frac{\mathrm{d}x}{\mathrm{d}t} + 5x = 0$$

After that, I found the solution satisfying the initial conditions $x(0) = 1$ and $x'(0) = 1$.

My solution was $x = e^{-t}\cos(2t) + e^{-t}\sin(2t)$.

But now I don't know how to compute the Wronskian from the solution.

If anybody could point me in the right direction that would be extremely helpful. Thanks!
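The Wronskian is computed from the fundamental pair $y_1=e^{-t}\cos 2t$, $y_2=e^{-t}\sin 2t$ (the general solution is a combination of these): $W=y_1y_2'-y_2y_1'=2e^{-2t}$, consistent with Abel's formula $W(t)=W(0)e^{-\int 2\,dt}$. A sketch in Python checking this:

```python
from math import exp, cos, sin, isclose

def y1(t):  return exp(-t) * cos(2*t)
def y2(t):  return exp(-t) * sin(2*t)
def dy1(t): return -exp(-t) * (cos(2*t) + 2*sin(2*t))
def dy2(t): return  exp(-t) * (2*cos(2*t) - sin(2*t))

def wronskian(t):
    return y1(t) * dy2(t) - y2(t) * dy1(t)

# Abel's formula for x'' + 2x' + 5x = 0 gives W(t) = W(0) * exp(-2t), W(0) = 2
for t in (0.0, 0.5, 1.3):
    assert isclose(wronskian(t), 2 * exp(-2*t), rel_tol=1e-12)
print(wronskian(0.0))  # 2.0
```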

How to prove Isomorphism in this case

Posted: 07 Apr 2021 08:47 PM PDT

I am trying to prove that the following objects are isomorphic: $$\mathbb{Q}[x]/\langle x+1\rangle \cong \mathbb{Q}$$

but I am not sure where to start. Is finding an actual map from one to another the way to approach this? I would appreciate any hint on this.

Find all real polynomials such that $p(x)=p(\alpha x)$ for some $\alpha>0$

Posted: 07 Apr 2021 09:06 PM PDT

For $p(x)$ we write $$ p(x)=a_{0} x^{0}+a_{1} x^{1}+a_{2} x^{2}+\ldots+a_{n} x^{n}=\sum_{j=0}^{n} a_{j} x^{j} $$ and for $p(\alpha x)$ $$ p(\alpha x)=a_{0}(\alpha x)^{0}+a_{1}(\alpha x)^{1}+a_{2}(\alpha x)^{2}+\ldots+a_{n}(\alpha x)^{n}=\sum_{k=0}^{n} a_{k}(\alpha x)^{k}, \quad \alpha>0. $$ If $\alpha=1$, then $p(x)=p(\alpha x)$ trivially. I'm not sure how to handle the case $\alpha \neq 1$, or how to apply induction here.
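Comparing coefficients is enough: $p(x)=p(\alpha x)$ forces $a_j\alpha^j=a_j$ for every $j$, and for $\alpha>0$ with $\alpha\neq1$ we have $\alpha^j\neq1$ for $j\ge1$, so only constant polynomials survive. A small numerical sketch (checking the identity at sample points, not a proof):

```python
# a_j * alpha**j = a_j for each j; if alpha > 0 and alpha != 1,
# then alpha**j != 1 for j >= 1, forcing a_j = 0 for all j >= 1.
def satisfies(coeffs, a):
    """Check p(x) == p(a*x) numerically at a few sample points."""
    samples = [0.3, 1.0, 2.7, -1.5]
    p = lambda x: sum(c * x**j for j, c in enumerate(coeffs))
    return all(abs(p(x) - p(a * x)) < 1e-9 for x in samples)

assert satisfies([5.0], 2.0)            # constants always work
assert not satisfies([5.0, 1.0], 2.0)   # degree >= 1 fails for alpha = 2
assert satisfies([5.0, 1.0], 1.0)       # alpha = 1 trivially works
print("only constant polynomials satisfy p(x) = p(2x)")
```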

ODE eigenvalue problem with unusual boundary conditions

Posted: 07 Apr 2021 08:43 PM PDT

I am given:

$$y'' + \lambda y = 0$$

where $y(0) = 0$, $(1 - \lambda)y(1) + y′(1) = 0$.

As usual, we are looking for nontrivial solutions.

Looks like a standard eigenvalue problem and yet I am totally stuck.

The case $\lambda = 0$ is rather obvious: $A = B = 0$. Not much fun.

But when I try $\lambda$ greater or smaller than zero, I get to this:

  1. If $\lambda < 0$, the solution is of the form:

$$B((1 + \omega^{2})\sinh(\omega) + \omega\cosh(\omega)) = 0$$

where $\lambda =-\omega^{2}$

  2. If $\lambda > 0$, the solution is of the form:

$$B((1 - \omega^2)\sin(\omega) + \omega\cos(\omega)) = 0$$

where $\lambda = \omega^{2}$.

The question states:

Find the nontrivial stationary paths, stating clearly the eigen-functions $y$.

In case 1), I can't see any nontrivial solutions, but in the second case I can't see any either, even though I know there are solutions.

Any help would be highly appreciated.
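In case 1 there really are no nontrivial solutions: for $\omega>0$ both $(1+\omega^2)\sinh\omega$ and $\omega\cosh\omega$ are positive, so their sum never vanishes and $B$ must be $0$. In case 2, however, the transcendental equation $(1-\omega^2)\sin\omega+\omega\cos\omega=0$ has infinitely many positive roots; a bisection sketch locates the first few, each root $\omega$ giving an eigenvalue $\lambda=\omega^2$ with eigenfunction $y=B\sin(\omega x)$:

```python
from math import sin, cos

def g(w):
    # Boundary condition for lambda = w^2 > 0: B[(1 - w^2) sin w + w cos w] = 0
    return (1 - w**2) * sin(w) + w * cos(w)

def bisect(f, a, b, tol=1e-12):
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

# Scan for sign changes, then refine each bracket.
roots, step, w = [], 0.01, 0.1
while w < 12 and len(roots) < 3:
    if g(w) * g(w + step) <= 0:
        roots.append(bisect(g, w, w + step))
    w += step
print([round(r, 4) for r in roots])  # the first root is near 1.21
```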

Permutation multiplication in S3

Posted: 07 Apr 2021 08:59 PM PDT

I have a question regarding multiplication in S3 with a=(2 3) and b=(1 2)

The video I watched was showing left and right cosets in this case being different with

ab=(1 3 2)

ba=(1 2 3)

I understand ba as (1 2)(2 3)= (1↦2↦3)

but I'm not sure about ab as (2 3)(1 2) = 2↦3, and after this I'm stuck and have no idea. I'm trying to see how (1 3 2) is the result of this, but I have no idea.
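If the video composes right-to-left (so ab means "apply b first, then a" — an assumption, but it matches the cosets shown), then ab sends $1\mapsto2\mapsto3$, $3\mapsto3\mapsto2$, and $2\mapsto1\mapsto1$, which is exactly the cycle (1 3 2). A sketch in Python with that convention:

```python
# Composing permutations right-to-left: (p ∘ q)(i) = p(q(i)).
def compose(p, q):
    return {i: p[q[i]] for i in (1, 2, 3)}

a = {1: 1, 2: 3, 3: 2}   # the transposition (2 3)
b = {1: 2, 2: 1, 3: 3}   # the transposition (1 2)

ab = compose(a, b)   # apply b, then a
ba = compose(b, a)   # apply a, then b

print(ab)  # {1: 3, 2: 1, 3: 2}, i.e. the cycle (1 3 2)
print(ba)  # {1: 2, 2: 3, 3: 1}, i.e. the cycle (1 2 3)
```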

Confusion about the Galois group $\mbox{Gal}(\mathbb{Q}(\zeta_{p^\infty})/\mathbb{Q})\cong \mathbb{Z}_p$

Posted: 07 Apr 2021 08:38 PM PDT

Let $p$ be a prime and $\zeta_{p^r}$ be a primitive $p^r$-th root of unity. Then many textbooks, e.g. Milne's, ask one to prove that $\mbox{Gal}(\mathbb{Q}(\zeta_{p^\infty})/\mathbb{Q})\cong \mathbb{Z}_p$ where $\mathbb{Q}(\zeta_{p^\infty})=\cup_{i\geq 1} \mathbb{Q}(\zeta_{p^i})$.

But as I try to understand it, I know that $\mbox{Gal}(\mathbb{Q}(\zeta_{p^n})/\mathbb{Q})\cong (\mathbb{Z}/p^n\mathbb{Z})^\times$.

And so when $p=2$, we have $\mbox{Gal}(\mathbb{Q}(\zeta_{2^n})/\mathbb{Q})\cong \mathbb{Z}/2\mathbb{Z}\times \mathbb{Z}/2^{n-2}\mathbb{Z}$ (for $n\geq 3$) and so $$\mbox{Gal}(\mathbb{Q}(\zeta_{2^\infty})/\mathbb{Q})\cong \varprojlim_{n\geq 1}(\mathbb{Z}/2^n\mathbb{Z})^\times\cong \varprojlim_{n\geq 1} (\mathbb{Z}/2\mathbb{Z}\times \mathbb{Z}/2^{n-2}\mathbb{Z})\cong \mathbb{Z}/2\mathbb{Z}\times \mathbb{Z}_2 (\cong \mathbb{Z}_2^\times).$$

When $p\geq 3$, we have $\mbox{Gal}(\mathbb{Q}(\zeta_{p^n})/\mathbb{Q})\cong \mathbb{Z}/(p-1)\mathbb{Z}\times \mathbb{Z}/p^{n-1}\mathbb{Z}$ since $(\mathbb{Z}/p^n\mathbb{Z})^\times$ is cyclic of order $p^{n-1}(p-1)$ and so \begin{align*} \mbox{Gal}(\mathbb{Q}(\zeta_{p^\infty})/\mathbb{Q})\cong & \varprojlim_{n\geq 1}(\mathbb{Z}/p^n\mathbb{Z})^\times \\ \cong & \varprojlim_{n\geq 1} (\mathbb{Z}/(p-1)\mathbb{Z}\times \mathbb{Z}/p^{n-1}\mathbb{Z})\\ \cong & \mathbb{Z}/(p-1)\mathbb{Z}\times \mathbb{Z}_p(\cong \mathbb{Z}_p^\times). \end{align*}

Surely, I (mis)use the claim that $\varprojlim$ commutes with group products (and with taking units). But surely, $\mathbb{Z}_{p}$ and $\mathbb{Z}_p^\times$ are not the same.

Can anyone explain where I go wrong?

How to find the points of intersection in a curve and a line

Posted: 07 Apr 2021 08:52 PM PDT

The equation of a curve is $$ y=8\sqrt x -2x. $$ We have to find the values of $x$ at which the line $y = 6$ meets the curve.

I tried equating them and doing using the quadratic formula like this: $$ 8\sqrt x -2x = 6 $$ $$ 64x + 4x^2 = 36 $$ $$ 4x^2 + 64x -36 = 0 $$

The answer to the question is $x=9, x=1$ but after solving this quadratic, I'm getting a completely different answer. What am I doing wrong?
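The slip appears to be in the squaring step: from $8\sqrt x = 6+2x$, squaring both sides as a whole gives $64x = 4x^2+24x+36$, i.e. $x^2-10x+9=0$, whose roots are indeed $1$ and $9$. A quick check:

```python
from math import sqrt, isclose

# Rearranged: 8*sqrt(x) = 6 + 2x.  Squaring BOTH sides as a whole:
# 64x = 4x^2 + 24x + 36  =>  x^2 - 10x + 9 = 0  =>  x = 1 or x = 9.
disc = sqrt(10**2 - 4 * 9)
roots = sorted([(10 - disc) / 2, (10 + disc) / 2])
print(roots)  # [1.0, 9.0]

# Both satisfy the original equation 8*sqrt(x) - 2x = 6:
for x in roots:
    assert isclose(8 * sqrt(x) - 2 * x, 6.0)
```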

Stabilization of gradient systems for finite time

Posted: 07 Apr 2021 08:37 PM PDT

I would ask the respected experts to help in solving the following problem.

We have the following differential equation:

$\frac{dx}{dt}=\frac{d}{dx}(f(x))$

where $f(x)$ is an arbitrary unimodal function (like $-x^2$, $-(x-1)^4$, $e^{-x^2}$). We may take $-(x-1)^4$, for example.

The system trajectory $x$ is a transition from the initial state $x(0)=-1$ to the equilibrium position at which $\frac{d}{dx}(-(x-1)^4)=0$; this is achieved at $x = 1$ in this case. I illustrated it with a simple picture (a numerical solution in Mathematica).

[image: numerical solution of the gradient system]

I want to make the transition from the initial state to the equilibrium state at the finite time. It seems to me that for this you need to use the results of the work like this:

https://strathprints.strath.ac.uk/64842/1/Liu_etal_Control2018_Fixed_time_stabilization_of_second_order_systems.pdf

PROBLEM: What should the structure of the control system be in order to bring the system to the equilibrium state in finite time (using the method in the linked paper)? Should I impose some criterion on the coordinate $x(t)$, or introduce a control signal $u(t)$? And how should it be formed?

I would be grateful for the advice and help.

Combinatorics - How many ways are there to arrange the string of letters AAAAAABBBBBB such that each A is next to at least one other A?

Posted: 07 Apr 2021 09:04 PM PDT

I found a problem in my counting textbook, which is stated above. It gives the string AAAAAABBBBBB, and asks for how many arrangements (using all of the letters) there are such that every A is next to at least one other A.

I calculated and then looked into the back for the answer, and the answer appears to be $105$. My answer fell short of that by quite a bit. I broke down the string into various cases, and then used Stars and Bars to find how many possibilities there are for each. Here is what I have so far:

  • First case: all As are right next to each other, leaving $2$ spots for the $6$ Bs. That gives $\binom{7}{1}$ from Stars and Bars, $7$ possibilities.
  • Second case: dividing the As into $2$ groups of $3$ As. There would have to be $1$ B between the two, which leaves $5$ Bs that can be moved. Using Stars and Bars, there are $3$ possible places to place a B and $5$ Bs in total, so $\binom{6}{2}$, $15$ possibilities.
  • Third case: a group of $4$ As and another group of $2$ As. $1$ B would be placed in between, and then the calculation would be the same as the second case, except doubled to account for the fact that the groups of As can be swapped to give distinct arrangements. That gives $30$ possibilities.
  • Final case: dividing the As into $3$ groups of $2$ As. $2$ Bs would immediately be placed between the $3$ groups, leaving $4$ Bs to move between the $4$ possible locations. I got $\binom{5}{3}$ for that, which adds $10$ possibilities.

Summing it up, I only have $62$ possibilities, which is quite far from the $105$ answer. Any ideas where I might have miscalculated or missed a potential case? Additionally, are there any better ways to calculate this compared to this method of casework?
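A brute-force sketch (choose which $6$ of the $12$ positions hold A's and test the adjacency condition) confirms the book's $105$. One consistent way to count by cases: compositions of $6$ into blocks of size $\ge2$, with $k$ blocks dropped into $k$ of the $7$ gaps around the $6$ B's, gives $\binom71 + 3\binom72 + \binom73 = 7+63+35 = 105$.

```python
from itertools import combinations

def ok(a_positions):
    """Every A must be adjacent to at least one other A."""
    s = set(a_positions)
    return all(i - 1 in s or i + 1 in s for i in s)

# Choose the 6 positions (out of 12) occupied by A's: C(12,6) = 924 cases.
count = sum(1 for pos in combinations(range(12), 6) if ok(pos))
print(count)  # 105
```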

compute $ \int_0^{\infty} \frac{ \cos \pi x }{ e^{2\pi \sqrt{x}} -1 } dx $

Posted: 07 Apr 2021 09:11 PM PDT

$$ \int_0^{\infty} \frac{ \cos \pi x }{ e^{2\pi \sqrt{x}} -1 } dx $$

I guess one common trick is $\frac{1}{e^{2\pi \sqrt{x}} - 1} = \frac{1}{2}\left(\frac{1}{e^{\pi \sqrt{x}}-1} - \frac{1}{e^{\pi \sqrt{x}}+1}\right)$, but it doesn't seem to help much?

About the commutants of the group representation $D_4$

Posted: 07 Apr 2021 08:44 PM PDT

Let us consider the natural representation $\pi$ of the group $D_4$ on the vector space with orthonormal basis $\{e_1, e_2, e_3, e_4 \}$. Note that this gives us a $C^*$-algebra $A$ such that $A = \tilde{\pi} (\mathbb{C}(D_4))$, where $\tilde{\pi}: \mathbb{C}(D_4) \rightarrow B(H)$ is a $*$-homomorphism of $\mathbb{C}(D_4)$.

I have that $$ \mathrm{Rotation} = \rho= \begin{bmatrix} 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \text{.} $$

$$ \mathrm{Reflection}= \sigma = \begin{bmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{bmatrix} \text{.} $$

My question is: let $A'$ be the commutant of $A$. What is $\dim(A')$ in this case?

I know that if $x \in A'$, then we must have $x\rho=\rho x$ and $x \sigma = \sigma x$.

So this gives us some equations to satisfy. But I'm not sure how to proceed from here.

Thank you in advance!
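A numerical sketch (assuming NumPy): $x$ commutes with all of $A$ iff it commutes with the generators $\rho$ and $\sigma$, and with column-stacking $\operatorname{vec}(xg-gx)=(g^{T}\otimes I - I\otimes g)\operatorname{vec}(x)$, so the commutant is the common null space of two $16\times16$ matrices. The nullity comes out to $3$, consistent with the vertex representation splitting into three inequivalent irreducibles, so $\dim A'=1^2+1^2+1^2$:

```python
import numpy as np

rho = np.array([[0, 0, 0, 1],
                [1, 0, 0, 0],
                [0, 1, 0, 0],
                [0, 0, 1, 0]], dtype=float)
sigma = np.array([[0, 0, 0, 1],
                  [0, 0, 1, 0],
                  [0, 1, 0, 0],
                  [1, 0, 0, 0]], dtype=float)

# x in A' iff x*g = g*x for g in {rho, sigma}; vectorize the constraints.
I = np.eye(4)
system = np.vstack([np.kron(g.T, I) - np.kron(I, g) for g in (rho, sigma)])
dim_commutant = 16 - np.linalg.matrix_rank(system)
print(dim_commutant)  # 3
```

A hand check agrees: commuting with $\rho$ forces $x$ to be circulant with first row $(a,b,c,d)$, and commuting with $\sigma$ then forces $b=d$, leaving three free parameters.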

Is a balanced undirected signed graph bipartite in exactly the case where all signs are negative?

Posted: 07 Apr 2021 09:01 PM PDT

I was reading Graph Theory and Complex Networks by Maarten van Steen. I was reading about undirected signed graphs in the section on social network analysis, which contains the following theorem:

An undirected signed graph G is balanced if and only if V(G) can be partitioned into two disjoint subsets $V_0$ and $V_1$ such that the following two conditions hold:
(1) $E^-(G) = \{\langle x, y \rangle | x \in V_0, y \in V_1 \}$
(2) $E^+(G) = \{ \langle x, y \rangle | x, y \in V_0 \text{ or } x, y \in V_1 \}$

Condition (1) (especially given the prior description in the book) seems to me to be quite a bit like the criterion for a bipartite graph. Does this theorem basically mean that if you had a balanced graph where every single edge was signed negative (i.e. $E^-(G) = E(G)$), then $G$ must be bipartite?

Edit: A signed graph is a simple graph in which each edge is labeled with a sign (- or +). An undirected signed graph is balanced when all of its cycles are positive. The sign of a set of edges can be found by multiplying all of the signs of the edges in the set; the product of two signs is negative if exactly one of them is negative and positive otherwise (so, for example, - * - = + and - * + = -).
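Yes: if every edge is negative, a cycle of length $\ell$ has sign $(-1)^\ell$, so balance (all cycles positive) says every cycle has even length, which is precisely bipartiteness. A sketch checking this on two tiny examples (a BFS 2-coloring test standing in for balance of an all-negative graph):

```python
from collections import deque

def is_bipartite(n, edges):
    """BFS 2-coloring; for an all-negative signed graph this is exactly
    balance, since every cycle must then have even length."""
    adj = {i: [] for i in range(n)}
    for u, v in edges:
        adj[u].append(v); adj[v].append(u)
    color = {}
    for s in range(n):
        if s in color:
            continue
        color[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]
                    q.append(v)
                elif color[v] == color[u]:
                    return False
    return True

# All-negative 4-cycle: every cycle has an even number of '-' -> balanced.
assert is_bipartite(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
# All-negative triangle: the 3-cycle has sign (-)^3 = '-' -> unbalanced.
assert not is_bipartite(3, [(0, 1), (1, 2), (2, 0)])
print("all-negative balanced <=> bipartite")
```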

Question about $\int e^{x^2} dx.$ Why can't it be integrated using standard methods?

Posted: 07 Apr 2021 08:46 PM PDT

I have a question about $\int e^{x^2} dx.$

I understand that if you try to use $u$-substitution, you will end up with an integral that has both $u$ and $x$ in the integrand, which cannot be evaluated.

However, there is a "shortcut method" for the reverse chain rule: since $\frac{d}{dx} e^{u} = e^{u}u'$, the integral of $e^u\,dx$ should be $e^u/u'$. This rule works in most cases, such as with $e^{3x}$, for example.

If I were to use this "shortcut" version of the reverse chain rule, we could then say that $\int e^{x^2}\, dx = \frac{e^{x^2}}{2x} + c$ (as an indefinite integral). We could seemingly verify that this is correct by taking the derivative of our answer and, sure enough, it takes us back to where we started.

However, if I evaluate this as a definite integral between two bounds by hand, I get a different result than if I evaluate it using my TI-84 calculator. Therefore, I assume the integral I calculated earlier must be incorrect. Why? I'm hoping someone can help me understand. And since it is not correct, why can I take the derivative of the result and it takes me back to where I started? Usually, this is a good check to see if you executed the integral properly.

I appreciate any help and thanks in advance!
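The derivative check is where the slip is: differentiating $e^{x^2}/(2x)$ needs the quotient rule, which produces an extra term, $\frac{d}{dx}\frac{e^{x^2}}{2x}=e^{x^2}-\frac{e^{x^2}}{2x^2}$. The "shortcut" $\int e^u\,dx=e^u/u'$ is only valid when $u'$ is constant (as with $e^{3x}$). A numerical sketch:

```python
from math import exp, isclose

F = lambda x: exp(x**2) / (2 * x)   # the proposed "antiderivative"
f = lambda x: exp(x**2)             # the integrand

# Differentiate F numerically (central difference) and compare with f.
h = 1e-6
for x in (0.5, 1.0, 2.0):
    dF = (F(x + h) - F(x - h)) / (2 * h)
    # Quotient rule: F'(x) = e^{x^2} - e^{x^2}/(2x^2), which is NOT e^{x^2}.
    assert isclose(dF, f(x) - f(x) / (2 * x**2), rel_tol=1e-5)
    assert not isclose(dF, f(x), rel_tol=1e-3)
print("d/dx[e^{x^2}/(2x)] differs from e^{x^2} by e^{x^2}/(2x^2)")
```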

Find the real and imaginary parts of the function $w=z^{3i}$

Posted: 07 Apr 2021 09:04 PM PDT

Find the real and imaginary parts of the function $w=z^{3i}$. When I try to do something with it, I get stuck at $(x+iy)^{3i}$.
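With the principal branch, $z^{3i}=e^{3i\operatorname{Log}z}=e^{3i(\ln r+i\theta)}=e^{-3\theta}\bigl(\cos(3\ln r)+i\sin(3\ln r)\bigr)$, where $r=|z|$ and $\theta=\arg z$. A sketch in Python, whose complex power also uses the principal branch:

```python
from math import atan2, log, exp, cos, sin, isclose

def w_parts(z):
    """Principal-branch z^{3i}: (Re, Im) via r = |z| and theta = arg z."""
    r, theta = abs(z), atan2(z.imag, z.real)
    scale = exp(-3 * theta)
    return scale * cos(3 * log(r)), scale * sin(3 * log(r))

for z in (1 + 2j, -0.5 + 0.3j, 3 - 4j):
    w = z ** 3j   # Python's complex power uses the principal branch too
    re, im = w_parts(z)
    assert isclose(w.real, re, rel_tol=1e-9, abs_tol=1e-12)
    assert isclose(w.imag, im, rel_tol=1e-9, abs_tol=1e-12)
print("Re = e^{-3θ} cos(3 ln r),  Im = e^{-3θ} sin(3 ln r)")
```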

Extremes of: $f(x,y,z)= (x-1)^2y^2(z+1)^2$ with: $x^2 + y^2 + z^2 \leq 1$

Posted: 07 Apr 2021 08:38 PM PDT

I tried using Hessian matrices, but failed, mainly because I did not find all the possible points that could be maxima, minima, or saddle points. Any ideas for finding the other points?

The points I've found are: $(1,0,0)$ and $(0,0,-1)$


Do I have to use Lagrange multipliers to solve it? Any idea how to start?
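One observation that organizes the search: $f\ge0$, and $f=0$ exactly when $y=0$, $x=1$, or $z=-1$, so those points are the global minima; moreover, every critical point of $f$ has one of those factors vanishing, so the maximum must lie on the boundary sphere $x^2+y^2+z^2=1$ (where Lagrange multipliers apply). A crude grid-search sketch over the sphere locates the maximum numerically:

```python
from math import sin, cos, pi

def f(x, y, z):
    return (x - 1)**2 * y**2 * (z + 1)**2

# f >= 0, and f = 0 whenever y = 0, x = 1, or z = -1 (global minima).
# Search the unit sphere in spherical coordinates for the maximum.
best = 0.0
N = 400
for i in range(N + 1):
    th = pi * i / N                   # polar angle
    for j in range(2 * N):
        ph = pi * j / N               # azimuth
        x, y, z = sin(th) * cos(ph), sin(th) * sin(ph), cos(th)
        v = f(x, y, z)
        if v > best:
            best = v
print(round(best, 3))
```

Solving the Lagrange conditions by hand gives $x=-z=\frac{1-\sqrt{13}}{6}$ with $y^2=x^2-x$, a maximum value of about $2.636$, which the grid search reproduces.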

A Difficult Area Problem involving a Circle and a Square

Posted: 07 Apr 2021 08:34 PM PDT

A few days ago, I encountered the following problem:

[image: the problem statement]

After a little bit of thinking, I managed to come up with the following solution:

  1. Rotate the square $90^\circ$ clockwise and let the new bottom left corner of the square be $(0,0)$.
  2. The circle inscribed in the square is hence centered at $(5,5)$ with a radius of $5$. The circle equation thus becomes $(x-5)^{2} + (y-5)^{2} = 25 \Rightarrow y = 5 + \sqrt{25 - (x-5)^{2}}$ in the first quadrant.
  3. Similarly for the quarter circle, the equation becomes $y = \sqrt{100-x^2}$.

The graph hence looks like this:

[image: the graph]

My intention is to find the shaded area in the above graph. To do so, first I find $X$ by equating $5 + \sqrt{25 - (x-5)^{2}} = \sqrt{100-x^2} \Rightarrow x=\frac{25 - 5\sqrt{7}}{4}$.

From this, I calculate the area of the shaded region as follows: $$\text{Area} = (10 \cdot \frac{25 - 5\sqrt{7}}{4} - \int_0^\frac{25 - 5\sqrt{7}}{4} \sqrt{100-x^2} \,\mathrm{d}x) + (10 \cdot (5 - \frac{25 - 5\sqrt{7}}{4}) - \int_\frac{25 - 5\sqrt{7}}{4}^5 5 + \sqrt{25 - (x-5)^{2}} \,\mathrm{d}x) \approx 0.7285$$

Now, the diagram looks like this:

[image: the diagram]

From here, I figured out the shaded area as follows: $$\text{Area} \approx 10^{2} - \frac{\pi(10^{2})}{4} - (\frac{10^{2} - \pi(5^{2})}{4} + 2 \times 0.7285) \approx \boxed{14.6 \: \text{cm}^{2}}$$

While I did figure out the correct solution, I find my approach to be rather lengthy. I was wondering if there is a quicker, simpler and more concise method (one that probably does not require calculus), and I would highly appreciate any answers.
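For what it's worth, the computation above can be cross-checked numerically (a sketch assuming NumPy): the intersection abscissa $\frac{25-5\sqrt7}{4}$ lies on both circles, the lens-shaped piece evaluates to about $0.7285$, and the final area to about $14.64\ \text{cm}^2$:

```python
import numpy as np
from math import sqrt, pi

xs = (25 - 5 * sqrt(7)) / 4                       # intersection, about 2.943
inner = lambda x: 5 + np.sqrt(25 - (x - 5)**2)    # top of inscribed circle
outer = lambda x: np.sqrt(100 - x**2)             # quarter circle, radius 10

# Both curves pass through the intersection point:
assert abs(inner(xs) - outer(xs)) < 1e-9

def integral(g, a, b, n=200001):
    """Trapezoid rule on a uniform grid."""
    x = np.linspace(a, b, n)
    y = g(x)
    return np.sum(0.5 * (y[1:] + y[:-1])) * (b - a) / (n - 1)

lens = (10 * xs - integral(outer, 0, xs)) + \
       (10 * (5 - xs) - integral(inner, xs, 5))
area = 100 - pi * 100 / 4 - ((100 - pi * 25) / 4 + 2 * lens)
print(round(lens, 4), round(area, 2))  # about 0.7285 and 14.64
```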

Bitwise inner product and orthogonality

Posted: 07 Apr 2021 09:02 PM PDT

I am confused about the definition of the bitwise inner product used in quantum algorithms. For example, the bitwise inner product of 01111 with itself (mod 2) gives us 0, but the vector is not orthogonal to itself. How come the inner product is 0? Am I missing a point here?
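The bitwise inner product is a bilinear form over the field $\mathbb{F}_2$, not an inner product on a real or complex Hilbert space, so the usual intuition fails: a nonzero vector can pair to $0$ with itself (it is "self-orthogonal"). For 01111 there are four $1$'s, and $4 \bmod 2 = 0$:

```python
# Bitwise inner product mod 2: sum of a_i * b_i, reduced mod 2.
def bip(a, b):
    assert len(a) == len(b)
    return sum(int(x) * int(y) for x, y in zip(a, b)) % 2

s = "01111"
print(bip(s, s))  # four 1's -> 4 mod 2 = 0
```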

How many non-overlapping circles can you fit into an area?

Posted: 07 Apr 2021 09:02 PM PDT

Say you have an area, let's use a circle, and you want to cover it with circular objects (coins, for example). How many coins can fit completely into this area without overlapping or deforming the coins? All the coins are the same size.

I've noticed there are two ways for the coins to tessellate:

  • in a square pattern with coins being placed with their centers at the intersection of grid lines.
  • in a triangular pattern where coins are placed with their centers at the vertices of a triangle.

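For a large region, the two patterns differ mainly in density (boundary effects aside): the square lattice covers $\pi/4\approx78.5\%$ of the plane, while the triangular (hexagonal) one covers $\pi/(2\sqrt3)\approx90.7\%$, and the latter is known to be optimal for equal circles (Thue's theorem). A one-liner sketch of the two densities:

```python
from math import pi, sqrt

# Density = area covered by coins / total area, for the two lattices:
square_density = pi / 4            # one coin of radius r per (2r) x (2r) cell
hex_density = pi / (2 * sqrt(3))   # triangular (hexagonal) arrangement

print(round(square_density, 4), round(hex_density, 4))  # 0.7854 0.9069
```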
