Recent Questions - Mathematics Stack Exchange
- A simple problem using gradient vector
- Find a sequence of continuous functions that converges to a piecewise function
- how to simplify polynomials using a Karnaugh map
- interpreting decreasing negative percentages
- What are some preferred practices to handle systematic uncertainties for fitted model parameters?
- Help with volume of solids of revolution
- How can we be sure that axioms are sufficient?
- derivative of square root of quadratic form with respect to matrix
- Possible to prove strong induction implies weak induction?
- Formality of Treating Differential Operators as a Variable in Solving Diff Eqs.
- How would this lambda function reduce?
- Uniform Convergence of series to function on subinterval
- Increasing sequence whose supremum is $x\in X$
- Prove or disprove $\langle a,b,c\rangle$ is free
- how to interpret decreasing negative percentages
- General formula for coplanar vectors in higher dimensions
- $\sigma$-algebra of coin tosses until first head
- directional derivative geometric proof [closed]
- Which is correct, singular or plural? "interval $(-2, 0)\cup(2, \infty)$" vs "intervals $(-2, 0)\cup(2, \infty)$"
- Van Kampen on two tori joined at a point.
- How to prove $2x^2 + x + 1$ is injective on the domain and codomain of all integers
- Limit of $\left(y+\frac23\right)\mathrm{ln}\left(\dfrac{\sqrt{1+y}+1}{\sqrt{1+y}-1} \right) - 2 \sqrt{1+y}$
- cardinality of vector space with uncountable basis
- finding the minimum value of a multivariable function
- Another question in the proof of A&M Thm 9.5
- How does the inequality with the modulus function hold?
- Complexity of representing all satisfying assignments
- Simplifying a Hessian
- $O\left(\dfrac{1}{x}\right)+O(1)=O(1)$?
- What is the probability of picking 3 aces and 2 kings out of a 54-card deck (52-card standard with jokers)?
A simple problem using gradient vector Posted: 15 May 2021 07:49 PM PDT I'm asking about a simple problem that requires using the gradient vector. This one is from a university assignment and I couldn't find the source.
'Gradient of T(2, 3, 3)' would be the answer, but I have no idea how to come up with this since there's no information on the polynomial T. How can I solve this?
Find a sequence of continuous functions that converges to a piecewise function Posted: 15 May 2021 07:47 PM PDT Let $f(x)= \left\{ \begin{array}{lc} 0, & x \leq 0 \\ \\ 1, & 0 < x \\ \end{array} \right.$ Find a sequence $(f_{n})$ such that each $f_{n}: \mathbb{R} \rightarrow \mathbb{R}$ is continuous and $f_{n}(x) \rightarrow f(x)$ for all $x \in \mathbb{R}$. I think $$f_{n}(x)= \left\{ \begin{array}{lc} \frac{1}{n}x, & x \leq 0 \\ \\ x^{\frac{1}{n}}, & 0 < x \\ \end{array} \right.$$ could work. Am I right?
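Not a proof, but a quick numerical sanity check of the proposed sequence is easy to run (the sample points and values of $n$ below are arbitrary choices): for each fixed $x$ the error $|f_n(x)-f(x)|$ should shrink as $n$ grows, and both one-sided limits of $f_n$ at $0$ are $0$, consistent with continuity.

```python
import numpy as np

def f(x):
    # target step function: 0 for x <= 0, 1 for x > 0
    return np.where(x <= 0, 0.0, 1.0)

def f_n(x, n):
    # proposed sequence: x/n for x <= 0, x**(1/n) for x > 0
    return np.where(x <= 0, x / n, np.power(np.maximum(x, 0.0), 1.0 / n))

xs = np.array([-5.0, -1.0, -0.1, 0.0, 1e-4, 0.5, 1.0, 7.0])
for n in [1, 10, 100, 10000]:
    err = np.abs(f_n(xs, n) - f(xs))
    print(n, err.max())   # largest pointwise error on these sample points shrinks with n
```

Note that the convergence cannot be uniform near $0$ (the limit is discontinuous), which a fixed finite grid will not reveal; the check above only probes pointwise convergence.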
how to simplify polynomials using a Karnaugh map Posted: 15 May 2021 07:46 PM PDT
interpreting decreasing negative percentages Posted: 15 May 2021 07:45 PM PDT Here's a table of decreasing negative percentages for items shipped on time. I think more items were shipped on time in July than in May or June. Is that correct? The table shows the difference in negative percentages.
What are some preferred practices to handle systematic uncertainties for fitted model parameters? Posted: 15 May 2021 07:34 PM PDT I have a set of data $(x_{\rm data}, y_{\rm data}, \Delta y_{{\rm data}})$, where $\Delta y_{{\rm data}}$ is the uncertainty in the measurement. I also have some model used to explain the data. The model is non-linear: $$y_{\rm model} =\mathcal{F}(\bf{p}; x_{\rm model})~,$$ $\bf{p}$ being the model parameters. I have some prior knowledge about $\bf{p}$, so I run an MCMC chain to estimate the posterior distribution of $\bf{p}$, and use that distribution to find the best-fit values and the uncertainties for $\bf{p}$. This is all good. However, I have a few (fewer than 10) candidate models to explain the dataset, and the true model is unknown. All these models give different estimates for the best-fit values and the uncertainties of $\bf{p}$. Thus, I need a good method to calculate the best-fit values and the uncertainties of $\bf{p}$ such that the model uncertainty is also included. While calculating such a systematic uncertainty I also need to keep in mind that some of these models fit the data poorly compared to others (measured with some metric like $\chi^2 =\sum (y_{\rm data}- y_{\rm model})^2/\Delta y_{\rm data}^2$). Any suggestions? P.S. For model selection, I came across information criteria like the Akaike Information Criterion, so my naive thought was to choose the best model from such a criterion and report the best-fit value and uncertainty of the parameters for that best-fit model.
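One common recipe (not necessarily the preferred one) is to turn the per-model fits into weights, e.g. Akaike weights, and quote a weighted mixture of the individual posteriors, so that the spread between models enters the error budget. A minimal sketch for a single shared parameter, assuming Gaussian errors so that $\mathrm{AIC}\approx\chi^2+2k$, with placeholder numbers and variable names of my own choosing:

```python
import numpy as np

# per-model fit summaries for one shared parameter p (placeholder numbers):
# (best-fit value, statistical sigma, chi^2 of the fit, number of fitted parameters)
fits = [
    (1.02, 0.05, 12.3, 3),
    (0.95, 0.04, 15.1, 4),
    (1.10, 0.07, 11.8, 5),
]

# Akaike weights: AIC_i ~ chi2_i + 2*k_i, w_i proportional to exp(-dAIC_i / 2)
aic = np.array([chi2 + 2 * k for (_, _, chi2, k) in fits])
w = np.exp(-(aic - aic.min()) / 2.0)
w /= w.sum()

p = np.array([fit[0] for fit in fits])
sig = np.array([fit[1] for fit in fits])

# mixture mean and variance: per-model statistical term plus model-scatter term
p_mean = np.sum(w * p)
var = np.sum(w * (sig**2 + (p - p_mean) ** 2))
print(p_mean, np.sqrt(var))   # combined value and total (statistical + model) uncertainty
```

Whether this, full Bayesian model averaging over the MCMC chains, or simply quoting the best model plus the model-to-model scatter as a systematic is "preferred" is exactly the judgment call being asked about; the sketch only shows the bookkeeping.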
Help with volume of solids of revolution Posted: 15 May 2021 07:23 PM PDT I have a region $R$ defined by $y=x^2$, $y=2+x$ and $x=0$. What would the integrals be (no need to develop them) for the solid obtained by the revolution of $R$: $a)$ around the $x$-axis, integrating with respect to $1)$ $x$ and $2)$ $y$? $b)$ around the $y$-axis, integrating with respect to $1)$ $x$ and $2)$ $y$? I just started studying calculus by myself and I would like to check some answers. In item a.1 I found $\int_0^2\pi|x^4-(1+x)^2| dx$. Is that correct? What about the others?
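One way to check a candidate integral is to compare it against a brute-force estimate of the same volume. The sketch below is my own setup, assuming the region is bounded by $y=x^2$, $y=x+2$ and $x=0$ for $0\le x\le 2$ and is revolved about the $x$-axis; it estimates the volume by Monte Carlo and also evaluates a washer-type integrand numerically, so any proposed integrand can be compared against both numbers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Region R: 0 <= x <= 2, x**2 <= y <= x + 2, revolved about the x-axis.
# Monte Carlo: sample the box [0,2] x [-4,4] x [-4,4] and keep points whose
# distance r from the x-axis satisfies x**2 <= r <= x + 2.
N = 2_000_000
x = rng.uniform(0, 2, N)
y = rng.uniform(-4, 4, N)
z = rng.uniform(-4, 4, N)
r = np.hypot(y, z)
inside = (r >= x**2) & (r <= x + 2)
mc_volume = inside.mean() * (2 * 8 * 8)      # fraction of the box times its volume

# Washer formula (my assumption): V = pi * integral_0^2 [(x+2)^2 - (x^2)^2] dx,
# evaluated with a simple trapezoid rule.
xs = np.linspace(0, 2, 200001)
g = (xs + 2) ** 2 - xs**4
washer_volume = np.pi * np.sum((g[:-1] + g[1:]) / 2) * (xs[1] - xs[0])

print(mc_volume, washer_volume)   # by my hand evaluation both should be near 184*pi/15, about 38.5
```

The same Monte Carlo idea, with the roles of $x$ and the radius swapped, can be reused to sanity-check the integrals for revolution about the $y$-axis.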
How can we be sure that axioms are sufficient? Posted: 15 May 2021 07:23 PM PDT My question by itself is hard to ask and hard to answer. I don't mean to be ambiguous, but it is one of those philosophical questions that requires a bit of specification. Let me provide some context. Axioms are often used in two ways. The older and original concept seems to me like a top-down description. You can work with logical systems like numbers or lines on planes all you want, but when you try to backtrack your deductions, you come across axioms that cannot be traced further back, much like how atoms cannot be split further. Axioms are also often used to describe assertions that systems are built from, like building blocks. I call these bottom-up axioms. That there may be no distinction between the two types is a philosophical question for another thread. Let me use Euclidean axioms as an example. Mathematicians thought they were top-down for hundreds of years until they realized that the first four can be used independently of the fifth. Then they realized that you can "play" with axioms and basically develop novel systems that may or may not correspond to real-world things. A similar process with number systems led to the development of modern algebra. After trying to find the most fundamental statements underlying number systems, mathematicians realized you can basically mix and match properties of operations to create new systems. But how did these mathematicians become convinced that their axioms were "true" in the first place? For further clarification, I have no doubt that the axioms, in whatever direction, used by mathematicians do indeed describe mathematical objects very well. But I am referring to top-down axioms in my question. With this additional information, maybe my initial question is easier to understand. How did/do mathematicians deduce that their axioms were/are sufficient for describing whatever abstract object they are pointing to? I believe that this answer boils down to experimentation. Assert the axioms and, if anything "breaks", add more axioms, roughly speaking. Further independent study of the axioms themselves can provide more intuitive confirmation. At some point, you become so sure of the axioms that you might as well define objects by their axioms. Hence, this is why top-down axioms and bottom-up axioms are used interchangeably. If you could direct me to any relevant texts, that would be very welcome.
derivative of square root of quadratic form with respect to matrix Posted: 15 May 2021 07:22 PM PDT How would I go about calculating $ \frac{d\textit{$\alpha$}}{d\boldsymbol{A}}$ for $$\textit{$\alpha$} = \sqrt{\boldsymbol{x}^{\intercal}\boldsymbol{A}\boldsymbol{x}}$$ where $\alpha$ is a scalar, $\boldsymbol{x}\in\mathbb{R}^{n}$, and $\boldsymbol{A}\in\mathbb{R}^{n\times n}$? Sorry if this question is straightforward. I'm trying to implement an algorithm and came across this equation. I'm not familiar with matrix and vector derivatives. Also, any links to a comprehensive introduction to matrix/vector calculus would be appreciated.
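For what it is worth, the identity $\partial(x^\top A x)/\partial A = x x^\top$ (in the layout where the derivative has the same shape as $A$) together with the chain rule suggests $\frac{d\alpha}{dA} = \frac{x x^\top}{2\sqrt{x^\top A x}}$; references such as The Matrix Cookbook collect identities of this kind. A finite-difference sketch to check the formula against brute force (random data of my own choosing, with $A$ made positive definite so the square root is real):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
x = rng.normal(size=n)
M = rng.normal(size=(n, n))
A = M @ M.T + n * np.eye(n)          # positive definite, so x^T A x > 0

def alpha(A):
    return np.sqrt(x @ A @ x)

# candidate analytic derivative: x x^T / (2 * sqrt(x^T A x))
analytic = np.outer(x, x) / (2.0 * alpha(A))

# central finite differences, one matrix entry at a time
h = 1e-6
numeric = np.zeros_like(A)
for i in range(n):
    for j in range(n):
        E = np.zeros_like(A)
        E[i, j] = h
        numeric[i, j] = (alpha(A + E) - alpha(A - E)) / (2 * h)

print(np.max(np.abs(numeric - analytic)))   # should be tiny (around 1e-8 or smaller)
```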
Possible to prove strong induction implies weak induction? Posted: 15 May 2021 07:36 PM PDT I have managed to formally prove (using my own system, not ZFC) that weak induction implies strong induction. I started with Peano's 5 axioms for the natural numbers, and constructed (in order) the addition function, the partial ordering and the strict ordering as sets of ordered n-tuples of $N$. Then I proved that strong induction will hold: $\forall P\subseteq N: [0\in P ~\land~ \forall x \in N:[\forall y \in N:[y \le x \implies y\in P] \implies x+1 \in P]] \implies P = N$. What about proving the converse? How can I show that strong induction implies weak induction when I needed weak induction to construct addition and the orderings?
Formality of Treating Differential Operators as a Variable in Solving Diff Eqs. Posted: 15 May 2021 07:16 PM PDT There are methods in an engineering book by an Indian author where the particular solution of a linear differential equation of higher order is found using the differential operator $D$. The author justified that in the case where the diff. eq. is $(D^3 - 3D + 2)y = e^x$, for the particular solution simply put $y = \frac{e^x}{D^3-3D+2}$ with $D=1$. Clearly the denominator becomes zero, so the author stated another method for that particular case: differentiate the denominator with respect to $D$ and multiply the numerator by $x$, $$y= x\,\frac{e^x}{\frac{d}{dD}(D^3-3D+2)}\quad\text{at } D=1.$$ So by induction the general formula for finding the particular solution for that forcing function becomes $$y = x^n\,\frac{e^{ax}}{\frac{d^n f(D)}{dD^n}}\quad\text{at } D=a,$$ repeating until the denominator is not $0$ upon substitution. My question is, why is this valid? Similarly, for the particular solution of a linear differential equation with forcing function $x^n$, simply use the negative binomial theorem on $f(D)$. Example: $(D^2+D)y = x^2$. Put $y = (D^2+D)^{-1}(x^2)$. By separating $\frac{1}{D}$ and using the binomial theorem, $y = \frac{1}{D}(1-D+D^2-\cdots)(x^2)$. For $D^n$ with $n > 2$ the $x^2$ becomes $0$, and the answer becomes $y = \frac{1}{D}(x^2-2x+2)$, then operate with $\frac{1}{D}$. Why are differential operators allowed to be treated in this way? Also, what particular subject or topic can I read to understand things like these?
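A quick way to gain confidence that these operator shortcuts at least produce correct particular solutions is to verify them symbolically. For the first equation, $D=1$ is a double root of $D^3-3D+2=(D-1)^2(D+2)$, so applying the stated rule twice gives $y_p = x^2 e^x/6$ (that value is my own application of the rule, not taken from the book); for the second, carrying out the final $\frac1D$, i.e. an integration, gives $y_p = \frac{x^3}{3}-x^2+2x$. A sympy sketch checking both:

```python
import sympy as sp

x = sp.symbols('x')
y1 = x**2 * sp.exp(x) / 6        # candidate particular solution of y''' - 3y' + 2y = e^x
y2 = x**3 / 3 - x**2 + 2 * x     # candidate particular solution of y'' + y' = x^2

lhs1 = sp.diff(y1, x, 3) - 3 * sp.diff(y1, x) + 2 * y1
lhs2 = sp.diff(y2, x, 2) + sp.diff(y2, x)

print(sp.simplify(lhs1 - sp.exp(x)))   # 0, so y1 solves the first equation
print(sp.simplify(lhs2 - x**2))        # 0, so y2 solves the second equation
```

As for what to read: this material usually appears under names like "inverse operator method" or "annihilator/operator methods" in ODE texts, and the formal justification lives in the theory of linear constant-coefficient operators and, more broadly, operational calculus.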
How would this lambda function reduce? Posted: 15 May 2021 07:08 PM PDT Let's say I have $$\pmb{T} = \lambda ab.a$$ $$\pmb{F} = \lambda ab.b$$ Now I can define $$\pmb{N} = \lambda p.\,p\,F\,T$$ My question is: if I were to substitute into $$\pmb{N} = \lambda p.\,p\,F\,T$$ to get $$\pmb{N} = \lambda p.\,p\, (\lambda ab.b)\, (\lambda ab.a),$$ how does it get reduced from there? (And on another note, is there a way to (1) align the equations in TeX so the
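Setting the TeX question aside, the reduction itself can be traced mechanically: $N\,T = T\,F\,T \to F$ and $N\,F = F\,F\,T \to T$, so $N$ behaves as Boolean negation on these encodings. A small Python sketch mirroring the definitions (object identity is used only to display which combinator comes back):

```python
T = lambda a: lambda b: a          # T = λab.a
F = lambda a: lambda b: b          # F = λab.b
N = lambda p: p(F)(T)              # N = λp. p F T

def name(c):
    # identify a returned combinator by object identity
    return {id(T): "T", id(F): "F"}.get(id(c), "?")

print(name(N(T)))   # F, because T F T reduces to F
print(name(N(F)))   # T, because F F T reduces to T
```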
Uniform Convergence of series to function on subinterval Posted: 15 May 2021 07:42 PM PDT If $\sum_{k=0}^\infty {a_k}x^k$ has an interval of convergence $(-1, 1)$, and $f(x)=\sum_{k=0}^\infty {a_k}x^k $ for $x\in[0,1)$, then the series does not converge uniformly to $f(x)$ on $[0,1)$. I'm trying to either come up with a proof for this statement or a counterexample. I was not totally sure where to start a proof, so I was looking for counterexamples, but nothing jumped out at me.
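One concrete case to experiment with is $a_k = 1$, i.e. $f(x)=\frac{1}{1-x}$ on $(-1,1)$: the tail $\sum_{k>N}x^k = \frac{x^{N+1}}{1-x}$ is unbounded on $[0,1)$ for every $N$, so convergence is not uniform there. Of course a single example proves nothing about the general statement; the sketch below only illustrates the failure numerically (the evaluation point is my choice).

```python
import numpy as np

def partial_sum(x, N):
    # S_N(x) = sum_{k=0}^{N} x**k for the geometric series
    return (1 - x ** (N + 1)) / (1 - x)

f = lambda x: 1.0 / (1.0 - x)

x = 1 - 1e-6                      # a point near the right end of [0, 1)
for N in [10, 100, 1000]:
    print(N, f(x) - partial_sum(x, N))   # the error stays of order 1e6; it does not shrink with N
```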
Increasing sequence whose supremum is $x\in X$ Posted: 15 May 2021 07:37 PM PDT Let $X$ be an ordered set. Let $x$ be a point in $X$ without an immediate predecessor. That is, there does not exist a point $a<x$ in $X$ such that $(a, x)=\emptyset$. Prove that there exists an increasing sequence in $X$ whose supremum is $x$. Edit: Seems like this is untrue if there is no stricter condition on $X$. Then what if $X$ is only countably infinite? If the claim is true for a countably infinite $X$, can one explicitly construct such a sequence? As for this, I've tried to construct a sequence based on a bijection $f:\mathbb{Z}_+\to \{a\in X\,|\, a<x\}$, but failed to construct one whose supremum is exactly $x$.
Prove or disprove $\langle a,b,c\rangle$ is free Posted: 15 May 2021 07:06 PM PDT Suppose that $\langle a,b\rangle$, $\langle b,c\rangle$, and $\langle a,c\rangle$ are free; is it true then that $\langle a,b,c\rangle$ is a free group of rank 3? We assume that $a,b,c$ are distinct group elements. I know this is false if only $\langle a,b\rangle$ and $\langle b,c\rangle$ are free; for example we can take $a=\sigma_1$, $b=\sigma_2$, and $c=\sigma_3$ in the braid group $B_n$ for $n>3$.
how to interpret decreasing negative percentages Posted: 15 May 2021 07:08 PM PDT Here is a table of decreasing negative percentages; how would this be interpreted? I think more items were shipped on time in July than in May or June. Is this correct?
General formula for coplanar vectors in higher dimensions Posted: 15 May 2021 07:19 PM PDT If I have 3 vectors, a, b and c in 3D, I can check if they fulfill $c=\alpha a + \beta b$ (i.e. if they lie in a 2D plane) for some real parameters $\alpha$ and $\beta$ by checking if $(a\times b)\cdot c = 0$. If I have 3 vectors in n dimensions, is there a similar, general formula to check if $c=\alpha a + \beta b$? Thank you!
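One standard replacement for the cross-product test in $n$ dimensions is a rank condition: $c=\alpha a+\beta b$ has a solution exactly when appending $c$ to $a,b$ does not increase the rank, equivalently (for independent $a,b$) when the Gram determinant of $(a,b,c)$ vanishes. A small numpy sketch of the rank version (the tolerance is an arbitrary choice):

```python
import numpy as np

def in_span(a, b, c, tol=1e-10):
    """True if c = alpha*a + beta*b for some real alpha, beta."""
    ab = np.vstack([a, b])
    abc = np.vstack([a, b, c])
    return np.linalg.matrix_rank(abc, tol) == np.linalg.matrix_rank(ab, tol)

a = np.array([1.0, 2.0, 0.0, -1.0])
b = np.array([0.0, 1.0, 1.0, 3.0])
print(in_span(a, b, 2 * a - 0.5 * b))            # True: c is a combination of a and b
print(in_span(a, b, np.array([1.0, 0, 0, 0])))   # False: e1 is not in span{a, b}
```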
$\sigma$-algebra of coin tosses until first head Posted: 15 May 2021 07:17 PM PDT Suppose that a coin is tossed until a heads is obtained. For this situation, what would be the triple $(\Omega,\mathcal F, P)$? For the sample space, I suppose it would be $$\Omega := \{H, TH, TTH, TTTH, ...\}$$ Now, to properly define a probability measure such that for $n$ tosses we have $P( TT...TH)= 1/2^n$, I suppose we need to define an algebra where this probability measure works, and then extend using Carathéodory. Am I correct in assuming this? If so, what would be the algebra from which I'd extend in order to use Carathéodory? Or does the $\sigma$-algebra of all subsets (i.e. $2^\Omega$) work as my $\mathcal F$?
directional derivative geometric proof [closed] Posted: 15 May 2021 07:00 PM PDT I understand the proof with limits, but it doesn't feel intuitive. Is there a geometric proof? I just learned about them in school, so please don't use fancy notation.
Which is correct, singular or plural? "interval $(-2, 0)\cup(2, \infty)$" vs "intervals $(-2, 0)\cup(2, \infty)$" Posted: 15 May 2021 07:20 PM PDT I am interested in understanding the correct grammar for the use of the term "interval" vs. "intervals" (plural). For example: which of these is correctly stated?
I understand that the union of two intervals is not necessarily an interval (this can be shown by a counterexample, since the intersection of two intervals may be empty, i.e. they may be disjoint). That being the case, it seems that when defining the solution set of an inequality using interval notation the use of "intervals" is more appropriate than "interval." But I have seen both statements. Which is correct?
Van Kampen on two tori joined at a point. Posted: 15 May 2021 07:17 PM PDT Let $X$ be the space given by joining two disjoint tori at a single point p. By Van Kampen this can be split into the spaces of the two tori, with their intersection the point (n.b. you can also consider adding a "piece" of the other torus to each subspace, but it still deforms to these sets). Van Kampen then gives us: $$\pi_1(U) = \langle a,b ; ab=ba \rangle$$ $$\pi_1(V) = \langle c,d ; cd=dc \rangle$$ $$\pi_1(U\cap V) \cong 0$$ Now, from my notes this gives us that $\pi_1(X=U\cup V) = \langle a,b,c,d ; ab=ba, cd=dc \rangle$. However, in thinking about this it occurs to me that by our group action surely $ac = ca$ (informally, attaching two loops around the separate tori is the same regardless of which "direction" you go around), so how do I include this relation, considering it can't be inherited from the inclusion on the intersection? Additionally, how do I write it?
How to prove $2x^2 + x + 1$ is injective on the domain and codomain of all integers Posted: 15 May 2021 07:09 PM PDT I need to determine if $2x^2+x+1$ is injective over the domain and codomain of all integers, and if so show a proof of it. Based on what I can tell graphically I have assumed that the claim is true and now need help to prove it. So I know that to prove a function is injective we assume that $\forall x,y \in \mathbb{Z},\ f(x)=f(y) \implies x=y$. And this is what I've done thus far: $2x^2+x+1=2y^2+y+1$; subtracting 1 and factoring: $x(2x+1)=y(2y+1)$. I know I want to show that $x=y$, and to do that here it seems I need to show the other factors are equivalent so that they vanish and I'm left with just $x=y$, but I can't seem to figure out how to do that. I've graphed the equation as well and it appears to be injective on the integers as far as I can (and have the patience to) manually test, but that doesn't really help me write the proof, just that I know I need to end up with $x=y$ at the end and not a contradiction. Any help would be greatly appreciated.
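This is not a proof, but a brute-force collision check over a finite window of integers is a cheap sanity test before investing in the algebra (the window size is arbitrary):

```python
def f(x):
    return 2 * x * x + x + 1

N = 10_000
seen = {}
for x in range(-N, N + 1):
    v = f(x)
    assert v not in seen, f"collision: f({seen[v]}) == f({x})"
    seen[v] = x
print("no collisions for |x| <=", N)
```

For the proof itself, one route is to note that $f(x)-f(y)=2(x^2-y^2)+(x-y)=(x-y)(2x+2y+1)$, and the second factor is odd, hence never zero for integers.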
Limit of $\left(y+\frac23\right)\mathrm{ln}\left(\dfrac{\sqrt{1+y}+1}{\sqrt{1+y}-1} \right) - 2 \sqrt{1+y}$ Posted: 15 May 2021 07:22 PM PDT In Scott Dodelson's 'Modern Cosmology', the limit for large $y$ of the following solution to the Mészáros equation is considered: $$\left(y+\frac23\right)\mathrm{ln}\left(\dfrac{\sqrt{1+y}+1}{\sqrt{1+y}-1} \right) - 2 \sqrt{1+y}.$$ Dodelson and many other references state that the large-$y$ behaviour of this expression is $y^{-3/2}$. I have been trying to show this in any way possible but I'm only hitting dead ends. WolframAlpha gives a Puiseux series of the expression which makes this behaviour explicit, but I do not understand how to get to this series either. Could you please help me with this?
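A numerical check of the claimed power law is a useful warm-up before fighting the series expansion: multiply the expression by $y^{3/2}$ and watch it approach a constant as $y$ grows. By a hand expansion in powers of $1/\sqrt{1+y}$ I get the constant $8/45$, but the code below only needs the product to stabilise; high precision is used because the expression is a small difference of large terms.

```python
from mpmath import mp, mpf, sqrt, log

mp.dps = 60   # high precision: the expression suffers heavy cancellation at large y

def expr(y):
    s = sqrt(1 + y)
    return (y + mpf(2) / 3) * log((s + 1) / (s - 1)) - 2 * s

for y in [mpf(10) ** k for k in range(2, 7)]:
    print(y, expr(y) * y ** mpf(1.5))   # should settle near 8/45, about 0.1778
```

For the analytic route, writing $s=\sqrt{1+y}$ and using $\ln\frac{s+1}{s-1}=2\,\mathrm{artanh}(1/s)=2\left(s^{-1}+\tfrac13 s^{-3}+\tfrac15 s^{-5}+\cdots\right)$ is one way to generate such a large-$y$ (Puiseux-type) series by hand.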
cardinality of vector space with uncountable basis Posted: 15 May 2021 07:37 PM PDT Let $V$ be a vector space over $\mathbb{R}$ with a basis $\mathcal{F}$ of cardinality $2^{\aleph_0}$. Show that $|V| = |\mathbb{R}|$.
Clearly, $|\mathbb{R}| \leq |V|$ as the map $k\mapsto kb,$ where $ b\in\mathcal{F},$ is injective. Since $V$ is a vector space over $\mathbb{R}$ with basis $\mathcal{F},$ any element of $V$ can be written as a unique finite linear combination of elements of $\mathcal{F}$ (with zero coefficients ignored, unless $v=0$, in which case $0=0 b_1$ for any $b_1\in \mathcal{F}$). Thus we can map each element $v\in V$ to the finite subset $S_v$ of the set $ \mathcal{F}\times \mathbb{R}$ such that if $\alpha \neq 0$ is the corresponding coefficient of $b\in\mathcal{F}$ in the unique linear combination corresponding to $v,$ then $(b,\alpha)\in S_v$ (so if $v = 0,$ map $v$ to the subset $\emptyset$). This map is indeed injective, as if $S_v = S_w $ for some $v, w\in V,$ then the linear combination corresponding to $S_v,$ which is $v,$ equals the linear combination corresponding to $S_w,$ which is $w.$ Observe that $|\mathcal{F}\times \mathbb{R}| = |\mathcal{F}||\mathbb{R}| = 2^{\aleph_0} 2^{\aleph_0} = 2^{\aleph_0+\aleph_0} = 2^{\aleph_0}.$ For a set $X,$ let $S(X)$ denote the set of finite subsets of $X$ and $S_n(X)$ denote the set of finite subsets of $X$ with cardinality $n.$ We now claim that if $X$ is infinite, then $|S(X)|=|X|.$ Clearly, the map $X\to S(X) : x\mapsto \{x\}$ is injective, so $|X|\leq |S(X)|.$ For any $n\geq 2, |X|^n = |X|,$ which can be shown by induction (the base case of $|X|^2 = |X|$ can be shown using Zorn's lemma). Also, the map $X^n \to \cup_{1\leq k\leq n}S_k(X) : (x_1,\cdots, x_n)\mapsto \{x_1,x_2,\cdots, x_n\}$ is surjective, so $|S_n(X)|\leq |\cup_{1\leq k\leq n} S_k(X)|\leq |X|^n = |X|.$ Therefore, $|S(X)| = |\cup_{n\geq 0} S_n(X)| =1 + |\cup_{n\geq 1} S_n(X)| \leq \sum_{n=1}^\infty |X|^n = \sum_{n=1}^\infty |X| = \aleph_0 |X| = |X|$ (indeed $|X|\leq \aleph_0|X| \leq |X|\cdot |X| = |X|$). Thus, $|S(X)| = |X|.$ Thus $|S(\mathcal{F}\times \mathbb{R})| = |\mathcal{F}\times \mathbb{R}| = 2^{\aleph_0}.$ So $|V|\leq |S(\mathcal{F}\times \mathbb{R})| = 2^{\aleph_0}$ by the described injection $v\mapsto S_v$. Thus, $|V|\leq |\mathbb{R}|.$ By the Cantor-Schröder-Bernstein theorem, $|V| = |\mathbb{R}|.$
finding the minimum value of a multivariable function Posted: 15 May 2021 07:00 PM PDT Let $f$ be a function of three variables such that $$f(X)=(X-A,X-A)+(X-B,X-B)+(X-C,X-C)$$ where $X=(x,y,z)$, $(X,Y)$ denotes the inner product, and the three vectors $$\cases{A=(a_1,a_2,a_3) \\ B=(b_1,b_2,b_3) \\ C=(c_1,c_2,c_3)}$$ are distinct. Find the minimum value of this function.
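Before doing any calculus, a numerical experiment is cheap: for a sum of squared distances the minimiser should be the centroid $\frac{A+B+C}{3}$ (the gradient is $2\big(3X-(A+B+C)\big)$, which vanishes exactly there), and a generic optimiser can be compared against it. A sketch with random points (scipy and the random seed are my own choices):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
A, B, C = rng.normal(size=(3, 3))      # three (almost surely distinct) points in R^3

def f(X):
    # f(X) = <X-A, X-A> + <X-B, X-B> + <X-C, X-C>
    return sum(np.dot(X - P, X - P) for P in (A, B, C))

centroid = (A + B + C) / 3
res = minimize(f, x0=np.zeros(3))      # generic numerical minimisation

print(res.x, centroid)                 # the two should agree to optimiser precision
print(res.fun, f(centroid))            # minimum value: sum of squared distances to the centroid
```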
Another question in the proof of A&M Thm 9.5 Posted: 15 May 2021 07:47 PM PDT I have a question about the proof of Thm 9.5. In the middle of the proof, it considers the ring of integers $A$ (the integral closure of $\Bbb Z$ in $K$, where $K/\Bbb Q$ is a finite field extension). Then it says that $A$ is integrally closed by (5.5)
From this, we can only conclude that $A$ is integrally closed in $K$, not in its field of fractions. Why can one say $A$ is integrally closed?
How does the inequality with the modulus function hold? Posted: 15 May 2021 07:15 PM PDT Consider the inequality $|a|\cdot|b| < \varepsilon$. (1) Here $\varepsilon$ is infinitesimal. If $|b| < k$ for some constant $k$, (2) then is it possible that $|a|\cdot k < \varepsilon$? (3) If yes, then how? My concern is that, as $\varepsilon$ is infinitesimal, if we take $k$ to be 10 then I don't see inequality (3) holding, because if $\varepsilon$ is close to zero then the L.H.S. value will become much greater than the R.H.S., so the inequality sign should be the opposite of what is shown in (3). I encountered the above doubt while going through https://tutorial.math.lamar.edu/classes/calcI/defnoflimit.aspx where, in example 3, this reasoning becomes a major factor in deciding the value of $\delta$.
Complexity of representing all satisfying assignments Posted: 15 May 2021 07:41 PM PDT I am not formally educated in complexity theory, hence this question. In which complexity class should the problem of representing all satisfying assignments of a Boolean system (equivalently, a Boolean formula) belong? This is not a decision problem because we want to know all satisfying assignments. It is also not a counting problem, because the set of all satisfying assignments can be far bigger than the set of partial assignments (such as those defined by implicants of the formula) which represent all satisfying assignments.
Simplifying a Hessian Posted: 15 May 2021 07:35 PM PDT Let $b: \mathbb R^m \to \mathbb R$ be a differentiable function, $A \in \mathbb R^{d \times m}\;$ and $x \in \mathbb R^d$. Consider the function $f(A) = b(A^T x)$. Let us denote the argument of $b$ as $\theta$. We would like to compute the Hessian of $f$. My calculation suggests that $$ \frac{\partial^2 f}{\partial A_{uv} \,\partial A_{st}} = x_s \,x_u \,\frac{\partial^2 b}{\partial \theta_v \,\partial \theta_t} $$ where the last term is evaluated at $\theta = A^T x$. Suppose we would like to write this as a Hessian matrix w.r.t. $\text{vec}(A) \in \mathbb R^{md}\;$ which is obtained by vertically concatenating the columns of $A$. It seems to me that this Hessian can then be written as $$ \Big(\frac{\partial^2 b}{\partial \theta_v \,\partial \theta_t}\Big)_{v,t = 1}^m \otimes xx^T = \nabla^2 b \,\otimes \, xx^T, $$ where $\otimes$ is the Kronecker product and $\nabla^2 b = (\frac{\partial^2 b}{\partial \theta_v \,\partial \theta_t})$ is the Hessian matrix of $b$. I did convince myself that this is true, but checking it is an indexing mess and I might have fooled myself into believing it.
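A finite-difference check on a small instance is a reasonable antidote to indexing doubt. The sketch below picks a concrete $b$ with a known Hessian ($b(\theta)=e^{w^\top\theta}$, my own choice), builds the Hessian of $f$ with respect to $\operatorname{vec}(A)$ (column-major stacking) by central differences, and compares it with $\nabla^2 b \otimes xx^\top$:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 3, 2
x = rng.normal(size=d)
A = rng.normal(size=(d, m))
w = rng.normal(size=m)

def b(theta):                        # concrete test function b(theta) = exp(w . theta)
    return np.exp(w @ theta)

def hess_b(theta):                   # its analytic Hessian: exp(w . theta) * w w^T
    return np.exp(w @ theta) * np.outer(w, w)

def f(vec_a):                        # f(A) = b(A^T x), with A rebuilt from vec(A) (column-major)
    return b(vec_a.reshape(d, m, order="F").T @ x)

v0 = A.flatten(order="F")
nv, h = v0.size, 1e-4
H = np.zeros((nv, nv))
for i in range(nv):
    for j in range(nv):
        ei = np.zeros(nv)
        ej = np.zeros(nv)
        ei[i] = h
        ej[j] = h
        H[i, j] = (f(v0 + ei + ej) - f(v0 + ei - ej)
                   - f(v0 - ei + ej) + f(v0 - ei - ej)) / (4 * h * h)

claimed = np.kron(hess_b(A.T @ x), np.outer(x, x))
print(np.max(np.abs(H - claimed)))   # small (finite-difference error only), supporting the formula
```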
$O\left(\dfrac{1}{x}\right)+O(1)=O(1)$? Posted: 15 May 2021 07:32 PM PDT Intuitively I think the question posed in the title is rather obvious. That is, at least when $x>1$, $O\left(\dfrac{1}{x}\right)+O(1)$ is bounded by some constant times $1$ for all such $x$. But how would you prove the statement in the title? I tried using a working definition that I found in a YouTube lecture: If $f$ and $g$ are functions that take on positive values, defined for all $x$ in some set $S$, then we say $f$ is $O(g)$ and write $f(x)=O(g(x))$ if there exists a constant $K$ such that $f(x)\le K\cdot g(x)$, $\forall x\in S$. I tried the following: Let $f_{1}(x)=O(1/x)$ and $f_{2}(x)=O(1)$. Then we know that $f_{1}(x)\le K_{1}\cdot 1/x$ and in particular $$f_{1}(x)\le K_{1}\cdot 1/x < K_{1}\cdot 1$$ $\forall x>1$. We also know that $f_{2}(x)\le K_{2}\cdot 1$, which implies that $$f_{1}(x)+f_{2}(x)\le K_{1}\cdot 1/x + K_{2}\cdot 1\le K_{1}\cdot 1+K_{2}\cdot 1 = K\cdot 1$$ $\forall x>1$, where $K=K_{1}+K_{2}$. Thus $$f_{1}(x)+f_{2}(x)=O(1)$$ $\forall x>1$, meaning $$O(1/x)+O(1)=O(1).$$
What is the probability of picking 3 aces and 2 kings out of a 54-card deck (52-card standard with jokers)? Posted: 15 May 2021 07:01 PM PDT A fundamental exercise is to calculate the probability of picking 3 aces and 2 kings while randomly picking a 5-card hand out of a 52-card deck. Our sample space would be $\frac{52\cdot51\cdot50\cdot49\cdot48}{5!}$, and the event would be $\frac{4!}{3!} \cdot \frac{4!}{2!}$. So far so good. But what if we add the jokers to the deck? The probability for the hand to contain a joker would be the same as for any other card (and of course the sample space should increase to 54, etc.), but since the joker can be anything, I can't find a way to handle the rest. Can I assume a $\frac{6!}{3!}$ event probability for the aces, let's say, since there are now 6 "possible aces" (the 4 "real" aces and the 2 jokers) in the deck? But if that holds true, I can't concurrently have 6 possible kings.