
Quantitative Interview questions and answers

The second one was Bayes' theorem. I'm not sure about the first one: doors 2 and 3 each give an infinite series, so the expectation is obtained by adding up their individual times.

What are the correct answers?
 

Attachments

  • IMG_20150107_201825449.jpg
I have found this one (interview question for junior quant)

You have to walk from point A to point B 10 times, blindfolded. Each time, before you start walking, a fair coin is tossed: if it comes up heads, a wall is present in the middle (situation 2 below), while if it comes up tails there is no wall (situation 1 below).

[Image: junior_researcher.gif]

As you are blindfolded, you do not know whether the wall is present. Even though you are blindfolded, you can move in perfectly straight lines toward any point you choose. Can you give a prescription for how to walk from A to B so that the total distance you cover is as small as possible? What distance do you expect to cover in those 10 walks?

In any case, the expected distance can be obtained exactly.

Set up coordinates as in the picture: \(A=(-1,0)\), \(B=(1,0)\), and let the wall, when present, run along the midline from \((0,0)\) up to \((0,1)\). The strategy is to walk in a straight line toward a point \((0,h)\) on the midline; write \(x\) for the length of that leg, so \(x=\sqrt{1+h^2}\) and \(h=\sqrt{x^2-1}\).

When the wall is not present, we pass through \((0,h)\) and continue straight to \(B\), for a total distance of \(2x\). When the wall is present, we hit it at height \(h\), walk along it to its top at \((0,1)\), and then go straight to \(B\); by the Pythagorean theorem, this distance is \(x+1-\sqrt{x^2-1}+\sqrt{2}\). Each case has probability \(\frac12\), so minimizing the expected distance means minimizing the sum of these two distances: \(3x+1-\sqrt{x^2-1}+\sqrt{2}\). The derivative of this function is \(3-\frac{x}{\sqrt{x^2-1}}\), and it vanishes at \(x=\frac{3}{2\sqrt{2}}\). The distance we expect to cover in 10 walks is therefore

\(10\left\{\frac12\cdot 2x+\frac12\left[x+1-\sqrt{x^2-1}+\sqrt{2}\right]\right\}=5+15\sqrt{2}\approx 26.2132.\)

Furthermore, the angle mentioned in the previous replies is \(\arccos\left(\frac{2\sqrt{2}}{3}\right)\), approximately \(19.47^\circ\).
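As a quick check, the minimization can be confirmed numerically; a minimal sketch, assuming the geometry above (\(A\) and \(B\) two units apart, wall of unit height at the midpoint; `expected_total` is a name I made up):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Geometry implied by the formulas above: A = (-1, 0), B = (1, 0), and the
# wall (when present) runs along x = 0 from (0, 0) to (0, 1). We aim at the
# point (0, h) with h = sqrt(x^2 - 1), where x is the length of the slanted leg.

def expected_total(x, walks=10):
    no_wall = 2 * x                                  # pass (0, h), then straight to B
    wall = x + 1 - np.sqrt(x**2 - 1) + np.sqrt(2)    # up the wall, then around its top
    return walks * 0.5 * (no_wall + wall)

res = minimize_scalar(expected_total, bounds=(1.0001, 2.0), method="bounded")
print(res.x, 3 / (2 * np.sqrt(2)))                   # both ~1.06066
print(res.fun, 5 + 15 * np.sqrt(2))                  # both ~26.2132
print(np.degrees(np.arccos(2 * np.sqrt(2) / 3)))     # ~19.47 degrees
```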
 
Nifty question. My gut was to say that the answer here is \(\frac{1}{4}\), since we're conditioning on the report of the roll, which gives us more information to work with. A closer look shows that this is a reasonable answer to the question, but not the only possible one....

This is actually a conditional probability:

\(P(\text{rolls 6} \mid \text{reports 6}) = \frac{P(\text{rolls 6},\ \text{reports 6})}{P(\text{reports 6})}\)

Here's where the assumptions begin to come in. For one thing, we assume that the roll and the decision to lie are independent; under this assumption, the numerator is easy:

\(P(\text{rolls 6},\ \text{reports 6}) = P(\text{rolls 6},\ \text{tells the truth}) = \frac{1}{6}\cdot\frac{1}{4} = \frac{1}{24}\)

The denominator is trickier. This is where the biggest assumptions come in.

\(P(\text{reports 6}) = P(\text{rolls 6},\ \text{tells the truth}) + P(\text{rolls non-6},\ \text{lies},\ \text{reports 6})\)

We've already computed the first term, but how do we compute the second? We have to know something about how the person lies.

Are the person's lies always plausible? (That is, would the person lie by saying a 3.4 had been rolled, or a 529?) Given that they're always plausible, are they "fair?" (That is, is the person's lying strategy to choose a plausible lie at random with an equal probability of each lie, or is there some other distribution to the lies? Is the lie even random, aside from the initial throw of the die?)

For the sake of convenience, let's assume that, when the person lies, he or she does so by reporting some other plausible result at random, with equal probability of each lie. Then:

\(P(\text{reports 6}) = \frac{1}{24} + \frac{5}{6}\cdot\frac{3}{4}\cdot\frac{1}{5}=\frac{1}{24}+\frac{1}{8}=\frac{1}{6}\)

Overall, then:
\(P(\text{rolls 6} \mid \text{reports 6}) = \frac{1/24}{1/6} = \frac{1}{4}\), as expected.

For the sake of illustration, though, let's work the problem under a different lying strategy: suppose the person always says "6" when lying about a non-6. (We don't really care what he does when lying about a 6.) Then:

\(P(\text{reports 6}) = \frac{1}{24} + \frac{5}{6}\cdot\frac{3}{4}\cdot 1 = \frac{1}{24} + \frac{5}{8} = \frac{2}{3}\)

Under this alternative assumption:

\(P(\text{rolls 6} \mid \text{reports 6}) = \frac{1/24}{2/3} = \frac{1}{16}\)

As you can see, how you assume the person lies can have a rather dramatic impact on the answer....
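Since the answer hinges on the lying model, here is a quick Monte Carlo sketch of both models; it follows the computation above in taking \(P(\text{tells the truth}) = \frac14\) (the `simulate`, `truth_prob`, and `always_six` names are mine):

```python
import random

def simulate(truth_prob=0.25, always_six=False, trials=1_000_000):
    """Estimate P(rolled 6 | reports 6) under a given lying model."""
    reported6 = rolled6_and_reported6 = 0
    for _ in range(trials):
        roll = random.randint(1, 6)
        if random.random() < truth_prob:      # tells the truth
            report = roll
        elif always_six:                      # always claims "6" when lying about a non-6
            report = 6 if roll != 6 else random.choice([1, 2, 3, 4, 5])
        else:                                 # uniform choice among the five plausible lies
            report = random.choice([r for r in range(1, 7) if r != roll])
        if report == 6:
            reported6 += 1
            rolled6_and_reported6 += (roll == 6)
    return rolled6_and_reported6 / reported6

print(simulate())                  # ~0.25   (uniform lies)
print(simulate(always_six=True))   # ~0.0625 (always says "6")
```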
Just joined this community. Your solution is boggling my mind. It says the truth is told 3 out of 4 times, but in your solution to the first part you made it \(P(\text{tells the truth}) = \frac{1}{4}\). Is there a reason for that, or am I getting something mixed up? Thanks.
 
3. (GS) There are 5 thieves, numbered 1 through 5, trying to divide 100 gold coins using this algorithm: thief 1 proposes a way to divide the money; if more than 50% of them (including the proposing thief) agree with his method, it is done. If not, they kill the first thief and the second thief proposes his own division. If you are the first thief, what method will you use to divide the money?
#1 reasons backwards:
If only #4 and #5 were left, #4 would need both votes, but #5 would vote no regardless (once #4 is dead, #5 keeps all 100). So #4 would die in that game, and he will therefore vote yes to anything #3 proposes.
With #3, #4, #5 left, #3 keeps all 100: his own vote plus #4's is a majority.
With #2, #3, #4, #5 left, #2 needs two more votes. Under #3's division, #4 and #5 get nothing, so one coin each buys their votes, and #2 keeps 98.
With all five, #1 needs two more votes. #3 gets nothing under #2's division, so one coin buys his vote; #4 or #5 gets one coin under #2, so two coins buy one of them.

So #1 proposes 97 for himself, 1 for #3, and 2 for #4 (or #5), and the proposal passes 3-2.

I liked this one.
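This backward induction is easy to mechanize; below is a minimal sketch (the `solve` function and the voting convention, that a voter accepts only when strictly better off, so ties go against the proposer, are my assumptions):

```python
DEAD = -1  # sentinel: this thief is killed in the continuation game

def solve(k, coins=100):
    """Payoffs in the k-thief game (index 0 = current proposer).
    A voter accepts only if strictly better off than in the continuation
    game, so the proposer must outbid continuation payoffs by one coin;
    a voter facing death in the continuation accepts any offer."""
    if k == 1:
        return [coins]
    cont = solve(k - 1, coins)          # what voters 1..k-1 get if the proposer dies
    needed = k // 2                     # extra yes-votes for a strict majority
    # price of voter j's vote: one coin more than his continuation payoff
    prices = sorted((0 if cont[j] == DEAD else cont[j] + 1, j) for j in range(k - 1))
    cost = sum(price for price, _ in prices[:needed])
    if cost > coins:                    # cannot afford a majority: the proposer dies
        return [DEAD] + cont
    alloc = [0] * k
    alloc[0] = coins - cost
    for price, j in prices[:needed]:
        alloc[j + 1] = price
    return alloc

print(solve(5))   # [97, 0, 1, 2, 0]: #1 keeps 97, buying #3 for 1 coin and #4 (or #5) for 2
```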
 
Hi,
Here is a question I got stuck on:
\(X\) and \(Y\) are independent non-negative integer-valued random variables, with \(\Pr\{X=i\} = f(i)\) and \(\Pr\{Y=i\} = g(i)\), where \(f(i) > 0\) and \(g(i) > 0\) for \(i = 0, 1, \ldots\)
Of course, \(\sum_{i=0}^{\infty} f(i) = 1\) and \(\sum_{i=0}^{\infty} g(i) = 1\).
Given that \(\Pr\{X=k \mid X+Y = l\} = \binom{l}{k}\, p^k (1-p)^{l-k}\) for \(0 \le k \le l\), prove that \(X\) and \(Y\) are Poisson distributed.
 

It is indeed easier (and more classic) to prove that if X and Y are Poisson, then the conditional law is binomial.

But you can prove your result by using probability generating functions and the fact that they characterize discrete laws.
 
It is straightforward to prove the "<--" direction, but very difficult to prove "-->".
 
I think I got it. **

You have:

\(E(z^X \mid X+Y) = (pz + 1-p)^{X+Y}\) (1)

\(E(z^Y \mid X+Y) = ((1-p)z + p)^{X+Y}\) (2)

Since X and Y are independent, \(f(z) = E(z^{X+Y}) = E(z^X)\,E(z^Y)\), where \(f\) denotes the PGF of \(X+Y\); taking expectations in (1) and (2) then gives

\(f(z) = f(pz + 1-p)\, f((1-p)z + p)\) (3)

which should imply* that for every \(z_1\) and \(z_2\)

\(f(z_1+z_2) = f(z_1)\, f(z_2+1)\),

which gives \(f(z_1+z_2) = f(z_1)\, f(z_2)\, f(2)\).

Let \(F = f(2)\cdot f\).

Then \(F\) satisfies the exponential functional equation \(F(z_1)F(z_2) = F(z_1+z_2)\), so \(F(z) = e^{az}\) for some \(a\),

and hence \(f(z) = e^{a(z-1)}\),

which is the PGF of a Poisson random variable with parameter \(a\).

Then you take the expectation of (1): \(E(z^X) = f(pz+1-p) = e^{ap(z-1)}\), so X is Poisson with parameter \(ap\). The same with (2) proves that Y is Poisson with parameter \(a(1-p)\).

Tell me what you think, guys.


* That might not be true. I thought it could be done using the identity theorem for holomorphic functions, but it doesn't actually seem possible to apply the theorem here.

It would be sufficient to prove that relation (3) implies \(f\) is proportional to an exponential function.

** Not too sure ^^
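As a sanity check, here is a minimal sketch in sympy; it only verifies that the Poisson PGF is consistent with (1) and (3), not the uniqueness step flagged with *:

```python
import sympy as sp

z, a, p = sp.symbols('z a p', positive=True)
f = sp.exp(a * (z - 1))    # candidate PGF of X+Y: Poisson with parameter a

# Functional equation (3): f(z) = f(p*z + 1 - p) * f((1-p)*z + p)
rhs = f.subs(z, p*z + 1 - p) * f.subs(z, (1 - p)*z + p)
print(sp.simplify(f - rhs))    # 0: the Poisson PGF satisfies (3)

# Taking expectations in (1): E(z^X) = f(p*z + 1 - p), the PGF of Poisson(a*p)
print(sp.simplify(f.subs(z, p*z + 1 - p) - sp.exp(a*p*(z - 1))))    # 0
```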
 
I think the proof is as follows.

1) The random variables X and Y are independent.
2) \(P(X=K \mid X+Y=L)\) follows a binomial distribution.

Let's assume (A1): the conditional probability is a function of the two variables \((X, Y)\):
\(P(X=K \mid X+Y=L) = F(X, Y)\)

Using the definition of conditional probability:

\(P(X=K \mid X+Y=L) = \frac{P(X=K,\ X+Y=L)}{P(X+Y=L)} = \frac{P(X=K,\ Y=L-K)}{P(X+Y=L)} = \frac{P(X=K)\,P(Y=L-K)}{P(X+Y=L)}\)

Note that the joint probability mass function of independent variables factors: \(f(x, y) = f_1(x)\, f_2(y)\).

The corollary is that if a joint probability mass function can be split cleanly into \(g(x)\) and \(h(y)\), then the two variables are independent.

The binomial pmf is \(\frac{L!}{K!\,(L-K)!}\, P^K (1-P)^{L-K}\), which, substituting \(K=X\) and \(L-K=Y\), is \(\frac{(X+Y)!}{X!\,Y!}\, P^X (1-P)^{Y}\).

This can also be written as \(\frac{(X+Y)!}{X!\,Y!}\, P^X (1-P)^{Y}\, e^{-P} e^{P-1} e^{1}\), since \(e^{-P} e^{P-1} e^{1} = e^0 = 1\).

This can be rearranged as \(\left[\frac{P^X e^{-P}}{X!}\right] \left[\frac{(1-P)^Y e^{-(1-P)}}{Y!}\right] \bigg/ \left[\frac{e^{-1}}{(X+Y)!}\right]\).

Now, X and Y are independent, but \((X+Y)!\) cannot be separated cleanly into functions of X and Y. This violates the initial assumption (A1) that the conditional probability is a function of the two variables \((X, Y)\) alone. It follows that \(\frac{e^{-1}}{(X+Y)!}\) is the pmf of another variable Z, and this Z must be independent of X and Y, since the conditional probability can be expressed as
\(f_1(X)\, f_2(Y) / f_3(Z)\).

Therefore, the binomial pmf is built from three Poisson pmfs, for X, Y, and Z. Z follows a Poisson distribution with parameter \(P + (1-P) = 1\). (This can be proved by using moment generating functions.)

It's not surprising that, given that X, Y, and Z are Poisson random variables, X, Y, and Z come out independent here (despite the fact that \(Z = X+Y\), which would ordinarily indicate dependence).

Remember that in the Poisson case, X, Y, and X+Y represent counts over intervals, without reference to where those intervals sit; the counts are independent as long as the intervals do not overlap.

I don't know if this solution is correct. Feel free to comment.
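For what it's worth, the factorization identity used above does check out numerically; a minimal sketch (\(p = 0.3\) and the range of \(l\) are arbitrary test values):

```python
from math import comb, exp, factorial

def pois(lam, n):
    """Poisson pmf with mean lam, evaluated at n."""
    return lam**n * exp(-lam) / factorial(n)

p = 0.3
for l in range(6):
    for k in range(l + 1):
        binom = comb(l, k) * p**k * (1 - p)**(l - k)
        factored = pois(p, k) * pois(1 - p, l - k) / pois(1, l)
        assert abs(binom - factored) < 1e-12
print("binomial pmf == Pois(p) * Pois(1-p) / Pois(1) at every (k, l) checked")
```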
 
Hi, I don't know if I missed something, but have you basically used the following argument?

\(\frac{x\,y}{z} = \frac{2\cdot 3}{4}\) implies \(x=2\), \(y=3\), and \(z=4\).
 
The argument is closer to:

\(\frac{x\,y}{z} = \frac{2\cdot 3}{4}\), where \(z=4\) "if and only if" x and y follow Poisson distributions.

So I am saying that the conditional probability can be divided into three Poisson probability functions of \((x,\ y,\ z=x+y)\). In order for \(Z=X+Y\) to follow a Poisson distribution with parameter 1, X and Y must be Poisson as well.

Actually, I'm not sure about this solution....
 
Yes.

If \(Z = X+Y\), assume X and Y are 1) independent and 2) Poisson variables with parameters a and b respectively.

The MGF of Z is:

\(M_Z(t) = E(e^{tZ}) = E(e^{t(X+Y)}) = E(e^{tX})\,E(e^{tY}) = e^{a(e^t-1)}\, e^{b(e^t-1)} = e^{(a+b)(e^t-1)}\)

Because the MGF characterizes the distribution, Z is also a Poisson random variable, with parameter \(a+b\).

Therefore, given that X and Y are Poisson random variables with parameters a and b, \(Z = X+Y\) is also Poisson with parameter \(a+b\).

This is the general result. Applying it to the specific case of the problem above:

X is Poisson with parameter \(p\),
Y is Poisson with parameter \(1-p\),

so \(Z=X+Y\) is Poisson with parameter \(p + (1-p) = 1\).
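A direct numerical check of this by convolution; a minimal sketch (the rates \(a = 1.7\) and \(b = 0.6\) are arbitrary test values):

```python
from math import exp, factorial

def pois(lam, n):
    """Poisson pmf with mean lam, evaluated at n."""
    return lam**n * exp(-lam) / factorial(n)

a, b = 1.7, 0.6
for n in range(8):
    # P(X + Y = n) as the convolution of the two pmfs
    conv = sum(pois(a, k) * pois(b, n - k) for k in range(n + 1))
    assert abs(conv - pois(a + b, n)) < 1e-12
print("convolution of Poisson(a) and Poisson(b) matches Poisson(a+b)")
```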
 
You proved that if X and Y are independent and Poisson, then Z is Poisson. But what you claimed is that if Z is Poisson and X and Y are independent, then X and Y are Poisson. I am not sure that is true.
 
Have you read this part?

"Now, X and Y are independent, but \((X+Y)!\) cannot be separated cleanly into functions of X and Y. This violates the initial assumption (A1) that the conditional probability is a function of the two variables \((X, Y)\) alone. It follows that \(\frac{e^{-1}}{(X+Y)!}\) is the pmf of another variable Z, and this Z must be independent of X and Y, since the conditional probability can be expressed as
\(f_1(X)\, f_2(Y) / f_3(Z)\).

Therefore, the binomial pmf is built from three Poisson pmfs, for X, Y, and Z. Z follows a Poisson distribution with parameter \(P + (1-P) = 1\). (This can be proved by using moment generating functions.)"

I'm not saying that this is the solution, though... I may be wrong.
 

Looks perfect up to (3), but I couldn't follow the rest. It seems you were close with the idea of using the Identity Theorem.

I can do the special case \(p=\frac12\), with the additional assumption that the generating function is an entire function:
\(f(z)=f\left(\frac{z+1}{2}\right)^2\qquad\) (4)
Then \(f'(z)=f'\left(\frac{z+1}{2}\right)f\left(\frac{z+1}{2}\right)\), and dividing by (4) we have
\(\frac{f'(z)}{f(z)}=\frac{f'\left(\frac{z+1}{2}\right)}{f\left(\frac{z+1}{2}\right)}.\qquad\) (5)
Iterating \(z\mapsto\frac{z+1}{2}\) starting with \(z=0\), it follows that \(\frac{f'(z)}{f(z)}\) is constant on the infinite set \(\{0,\frac12,\frac34,\ldots\}\), which accumulates at 1, a point in the domain of \(f\) by the extra assumption. It now follows from the Identity Theorem that \((\log\circ f)'\) is constant, proving that \(f(z)=e^{az+b}\).

So it still needs to be proved that \(b=-a\).

And the assumption on \(f\) needs to be proven, if possible.

It's not clear to me how this could be generalized to all \(p\).
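For what it's worth, plugging \(f(z)=e^{az+b}\) back into (4) shows directly where the constraint on \(b\) comes from; a minimal sketch in sympy:

```python
import sympy as sp

z, a, b = sp.symbols('z a b', real=True)
f = sp.exp(a*z + b)    # the form forced by the Identity Theorem argument

# Plug into equation (4): f(z) = f((z+1)/2)^2
ratio = sp.simplify(f.subs(z, (z + 1) / 2)**2 / f)
print(ratio)    # exp(a + b): (4) holds exactly when b = -a, matching f(1) = 1
```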
 

Well done!
Very good use of the Identity Theorem.

Indeed, it seems we need to prove that \(f\) is holomorphic in a neighborhood of \(z=1\), i.e., a little beyond the unit circle, because we need the accumulation point to lie inside an open connected set on which \(f\) is holomorphic.

You can see that \(b=-a\) because all PGFs satisfy \(f(1)=1\).

Maybe we can prove that the function \(f\) is independent of \(p\) for \(0<p<1\).

Or we have to use the Identity Theorem with your idea for the other values of \(p\), which, as you say, doesn't seem easy...

Well done again.
 
If I can prove that the factorial moments of \(X\) (or \(Y\), for that matter) are of the form \(\lambda^k\), which is what one gets in the Poisson case, would that be considered a proof?
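For reference, the factorial moments are exactly the derivatives of the PGF at \(z=1\) (a standard fact, stated here under the same analyticity caveat as above):

\(E[X(X-1)\cdots(X-k+1)] = f_X^{(k)}(1)\), where \(f_X(z) = E(z^X)\); for \(X\sim\text{Poisson}(\lambda)\), \(f_X(z)=e^{\lambda(z-1)}\) gives \(f_X^{(k)}(1)=\lambda^k\).

So matching all factorial moments amounts to matching every derivative of the PGF at 1, which pins down the distribution provided \(f_X\) is analytic in a neighborhood of 1; that is essentially the same extra assumption as in the Identity Theorem argument.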
 