Showing posts with label Math Proof. Show all posts

Sunday, April 17, 2011

Prince William and Kate – Let the Market Decide

Apparently, Brits will bet on anything. I was just watching the BBC, and there was a short clip on the questions Brits were asking and the odds bookies were offering on various events happening at the wedding: Kate’s car breaking down, Prince Harry dropping the ring, etc.

This got me thinking about mathematically predicting the future, not just with physical models (for instance, forecasting weather), which is difficult enough, but also for these sorts of abstract questions. After all, how do you predict Prince Harry’s chance of dropping the ring? Let’s say you could even come up with a very convincing model that would give you one good guess.

Another method would be to let the invisible hand of the gambling market decide. Assuming you could get enough interested people to bid in the “market” on an individual question, having more or fewer people bet for or against it gives you an indication of whether the odds offered are too “generous”, and allows you to recalibrate the odds directionally until you arrive at a relatively stable number.

For example, when asked “What’s the chance of the Queen wearing a blue hat?”, initially it’s even odds. However, people start doing their homework, statistical multi-factor regressions, etc., and discover that she actually tends to wear a blue hat quite often (especially at royal weddings), and people start overwhelmingly betting that she will wear a blue hat. If the bookie is paying attention, they will start moving the odds, maybe now offering 2:1 odds that she will wear a blue hat. People keep fine-tuning their models and continue to make bets until the odds rest at 3:1 (the current odds of the Queen wearing a blue hat). The bookie takes bets on either side, taking the spread.

The invisible hand of the gambling market has determined the probability of the Queen wearing a blue hat. While we can’t even be sure what models were used, if intelligent people staking real money have converged on a “best guess” at the appropriate value for that probability, we are given another robust answer.
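As a footnote, the translation from quoted odds to an implied probability is simple enough to sketch (a minimal example; real bookmakers build a profit margin into their quotes, so quoted odds slightly overstate the true implied probabilities):

```python
# Convert fractional "odds against" to the probability they imply.
# Odds of 3:1 against an event mean a 1-in-4 chance: p = 1 / (odds + 1).
def implied_probability(odds_against):
    return 1.0 / (odds_against + 1.0)

# The Queen's blue hat at 3:1 implies a 25% chance.
print(implied_probability(3))  # 0.25
```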

Thursday, April 7, 2011

Parthenon and the Golden Ratio

Today I visited the Parthenon, one of my favourite buildings, the preeminent landmark of Athens, and the centrepiece of the Acropolis. And yet many people are unaware of the beautiful mathematical design it holds in plain view. In the Parthenon’s design you will often discover that many proportions follow the golden ratio, also known as the divine proportion, the ratio that many Greeks believed to be the source of proportional mathematical beauty.

Shown above: Parthenon

Many believed that this ratio occurs often in nature and is the basis for beautiful proportions (for example, it appears in Leonardo da Vinci’s Vitruvian Man as the ratio between the length of the arms and legs). Another occurrence is with Fibonacci numbers, as the ratio between consecutive Fibonacci numbers is an approximation of this same ratio (Fibonacci numbers form a sequence in which each number is the sum of the previous two). The approximation becomes more accurate the further out you go: the ratios never equal the golden ratio exactly, but they converge to it in the limit.
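To see the convergence concretely, here is a small sketch using nothing beyond the definitions above:

```python
# Ratios of consecutive Fibonacci numbers approach the golden ratio.
def fib_ratios(n):
    """Return the first n ratios F(k+1)/F(k), starting from 1, 1."""
    a, b = 1, 1
    ratios = []
    for _ in range(n):
        a, b = b, a + b
        ratios.append(b / a)
    return ratios

phi = (1 + 5 ** 0.5) / 2  # the golden ratio, ~1.618
print(fib_ratios(5))       # [2.0, 1.5, 1.666..., 1.6, 1.625], oscillating toward phi
print(fib_ratios(25)[-1])  # agrees with phi to many decimal places
```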

The golden ratio is mathematically defined as follows:

(A + B) / A = A / B

Or in layman’s terms: the ratio of the whole length to the larger part is the same as the ratio of the larger part to the smaller. Note that this definition is recursive and can be applied over multiple lengths repeatedly.

Also note that the proof for the actual ratio itself is quite elegant:

(A + B) / A = A / B (Multiplying both sides by A x B we get)

AB + B^2 = A^2

0 = A^2 – AB – B^2

= A^2 – AB + 1/4 B^2 – 1/4 B^2 – B^2

= (A – 1/2 B)^2 – 5/4 B^2

Now assume that B is 1.

= (A – 1/2)^2 – 5/4

5/4 = (A – 1/2)^2

Sqrt(5) / 2 = A – 1/2 (taking the positive root, since A and B are positive lengths)

Therefore, the golden ratio, A = [1 + sqrt (5)] / 2

Note that this makes A approximately 1.618 (an irrational mathematical constant).

Intricate in its beauty, elegant in its simplicity.

Near the Parthenon was once the statue of Athena overlooking the city.

Shown above: Athena's perch from which she offered Victory to the Athenians, now long vacant.

Previously, this building was home to a statue of the grey-eyed goddess who watched over her namesake city and in her raised hand held Nike, the winged goddess of victory. While the statue is long gone, perhaps it is time for the Greeks to look after her and her fellow Olympians as they recover and protect artifacts of the past.

Saturday, April 2, 2011

Being Made Whole in Bankruptcy

In our Managing Corporate Turnarounds class, during our financial restructuring session, I got to thinking about what it would take to be made whole in a bankruptcy scenario. While the hard math will tell you that it is impossible in the short term (EV = 80, Net Debt = 100), I began to think back to the PIK and using a high yield to restore value in the future. So my question became this: if I hold the debt of an insolvent company, what can I negotiate to help me restore value? The most obvious solution is to renegotiate the terms of my debt, which will probably result in me taking a haircut (discount) on the principal or face value of my debt. However, we’ve acknowledged that when securities become riskier, they should bear a higher return. So my question then evolved to: if I have to take a discount of X percent, what additional spread Y would I have to earn in order to be made whole in N years? It turns out:

FV x (1 + kd) ^ N = FV x (1 – X) x (1 + kd + Y) ^ N
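Solving this for Y gives Y = (1 + kd) x [(1 – X)^(–1/N) – 1]. A quick sketch (the 8% cost of debt is an assumed figure, purely for illustration):

```python
# Extra spread Y needed to be made whole in N years after a haircut of X,
# solved from (1 + kd)^N = (1 - X) * (1 + kd + Y)^N.
def required_spread(kd, X, N):
    return (1 + kd) * ((1 - X) ** (-1 / N) - 1)

# A 20% haircut at kd = 8%, recovered over 2 years:
print(round(required_spread(0.08, 0.20, 2), 4))  # 0.1275, i.e. ~1,275 bps
```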


However, this assumes that you can break even with (or more accurately, catch up to) where your security would have been if the company had not defaulted in the first place. After playing with the numbers, however, it was quite clear that even with a modest discount (say 20%), the spread Y had to be unreasonably high to have any chance of being made whole relative to the standard debt, so I thought it would be unrealistic not to include a factor which accounts for the value lost:

FV x (1 + kd) ^ N = FV x (1 – X) x (1 + kd + Y) ^ N + Value Lost


In trying to understand what these numbers mean, I looked at Value Lost / FV as a proxy for the default rate of this type of security in distress which is obviously closely tied to the actual economic circumstances of the company. In the graph, it is reflected by the distance between the Standard Debt curve and the PIK (Realistic) curve.

Also, Y can probably be determined by looking at the spread between similar bonds with different credit ratings (dropping from BBB to C for instance).

X is reflective of the economic scenario (so if EV was 80 and Net Debt was 100, X would be 20%). It is also reflective of the negotiations, as well as considering a discount in order to liquidate the current assets of the company.

Another problem is that once a company switches from PIK to cash sweep, its risk profile drops and it stops earning high yields, lowering the return on capital and therefore making it impossible to “catch up”. Also, a bank which was happy to finance your debt will not be interested in converting into either a mezzanine structure better suited for hedge funds or equity.

This model is similar to the VC model of predicting the failure rate using the discount rate except in reverse. It is also similar to the interest rate parity (IRP) model and boot strapping by using compounding to determine where you would have / should have been otherwise as a benchmark for where you are going.

I guess the real lesson is that bankruptcy is really expensive and that being made whole in this scenario is difficult, regardless of the financial engineering and patience, although these two factors can be used to ease the pain.

Tuesday, February 8, 2011

Bridge to Value

One graph I've seen which I thought was clever was a breakdown of the change in EV. It brings together many other details I've learned about M&A, LBOs and transactions in general.

Previously, I mentioned a framework for PE deal success, but it is easy to cut into more detail if necessary and really define and put a mathematical value to "synergies".

For example: after a transaction, we've increased sales by 21%. How does that affect EV? Well, on one hand, you've immediately realized a 21% increase in revenue. After you account for the costs associated with that increase in revenue (i.e. you've sold more widgets, but it still costs you money to make those widgets), what do your future growth prospects look like as a result of this new growth (i.e. should you trade at a higher multiple? Have you gone from "boring" to "exciting"? Or is it just general market conditions?).

Previously, you had:

Market Cap = $100
Shares outstanding = 100
Price per share = $1

Debt = $100 (@ 5%)
Excess Cash = 0

EV = $200

Revenue - $100
COGS - $40
GPM = $60

Op Ex - $20
EBITDA = $40

DA - $10
EBIT = $30

Interest = $5

Tax = 40%

NOPAT = $18
NI = $15

Therefore:

EPS = 15 cents

P/E = ($1/$0.15) = 6.67x

EV/EBITDA = ($200/$40) = 5.00x

Let's tell a story: The 21% increase comes from opening a new line of products. You are selling 10% more products by introducing a new product line and this new product line actually increases your revenue per unit (across the board) by 10% (110% x 110% = 121%). All margins are the same.

What should we do? Bring everything down to the EBITDA level:

Now:
Revenue - $121
COGS - $44 (10% more products at same costs)

GPM = $77

Op Ex - $22

EBITDA = $55

DA - $10
EBIT = $45

Interest = $5

Tax = 40%

NOPAT = $27.00
NI = $24.00

EPS = 24c

(Magic happens - Which we will explain shortly)

New Price per share = $1.80

Market Cap = $1.80 x 100 shares = $180
Debt = $100
EV = $280

P/E = ($1.80) / ($0.24) = 7.50x
EV/EBITDA = ($280 / $55) = 5.09x

Analysis:
So a lot is going on. The price of the equity and the enterprise has changed, but how can we do a cross section such that we know exactly where all the value is being driven from?

How much of this value is because of leverage (hint, we didn't change amount of leverage)?
How much of this value is simply because we are operationally better?
How much of this value is because we have a "brighter future" (better growth prospects)?

Step 1: Value from leverage arbitrage:
No change = 0

Step 2: Value from "synergies":
Total EBITDA level changes: $40 to $55 or $15
At a multiple of 5.00x (previous multiple), value increased is $75

Step 3: Value from "Brighter future"
Brighter future (higher multiple) due to either market conditions or expected future growth:
$55 at 5.00x versus at 5.09x = $55 x (5.09 - 5.00x) = $5

Total value created: $75 + $5 = $80 (note total increase in value of EV / Market Cap)

Next step, look closer at Step 2:
Change of $40 to $55 is created by:
$21 in Revenue (Price +10%, Volume +10%)
$4 in COGS (Volume + 10%)
$2 in Opex (Volume +10%)

For a $21 increase in revenue, keeping margins constant we would have expected an increase of:
$8.4 in COGS (40% of revenue) and $4.2 in Opex (20% of revenue). COGS is lower by $4.4 and Opex is lower by $2.2 versus what is expected.

Note we mentioned we can sell products for 10% more across the board.
This created value for existing product base (at EBITDA level) of
$110 - $40 - $20 or $50 versus $40 creating $10 of additional EBITDA level value (makes sense, increase topline growth by 10% without changing expenses / sales volumes results in increase of EBITDA by 10% of revenue)

Also, selling an additional 10% at old price we would expect:
$10 (additional sales) - COGS ($4) - Opex ($2) or $4

But selling new products at new price: Gain $1 (similar to math shown above)

Total change in EBITDA: $10 + $4 + $1 = $15
At 5.00x
$55 or ($10 + $1) x 5.00x of EV is generated from selling at a higher price
$20 ($4 x 5.00x) of EV is generated from selling new products (higher volume)
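The whole bridge can be reproduced in a few lines, using only the figures from the example above:

```python
# Value bridge from the worked example: EV moves from $200 to $280.
old_ebitda, new_ebitda = 40, 55
old_multiple = 200 / 40            # 5.00x
new_multiple = 280 / 55            # ~5.09x

value_from_leverage = 0            # capital structure unchanged
value_from_synergies = (new_ebitda - old_ebitda) * old_multiple          # $75
value_from_brighter_future = new_ebitda * (new_multiple - old_multiple)  # ~$5

total = value_from_leverage + value_from_synergies + value_from_brighter_future
print(total)  # ~80, matching the change in EV
```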

Above is what the bridge would look like if a PE firm had 60% ownership and management had 40%.

Note, this framework is iterative and can be applied across multiple product lines to help do a break out and sum of the parts analysis for companies to see where value is hidden in undervalued divisions.

Also note that as an interesting aside, if you were actually to build out a proper DCF model of this (using some basic business assumptions holding margins constant etc.), your short term growth rate would have to be adjusted upwards in order to come to the same intrinsic valuation that would justify the higher multiple.

Thursday, January 6, 2011

Back from the Holiday

Well after a well deserved break, all the students are back.

First years are in their Negotiations class, which is odd for me as I didn’t take it last year due to the Middle East study tour, but now I see what it was like, with the Atrium constantly flooded by ambitious MBA students trying to get better deals in their exercises. There have also been requests for help with preparing for recruitment week, which is coming up next week, with some postings already up.

Even second years are at school, many having the clever idea of taking an intensive or two to lighten their final term course load.

Yesterday, we did a presentation for our ICP in Islamic Finance. Arash and I did a presentation on two comparable securities, one conventional and one Islamic and we showed they were strategically and operationally comparable (same industry, business model, enterprise value, capital structure, debt ladder, similar maturity, seniority and economic conditions, but different country and terms) and we analyzed the yield, adjusting for country risk and broke down the spread accounting for liquidity risk, minor maturity differences and increased cost of capital related to Sharia compliant terms.

This material will be used as part of Rotman’s new Executive MBA program class on Islamic Finance. While we aren’t quite finished with our work, the next step being to propose a term sheet for what the conventional financing would look like if it were Sharia compliant, I’m very happy with our progress and the insight we were able to bring into this new product class.

Thursday, November 25, 2010

Multi-Factor Models – Applying the Lessons Learned from the Numbers

In Finance 1 last year, we were introduced to the idea of multi-factor models (MFM), originally explained by Fama and French as an alternative to the traditional Capital Asset Pricing Model (CAPM) for assessing systematic risk. The additional factors are size (small minus big, SMB) and value versus growth (high minus low, HML).

In our Business Analysis and Valuation class, we discussed a merger case in which a large company acquired a smaller company. We talked about what would be the best way to approximate beta. The method I used (which was the best method I could conceive, I’d be happy to hear criticism or suggestions otherwise) was to weight the betas by market cap and take an average.

However, there was some discussion about the fact that one entity was much smaller than the other. While we discussed qualitatively what that would actually mean, I would suggest a mathematical method for expressing the quantitative effect of size, using Fama and French’s MFM.

  1. Express both companies’ re as a three-factor MFM
    Re1 – RFR = beta1 (Rm – RFR) + betas1 (SMB) + betav1 (HML)
    Re2 – RFR = beta2 (Rm – RFR) + betas2 (SMB) + betav2 (HML)
  2. Take the larger company’s size beta and apply it to the smaller entity
    betasnew = betas2
  3. Recalculate re for both entities
    Re1new – RFR = beta1 (Rm – RFR) + betasnew (SMB) + betav1 (HML)
    Re2new – RFR = beta2 (Rm – RFR) + betasnew (SMB) + betav2 (HML)
  4. Take a weighted average (by market cap) as the expected return of the combined entity

By taking the larger company’s size beta for both, what you are saying is that you expect the smaller company to take on the size “characteristics” of the larger entity. It might even be more appropriate to add the size factors (Would that be appropriate? As the MFM is a linear regression, is it appropriate to add these factors?) and use that for the new return on equity for each entity as it relates to the combined entity.

betasnew = betas1 + betas2

While there are some significant assumptions which are required for this to work, it is the best solution I can conjure based on information given. I would really appreciate any additional ideas for creating a more robust model.
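As a sketch of what steps 1 through 4 look like in practice (all the betas, factor premia and market caps below are made-up numbers for illustration only):

```python
# Fama-French three-factor expected return:
# re = RFR + b_m*(Rm - RFR) + b_s*SMB + b_v*HML
def ff3_return(rfr, betas, premia):
    return rfr + sum(b * p for b, p in zip(betas, premia))

rfr = 0.04
premia = (0.05, 0.02, 0.03)  # (Rm - RFR, SMB, HML), assumed factor premia

big = {"betas": (1.0, 0.1, 0.3), "mcap": 900}    # hypothetical acquirer
small = {"betas": (1.2, 0.8, 0.2), "mcap": 100}  # hypothetical target

# Step 2: give the smaller entity the larger company's size beta
small_adj = (small["betas"][0], big["betas"][1], small["betas"][2])

# Steps 3 and 4: recalculate re and take the market-cap-weighted average
re_big = ff3_return(rfr, big["betas"], premia)
re_small = ff3_return(rfr, small_adj, premia)
w = big["mcap"] / (big["mcap"] + small["mcap"])
re_combined = w * re_big + (1 - w) * re_small
print(round(re_combined, 4))  # ~0.1017
```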

Abnormal Earnings Method – Not Entirely Useless

When we were introduced to the Abnormal Earnings method in Business Analysis and Valuation, I was decomposing the math formula which constructs the value of the equity. As far as I was concerned, it didn’t really tell us anything we didn’t already know through an equity-based discounted cash flow (FCFE discounted at re).

However, there was an interesting scenario in which this method actually told us something unique. First the formula:

Market Value = Book Value + (NI1 – re*BV0)/(1 + re) + (NI2 – re*BV1)/(1 + re)^2 + …

NIx is net income in year x

BVx is book value at the end of year x (BV0 being current book value)

While in theory this formula should return a similar value to an equity-based DCF, one unique feature is that the valuation is expressed relative to book value, rather than strictly in terms of cash flows. Essentially, what it is saying is that the company is worth its book value, PLUS its “abnormal earnings”, where abnormal earnings are the earnings you get in excess of what you would expect given the cost of equity (re).

So in looking at a company that is trading below book value, I used to think that it meant that the market did not believe in the company’s management to perform (the company was burning cash). But it doesn’t just have to be that the company is on a “crash” course. It could also just be that the company is not performing as “expected”; that is to say, its net income is not necessarily negative, but simply less than what is expected.
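A minimal sketch of the formula (a finite-horizon version with illustrative numbers; a full model would also need a terminal value):

```python
# Residual income valuation: MV = BV0 + sum of (NI_t - re*BV_{t-1}) / (1+re)^t
def abnormal_earnings_value(bv0, net_incomes, book_values, re):
    value, prev_bv = bv0, bv0
    for t, ni in enumerate(net_incomes, start=1):
        value += (ni - re * prev_bv) / (1 + re) ** t
        prev_bv = book_values[t - 1]  # book value at the end of year t
    return value

# Earning exactly re on book value adds no abnormal value:
print(round(abnormal_earnings_value(100.0, [10.0, 10.0], [100.0, 100.0], 0.10), 6))

# Positive but below-expectation earnings price the firm below book:
print(abnormal_earnings_value(100.0, [5.0, 5.0], [100.0, 100.0], 0.10) < 100.0)  # True
```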

Friday, October 15, 2010

Levering and Unlevering Beta - Mechanics of the Model

It occurs to me that I didn't really describe the formula for levering and unlevering Beta and why it works. Let's have a closer look:

First think of any given industry. There are some major assumptions required to get levering and unlevering beta to work. For instance, a dollar invested in an industry will give a constant return proportional to its risk (beta). This is one of the fundamental assumptions in CAPM.
Levering beta is the idea that I can amplify the result of a deal by using debt. Let's look at an example:
I'm in an industry that has a beta of 1.2
ERP is 5%
RFR is 4%
ke = RFR + beta * ERP = 4% + (1.2) 5%
= 10%
So if I invest a dollar, I can expect to get a return of 10%.
However, I can use leverage (borrow money) so that I want to experience an amplified result:
I'll invest 1 dollar, but also borrow another dollar to invest with. So I have two dollars in play, but only one of those is mine. My return? Well, assuming you can borrow at RFR (a huge assumption), you'll gain another 10% on that additional dollar, but have to pay 4% (RFR), so you'll gain another 6% on top of your original 10% for a total of 16%. You've doubled your systematic risk by having two dollars in play. Let's check with CAPM:
Beta = 2*1.2 (twice as much exposure)
ke = RFR + beta * ERP
= 4% + 2.4 * (5%) = 16%
This makes sense. For every additional dollar you borrow at RFR and invest in the industry, you will make the spread between their returns (or the beta * ERP).
So if you borrow D, dollars (D in this case stands for Debt), versus E dollars of your own money (Equity), and because CAPM is a linear function, you can expect:
  • Your exposure will be based off of D+E dollars (total capital in the game - systematic risk, exposure)
  • But off a base of E dollars invested (the money you have invested yourself, or Equity)
So if an industry has a beta of Ba, and you use leverage D on your initial investment E, you can expect your new beta of Be, to be:
Be = Ba * (E+D)/E
Note, however, that (E+D)/E = 1 + (D/E)
Look familiar? It's the levering formula:
Be = Ba * (1 + D/E)
But there is still one piece missing. The (1-t) portion of the formula is to simply account for the fact that debt that you borrow provides a tax shield, so the final version of the formula replaces D with (1-t)*D to get:
Be = Ba * (1 + (1-t)*D/E)
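In code, the pair of formulas is a direct transcription of the derivation above:

```python
# Hamada-style levering and unlevering of beta.
def lever(asset_beta, d_over_e, tax):
    return asset_beta * (1 + (1 - tax) * d_over_e)

def unlever(equity_beta, d_over_e, tax):
    return equity_beta / (1 + (1 - tax) * d_over_e)

# The no-tax example above: two dollars in play on one dollar of equity.
print(lever(1.2, 1.0, 0.0))    # 2.4
print(unlever(2.4, 1.0, 0.0))  # 1.2
```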
Hope this decomposition helps. I noticed this relationship is very prevalent in financial calculations as Anita McGahan clarified in one of her lectures on Starbucks in first year when talking about Operating Strategy versus Financial Strategy in the Dupont Decomposition. This was something I had also wondered about previously.

Thursday, October 14, 2010

Accounting – The Story Behind the Numbers

It seems like the major topic for this week has been related to working capital. In our financial management course, however, there was a great example case where simply knowing the numbers is not enough.

Simplified Case Info (expressed in thousands):

Revenue = 17805
AR = 6000
Average Day’s Receivable in the industry = 59 days

Analysis:

Company’s Average Day’s Receivable = 123 days

Proposed financing solution: Collect on AR to reduce Day’s Receivable to industry average of 59 days.

If Days Receivable = 59 days, implied new AR is 2878. The change in AR would be 6000 – 2878 or 3122.
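The arithmetic, for reference (figures in thousands, as above):

```python
# Days receivable from the case figures.
revenue, ar = 17805, 6000
days_receivable = ar / revenue * 365
implied_ar_at_59_days = revenue * 59 / 365
freed_up = ar - implied_ar_at_59_days

print(round(days_receivable))        # 123
print(round(implied_ar_at_59_days))  # 2878
print(round(freed_up))               # 3122
```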

So looking at this *mathematical* solution, it seems as if the company can get a free 3 million dollars just by tightening its AR, right? Well as it turns out probably not. The reason?

Most companies define default as non-payment of debts of 90 days or more. Previously, we’ve talked about how debts decay in value the longer they are outstanding (probability of collection and bad debt expense). If you look at this number, essentially what it is saying is that many of your accounts are in default, with an average age of 123 days!

Sometimes you can’t just assume you can make operational changes to reflect a reality that you want. The truth of the matter is that those funds are probably lost. The firm probably won’t collect those accounts and will incur a significant bad debt expense.

In reading more of the case, it also mentioned that the company had a “no returns” policy with its distribution channel partners. This number not only meant that they probably weren’t going to collect, but that their distributors were telling them that they didn’t want to do business with them any more (affecting potential future revenue growth). Not only will they not be able to pull 3 million dollars out of working capital, there are also some critical red flags appearing about their ability to continue as a going concern.

Wednesday, October 13, 2010

Unlevered Beta

A quick recap:
Beta of a company is determined by a statistical regression of returns of a given security against returns of the market.

As a result, "beta" usually refers to a company's equity beta or observable beta. Using CAPM, we can calculate the expected cost of equity (ke) using:

ke = RFR + beta * ERP

Where ERP is Equity Risk Premium

So what is unlevered beta and why is it important?

Well, recall from CAPM that you can change a company's capital structure to add leverage, increasing beta and thereby the expected return on equity (a company's cost of equity being the investor's return on equity).

Unlevered beta tells you how much a company's industry is expected to return, regardless of the leverage employed by looking at the systematic risk of an industry regardless of the financing decisions. Theoretically, the unlevered beta should be constant across companies in an industry.

Why is this important? Here is one example. You are trying to determine the cost of equity (so you can determine WACC for DCF) of a new company in an industry. However, because it is a new company, there is no previous history in terms of what they can be expected to return relative to the market. So it is impossible to calculate its equity beta. So what can you do?

You can look at a variety of other companies in the space, unlever all their betas to get asset betas and average them (as they should be theoretically the same, but will most likely differ slightly) and then relever it to the new company’s capital structure to approximate its expected equity beta. Armed with its equity beta, you can calculate its cost of equity using CAPM.

While this is how it would work in theory, there are some major problems with this model, the most obvious being:

  • How do you do “comps”? How do you define “a variety of companies in the space”, especially if few or none of the companies are exactly the same?
  • CAPM has its own issues which make it a less than perfect model
  • As a new company, they will probably not behave in the same manner as more mature companies in the space.

Here is an example:



There are four companies with different betas and leverage levels. By unlevering all the equity betas, you can have various approximations for what the asset beta should be. An average of all four gives you a decent approximation for the beta of the industry.


Now assume we have a new company in this space that will have a D/E of 50% at the same tax rate. We can use the leverage formula to calculate what the beta equity should be:

Equity Beta = Asset Beta * (1+(1-Tax)*D/E)


In this case, the equity beta would be approximately 1.3, which makes sense as it has a leverage between B and C and therefore should have an equity beta in between.
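Since the comparables table above is an image, here is a hedged reconstruction with made-up figures chosen to be consistent with the quoted result (asset betas averaging about 1.0 at a 40% tax rate):

```python
def unlever(equity_beta, d_over_e, tax):
    return equity_beta / (1 + (1 - tax) * d_over_e)

def relever(asset_beta, d_over_e, tax):
    return asset_beta * (1 + (1 - tax) * d_over_e)

tax = 0.40
# (equity beta, D/E) for four hypothetical comparables
comps = [(1.06, 0.10), (1.18, 0.30), (1.42, 0.70), (1.60, 1.00)]

asset_betas = [unlever(be, de, tax) for be, de in comps]
avg_asset_beta = sum(asset_betas) / len(asset_betas)

# New entrant at D/E = 50%, same tax rate:
new_equity_beta = relever(avg_asset_beta, 0.50, tax)
print(round(new_equity_beta, 2))  # ~1.3
```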

Thursday, July 29, 2010

Bidding Strategy - The Mechanics

So I've been lucky enough to receive all the classes I want in all the sections I want, and it turns out that LBS doesn't use a "bidding" system per se (classes are awarded based on listed "preference", an ordinal system).

A few people were asking about how my bidding formula works and, while it's hardly perfect, I figured I'd put up some of the details just for laughs (or at least as building blocks for someone who plans on taking this model to the next level). It uses only public information available to all students at the time of bidding.

In this model, each course bid is determined by three factors. The first is the initial base, and most people will choose one of two initial bases: last year's minimum bid or last year's median bid (depending on how competitive the class is).

After determining the appropriate bases for your five courses, the remaining points (“the Remainder”) can be divided amongst your courses to make your bids more competitive. But like all dilemmas in bidding, you want to assign just enough points so that you get the courses you want, but not so much that you jeopardize your chances of getting the other courses. So how do you do it?

I propose that the two major factors you should look at are what I call:
  1. The Ballot factor (anticipated) (x% of the Remainder, or “X-Factor”)
  2. The Historic factor (backward-looking) ([100% - x%] of the Remainder, or the “Y-Factor”)

Where x% is the weight of your Ballot factor versus your Historical factor (in other words: how much you believe balloting represents real bidding behaviour versus history).

Ballot Factor:

This factor accounts for the number of people who say they will take the course. A few notes:

  • People don’t always bid for the courses they ballot for
  • Use the numbers as guidance to see if the course is oversubscribed
  • Calculate the expected utilization capacity = total number of students balloting for any course in that section / total class capacity
  • Square the utilization capacity to create an “intensity factor”
  • Total all the factors and express each factor as a percentage of the total
  • Multiply the percentages by the X-Factor
  • The result is each individual courses’ Ballot Factor offset

Example:

  • 2 classes have a capacity of 40 people each
  • You have 200 points allocated to Ballot Factor
  • 20 people bid on Class A (fairly certain everyone who bids will get in; there is even a chance that a 0-point bid could win), giving a utilization of 50% and a Ballot “intensity factor” of 0.25
  • Class B has 60 bidders, giving a utilization of 150% (red flag: a guarantee that not everyone will get in) and an “intensity factor” of 2.25.
  • Class A’s weight is .25/(.25+2.25) = 10%
  • Class B’s weight is 2.25 /(.25+2.25) = 90%
  • Class A’s Ballot factor offset is 10% * 200 points = 20 points (a non-zero bid with decent margin, you'll probably get in)
  • Class B’s Ballot factor offset is 90% * 200 points = 180 points (a strong bid, considering an average of 100)
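The bullet steps above can be sketched mechanically:

```python
# Ballot factor offsets: square the utilization to get an intensity factor,
# then split the Ballot-factor points in proportion to the intensities.
def ballot_offsets(bidders, capacities, points):
    intensities = [(b / c) ** 2 for b, c in zip(bidders, capacities)]
    total = sum(intensities)
    return [i / total * points for i in intensities]

offsets = ballot_offsets(bidders=[20, 60], capacities=[40, 40], points=200)
print([round(o) for o in offsets])  # [20, 180]
```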

This model tries to account for the fact that only very high bids will win the competitive class, but you also don't want to lowball Class A in case a few stray bids appear from people who take the class last minute (obviously, the fewer people who originally bid on the class, the less you have to worry about dark horse bidders).

Note that Class B's weight is 9x Class A's because at least 20 people are guaranteed not to get into the class. Classes that are oversubscribed will have intensity factors much higher than 1 with much heavier weights, and undersubscribed classes much lower than 1 with much lower weights. This accounts for the premium on variation and intensity due to the number of bids in a competitive environment. Note that in this pure form, this is a best-effort bidding mechanism, with the scaling of points consuming all remaining points.

Historical Factor:

Another way to try to guess what the bidding will look like is to use the historical bidding as guidance for the variation of bids (were the bids tight or across a broad range?) One indicator of that is the minimum and median bid. If you make some HUGE assumptions, you can use these two points to create a normal curve with standard deviations. Since the mechanics of this are taught in stats in first quarter, I won’t bore my readers with a poor facsimile of Prof. Krass’ lecture.

Even if you don’t technically know the actual distribution of the curve, you can also use Chebyshev's inequality to position yourself within a certain percentile (also looking at the expected capacity utilization of the class based on your previous calculations). How? Here’s a hint (shown above): the bidding percentiles (% of students bidding that are not successful in being admitted into the class) should be the same as the bid oversubscription capacity (again, huge assumptions), providing the number of standard deviations. Combining this with the distance from the median to the minimum should provide a clue as to the size of a standard deviation. Note that using this method, you may not (probably won't) have enough points to guarantee getting into the courses you want (unless, like me, you have a surplus of points or are taking unpopular courses), but it is probably one of the best mechanical methods for balancing aggressive bidding with conserving points, as well as building a view of what the bidding landscape looks like. In practical terms, at this point you can use a best-effort model similar to the one shown above using the Y-Factor.

Also, I’ve deliberately left out methodology for mechanically scaling up courses based on your individual preferences (ie rating courses from 1 to 10 and incorporating that into your bidding strategy). Also, there are huge economic implications for bidding strategy considering that the involved parties do communicate with each other and affect the bidding levels of courses (ie Friends talk to each other about how they plan to bid). Signalling, game theory and strategy all come into play.

While not perfect, this model will give you some perspective into what a reasonable, very mechanically inclined bid would be. Admittedly, while I built this model, I did do some “emotional” adjustments to my bids (there was one course where I wanted to work with my friends on their team, so I wanted to be CERTAIN that I got the course). Like anything done on a computer, it’s just a tool.

Disclaimer: Like anything on this blog, this model does not guarantee any degree of success. This post is intended as a conversation / pensive reflection piece only. It is possible for you to use this model and not get ANY courses you want. For instance, it is physically impossible to get both Top Management Perspective AND Value Investing because both courses usually require exceptionally high bids. Note that by definition, there will be some people who don't get the courses they want. The more you want to be certain that you are in one course, the less certain that you will be in another (almost like the Heisenberg uncertainty principle). For better or worse, it is a zero-sum game.

Also, more importantly, I've been told that it's all a wash and at the end of the day, after the drop and add periods are over, most people get the courses they want anyways.

Monday, April 26, 2010

Allowance for Doubtful Collection

In continuing with the thoughts on revenue from last week, I thought it might be worthwhile to look at allowance for doubtful collection of funds. Generally, there are two methods which are used: % of receivables and the aging method.

The first method, % of receivables, is quite simple and self-explanatory: a predetermined percentage of receivables (calculated from historical numbers) is expected to default.

So assume that this is 1.85%. If the firm's gross revenue is $1M (and all credit sales), then the expected bad debt associated with those revenues is $18.5k. The net value of receivables is $1M - $18.5k or $981.5k.

The aging method is more sophisticated and provides a better picture, however, it requires a great deal more work.

For example: Assume the following definitions:
Normal - Receivables is between 0 and 60 days old (99% chance of collection)
Distressed – Receivables is between 60 to 90 days old (95% chance of collection)
In Default – Receivables over 90 days old (50% chance of collection)

Now assume that of that $1M, there is:
$900k of Normal AR
$90k of Distressed AR
$10k of In Default AR

The expected value of those receivables would be:
Normal – $900k x 99% = $891k
Distressed – $90k x 95% = $85.5k
In Default – $10k x 50% = $5k
Total value = $981.5k

I’ve chosen very particular numbers to get the same result as above, but this also proves a point. If the structure of your debts is predictable enough (or with little variation), a percentage of receivables method is essentially a sort of weighted average of probable defaults.
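Here's a quick sketch of both methods in Python, using the numbers from the example above:

```python
def pct_of_receivables(gross_ar, default_pct):
    """% of receivables method: one blended historical default rate."""
    return gross_ar * (1 - default_pct)

def aging_method(buckets):
    """Aging method: buckets is a list of (amount, collection probability)."""
    return sum(amount * p for amount, p in buckets)

net_simple = pct_of_receivables(1_000_000, 0.0185)
net_aged = aging_method([(900_000, 0.99),   # Normal: 0-60 days
                         (90_000, 0.95),    # Distressed: 60-90 days
                         (10_000, 0.50)])   # In Default: over 90 days
```

Both land on $981.5k, which is the point: the blended percentage is a weighted average of the aging buckets.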

Monday, April 19, 2010

BATNA and Sharpe Ratios in M&A

While I missed the Negotiations class my classmates took for the Middle East Study tour, I was fortunate enough to have taken a Negotiations class at the Analyst Exchange in NYC where they explained concepts like BATNA (Best Alternative to a Negotiated Agreement) in a simulated negotiation environment.

Also, in ITP, we talked about the model for "rational experimentation", that is to say, the formula describing the trade-off between the probability of successful outcomes and the risk and initial investment required. (It looks suspiciously similar to an NPV calculation because it uses the same mathematical components, with a probability of success superimposed on the cash flows, similar to how the CFA curriculum teaches accounting for risk.)

With my last post on M&A and splitting synergies with the target's shareholders in mind, I got thinking about what would be "rationally" fair in M&A negotiations. I thought about it and decided it might be a good idea to integrate the thoughts from the post below with the idea of a Sharpe Ratio (or more exactly, Roy’s Safety-First Criterion, where we use a “minimum return” rather than the risk-free rate).

S = (E[R] - Rf) / sigma

Where:
  • E[R] is the expected return of the project
  • Rf is the risk free rate (or in Roy's SFC, minimum return)
  • sigma is the standard deviation of the investment
Obviously, there are some of the same undertones we learned from the CML and CAPM. While I was thinking about Synergies and Premiums analysis of an M&A deal, it struck me that while there is some "risk" in the total Synergies achievable, the Premium is paid in advance and essentially risk free. Immediately, some of the terminology from the previous post rang a bell with regards to the Sharpe Ratio (from CFA Level I).

I would propose that the Sharpe ratio calculation can be used in an analogous manner for an M&A deal with synergies. For example:
  • Expected Returns --> Synergies
  • Risk Free Rate --> Premium
  • Sigma --> some sort of volatility related to success of M&A deals to achieve expected returns

In fact, you can take this a step further and get:

  • Synergies / EV as a proxy for M&A incremental ROA
  • Premium / EV as a proxy for M&A minimum return

S = (Synergies – Premium) / (EV * sigma)

The formula would then calculate something very similar to marginal excess ROA or value creation per unit risk by the deal. Besides helping you understand your BATNA, this metric might also help you select acquisition targets from a financial perspective.
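As a sketch (the deal inputs below are invented for illustration):

```python
def ma_sharpe(synergies, premium, ev, sigma):
    """Proposed Roy's Safety-First analogue for a deal: excess value
    creation (synergies less premium paid) per unit of deal risk."""
    return (synergies - premium) / (ev * sigma)

# Hypothetical deal: $300M synergies, $100M premium, $1B EV, 20% vol
score = ma_sharpe(synergies=300, premium=100, ev=1000, sigma=0.2)
```

A higher score means more expected value creation per unit of risk, so the metric could rank candidate targets the same way a Sharpe ratio ranks portfolios.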

Monday, March 29, 2010

Gravity as an Analogy to Globalization

Our professor just used one of the most clever analogies for international trade I've ever seen. It's surprising how much the physics of gravity can model relationships involving size and proximity.

The formula for the physics of gravity is:

Force = Gravitational Constant x Mass 1 x Mass 2 / Distance ^ 2

In this analogy:
  • Force -> Strength of trade relationship
  • Gravitational Constant -> Trade coefficient <-- trade barriers / regulations / tariffs?
  • Mass 1 -> Size (GDP as proxy?) of country 1
  • Mass 2 -> Size (GDP as proxy?) of country 2
  • Distance -> Distance

Our professor, Blum, took it a step further and deconstructed the formula logarithmically to show how changing the value of each variable (pulling different strings) results in intuitive changes in the relationship. For example: decreasing the distance between countries increases trade between them. He even quoted his own research (2004). This is his criticism of the idea that the world is truly "flat".

Imagine the game theory implications also. If you could use this relationship to predict how countries would trade and grow, you could build a model with multiple components (countries) to see how they'd develop.
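The analogy translates directly into code. This is just the formula above with the trade labels swapped in; the inputs in the usage line are made up:

```python
def trade_flow(gdp_1, gdp_2, distance, trade_coefficient=1.0):
    """Gravity model of trade: Force = G * M1 * M2 / D^2.

    gdp_1, gdp_2: sizes of the two economies (GDP as proxy)
    distance: distance between them
    trade_coefficient: stand-in for barriers / regulations / tariffs
    """
    return trade_coefficient * gdp_1 * gdp_2 / distance ** 2
```

Because distance enters squared, halving the distance between two countries quadruples the predicted strength of the trade relationship, which is exactly the intuitive "pulling strings" behaviour described above.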

So... It turns out that when Roger Martin tells us that Rotman has a world class research faculty which impacts the material we learn in our classes, he certainly wasn't lying.

Wednesday, March 24, 2010

Why isn't my deal accretive?

I'm building an M&A model for two firms in the same industry and I noticed that the deal I was modeling wasn't accretive. However, the PE for the target was lower (marginally) than the PE of the acquirer and I was using a capitalization structure that was similar for both (both had about the same debt to equity ratios implying similar capital structures). Both also paid about the same interest rate on their debt.

According to the simplification behind a common interview question, as I mentioned before: buying a high-return equity with a low-return equity means you should get to "keep the difference". So in this case, when I modeled an acquirer with an implied cost of equity lower than the target's, I couldn't figure out why my model was telling me the deal was dilutive!

Turns out, I had forgotten about my asset write-ups. What I had done was allocate 25% (not sure if this is a reasonable number - I don't have practical experience yet and I also don't have intimate / insider knowledge of the company being modeled) of my goodwill to writing up intangible assets. What does that mean?

Often a company develops intangible assets (brand value, patents). Companies are generally not allowed to record the value of their own internally developed intangibles on the books (because this would be a very subjective exercise). However, when a company is purchased, the value above book value is recorded as goodwill. The acquirer can further allocate portions of this goodwill (not sure of the legal or accounting regulations, although I'm sure there are plenty) to intangible assets on the books and amortize them over time.

So what happened in my model? I merged two companies that had similar capital structures and costs of capital (with the target returning *slightly* more), but these synergies were being offset (at least temporarily in the short run) by the increase in D&A expense due to the write up of intangible assets resulting in an apparently dilutive deal.

Note, however, that for an owner with "foresight" (that is to say, not earnings focused), having a higher write up value increases the D&A expense which results in an increase in cash flow (from the tax shield of a non-cash expense) and also reduces debt and interest payments in the long term (model assumes a sweep with a portion of debt financed in a revolver). That is if you can stomach the low to negative earnings results in the short run. The deal would look more dilutive in the short run, but actually be much more accretive in the long run.
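A minimal sketch of that effect (the numbers are illustrative, not from the actual model): the amortization of the write-up drags earnings down, while the same non-cash expense shields taxes and helps cash flow.

```python
def deal_impact(synergies, writeup, amort_years, tax_rate):
    """Yearly after-tax earnings vs. cash impact of an intangible
    write-up amortized straight-line against deal synergies."""
    amort = writeup / amort_years
    earnings = (synergies - amort) * (1 - tax_rate)  # can be negative: "dilutive"
    cash = earnings + amort                          # amortization is non-cash
    return earnings, cash
```

With $5M of synergies against a $100M write-up amortized over 10 years at a 30% tax rate, earnings come out negative while cash flow stays positive: dilutive on paper, accretive in cash.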

Tuesday, March 23, 2010

Operating Leverage

Anita McGahan once explained to us the nuances in a decomposition of the DuPont formula. Where:

ROE = ROA x FLA

In non-math terms: The profitability of a company is a function of its operating strategy (ROA) and its financial strategy (FLA).

We've been looking more at this topic (focusing on Operating Strategy) in Operations Management and were introduced to a very interesting idea: Operating Leverage.

Operating Leverage, put simply, is loading up fixed costs to reduce variable costs. Another way of looking at it is similar to capitalized leases versus operating leases. Like financial leverage, increasing operating leverage also increases risk, but increases potential reward.

While debt provides the lever in the financial leverage analogy (and interest expense provides the potential downside), in operating leverage the lever is fixed cost (and sunk cost is the potential downside).

If the volume demanded falls short of the break-even amount, there is a significant loss. If the volume demanded is higher, there is a relatively amplified effect on profit through the cost side of the equation.

Excluding the effect of taxes, a basic formula for profit is:

Profit = Revenue - Costs
Profit = (P x Q) - (FC + VC x Q)
= Q (P - VC) - FC

Break even (BE) is when Profit = 0 so

FC = Q (P - VC), where P - VC is contribution margin, CM (profit per unit sold)

Therefore the break even quantity, Q, is defined as:
Q = FC / CM
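The derivation above reduces to two one-liners (the numbers in the usage line are made up):

```python
def break_even_quantity(fixed_cost, price, variable_cost):
    """Q* = FC / CM, where CM = P - VC is the contribution margin."""
    return fixed_cost / (price - variable_cost)

def profit(q, price, variable_cost, fixed_cost):
    """Profit = Q(P - VC) - FC."""
    return q * (price - variable_cost) - fixed_cost
```

For example, with $1000 of fixed costs, a $5 price and $3 variable cost, break-even is 500 units, and profit at exactly that volume is zero, as it should be.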

Using financial leverage as an analogy, I would suggest that shifting costs only truly increases operating leverage if the BEQ (break-even quantity) increases, i.e. risk increases. This only happens if the percentage change in FC is greater than the percentage change in CM (very similar to elasticity).

I believe this would be analogous to a type of operating accretion? In finance, deals are accretive if the cost of capital of the source is cheaper than the cost of capital of the use (buying high return instruments with low return instruments) and is the foundation of financial leverage.

Also, because CM is defined as P - VC, there is an inherent leverage relationship on the cost side, in the same way a commodities-based company has leveraged exposure to its underlying commodity price. If CM is anchored on one end (P), a change in VC has an amplified effect.

Tuesday, March 16, 2010

Managerial Accounting / Operations - Capacity Management

One interesting topic which has surfaced in our introductory classes of Managerial Accounting (this morning) and Operations (yesterday) is the idea of capacity management. This is a topic I've been very interested in for a variety of reasons, particularly focusing on the idea of stock-outs and capacity planning.

For example, the ideal scenario is to create *just enough* inventory to satisfy the period's needs. Creating any more (assuming a perishable good) results in inflated costs related to waste and/or inventory carrying costs. Creating any less results in lost revenue related to stock-outs.

However, in real life, it is unrealistic to assume perfect inventory planning all the time, so chances are there will be some days with over stock and some days with stock-outs.

The basic formula for profit is: Profit = Revenue - Costs

and

Profit Margin = Marginal Revenue - Marginal Cost or Marginal Revenue - Variable Cost

We know that we will incur some sort of inefficiency or uncertainty cost in the form of over / under stocking as mentioned above. However, to minimize this "capacity cost", we have to understand how operations affect these costs. For example:

A bakery sells donuts for $1.00. Donuts cost 10c to make. Over-stocking therefore costs 10c per donut in wastage, while a stock-out costs 90c per donut in lost margin ($1.00 of revenue less the 10c you never spent making it). That's a 9 to 1 cost per unit on either side of the ideal capacity target. Let's say on any given day, average sales are approximately 1000 donuts.

To minimize the cost side of the profit equation, we have to look at the probability of capacity distributions above and below the target.

Let's make a HUGE assumption (for simplicity) and say that there is a uniform distribution about the target (not normal, but uniform). Let's say there is a 10% chance of the actual daily sales being:
  1. 950
  2. 960
  3. 970
  4. 980
  5. 990
  6. 1000
  7. 1010
  8. 1020
  9. 1030
  10. 1040

How many donuts should the bakery produce?

I would propose that you should overlay the capacity with the associated cost of capacity management. What do I mean? If you produced 1000 donuts and only sold 950, your capacity-related costs would be the amount of capacity variance times the cost per unit of variance. Generally this would be expressed as:

Capacity Related Cost = Capacity Variance x Cost per unit of Variance

So in this case:

Capacity Related Cost = (1000 - 950) x $0.10

= $5.00

What about baking 1000 donuts and then selling all 1000, but having an additional 20 donut customers who go unserved? Then:

Capacity Related Cost = (1020 - 1000) x $0.90

= $18.00

Notice that a much smaller variance on the stock-out side (20 donuts versus 50) has a much larger effect on Capacity Related Costs. This reflects the profit margin (profit from one lost sale = marginal revenue - variable cost = 90c). That is to say, it is much worse to miss the sale of 1 donut than to have 9 donuts go stale.

To optimize planning (create a capacity level which will optimize profits) it would make sense to minimize the Capacity Related Costs. Since we have the probabilities of the capacity distribution and the associated costs with under production, what target capacity creates the minimal expected capacity related cost? Below is a chart outlining the capacity related costs for given target and actual production levels.


You'll notice that the optimal production level is actually 1040 or 1030. This sort of makes sense as the costs for missed sales are so high. Generally, because of the structure of the model, there are two major factors affecting the end result for capacity planning:

  • Volatility and variability of demand
  • Profit margin (difference between cost of wastage and cost of lost sales)
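The donut example is small enough to brute-force. This sketch enumerates every production target against every demand outcome using the costs from the example:

```python
# Donut example from above: demand is uniform over ten outcomes,
# overstock wastes 10c per donut, a missed sale loses 90c of margin.
demands = list(range(950, 1050, 10))          # 950, 960, ..., 1040
OVERSTOCK_COST, UNDERSTOCK_COST = 0.10, 0.90

def expected_cost(target):
    """Expected capacity-related cost of producing `target` donuts."""
    total = 0.0
    for d in demands:
        if target > d:
            total += (target - d) * OVERSTOCK_COST / len(demands)
        else:
            total += (d - target) * UNDERSTOCK_COST / len(demands)
    return total

best = min(demands, key=expected_cost)
```

The minimum lands at 1030/1040 (the two targets tie in expectation at $4.50 per day), matching the conclusion from the chart above: with lost sales this expensive, you bake near the top of the demand range.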

Saturday, February 20, 2010

Equity Near Bankruptcy (or NPV = 0) Behaving as Call Options

This might be one of the most brilliant finance ideas I've ever seen, taught a few days ago by our finance prof. I've always been interested in options, thinking about how they behave and how to value them (and with the current financial crisis, I have been taking more looks at bankruptcy).

First consider an oil company which can extract oil out of the ground for $70 per barrel with 1M barrels in the ground. The current cost of oil is $60. It costs more to get the oil out of the ground than it does to sell it on the open market, so the project is negative NPV right?

Well what happens if the oil prices rise to $80 a year from now? Then with a return of 10% (assume that it takes a year to get the oil out), you can make $10 per barrel on 1M barrels. The NPV works out to be about $9.1M.

But there is some inherent risk in this position, which relies on the price of oil moving up. Sound familiar? It is the exact same behaviour as a call option. If the price of oil drops, the land is worth nothing, but if the price of oil appreciates, the value of the land appreciates accordingly. The analogy holds up if you replace Exercise Price with Extraction Cost.

Here is another example of option like behaviour: Companies near bankruptcy.

Scenario 1: Healthy
Net Debt = $5M
Enterprise Value = 11M (Enterprise value calculated based on DCF)
Market Cap = 6M

Scenario 2: Near Bankruptcy / Highly leveraged:
Net Debt = 5M
EV = 6M
Market Cap =1M

Scenario 3: Bankruptcy
Net Debt = 5M
EV = 4M
Market Cap = 0

Because of the nature of capital at risk for corporations, the equity cannot fall below zero. A company in this position might also take on excessive risk (deliberately stir volatility on extremely risky projects) because there is nothing to lose.

However, in the absence of that, a company's equity at or near bankruptcy will behave much like a call option. Because of this relationship, a vulture fund could potentially use the Black-Scholes model as a valuation metric for the time value of the equity.
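As a rough sketch of what that valuation might look like for Scenario 3, treating enterprise value as the underlying and net debt as the strike. The rate, horizon and volatility below are invented; a real analysis would argue over every one of them:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes call value. For equity-as-option: S ~ enterprise
    value, K ~ face value of net debt, T ~ time until debt comes due."""
    d1 = (math.log(S / K) + (r + sigma**2 / 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

# Scenario 3 from above: EV = $4M, net debt = $5M, yet with a year of
# runway and 60% asset volatility the equity still has time value.
equity_value = bs_call(S=4.0, K=5.0, T=1.0, r=0.05, sigma=0.6)
```

Even though the intrinsic value is zero (EV is below net debt), the model assigns the equity a positive value, which is exactly the call-option behaviour described above.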

Wednesday, February 17, 2010

Money and GDP Multipliers

I love geometric series. It describes so many natural phenomena especially as it relates to finance and economics. For example:

GDP multiplier and Marginal Propensity to Consume (Save)
Marginal Propensity to Consume (MPC) is, for every additional dollar of income, how much people will spend. Marginal Propensity to Save (MPS) is the opposite: for every additional dollar of after-tax income, how much people will save. By definition:

$1 = MPC + MPS

Different cultures will have different MPC and MPS. Americans are notorious for having high MPC (bordering on higher than $1, using financial instruments like credit cards and lines of credit to boost short term liquidity). Japanese are stereotypically savers in contrast.

However, let's assume a culture with MPC of 40%.
  1. A spends $100 on B.
  2. B receives $100 and spends $40 (40% of $100) on C.
  3. C receives $40 and spends $16 on D.
  4. D spends $6.40 on E etc. and the process continues.
Look familiar? It should. This pattern can be described as an infinite geometric series (the same formula used to describe a perpetuity in DCF valuation).

For a GDP multiplier, the initial amount is $1 by definition. The "discount rate" or rate of decay is related to the MPC. Recall (Using the same math trick for geometric series):

GDP Multiplier = $1 + $1 x MPC + $1 x MPC^2 + ...
MPC x GDP Multiplier = $1 x MPC + $1 x MPC^2 +...
(1 - MPC) GDP Multiplier = $1
GDP Multiplier = $1 / (1 - MPC)

But recall: $1 = MPC + MPS
MPS = $1 - MPC

Therefore:
GDP Multiplier = $1 / MPS

Money Supply Multiplier and Reserve (Lending) Ratios

This is EXACTLY the same case for Money Supply and Bank Reserves. A bank (by policy or regulation) has a reserve ratio (RR). That is, for every additional $1 in deposits, it keeps a given percentage and lends out the rest. Let's also define lending ratio (LR) as the complement and by definition:

$1 = Reserve Ratio + Lending Ratio

Imagine "the bank" (representing all banks in the economy) has a reserve ratio of 20%.
  1. "The Bank" receives a $100 deposit and lends out $80.
  2. The $80 it lends out to "the Economy" (representing all depositors and borrowers) takes the $80 and "uses" it and it is redeposited into the Bank.
  3. With the new $80 deposit, the Bank lends out $64.
  4. The borrower uses it and it is redeposited into the bank.
  5. The bank receives $64 and lends out $51.20 etc and the process continues.

Again, this is a pattern described by the same concept and the same formulas apply:

Money Supply Multiplier (MSM) = $1 + $1 x LR + $1 x LR^2 + ...
LR x MSM = $1 x LR + $1 x LR^2 +...
(1 - LR) MSM = $1
MSM = $1 / (1 - LR)

But recall: $1 = RR + LR
RR = $1 - LR

Therefore:
MSM = $1 / RR
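Both derivations are the same geometric series with different labels, which is easy to verify numerically by summing the series term by term:

```python
def geometric_multiplier(recirculation_rate, terms=1000):
    """Sum 1 + r + r^2 + ... explicitly, where r is the fraction of
    each dollar that goes around again (the MPC for the GDP
    multiplier, or the lending ratio for the money multiplier)."""
    return sum(recirculation_rate ** n for n in range(terms))

gdp_multiplier = geometric_multiplier(0.40)    # MPC = 40% -> 1 / MPS = 1 / 0.60
money_multiplier = geometric_multiplier(0.80)  # LR = 80%  -> 1 / RR  = 1 / 0.20
```

The brute-force sums converge to 1 / 0.60 ≈ 1.67 and 1 / 0.20 = 5, matching the closed forms $1 / MPS and $1 / RR derived above.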

Notes:

Some key points about this formula: notice that the multiplier effect is always greater than the initial amount, ASSUMING that the reserve (saving) ratio is less than the total (you don't reserve or save all of it). So when a dollar is spent in the economy (or lent out), the effect on the supply is greater than one dollar.

Also note that for odd values of reserves and savings (e.g. Americans spending more than $1 by borrowing), you get a negative MPS and therefore a negative multiplier, which is a nonsense result (in a similar manner to my contest question that Chad got right). Whenever you see nonsense numbers (numbers that tell "stories" that don't make sense), it should act as a red flag to reinvestigate the initial assumptions of the model.

Sunday, February 14, 2010

[Operating] Working Capital - What's the Difference?

I was working with a buddy on a financial model recently and this came up as an issue. It's very confusing because Working Capital is defined by short-term (current) assets and liabilities, and it *seems* synonymous with Operating Working Capital, which it is not. First, the definitions:

Working Capital is defined as:
WC = CA - CL
CA - Current Assets
CL - Current Liabilities

However, when doing M&A or LBO's we aren't concerned with Working Capital as much as we are with Operating Working Capital. Why?

Cash and short-term debt instruments are actually financing components and aren't included in Operating Working Capital (OWC). In an acquisition, the target firm's capital structure is usually zeroed out (unless there is a debt rollover clause) and the capital structure used to purchase the company is plugged in (including goodwill, recognisable intangible assets etc).

So Operating Working capital has some adjustments:
OWC = CA - CL - Cash + Short Term Debt

It's the same idea as excess cash rather than cash in Enterprise Value.
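In code, the adjustment is just the two subtractions (the balance-sheet numbers in the test are made up):

```python
def working_capital(current_assets, current_liabilities):
    """WC = CA - CL."""
    return current_assets - current_liabilities

def operating_working_capital(current_assets, current_liabilities,
                              cash, short_term_debt):
    """OWC strips out the financing items: cash from the asset side,
    short-term debt from the liability side.
    OWC = (CA - Cash) - (CL - STD) = CA - CL - Cash + STD."""
    return (current_assets - cash) - (current_liabilities - short_term_debt)
```

With $100 of current assets ($20 of it cash) against $60 of current liabilities ($15 of it short-term debt), WC is $40 but OWC is $35, and it's the $35 that matters for the M&A or LBO model.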