Showing posts with label Technology. Show all posts

Sunday, January 16, 2011

[Rotman] 5 Great Speakers at Rotman

[Rotman Series: 1, 2, 3, 4, 5]

Jaime is a part-time student in the MBA program at Rotman. He has worked in the sports media industry since 2002 and is currently Manager, Digital Media for the Canadian Football League. He and I went on the Latin America study tour in May last year. He was gracious enough to do a write-up for me on his favourite guest speakers, which follows:

By Jaime Stein

One of the first things you notice when you obtain an e-mail account at the Rotman School of Management is the sheer volume of e-mails from a guy named Steve. At first it can be overwhelming, but if utilized wisely, it can be your ticket to an exclusive roster of speakers. Steve and his team are the masterminds behind the A-list speakers that regularly visit the Rotman School.

The hardest choice I have to make each week is which speakers I will NOT listen to. This is a good problem to have because choice is always welcome when working full time and attending school part time. I simply don’t have the time to listen to every speaker that passes through Rotman. However, in almost three years, I have been privileged to listen to close to 100 guest speakers.

Most of the speakers that I have seen have delivered outstanding talks, but for the purpose of this blog I present five of the best speakers I have listened to during my time at the Rotman School:

1. Paul Martin – Former Prime Minister of Canada

Imagine you are in your second semester of a three-year MBA degree and you are studying Macroeconomics. A large part of the course centres on Canada’s macroeconomic policies during the 1980s and 1990s; specifically, the country’s battle with debt and inflation. One day you find out that the man behind the plan to battle inflation will be speaking at your school. That would be like a young basketball player having the opportunity to shoot hoops with Michael Jordan and ask him for tips.

Fortunately for our macro class, Mr. Martin came to speak at the Rotman School one morning and for about an hour took us through his plan that brought Canada back from the brink in the mid-‘90s. Following his talk he took time to speak to each of us and share some more personal insights and war stories from his time as both Finance Minister and Prime Minister. This was one of the great days at school that left me wanting to explore a subject further.

2. Isadore Sharp – Founder, Chairman and CEO of Four Seasons Hotels and Resorts

One of the main selling points of the Rotman School is its focus on Integrative Thinking – the theory coined by the current Dean, Roger Martin. In one of his books on Integrative Thinking (The Opposable Mind), Martin focuses on the story of Isadore Sharp and his path to building the greatest luxury brand of hotels in the world. In many of our classes we study the Four Seasons Model for customer service and other best-in-class management techniques. We were fortunate to have Mr. Sharp visit the Rotman School and explain firsthand how he went from one Four Seasons hotel in 1961 in Toronto to operating a chain of approximately 100 properties worldwide.

For anyone with an ounce of entrepreneurial spirit this was a motivating discussion. You could see the passion, courage and drive that Mr. Sharp possessed to launch his vision and stay true to it along the way. Any successful company will create a competitive advantage; however, these are eventually replicated by the competition over time. When people are your competitive advantage, it becomes truly sustainable, as Mr. Sharp has proven. While other hotels provide outstanding service, none have been able to match the formula created by the Four Seasons.

3. Rahaf Harfoush – Digital Strategist and Author

It was November 27, 2008 when Ms. Harfoush spoke (for the first time, I believe) at the Rotman School. There was lots of hype surrounding her talk that day because Barack Obama had recently been elected President of the United States and Ms. Harfoush was a part of his wildly successful digital media campaign. I also remember this talk vividly because it was one day later, on November 28, 2008, that I joined Twitter. A lot in my personal and professional life has changed since that defining moment – all for the better.

The topic of conversation at Rotman that day was “Applying Barack Obama’s Social Media Strategy to Your Brand’s Communications Needs”, and it was Ms. Harfoush’s talk that became the inspiration for a lot of what we have done at the Canadian Football League over the past two seasons in the social media realm. To me, this is what an MBA program is about: an exchange of ideas to help stoke people’s imaginations and potential. I’m glad I made time to attend her talk that day.

4. Michael Lee-Chin – Founder and Chairman of Portland Holdings Inc.

In October 2009 I attended the Rotman School MBA Leadership Conference in downtown Toronto. It was a star-studded event with speakers like George Butterfield, Co-President of Butterfield & Robinson; Beth Comstock, CMO of GE; Don Morrison, COO of Research In Motion; Robert Deluce, CEO of Porter Airlines; and Michael Lee-Chin, Founder and Chairman of Portland Holdings.

Mr. Lee-Chin is one of the most engaging speakers I have had the pleasure of listening to in person. He spoke for about an hour on a variety of subjects, including how to create wealth: he focuses on a small number of blue-chip businesses with long-term growth potential, but was adamant that you know and understand where you are investing your money. One quote from Mr. Lee-Chin that sticks with me is, “If you don’t understand what you own, are you investing or speculating?” This is important advice that too many people continue to ignore in this day and age.

5. Jay Hennick – Founder and CEO of FirstService

Mr. Hennick spoke to our class recently at the Rotman School. He runs FirstService, a company that provides services in commercial real estate, residential property management and property services and generates about US $2 billion in annualized revenue. Mr. Hennick told us his amazing story of how he achieved his current standing atop a multi-national company. He got his start with a company he ran as a tenth grader that brought in an income of $200,000. Yes, you read that correctly – he was in grade 10.

His key message was focused on people management, which he believed was the differentiating factor in the success of his current company. His “Partnership Philosophy” states that impact players must have more than a salary and bonus invested in the business; they must have an equity stake. His company focuses on aligning employees’ interests with shareholders’ in building long-term value. This was both fascinating and eye-opening for most students, who believed this would be hard to do in a company of 18,000+. Yet FirstService continues to succeed. Listening to Mr. Hennick and his passion for success was rewarding.

As you can see, there are some overarching themes from these speakers such as focusing on people and establishing long-term strategies. But ultimately, each of these speakers is among the leaders in their field and that is why I feel fortunate to have spent the past three years at the Rotman School. The access to these great minds alone was worth the price of admission – well almost!

Wednesday, May 5, 2010

Nextel Institute

[LAIST Tour Begins, Fazenda Tozan, Churrascaria – Nova Pampa, Port of Santos, Deloitte, Embraer, Natura, Gol de Letra, Bom Bril, Agencia Click, Nextel Institute, May 6, Rio, Rio Weekend, Petrobras, PREVI]

The Nextel Institute is an NGO that provides additional training for at-risk and vulnerable youth aged 16 to 24. It is funded by Nextel and works in partnership with companies like Agencia Click.

They have had explosive growth, placing 70% of their 124 participants in 2008 and 60% of 390 in 2009, more than doubling the number of youth placed even as the placement rate dipped slightly.

This tremendous growth has caused this foundation to look at developing new locations and partnering with additional companies to absorb the additional youth looking for employment.

Agencia Click


Much of this visit was confidential (as is the nature of high tech, I suppose), but they discussed some interesting and fairly clever ideas with us.

For example: a product or service needs to be irresistible rather than big; online media has changed our perception of advertising from a 360-degree view around us to a 365-day world requiring open, continuous and on-demand engagement; and creating time, rather than buying time, is the new strategy.

Wednesday, September 30, 2009

Constant Dissatisfaction: Google's Approach to Understanding New Media

Jonathan Lister, Country Manager for Google Canada came to give a talk at Rotman about what he calls "Constant Dissatisfaction: Google's Approach to Understanding New Media".

He highlights 3 major changes in technology
1. Ubiquitous Access
2. Cheap Storage
3. Falling Costs of Production

A few interesting points he raised:
- Google Wave was released today. It's a new product which integrates many social features like photos and comments. My initial reaction was that it looks an awful lot like Google's version of Facebook.

- there are 20 hours of video uploaded to YouTube every 5 minutes. There is a shift towards paid premium content which has major implications for the media and advertising industry.

- Google's search page has a unique web metric: "Get people OFF our website as fast as possible". They recognize that they are always "one click away from losing market share" and as a result have four focuses for their search engine:
1. Size of index
2. Speed
3. Relevancy
4. User Experience

- they have developed Ad Exchange, a sort of stock exchange for advertising (spot prices). They hope to improve on what they perceive as the inefficiency in display ads.

Google's DNA
1. Innovation, not Instant Perfection - Launch early and often
2. Focus on the User and All Else Will Follow
What is scarce? User patience
3. You don't have to be at your desktop to need an answer
4. A License to Pursue Your Dreams - 20% Projects
Google News was a needs-based project that organically spawned from the events of Sept 11
5. Data is Apolitical
6. Morph Projects, Don't kill them
7. Share as much information as you can
8. Make money without being evil
9. Creativity loves constraints

Prognosticate - 5 Google Myths

1. Big beats small: fast beats slow
2. You need all 4 P's: for many brands there are now just 3 P's (not price, and promotion is less relevant given the free flow of info; see the YouTube Symphony), place is reduced by globalization, and product has the starring role
3. Mass marketing is impersonal: today it is possible to engage 1:1 - on a mass scale
4. Marketing can't be accountable: marketing is the new finance (as in the 60s / 70s, tied to actions and responses); quants are starting to move from Wall St. to Madison Ave.
5. Management comes from the top: - Wisdom of crowds is creating a new bottom-up style of management. For example, Doodle for Google - a project getting kids to design Google's logo: Egypt orphanages and the "my Egypt" project

One of the notable points he mentioned is that creativity thrives with constraints. It is a counterintuitive argument: he explained how being faced with constraints requires you to create unique solutions to overcome those challenges.

I apologize for the format of my notes, but there were so many interesting points that it was tricky capturing all of them.



Friday, May 22, 2009

Business Continuity Series, pt 5 - Implementing the Plan

Once all the homework has been done to understand the relationships and inter-dependencies, and once the plan has been put together, it's time to test and implement it.

At this stage there is a tricky conundrum. On one hand, in order to do a realistic test, planned outages are required for live services to ensure that systems will be resilient in the manner anticipated (reducing the shock of discovering additional failures during an actual disaster). However, deliberately causing outages is the last resort of any service provider.

Even in the best conditions, where service consumers are notified in advance with long lead times and everything goes according to plan, it is usually a heavily orchestrated event that consumes many non-revenue-generating resources.

Testing the plan should attempt to avoid being disruptive. As with any change management procedure, downtime should be kept to a minimum and attempts should be made to reduce the impact on live customers (usually translating into "off-peak testing", coming in late on a Saturday night or early Sunday morning).

The shutdown of highly technical and regulated services like nuclear power plants usually requires all hands on deck at the most ungodly hours of the night (colleagues of mine working in nuclear power remind me that their credo is "Never forget that you work in a very unforgiving industry").

In this stage, managers and professionals often discover more inter-dependencies, implying that their plans are either incomplete or not as robust as they had anticipated. This is where gap analysis comes into play to further develop the plans.

Even in the event of an ideal and perfect implementation of a BCP plan, there is still the requirement of ongoing vigilance. This is because, as the environment changes, assumptions become obsolete and suddenly cause vulnerabilities to appear in the system. At this point, BCP projects evolve into ongoing BCP maintenance programs.

Thursday, May 21, 2009

Business Continuity Series, pt 4 - Building a BCP Plan

Before you can build a plan you need to understand what value you are deriving out of the system. Unfortunately, in the real world, Business Continuity Program planning is constrained by resource allocation like any other project, so understanding the value derived from the program is essential. It is possible to quantify the problem by understanding:
  • Frequency of outages
  • Average duration of outage
  • Time value of outage
  • Value of data lost
  • Opportunity cost of capital investment in plan
Total cost of outages = Frequency x Duration x Time Value
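As a minimal sketch of this arithmetic (the figures below are invented for illustration, not taken from any real outage data):

```python
def total_outage_cost(frequency_per_year, avg_duration_hours, value_per_hour):
    """Total cost of outages = Frequency x Duration x Time Value."""
    return frequency_per_year * avg_duration_hours * value_per_hour

# Hypothetical inputs: 4 outages a year, 6 hours each,
# $10,000 of business value lost per hour of downtime
annual_cost = total_outage_cost(4, 6, 10_000)
print(annual_cost)  # 240000
```

Comparing that figure against the cost of a candidate recovery strategy gives a first-order budget justification.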

This basic consideration will give you a foundation for justifying budgeting more or less funds into your Business Continuity Program.

When you've arrived at a stage where you need to begin choosing a strategy, there are several categories of recovery strategies, each with an escalating financial and resource commitment and a proportional recovery / resiliency benefit:
  • Passive-Passive - Cold solution. New equipment may need to be ordered at the time of the event. Capital on-hand 'just-in-case'. Can be improved with planning (better use of capital). Essentially a "do nothing" solution. Probably manifests as a paper plan only with no physically available resources.
  • Active-Passive - Warm redundant systems - Literally: Turn-key or push button solutions. There is equipment ready but it is not currently in use. It is on hand and can be activated on short notice. This is usually because of technology or financial limitations.
  • Active-Active - Traffic is load balanced across multiple systems. Disrupted systems are by-passed and traffic is routed to different machines. Usually minor disruptions pass unnoticed. Only catastrophic events knocking out the entire system are noticed by users. The main concerns of an Active-Active system are costs and capacity. Problems generally only become visible when enough modules are knocked out such that the system is over capacity.
As usual, better plans usually cost more resources, however sometimes there are non-zero sum gains to be had. For instance, a Passive-Passive solution might be to have $5M allocated in the budget as "contingency" in the event of a disaster. Perhaps rather than have $5M budgeted as "contingency" you can employ $1M in capital expenditures to build resiliency into your processes. Although this investment will depreciate over time, it could potentially be better than keeping the capital idle and the economic loss of the internal rate of return (IRR) of $5M.
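To make that trade-off concrete, here is a rough sketch. The 8% opportunity rate and five-year straight-line depreciation are my illustrative assumptions, not figures from the text:

```python
def idle_contingency_cost(reserve, opportunity_rate):
    # Annual return forgone by parking capital 'just-in-case'
    return reserve * opportunity_rate

def resiliency_capex_cost(capex, useful_life_years):
    # Simple straight-line annual depreciation of the investment
    return capex / useful_life_years

idle = idle_contingency_cost(5_000_000, 0.08)    # ~$400,000 per year forgone
invested = resiliency_capex_cost(1_000_000, 5)   # $200,000 per year depreciation
print(idle, invested)
```

Under these assumed numbers, building resiliency into the process costs roughly half as much per year as keeping the contingency idle, before even counting the improved recovery posture.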

Also, systems which are heavily used or mission critical will require more active plans. For instance, if Google or 911 suffered any downtime, people would notice.

When putting together a plan there are other important considerations. For instance, is there a correlation between risk factors and support infrastructure? What is the geographical distance between my redundant systems, and what is the possibility of a single event knocking out both of my systems? Understanding and process mapping all interdependencies is paramount in any BCP endeavour.

Before you think it is too unlikely, recall the power outage in the summer of 2003 which knocked out power for Ontario and the North East USA. If your redundant system for Toronto was located in New York (or vice versa), on the assumption that locating in a different country was enough insulation and redundancy, this event showed that it sometimes isn't enough.

Wednesday, May 20, 2009

Business Continuity Series, pt 3 - Service metrics - What are your goals?

Although we try our best to avoid failures with methodologies and goals like six sigma (the idea that output from processes should be contained within six standard deviations, or approximately 3.4 failures per million), there are still some failures which need to be dealt with.

In the event of a system failure, there are two key metrics which are a good indicator of resiliency: Recovery Point Objective (RPO) and Recovery Time Objective (RTO).

RPO refers to the maximum amount of data you can afford to lose, measured by how old the most recent recoverable backup is. For instance, an RPO of 24 hours for a database server means that if there is a failure (server crash, hard drive failure, building burns down) then the data that is restored is at most 24 hours old (or in other words, all data created in the last 24 hours is lost as a worst case scenario). RPO describes how current the information in your backup from auxiliary sources is.

RTO refers to the amount of time the process or service is unavailable (the time until service resumes). An RTO of 48 hours for cable television means that if a cable TV signal is disrupted (damaged line, transmitter failure, etc.) then it will take the cable company 48 hours to restore service to your house.

The counterbalance to achieving excellent RPOs and RTOs is cost. Generally speaking, the shorter the latency required for RPO and the shorter the delay for RTO, the more costly the solution, with cost growing roughly exponentially as those targets shrink.

Using a project management framework, the RTO of system recovery is based on the critical path of recovering services (which in turn is heavily dependent on the system module with the longest RTO). And since most data is useless without its proper context, the weakest RPO in the system usually reflects the RPO of the system in general (a series relationship).

Email Example: A consultant backs up their email locally on their laptop every month, and their office mail server experiences an outage for 3 hours. The RPO in this scenario is one month (in the worst case, everything since the last laptop backup is lost) and the RTO is however long it takes the IT staff to restore email service (3 hours).
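Those series relationships can be sketched as a pair of helper functions (my own illustration, with a hypothetical three-module service, not a DRI formula):

```python
def system_rto(module_rtos_hours):
    # Critical path: the slowest module to recover gates the whole system
    return max(module_rtos_hours)

def system_rpo(module_rpos_hours):
    # Restored data is only as current as the stalest recovery point
    return max(module_rpos_hours)

# Hypothetical three-module service: database, app server, network link
rtos = {"database": 4, "app_server": 2, "network": 1}   # hours to restore
rpos = {"database": 24, "app_server": 0, "network": 0}  # hours of data at risk

print(system_rto(rtos.values()))  # 4: the database gates recovery time
print(system_rpo(rpos.values()))  # 24: restored data may be a day old
```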

Tuesday, May 19, 2009

Business Continuity Series, pt 2 - Parallel versus Serial Failure and Resiliency

Before we can delve into the world of business continuity, we need to understand the underlying logic of systems design and the probability mechanics of describing failure. Taking a systems approach to redundancy planning, let's look at the mathematics behind failure probabilities of parallel systems and systems in series.

First let's look at a system in series:
The system above contains three modules in series, each with an 80% success rate. Each is independent of the others. The success rate of the system is the joint probability of all three modules succeeding; in other words, in order for this system to work, you must traverse all three modules. The probability of success is as follows:

Success = 80% x 80% x 80% = 51.2%

Look familiar? It should. This is the exact same model I used for my post about the failure of communication between organizational levels and why smart people say stupid things, with CEOs being on the left and mid-level managers on the right.

Note that even though each individual module has a fairly high success rate (80%), each incremental module is another potential point of failure that drags down the overall success of the system. In series, all modules have to work in order for the system to work. This means that a system in series is vulnerable to single points of failure. If there is one point which goes down in the process, the whole system shuts down.

In human resources planning or even individual career development, being irreplaceable is identical to being a single point of failure.

Next let's look at a system in parallel:
The assumption here is that each module is interchangeable with any other. That is to say, if one system fails, the other systems will pick up the slack. Here each module is fairly mediocre with a 60% success rate (or a 40% failure rate). However, for the system to fail, all three modules have to fail simultaneously. The probability of that happening is the joint probability of all the failures:

Failure = 40% x 40% x 40% = 6.4%
Success = 1 - Failure = 93.6%

Notice that even while each individual component is not of particularly good quality, when they work together to ensure success they collectively cover for each other in the event of individual failures.
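The two calculations above can be written as a small sketch, using the same 80% and 60% module rates from the examples:

```python
from math import prod

def series_success(rates):
    # In series, every module must succeed: multiply the success rates
    return prod(rates)

def parallel_success(rates):
    # In parallel, the system fails only if ALL modules fail at once
    return 1 - prod(1 - p for p in rates)

print(series_success([0.8, 0.8, 0.8]))    # ~0.512 (51.2%)
print(parallel_success([0.6, 0.6, 0.6]))  # ~0.936 (93.6%)
```

Note how three strong modules in series underperform three mediocre modules in parallel, which is the whole argument for redundancy.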

This model is analogous to electrical circuits (and the idea of resistance and conductance):
  • Modules are equivalent to resistors (from the perspective of conductance), where conductance is a process channel.
  • Electrical current is work done.
  • Voltage differential is potential work waiting to be done.
Remember that the formulas for electrical circuits are analogous to fluid mechanics (if you come from a chemical or mechanical engineering background and feel more comfortable with those terms):
  • Modules are pipes
  • Water flow is work done
  • Pressure is potential work
With all these analogies, there are also problems associated with capacity. Although an individual failure might not disrupt a system with parallel components, if the system as a whole is operating at 90% capacity, the loss of one third of its capacity is a serious problem (the system is over capacity) and this will manifest in a variety of ways:
  • Unstable queue growth (work is coming in faster than you can process it)
  • Large (and growing) delay times (backlog)
  • Mechanical failures / server crashes / employee sickness (overworked)
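A toy simulation of the unstable-queue case (my own illustrative numbers): three modules at 90% utilization comfortably keep up, but losing one pushes arrivals above capacity and the backlog grows without bound:

```python
def backlog_after(arrival_rate, capacity_per_module, modules, periods):
    # Work arrives each period; whatever exceeds total capacity queues up
    backlog = 0.0
    capacity = capacity_per_module * modules
    for _ in range(periods):
        backlog = max(0.0, backlog + arrival_rate - capacity)
    return backlog

# 2.7 units of work per period into modules that each handle 1.0 unit
print(backlog_after(2.7, 1.0, modules=3, periods=100))  # 0.0 -- stable
print(backlog_after(2.7, 1.0, modules=2, periods=100))  # ~70 -- growing queue
```

With two modules, 0.7 units of unserved work pile up every period, which is exactly the unstable queue growth described above.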
In the next section, we will look at the goals of continuity planning, how to set goals and understand how to measure performance in an environment where an anticipated failure has occurred.

Monday, May 18, 2009

Business Continuity Series, pt 1 - Overview

Business continuity became an extremely hot topic after 9/11, and again with current worries about avian and swine flu. The question posed is this: "How resilient are your business processes to disruptions?" Whether it be a building fire, a crashed server or the loss of key personnel due to illness, companies need to know the inter-dependencies of related systems as well as the redundancies (or lack thereof).

The next series will look at the math and mechanics of business resiliency planning:

  • Part 1. Overview (This post)
  • Part 2. Parallel versus Serial Failure and Resiliency
  • Part 3. Service metrics - What are your goals?
  • Part 4. Building a BCP Plan
  • Part 5. Implementing the Plan

What is important is to differentiate between fear mongering and understanding real business risks associated with the operating environment and taking appropriate steps to mitigate them efficiently and effectively.

The material that will be covered in this series is a combination of engineering statistics principles coupled with business continuity planning as described by the Disaster Recovery Institute (DRI) as part of the Associate and Certified Business Continuity Professional level certifications (ABCP and CBCP respectively).

Tuesday, March 24, 2009

Superstructures in Social Networks

"We can save memory by storing the year as two digits instead of four" ~80's programmer
"640K ought to be enough [memory] for anyone" ~ Bill Gates, 1981

You always have to think of the consequences of your design...
Or, as one of my favourite webcomics, xkcd, puts it:

The most recent reincarnation of the limits of software design surfaced with Facebook's notorious 5000-friend limit. This is well above Dunbar's number (~150), a proposed theoretical natural limit to the number of useful social connections we can have.

Now I'm sure a Facebook software designer would argue (rightly) that having 5000 friends is probably some form of inappropriate use of Facebook (if not outright abuse), but it turns out that is exactly what happened when popular web personalities tried to use Facebook as a tool to connect with their readership. With skyrocketing success, suddenly Facebook's 5000-friend limit was a lot closer than people originally thought. Although most intentions were good (to keep closer contact with their fan bases), it turns out their success was too much for the Facebook framework.

Certainly, most of us who imagined internet fame didn't conceive of this type of double edged sword.

I was recently speaking with a colleague, a lead software developer for a popular Facebook application, who described the processing inefficiencies that managers don't seem to understand when they design systems: recopying entire databases, poor message protocols, etc. Even with the most efficient code, there was that nasty problem of becoming 'popular' and suddenly experiencing exponential network growth. In a system like Facebook, where connections are always 'two-way', this exacerbates the problem.

What does that mean exactly? Unlike Twitter, where connections are generally one-way (I follow you, but you don't HAVE to follow me), Facebook connections are all bidirectional: there is one connection shared between us. On Twitter, by contrast, each connection is one-way, and as such you can have one popular node (such as Barack Obama) with many connections in but far fewer connections out. What does this translate into?

I can follow Barack, but I'm pretty sure he doesn't care too much to follow me.

This simple assumption dramatically cuts down on the memory, processing and bandwidth requirements needed to provide the same level of basic service. But even if the technology permits it, are our minds too "primitive" to keep up? Or is it just physically impossible to reply and stay current at that volume on such an intimate level (versus the traditional mass communication channel models)?

I'm quite surprised that Facebook servers don't burst into flame each time someone logs into their website. At the time of writing, my Facebook boasts exactly 450 friends. From what I can see on my friends connections, this is hardly staggering. If you asked me to design a search algorithm that identified the latest activity of 450 people and then sorted them in order of time and relevance on my homepage in real time (a few seconds loading and transmission delay), I might have a heart attack (or a very large consulting contract). However, it becomes very clear (especially with their previous incarnation of "More / Less stories about..." design) that there is an inherent (unstated) hierarchy of friends. I would almost expect it to be a sort of "page rank of friends" that helps Facebook efficiently sort interesting stories for you.
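As pure speculation about what such a "page rank of friends" might look like (nothing here is Facebook's actual algorithm; the affinity weights and half-life are invented), a feed could score each story by friend affinity decayed by age:

```python
def rank_stories(stories, now, half_life_hours=24.0):
    """stories: list of (affinity, timestamp_seconds, text) tuples."""
    def score(story):
        affinity, ts, _ = story
        age_hours = (now - ts) / 3600.0
        # Exponential decay: a story loses half its score every half-life
        return affinity * 0.5 ** (age_hours / half_life_hours)
    return sorted(stories, key=score, reverse=True)

now = 1_000_000.0  # arbitrary 'current time' in seconds
stories = [
    (0.9, now - 3_600, "close friend, an hour ago"),
    (0.2, now - 600, "acquaintance, ten minutes ago"),
    (0.9, now - 3 * 86_400, "close friend, three days ago"),
]
print([text for _, _, text in rank_stories(stories, now)])
# close friend (1h) first, then acquaintance, then the stale story
```

Precomputing the affinity weights per user would let the site avoid scanning all 450 friends' histories on every page load, which may be roughly what the "More / Less stories about..." controls were tuning.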

Monday, March 23, 2009

Why I love YPBlogs

ypblogs.com
Salience:

Readers may have noticed that I've recently subscribed to various Blog directories in a narcissistic attempt to drive more traffic to my website, boost my page rank and generally solicit more feedback on my postings. I wanted to create a bigger audience for my material beyond my own networks in Facebook and Linkedin.

Causality:

Anyone who designs or uses webpages will be familiar with Google's PageRank formula; therefore, if you are looking for an effective method of boosting your rank, you need to incorporate it into how you seek links to build your organic ranking. However, there are more metrics for success on the web: for instance, the number of hits, comments and the "quality" of said metrics. Keeping these goals in mind, I've been evaluating some of the tools I use to promote traffic to my website, particularly focusing on blog directories, using a framework which contains these elements and their relationships.

Architecture:

Among these blog category services, my favourite would have to be YPBlogs (Young Professional Blogs). The reason? The short answer is that I get good quality hits from the site. But it's really because of the subtle differences in how the site is structured (to match my goals). Let's look at the details:
  • By virtue of the site's writers / target audience, it attracts a certain crowd - energetic, exuberant, intelligent and technology proficient young professionals
  • You are listed on the front homepage - Unlike some of the other categories where you are listed on the 97th page among 50 other blogs on that page alone
  • They enjoy a decent page rank
  • It's a free service
  • They aggregate most recent blog posts on their homepage making it easier for you to immediately hit the latest news (and conversely for people to hit your recent posts)
  • They have dynamic content which refreshes *constantly* attracting people to return often
  • Bottom line - You get good hits and comments - I have two blogs posted on their service and I swear that my traffic has doubled (Ok. Maybe my blogs are relatively new, and "doubled" isn't really that impressive, but it is certainly way more traffic than I'm getting from the other services - a more "apples to apples" comparison).
Resolution:

I think the most interesting point here though isn't what YPblogs has been able to do for my Blog, but rather, what it's been able to do for my blog reading habit. I've found it surprisingly difficult to find interesting blogs to read (that aren't major publications by writers who blog as a full time job). What I was looking for was independent blogs of individuals who simply wanted to express themselves or ideas and this was a great place to find material.

In the interest of full disclosure, I'm hoping to become a featured site on YPBlogs by writing this post; however, the outcome of that is uncertain, as this is a post they have not solicited and one I would have (or rather, have just) written anyway.

Friday, March 20, 2009

Developing Your Personal Brand



We've often heard horror stories of people who've been burned because of a blog post, Facebook photo or YouTube video of them doing something embarrassing: it gets discovered by their boss and they get canned over it.

People should realize by now that an unmanaged online presence in the public domain can be detrimental. However, the other side of the coin is that a well-managed online identity can bolster your image, especially as a recruitment tool.

While most applications for jobs and competitive opportunities limit your application materials (2-page resume / CV, cover letter, essays, etc.), it is beneficial to have a strong online presence to supplement them. Free of the restrictions of a standard application package, a website can stretch the valuable time a recruiter spends looking at "You" (your personal brand identity) from the standard 30 seconds before you are placed in the "blue bin" filing system to maybe those extra few moments in which they decide you're interesting enough to call in for an interview.

Some particularly good examples? Check out Jamie Varon's twittershouldhireme.com, which only started on March 9, 2009. Although she's placed all her eggs in one basket, so to speak, she's already got a fairly large following (she explains in an online interview that she was one of the most followed, uh... twitter-ers).

In her example, she had a unique and bold idea, great execution and strong social networking / advertising. Her story is in its interim stage as she is being called in for an interview by Twitter, but her fan base has its fingers crossed for good news.

Thursday, March 19, 2009

Twitter, Facebook and Software Modularity

I've just recently signed up for Twitter after receiving numerous invitations from friends and colleagues and, after watching several episodes of The Daily Show with Jon Stewart, figured I shouldn't let the technological gap between myself and US Senators grow too wide.

I've also added the Twitter app to update my Facebook status, as well as Twitterberry, an application to update Twitter from your BlackBerry. It occurs to me how much layering is now involved in social networking.
When I update my Twitterberry application, it updates my Twitter account which in turn updates my Twitter application on Facebook which updates my Facebook status. All this in less time than it takes for my Firefox browser to refresh my Facebook homepage.
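This kind of layering can be pictured as a chain of observers, where each service simply forwards the update to whoever subscribes to it. A minimal sketch of the idea (all class and service names here are invented for illustration, not actual Twitter or Facebook APIs):

```python
class Service:
    """A toy service that forwards each status update to its subscribers."""

    def __init__(self, name):
        self.name = name
        self.subscribers = []
        self.status = None

    def subscribe(self, other):
        self.subscribers.append(other)

    def update(self, status):
        self.status = f"[{self.name}] {status}"
        for sub in self.subscribers:
            sub.update(status)  # propagate the same update down the chain

# Wire the chain: Twitterberry -> Twitter -> Facebook app -> Facebook status
twitterberry = Service("Twitterberry")
twitter = Service("Twitter")
fb_app = Service("FacebookApp")
facebook = Service("Facebook")
twitterberry.subscribe(twitter)
twitter.subscribe(fb_app)
fb_app.subscribe(facebook)

twitterberry.update("writing about software modularity")
print(facebook.status)  # the single BlackBerry update has rippled through every layer
```

The point of the sketch is that each layer only knows about its immediate neighbour, which is exactly why the whole chain keeps working even as each individual service changes underneath.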

This is a testament to modular software design. However, as seamless as this progression appears to be, the recent updating of Facebook's "faceplate" for lack of a better word, brings up an interesting question when it comes to systems design.

Facebook was previously divided into several categories of communication: Status Updates, Messages, Wall Posts and (more recently) Comments. However, this new design begins to blur some lines, particularly between Wall Posts and Status Updates. While users may remember Status Updates beginning with "is", Facebook has quietly removed the "is" as the default precursor to your activity update and slowly blended things such that status updates now appear simply to be posts on your own wall. The same goes for sharing Notes or Links.

Now, this has some users up in arms over the user interface and online "form factor", but looking at my old Facebook for BlackBerry app ("old" by the online version's new standards, but still the "latest version"), I noticed that much of the context that made sense in previous versions doesn't make much sense at all anymore. It's a wonder the API still works as well as it does (legacy code probably not yet deprecated); however, it's only a matter of time before this application gets a major overhaul.

Software vendors and developers need to catch up with Web 2.0 (it's hardly "new" anymore as it's been around for years now) and understand that in the development life cycle, there are major issues associated with changing form factors and even templates when your services are so integrated and intertwined with other services.

Modularity was a nice-to-have when your software was standalone and you simply wanted to roll out new releases cheaply and quickly, or to pass code from one developer to another for outsourcing purposes. Now, with the interdependency of message protocols and databases, it has become a critical necessity.

Especially as enterprise customers begin to rely on social networking technologies such as Twitter and Facebook as part of their mobility strategy, software design for increasingly critical applications must be more robust.

Monday, March 9, 2009

Facebook: Privacy and Broken Business Models

Facebook originally started as an online community for university students, unlike MySpace, which was inherently considered more public domain. Only as recently as last year did Facebook start allowing the creation of "public profiles", which are still little more than short summaries of friends lists.

Even recently, Facebook had to retract changes made to their ToS because of IP issues associated with the changes. As they look for creative ways to make money from their services, they have to come to terms with the strategy on which they were originally modeled.

Because of this initial definition of their space, they are having difficulty leveraging their extensive network. Their network is larger than MySpace's, but their revenue is struggling to match their growth.

With an advertising model making up most of Facebook's revenue, users will notice that the interface has become plagued with irrelevant ads (IQ tests, get-rich-quick schemes and all the garbage we hated on other parts of the net).



They should be leveraging their "intimate" knowledge of us for more targeted advertising. This has already started to appear in its infancy as a form of social advertising: Your friend, X, has joined the Y group or become a fan of Z product (with the implied suggestion that maybe you'd also be interested). There is potential here to do more direct communication with your fan bases in a similar vein as a membership or frequent purchasers program.
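One way to picture this kind of social suggestion is as a simple pass over a friend graph: for each user, surface the groups their friends have joined that they haven't, along with which friends provide the implied endorsement. A toy sketch, with all names and data invented for illustration:

```python
# Hypothetical friend graph and group memberships, invented for illustration.
friends = {
    "alice": ["bob", "carol"],
    "bob": ["alice"],
}
memberships = {
    "alice": set(),
    "bob": {"Photography Club"},
    "carol": {"Photography Club", "Book Lovers"},
}

def social_suggestions(user):
    """Map each suggested group to the friends who joined it.

    A group is suggested when at least one friend belongs to it
    and the user does not.
    """
    suggestions = {}
    for friend in friends.get(user, []):
        for group in memberships.get(friend, set()):
            if group not in memberships.get(user, set()):
                suggestions.setdefault(group, []).append(friend)
    return suggestions

# e.g. "Your friend bob has joined the Photography Club group"
print(social_suggestions("alice"))
```

The friend list doubles as the relevance signal: the more friends attached to a suggestion, the stronger the implied "maybe you'd also be interested".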

Differentiated products and brands can also be a form of self-expression, and actively becoming a member or fan of such products on your social networking site is a natural extension of that expression. Marketing mavens in the community can create a "celebrity endorsement" effect by proclaiming their interest in certain products, signing up for and participating in these fan memberships and groups.

These fan memberships and groups also provide focused, attentive audiences for targeted communication. They are a captive audience eager to consume your products, but more importantly, they are interested in learning about updates and possibly offering feedback on these products.

This greatly affects current advertising models as well as how corporations solicit feedback and communicate with their loyal customer bases.

This may be a good solution and business model to follow, one that allows Facebook to have its privacy cake and eat it too. For now this type of model is rapidly gaining popularity as a nice-to-have, but it could very quickly become a necessity for businesses who want to stay within close range of their customers.