Paul Krugman wrote (link) about the “stubborn persistence of bad food in England.” To paraphrase the article: the quality of food in London dropped significantly because rapid urbanisation preceded a transportation system good enough to bring fresh food in from the farms. This created heavy demand for a canned-food diet, and soon the demand for quality food dropped. As a result, London was stuck in a bad equilibrium, where good food was not supplied because good food was not demanded. The critical mass needed to demand good-quality food emerged only after many Londoners could afford frequent foreign trips.
Krugman also noticed (link) that Felix Salmon attributes (link) a similar cause (among others) to the lack of central heating in Mexico. A given resident of Mexico (including the rich) does not have central heating because other residents don’t have central heating. This “path dependency” creates a bad equilibrium, in which one simply gets through the short Mexican winter without central heating.
Time columnist Fareed Zakaria explains (link) that reforms and revolutions often go hand in hand in oppressive regimes. He says that the most dangerous phase for an autocratic regime is when the dictator decides to reform the economy. Reforms expose citizens to new possibilities and create a demand for better governance. When the government cannot meet those demands, revolutions occur. This account (link) articulates the line of thought expressed by Zakaria. Zakaria also notes that stagnant countries like Syria and North Korea have remained more stable. Thus, a lack of knowledge sustains a bad equilibrium.
So, what is common among English food, Mexico’s central heating, and the Egyptian uprising? They all follow a demand-driven economic model in which a bad equilibrium is possible.
I am unimpressed by Google’s “sting” operation and its accusations (link) against Bing. All Google did was expose a vulnerability in Bing’s algorithm and then cry wolf. Google should have conducted two more experiments before coming to its current conclusion. First, I will explain why the current operation does not mean anything (partly explained here). Second, I will discuss the other experiments Google needed to do before making its accusation.
Going back a few years, the search term “miserable failure” led to George W. Bush. This “Google Bomb” worked in the following way: if a link reads ‘apples’ but points to a webpage about oranges, Google makes an association between ‘apples’ and the webpage on oranges. If 500 webpages had links reading ‘apples’ that pointed to the oranges page, a Google search for ‘apples’ would have led to the webpage on oranges. Similar tactics were used to make the search term “miserable failure” lead to George W. Bush’s site. Today, Google’s algorithm is robust enough not to fall for the same tactic.
Bing also makes similar associations, based on search strings (used on Google, Amazon, eBay, etc.) and the websites clicked by users. Google hard-coded some synthetic, gibberish search strings to lead to specific webpages. After fifteen days of using IE to search for those synthetic queries and then clicking the links, Bing made the association between the search strings and the webpages. This happened for fewer than 10% of the synthetic queries. Google then accused Bing of copying its results. Isn’t Google’s experimental result similar to a Google Bomb? The only difference is the source of the data. This just exposed a vulnerability in Bing, nothing more. Yes, the data comes from the publicly available results of a rival company, but it was not copied intentionally (benefit of the doubt). In its next update, Bing should try to reduce its dependence on that signal, to be fair.
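The click-based association described above can be sketched in a few lines. This is a hypothetical toy model, not Bing’s actual pipeline — the class name, the click threshold, and the example query are all my own illustration of the general technique:

```python
from collections import Counter, defaultdict

class ClickAssociator:
    """Toy model of learning query -> page associations from click logs.

    Illustrative sketch only: real search engines combine many more
    signals than raw click counts.
    """

    def __init__(self, min_clicks=3):
        # Require a few repeated clicks before trusting an association.
        self.min_clicks = min_clicks
        # query -> Counter of clicked URLs
        self.clicks = defaultdict(Counter)

    def observe(self, query, clicked_url):
        # Record one user clicking a result after issuing a query.
        self.clicks[query][clicked_url] += 1

    def best_result(self, query):
        # Return the most-clicked URL for the query, if it has enough support.
        if query not in self.clicks:
            return None
        url, count = self.clicks[query].most_common(1)[0]
        return url if count >= self.min_clicks else None
```

Seed enough repeated clicks on a gibberish query and this toy model starts returning the seeded page for it — which is essentially the vulnerability Google’s synthetic-query experiment exposed.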
Two more experiments are needed to conclude that Bing copies Google. The first experiment is to look for common queries that give different results on Google and Bing. Google should freeze its results for such a query and then observe whether Bing’s results change over time to match Google’s. The second experiment is to look for commonly used queries that give the same results on both Google and Bing. Google should manually change its results and then wait to see whether Bing reflects the same changes. If Google had designed these experiments and disclosed the results, I would have given them some credit.
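Either experiment boils down to measuring, over time, how much a rival’s ranking overlaps with a frozen (or deliberately altered) reference ranking. A minimal sketch of that measurement, with made-up function and parameter names:

```python
def overlap_over_time(reference_results, rival_snapshots, top_k=5):
    """Track how a rival's top-k results drift toward a reference ranking.

    Hypothetical experiment sketch: `reference_results` is the frozen
    (or manually changed) ranking for one query; `rival_snapshots` is a
    time-ordered list of the rival's rankings for the same query.
    Returns the top-k overlap fraction at each snapshot, so a rising
    trend would suggest the rival is converging on the reference.
    """
    target = set(reference_results[:top_k])
    overlaps = []
    for snapshot in rival_snapshots:
        shared = target & set(snapshot[:top_k])
        overlaps.append(len(shared) / top_k)
    return overlaps
```

Run over many queries, a consistent upward trend in these overlap scores after freezing or altering the reference would be far stronger evidence of copying than a handful of synthetic queries.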
I am sure Google knows how to draw conclusions from a set of experiments. They should know better than to conclude that Bing is copying Google based on their synthetic-query experiment alone. Google’s PageRank algorithm and search results (by Google and others) rely on statistical aggregates to work efficiently. Synthetic queries are not statistically significant. Google should not be crying wolf.
Update: Don’t you think Google looked for examples other than “torsoraphy” and did not find any? If they had found more, don’t you think they would have reported them?