Wednesday, November 18, 2009

Reading Comprehension

Here's a curious puzzle involving DNA molecules. DNA is regularly damaged by ordinary wear and tear and the constant buffeting of ionising radiation. However, cells possess an extraordinary collection of molecular machines such as repair enzymes that rapidly identify the defects and repair them.


The puzzle is how they do it. One idea is that repair enzymes simply float about for long enough and eventually find damaged regions. But the numbers just don't stack up. Genes are usually between 1,000 and 1,000,000 base pairs long. By contrast, a typical mutation involves just a handful of base pairs. That's too small a target to find by a random walk with any reliability. Some other form of active location finding must be going on.
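To get a feel for why a blind search is implausible, here is a toy back-of-the-envelope model in Python. It assumes, purely for illustration, that each binding attempt by an enzyme lands uniformly at random along the gene, independent of the previous attempt; the 5-bp lesion size and the gene lengths are the figures quoted above, and nothing here is meant as real biophysics.

```python
import random

def expected_probes(gene_len, lesion_len=5):
    # If every binding attempt lands uniformly at random on the gene,
    # the chance of overlapping a lesion_len-bp defect is
    # lesion_len / gene_len, so the expected number of attempts
    # needed to find it is the reciprocal.
    return gene_len / lesion_len

def simulated_probes(gene_len, lesion_len=5, trials=2000, seed=0):
    # Monte Carlo sanity check of the same model.
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        probes = 1
        while rng.randrange(gene_len) >= lesion_len:
            probes += 1
        total += probes
    return total / trials

for length in (1_000, 100_000, 1_000_000):
    print(f"{length:>9,} bp gene: ~{expected_probes(length):>9,.0f} blind probes on average")

print("simulated, 1,000 bp gene:", simulated_probes(1_000))
```

Under this crude model, a million-base-pair gene needs on the order of 200,000 independent probes before a 5-bp lesion is stumbled upon, which is the kind of arithmetic that motivates looking for an active location-finding mechanism.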


One theory is that mutations change the electrical characteristics of a stretch of DNA and that this creates a signal that repair enzymes can home in on, like electricians locating a break in a circuit. The trouble is that DNA doesn't conduct electricity like a power cable and so it isn't clear how this would work.


Now Arkady Krokhin at the University of North Texas and a few buddies have worked out how DNA may do it. The key turns out to be that different regions of DNA have different electrical characteristics. The group has calculated from first principles the way in which charge flows in different regions. They say that in exons--the information-carrying parts of genes--the energy spectrum of the molecule allows delocalised electrons to exist. In these areas, charge can flow.

However, the energy spectrum of the regions that do not carry information--the introns--does not allow for delocalised electrons. So introns are effectively insulators.


That sets up well-defined regions within DNA that can be identified electronically. It also means that any change in electronic properties caused by a mutation would be largely confined, too. That immediately suggests a way that repair enzymes can home in on damage.


Of course, this work is just one step towards a coherent theory that explains DNA repair (which actually involves many different processes).


But the beauty of this approach is that it could also explain why some damage goes unrepaired, leading to cell death and even cancer.


The thinking is that certain mutations cause less of an electrical change than others. These mutations are "electronically masked" and so go undetected by repair enzymes. There is even experimental evidence for this from resistance measurements done on DNA with cancer-causing mutations.


If this theory is true, one important question is how DNA's electrical characteristics might be exploited to detect and even prevent cancer in the future.

Reading Comprehension

Given the high price of wine and the enormous number of choices, a system in which industry experts comb through the forest of wines, judge them, and offer consumers the meaningful shortcut of medals and ratings makes sense.


But what if the successive judgments of the same wine, by the same wine expert, vary so widely that the ratings and medals on which wines base their reputations are merely a powerful illusion? That is the conclusion reached in two recent papers in the Journal of Wine Economics.


Both articles were authored by the same man, a unique blend of winemaker, scientist and statistician. The unlikely revolutionary is a soft-spoken fellow named Robert Hodgson, a retired professor who taught statistics at Humboldt State University. Since 1976, Mr. Hodgson has also been the proprietor of Fieldbrook Winery, a small operation that puts out about 10 wines each year, selling 1,500 cases.


A few years ago, Mr. Hodgson began wondering how wines, such as his own, can win a gold medal at one competition, and "end up in the pooper" at others. He decided to take a course in wine judging, and met G.M. "Pooch" Pucilowski, chief judge at the California State Fair wine competition, North America's oldest and most prestigious. Mr. Hodgson joined the competition's advisory board, and eventually "begged" to run a controlled scientific study of the tastings, conducted in the same manner as the real-world tastings. The board agreed, but expected the results to be kept confidential.


There is a rich history of scientific research questioning whether wine experts can really make the fine taste distinctions they claim. For example, a 1996 study in the Journal of Experimental Psychology showed that even flavor-trained professionals cannot reliably identify more than three or four components in a mixture, although wine critics regularly report tasting six or more. There are eight in this description, from The Wine News, as quoted on wine.com, of a Silverado Limited Reserve Cabernet Sauvignon 2005 that sells for more than $100 a bottle: "Dusty, chalky scents followed by mint, plum, tobacco and leather. Tasty cherry with smoky oak accents…" Another publication, The Wine Advocate, describes a wine as having "promising aromas of lavender, roasted herbs, blueberries, and black currants." What is striking about this pair of descriptions is that, although they are very different, they are descriptions of the same Cabernet. One taster lists eight flavors and scents, the other four, and not one of them coincides.


That wine critiques are peppered with such inconsistencies is exactly what the laboratory experiments would lead you to expect. In fact, about 20 years ago, when a Harvard psychologist asked an ensemble of experts to rank five wines on each of 12 characteristics—such as tannins, sweetness, and fruitiness—the experts agreed at a level significantly better than chance on only three of the 12.


Psychologists have also been skeptical of wine judgments because context and expectation influence the perception of taste. In a 1963 study at the University of California at Davis, researchers secretly added color to a dry white wine to simulate a sauterne, sherry, rosé, Bordeaux and burgundy, and then asked experts to rate the sweetness of the various wines. Their sweetness judgments reflected the type of wine they thought they were drinking. In France, a decade ago a wine researcher named Frédéric Brochet served 57 French wine experts two identical midrange Bordeaux wines, one in an expensive Grand Cru bottle, the other in the bottle of a cheap table wine. The gurus showed a significant preference for the Grand Cru bottle, employing adjectives like "excellent" more often for the Grand Cru, and "unbalanced" and "flat" more often for the table wine.


Provocative as they are, such studies have been easy for wine critics to dismiss. Some were small-scale and theoretical. Many were performed in artificial laboratory conditions, or failed to control important environmental factors. And none of the rigorous studies tested the actual wine experts whose judgments you see in magazines and marketing materials. But Mr. Hodgson's research was different.


In his first study, each year, for four years, Mr. Hodgson served actual panels of California State Fair Wine Competition judges—some 70 judges each year—about 100 wines over a two-day period. He employed the same blind tasting process as the actual competition. In Mr. Hodgson's study, however, every wine was presented to each judge three different times, each time drawn from the same bottle.


The results astonished Mr. Hodgson. The judges' wine ratings typically varied by ±4 points on a standard ratings scale running from 80 to 100. A wine rated 91 on one tasting would often be rated an 87 or 95 on the next. Some of the judges did much worse, and only about one in 10 regularly rated the same wine within a range of ±2 points.


Mr. Hodgson also found that the judges whose ratings were most consistent in any given year landed in the middle of the pack in other years, suggesting that their consistent performance that year had simply been due to chance.


Mr. Hodgson said he wrote up his findings each year and asked the board for permission to publish the results; each year, they said no. Finally, the board relented—according to Mr. Hodgson, on a close vote—and the study appeared in January in the Journal of Wine Economics.

This September, Mr. Hodgson dropped his other bombshell. This time, from a private newsletter called The California Grapevine, he obtained the complete records of wine competitions, listing not only which wines won medals, but which did not. Mr. Hodgson told me that when he started playing with the data he "noticed that the probability that a wine which won a gold medal in one competition would win nothing in others was high." The medals seemed to be spread around at random, with each wine having about a 9% chance of winning a gold medal in any given competition.


To test that idea, Mr. Hodgson restricted his attention to wines entering a certain number of competitions, say five. Then he made a bar graph of the number of wines winning 0, 1, 2, etc. gold medals in those competitions. The graph was nearly identical to the one you'd get if you simply made five flips of a coin weighted to land on heads with a probability of 9%. The distribution of medals, he wrote, "mirrors what might be expected should a gold medal be awarded by chance alone."
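Hodgson's weighted-coin comparison is easy to reproduce. The sketch below, in Python, computes the binomial distribution his chance-alone hypothesis implies, using the 9 percent gold-medal probability and the five competitions mentioned above (the exact bar heights in his paper will differ slightly, since his probability was estimated from the data).

```python
from math import comb

def gold_medal_dist(n=5, p=0.09):
    # Probability of k gold medals in n independent competitions
    # under the weighted-coin model: a binomial distribution.
    return [comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(n + 1)]

probs = gold_medal_dist()
for k, pr in enumerate(probs):
    print(f"{k} golds in 5 competitions: {pr:.1%}")
```

Under pure chance, roughly 62 percent of wines entering five competitions would win no gold at all, and about 31 percent would win exactly one, which is the bar graph Hodgson compared against the real medal records.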


Mr. Hodgson's work was publicly dismissed as an absurdity by one wine expert, and "hogwash" by another. But among wine makers, the reaction was different. "I'm not surprised," said Bob Cabral, wine maker at critically acclaimed Williams-Selyem Winery in Sonoma County. In Mr. Cabral's view, wine ratings are influenced by uncontrolled factors such as the time of day, the number of hours since the taster last ate and the other wines in the lineup. He also says critics taste too many wines in too short a time. As a result, he says, "I would expect a taster's rating of the same wine to vary by at least three, four, five points from tasting to tasting."


One critic who recognizes that variation is an issue is Joshua Greene, editor and publisher of Wine and Spirits, who told me, "It is absurd for people to expect consistency in a taster's ratings. We're not robots." In the Cruse wine-fraud trial in France, the company appealed to the idea that even experienced tasters could err. Cruse claimed that it had bought the cheap Languedoc believing it was the kingly Bordeaux, and that the company's highly trained and well-paid wine tasters had failed to perceive that it wasn't. The French court rejected that possibility, and 35 years ago this December, eight wine dealers were convicted and given prison terms and fines totaling $8 million.


Despite his studies, Mr. Hodgson is betting that, like the French, American consumers won't be easily converted to the idea that wine experts are fallible. His winery's Web site still boasts of his own many dozens of medals.


"Even though ratings of individual wines are meaningless, people think they are useful," Mr. Greene says. He adds, however, that one can look at the average ratings of a spectrum of wines from a certain producer, region or year to identify useful trends.


As a consumer, accepting that one taster's tobacco and leather is another's blueberries and currants, that a 91 and a 96 rating are interchangeable, or that a wine winning a gold medal in one competition is likely thrown in the pooper in others presents a challenge. If you ignore the web of medals and ratings, how do you decide where to spend your money?


One answer would be to do more experimenting, and to be more price-sensitive, refusing to pay for medals and ratings points. Another tack is to continue to rely on the medals and ratings, adopting an approach often attributed to the physicist Niels Bohr, who was said to have had a horseshoe hanging over his office door for good luck. When asked how a physicist could believe in such things, he said, "I am told it works even if you don't believe in it." Or you could just shrug and embrace the attitude of Julia Child, who, when asked what her favorite wine was, replied "gin."

Reading Comprehension

Kenny Perry could taste history. He had a two-shot lead with two holes to go at the 2009 Masters - all he had to do was not make any big mistakes and he would become, at 48, the oldest Masters champion in history. For three days at Augusta, he had played the best golf of his life: on the first 70 holes, he made only four bogeys. But then, at the 71st hole, everything started to fall apart.

It began with his approach shot, which sailed left over the green. On the next shot, Perry watched as his chip and run went horribly awry and the ball raced downhill, past the hole and off the green. The crowd gasped. Perry was lucky to two-putt for a bogey, his first in 22 holes.

On the final hole, the tee shot that looked straight ended up twisting left and landing in a bunker. He then short-sided himself on the green, so that the ball came to rest on a treacherous downhill slope. His next shot got him to within 15ft of the hole, putting for the championship. Perry's face was etched with anxiety. He took out his plumb-bob - a tool that helps golfers determine the break of the green - and tried to measure the subtle curve of the grass. Then he measured again. And again. It was as if Perry no longer trusted his eyes or his instincts. He missed the putt.

The play-off didn't go much better. At the first hole, after a solid drive, Perry's next shot went far right and landed in thick grass. He needed a masterful stroke just to eke out par. And then, on the second extra hole, he unravelled. From the fairway he hit an ugly hook and the ball landed with a thud in the pine trees. Perry looked to be on the verge of tears: he knew he had just lost the Masters. It was not quite a Van de Velde moment - named after the Frenchman who squandered the 1999 Open - but it was not far off.

The next day Perry was stoic. "Great players make it happen," he said. "Your average players don't. And that's the way it is." In other words, the dividing line between winning and disappointment isn't about technique or athleticism or talent. It's about performing under pressure, hitting the shots when they matter most.

We call such failures "choking", if only because a person frayed by pressure might as well not have oxygen. What makes choking so morbidly fascinating is that the performers are incapacitated by their own thoughts. Perry, for example, was so worried about not making a mistake on the 17th that he played a disastrous chip. His mind sabotaged itself.

Scientists have begun to uncover the causes of choking, diagnosing the particular mental differences that allow some people to succeed while others wither in the spotlight. Although it might seem like an amorphous category of failure, their work has revealed that choking is triggered by a specific mental mistake: thinking too much.

The sequence of events typically goes like this: when people get nervous about performing, they become self-conscious. They start to fixate on themselves, trying to make sure that they don't make any mistakes. This can be lethal for a performer. The bowler concentrates too much on his action and loses control of the ball. The footballer misses the penalty by a mile. In each instance, the natural fluidity of performance is lost; the grace of talent disappears.

Sian Beilock, a professor of psychology at the University of Chicago, has helped illuminate the anatomy of choking. She uses golf as her experimental paradigm. When people are learning how to putt, it can seem daunting. There are just so many things to think about. Golfers need to assess the lay of the green, calculate the line of the ball, and get a feel for the grain of the turf. Then they have to monitor their putting motion and make sure that they hit the ball with a smooth, straight stroke. For an inexperienced player, a golf putt can seem unbearably hard, like a life-sized trigonometry problem.

But the mental exertion pays off, at least at first. Beilock has shown that novices hit better putts when they consciously reflect on their actions. The more time they spend thinking about the putt, the more likely they are to hole the ball. By concentrating on their game, by paying attention to the mechanics of their stroke, they can avoid beginner's mistakes.

A little experience, however, changes everything. After golfers have learned how to putt - once they have memorised the necessary movements - analysing the stroke is a waste of time. The brain already knows what to do. It automatically computes the slope of the green, settles on the best putting angle, and decides how hard to hit the ball. Bradley Hatfield, a professor of kinesiology and psychology at the University of Maryland, has monitored the brain wave activity of expert athletes during performance. (Because the subjects have to wear a bulky plastic cap full of electrodes, Hatfield can only study golfers, archers and Olympic rifle shooters.) While the brain waves of beginners show lots of erratic spikes and haphazard rhythms - this is the neural signature of a mind that is humming with conscious thoughts - the minds of expert athletes look strangely serene. When they are performing, they exhibit a rare mental tranquility, as their brain deliberately ignores interruptions from the outside world. This is neurological evidence, Hatfield says, of "the zone", that trance-like mindset which allows experts to perform at peak levels. (As the corporate motto says, the best athletes don't think: they just do it.)

Beilock's data further demonstrate the benefits of relying on the automatic brain when playing a familiar sport. She found that when experienced golfers are forced to think about their putts, they hit significantly worse shots. All those conscious thoughts erase their years of practice. "We bring expert golfers into our lab, we tell them to pay attention to a particular part of their swing, and they just screw up," Beilock says. "When you are at a high level, your skills become somewhat automated. You don't need to pay attention to every step in what you're doing."

This is what happens when people "choke". The part of their brain that monitors their behaviour starts to interfere with actions that are normally made without thinking. Performers begin second guessing skills that they have honed through years of practice. The worst part about choking is that it tends to spiral. The failures build upon each other, so a stressful situation is made more stressful.

Tuesday, November 17, 2009

Vocab: Bad Words

In one of the sessions last week I was talking about the root word "cac" which means bad. As usual I set about troubling people by asking for words that originate from "cac" and one of the words that was suggested was cactus. That actually set me thinking 'cause I was not sure whether it did or not. Could it be? It sure seemed that way. Anyway I said I would cross-check. I did. It does not. Cactus originates from the Greek kaktos, which means cardoon (a kind of artichoke, cultivated for its edible leafstalks and roots).


But then I thought this is a perfect root word to look at. So here are a few "Bad" words... I am sure you will enjoy them.... :-)


Cacodoxy

Erroneous doctrine; heresy; heterodoxy.

Etymology: cac(o) + doxa = bad + opinion


Cacography

Bad handwriting; poor penmanship.

Incorrect spelling.

Etymology: cac(o) + graphos = bad + (something) drawn or written, one who draws or writes


Cacoëthes

An irresistible urge; mania.

Etymology: cac(o) + ethos = bad + character

In other words: of bad character


Cacology

Defectively produced speech; socially unacceptable diction.

Etymology: cac(o) + -logy = bad + word, speech


Caconym

A name, esp. a taxonomic name, that is considered linguistically undesirable.

Etymology: cac(o) + -onym = bad + name



And on that note the Bad Man says,


Ciao


Sunday, November 15, 2009

Reading Comprehension

A short post taken from the popular Freakonomics Blog. This one is on the environment and the possible consequences of cloud seeding. Interesting. This was also tried in India many years back in Chennai. Was a damp squib then. :-(

Also if you have the time please do visit http://freakonomics.blogs.nytimes.com/ While all articles are not fantastic you will find the occasional gem.



For the second time this month, the Chinese government has reportedly induced a snowstorm in Beijing by seeding clouds with silver iodide. This form of geoengineering has been around for quite a while. The second storm in Beijing was the heaviest snowfall the city had seen in 54 years. The government’s apparent motivation for forcing precipitation was to relieve a long-standing drought. Beyond creating the various kinds of havoc that such big storms create, there are unintended consequences as well: for instance, the chloride used to rid the streets of snow after the storm is thought to lead to environmental and perhaps even structural damage.

What is the appropriate response to this news?

It probably depends on your view of the world — of politics, the environment, and human nature. Should one ignore the snowstorms and chalk them up to the Chinese simply being Chinese? Or should one think about these small-scale geoengineering exercises as a potential threat to the world’s geopolitical balance? It isn’t hard to imagine the trouble that might result if governmental snow- and rain-making became commonplace: one drought-ridden country declares war on its neighbor after the neighbor “steals” its rainfall.

There are some geoengineering schemes that scientists are considering to cool the earth if global warming becomes dangerous. One involves increasing the reflectivity of oceanic clouds; another suggests mimicking the effect of large volcanoes by spraying sulfur dioxide into the stratosphere to diminish solar radiation. These ideas are extremely unpopular in environmentalist circles.

Many environmentalists who argue that intensive carbon mitigation is the sole route to address global warming seem to feel that too many of the world’s citizens (including some political leaders) have their heads stuck in the sand, denying the reality of global warming.

But the point is that those who argue for carbon mitigation as the sole route to address global warming may have their heads stuck in a different pile of sand, and these Chinese snowstorms show why. Here’s what we write in the book:

As of this writing, there is no regulatory framework to prohibit anyone — a government, a private institution, even an individual — from putting sulfur dioxide in the atmosphere. … But of course this depends on the individual. If it were Al Gore, he might snag a second Nobel Peace Prize. If it were Hugo Chávez, he’d probably get a prompt visit from some U.S. fighter jets.

So while environmentalists may find the very notion of geoengineering repugnant, the fact is that geoengineering is already with us, and will likely be put to use whether we like it or not.

This leads to the very important matter of governance. While some environmental activists might like to hope that geoengineering is just science fiction that neither will nor should ever come into play (much as one might have liked to hope the same of atomic weapons), the facts on the ground (and in the Chinese clouds) do not support this view. Government leaders are getting together in Copenhagen next month to discuss collective carbon mitigation. It is becoming increasingly clear that they should be discussing the rules going forward for collective geoengineering as well, whether it is small-scale schemes like the Beijing snowstorms or large-scale ideas that address global warming.

Saturday, November 14, 2009

Reading Comprehension

I have never posted a topic related to Maths so I thought that this is as good a time as any to post something that is just a wee bit related to it. The write-up is very simple but since a lot of us have this phobia of the subject, when we encounter it in a Verbal section we might be, just might be, well .... flummoxed. Read and summarize.


Happy reading.


Why were mathematicians so interested in effective methods in the 1920s? They wanted to solve a problem that was nagging them at the time. Sometimes someone discovers a theorem and publishes a proof. Then some time passes, months, years or decades, and someone else discovers another theorem that contradicts the first and publishes a proof. When this happens, mathematicians are left scratching their heads. Both theorems cannot be true. There must be a mistake somewhere. If the proof of one of the theorems is wrong then the problem is solved. They revoke the theorem with the faulty proof and move on. But what if both proofs are sound and stand up to scrutiny?


These kinds of discoveries are called paradoxes. They indicate that something is rotten in the foundations that define how you prove theorems. Mathematical proofs are the only tool that can uncover mathematical truth. If you never know when a contradictory theorem will be proven, you can't trust your proofs. If you can't trust your proofs, you can't trust mathematics. This is not acceptable. The solution is to look at the foundations, find the error and correct it so paradoxes don't occur any more. In the early twentieth century, mathematicians were struggling with pretty nasty paradoxes. They felt the need to investigate and fix the foundations they were working on.


The relationship between algorithms and mathematical proofs is an important part of computation theory. It is part of the evidence that software is abstract and software is mathematics. The efforts to fix the foundations of mathematics in the early twentieth century are an important part of this story.


One of the most prominent mathematicians of the time, David Hilbert, proposed a program that, if implemented, would solve the problem of paradoxes. He argued that mathematical proofs should use formal methods of manipulating the mathematical text, an approach known as formalism. He proposed to base mathematics on formal systems that are made of three components.


1. A synthetic language with a defined syntax

2. An explicit list of logical inference rules

3. An explicit list of axioms


Let's review all three components one by one to see how they work. Mathematics uses special symbols to write propositions like a+b=c or E=mc^2. There are rules on how you use these symbols. You can't write garbage like +=%5/ and expect it to make sense. The symbols together with the rules make a synthetic language. The rules define the syntax of the language. Computer programmers use this kind of synthetic language every day when they program the computer. The idea of such a special language to express what can't be easily expressed in English did not originate with computer programmers. Mathematicians thought of it first.


How do you test if your formula complies with the syntax? Hilbert required that you use an effective method that verifies that the rules are obeyed. If you can't use such a method to test your syntax then the language is unfit to be used as a component of a formal system.
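As a concrete (if toy) illustration, here is what such an effective method can look like in Python, for a made-up formula language with single-letter variables and fully parenthesized binary operators. The grammar is invented for this sketch; Hilbert's point is only that the check is mechanical, always terminates, and involves no judgment call.

```python
def well_formed(s):
    """Effective method deciding whether a string is a well-formed
    formula of a toy language:
        formula := VAR | "(" formula OP formula ")"
        VAR     := a single lowercase letter
        OP      := "&" | "|" | ">"
    """
    def parse(i):
        # Return the index just past a formula starting at i, or None.
        if i < len(s) and s[i].isalpha() and s[i].islower():
            return i + 1
        if i < len(s) and s[i] == "(":
            j = parse(i + 1)
            if j is not None and j < len(s) and s[j] in "&|>":
                k = parse(j + 1)
                if k is not None and k < len(s) and s[k] == ")":
                    return k + 1
        return None

    return parse(0) == len(s)

print(well_formed("(a&(b|c))"))  # a syntactically valid formula
print(well_formed("+=%5/"))      # the garbage string from the text
```

The checker accepts or rejects any finite string in finitely many steps, which is exactly what qualifies it as an effective method in the sense the passage describes.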


Inference rules are how you make deductions. A deduction is a sequence of propositions, written in the language of mathematics, in which each step logically follows from what came before. At each step in the sequence there must be a rule that tells you why you are allowed to get there given the propositions that were previously deduced. For example, suppose you have a proof of A and separately you have a proof of B. Then the logical deduction is that you can put A and B together to make "A and B". This example is so obvious that it goes without saying. In a formal system, however, you are not allowed to leave obvious things unsaid. All the rules must be spelled out before you even start making deductions. You can't use anything that has not been spelled out, no matter how obvious it is. This list of rules is the second component of a formal system.


It had been known since the work of the philosophers Gottlob Frege and Bertrand Russell and their successors that all logical inference rules can be expressed as syntactic manipulations. For example, take the previous example where we turn separate proofs of A and B into a proof of "A and B" together. The inference consists of taking A, taking B, and putting an "and" in the middle, giving "A and B". You don't need to know what A and B stand for to do this. You don't need to know how they are proved. You just manipulate the text according to the rule. All the inferences that are logically valid can be expressed by rules that work in this manner. This opens an interesting possibility. You don't use a human judgment call to determine if a mathematical proof is valid. You check the syntax and verify that all inferences are made according to the rules that have been listed. This check is made by applying an effective method.
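Here is a minimal Python sketch of that idea: the conjunction rule implemented as pure text manipulation, plus a checker that verifies a proof is built only from listed axioms and that one rule. The axioms and propositions are invented placeholders; the point is that nothing in the verification needs to know what the strings mean.

```python
def and_intro(a, b):
    """Conjunction introduction as pure text manipulation: given the
    *strings* A and B (already proved), emit "(A & B)". We never need
    to know what A and B mean or how they were proved."""
    return f"({a} & {b})"

def check_proof(axioms, steps):
    """Verify a proof: every step must be an axiom or follow from two
    earlier steps by the single rule above. Purely syntactic, hence
    an effective method with no judgment calls."""
    proved = set()
    for step in steps:
        ok = step in axioms or any(
            step == and_intro(a, b) for a in proved for b in proved
        )
        if not ok:
            return False
        proved.add(step)
    return True

axioms = {"x=1", "y=2"}
print(check_proof(axioms, ["x=1", "y=2", "(x=1 & y=2)"]))  # valid proof
print(check_proof(axioms, ["x=1", "(x=1 & z=3)"]))         # z=3 never proved
```

A realistic formal system would carry many more rules (modus ponens, substitution, quantifiers), but each would be the same kind of mechanical string check.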


If there is no human judgment call in verifying proofs and syntax, where is human judgment to be found? It comes in when you find a practical application for the mathematical language. The mathematical symbols can be read and interpreted according to their meanings. You don't need to understand the meaning to verify that the proof is carried out according to the rules of logic, but you need to understand the meaning to make some real-world use of the knowledge you have so gained. For example, when you prove a theorem about geometry, you don't need to know what the geometric language means to verify the correctness of the proof, but you will need to understand what your theorem means when you use it to survey the land.


Another place where human judgment is required is in the choice of the axioms. Any intuitive element that is required in mathematics must be expressed as an axiom. Each axiom must be written using the mathematical language. The rules of logic say you can always quote an axiom without having to prove it. This is because the axioms are supposed to be the embodiment of human intuition. They are the starting point of deductions. If some axiom doesn't correspond to some intuitive truth, it has no business in the formal system. If some intuitive truth is not captured by any axiom, you need to add another axiom.


The list of axioms is the third component of a formal system. Together, the syntax, the rules of inference and the axioms include everything you need to write mathematical proofs. You start with some axioms and elaborate the inferences until you reach the desired conclusion. Once you have proven a theorem, you add it to the list of propositions you can quote without proof. The assumption is that the proof of the theorem is incorporated by reference into your proof. And because all the rules and axioms have been explicitly specified from the start and meticulously followed, there is no dark corner in your logic where something unexpected can hide and eventually spring out to undermine your proof.


To recapitulate [aah the summary given to you... not bad], effective methods play a big role in this program. They are used (1) to verify the correctness of the syntax of the mathematical language and (2) to verify that the mathematical proofs comply with the rules of inference. The implication is that there is a tie between formal methods and abstract mathematical thinking. If you consider the Church-Turing thesis, there is a further implication of a tie between computer algorithms and abstract mathematical thinking.


This passage has been taken from the article "An Explanation of Computation Theory for Lawyers". The link is: http://www.groklaw.net/article.php?story=20091111151305785

Thursday, November 12, 2009

Reading Comprehension

This article has been taken from Newsweek. The Link is: http://www.newsweek.com/id/222472


When I looked at the article title I thought it would be a very interesting one, but after the first or maybe the second paragraph I completely lost interest. The moment that happened my immediate thought was that this article should go out to all of you. ;-) Yenjoy yourself.


The question is asked in every language, in every era: "So, dear, when will you give me grandchildren?" Darwin would approve.


At least he would if the "grandma hypothesis" is right. According to this idea, the reason women—uniquely among primates—outlive their child-bearing years is that a female who survives past menopause can contribute to the care of her children's children, improving their chances of reaching adulthood. Natural selection favors behavior that increases an individual's genetic contribution to future generations; surviving long enough to help grandkids is thus an evolutionary adaptation.


Too bad data don't support this intriguing notion. In some studies, a grandmother living nearby was indeed associated with better survival of grandchildren, as the hypothesis predicts. But other studies found no such benefit. Leslie Knapp, a biological anthropologist at the University of Cambridge, and her graduate student Molly Fox wondered if the inconsistency reflected a basic fact of genetics—namely, that because of how the X chromosome is passed down from parents to children, grandmothers are more closely related to some grandkids than to others.


Here's why. A paternal grandmother, like all women, has two X chromosomes. She passes one to her son (who gets his Y chromosome from Dad, which is why he's a he). He then passes grandma's X—the one and only X he has—to his daughter. But Dad passes his Y chromosome to his son, who therefore does not carry his paternal grandma's X. A maternal grandmother, too, passes one of her X's to her daughter; there is a 50–50 chance that that X will be transmitted to the daughter's child, of either sex. A maternal grandmother, therefore, has only a 50–50 chance that her X will be transmitted to a grandchild. A little math shows that maternal grandmothers are related to granddaughters and grandsons equally, for an "X-relatedness" of 25 percent. But paternal grandmothers are twice as close to granddaughters (50 percent) and not at all to grandsons (zero percent), explains Knapp. It may seem arbitrary to focus on X, one of 23 chromosomes, but it has 8 percent (1,529) of all our genes, including some for fertility and intelligence, which affect reproductive success.
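The arithmetic in that paragraph can be checked with a small Monte Carlo sketch (mine, not the article's; the function name and setup are illustrative): tag one specific X chromosome of the grandmother's two, then track the chance a grandchild ends up carrying it.

```python
import random

def simulate_x_relatedness(lineage, grandchild_sex, trials=100_000):
    """Estimate the chance a grandchild carries one particular X of grandma's.
    lineage: "paternal" or "maternal"; grandchild_sex: "girl" or "boy"."""
    hits = 0
    for _ in range(trials):
        # Grandma transmits one of her two X's at random to her child
        child_has_tagged = random.random() < 0.5
        if lineage == "paternal":
            # Her son's only X came from her; a daughter inherits it for
            # certain, while a son inherits Dad's Y instead of any X.
            grandchild_has = child_has_tagged if grandchild_sex == "girl" else False
        else:  # maternal
            # Her daughter has two X's, so a 50-50 chance of passing
            # grandma's on, to a grandchild of either sex
            grandchild_has = child_has_tagged and random.random() < 0.5
        hits += grandchild_has
    return hits / trials

# Roughly 0.50 for paternal/girl, exactly 0 for paternal/boy,
# and roughly 0.25 for maternal grandkids of either sex
print(simulate_x_relatedness("paternal", "girl"))
print(simulate_x_relatedness("paternal", "boy"))
print(simulate_x_relatedness("maternal", "girl"))
```

The estimates reproduce the article's figures: 50 percent, zero, and 25 percent.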


Many of those earlier, inconsistent tests of the grandma hypothesis lumped together both kinds of grandmas (maternal and paternal) and both sexes of grandkids. Given the different degrees of X-relatedness, says Knapp, "we decided to look at the data from a genetic perspective. Since it is adaptive to favor those with whom we share the most genes, evolution should favor women who invest in grandchildren in a way that mirrors X-relatedness."


She, Fox, and colleagues analyzed existing data on the survival of 43,000 children in seven traditional societies, from rural farming villages in Japan and Malawi to towns in Germany and Canada, from the 1600s to today. "The most striking effect was of the paternal grandmother," says Fox. In six of the seven societies, having a paternal grandmother nearby improved the survival of granddaughters (50 percent X-relatedness) by up to 4.5-fold, but for some unknown reason decreased the survival of grandsons (zero percent) by 8 to 29 percent. And a boy had a greater chance of survival if he lived with his maternal grandmother (25 percent X-relatedness) than with his paternal grandmother (zero percent). In four of the seven societies, a girl had a better chance of survival if she lived with her paternal grandmother (50 percent) than her maternal grandmother (25 percent).


In other words, the effect of a grandmother perfectly tracked the DNA. "The higher the X-relatedness," the scientists write in Proceedings of the Royal Society B, "the more beneficial effect the grandmother has on that child's" survival. That the correlation held across four continents and four centuries suggests a biological, not cultural, explanation.


But what? There is no evidence grandmothers consciously treat grandsons and granddaughters differently, or a son's children different from a daughter's. The best guess is that grandchildren transmit some signal of genetic relatedness, such as resemblance or a pheromone, which Grandma unconsciously uses to apportion how much she invests in different grandkids. Grandmothers will surely recoil at the very idea, which is why the reader is advised not to leave this column lying around during a multigenerational Thanksgiving.


Reading Comprehension

This post has been taken from a speech on Governance Institutions and Development by Avinash Dixit of Princeton University. The link is: http://www.rbi.org.in/content/Pub_Governance%20Institutions%20and%20Development.aspx

It is a looooong speech and therefore I am posting only a small section of the same. If you so desire you can read the speech at the link provided, maybe one section at a time.


Economic governance comprises many organizations and actions essential for good functioning of markets, most notably protection of property rights, enforcement of contracts, and provision of physical and informational infrastructure. In most modern economies, governments provide these services more or less efficiently, and modern economics used to take them for granted. But the difficulties encountered by market-oriented reforms in less-developed countries and former socialist countries have led economists to take a fresh look at the problems and institutions of governance. In this lecture I offer a brief and selective look at this research, and attempt to draw a couple of conclusions that may be relevant to India today.

The importance of secure property rights can hardly be overstated. Without them, people will not create or improve the assets, physical and intellectual, that are essential for economic progress. De Soto (2000) builds the argument and marshals the evidence in a thorough and compelling book. Security of rights improves the incentives to save and invest. Land and capital can be rented out to others if they can use them more efficiently, so inefficient internal uses are avoided. And the assets can be used as collateral to borrow and expand one’s business. Field (2006) has taken the case even further. Security of property rights not only increases the supply of capital and efficiency in its allocation; it also increases labor supply. When titles to land and capital are official and secure, people need not spend time and effort to guard their rights, so they can put the labor and time to productive uses. Field’s empirical research on the titling program in Peru finds large and significant effects: “For the average squatter household, property titles are associated with a 14% increase in household work hours, a 28% decrease in the probability of working inside the home, and a 7.5% reduction in the probability of child labor among single-parent households. Panel estimates … support the cross-section results: between 1997 and 2000 household labor supply increased an additional 13 hours per week for squatters in neighborhoods reached by the program.”

In the Indian context, security of land titles may be the most important issue of property rights. The controversy regarding land sales in the context of the Special Economic Zones (SEZ) is a case in point. The merits of the SEZ policy can and should be debated, but if the debaters raise fears of revocation of rights and benefits that have been granted through a proper policy process, this uncertainty will deter investors and merely ensure that the potential benefits will not materialize. At a more micro level, insecurity of land rights and fragmentation of land arising from disputes in extended families constitute serious constraints on agricultural growth.

The relevance of security of contracts may not seem so obvious, but it is equally important. In most economic transactions that can create economic gains for all parties, some or all of them can gain an extra private benefit while hurting the others, by violating the terms of their explicit or implicit agreement. The fear of such exploitation by the other party may deter each from entering into the agreement in the first place. This was brilliantly illustrated by Diego Gambetta in his ethnographic sociological study of the Sicilian Mafia (1993, p. 15). In the course of his interviews, a cattle breeder told him: “When the butcher comes to buy an animal, he knows that I want to cheat him [by supplying a low-quality animal]. But I know that he wants to cheat me [by reneging on payment]. Thus we need … Peppe [the Mafioso] to make us agree. And we both pay Peppe a commission.” By providing a mechanism of contract enforcement, Peppe makes it possible for the two to enter into a mutually beneficial transaction. And he does this with a profit motive, exactly as would any businessperson providing any service for which others are willing to pay.

This example also demonstrates something else that is an important theme for me: governance does not have to be provided by the government as a part of its public services; private parties may do so with other motives. In most countries, even advanced ones, we find a mixture of the formal legal system and a rich and complex array of informal social institutions of governance. These mixtures reflect the country’s level of economic development, and in turn help determine its economic prospects.

The issue is not the old-style one of “market versus government.” Rather, it is one of how different kinds of institutions (governmental and non-governmental, formal and informal, industry-based or community based, singly or in combination) provide the support that is required for successful economic activity (exchange, production, asset accumulation, innovation, and so on), and the activity may or may not take place in conventional markets. I cannot emphasize too strongly the need to get beyond the old sterile debates and on to issues that really matter.

What forces threaten property rights and contracts? And how can we design and reform institutions to counter these threats? Let us look at some theoretical concepts and examples...

Wednesday, November 11, 2009

Reading Comprehension

Given that this Blog caters, by and large, to people who are interested in (anxious about?) getting into B-schools, this article seems a bit strange. However, this article has been put up not just for RC practice but also because, to a very large extent, I agree with the general thinking that Management Consultants are a pain in the somewhere.

As I have observed, the trick is to write a successful first book (not that that is easy, of course) and then mine the same for life. It is painful.

But then again maybe my irritation with Management Consultants is more personal in nature... Without further waste of time.... On to the RC.


The three habits... of highly irritating management gurus

STEPHEN COVEY is fond of telling people that he is writing a book on the evils of retirement, “Live Life in Crescendo”. There is no danger of a diminuendo for this particular guru. Mr Covey is working on nine other books, including one on how to end crime. He also presides over a business empire that is even more sprawling than his ever-growing family (he had 51 grandchildren as The Economist went to press).

Mr Covey has been stretching his brand since 1989, when the publication of “The 7 Habits of Highly Effective People” turned him into a superstar. He followed up with a succession of spin-offs such as “The 7 Habits of Highly Effective Families” and “The 8th Habit”. He is also the co-founder of a consultancy, FranklinCovey, that markets success-boosting tools and techniques. So far the original “7 Habits” has sold 15m copies in 38 languages and three of Mr Covey’s other books have sold more than a million copies.

His stroke of genius was to blow up the wall between management and self-help. “The 7 Habits” mixes the language of management consultancy—“synergy” and the like—with the moral exhortations that you find in Samuel Smiles’s “Self-Help”, Norman Vincent Peale’s “The Power of Positive Thinking” and the 12-step literature put out by Alcoholics Anonymous and its offshoots. Mr Covey insists that the key to success, for both individuals and organisations, is to unleash the power that resides in everyone. “Private victories precede public victories,” as he likes to say.

It is tempting to dismiss Mr Covey as merely a fringe figure. But this would be a mistake. He is a paid-up member of the management-theory club, with an MBA from Harvard. The club contains many serious thinkers, some of whom, such as Clayton Christensen, have endorsed him in glowing terms. He says that he got the idea for “The 7 Habits” in part from the claim of Peter Drucker, the most hallowed of gurus, that “effectiveness is a habit” and that the third (curiously) of the seven habits, “put first things first”, comes straight from Drucker. FranklinCovey claims 90% of Fortune 100 and 75% of Fortune 500 companies as clients.

Nor is Mr Covey the first to mix management with self-help. In the early 1900s Frank Gilbreth, one of the pioneers of industrial psychology, tried to raise his 12 children according to Frederick Taylor’s principles of scientific management. He discovered that you could cut the time it took to shave if you used two razors at once—but then abandoned the idea when he found that it took an additional two minutes to bandage the resulting wounds.

Mr Covey is only an outlier in the sense that he embodies, in an extreme form, many of the most irritating habits of the guru industry, not least the habit of producing numbered lists of habits. Three habits are particularly worth noting.

The first is presenting stale ideas as breathtaking breakthroughs. In a recent speech in London Mr Covey declared capitalism to be in the middle of a “paradigm shift” from industrial management (which treats people as things) to knowledge-age management (which tries to unleash creativity). Gary Hamel, who according to the Wall Street Journal is the world’s most influential business thinker, proclaims, “For the first time since the dawning of the industrial age, the only way to build a company that’s fit for the future is to build one that’s fit for human beings.”

But management gurus have been making this point for decades. William Ouchi announced it in 1981 in the guise of “Theory Z”. Elton Mayo and Mary Parker Follet had made much the same point 60 years before. It makes you long for some out-of-the-box thinker who will argue that the future belongs to companies that are unfit for human beings (which it may well do).

The second irritating habit is that of naming model firms. Mr Covey littered his speech in London with references to companies he thinks are outstandingly well managed, including, bizarrely, General Motors’ Saturn division, which is going out of business. Tom Peters launched his career with “In Search of Excellence” in 1982. Jim Collins has written a succession of books celebrating the great and the good of the corporate world.

In search of rigour

But do these corporate hagiographies prove anything? The gurus routinely ignore such basic precautions as providing a control group. Five years after “In Search of Excellence” appeared, a third of its ballyhooed companies were in trouble. Andrew Henderson of the University of Texas has recently subjected “excellence studies” to rigorous statistical analysis. He concludes that luck is just as plausible an explanation of their success as excellence.

The third irritating habit is the flogging of management tools off the back of numbered lists or facile principles. Mr Covey reinforces his eight habits with various diagnostic devices such as “the XQ test” (which measures organisational efficiency much as an IQ test measures intelligence). Consultancies like to tell their clients that the key to success lies in “customer-relationship management” and then sell tools to improve it.

But most of these rules are nothing more than wet fingers in the wind. Gurus preach the virtues of “core competences”. But in the developing world many highly diversified companies are sweeping all before them. Customer-relationship management is all about learning about and from your clients. But Henry Ford pointed out that if he had listened to his customers he would have built a better horse and buggy.

Which points to the most irritating thing of all about management gurus: that their failures only serve to stoke demand for their services. If management could indeed be reduced to a few simple principles, then we would have no need for management thinkers. But the very fact that it defies easy solutions, leaving managers in a perpetual state of angst, means that there will always be demand for books like Mr Covey’s.


This article has been taken from The Economist. The link to the original article is: http://www.economist.com/node/14698784

Monday, November 9, 2009

Commonly Confused Words

Will be up in a few hours.........

Sunday, November 8, 2009

Reading Comprehension


Take this as an RC passage, take it as an article. It does not matter. The moment I saw this article I knew I had to share it with you folks. The operative parts are in the first and the last paragraphs, but I have included all the paragraphs for your benefit.

I hope you do not think that I am too macabre in my interests!!!

Pissed Off Bear: 2, Islamic Terrorists: 0

In Kashmir, an Islamic terrorist leader and one of his followers were killed by a black bear. Two other terrorists were wounded, but were able to flee to a nearby village. Although the terrorists were armed with assault rifles, the bear attacked quickly, and at night, and the men were unable to use their weapons in the restricted confines of the cave. Apparently the bear was going to use the cave to hibernate in, and was upset to find that the terrorists had moved in. The four terrorists thought the cave was abandoned, and a good place to hide out in.

The Asiatic Black Bear is related to the American black bear, but is larger (up to 400 pounds for an older male), and is much more aggressive towards humans. The Asiatic bear has a more powerful jaw, and bigger claws. The smaller American black bear usually flees humans, although they have been known to attack and kill small children. In the Americas, and parts of Eurasia, the larger (half a ton) brown bear, especially the American Grizzly, is the most dangerous bear for humans. American brown bears are more aggressive towards humans than their Eurasian cousins. It is the black bear you have to be wary of in Eurasia.

All black bears can climb trees, which makes escape more difficult for fleeing humans. If travelling in woods that contain bears, the safest thing to do is take along someone you can outrun. The European black bears are vegetarians (unless very hungry, in which case they will eat meat), and a big pest to farmers located near forests (where black bears prefer to live). The black bear, of several types, is native to all of Eurasia, from Europe to Japan.

The black bear population in Kashmir has become more of a problem over the last twenty years, because the Indian police disarmed the rural Moslem population. Police are now called in to kill bears that become a nuisance. In the past, the rural folk would often hunt the black bears, because some of their body parts are very valuable (to folk medicine practitioners, especially in China). Those parts could be worth $2,000 or more. The pelts are very warm, and Kashmir gets very cold in the winter. It is believed that other Islamic terrorists have come to grief after encounters with black bears in the large forested areas that make up so much of Kashmir. But in these other cases, there were no survivors, and the bodies were never found. Several small groups of Islamic terrorists have simply disappeared in the hills, and the local black bears are the top predator in the area.