A Posteriori

Attempts to grapple with and elucidate empirical knowledge

New Home June 30, 2012

Filed under: Uncategorized — Rāhul @ 14:08

This blog is moving to my new home at- http://www.pvrahul.com/blog/. I hope you will follow me there, although most of my posts there will be quite different from what I have here.

 

Olympic medals and their rankings March 3, 2010

Filed under: Uncategorized — Rāhul @ 01:11

As the Winter Olympics in Vancouver drew to a close last Sunday, the media went through the usual exercise of tabulating national medal tallies. Although the International Olympic Committee doesn’t recognise any of these arrangements, they make for interesting analysis. While some of the medal tallies are arranged in descending order of total medals won, others are ordered by the number of gold medals won, with silver and bronze medals used only to break ties. The latter scheme apparently enjoys wider appeal and is used in the Wikipedia medal tables too, but the total medal count is favoured by most American newspapers. Each scheme has advantages and disadvantages.

Ranking performances based on total medal count obviously treats all medals the same and hence fails to credit a gold medal appropriately over a silver or bronze. This scheme also falls prey to a handicap of the system whereby the third-best performance in a discipline is awarded a medal alongside the first and second while the fourth is not: a country with a single 3rd-place performance is ranked above another with many 4th-place performances.

Rankings based primarily on the number of gold medals won, with silver and bronze medals used progressively to split ties, fail to allay the injustice to 4th place completely, but at least the effect is not as glaring: no number of third places now makes up for a second or first place. Then again, it instinctively seems unfair to rank 1 gold medal above 10 silver medals.

A fair system of assigning weights to gold, silver and bronze medals is required to compare them against each other in compiling the final medals table. Wikipedia lists schemes that have previously been used by newspapers, such as 5:2:1, 3:2:1 and 4:2:1 as the ratio of gold, silver and bronze medal weights respectively. Perhaps a poll among past Olympians in a field could be used to set the right ratio for Olympic medals in that field. It seems to be a matter of opinion, but that shouldn’t stop us from exploring the possibilities!
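
As an illustration of how such a weighted tally would be computed, here is a minimal Python sketch; the medal counts below are made-up placeholders rather than actual results, and only the three weighting ratios mentioned above are taken from the schemes themselves:

# Illustrative only: the medal counts below are made-up placeholders,
# not the actual 2008 results.
medals = {
    # country: (gold, silver, bronze)
    "Country A": (48, 22, 30),
    "Country B": (36, 39, 37),
    "Country C": (23, 21, 28),
}

schemes = {"5:2:1": (5, 2, 1), "3:2:1": (3, 2, 1), "4:2:1": (4, 2, 1)}

for name, (wg, ws, wb) in schemes.items():
    scores = {c: wg * g + ws * s + wb * b for c, (g, s, b) in medals.items()}
    ranking = sorted(scores, key=scores.get, reverse=True)
    print(name, ranking)

Changing the weights simply reorders the list, which is exactly what the table below shows for the real 2008 numbers.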

Applying these schemes to some of the top nations in the 2008 Olympics, we have the following rank list-

Gold first        Total             5:2:1             3:2:1             4:2:1
China             United States     China             China             China
United States     China             United States     United States     United States
Russia            Russia            Russia            Russia            Russia
Great Britain     Great Britain     Great Britain     Great Britain     Great Britain
Germany           Australia         Australia         Australia         Australia
Australia         Germany           Germany           Germany           Germany
South Korea       France            South Korea       France            South Korea
France            South Korea       France            South Korea       France

The three weighted schemes are obviously attempts at a compromise between the colour-blind total-medal list and the gold-first list. Looking at this table, it isn’t surprising that the American media insisted on using total medals as their ranking criterion!

 

Science: Not just a vehicle for technology January 10, 2010

Filed under: General Physics,Public Policy — Rāhul @ 00:14

Modern life is linked inextricably to Science and Technology. Those two words, although very different in origin and meaning, are so intertwined today that their history makes interesting reading. The Merriam-Webster online dictionary defines Technology as the practical application of knowledge and Science as a system of knowledge covering general truths or laws that is capable of making falsifiable predictions. These definitions suggest that Science creates knowledge, which then propagates into Technology that is used to enrich our lives. But the relationship between the two is not always so clearly unidirectional.

2009 was celebrated worldwide as the International Year of Astronomy because it was the 400th anniversary of the great Italian physicist Galileo Galilei setting about building his “spyglass”, which he soon improved enough to discover the satellites of Jupiter, sunspots and the phases of Venus. Galileo did all this without a scientific understanding of the propagation of light, which had to wait for Isaac Newton. Instead, the technology of the telescope advanced by continued experimentation, enough for Galileo and others to gather sufficient empirical evidence for a then fledgling theory: that the earth is not the centre of the universe.

Against the popular belief of the time, Nicolaus Copernicus in the 16th century had proposed that it is the earth that revolves around the sun and not vice versa. However, it wasn’t until the preponderance of astronomical evidence gathered from telescopic observations a century later that Copernicus could be proved right. The technology of the telescope completed the most important step in the elevation of Copernicus’ proposal to the level of scientific theory. An overwhelming amount of data was gathered, all of which supported Copernicus’ idea over the model placing the earth at the centre of the universe, thus convincing all rational sceptics of its merits. In this case, the advance of technology allowed us to see farther and deeper into nature’s mysteries, thus revealing scientific facts.

On the other hand, modern-day technologies are inextricably linked to scientific advances. Quantum theory led to our understanding of semiconductor electronics, without which the computer industry wouldn’t have taken off. General relativity allowed us to understand gravity well enough to build spacecraft that put a human being on the moon. Enhanced understanding of the human body and the germ theory of disease led to the design of cures for many infectious diseases. Science and technology, it seems, are advancing together, feeding off each other’s achievements like a system with positive feedback. Improved technology allows us to probe further into phenomena that perplex us, leading to scientific theories that help design still better technologies that add value to life. But this relationship between science and technology was not always so close.

Technological advances have been a hallmark of human civilisation throughout history. Our ancestors controlled fire, learned agriculture, invented the wheel and used natural medicines, all through empirical studies that established their utility without any real understanding of the underlying principles. Despite this lack of understanding, technology made tremendous advances, the importance of which is underscored by the fact that historians use technological strides to define particular ages of human history.

Modern Science also had a precursor in history. All ancient civilisations developed natural philosophy to explain the mysteries that surrounded them. While technologies added comfort to life, philosophical inquiry addressed the relentless questions of the mind. But these endeavours did not mesh together to feed off each other’s advances the way modern science and technology do. For instance, some schools of Indian philosophy postulated the atomic theory of matter long before it became a scientific theory based on empirical evidence. But the former cannot be called science because it was not based on experiments. The ancient Chinese, on the other hand, made practical use of the observation that magnets always tend to align along the same direction, but they did not attempt to explain it using fundamental principles like we do now.

It was only post-renaissance that modern science, as defined by the scientific method, was born. Natural philosophy began to be buttressed by structured falsifiable experiments. Technologies increasingly made use of scientific advances and contributed to them too. This process of co-mingled development has led today to a situation where we cannot imagine excelling in the pursuit of either without also excelling in the other. But what are the consequences of this blurring of differences?

It is easy to see a causal relationship between technology and tangible benefits to society. In a capitalist economy, technological advances can easily be commercialised and the inventors rewarded handsomely. So there is tremendous societal interest in incubating and facilitating technological endeavours. Science, on the contrary, is more of a personal pursuit. Although it leads to technologies, its major purpose is to satisfy our innate curiosity and thirst for knowledge. While this is as important as, if not more important than, material progress, it is difficult to make the case for a result-oriented society to support science for its own sake, purely for the joy of exploration.

Hence, scientists in modern times have tended to emphasise the interlinkages between science and technology, and how advances in the former translate into technological marvels, in attempts to win more societal support for science. While there is nothing wrong with the reasoning, and it has been successful in increasing science funding, the question has to be asked whether this is the right approach in the long run. By restricting the utility of science to the narrow channel of technological progress, we risk de-legitimising, in the eyes of society at large, the science that searches for answers to our basic questions.

Space exploration provides one of the best examples of this malaise. Although human beings have always yearned to unlock the mysteries beyond our earth and to go beyond the frontiers of generations past, we have now got used to justifying space missions by their perceived military or medical value. This has affected policy to such an extent that we choose space stations that add little to our understanding of our place in the universe over grander missions into outer space.

When the pursuit of science is justified in terms of technological dividends, it advances the cause of neither science nor technology. The greatest contributions to technological progress have come from science that was done for its own sake. Taking the long view to appreciate the historical differences between the two and the different purposes they serve in enriching human life can help us put today’s connections between them in perspective. The pursuit of technology and material progress is a choice. But scientific temper and understanding provide succour to the soul and are a necessity. We should be vigilant not to make support for the latter contingent on our desire for the former.

 

Graphene ICs around the corner? January 3, 2010

Filed under: Electronics,Semiconductors — Rāhul @ 13:51

2009 marked the 50th anniversary of the physicist Richard Feynman’s speech to the American Physical Society in 1959, in which he foresaw the coming age of nanotechnology. Much of his vision of atomic-level fabrication and nanoscale surgical robots in the bloodstream is yet to be realised. However, the electronics industry has surged relentlessly down this path, following the now famous Moore’s law, which says that the number of electronic building blocks called transistors on a chip roughly doubles every two years.

Ever since 1965, when Gordon Moore, the co-founder of Intel, foresaw this exponential shrinking of individual transistors on a computer chip, the unbridled advance of electronics has allowed progressively more powerful and compact computers. But as applications and user aspirations demand ever more powerful computers, Moore’s law is finally coming up against the brick wall of quantum physics. Transistors on today’s chips are only of the order of a millionth of an inch across, approaching the atomic scale where large-scale physical laws break down and the uncertainties of Quantum Mechanics take over.

Scientists are now looking at ways to redesign the transistor so as to take advantage of the quantum laws. All such approaches mean a shift away from Silicon, which is used near-universally today. One promising new material is Graphene, a one-atom-thick sheet of Graphite, one of the pure forms of elemental Carbon.

Graphene was isolated only in 2004, by peeling off a layer from Graphite using scotch tape. Since then, ways have been found to grow Graphene on Silicon Carbide and to deposit it from solution. The thermal and electronic properties of Graphene have been found to be ideal for transistor operation. Current carriers in Graphene can travel very fast while picking up very little noise. Its very high thermal conductivity also makes it attractive in densely packed integrated circuits, which need to dissipate heat efficiently.

Integrated circuits found on computer chips are essentially a clever combination of millions of transistors and other electronic elements in a logical circuit that produces a certain output depending on the inputs. Individual transistors, connected to form electronic switches, lie at the heart of the circuit. Mechanical switches can be flipped on and off to control the flow of current through a circuit. Similarly, transistors have a channel through which the flow of current is regulated by the voltage at a terminal called the gate. When the gate voltage is flipped between high and low, the transistor channel can be flipped on and off. By cascading many thousands of such switches, or gates, complicated logical operations can be performed electronically.
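
As a loose software-level analogy (not a description of the device physics), the sketch below treats each transistor-based gate as an ideal switch whose output depends only on its inputs, and shows how cascading such gates yields more complex logic:

# Idealised logic-level view: each switch's output depends only on its inputs.
def inverter(a: int) -> int:
    # Output is low when the input is high, and vice versa.
    return 1 - a

def nand(a: int, b: int) -> int:
    # High unless both inputs are high.
    return 1 - (a & b)

def and_gate(a: int, b: int) -> int:
    # Cascading gates builds more complex logic, e.g. AND from NAND + inverter.
    return inverter(nand(a, b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", and_gate(a, b))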

Although single transistors of Graphene had already been demonstrated, it was only recently that a Graphene logic gate was demonstrated. Floriano Traversi and Roman Sordan of Politecnico di Milano, together with Valeria Russo of the Department of Energy’s Micro and Nanostructured Materials Laboratory, both in Italy, reported in Applied Physics Letters their demonstration of a Graphene inverter. An inverter is the most basic logic gate: it gives a low output if the input is high and vice versa, essentially inverting the input at the output.

The researchers deposited two adjacent Graphene layers on a Silicon substrate covered with oxide to form the channels of two different transistors. Metal contacts were then formed behind the Silicon substrate to act as the controlling gate, and in between the Graphene layers so that they were connected end to end. In an ingenious step, they then electrically annealed just one of the Graphene channels so that the gate voltage at which it flips from off to on changed. If a constant voltage is now applied across both transistors together, the output voltage at the terminal they share depends on which transistor is off and which is on. By controlling the gate voltage, the scientists controlled the channel conductivities and hence the output voltage.
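
One rough way to picture the operation (my own toy model with arbitrary numbers, not figures from the paper) is as a voltage divider: the two channels sit in series across the supply, so the shared output terminal settles at a voltage set by the ratio of the two channel resistances, which the common gate voltage shifts in opposite directions for the annealed and untouched channels:

# Toy voltage-divider picture of a two-channel inverter.
# All numbers are arbitrary illustrations, not measurements from the paper.
V_SUPPLY = 1.0  # volts applied across the two channels in series

def divider_output(r_top, r_bottom):
    # Voltage at the shared terminal between two series resistances.
    return V_SUPPLY * r_bottom / (r_top + r_bottom)

# Suppose the input makes the top channel conduct well while the bottom
# conducts poorly (output pulled high), or the other way round.
print("input low :", divider_output(r_top=1e3, r_bottom=10e3))   # output near the supply
print("input high:", divider_output(r_top=10e3, r_bottom=1e3))   # output near 0 V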

By tying together many such switches in a circuit similar to today’s Silicon integrated circuits, much faster computer chips with lower noise could be obtained. But this technology is still at a nascent stage. The output voltage from the demonstrated gate does not switch between values that can be fed directly to another gate’s input. Also, unlike today’s Silicon switches, this Graphene gate cannot be turned fully off to shut off the current completely, so it dissipates power even while not switching.

Silicon electronics has matured over half a century of continuous improvement. The intense research interest and rapid progress in Graphene-based devices, and this demonstration of the feasibility of Graphene integrated circuits, indicate that post-Silicon electronics might be just around the corner. Moore’s law will probably continue to hold into the near future.

 

Trailblazing December 20, 2009

Filed under: Uncategorized — Rāhul @ 20:39

The Royal Society of the UK is celebrating its 350th anniversary this year. To mark the occasion, it has made freely available on the internet some of the trailblazing leaps in Science recorded in its proceedings over the years: from Isaac Newton’s theory of light and colours in 1672 to Benjamin Franklin flying a kite in an electric storm in 1752, Bayes’ essay on chance in 1763, Maxwell’s theory of the electromagnetic field in 1865, Dirac’s theory of the electron in 1928 and Watson and Crick’s DNA structure in 1954.

It was interesting to me how the only example from the 21st century they chose to highlight was a paper on Geoengineering. I wonder if this amounts to an endorsement for research into the field by the Royal Society. With the current international impasse on emissions reductions, it is very likely that Geoengineering will become increasingly prominent in the near future.

Partly due to my background in Electrical Engineering, my favourite paper among all the highlights is Maxwell’s masterpiece tying up Electricity and Magnetism into the unified Electromagnetic theory. Although the paper itself makes for difficult reading today, even a cursory look betrays the rigour of the analysis and the genius of Maxwell. Even today, it is perhaps the most elegant unification in history of forces and fields that were previously thought to be separate. For the sake of reductionist Physics and the intellectual clarity that goes with it, I hope it doesn’t hold that distinction much longer!

 

All the room at the bottom December 8, 2009

Filed under: General Physics — Rāhul @ 18:42

In December 1959, 50 years ago this month, Richard Feynman gave a talk to the American Physical Society at Caltech. Titled “There’s plenty of room at the bottom”, it laid out the promise of the then-unborn field we today call nanotechnology and challenged physicists to turn their attention to unlocking the consequences of the laws of physics at this small scale. The potential of nanotechnology is widely recognised today, and significant effort and funding are directed to it. On this occasion of the 50th anniversary, I would like to review briefly the original talk by Professor Feynman and to explore how it has shaped nanotechnology research.

Feynman starts the talk by appreciating the unique journey of an experimentalist who makes the first inroads into a hitherto unreachable field, like Kamerlingh Onnes in low temperature physics, and proposes as a similar area the “problem of manipulating and controlling things on a small scale”. He then lays out the interesting challenge of writing the entire 24 volumes of the Encyclopaedia Britannica on a pinhead by reducing all its writing linearly by a factor of 25,000 and, in the same vein, of having all the information in the great libraries in a small block that can be carried about. Then he talks about using codes of a few atoms, instead of letters and symbols, as a way to compress information to even smaller dimensions, which he uses to illustrate just how much “room” there is at the bottom. The central advance in technology that Feynman anticipates would drive all this is a better electron microscope. In 1959, electron microscopes could resolve dimensions as small as 1 nm. He challenges physicists to reduce this to 10 pm, an improvement of 100 times, which would let us look at and manipulate individual atoms.
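
To get a feel for these numbers (a rough back-of-the-envelope sketch with assumed figures for print size and atomic spacing, not Feynman’s own values), a letter shrunk 25,000 times is still many atoms wide:

# Back-of-the-envelope check of the 25,000x reduction (all numbers assumed).
letter_height_m = 2e-3      # assume ordinary print is about 2 mm tall
reduction = 25000
atom_spacing_m = 0.25e-9    # assume roughly 0.25 nm between neighbouring atoms

shrunk_letter_m = letter_height_m / reduction
atoms_across = shrunk_letter_m / atom_spacing_m
print("shrunk letter height in nm:", shrunk_letter_m * 1e9)   # about 80 nm
print("atoms across a letter:", atoms_across)                 # about 320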

Throughout the lecture, Feynman described only possibilities that follow the laws of Physics as then understood but were beyond the realm of technology. He focussed on the effects of miniaturisation on computers. In the 50s, computers with relatively few circuits filled entire rooms. If all the devices and circuits were made at the atomic level, he suggested, we could have computers with far more complicated circuits in a smaller space, which is exactly what we have today. Then he talked of how the problems of lubrication and heat dissipation would scale favourably at small dimensions. He also talked about the possibility of nanorobots entering the blood stream to conduct surgery, an idea that has since received considerable play in Science Fiction. Addressing the problem of assembling at the nano level, he suggested using a cascade of master-slave connections, either mechanical or electrical, that would progressively assemble at smaller and smaller scales, and identified the need to improve the precision of the apparatus at each stage. As the final frontier, he considered the problem of rearranging atoms themselves so as to create virtually anything, from elements and compounds to minerals. He ended by talking about how the physical laws are very different at such a small scale and by announcing prizes for a technology challenge in this direction.

Although his groundbreaking work in Quantum Electrodynamics was well behind him, Professor Feynman didn’t then enjoy the public reputation of the supremely brilliant and erudite yet witty and charming scientist that he does today. So it is interesting that so many papers in nanotechnology cite this lecture as the beginning of the field. There is no direct link between the talk and the various advances that came later, but in many ways Feynman was prophetic. The electron microscope can today resolve down to 50 pm, which is as good as a biologist needs. Computers have indeed packed more and more circuits, devices and memory into smaller areas and grown powerful and complicated. But his vision of nano-level assembly and surgery doesn’t seem any closer today than when he talked about it. In a series of articles this month, Nature Nanotechnology points to how a nascent field looked to this lecture as a focal point that drove the enormous advances we have seen in the last few decades. While Feynman got a lot right through his crystal ball, he also foresaw some things that haven’t come to pass yet!

Through the whole talk, the reader (and the listener, I am sure!) can sense the scientific zeitgeist of the 50s, which was a reductionist viewpoint where everything could finally be analysed by a set of physical laws. Chemistry, Biology and other studies, it was thought, could eventually be reduced to Physics, and once we had all the fundamental physical laws, we could build everything else from them. Although this point of view still holds much water and an enduring romantic sway, it is undeniable that the major advances of the last few decades have been in Biology, Psychology and Neuroscience, and even many Physicists are today taking an emergent, rather than reductionist, view of the science. It can be argued that this signifies a failure of the vision and intellectual firepower required to make fundamental advancements. Perhaps we will again return, with a momentous discovery, to the reductionist viewpoint. But for now, Science continues to look where the light is for the needle lost in the dark and tries to push the frontiers of the lighted area ever outwards. Maybe it will be the ability to manipulate things on an atomic scale that eventually leads us to the next great leap forward!

 

Heating and Car Mileage November 17, 2009

Filed under: General Physics — Rāhul @ 23:51

We are all aware of the mileage hit we unwillingly take when we decide to run the AC on a hot day in the car. But, what about the reverse? What effect does running the heater have on petrol mileage? One of my friends recently brought up this topic and it made for some interesting discussion.

I don’t usually use heating in my car in the winter because I am usually dressed warmly and my car rides are pretty short anyway. So I haven’t had the opportunity to test my hypothesis (to follow) by trying to mimic other conditions and seeing whether I get more or fewer miles from a full tank of Petrol with the heating on. But I do manage pretty high mileage with my ’93 Corolla (nearly 30 miles to the gallon in cold weather with roughly 50% highway miles), which could be related to my heating preference.

In my analysis below, the key fact to remember is that the mileage achieved by a car is a function of the ambient temperature. At normal ambient temperatures, a controlled quantity of fuel is injected into the cylinder during each cycle, which is then ignited to produce power. But when the temperature is very low, the car lets in a larger quantity of fuel every cycle in an effort to compensate for the lower temperature with a higher fuel-to-air ratio. As the temperature decreases, the mixture in the chamber needs to be richer in fuel for it to burn at the same rate (in terms of power produced per cycle) as at a higher temperature. So, when the ambient temperature goes down, fuel efficiency goes down too. This is the reason we see a large drop in petrol mileage in cold weather.

Fuel efficiency also decreases when the temperature is very high. But this has mostly to do with the enhanced cooling needs of the engine and the passengers rather than with the internal combustion engine itself. Cooling requires energy, which saps fuel efficiency. So there is a range of optimum ambient temperature in which petrol mileage is highest, on either side of which it tapers off. This is a feature of many systems when it comes to efficiency.

Coming back to the original question, the way to gauge the effect of heating on mileage is to see how it affects the temperature of the engine. Car heating systems usually work by siphoning off a part of the heat generated in the engine to warm the passenger cabin. In winter, especially right after the car is started, the temperature in the engine is well below the optimum range. If at this time some of the heat being generated is diverted to heat the passenger cabin, it will reduce fuel mileage. But if the car has been running for a while and the engine is already above the optimum temperature range (unlikely in harsh winters and short drives), then fuel mileage will increase if the engine is cooled by letting some of its heat out. Of course, when the heat is on, a fan channels the heat as required. This fan is a drain on the engine whatever the temperature. At sufficiently high temperatures, the mileage gain from shedding some engine heat outweighs the mileage loss from the fan. But then I am not sure why I would use the heater on such a hot day!
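
Purely to illustrate this reasoning (a toy model in which every number is arbitrary and made up, not real engine data), the argument can be encoded as an efficiency curve that peaks in an optimum engine-temperature band, with the heater pulling the engine temperature down a little and the blower fan imposing a small fixed drain:

# Toy model of the argument above; every constant is arbitrary and illustrative.
def efficiency(engine_temp_c):
    # Relative fuel efficiency, peaking around an assumed optimum of 90 C.
    optimum, width = 90.0, 60.0
    return max(0.0, 1.0 - ((engine_temp_c - optimum) / width) ** 2)

def efficiency_with_heater(engine_temp_c):
    heat_drawn_c = 5.0   # assume the cabin heater cools the engine slightly
    fan_penalty = 0.01   # assume the blower fan costs a small fixed slice
    return efficiency(engine_temp_c - heat_drawn_c) - fan_penalty

for t in (40, 70, 90, 110):   # cold start, warming up, optimum, too hot
    print(t, round(efficiency(t), 3), round(efficiency_with_heater(t), 3))

With these made-up numbers, the heater hurts noticeably on a cold engine, barely matters near the optimum, and actually helps once the engine is above it, which is just the qualitative argument above.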

This post strays from the stated purpose of the blog. Rather than trying to form scientific conclusions from observed facts, it uses a priori knowledge to predict what will happen. I hope to gather data and make an a posteriori post on this matter sometime soon. Till then, my hypothesis is that using the heater in winter does reduce fuel mileage, although the effect is likely much smaller than the effect of A/C in summer.

 

Mixes made equal November 9, 2009

Filed under: Uncategorized — Rāhul @ 16:38

Today, I shall explore a simple puzzle which illustrates the advantage of considering extreme conditions to get pointers towards the answer. Consider this scenario, found at the Times’ Tierneylab blog a few weeks ago. Essentially, the question is-

If we have two vessels, X and Y, with X containing a volume “a” of liquid A and Y containing a volume “b” of liquid B (which mixes uniformly with liquid A), and we first pour a volume x of liquid A from X to Y and then pour a volume x of the mixture in Y back to X, will there now be more of liquid A in Y than of liquid B in X? Assume that b is greater than a.

Before solving this algebraically, let’s think about the problem. Since the volume b is greater than a, there is more of liquid B than of liquid A. When x of A is poured into Y, Y will have b+x amount of a mixture of A and B whereas X will have a-x amount of just liquid A. When x amount of the mixture in Y is poured back to X, Y will go back to having just b amount and X to having a amount, but both will now hold mixtures of A and B. There doesn’t seem to be any immediately compelling reason why the amount of B in X and the amount of A in Y should be the same, or why the percentages should be the same either. So, let’s consider an extreme case for some help with the arguments.

Let us first consider the case where x=0. No liquid is exchanged between X and Y, so at the end the amount of B in X and of A in Y are the same, namely zero. Now consider the case where x=a, i.e. all of liquid A is poured into Y. After the first pour, Y holds a+b amount of liquid, of which a is A and b is B, while X is empty. When x=a amount of this mixture is poured back, it carries a*a/(a+b) of A and a*b/(a+b) of B. So X ends up with ab/(a+b) of B, while Y is left with a - a*a/(a+b) = ab/(a+b) of A. Again, the amount of B in X and the amount of A in Y are the same.

In both the special cases we considered, the amount of B in X and the amount of A in Y are the same. It is interesting that the concentrations of B in X and of A in Y are not the same, because the volumes a and b are not the same. Given these two special cases, which give the same result for very different values of x, it seems likely that the result holds for all x. The examples do not prove it, but they tell us what answer to look for. So when we solve the problem algebraically, not only will we have higher confidence in the result if we prove it, we are also more likely to get to it, since we know what we are looking for!

Getting to the algebra now, in the general case, after the first pour X will have a-x amount of A and Y will have a total amount of b+x, of which b is B and x is A. When x amount of the liquid in Y is now poured back to X, since the liquids in Y are well mixed, the amount poured back consists of (b/(b+x))*x of B and (x/(b+x))*x of A; in proportion to their volumes, this adds up to the x amount poured back from Y to X. So the amount of A left in Y is x - (x*x)/(b+x), which is bx/(b+x). The amount of B now in X is just the amount of B poured from Y to X, which again is bx/(b+x). Voila, we have the same number!
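
A quick numerical check (a minimal sketch; the particular values of a, b and x below are arbitrary) confirms that the two amounts agree for any pour volume:

# Check numerically that the amount of B ending up in X equals the amount of
# A ending up in Y, for arbitrary volumes with b > a and 0 < x <= a.
def exchange(a, b, x):
    # Pour x of pure A from X into Y, then pour x of the (well-mixed) Y back.
    a_in_y, b_in_y = x, b                  # Y now holds b + x in total
    a_returned = x * a_in_y / (b + x)      # share of A in what is poured back
    b_to_x = x * b_in_y / (b + x)          # share of B in what is poured back
    return b_to_x, a_in_y - a_returned     # B in X, A in Y at the end

for a, b, x in [(1.0, 3.0, 0.4), (2.0, 5.0, 2.0), (1.0, 10.0, 0.01)]:
    b_in_x, a_in_y = exchange(a, b, x)
    print(a, b, x, "->", round(b_in_x, 6), round(a_in_y, 6))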

This means that the concentration of A in Y is much less than the concentration of B in X. So, if I gulp in a little sea water and spit out the same amount, there will be the same amount of sea water left in my mouth as the amount of my spit in the sea, but my mouth will taste much saltier than the sea will of my spit!

 

A fair map of the world November 1, 2009

Filed under: Public Policy — Rāhul @ 13:10

Maps seek to represent the curved surface of the spherical earth on a flat surface. All the methods we have of doing this introduce one distortion or another and involve trade-offs between distortion of shapes, relative distortion of areas at different latitudes, variation in the distance between points that are actually equidistant, and so on. Despite these issues, maps are clearly a valuable resource in understanding geography and increasing awareness about the world. They are more compact and easier to carry than globes and are especially well suited to representation on flat computer screens, as increasingly popular internet maps are. While there are infinitely many methods of map projection, of which a few are most popular, this post is about the considerations involved in choosing the relative orientation of a map: which part of the surface of the earth it is centred on, which directions form the horizontal and vertical parallels, and which part of our wide world is consigned to the far left and right, split partially between those two ends.

There seems to be a certain natural reasoning behind choosing the north and south poles as two opposite sides of a rectangular map. The earth spins on a north-south axis, so it seems logical to have that axis either horizontal or vertical on the map. The polar regions, which are distorted the most by this projection, are also the least populated areas in the world, which makes the map most helpful for most people around the world. But the question of which part of the world the map should centre on doesn’t seem to follow from any easy utilitarian argument. The alignment I have most commonly encountered in the United States is shown below in Fig: 1. I have sourced all figures in this post from the ever helpful Google Maps. It has the United States right in the middle and cuts through Russia, China and countries in Southeast Asia.

Fig: 1 Americas in the center

Understandably, this is not very popular outside the Americas. Even to an objective analyst, it doesn’t seem particularly helpful. The two biggest oceans of the world are close to the centre whereas some of the most populous regions are consigned to the far corners or even cut into parts. Another version of the map, which is more popular in much of Eurasia, is shown below in Fig: 2.

Fig: 2 Greenwich in the middle

This map has the zero-degree longitude at the centre and splits at the international date line. This has the advantage of splitting the map in the Pacific ocean, which is the least populated stretch of the planet and has no large land masses that are cut into two by the map. Although this map is a product of the Eurocentrism of the 19th century and reflects the primacy of the British Empire in global affairs in how the centre of the map passes through London, it has the advantage of placing much of the populated area of the world towards the centre. A natural argument can also be made in favour of this alignment: Pangaea, the supercontinent which split to form all the current continents of the world, would be left uncut by this map scheme. Pangaea was surrounded by what are today the Pacific, Arctic and Southern oceans, which this scheme shows towards the corners, surrounding the inhabited world.

Fig: 3 The human story

Finally, Fig: 3 above shows another plausible map scheme which retains many of the advantages of Fig: 2 but adds a human element to the argument. Here, the map is cut in the Atlantic, which also avoids cutting through any large populated land mass (except Greenland). Although the large Pacific ocean is towards the centre of the map, this alignment traces the migration of modern humans from Africa as they populated the rest of the world. Humans crossed from Africa to the Middle East and on to Asia, Europe, Australia and, across the Bering strait, to the Americas. This map shows that entire route without a break, which makes it an ideal candidate for a shared human map.

I do intend this only as a purely academic exercise. Maps are inherently political and are unlikely ever to be objectively drawn. Besides, with human progress I hope these issues fade away in importance rather than having to be renegotiated. Internet maps already are flexible enough to be centered at customisable places. But, that doesn’t take away from the fun of exploring what might be the pros and cons of different hypothetical possibilities!

 

Martin Gardner and the Jones sisters October 26, 2009

Filed under: Mathematics — Rāhul @ 20:04

Today’s puzzle at the New York Times’ Tierneylab blog was interesting enough to whet my appetite and yet not hard enough to elude me more than a tiny bit. I used elementary probability theory and algebra, and with a little help from the ever-dependable OpenOffice Calc, I got some most amusing results. So, let’s dive straight into it.

Quoting the first problem,

If you happen to meet two of the Jones sisters (this assumes that the two are random selections from the set of all Jones sisters), it is an exactly even-money bet that both girls will be blue-eyed. What is your best guess as to the total number of blue-eyed Jones sisters?

To attack this, I first realise that there isn’t one unique answer, as the existence of one solution almost surely means the existence of larger ones. Taking the number of blue-eyed Jones sisters to be n and the total number of Jones sisters to be m, we have: Probability of both sisters being blue-eyed = (Probability of the first Jones sister being blue-eyed) X (Probability of the second Jones sister also being blue-eyed). As the probability is 0.5, we have,

(n/m) ((n-1)/(m-1))=0.5

The first factor follows because the first Jones sister is blue-eyed if any one of the n blue-eyed sisters among the m is chosen. Since one blue-eyed sister has already been chosen as the first, the second is blue-eyed only if one of the n-1 blue-eyed sisters among the m-1 remaining is chosen as the second. We are able to multiply the probabilities only because the choices of sisters are random. Attempting to solve this as a quadratic equation in n, we see that n has a different value for each value of m. Since the Jones sisters can only exist in whole numbers, the solutions are the ones where both n and m are whole numbers. Do such solutions exist at all?

Starting from n=1 and counting up through the natural numbers, we see that n=3 and m=4 is one combination that works. In fact, it is the only combination of n and m under 10. So the answer must be that there are 4 Jones sisters, of whom 3 are blue-eyed. Then, the problem goes on to say-

A simplified version of eye color inheritance, long held to be true, went as follows. Whether we are blue or brown-eyed is decided by a single pair of eye-color genes, one of which we get from our mother and the other from our father. For a person to have blue eyes, both genes must be for blue eyes; if either gene is for brown eyes, the person’s eye color is brown. The brown eye-color gene is dominant over blue. Each parent randomly passes on either one of his or her two eye-color genes to any given child. Thus if Mr. Jones has one blue and one brown eye color gene, he will be brown-eyed, and any given child will get either the blue or brown eye-color gene from him. We now know that human eye-color inheritance is much more complex than this, but for the purposes of the next question, assume that this simple model is true.

Based on the solution of problem 3, what would you guess are the colors of Mr. and Mrs. Jones’s eyes? What is your best guess about the eye-color genes that Mr. and Mrs. Jones have? How confident can you be about this, assuming that the only information you have is the colors of their daughters’ eyes?

Since the Joneses have 3 blue-eyed daughters and one brown-eyed daughter, the 4 eye-colour genes the parents carry are neither all blue nor all brown. If one parent had both genes brown, all the daughters would be brown-eyed, so that is ruled out too. So at least one parent must have one blue and one brown gene. The other parent may have either both genes blue or one brown and one blue. Consider the former case first: the probability of each daughter being blue-eyed is 0.5, that is, when she gets blue-eye genes from both parents. Using the binomial probability distribution (which I won’t elaborate here), the probability of having 3 blue-eyed and one brown-eyed daughter is then 0.25. In the latter case, the probability of each daughter being blue-eyed is 0.25, since she gets blue-eye genes from both parents in only one case out of four. Using the binomial distribution again, the probability of having 3 blue-eyed and 1 brown-eyed daughter is now 0.046875. Comparing the two probabilities, 0.25 and 0.046875, we can be reasonably confident that one parent is blue-eyed and the other is brown-eyed with one brown-eye gene and one blue-eye gene. Our confidence, in my opinion (assuming the two configurations are equally likely to begin with), will be the ratio of 0.25 to the combined probability, which is close to 84.21%. Not the widely accepted 95%, but still very high.
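
These numbers are easy to check in a few lines of Python (a sketch which, like the argument above, assumes the two possible parental gene combinations are equally likely a priori):

from math import comb

def binomial(k, n, p):
    # Probability of exactly k successes in n independent trials.
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Case 1: one parent blue-eyed (bb), the other Bb: each daughter blue with probability 0.5.
# Case 2: both parents Bb: each daughter blue with probability 0.25.
p_case1 = binomial(3, 4, 0.5)    # 3 blue-eyed daughters out of 4
p_case2 = binomial(3, 4, 0.25)

print(p_case1, p_case2)               # 0.25 and 0.046875
print(p_case1 / (p_case1 + p_case2))  # about 0.8421, assuming equal priors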

Then, the puzzle goes on-

In the same neighborhood there are a whole bunch of animal couples who want to keep up with the Joneses and produce offspring that have some prized trait that we will metaphorically call being “blue-eyed.” The animals (and the range of eggs they produce) are: snakes (10-40), tortoises (50-150), frogs (500-800), beetles (1000-5000), lobsters (5000-50,000), bees (100,000-500,000), termites (500,000-1 million), codfish (1 million-8 million) and oysters (10 million-80 million).

It turns out that all these animal couples produced a bunch of eggs within their expected range, such that (as with the Jones sisters) if you picked any two eggs from the bunch randomly, there is an exactly even chance that they would both be “blue-eyed.” How many “blue-eyed eggs” and total eggs did each of these animal couples produce? Can you further extend this series of numbers using some analogy other than animal reproduction (which we may have exhausted, notwithstanding nature’s amazing profligacy)?

Here, we have to find values of n and m beyond 3 and 4. For this I sought the help of MS Excel: using the relationship connecting n and m, iterating n through increasing natural numbers and searching for whole-number values of m, I got the following list.

    n        m    n/(previous n)   m/(previous m)        m/n
    3        4          -                -          1.3333333333
   15       21    5.0000000000     5.2500000000     1.4000000000
   85      120    5.6666666667     5.7142857143     1.4117647059
  493      697    5.8000000000     5.8083333333     1.4137931034
 2871     4060    5.8235294118     5.8249641320     1.4141414141
16731    23661    5.8275862069     5.8278325123     1.4142011834
97513   137904    5.8282828283     5.8283250919     1.4142114385

Although I didn’t go into the millions as the question demands, we can see patterns emerging already. The ratio of any value of n or m to its preceding value increases quickly at first and then plateaus a little above 5.8. The ratio of m to n also increases at first and then plateaus near 1.414. The puzzle goes on to explore these numbers.
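
Before looking at what the puzzle says about these limits, the same search is easy to reproduce in a few lines of Python instead of a spreadsheet (a sketch: it iterates over n and keeps the pairs for which m comes out as a whole number):

from math import isqrt

# n = blue-eyed sisters, m = total sisters, with (n/m)*((n-1)/(m-1)) = 1/2,
# i.e. m*(m-1) = 2*n*(n-1). Keep the n for which m comes out whole.
pairs = []
for n in range(2, 200000):
    disc = 1 + 8 * n * (n - 1)
    root = isqrt(disc)
    if root * root == disc and (1 + root) % 2 == 0:
        pairs.append((n, (1 + root) // 2))

for n, m in pairs:
    print(n, m, round(m / n, 10))
# prints 3 4, 15 21, 85 120, 493 697, 2871 4060, 16731 23661, 97513 137904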

Not surprisingly, the ratio between “blue-eyed” eggs to the total number of eggs approaches a finite limit as we go towards infinity. But, unexpectedly, so does the ratio between consecutive numbers of eggs in the series, both blue-eyed and total. Why should this be? Can you figure out what these limits are and come up with a general formula that will yield all possible numbers of even-probability blue-eyed babies?

The last question asks for a general formula for all possible values of n, at which my limited math skills are stumped. But to find the finite limit of the ratio of m to n, let’s go back to the first equation. As n and m tend to infinity, 1 can be neglected in comparison to n or m, so n-1 and m-1 tend to n and m respectively. Thus, the equation reduces to-

(n/m) (n/m)=0.5

which can be solved to get

(m/n) = √2 = 1.41421...

which is indeed the value we saw it plateauing at. After some thought, I am still at a loss as to why the ratios of consecutive values of n and of m also converge as n and m tend to infinity. I suppose I’ll have to wait till other Tierneylab readers solve that! But since the ratio of m to n converges to a value, if the ratio of consecutive values of n converges to a number, then the ratio of consecutive values of m must converge to the same number.