Saturday, May 5, 2012

How A Private Data Market Could Ruin Facebook

The growing interest in a market for personal data that shares profits with the individuals who own the data could change the business landscape for companies like Facebook

Facebook's imminent IPO raises an interesting issue for many of its users. The company's value is based on its ability to exploit the online behaviours and interests of its users. 

To justify its sky-high valuation, Facebook will have to increase its profit per user at rates that seem unlikely, even by the most generous predictions. Last year, we looked at just how unlikely this is. 

The issue that concerns many Facebook users is this. The company is set to profit from selling user data, but the users whose data is being traded do not get paid at all. That seems unfair.

Today, Bernardo Huberman and Christina Aperjis at HP Labs in Palo Alto, say there is an alternative. Why not  pay individuals for their data? TR looked at this idea earlier this week.

Setting up a market for private data won't be easy. Chief among the problems is that buyers will want unbiased samples--selections chosen at random from a certain subgroup of individuals. That's crucial for many kinds of statistical tests.

However, individuals will have different ideas about the value of their data. For example, one person might be willing to accept a few cents for their data while another might want several dollars.

If buyers choose only the cheapest data, the sample will be biased in favour of those who price their data cheaply. And if buyers pay everyone the highest price, they will be overpaying. 

So how to get an unbiased sample without overpaying? 

Huberman and Aperjis have an interesting and straightforward solution. Their idea is that a middle man, such as Facebook or a healthcare provider, asks everyone in the database how much they want for their data. The middle man then chooses an unbiased sample and works out how much these individuals want in total, adding a service fee. 

The buyer pays this price without knowing the breakdown of how much each individual will receive. The middle man then pays each individual what he or she asked, keeping the fee for the service provided. 

The clever bit is in how the middle man structures the payment to individuals. The trick here is to give each individual a choice. Something like this:

Option A: With probability 0.2, a buyer will get access to your data and you will receive a payment of $10. Otherwise, you’ll receive no payment.
Option B: With probability 0.2, a buyer will get access to your data. You’ll receive a payment of $1 irrespective of whether or not a buyer gets access.

So each time a selection of data is sold, individuals can choose to receive the higher amount if their data is selected or the lower amount whether or not it is selected.

The choice that individuals make will depend on their attitude to risk, say Huberman and Aperjis. Risk averse individuals are more likely to choose the second option, they say, so there will always be a mix of people expecting high and low prices. 

The result is that the buyer gets an unbiased sample but doesn't have to pay the highest price to all individuals.
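To make the mechanics concrete, here is a rough Python sketch of the scheme. The 0.2 probability and the $10 and $1 payments come from the example above; the individual asks, the sample size and the 10 per cent service fee are invented purely for illustration, and the paper's actual mechanism is more subtle than this.

```python
import random

def expected_payment(option, p_selected=0.2, high=10.0, low=1.0):
    """Expected payout per sale for the two choices offered to each individual."""
    if option == "A":      # paid only if the buyer actually gets your data
        return p_selected * high
    return low             # option "B": paid regardless of selection

def quote_price(asks, sample_size, fee_rate=0.1, seed=0):
    """The middle man draws a uniform random (unbiased) sample and quotes one
    total price to the buyer, without revealing the individual asks."""
    random.seed(seed)
    sample = random.sample(list(asks), sample_size)
    total_asks = sum(asks[person] for person in sample)
    return sample, total_asks * (1 + fee_rate)

# Option A is worth $2 per sale in expectation, option B a guaranteed $1,
# so risk-tolerant individuals lean towards A and risk-averse ones towards B.
print(expected_payment("A"), expected_payment("B"))

asks = {"alice": 0.50, "bob": 2.00, "carol": 8.00, "dave": 1.25}
sample, price = quote_price(asks, sample_size=2)
print(sample, round(price, 2))
```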

That's an interesting model which solves some of the problems that other data markets suffer from.

But not all of them. One problem is that individuals will quickly realise how the market works and work together to demand ever increasing returns.  

Another problem is that the idea fails if a significant fraction of individuals choose to opt out altogether, because the samples will then be biased towards those willing to sell their data. Huberman and Aperjis say this can be prevented by offering a high enough base price. Perhaps.

Such a market has an obvious downside for companies like Facebook, which exploit individuals' private data for profit. If they have to share their profit with the owners of the data, there is less left for them.

And since Facebook will struggle to achieve the kind of profits per user it needs to justify its valuation, there is clearly trouble afoot.

Of course, Facebook may decide on an obvious way out of this conundrum--to not pay individuals for their data.

But that creates an interesting gap in the market for a social network that does pay a fair share to its users (perhaps using a different model to Huberman and Aperjis'). 

Is it possible that such a company could take a significant fraction of the market? You betcha!

Either way, Facebook loses out--it's only a question of when.  

This kind of thinking must eventually filter through to the people who intend to buy and sell Facebook shares. 

For the moment, however, the thinking is dominated by the greater fool theory of economics--buyers knowingly overpay on the basis that some other fool will pay even more. And there's only one outcome in that game.

Ref: arxiv.org/abs/1205.0030: A Market for Unbiased Private Data: Paying Individuals According to their Privacy Attitudes



Twitter Cannot Predict Elections Either

Claims that Twitter can predict the outcome of elections are riddled with flaws, according to a new analysis of research in this area

It wasn't so long ago that researchers were queuing up to explain Twitter's extraordinary ability to predict the future.  

Tweets, we were told, reflect the sentiments of the people who send them. So it stands to reason that they should hold important clues about the things people intend to do, like buying or selling shares, voting in elections and even about paying to see a movie. 

Indeed, various researchers reported that social media can reliably predict the stock market, the results of elections and even box office revenues.

But in recent months the mood has begun to change. Just a few weeks ago, we discussed new evidence indicating that this kind of social media is not so good at predicting box office revenues after all. Twitter's predictive crown is clearly slipping. 

Today, Daniel Gayo-Avello, at the University of Oviedo in Spain, knocks the crown off altogether, at least as far as elections are concerned. His unequivocal conclusion: “No, you cannot predict elections with Twitter.”

Gayo-Avello backs up this statement by reviewing the work of researchers who claim to have seen Twitter's predictive power. These claims are riddled with flaws, he says.

For example, the work in this area assumes that all tweets are trustworthy and yet political statements are littered with rumours, propaganda and humour. 

Neither does the research take demographics into account. Tweeters are overwhelmingly likely to be younger and this, of course, will bias any results.   "Social media is not a representative and unbiased sample of the voting population," he says.

Then there is the problem of self selection. The people who make political remarks are those most interested in politics. The silent majority is a huge problem, says Gayo-Avello and more work needs to be done to understand this important group.

Most damning is the lack of a single actual prediction. Every analysis on elections so far has been done after the fact. "I have not found a single paper predicting a future result," says Gayo-Avello.

Clearly, Twitter is not all it has been cracked up to be when it comes to the art of prediction. Given the level of hype surrounding social media, it's not really surprising that the more sensational claims do not stand up to closer scrutiny. Perhaps we should have seen this coming (cough).

Gayo-Avello has a solution. He issues the following challenge to anybody working in this area: "There are elections virtually all the time, thus, if you are claiming you have a prediction method you should predict an election in the future!" 

Ref: arxiv.org/abs/1204.6441: “I Wanted to Predict Elections with Twitter and all I got was this Lousy Paper”: A Balanced Survey on Election Prediction using Twitter Data




The New Science of Online Persuasion

Researchers are using Google Adwords to test the persuasive power of different messages.

The Web has fundamentally changed the business of advertising in just a few years. So it stands to reason that the process of creating ads is bound to change too. 

The persuasive power of a message is a crucial ingredient in any ad. But settling on the best combination of words is at best a black art and at worst, little more than guesswork.  

So advertisers often try to test their ads before letting them out into the wild.

The traditional ways to test the effectiveness of an advertising campaign are with a survey or a focus group. Surveys are given to a carefully selected group of people who are asked for their opinion about various different forms of words. A focus group is similar but uses a small group of people in a more intimate setting, often recorded and watched from behind a one-way mirror. 

There are clear disadvantages with both techniques. Subjects are difficult to recruit, hard to motivate (often requiring some kind of financial reward) and the entire process is expensive and time consuming. 

What's more, the results are hard to analyse since any number of extraneous effects can influence them. Focus groups, for example, are notoriously susceptible to group dynamics in which the view of one individual can come to dominate. And there is a general question over whether recruited subjects can ever really measure the persuasiveness of anything.

Then there is the obvious conflict created by the fact that a subject is not evaluating the messages under the conditions in which they were designed to work, i.e. to get the attention of an otherwise uninterested reader.

So there's obvious interest in finding a better way to test the value of persuasive messages. One approach is to use crowdsourcing services such as Mechanical Turk to generate an immediate readership willing to take part. 

But Turkers are paid to take part, so the results are no better than those that conventional methods produce, although they are cheaper and quicker to collect.

Today, Marco Guerini at the Italian research organisation Trento-Rise and a couple of buddies say they've found an interesting way round this: to test messages on Google's AdWords service.

The idea here is to use Google Adwords to place many variations of a single message to see which generates the highest click through rate.  

That's a significant improvement over previous methods. The subjects are not paid and make their choice in the very conditions in which the message is designed to work. And the data is quick and relatively cheap to collect.

Google already has a rudimentary tool that can help with this task. The so-called AdWords Campaign Experiments (ACE) tool allows users to test two variations of an ad side-by-side.

But to really get to the heart of persuasiveness requires a much more rigorous approach. Guerini and co make some small steps in this direction by testing various adverts for medieval art at their local castle in Trento.

These guys used Google's ACE tool to test various pairs of adverts and achieved remarkable success with some of their ads. One ad, for example,  achieved a click through rate of over 6 per cent from just a few hundred impressions--that's an impressive statistic in an industry more used to measuring responses in fractions of a percent.

However, this click through rate was not different in a statistically significant way from that of its variant, so there's no way of knowing what it was about the message that generated the interest.
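For readers wondering what 'statistically significant' means with numbers this small, here is a back-of-envelope check using a standard two-proportion z-test. The 6 per cent figure and the 'few hundred impressions' come from the post; the variant's numbers are invented, and this is not necessarily the test the ACE tool itself applies.

```python
import math

def two_proportion_z(clicks_a, n_a, clicks_b, n_b):
    """Two-proportion z-test: are two click-through rates really different?"""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))   # two-sided, normal approximation
    return z, p_value

# Illustrative numbers only: a 6% click-through rate on 300 impressions
# versus a 4% rate for the variant.
z, p = two_proportion_z(clicks_a=18, n_a=300, clicks_b=12, n_b=300)
print(round(z, 2), round(p, 3))   # p is well above 0.05 with samples this small
```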

So while Guerini and co's experiments are interesting pilots, they are not extensive enough to provide any insight into the nature of persuasive messaging. That will need testing on a much larger scale.

These will not be easy experiments to perform and they present numerous challenges. For example, the process of changing the wording of an advert is fraught with difficulty. Then there is the question of whether this method is able to test anything other than adverts designed for AdWords. It might have limited utility for testing the messages in magazine adverts or billboard posters, for instance.

But the important point is that these kinds of experiments are possible at all. And it's not hard to imagine interesting scenarios for future research. For example, AdWords could be used as part of an evolutionary algorithm. This process might start with a 'population' of messages that are tested on Adwords. The  best performers are then selected to 'reproduce' with various random changes to form a new generation of messages that are again tested. And so on. 

Who knows what kind of insight these kinds of approaches might produce into the nature of persuasiveness and the human mind. But we appear to have a way to carry out these experiments for the first time.

Ref: arxiv.org/abs/1204.5369: Ecological Evaluation of Persuasive Messages Using Google AdWords



The Worrying Consequences of the Wikipedia Gender Gap

Male editors dramatically outnumber female ones on Wikipedia and that could be dramatically influencing the online encyclopedia's content, according to a new study

There was a time when the internet was dominated by men but in recent years that gap has dissolved. Today, surfers are just as likely to be male as female. And in some areas women dominate: women are more likely to Tweet or participate in social media such as Facebook. Even the traditionally male preserve of online gaming is changing too.   

So what's wrong with Wikipedia? Last year, the New York Times pointed out that women make up just 13 per cent of those who contribute to Wikipedia, despite making up almost half the readers. And a few months ago, a study of these gender differences said they hinted at a culture at Wikipedia that is resistant to female participation.

Today, Pablo Aragon and buddies at the Barcelona Media Foundation in Spain suggest that the problem is seriously influencing Wikipedia's content.

These guys have studied the biographies of the best connected individuals on 15 different Wikipedia language sites. They chose the best connected individuals by downloading all the biographies and then constructing a network in which individuals with Wikipedia biographies are nodes. They then drew a link between two nodes if one person's Wikipedia biography contained a link to the other's.

Finally, they drew up a list of the best connected people, including the top five for each of the 15 language sites. 
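The network construction itself is simple enough to sketch in a few lines of Python. The names and links below are invented, and counting incoming links is just one plausible proxy for "best connected", not necessarily the centrality measure the authors used.

```python
import networkx as nx

# Toy data: which biographies link to which (purely illustrative).
biography_links = {
    "George W. Bush": ["Bill Clinton", "Tony Blair"],
    "Bill Clinton": ["George W. Bush"],
    "Tony Blair": ["George W. Bush", "Bill Clinton"],
    "Marilyn Monroe": ["John F. Kennedy"],
    "John F. Kennedy": ["Marilyn Monroe", "Bill Clinton"],
}

G = nx.DiGraph()
for person, linked in biography_links.items():
    for other in linked:
        G.add_edge(person, other)   # a link in person's biography points to other

# Rank by number of incoming links as a simple "best connected" proxy.
ranking = sorted(G.nodes, key=lambda n: G.in_degree(n), reverse=True)
print(ranking[:5])
```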

There are some curious patterns. In many countries, politicians and leaders are the best connected individuals. For example, on the Chinese language site, Chiang Kai-shek is the best connected individual; on the English-speaking site it's George W Bush; and on the German site, Adolf Hitler tops the list.

In other countries, entertainers head the list; Frank Sinatra in Italy, Michael Jackson in Portugal and Marilyn Monroe in Norway. 

But most curious of all is the lack of women. Out of a possible total of 75, only three are women: Queen Elizabeth II, Marilyn Monroe and Margaret Thatcher.

That's a puzzling disparity and one for which Aragon and co point to an obvious possibility--that the gender gap among editors directly leads to the gender gap among best connected individuals. 

Of course, that's only speculation, but Aragon and co call it "an intriguing subject for future investigation." We'll be watching to see how that pans out.

In the meantime, the Wikimedia Foundation has set itself the goal of increasing the proportion of female contributors to 25 per cent by 2015. That's a step in the right direction, but the gap remains an embarrassing blot on the landscape of collaborative endeavour.

Ref: arxiv.org/abs/1204.3799: Biographical Social Networks On Wikipedia - A Cross-Cultural Study Of Links That Made History



How Dark Matter Interacts with the Human Body

Dark matter must collide with human tissue, and physicists have now calculated how often. The answer? More often than you might expect.

One of the great challenges in cosmology is understanding the nature of the universe's so-called missing mass.

Astronomers have long known that galaxies are held together by gravity, a force that depends on the amount of mass a galaxy contains. Galaxies also spin, generating a force that tends to cause this mass to fly apart. 

The galaxies astronomers can see are not being torn apart as they rotate, presumably because they are generating enough gravity to prevent this.

But that raises a conundrum. Astronomers can see how much visible mass there is in a galaxy and when they add it all up, there isn't anywhere near enough for the required amount of gravity. So something else must be generating this force. 

One idea is that gravity is stronger on the galactic scale and so naturally provides the extra force to glue galaxies together.

Another is that the galaxies must be filled with matter that astronomers can't see, the so-called dark matter. To make the numbers work, this stuff needs to account for some 80 per cent of the mass of galaxies so there ought to be a lot of it around. So where is it? 

Physicists have been racing to find out with detectors of various kinds and more than one group says it has found evidence that dark matter fills our solar system in quantities even more vast than many theorists expect. If they're right, the Earth and everything on it is ploughing its way through a dense sea of dark matter at this very instant. 

Today, Katherine Freese at the University of Michigan in Ann Arbor, and Christopher Savage at Stockholm University in Sweden outline what this means for us humans, since we must also be pushing our way through this dense fog of dark stuff.

We know that whatever dark matter is, it doesn't interact very strongly with ordinary matter, because otherwise we would have spotted its effects already.

So although billions of dark matter particles must pass through us each second, most pass unhindered. Every now and again, however, one will collide with a nucleus in our body. But how often?

Freese and Savage calculate how many times nuclei in an average-sized lump of flesh ought to collide with particles of dark matter. By average-sized, they mean a 70 kg lump of meat made largely of oxygen, hydrogen, carbon and nitrogen. 

They say that dark matter is most likely to collide with oxygen and hydrogen nuclei in the body. And given the most common assumptions about dark matter,  this is likely to happen about 30 times a year. 

But if the latest experimental results  are correct and dark matter interactions are more common than expected, the number of human-dark matter collisions will be much higher. Freese and Savage calculate that there must be some 100,000 collisions per year for each human on the planet. 

That means you've probably been hit a handful of times while reading this post.  
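For a sense of scale, here is the straightforward unit conversion of those two annual figures; only the 30 and 100,000 collisions per year are taken from the paper.

```python
# Convert the quoted annual collision rates into per-second and per-day rates.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

for label, per_year in [("standard assumptions", 30.0),
                        ("latest experimental hints", 1e5)]:
    per_second = per_year / SECONDS_PER_YEAR
    per_day = per_year / 365.25
    print(f"{label}: {per_second:.1e} collisions per second, "
          f"about {per_day:.2f} per day")
```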

Freese and Savage make no estimate of the potential impact on health this background rate of collisions might have. That would depend on the energy and motion of a nucleus after it had been hit and what kind of damage it might wreak on nearby tissue. 

It must surely represent a tiny risk per human but what are the implications for the population as a whole?  That would be an interesting next step for a biological physicist with a little spare calculating time. 

Ref: arxiv.org/abs/1204.1339: Dark Matter Collisions With The Human Body



Friday, May 4, 2012

The Amazing Trajectories of Life-Bearing Meteorites from Earth

The asteroid that killed the dinosaurs must have ejected billions of tons of life-bearing rock into space. Now physicists have calculated what must have happened to it.

About 65 million years ago, the Earth was struck by an asteroid some 10 km in diameter with a mass of well over a trillion tons. We now know the immediate impact of this event—megatsunamis, global wildfires ignited by giant clouds of superheated ash, and, of course, the mass extinction of land-based life on Earth.

But in recent years, astrobiologists have begun to study a less well known consequence: the ejection of billions of tons of life-bearing rocks and water into space. By some estimates, the impact could have ejected as much mass as the asteroid itself. 

The question that fascinates them is what happened to all this stuff.

Today, we get an answer from Tetsuya Hara and buddies at Kyoto Sangyo University in Japan. These guys say a surprisingly large amount of Earth could have ended up not just on the Moon and Mars, as might be expected, but much further afield. 

In particular, they calculate how much would have ended up in other places that seem compatible with life: the Jovian moon Europa, the Saturnian moon Enceladus, and Earth-like exoplanets orbiting other stars.

Their results contain a number of surprises. First, they calculate that almost as much ejecta would have ended up on Europa as on the Moon: around 10^8 individual Earth rocks in some scenarios. That's because the huge gravitational field around Jupiter acts as a sink for rocks, which then get swept up by the Jovian moons as they orbit. 

But perhaps most surprising is the amount that makes its way across interstellar space. Last year, we looked at calculations suggesting that more Earth ejecta must end up in interstellar space than all the other planets combined.

Hara and co go further and estimate how much ought to have made its way to Gliese 581, a red dwarf some 20 light years from here that is thought to have a super-Earth orbiting at the edge of the habitable zone.

They say about a thousand Earth-rocks from this event would have made the trip, taking about a million years to reach their destination.

Of course, nobody knows whether microbes can survive that kind of journey, or even the shorter trips to Europa and Enceladus. But Hara and buddies say that if they can, they ought to flourish on a super-Earth in the habitable zone. 

That raises another interesting question: how quickly could life-bearing ejecta from Earth (or anywhere else) seed the entire galaxy?

Hara and co calculate that it would take some 10^12 years for ejecta to spread through a volume of space the size of the Milky Way. But since our galaxy is only 10^10 years old, a single ejection event could not have done the trick.

However, they say that if life evolved at 25 different sites in the galaxy 10^10 years ago, then the combined ejecta from these places would now fill the Milky Way.

There's an interesting corollary to this. If this scenario has indeed taken place, Hara and co say: "then the probability is almost one that our solar system is visited by the microorganisms that originated in extra solar system."

Entertaining stuff!

Ref: arxiv.org/abs/1204.1719: Transfer of Life-Bearing Meteorites from Earth to Other Planets





Network Science Reveals The Cities That Lead The World's Music Listening Habits

If you live in North America, you'll soon be listening to the music now playing in Atlanta whereas in Europe, Oslo leads the scene, according to a new analysis of global listening habits

The evidence that ideas and fashions spread through society like viruses or like wildfire is compelling. Numerous studies have examined the networks in which this spread takes place and with increasingly large data sets to work with, researchers have become increasingly confident in their network-centric view of the world. These tools are teasing apart the large scale behaviour of humanity in ever increasing resolution.

In the fashion world, London, New York and Paris are generally considered the leaders that everyone else follows. So an interesting question is whether network science can tell us which cities play a similar role for music.

That's exactly the question that Conrad Lee and Pádraig Cunningham at the Clique Research Cluster in Ireland set out to answer by analysing data from Last.fm, a social website for music.  

Last.fm is interesting because it publishes lists of the most listened to artists divided geographically. So if you live in Seattle, for example, you can see what people in your area are listening to.

So Lee and Cunningham have studied the way these charts vary in time and looked to see whether some cities consistently lead others in terms of listening habits.

These guys studied the Last.fm data for 200 cities around the world dating back to 2003. This is compiled from some 60 billion pieces of data that Last.fm collects from its users.

This is a noisy data set. Some cities have so few listeners that their data is hard to distinguish from noise.  So Lee and Cunningham have to apply some fairly robust cleaning techniques to remove this noise. 

They then use recently developed statistical techniques to decide which cities lead others and construct a network in which a link pointing from one city to another indicates that one follows the other.
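The paper's statistical machinery isn't described in detail here, but the flavour of a lead/follow test can be sketched with a simple lagged correlation: shift one city's chart history in time and see which offset lines it up best with another city's. The toy data below, and the choice of Berlin as a follower, are invented and far cruder than what Lee and Cunningham actually do.

```python
import numpy as np

def leads(series_a, series_b, max_lag=4):
    """Crude lead/follow test: find the time shift at which city A's chart
    history lines up best with city B's."""
    best_lag, best_corr = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        if lag > 0:
            a, b = series_a[:-lag], series_b[lag:]
        elif lag < 0:
            a, b = series_a[-lag:], series_b[:lag]
        else:
            a, b = series_a, series_b
        corr = np.corrcoef(a, b)[0, 1]
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return best_lag   # positive: B's chart echoes A's `lag` weeks later, i.e. A leads

# Toy weekly play counts for one artist: Berlin's chart copies Oslo's two weeks later.
oslo = np.array([1, 2, 5, 9, 7, 4, 2, 1, 1, 1], dtype=float)
berlin = np.roll(oslo, 2)
print(leads(oslo, berlin))   # -> 2, i.e. Oslo leads Berlin by two weeks
```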

The results are interesting. They show that certain cities appear to lead others for various genres of music. For example, Montreal seems to lead North America in indie music listening habits and the leader for hip hop is Atlanta. In Europe, Paris leads for indie music whereas Oslo leads for music as a whole. 

There are other interesting patterns too. For example, cities that have similar listening habits are not linked in this network: Portland and San Francisco; Cracow and Warsaw; and Birmingham and Manchester, for instance.

Lee and Cunningham suggest that when two cities' listening habits are synchronised there is little to be gained from following the listening habits in the other city so residents look elsewhere.  

There's another interesting pattern. It's easy to imagine that the biggest cities ought to be those furthest ahead of the curve because they have the biggest populations from which new and interesting bands can emerge. That doesn't seem to be the case in this data--big cities such as New York, LA and London do not lead. "We find only weak support for this hypothesis," say Lee and Cunningham.

That may cause some alarm bells to ring. An interesting body of work has recently suggested that big cities benefit disproportionately from their size, since qualities such as efficiency, productivity and innovation all scale superlinearly with population.  

An important question for Lee and Cunningham is why that doesn't hold for music too.    

There is also a question over whether the trends that Cunningham and Lee have found really reflect their hypothesis that some cities' listening habits lead others. Humans are notoriously good at finding patterns in random data. 

The ultimate test, of course, is whether their discovery has any predictive value. For example, could they predict how listening habits will change in the near future? "We have not yet demonstrated that our models have this predictive power, although we plan to attempt this validation in future work," they say.

So we must wait and see. If they manage any kind of prediction based on this work, it'll be an impressive feat.

In the meantime, if you want to know what you're likely to be listening to in the near future, cough, tune in to the music now playing in Atlanta and Oslo. 

Ref: arxiv.org/abs/1204.2677: The Geographic Flow of Music



Molecular "Wankel Engine" Driven By Photons

Chemists say exotic clusters of boron atoms should behave like rotary Wankel engines when bathed in circularly polarised light

One of the great discoveries of biology is that the engines of life are molecular motors--tiny machines that create, transport and assemble all living things. 

That's triggered more than a little green-eyed jealousy from  physicists and engineers who would like to have molecular machines at their own beck and call. So there's no small interest in developing molecular devices that can be easily harnessed to do the job.

Today, Jin Zhang at the University of California Los Angeles and a few pals say they've identified a machine that fits the bill.

A couple of years ago, chemists discovered that clusters of 13 or 19 boron atoms form concentric rings that can rotate independently, rather like the rotor in a rotary Wankel engine. Because of this, they quickly picked up the moniker "molecular Wankel engines". The only question was how to power them. 

Now Zhang and buddies have calculated that this should be remarkably easy--just zap them with circularly polarised infrared light. That sets the inner ring counter-rotating relative to the outer one, like a Wankel engine.

Of course, nanotechnologists have identified many molecular motors and even a few rotary versions (ATP synthase springs to mind). 

What makes this one special is that the polarised light doesn't excite the molecule out of its electronic ground state, leaving it free to be chemically active.

By contrast, other forms of molecular power such as chemical or electric current can generate heat that has a critical effect on the system.

For the moment, the photon-powered molecular Wankel engine is merely an idea, the result of some detailed chemical modelling.

Zhang and co leave it to others, who are  happy to get their hands dirty, to actually get one of these molecules turning. 

If they've got their sums right, that should be sooner rather than later.

Ref: arxiv.org/abs/1204.2505: Photo-driven Molecular Wankel Engine, B13+



Thursday, May 3, 2012

Mathematics of Eternity Prove The Universe Must Have Had A Beginning -- Part II

Heavyweight cosmologists are battling it out over whether the universe had a beginning. And despite appearances, they may actually agree

Earlier this week, Audrey Mithani and Alexander Vilenkin at Tufts University in Massachusetts argued that the mathematical properties of eternity prove that the universe must have had a beginning. 

Today, another heavyweight from the world of cosmology weighs in with an additional argument. Leonard Susskind at Stanford University in California, says that even if the universe had a beginning, it can be thought of as eternal for all practical purposes. 

Susskind is good enough to give a semi-popular version of his argument:

"To make the point simply, imagine Hilbertville, a one-dimensional semi-infinite city, whose border is at x = 0: The population is infinite and uniformly fills the positive axis x > 0: Each citizen has an identical telescope with a finite power. Each wants to know if there is a boundary to the city. It is obvious that only a finite number of citizens can see the boundary at x = 0. For the infinite majority the city might just as well extend to the infinite negative axis. 

Thus, assuming he is typical, a citizen who has not yet studied the situation should bet with great confidence that he cannot detect a boundary. This conclusion is independent of the power of the telescopes as long as it is finite."

He goes on to discuss various thermodynamic arguments that suggest the universe cannot have existed for ever. The bottom line is that the inevitable increase of entropy over time ensures that a past eternal universe ought to have long since lost any semblance of order. Since we can see order all around us, the universe cannot be eternal in the past.

He finishes with this: "We may conclude that there is a beginning, but in any kind of inflating cosmology the odds strongly (infinitely) favor the beginning to be so far in the past that it is effectively at minus infinity."

Susskind is a big hitter: a founder of string theory and one of the most influential thinkers in this area. However, it's hard to agree with his statement that this argument represents the opposing view to Mithani and Vilenkin's.  

His argument is equivalent to saying that the cosmos must have had a beginning even if it looks eternal in the past, which is rather similar to Mithani and Vilenkin's view. The distinction that Susskind does make is that his focus is purely on the practical implications of this--although what he means by 'practical' isn't clear.

That the universe did or did not have a beginning is profoundly important from a philosophical point of view, so much so that a definitive answer may well have practical implications for humanity. 

But perhaps the real significance of this debate lies elsewhere. The need to disagree in the face of imminent agreement probably tells us more about the nature of cosmologists than about the cosmos itself.

Ref: arxiv.org/abs/1204.5385: Was There a Beginning?



Mathematics of Eternity Prove The Universe Must Have Had A Beginning

Cosmologists use the mathematical properties of eternity to show that although the universe may last forever, it must have had a beginning

The Big Bang has become part of popular culture since the phrase was coined by the maverick physicist Fred Hoyle in the 1940s. That's hardly surprising for an event that represents the ultimate birth of everything.

However, Hoyle much preferred a different model of the cosmos: a steady state universe with no beginning or end, that stretches infinitely into the past and the future. That idea never really took off.

In recent years, however, cosmologists have begun to study a number of new ideas that have similar properties. Curiously, these ideas are not necessarily at odds with the notion of a Big Bang.

For instance, one idea is that the universe is cyclical with big bangs followed by big crunches followed by big bangs in an infinite cycle. 

Another is the notion of eternal inflation in which different parts of the universe expand and contract at different rates. These regions can be thought of as different universes in a giant multiverse. 

So although we seem to live in an inflating cosmos,  other universes may be very different. And while our universe may look as if it has a beginning, the multiverse need not have a beginning.

Then there is the idea of an emergent universe which exists as a kind of seed for eternity and then suddenly expands. 

So these modern cosmologies suggest that the observational evidence of an expanding universe is consistent with a cosmos with no beginning or end. That may be set to change.

Today, Audrey Mithani and Alexander Vilenkin at Tufts University in Massachusetts say that these models are mathematically incompatible with an eternal past. Indeed, their analysis suggests that these three models of the universe must have had a beginning too.

Their argument focuses on the mathematical properties of eternity--a universe with no beginning and no end. Such a universe must contain trajectories that stretch infinitely into the past. 

However, Mithani and Vilenkin point to a proof dating from 2003 that these kinds of past trajectories cannot be infinite if they are part of a universe that expands in a specific way. 

They go on to show that cyclical universes and universes of eternal inflation both expand in this way. So they cannot be eternal in the past and must therefore have had a beginning. "Although inflation may be eternal in the future, it cannot be extended indefinitely to the past," they say.

They treat the emergent model of the universe differently, showing that although it may seem stable from a classical point of view, it is unstable from a quantum mechanical point of view. "A simple emergent universe model...cannot escape quantum collapse," they say.

The conclusion is inescapable. "None of these scenarios can actually be past-eternal," say Mithani and Vilenkin. 

Since the observational evidence is that our universe is expanding, it must also have been born in the past. A profound conclusion (albeit the same one that led to the idea of the big bang in the first place).  

Ref: arxiv.org/abs/1204.4658: Did The Universe Have A Beginning?



Computer Scientists Build Computer Using Swarms of Crabs

One of the hot topics in computer science is the study of unconventional forms of computation.  

This is motivated by two lines of thought. The first is theoretical--ordinary computers are hugely energy inefficient--some eight orders of magnitude worse than is theoretically possible. The second is practical--Nature has evolved many much more efficient forms of computation for specific tasks such as pattern recognition.

Clearly, we ought to be able to do much better--hence the interest in different ways of doing things. 

Various groups have tried computing with exotic substances such as chemicals like hot ice and even with a single celled organism called a slime mould. 

Today, we look at one of the more curious variations on this theme--a computer that exploits the swarming behaviour of soldier crabs. 

First, a little background on the theory behind this idea. Back in the early 80s, a couple of computer scientists--Ed Fredkin and Tommaso Toffoli--studied how it might be possible to build a computer out of billiard balls. 

The idea is that a channel would carry information encoded in the form of the presence or absence of billiard balls. This information is processed through gates in which the billiard balls either collide and emerge in a direction that is the result of the ballistics of the collision, or don't collide and emerge with their velocities unchanged. 
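The logic of the Fredkin-Toffoli scheme is easy to tabulate. Their "interaction gate" takes two input channels and its outputs depend on whether a collision happened; the sketch below is just that truth table (an OR gate is even simpler: merge two channels into one, and a ball emerges if either input carried one).

```python
def interaction_gate(a: bool, b: bool):
    """Fredkin-Toffoli billiard-ball 'interaction gate': two balls collide and
    deflect into the middle outputs; a lone ball sails straight through."""
    return {
        "A and not B": a and not b,
        "A and B (upper)": a and b,
        "A and B (lower)": a and b,
        "B and not A": b and not a,
    }

for a in (False, True):
    for b in (False, True):
        outputs = {name: int(v) for name, v in interaction_gate(a, b).items()}
        print(int(a), int(b), outputs)
```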

Now Yukio-Pegio Gunji from Kobe University in Japan and a couple of pals have built what is essentially a billiard ball computer using soldier crabs. "We demonstrate that swarms of soldier crabs can implement logical gates when placed in a geometrically constrained environment," they say.

These creatures seem to be uniquely suited for this form of information processing. They live under the sand in tidal lagoons and emerge at low tide in swarms of hundreds of thousands.

What's interesting about the crabs is that they appear to demonstrate two distinct forms of behaviour. When in the middle of a swarm, they simply follow whoever is nearby. But when they find themselves on the edge of a swarm, they change. 

Suddenly, they become aggressive leaders and charge off into the watery distance with their swarm in tow, until by some accident of turbulence they find themselves inside the swarm again.

This turns out to be hugely robust behaviour that can be easily controlled. When placed next to a wall, a leader will always follow the wall in a direction that can be controlled by shadowing the swarm from above to mimic the presence of the predatory birds that eat the crabs. 

Under these conditions, a swarm of crabs will follow a wall like a rolling billiard ball. 

So what happens when two "crab balls" collide? According to Gunji and co's experiments, the balls merge and continue in a direction that is the sum of their velocities. 

What's more, the behaviour is remarkably robust to noise, largely because the crabs' individual behaviours generate noise that is indistinguishable from external noise. These creatures have evolved to cope with noise.

That immediately suggested a potential application in computing, say Gunji and co. If the balls of crabs behave like billiard balls, it should be straightforward to build a pattern of channels that act like a logic gate. 

And that's exactly what Gunji and co have done. These guys first simulated the behaviour of a soldier crab computer in special patterns of channels. Then they built one in their lab to test the idea with real crabs.

To be fair, the results were mixed. While Gunji and co found they could build a decent OR gate using soldier crabs, their AND-gate was much less reliable. 

However, it's early days  and they say it may be possible to produce better results by making conditions inside the computer more crab-friendly. (No crabs were harmed in the making of their computer, say Gunji and co.) 

So there you have it--a computer in which the information carriers are swarming balls of soldier crabs. 

Not a sentence you expect to read every day. But it surely cannot be long before we all have one of these on our desktops. 

Ref: http://arxiv.org/abs/1204.1749: Robust Soldier Crab Ball Gate



10 GHz Optical Transistor Built Out Of Silicon

In a significant step forward for all-optical computing, physicists build a silicon transistor that works with pure light

Electrons are pretty good at processing information but not so good at carrying it over long distances. Photons, on the other hand, do a grand job of shuttling data round the planet but are not so handy when it comes to processing it.

As a result, transistors are electronic and communication cables are optical. And the world is burdened with a significant amount of power hungry infrastructure for converting electronic information into the optical variety and vice versa.

So it's no surprise that there is significant interest in developing an optical transistor that could make the electronic variety obsolete. 

There's a significant problem, however. While various groups have built optical switches, optical transistors must also have a number of other properties so that they can be connected in a way that can process information. 

For example, their output must be capable of acting as the input for another transistor--not easy if the output is a different frequency from the input, for instance. What's more, the output must be capable of driving the input for at least two other transistors so that logic signals can propagate, a property known as fanout. This requires significant gain. On top of this, each transistor must preserve the quality of the logic signal so that errors do not propagate. And so on. 

The trouble is that nobody has succeeded in making optical transistors that can do all this and can also be made out of silicon. 

Today, Leo Varghese at Purdue University in Indiana and a few pals say they've built a device that takes a significant step in this direction. 

Their optical transistor consists of a microring resonator next to an optical line. In ordinary circumstances, light from the supply enters the optical line, passes along it and exits at the output. But at a specific resonant frequency, the light interacts with the microring resonator, vastly reducing the output. In this state, the output is essentially off even though the supply is on. 

The trick these guys have perfected is to use another optical line, called the gate, to heat the microring, thereby changing its size, its resonant frequency and its ability to interact with the output. 

That allows the gate to turn the output on and off.   

There's an additional clever twist. The microring's interaction with the gate is stronger than with the supply-output line. That's significant because it means a small gate signal can control a much bigger output signal.

Varghese and co say the ratio of the output signal to the gate signal is almost 6 dB. That's enough to power at least two other transistors, which is exactly the fanout property that optical transistors require. 
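A quick sanity check on that figure: 6 dB corresponds to roughly a fourfold power ratio, comfortably above the factor of two needed to drive a pair of downstream gates. This back-of-envelope version ignores coupling and propagation losses.

```python
gain_db = 6.0                       # figure quoted in the post
gain_linear = 10 ** (gain_db / 10)  # ~3.98: the output carries about 4x the gate power
# Ignoring losses, that is comfortably more than the factor of 2 needed
# for one output to drive the gates of two downstream transistors.
print(round(gain_linear, 2))
```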

These guys have even built a device out of silicon with a bandwidth capable of data rates of up to 10 GHz.

That's an impressive result, particularly the silicon compatibility. 

Nevertheless, there are significant hurdles ahead before an all-optical computer made with these devices can hope to compete against its electronic cousins. 

The biggest problem is power consumption. Much of the power consumption in electronic transistors comes from the need to charge the lines connecting them to the operating voltage. 

In theory, optical transistors could be even more efficient--their lines don't need charging at all. But in practice, lasers burn energy as if it were twenty dollar bills. For that reason, it's not at all clear that optical transistors can match the efficiency of electronic chips.  

And with the computer industry now responsible for almost 2 per cent of global carbon dioxide emissions, almost as much as aviation, power consumption may turn out to be the overarching factor for the future direction of information processing.

Ref: arxiv.org/abs/1204.5515: A Silicon Optical Transistor



How to Perfect Real-Time Crowdsourcing

The new techniques behind instant crowdsourcing make human intelligence available on demand for the first time.

One of the great goals of computer science is to embed human-like intelligence in common applications like image processing, robotic control and so on. Until recently the focus has been to develop an artificial intelligence that can do these jobs. 

But there's another option: using real humans via some kind of crowdsourcing process. One well known example involves the CAPTCHA test which can identify humans from machines by asking them to identify words so badly distorted that automated systems cannot read them. 

However, spammers are known to farm out these tasks to humans via crowdsourcing systems that pay in the region of 0.5 cents per 1000 words solved. 

Might not a similar process work for legitimate tasks such as building human intelligence into real world applications?

The problem, of course, is latency. Nobody wants to sit around for 20 minutes while a worker with the skills to steer your robotic waiter is crowdsourced from the other side of the world.

So how quickly can a crowd be put into action? That's the question tackled today by Michael Bernstein at the Massachusetts Institute of Technology in Cambridge and a few pals. 

In the past, these guys have found ways to bring a crowd to bear in about two seconds. That's quick. But the reaction time is limited by how quickly a worker responds to an alert.

Now these guys say they've found a way to reduce the reaction time to 500 milliseconds--that's effectively realtime. A system with a half second latency could turn crowdsourcing into a very different kind of resource. 

The idea that Bernstein and co have come up with is straightforward. These guys simply "precruit" a crowd and keep them on standby until a task becomes available. Effectively, they're paying workers a retainer so that they are available immediately when needed.

The difficulty is in the messy details of precruitment. How many workers do you need to keep on retainer, how do you cope with dropouts and how do you keep people interested so that they are available to work at a fraction of a second's notice?

Bernstein and co have used an idea called queuing theory to work out how to optimise the process of precruitment according to how often the task comes up, how long it takes and so on. 
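The paper's queuing-theory treatment is more sophisticated, but the basic retainer calculation can be sketched with a simple independence assumption: if each precruited worker responds within the latency budget with some probability, how many must be kept on retainer to be almost sure someone is there? Only the 500 ms budget comes from the post; the 60 per cent response rate is invented.

```python
import math

def pool_size(p_respond, target=0.99):
    """Smallest retainer pool such that at least one worker responds in time
    with probability >= target, assuming workers respond independently."""
    return math.ceil(math.log(1 - target) / math.log(1 - p_respond))

# If each precruited worker reacts within the 500 ms budget 60% of the time,
# a pool of 6 gives a ~99% chance that someone is on the task immediately.
print(pool_size(p_respond=0.6))   # -> 6
```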

They've also developed an interesting psychological trick to keep workers ready for action. When workers are precruited, a screen opens up on their computer which downloads the task. The download occurs extremely quickly but if no task is to hand, the screen shows a "loading" bar. 

It turns out that the loading bar keeps workers focused on the forthcoming task for up to ten seconds, at which point their attention begins to wander. At that point, if no task materialises, the worker can be paid off.

Bernstein and co have even tested how well this works using a whack-a-mole type task which appears on workers' screens after a randomly chosen period of between 0 and 20 seconds. They recruited 50 workers to carry out 373 whacks and found that the median length of time between the mole's appearance and the worker moving the mouse toward the mole to click on it was 0.50 seconds.

"Our model suggests...that crowds could be recruited eff ectively instantaneously," they say.

That could change the nature of crowdsourcing. Bernstein and co suggest that real-time crowdsourcing could be used to point cameras, control robots and produce instant opinion polls.

But first crowdsourcers will have to change the way they do business. Bernstein and co suggest that they could build a retainer into their system design so that they have a pool of ready-to-go workers available at any instant. 

Workers could even be assessed on how good they are at this kind of task, allowing them to build up a reputation for the work. Bernstein and co suggest two new reputation statistics--the percentage of precruitment requests a worker responds to and how quickly they respond on these occasions.

Shouldn't be too hard to set up. An interesting new business model for the likes of Mechanical Turk and others or perhaps for an enterprising new start up.

Ref: arxiv.org/abs/1204.2995: Analytic Methods for Optimising Real time Crowdsourcing 



Did Einstein's First Wife Secretly Coauthor His 1905 Relativity Paper?

Various historians have concluded that Einstein's first wife, Mileva, may have secretly contributed to his work. Now a new analysis seeks to settle the matter.

In the late 1980s, the American physicist Evan Harris Walker published an article in Physics Today suggesting that Einstein's first wife, Mileva Maric, was an unacknowledged coauthor of his 1905 paper on special relativity.

The idea generated considerable controversy at the time, although most physicists and historians of science have rejected it. 

Today, Galina Weinstein, a visiting scholar at The Centre for Einstein Studies at Boston University, hopes to settle the matter with a new analysis.

The story begins after Einstein's death in 1955, when the Soviet physicist Abram Fedorovich Joffe described some correspondence he had with Einstein early in their careers in an article published in Russian. 

Joffe had asked Einstein for preprints of some of his papers and wrote: "The author of these articles—an unknown person at that time, was a bureaucrat at the Patent Office in Bern, Einstein-Marity (Marity the maiden name of his wife, which by Swiss custom is added to the husband's family name)." (Marity is a Hungarian variant of Maric.)

The conspiracy theories date from this reference to Einstein as Einstein-Marity, says Weinstein. The result was an increasingly complex tangle of deliberate or accidental misunderstandings. 

The problem seems to have begun with a popular Russian science writer called Daniil Semenovich Danin, who interpreted Joffe's account to mean that Einstein and Maric collaborated on the work. This later transformed into the notion that Maric had originally been a coauthor on the 1905 paper but that her name was removed from the final published version. 

This is a clear misinterpretation, suggests Weinstein.   

Walker reignited this controversy in his Physics Today article. He suggests that Einstein may have stolen his wife's ideas. 

There's another interesting line for the conspiracy theorists. Historians have translated the letters between Einstein and Maric into English, allowing a detailed analysis of their relationship. However, one of these letters includes the phrase: "bringing our work on relative motion to a successful conclusion!" This seems to back up the idea that the pair must have collaborated.

However, Weinstein has analysed the letters in detail and says that two lines of evidence suggest that this was unlikely. First, Einstein's letters are full of his ideas about physics while Maric's contain none, suggesting that he was using her as a sounding board rather than a collaborator.

Second, Maric was not a talented physicist or mathematician. She failed her final examinations and was never granted a diploma.

Weinstein argues that Maric could therefore not have made a significant contribution and quotes another historian on the topic saying that while there is no evidence that Maric was gifted mathematically, there is some evidence that she was not.

There is one fly in the ointment. Maric and Einstein divorced in 1919, but as part of the divorce settlement, Einstein agreed to pay his ex-wife every krona of any future Nobel Prize he might be awarded.

Weinstein suggests that everybody knew Einstein was in line to win the prize and that in the postwar environment in Germany, this was a natural request from a wife who did not want a divorce and was suffering from depression.

Walker, on the other hand, says: "I find it difficult to resist the conclusion that Mileva, justly or unjustly, saw this as her reward for the part she had played in developing the theory of relativity."

Without more evidence, it's hard to know one way or the other. But there's surely enough uncertainty about what actually happened to keep the flames of conspiracy burning for a little while longer.

Ref: arxiv.org/abs/1204.3551: Did Mileva Maric Assist Einstein In Writing His 1905 Path Breaking Papers?



Wednesday, May 2, 2012

Ancient Egyptians Recorded Algol's Variable Magnitude 3000 Years Before Western Astronomers

A statistical analysis of a 3000-year old calendar reveals that astronomers in ancient Egypt must have known the period of the eclipsing binary Algol

The Ancient Egyptians were meticulous astronomers and recorded the passage of the heavens in extraordinary detail. The goal was to mark the passage of time and  to understand the will of the Gods who kept the celestial machinery at work. 

Egyptian astronomers used what they learnt to make predictions about the future. They drew these up in the form of calendars showing lucky and unlucky days. 

The predictions were amazingly precise. Each day was divided into three or more segments, each of which was given a rating lying somewhere in the range from very favourable to highly adverse.

One of the best preserved of these papyrus documents is called the Cairo Calendar. Although the papyrus is badly damaged in places, scholars have been able to extract a complete list of ratings for days throughout an entire year somewhere around 1200 BC.

An interesting question is how the scribes arrived at their ratings. So various groups have studied the patterns that crop up in the predictions. Today, Lauri Jetsu and buddies at the University of Helsinki in Finland reveal the results of their detailed statistical analysis of the Cairo Calendar. Their conclusion is extraordinary.

These guys arranged the data as a time series and crunched it with various statistical tools designed to reveal cycles within it. They found two significant periodicities. The first is 29.6 days--that's almost exactly the length of a lunar month, which modern astronomers put at 29.53059 days.  

The second cycle is 2.85 days and this is much harder to explain. However, Jetsu and co make a convincing argument that this corresponds to the variability of Algol, a star visible to the naked eye in the constellation of Perseus.
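The general approach--searching a daily rating series for periodicities--can be sketched with a standard Lomb-Scargle periodogram, though this is not necessarily the tool Jetsu and co used. The synthetic "calendar" below simply bakes in the two reported cycles plus noise to show how such a search recovers them; it is not the Cairo Calendar data.

```python
import numpy as np
from scipy.signal import lombscargle

# Synthetic stand-in: a daily "favourability" score over a year, built from
# a 29.6-day cycle, a 2.85-day cycle and some noise (illustrative only).
rng = np.random.default_rng(0)
t = np.arange(360.0)                                   # days
score = (np.sin(2 * np.pi * t / 29.6)
         + 0.8 * np.sin(2 * np.pi * t / 2.85)
         + 0.3 * rng.standard_normal(t.size))

periods = np.linspace(2.0, 40.0, 4000)                 # trial periods in days
power = lombscargle(t, score - score.mean(), 2 * np.pi / periods)

best_overall = periods[np.argmax(power)]               # dominant cycle
short = periods < 5.0                                  # zoom in on short periods
best_short = periods[short][np.argmax(power[short])]
print(round(best_overall, 2), round(best_short, 2))    # -> roughly 29.6 and 2.85
```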

Algol is interesting because every 2.867 days it dims visibly for a few hours and then brightens up again. This was first discovered by John Goodricke in 1783, who used naked eye observations to measure the variability.

Astronomers later explained this variability by assuming that Algol is a binary star system. It dims when the dimmer star passes in front of the brighter one. 

Nothing else in the visible night sky comes close to having a similar period so it's reasonable to think that the 2.85 and the 2.867 day periods must refer to the same object. "Everything indicated that the two best periods in [the data] were the real periods of the Moon and Algol," say Jetsu and co.

And yet that analysis leaves a nasty taste in the mouth. The ancients were extremely careful observers. If Goodricke measured a period of 2.867 days (68.75 hours), the Egyptians ought to have been able to as well. 

This is where the astronomy becomes a little more complex. The period of binary star systems ought to be easy to predict. But in recent years, astronomers have discovered that Algol's period is changing in ways that they do not yet fully understand. 

One reason for this is that Algol turns out to be a triple system with a third star in a much larger orbit. And of course, the behaviour of triple systems is more complex. It is also hard to model based on real data since observations of Algol's variability go back only 300 years.

Or so everyone had thought. Jetsu and co now think that the difference between the ancient and modern measurements is no accident and that the period was indeed shorter in those days. So the Egyptian data can be used as an additional data point to better constrain and understand Algol's behaviour.

So not only did the ancients discover this variable star 3000 years before western astronomers, their data is good enough to help understand the behaviour of this complex system. A truly remarkable conclusion. 

Ref: arxiv.org/abs/1204.6206: Did The Ancient Egyptians Record The Period Of The Eclipsing Binary Algol – The Raging One?  


View the original article here

Psychologists Use Social Networking Behavior to Predict Personality Type

The ability to automatically determine personality type could change the way social networks target services to users

One of the foundations of modern psychology is the idea that human personality can be described in terms of five broad traits. These are:

1. Agreeableness--being helpful, cooperative and sympathetic towards others
2. Conscientiousness--being disciplined, organized and achievement-oriented 
3. Extraversion--having a higher degree of sociability, assertiveness and talkativeness 
4. Neuroticism--a tendency towards anxiety, emotional instability and poor impulse control 
5. Openness--having a strong intellectual curiosity and a preference for novelty and variety

Psychologists have spent many years developing tests that can classify people according to these criteria. 

Today, Shuotian Bai at the Graduate University of Chinese Academy of Sciences in Beijing and a couple of buddies say they have developed an online version of the test that can determine an individual's personality traits from their behavior on a social network such as Facebook or Renren, an increasingly popular Chinese competitor.

Their method is relatively simple. These guys asked just over 200 Chinese students with Renren accounts to complete a standard personality test online--the Big Five Inventory, which was developed at the University of California, Berkeley, during the 1990s.

At the same time, these guys analyzed the Renren pages of each student, recording their age and sex along with various aspects of their online behavior, such as the frequency of their blog posts and the emotional content of those posts (whether angry, funny, surprised and so on). 

Finally, they used various number-crunching techniques to reveal correlations between the results of the personality tests and the students' online behavior. 

It turns out, they say, that various online behaviors are a good indicator of personality type. For example, conscientious people are more likely to post requests for help, such as asking for a location or an e-mail address; a sign of extroversion is an increased use of emoticons; the frequency of status updates correlates with openness; and a measure of neuroticism is the rate at which blog posts attract angry comments.

Based on these correlations, these guys say they can automatically predict personality type simply by looking at an individual's social network statistics. 
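
To see how such a predictor might work in principle, here is a minimal sketch in Python. It is not the authors' model: the features, the numbers and the use of a simple least-squares fit are illustrative assumptions only.

    import numpy as np

    # Columns: status updates per week, emoticon rate, angry-comment rate
    # (hypothetical features with made-up values, purely for illustration)
    features = np.array([
        [12.0, 0.30, 0.05],
        [ 3.0, 0.05, 0.20],
        [ 7.0, 0.15, 0.10],
        [20.0, 0.45, 0.02],
    ])
    openness_scores = np.array([4.1, 2.8, 3.5, 4.6])   # made-up questionnaire scores

    # Least-squares fit with an intercept term
    X = np.hstack([features, np.ones((features.shape[0], 1))])
    weights, *_ = np.linalg.lstsq(X, openness_scores, rcond=None)

    # Predict the trait score for a new, unseen user from their statistics alone
    new_user = np.array([9.0, 0.25, 0.08, 1.0])        # same features plus intercept
    print(round(float(new_user @ weights), 2))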

That could be extremely useful for social networks. Shuotian and company point out that a network might use this to recommend specific services. They give the rather naive example of an outgoing user, who might prefer international news and enjoy making friends with others. 

Other scenarios are at least as likely. For example, such an approach might help to improve recommender systems in general. Perhaps people who share similar personality characteristics are more likely to share similar tastes in books, films or each other. 

There is also the obvious prospect that social networks would use this data for commercial gain, for example to target specific adverts at users. And finally there is the worry that such a technique could be used to identify vulnerable individuals who might be most susceptible to nefarious persuasion.

Ethics aside, there are also certain question marks over the result. One important caveat is that people's responses to psychology tests taken online may differ from those taken in other settings. That could clearly introduce some bias. Then there are the more general questions of how online and offline behaviors differ and how these tests vary across cultures. These are things that Shuotian and co want to study in the future.

In the meantime, it is becoming increasingly clear that the data associated with our online behavior is a rich and valuable source of information about our innermost natures. 

Ref: arxiv.org/abs/1204.4809: Big-Five Personality Prediction Based on User Behaviors at Social Network Sites


View the original article here

Quantum Rainbow Photon Gun Unveiled

A photon gun capable of reliably producing single photons of different colours could become an important building block of a quantum internet

We've heard much about the possibility of a quantum internet which uses single photons to encode and send information protected by the emerging technology of quantum cryptography. 

The main advantage of such a system is perfect security, the kind of thing that governments, the military, banks and assorted other groups would pay handsomely to achieve. 

One of the enabling technologies for a quantum internet is a reliable photon gun that can fire single photons on demand. That's not easy. 

One of the significant weaknesses of current quantum cryptographic systems is the finite probability that today's lasers emit photons in bunches rather than one at a time. When this happens, an eavesdropper can use these extra photons to extract information about the data being transmitted.

So there's no shortage of interest in developing photon guns that emit single photons and indeed various groups have made significant progress towards this.

Against this background, Michael Fortsch at the Max Planck Institute for the Science of Light in Erlangen, Germany, and a few pals today say they've made a significant breakthrough. These guys reckon they've built a photon emitter with a range of properties that make it far more flexible, efficient and useful than any before--a kind of photon supergun.

The gun is a disc-shaped crystal of lithium niobate zapped with 532nm light from a frequency-doubled neodymium-doped yttrium aluminium garnet (Nd:YAG) laser. Lithium niobate is a nonlinear material that causes single photons to spontaneously convert into photon pairs. 

So the 532nm photons ricochet around inside the disc and eventually emerge either unchanged or as a pair of entangled photons with about twice the wavelength (about 1060nm). The two photons in this pair don't have quite the same wavelength, so all three types of photon can easily be separated. 
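
The reason the two photons in a pair have slightly different wavelengths is energy conservation: the inverse wavelengths of the pair must add up to the inverse wavelength of the pump photon. Here is a tiny Python sketch of that rule; the 1060nm signal value is an illustrative assumption.

    # Energy conservation for pair production: 1/L_pump = 1/L_signal + 1/L_idler

    def idler_wavelength(pump_nm, signal_nm):
        """Return the idler wavelength (in nm) that conserves energy."""
        return 1.0 / (1.0 / pump_nm - 1.0 / signal_nm)

    print(idler_wavelength(532.0, 1060.0))   # ~1068nm: close to, but not equal to, 1060nm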

The 532nm photons are ignored. Of the pair, one photon is used to transmit information and the other is picked up by a detector to confirm that its partner is ready for transmission.

So what's so special about this photon gun? First and most important is that the gun emits photons in pairs. That's significant because the detection of one photon is an unambiguous sign that another has also been emitted. It's like a time stamp that says a photon is on its way.

This so-called photon herald means that there can be no confusion over whether the gun is secretly leaking information to a potential eavesdropper. 
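
Here is a minimal sketch of the heralding idea in Python. This is not the group's actual detection electronics; the coincidence window and the timestamps are made-up numbers chosen purely to illustrate the logic.

    # Made-up detector timestamps (in nanoseconds) and window, for illustration only
    COINCIDENCE_WINDOW_NS = 2.0

    def heralded_signals(herald_clicks_ns, signal_clicks_ns):
        """Keep only the signal clicks that arrive within the coincidence
        window of some herald click--i.e. photons that were 'announced'."""
        accepted = []
        for signal in sorted(signal_clicks_ns):
            if any(abs(signal - herald) <= COINCIDENCE_WINDOW_NS
                   for herald in herald_clicks_ns):
                accepted.append(signal)
        return accepted

    print(heralded_signals([10.0, 50.0, 90.0], [10.5, 30.0, 91.2, 200.0]))
    # -> [10.5, 91.2]: only the photons announced by a herald click survive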

This gun is also fast, emitting some 10 million pairs of photons per second per mW, and it is two orders of magnitude more efficient than other photon guns.

These guys can also change the wavelength of the photons the gun emits by heating or cooling the crystal and thereby changing its size. This rainbow of colours stretches over 100nm (OK, not quite a rainbow but you get the picture).

That's important because it means the gun can be tuned to various different atomic transitions allowing physicists and engineers to play with a variety of different atoms for quantum information storage.

All in all, an impressive feat and clearly an enabling step along the way to more powerful quantum information processing tools.

Ref: arxiv.org/abs/1204.3056: A Versatile Source of Single Photons for Quantum Information Processing


View the original article here
