Thursday, May 3, 2012

10 GHz Optical Transistor Built Out Of Silicon

In a significant step forward for all-optical computing, physicists build a silicon transistor that works with pure light

Electrons are pretty good at processing information but not so good at carrying it over long distances. Photons, on the other hand, do a grand job of shuttling data round the planet but are not so handy when it comes to processing it.

As a result, transistors are electronic and communication cables are optical. And the world is burdened with a significant amount of power hungry infrastructure for converting electronic information into the optical variety and vice versa.

So it's no surprise that there is significant interest in developing an optical transistor that could make the electronic variety obsolete. 

There's a significant problem, however. While various groups have built optical switches, optical transistors must also have a number of other properties so that they can be connected in a way that can process information. 

For example, their output must be capable of acting as the input for another transistor--not easy if the output is a different frequency from the input, for instance. What's more, the output must be capable of driving the inputs of at least two other transistors so that logic signals can propagate, a property known as fanout. This requires significant gain. On top of this, each transistor must preserve the quality of the logic signal so that errors do not propagate. And so on. 

The trouble is that nobody has succeeded in making optical transistors that can do all this and can also be made out of silicon. 

Today, Leo Varghese at Purdue University in Indiana and a few pals say they've built a device that takes a significant step in this direction. 

Their optical transistor consists of a microring resonator next to an optical line. In ordinary circumstances the light supply enters the optical line, passes along it and exits at the output. But at a specific resonant frequency, the light interacts with the microring resonator, vastly reducing the output. In this state, the output is essentially off even though the supply is on. 

The trick these guys have perfected is to use another optical line, called the gate, to heat the microring, thereby changing its size, its resonant frequency and its ability to interact with the output. 

That allows the gate to turn the output on and off.   
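A minimal numerical sketch of this behaviour treats the ring as a Lorentzian notch filter: on resonance the supply is almost entirely suppressed, and a small thermal shift of the resonance restores transmission. The linewidth, wavelengths and shift below are illustrative values, not figures from the paper.

```python
def ring_transmission(wavelength_nm, resonance_nm, fwhm_nm=0.02):
    # Lorentzian approximation to an all-pass microring's power
    # transmission: a deep notch at resonance, near-unity off resonance.
    detuning = wavelength_nm - resonance_nm
    half_width = fwhm_nm / 2
    return detuning ** 2 / (detuning ** 2 + half_width ** 2)

SUPPLY_NM = 1550.0          # illustrative supply wavelength
THERMAL_SHIFT_NM = 0.1      # assumed resonance shift from gate heating

gate_off = ring_transmission(SUPPLY_NM, SUPPLY_NM)
gate_on = ring_transmission(SUPPLY_NM, SUPPLY_NM + THERMAL_SHIFT_NM)
print(f"output with gate off: {gate_off:.3f}")   # ring on resonance, output near 0
print(f"output with gate on:  {gate_on:.3f}")    # resonance shifted away, output near 1
```

The point of the model is the steepness of the notch: shifting the resonance by only a few linewidths swings the output between off and on, which is why a weak thermal gate signal can switch a much stronger supply.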

There's an additional clever twist. The microring's interaction with the gate is stronger than with the supply-output line. That's significant because it means a small gate signal can control a much bigger output signal.

Varghese and co say the ratio of the output signal to the gate signal is almost 6 dB. That's enough to power at least two other transistors, which is exactly the fanout property that optical transistors require. 
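Decibels are a logarithmic power ratio, so the fanout arithmetic is easy to check: 6 dB is almost a factor of four, leaving roughly twice the required drive for each of two downstream gates. This is a back-of-the-envelope check, not a calculation from the paper.

```python
def db_to_power_ratio(db):
    # A decibel figure expresses the power ratio 10^(dB / 10).
    return 10 ** (db / 10)

gain = db_to_power_ratio(6)          # about 3.98
per_gate = gain / 2                  # drive available per gate with fanout 2
print(f"6 dB is a factor of {gain:.2f}; each of two gates gets {per_gate:.2f}x")
```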

These guys have even built a device out of silicon with a bandwidth capable of data rates of up to 10 GHz.

That's an impressive result, particularly the silicon compatibility. 

Nevertheless, there are significant hurdles ahead before an all-optical computer made with these devices can hope to compete against its electronic cousins. 

The biggest problem is power consumption. Much of the power consumption in electronic transistors comes from the need to charge the lines connecting them to the operating voltage. 
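That charging cost is the familiar dynamic switching power, P = alpha * C * V^2 * f. A toy calculation with illustrative numbers, not taken from any specific chip, shows how it adds up:

```python
def dynamic_power_w(capacitance_f, voltage_v, frequency_hz, activity=0.1):
    # Dynamic switching power of a charged line: P = alpha * C * V^2 * f,
    # where alpha is the fraction of cycles on which the line toggles.
    return activity * capacitance_f * voltage_v ** 2 * frequency_hz

# Illustrative values: 1 pF of wire, a 1 V swing, a 1 GHz clock, 10% activity.
p = dynamic_power_w(1e-12, 1.0, 1e9)
print(f"{p * 1e6:.0f} microwatts for this single line")
```

Multiply that by the billions of wires on a modern chip and the appeal of lines that never need charging is obvious.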

In theory, optical transistors could be even more efficient--their lines don't need charging at all. But in practice, lasers burn energy as if it were twenty dollar bills. For that reason, it's not at all clear that optical transistors can match the efficiency of electronic chips.  

And with the computer industry now responsible for almost 2 per cent of global carbon dioxide emissions, almost as much as aviation, power consumption may turn out to be the overarching factor for the future direction of information processing.

Ref: arxiv.org/abs/1204.5515: A Silicon Optical Transistor



How to Perfect Real-Time Crowdsourcing

The new techniques behind instant crowdsourcing make human intelligence available on demand for the first time.

One of the great goals of computer science is to embed human-like intelligence in common applications like image processing, robotic control and so on. Until recently the focus has been to develop an artificial intelligence that can do these jobs. 

But there's another option: using real humans via some kind of crowdsourcing process. One well-known example is the CAPTCHA test, which distinguishes humans from machines by asking them to identify words so badly distorted that automated systems cannot read them. 

However, spammers are known to farm out these tasks to humans via crowdsourcing systems that pay in the region of 0.5 cents per 1000 words solved. 

Might not a similar process work for legitimate tasks such as building human intelligence into real world applications?

The problem, of course, is latency. Nobody wants to sit around for 20 minutes while a worker with the skills to steer your robotic waiter is crowdsourced from the other side of the world.

So how quickly can a crowd be put into action? That's the question tackled today by Michael Bernstein at the Massachusetts Institute of Technology in Cambridge and a few pals. 

In the past, these guys have found ways to bring a crowd to bear in about two seconds. That's quick. But the reaction time is limited by how quickly a worker responds to an alert.

Now these guys say they've found a way to reduce the reaction time to 500 milliseconds--that's effectively real time. A system with a half-second latency could turn crowdsourcing into a very different kind of resource. 

The idea that Bernstein and co have come up with is straightforward. These guys simply "precruit" a crowd and keep them on standby until a task becomes available. Effectively, they're paying workers a retainer so that they are available immediately when needed.

The difficulty is in the messy details of precruitment. How many workers do you need to keep on retainer, how do you cope with dropouts, and how do you keep people interested so that they are available to work at a fraction of a second's notice?

Bernstein and co have used an idea called queuing theory to work out how to optimise the process of precruitment according to how often the task comes up, how long it takes and so on. 
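One standard queueing-theory tool for exactly this sizing problem is the Erlang-B formula, which gives the probability that an arriving task finds every retained worker busy. This is a generic sketch with invented arrival and service rates, not the authors' exact model.

```python
def erlang_b(servers, offered_load):
    # Erlang-B blocking probability, computed with the standard
    # recurrence B(n) = A*B(n-1) / (n + A*B(n-1)), with B(0) = 1.
    b = 1.0
    for n in range(1, servers + 1):
        b = (offered_load * b) / (n + offered_load * b)
    return b

# Invented workload: tasks arrive at 0.5 per second and occupy a worker
# for 10 seconds, giving an offered load of 5 Erlangs.
arrival_rate, service_time = 0.5, 10.0
load = arrival_rate * service_time
pool = next(n for n in range(1, 100) if erlang_b(n, load) < 0.01)
print(f"retain {pool} workers to keep the chance of no free worker below 1%")
```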

They've also developed an interesting psychological trick to keep workers ready for action. When workers are precruited, a screen opens up on their computer which downloads the task. The download occurs extremely quickly but if no task is to hand, the screen shows a "loading" bar. 

It turns out that the loading bar keeps workers focused on the forthcoming task for up to ten seconds, at which point their attention begins to wander. At that point, if no task materialises, the worker can be paid off.

Bernstein and co have even tested how well this works using a whack-a-mole type task which appears on workers' screens after a randomly chosen period of between 0 and 20 seconds. They recruited 50 workers to carry out 373 whacks and found that the median time between the mole's appearance and the worker moving the mouse toward the mole to click on it was 0.50 seconds.

"Our model suggests...that crowds could be recruited effectively instantaneously," they say.

That could change the nature of crowdsourcing. Bernstein suggests that real-time crowdsourcing could be used to point cameras, control robots and produce instant opinion polls.

But first crowdsourcers will have to change the way they do business. Bernstein and co suggest that they could build a retainer into their system design so that they have a pool of ready-to-go workers available at any instant. 

Workers could even be assessed on how good they are at this kind of task, allowing them to build up a reputation for the work. Bernstein and co suggest two new reputation statistics--the percentage of time workers respond to a precruitment request and how quickly they respond on these occasions.

It shouldn't be too hard to set up. An interesting new business model for the likes of Mechanical Turk and others, or perhaps for an enterprising new start-up.

Ref: arxiv.org/abs/1204.2995: Analytic Methods for Optimising Real time Crowdsourcing 



Did Einstein's First Wife Secretly Coauthor His 1905 Relativity Paper?

Various historians have concluded that Einstein's first wife, Mileva, may have secretly contributed to his work. Now a new analysis seeks to settle the matter.

In the late 1980s, the American physicist Evan Harris Walker published an article in Physics Today suggesting that Einstein's first wife, Mileva Maric, was an unacknowledged coauthor of his 1905 paper on special relativity.

The idea generated considerable controversy at the time, although most physicists and historians of science have rejected it. 

Today, Galina Weinstein, a visiting scholar at The Centre for Einstein Studies at Boston University, hopes to settle the matter with a new analysis.

The story begins after Einstein's death in 1955, when the Soviet physicist Abram Fedorovich Joffe described some correspondence he had had with Einstein early in their careers in an article published in Russian. 

Joffe had asked Einstein for preprints of some of his papers and wrote: "The author of these articles—an unknown person at that time, was a bureaucrat at the Patent Office in Bern, Einstein-Marity (Marity the maiden name of his wife, which by Swiss custom is added to the husband's family name)." (Marity is a Hungarian variant of Maric.)

The conspiracy theories date from this reference to Einstein as Einstein-Marity, says Weinstein. The result was an increasingly complex tangle of deliberate or accidental misunderstandings. 

The problem seems to have begun with a popular Russian science writer called Daniil Semenovich Danin, who interpreted Joffe's account to mean that Einstein and Maric had collaborated on the work. This later transformed into the notion that Maric had originally been a coauthor on the 1905 paper but that her name was removed from the final published version. 

This is a clear misinterpretation, suggests Weinstein.   

Walker reignited this controversy in his Physics Today article. He suggests that Einstein may have stolen his wife's ideas. 

There's another interesting line for the conspiracy theorists. Historians have translated the letters between Einstein and Maric into English, allowing a detailed analysis of their relationship. However, one of these letters includes the phrase: "bringing our work on relative motion to a successful conclusion!" This seems to back up the idea that the pair must have collaborated.

However, Weinstein has analysed the letters in detail and says that two lines of evidence suggest that this was unlikely. First, Einstein's letters are full of his ideas about physics while Maric's contain none, suggesting that he was using her as a sounding board rather than a collaborator.

Second, Maric was not a talented physicist or mathematician. She failed her final examinations and was never granted a diploma.

Weinstein argues that Maric could therefore not have made a significant contribution and quotes another historian on the topic saying that while there is no evidence that Maric was gifted mathematically, there is some evidence that she was not.

There is one fly in the ointment. Maric and Einstein divorced in 1919, but as part of the divorce settlement, Einstein agreed to pay his ex-wife every krona of any future Nobel Prize he might be awarded.

Weinstein suggests that everybody knew Einstein was in line to win the prize and that in the postwar environment in Germany, this was a natural request from a wife who did not want a divorce and was suffering from depression.

Walker, on the other hand, says: "I find it difficult to resist the conclusion that Mileva, justly or unjustly, saw this as her reward for the part she had played in developing the theory of relativity."

Without more evidence, it's hard to know one way or the other. But there's surely enough uncertainty about what actually happened to keep the flames of conspiracy burning for a little while longer.

Ref: arxiv.org/abs/1204.3551: Did Mileva Maric Assist Einstein In Writing His 1905 Path Breaking Papers?



Wednesday, May 2, 2012

Ancient Egyptians Recorded Algol's Variable Magnitude 3000 Years Before Western Astronomers

A statistical analysis of a 3000-year old calendar reveals that astronomers in ancient Egypt must have known the period of the eclipsing binary Algol

The Ancient Egyptians were meticulous astronomers and recorded the passage of the heavens in extraordinary detail. The goal was to mark the passage of time and  to understand the will of the Gods who kept the celestial machinery at work. 

Egyptian astronomers used what they learnt to make predictions about the future. They drew these up in the form of calendars showing lucky and unlucky days. 

The predictions were amazingly precise. Each day was divided into three or more segments, each of which was given a rating lying somewhere in the range from very favourable to highly adverse.

One of the best preserved of these papyrus documents is called the Cairo Calendar. Although the papyrus is badly damaged in places, scholars have been able to extract a complete list of ratings for days throughout an entire year somewhere around 1200 BC.

An interesting question is how the scribes arrived at their ratings. So various groups have studied the patterns that crop up in the predictions. Today, Lauri Jetsu and buddies at the University of Helsinki in Finland reveal the results of their detailed statistical analysis of the Cairo Calendar. Their conclusion is extraordinary.

These guys arranged the data as a time series and crunched it with various statistical tools designed to reveal cycles within it. They found two significant periodicities. The first is 29.6 days--that's almost exactly the length of a lunar month, which modern astronomers put at 29.53059 days.  

The second cycle is 2.85 days and this is much harder to explain. However, Jetsu and co make a convincing argument that this corresponds to the variability of Algol, a star visible to the naked eye in the constellation of Perseus.
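The period search itself can be sketched with a classical periodogram: project the series onto sine and cosine waves at each trial period and look for the peak in power. The data below is synthetic, a noisy 2.85-day cycle standing in for the Calendar's ratings, not the real series.

```python
import math
import random

def periodogram_power(times, values, period):
    # Classical (Schuster) periodogram: power of the series' projection
    # onto sin/cos waves at the trial period.
    omega = 2 * math.pi / period
    c = sum(v * math.cos(omega * t) for t, v in zip(times, values))
    s = sum(v * math.sin(omega * t) for t, v in zip(times, values))
    return (c * c + s * s) / len(values)

# Synthetic stand-in for the Calendar: one rating per day for a year,
# carrying a 2.85-day cycle buried in noise (invented data).
random.seed(1)
days = list(range(365))
ratings = [math.cos(2 * math.pi * d / 2.85) + random.gauss(0, 0.5) for d in days]

trial_periods = [p / 100 for p in range(200, 400)]   # scan 2.00 to 3.99 days
best = max(trial_periods, key=lambda p: periodogram_power(days, ratings, p))
print(f"best period: {best:.2f} days")
```

Jetsu and co use considerably more careful statistics, including significance tests against pure noise, but the principle of the search is the same.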

Algol is interesting because every 2.867 days, it dims visibly for a few hours and then brightens up. This variability was first discovered by John Goodricke in 1783, using naked-eye observations.

Astronomers later explained this variability by assuming that Algol is a binary star system. It dims when the dimmer star passes in front of the brighter one. 

Nothing else in the visible night sky comes close to having a similar period so it's reasonable to think that the 2.85 and the 2.867 day periods must refer to the same object. "Everything indicated that the two best periods in [the data] were the real periods of the Moon and Algol," say Jetsu and co.

And yet that analysis leaves a nasty taste in the mouth. The ancients were extremely careful observers. If Goodricke measured a period of 2.867 days (68.75 hours), the Egyptians ought to have been able to as well. 

This is where the astronomy becomes a little more complex. The period of binary star systems ought to be easy to predict. But in recent years, astronomers have discovered that Algol's period is changing in ways that they do not yet fully understand. 

One reason for this is that Algol turns out to be a triple system with a third star in a much larger orbit. And of course, the behaviour of triple systems is more complex. It is also hard to model based on real data since observations of Algol's variability go back only 300 years.

Or so everyone had thought. Jetsu and co now think that the difference between the ancient and modern measurements is no accident and that the period was indeed shorter in those days. So the Egyptian data can be used as an additional data point to better constrain and understand Algol's behaviour.

So not only did the ancients discover this variable star 3000 years before Western astronomers, their data is good enough to help understand the behaviour of this complex system. A truly remarkable conclusion. 

Ref: arxiv.org/abs/1204.6206: Did The Ancient Egyptians Record The Period Of The Eclipsing Binary Algol – The Raging One?  



Psychologists Use Social Networking Behavior to Predict Personality Type

The ability to automatically determine personality type could change the way social networks target services to users

One of the foundations of modern psychology is that human personality can be described in terms of five different forms of behavior. These are:

1. Agreeableness--being helpful, cooperative and sympathetic towards others
2. Conscientiousness--being disciplined, organized and achievement-oriented 
3. Extraversion--having a higher degree of sociability, assertiveness and talkativeness 
4. Neuroticism--the degree of emotional stability, impulse control and anxiety 
5. Openness--having a strong intellectual curiosity and a preference for novelty and variety

Psychologists have spent much time and many years developing tests that can classify people according to these criteria. 

Today, Shuotian Bai at the Graduate University of Chinese Academy of Sciences in Beijing and a couple of buddies say they have developed an online version of the test that can determine an individual's personality traits from their behavior on a social network such as Facebook or Renren, an increasingly popular Chinese competitor.

Their method is relatively simple. These guys asked just over 200 Chinese students with Renren accounts to complete a standard online personality test called the Big Five Inventory, which was developed at the University of California, Berkeley, during the 1990s.

At the same time, these guys analyzed the Renren pages of each student, recording their age and sex and various aspects of their online behavior, such as the frequency of their blog posts and the emotional content of those posts--whether angry, funny, surprised and so on. 

Finally, they used various number crunching techniques to reveal correlations between the results of the personality tests and the online behavior. 

It turns out, they say, that various online behaviors are a good indicator of personality type. For example, conscientious people are more likely to post asking for help such as a location or e-mail address; a sign of extroversion is an increased use of emoticons; the frequency of status updates correlates with openness; and a measure of neuroticism is the rate at which blog posts attract angry comments.
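The prediction step the authors describe amounts to regressing trait scores onto behavioural features. A one-feature least-squares sketch with invented numbers, emoticon rate versus extraversion score, shows the shape of the pipeline:

```python
def fit_line(xs, ys):
    # Ordinary least squares for one predictor: trait ~ a + b * feature.
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    return mean_y - b * mean_x, b

# Invented training data: emoticons per post vs. extraversion score (1 to 5).
emoticons = [0.1, 0.4, 0.9, 1.5, 2.0, 2.6]
extraversion = [2.1, 2.5, 3.0, 3.6, 4.1, 4.4]
a, b = fit_line(emoticons, extraversion)

new_user_rate = 1.2   # emoticons per post for an unseen user
print(f"predicted extraversion: {a + b * new_user_rate:.2f}")
```

The real study fits many features at once and validates against the Big Five Inventory scores, but each individual correlation works like this.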

Based on these correlations, these guys say they can automatically predict personality type simply by looking at an individual's social network statistics. 

That could be extremely useful for social networks. Shuotian and company point out that a network might use this to recommend specific services. They give the rather naive example of an outgoing user who may prefer international news and like to make friends with others. 

Other scenarios are at least as likely. For example, such an approach might help to improve recommender systems in general. Perhaps people who share similar personality characteristics are more likely to share similar tastes in books, films or each other. 

There is also the obvious prospect that social networks would use this data for commercial gain; to target specific adverts to users for example. And finally there is the worry that such a technique could be used to identify vulnerable individuals who might be most susceptible to nefarious persuasion.

Ethics aside, there are also certain question marks over the results. One important caveat is that people's responses to psychological tests taken online may differ from those taken in other settings. That could clearly introduce some bias. Then there are the more general questions of how online and offline behaviours differ and how these tests vary across cultures. These are things that Shuotian and co want to study in the future.

In the meantime, it is becoming increasingly clear that the data associated with our online behavior is a rich and valuable source of information about our innermost natures. 

Ref: arxiv.org/abs/1204.4809: Big-Five Personality Prediction Based on User Behaviors at Social Network Sites



Quantum Rainbow Photon Gun Unveiled

A photon gun capable of reliably producing single photons of different colours could become an important building block of a quantum internet

We've heard much about the possibility of a quantum internet which uses single photons to encode and send information protected by the emerging technology of quantum cryptography. 

The main advantage of such a system is perfect security, the kind of thing that governments, the military, banks and assorted other groups would pay handsomely to achieve. 

One of the enabling technologies for a quantum internet is a reliable photon gun that can fire single photons on demand. That's not easy. 

One of the significant weaknesses of current quantum cryptographic systems is the finite possibility that today's lasers emit photons in bunches rather than one at a time. When this happens, an eavesdropper can use these extra photons to extract information about the data being transmitted.

So there's no shortage of interest in developing photon guns that emit single photons and indeed various groups have made significant progress towards this.

Against this background, Michael Fortsch at the Max Planck Institute for the Science of Light in Erlangen, Germany, and a few pals today say they've made a significant breakthrough. These guys reckon they've built a photon emitter with a range of properties that make it far more flexible, efficient and useful than any before--a kind of photon supergun.

The gun is a disc-shaped crystal of lithium niobate zapped with 532nm light from a frequency-doubled neodymium-doped yttrium aluminium garnet (Nd:YAG) laser. Lithium niobate is a nonlinear material that causes single photons to spontaneously convert into photon pairs. 

So the 532nm photons ricochet around inside the disc and eventually emerge either as unchanged 532nm photons or as a pair of entangled photons with about twice the wavelength (about 1060nm). The photons in this entangled pair don't have quite the same wavelength, so all three types of photon can be easily separated. 

The 532nm photons are ignored. Of the other pair, one is used to transmit information and the other is picked up by a detector to confirm that its partner is ready for transmission.
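The pair's wavelengths are fixed by energy conservation: the pump photon's energy is split between the two, so 1/lambda_pump = 1/lambda_signal + 1/lambda_idler. A quick check, assuming the frequency-doubled Nd:YAG line at 532 nm as the pump:

```python
def idler_wavelength_nm(pump_nm, signal_nm):
    # Energy conservation in parametric down-conversion:
    # 1/lambda_pump = 1/lambda_signal + 1/lambda_idler.
    return 1.0 / (1.0 / pump_nm - 1.0 / signal_nm)

PUMP_NM = 532.0   # frequency-doubled Nd:YAG (1064 nm fundamental)

# Degenerate case: the pair shares the energy equally, so each photon
# sits at twice the pump wavelength.
print(f"{idler_wavelength_nm(PUMP_NM, 2 * PUMP_NM):.1f} nm")

# Slightly non-degenerate pair: the two photons straddle twice the pump
# wavelength, which is what makes them separable.
print(f"{idler_wavelength_nm(PUMP_NM, 1060.0):.1f} nm")
```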

So what's so special about this photon gun? First and most important is that the gun emits photons in pairs. That's significant because the detection of one photon is an unambiguous sign that another has also been emitted. It's like a time stamp that says a photon is on its way.

This so-called photon herald means that there can be no confusion over whether the gun is secretly leaking information to a potential eavesdropper. 

This gun is also fast, emitting some 10 million photon pairs per second per milliwatt of pump power, making it two orders of magnitude more efficient than other photon guns.

These guys can also change the wavelength of the photons the gun emits by heating or cooling the crystal and thereby changing its size. This rainbow of colours stretches over 100nm (OK, not quite a rainbow but you get the picture).

That's important because it means the gun can be tuned to various different atomic transitions allowing physicists and engineers to play with a variety of different atoms for quantum information storage.

All in all, an impressive feat and clearly an enabling step along the way to more powerful quantum information processing tools.

Ref: arxiv.org/abs/1204.3056: A Versatile Source of Single Photons for Quantum Information Processing


